Slide #1.

Dynamic Virtual Clusters in a Grid Site Manager
Jeff Chase, David Irwin, Laura Grit, Justin Moore, Sara Sprenkle
Department of Computer Science, Duke University


Slide #2.

Dynamic Virtual Clusters
(Diagram: multiple Grid Services hosted on dynamic virtual clusters)


Slide #3.

Motivation: Next Generation Grid
• Flexibility: dynamic instantiation of software environments and services
• Predictability: resource reservations for predictable application service quality
• Performance: dynamic adaptation to changing load and system conditions
• Manageability: data center automation


Slide #4.

Cluster-On-Demand (COD)
(Diagram: COD manager with DHCP, NIS, NFS, and DNS services, backed by a COD database of templates and status, hosting Virtual Cluster #1 and Virtual Cluster #2)
Vclusters differ in:
• OS (Windows, Linux)
• Attached file systems
• Applications
• User accounts
Goals for this talk
• Explore virtual cluster provisioning
• Middleware integration (feasibility, impact)


Slide #5.

Cluster-On-Demand and the Grid
Safe to donate resources to the grid
• Resource peering between companies or universities
• Isolation between local users and grid users
• Balance local vs. global use
Controlled provisioning for grid services
• Service workloads tend to vary with time
• Policies reflect priority or peering arrangements
• Resource reservations
Multiplex many Grid PoPs
• Avaki and Globus on the same physical cluster
• Multiple peering arrangements


Slide #6.

Outline
Overview
• Motivation
• Cluster-On-Demand
System Architecture
• Virtual Cluster Managers
• Example Grid Service: SGE
• Provisioning Policies
Experimental Results
Conclusion and Future Work


Slide #7.

System Architecture
(Diagram: a provisioning policy (A) drives the COD Manager; the COD Manager talks over the XML-RPC interface (B) to a VCM paired with GridEngine in each of three isolated vclusters running Sun GridEngine batch pools; each VCM issues GridEngine commands to its pool, and nodes are reallocated between vclusters (C))


Slide #8.

Virtual Cluster Manager (VCM)
Communicates with the COD Manager
• Supports graceful resizing of vclusters
Simple extensions for well-structured grid services
• Support already present: the software handles membership changes
Node failures and incremental growth
• Application services can handle this gracefully
(Diagram: COD Manager invokes add_nodes, remove_nodes, and resize on the VCM service inside the vcluster)
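To make this interface concrete, below is a minimal sketch of a VCM service exposing resize, add_nodes, and remove_nodes over XML-RPC. Only the three method names come from the talk; the port, the bookkeeping, and the handler bodies are illustrative assumptions.

```python
# Hypothetical sketch of a VCM exposing the resize interface over XML-RPC.
# Only the method names (resize, add_nodes, remove_nodes) come from the
# talk; everything else here is an assumption for illustration.
from xmlrpc.server import SimpleXMLRPCServer

class VCM:
    def __init__(self):
        self.nodes = set()  # hostnames currently assigned to this vcluster

    def resize(self):
        """Epoch poll from the COD Manager: report demand for nodes."""
        queued = self._queued_jobs()
        return max(queued - len(self.nodes), 0)

    def add_nodes(self, hostnames):
        """COD granted nodes: join them to the middleware's pool."""
        for host in hostnames:
            self.nodes.add(host)   # a real VCM would also enroll the host
        return True                # with its grid middleware

    def remove_nodes(self, hostnames):
        """COD is reclaiming nodes: drain and release them gracefully."""
        for host in hostnames:
            self.nodes.discard(host)
        return True

    def _queued_jobs(self):
        return 0  # placeholder; a real VCM would query its batch scheduler

server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
server.register_instance(VCM())
server.serve_forever()
```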


Slide #9.

Sun GridEngine
• Ran GridEngine middleware within vclusters
• Wrote wrappers around the GridEngine scheduler
• Did not alter GridEngine
• Most grid middleware can support such modules
(Diagram: COD Manager calls add_nodes, remove_nodes, and resize on the VCM, which drives GridEngine with qconf and qstat)
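In that spirit, a wrapper might shell out to the real GridEngine tools. qstat and qconf are actual SGE commands, but the parsing below and the exact flags a production vcluster needs (execution-host and queue setup) are simplified assumptions.

```python
# Hypothetical wrapper around SGE command-line tools, in the spirit of
# the VCM wrappers described on this slide. qstat/qconf are real SGE
# commands; a full deployment needs more setup than shown here.
import subprocess

def queued_job_count():
    """Count pending jobs by scanning `qstat` output for the 'qw' state."""
    out = subprocess.run(["qstat"], capture_output=True, text=True).stdout
    return sum(1 for line in out.splitlines() if " qw " in line)

def add_host(hostname):
    """Enroll a newly granted node with GridEngine (admin host list)."""
    subprocess.run(["qconf", "-ah", hostname], check=True)

def remove_host(hostname):
    """Withdraw a node before COD reclaims it."""
    subprocess.run(["qconf", "-dh", hostname], check=True)
```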


Slide #10.

Pluggable Policies
Local policy
• Request a node for every x jobs in the queue
• Relinquish a node after it has been idle for y minutes
Global policies
• Simple policy: each vcluster has a priority; higher-priority vclusters can take nodes from lower-priority vclusters
• Minimum reservation policy: each vcluster is guaranteed a percentage of nodes upon request, which prevents starvation
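A minimal sketch of the local policy as stated, with x and y as illustrative constants:

```python
# Sketch of the local policy above: one node per x queued jobs, release
# nodes idle longer than y minutes. Names and thresholds are illustrative.
import math, time

JOBS_PER_NODE = 4          # x: request a node for every x queued jobs
IDLE_LIMIT_S = 10 * 60     # y: relinquish after 10 idle minutes

def desired_nodes(queued_jobs):
    return math.ceil(queued_jobs / JOBS_PER_NODE)

def nodes_to_release(idle_since, now=None):
    """idle_since maps hostname -> timestamp when the node went idle."""
    now = now or time.time()
    return [h for h, t in idle_since.items() if now - t > IDLE_LIMIT_S]
```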


Slide #11.

Outline
Overview
• Motivation
• Cluster-On-Demand
System Architecture
• Virtual Cluster Managers
• Example Grid Service: SGE
• Provisioning Policies
Experimental Results
Conclusion and Future Work


Slide #12.

Experimental Setup
Live testbed
• Devil Cluster (IBM, NSF): 71-node COD prototype
• Trace-driven; traces sped up to execute in 12 hours
• Ran synthetic applications
Emulated testbed
• Emulates the output of SGE commands
• Invisible to the VCM that is using SGE
• Trace-driven
• Facilitates fast, large-scale tests
Real batch traces
• Architecture, BioGeometry, and Systems groups


Slide #13.

Live Test


Slide #14.

Architecture Vcluster


Slide #15.

Emulation Architecture
(Diagram: a provisioning policy drives the COD Manager, which talks over the XML-RPC interface to one VCM per vcluster (Architecture, Systems, BioGeometry); trace-driven load generation feeds an emulated GridEngine front end and qstat emulator)
COD Manager and VCM are unmodified from the real system
Each epoch:
1. Call the resize module
2. Pushes the emulation forward one epoch
3. qstat returns the new state of the cluster
4. add_node and remove_node alter the emulator
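A sketch of that per-epoch loop follows; only the four-step protocol comes from the slide, while the emulator and policy objects, their method names, and the URLs are assumptions:

```python
# Sketch of the per-epoch emulation loop. The four-step protocol
# (resize -> advance epoch -> qstat -> add/remove) comes from the slide;
# the objects and URLs below are illustrative assumptions.
import xmlrpc.client

vcms = {name: xmlrpc.client.ServerProxy(url) for name, url in [
    ("Architecture", "http://localhost:8001"),
    ("Systems", "http://localhost:8002"),
    ("BioGeometry", "http://localhost:8003"),
]}

def run_epoch(emulator, policy):
    requests = {name: vcm.resize() for name, vcm in vcms.items()}  # step 1
    emulator.advance_epoch()                                       # step 2
    state = emulator.qstat()                                       # step 3
    grants, revocations = policy.allocate(requests, state)
    for name, hosts in grants.items():                             # step 4
        vcms[name].add_nodes(hosts)
    for name, hosts in revocations.items():
        vcms[name].remove_nodes(hosts)
```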


Slide #16.

Minimum Reservation Policy


Slide #17.

Emulation Results
Minimum reservation policy
• Example policy change
• Removed the starvation problem
Scalability
• Ran the same experiment with 1000 nodes in 42 minutes, making all node transitions that would have occurred in 33 days
• 3.7 node transitions per second, resulting in approximately 37 database accesses per second
• Database scales to large clusters
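A quick sanity check of these rates (our arithmetic, not from the talk):

```python
# Back-of-the-envelope check of the scalability numbers above.
run_seconds = 42 * 60              # 42-minute emulated run
transitions = 3.7 * run_seconds    # ~9,300 node transitions in total
db_per_transition = 37 / 3.7       # ~10 database accesses per transition
speedup = (33 * 24 * 60) / 42      # ~1,130x faster than real time
print(transitions, db_per_transition, speedup)
```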


Slide #18.

Related Work
Cluster management
• NOW, Beowulf, Millennium, Rocks
• Homogeneous software environment for specific applications
Automated server management
• IBM's Oceano and Emulab
• Target specific applications (Web services, network emulation)
Grid
• COD can support GARA for reservations
• SNAP combines SLAs of resource components; COD controls resources directly


Slide #19.

Future Work
Experiment with other middleware
Economic-based policy for batch jobs
Distributed market economy using vclusters
• Maximize profit based on utility of applications
• Trade resources between Web services, Grid services, batch schedulers, etc.


Slide #20.

Conclusion
No change to GridEngine middleware
Important for Grid services
• Isolates grid resources from local resources
• Enables policy-based resource provisioning
Policies are pluggable
Prototype system
• Sun GridEngine as middleware
Emulated system
• Enables fast, large-scale tests
• Tests policy and scalability


Slide #21.

Example Epoch
(Diagram: COD Manager coordinates three isolated vclusters (Architecture, Systems, and BioGeometry nodes) running Sun GridEngine batch pools, with node reallocation between them)
1abc. COD Manager calls resize on each VCM
2abc. Each VCM queries its pool with qstat
3a. Nothing needed; 3b. request a node; 3c. remove a node
4, 6. VCMs format and forward requests
5. COD Manager makes allocations, updates the database, and configures nodes
7b. add_node; 7c. remove_node
8b. qconf add_host; 8c. qconf remove_host
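Step 5 is where a global policy runs. As one possibility, here is a sketch of the simple priority policy from the Pluggable Policies slide; the data structures are illustrative assumptions:

```python
# Sketch of step 5 using the "simple policy": higher-priority vclusters
# may take nodes from lower-priority ones. Structures are illustrative.
def allocate(requests, priorities, free_nodes, owned):
    """requests: vcluster -> nodes wanted; priorities: vcluster -> int
    (higher wins); free_nodes: unassigned hostnames; owned: vcluster ->
    list of its current hostnames (must cover every vcluster)."""
    grants, revocations = {}, {}
    for vc in sorted(requests, key=priorities.get, reverse=True):
        want, take = requests[vc], []
        while want > 0 and free_nodes:          # prefer free nodes
            take.append(free_nodes.pop())
            want -= 1
        for victim in sorted(requests, key=priorities.get):
            if want <= 0 or priorities[victim] >= priorities[vc]:
                continue                        # only strictly lower priority
            while want > 0 and owned[victim]:
                node = owned[victim].pop()
                revocations.setdefault(victim, []).append(node)
                take.append(node)
                want -= 1
        if take:
            grants[vc] = take
            owned[vc] = owned.get(vc, []) + take
    return grants, revocations
```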


Slide #22.

New Cluster Management Architecture
Cluster-On-Demand
• Secure isolation of multiple user communities
• Custom software environments
• Dynamic policy-based resource provisioning
• Acts as a Grid Site Manager
Virtual clusters
• Host different user groups and software environments in isolated partitions
• Virtual Cluster Manager (VCM) coordinates between local and global clusters


Slide #23.

Dynamic Virtual Clusters
Varying demand over time
• Negotiate resource provisioning by interfacing with an application-specific service manager
• Logic for monitoring load and changing membership
Fundamental for the next-generation grid
• COD controls local resources
• Exports a resource negotiation interface to local grid service middleware
• Vclusters encapsulate batch schedulers, Web services, and Grid services
• No need to place more complicated resource management into grid service middleware


Slide #24.

Resource Negotiation
Flexible, extensible policies for resource management
Secure Highly Available Resource Peering (SHARP)
• Secure external control of site resources
• Soft-state reservations of resource shares for specific time intervals
COD Manager and VCM communicate through an XML-RPC interface
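As a toy illustration of soft-state reservations, a lease object might carry a resource share and an expiry that must be renewed; the fields and semantics below are simplified assumptions, not SHARP's actual protocol:

```python
# Toy sketch of a SHARP-style soft-state reservation: a share of site
# resources claimed for a time interval that lapses unless renewed.
# Fields and semantics are simplified assumptions.
import time
from dataclasses import dataclass

@dataclass
class Reservation:
    vcluster: str
    share: float        # fraction of site nodes reserved
    expires: float      # absolute expiry time (soft state)

    def valid(self, now=None):
        return (now or time.time()) < self.expires

    def renew(self, seconds):
        self.expires = max(self.expires, time.time()) + seconds

lease = Reservation("Systems", share=0.25, expires=time.time() + 300)
```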


Slide #25.

Cluster-On-Demand (COD)
(Diagram: clients drive two vclusters, one running Web services and one a batch scheduler, each with its own VCM, behind a COD manager providing DHCP, NIS, NFS, and DNS, backed by a COD database of templates and status)
Vclusters differ in:
• OS (Windows, Linux)
• Attached file systems
• Applications
• User accounts
Goals
• Explore virtual cluster provisioning
• Middleware integration (feasibility, impact)
Non-goals
• Mechanism for managing and switching configurations


Slide #26.

Example Node Reconfiguration
1. Node comes online
2. DHCP queries status from the database
3. If new config, loads a minimal trampoline OS via PXELinux
   • Generic x86 Linux kernel and RAM-based root file system
4. Sends a summary of hardware to confd
5. Confd directs the trampoline to partition drives and install images (from the database)
6. COD assigns IP addresses within a subnet for each vcluster
   • Vcluster occupies a private DNS domain (MyDNS)
   • Executes within a predefined NIS domain, enabling access for user identities
   • COD exports NFS file storage volumes; nodes obtain the NFS mount map through NIS
7. Web interface
Vclusters differ in:
• OS (Windows, Linux)
• Attached file systems
• Applications
• User accounts
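A hypothetical sketch of the confd step (step 5): given a node's hardware summary, look up its template in the COD database and return install directives for the trampoline. The schema, names, and directive format here are all assumptions:

```python
# Hypothetical confd handler: map a booting node to its vcluster's
# template and tell the trampoline what to install. Schema and directive
# format are assumptions for illustration.
import sqlite3

def configure_node(db_path, mac_addr, hardware):
    db = sqlite3.connect(db_path)
    row = db.execute(
        "SELECT vcluster, image, partitioning FROM node_templates "
        "WHERE mac = ?", (mac_addr,)).fetchone()
    if row is None:
        return {"action": "hold"}       # unknown node: stay in trampoline
    vcluster, image, partitioning = row
    return {
        "action": "install",
        "partition": partitioning,      # how to partition local drives
        "image": image,                 # OS image to write (from database)
        "vcluster": vcluster,           # determines DNS/NIS/NFS domains
        # hardware summary could refine image choice in a real system
    }
```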


Slide #27.

System Architecture
(Diagram: local and global provisioning policies drive the COD Manager, which talks over the XML-RPC interface (add_nodes, remove_nodes, resize) to a VCM atop GridEngine in each vcluster; the middleware layer issues GridEngine commands (qconf, qstat, qsub) to Sun GridEngine batch pools within three isolated vclusters (Architecture, Systems, and BioGeometry nodes), each under load from users, with node reallocation between them)


Slide #28.

Outline
Overview
• Motivation
• Cluster-On-Demand
System Architecture
• System Design
• Provisioning Policies
Experimental Results
Conclusion and Future Work