Cactus in GrADS
Dave Angulo, Ian Foster, Matei Ripeanu, Michael Russell
Distributed Systems Laboratory, The University of Chicago
With: Gabrielle Allen, Thomas Dramlitsch, Ed Seidel, John Shalf, Thomas Radke
Distributed Systems Lab ARGONNE CHICAGO

Presentation Outline
- Cactus Overview
  – Architecture
  – Applications
- Cactus and Grid computing
  – Metacomputing, Worms, …
- Proposed Cactus-GrADS project
  – The "Cactus-G worm"
  – Tequila thorn and architecture
  – Issues
What is Cactus?
Cactus is a freely available, modular, portable and manageable environment for collaboratively developing parallel, high-performance multidimensional simulations.
– Originally developed for astrophysics, but nothing about it is astrophysics-specific
Cactus Applications
Example output from numerical relativity simulations (figure).
Cactus Architecture
- Codes are constructed by linking a small core (the "flesh") with selected modules ("thorns")
  – Custom linking/configuration tools
- The core provides basic management services
- A wide variety of thorns are supported
  – Numerical methods
  – Grids and domain decompositions
  – Visualization and steering
  – Etc.
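The flesh/thorn split is essentially a plugin architecture: the core schedules and routes data, while interchangeable modules do all the real work. A minimal sketch in Python (class and method names are illustrative, not actual Cactus APIs):

```python
# Minimal flesh/thorn sketch: the "flesh" only schedules and routes data;
# all numerics live in interchangeable "thorns". Names are illustrative.

class Flesh:
    """Core that knows nothing about physics; it just drives thorns."""
    def __init__(self):
        self.thorns = []

    def activate(self, thorn):
        self.thorns.append(thorn)

    def evolve(self, state, steps):
        for _ in range(steps):
            for thorn in self.thorns:
                state = thorn.step(state)
        return state

class DoubleThorn:
    """Stand-in for a numerical-method thorn."""
    def step(self, state):
        return [2 * x for x in state]

class ClampThorn:
    """Stand-in for an analysis/steering thorn."""
    def step(self, state):
        return [min(x, 100) for x in state]

flesh = Flesh()
flesh.activate(DoubleThorn())
flesh.activate(ClampThorn())
result = flesh.evolve([1, 2, 3], steps=3)
print(result)  # each value doubled 3 times, clamped at 100 -> [8, 16, 24]
```

The point of the design is that `Flesh` never mentions physics: swapping a thorn changes behavior without touching the core, which is what lets the same flesh serve astrophysics and non-astrophysics codes alike.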
Cactus Architecture (diagram)
The flesh and the Computational Toolkit thorns are assembled by the Configure, CST, and Make tools into a Cactus executable that runs on many operating systems: AIX, NT, Linux, Unicos, Solaris, HP-UX, SuperUX, Irix, OSF.
Cactus Applications
- A Cactus "application" is just another thorn, "linked" with other tool thorns
- Numerous astrophysics applications
  – E.g., calculate Schwarzschild event horizons for colliding black holes
- Potential candidates for GrADS work
  – Elliptical Solver, BenchADM
  – Both use a 3-D grid abstract topology
Cactus Model (cont.): Building an Executable
An executable is built from the Cactus source (the flesh plus thorns such as IOBasic, IOASCII, WaveToy, LDAP, Worm, …) together with a configuration that specifies compiler options, tool options, MPI options, and HDF5 options.
Running Cactus
The parameter file:
- Specifies which thorns to activate
- Specifies global parameters
- Specifies restricted parameters
- Specifies private parameters
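A parameter file along these lines might look as follows (a sketch only: the `ActiveThorns` line and the `implementation::parameter = value` form follow Cactus conventions, but the specific thorn and parameter names here are illustrative):

```
# Activate the thorns this run needs
ActiveThorns = "PUGH WaveToyC IOBasic IOASCII"

# Global parameters (visible to all thorns)
driver::global_nx = 30
driver::global_ny = 30
driver::global_nz = 30

# Restricted parameters (shared within an implementation)
wavetoy::initial_data = "gaussian"

# Private parameters (local to a single thorn)
IOASCII::out1D_every = 10
```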
Parallelism in Cactus
- Distributed-memory model: each thorn is passed a section of the global grid
- The parallel driver (itself implemented as a thorn) can use whatever method it likes to decompose the grid across processors and exchange ghost-zone information; each thorn is presented with a standard interface, independent of the driver
- The standard driver distributed with Cactus (PUGH) is a parallel unigrid driver and uses MPI as its communication layer
- PUGH can do custom processor decomposition and static load balancing
- An AMR driver is also provided
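The core job of a unigrid driver like PUGH can be sketched in a few lines: split the grid across processes, pad each piece with ghost cells, exchange boundary values with neighbours, then let a thorn apply its stencil to interior points only. A hedged sketch in 1-D (real PUGH uses MPI in 3-D; here the exchange is simulated with plain list copies):

```python
# Sketch of a unigrid driver's decomposition + ghost-zone exchange.
# Real PUGH does this in 3-D over MPI; this 1-D toy uses list copies.

def decompose(grid, nprocs):
    """Split grid into nprocs chunks, each padded with one ghost cell per side."""
    n = len(grid) // nprocs
    chunks = []
    for p in range(nprocs):
        interior = grid[p * n:(p + 1) * n]
        chunks.append([0.0] + interior + [0.0])   # ghost cells at both ends
    return chunks

def exchange_ghosts(chunks):
    """Fill each chunk's ghost cells from its neighbours' boundary points."""
    for p, chunk in enumerate(chunks):
        if p > 0:
            chunk[0] = chunks[p - 1][-2]   # left ghost <- left neighbour's edge
        if p < len(chunks) - 1:
            chunk[-1] = chunks[p + 1][1]   # right ghost <- right neighbour's edge

def smooth(chunk):
    """Three-point average over interior points; ghosts supply the edges."""
    return [chunk[0]] + [
        (chunk[i - 1] + chunk[i] + chunk[i + 1]) / 3.0
        for i in range(1, len(chunk) - 1)
    ] + [chunk[-1]]

grid = [float(i) for i in range(8)]        # global grid: 0..7
chunks = decompose(grid, nprocs=2)
exchange_ghosts(chunks)
chunks = [smooth(c) for c in chunks]
# The last interior point of chunk 0 averaged 2, 3 and the ghost copy of 4,
# so it is correct (3.0) even though 4 lives on the other "processor".
print(chunks[0][1:-1])
```

Note that the stencil code (`smooth`) never knows where the chunk boundary is; that is exactly the "standard interface, independent of the driver" property the slide describes.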
Cactus and Grid Computing: General Observations
- Reasons to work with Cactus
  – Rich structure, computationally intensive, numerous opportunities for Grid computing
  – Talented and motivated developer/user community
- Issues
  – At its core, a relatively simple structure
  – The Cactus system is relatively complex
  – The user community is relatively small
Cactus-G: Possible Opportunities
- "Metacomputing": use heterogeneous systems as a source of low-cost cycles
  – Departmental pool or multi-site system
- Dynamic resource selection, e.g.
  – "Cheapest" resources to achieve interactivity
  – "Fastest" resources for best turnaround
  – "Best" resolution to meet a turnaround goal
  – Spawn independent tasks, e.g., analysis
  – Migration to a "better" resource for all of the above
Cactus-G: Common Building Blocks
- Resource selection based on resource and application characterizations
- Implementation and management of distributed output
- (De)centralized logging and accounting for resource usage, parameter selection, etc.
- Fault discovery, recovery, and tolerance
- Code/executable management and creation
- A next-generation Cactus that increases flexibility with respect to parameter selection
Proposed Cactus-G Challenge Problem: the Cactus-G Worm
- Migrate to a "faster/cheaper/bigger" system
  – When such a system is identified by resource discovery
  – When resource requirements change
- Why?
  – Tests much of the machinery required for Cactus-G (source code management, discovery, …)
  – Places substantial demands on GrADS
  – Good potential to show real benefit
  – The migration approach simplifies infrastructure demands (MPI-2 support is not required)
Cactus-G Worm: Basic Architecture and Operation (diagram)
Components: the Cactus flesh with the "Tequila" thorn plus the application and other thorns; compute resources; code repositories; storage resources; a Grid Information Service (which stores models, etc.); the GrADS Resource Selector (which queries the GIS); and an Application Manager.
Operation:
(0) Possible user input
(1) Adaptation request from the application, or (1') resource notification from the GIS
(2) Resource request to the Resource Selector
(3) Write checkpoint to a storage resource
(4) Migration request to the Application Manager
(5) Cactus startup on the new compute resource
(6) Load code from a code repository
(7) Read checkpoint
Tequila Thorn Functions
- Initiates adaptation on application request or on notification of new resources
  – Can include user input (e.g., via the HTTP thorn)
- Requests resources from an external entity
  – GIS or ResourceSelector
- Checkpoints the application
- Contacts the Application Manager to request a restart on the new resources
  – The AppManager has security and robustness advantages vs. a direct restart
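The four functions above form one control path: trigger, select resources, checkpoint, restart via the Application Manager. A hedged sketch of that flow (the selector, manager, and storage objects are stand-ins, not real GrADS or Cactus APIs):

```python
# Hedged sketch of the Tequila thorn's adaptation path. All service
# classes are illustrative stand-ins, not real GrADS/Cactus interfaces.

def tequila_adapt(app_state, resource_selector, app_manager, storage):
    """On an adaptation trigger: select resources, checkpoint, request restart."""
    # Ask the external selector for a new "bag" of resources.
    new_resources = resource_selector.select(app_state["requirements"])
    if not new_resources:
        return None                      # nothing better found; keep running

    # Checkpoint the application to persistent storage.
    checkpoint_id = storage.save(app_state)

    # Restart goes through the Application Manager rather than directly,
    # for the security and robustness reasons noted on the slide.
    return app_manager.restart(checkpoint_id, new_resources)

class DummySelector:
    def select(self, requirements):
        return ["hostA", "hostB"]        # static bag, like the current prototype

class DummyStorage:
    def __init__(self):
        self.saved = {}
    def save(self, state):
        self.saved["ckpt-1"] = dict(state)
        return "ckpt-1"

class DummyManager:
    def restart(self, checkpoint_id, resources):
        return {"checkpoint": checkpoint_id, "resources": resources}

result = tequila_adapt(
    {"requirements": {"cpus": 8}, "step": 42},
    DummySelector(), DummyManager(), DummyStorage(),
)
print(result)  # {'checkpoint': 'ckpt-1', 'resources': ['hostA', 'hostB']}
```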
Cactus-G Worm: Approach
1) Uniprocessor Tequila thorn that speaks to the GIS and adapts periodically [done: Cactus group]
2) Tequila thorn that speaks to the UCSD Resource Selector [current focus]
3) Integrate accurate performance models
4) Support multiprocessor execution
5) Detailed evaluation
6) Add adaptation triggers: e.g., contract violation, new regime, user input
Tequila Thorn + ResourceSelector
- The ResourceSelector must be set up as a service
- The Tequila thorn sends a request for a new bag of resources
- The ResourceSelector responds with the new bag
Current Status
- Tequila thorn prototype developed that speaks to the ResourceSelector
- Dummy ResourceSelector that returns a static bag of resources
- Demonstrated Cactus + Tequila operating
- Performance model developed
- Expected by May: multiprocessor support, ResourceSelector interface, real performance model
Open Issues
- Should we move more management logic into the Application Manager?
- How does the Contract Monitor fit into the architecture?
- How does the PPS fit into the architecture?
- How do the COP and the Application Launcher fit into the architecture (Cactus has its own launcher and compiles its own code)?
- How does Pablo fit into the architecture (which thorns are monitored? is the flesh monitored)?
The End
Request and Response
- The request to the ResourceSelector will be stored in the InformationService
- Only a pointer to the data in the IS will be passed to the ResourceSelector
- The response from the ResourceSelector will also be stored in the IS
- Only a pointer to the data in the IS will be passed back
Tequila Communication Overview (diagram)
Components: Cactus (Tequila thorn), the Resource Selector, and the Information Service.
Cactus Architecture in GrADS (diagram)
The same architecture as before (flesh, Computational Toolkit thorns, Configure/CST/Make, the many supported operating systems), extended with a GrADS toolkit and a communication library.
Communication Details, Step 1
- An event is sent to the Tequila thorn requesting a restart

Communication Details, Step 2
- Tequila stores the AART in the IS

Communication Details, Step 3
- Tequila sends a request to the ResourceSelector, passing a pointer to the data in the IS

Communication Details, Step 4
- The ResourceSelector retrieves the AART from the IS

Communication Details, Step 5
- The ResourceSelector stores the bag of resources (in the AART) in the IS

Communication Details, Step 6
- The ResourceSelector responds to Tequila, passing a pointer to the data in the IS

Communication Details, Step 7
- Tequila retrieves the AART with the new bag of resources from the IS
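The seven steps above amount to a pointer-passing protocol: the bulky data (the AART) lives in the Information Service, and only small handles travel between components. A hedged sketch, modelling the IS as a simple key-value store (the class and method names are invented for illustration; AART and the components come from the slides):

```python
# Sketch of the 7-step Tequila <-> ResourceSelector exchange. All bulky
# data (the AART) is kept in the Information Service; only small keys
# ("pointers") pass between components. APIs are illustrative.

class InformationService:
    """Stand-in for the IS: a persistent key-value store."""
    def __init__(self):
        self._store = {}
    def put(self, key, value):
        self._store[key] = value
        return key                        # the "pointer" that gets passed around
    def get(self, key):
        return self._store[key]

class ResourceSelector:
    def __init__(self, info_service):
        self.info = info_service
    def handle_request(self, aart_ptr):
        aart = self.info.get(aart_ptr)            # step 4: fetch AART from IS
        aart["bag"] = ["hostA", "hostB"]          # step 5: store bag in AART
        return self.info.put(aart_ptr, aart)      # step 6: hand back a pointer

def tequila_restart(info_service, selector, request):
    # step 1 has happened: a restart event reached Tequila
    ptr = info_service.put("aart-1", request)     # step 2: store AART in IS
    response_ptr = selector.handle_request(ptr)   # step 3: pass pointer only
    return info_service.get(response_ptr)         # step 7: fetch the new bag

info = InformationService()
selector = ResourceSelector(info)
aart = tequila_restart(info, selector, {"requirements": {"cpus": 8}})
print(aart["bag"])  # ['hostA', 'hostB']
```

The indirection costs an extra round trip per step, which is exactly the overhead the "Requirements" slide acknowledges; the payoff is that the AART persists in the IS between phases.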
Requirements
- Using the IS for communication adds overhead. Why do this?
- GrADS requirement 1: do some things (e.g., compile) at one time and have the results stored in a persistent storage area; pick these stored results up later and complete other phases.
Sample Tequila Scenario
- The user asks to run an ADM simulation, 400x400x400, for 1000 timesteps in 10s.
- The resource selector is contacted to obtain virtual machines
- The best virtual machine is selected based on the performance model
- The AM starts Cactus on that virtual machine (and monitors execution contracts?)
- The user (or application manager) decides that the computation is advancing too slowly and decides to search for a better virtual machine
- The AM finds a better machine, commands the Cactus run to checkpoint, transfers files, and restarts Cactus
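The "too slow" decision in this scenario can be made mechanical by comparing measured progress against the performance model's prediction. A hedged sketch (the threshold, numbers, and function are illustrative, not a GrADS contract specification):

```python
# Hedged sketch of the scenario's slowness check: migrate when measured
# throughput falls below a fraction of what the performance model
# predicted. The 50% tolerance and all numbers are illustrative.

def needs_migration(steps_done, elapsed, model_steps_per_sec, tolerance=0.5):
    """True when measured throughput drops below tolerance * predicted."""
    measured = steps_done / elapsed
    return measured < tolerance * model_steps_per_sec

# Predicted: 10 steps/s. After 60 s only 240 steps (4 steps/s) were done,
# below 50% of the prediction, so a migration would be triggered.
print(needs_migration(240, 60.0, 10.0))   # True
print(needs_migration(540, 60.0, 10.0))   # False (9 steps/s >= 5 steps/s)
```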