Cornell Theory Center, Aug 15 2000
CCTK: The Cactus Computational Toolkit
Werner Benger
Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institute at Golm/Potsdam, AEI) and Konrad-Zuse-Center for Information Technology Berlin (ZIB)

Introduction
- Cactus: Original Motivation
  - Numerical Relativity
  - Ongoing Research
- What is the Cactus Computational Toolkit?
  - Cactus Design: Flesh and Thorns
  - What does Cactus provide?
- The Cactus Framework
  - How to use Cactus
  - Interfacing Cactus
- Future Developments
  - Metacomputing for Numerical Relativity
  - Plans...

Cactus - Original Motivation: Numerical Relativity
- Kepler: stable ellipses
- Einstein: no stable orbits, because of gravitational radiation (GR is required for extreme gravity)
- Astronomy with gravitational wave detectors (coming into operation this year, 2000!)
- The two-body problem in General Relativity is still unsolved

Axisymmetric Collisions of Black Holes (NCSA)
- Pre-Cactus (H-Code, G-Code), Cactus 1.0, 2.x, Cactus 3.x
- US NSF Grand Challenge Projects: the requirement for a collaborative code
- A distorted single black hole
- Two black holes

Focus of 3D Numerical Evolutions During the Last Couple of Years
- Black Holes (prime source for GW): from Misner BH collisions to grazing BH collisions with initial spin and momentum (Brandt-Bruegmann initial data)
- Gravitational Waves: evolution of Brill waves, collapse of pure GW, investigation of the critical amplitude (i.e. when do black holes form?)
- Neutron Stars: NASA Neutron Star Grand Challenge, GR hydrodynamics, neutron stars colliding to form black holes, ...

Visualization of 3D Data with ZIB's Amira
- Efforts for Remote Visualization, Remote Monitoring, Remote Steering
- German Gigabit Project (TIKSL), KDI Portal

German Gigabit Project: TIKSL
- Remote Visualization
- Remote Steering
- Online Monitoring
- Globus Services
- HDF5 Interface
- Data Grid Access (DPSS)

What is the Cactus Computational Toolkit?
- Portable application framework
- The "Flesh" provides registration and scheduling facilities
- Memory management ("grid arrays", "grid functions")
- "Thorns" are exchangeable code segments
- Runtime activation/deactivation of thorns
- I/O layers (parallel I/O)
- Parameter handling
- Exchangeable parallelization layers (MPI, Globus MPI, shared memory, ...)

Data Types (Portability)
- Cactus data types provide portability across platforms:
- CCTK_REAL
  - CCTK_REAL4, CCTK_REAL8, CCTK_REAL16
- CCTK_INT
  - CCTK_INT2, CCTK_INT4, CCTK_INT8
- CCTK_CHAR
- CCTK_COMPLEX
  - CCTK_COMPLEX8, CCTK_COMPLEX16, CCTK_COMPLEX32
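
A minimal sketch of how these types appear in thorn code; the routine name and logic here are hypothetical, and the fixed-width variants would be used where exact sizes matter:

```c
#include "cctk.h"   /* defines the portable CCTK_* types */

/* Hypothetical helper: sums a field using only the portable
   Cactus types, so the same source compiles unchanged on
   every supported platform. */
CCTK_REAL MyThorn_SumField(const CCTK_REAL *field, CCTK_INT npoints)
{
  CCTK_REAL sum = 0.0;   /* default-width floating point */
  CCTK_INT  i;           /* default-width integer        */

  for (i = 0; i < npoints; i++)
  {
    sum += field[i];
  }
  return sum;
}
```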

Cactus Flesh
- Make system: organizes builds as "configurations", which hold everything needed to build with a particular set of options on a particular architecture
- API: functions which must be there for thorns to operate
- Scheduler: a sophisticated scheduler which calls thorn-provided functions as and when needed
- CCL: a configuration language which tells the flesh all it needs to know about the thorns

Data Structures (Memory Management)
- Grid Arrays: multidimensional, arbitrarily sized arrays distributed among processors
- Grid Functions: fields distributed on the multidimensional computational grid (a Grid Array sized to the grid); every point in the grid may hold a different value "f(x,y,z)"
- Grid Scalars: values common to all grid points
- Parameters: values/keywords that affect the behavior of the code (initialization, evolution, output, etc.); parameter checking, steerable parameters
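
A minimal sketch of how such variables might be declared in a thorn's interface.ccl; the thorn and variable names are hypothetical, not from an actual Cactus package:

```
# interface.ccl -- hypothetical "wavetoy" thorn (sketch)
implements: wavetoy

public:
CCTK_REAL scalarevolve type=GF timelevels=2
{
  phi
} "Evolved scalar field: a Grid Function, one value per grid point"

private:
CCTK_REAL wavenorms type=SCALAR
{
  phi_norm
} "A Grid Scalar: a single value common to the whole grid"
```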

Cactus Thorns
- Flesh (core) written in C
- Thorns (modules), grouped in packages, written in F77, F90, C, C++
- Thorn-flesh interface fixed in 3 files written in CCL (Cactus Configuration Language):
  - interface.ccl: Grid Functions, Arrays, Scalars (integer, real, logical, complex)
  - param.ccl: parameters and their allowed values
  - schedule.ccl: entry points of routines, dynamic memory and communication allocations
- Object-oriented features for thorns (public, private, protected variables; implementations; inheritance) for clearer interfaces
- Compilation:
  - Perl parses the CCL files and creates the flesh-thorn interface code at compile time
  - Particularly important for the Fortran-C interface: Fortran argument lists must be known at compile time, but depend on the thorn list
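
A minimal param.ccl sketch for the same hypothetical thorn, showing a parameter with an allowed range and a keyword parameter with allowed values (all names are illustrative):

```
# param.ccl -- hypothetical "wavetoy" thorn (sketch)
private:

REAL amplitude "Amplitude of the initial wave"
{
  0.0:* :: "Any non-negative value"
} 1.0

KEYWORD initial_data "Type of initial data to set up"
{
  "gaussian" :: "Gaussian pulse"
  "plane"    :: "Plane wave"
} "gaussian"
```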

Interface
- The concept: a contract with the rest of the code
  - Currently it covers only the data structures: variables and parameters
  - Adding routines and arguments is planned
- Private: variables that you want the flesh to allocate/communicate, but that no other thorn should see
- Public: variables that you want everybody to see (which means that everybody can modify them too!); inheritance
- Protected: variables that you want only your friends to see!

Implementation
- Why: two or more thorns may provide the same functionality with different internal implementations
  - Interchangeable pieces allow easy comparison and evolution during the development process
  - They are compiled together, and only one is activated at runtime
- How: if all the other thorns need to see the same contract, then thorns implementing a certain functionality must have the same public and protected variables
  - The same concept applies to parameters and scheduling
- Example: wildly different evolution approaches for the same equations, while all the analysis and initial-data thorns remain the same

Scheduling
- Thorns schedule when their routines should be executed
- Basic evolution skeleton idea:
  - Standard scheduling points: INITIAL, EVOL, ANALYSIS
  - Fine control: run this routine BEFORE/AFTER that routine
- Extend/customize with scheduling groups:
  - Add my routine to this group of routines
  - Run the group WHILE some condition is met
- Future redesign: the scheduler is really a runtime selector of the computation flow; much more power can be added to this concept
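
A minimal schedule.ccl sketch (routine and group names are hypothetical) showing a standard scheduling point, AFTER ordering, and a WHILE group:

```
# schedule.ccl -- hypothetical "wavetoy" thorn (sketch)
schedule WaveToy_InitialData AT initial
{
  LANG: C
} "Set up the initial scalar field"

schedule WaveToy_Evolve AT evol AFTER SomeOtherThorn_Step
{
  LANG: C
  SYNC: scalarevolve
} "Evolve the field, then synchronize its ghost zones"

# Run a whole group while an integer grid scalar is nonzero
schedule GROUP WaveToy_Iterate AT evol WHILE wavetoy::more_work
{
} "Iterate until the flag is cleared"
```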

Parallelizing an Application Thorn
- CCTK_SyncGroup: synchronize ghostzones for a group of grid variables (synchronization can also be requested in the scheduler configuration file)
- CCTK_Reduce: call any registered reduction operator, e.g. maximum value over the grid
- CCTK_Interpolate: call any registered interpolation operator
- CCTK_MyProc: unique processor number within the computation
- CCTK_nProcs: total number of processors
- CCTK_Barrier: waits for all processors to reach this point
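
A minimal sketch of some of these calls inside a scheduled thorn routine; the routine and group names are hypothetical, and exact signatures should be checked against the CCTK reference manual:

```c
#include "cctk.h"
#include "cctk_Arguments.h"

/* Hypothetical scheduled routine: each processor updates its
   local patch, then ghost zones are exchanged. */
void WaveToy_Step(CCTK_ARGUMENTS)
{
  DECLARE_CCTK_ARGUMENTS;

  int me    = CCTK_MyProc(cctkGH);  /* this processor's rank  */
  int total = CCTK_nProcs(cctkGH);  /* processors in this run */

  CCTK_VInfo(CCTK_THORNSTRING, "processor %d of %d", me, total);

  /* ... update the processor-local part of the grid here ... */

  /* Exchange ghost zones for the grid-variable group
     (alternatively: declare SYNC in schedule.ccl). */
  CCTK_SyncGroup(cctkGH, "wavetoy::scalarevolve");

  /* Wait until every processor has reached this point. */
  CCTK_Barrier(cctkGH);
}
```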

PUGH
- The standard parallel driver supplied with Cactus is provided by thorn PUGH
- Driver thorn: sets up grid variables, handles processor decomposition, deals with interprocessor communication
- 1D, 2D, 3D Grid Arrays and Grid Functions (beta6)
- Uses MPI
- Custom processor decomposition possible
- Otherwise decomposes in the z, then y, then x directions

How to Use Cactus
- [Optional: develop thorns, according to some rules, e.g.
  - specify variables through interface.ccl
  - specify the calling sequence of the thorn subroutines for the given problem and algorithm (schedule.ccl)]
- Specify which thorns are desired for the simulation (e.g. Einstein equations + special method 1 + HRSC hydro + wave finder + AMR + live visualization module + remote steering tool...)
- The specified code is then created, with only those modules, those variables, those I/O routines, this MPI layer, that AMR system, ..., that are needed (minimal binary)
- Subroutine calling lists are generated automatically
- Automatically created for the desired computer architecture
- Run it... (locally, remotely, or on the Grid using the Globus environment)
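
At run time, the active thorn list and all parameter values come from a single parameter file. A sketch (the thorn and most parameter names below are illustrative assumptions, not a complete working example):

```
# wavetoy.par -- sketch of a Cactus parameter file
ActiveThorns = "PUGH WaveToy"    # activate the driver and one thorn

driver::global_nx = 40           # grid size (names assumed here)
driver::global_ny = 40
driver::global_nz = 40

wavetoy::amplitude    = 1.0      # parameters declared in param.ccl
wavetoy::initial_data = "gaussian"

cactus::cctk_itlast = 100        # number of evolution iterations
```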

How Does Cactus Help?
- Collaborative working
  - Different people are experts on different parts of the physical problem
  - Each person can write a thorn which solves their part, e.g. the metric evolution, hydrodynamics, apparent horizon finding, ...
  - Thorns are encapsulated, and different thorns with the same functionality are interchangeable
- Parallelism
  - Cactus provides a parallel layer
  - The layer is independent of the underlying machine architecture
  - Researchers don't need to think deeply about parallelism
  - Choice of parallel library layers (native MPI, MPICH, MPICH-G(2), LAM, WMPI, PACX, HPVM, MPIPro)

... (continued)
- I/O
  - Cactus provides optimized I/O layers for the various machines
  - Possible to output very large datasets in a short time
  - Parallel I/O (parallel HDF5, Panda, John Shalf's FlexIO; various interfaces to MPI-I/O)
- Checkpointing
  - Many runs take more time than queuing systems allow
  - Cactus provides mechanisms to dump the entire state of the simulation and then read it in again, either on the same machine or on another
- Platform independence
  - Cactus provides a platform-independent environment
  - Various "strange compilation" issues on different machines are already handled by the CCTK environment
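
Checkpointing is typically requested through parameters in the same parameter file; a sketch (the parameter names below are assumptions and should be checked against the I/O thorn documentation):

```
# Checkpoint/recovery sketch (parameter names assumed)
IO::checkpoint_every = 500     # dump full state every 500 iterations
IO::checkpoint_dir   = "chkpt"
IO::recover          = "auto"  # restart from a checkpoint if one exists
IO::recover_dir      = "chkpt"
```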

CACTUS on NT
The NT Supercluster was used in three demos and two of the HPC challenges at SC '98, held in Orlando, Florida, November 9-13, 1998.

Who Uses Cactus
- Numerical Relativity
  - AEI
  - WashU
  - NCSA
  - Penn State
  - UIB
  - Southampton
  - PRL
  - ...
- Computer Science
  - Panda I/O project (UIUC)
  - Globus (Argonne)
  - Cluster evaluation: Roadrunner (UNM), NT cluster (NCSA), ...
  - Autopilot (UIUC)
  - Gigabit project (DFN)

Horizons (A Metacomputing Application)
- The singularity at the center is hidden by the event horizon, the surface through which nothing in the interior can escape
- The event horizon is only really detectable if we have the whole spacetime; that is not possible while the simulation is still running
- An apparent horizon always lies within the event horizon; many methods exist to detect it
- If a horizon is found:
  - We definitely have a black hole
  - We can compute the Gaussian curvature to inspect oscillations and their correlation to the emitted gravitational waves

Distributed Computation of AH
- Spacetime evolution and apparent horizon computation: AH(t=9.0), AH(t=11.0), AH(t=16.0)
- Resources: RZG CRAY T3E, 512 nodes (Garching/Munich); ZIB T3E, 16 nodes; AEI Origin 2000, 16 nodes; Cornell NT cluster, 16 nodes; Globus services
- All the required technology (TIKSL NetHDF5, Globus MDS queries, ...) is already there!

Coming Up (Cactus 4.2)
- Cactus communication layer
  - The parallel driver thorn (e.g. PUGH) currently provides both variable management and communication
  - Abstract sends and receives, etc.
- Abstract communication away from the driver thorn
  - Easily implement different parallel paradigms: shared memory, threads, CORBA, OpenMP, PVM, ...
- Compact groups (different layout in memory for improved cache performance)
- Unstructured meshes / finite elements / spectral methods
- Unstructured multigrid solver
- Convergence / multiple coordinate patches
- Capability browsing mechanism
- Command-line interface: connect directly to Cactus, scheduling
- GUIs, documentation, ...

What Physical Systems Are There Now?
- Initial Data
  - Schwarzschild
  - Misner
  - Brandt-Bruegmann puncture data
  - Brill waves
  - Teukolsky waves
  - Distorted Brill wave and black hole
  - TOV
  - Colliding neutron stars
  - Orbiting neutron stars
  - Boson star
  - Dust

... (continued)
- Analysis
  - Apparent horizon finders (fast flow, minimisation)
  - Wave extraction
  - Riemann invariants
  - Newman-Penrose quantities
  - Constraint evaluation
  - Geodesic tracking

Overview
- Introduction
- Cactus - Original Motivation: Numerical Relativity
- Axisymmetric Collisions of Black Holes (NCSA)
- Focus of 3D Numerical Evolutions During the Last Couple of Years
- Visualization of 3D Data with ZIB's Amira
- German Gigabit Project: TIKSL

Overview, II
- What is the Cactus Computational Toolkit?
- Data Types (Portability)
- Cactus Flesh
- Data Structures (Memory Management)
- Cactus Thorns
- Interface
- Implementation
- Scheduling
- Parallelizing an Application Thorn
- PUGH
- How to Use Cactus
- How Does Cactus Help?
- CACTUS on NT
- Who Uses Cactus
- Horizons (A Metacomputing Application)
- Distributed Computation of AH