Cactus Code and Grid Programming. Here at GGF1: Gabrielle Allen, Gerd Lanfermann, Thomas Radke, Ed Seidel, Max Planck Institute for Gravitational Physics (Albert Einstein Institute)

Cactus Code and Grid Programming
Here at GGF1: Gabrielle Allen, Gerd Lanfermann, Thomas Radke, Ed Seidel
Max Planck Institute for Gravitational Physics (Albert Einstein Institute)

Cactus: Parallel, Collaborative, Modular Application Framework
- Open source PSE for scientists and engineers... USER DRIVEN... easy parallelism, no new paradigms, flexible, Fortran, legacy codes.
- Flesh (ANSI C) provides the code infrastructure (parameter, variable, and scheduling databases, error handling, APIs, make, parameter parsing).
- Thorns (F77/F90/C/C++) are plug-in, swappable modules or collections of subroutines providing both the computational infrastructure and the physical application. Well-defined interface through 3 configuration files.
- Everything implemented as a swappable thorn... use the best available infrastructure without changing application thorns.
- Collaborative, remote and Grid tools.
- Computational Toolkit: existing thorns for (parallel) I/O, elliptic solvers, MPI unigrid driver, coordinates, interpolation, and more.
- Integrates other common packages and tools: HDF5, PETSc, GrACE.
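The "3 configuration files" are the thorn's declared interface to the flesh. A minimal, hedged sketch of the three files for a hypothetical WaveToy thorn (thorn, variable, and routine names are illustrative, and the syntax is abbreviated, not a complete reproduction of the Cactus CCL grammar):

```
# interface.ccl -- what the thorn implements and which variables it owns
implements: wavetoy
inherits: grid

REAL scalarfield TYPE=GF TIMELEVELS=2
{
  phi
} "Evolved scalar field"

# param.ccl -- parameters with ranges and defaults
REAL dt "Time step"
{
  0.0:* :: "Must be positive"
} 0.1

# schedule.ccl -- when the flesh should call the thorn's routines
schedule WaveToy_Evolve at EVOL
{
  LANG: C
} "Evolve the scalar field"
```

Because the flesh reads these declarations at build/startup time, two thorns that implement the same interface can be swapped without touching application code.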

Grid-Enabled Cactus
- Cactus and its ancestor codes have long used Grid infrastructure, motivated by simulation requirements.
- Support for Grid computing was part of the design requirements for Cactus 4.0 (drawing on experiences with Cactus 3).
- Cactus compiles out-of-the-box with Globus (using the globus device of MPICH-G(2)).
- The design of Cactus means applications are unaware of the underlying machine(s) the simulation is running on... applications become trivially Grid-enabled.
- Infrastructure thorns (I/O, driver layers) can be enhanced to make the most effective use of the underlying Grid architecture.
- Involved in many ongoing Grid projects.

Working and Used "User" Grid Tools
- Remote monitoring and steering of simulations (thorn HTTPD).
- Just adding point-and-click visualization to the web server, plus embedded visualization.
- Remote visualization... data streamed to the local site for analysis with local clients: a "window into the simulation".
- Notification by email at the start of a simulation, including details of where to connect (URLs) for the other tools.
- Checkpointing and restart between machines.
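Since thorns are activated and configured through the run's parameter file, enabling remote monitoring is a parameter-file change rather than a code change. A hedged sketch (the thorn list and parameter names below are illustrative, not the exact HTTPD thorn interface):

```
# Hypothetical Cactus parameter-file fragment. Parameter names are
# assumptions for illustration, not the verified HTTPD thorn API.
ActiveThorns = "HTTPD WaveToy PUGH"

httpd::port = 5555        # serve simulation status/steering pages here
wavetoy::dt = 0.1
```

The running simulation would then report its URL (e.g. in the startup email notification), and collaborators point a browser at it to monitor or steer.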

Grand Picture
[Diagram: Grid-enabled Cactus runs on distributed machines (Origin at NCSA, T3E at Garching); simulations launched from the Cactus Portal; remote visualization in St Louis; remote visualization and steering from Berlin and from an airport; visualization of data from previous simulations in an SF café; data archiving; connected via DataGrid/DPSS, downsampling, Globus, HTTP, HDF5, and isosurfaces.]

Cactus Worm
- Egrid test bed: 10 sites.
- A simulation starts on one machine, seeks out new resources (faster/cheaper/bigger), migrates there, and so on.
- Uses: Cactus, Globus.
- Protocols: gsissh, gsiftp; streams or copies data.
- Queries the Egrid GIIS at each site.
- Publishes simulation information to the Egrid GIIS.
- Demonstrated at SC2000 in Dallas.
- Development proceeding with KDI ASC (USA), TIKSL/GriKSL (Germany), GrADS (USA), and the Application Group of Egrid (Europe).
- A fundamental dynamic Grid application!
- Leads directly to many more applications.

Go! Dynamic Grid Computing
[Diagram: a simulation moves through scenarios across NCSA, SDSC, RZG, and LRZ -- find the best resources; free CPUs!; queue time over, find a new machine; add more resources; clone the job with a steered parameter; look for a horizon; found a horizon, try out excision; calculate/output gravitational waves; calculate/output invariants; archive data.]

User's View... It Has To Be Easy!

Grid Application Toolkit (GAT)
[Diagram: layered architecture, top to bottom]
- Cactus Application: black hole collisions, chemical engineering, cosmology, hydrodynamics
- Grid Tools: Worm, Spawner, Taskfarmer
- Cactus Programming APIs: grid variables, parameters, scheduling, I/O, interpolation, reduction, driver
- Grid Application Toolkit: information services, resource broker, data management, monitoring
- Grid Services
- YOUR GRID: distributed resources, different OS and software

Grid Application Toolkit (GAT)
- An application developer should be able to build simulations with tools that easily enable dynamic Grid capabilities.
- Want to build a programming API to:
  - Query/publish to information services: application, network, machine and queue status.
  - Resource brokers: where to go? how to decide? how to get there?
  - Move and manage data, compose executables: gsiscp, gsiftp, streamed HDF5, scp, GASS, ...
  - Higher-level Grid tools: Migrate, Spawn, Taskfarm, Vector, Pipeline, Clone, ...
  - Notification: send me an email, SMS, fax, or page when something happens.
- Much more.

Usability
- Standard user: ThornList, parameter file (configuration file, batch script).
- Grid user: Cactus Application Portal (ASC project, version 2 soon).
- Application developer:
  - Already Grid-aware plug-and-play thorns for I/O and communication (MPICH-G).
  - Planned Grid Application Toolkit with a flexible API for using Grid services and tools (resource selection, data management, monitoring, information services, ...).
  - Higher-level tools built from basic components (Worm/Migrator, Taskfarmer, Spawner) can be applied to any application.
- Grid infrastructure developer:
  - The GAT will provide the same interface to different implementations of Grid services, allowing tools to be easily tested and compared.
  - As soon as new tools are available they can be used straight away by Grid-enabled applications.

Portability
- Ported to many different architectures (everything we have access to); new architectures are usually no problem since we use standard programming languages and packages.
- Cactus data types are used to ensure compatibility between machines (distributed simulations have been successfully performed between many different architectures).
- Architecture-independent checkpointing: a Cactus run can be checkpointed using 32 processors on machine A and restarted on machine B using 256 processors.

Interoperability
- We want to be able to make use of Grid services as they become available.
- Developing APIs for accessing Grid services from applications, either as a library or as a remote call, e.g. Grid_PostInfo(id, machine, tag, info, service), where service could be mail, SMS, HTTPD, GIS, etc.
- Lower-level thorns interpret this to call the given service.
- Use function registration/overloading as appropriate to call a range of services.
- The key point: we can easily add a new service without changing application code.

Reliable Performance
- Initial implementation of remote performance monitoring using the PAPI library and tools (thorn PAPI).
- Since the Cactus APIs manage storage assignment and communication, it will be possible to make use of this information dynamically.
- The Grid Application Toolkit will include APIs for performance monitoring (Grid and application) and contract specification.
- Projects are underway to develop application-code instrumentation, both from supplied data (Cactus configuration files) and from dynamically generated data.

Reliability and Fault Tolerance
- The initial implementation of the Cactus Worm was quickly coded for SC2000, with no emphasis on fault tolerance.
- These experiments showed how important fault tolerance is, as well as detailed log files.
- The Grid Application Toolkit will have error checking and reporting, e.g. checking that a machine is up before sending data to it.
- But one problem remains... what to do when one machine (e.g. in a cluster) fails during a simulation?

Security and Privacy
- We want it...
  - Passwords, certificates; multiple certificates?
  - Who can access information about a simulation?
  - Who can steer a simulation?
  - How does the resource broker know about my resources?
  - Collaborative issues, groups.
  - Who can connect to this socket?
- How can we include this in the GAT? It has to be easy for users:
  - Scientists don't care too much about security.
  - They don't keep private material on remote resources.
- Cactus Grid tools send data and information through sockets: problems with firewalls.