
GridLab: Dynamic Grid Applications for Science and Engineering
A story from the difficult to the ridiculous…
Ed Seidel, Max-Planck-Institut für Gravitationsphysik (Albert Einstein Institute) / NCSA, U of Illinois, + lots of colleagues…
Co-Chair, GGF Applications Working Group

Grand Challenge Simulations: Science and Engineering Go Large Scale, Needs Dwarf Capabilities
- NSF Black Hole Grand Challenge: 8 US institutions, 5 years; solve the problem of colliding black holes (try…)
- NASA Neutron Star Grand Challenge: 5 US institutions; solve the problem of colliding neutron stars (try…)
- EU Network Astrophysics: 10 EU institutions, 3 years; try to finish these problems…; entire community becoming Grid enabled
These are examples of the future of science and engineering:
- Require large scale simulations, beyond the reach of any machine
- Require large geo-distributed, cross-disciplinary collaborations
- Require Grid technologies, but not yet using them!
- Both apps and Grids are dynamic…

Any Such Computation Requires an Incredible Mix of Varied Technologies and Expertise!
- Many scientific/engineering components: physics, astrophysics, CFD, engineering, ...
- Many numerical algorithm components: finite difference methods? elliptic equations (multigrid, Krylov subspace, preconditioners, ...)? mesh refinement?
- Many different computational components: parallelism (HPF, MPI, PVM, ???), architecture efficiency (MPP, DSM, vector, PC clusters, ???), I/O bottlenecks (gigabytes per simulation, checkpointing…), visualization of all that comes out!
- The scientist/engineer wants to focus on the top layer, but all of it is required for results...
- Such work cuts across many disciplines and areas of CS…

Cactus: Community Developed Simulation Infrastructure
- Developed in response to the needs of large scale projects
- Numerical/computational infrastructure to solve PDEs
- Freely available, open source community framework, in the spirit of GNU/Linux
- Many communities contributing to Cactus
- Cactus is divided into the "Flesh" (core) and "Thorns" (modules, or collections of subroutines)
- Multilingual: user apps in Fortran, C, C++; automated interface between them
- Abstraction: the Cactus Flesh provides an API for virtually all CS-type operations: storage, parallelization, communication between processors, interpolation, reduction, I/O (traditional, socket based, remote viz and steering…), checkpointing, coordinates (see the thorn sketch below)
- "Grid Computing": Cactus team and many collaborators worldwide, especially NCSA, Argonne/Chicago, LBL
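To make the Flesh/Thorn split concrete, here is a minimal sketch of what a thorn routine in C can look like. It assumes the standard Cactus flesh headers; the thorn name (WaveToy), the grid function phi, and its past timelevel phi_p are illustrative, not taken from this talk, and would be declared in the thorn's interface.ccl.

```c
/* Minimal sketch of a Cactus thorn routine in C; thorn and variable
 * names are illustrative. */
#include "cctk.h"
#include "cctk_Arguments.h"
#include "cctk_Parameters.h"

void WaveToy_Smooth(CCTK_ARGUMENTS)
{
  DECLARE_CCTK_ARGUMENTS;   /* grid functions, local sizes, the cctkGH handle */
  DECLARE_CCTK_PARAMETERS;  /* parameters declared in param.ccl               */
  int i, j, k;

  /* Loop only over the local patch: the flesh and its driver (e.g. PUGH)
   * have already decomposed the global grid across processors and filled
   * the ghost zones, so the physics code never touches MPI directly. */
  for (k = 1; k < cctk_lsh[2] - 1; k++)
    for (j = 1; j < cctk_lsh[1] - 1; j++)
      for (i = 1; i < cctk_lsh[0] - 1; i++)
      {
        const int idx = CCTK_GFINDEX3D(cctkGH, i, j, k);
        phi[idx] = 0.5 * (phi[idx] + phi_p[idx]);   /* toy smoothing update */
      }
}
```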

Modularity of Cactus...
[Diagram: applications, sub-apps, legacy apps, and a symbolic manipulation app plug into the Cactus Flesh; interchangeable layers beneath it provide AMR (GrACE, etc.), MPI, I/O layers, unstructured meshes, remote steering, MDS/remote spawn, and Globus metacomputing services. The user selects the desired functionality… code created... abstractions...]

Cactus Community Development
[Diagram: communities and projects using or contributing to Cactus, including the numerical relativity community, geophysics (Bosl), crack propagation (Cornell), astrophysics (Zeus), chemical engineering (Bishop), bioinformatics (Canada), the NASA Neutron Star Grand Challenge, NSF KDI (Suen), the EU Network (Seidel), DFN Gigabit (Seidel), GridLab (Allen, Seidel, …), "Egrid", GrADS (Kennedy, Foster), the Global Grid Forum, the AEI Cactus Group (Allen), and partners such as NCSA, ANL, SDSC, Livermore, Intel, Microsoft, Clemson, DLR, San Diego, GMD, Cornell, and Berkeley.]

Future view: much of it here already...
- Scale of computations much larger; complexity approaching that of Nature
- Simulations of the Universe and its constituents: black holes, neutron stars, supernovae; the human genome, human behavior
- Teams of computational scientists working together: must support efficient, high level problem description; must support collaborative computational science; must support all different languages
- Ubiquitous Grid computing: very dynamic simulations, deciding their own future; apps find the resources themselves (distributed, spawned, etc.); must be tolerant of dynamic infrastructure (variable networks, processor availability, etc…); monitored, viz'ed, controlled from anywhere, with colleagues elsewhere

Grid Simulations: A New Paradigm
- Computational resources scattered across the world: compute servers, handhelds, file servers, networks, Playstations, cell phones, etc…
- How to take advantage of this for scientific/engineering simulations?
- Harness multiple sites and devices
- Simulations at a new level of complexity and scale

Many Components for Grid Computing: all have to work for real applications
1. Resources: Egrid
   - A "Virtual Organization" in Europe for Grid computing
   - Over a dozen sites across Europe
   - Many different machines
2. Infrastructure: Globus Metacomputing Toolkit (example)
   - Develops fundamental technologies needed to build computational grids
   - Security: logins, data transfer
   - Communication
   - Information (GRIS, GIIS)

Components for Grid Computing, cont.
3. Grid Aware Applications (Cactus example)
   - Grid enabled modular toolkits for parallel computation: provide to the scientist/engineer…
   - Plug your science/engineering applications in!
   - Must provide many Grid services:
     – Ease of use: automatically find resources, given the need!
     – Distributed simulations: use as many machines as needed!
     – Remote viz and steering, tracking: watch what happens!
   - Collaborations of groups with different expertise: no single group can do it! The Grid is natural for this…

Egrid Testbed
- Many sites, heterogeneous: MPI-Gravitationsphysik, Konrad-Zuse-Zentrum, Poznan, Lecce, Vrije Universiteit Amsterdam, Paderborn, Brno, MTA-Sztaki-Budapest, DLR-Köln, GMD-St. Augustin, plus ANL, ISI, & friends
- In 12 weeks, all sites had formed a Virtual Organization with Globus, MPICH-G2, GSI-SSH, GSI-FTP
- Central GIIS's at Poznan and Lecce
- Key application: Cactus
- Egrid merged with the Grid Forum to form GGF, but maintains the Egrid testbed and identity

Cactus & the Grid
- Cactus application thorns: distribution information hidden from the programmer; initial data, evolution, analysis, etc.
- Grid aware application thorns: drivers for parallelism, I/O, communication, data mapping; PUGH: parallelism via MPI (MPICH-G2, a grid enabled message passing library)
- Grid enabled communication library: the MPICH-G2 implementation of MPI can run MPI programs across heterogeneous computing resources; also works with standard MPI or on a single processor (see the sketch below)
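As a concrete illustration of the last point, a plain MPI program such as the sketch below needs no source changes to run under MPICH-G2: the grid enabled library and Globus handle cross-site startup and authentication, while the application sees only standard MPI. The program is a generic example, not taken from Cactus.

```c
/* Sketch: a standard MPI program; under MPICH-G2 the same source runs
 * across machines at different sites without modification. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  int rank, size, len;
  char host[MPI_MAX_PROCESSOR_NAME];

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  MPI_Get_processor_name(host, &len);

  /* Each rank may live on a different machine, possibly on another
   * continent; MPI hides the distinction from the application. */
  printf("rank %d of %d on %s\n", rank, size, host);

  MPI_Finalize();
  return 0;
}
```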

Grid Applications So Far... SC93 - SC2000
- Typical scenario: find a remote resource (often using multiple computers); launch job (usually static, tightly coupled); visualize results (usually in-line, fixed)
- Example: Metacomputing the Einstein Equations: connecting T3Es in Berlin, Garching, San Diego
- Need to go far beyond this:
  – Make it much, much easier: portals, Globus, standards
  – Make it much more dynamic, adaptive, fault tolerant
  – Migrate this technology to the general user

The Astrophysical Simulation Collaboratory (ASC)
[Diagram: workflow around the Cactus Computational Toolkit (science, Autopilot, AMR, PETSc, HDF, MPI, GrACE, Globus, remote steering...): 1. user has science idea; selects appropriate resources; collaborators log in to monitor; steers simulation, monitors performance; composes/builds code components with interface...]
Want to integrate and migrate this technology to the generic user…

Supercomputing super difficult
- Consider the simplest case: sit here, compute there
- Accounts for one AEI user (real case): berte.zib.de, denali.mcs.anl.gov, golden.sdsc.edu, gseaborg.nersc.gov, harpo.wustl.edu, horizon.npaci.edu, loslobos.alliance.unm.edu, mcurie.nersc.gov, modi4.ncsa.uiuc.edu, ntsc1.ncsa.uiuc.edu, origin.aei-potsdam.mpg.de, pc.rzg.mpg.de, pitcairn.mcs.anl.gov, quad.mcs.anl.gov, rr.alliance.unm.edu, sr8000.lrz-muenchen.de
- 16 machines, 6 different usernames, 16 passwords, ...
- This is hard, but it gets much worse from here…

ASC Portal (Russell, Daues, Wind 2, Bondarescu, Shalf, et al.)
- ASC project: code management; resource selection (including distributed runs); code staging and sharing; data archiving, monitoring, etc…
- Technology: Globus, GSI, Java, DHTML, MyProxy, GPDK, Tomcat, Stronghold
- Used for the ASC Grid Testbed (SDSC, NCSA, Argonne, ZIB, LRZ, AEI)
- Driven by the need for easy access to machines
- Useful tool to test the Alliance VMR!!

Distributed Computation: Harnessing Multiple Computers
- Why would anyone want to do this? Capacity, throughput
- Issues: bandwidth, latency, communication needs, topology, communication/computation ratio
- Techniques to be developed: overlapping comm/comp, extra ghost zones, compression, algorithms to do this for the scientist…
- Experiments: 3 T3Es on 2 continents; last month: joint NCSA, SDSC test with 1500 processors (Dramlitsch talk)

Distributed Computation: Harnessing Multiple Computers
- Why would anyone want to do this? Capacity, throughput
- Setup: SDSC IBM SP (1024 procs, 5x12x17 = 1020) + NCSA Origin Array (5x12x(4+2+2) = 480); GigE at 100 MB/sec within sites, an OC-12 line between them (but only 2.5 MB/sec achieved)
- Solving the Einstein equations, but could be any application
- 70-85% scaling, ~250 GF (only 15% scaling without tricks)
- Techniques to be developed: overlapping comm/comp, extra ghost zones, compression, adaption!! (see the sketch below)
- Algorithms to do this for the scientist…
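The "overlapping comm/comp" and "extra ghost zones" tricks can be sketched with standard MPI. The 1D example below is illustrative only (update_interior and update_boundary are placeholder routines): ghost-zone exchanges are posted with non-blocking calls, the interior is updated while the messages are in flight, and only the boundary points wait for the data. Exchanging several ghost zones per message lets a code communicate less often over a slow wide-area link, at the price of some redundant computation.

```c
/* Sketch (1D, two neighbours) of overlapping communication with
 * computation in a ghost-zone based domain decomposition. */
#include <mpi.h>

void update_interior(double *u, int n);   /* points that need no ghost data */
void update_boundary(double *u, int n);   /* points adjacent to ghost zones */

void exchange_and_update(double *u, int n, int left, int right, MPI_Comm comm)
{
  MPI_Request req[4];

  /* u[0] and u[n-1] are ghost zones; u[1] and u[n-2] are owned edge points.
   * At the ends of the domain, left/right can be MPI_PROC_NULL. */
  MPI_Irecv(&u[0],     1, MPI_DOUBLE, left,  0, comm, &req[0]);
  MPI_Irecv(&u[n - 1], 1, MPI_DOUBLE, right, 1, comm, &req[1]);
  MPI_Isend(&u[1],     1, MPI_DOUBLE, left,  1, comm, &req[2]);
  MPI_Isend(&u[n - 2], 1, MPI_DOUBLE, right, 0, comm, &req[3]);

  update_interior(u, n);            /* overlap: compute while messages move */

  MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
  update_boundary(u, n);            /* ghost zones are now valid */
}
```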

Remote Viz/Steering: watch/control the simulation live…
- Remote viz data over HTTP and streaming HDF5, with autodownsampling
- Viz clients: Amira, LCA Vision, OpenDX, any viz client
- Steering: changing any steerable parameter; parameters, physics, algorithms, performance

Thorn HTTPD
- A thorn which allows any simulation to act as its own web server
- Connect to the simulation from any browser anywhere
- Monitor the run: parameters, basic visualization, ...
- Change steerable parameters (see the param.ccl sketch below)
- See running example at …
- Wireless remote viz, monitoring and steering
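For reference, steerability is declared per parameter in a thorn's param.ccl file; the fragment below is a hypothetical example (the parameter name and values are invented). Marking a parameter STEERABLE = ALWAYS is what allows thorn HTTPD, or any other steering interface, to change it while the simulation runs.

```
# Hypothetical param.ccl fragment (Cactus configuration language);
# the parameter is invented for illustration.
private:

REAL output_interval "How often (in coordinate time) to stream output" STEERABLE = ALWAYS
{
  0:* :: "Any non-negative value"
} 1.0
```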

Remote Offline Visualization
[Diagram: a visualization client (Amira) reads through an HDF5 VFD backed by DataGrid (Globus), using DPSS, FTP, or HTTP to reach remote DPSS, FTP, and web servers holding the data.]
- Example: viz in Berlin of 4 TB distributed across NCSA/ANL/Garching; only what is needed is transferred
- Accessing remote data for local visualization
- Should allow downsampling, hyperslabbing, etc. (see the sketch below)
- Grid World: file pieces left all over the world, but logically one file…
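The "downsampling, hyperslabbing" idea maps directly onto the HDF5 C API: the client selects a strided hyperslab so that only the needed values are read. The sketch below uses invented file and dataset names; whether the file is local or served remotely (DPSS, FTP, HTTP, as in the diagram) is a question for the virtual file driver, not for this code.

```c
/* Sketch: read a strided (downsampled) sub-block of a large 3D dataset.
 * File and dataset names are illustrative; buf must hold 64*64*64 doubles. */
#include <hdf5.h>

int read_slab(double *buf)
{
  hsize_t start[3]  = {0, 0, 0};     /* where the sub-block begins        */
  hsize_t stride[3] = {4, 4, 4};     /* take every 4th point: downsample  */
  hsize_t count[3]  = {64, 64, 64};  /* how many points to read           */

  hid_t file   = H5Fopen("gw_run.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
  hid_t dset   = H5Dopen2(file, "/phi", H5P_DEFAULT);
  hid_t fspace = H5Dget_space(dset);
  H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, stride, count, NULL);

  hid_t mspace = H5Screate_simple(3, count, NULL);
  herr_t err = H5Dread(dset, H5T_NATIVE_DOUBLE, mspace, fspace,
                       H5P_DEFAULT, buf);

  H5Sclose(mspace); H5Sclose(fspace); H5Dclose(dset); H5Fclose(file);
  return (int)err;
}
```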

Dynamic Distributed Computing
- The static grid model works only in special cases; we must make apps able to respond to a changing Grid environment...
- Many new ideas:
  – Consider: the Grid IS your computer: networks, machines, devices come and go
  – Dynamic codes, aware of their environment, seeking out resources
  – Rethink algorithms of all types
  – Distributed and Grid-based thread parallelism
- Scientists and engineers will change the way they think about their problems: think global, solve much bigger problems
- Many old ideas: the 1960's all over again; how to deal with dynamic processes, processor management, memory hierarchies, etc.

GridLab: New Paradigms for Dynamic Grids
- Code should be aware of its environment: What resources are out there NOW, and what is their current state? What is my allocation? What is the bandwidth/latency between sites?
- Code should be able to make decisions on its own:
  – A slow part of my simulation can run asynchronously…spawn it off!
  – New, more powerful resources just became available…migrate there!
  – Machine went down…reconfigure and recover!
  – Need more memory…get it by adding more machines!
- Code should be able to publish this information to a central server for tracking, monitoring, steering…
  – Unexpected event…notify users!
  – Collaborators from around the world all connect, examine the simulation.

Grid Scenario
[Cartoon: collaborators at WashU, Potsdam, Thessaloniki, and Hong Kong: "We see something, but too weak. Please simulate to enhance the signal!" "OK!" The Resource Estimator says: need 5 TB, 2 TF. Where can I do this? The Resource Broker answers: LANL is the best match… or NCSA + Garching (RZG) are OK, but need 10 Gbit/sec between them… Now: LANL, over a 1 Tbit/sec link.]

New Grid Applications: Some Examples
- Dynamic staging: move to a faster/cheaper/bigger machine ("Cactus Worm")
- Multiple universe: create a clone to investigate a steered parameter ("Cactus Virus")
- Automatic convergence testing: from initial data, or initiated during the simulation
- Look ahead: spawn off and run a coarser resolution to predict the likely future
- Spawn independent/asynchronous tasks: send to a cheaper machine; the main simulation carries on
- Thorn profiling: best machine/queue; choose resolution parameters based on the queue
- Dynamic load balancing: inhomogeneous loads, multiple grids
- Intelligent parameter surveys: farm out to different machines
- …Must get the application community to rethink algorithms…

Ideas for Dynamic Grid Computing
[Diagram: a simulation moves among sites (NCSA, SDSC, RZG, LRZ) while triggering actions: Go! Find best resources; free CPUs!!; clone job with steered parameter; queue time over, find new machine; add more resources; found a horizon, try out excision; look for horizon; calculate/output gravitational waves; calculate/output invariants; archive data.]

User’s View... simple!

Issues Raised by Grid Scenarios
- Infrastructure: Is it ubiquitous? Is it reliable? Does it work?
- Security: How does the user pass a proxy from site to site? Firewalls? Ports?
- How does the user/application get information about the Grid? Need reliable, ubiquitous Grid information services; portal, cell phone, PDA
- What is a file? Where does it live? Crazy Grid apps will leave pieces of files all over the world
- Tracking: How does the user track the Grid simulation hierarchies?
- Two current examples that work now, building blocks for the future: dynamic, adaptive distributed computing; migration: the Cactus Worm

Distributed Computation: Harnessing Multiple Computers
- Setup: SDSC IBM SP (1024 procs, 5x12x17 = 1020) + NCSA Origin Array (5x12x(4+2+2) = 480); GigE at 100 MB/sec within sites, an OC-12 line between them (but only 2.5 MB/sec achieved)
- Solving the Einstein equations, but could be any application
- 70-85% scaling, ~250 GF (only 15% scaling without tricks)
- Techniques to be developed: overlapping comm/comp, extra ghost zones, compression, adaption!!
- Algorithms to do this for the scientist…

Dynamic Adaptation in Distributed Computing
[Diagram: the code adapts between 2 and 3 ghost zones and switches compression on as network conditions change.]
- Automatically adapt to bandwidth/latency issues
- The application has NO KNOWLEDGE of the machine(s) it is on, the networks, etc.
- The adaptive techniques make NO assumptions about the network
- Issues: what if network conditions change faster than the adaption…

Cactus Worm: Illustration of the Basic Scenario
- Live demo at … (usually)
- A Cactus simulation (could be anything) starts, launched from a portal
- It queries a Grid Information Server and finds available resources
- It migrates itself to the next site, according to some criterion (sketched below)
- It registers its new location with the GIS and terminates the old simulation
- The user tracks/steers, using HTTP, streaming data, etc...
- It continues around Europe…
- If we can do this, much of what we want can be done!
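A rough sketch of the Worm's migration cycle is given below. Every function called here is a hypothetical placeholder for the services named on this slide (information-server query, checkpointing, file transfer, restart submission, GIS registration); none of these are real Cactus or Globus API names.

```c
/* Sketch of the Worm migration cycle; all helper names are hypothetical. */
typedef struct { char host[256]; double score; } resource_t;

int    query_giis_for_best_resource(resource_t *out);
double current_resource_score(void);
void   checkpoint_simulation(const char *path);
void   transfer_file(const char *path, const char *host);
void   submit_restart(const char *host, const char *path);
void   register_with_gis(const char *host);

int worm_step(void)
{
  resource_t best;

  if (!query_giis_for_best_resource(&best))    /* 1. ask the information server  */
    return 0;                                  /*    nothing found: keep running */

  if (best.score <= current_resource_score())  /* 2. apply the migration criterion */
    return 0;

  checkpoint_simulation("worm.chkpt");         /* 3. write a checkpoint            */
  transfer_file("worm.chkpt", best.host);      /* 4. move it (GSI-FTP, scp, ...)   */
  submit_restart(best.host, "worm.chkpt");     /* 5. launch the successor          */
  register_with_gis(best.host);                /* 6. tell the GIS where we went    */
  return 1;                                    /* 7. old simulation terminates     */
}
```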

Worm as a Building Block for Dynamic Grid Applications: Many Uses
- Tool to test the operation of the Grid: Alliance VMR, Egrid, other testbeds
- Will be outfitted with diagnostics and performance tools: What went wrong where? How long did a given Worm "payload" take to migrate? Are grid map files in order? Certificates, etc…
- Basic technology for migrating entire simulations, or parts of simulations
- Example: contract violation… code going too slow, too fast, using too much memory, etc…

How to Determine When to Migrate: Contract Monitor
- GrADS project activity: Foster, Angulo, Cactus team
- Establish a "contract", driven by user-controllable parameters:
  – Time quantum for "time per iteration"
  – % degradation in time per iteration (relative to the prior average) before noting a violation
  – Number of violations before migration (see the sketch below)
- Potential causes of violation: competing load on the CPU; the computation requires more processing power (e.g., mesh refinement, a new subcomputation); hardware problems; going too fast! using too little memory? why waste a resource??
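A minimal sketch of the contract logic described above is shown below: compare each iteration's time against a running average, count consecutive violations, and request migration after too many. The thresholds, names, and the smoothing of the average are illustrative, not the GrADS implementation.

```c
/* Minimal contract-monitor sketch; thresholds and names are illustrative. */
typedef struct {
  double avg_time;         /* running average of time per iteration      */
  double max_degradation;  /* e.g. 0.25 = 25% slower than average        */
  int    violations;       /* consecutive violations seen so far         */
  int    max_violations;   /* how many before we migrate                 */
} contract_t;

/* Returns 1 if the monitor decides the job should migrate. */
int contract_check(contract_t *c, double iter_time)
{
  if (c->avg_time > 0.0 &&
      iter_time > c->avg_time * (1.0 + c->max_degradation))
    c->violations++;
  else
    c->violations = 0;     /* a good iteration resets the count */

  /* update the running average with simple exponential smoothing */
  c->avg_time = (c->avg_time > 0.0) ? 0.9 * c->avg_time + 0.1 * iter_time
                                    : iter_time;

  return c->violations >= c->max_violations;
}
```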

Migration due to Contract Violation (Foster, Angulo, Cactus Team…)
[Timeline plot: running at UIUC; load applied; 3 successive contract violations; resource discovery & migration (migration time not to scale); running at UC.]

Grid Application Development Toolkit
- The application developer should be able to build simulations with tools that easily enable dynamic grid capabilities
- Want to build a programming API that easily allows (a hypothetical sketch of such an API follows below):
  – Query an information server (e.g. GIIS): What's available for me? What software? How many processors?
  – Network monitoring
  – Decision routines (thorns): How to decide? Cost? Reliability? Size?
  – Spawning routines (thorns): Now start this up over here, and that up over there
  – Authentication server: issues commands, moves files on your behalf (can't pass on a Globus proxy)
  – Data transfer: use whatever method is desired (GSI-SSH, GSI-FTP, streamed HDF5, scp…)
  – Etc…
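The header below sketches what such an API could look like. Every name in it is invented for illustration; it is not the actual GridLab toolkit interface, only the shape of the calls the list above asks for.

```c
/* Hypothetical header sketching the kinds of calls listed above.
 * All names are invented for illustration. */
#ifndef GRID_APP_TOOLKIT_SKETCH_H
#define GRID_APP_TOOLKIT_SKETCH_H

typedef struct grid_resource grid_resource;  /* a machine/queue found on the Grid */

/* Query an information server (e.g. a GIIS): what is available to me? */
int grid_find_resources(const char *requirements,
                        grid_resource **out, int *n_out);

/* Decision routine: rank candidates by cost, reliability, size, ... */
int grid_select_resource(grid_resource *candidates, int n,
                         const char *policy, grid_resource **chosen);

/* Spawning routine: start this task over there. */
int grid_spawn_task(const grid_resource *where, const char *executable,
                    const char *checkpoint_file);

/* Data transfer by whatever method is configured (GSI-FTP, scp, ...). */
int grid_move_file(const char *src_url, const char *dst_url);

#endif
```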

Example Toolkit Call: Routine Spawning
[Diagram boxes labelled ID, EV, IO, AN]
The example is a Cactus schedule block:

    Schedule AHFinder at Analysis
    {
      EXTERNAL = yes
      LANG     = C
    } "Finding Horizons"

GridLab
- Egrid + US friends: working to make this happen…
- Large EU project, under negotiation with the EC…
- Partners: AEI, Lecce, Poznan, Brno, Amsterdam, ZIB-Berlin, Cardiff, Paderborn, Compaq, Sun, Chicago, ISI, Wisconsin
- 20 positions open!

Grid Related Projects
- GridLab: enabling these scenarios
- ASC: Astrophysics Simulation Collaboratory; NSF funded (WashU, Rutgers, Argonne, U. Chicago, NCSA); collaboratory tools, Cactus portal
- Global Grid Forum (GGF): Applications Working Group
- GrADS: Grid Application Development Software; NSF funded (Rice, NCSA, U. Illinois, UCSD, U. Chicago, U. Indiana, ...)
- TIKSL/GriKSL: German DFN funded (AEI, ZIB, Garching); remote online and offline visualization, remote steering/monitoring
- Cactus Team: dynamic distributed computing …

Summary
- Science and engineering drive, and demand, Grid development: problems are very large and need new capabilities
- Grids will fundamentally change research: enable problem scales far beyond present capabilities; enable larger communities to work together (they'll need to); change the way researchers/engineers think about their work
- The dynamic nature of the Grid makes the problem much more interesting: harder, but it matches the dynamic nature of the problems being studied; need to get application communities to rethink their problems
- The Grid is the computer…
- Join the Applications Working Group of GGF
- Join our project: work with us from here, or come to Europe!

Credits: this work resulted from a great team