Applications for the Grid Here at GGF1: Gabrielle Allen, Thomas Dramlitsch, Gerd Lanfermann, Thomas Radke, Ed Seidel Max Planck Institute for Gravitational Physics (Albert Einstein Institute)

THE GRID: dependable, consistent, pervasive access to high-end resources.

CACTUS: a freely available, modular, portable and manageable environment for collaboratively developing parallel, high-performance, multi-dimensional simulations.

Working Cactus Grid Tools
- Remote monitoring and steering of simulations (thorn HTTPD); see the polling sketch below.
- Currently adding point-and-click visualization to the web server, along with embedded visualization.
- Remote visualization: data streamed to the local site for analysis with local clients, a "window into the simulation".
- Notification by email at the start of a simulation, including details of where to connect (URLs) for the other tools.
- Checkpointing and restart between machines.
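As a rough illustration of the first item, this sketch polls a simulation's built-in web server and prints part of its status page. The host, port, and URL are hypothetical placeholders, not the HTTPD thorn's actual interface.

```python
# Poll a running simulation's built-in web server (the HTTPD thorn)
# and print the start of its status page. The hostname and port are
# invented placeholders for a real simulation's address.
import urllib.request

SIMULATION_URL = "http://bigmachine.example.org:5555/"  # hypothetical

def poll_simulation(url: str) -> str:
    """Fetch the simulation's HTML status page over plain HTTP."""
    with urllib.request.urlopen(url, timeout=10) as response:
        return response.read().decode("utf-8", errors="replace")

if __name__ == "__main__":
    print(poll_simulation(SIMULATION_URL)[:500])
```

Steering would work the same way in reverse: an HTTP request posting a new parameter value back to the simulation's server.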

Grand Picture
[Diagram: simulations launched from the Cactus Portal run on a T3E in Garching and an Origin at NCSA; grid-enabled Cactus runs on distributed machines; remote steering and monitoring from an airport; remote visualization in St Louis; remote visualization and steering from Berlin; visualization of data from previous simulations in an SF café. Technologies shown: DataGrid/DPSS, downsampling, Globus, HTTP, HDF5, isosurfaces.]

Developing Cactus Grid Tools
- Cactus Simulation Portal (Astrophysics Simulation Collaboratory, ASC): Version 2 next month; lots of ideas, and users are designing it.
- Testbeds/VMRs: CVMR (big production machines), Egrid (test and development), EU Astrophysics.
- Dynamic scenarios: production-level Cactus Worm (migration tool) and more.
- Non-demo distributed simulations: cheaper, faster, bigger.
- Notification, information services, data management: requested by users.

Grid Applications
- How can applications really exploit Grids?
- The Grid is much more than portals, distributed computing, and remote visualization; we should be thinking of new application scenarios.
- The Cactus Worm (simulation migrator) was a prototype of such a class of application: a thorn (module) which can be added to any application for automatic migration between machines (using checkpoint files). It will:
  - dynamically locate new resources;
  - move the appropriate data files between the old and new machines;
  - fetch the appropriate executable from a central repository;
  - restart the application on the new machine (a minimal version of this loop is sketched below).
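The following sketch is illustrative only: find_new_resource() stands in for a real information-service/broker query, plain scp replaces gsiscp/gsiftp, and the repository URL and restart command line are invented rather than actual Cactus syntax.

```python
# Worm-style migration loop: locate a resource, move the checkpoint,
# fetch an executable, and restart. All hosts and commands are
# hypothetical stand-ins for real grid services.
import subprocess

def find_new_resource() -> str:
    """Hypothetical resource-broker call; returns the chosen hostname."""
    return "bigger-machine.example.org"

def migrate(checkpoint: str, parfile: str) -> None:
    target = find_new_resource()  # dynamically locate new resources
    # Move checkpoint and parameter file to the new machine (the slides
    # mention gsiscp/gsiftp; plain scp keeps the sketch simple).
    subprocess.run(["scp", checkpoint, parfile, f"{target}:run/"],
                   check=True)
    # Fetch the right executable from a (hypothetical) central repository.
    subprocess.run(["ssh", target,
                    "wget -q http://repo.example.org/cactus_sim -O run/cactus_sim"
                    " && chmod +x run/cactus_sim"], check=True)
    # Restart from the checkpoint on the new machine.
    subprocess.run(["ssh", target, f"cd run && ./cactus_sim {parfile}"],
                   check=True)

if __name__ == "__main__":
    migrate("checkpoint.it_1024.h5", "worm.par")
```

The key enabler is that the checkpoint file is portable between architectures, so the restart on the new machine picks up exactly where the old one left off.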

Grid Applications
- Execution staging:
  - use the most appropriate (virtual) machine (fastest, cheapest, biggest), e.g. "need it by Wednesday, only use 200 SUs, need 1 GB";
  - set parameters on the basis of the available machines, e.g. anywhere in 70 < nx < 150, but I want to run it right now (see the toy selector after this list).
- Simulation redistribution:
  - put new grids on different machines (AMR, multigrid);
  - adapt automatically to varying processor speeds and application loads;
  - slow startup: start the application now on a slow resource, and move to faster resources as they become available.
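A toy version of execution staging under the constraints named above (a deadline, a 200-SU budget, 1 GB of memory, 70 < nx < 150). The machine list, cost model, and memory-to-nx scaling are all invented for illustration.

```python
# Pick the "most appropriate" machine for the stated constraints, then
# size the grid (nx) to fit it. Data and heuristics are fictitious.
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    free_cpus: int
    mem_per_cpu_gb: float
    su_cost_per_hour: float
    queue_wait_hours: float

CANDIDATES = [
    Machine("origin-ncsa",  32, 0.50, 2.0, 12.0),
    Machine("t3e-garching", 128, 0.25, 1.5, 2.0),
    Machine("linux-cluster", 16, 1.00, 0.5, 0.0),
]

def pick_machine(deadline_hours: float, su_budget: float,
                 mem_needed_gb: float) -> Machine:
    ok = [m for m in CANDIDATES
          if m.queue_wait_hours < deadline_hours
          and m.su_cost_per_hour * deadline_hours <= su_budget
          and m.free_cpus * m.mem_per_cpu_gb >= mem_needed_gb]
    # "Run it right now": prefer the machine that can start soonest.
    return min(ok, key=lambda m: m.queue_wait_hours)

def pick_nx(machine: Machine) -> int:
    # Scale the grid to available memory, clamped to 70 < nx < 150.
    nx = int(machine.free_cpus * machine.mem_per_cpu_gb * 30)
    return max(71, min(149, nx))

if __name__ == "__main__":
    m = pick_machine(deadline_hours=72, su_budget=200, mem_needed_gb=1.0)
    print(f"run on {m.name} with nx={pick_nx(m)}")
```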

Grid Applications
- Simulation migration:
  - move to more appropriate machines (faster, cheaper);
  - move to a new resource at the end of the queue time.
- Convergence testing: send coarser grids to different resources, either at the start of the simulation, dynamically at user request, or following some event.
- Look-ahead: spawn a downsampled-resolution copy on a different resource to predict the future.
- Cloning/multiple universes: dynamically initiate a cloned version with changed parameters (look-ahead and cloning are sketched below).
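A sketch of the look-ahead and cloning ideas: copy the current parameter set, coarsen or change it, and hand it to another resource. submit_job() is a hypothetical stand-in for a real submission call.

```python
def submit_job(host: str, params: dict) -> None:
    """Hypothetical stand-in for a real remote-submission mechanism."""
    print(f"submitting to {host}: {params}")

def look_ahead(params: dict, host: str, factor: int = 2) -> None:
    # Spawn a downsampled copy that runs faster and so finishes first,
    # "predicting the future" of the full-resolution run.
    coarse = dict(params, nx=params["nx"] // factor)
    submit_job(host, coarse)

def clone(params: dict, host: str, **changes) -> None:
    # Dynamically initiate a cloned run with changed parameters.
    submit_job(host, {**params, **changes})

if __name__ == "__main__":
    base = {"nx": 128, "dtfac": 0.25}
    look_ahead(base, "spare-cluster.example.org")
    clone(base, "t3e.example.org", dtfac=0.5)
```

Convergence testing is the same mechanism run the other way: submit several coarsened clones and compare their output against the full-resolution run.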

Grid Applications
- Spawn independent tasks:
  - Analysis tasks: these don't need to be performed inline with the simulation; send them to different available (free) machines.
  - Vector: send work to a vector pool of resources, e.g. calculate the FFT of each grid variable, sending each one to a different machine (see the sketch below).
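The vector pattern in miniature: compute the FFT of each grid variable on a different worker. A local process pool stands in for the pool of remote machines, and NumPy is assumed to be available.

```python
# One FFT per grid variable, each handed to a different worker. A
# process pool substitutes for a pool of remote machines.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def fft_of(item):
    name, data = item
    return name, np.fft.fftn(data)

if __name__ == "__main__":
    # Fake "grid variables": four small 3-D arrays.
    grid_vars = {f"var{i}": np.random.rand(64, 64, 64) for i in range(4)}
    with ProcessPoolExecutor() as pool:
        for name, spectrum in pool.map(fft_of, grid_vars.items()):
            print(name, spectrum.shape)
```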

Grid Applications
- Spawn independent tasks:
  - Pipeline: a series of tasks performed on the same data (illustrated below).
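The pipeline pattern reduces to function composition: a series of stages applied, in order, to the same data. In a grid setting each stage could run on a different machine; here they are plain functions invented for illustration.

```python
from functools import reduce

def downsample(d):  # keep every other point
    return d[::2]

def smooth(d):      # simple two-point average
    return [(a + b) / 2 for a, b in zip(d, d[1:])]

def summarize(d):   # final analysis stage
    return {"points": len(d), "max": max(d)}

def run_pipeline(data, stages=(downsample, smooth, summarize)):
    return reduce(lambda d, stage: stage(d), stages, data)

if __name__ == "__main__":
    print(run_pipeline(list(range(100))))
```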

Grid Applications
- Task farming:
  - automated parameter studies (see the sweep sketch below);
  - genetic parameter studies: use heuristic algorithms to cover large parameter spaces;
  - manage embarrassingly parallel applications.
- Make use of:
  - Condor, Entropia;
  - Grid scripting languages: everything available from the command line.
- Many other possibilities...
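A task-farming sketch: an automated parameter study that submits one job per point of a small parameter grid. submit_job() again stands in for a real submission mechanism such as Condor; the parameter names are fictitious.

```python
# Sweep a small parameter grid, farming one job per combination out
# across a list of (hypothetical) hosts in round-robin fashion.
import itertools

def submit_job(host: str, params: dict) -> None:
    print(f"queueing on {host}: {params}")

def parameter_study(hosts):
    space = itertools.product([64, 128], [0.25, 0.5], ["eno", "tvd"])
    for i, (nx, dtfac, method) in enumerate(space):
        submit_job(hosts[i % len(hosts)],
                   {"nx": nx, "dtfac": dtfac, "method": method})

if __name__ == "__main__":
    parameter_study(["poolA.example.org", "poolB.example.org"])
```

A genetic study would replace the exhaustive product with a loop that breeds the next generation of parameter sets from the best results so far.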

Dynamic Grid Computing
[Diagram: a run hops between NCSA, SDSC, RZG, and LRZ. Go! Free CPUs!! Find best resources; queue time over, find a new machine; add more resources; clone the job with a steered parameter; look for a horizon; found a horizon, try out excision; calculate/output gravitational waves; calculate/output invariants; archive data.]

User's View... Has To Be Easy!

Cactus Grid Application Toolkit (GAT)
[Layered architecture, top to bottom:]
- Cactus Application [black hole collisions, chemical engineering, cosmology, hydrodynamics]
- Grid Tools [Worm, Spawner, Taskfarmer]
- Cactus Programming APIs [grid variables, parameters, scheduling, IO, interpolation, reduction, driver]
- Grid Application Toolkit [information services, resource broker, data management, monitoring]
- YOUR GRID [distributed resources, different OS and software, exposed as Grid Services]

Grid Application Toolkit (GAT)
- An application developer should be able to build simulations with tools that easily enable dynamic grid capabilities.
- We want to build a programming API (one possible shape is sketched below) to:
  - query/publish to information services: application, network, machine and queue status;
  - use resource brokers: where to go? how to decide? how to get there?;
  - move and manage data, compose executables: gsiscp, gsiftp, streamed HDF5, scp, GASS, ...;
  - invoke higher-level grid tools: migrate, spawn, taskfarm, vector, pipeline, clone, ...;
  - send notifications: send me an email, SMS, fax, or page when something happens.
- Much more.
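To make the wish-list concrete, here is one possible shape for such an API. Every class and method name is invented for illustration; nothing here is the actual GAT interface.

```python
# Hypothetical sketch of the API surface the slide argues for: a few
# primitives (information, brokering, data, notification) with the
# higher-level tools built on top of them.
class GridApplicationToolkit:
    def query_info(self, what: str) -> dict:
        """Query information services (machine, network, queue status)."""
        raise NotImplementedError

    def find_resource(self, **requirements) -> str:
        """Ask a resource broker where to go and how to get there."""
        raise NotImplementedError

    def move_data(self, src: str, dst: str, protocol: str = "gsiftp"):
        """Move/manage files (gsiscp, gsiftp, streamed HDF5, scp, ...)."""
        raise NotImplementedError

    def notify(self, message: str, via: str = "email"):
        """Tell the user when something happens (email, SMS, pager...)."""
        raise NotImplementedError

    # Higher-level tools (migrate, spawn, taskfarm, vector, pipeline,
    # clone) compose the primitives above, e.g.:
    def migrate(self, checkpoint: str) -> None:
        target = self.find_resource(free_cpus=32)
        self.move_data(checkpoint, f"{target}:run/")
        self.notify(f"simulation migrated to {target}")
```

The point of the layering is that the application only ever calls this API; swapping Globus for some other middleware then changes the toolkit's implementation, not the application.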

Cactus GAT: Example