Cactus Project & Collaborative Working


Cactus Project & Collaborative Working Gabrielle Allen, Max Planck Institute for Gravitational Physics (Albert Einstein Institute)

Cactus
Modular, collaborative, parallel, portable framework for large-scale scientific computing: http://www.cactuscode.org
- Application/infrastructure Thorns (modules in Fortran/C/C++ etc.) plug into a core Flesh (glue code in ANSI C).
- Driven by the needs of the numerical relativity community: very large scale simulations that, right now, need the resources and flexibility promised by grid computing; large distributed collaborations; lots and lots of data; visualization that is crucial for physics understanding; ...
- Can do now: simulation monitoring, steering (web server thorn), streaming data for visualization (HDF5), readers for many different viz clients, a simulation portal under development (ASC), distributed simulations, with a focus on making it transparent for users (VizLauncher: automatic visualization from a web browser interface).
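
To make the Thorn/Flesh split concrete, here is a minimal sketch of what a thorn routine looks like in C. The CCTK macros and cctk headers are the standard Cactus API; the thorn name MyThorn, the routine, and the grid function phi are hypothetical, and a real thorn would also declare them in its interface.ccl, param.ccl, and schedule.ccl files.

    /* Minimal sketch of a routine in a hypothetical thorn "MyThorn".
     * The Flesh calls it at each iteration as requested in the thorn's
     * schedule.ccl; "phi" is assumed to be a 3D grid function declared
     * in interface.ccl. */
    #include "cctk.h"
    #include "cctk_Arguments.h"
    #include "cctk_Parameters.h"

    void MyThorn_DampPhi(CCTK_ARGUMENTS)
    {
      DECLARE_CCTK_ARGUMENTS;   /* grid sizes, cctkGH, grid functions */
      DECLARE_CCTK_PARAMETERS;  /* parameters from param.ccl */
      int i, j, k;

      /* Loop over the local (per-processor) part of the grid; the
       * driver thorn handles parallel decomposition and ghost zones. */
      for (k = 0; k < cctk_lsh[2]; k++)
        for (j = 0; j < cctk_lsh[1]; j++)
          for (i = 0; i < cctk_lsh[0]; i++)
            phi[CCTK_GFINDEX3D(cctkGH, i, j, k)] *= 0.99; /* stand-in physics */

      CCTK_INFO("Damped phi on the local grid");
    }

Because the Flesh only sees scheduled routines and declared variables, the same thorn builds and runs unchanged on a laptop or a parallel supercomputer, which is what makes the portable, collaborative development model work.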

Cactus Computational Toolkit
Your science + AMR, HDF5, MPI, Globus, Elliptic, Steering, etc.
The user has a science idea, composes/builds code components using the portal, selects appropriate resources, then steers the simulation and monitors its performance; collaborators log in to monitor, steer, and visualize.

Dynamic Grid Computing
[Storyboard diagram of a simulation adapting across grid sites (NCSA, SDSC, RZG, LRZ). Steps include: find best resources, go!; free CPUs!!, add more resources; queue time over, find new machine; clone job with steered parameter; look for horizon; found a horizon, try out excision; calculate/output invariants; calculate/output gravitational waves; archive data.]

Collaborative Scenarios
- The collaboration consists of geographically distributed researchers with access to different sets of resources (supercomputers, Idesks, CAVEs, visualization software, high-speed networks, ...).
- A limited number of high-resolution simulations take place at unknown times and places (queuing systems).
- It is important to use this run time effectively: we need to monitor that everything is running properly (steer the output directory, switch off expensive analysis, ...) and learn as much physics as possible (steer analysis, output variables, physics parameters, ...).
- Everyone in the collaboration needs to be able to interact (in different ways) with the simulation.
- And everyone also needs to be able to interact together, to see what other people are seeing (Access Grid, collaborative visualization).

Requirements I: Remote Visualization
- Visualize data from the simulation in real time, using the best available tools on the local machine.
- Data streamed directly from the simulation across the network, or accessed from various local filesystems.
- Downsampling, zooming, ...
- Shouldn't slow down the simulation (a separate data server is needed); see the sketch after this list.
- Each user should be able to customize what they are seeing (variables, downsampling parameters, etc.) ... or see exactly what someone else is seeing!
- Need visualization on all platforms (laptop, PDA, phone, Windows, Mac, ...), integrated in the same way.
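
As one concrete illustration of the downsampling requirement, here is a minimal sketch of a helper such a separate data server might use to thin out a 3D scalar field before streaming it to a visualization client. This is an assumed illustration, not actual Cactus or GridLab code.

    /* Hypothetical data-server helper: downsample a 3D scalar field by an
     * integer stride in each dimension. The caller must size "out" to at
     * least ceil(nx/sx) * ceil(ny/sy) * ceil(nz/sz) elements; returns the
     * number of values written. */
    #include <stddef.h>

    size_t downsample3d(const double *in, int nx, int ny, int nz,
                        int sx, int sy, int sz, double *out)
    {
      size_t n = 0;
      int i, j, k;
      for (k = 0; k < nz; k += sz)
        for (j = 0; j < ny; j += sy)
          for (i = 0; i < nx; i += sx)
            out[n++] = in[(size_t)k * ny * nx + (size_t)j * nx + i];
      return n;
    }

Running this on a data server rather than inside the simulation keeps the network and reduction work off the compute nodes, which is why the slide asks for a separate server.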

Requirements II: Event and Data Description
Event description and transportation:
- E.g. remote steering requests.
- Protocols/APIs focused on grid-aware visualization systems, including distributed collaborative environments (CaveLib is getting close, but is not flexible enough).
Data description:
- Scientific data (multidimensional arrays, different geometrical objects) that is remote and distributed in different ways across different machines, filesystems, and archives.
- With visualization in mind: clients should be able to extract enough information to recognize what the data is and what to do with it.
- Along the lines of the OpenDX data model (general RDF/XML is not enough); a sketch of such a self-describing header follows.
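
To suggest the flavor of a self-describing data description, here is a hypothetical header that could travel ahead of an array on the wire. The structure, field names, and kinds are my illustration of the idea, not an actual Cactus, GridLab, or OpenDX definition.

    /* Hypothetical self-describing header for one streamed dataset:
     * enough metadata that any client can recognize what the data is
     * and decide how to render it. */
    typedef enum { DATA_SCALAR_FIELD, DATA_VECTOR_FIELD, DATA_PARTICLES } DataKind;

    typedef struct {
      char     name[64];    /* variable name, e.g. "wavetoy::phi" */
      DataKind kind;        /* what kind of object this is */
      int      rank;        /* number of dimensions (up to 3 here) */
      int      dims[3];     /* extent in each dimension */
      double   origin[3];   /* physical coordinates of the first point */
      double   delta[3];    /* physical grid spacing per dimension */
      double   time;        /* simulation time of this snapshot */
      int      iteration;   /* simulation iteration number */
      /* ... followed on the wire by the array payload ... */
    } DataHeader;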

Requirements III: Security, Information, Notification
Security and security policies:
- Who can interact in which way with the simulation (monitor, access data, steer [physics and/or analysis, output data])?
- How to implement this? Hierarchies, associations, ... (a sketch follows below).
Information:
- What simulations are running now, and which ones are already queued (we want to be able to check/amend parameters for these)?
- What is the estimate for when queued jobs will be running?
Notification:
- Jobs running/finished, significant events (disk space nearly full, event horizon formed).
- Email, SMS, fax, ...
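
One simple way to picture the steering-policy requirement is a table of ordered access levels, where a higher level implies all lower ones. This is only a sketch of the idea; the levels, user names, and function are hypothetical, not a Cactus or GridLab API.

    /* Hypothetical access policy: ordered interaction levels, where a
     * higher level implies all lower ones. */
    #include <string.h>

    typedef enum {
      ACCESS_NONE = 0,
      ACCESS_MONITOR,        /* view status and performance */
      ACCESS_READ_DATA,      /* download/stream output data */
      ACCESS_STEER_ANALYSIS, /* change analysis/output parameters */
      ACCESS_STEER_PHYSICS   /* change physics parameters */
    } AccessLevel;

    typedef struct { const char *user; AccessLevel level; } PolicyEntry;

    /* Example table; in practice this would come from the portal or a
     * grid information service. */
    static const PolicyEntry policy[] = {
      { "alice", ACCESS_STEER_PHYSICS },
      { "bob",   ACCESS_STEER_ANALYSIS },
      { "guest", ACCESS_MONITOR },
    };

    int is_allowed(const char *user, AccessLevel requested)
    {
      size_t i;
      for (i = 0; i < sizeof policy / sizeof policy[0]; i++)
        if (strcmp(policy[i].user, user) == 0)
          return policy[i].level >= requested;
      return 0; /* unknown users may not interact at all */
    }

The hierarchies and associations mentioned on the slide would replace the flat user table with groups and roles, but the ordered-level check stays the same.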

Requirements IV: Reproducibility, Collaboration, User Interfaces
Be able to reproduce simulations:
- Need a detailed log of all events to know what happened.
- A scripting language for Cactus, to be able to do it again.
Collaboration:
- Project portal for shared simulation configurations (thorn lists, machine configuration files, parameter files; a sample parameter file is sketched below).
- Access to data/configurations for past runs.
- Interactions across Access Grid etc.
User interfaces:
- Need to be intuitive and easy to use, but also flexible.
- Suitable for users, developers, physicists, mathematicians, computer scientists, ...
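
For reference, a Cactus run is already captured in a plain-text parameter file that names the active thorns and sets their parameters, which is what makes sharing configurations through a portal natural. The snippet below is illustrative: the thorn list and values sketch a plausible small wave-equation test setup and are not taken from any actual production run.

    # Illustrative Cactus parameter file for a small wave-equation test run
    ActiveThorns = "PUGH PUGHReduce CartGrid3D IOUtil IOBasic WaveToyC"

    driver::global_nx   = 40        # grid points in each dimension
    driver::global_ny   = 40
    driver::global_nz   = 40
    cactus::cctk_itlast = 1000      # stop after 1000 iterations

    IO::out_dir            = "waverun_001"
    IOBasic::outInfo_every = 10
    IOBasic::outInfo_vars  = "wavetoy::phi"

Checking a file like this into the project portal, together with the thorn list and machine configuration files, gives collaborators everything they need to rerun or amend a simulation.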

Requirements V: System
At the moment, everyone in the collaboration has personal access to each different resource (disk space, inodes, number of jobs allowed on queues; everyone has to keep their local environment in sync, ...). It would be more useful if these resources were accessed on a group/collaborative basis.

GridLab Project (EU project, in negotiation)
Partners: AEI, Poznan, ZIB, Lecce, Cardiff, VU, Brno, Sztaki, Sun, Compaq, ISI, Argonne, Wisconsin, Athens