Cactus/TIKSL/KDI/Portal Synch Day

Agenda

- Main Goals:
  - Overview of the Cactus, TIKSL, KDI, and Portal efforts
    - Present plans for each project
    - Make sure everyone knows what is going on
    - Coordinate efforts to the extent possible
  - Present/develop a detailed plan for the Portal
    - Where we are now
    - Who will do what in the future
    - Develop coherence with NCSA and other portal efforts
    - Get a clear plan for future development; feed back to Globus and NCSA
    - Get a testbed set up
    - Get user input ASAP!
  - Break up into working groups, summarize plans
- Overall top question to answer: How do we integrate and develop a limited production testbed ASAP? We want something in place by the end of August!

Agenda, cont.

- Questions to ask and answer in your presentations:
  - What is the overview and status of your project (save the gory details for the working groups)?
  - How do we best integrate the efforts?
  - When will certain features be done?
  - What support/features do you want from the other groups (NCSA, Portal, Globus, TIKSL, HDF, Cactus, etc.)?
- What to do about lunch?
- Look at the rest of the agenda...

Big Picture: Project Space

[Diagram: a map of the project space, linking Cactus, TIKSL (AEI-ZIB-Garching), KDI, Egrid, the EU Network, Globus, Zeus, Numerical Relativity (AEI/NCSA/WashU), the Grid Forum, GrADS, and the Portal to NCSA users, the general user community, and developers.]

What is Cactus?

- It's not just for breakfast anymore...
  - It is not a relativity application... (but it can do that)
  - It is not an astrophysics application... (but it can do that)
  - It is not a fluid dynamics application... (but it can do that)
- It is a metacode framework for parallel applications (see the sketch after this list), with...
  - Pluggable data distribution layers (generic MPI, others)
  - Pluggable parallel I/O
  - Pluggable performance monitoring tools (PAPI, Autopilot, etc.)
  - Pluggable engineering/scientific applications, linear solvers, etc.
  - Pluggable cool stuff (remote steering, monitoring tools...)
  - Etc.
- Cactus + Globus: applications plugged into Cactus can become Grid-enabled
- A portal is under development: the main topic here...
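To make the "pluggable" idea concrete, here is a minimal sketch of the kind of layer registry such a framework needs. This is illustrative Python, not actual Cactus code (Cactus itself is C/Fortran with its own configuration language, and its pluggable pieces are "thorns"); all names below are hypothetical.

```python
# Hypothetical sketch of a pluggable-layer registry; not the Cactus API.
class LayerRegistry:
    """Maps a capability (e.g. 'parallel_io') to interchangeable providers."""
    def __init__(self):
        self._providers = {}

    def register(self, capability, name, factory):
        self._providers.setdefault(capability, {})[name] = factory

    def activate(self, capability, name, **options):
        return self._providers[capability][name](**options)

class HDF5Output:
    """One of two interchangeable parallel-I/O layers."""
    def __init__(self, downsample=1):
        self.downsample = downsample
    def write(self, grid_function):
        print(f"writing {grid_function} via HDF5, downsample={self.downsample}")

class RawBinaryOutput:
    def __init__(self, **_):
        pass
    def write(self, grid_function):
        print(f"writing {grid_function} as raw binary")

registry = LayerRegistry()
registry.register("parallel_io", "hdf5", HDF5Output)
registry.register("parallel_io", "raw", RawBinaryOutput)

# The choice of layer is a configuration decision, not a code change:
# application code calling write() never knows which provider is active.
io_layer = registry.activate("parallel_io", "hdf5", downsample=4)
io_layer.write("density")
```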

Cactus Computational Toolkit
(Science, Autopilot, AMR, PETSc, HDF, MPI, GrACE, Globus, Remote Steering...)

A Portal to Computational Science: The Cactus Collaboratory
1. User has a science idea...
2. Composes/builds code components with interfaces...
3. Selects appropriate resources...
4. Steers the simulation, monitors performance...
5. Collaborators log in to monitor...

We want to integrate and migrate this technology to the generic user...

Portal Components

Cactus Portal

- Has generic and Cactus-specific parts: build on generic interfaces, which should be enhanced for additional application info
  - Cactus-specific:
    - Code composition (Cactus can be what you want it to be...)
    - Configuration analysis (what the hell is in this directory...?)
    - Parameter setting
    - Interfaces must be self-configuring...
  - Generic (+ Cactus-specific bonus features...):
    - Manual resource selection:
      1. Which machine? The user selects based on available resources.
         - How will the user know loads, wait times, and resources? We need some standard interface to provide this info.
      2. Which machines? The user wants 20 GFlop/s and 20 GB of memory, and could get 64 procs at NCSA and 64 at SDSC...
      - Added Cactus bonus: which resources are compatible with, or recommended for, my special configuration?
    - Automatic resource selection: just direct the job to "appropriate" resources given the request... (see the sketch after this list)
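A minimal sketch of what automatic resource selection could look like, under stated assumptions: the machine table below is invented for illustration, and a real portal would pull live load, queue, and capacity data from Grid information services rather than a hard-coded list.

```python
# Hypothetical sketch of automatic resource selection: assemble the
# combination of machines (possibly across sites) that satisfies a
# compute/memory request. All machine data here is invented.

machines = [
    # name, free procs, GFlop/s per proc, GB memory per proc, queue wait (min)
    ("ncsa-origin", 64, 0.4, 0.25, 30),
    ("sdsc-sp2",    64, 0.3, 0.25, 10),
    ("aei-t3e",    128, 0.2, 0.12, 90),
]

def select_resources(gflops_needed, memory_gb_needed):
    """Greedily add machines until the aggregate request is met."""
    chosen, gf, mem = [], 0.0, 0.0
    # Prefer short queue waits; a real selector would also weigh
    # inter-site bandwidth before recommending a distributed run.
    for name, procs, gf_per, mem_per, _wait in sorted(machines, key=lambda m: m[4]):
        if gf >= gflops_needed and mem >= memory_gb_needed:
            break
        chosen.append((name, procs))
        gf += procs * gf_per
        mem += procs * mem_per
    if gf < gflops_needed or mem < memory_gb_needed:
        raise RuntimeError("request cannot be satisfied by available resources")
    return chosen, gf, mem

# The 20 GFlop/s / 20 GB request from the slide lands on 64 procs at
# SDSC plus 64 at NCSA, matching the example above.
sites, gf, mem = select_resources(gflops_needed=20.0, memory_gb_needed=20.0)
print(f"selected {sites}: {gf:.1f} GFlop/s, {mem:.1f} GB")
```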

Cactus Portal, cont...

- Job launching:
  - Once resources are selected: start the job, handle batch systems and job submission, and compile if required
  - Take care of file storage and archiving
- Job monitoring:
  - Generic: monitoring queues through a common interface, notification of job completion
    - What is that interface?
    - What about distributed simulations across sites???
  - Cactus-specific:
    - Web server interface (thorn HTTP):
      - All active routines in the running simulation are displayed
      - All parameters for those routines are displayed; steerable parameters can be changed
      - Crude visualization of the running simulation through a browser interface
    - Sophisticated remote visualization:
      - Retrieval of arbitrary data through streaming HDF5 for local visualization
        - 1D, 2D, 3D, downsampled, depending on the available bandwidth (see the sketch after this list)
      - Inline visualization (e.g., isosurfaces, streamlines) sent over the network
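A minimal sketch of bandwidth-dependent downsampling on the serving side, using h5py. The file and dataset names are hypothetical, and reading from a local file stands in for the streaming-HDF5 transport that the real setup uses.

```python
# Sketch: serve a downsampled 3D field for remote visualization, choosing
# the stride from the available bandwidth. Names are hypothetical.
import h5py
import numpy as np

def choose_stride(bandwidth_mbps, full_bytes, budget_seconds=5.0):
    """Pick a stride so the stride^3-reduced payload fits the time budget."""
    stride = 1
    while full_bytes / stride**3 > bandwidth_mbps * 1e6 / 8 * budget_seconds:
        stride *= 2
    return stride

def fetch_downsampled(path, dataset, bandwidth_mbps):
    with h5py.File(path, "r") as f:
        dset = f[dataset]
        stride = choose_stride(bandwidth_mbps, dset.size * dset.dtype.itemsize)
        # Strided slicing becomes an HDF5 hyperslab selection, so only
        # the sampled points are actually read.
        return np.asarray(dset[::stride, ::stride, ::stride]), stride

# e.g. a 256^3 double field over a 10 Mbit/s link comes back at stride 4:
# data, stride = fetch_downsampled("run42.h5", "density", bandwidth_mbps=10)
```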

Cactus Portal, continued

- Performance monitoring:
  - Want a generic interface to warn the user when performance is poor (it usually is, and the user does not even know!!!)
  - PAPI (single proc, color-coded per routine...)
  - Autopilot
  - What else should be provided? What is envisioned for the generic portal?
- Steering:
  - Science:
    - The user changes parameters based on what is observed:
      - Parameters screwed up: abort or keep going?
        - Forgot to turn on output of a favorite variable
        - Forgot to turn on some routine
        - Too much output; disk filling up
      - Scientific results lead to a change in algorithm or resource request:
        - AMR
        - A feature indicates some change would be beneficial
    - Logging of all changes (see the sketch after this list)
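A minimal sketch of a steering client that requests a parameter change over HTTP and logs every change for the audit trail described above. The endpoint URL and form encoding are assumptions; thorn HTTP provides a browser interface to steerable parameters, but no particular API is specified here.

```python
# Sketch of a steering client with change logging. The /parameters
# endpoint, host, and form fields are hypothetical.
import datetime
import urllib.parse
import urllib.request

STEERING_URL = "http://simulation-host:5555/parameters"  # hypothetical
LOG_FILE = "steering.log"

def steer(name, value):
    """Request a parameter change, then append it to a local audit log."""
    body = urllib.parse.urlencode({name: value}).encode()
    with urllib.request.urlopen(STEERING_URL, data=body, timeout=10) as resp:
        ok = resp.status == 200
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open(LOG_FILE, "a") as log:
        log.write(f"{stamp} {name} = {value} ({'applied' if ok else 'rejected'})\n")
    return ok

# e.g. the user forgot to turn on output of a favorite variable
# (hypothetical parameter name):
# steer("IOHDF5::out_vars", "grid::density")
```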

Cactus Portal, continued

- Performance steering:
  - How is my job doing? Is the network bandwidth OK?
  - Suggest another architecture
  - Suggest algorithm changes based on the current state of performance
    - E.g., extra ghost zones (see the sketch after this list)
- Questions for discussion:
  - What is really Cactus-specific?
  - What is generic: what will the standard portal, VMR, etc., provide?
  - How do we get maximum overlap between these efforts?
  - How do we get an active testbed established ASAP, and get the right users, portal developers, Globus developers, and GrADS developers working together effectively?
  - How long before this can be brought into production?
  - We should at least have an SC2000 demo!
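Why extra ghost zones: in a domain-decomposed run, a wider ghost (overlap) region lets each processor advance several update sweeps before it must synchronize, trading a little redundant computation for fewer, larger messages. A minimal sketch of such a steering rule, with invented thresholds:

```python
# Sketch of a performance-steering rule of thumb: if communication
# dominates, suggest widening the ghost zones. Thresholds are invented.

def suggest_ghost_zones(compute_time, comm_time, current_ghosts, max_ghosts=4):
    """Return a (possibly larger) ghost-zone width given timing data."""
    comm_fraction = comm_time / (compute_time + comm_time)
    if comm_fraction > 0.5 and current_ghosts < max_ghosts:
        # With g ghost zones, a width-1 stencil can advance g steps
        # between synchronizations, cutting message count roughly g-fold.
        return current_ghosts + 1, (
            f"communication is {comm_fraction:.0%} of runtime; "
            f"try {current_ghosts + 1} ghost zones"
        )
    return current_ghosts, "performance acceptable; no change suggested"

# e.g. a distributed run spending 2/3 of its time in message exchanges:
ghosts, advice = suggest_ghost_zones(compute_time=10.0, comm_time=20.0,
                                     current_ghosts=1)
print(advice)
```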

Grand Picture

[Diagram: a grid-enabled Cactus scenario. Simulations launched from the Cactus Portal run distributed across machines (Origin at NCSA, T3E at Garching), connected by Globus, HTTP, streaming HDF5, DataGrid/DPSS, downsampling, and isosurfaces. Remote steering and monitoring from an airport; remote viz in St. Louis; remote viz and steering from Berlin; viz of data from previous simulations in an SF café.]

Further details...

- Cactus
- Movies, research overview (needs major updating)
- Simulation Collaboratory/Portal work
- Remote steering, high-speed networking
- EU Astrophysics Network: potsdam.mpg.de/research/astro/eu_network/index.html