What I really want from networks NOW, and in 5-10 years' time: The Researcher's View
Ed Seidel, Max-Planck-Institut für Gravitationsphysik (Albert Einstein Institut) + GridLab Project (www.gridlab.org)
Physics: today → tomorrow

NSF e-Science Panel, Dec 2001: What a scientist wants
- When I hit Enter on my laptop, I want the solution to this very complex calculation to appear on my screen as a volumetric rendering, within a fraction of a second.
- When I hit Enter on my laptop, I want a table of requested data to be transferred from an unknown TB database somewhere to my laptop and displayed, within a fraction of a second.
- Whatever it is, I want it to act like a wire that connects me to the resources I need, as if I were the only one on the circuit and using those resources.

Quiz: What does a researcher care about?
- Theoretical bandwidth, latency, topology, switches, lambdas, etc.?
- Or application-level features as they experience them:
  - guaranteed, reliable data-transport performance
  - remote control of instrumentation and experimental apparatus
  - information-searching performance
  - delivered parallel/distributed computational performance
  - functional multicast video/audio for collaboration

FAQ: Why isn't my network bandwidth used? Answers:
- You don't provide enough (end-to-end) bandwidth!
- You don't provide enough QoS
- The last-mile problem…
NSF Grand Challenges, 1992:
- Develop high-performance, demanding applications for science/engineering
- Create distributed teams of application scientists, computer scientists, etc.
- Maximum bandwidth between centers was 45 Mbit/s, and barely used at that time!

How did we do this EU calculation?
- Needed the largest academic machines (in the US) for the simulation: LBL/NERSC (US DOE), NCSA Platinum cluster
- High-speed backbone for data transfer? Instead, flew students from Berlin to Illinois:
  - 3 weeks of analysis and visualization, using the special facilities and experts available to our EU project there
  - brought 1 TB of data back on 6 disks purchased in the US
- Remote-access QoS very poor: airplanes have better bandwidth than networks (see the back-of-the-envelope sketch below)
- Discovery Channel movie for the EU network: 3000 frames of volume rendering from TB of simulation data
- An EU project's simulation had to be computed and visualized in the US!
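A rough back-of-the-envelope check of the "airplanes have better bandwidth" claim. The assumed sustained wide-area rate and the flight time are hypothetical illustration figures, not measurements from the project.

```python
# Back-of-the-envelope: moving 1 TB over a wide-area link vs. carrying disks.
# The 10 MB/s sustained rate and the ~10 hour door-to-door flight are assumptions
# for illustration only.
TERABYTE = 1e12          # bytes
sustained_rate = 10e6    # bytes/second, assumed end-to-end throughput

hours_on_network = TERABYTE / sustained_rate / 3600
effective_plane_rate = TERABYTE / (10 * 3600) / 1e6   # MB/s if the disks arrive in ~10 hours

print(f"1 TB at 10 MB/s sustained: about {hours_on_network:.0f} hours")   # ~28 hours
print(f"1 TB delivered by plane in 10 hours: ~{effective_plane_rate:.0f} MB/s effective")
```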

Network Taxonomy
- Production Networks: high-performance networks with 24/7 dependability (e.g. ESnet, Abilene), for everyone.
- Experimental Networks: high-performance trials of cutting-edge networks, driven by advanced application needs. They MUST:
  - be robust
  - support application-dictated software toolkits, middleware, computing, and networking
  - provide delivered services on a persistent basis, yet encourage experimentation with innovative/novel concepts
- Research Networks: small-scale prototypes; basic research on components, protocols, architecture. Not persistent; don't support applications.
Scientists need/want a new generation of Experimental Networks for e-Science applications, and Grand Challenge teams to develop them.

Conclusions of the NSF Panel
Participants overwhelmingly agreed: networks for e-Science must have known and knowable characteristics.
- These are not features of today's Production Networks
- These are needed in Experimental Networks for next-generation e-Science
High-performance users:
- require networks that allow access to information about their operational characteristics
- expect deterministic and repeatable behavior from networks
- demand end-to-end service
…or else they will never depend on them for persistent e-Science applications.

Current Grid Application Types
- Community driven: serving the needs of distributed communities
  - video conferencing
  - virtual collaborative environments
  - from code sharing to experiencing each other at a distance…
- Data driven: will grow exponentially in the next decade!
  - remote access to huge data sets, data mining
  - weather information systems
  - particle physics
- Process/simulation driven: demanding simulations of science and engineering
  - get less attention in the Grid world, yet drive HPC!

What we can't quite do now: we have the technology, but not the bandwidth (SC'90 - SC'01)
Typical scenario:
- Find a remote resource. Where? Use a portal!
- Launch the job
- Visualize the results
- Steer the job
Metacomputing the Einstein equations: connecting T3Es in Berlin, Garching, and San Diego
- Remote viz: streaming HDF5, GridFTP, auto-downsampling (see the sketch below)
- Any viz client: LCA Vision, OpenDX
John Shalf (LBL) won the SC2001 Bandwidth Challenge: ~3.5 Gbit/s
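The auto-downsampling step above is essentially "throw away resolution so the wide-area link can keep up". Below is a minimal Python sketch of that idea using h5py; the file name, dataset path, and stride factor are hypothetical, and the actual SC2001 demo streamed HDF5 directly from Cactus rather than via a script like this.

```python
# Minimal sketch: downsample a large 3-D HDF5 volume before shipping it to a
# remote visualization client. File and dataset names are hypothetical.
import h5py
import numpy as np

def downsample_volume(src_path, dset_name, stride):
    """Keep every `stride`-th point along each axis, so a slow wide-area link
    only has to carry roughly 1/stride**3 of the original data."""
    with h5py.File(src_path, "r") as f:
        vol = f[dset_name][::stride, ::stride, ::stride]
    return np.ascontiguousarray(vol)

if __name__ == "__main__":
    small = downsample_volume("bh_evolution.h5", "/grid/psi", stride=4)
    with h5py.File("bh_evolution_viz.h5", "w") as out:
        out.create_dataset("/grid/psi", data=small, compression="gzip")
    print("reduced volume shape:", small.shape)
```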

Spawning across the ARG Testbed
- The main black-hole simulation starts here
- All analysis tasks are spawned automatically to free resources worldwide
- These task-farmed jobs may feed back and steer the main job (see the sketch below)
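As a rough illustration of the task-farming pattern on this slide, the Python sketch below spawns independent analysis tasks from a "main simulation" loop and lets their results feed back as a steering flag. The task body and the steering rule are invented for illustration; the real setup spawned Cactus analysis jobs onto remote Grid resources, not local processes.

```python
# Minimal sketch of task farming with feedback, using local processes as
# stand-ins for remote Grid resources.
from concurrent.futures import ProcessPoolExecutor, as_completed

def analysis_task(timestep):
    """Stand-in for an analysis job (e.g. horizon finding) on one output timestep."""
    return timestep, (timestep % 7 == 0)   # pretend a horizon appears every 7th step

def main_simulation(n_steps=50):
    refine = False
    with ProcessPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(analysis_task, t) for t in range(n_steps)]
        for fut in as_completed(futures):
            t, found_horizon = fut.result()
            if found_horizon:
                refine = True              # feedback: steer the main job
                print(f"step {t}: horizon found, requesting finer resolution")
    return refine

if __name__ == "__main__":
    main_simulation()
```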

What we want in 5-10 years: many disciplines require common infrastructure
Common needs, driven by the science/engineering:
- Large numbers of sensors/instruments: data to the community in real time!
- Daily generation of large data sets
  - growth in computing power from TB to PB machines
  - experimental data
  - data on multiple length and time scales
- Automatic archiving in distributed repositories
- Large community of end users
- Multi-megapixel and immersive visualization
- Collaborative analysis from multiple sites
- Complex simulations needed to interpret data
- Some will need optical networks for communications: dedicated lambdas for data, large peer-to-peer lambda-attached storage
(Source: Smarr)

NSF's EarthScope (USArray): explosions of data! Typical of many projects
- 70 km station spacing; data can be coupled to simulations
- Rollout over 14 years, starting with existing broadband stations
(Source: Smarr)

Dynamic Grid Computing: a physicist has a new idea!
"We see something, but too weak. Please simulate to enhance the signal!"
- Start a Brill wave simulation: find the best resources, free CPUs!! (NCSA, SDSC, RZG, LRZ)
- Found a black hole: load a new component
  - look for the horizon
  - calculate/output gravitational waves
  - calculate/output invariants
- Archive data (SDSC)
- Add more resources
- Clone the job with a steered parameter
- Queue time over: find a new machine (see the sketch below)
- Further calculations (AEI)
- Archive results to the LIGO experiment
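The scenario above (checkpoint, migrate when the queue allocation ends, clone with a steered parameter) is the "Cactus Worm" style of dynamic grid computing. The sketch below mimics that control flow in plain Python; the class, the resource broker, and the "amplitude" parameter are hypothetical stand-ins, not the GridLab/Cactus API.

```python
# Minimal sketch of checkpoint / migrate / clone-with-steered-parameter.
# Machine names are the ones listed on the slide; everything else is invented.
import random

MACHINES = ["NCSA", "SDSC", "RZG", "LRZ", "AEI"]

class SimulationJob:
    def __init__(self, amplitude, machine):
        self.amplitude = amplitude   # steered physics parameter (e.g. Brill wave amplitude)
        self.machine = machine
        self.step = 0

    def run(self, queue_time):
        """Advance until the queue allocation runs out, then return a checkpoint."""
        for _ in range(queue_time):
            self.step += 1
        return {"amplitude": self.amplitude, "step": self.step}

def find_free_resource():
    """Stand-in for a resource broker that locates free CPUs somewhere on the Grid."""
    return random.choice(MACHINES)

def migrate(checkpoint):
    """Resume a checkpointed job on a newly discovered free machine."""
    job = SimulationJob(checkpoint["amplitude"], find_free_resource())
    job.step = checkpoint["step"]
    return job

def clone_with_steered_parameter(checkpoint, new_amplitude):
    """Start a second job from the same checkpoint, but with a changed parameter."""
    return migrate(dict(checkpoint, amplitude=new_amplitude))

if __name__ == "__main__":
    job = SimulationJob(amplitude=1.0, machine=find_free_resource())
    ckpt = job.run(queue_time=100)                   # queue time over
    job = migrate(ckpt)                              # find a new machine, keep going
    clone = clone_with_steered_parameter(ckpt, 2.0)  # steered second run
    print(job.machine, job.step, clone.machine, clone.amplitude)
```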

Summary
- Researchers need much higher-bandwidth networks now, but simply won't use them unless they provide better end-to-end QoS
- Data needs will grow exponentially in the coming decade:
  - experimental data
  - simulation data from petascale computing
  - virtual presence, video, etc.
- e-Science Grand Challenge teams (networking experts, CS, and science) and Experimental Networks to blaze the path
- Future Grid applications will be very innovative if the networks are there and middleware frameworks support them:
  - complex live interaction between users, data, and simulations
  - instantaneous bandwidth on demand: give me a lambda!