
1 Nuclear Physics Network Requirements Workshop
Washington, DC
Eli Dart, Network Engineer
ESnet Network Engineering Group
May 6, 2008
Energy Sciences Network, Lawrence Berkeley National Laboratory
Networking for the Future of Science

2 Overview
– Logistics
– Network Requirements
  Sources, workshop context
– Case Study Example
  Large Hadron Collider
– Today’s Workshop
  Structure and goals

3 Logistics
– Mid-morning break, lunch, afternoon break
– Self-organization for dinner
– Agenda on workshop web page
– Round-table introductions

4 Network Requirements
Requirements are primary drivers for ESnet – science focused
Sources of requirements:
– Office of Science (SC) Program Managers
– Direct gathering through interaction with science users of the network
  Examples of recent case studies: Climate Modeling, Large Hadron Collider (LHC), Spallation Neutron Source at ORNL
– Observation of the network
– Other sources (e.g. Laboratory CIOs)

5 Program Office Network Requirements Workshops
– Two workshops per year
– One workshop per program office every 3 years
Workshop goals:
– Accurately characterize current and future network requirements for the Program Office science portfolio
– Collect network requirements from scientists and the Program Office
Workshop structure:
– Modeled after the 2002 High Performance Network Planning Workshop conducted by the DOE Office of Science
– Elicit information from managers, scientists, and network users regarding usage patterns, science process, instruments, and facilities – codify in “Case Studies”
– Synthesize network requirements from the Case Studies

6 Large Hadron Collider at CERN

7 LHC Requirements – Instruments and Facilities
Large Hadron Collider at CERN
– Networking requirements of two experiments, CMS and ATLAS, have been characterized
– Petabytes of data per year to be distributed
LHC networking and data volume requirements are unique to date
– First in a series of DOE science projects with requirements of unprecedented scale
– Driving ESnet’s near-term bandwidth and architecture requirements
– These requirements are shared by other very-large-scale projects that are coming on line soon (e.g. ITER)
Tiered data distribution model
– Tier0 center at CERN processes raw data into event data
– Tier1 centers receive event data from CERN
  FNAL is the CMS Tier1 center for the US
  BNL is the ATLAS Tier1 center for the US
  CERN to US Tier1 data rates: 10 Gbps in 2007, 20-40 Gbps per Tier1 by 2010/11 (see the bandwidth matrices on slides 12 and 13; a rough sustained-rate estimate follows this slide)
– Tier2 and Tier3 sites receive data from Tier1 centers
  Tier2 and Tier3 sites are end-user analysis facilities
  Analysis results are sent back to Tier1 and Tier0 centers
  Tier2 and Tier3 sites are largely universities in the US and Europe
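To put the volumes in perspective, the short Python sketch below converts a yearly data volume into the sustained rate needed to move it. The petabyte values in the loop are illustrative assumptions, not figures from the slides; the point is that even a few PB/year already implies multi-Gb/s sustained rates before allowing any headroom for bursts, retransmission, or failures.

# Illustrative only: rough conversion from yearly data volume to the
# sustained network rate needed to move it (PB values are assumptions).
SECONDS_PER_YEAR = 365 * 24 * 3600          # ~3.15e7 s

def sustained_gbps(petabytes_per_year: float) -> float:
    """Average rate in Gb/s needed to move the given volume in one year."""
    bits = petabytes_per_year * 1e15 * 8     # PB -> bits
    return bits / SECONDS_PER_YEAR / 1e9     # bits/s -> Gb/s

for pb in (1, 5, 10):
    print(f"{pb:>2} PB/year ~= {sustained_gbps(pb):.2f} Gb/s sustained")
# ~0.25, ~1.27, ~2.54 Gb/s respectively -- headroom for bursts, retransmits,
# and competing flows is why the provisioned circuits are 10 Gb/s and up.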

8 LHC Requirements – Process of Science
Strictly tiered data distribution model is only part of the picture
– Some Tier2 scientists will require data not available from their local Tier1 center
– This will generate additional traffic outside the strict tiered data distribution tree
– CMS Tier2 sites will fetch data from all Tier1 centers in the general case
Network reliability is critical for the LHC
– Data rates are so large that buffering capacity is limited
– If an outage is more than a few hours in duration, the analysis could fall permanently behind (a rough backlog estimate follows this slide)
– Analysis capability is already maximized – little extra headroom
CMS/ATLAS require DOE federated trust for credentials and federation with LCG
Service guarantees will play a key role
– Traffic isolation for unfriendly data transport protocols
– Bandwidth guarantees for deadline scheduling
Several unknowns will require ESnet to be nimble and flexible
– Tier1 to Tier1, Tier2 to Tier1, and Tier2 to Tier0 data rates could add significant additional requirements for international bandwidth
– Bandwidth will need to be added once requirements are clarified
– Drives architectural requirements for scalability and modularity
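The outage concern can be made concrete with a little arithmetic. The sketch below is illustrative only; the line rate, outage length, and spare headroom are assumptions rather than figures from the slides.

# Illustrative only: how much data backs up during an outage and how long the
# catch-up takes when spare headroom is limited (all rates are assumptions).
line_rate_gbps = 10.0        # nominal sustained transfer rate
outage_hours = 4.0           # example outage duration
spare_headroom_gbps = 1.0    # assumed unused capacity available for catch-up

backlog_tb = line_rate_gbps / 8 * outage_hours * 3600 / 1000   # Gb/s -> TB
catchup_hours = backlog_tb * 1000 * 8 / spare_headroom_gbps / 3600

print(f"Backlog after a {outage_hours:.0f} h outage: ~{backlog_tb:.0f} TB")
print(f"Catch-up time at {spare_headroom_gbps:.0f} Gb/s spare: ~{catchup_hours:.0f} h")
# ~18 TB of backlog and ~40 h to drain it -- which is why long outages can
# put the analysis permanently behind when little extra headroom exists.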

9 LHC Ongoing Requirements Gathering Process
ESnet has been an active participant in LHC network planning and operations
– Active in the LHC network operations working group since its creation
– Jointly organized the US CMS Tier2 networking requirements workshop with Internet2
– Participated in the US ATLAS Tier2 networking requirements workshop
– Participated in US Tier3 networking workshops

10 LHC Requirements Identified To Date
10 Gbps “light paths” from FNAL and BNL to CERN
– CERN / USLHCnet will provide 10 Gbps circuits to Starlight, to 32 AoA, NYC (MAN LAN), and between Starlight and NYC
– 10 Gbps each in the near term, additional lambdas over time (3-4 lambdas each by 2010)
BNL must communicate with TRIUMF in Vancouver
– This is an example of Tier1 to Tier1 traffic
– 1 Gbps in the near term
– Circuit is currently up and running
Additional bandwidth requirements between US Tier1s and European Tier2s
– Served by the USLHCnet circuit between New York and Amsterdam
Reliability
– 99.95%+ uptime (a small number of hours of downtime per year; a quick calculation follows this slide)
– Secondary backup paths
– Tertiary backup paths: virtual circuits through the ESnet, Internet2, and GEANT production networks, and possibly GLIF (Global Lambda Integrated Facility) for transatlantic links
Tier2 site connectivity
– 1 to 10 Gbps required
– Many large Tier2 sites require direct connections to the Tier1 sites – this drives bandwidth and virtual circuit deployment (e.g. UCSD)
Ability to add bandwidth as additional requirements are clarified
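For reference, the availability target translates into hours of allowable downtime as shown in this short sketch (the availability levels other than 99.95% are included only for comparison).

# Illustrative only: what an availability target means in downtime per year.
hours_per_year = 365 * 24                    # 8760 h
for availability in (0.999, 0.9995, 0.9999):
    downtime_h = hours_per_year * (1 - availability)
    print(f"{availability:.2%} uptime -> ~{downtime_h:.1f} h downtime/year")
# 99.95% allows roughly 4.4 hours of downtime per year, which is why the
# requirements also call for secondary and tertiary backup paths.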

11 Identified US Tier2 Sites
ATLAS (BNL clients)
– Boston University
– Harvard University
– Indiana University Bloomington
– Langston University
– University of Chicago
– University of New Mexico Alb.
– University of Oklahoma Norman
– University of Texas at Arlington
– Calibration site: University of Michigan
CMS (FNAL clients)
– Caltech
– MIT
– Purdue University
– University of California San Diego
– University of Florida at Gainesville
– University of Nebraska at Lincoln
– University of Wisconsin at Madison

12 LHC ATLAS Bandwidth Matrix as of April 2007

Site A | Site Z | ESnet A | ESnet Z | A-Z 2007 Bandwidth | A-Z 2010 Bandwidth
CERN | BNL | AofA (NYC) | BNL | 10 Gbps | 20-40 Gbps
BNL | U. of Michigan (Calibration) | BNL (LIMAN) | Starlight (CHIMAN) | 3 Gbps | 10 Gbps
BNL | Boston University, Harvard University (Northeastern Tier2 Center) | BNL (LIMAN) | Internet2 / NLR Peerings | 3 Gbps | 10 Gbps
BNL | Indiana U. at Bloomington, U. of Chicago (Midwestern Tier2 Center) | BNL (LIMAN) | Internet2 / NLR Peerings | 3 Gbps | 10 Gbps
BNL | Langston University, U. Oklahoma Norman, U. of Texas Arlington (Southwestern Tier2 Center) | BNL (LIMAN) | Internet2 / NLR Peerings | 3 Gbps | 10 Gbps
BNL | Tier3 Aggregate | BNL (LIMAN) | Internet2 / NLR Peerings | 5 Gbps | 20 Gbps
BNL | TRIUMF (Canadian ATLAS Tier1) | BNL (LIMAN) | Seattle | 1 Gbps | 5 Gbps

13 LHC CMS Bandwidth Matrix as of April 2007

Site A | Site Z | ESnet A | ESnet Z | A-Z 2007 Bandwidth | A-Z 2010 Bandwidth
CERN | FNAL | Starlight (CHIMAN) | FNAL (CHIMAN) | 10 Gbps | 20-40 Gbps
FNAL | U. of Michigan (Calibration) | FNAL (CHIMAN) | Starlight (CHIMAN) | 3 Gbps | 10 Gbps
FNAL | Caltech | FNAL (CHIMAN) | Starlight (CHIMAN) | 3 Gbps | 10 Gbps
FNAL | MIT | FNAL (CHIMAN) | AofA (NYC) / Boston | 3 Gbps | 10 Gbps
FNAL | Purdue University | FNAL (CHIMAN) | Starlight (CHIMAN) | 3 Gbps | 10 Gbps
FNAL | U. of California at San Diego | FNAL (CHIMAN) | San Diego | 3 Gbps | 10 Gbps
FNAL | U. of Florida at Gainesville | FNAL (CHIMAN) | SOX | 3 Gbps | 10 Gbps
FNAL | U. of Nebraska at Lincoln | FNAL (CHIMAN) | Starlight (CHIMAN) | 3 Gbps | 10 Gbps
FNAL | U. of Wisconsin at Madison | FNAL (CHIMAN) | Starlight (CHIMAN) | 3 Gbps | 10 Gbps
FNAL | Tier3 Aggregate | FNAL (CHIMAN) | Internet2 / NLR Peerings | 5 Gbps | 20 Gbps
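The next four slides are topology maps that roll these per-site commitments up into estimated aggregate link loadings. As a rough cross-check, the sketch below sums the 2010 figures from the two matrices for each US Tier1 hub, taking the high end of the 20-40 Gbps CERN range; it is only a per-hub total, not a statement about the loading on any specific ESnet link.

# Sums the 2010 committed bandwidth from the ATLAS and CMS matrices above,
# per US Tier1 hub (all figures taken from the slides; the CERN entry uses
# the high end of the 20-40 Gb/s range).
atlas_2010 = {   # BNL as hub (Gb/s)
    "CERN": 40, "U. Michigan (calibration)": 10, "Northeastern Tier2": 10,
    "Midwestern Tier2": 10, "Southwestern Tier2": 10,
    "Tier3 aggregate": 20, "TRIUMF": 5,
}
cms_2010 = {     # FNAL as hub (Gb/s)
    "CERN": 40, "U. Michigan (calibration)": 10, "Caltech": 10, "MIT": 10,
    "Purdue": 10, "UCSD": 10, "U. Florida": 10, "U. Nebraska": 10,
    "U. Wisconsin": 10, "Tier3 aggregate": 20,
}
for hub, matrix in (("BNL (ATLAS)", atlas_2010), ("FNAL (CMS)", cms_2010)):
    print(f"{hub}: {sum(matrix.values())} Gb/s committed by 2010")
# BNL ~105 Gb/s, FNAL ~140 Gb/s -- actual link loadings depend on routing,
# timing, and which flows share which backbone segments.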

14 Estimated Aggregate Link Loadings
[Network map: ESnet backbone topology with estimated aggregate link loadings. Legend: ESnet IP core, ESnet Science Data Network core, ESnet SDN core / NLR links, lab-supplied links, LHC-related links, MAN links, international IP connections, existing site-supplied circuits; committed bandwidth labeled in Gb/s; unlabeled links are 10 Gb/s.]

15 ESnet Estimated Bandwidth Commitments
[Network map: ESnet backbone and metro-area networks (San Francisco Bay Area, West Chicago, Long Island, and Newport News/ELITE MANs) with committed bandwidth in Gb/s to lab sites, including FNAL and ANL via 600 W. Chicago and Starlight, BNL via 32 AoA NYC, and USLHCNet links to CERN; all circuits are 10 Gb/s, unlabeled links are 10 Gb/s.]

16 Estimated Aggregate Link Loadings
[Network map: projected ESnet backbone topology with committed bandwidth and link capacity labeled in Gb/s; unlabeled links are 10 Gb/s; legend as on slide 14.]

17 ESnet Estimated Bandwidth Commitments
[Network map: projected ESnet topology annotated with Internet2 circuit numbers and committed bandwidth in Gb/s, including FNAL and ANL via 600 W. Chicago and Starlight, BNL via 32 AoA NYC, and USLHCNet links to CERN; unlabeled links are 10 Gb/s.]

18 NP Workshop
Goals
– Accurately characterize the current and future network requirements for the NP Program Office’s science portfolio
– Codify the requirements in a document
  The document will contain the case studies and summary matrices
Structure
– Discussion of ESnet4 architecture and deployment
– NP science portfolio
– Internet2 (I2) perspective
– Round-table discussions of case study documents
  Ensure that networking folks understand the science process, instruments and facilities, collaborations, etc. outlined in the case studies
  Provide an opportunity for discussions of synergy, common strategies, etc.
  Interactive discussion rather than formal PowerPoint presentations
– Collaboration services discussion – Wednesday morning

19 Questions? Thanks!