Networking for the Future of Science


ESnet Update: Winter 2008 Joint Techs Workshop
Joe Burrescia, ESnet General Manager
Energy Sciences Network, Lawrence Berkeley National Laboratory
January 21, 2008

ESnet 3 with Sites and Peers (Early 2007) [network map: the ESnet IP core (packet over SONET optical ring and hubs) and the Science Data Network (SDN) core connect 42 end user sites (22 Office of Science sponsored, 12 NNSA sponsored, 3 jointly sponsored, 6 laboratory sponsored, plus NSF LIGO and NOAA) to commercial peering points, R&E networks, and international peers including GÉANT, CA*net4, SINet, AARNet, TANet2, Kreonet2, GLORIAD, AMPATH, and CERN (USLHCnet, DOE+CERN funded); link speeds range from 45 Mb/s and less, OC3 (155 Mb/s), and OC12 (622 Mb/s) up to 2.5 Gb/s and 10 Gb/s core circuits and MAN rings (≥ 10 Gb/s)]

ESnet 3 Backbone as of January 1, 2007 [backbone map: hubs at Seattle, Sunnyvale, San Diego, El Paso, Albuquerque, Chicago, Atlanta, New York City, and Washington DC; 10 Gb/s SDN core (NLR), 10/2.5 Gb/s IP core (Qwest), MAN rings (≥ 10 Gb/s), and lab supplied links]

ESnet 4 Backbone as of April 15, 2007 [backbone map: Boston and Cleveland hubs added; legend adds a 10 Gb/s IP core (Level3) and a 10 Gb/s SDN core (Level3) alongside the NLR SDN core and Qwest IP core]

ESnet 4 Backbone as of May 15, 2007 [backbone map: same hubs and legend as April 15, with a Sunnyvale (SNV) hub label added]

ESnet 4 Backbone as of June 20, 2007 [backbone map: Denver, Kansas City, and Houston hubs added]

ESnet 4 Backbone August 1, 2007 (last JT meeting, at FNAL) [backbone map: Los Angeles hub added]

ESnet 4 Backbone September 30, 2007 [backbone map: Boise hub added]

ESnet 4 Backbone December 2007 [backbone map: Nashville hub added; the Qwest IP core now appears only as a 2.5 Gb/s IP tail]

ESnet 4 Backbone December 2008 [projected backbone map: second 10 Gb/s circuits (X2) shown on most core segments]

ESnet Provides Global High-Speed Internet Connectivity for DOE Facilities and Collaborators (12/2007) [network map: ~45 end user sites (22 Office of Science sponsored, 13+ NNSA sponsored, 3 jointly sponsored, 6 laboratory sponsored, plus NSF LIGO and NOAA) connected over the 10 Gb/s IP and SDN cores (Internet2, NLR), MAN rings (≥ 10 Gb/s), and lab supplied links, with commercial peering points, R&E network peers, and international connections (1-10 Gb/s) including GÉANT, CA*net4, SINet, AARNet, TANet2, Kreonet2, GLORIAD, KAREN/REANNZ, AMPATH, and CERN (USLHCnet, DOE+CERN funded); geography is only representational]

ESnet4 Core Networks: 50-60 Gbps by 2009-2010 (10 Gb/s circuits), 500-600 Gbps by 2011-2012 (100 Gb/s circuits) [planned topology map: production IP core (10 Gbps), Science Data Network core (20-30-40-50 Gbps), MANs (20-60 Gbps) or backbone loops for site access, high-speed cross-connects with Internet2/Abilene, and international connections including Canada (CANARIE), CERN (30+ Gbps), Europe (GÉANT), Asia-Pacific, GLORIAD (Russia and China), Australia, and South America (AMPATH); core network fiber path is ~14,000 miles / 24,000 km]
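The stated aggregates are just the number of parallel waves on the core times the per-wave rate; the 5-6 wave count below is an assumption inferred from the quoted totals, shown only as a minimal sketch of the arithmetic:

```python
def core_capacity_gbps(num_waves: int, wave_rate_gbps: int) -> int:
    """Aggregate core capacity: parallel waves times per-wave rate."""
    return num_waves * wave_rate_gbps

# 2009-2010 plan: 5-6 parallel 10 Gb/s circuits
print(core_capacity_gbps(5, 10), core_capacity_gbps(6, 10))    # 50 60 Gbps
# 2011-2012 plan: 5-6 parallel 100 Gb/s circuits
print(core_capacity_gbps(5, 100), core_capacity_gbps(6, 100))  # 500 600 Gbps
```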

A Tail of Two ESnet4 Hubs [photos: Juniper MX960 switch, Cisco 6509 switch, and Juniper T320 routers at the Sunnyvale, CA and Chicago hubs] ESnet's SDN backbone is implemented with Layer 2 switches; the Cisco 6509s and Juniper MX960s each present their own unique challenges.

ESnet 4 Factoids as of January 21, 2008. ESnet4 installation to date:
- 32 new 10 Gb/s backbone circuits (over 3 times the number from the last JT meeting)
- 20,284 10 Gb/s backbone route miles (more than doubled since the last JT meeting)
- 10 new hubs; since the last meeting: Seattle, Sunnyvale, and Nashville (7 new routers, 4 new switches)
- Chicago MAN now connected to the Level3 POP: 2 x 10GE to ANL, 2 x 10GE to FNAL, 3 x 10GE to Starlight

ESnet Traffic Continues to Exceed 2 PetaBytes/Month. Overall traffic tracks the very large science use of the network: 2.7 PBytes in July 2007, up from 1 PByte in April 2006. ESnet traffic historically has increased 10x every 47 months.
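A quick sanity check of that trend against the quoted data points (only the figures from the slide are used; the arithmetic is a minimal sketch):

```python
import math

def projected_traffic(base_pb_per_month: float, months_elapsed: float,
                      tenfold_period_months: float = 47.0) -> float:
    """Project monthly traffic assuming 10x growth every tenfold_period_months."""
    return base_pb_per_month * 10 ** (months_elapsed / tenfold_period_months)

# April 2006 (1 PB/month) to July 2007 is 15 months:
print(round(projected_traffic(1.0, 15), 2))   # ~2.08 PB/month on trend; 2.7 was observed

# Doubling time implied by 10x-per-47-months growth:
print(round(47 * math.log10(2), 1))           # ~14.1 months
```

The July 2007 figure runs a bit ahead of the long-term trend line, consistent with the large-user growth discussed on the next slide.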

When a few large data sources and sinks dominate traffic, it is not surprising that overall network usage follows the patterns of the very large users. This trend will reverse in the next few weeks as the next round of LHC data challenges kicks off.

ESnet Continues to be Highly Reliable, Even During the Transition [site availability chart grouping sites into "5 nines" (>99.995%), "4 nines" (>99.95%), and "3 nines" (>99.5%) bands, with dually connected sites indicated]. Note: these availability measures are only for ESnet infrastructure; they do not include site-related problems. Some sites, e.g. PNNL and LANL, provide circuits from the site to an ESnet hub, and therefore the ESnet-site demarc is at the ESnet hub (there is no ESnet equipment at the site). In this case, circuit outages between the ESnet equipment and the site are considered site issues and are not included in the ESnet availability metric.
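For reference, a minimal sketch translating those "nines" thresholds into a downtime budget per year (simple arithmetic, not part of the ESnet metric itself):

```python
def max_downtime_per_year(availability_pct: float) -> str:
    """Convert an availability percentage into the allowed downtime per year."""
    minutes_per_year = 365 * 24 * 60
    down_minutes = minutes_per_year * (1 - availability_pct / 100)
    hours, minutes = divmod(down_minutes, 60)
    return f"{availability_pct}% -> {int(hours)} h {minutes:.0f} min per year"

for pct in (99.5, 99.95, 99.995):   # the "3 nines", "4 nines", "5 nines" thresholds
    print(max_downtime_per_year(pct))
# 99.5%   -> 43 h 48 min per year
# 99.95%  -> 4 h 23 min per year
# 99.995% -> 0 h 26 min per year
```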

OSCARS Overview: On-demand Secure Circuits and Advance Reservation System, ESnet's guaranteed bandwidth virtual circuit service. Its functional areas are path computation (topology, reachability, constraints), scheduling (AAA, availability), and provisioning (signaling, security, resiliency/redundancy).
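To make the "advance reservation" idea concrete, here is a hypothetical, much-simplified admission check of the kind such a system must perform on each link of a computed path; the names and data structures are illustrative only and are not the actual OSCARS API:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class CircuitRequest:
    src: str                 # hypothetical source edge port identifier
    dst: str                 # hypothetical destination edge port identifier
    bandwidth_mbps: int      # guaranteed bandwidth being requested
    start: datetime          # reservation start time
    end: datetime            # reservation end time

def admit_on_link(request: CircuitRequest,
                  existing: List[CircuitRequest],
                  link_capacity_mbps: int) -> bool:
    """Conservative admission check: reject if the new request plus every
    reservation that overlaps it in time would exceed the link capacity."""
    overlapping = [r for r in existing
                   if r.start < request.end and request.start < r.end]
    reserved = sum(r.bandwidth_mbps for r in overlapping)
    return reserved + request.bandwidth_mbps <= link_capacity_mbps
```

In the real service this kind of check sits behind the path computation, AAA, and signaling components listed above and is applied hop by hop along the reserved path.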

OSCARS Status Update
- ESnet-centric deployment:
  - Prototype layer 3 (IP) guaranteed bandwidth virtual circuit service deployed in ESnet (1Q05)
  - Prototype layer 2 (Ethernet VLAN) virtual circuit service deployed in ESnet (3Q07)
- Inter-domain collaborative efforts:
  - Terapaths (BNL): inter-domain interoperability for layer 3 virtual circuits demonstrated (3Q06); inter-domain interoperability for layer 2 virtual circuits demonstrated at SC07 (4Q07)
  - LambdaStation (FNAL)
  - HOPI/DRAGON: inter-domain exchange of control messages demonstrated (1Q07); integration of OSCARS and DRAGON has been successful (1Q07)
  - DICE: first draft of the topology exchange schema formalized in collaboration with NMWG (2Q07); interoperability test demonstrated (3Q07); initial implementation of reservation and signaling messages demonstrated at SC07 (4Q07)
  - UVA: integration of token-based authorization in OSCARS under testing
  - Nortel: topology exchange demonstrated successfully (3Q07)

Network Measurement Update
- ESnet:
  - About 1/3 of the 10GE bandwidth test platforms and 1/2 of the latency test platforms for ESnet 4 have been deployed.
  - 10GE test systems are being used extensively for acceptance testing and debugging.
  - Structured and ad-hoc external testing capabilities have not been enabled yet.
  - Clocking issues at a couple of POPs are not resolved.
  - Work is progressing on revamping the ESnet statistics collection, management, and publication systems: ESxSNMP & TSDB & perfSONAR Measurement Archive (MA); perfSONAR TS & OSCARS topology DB; NetInfo being restructured to be perfSONAR based (a minimal counter-delta sketch follows this list).
- LHC and perfSONAR:
  - perfSONAR-based network measurement solutions for the Tier 1 / Tier 2 community are nearing completion.
  - A proposal from DANTE to deploy a perfSONAR-based network measurement service across the LHCOPN at all Tier 1 sites is being evaluated by the Tier 1 centers.
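As an illustration of the arithmetic behind that statistics collection (turning raw interface byte counters into link utilization), here is a minimal, generic sketch; it is not ESxSNMP or perfSONAR code, and the sample values are made up:

```python
def utilization_pct(prev_octets: int, curr_octets: int,
                    interval_s: float, link_bps: float,
                    counter_bits: int = 64) -> float:
    """Utilization over one polling interval from two 64-bit byte-counter
    samples (ifHCInOctets-style), handling a single counter wrap."""
    delta = curr_octets - prev_octets
    if delta < 0:                        # counter wrapped between polls
        delta += 2 ** counter_bits
    bits_per_second = delta * 8 / interval_s
    return 100.0 * bits_per_second / link_bps

# Hypothetical 30-second poll on a 10 Gb/s backbone interface:
print(round(utilization_pct(1_000_000_000, 8_500_000_000, 30, 10e9), 1))  # 20.0 (%)
```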

End