1 ESnet Update
ESnet/Internet2 Joint Techs, Madison, Wisconsin, July 17, 2007
Joe Burrescia, ESnet General Manager, Lawrence Berkeley National Laboratory

2 Outline
ESnet's Role in DOE's Office of Science
ESnet's Continuing Evolutionary Dimensions
 o Capacity
 o Reach
 o Reliability
 o Guaranteed Services

3 DOE Office of Science and ESnet
"The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, … providing more than 40 percent of total funding … for the Nation's research programs in high-energy physics, nuclear physics, and fusion energy sciences."
The large-scale science that is the mission of the Office of Science is dependent on networks for:
 o Sharing of massive amounts of data
 o Supporting thousands of collaborators world-wide
 o Distributed data processing
 o Distributed simulation, visualization, and computational steering
 o Distributed data management
ESnet's mission is to enable those aspects of science that depend on networking and on certain types of large-scale collaboration.

4 The Office of Science U.S. Community
[Map: institutions supported by SC, major user facilities, DOE multiprogram laboratories, DOE program-dedicated laboratories, DOE specific-mission laboratories, and SC program sites]
Laboratories shown include: Pacific Northwest National Laboratory, Ames Laboratory, Argonne National Laboratory, Brookhaven National Laboratory, Oak Ridge National Laboratory, Los Alamos National Laboratory, Lawrence Livermore and Sandia National Laboratories, Lawrence Berkeley National Laboratory, Fermi National Accelerator Laboratory, Princeton Plasma Physics Laboratory, Thomas Jefferson National Accelerator Facility, National Renewable Energy Laboratory, Stanford Linear Accelerator Center, Idaho National Laboratory, General Atomics, and Sandia National Laboratory.

5 Footprint of SC Collaborators - Top 100 Traffic Generators
Universities and research institutes that are the top 100 ESnet users (CY2005 data).
The top 100 data flows generate 30% of all ESnet traffic (ESnet handles about 3x10^9 flows/mo.).
91 of the top 100 flows are from the Labs to other institutions (shown).
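As a rough sanity check (an editorial sketch, not a figure from the slides), that flow count works out to roughly a thousand flows per second; a minimal Python calculation, assuming a 30-day month:

    # Average flow rate implied by ~3x10^9 flows/month (30-day month assumed).
    flows_per_month = 3e9
    seconds_per_month = 30 * 24 * 3600              # 2,592,000 s
    print(f"~{flows_per_month / seconds_per_month:,.0f} flows/s")   # ~1,157 flows/s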

6 Changing Science Environment => New Demands on the Network
Increased capacity
 o Needed to accommodate a large and steadily increasing amount of data that must traverse the network
High-speed, highly reliable connectivity between Labs and US and international R&E institutions
 o To support the inherently collaborative, global nature of large-scale science
High network reliability
 o For interconnecting components of distributed large-scale science
New network services to provide bandwidth guarantees
 o For data transfer deadlines for remote data analysis, real-time interaction with instruments, coupled computational simulations, etc.

7 Network Utilization
[Chart: ESnet accepted traffic (bytes), January 1990 to June 2006, in TBytes/month]
ESnet is currently transporting over 1.2 petabytes/month, and this volume is increasing exponentially.

8 ESnet traffic has increased by 10X every 47 months, on average, since 1990.
[Log plot of ESnet monthly accepted traffic (terabytes/month), January 1990 - June 2006, annotated with traffic-growth milestones at intervals of 38 to 57 months]
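The two traffic figures above can be cross-checked with a little arithmetic; the Python sketch below assumes a 30-day month and decimal petabytes, so the numbers are approximate:

    import math

    # Average rate implied by ~1.2 PB/month of accepted traffic.
    bytes_per_month = 1.2e15
    avg_gbps = bytes_per_month * 8 / (30 * 24 * 3600) / 1e9
    print(f"average sustained rate ~ {avg_gbps:.1f} Gb/s")        # ~3.7 Gb/s

    # "10X every 47 months" corresponds to a doubling time of about 14 months.
    print(f"doubling time ~ {47 * math.log10(2):.0f} months")     # ~14 months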

9 High Volume Science Traffic Continues to Grow
Top 100 flows are increasing as a percentage of total traffic volume.
99% to 100% of the top 100 flows are science data (100% starting mid-2005).
A small number of large-scale science users account for a significant and growing fraction of total traffic volume.
[Chart: top 100 flow volumes, gridlines at 2 TB/month, January 2005 - July 2006]

10 Who Generates ESnet Traffic?
ESnet inter-sector traffic summary for June 2006.
[Diagram: traffic flows between ESnet DOE sites, commercial peering points, R&E networks (mostly universities), and international peers (almost entirely R&E sites); green = traffic coming into ESnet, blue = traffic leaving ESnet; percentages are of total ingress or egress traffic, including an annotation of ~7% DOE collaborator traffic, incl. data]
Traffic notes:
 o More than 90% of all traffic is Office of Science.
 o Less than 10% is inter-Lab.
DOE is a net supplier of data because DOE facilities are used by universities and commercial entities, as well as by DOE researchers.

11 ESnet's Domestic, Commercial, and International Connectivity
[Map: ESnet core hubs (IP and SDN), commercial and R&E peering points (MAE-E, MAE-West, PAIX-PA, Equinix, Starlight, CHI-SL, MREN, MAN LAN, NGIX-E, NGIX-W, PNWGPoP/PacificWave, MAXGPoP, SoXGPoP), high-speed peering points with Abilene, and high-speed international connections including CERN (USLHCnet, CERN+DOE funded), GÉANT (France, Germany, Italy, UK, etc.), CA*net4 (Canada), GLORIAD (Russia, China), Korea (Kreonet2), Japan (SINet), Australia (AARNet), Taiwan (TANet2), SingAREN, Netherlands, StarTap, UltraLight, Russia (BINP), and AMPATH (S. America)]
ESnet provides:
 o High-speed peerings with Abilene, CERN, and the international R&E networks.
 o Management of the full complement of global Internet routes (about 180,000 unique IPv4 routes) in order to provide DOE scientists rich connectivity to all Internet sites.

12 ESnet's Physical Connectivity (Summer 2006)
[Map: ESnet IP core (packet over SONET optical ring and hubs), 10 Gb/s SDN core, 10 Gb/s IP core, 2.5 Gb/s IP core, MAN rings (≥ 10 Gb/s), NLR-supplied and lab-supplied circuits, OC12/GigEthernet, OC3 (155 Mb/s), and 45 Mb/s and slower links; commercial and R&E peering points; high-speed peering points with Internet2/Abilene; and the same international connections shown on the previous slide. Labeled sites include PNNL, INEEL, SNLA, PANTEX, ORNL, JLAB, PPPL, BNL, FNAL, ANL, AMES, LLNL, SNLL, LBNL, SLAC, NERSC, JGI, GA, LANL, NREL, SRS, OSTI, ORAU, KCP, NOAA, LIGO, MIT, and others.]
42 end user sites: Office of Science sponsored (22), NNSA sponsored (12), joint sponsored (4), other sponsored (NSF LIGO, NOAA), laboratory sponsored (6).

13 ESnet LIMAN with SDN Connections

14 LIMAN and BNL
ATLAS (A Toroidal LHC ApparatuS) is one of four detectors located at the Large Hadron Collider (LHC) at CERN.
BNL is the largest of the ATLAS Tier 1 centers and the only one in the U.S., and so is responsible for archiving and processing approximately 20 percent of the ATLAS raw data.
During a recent multi-week exercise, BNL was able to sustain an average transfer rate from CERN to their disk arrays of 191 MB/s (~1.5 Gb/s), compared to a target rate of 200 MB/s.
 o This was in addition to "normal" BNL site traffic.
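For scale, a quick unit conversion of the sustained rate quoted above (a Python sketch; decimal megabytes assumed):

    # 191 MB/s sustained from CERN to BNL disk, expressed in other units.
    rate_MBps = 191
    print(f"{rate_MBps} MB/s = {rate_MBps * 8 / 1000:.2f} Gb/s")            # ~1.53 Gb/s
    print(f"~{rate_MBps * 1e6 * 86400 / 1e12:.0f} TB per day at that rate")  # ~17 TB/day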

15 Chicago Area MAN with SDN Connections

16 CHIMAN: FNAL and ANL
Fermi National Accelerator Laboratory is the only US Tier 1 center for the Compact Muon Solenoid (CMS) experiment at the LHC.
Argonne National Laboratory will house a 5-teraflop IBM BlueGene computer, part of the National Leadership Computing Facility.
Together with ESnet, FNAL and ANL will build the Chicago MAN (CHIMAN) to accommodate the vast amounts of data these facilities will generate and receive:
 o Five 10GE circuits will go into FNAL.
 o Three 10GE circuits will go into ANL.
 o Ring connectivity to StarLight and to the Chicago ESnet POP.
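A quick look at the aggregate capacity those circuit counts imply; the 100 TB dataset in the Python sketch below is purely illustrative, not a figure from the slides:

    # Aggregate CHIMAN capacity from the circuit counts above.
    fnal_gbps = 5 * 10          # five 10GE circuits into FNAL
    anl_gbps  = 3 * 10          # three 10GE circuits into ANL
    print(f"FNAL: {fnal_gbps} Gb/s aggregate, ANL: {anl_gbps} Gb/s aggregate")

    # Hypothetical example: moving a 100 TB dataset at the FNAL line rate.
    hours = 100e12 * 8 / (fnal_gbps * 1e9) / 3600
    print(f"100 TB at {fnal_gbps} Gb/s: ~{hours:.1f} hours")      # ~4.4 hours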

17 Jefferson Laboratory Connectivity
[Diagram: JLab connects through the Eastern LITE (E-LITE) optical network and the MATP/MAX GigaPoP (DC) to the ESnet core; labeled nodes include Old Dominion University, W&M, JLAB, JTASC, VMASC, Bute St CO, MATP, NYC, Atlanta, ODU, NASA, Lovitt, Virginia Tech, the JLAB site switch, and the ESnet router; link types shown are 10GE, OC192, and OC48]

18 ESnet Target Architecture: IP Core + Science Data Network Core + Metro Area Rings
[Diagram: 10 Gb/s circuits forming the production IP core, the Science Data Network (SDN) core, and metropolitan area rings connecting the primary DOE Labs; IP core hubs, SDN hubs, and possible hubs at Seattle, Sunnyvale, LA, San Diego, Denver, Albuquerque, Chicago, Atlanta, New York, and Washington, DC; international connections to Canada (CANARIE), Europe (GÉANT), CERN (as a loop off the backbone), Asia-Pacific, Australia, and South America (AMPATH)]

19 Reliability
[Chart: ESnet site availability grouped into "5 nines", "4 nines", and "3 nines" levels, with dually connected sites indicated]
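For reference, the availability classes named on this slide translate into allowed downtime as follows (Python sketch):

    # Allowed annual downtime for "3 nines" through "5 nines" availability.
    minutes_per_year = 365 * 24 * 60
    for nines in (3, 4, 5):
        downtime = minutes_per_year * 10 ** (-nines)
        print(f'"{nines} nines": ~{downtime:.0f} minutes of downtime per year')
    # "3 nines": ~526 min, "4 nines": ~53 min, "5 nines": ~5 min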

20 Guaranteed Services Using Virtual Circuits
Traffic isolation and traffic engineering
 - Provides for high-performance, non-standard transport mechanisms that cannot co-exist with commodity TCP-based transport
 - Enables the engineering of explicit paths to meet specific requirements, e.g. bypassing congested links by using lower bandwidth, lower latency paths
Guaranteed bandwidth [Quality of Service (QoS)]
 - Addresses deadline scheduling: where fixed amounts of data have to reach sites on a fixed schedule, so that the processing does not fall far enough behind that it could never catch up - very important for experiment data analysis (see the sketch below)
Reduces cost of handling high bandwidth data flows
 - Highly capable routers are not necessary when every packet goes to the same place
 - Use lower cost (factor of 5x) switches to reliably route the packets
End-to-end connections are required between Labs and collaborator institutions.
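To illustrate the deadline-scheduling case, the Python sketch below computes the guaranteed rate needed to land a fixed amount of data by a fixed deadline; the 50 TB / 24 hour figures are hypothetical, not from the slides:

    # Required guaranteed bandwidth for a hypothetical deadline transfer.
    dataset_tb     = 50        # data that must reach the analysis site
    deadline_hours = 24        # hard deadline before processing falls behind
    required_gbps = dataset_tb * 1e12 * 8 / (deadline_hours * 3600) / 1e9
    print(f"{dataset_tb} TB in {deadline_hours} h needs ~{required_gbps:.1f} Gb/s guaranteed")
    # ~4.6 Gb/s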

21 OSCARS: Guaranteed Bandwidth VC Service For SC Science
ESnet On-demand Secure Circuits and Advance Reservation System (OSCARS)
To ensure compatibility, the design and implementation is done in collaboration with the other major science R&E networks and end sites:
 o Internet2: Bandwidth Reservation for User Work (BRUW) - development of a common code base
 o GEANT: Bandwidth on Demand (GN2-JRA3), Performance and Allocated Capacity for End-users (SA3-PACE), and Advance Multi-domain Provisioning System (AMPS) - extends to NRENs
 o BNL: TeraPaths - a QoS-enabled collaborative data sharing infrastructure for petascale computing research
 o GA: Network Quality of Service for Magnetic Fusion Research
 o SLAC: Internet End-to-end Performance Monitoring (IEPM)
 o USN: Experimental Ultra-Scale Network Testbed for Large-Scale Science
In its current phase this effort is being funded as a research project by the Office of Science, Mathematical, Information, and Computational Sciences (MICS) Network R&D Program.
A prototype service has been deployed as a proof of concept:
 o To date more than 20 accounts have been created for beta users, collaborators, and developers.
 o More than 100 reservation requests have been processed.
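As an illustration of what such a reservation has to capture (endpoints, a guaranteed rate, and a time window), here is a minimal Python sketch; the field names and endpoint names are hypothetical and do not represent the actual OSCARS interface:

    from datetime import datetime, timedelta

    # Hypothetical reservation record; not the real OSCARS API.
    start = datetime(2007, 7, 18, 0, 0)
    reservation = {
        "source":         "site-a-gw.example.gov",   # hypothetical endpoints
        "destination":    "site-b-gw.example.edu",
        "bandwidth_mbps": 2000,                       # guaranteed rate
        "start":          start,
        "end":            start + timedelta(hours=12),
        "description":    "scheduled bulk data transfer",
    }
    print(reservation)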

22 OSCARS - BRUW Interdomain Interoperability Demonstration
[Diagram: two stitched label-switched paths - a BRUW LSP from Indianapolis, IN to Chicago, IL and an OSCARS LSP from Chicago, IL to Sunnyvale, CA - with source and sink endpoints marked]
The first interdomain, automatically configured, virtual circuit between ESnet and Abilene was created on April 6, 2005.

23 A Few URLs
 o ESnet Home Page
 o National Labs and User Facilities
 o ESnet Availability Reports
 o OSCARS Documentation