1 The Energy Sciences Network BESAC August 2004 William E. Johnston, ESnet Dept. Head and Senior Scientist R. P. Singh, Federal Project Manager Michael S. Collins, Stan Kluz, Joseph Burrescia, and James V. Gagliardi, ESnet Leads Gizella Kapus, Resource Manager and the ESnet Team Lawrence Berkeley National Laboratory Mary Anne Scott Program Manager Advanced Scientific Computing Research Office of Science Department of Energy

2 What is ESnet?
Mission: Provide interoperable, effective, and reliable communications infrastructure and leading-edge network services that support the missions of the Department of Energy, especially the Office of Science.
Vision: Provide seamless and ubiquitous access, via shared collaborative information and computational environments, to the facilities, data, and colleagues that scientists need to accomplish their goals.
Role: A component of the Office of Science infrastructure critical to the success of its research programs (program funded through ASCR/MICS; managed and operated by ESnet staff at LBNL).

3 Why is ESnet important? Enables thousands of DOE, university, and industry scientists and collaborators worldwide to make effective use of unique DOE research facilities and computing resources, independent of time and geographic location
o Direct connections to all major DOE sites
o Access to the global Internet (managing 150,000 routes at 10 commercial peering points)
o User demand has grown by a factor of more than 10,000 since its inception in the mid-1990s (a 100 percent increase every year since 1990)
Capabilities not available through commercial networks:
- Architected to move huge amounts of data between a small number of sites
- High-bandwidth peering to provide access to US, European, Asia-Pacific, and other research and education networks
Objective: Support scientific research by providing seamless and ubiquitous access to the facilities, data, and colleagues.

4 How is ESnet Managed? A community endeavor
o Strategic guidance from the OSC programs
- Energy Science Network Steering Committee (ESSC); BES represented by Nestor Zaluzec (ANL) and Jeff Nichols (ORNL)
o Network operation is a shared activity with the community
- ESnet Site Coordinators Committee
- Ensures the right operational "sociology" for success
Complex and specialized, in both network engineering and network management, in order to provide its services to the laboratories in an integrated support environment
Extremely reliable in several dimensions
Taken together, these points make ESnet a unique facility supporting DOE science, quite different from a commercial ISP or a university network.

5 …what now??? VISION: A scalable, secure, integrated network environment for ultra-scale distributed science is being developed to make it possible to combine resources and expertise to address complex questions that no single institution could manage alone.
Network strategy:
- Production network: base TCP/IP services; 99.9+% reliable
- High-impact network: increments of 10 Gbps; switched lambdas (or other solutions); 99% reliable
- Research network: interfaces with the production, high-impact, and other research networks; start with electronic switching and advance toward optical switching; very flexible [UltraScience Net]
Revisit the governance model
o SC-wide coordination
o Advisory Committee involvement

6 Where do you come in? Early identification of requirements
o Evolving programs
o New facilities
Participation in management activities
Interaction with BES representatives on the ESSC
Next ESSC meeting: Oct., in the DC area

7 What Does ESnet Provide? A network connecting DOE Labs and their collaborators that is critical to the future process of science
An architecture tailored to accommodate DOE's large-scale science
o move huge amounts of data between a small number of sites
High-bandwidth access to DOE's primary science collaborators: research and education institutions in the US, Europe, Asia Pacific, and elsewhere
Full access to the global Internet for DOE Labs
Comprehensive user support, including "owning" all trouble tickets involving ESnet users (including problems at the far end of an ESnet connection) until they are resolved, with 24x7 coverage
Grid middleware and collaboration services supporting collaborative science
o trust, persistence, and science-oriented policy

8 What is ESnet Today? ESnet builds a comprehensive IP network infrastructure (routing, IPv6, and IP multicast) on commercial circuits
o ESnet purchases telecommunications services ranging from T1 (1.5 Mb/s) to OC192 SONET (10 Gb/s) and uses these to connect core routers and sites to form the ESnet IP network
o ESnet purchases peering access to commercial networks to provide full Internet connectivity
Essentially all of the national data traffic supporting US science is carried by two networks: ESnet and Internet2/Abilene (which plays a similar role for the university community)

9 How Do Networks Work? Accessing a service, Grid or otherwise, such as a Web server or FTP server, from a client computer and client application (e.g. a Web browser) involves
o Target host names
o Host addresses
o Service identification
o Routing
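A minimal sketch of the first two of these steps, name and service resolution, using the Python standard library; the host and service names here are examples, not part of the talk:

```python
import socket

# Resolve a host name to addresses and a service name to a port number,
# the steps a client performs before any packets can be routed.
host, service = "www.es.net", "https"   # example names only

for family, _, _, _, sockaddr in socket.getaddrinfo(host, service,
                                                    proto=socket.IPPROTO_TCP):
    # sockaddr pairs the resolved host address with the service's port (443);
    # routing then takes over to deliver packets to that address.
    print(family.name, sockaddr)
```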

10 How Do Networks Work? [Diagram: a path from LBNL through ESnet (the core network) to a big ISP (e.g. SprintLink) and Google, Inc., traversing gateway, border, core, and peering routers, plus DNS]
o Border/gateway routers implement separate site and network provider policy (including site firewall policy)
o Peering routers exchange reachability information ("routes"), implement/enforce routing policy for each provider, and provide cyberdefense
o Core routers focus on high-speed packet forwarding

11 ESnet Core is a High-Speed Optical Network [Diagram: ESnet hubs and sites on an optical fiber ring, with ESnet IP routers, a site IP router, the site LAN, and the site-ESnet network policy demarcation ("DMZ")]
o Wave division multiplexing today typically provides 64 x 10 Gb/s optical channels per fiber; channels (referred to as "lambdas") are usually used in bi-directional pairs
o Lambda channels are converted to electrical channels, usually with SONET or Ethernet data framing; they can also be clear digital channels (no framing, e.g. for digital HDTV)
o A ring topology network is inherently reliable: all single point failures are mitigated by routing traffic in the other direction around the ring.
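A toy model of that ring-reliability property, with illustrative hub names: between any two hubs there are two directions around the ring, so a single link failure always leaves one path intact.

```python
# Toy model of ring failover: try one direction around the ring and fall
# back to the other when the path would cross the failed link.
ring = ["SEA", "SNV", "ELP", "ATL", "DC", "NYC", "CHI"]  # illustrative hubs

def route_around_failure(src, dst, failed_link):
    """Return a src->dst path along the ring that avoids one failed link."""
    n = len(ring)
    for step in (1, -1):                    # clockwise, then counter-clockwise
        k = ring.index(src)
        path = [src]
        while path[-1] != dst:
            nxt = ring[(k + step) % n]
            if {path[-1], nxt} == set(failed_link):
                break                       # this direction crosses the failure
            path.append(nxt)
            k = (k + step) % n
        else:
            return path                     # reached dst, failed link avoided
    return None                             # impossible with a single failure

print(route_around_failure("SNV", "NYC", ("ATL", "DC")))
# -> ['SNV', 'SEA', 'CHI', 'NYC']
```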

12 ESnet Provides Full Internet Service to DOE Facilities and Collaborators with High-Speed Access to all Major Science Collaborators [Map: the ESnet core, a packet-over-SONET optical ring (via Qwest) with hubs at SEA, SNV, ELP, ALB, CHI, NYC, DC, and ATL, serving 42 end user sites: Office of Science sponsored (22), NNSA sponsored (12), jointly sponsored (3), other sponsored (NSF LIGO, NOAA), and laboratory sponsored (6). Sites include LBNL, SLAC, LLNL, LANL, SNLL/SNLA, PNNL, INEEL, ORNL, ANL, BNL, FNAL, AMES, NERSC, JLAB, PPPL, GA, SDSC, MIT, and others. Peering points include MAE-E, MAE-W, Starlight, Chi NAP, Fix-W, PAIX-W, PAIX-E, NY-NAP, Equinix, and PNWG, with high-speed peerings to Abilene (including MAN LAN) and international networks: GEANT (Germany, France, Italy, UK, etc.), Sinet (Japan), Japan-Russia (BINP), CA*net4, KDDI (Japan), MREN, Netherlands, Russia, StarTap, Taiwan (ASCC and TANet2), Singaren, Australia, and CERN. Link speeds range from T1 (1.5 Mb/s) and T3 (45 Mb/s) through OC3 (155 Mb/s), OC12 ATM (622 Mb/s), Gigabit Ethernet (1 Gb/s), OC48 (2.5 Gb/s optical), and OC192 (10 Gb/s optical).]

13 ESnet's Peering Infrastructure Connects the DOE Community With its Collaborators [Map: ESnet peering connections at STARLIGHT, MAE-E, MAE-W, NY-NAP, PAIX-E/W, FIX-W, CHI NAP/distributed 6TAP, EQX-ASH, EQX-SJ, MAX GPOP, PNW-GPOP/CalREN2/CENIC, and LANL TECHnet, with peer counts per exchange ranging from 1 to 39; international peers include GEANT (Germany, France, Italy, UK, etc.), SInet (Japan)/KEK, Japan-Russia (BINP), CA*net4, CERN, MREN, Netherlands, Russia, StarTap, Taiwan (ASCC and TANet2), Singaren, KDDI (Japan), France, and Australia, plus Abilene and 7 universities.]
ESnet provides access to all of the Internet by managing the full complement of global Internet routes (about 150,000) at 10 general/commercial peering points, plus high-speed peerings with Abilene and the international R&E networks. This is a lot of work, and is very visible, but it provides full access for DOE.

14 What is Peering? Peering points exchange routing information that says "which packets I can get closer to their destination."
ESnet daily peering report (top 20 of about 100; columns were AS, routes, peer): SPRINTLINK, UUNET-ALTERNET, QWEST, LEVEL3, CABLE-WIRELESS, ATT-WORLDNET, VERIO, GLOBALCENTER, OPENTRANSIT, COGENTCO, ABOVENET, SINGTEL, CAIS, ABILENE, BT, TWTELECOM, ALERON, BROADWING, XO, SBC.
This is a lot of work: peering with a given outfit is not random; it carries routes that ESnet needs (e.g. to the Russian Backbone Net).

15 Why so many routes? So that when I want to get to someplace out of the ordinary, I can get there. For example, a path to the Technological Design Institute of Applied Microelectronics (Novosibirsk, Russia):
Start: snv-lbl-oc48.es.net (ESnet core)
snvrt1-ge0-snvcr1.es.net (ESnet peering at Sunnyvale)
pos3-0.cr01.sjo01.pccwbtn.net (AS3491, CAIS Internet)
pos5-1.cr01.chc01.pccwbtn.net "
pos6-1.cr01.vna01.pccwbtn.net "
pos5-3.cr02.nyc02.pccwbtn.net "
pos6-0.cr01.ldn01.pccwbtn.net "
rbnet.pos4-1.cr01.ldn01.pccwbtn.net (AS3491 -> AS5568 (Russian Backbone Network) peering point)
MSK-M9-RBNet-5.RBNet.ru (Russian Backbone Network)
MSK-M9-RBNet-1.RBNet.ru "
NSK-RBNet-2.RBNet.ru "
Finish: Novosibirsk-NSC-RBNet.nsc.ru (RBN to AS5387 (NSCNET-2))
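A sketch of why carrying all those routes matters: a router forwards each packet using the most specific (longest-prefix) route it has learned from its peers. The prefixes and next hops below are made up for illustration.

```python
import ipaddress

# Longest-prefix match over a toy route table (prefixes are illustrative).
routes = {
    "0.0.0.0/0":      "commercial transit peer",   # default route
    "198.128.0.0/11": "ESnet core",                # illustrative prefix
    "194.226.0.0/16": "Russian Backbone Network",  # illustrative prefix
}

def next_hop(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    candidates = [ipaddress.ip_network(p) for p in routes
                  if addr in ipaddress.ip_network(p)]
    best = max(candidates, key=lambda n: n.prefixlen)   # most specific wins
    return routes[str(best)]

print(next_hop("194.226.10.7"))   # -> Russian Backbone Network
print(next_hop("8.8.8.8"))        # -> commercial transit peer
```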

16 Predictive Drivers for the Evolution of ESnet
Workshop organized by the Office of Science, August 13-15, 2002. Mary Anne Scott, chair; Dave Bader, Steve Eckstrand, Marvin Frazier, Dale Koelling, Vicky White. Workshop panel chairs: Ray Bair and Deb Agarwal; Bill Johnston and Mike Wilde; Rick Stevens; Ian Foster and Dennis Gannon; Linda Winkler and Brian Tierney; Sandy Merola and Charlie Catlett.
The network and middleware requirements to support DOE science were developed by the OSC science community representing major DOE science disciplines:
o Climate
o Spallation Neutron Source
o Macromolecular Crystallography
o High Energy Physics
o Magnetic Fusion Energy Sciences
o Chemical Sciences
o Bioinformatics
Available at
The network is needed for:
o long term (final stage) data analysis
o "control loop" data analysis (influencing an experiment in progress)
o distributed, multidisciplinary simulation

17 The Analysis was Driven by the Evolving Process of Science (table columns: feature; vision for the future process of science; characteristics that motivate high speed nets; networking requirements; middleware requirements)
Climate (near term): Analysis of model data by selected communities that have high speed networking (e.g. NCAR and NERSC). A few data repositories, many distributed computing sites (NCAR 20 TBy, NERSC 40 TBy, ORNL 40 TBy). Networking: authenticated data streams for easier site access through firewalls. Middleware: server-side data processing (computing and cache embedded in the net); information servers for global data catalogues.
Climate (5 yr): Enable the analysis of model data by all of the collaborating community; add many simulation elements/components as understanding increases. 100 TBy / 100 yr of generated simulation data, 1-5 PBy/yr (just at NCAR); distribute large chunks of data to major users for post-simulation analysis. Networking: robust access to large quantities of data. Middleware: reliable data/file transfer (across system/network failures).
Climate (5-10 yr): Integrated climate simulation that includes all high-impact factors; 5-10 PBy/yr (at NCAR); add many diverse simulation elements/components, including from other disciplines; this must be done with distributed, multidisciplinary simulation; virtualized data to reduce storage load. Networking: robust networks supporting distributed simulation (adequate bandwidth and latency for remote analysis and visualization of massive datasets); quality of service guarantees for distributed simulations. Middleware: virtual data catalogues and work planners for reconstituting the data on demand.

18 Evolving Quantitative Science Requirements for Networks (end-to-end throughput: today / 5 years / 5-10 years)
o High Energy Physics: 0.5 Gb/s / 100 Gb/s / 1000 Gb/s (high bulk throughput)
o Climate (data & computation): 0.5 Gb/s / … Gb/s / N x 1000 Gb/s (high bulk throughput)
o SNS NanoScience: not yet started / 1 Gb/s / 1000 Gb/s + QoS for control channel (remote control and time critical throughput)
o Fusion Energy: 0.066 Gb/s (500 MB/s burst) / … Gb/s (500 MB / 20 sec. burst) / N x 1000 Gb/s (time critical throughput)
o Astrophysics: 0.013 Gb/s (1 TBy/week) / N*N multicast / 1000 Gb/s (computational steering and collaborations)
o Genomics data & computation: … Gb/s (1 TBy/day) / 100s of users / 1000 Gb/s + QoS for control channel (high throughput and steering)
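The line-rate entries in the table follow from simple arithmetic: convert a dataset-per-period requirement into a sustained rate. A small helper (decimal terabytes assumed) reproduces the 1 TBy/week figure:

```python
# Dataset-per-period to sustained line rate (decimal terabytes assumed).
def sustained_gbps(terabytes: float, days: float) -> float:
    bits = terabytes * 1e12 * 8              # terabytes -> bits
    return bits / (days * 86400) / 1e9       # per second, then to Gb/s

print(f"{sustained_gbps(1, 7):.3f} Gb/s")    # 1 TBy/week -> ~0.013 Gb/s
print(f"{sustained_gbps(1, 1):.3f} Gb/s")    # 1 TBy/day  -> ~0.093 Gb/s
```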

19 Observed Drivers for ESnet Evolution Are we seeing the predictions of two years ago come true? Yes!

20 OSC Traffic Increases by X Annually. Annual growth in the past five years has increased from 1.7x annually to just over 2.0x annually. ESnet is currently transporting about 250 terabytes/mo. (250,000,000 MBy/mo.) [Chart: ESnet monthly accepted traffic, TBytes/month]
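A quick projection of what that growth rate implies, starting from the ~250 TBy/month figure on this slide:

```python
# What 2.0x annual growth implies, starting from ~250 TBy/month (mid-2004).
current_tby_per_month = 250
for years in range(6):
    projected = current_tby_per_month * 2.0 ** years
    print(f"+{years} yr: {projected:7,.0f} TBy/month")
# Doubling annually turns 250 TBy/month into ~8,000 TBy/month in five years.
```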

21 ESnet is Engineered to Move a Lot of Data [Chart: ESnet top 20 data flows, 24 hr. avg.] DOE Lab to DOE Lab and lab-to-collaborator flows, including: Fermilab (US) → CERN; SLAC (US) → IN2P3 (FR) (about 1 Terabyte/day); SLAC (US) → INFN Padua (IT); Fermilab (US) → U. Chicago (US); CEBAF (US) → IN2P3 (FR); INFN Padua (IT) → SLAC (US); U. Toronto (CA) → Fermilab (US); Helmholtz-Karlsruhe (DE) → SLAC (US); SLAC (US) → JANET (UK); Fermilab (US) → JANET (UK); Argonne (US) → Level3 (US); Argonne → SURFnet (NL); IN2P3 (FR) → SLAC (US); Fermilab (US) → INFN Padua (IT).
A small number of science users account for a significant fraction of all ESnet traffic. Since BaBar data analysis started, the top 20 ESnet flows have consistently accounted for > 50% of ESnet's monthly total traffic (~130 of 250 TBy/mo).

22 ESnet Top 10 Data Flows, 1 week avg.:
SLAC (US) → INFN Padua (IT): 5.9 Terabytes
SLAC (US) → IN2P3 (FR): 5.3 Terabytes
FNAL (US) → IN2P3 (FR): 2.2 Terabytes
CERN → FNAL (US): 1.3 Terabytes
FNAL (US) → U. Nijmegen (NL): 1.0 Terabytes
U. Toronto (CA) → Fermilab (US): 0.9 Terabytes
SLAC (US) → Helmholtz-Karlsruhe (DE): 0.9 Terabytes
FNAL (US) → Helmholtz-Karlsruhe (DE): 0.6 Terabytes
FNAL (US) → SDSC (US): 0.6 Terabytes
U. Wisc. (US) → FNAL (US): 0.6 Terabytes
The traffic is not transient: daily and weekly averages are about the same.
SLAC is a prototype for what will happen when Climate, Fusion, SNS, Astrophysics, etc., start to ramp up the next generation science.
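A sketch of how such a summary is derived: aggregate bytes per (source, destination) pair and rank. The figures are the one-week totals above; the shares shown are of the listed flows only, not of all ESnet traffic.

```python
# Aggregating and ranking per-flow byte counts, as in the summary above.
flows = {
    "SLAC (US) -> INFN Padua (IT)": 5.9,
    "SLAC (US) -> IN2P3 (FR)": 5.3,
    "FNAL (US) -> IN2P3 (FR)": 2.2,
    "CERN -> FNAL (US)": 1.3,
    "FNAL (US) -> U. Nijmegen (NL)": 1.0,
}
total = sum(flows.values())
for name, tby in sorted(flows.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:32s} {tby:4.1f} TBy  {tby / total:5.1%}")
```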

23 ESnet is a Critical Element of Large-Scale Science ESnet is a critical part of the large-scale science infrastructure of high energy physics experiments, climate modeling, magnetic fusion experiments, astrophysics data analysis, etc. As other large-scale facilities – such as SNS – turn on, this will be true across DOE

24 Science Mission Critical Infrastructure ESnet is a visible and critical piece of general DOE science infrastructure
o if ESnet fails, tens of thousands of DOE and university users know it within minutes if not seconds
This requires high reliability and high operational security in
o network operations, and
o ESnet infrastructure support, the systems that support the operation and management of the network and services
- Secure and redundant mail and Web systems are central to the operation and security of ESnet: trouble tickets are handled by e-mail, engineering communication is by e-mail, and the engineering database interface is via the Web
- Secure network access to hub equipment
- Backup secure telephony access to all routers
- 24x7 help desk (joint w/ NERSC) and 24x7 on-call network engineers

25 Automated, real-time monitoring of traffic levels and the operating state of some 4400 network entities is the primary network operational and diagnosis tool. [Diagram: monitored elements include SecureNet, network configuration, OSPF metrics (routing and connectivity), the IBGP mesh (routing and connectivity), performance, and hardware configuration.]
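A minimal sketch of the polling pattern behind such monitoring, using only the standard library; the real system is SNMP-based (Spectrum), and the hosts, port, and interval below are illustrative assumptions.

```python
import socket, time

# Periodically test that each entity answers on a known port, record state.
entities = [("www.es.net", 443), ("www.lbl.gov", 443)]   # illustrative hosts

def is_up(host, port, timeout=3.0):
    """True if the entity accepts a TCP connection on the given port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for _ in range(3):                      # bounded here; a NOC tool runs forever
    for host, port in entities:
        state = "UP" if is_up(host, port) else "DOWN"
        print(time.strftime("%H:%M:%S"), host, state)
    time.sleep(60)
```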

26 ESnet's Physical Infrastructure [Photo: equipment rack detail at the NYC hub, 32 Avenue of the Americas, one of ESnet's core optical ring sites]

27 Typical Equipment of an ESnet Core Network Hub (32 AofA hub, NYC, NY; ~$1.8M list in total; connected to the ESnet core via Qwest):
o Juniper T320 AOA-CR1 core router ($1,133,000 list)
o Juniper M20 AOA-PR1 peering router ($353,000 list)
o Cisco 7206 AOA-AR1, low speed links to MIT & PPPL ($38,150 list)
o Juniper OC192 optical ring interface, the AOA end of the OC192 to CHI ($195,000 list)
o Juniper OC48 optical ring interface, the AOA end of the OC48 to DC-HUB ($65,000 list)
o AOA performance tester ($4800 list)
o Lightwave secure terminal server ($4800 list)
o Sentry power 48v 30/60 amp panel ($3900 list)
o Sentry power 48v 10/25 amp panel ($3350 list)
o Qwest DS3 DCX; DC/AC converter ($2200 list)

28 Disaster Recovery and Stability The network must be kept available even if, e.g., the West Coast is disabled by a massive earthquake.
o Remote engineers with partial duplicate infrastructure (including DNS) at LBNL, PPPL, BNL, AMES, and TWC
o Duplicate infrastructure: currently deploying full replication of the NOC databases and servers and the Science Services databases in the NYC Qwest carrier hub
o Engineers and a 24x7 Network Operations Center with generator-backed power, running Spectrum (net mgmt system), DNS (name to IP address translation), the engineering, load, and config databases, public and private Web (server and archive), the PKI certificate repository and revocation lists, and the collaboratory authorization service
Reliable operation of the network involves remote NOCs, replicated support infrastructure, generator-backed UPS power at all critical network and infrastructure locations, and a non-interruptible core. The ESnet core operated without interruption through
o the N. Calif. power blackout of 2000
o the 9/11/2001 attacks, and
o the Sept. 2003 NE States power blackout

29 ESnet WAN Security and Cyberattack Defense Cyber defense is a new dimension of ESnet security
o Security is now inherently a global problem
o As the entity with a global view of the network, ESnet has an important role in overall security
Within 30 minutes of the Sapphire/Slammer worm's release, 75,000 hosts running Microsoft's SQL Server (port 1434) were infected. ("The Spread of the Sapphire/Slammer Worm," David Moore (CAIDA & UCSD CSE), Vern Paxson (ICIR & LBNL), Stefan Savage (UCSD CSE), Colleen Shannon (CAIDA), Stuart Staniford (Silicon Defense), Nicholas Weaver (Silicon Defense & UC Berkeley EECS), Jan. 2003)

30 ESnet and Cyberattack Defense [Chart: the Sapphire/Slammer worm infection created an almost full Gb/s (1000 megabit/sec.) traffic spike on the ESnet backbone]

31 Cyberattack Defense [Diagram: attack traffic arriving at a Lab (LBNL) through ESnet's peering, border, and gateway routers]
o Lab first response: filter incoming traffic at their ESnet gateway router
o ESnet first response: filters to assist a site
o ESnet second response: filter traffic from outside of ESnet
o ESnet third response: shut down the main peering paths and provide only limited bandwidth paths for specific "lifeline" services
The Sapphire/Slammer worm infection created a Gb/s of traffic on the ESnet core until filters were put in place (both into and out of sites) to damp it out.
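Those filter responses amount to dropping traffic that matches the worm's signature; Slammer propagated on UDP port 1434, so a toy model of the rule (not real router configuration) looks like this:

```python
# Toy packet-filter model of the responses above: drop packets matching
# the worm's (protocol, destination port) signature, forward everything else.
BLOCKED = {("udp", 1434)}

def permit(packet: dict) -> bool:
    """Return True if the packet should be forwarded."""
    return (packet["proto"], packet["dport"]) not in BLOCKED

print(permit({"proto": "udp", "dport": 1434}))  # False: filtered worm traffic
print(permit({"proto": "tcp", "dport": 443}))   # True: normal traffic passes
```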

32 Science Services: Support for Shared, Collaborative Science Environments X.509 identity certificates and Public Key Infrastructure provide the basis of secure, cross-site authentication of people and systems
o ESnet negotiates the cross-site, cross-organization, and international trust relationships to provide policies that are tailored to collaborative science, in order to permit sharing computing and data resources and other Grid services
o The Certification Authority (CA) issues certificates after validating each request against policy
o This service was the basis of the first routine sharing of HEP computing resources between the US and Europe
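A sketch of the checks a relying party performs against a CA-issued identity certificate, using recent versions of the third-party `cryptography` package; the file names are hypothetical and revocation checking is omitted.

```python
from datetime import datetime, timezone
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

# Load a user certificate and the CA certificate (hypothetical file names).
with open("user.pem", "rb") as f:
    user_cert = x509.load_pem_x509_certificate(f.read())
with open("ca.pem", "rb") as f:
    ca_cert = x509.load_pem_x509_certificate(f.read())

# 1) The certificate must be within its validity period.
now = datetime.now(timezone.utc)
assert user_cert.not_valid_before_utc <= now <= user_cert.not_valid_after_utc

# 2) It must name the CA as its issuer...
assert user_cert.issuer == ca_cert.subject

# 3) ...and carry a valid CA signature over its contents (RSA case shown).
ca_cert.public_key().verify(
    user_cert.signature,
    user_cert.tbs_certificate_bytes,
    padding.PKCS1v15(),
    user_cert.signature_hash_algorithm,
)
print("certificate chains to the CA and is currently valid")
```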

33 Science Services: Public Key Infrastructure (* report as of July 15, 2004)

34 Voice, Video, and Data Tele-Collaboration Service Another highly successful ESnet Science Service is the audio, video, and data teleconferencing service that supports human collaboration
o Seamless voice, video, and data teleconferencing is important for geographically dispersed scientific collaborators
o ESnet currently provides these services to more than a thousand DOE researchers and collaborators worldwide
- H.323 (IP) videoconferences (4000 port hours per month and rising)
- audio conferencing (2500 port hours per month, constant)
- data conferencing (150 port hours per month)
- Web-based, automated registration and scheduling for all of these services
A huge cost savings for the Labs

35 ESnet's Evolution over the Next Years Upgrading ESnet to accommodate the anticipated increase from the current 100%/yr traffic growth to 300%/yr over the next 5-10 years is priority number 7 out of 20 in DOE's "Facilities for the Future of Science – A Twenty Year Outlook".
Based on the requirements of the OSC network workshops, ESnet must address
o Capable, scalable, and reliable production IP networking
- university and international collaborator connectivity
- scalable, reliable, and high-bandwidth site connectivity
o Network support of high-impact science
- provisioned circuits with guaranteed quality of service (e.g. dedicated bandwidth)
o Science Services to support Grids, collaboratories, etc.

36 New ESnet Architecture to Accommodate OSC The future requirements cannot be met with the current, telecom-provided, hub-and-spoke architecture of ESnet. The core ring has good capacity and resiliency against single point failures, but the point-to-point tail circuits are neither reliable nor scalable to the required bandwidth. [Diagram: the ESnet core ring (New York (AOA), Chicago (CHI), Sunnyvale (SNV), Atlanta (ATL), Washington, DC (DC), El Paso (ELP)) with tail circuits to DOE sites]

37 Evolving Requirements for DOE Science Network Infrastructure [Diagram: instrument, compute, storage, and cache & compute elements connected over the network at each stage]
o 1-3 yr requirements: in the near term, applications need higher bandwidth
o 2-4 yr requirements: high bandwidth and QoS
o 3-5 yr requirements: high bandwidth and QoS, plus network-resident cache and compute elements
o 4-7 yr requirements: robust bandwidth (multiple paths), network-resident cache and compute elements
o End-to-end guaranteed bandwidth paths of 1-40 Gb/s, growing to higher rates end-to-end
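Guaranteed-bandwidth and QoS services of the kind called for here are typically built on policers such as a token bucket; this is a conceptual sketch of that mechanism, not anything ESnet-specific.

```python
import time

# Token-bucket policer: admits traffic up to a configured rate and burst,
# the classic building block behind per-flow bandwidth guarantees.
class TokenBucket:
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate, self.capacity = rate_bps, burst_bits
        self.tokens, self.last = burst_bits, time.monotonic()

    def allow(self, packet_bits: float) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True           # within the guaranteed rate: forward
        return False              # over the rate: drop or mark as best-effort

bucket = TokenBucket(rate_bps=1e9, burst_bits=1e6)   # a 1 Gb/s guarantee
print(bucket.allow(12000))                           # one 1500-byte packet
```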

38 A New Architecture With the current architecture ESnet cannot address
o the increasing reliability requirements
o the long-term bandwidth needs (incrementally increasing tail circuit bandwidth is too expensive; it will not scale to what OSC needs)
- LHC will need dedicated 10 Gb/s into and out of FNAL and BNL
ESnet can benefit from
o engaging the research and education networking community for advanced technology
o leveraging the R&E community investment in fiber and networks

39 A New Architecture ESnet's new architecture goals: full redundant connectivity for every site and high-speed access for every site (at least 10 Gb/s). A three part strategy:
1) MAN rings provide dual site connectivity and much higher site-to-core bandwidth
2) A second core will provide
- multiply connected MAN rings for protection against hub failure
- extra core capacity
- a platform for provisioned, guaranteed bandwidth circuits
- an alternate path for production IP traffic
- carrier neutral hubs
3) A high-reliability IP core (like the current ESnet core)

40 A New ESnet Architecture [Diagram: the existing ESnet core (New York (AOA), Chicago (CHI), Sunnyvale (SNV), Washington, DC (DC), El Paso (ELP)) plus a 2nd core (e.g. NLR), metropolitan area rings connecting the DOE/OSC Labs, existing hubs, new hubs, and possible new hubs (including Atlanta (ATL)), with connections to Europe and Asia-Pacific]

41 ESnet Beyond FY07 [Map: a production IP ESnet core on Qwest-ESnet hubs and a high-impact science core on NLR-ESnet hubs (SEA, SNV, DEN, ALB, ELP, CHI, NYC, DC, ATL, SDG), with MANs providing high-speed cross connects to Internet2/Abilene at major DOE Office of Science sites, lab-supplied links, and major international links to Japan, CERN, and Europe (10 Gb/s, 30 Gb/s, and 40 Gb/s to Europe and AsiaPac); legend: 2.5 Gb/s, 10 Gb/s, and future phases]

42 Conclusions ESnet is an infrastructure that is critical to DOE's science mission. It is focused on the Office of Science Labs, but serves many other parts of DOE. ESnet is working hard to meet the current and future networking needs of DOE mission science in several ways:
o evolving a new high-speed, high-reliability, leveraged architecture
o championing several new initiatives that will keep ESnet's contributions relevant to the needs of our community

43 Reference -- Planning Workshops
- High Performance Network Planning Workshop, August 2002
- DOE Workshop on Ultra High-Speed Transport Protocols and Network Provisioning for Large-Scale Science Applications, April
- Science Case for Large Scale Simulation, June
- DOE Science Networking Roadmap Meeting, June
- Workshop on the Road Map for the Revitalization of High End Computing, June (public report)
- ASCR Strategic Planning Workshop, July
- Office of Science Data-Management Strategy Planning Workshops, March & May 2004 (report coming soon)