1 Energy Sciences Network (ESnet) Futures
Chinese American Networking Symposium, November 30 – December 2, 2004
Joseph Burrescia, Senior Network Engineer
William E. Johnston, ESnet Dept. Head and Senior Scientist

2 What is the Mission of ESnet?
Enable thousands of DOE, university, and industry scientists and collaborators worldwide to make effective use of unique DOE research facilities and computing resources, independent of time and geographic location
 o Direct connections to all major DOE sites
 o Access to the global Internet (managing 150,000 routes at 10 commercial peering points)
Provide capabilities not available through commercial networks
 - Architected to move huge amounts of data between a small number of sites
 - High bandwidth peering to provide access to US, European, Asia-Pacific, and other research and education networks
Objective: Support scientific research by providing seamless and ubiquitous access to the facilities, data, and colleagues

3 The STAR Collaboration at the Relativistic Heavy Ion Collider (RHIC), Brookhaven National Laboratory
Russia: MEPHI - Moscow, LPP/LHE JINR - Dubna, IHEP - Protvino
U.S. Laboratories: Argonne, Berkeley, Brookhaven
U.S. Universities: UC Berkeley, UC Davis, UC Los Angeles, Carnegie Mellon, Creighton University, Indiana University, Kent State University, Michigan State University, City College of New York, Ohio State University, Penn. State University, Purdue University, Rice University, Texas A&M, UT Austin, U. of Washington, Wayne State University, Yale University
Brazil: Universidade de Sao Paulo
China: IHEP - Beijing, IMP - Lanzhou, IPP - Wuhan, USTC, SINR - Shanghai, Tsinghua University
Great Britain: University of Birmingham
France: IReS Strasbourg, SUBATECH - Nantes
Germany: MPI - Munich, University of Frankfurt
India: IOP - Bhubaneswar, VECC - Kolkata, Panjab University, University of Rajasthan, Jammu University, IIT - Bombay
Poland: Warsaw University of Tech.

4 Supernova Cosmology Project

5 CERN / LHC High Energy Physics Data Provides One of Science's Most Challenging Data Management Problems
[Figure: the tiered LHC/CMS data distribution model - the CMS detector at CERN (15m x 15m x 22m, 12,500 tons, $700M) feeds the Tier 0+1 online system and event reconstruction, which feed regional Tier 1 centers (FermiLab USA, French, German, and Italian regional centers), Tier 2 centers, and Tier 3/Tier 4 institute workstations and physics data caches, with event simulation and analysis at the lower tiers; indicated data rates range from ~PByte/sec out of the detector, to ~2.5 Gbits/sec and ~Gbps on the wide-area links, to ~100 MBytes/sec at the institute level. Courtesy Harvey Newman, CalTech.]
Physicists in 31 countries are involved in this 20-year experiment, in which DOE is a major player. Grid infrastructure spread over the US and Europe coordinates the data analysis.

6 How is this Mission Accomplished?
ESnet builds a comprehensive IP network infrastructure (routing, IPv4, IP multicast, IPv6) based on commercial circuits
 o ESnet purchases telecommunications services ranging from T1 (1.5 Mb/s) to OC192 SONET (10 Gb/s) and uses these to connect core routers and sites to form the ESnet IP network
 o ESnet peers at high speeds with domestic and international R&E networks
 o ESnet peers with many commercial networks to provide full Internet connectivity
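For reference, the nominal line rates behind those circuit names are the standard T-carrier/SONET figures; this small lookup table is added for illustration (only rounded values appear on the slides):

```python
# Standard nominal line rates for the T-carrier/SONET circuit types named in
# the deck, in Mb/s. ESnet's purchased services in 2004 ranged from T1 to OC192.
CIRCUIT_RATES_MBPS = {
    "T1": 1.544,
    "T3": 44.736,
    "OC3": 155.52,
    "OC12": 622.08,
    "OC48": 2488.32,
    "OC192": 9953.28,
}

for name, rate in CIRCUIT_RATES_MBPS.items():
    print(f"{name:>6}: {rate:9.3f} Mb/s (~{rate / 1000:.2f} Gb/s)")
```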

7 What is ESnet Today?
[Figure: map of ESnet's 42 end user sites and its backbone - a Qwest-provided packet-over-SONET optical ring with hubs at SEA, SNV, ELP, ALB, CHI, NYC, DC, and ATL. Sites include Office of Science sponsored (22), NNSA sponsored (12), joint sponsored (3), other sponsored (NSF LIGO, NOAA), and laboratory sponsored (6) sites such as LBNL, SLAC, JGI, SNLL, LLNL, PNNL, LANL, SNLA, ORNL, ANL, BNL, FNAL, NERSC, JLAB, and PPPL. Link speeds range from T1 and T3 tail circuits through OC3, OC12, and Gigabit Ethernet up to OC48 (2.5 Gb/s) and OC192 (10 Gb/s) on the core. Peering points (MAE-E, MAE-W, PAIX-E, PAIX-W, Starlight, Chicago NAP, NY-NAP, Fix-W, Equinix, PNWG) and high-speed peerings with Abilene, MAN LAN, CERN, GEANT (Germany, France, Italy, UK, etc.), Sinet (Japan), KDDI (Japan), CA*net4, and other international R&E networks across Europe, Russia, and the Asia-Pacific provide external connectivity.]

8 Science Drives the Future Direction of ESnet
Modern, large-scale science is dependent on networks
 o Data sharing
 o Collaboration
 o Distributed data processing
 o Distributed simulation
 o Data management

9 Predictive Drivers for the Evolution of ESnet
The network and middleware requirements to support DOE science were developed by the OSC science community, representing major DOE science disciplines:
 o Climate simulation
 o Spallation Neutron Source facility
 o Macromolecular Crystallography
 o High Energy Physics experiments
 o Magnetic Fusion Energy Sciences
 o Chemical Sciences
 o Bioinformatics
Available at
The network is essential for:
 o long term (final stage) data analysis
 o "control loop" data analysis (influence an experiment in progress)
 o distributed, multidisciplinary simulation
August 2002 workshop organized by the Office of Science. Mary Anne Scott, Chair; Dave Bader, Steve Eckstrand, Marvin Frazier, Dale Koelling, Vicky White. Workshop panel chairs: Ray Bair, Deb Agarwal, Bill Johnston, Mike Wilde, Rick Stevens, Ian Foster, Dennis Gannon, Linda Winkler, Brian Tierney, Sandy Merola, and Charlie Catlett.

10 The Analysis was Driven by the Evolving Process of Science
(Table: for each feature, the vision for the future process of science, the characteristics that motivate high speed nets, and the resulting networking and middleware requirements.)

Climate (near term)
 - Vision: Analysis of model data by selected communities that have high speed networking (e.g. NCAR and NERSC)
 - Characteristics: A few data repositories, many distributed computing sites (NCAR - 20 TBy, NERSC - 40 TBy, ORNL - 40 TBy)
 - Requirements: Authenticated data streams for easier site access through firewalls; server side data processing (computing and cache embedded in the net); information servers for global data catalogues

Climate (5 yr)
 - Vision: Enable the analysis of model data by all of the collaborating community
 - Characteristics: Add many simulation elements/components as understanding increases; 100 TBy / 100 yr generated simulation data, 1-5 PBy / yr (just at NCAR); distribute large chunks of data to major users for post-simulation analysis
 - Requirements: Robust access to large quantities of data; reliable data/file transfer (across system / network failures)

Climate (5-10 yr)
 - Vision: Integrated climate simulation that includes all high-impact factors
 - Characteristics: 5-10 PBy/yr (at NCAR); add many diverse simulation elements/components, including from other disciplines - this must be done with distributed, multidisciplinary simulation
 - Requirements: Virtualized data to reduce storage load; robust networks supporting distributed simulation - adequate bandwidth and latency for remote analysis and visualization of massive datasets; quality of service guarantees for distributed simulations; virtual data catalogues and work planners for reconstituting the data on demand

11 Evolving Quantitative Science Requirements for Networks
(Table: end-to-end throughput today, in 5 years, and in 5-10 years, by science area.)

High Energy Physics: 0.5 Gb/s today; 100 Gb/s in 5 years; 1000 Gb/s in 5-10 years. Remarks: high bulk throughput
Climate (Data & Computation): 0.5 Gb/s today; […] Gb/s in 5 years; N x 1000 Gb/s in 5-10 years. Remarks: high bulk throughput
SNS NanoScience: not yet started today; 1 Gb/s in 5 years; 1000 Gb/s + QoS for control channel in 5-10 years. Remarks: remote control and time critical throughput
Fusion Energy: 0.066 Gb/s (500 MB/s burst) today; […] Gb/s (500 MB per 20 sec. burst) in 5 years; N x 1000 Gb/s in 5-10 years. Remarks: time critical throughput
Astrophysics: 0.013 Gb/s (1 TBy/week) today; N*N multicast in 5 years; 1000 Gb/s in 5-10 years. Remarks: computational steering and collaborations
Genomics (Data & Computation): […] Gb/s (1 TBy/day) today; 100s of users in 5 years; 1000 Gb/s + QoS for control channel in 5-10 years. Remarks: high throughput and steering
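The steady-state entries in the table follow directly from the quoted data volumes. As a quick back-of-the-envelope check (a sketch added here, not part of the slide), the parenthetical volumes convert to average rates like this:

```python
# Convert a sustained data volume into the average network rate it implies.
# Uses decimal units (1 TBy = 1e12 bytes), matching the rounding in the table
# (1 TBy/week -> 0.013 Gb/s for Astrophysics, 1 TBy/day -> ~0.09 Gb/s for Genomics).

def avg_rate_gbps(terabytes: float, seconds: float) -> float:
    """Average rate in Gb/s needed to move `terabytes` within `seconds`."""
    bits = terabytes * 1e12 * 8     # total bits to move
    return bits / seconds / 1e9     # gigabits per second

WEEK = 7 * 24 * 3600
DAY = 24 * 3600

print(f"1 TBy/week             -> {avg_rate_gbps(1, WEEK):.3f} Gb/s")      # ~0.013
print(f"1 TBy/day              -> {avg_rate_gbps(1, DAY):.3f} Gb/s")       # ~0.093
print(f"500 MB in a 20 s burst -> {avg_rate_gbps(0.0005, 20):.3f} Gb/s")   # ~0.2
```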

12 Observed Data Movement in ESnet
[Chart: ESnet monthly accepted traffic, in TBytes/month, through September 2004.]
Growth over the past five years has averaged about 2.0x annually. ESnet is currently transporting about 300 terabytes/mo. (300,000,000 MBytes/mo.)
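To see what sustained 2.0x annual growth implies, a short projection from the current ~300 TBy/month (an illustrative extrapolation, not a figure from the slides):

```python
# Project ESnet monthly traffic forward assuming the observed ~2.0x/year growth
# continues. The ~300 TBy/month starting point (late 2004) is taken from the slide.

current_tby_per_month = 300      # TBytes/month, late 2004
annual_growth = 2.0              # traffic roughly doubles each year

for years_out in (1, 3, 5, 10):
    projected = current_tby_per_month * annual_growth ** years_out
    print(f"+{years_out:2d} years: ~{projected:,.0f} TBy/month")

# +1: ~600, +3: ~2,400, +5: ~9,600, +10: ~307,200 TBy/month
```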

The Science Traffic on ESnet is Increasing Dramatically: A Small Number of Science Users at Major Facilities Account for a Significant Fraction of all ESnet Traffic
ESnet Top 20 Data Flows, 24 hr. avg. (source -> destination):
 - Fermilab (US) -> CERN
 - SLAC (US) -> IN2P3 (FR) - 1 Terabyte/day
 - SLAC (US) -> INFN Padova (IT)
 - Fermilab (US) -> U. Chicago (US)
 - CEBAF (US) -> IN2P3 (FR)
 - INFN Padova (IT) -> SLAC (US)
 - U. Toronto (CA) -> Fermilab (US)
 - Helmholtz-Karlsruhe (DE) -> SLAC (US)
 - DOE Lab -> ?
 - SLAC (US) -> JANET (UK)
 - Fermilab (US) -> JANET (UK)
 - Argonne (US) -> Level3 (US)
 - Argonne -> SURFnet (NL)
 - IN2P3 (FR) -> SLAC (US)
 - Fermilab (US) -> INFN Padova (IT)
Since BaBar production started, the top 20 ESnet flows have consistently accounted for > 50% of ESnet's monthly total traffic (~130 of 250 TBy/mo). As LHC data starts to move, this will increase a lot (… times).

14 Enabling the Future: ESnet's Evolution Near Term and Beyond
Based on the requirements of the OSC Network Workshops, ESnet networking must address:
 o A reliable production IP network
  - University and international collaborator connectivity
  - Scalable, reliable, and high bandwidth site connectivity
 o A network in support of high-impact science (Science Data Network)
  - Provisioned circuits with guaranteed quality of service (e.g. dedicated bandwidth)
 o Additionally, be positioned to incorporate lessons learned from DOE's UltraScience Network, the advanced research network
Upgrading ESnet to accommodate the anticipated increase from the current 100%/yr traffic growth to 300%/yr over the next 5-10 years is priority number 7 out of 20 in DOE's "Facilities for the Future of Science – A Twenty Year Outlook"

15 Evolving Requirements for DOE Science Network Infrastructure
[Figure: phased requirements for networks connecting instruments (I), compute (C), storage (S), and cache & compute (C&C) elements:
 - 1-3 yr requirements: in the near term, applications need higher bandwidth; 1-40 Gb/s, end-to-end
 - 2-4 yr requirements: high bandwidth and QoS; guaranteed bandwidth paths, end-to-end
 - 3-5 yr requirements: high bandwidth and QoS, plus network resident cache and compute elements
 - 4-7 yr requirements: high bandwidth and QoS, network resident cache and compute elements, and robust bandwidth (multiple paths)]

16 New ESnet Architecture Needs to Accommodate OSC
The essential requirements cannot be met with the current, telecom-provided, hub and spoke architecture of ESnet.
The core ring has good capacity and resiliency against single point failures, but the point-to-point tail circuits are neither reliable nor scalable to the required bandwidth.
[Figure: the ESnet core ring - hubs at New York (AOA), Chicago (CHI), Sunnyvale (SNV), Atlanta (ATL), Washington, DC (DC), and El Paso (ELP) - with DOE sites attached by point-to-point tail circuits.]

17 Basis for a New Architecture
Goals for each site:
 o fully redundant connectivity
 o high-speed access to the backbone
Meeting these goals requires a two part approach:
 o Connecting to sites via a MAN ring topology to provide:
  - Dual site connectivity and much higher site bandwidth
 o Employing a second backbone to provide:
  - Multiply connected MAN ring protection against hub failure
  - Extra backbone capacity
  - A platform for provisioned, guaranteed bandwidth circuits
  - An alternate path for production IP traffic
  - Access to carrier neutral hubs
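The redundancy benefit of the MAN ring topology can be demonstrated with a tiny connectivity check (a toy sketch with invented node names, not an ESnet tool): on a ring that touches two hubs, every site still reaches a hub after any single link failure.

```python
# Toy check that a MAN ring keeps every site connected to a backbone hub
# after any single link failure. Node names are invented for the example.
from collections import deque

def connected(links: set, a: str, b: str) -> bool:
    """Breadth-first search over undirected links (each link is a frozenset)."""
    seen, queue = {a}, deque([a])
    while queue:
        node = queue.popleft()
        if node == b:
            return True
        for link in links:
            if node in link:
                (other,) = link - {node}
                if other not in seen:
                    seen.add(other)
                    queue.append(other)
    return False

# A small metro ring: two backbone hubs (protection against hub failure) and three sites.
ring = ["hub-A", "site-1", "site-2", "site-3", "hub-B"]
links = {frozenset((ring[i], ring[(i + 1) % len(ring)])) for i in range(len(ring))}

for cut in links:                      # fail each link in turn
    survivors = links - {cut}
    assert all(connected(survivors, s, "hub-A") or connected(survivors, s, "hub-B")
               for s in ring if s.startswith("site"))
print("every site keeps a path to a hub under any single link failure")
```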

18 Bay Area, Chicago, and New York MANs
[Figure: the existing ESnet core ring - hubs at New York (AOA), Chicago (CHI), Sunnyvale (SNV), Atlanta (ATL), Washington, DC (DC), and El Paso (ELP) - with metropolitan area rings connecting the DOE/OSC sites in the Bay Area, Chicago, and New York to the existing hubs.]

19 Enhancing the Existing Core
[Figure: the existing ESnet core ring (New York (AOA), Chicago (CHI), Sunnyvale (SNV), Atlanta (ATL), Washington, DC (DC), El Paso (ELP)) serving the DOE/OSC sites, with new hubs added alongside the existing hubs.]

20 Addition of Science Data Network (SDN) Backbone
[Figure: the ESnet SDN backbone added as a second core alongside the existing ESnet core (New York (AOA), Chicago (CHI), Sunnyvale (SNV), Atlanta (ATL), Washington, DC (DC), El Paso (ELP)), with new and existing hubs, a connection to USN, and links toward Europe and the Asia-Pacific, serving the DOE/OSC sites.]

21 ESnet – UltraScience Net Topology

Guaranteed Bandwidth Circuits: On-Demand Secure Circuits and Advance Reservation System (OSCARS)
Multi-Protocol Label Switching (MPLS) and the Resource Reservation Protocol (RSVP) are used to create a Label Switched Path (LSP). A Quality-of-Service (QoS) level is assigned to the LSP to guarantee bandwidth.
[Figure: an LSP between ESnet border routers, with RSVP and MPLS enabled on the internal interfaces. MPLS labels are attached to packets from the source, and those packets are placed in a separate interface queue to ensure guaranteed bandwidth; regular production traffic uses the normal queue.]
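The queueing behavior the figure describes can be sketched as follows (a simplified model for illustration; real routers implement this in hardware, and the class and queue names here are invented):

```python
# Toy model of the per-interface queueing on the slide: packets carrying the
# MPLS label of a reserved LSP go to a guaranteed-bandwidth queue, everything
# else goes to the regular production (best-effort) queue.
from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    src: str
    dst: str
    mpls_label: Optional[int] = None   # label pushed at the ingress border router

class Interface:
    def __init__(self, reserved_labels: set):
        self.reserved_labels = reserved_labels        # labels of provisioned LSPs
        self.guaranteed_queue = deque()
        self.best_effort_queue = deque()

    def enqueue(self, pkt: Packet) -> None:
        if pkt.mpls_label in self.reserved_labels:
            self.guaranteed_queue.append(pkt)         # gets the reserved bandwidth
        else:
            self.best_effort_queue.append(pkt)        # regular production traffic

    def dequeue(self) -> Optional[Packet]:
        # Strict priority here for simplicity; a real scheduler would also
        # rate-limit the guaranteed queue to the reserved bandwidth.
        if self.guaranteed_queue:
            return self.guaranteed_queue.popleft()
        if self.best_effort_queue:
            return self.best_effort_queue.popleft()
        return None
```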

On-Demand Secure Circuits and Advance Reservation System (OSCARS): Components
 o Web-Based User Interface (WBUI): will prompt the user for a username/password and forward it to the AAAS.
 o Authentication, Authorization, and Auditing Subsystem (AAAS): will handle access, enforce policy, and generate usage records.
 o Bandwidth Scheduler Subsystem (BSS): will track reservations and map the state of the network (present and future).
 o Path Setup Subsystem (PSS): will set up and tear down the on-demand paths (LSPs).
[Figure: a user submits a request via the WBUI, or a user application submits a request directly via the AAAS; the Reservation Manager (AAAS, BSS, and PSS) processes the request, issues instructions to set up/tear down LSPs on the routers, and returns feedback to the user.]
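To make the division of labor concrete, here is a minimal sketch of how a reservation request might flow through these subsystems (class and method names are invented for illustration; this is not the actual OSCARS implementation):

```python
# Hypothetical OSCARS-style reservation flow: authenticate the user (AAAS),
# check and record the bandwidth reservation (BSS), then have the path setup
# subsystem (PSS) provision the LSP for the reserved window.
from dataclasses import dataclass

@dataclass
class Reservation:
    user: str
    src: str
    dst: str
    bandwidth_mbps: int
    start: float              # epoch seconds
    end: float

class AAAS:
    def authenticate(self, user: str, password: str) -> bool:
        # Placeholder check; the real subsystem would enforce policy and
        # generate usage records.
        return bool(user and password)

class BandwidthScheduler:
    def __init__(self, link_capacity_mbps: int):
        self.capacity = link_capacity_mbps
        self.booked = []      # accepted reservations (present and future)

    def reserve(self, r: Reservation) -> bool:
        # Accept only if the link has headroom throughout the requested window.
        overlapping = sum(b.bandwidth_mbps for b in self.booked
                          if b.start < r.end and r.start < b.end)
        if overlapping + r.bandwidth_mbps <= self.capacity:
            self.booked.append(r)
            return True
        return False

class PathSetup:
    def create_lsp(self, r: Reservation) -> str:
        # The real PSS would push MPLS/RSVP configuration to the routers.
        return f"LSP {r.src}->{r.dst} at {r.bandwidth_mbps} Mb/s"

def request_circuit(user: str, password: str, r: Reservation,
                    aaas: AAAS, bss: BandwidthScheduler, pss: PathSetup) -> str:
    if not aaas.authenticate(user, password):
        return "denied: authentication failed"
    if not bss.reserve(r):
        return "denied: insufficient bandwidth in the requested window"
    return "accepted: " + pss.create_lsp(r)
```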

ESnet Meeting Science Requirements – 2007/2008
[Figure: the planned 2007/2008 ESnet architecture - a production IP core (10 Gbps enterprise IP traffic) plus a high-impact science core, the Science Data Network (2nd core, circuit based transport), interconnected by metropolitan area rings at the major DOE Office of Science sites. Hubs at SEA, SNV, SDG, ELP, ALB, DEN, CHI, NYC, DC, and ATL; high-speed cross connects with Internet2/Abilene; lab-supplied and major international links (2.5 Gb/s and 10 Gb/s) to Europe, Japan, the Asia-Pacific, Australia, and CERN; core link capacities of 10, 30, and 40 Gb/s; future phases indicated.]