U.S. Department of Energy’s Office of Science
Mary Anne Scott, Program Manager, Advanced Scientific Computing Research, Mathematical, Information, and Computational Research Division
Connecting Scientists …to collaborators, to facilities, to datastores, to computational resources, to….
Energy Sciences Network (ESnet)
BERAC Meeting, November 3, 2004

Science Drivers for Networks
Modern, large-scale science is dependent on networks:
- Data sharing
- Collaboration
- Distributed data processing
- Distributed simulation
- Data management

Science Mission Critical Infrastructure
- ESnet is a visible and critical piece of DOE science infrastructure
- If ESnet fails, tens of thousands of DOE and university users know it within minutes if not seconds
- This requires high reliability and high operational security in network operations

What is ESnet?
- Mission: Provide interoperable, effective, and reliable communications infrastructure and leading-edge network services that support the missions of the Department of Energy, especially the Office of Science
- Vision: Provide seamless and ubiquitous access, via shared collaborative information and computational environments, to the facilities, data, and colleagues that scientists need to accomplish their goals
- Role: A component of the Office of Science infrastructure critical to the success of its research programs (funded through ASCR/MICS and managed and operated by ESnet staff at LBNL)
Essentially all of the national data traffic supporting US science is carried by two networks: ESnet and Internet2/Abilene (which plays a similar role for the university community)

What is ESnet’s user base?
- Between 10,000 and 100,000 researchers in the US (guesstimate)
- Mainly Office of Science programs: ASCR, BER, BES, FES, HEP, NP
- Also traffic for NNSA and others
- All the US national labs
- Hundreds of universities
- Hundreds of foreign institutions
Characteristics of the user base:
- Many casual users
- Users from science disciplines that span SC interests
- Users concerned with data-intensive and computationally intensive tasks
- Collaborators distributed geographically, in small to large groups

What Does ESnet Provide?
- ESnet builds a comprehensive IP network infrastructure
  - Telecommunications services ranging from T1 (1.5 Mb/s) to OC192 SONET (10 Gb/s), used to connect core routers and sites to form the ESnet IP network (a rough transfer-time comparison across these rates follows this slide)
  - Peering access to commercial networks
  - High-bandwidth access to DOE’s primary science collaborators: research and education institutions in the US, Europe, Asia Pacific, and elsewhere
  - Comprehensive user support, including “owning” all trouble tickets involving ESnet users (including problems at the far end of an ESnet connection) until they are resolved, with 24x7 coverage
- Grid middleware and collaboration services supporting collaborative science
  - trust, persistence, and science-oriented policy
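To give a feel for the range of link speeds quoted above, here is a hedged back-of-the-envelope sketch (not part of the original slides) of how long moving one terabyte would take at the nominal T1 and OC192 line rates; real transfers are slower because of protocol and host overheads.

```python
# Rough transfer-time estimates for moving 1 terabyte at the nominal link rates
# quoted on the slide. Illustrative only; real throughput is lower.
rates_bps = {
    "T1 (~1.5 Mb/s)": 1.544e6,
    "OC192 SONET (~10 Gb/s)": 9.953e9,
}
data_bits = 1e12 * 8  # one terabyte, expressed in bits

for name, rate in rates_bps.items():
    hours = data_bits / rate / 3600
    print(f"{name}: about {hours:,.1f} hours to move 1 TB")
# T1: ~1,439 hours (roughly two months); OC192: ~0.2 hours (roughly 13 minutes)
```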

Why is ESnet important?
- Enables thousands of DOE, university, and industry scientists and collaborators worldwide to make effective use of unique DOE research facilities and computing resources, independent of time and geographic location
  - Direct connections to all major DOE sites
  - Access to the global Internet (managing 150,000 routes at 10 commercial peering points)
  - User demand has grown by a factor of more than 10,000 since its inception in the mid-1990s, a 100 percent increase every year since 1990 (a rough growth check follows this slide)
- Capabilities not available through commercial networks
  - Architected to move huge amounts of data between a small number of sites
  - High-bandwidth peering to provide access to US, European, Asia-Pacific, and other research and education networks
Objective: Support scientific research by providing seamless and ubiquitous access to the facilities, data, and colleagues
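A hedged sanity check, not part of the original slides, on the growth figures quoted above: doubling every year compounds to a factor of roughly 10,000 in a bit over 13 years, consistent with the stated trend.

```python
# Sanity check: how many years of doubling give a ~10,000x increase in demand?
import math

annual_growth = 2.0       # a 100 percent increase per year means doubling
target_factor = 10_000

years = math.log(target_factor) / math.log(annual_growth)
print(f"{years:.1f} years of doubling for a {target_factor:,}x increase")
# ~13.3 years, so year-on-year doubling from the early-to-mid 1990s through 2004
# is consistent with the "factor of more than 10,000" figure on the slide.
```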

ESnet Connects DOE Facilities and Collaborators
[Network map: the ESnet IP core optical ring and hubs (SEA, SNV, ELP, ALB, CHI, NYC, DC, ATL), 42 end-user sites (Office of Science sponsored (22), NNSA sponsored (12), joint sponsored (3), laboratory sponsored (6), other sponsored: NSF LIGO and NOAA), peering points, a cross-connect to Abilene, and international links (GEANT, SInet/Japan, CA*net4, CERN, KDDI, TANet2, Singaren, and others), with link speeds ranging from T1 up to OC192 (10 Gb/s).]

ESnet’s Peering Infrastructure Connects the DOE Community With its Collaborators
[Diagram: ESnet peering (connections to other networks) at commercial and R&E exchange points (MAE-E, MAE-W, FIX-W, PAIX-E/W, NY-NAP, Starlight/CHI NAP, EQX-ASH, EQX-SJ, PNW-GPOP, MAX GPOP, CENIC/CalREN2, Distributed 6TAP), plus peerings with Abilene and international R&E networks (GEANT, SInet/KEK Japan, CA*net4, CERN, TANet2, Singaren, KDDI, and others).]
ESnet provides access to all of the Internet by managing the full complement of global Internet routes (about 150,000) at 10 general/commercial peering points, plus high-speed peerings with Abilene and the international R&E networks. This is a lot of work and is very visible, but it provides full access for DOE.

How is ESnet Managed?
- A community endeavor
  - Strategic guidance from the OSC programs
  - Energy Science Network Steering Committee (ESSC); BER representatives are T.P. Straatsma, PNNL, and Raymond McCord, ORNL
  - Network operation is a shared activity with the community (ESnet Site Coordinators Committee)
  - Ensures the right operational “sociology” for success
- Complex and specialized, both in the network engineering and the network management, in order to provide its services to the laboratories in an integrated support environment
- Extremely reliable in several dimensions
- Taken together, these points make ESnet a unique facility supporting DOE science that is quite different from a commercial ISP or university network

What’s a possible future?
VISION:
- A seamless, high-performance network infrastructure in which science applications and advanced facilities are "n-way" interconnected to terascale computing, petascale storage, and high-end visualization capabilities.
- This advanced network facilitates collaborations among researchers and interactions between researchers and experimental and computational resources.
- Science, especially large-scale science, moves to a new regime that eliminates isolation, discourages redundancy, and promotes rapid scientific progress through the interplay of theory, simulation, and experiment.

Network and Middleware Needs of DOE Science
- Focused on science requirements driving advanced network infrastructure
  - Middleware research
  - Network research
  - Network governance model
- Requirements for DOE science developed for a representative cross-section of the OSC scientific community:
  - Climate simulation
  - Spallation Neutron Source facility
  - Macromolecular crystallography
  - High energy physics experiments
  - Magnetic fusion energy sciences
  - Chemical sciences
  - Bioinformatics

Evolving Quantitative Science Requirements for Networks

Science Area                   | Today (End2End Throughput)   | 5 years (End2End Throughput)     | 5-10 years (End2End Throughput)     | Remarks
High Energy Physics            | 0.5 Gb/s                     | 100 Gb/s                         | 1000 Gb/s                           | high bulk throughput
Climate (Data & Computation)   | 0.5 Gb/s                     | … Gb/s                           | N x 1000 Gb/s                       | high bulk throughput
SNS NanoScience                | not yet started              | 1 Gb/s                           | 1000 Gb/s + QoS for control channel | remote control and time-critical throughput
Fusion Energy                  | 0.066 Gb/s (500 MB/s burst)  | … Gb/s (500 MB / 20 sec. burst)  | N x 1000 Gb/s                       | time-critical throughput
Astrophysics                   | 0.013 Gb/s (1 TBy/week)      | N*N multicast                    | 1000 Gb/s                           | computational steering and collaborations
Genomics (Data & Computation)  | … Gb/s (1 TBy/day)           | 100s of users                    | 1000 Gb/s + QoS for control channel | high throughput and steering
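Several of the "Today" entries above are average rates derived from a quoted data volume; cells shown as "…" are values not recoverable from the source. A hedged sketch of the volume-to-rate conversion reproduces the Astrophysics figure and suggests the missing Genomics value is on the order of 0.09 Gb/s.

```python
# Convert a quoted data volume into an average end-to-end rate in Gb/s.
# 1 TBy is taken as 10^12 bytes; burstiness and protocol overhead are ignored.
def average_rate_gbps(terabytes: float, seconds: float) -> float:
    return terabytes * 1e12 * 8 / seconds / 1e9

WEEK = 7 * 24 * 3600
DAY = 24 * 3600

print(f"Astrophysics, 1 TBy/week: {average_rate_gbps(1, WEEK):.3f} Gb/s")  # ~0.013
print(f"Genomics,     1 TBy/day:  {average_rate_gbps(1, DAY):.3f} Gb/s")   # ~0.093
```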

New Strategic Directions to Address Needs of DOE Science
Focused on what is needed to achieve the science-driven network requirements of the previous workshop
- THE #1 DRIVER for continuing advancements in networking and middleware: petabyte-scale experimental and simulation data systems will be increasing to exabyte-scale data systems (a rough growth-horizon estimate follows this slide)
  - Bioinformatics, climate, LHC, etc.
  - Computational systems that process or produce data continue to advance with Moore’s Law
  - …
- Organized by the ESSC
  - Workshop Chair: Roy Whitney, JLAB
  - Workshop Editors: Roy Whitney, JLAB; Larry Price, ANL
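A hedged aside, not from the slides: an exabyte is a factor of 1000 beyond a petabyte, so at the 100 percent per year growth quoted earlier that transition takes roughly a decade, and considerably less if growth accelerates toward the 300%/yr figure mentioned later in the deck.

```python
# Years needed for data volumes to grow 1000x (petabyte scale -> exabyte scale).
import math

print(f"~{math.log2(1000):.1f} years at 100% growth (doubling) per year")    # ~10 years
# Reading "300%/yr" loosely as roughly quadrupling each year:
print(f"~{math.log(1000) / math.log(4):.1f} years at ~4x growth per year")   # ~5 years
```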

Observed Drivers for ESnet Evolution
- Are we seeing the predictions of two years ago come true?
- Yes!

…what now???
VISION: A scalable, secure, integrated network environment for ultra-scale distributed science is being developed to make it possible to combine resources and expertise to address complex questions that no single institution could manage alone.
- Network strategy (what these reliability targets mean in downtime is sketched after this slide)
  - Production network: base TCP/IP services; 99.9+% reliable
  - High-impact network: increments of 10 Gbps; switched lambdas (and other solutions); 99% reliable
  - Research network: interfaces with the production, high-impact, and other research networks; starts electronic and advances towards optical switching; very flexible
- Revisit governance model
  - SC-wide coordination
  - Advisory Committee involvement
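A hedged illustration, not from the slides, of what those availability targets allow in terms of downtime over a year.

```python
# Downtime per year implied by the availability targets on this slide.
hours_per_year = 365 * 24

for label, availability in [("Production network, 99.9%", 0.999),
                            ("High-impact network, 99%", 0.99)]:
    downtime_hours = (1 - availability) * hours_per_year
    print(f"{label}: ~{downtime_hours:.1f} hours of downtime allowed per year")
# 99.9% allows ~8.8 hours per year; 99% allows ~87.6 hours (~3.7 days) per year.
```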

Evolution of ESnet
- Upgrading ESnet to accommodate the anticipated increase from the current 100%/yr traffic growth to 300%/yr over the next 5-10 years is priority number 7 out of 20 in DOE’s “Facilities for the Future of Science – A Twenty Year Outlook”
- Based on the requirements, ESnet must address:
  I. Capable, scalable, and reliable production IP networking
     - University and international collaborator connectivity
     - Scalable, reliable, and high-bandwidth site connectivity
  II. Network support of high-impact science
     - Provisioned circuits with guaranteed quality of service (e.g., dedicated bandwidth)
  III. Evolution to optically switched networks
     - Partnership with UltraScienceNet
     - Close collaboration with the network R&D community
  IV. Science services to support Grids, collaboratories, etc.

New ESnet Architecture Needed to Accommodate OSC
The essential DOE Office of Science requirements cannot be met with the current, telecom-provided, hub-and-spoke architecture of ESnet. The core ring has good capacity and resiliency against single-point failures, but the point-to-point tail circuits are neither reliable nor scalable to the required bandwidth.
[Diagram: the ESnet core ring with hubs at New York (AOA), Chicago (CHI), Sunnyvale (SNV), Atlanta (ATL), Washington, DC (DC), and El Paso (ELP), with DOE sites attached by tail circuits.]

A New ESnet Architecture
- Goals
  - Fully redundant connectivity for every site (a sketch of the availability gain from redundancy follows this slide)
  - High-speed access for every site (at least 20 Gb/s)
- Three-part strategy
  - Metropolitan Area Network (MAN) rings provide dual site connectivity and much higher site-to-core bandwidth
  - A Science Data Network core for:
    - multiply connected MAN rings for protection against hub failure
    - expanded capacity for science data
    - a platform for provisioned, guaranteed-bandwidth circuits
    - an alternate path for production IP traffic
    - carrier-circuit and fiber-access neutral hubs
  - A high-reliability IP core (e.g., the current ESnet core)
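A hedged sketch, not from the slides, of why fully redundant (dual-path) connectivity matters: a site stays reachable unless both access paths fail at once, so even modest per-path availability compounds to a much better figure, assuming independent failures.

```python
# Availability of a site with one access circuit vs. two independent paths.
# Illustrative assumption: each path is ~99% available and failures are independent.
single_path = 0.99
dual_path = 1 - (1 - single_path) ** 2   # up if at least one of the two paths is up

print(f"Single tail circuit:          {single_path:.2%} available")
print(f"Dual (MAN ring) connectivity: {dual_path:.2%} available")
# 99% per path -> ~99.99% for the pair, roughly a 100x reduction in outage time.
```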

ESnet MAN Architecture
[Diagram: an example metropolitan area ring connecting ANL and FNAL site gateway routers and site equipment to a Qwest hub and to StarLight (international peerings, including a DOE-funded link to CERN), carrying ESnet production IP service over the ESnet IP core and ESnet-managed λ / circuit services tunneled through the IP backbone to the ESnet SDN core, with ESnet management and monitoring of the ring; T320 routers shown at the hubs.]

A New ESnet Architecture: Science Data Network + IP Core
[Diagram: the ESnet IP core plus a second ESnet Science Data Network core linking existing, new, and possible new hubs (New York (AOA), Chicago (CHI), Sunnyvale (SNV), Washington, DC (DC), El Paso (ELP), Atlanta (ATL)), metropolitan area rings serving DOE/OSC labs, and links to GEANT (Europe), Asia-Pacific networks, and CERN.]

Goals for ESnet – FY07
[Map: planned FY07 topology showing the production IP ESnet core (Qwest/ESnet hubs), a high-impact science core (SDN/ESnet hubs) built out in 10-40 Gb/s increments, MANs, high-speed cross-connects with Internet2/Abilene, major DOE Office of Science sites, lab-supplied links, major international links (Japan, Europe, CERN, AsiaPac) at 2.5-10 Gb/s, and future phases.]

Conclusions
- ESnet is an infrastructure that is critical to DOE’s science mission
  - Focused on the Office of Science labs, but serves many other parts of DOE
- ESnet is working hard to meet the current and future networking needs of DOE mission science in several ways:
  - Evolving a new high-speed, high-reliability, leveraged architecture
  - Championing several new initiatives which will keep ESnet’s contributions relevant to the needs of our community
- Grid middleware services for large numbers of users are hard, but they can be provided if careful attention is paid to scaling

Where do you come in?
- Early identification of requirements
  - Evolving programs
  - New facilities
- Interaction with BER representatives on the ESSC
- Participation in management activities

Additional Information

Planning Workshops
- High Performance Network Planning Workshop, August
- DOE Workshop on Ultra High-Speed Transport Protocols and Network Provisioning for Large-Scale Science Applications, April
- Science Case for Large Scale Simulation, June
- DOE Science Networking Roadmap Meeting, June
- Workshop on the Road Map for the Revitalization of High End Computing, June (public report)
- ASCR Strategic Planning Workshop, July
- Planning Workshops - Office of Science Data-Management Strategy, March & May 2004 (report coming soon)

ESnet is Architected and Engineered to Move a Lot of Data
- Annual growth in the past five years has increased from 1.7x annually to just over 2.0x annually.
- ESnet is currently transporting about 250 terabytes/mo. (250,000,000 MBytes/mo.).
[Chart: ESnet Monthly Accepted Traffic, in TBytes/month.]
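A hedged conversion, not from the slides: 250 TBytes accepted per month works out to a sustained network-wide average of well under 1 Gb/s, which is why the capacity planning elsewhere in the deck is driven by peak flows and growth rather than by the average.

```python
# Network-wide average rate implied by ~250 TBytes accepted per month.
terabytes_per_month = 250
seconds_per_month = 30 * 24 * 3600

average_gbps = terabytes_per_month * 1e12 * 8 / seconds_per_month / 1e9
print(f"~{average_gbps:.2f} Gb/s sustained network-wide average")   # ~0.77 Gb/s
```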

Who Generates Traffic, and Where Does it Go?
[Diagram: ESnet Inter-Sector Traffic Summary, Jan 2003 / Feb 2004 (1.7x overall traffic increase, 1.9x OSC increase), showing ingress (into ESnet) and egress (leaving ESnet) percentages of total traffic exchanged among DOE sites, peering points, commercial networks, R&E networks (mostly universities), and international networks, including DOE collaborator traffic and data.]
- Note that more than 90% of the ESnet traffic is OSC traffic.
- The international traffic is increasing due to BaBar at SLAC and the LHC Tier 1 centers at FNAL and BNL.
- DOE is a net supplier of data because DOE facilities are used by universities and commercial entities, as well as by DOE researchers.
- ESnet Appropriate Use Policy (AUP): all ESnet traffic must originate and/or terminate on an ESnet site (no transit traffic is allowed).

ESnet is Engineered to Move a Lot of Data
A small number of science users account for a significant fraction of all ESnet traffic.
- Since BaBar data analysis started, the top 20 ESnet flows have consistently accounted for > 50% of ESnet’s monthly total traffic (~130 of 250 TBy/mo).
[Chart: ESnet Top 20 Data Flows, 24 hr. avg. The listed flows include Fermilab (US) → CERN; SLAC (US) → IN2P3 (FR) at about 1 Terabyte/day; SLAC (US) → INFN Padua (IT); Fermilab (US) → U. Chicago (US); CEBAF (US) → IN2P3 (FR); INFN Padua (IT) → SLAC (US); U. Toronto (CA) → Fermilab (US); Helmholtz-Karlsruhe (DE) → SLAC (US); DOE Lab → DOE Lab; SLAC (US) → JANET (UK); Fermilab (US) → JANET (UK); Argonne (US) → Level3 (US); Argonne → SURFnet (NL); IN2P3 (FR) → SLAC (US); Fermilab (US) → INFN Padua (IT).]

ESnet Top 10 Data Flows, 1 week avg.
- FNAL (US) → IN2P3 (FR): 2.2 Terabytes
- SLAC (US) → INFN Padua (IT): 5.9 Terabytes
- U. Toronto (CA) → Fermilab (US): 0.9 Terabytes
- SLAC (US) → Helmholtz-Karlsruhe (DE): 0.9 Terabytes
- SLAC (US) → IN2P3 (FR): 5.3 Terabytes
- CERN → FNAL (US): 1.3 Terabytes
- FNAL (US) → U. Nijmegen (NL): 1.0 Terabytes
- FNAL (US) → Helmholtz-Karlsruhe (DE): 0.6 Terabytes
- FNAL (US) → SDSC (US): 0.6 Terabytes
- U. Wisc. (US) → FNAL (US): 0.6 Terabytes
Notes:
- The traffic is not transient: daily and weekly averages are about the same.
- SLAC is a prototype for what will happen when climate, fusion, SNS, astrophysics, etc., start to ramp up the next generation of science.