1 (Brief) Introductory Remarks On Behalf of the U.S. Department of Energy ESnet Site Coordinating Committee (ESCC)
W. Scott Bradley, ESCC Chairman

2 Welcome!!! Many thanks to our UM hosts and the Internet2 staff. Close collaboration between the DOE and university networking communities has evolved past a “nice to have” to a mutual operational necessity. While the demand placed on ESnet continues to accelerate at unprecedented scale, relatively little ESnet traffic remains within the DOE complex:
- Onslaught of the LHC
- Onslaught of Grid Computing

3 Large-Scale Flow Trends, June 2006 (subtitle: “Onslaught of the LHC”)
[Chart: Traffic Volume of the Top 30 AS-AS Flows, June 2006, in Terabytes. AS-AS pairs are mostly Lab to R&E site, a few Lab to R&E network, and a few “other”. Flows are broken out by DOE Office of Science program: LHC / High Energy Physics Tier 0-Tier 1, LHC / HEP T1-T2, HEP, Nuclear Physics, Lab - university, LIGO (NSF), Lab - commodity, and Math. & Comp. (MICS). Annotation: FNAL -> CERN traffic is comparable to BNL -> CERN, but runs on layer 2 flows that are not yet monitored for traffic (monitoring coming soon).]
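
As a rough illustration of how a "top 30 AS-AS flows" tally like the one behind this chart can be computed, here is a minimal Python sketch. It is not ESnet's actual tooling; the record layout, AS numbers, and byte counts are purely illustrative assumptions.

from collections import defaultdict

# Each record: (source AS, destination AS, bytes transferred).
# AS numbers and volumes below are illustrative only.
flow_records = [
    ("AS3152", "AS513",   120e12),  # e.g. a lab-to-CERN style flow
    ("AS43",   "AS513",   115e12),
    ("AS3152", "AS513",    30e12),
    ("AS293",  "AS11537",   8e12),
]

def top_as_pairs(records, n=30):
    """Sum bytes per (src AS, dst AS) pair and return the n largest pairs."""
    totals = defaultdict(float)
    for src, dst, nbytes in records:
        totals[(src, dst)] += nbytes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

for (src, dst), nbytes in top_as_pairs(flow_records):
    print(f"{src} -> {dst}: {nbytes / 1e12:.1f} TB")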

4 The Onslaught of Grids
Question: Why is peak flow bandwidth decreasing while total traffic is increasing?
Answer: Most large data transfers are now done by parallel / Grid data movers. In June, % of the hosts generating the 1000 work flows were involved in parallel data movers (Grid applications).
[Chart annotation:] Plateaus indicate the emergence of parallel transfer systems (many systems transferring the same amount of data at the same time).
This, combined with the dramatic increase in the proportion of traffic due to large-scale science (now 50% of all traffic), represents the most significant traffic pattern change in the history of ESnet. This probably argues for a network architecture that favors path multiplicity and route diversity.
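
To make the question and its answer concrete, the small sketch below shows the arithmetic (the 8 Gb/s figure and the stream counts are assumptions, not measurements): splitting a fixed bulk transfer across more parallel movers leaves the aggregate rate unchanged while the rate attributed to any single flow shrinks, which is why per-flow peaks plateau even as total volume keeps growing.

def per_flow_and_aggregate(total_gbps, n_streams):
    """Return (per-flow rate, aggregate rate) when a transfer is split evenly."""
    per_flow = total_gbps / n_streams
    return per_flow, per_flow * n_streams

# An assumed 8 Gb/s science transfer, split across more and more movers.
for n in (1, 4, 16, 64):
    per_flow, aggregate = per_flow_and_aggregate(8.0, n)
    print(f"{n:3d} parallel streams: {per_flow:6.3f} Gb/s per flow, "
          f"{aggregate:.1f} Gb/s aggregate")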

5 We mutually face networking challenges far more complex than anything that throwing more bandwidth on-line can fix:
- The nature of data flows is redefining the nature of the networks themselves:
  - Long-term vs. short-term programmed bulk data transfer
  - Chaotic, short-term preemptive activity (e.g. ad-hoc jobs, control activity)
- Services that were in R&D only a few years ago are now standard fare (e.g. QoS, MPLS, Layer 2)
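
As a purely illustrative sketch of why these traffic classes ask for more than raw bandwidth, consider a toy admission check that reserves guaranteed capacity for long-term programmed bulk transfers while always leaving headroom for chaotic, short-term activity. This is not OSCARS or any real ESnet service; the link capacity, reservation names, and admission rule are invented for the example.

from dataclasses import dataclass

@dataclass
class Reservation:
    name: str            # human-readable label for the bulk transfer
    gbps: float          # guaranteed bandwidth requested
    start_hour: int      # hour the transfer begins (simplified time model)
    duration_hours: int  # how long the guarantee is held

LINK_CAPACITY_GBPS = 10.0      # assumed link size
BEST_EFFORT_FLOOR_GBPS = 2.0   # capacity always kept free for ad-hoc traffic

def admit(existing, request: Reservation) -> bool:
    """Admit the request only if every hour it spans stays under the reservable cap."""
    cap = LINK_CAPACITY_GBPS - BEST_EFFORT_FLOOR_GBPS
    for hour in range(request.start_hour, request.start_hour + request.duration_hours):
        committed = sum(
            r.gbps for r in existing
            if r.start_hour <= hour < r.start_hour + r.duration_hours
        )
        if committed + request.gbps > cap:
            return False
    return True

booked = [Reservation("Long-term Tier 0 -> Tier 1 bulk copy", 4.0, 0, 12)]
print(admit(booked, Reservation("Programmed dataset replication", 3.0, 6, 6)))  # True
print(admit(booked, Reservation("Oversized competing transfer", 5.0, 2, 4)))    # False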

6 The Moral of the Story: The US cannot remain a world leader in scientific research without addressing these networking challenges. And with that…

7 Let’s Get to Work!!!