Michael Ernst – U.S. ATLAS Tier-1 Network Status – Evolution of LHC Networking – February 10, 2014

Presentation transcript:

Michael Ernst – U.S. ATLAS Tier-1 Network Status – Evolution of LHC Networking – February 10, 2014

Tier-1 Production Network Connectivity
– For the Tier-1, the maximum usable bandwidth is 70 Gbps (50 Gbps of dedicated, unshared circuits plus 20 Gbps of general IP service shared across all departments at BNL)
– Currently available for Tier-0 → Tier-1 and Tier-1 → Tier-1: 17 Gbps via OPN/USLHCNet + 2×10 Gbps ESnet/GEANT shared by researchers in the US
– One dedicated 10 Gbps circuit for LHCONE (LHC Open Network Environment), connecting the Tier-1 at MANLAN in New York
– The DOE/ESnet "dark fiber project" has brought abundant physical fiber infrastructure into the lab
– BNL is connected to ESnet at 100G
– The T1 facility is connected to ESnet at 100G for R&D (ANA TA link)
– In the process of moving the BNL/T1 production environment to 100G

[Diagram: OPN / R&E + virtual circuits / LHCONE connectivity paths, with link rates (e.g., 3.5 Gbps); ATLAS Software and Computing Week, October 24]

[Map: ESnet5 topology (March). Legend: routed IP 100 Gb/s; routed IP 4×10 Gb/s; 3rd-party 10 Gb/s; express/metro 100 Gb/s and 10 Gb/s; express multi-path 10G; lab-supplied links; other links; tail circuits. Shows major Office of Science (SC) sites, major non-SC DOE sites, commercial and R&E peering points, ESnet optical transport nodes and PoP/hub locations, ESnet-managed 10G and 100G routers, site-managed routers, and the MANLAN exchange. Geography is only representational.]

BNL’s LHCOPN connectivity is provided by USLHCNet (diagram: H. Newman)

[Diagram: BNL Tier-1 site – 12 PB storage, worker nodes (WNs), 20 data transfer nodes]

[Diagram: BNL Tier-1 site – 12 PB storage and worker nodes (WNs)]

CERN/T1 → BNL Transfer Performance via ANA 100G
Regular ATLAS production + test traffic. Observations (all in the context of ATLAS):
– Never exceeded ~50 Gbit/s
– CERN (ATLAS EOS) → BNL limited at ~1.5 GB/s
– Achieved >8 GB/s between CERN and BNL
– Each T1 (via OPN/CERN) → BNL limited to ~0.5 GB/s
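To relate the per-flow GB/s figures above to the link-level Gbit/s numbers, here is a minimal arithmetic sketch, assuming decimal units (1 GB/s = 8 Gbit/s); the labels are just shorthand for the bullets above, not measured data:

```python
# Minimal arithmetic sketch relating the GB/s figures above to the
# link-level Gbit/s numbers (assumes decimal units: 1 GB/s = 8 Gbit/s).

def gbytes_to_gbits(rate_gb_per_s: float) -> float:
    """Convert a rate in gigabytes per second to gigabits per second."""
    return rate_gb_per_s * 8.0

observed_gb_per_s = {
    "CERN (ATLAS EOS) -> BNL limit": 1.5,
    "Best achieved CERN <-> BNL":    8.0,
    "Per-T1 via OPN/CERN limit":     0.5,
}

for label, gb_s in observed_gb_per_s.items():
    print(f"{label}: {gb_s} GB/s ~= {gbytes_to_gbits(gb_s):.1f} Gbit/s")

# For reference, the 70 Gbps of usable WAN capacity quoted on the earlier
# slide corresponds to 70 / 8 = 8.75 GB/s, so a >8 GB/s transfer would
# essentially fill that capacity.
```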

Evolving Tier-2 Networking
– All 5 US ATLAS Tier-2s (10 sites) are currently connected at a rate of at least 10 Gbps
  – This has proven insufficient to efficiently utilize the resources at federated sites (CPU & disk at different sites)
– The US ATLAS Facilities have recognized the need to develop the network infrastructure at the sites
  – A comprehensive, forward-looking plan exists
  – Additional funding was provided by US ATLAS management & NSF
– Sites are in the process of upgrading their connectivity to 100 Gbps
  – 6 sites will have completed the upgrade by the end of April
  – All others will be done by the end of …

From CERN to BNL

From BNL to T1s

From BNL to T1s and T2s

T1s vs. T2s from BNL (2013 Winter Conference Preparations)
[Plot panels: CA T1 vs. CA T2s, DE T1 vs. DE T2s, UK T1 vs. UK T2s, FR T1 vs. FR T2s]
T2s in several regions are getting roughly an order of magnitude more data from BNL than the associated T1s.

From T1s to BNL

From T1s and T2s to BNL

From BNL to CERN

T1s vs. T2s to BNL
[Plot panels: DE T1 vs. DE T2s, FR T1 vs. FR T2s, UK T1 vs. UK T2s, CA T1 vs. CA T2s]

From BNL to non-US T2s

From non-US T2s to BNL

Remote Access – A Possible Game-Changer
– Data access over the WAN at job runtime
  – Today tightly coupled with Federated Data Access (automatic data discovery with the XrootD redirector)
– Unpredictable network/storage bandwidth requirement
  – Possible issues include hotspots, campus network congestion, storage congestion, latency
– Totally synchronous: time to completion within minutes/seconds (or less)
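For illustration, a minimal sketch of what such WAN data access at job runtime can look like through an XRootD federation redirector (as in FAX), using the XRootD Python bindings; the redirector hostname and file path below are hypothetical placeholders, not real FAX endpoints:

```python
# Minimal sketch of remote (WAN) data access through an XRootD federation
# redirector, as in FAX. The redirector host and file path below are
# hypothetical placeholders, not real FAX endpoints.
from XRootD import client
from XRootD.client.flags import OpenFlags

# Hypothetical federation redirector; it resolves the file to whichever
# federated site actually holds a copy.
url = "root://fax-redirector.example.org//atlas/some/dataset/file.root"

f = client.File()
status, _ = f.open(url, OpenFlags.READ)
if not status.ok:
    raise RuntimeError(f"open failed: {status.message}")

# Read the first megabyte; real analysis jobs typically read selectively,
# which is what makes the WAN bandwidth demand hard to predict.
status, data = f.read(offset=0, size=1024 * 1024)
print(f"read {len(data)} bytes from {url}")
f.close()
```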

Worldwide FAX Deployment

Jobs Accessing Data Remotely w/ FAX

Traffic Statistics – Observations
Traffic volume to/from BNL:
– From CERN to BNL: ~500 TB/month during ATLAS data taking
– To BNL: 1,400 TB/month (peak 1,900 TB/month)
– From BNL: 1,900 TB/month (peak … TB/month)
T1 traffic volume to/from BNL via LHCOPN:
– To BNL: 400 TB/month (peak 1,200 TB/month)
– From BNL: 400 TB/month (peak 600 TB/month)
– The BNL-to-T2 volume during conference preparation was an order of magnitude higher than the BNL-to-T1 volume
Traffic volume from/to BNL via LHCONE and GIP:
– To BNL from non-US T2s: 200 TB/month (peak 500 TB/month)
– From BNL to non-US T2s: 1,000 TB/month (~400 MB/s)
– Traffic is clearly driven by analysis activities
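As a sanity check on the quoted monthly volumes, a small sketch converting TB/month into an average MB/s rate, assuming decimal units and a 30-day month (the labels mirror the bullets above):

```python
# Convert monthly traffic volumes into average rates, to cross-check figures
# like "1,000 TB/month (~400 MB/s)". Assumes decimal units, 30-day month.
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000 s

def tb_per_month_to_mb_per_s(volume_tb: float) -> float:
    """Average rate in MB/s for a given monthly volume in TB (1 TB = 1e6 MB)."""
    return volume_tb * 1e6 / SECONDS_PER_MONTH

for label, volume_tb in [
    ("CERN -> BNL during data taking", 500),
    ("From BNL (average)", 1900),
    ("BNL -> non-US T2s", 1000),
]:
    rate = tb_per_month_to_mb_per_s(volume_tb)
    print(f"{label}: {volume_tb} TB/month ~= {rate:.0f} MB/s")

# 1,000 TB/month works out to ~386 MB/s, matching the ~400 MB/s on the slide.
```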

Trends
Looking at the 2012 and 2013 statistics from BNL's perspective, the traffic data suggest that BNL/T1-to-T2 traffic is dominating:
– Traffic to non-US T2s doubled to 1 PB/month in September 2012
  – Roughly constant since then, with potential to grow with new data
  – Largely driven by analysis
– The BNL traffic volume from/to T1s via LHCOPN has stayed fairly constant for two years at ~500 TB/month
  – Largely independent of data taking

Conclusion
Rather than maintaining distinct networks, the LHC community should aim at unifying its network infrastructure:
– In ATLAS the Tiers are becoming more and more meaningless
– We are thinking about optimizing the usage of CPU and disk; we also need to think about optimizing the usage of network resources
– Load-balanced links
– Traffic prioritization, if necessary
– Traffic aggregation on fewer links

R&E Transatlantic Connectivity (5/2013) – Dale Finkelson
(Not showing LHCOPN circuits)

Looking at it generically …

Concerns
With the T1 and the T2s in the US now upgrading to 100G, the global infrastructure needs to follow.
LHCONE evolution:
– Currently LHCONE sits side-by-side with the general R&E infrastructure
– Traffic is segregated, but what is actually the benefit?
  – Is anyone looking at the flows for optimization or steering?
  – Is it really true that our "elephant" flows interfere with traffic from other science communities?
P2P/dynamic circuit infrastructure:
– Are the interface definitions and components mature enough to serve applications?
– What would happen if the experiments started to use dynamic circuits extensively, in a multi-domain environment?
– Would there be sufficient infrastructure in the system?

Backup Material

From US T2s to BNL

From BNL to US T2s