1 LHCOPN Status and Plans. David Foster, Head, Communications and Networks, CERN. January 2008, Joint Techs, Hawaii.

2 Acknowledgments: Many presentations and material in the public domain have contributed to this presentation, too numerous to mention individually.

3 LHC. Mont Blanc, 4810 m. Downtown Geneva.

4 CERN – March 2007. The LHC: 27 km in circumference. SC magnets pre-cooled to -193 °C (80 K) using 10,080 tonnes of liquid nitrogen; 60 tonnes of liquid helium bring them down to -271.3 °C (1.9 K). 600 million proton collisions per second. The internal pressure of the LHC is 10^-13 atm, ten times less than the pressure on the Moon.

5 CERN's Detectors. To observe the collisions, collaborators from around the world are building four huge experiments: ALICE, ATLAS, CMS, LHCb. Detector components are constructed all over the world. Funding comes mostly from the participating institutes, less than 20% from CERN.

6 The LHC Computing Challenge
- Signal/Noise
- Data volume: high rate x large number of channels x 4 experiments → 15 PetaBytes of new data each year
- Compute power: event complexity x number of events x thousands of users → 100 k of today's fastest CPUs
- Worldwide analysis & funding: computing funding locally in major regions & countries; efficient analysis everywhere → GRID technology
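
As a rough illustration of what 15 PB per year implies for the wide-area network, the sketch below converts the slide's figure into a sustained export rate; the decimal petabyte, the uniform split across 11 Tier-1s and the assumption of continuous export are simplifications made for this sketch, not figures from the talk.

```python
# Rough conversion of "15 PB of new data each year" (figure from the slide)
# into a sustained wide-area export rate. The decimal petabyte, the uniform
# split across 11 Tier-1s and the assumption of continuous export are
# simplifications made for this sketch.

PB = 1e15                               # bytes per petabyte (decimal)
new_data_per_year = 15 * PB             # bytes/year, from the slide
seconds_per_year = 365 * 24 * 3600

avg_rate_bps = new_data_per_year * 8 / seconds_per_year
print(f"Average T0 export rate: {avg_rate_bps / 1e9:.1f} Gbit/s sustained")

# If the raw data were split evenly over 11 Tier-1s (illustrative only):
per_t1_bps = avg_rate_bps / 11
print(f"Uniform share per Tier-1: {per_t1_bps / 1e9:.2f} Gbit/s")
```

The roughly 4 Gbit/s average, before reprocessing passes, bursts and catch-up are considered, is why the T0-T1 links are provisioned at 10 Gbit/s.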

7 CERN – March 2007

8

9 LHC Computing → Multi-science Grid
- 1999: MONARC project – first LHC computing architecture, a hierarchical distributed model
- 2000: growing interest in grid technology; the HEP community is the main driver in launching the DataGrid project
- 2001-2004: EU DataGrid project – middleware & testbed for an operational grid
- 2002-2005: LHC Computing Grid (LCG) – deploying the results of DataGrid to provide a production facility for the LHC experiments
- 2004-2006: EU EGEE project phase 1 – starts from the LCG grid; shared production infrastructure; expanding to other communities and sciences

10 The WLCG Distribution of Resources
Tier-0 (the accelerator centre): data acquisition and initial processing of raw data; distribution of data to the different Tiers.
Tier-1 (11 centres): "online" to the data acquisition process (high availability); managed mass storage (grid-enabled data service); data-heavy analysis; national and regional support.
Tier-1 sites: Canada – TRIUMF (Vancouver); France – IN2P3 (Lyon); Germany – Forschungszentrum Karlsruhe; Italy – CNAF (Bologna); Netherlands – NIKHEF/SARA (Amsterdam); Nordic countries – distributed Tier-1; Spain – PIC (Barcelona); Taiwan – Academia Sinica (Taipei); UK – CLRC (Oxford); US – FermiLab (Illinois), Brookhaven (NY).
Tier-2 (~200 centres in ~40 countries): simulation; end-user analysis, batch and interactive.
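
A minimal sketch of the division of labour just listed, intended only as a reading aid; the dictionary and helper below are illustrative and are not part of any WLCG software.

```python
# Illustrative-only summary of the tier roles listed on the slide; not part
# of any WLCG software. The helper just looks up which tiers mention a task.

TIER_ROLES = {
    "Tier-0": ["data acquisition", "initial processing of raw data",
               "distribution of data to the Tier-1s"],
    "Tier-1": ["online to the data acquisition process (high availability)",
               "managed mass storage (grid-enabled data service)",
               "data-heavy analysis", "national and regional support"],
    "Tier-2": ["simulation", "end-user analysis (batch and interactive)"],
}

def tiers_for(task):
    """Return the tiers whose role list mentions the given keyword."""
    return [tier for tier, roles in TIER_ROLES.items()
            if any(task.lower() in role for role in roles)]

print(tiers_for("analysis"))    # ['Tier-1', 'Tier-2']
print(tiers_for("simulation"))  # ['Tier-2']
```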

11 Centers around the world form a Supercomputer. The EGEE and OSG projects are the basis of the Worldwide LHC Computing Grid Project (WLCG). Inter-operation between Grids is working!

12 Tier-1 Centers: TRIUMF (Canada); GridKA (Germany); IN2P3 (France); CNAF (Italy); SARA/NIKHEF (NL); Nordic Data Grid Facility (NDGF); ASCC (Taipei); RAL (UK); BNL (US); FNAL (US); PIC (Spain). The Grid is now in operation, working on: reliability, scaling up, sustainability.

13 Guaranteed bandwidth can be a good thing

14 LHCOPN Mission
- To assure the T0-T1 transfer capability. Essential for the Grid to distribute data out to the T1s. Capacity must be large enough to deal with most situations, including "catch-up".
- The excess capacity can be used for T1-T1 transfers: lower priority than T0-T1; may not be sufficient for all T1-T1 requirements.
- Resiliency objective: no single failure should cause a T1 to be isolated. Infrastructure can be improved.
- Naturally started as an unprotected "star" – insufficient for a production network, but enabled rapid progress. Has become a reason for, and has leveraged, cross-border fibre. Excellent side effect of the overall approach.
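
The "catch-up" requirement above can be made concrete with a small calculation; the nominal rate, outage length and catch-up window in the sketch below are illustrative assumptions, not figures from the talk.

```python
# Illustrative "catch-up" sizing: if a T1 misses data for `outage_h` hours at
# its nominal T0->T1 rate, how much bandwidth does it need to drain the
# backlog within `window_h` hours while also keeping up with new data?
# All numbers below are assumptions for the sketch, not figures from the talk.

def catchup_rate(nominal_gbps, outage_h, window_h):
    backlog_gbit = nominal_gbps * outage_h * 3600   # gigabits accumulated
    extra_gbps = backlog_gbit / (window_h * 3600)   # extra rate to drain it
    return nominal_gbps + extra_gbps                # total needed during catch-up

# e.g. a T1 nominally taking 2 Gbit/s, down for 24 h, catching up within 48 h:
print(f"{catchup_rate(2.0, 24, 48):.1f} Gbit/s needed during catch-up")  # 3.0
```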

15 LHCOPN Design Information
All technical content is on the LHCOPN Twiki.
Coordination process: LHCOPN meetings (every 3 months).
Active working groups: Routing, Monitoring, Operations.
Active interfaces to external networking activities: European Network Policy Groups, US Research Networking, Grid Deployment Board, LCG Management Board, EGEE.

16 CERN – March 2007

17 CERN External Network Links (diagram): the CERN WAN with connections to the CIXP (SWITCH, COLT, Interoute, Global Crossing, Equinix-TIX, WHO, CITIC74), Tier-2s (TIFR, UniGeneva), Russian Tier-2s via RIPN, USLHCnet (Chicago – NYC – Amsterdam), and the LHCOPN Tier-1s (CA-TRIUMF, ES-PIC, DE-KIT, FR-CCIN2P3, IT-INFN-CNAF, NDGF, NL-T1, TW-ASGC, UK-T1-RAL, US-T1-BNL, US-FNAL-CMS) via GÉANT2, with link capacities ranging from 100 Mbps to 40 Gbps.

18 CERN External Network, E513-E (AS513) – detailed router-level diagram of the CERN border equipment and external peerings: SWITCH (AS559), GÉANT2 (AS20965), the CIXP, the Chicago PoP (StarLight) and New York PoP via USLHCnet, ESnet (AS293), Abilene (AS11537), FNAL (AS3152), Level3 (AS3356), Global Crossing (AS3549), COLT (AS8220), Akamai (AS21357), RIPE RIS (AS12654), the I-root and K-root DNS servers, WHO and CITIC74 networks, the GPRS VPN, and the LHCOPN.

19 Transatlantic Link Negotiations – Yesterday. A major provider lost their shirt on this deal!

20 LHCOPN Architecture – 2004 Starting Point

21 GÉANT2: Consortium of 34 NRENs. Multi-Wavelength Core (to 40λ) + 0.6-10G Loops. Dark Fiber Core Among 16 Countries: Austria, Belgium, Bosnia-Herzegovina, Czech Republic, Denmark, France, Germany, Hungary, Ireland, Italy, Netherlands, Slovakia, Slovenia, Spain, Switzerland, United Kingdom. 22 PoPs, ~200 Sites. 38k km Leased Services, 12k km Dark Fiber. Supporting Light Paths for LHC, eVLBI, et al. (H. Doebbeling)

22

23 Basic Link Layer Monitoring. perfSONAR deployment is well advanced (but not yet complete). It monitors the "up/down" status of the links and is integrated into the "End to End Coordination Unit" (E2ECU) run by DANTE. It provides simple indications of "hard" faults, but is insufficient to understand the quality of the connectivity.
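
In the spirit of the up/down monitoring described above, a minimal sketch of a "hard fault" check is shown below; the status URL and the JSON layout are hypothetical stand-ins for illustration, not the real perfSONAR or E2ECU interfaces.

```python
# Minimal sketch of the kind of "hard fault" (up/down) check described above:
# poll a status feed of LHCOPN links and flag anything that is not "up".
# The URL and the JSON layout are hypothetical stand-ins; the real
# perfSONAR / E2ECU interfaces differ.

import json
import urllib.request

STATUS_URL = "https://example.org/lhcopn/link-status.json"  # hypothetical

def down_links(url=STATUS_URL):
    with urllib.request.urlopen(url, timeout=10) as resp:
        links = json.load(resp)   # e.g. [{"name": "CERN-RAL", "state": "up"}, ...]
    return [link["name"] for link in links if link.get("state") != "up"]

if __name__ == "__main__":
    for name in down_links():
        print(f"ALARM: link {name} is down")
```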

24

25

26 Active Monitoring
Active monitoring is needed, with consistent implementations for accurate results:
- one-way delay
- TCP achievable bandwidth
- ICMP-based round-trip time
- traceroute information for path changes
This is needed for service quality issues. The first mission is T0-T1 and T1-T1; the T1 deployment could also be used for T1-T2 measurements as a second step, with corresponding T2 infrastructure.
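
Two of the listed metrics, ICMP round-trip time and the forward path (for detecting path changes), can be illustrated with the standard ping and traceroute tools, as in the sketch below; the target host is a placeholder, and one-way delay or achievable TCP bandwidth would need clock-synchronised probes and memory-to-memory transfer tools that this sketch does not attempt.

```python
# Sketch of two of the active measurements listed above: ICMP round-trip time
# and the forward path (for spotting path changes between runs). Uses the
# standard ping/traceroute CLIs with common Linux flags; the target hostname
# is a placeholder.

import re
import subprocess

TARGET = "t1.example.org"   # placeholder hostname

def icmp_rtt_ms(host, count=5):
    """Average RTT in ms parsed from ping's summary line, or None."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    match = re.search(r"= [\d.]+/([\d.]+)/", out)   # min/avg/max summary
    return float(match.group(1)) if match else None

def forward_path(host):
    """Hop lines from traceroute; diff successive runs to detect path changes."""
    out = subprocess.run(["traceroute", "-n", host],
                         capture_output=True, text=True).stdout
    return out.splitlines()[1:]   # drop the header line

if __name__ == "__main__":
    print("RTT:", icmp_rtt_ms(TARGET), "ms")
    print("\n".join(forward_path(TARGET)))
```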

27 Background Stats

28 Monitoring Evolution
Long-standing collaboration on the measurement and monitoring technologies through the monitoring working group of the LHCOPN; ESnet and DANTE have been leading the effort.
Proposal for a managed service by DANTE: manage the tools and archives; manage the hardware and O/S; manage the integrity of the information.
Sites have some obligations: on-site operations support; provision of a terminal server; dedicated IP port on the border router; PSTN/ISDN line for out-of-band communication; Gigabit Ethernet switch; GPS antenna; protected power; rack space.

29 Operational Procedures
These have yet to be finalised, but need to deal with change and incident management. Many parties are involved, and they have to agree on the real processes involved. The recent operations workshop made some progress.
Try to avoid, wherever possible, too many "coordination units". All parties agreed we need some centralised information to have a global view of the network and incidents; a further workshop is planned to quantify this. We also need to understand the existing processes used by the T1s.

30 Resiliency Issues
The physical fiber path considerations continue: some lambdas have been re-routed, others still may be.
Layer 3 backup paths for RAL and PIC are still an issue. In the case of RAL, excessive costs seem to be a problem; for PIC, there is still some hope of a CBF between RedIRIS and RENATER.
Overall the situation is quite good with the CBF links, but it can still be improved. Most major "single" failures are protected against.
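
The resiliency objective ("no single failure should isolate a T1") can be checked mechanically against a link list, as in the sketch below; the toy topology is purely illustrative and is not the actual LHCOPN link set.

```python
# Mechanical check of the resiliency objective stated above: removing any
# single link must not disconnect a Tier-1 from the Tier-0. The topology here
# is a small illustrative example, not the real LHCOPN link list.

from collections import defaultdict

LINKS = [                                  # (end A, end B) -- toy example
    ("CERN", "T1-A"), ("CERN", "T1-B"), ("CERN", "T1-C"),
    ("T1-A", "T1-B"),                      # cross-border fibre backup
]

def reachable(links, start="CERN"):
    """Set of nodes reachable from `start` over the given undirected links."""
    graph = defaultdict(set)
    for a, b in links:
        graph[a].add(b)
        graph[b].add(a)
    seen, stack = {start}, [start]
    while stack:
        for nxt in graph[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

t1s = {node for link in LINKS for node in link if node != "CERN"}
for i, failed in enumerate(LINKS):
    survivors = reachable([l for j, l in enumerate(LINKS) if j != i])
    isolated = t1s - survivors
    if isolated:
        print(f"Single failure of {failed} isolates: {sorted(isolated)}")
# In this toy topology only T1-C, which has no backup path, gets isolated.
```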

31 T0-T1 Lambda routing (schematic) – map of the lambda paths from CERN/Geneva through Basel, Zurich, Strasbourg/Kehl, Frankfurt, Stuttgart, Hamburg, Copenhagen, Amsterdam, Paris, Lyon, Barcelona, Madrid, Milan and London, and across the Atlantic (VSNL North, VSNL South, AC-2/Yellow) to Starlight and MAN LAN. T0-T1 lambdas shown: CERN-RAL, CERN-PIC, CERN-IN2P3, CERN-CNAF, CERN-GRIDKA, CERN-NDGF, CERN-SARA, CERN-TRIUMF, CERN-ASGC, USLHCNET NY (AC-2), USLHCNET NY (VSNL N), USLHCNET Chicago (VSNL S). The ASGC path is still uncertain (via SMW-3 or 4?).

32 T1-T1 Lambda routing (schematic) – the same map, showing the T1-T1 lambdas: GRIDKA-CNAF, GRIDKA-IN2P3, GRIDKA-SARA, SARA-NDGF.

33 Some Initial Observations (from the lambda routing maps)
Between CERN and Basel:
- The following lambdas run in the same fibre pair: CERN-GRIDKA, CERN-NDGF, CERN-SARA, CERN-SURFnet-TRIUMF/ASGC (x2), USLHCNET NY (AC-2).
- The following lambdas run in the same (sub-)duct/trench: all of the above, plus CERN-CNAF and USLHCNET NY (VSNL N) [supplier is COLT].
- The following lambda MAY run in the same (sub-)duct/trench as all of the above: USLHCNET Chicago (VSNL S) [awaiting info from Qwest…].
Between Basel and Zurich:
- The following lambdas run in the same trench: CERN-CNAF, GRIDKA-CNAF (T1-T1).
- The following lambda MAY run in the same trench as all of the above: USLHCNET Chicago (VSNL S) [awaiting info from Qwest…].
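
The observations above are essentially a shared-risk analysis: lambdas sharing a fibre pair, duct or trench all fail under one cut. The sketch below simply re-encodes the CERN-Basel part of those observations to show how such groupings can be tabulated; the "MAY share" Qwest lambda is omitted pending information.

```python
# Shared-risk grouping in the spirit of the observations above: lambdas that
# share a fibre pair, duct or trench all fail under a single cut. The mapping
# below re-encodes the CERN-Basel observations from the slide; the "MAY
# share" Qwest lambda is left out pending information.

from collections import defaultdict

SHARED = {  # lambda -> shared resources it traverses between CERN and Basel
    "CERN-GRIDKA":              ["fibre pair", "duct/trench"],
    "CERN-NDGF":                ["fibre pair", "duct/trench"],
    "CERN-SARA":                ["fibre pair", "duct/trench"],
    "CERN-SURFnet-TRIUMF/ASGC": ["fibre pair", "duct/trench"],
    "USLHCNET NY (AC-2)":       ["fibre pair", "duct/trench"],
    "CERN-CNAF":                ["duct/trench"],
    "USLHCNET NY (VSNL N)":     ["duct/trench"],
}

by_resource = defaultdict(list)
for lam, resources in SHARED.items():
    for res in resources:
        by_resource[res].append(lam)

for res, lams in sorted(by_resource.items()):
    print(f"One cut of the shared {res} affects {len(lams)} lambdas: "
          + ", ".join(lams))
```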

34 Closing Remarks
The LHCOPN is an important part of the overall requirements for LHC networking, and it is a (relatively) simple concept: statically allocated 10G paths in Europe, and managed bandwidth on the 10G transatlantic links via USLHCNet.
Multi-domain operations remain to be completely solved. This is a new requirement for the parties involved and a learning process for everyone. Many tools and ideas exist, and the work is now to pull this all together into a robust operational framework.

35 Simple solutions are often the best!

36 CERN – March 2007 LHCOPN Status and Plans David Foster Head, Communications and Networks CERN January 2008 APAN Engineering Session Hawaii