NorduNet 2008: LHCOPN Present and Future. David Foster, Head, Communications and Networks, CERN. NorduNet Meeting, April 2008.

David Foster, CERN

CERN Accelerator Complex

David Foster, CERN
CERN – March 2007
26,659 m in circumference (but varies with the moon!)
Superconducting magnets cooled to -193 °C (80 K), -269 °C (4.5 K) and -271 °C (1.9 K), using some 10,000 tonnes of liquid nitrogen and 120 tonnes of liquid helium.
600 million proton collisions per second.
The internal pressure of the LHC is about 10^-13 atm, ten times less than the pressure on the Moon.

David Foster, CERN What is the Higgs? CERN – March 2007

David Foster, CERN (slide credit: A. Farbin)

David Foster, CERN NorduNet Meeting April 2008
The Beginning...
Essential for Grid functioning to distribute data out to the T1s.
– Capacity must be large enough to deal with most situations, including catch-up.
OPN conceived in 2004 as a Community Network.
– Renamed Optical Private Network as a more descriptive name.
– Based on 10G as the best choice for affordable, adequate connectivity. 10G is (almost) commodity now! Considered by some as too conservative: you can fill a 10G pipe with just (a few) PCs!
Simple end-to-end model.
– This is not a research project, but an evolving production network relying on emerging facilities.

David Foster, CERN NorduNet Meeting April 2008
Hybrid Networking Model
Infrastructure is provided by a number of initiatives:
– GÉANT2
– Commercial links
– Coordinated infrastructures (USLHCNet, GLIF)
– NRENs and research networks (ESnet, Internet2, CANARIE, etc.)
Managed by the community:
– Closed club of participants
– Routers at the end points
– Federated operational model

David Foster, CERN CERN – March 2007

David Foster, CERN NorduNet Meeting April 2008

David Foster, CERN Traffic Statistics

David Foster, CERN NorduNet Meeting April 2008
Current Situation
T0-T1 network is operational and stable.
– But: "The first principle is that you must not fool yourself, and you're the easiest person to fool." (Richard Feynman)
Several areas of weakness:
– Physical path routing
– IP backup
– Operational support
– Monitoring

David Foster, CERN NorduNet Meeting April 2008
Physical Paths
DANTE analysed the physical path routing for the OPN links. The network had been built up over time, taking in each case the most direct (and cheapest!) wavelength path in the GÉANT network.

David Foster, CERN
T0-T1 Lambda routing (schematic)
[Schematic map of the T0-T1 lambdas across GÉANT2, NREN and USLHCNET infrastructure, routed via Geneva, Basel, Zurich, Milan, Lyon, Paris, Strasbourg/Kehl, Frankfurt, Stuttgart, Hamburg, Copenhagen, Amsterdam, London, Madrid and Barcelona, with the transatlantic AC-2/Yellow and VSNL North/South cables to MAN LAN and Starlight.]
T0-T1 links shown: CERN-RAL, CERN-PIC, CERN-IN2P3, CERN-CNAF, CERN-GRIDKA, CERN-NDGF, CERN-SARA, CERN-TRIUMF, CERN-ASGC, USLHCNET NY (AC-2), USLHCNET NY (VSNL N), USLHCNET Chicago (VSNL S). ASGC reached via SMW-3 or 4 (?).

David Foster, CERN
T1-T1 Lambda routing (schematic)
[Schematic map, same topology as the previous slide.]
T1-T1 links shown: GRIDKA-CNAF, GRIDKA-IN2P3, GRIDKA-SARA, SARA-NDGF.

David Foster, CERN
Some Initial Observations
Between CERN and Basel:
– The following lambdas run in the same fibre pair: CERN-GRIDKA, CERN-NDGF, CERN-SARA, CERN-SURFnet-TRIUMF/ASGC (x2), USLHCNET NY (AC-2).
– The following lambdas run in the same (sub-)duct/trench (all of the above, plus): CERN-CNAF, USLHCNET NY (VSNL N) [supplier is COLT].
– The following lambda MAY run in the same (sub-)duct/trench as all of the above: USLHCNET Chicago (VSNL S) [awaiting info from Qwest].
Between Basel and Zurich:
– The following lambdas run in the same trench: CERN-CNAF, GRIDKA-CNAF (T1-T1).
– The following lambda MAY run in the same trench as all of the above: USLHCNET Chicago (VSNL S) [awaiting info from Qwest].
(Map legend: GÉANT2, NREN, USLHCNET, via SURFnet, T1-T1 cross-border fibre.)

David Foster, CERN NorduNet Meeting April 2008
Physical Path Routing
Analysis showed many common physical paths of fibres and wavelengths.
Re-routing of some wavelengths has been done.
– A more costly solution (more intervening equipment), especially on the path from Amsterdam to CERN, which carries 5x10G.
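The kind of shared-risk analysis described on this slide can be sketched in a few lines of Python: given a mapping of each lambda to the physical segments it traverses, group the circuits by common segment. The circuit and segment names below are illustrative placeholders, not DANTE's actual fibre inventory.

    from collections import defaultdict

    # Hypothetical input: each OPN lambda mapped to the physical segments
    # (fibre pairs / ducts) it traverses. Names are illustrative only.
    lambda_paths = {
        "CERN-GRIDKA":     ["CERN-Basel", "Basel-Frankfurt"],
        "CERN-SARA":       ["CERN-Basel", "Basel-Frankfurt", "Frankfurt-Amsterdam"],
        "CERN-CNAF":       ["CERN-Basel", "Basel-Zurich", "Zurich-Milan"],
        "USLHCNET-NY-AC2": ["CERN-Basel", "Basel-Frankfurt", "Frankfurt-London"],
    }

    def shared_risk_groups(paths):
        """Group circuits by the physical segment they have in common."""
        by_segment = defaultdict(set)
        for circuit, segments in paths.items():
            for seg in segments:
                by_segment[seg].add(circuit)
        # Keep only segments shared by more than one circuit: a single cut
        # on such a segment takes down every circuit listed with it.
        return {seg: sorted(circuits)
                for seg, circuits in by_segment.items()
                if len(circuits) > 1}

    if __name__ == "__main__":
        for segment, circuits in shared_risk_groups(lambda_paths).items():
            print(f"{segment}: {', '.join(circuits)}")

Running this on the real path data would reproduce the observations of the previous slide, e.g. that everything leaving CERN towards Basel shares one fibre pair.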

David Foster, CERN NorduNet Meeting April 2008
IP Backup
In case of failures, degraded service may be expected.
– This is not yet quantified on a per-failure basis.
The IP configuration needs to be validated.
– Some failures have indeed produced successful failover.
– Tests are planned for this month (9th April).
Some sites still have no physical backup paths.
– PIC (difficult) and RAL (some possibilities).
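A minimal sketch, assuming a Unix-like host with the traceroute utility installed, of how a site could sanity-check that a backup path really takes over: record the forwarding path before and after a primary-link intervention and compare. The destination name is a placeholder, not a real T1 address.

    import subprocess

    def current_path(destination, timeout=60):
        """Return the list of hop addresses reported by the system traceroute."""
        out = subprocess.run(["traceroute", "-n", destination],
                             capture_output=True, text=True, timeout=timeout)
        hops = []
        for line in out.stdout.splitlines()[1:]:   # skip the header line
            fields = line.split()
            if len(fields) > 1:
                hops.append(fields[1])             # second field is the hop IP (or '*')
        return hops

    # Usage sketch: capture the path before the scheduled primary-link test,
    # capture it again during the test, and confirm traffic actually moved.
    # before = current_path("t1-gateway.example.org")   # hypothetical T1 address
    # during = current_path("t1-gateway.example.org")
    # print("Failover observed:", before != during)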

David Foster, CERN NorduNet Meeting April 2008
Operational Support
EGEE-SA2 is providing the lead on the operational model.
– Much initial disagreement on approach; now starting to converge.
The last OPN meeting concentrated on the different points of view:
– The network manager view
– The user view (readiness expectations)
– The distributed view (E2ECU, IPCU, GGUS, etc.)
– The grass-roots view (site engineers)
– The centralised view (DANTE)
All documentation is available on the Twiki. Much work remains to be done.

David Foster, CERN NorduNet Meeting April 2008
Operational Model
Need to identify the major operational components and formalise their interactions, including:
– Information repositories: GGUS, TTS, Twiki, perfSONAR, etc.
– Actors: site network support, ENOC, E2ECU, USLHCNet, grid operations.
– Processes: who is responsible for which information? How does communication take place (actor to repository, actor to actor)? For what purpose does communication take place (resolving identified issues, authorising changes and developments)?
A minimal design is needed to deal with the major issues:
– Incident management (including scheduled interventions)
– Problem management
– Change management
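As an illustration only, one way to formalise the actor/process/repository interactions listed above is to make every hand-over explicit in the ticket record itself, so responsibility for a piece of information is always visible. The field names below are invented for this sketch and are not the GGUS or TTS schema.

    from dataclasses import dataclass, field
    from enum import Enum, auto

    class Process(Enum):
        INCIDENT = auto()   # unplanned outages and scheduled interventions
        PROBLEM  = auto()   # recurring or unresolved underlying causes
        CHANGE   = auto()   # authorised modifications to links or routing

    @dataclass
    class Ticket:
        """Minimal record of the kind a shared repository might hold."""
        link: str                 # e.g. "CERN-GRIDKA"
        process: Process
        owner: str                # responsible actor: site NOC, ENOC, E2ECU, ...
        history: list = field(default_factory=list)

        def hand_over(self, new_owner: str, note: str):
            # Every hand-over is recorded, so responsibility stays explicit
            # and can be traced after the fact.
            self.history.append((self.owner, new_owner, note))
            self.owner = new_owner

    t = Ticket(link="CERN-GRIDKA", process=Process.INCIDENT, owner="Site NOC")
    t.hand_over("E2ECU", "fault localised to GEANT2 segment")
    print(t.owner, t.history)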

David Foster, CERN NorduNet Meeting April 2008
In Practical Terms... (provided by Dan Nae, as a site manager's view)
– An end-to-end monitoring system that can reliably pinpoint where most of the problems are.
– An effective way to integrate that monitoring system into the local procedures of the various NOCs, to help them take action.
– A centralised ticketing system to keep track of all the problems.
– A way to extract performance numbers from the centralised information (easy).
– Clear dissemination channels to announce problems, maintenance, changes, important data transfers, etc.
– Someone to take care of all the above.
– A data repository engineers can use, and a set of procedures that help solve the hard problems faster (detailed circuit data, ticket history, known problems and solutions).
– A group of people (data and network managers) who can evaluate the performance of the LHCOPN based on experience and the gathered numbers, and who can set goals (target SLAs for the next set of tenders, responsiveness, better dissemination channels, etc.).
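One of the "easy" items above, extracting performance numbers from the centralised information, could look like the following sketch: per-link availability computed from outage records. The records shown are invented examples; real tickets would carry much more detail.

    from datetime import datetime, timedelta

    # Hypothetical outage records extracted from a centralised ticket system:
    # (link, outage start, outage end).
    outages = [
        ("CERN-GRIDKA", datetime(2008, 3, 2, 10, 0), datetime(2008, 3, 2, 14, 30)),
        ("CERN-SARA",   datetime(2008, 3, 9,  1, 0), datetime(2008, 3, 9,  3, 0)),
    ]

    def availability(link, outages, period_start, period_end):
        """Fraction of the reporting period during which the link was up."""
        period = period_end - period_start
        down = sum(((min(end, period_end) - max(start, period_start))
                    for l, start, end in outages
                    if l == link and end > period_start and start < period_end),
                   timedelta())
        return 1.0 - down / period

    start, end = datetime(2008, 3, 1), datetime(2008, 4, 1)
    print(f"CERN-GRIDKA availability: {availability('CERN-GRIDKA', outages, start, end):.4%}")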

David Foster, CERN NorduNet Meeting April 2008
Basic Link Layer Monitoring
perfSONAR deployment is well advanced (but not yet complete).
– Monitors the up/down status of the links.
– Integrated into the End-to-End Coordination Unit (E2ECU) run by DANTE.
– Provides simple indications of hard faults, but is insufficient to understand the quality of the connectivity.
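For comparison, a bare-bones up/down poller along these lines can be written with nothing more than the system ping. This only approximates what the perfSONAR E2E status monitoring feeding the E2ECU provides; Linux ping options are assumed and the link addresses are placeholders.

    import subprocess, time

    # Hypothetical far-end addresses for each OPN link (TEST-NET-1 placeholders).
    links = {
        "CERN-GRIDKA": "192.0.2.10",
        "CERN-SARA":   "192.0.2.20",
    }

    def is_reachable(address):
        """True if a single ICMP echo gets a reply (Linux ping syntax assumed)."""
        result = subprocess.run(["ping", "-c", "1", "-W", "2", address],
                                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return result.returncode == 0

    while True:
        for name, addr in links.items():
            status = "UP" if is_reachable(addr) else "DOWN"
            print(f"{time.strftime('%Y-%m-%d %H:%M:%S')}  {name:<12} {status}")
        time.sleep(60)   # poll once a minute; a hard fault shows up as DOWN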

David Foster, CERN

NorduNet Meeting April 2008
Monitoring
Coherent (active) monitoring is an essential feature for understanding how well the service is running.
– Many activities around perfSONAR are under way in Europe and the US.
The initial proposal by DANTE to provide an appliance is now largely accepted.
– A packaged, coherent, maintained installation of tools to collect information on the network activity.
– Caveat: the service is only guaranteed to the end of GN2 (March 2009), with the intention to continue in GN3.

David Foster, CERN NorduNet Meeting April 2008
Initial Useful Metrics and Tools (from Eric Boyd, Internet2, Mar-3-08)
Network path characteristics:
– Round-trip time (perfSONAR PingER)
– Routers along the path (traceroute)
– Path utilisation/capacity (perfSONAR SNMP-MA)
– One-way delay and delay variance (perfSONAR owamp)
– One-way packet drop rate (perfSONAR owamp)
– Packet reordering (perfSONAR owamp)
– Achievable throughput (perfSONAR bwctl)
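As a concrete example of the first metric, round-trip time can be sampled with the system ping and its summary line parsed; perfSONAR PingER collects the same kind of measurement in a managed, archived way. The Linux ping output format and the destination host are assumptions of this sketch.

    import re, subprocess

    def round_trip_stats(destination, count=10):
        """Round-trip time statistics from the system ping (Linux output assumed)."""
        out = subprocess.run(["ping", "-c", str(count), destination],
                             capture_output=True, text=True).stdout
        # Linux summary line: "rtt min/avg/max/mdev = 0.045/0.061/0.101/0.016 ms"
        match = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", out)
        if match is None:
            return None
        rtt_min, rtt_avg, rtt_max, rtt_mdev = map(float, match.groups())
        return {"min_ms": rtt_min, "avg_ms": rtt_avg,
                "max_ms": rtt_max, "jitter_ms": rtt_mdev}

    # print(round_trip_stats("t1-gateway.example.org"))   # hypothetical host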

David Foster, CERN NorduNet Meeting April 2008
Issues, Risks, Mitigation
The OPN is fundamental to getting the data from CERN to the T1s. It is a complex multi-domain network relying on infrastructure provided by:
– (links) NRENs, DANTE and commercial providers
– (IP) the T1s and CERN
– (operations) the T1s, CERN, EGEE and USLHCNet
Developing a robust operational model is a major ongoing piece of work.
– Define responsibilities; avoid finger-pointing loops.
– Need to separate design from implementation.
– Need to combine innovation and operation.
– Be robust, but not too conservative.

David Foster, CERN NorduNet Meeting April 2008
HEP Bandwidth Roadmap for Major Links (in Gbps): US LHCNet example, paralleled by the ESnet roadmap for data-intensive sciences (Harvey Newman).

David Foster, CERN NorduNet Meeting April 2008
Science Lives in an Evolving World
New competition for the last mile is giving a critical mass of people access to high-performance networking.
– But asymmetry may become a problem.
New major investments in high-capacity backbones.
– Commercial and dot-com investments.
– Improving end-to-end performance.
New major investments in data centres.
– Networks of data centres are emerging (a specialised grid!).
– Cloud computing leverages networks and economies of scale: it is easier (and cheaper) to move a bit than a watt.
This creates a paradigm change at the user-service level, and new business models are emerging.
– Multimedia services are a major driver (YouTube, IPTV, etc.).
– Social networking (virtual-world services, etc.).
– Virtualisation to deliver software services.
– Transformation of software from a product to a service.

David Foster, CERN NorduNet Meeting April 2008
The Business of Science is Evolving
For the first time in high-energy particle physics, the network is an integral part of the computing system.
– This means that the community will take advantage of emerging opportunities: T0-T1, T1-T1, T1-T2.
Scientific users will need increasing access to a broad range of competitive network services that enable them to move forward with new ideas.
– Low barriers to entry (cost and complexity).
– Moving towards globalisation of innovation.
Europe will need to continue to evolve infrastructures and create new innovative services that attract new users.
– New services rely on an advanced network fabric, and this is certainly most needed in areas where there is still a Digital Divide.

David Foster, CERN NorduNet Meeting April 2008
Simple solutions are often the best!