LHC high-level network architecture Erik-Jan Bos Director of Network Services SURFnet, The Netherlands T0/T1 network meeting NIKHEF/SARA, Amsterdam, The Netherlands; April 8, 2005

Contents
– History and mission
– A proposed high-level architecture
– Further steps

History and mission
January 20 & 21, 2005 meeting in Amsterdam, chaired by David Foster:
– Presentations by the experiments
– Presentations by some network organisations
– Conclusion: move from bottom-up to top-down
– Consensus on a small task force for proposing the LHC high-level network architecture
Initial proposed people: Don Petravick, Kors Bos, David Foster, Paolo Moroni, Edoardo Martelli, Roberto Sabatino, Erik-Jan Bos (volunteered to be chair)

First steps to the architecture
Assumptions:
– High-volume data streams
– Continuous data streams
– Keep It Simple
Stay as low in the stack as you can (see the January presentations)

Security considerations
– Important to address security concerns already in the design phase
– The architecture will be kept as protected as possible from external access
– At least in the beginning, access from trusted sources (i.e. LHC prefixes) will not be restricted (see the sketch below)
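
To make the trust model concrete, here is a minimal sketch (not from the original slides) of a prefix-based source check, using Python's standard ipaddress module. The prefixes are documentation addresses, purely illustrative of what an "LHC prefixes" trust list might look like.

```python
import ipaddress

# Hypothetical trusted LHC prefixes (RFC 5737 documentation ranges, illustrative only)
TRUSTED_LHC_PREFIXES = [
    ipaddress.ip_network("192.0.2.0/24"),     # e.g. a T1's CIDR block
    ipaddress.ip_network("198.51.100.0/24"),  # e.g. the T0's block
]

def is_trusted(source_ip: str) -> bool:
    """Return True if the source address falls inside a trusted LHC prefix."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in TRUSTED_LHC_PREFIXES)

print(is_trusted("192.0.2.17"))   # True: inside a trusted prefix, unrestricted
print(is_trusted("203.0.113.9"))  # False: external source, subject to restriction
```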

A proposed high-level architecture (1)
Optical Private Network, consisting of dedicated 10G paths between T0 and each T1, in two flavors:
– "Light path T1"
– "Routed T1"
Special measures for T0-T1 back-up, to be filled in later
T0 preferred interface is 10Gbps Ethernet LAN PHY

A proposed high-level architecture (2) [diagram slide; figure not included in the transcript]

A proposed high-level architecture (3) [diagram slide; figure not included in the transcript]

Light Path definition
Definition: "(i) a point to point circuit based on WDM technology or (ii) a circuit-switched channel between two end points with deterministic behaviour based on TDM technology or (iii) concatenations of (i) and (ii)"
So: a layer 1 connection with Ethernet framing
The document contains examples

Light Path T1
Uses a dedicated light path, at 10G, to the interface at T0
Possible implementation for a European T1:
– 10GE LAN PHY at T0 awaiting the T1
– 10GE LAN PHY at T1 for the connection to T0
– T1 connects to the NRN at 10GE LAN PHY
– NRN connects to GÉANT2 at 10GE LAN PHY or 10G SONET (with GFP-F mapping)
– GÉANT2 connects to T0 at 10GE LAN PHY
CIDR address block of the T1 on this interface

Routed T1
BGP peering established between the T0's router and the T1's router using external BGP (eBGP); a consistency-check sketch follows below
Possible implementation for a non-European T1:
– 10GE LAN PHY at T0 awaiting the T1 (10GE WAN PHY to be discussed with CERN, to avoid an extra box in Geneva)
– Connection to an intercontinental wave from a commercial carrier
– Connected to a router of the NRN on 10GE WAN PHY
– T1 connected to the NRN at 10G
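
As an illustration of the checks behind such a peering, this hypothetical sketch verifies that a proposed session is genuinely external (different AS numbers on the two ends) and that the prefixes the T1 announces fall within its registered CIDR block. The AS numbers, block, and prefixes are placeholders, not actual LHCOPN values.

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class BgpPeering:
    local_as: int          # T0 side's AS number
    peer_as: int           # T1 side's AS number
    registered_block: str  # T1's CIDR block on this interface
    announced: list        # prefixes the T1 proposes to announce

def check_peering(p: BgpPeering) -> list:
    """Return a list of problems with the proposed eBGP session (empty if OK)."""
    problems = []
    if p.local_as == p.peer_as:
        problems.append("same AS on both ends: this would be iBGP, not eBGP")
    block = ipaddress.ip_network(p.registered_block)
    for prefix in p.announced:
        if not ipaddress.ip_network(prefix).subnet_of(block):
            problems.append(f"{prefix} is outside the registered block {block}")
    return problems

# Hypothetical session: all values are illustrative only
session = BgpPeering(local_as=64496, peer_as=64512,
                     registered_block="192.0.2.0/24",
                     announced=["192.0.2.0/25", "192.0.2.128/25"])
print(check_peering(session))  # [] -> session parameters look consistent
```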

What does this mean for you? (1)
T1 will be responsible for organising the physical connectivity from the T1's premises to the T0's computer centre
Party to contact and to get involved: your local NRN (European NREN, ESnet, CANARIE, or ASnet)
European NRENs:
– Will sync with DANTE
– DANTE to connect to T0
– One primary 10G light path per Tier1 and a back-up path

What does this mean for you? (2)
Non-European Tier1s, e.g.:
– Have dedicated bandwidth into CERN, or
– Connect to an open optical exchange in Europe, like NetherLight, CzechLight, NorthernLight or UKLight, and ask DANTE for a 10G light path between the *Light and CERN

Envisioned T0-T1 provisioning

Name of T1   LP/Routed    T0 interface and intervening networks
ASCC         Routed       10GE LAN, ASNet, NetherLight|GÉANT2
BNL          Routed       10G SONET, LHCnet*, ESnet
CNAF         Light Path   10G LAN, GÉANT2, GARR
FNAL         Routed       10G SONET, LHCnet*, ESnet
IN2P3        Light Path   10G LAN, RENATER3
GridKa       Light Path   10G LAN, GÉANT2, X-WiN
SARA         Light Path   10G LAN PHY, GÉANT2, SURFnet6
NorduGrid    Light Path   GÉANT2, NORDUnet, Nordic NRNs ?
PIC          Light Path   10G LAN, GÉANT2, RedIRIS, Catalan Net
RAL          Light Path   10G LAN, GÉANT2, SuperJANET5
TRIUMF       Light Path   CA*net 4, ?

* = CALTECH-CERN transatlantic links

Planning
– Start date for physics traffic is June 2007
– T1s are encouraged to proceed with provisioning well before that date, ideally already within 2005
– Nevertheless, T1s must be ready at full bandwidth no later than Q1 2006, to be in place for the mid SC

"LHC Network Operations", discussion
Distributed Operations:
– Every Tier is responsible for monitoring and assuring the functionality of its own equipment and line(s)
– Parties involved: Tiers, DANTE, NRNs, *Light operators
– Communication infrastructure in place
Centralised Operations:
– LHC Helpdesk and/or NOC
– Ultimately, the LHC NOC does all configuration, troubleshooting, and fixing
Hybrid Operations:
– Central LHC health & volume monitoring capability
– Each Tier or network organisation has responsibility

A word on future growth
Some light path math (theoretical):
– 10 Gbit/s ≈ 10^14 bytes/day, or roughly 100 TByte/day
– Eleven 10G light paths -> more than 1 petabyte/day, or roughly half an exabyte/year (checked in the sketch below)
In case a 10G is not sufficient:
– Order a second 10G between T0 and T1
– Preferably on a separate physical path
– The architecture fully allows for this
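
The slide's figures can be reproduced with a few lines of arithmetic; this is just a back-of-the-envelope check of the stated numbers.

```python
# Back-of-the-envelope check of the light path math on this slide.
SECONDS_PER_DAY = 86_400

bits_per_day = 10e9 * SECONDS_PER_DAY          # one 10 Gbit/s light path
bytes_per_day = bits_per_day / 8
print(f"{bytes_per_day:.2e} bytes/day")        # ~1.08e+14, i.e. ~100 TByte/day

pb_per_day = 11 * bytes_per_day / 1e15         # eleven 10G light paths
print(f"{pb_per_day:.2f} PByte/day")           # ~1.19, i.e. more than 1 PByte/day

eb_per_year = 11 * bytes_per_day * 365 / 1e18
print(f"{eb_per_year:.2f} EByte/year")         # ~0.43, i.e. roughly half an exabyte/year
```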

Items for further discussion
– Agree with T0 about the physical interface for the T0-T1 link
– Verify that the proposed addressing set-up is compatible with the grid software (e.g. can the servers be grouped in the same CIDR block? see the sketch below)
– Inform T0 about the AS number used
– Check if it is possible to establish an environment without a default route
– Verify that the proposed security model is compatible with the Grid applications
– Decide on a backup strategy in case an alternate path at full speed is not available: tolerate a stop of a few hours, or prefer lower performance over general-purpose research backbones
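
For the CIDR grouping question, a small sketch like the following (with hypothetical server addresses from a documentation range) computes the smallest single CIDR block that would cover a given set of servers:

```python
import ipaddress

def smallest_common_block(addresses):
    """Smallest CIDR block containing all given addresses."""
    nets = [ipaddress.ip_network(a) for a in addresses]
    block = nets[0]
    # Widen the block one prefix length at a time until it covers every address
    while not all(n.subnet_of(block) for n in nets):
        block = block.supernet()
    return block

# Illustrative server addresses only (RFC 5737 documentation range)
servers = ["192.0.2.10", "192.0.2.130", "192.0.2.200"]
print(smallest_common_block(servers))  # 192.0.2.0/24 -> groupable in one CIDR block
```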

Next Steps
– Get comments in on version 1.0 of the document
– Together with the results of the discussion, write the final version 2.0
– T1s must start to work with their NRNs
– NRNs must work on dedicated bandwidth with DANTE for GÉANT2 light paths and/or commercial carriers and/or open optical exchange operators

Acknowledgements
Thanks to (alphabetically):
– Kors Bos (NIKHEF)
– Hans Döbbeling (DANTE)
– David Foster (CERN)
– Bill Johnston (ESnet)
– Donna Lamore (FNAL)
– Edoardo Martelli (CERN)
– Paolo Moroni (CERN)
– Don Petravick (FNAL)
– Roberto Sabatino (DANTE)
– Karin Schauerhammer (DFN)
– Klaus Ullmann (DFN)

Thank you
Questions?