LHC Open Network Environment (LHCONE)
Artur Barczyk, California Institute of Technology
LISHEP Workshop on the LHC, Rio de Janeiro, July 9th, 2011

Path to LHCONE
Started with the Workshop on Transatlantic Connectivity for LHC Experiments, June 2010 at CERN
– Held at the same time as changes in the computing models were being discussed in the LHC experiments
The experiments provided a requirements document (Oct 2010)
– LHCOPN was tasked with providing a proposal; the LHCT2S group was formed from within the LHCOPN
LHCT2S meeting in Geneva, January 2011
– Discussion of 4 proposals led to the formation of a small working group drafting an architectural proposal based on these 4 documents
LHCOPN meeting in Lyon, February 2011
– Draft architecture approved, finalised as “v2.2”
LHCONE meeting in Washington, June 2011

LHC Computing Infrastructure
WLCG in brief:
– 1 Tier-0 (CERN)
– 11 Tier-1s on 3 continents
– 164 Tier-2s on 5 (6) continents
– Plus O(300) Tier-3s worldwide

The Current LHCOPN
Dedicated network resources for Tier0 and Tier1 data movement; 130 Gbps total Tier0-Tier1 capacity
Simple architecture
– Point-to-point Layer 2 circuits
– Flexible and scalable topology
Grew organically
– From star to partial mesh
– Open to technology choices, which have to satisfy the requirements
Federated governance model
– Coordination between stakeholders
– No single administrative body required

Moving to New Computing Models
Moving away from the strict MONARC model; 3 recurring themes:
– Flat(ter) hierarchy: any site can use any other site as a source of data
– Dynamic data caching: analysis sites will pull datasets from other sites “on demand”, including from Tier2s in other regions; possibly in combination with strategic pre-placement of data sets
– Remote data access: jobs executing locally, using data cached at a remote site in quasi-real time; possibly in combination with local caching (illustrated in the sketch below)
Expect variations by experiment
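The caching and remote-access themes can be pictured with a small decision sketch: a locally running job checks its cache, then chooses between pulling a dataset on demand or reading it remotely. This is a minimal illustration only; the cache path, size threshold, dataset and URL names are invented and this is not any experiment's actual data-management logic.

```python
from pathlib import Path

# Illustrative sketch of the caching / remote-access themes above. The cache
# location, size threshold and dataset names are invented for this example.

CACHE_DIR = Path("/data/cache")        # hypothetical local cache directory
PULL_THRESHOLD_GB = 50                 # pull small datasets on demand, stream large ones

def access_plan(name: str, size_gb: float, remote_url: str):
    """Decide how a locally running job would obtain a dataset."""
    local_copy = CACHE_DIR / name
    if local_copy.exists():                    # strategic pre-placement or earlier pull
        return ("local", str(local_copy))
    if size_gb <= PULL_THRESHOLD_GB:           # dynamic data caching: pull on demand
        return ("pull-then-read", remote_url)
    return ("remote-read", remote_url)         # remote access in quasi-real time

print(access_plan("AOD_run42", size_gb=200.0,
                  remote_url="root://t2.example.org//store/AOD_run42"))
```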

Ian Bird, CHEP conference, Oct 2010

Why LHCONE?
Next-generation computing models will be more network-intensive
LHC data movements have already started to saturate some main links (e.g. transatlantic GP R&E links)
– Guard against “defensive actions” by GP R&E providers
We cannot simply count on General Purpose Research & Education networks to scale up
– The LHC is currently the power user
– Other science fields are starting to create large data flows as well

Characterization of User Space (diagram by Cees de Laat): this is where the LHC users are

LHCONE: Requirements, Architecture, Services

Requirements Summary (from the LHC experiments)
Bandwidth:
– Ranging from 1 Gbps (Minimal site) to 5-10 Gbps (Nominal) to N x 10 Gbps (Leadership)
– No need for full rate, but several full-rate connections between Leadership sites
– Scalability is important; sites are expected to migrate Minimal → Nominal → Leadership
– Bandwidth growth: Minimal = 2x/yr, Nominal & Leadership = 2x/2yr (see the projection sketch below)
Connectivity:
– Facilitate good connectivity to sites that have so far been (network-wise) under-served
Flexibility:
– Should be able to include or remove sites at any time
Budget considerations:
– Costs have to be understood; the solution needs to be affordable
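A back-of-the-envelope reading of the growth figures above, as a minimal sketch: the doubling periods are the ones stated on the slide, while the starting bandwidths are example values I have assumed (Leadership taken as 4 x 10 Gbps).

```python
# Projection of site bandwidth needs from the growth rates quoted above.
# Starting values are assumed examples; doubling periods are as stated:
# 2x per year for Minimal, 2x per 2 years for Nominal and Leadership.

BASE_GBPS = {"Minimal": 1, "Nominal": 10, "Leadership": 40}
DOUBLING_PERIOD_YEARS = {"Minimal": 1, "Nominal": 2, "Leadership": 2}

def projected_gbps(tier: str, years: float) -> float:
    """Projected bandwidth after `years` of exponential growth."""
    return BASE_GBPS[tier] * 2 ** (years / DOUBLING_PERIOD_YEARS[tier])

for tier in BASE_GBPS:
    print(tier, [round(projected_gbps(tier, y), 1) for y in range(0, 5)])
```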

Some Design Considerations
So far, T1-T2, T2-T2, and T3 data movements have mostly used General Purpose Network infrastructure
– Shared resources (with other science fields)
– Mostly best-effort service
Increased reliance on network performance → need more than best effort
Separate large LHC data flows from the routed GPN
Collaboration on a global scale, diverse environment, many parties
– Solution to be Open, Neutral and Diverse
– Agility and expandability: scalable in bandwidth, extent and scope
Allow the most cost-effective solution to be chosen
Organic activity, growing over time according to needs
[Diagram: GPN, dedicated multipoint, and dedicated point-to-point services plotted against cost and performance]

LHCONE Architecture
Builds on the hybrid network infrastructures and Open Exchanges to provide a global unified service platform for the LHC community
LHCONE’s architecture incorporates the following building blocks:
– Single-node exchange points
– Continental / regional distributed exchanges
– Interconnect circuits between exchange points, likely by allocated bandwidth on various (possibly shared) links, to form LHCONE
The access method to LHCONE is chosen by the end site; alternatives may include
– Dynamic circuits
– Fixed lightpaths
– Connectivity at Layer 3, where/as appropriate
We envisage that many of the Tier-1/2/3s may connect to LHCONE through aggregation networks
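To make the building blocks concrete, here is a purely illustrative Python model of the topology described above (exchange points, distributed exchanges, interconnect circuits, and end sites reaching LHCONE via aggregation networks). The class and field names are mine, not LHCONE terminology, and the example instances are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Purely illustrative model of the building blocks listed above; the class and
# field names are not LHCONE terminology, just a way to picture the topology.

@dataclass
class ExchangePoint:
    name: str
    distributed: bool = False            # continental/regional distributed exchange?

@dataclass
class InterconnectCircuit:
    a: ExchangePoint
    b: ExchangePoint
    gbps: float                          # allocated bandwidth on a (possibly shared) link

@dataclass
class EndSite:
    name: str
    tier: int
    access: str                          # "dynamic-circuit", "fixed-lightpath" or "layer3"
    aggregation_network: Optional[str] = None   # set if the site connects via an aggregator

# Example instances (site and aggregator names hypothetical).
cernlight = ExchangePoint("CERNLight")
starlight = ExchangePoint("StarLight")
core_link = InterconnectCircuit(cernlight, starlight, gbps=10.0)
tier2 = EndSite("ExampleT2", tier=2, access="fixed-lightpath",
                aggregation_network="RegionalNREN")
print(core_link, tier2, sep="\n")
```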

High-Level Architecture, Pictorial

LHCONE Network Services
Offered to Tier1s, Tier2s and Tier3s
Shared Layer 2 domains: separation from non-LHC traffic
– IPv4 and IPv6 router addresses on shared Layer 2 domain(s)
– Private shared Layer 2 domains for groups of connectors
– Layer 3 routing is between and up to the connectors; a set of Route Servers will be available
Point-to-point Layer 2 connections: per-channel traffic separation
– VLANs without bandwidth guarantees between pairs of connectors
Lightpaths / dynamic circuits with bandwidth guarantees
– Lightpaths can be set up between pairs of connectors
Monitoring: perfSONAR archive (see the sketch below)
– Current and historical bandwidth utilization and availability statistics
This list of services is a starting point and not necessarily exclusive
LHCONE does not preclude continued use of the general R&E network infrastructure by the Tier1s, Tier2s and Tier3s, where appropriate
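As one hedged example of how the monitoring service might be consumed, the sketch below pulls historical throughput samples from a perfSONAR measurement archive over HTTP. The archive URL, query parameters and JSON field names are placeholders for illustration and do not reproduce the real perfSONAR archive API.

```python
import requests

# Hedged sketch: fetch historical throughput results from a perfSONAR
# measurement archive. The URL, query parameters and response fields are
# placeholders; consult the archive's actual REST interface.

ARCHIVE_URL = "https://ps-archive.example.org/api/throughput"   # hypothetical endpoint

def recent_throughput_gbps(src: str, dst: str, hours: int = 24):
    """Return a list of recent throughput samples (in Gbps) between two hosts."""
    resp = requests.get(ARCHIVE_URL,
                        params={"source": src, "destination": dst, "window_hours": hours},
                        timeout=30)
    resp.raise_for_status()
    return [sample["gbps"] for sample in resp.json().get("samples", [])]

if __name__ == "__main__":
    print(recent_throughput_gbps("ps.t2-site.example.net", "ps.t1-site.example.org"))
```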

Dedicated/Shared Resources
The LHCONE concept builds on traffic separation between LHC high-impact flows and non-LHC traffic
– Avoid negative impact on other research traffic
– Enable high-performance LHC data movement
Services are to use resources allocated to LHCONE
The prototype/pilot might use non-dedicated resources, but we need to be careful about evaluation metrics

The Case for Dynamic Circuits in LHC Data Processing
Data models do not require full rate at all times
On-demand data movement will augment and partially replace static pre-placement
→ Network utilisation will be more dynamic and less predictable
Performance expectations will not decrease
– More dependence on the network for the whole data processing system to work well!
Need to move large data sets fast between computing sites
– On-demand: caching
– Scheduled: pre-placement
– Transfer latency is important (see the worked example below)
Network traffic in excess of what was anticipated
As data volumes grow rapidly and experiments rely increasingly on network performance, what will be needed in the future is
– More bandwidth
– More efficient use of network resources
– A systems approach including end-site resources and software stacks
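A quick worked example of why transfer latency matters for on-demand movement of large data sets: the dataset size, link rates and protocol efficiency below are illustrative values I have chosen, not LHC requirements.

```python
# Worked example for the "transfer latency is important" point: how long does
# it take to move a dataset at different effective rates? The dataset size,
# rates and protocol efficiency are illustrative values, not LHC requirements.

def transfer_hours(dataset_tb: float, rate_gbps: float, efficiency: float = 0.8) -> float:
    """Hours needed to move `dataset_tb` TB at `rate_gbps`, given a protocol efficiency."""
    bits_to_move = dataset_tb * 8e12                 # 1 TB = 8e12 bits
    effective_bps = rate_gbps * 1e9 * efficiency
    return bits_to_move / effective_bps / 3600.0

for rate in (1, 10, 40):
    print(f"{rate:>2} Gbps: {transfer_hours(100, rate):5.1f} h to move a 100 TB dataset")
```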

David Foster, 1st TERENA ASPIRE workshop, May 2011

Dynamic Bandwidth Allocation
Will be one of the services provided in LHCONE
Allows network capacity to be allocated on an as-needed basis
– Instantaneous (“bandwidth on demand”), or
– Scheduled allocation (see the sketch below)
A dynamic circuit service is present in several networks
– Internet2, ESnet, SURFnet, US LHCNet
Planned (or in experimental deployment) in others
– E.g. GÉANT, RNP, …
DYNES: NSF-funded project to extend hybrid & dynamic network capabilities to campus & regional networks
– In first deployment phase; fully operational in
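To illustrate the "scheduled allocation" mode from a client's point of view, here is a hedged sketch of what a circuit-reservation request could look like. The CircuitClient class, its reserve() method and the controller URL are hypothetical stand-ins for a real provisioning interface (such as the one behind the ION service); they only illustrate the workflow, not an actual API.

```python
from datetime import datetime, timedelta

# Hedged sketch of a scheduled bandwidth reservation between two LHCONE
# endpoints. CircuitClient and reserve() are hypothetical stand-ins for a
# real provisioning API; they only illustrate the request/response workflow.

class CircuitClient:
    def __init__(self, controller_url: str):
        self.controller_url = controller_url     # hypothetical inter-domain controller

    def reserve(self, src: str, dst: str, gbps: float,
                start: datetime, end: datetime) -> str:
        # A real client would submit the request to the controller and return
        # the reservation ID it assigns; here we just build a descriptive token.
        return f"reservation:{src}->{dst}:{gbps}G:{start:%Y-%m-%dT%H:%M}/{end:%H:%M}"

client = CircuitClient("https://idc.example.net")        # placeholder URL
start = datetime.utcnow() + timedelta(hours=1)
ticket = client.reserve("t2-site-a", "t1-site-b", gbps=5.0,
                        start=start, end=start + timedelta(hours=6))
print("submitted", ticket)
```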

DYNES Deployment Topology

LHCONE + DYNES
DYNES participants can dynamically connect to exchange points via the ION service
– Dynamic circuits through and beyond the exchange point?
– Static tail?
– Hybrid dynamic circuit and IP routed segment model?

LHCONE Pilot Implementation
To include a number of sites identified by the CMS and ATLAS experiments
It is expected that LHCONE will grow organically from this implementation
Currently operational: multipoint service using
– 4 Open Exchange Points: CERNLight, NetherLight, MANLAN and StarLight
– Dedicated core capacity: SURFnet, US LHCNet
– A route server at CERN
The architecture working group is finalizing the inter-domain connectivity design
– GÉANT + 4 NRENs
– Internet2, ESnet
– Other Open Exchanges
– Connections to South America and Asia

Summary
LHCONE will provide dedicated network connectivity for the LHC computing sites
– Built on the infrastructure provided by the R&E networks
– A collaborative effort between the experiments, CERN, the networks and the sites
Will provide 4 services:
– Static point-to-point
– Dynamic point-to-point
– Multipoint
– Monitoring
The pilot is currently being implemented
LHCONE will grow organically according to requirements and funding

THANK YOU!

EXTRA SLIDES

LHCONE Policy Summary
LHCONE policy will be defined and may evolve over time in accordance with the governance model
Policy recommended for LHCONE governance:
– Any Tier1/2/3 can connect to LHCONE
– Within LHCONE, transit is provided to anyone in the Tier1/2/3 community that is part of the LHCONE environment
– Exchange points must carry all LHC traffic offered to them (and only LHC traffic), and be built in carrier-neutral facilities so that any connector can connect with its own fiber or using circuits provided by any telecom provider
– Distributed exchange points: same as above, plus the interconnecting circuits must carry all the LHC traffic offered to them
– No additional restrictions can be imposed on LHCONE by the LHCONE component contributors
The policy applies to LHCONE components, which might be switches installed at the Open Exchange Points, virtual switch instances, and/or (virtual) circuits interconnecting them

LHCONE Governance Summary
Governance is proposed to be similar to that of the LHCOPN, since, like the LHCOPN, LHCONE is a community effort
– All the stakeholders meet regularly to review the operational status, propose new services and support models, tackle issues, and design, agree on, and implement improvements
Includes connectors, exchange point operators, CERN, and the experiments; 4 working groups:
– Governance, Architecture, Operations, Stakeholders
Defines the policies of LHCONE and the requirements for participation
– It does not govern the individual participants
Is responsible for defining how costs are shared
Is responsible for defining how the resources of LHCONE are allocated