LHC Open Network Project status and site involvement


LHC Open Network Project status and site involvement
Ramiro Voicu (Caltech/USLHCNet)
ALICE T1/T2 workshop, Lyon, June 5th, 2013

Outline
- Background
- LHCONE overview
- L3VPN service, VRF
- Connection recommendations
- R&D activities
- Summary

Background: LHCOPN
- The LHC Optical Private Network (LHCOPN) was designed in the 2004/2005 timeframe
- Built around the hierarchical MONARC model (late 1990s – early 2000s): every T2 associated with a T1 [*]
- Evolved from a star into a partial mesh
- Dedicated network resources for T0–T1 and T1–T1 data movement
- Federated governance (no single administrative body)
[*] ALICE adopted a different model

LHCOPN topology (diagram: Edoardo Martelli, CERN)

Background
- Transatlantic connectivity workshop at CERN in 2010
- Moving from hierarchical pre-placement models towards a flatter, peer-to-peer model: greater availability and locality
- Suits the ALICE data access model
(Diagram: hierarchical Tier 0 / Tier 1 / Tier 2 model vs. the flatter peer-to-peer Tier 0 / Tier 1 / Tier 2 model)

LHCONE Introduction
LHCONE was born to address two main issues:
- Ensure that the services to the science community maintain their quality and reliability
  - "The network infrastructure is the most reliable service we have" (Ian Bird, WLCG project leader)
- Protect existing R&E infrastructures against very large LHC data flows
  - Accomplished through distinct infrastructure and/or traffic engineering
LHCONE is expected to provide some guarantees of performance:
- Large data flows across managed bandwidth, providing better determinism than shared IP networks
- Segregation from competing traffic flows
- Manage capacity as (# sites) x (max flow/site) x (# flows) increases
...and to provide ways for better utilization of resources:
- Use all available resources
- Provide traffic engineering and flow-management capability
- Leverage investments being made in advanced networking

LHCONE Overview (map of open exchange points; legend: distributed exchange point, single-node exchange point)

LHCONE Overview
Current activities are split into several areas:
- Multipoint connectivity through L3VPN (production)
- Point-to-point dynamic circuits (R&D), targeting a demonstration this year
- Common to both is logical separation of LHC traffic from the General Purpose Network (GPN)
  - Avoids interference effects
  - Allows trusted connections and firewall bypass
- Further R&D in SDN/OpenFlow for LHC traffic, for tasks which cannot be done with traditional methods

Routed L3VPN Service (VRF)
- Based on Virtual Routing and Forwarding (VRF)
- BGP peerings between the VRF domains
- Currently serving 44 LHC computing sites

Routed L3VPN Service (VRF), cont.
Current logical connectivity diagram (Mian Usman, DANTE)

Inter-domain connectivity
- Many of the inter-domain peerings are established at Open Lightpath Exchanges
- Any R&E network or end site can peer with the LHCONE domains at any of the exchange points (or directly)

LHCONE L3VPN for Service Providers
- See information on the wiki: https://twiki.cern.ch/twiki/bin/view/LHCONE/LhcOneVRF
- List of BGP communities is on the wiki
- Implementation recommendations (illustrated in the sketch below):
  - Do not filter by prefix length; any prefix length must be accepted
  - Apply maximum-prefix checks: ad-hoc limits for TierXs, 3000 between VRFs
  - Do not allow private AS numbers
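As a rough illustration, the sketch below checks one received BGP announcement against these recommendations. The function names, input format and the per-TierX limit are assumptions made for the example, not part of the LHCONE specification.

```python
import ipaddress

# Hypothetical check of the implementation recommendations above. Route data
# (AS path, prefixes, peer type) would come from the VRF router; the names
# and thresholds here are assumptions for illustration only.

PRIVATE_ASN_RANGES = [(64512, 65534), (4200000000, 4294967294)]
MAX_PREFIXES_VRF_PEER = 3000   # recommended maximum-prefix limit between VRFs

def is_private_asn(asn):
    return any(low <= asn <= high for low, high in PRIVATE_ASN_RANGES)

def check_announcement(as_path, prefixes, peer_is_vrf, tierx_limit=500):
    """Return a list of policy violations for one BGP announcement."""
    problems = []
    # Recommendation: do not allow private AS numbers in the path.
    if any(is_private_asn(asn) for asn in as_path):
        problems.append("private ASN in AS path")
    # Recommendation: apply maximum-prefix checks (3000 between VRFs,
    # ad-hoc per-TierX limits; 500 is an invented example value).
    limit = MAX_PREFIXES_VRF_PEER if peer_is_vrf else tierx_limit
    if len(prefixes) > limit:
        problems.append(f"more than {limit} prefixes announced")
    # Recommendation: never filter by prefix length, so every well-formed
    # prefix is accepted here regardless of its mask length.
    for p in prefixes:
        ipaddress.ip_network(p)  # raises only if the prefix is malformed
    return problems

print(check_announcement([513, 65001], ["192.0.2.0/28"], peer_is_vrf=False))
# -> ['private ASN in AS path']
```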

Computing Site Connection HOWTO
- Wiki instructions: https://twiki.cern.ch/twiki/bin/view/LHCONE/LhcOneHowToConnect
- Any LHC computing site can connect to the LHCONE infrastructure
- Connect to one of the VRF domains (see the wiki for the list of active VRF domains), either directly or at an exchange point
- A computing site (TierX) needs to have:
  - public IP addresses
  - a public AS number
  - a BGP-capable router
- LHCONE transports LHC data traffic only: announce only LCG IP prefixes
- Traffic must be symmetric, to avoid problems with stateful firewalls

Connection recommendations
Recommendations to address asymmetric routing (see the sketch below):
- Define the local LAN address ranges that will participate in LHCONE, and advertise these prefixes to LHCONE using BGP
- Agree to accept all BGP route prefixes advertised by LHCONE
- Ensure that only hosts in your locally defined LHCONE ranges can forward packets into the LHCONE network
- Ensure that LHCONE paths are preferred over general R&E IP paths
- End sites should avoid static configuration of packet filters, BGP prefix lists and policy-based routing, where possible
More details: https://indico.cern.ch/getFile.py/access?contribId=8&resId=1&materialId=slides&confId=179710 (Michael O'Connor, ESnet)
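A minimal sketch of the symmetry rule, assuming invented address ranges: a flow is steered onto LHCONE only if its local endpoint is inside the site's declared LHCONE ranges and the remote endpoint is reachable via a prefix learned from LHCONE, so both directions stay on the same infrastructure.

```python
import ipaddress

# Illustrative values only; a real site would use its own declared ranges
# and the prefixes actually accepted from its LHCONE peering.
LOCAL_LHCONE_RANGES = [ipaddress.ip_network("192.0.2.0/24")]
LHCONE_LEARNED_PREFIXES = [ipaddress.ip_network("198.51.100.0/24")]

def use_lhcone_path(local_ip, remote_ip):
    local = ipaddress.ip_address(local_ip)
    remote = ipaddress.ip_address(remote_ip)
    local_ok = any(local in net for net in LOCAL_LHCONE_RANGES)
    remote_ok = any(remote in net for net in LHCONE_LEARNED_PREFIXES)
    # Only when both ends qualify does the flow use LHCONE, keeping the
    # path symmetric and safe for stateful firewalls.
    return local_ok and remote_ok

print(use_lhcone_path("192.0.2.10", "198.51.100.20"))   # True  -> LHCONE path
print(use_lhcone_path("203.0.113.5", "198.51.100.20"))  # False -> general R&E IP path
```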

LHCONE Operational Aspects
- The operational model is broadly based on the LHCOPN experience
- Basic principle: a site experiencing problems contacts its first LHCONE upstream provider
- Still in the process of being worked out
- Current operational handbook draft: https://twiki.cern.ch/twiki/pub/LHCONE/LhcOneVRF/LHCONE_VRF_Operational_Handbook-v0.5.pptx

Point-to-point service: path to a demonstration system

Dynamic Point-to-Point Service
- The point-to-point circuit service complements the L3VPN service and provides predictable performance
  - Guaranteed bandwidth between a pair of end points (VRF domains and/or end sites)
  - No jitter and predictable delay
- Several provisioning systems developed by the R&E community: OSCARS (ESnet), OpenDRAC (SURFnet), G-Lambda-A (AIST), G-Lambda-K (KDDI), AutoBAHN (GEANT)
- Inter-domain operation needs an accepted standard
  - OGF NSI: the Network Services Interface standard
  - Connection Service (NSI CS): v1 'done' and demonstrated, e.g. at GLIF and SC'12; v2 currently being standardized (see the sketch below)
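The sketch below only illustrates the NSI CS reservation life cycle (reserve, commit, provision, terminate). The NsiClient class, its methods and the STP names are hypothetical placeholders; the real protocol is a SOAP/web-service interface between requester and provider agents.

```python
# Hypothetical requester-side sketch of the NSI Connection Service flow.
# Nothing here is a real NSI library; it only mirrors the message sequence.

class NsiClient:
    def __init__(self, provider_url):
        self.provider_url = provider_url  # provider agent receiving the requests

    def reserve(self, src_stp, dst_stp, bandwidth_mbps, start, end):
        print(f"reserve {bandwidth_mbps} Mb/s {src_stp} -> {dst_stp} ({start}..{end})")
        return "connection-id-1"          # a real agent returns a connection id

    def commit(self, connection_id):
        print(f"commit {connection_id}")  # confirm the held reservation

    def provision(self, connection_id):
        print(f"provision {connection_id}")  # circuit becomes usable for data

    def terminate(self, connection_id):
        print(f"terminate {connection_id}")  # release the resources


client = NsiClient("https://nsi-aggregator.example.org/provider")
cid = client.reserve("urn:ogf:network:siteA:port1", "urn:ogf:network:siteB:port7",
                     bandwidth_mbps=1000,
                     start="2013-06-20T08:00Z", end="2013-06-20T12:00Z")
client.commit(cid)      # two-phase reservation: hold, then commit
client.provision(cid)   # activate the circuit for the transfer window
client.terminate(cid)   # tear down when the transfer is complete
```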

Point-to-Point Service in LHCONE
- Intended to support bulk data transfers at high rates
- Separation from GPN-style infrastructure to avoid interference between flows
- Two workshops conducted:
  - 1st LHCONE P2P workshop, December 2012: https://indico.cern.ch/conferenceDisplay.py?confId=215393
  - 2nd workshop, May 2013 in Geneva: https://indico.cern.ch/conferenceDisplay.py?confId=241490
- (Some) challenges:
  - multi-domain system
  - edge connectivity, to and within end sites
  - how to use the system from the LHC experiments' perspective (e.g. the ANSE project in the US)
  - managing expectations

Point-to-point Demo/Testbed
- Demo proposed and led by Inder Monga (ESnet)
- Choose a few interested sites
- Build a static mesh of P2P circuits with small but permanent bandwidth
- Use NSI 2.0 mechanisms to dynamically increase and reduce bandwidth (see the sketch below):
  - based on job placement or the transfer queue
  - based on dynamic allocation of resources
- Define adequate metrics for a meaningful comparison with the GPN and/or the VRF
- Time scale: TBD ("this year")
- Participation: TBD ("any site/domain interested")
- More discussion at the next LHCONE/LHCOPN meeting in Paris (June 2013)
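A rough sketch of the idea: size a circuit's bandwidth from the transfer queue and periodically ask the provisioning system to adjust it. The queue figures, floor/cap values and the request function are assumptions for illustration, not part of the actual demo plan.

```python
# Toy bandwidth-sizing loop for the demo idea above.

BASE_MBPS = 100     # small but permanent bandwidth of the static mesh (assumed)
CAP_MBPS = 10000    # upper limit a site could request (assumed)

def target_bandwidth_mbps(queued_bytes, deadline_seconds):
    """Bandwidth needed to drain the transfer queue before the deadline."""
    needed = (queued_bytes * 8) / (deadline_seconds * 1e6)  # bits/s -> Mb/s
    return max(BASE_MBPS, min(CAP_MBPS, needed))

def request_bandwidth_change(circuit_id, mbps):
    # Placeholder for an NSI 2.0 reservation-modify request.
    print(f"request {circuit_id}: set bandwidth to {mbps:.0f} Mb/s")

# Example: 4 TB queued for a site pair, to be moved within 6 hours.
queued = 4 * 10**12
request_bandwidth_change("siteA-siteB", target_bandwidth_mbps(queued, 6 * 3600))
# -> request siteA-siteB: set bandwidth to 1481 Mb/s
```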

SDN/OpenFlow: other R&D activities

Meeting on SDN in LHCONE, May 3rd, 2013
- Discussed the potential use case: SDN/OpenFlow could enable solutions to problems for which no commercial solution exists
- Identify possible issues/problems OpenFlow could solve, for which no other solution currently exists:
  - The multitude of transatlantic circuits makes flow management difficult; this impacts the LHCONE VRF, but also the GPN
  - No satisfactory commercial solution has been found at layers 1–3
  - Investigate the use of OpenFlow to address the problem at layer 2 (see the sketch below)
  - Caltech has a DoE-funded project, OLiMPS, developing multipath switching capability; it will be examined for use in LHCONE
- Second use case: ATLAS is experimenting with OpenStack at several sites; OpenFlow/SDN is the natural networking technology there and could be used to bridge the data centers
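To give a feel for the multipath flow-management problem, the sketch below pins each flow to one of several parallel circuits, weighted by capacity, so that a single large flow never straddles paths while the aggregate load spreads out. The circuit names and weights are invented, and this plain function is only an illustration of the load-balancing idea, not the OLiMPS controller itself.

```python
import hashlib

# Invented parallel (e.g. transatlantic) circuits: (name, relative capacity weight)
CIRCUITS = [
    ("circuit-AMS-NYC", 10),
    ("circuit-GVA-NYC", 10),
    ("circuit-LON-CHI", 40),
]

def pick_circuit(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Deterministically map a flow 5-tuple onto a circuit, weighted by capacity."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    h = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    total = sum(w for _, w in CIRCUITS)
    point = h % total
    for name, weight in CIRCUITS:
        if point < weight:
            return name
        point -= weight
    return CIRCUITS[-1][0]

# The same flow always maps to the same circuit; different flows spread out.
print(pick_circuit("192.0.2.10", "198.51.100.20", 50010, 2811))
```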

Summary
- LHCONE currently operates an L3VPN (a.k.a. VRF) service, with 44 LHC computing sites connected
- The point-to-point circuit service will first be shown in a demonstration, based on NSI; interested networks and sites are welcome to join
- Ideas in the SDN/OpenFlow space are currently being investigated:
  - Are they useful to LHCONE participating networks?
  - Is there any direct benefit to the LHC computing model?

Information exchange
- Weekly audio conference at 14:30 UTC, alternating every second week between architecture discussion and operations
- Mailing lists: lhcone-operations@cern.ch, lhcone-architecture@cern.ch
- LHCONE wiki: https://twiki.cern.ch/twiki/bin/view/LHCONE/WebHome
- Next LHCONE/LHCOPN meeting: Paris, June 17/18, 2013 (http://indico.cern.ch/conferenceDisplay.py?confId=236955)

More details: http://lhcone.net