LHCONE Point-to-Point Service Workshop Summary


Artur Barczyk / Caltech
ATLAS Technical Interchange Meeting, Tokyo, May 15, 2013
Artur.Barczyk@cern.ch

Background

- LHCONE data transport services:
  - Routed IP service, aka the "VRF" service: a virtualized routing environment, already implemented
  - (Dynamic) point-to-point service: being worked out, to be ready for the LHC restart in 2015
- The 1st LHCONE P2P workshop was held in December 2012; the 2nd workshop was held in May 2013 in Geneva
- This presentation aims to give a representative summary of the May meeting, with no claim to completeness
- URL: https://indico.cern.ch/conferenceDisplay.py?confId=241490

The Workshop

- Initial starting point: the whitepaper draft
- ~40 participants, representing:
  - the R&E networking community: ESnet, NORDUnet, GEANT, Internet2, US LHCNet, and others
  - LHC computing

Agenda

[Slide shown as an image; agenda content not transcribed]

Application Perspective (Tony Wildish, CMS)

- PhEDEx and "Bandwidth on Demand": https://twiki.cern.ch/twiki/bin/view/Main/PhEDExAndBoD
- Data placement in CMS:
  - T0 -> T1: custodial data; the primary use case for investigation/prototyping
  - T2 -> T1: harvesting MC production
  - T1 -> T2, T2 -> T2: placement for analysis
- The number of nodes, time profile, and concurrency vary considerably
- Long-duration transfers (terabytes, hours)
- What is the right service interface?

Application Perspective (Tony Wildish, CMS), cont.

- Basic motivation: improved performance for long transfers
- Basic questions to address:
  - Coping with call blocking: it must be possible to prioritise requests
  - Request parameters? Min./max. bandwidth, min./max. data volume
  - A request needs to return options, with the decision taken by the requester; binary yes/no replies are not useful
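
To make the shape of such an interface concrete, here is a minimal sketch (all names invented, not PhEDEx or any network-service code): the request carries the parameters above, and the reply is a list of options for the requester to choose from rather than a yes/no.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CircuitRequest:
    src: str                # source site, e.g. "T0_CH_CERN"
    dst: str                # destination site, e.g. "T1_US_FNAL"
    min_gbps: float         # minimum acceptable bandwidth
    max_gbps: float         # bandwidth ceiling worth asking for
    min_volume_tb: float    # smallest data volume worth a circuit
    max_volume_tb: float
    priority: int           # used when requests would otherwise block

@dataclass
class CircuitOption:
    gbps: float
    start: str              # ISO 8601 start time the network can offer
    duration_h: float

def request_circuit(req: CircuitRequest) -> List[CircuitOption]:
    """Return a list of options rather than a binary yes/no; the
    requester (e.g. PhEDEx) picks whichever option best fits its
    transfer queue, or falls back to the routed path if none fit."""
    # Mock reply for illustration: two alternatives the network offers.
    return [
        CircuitOption(gbps=req.max_gbps, start="2013-06-01T02:00Z", duration_h=4),
        CircuitOption(gbps=req.min_gbps, start="2013-06-01T00:00Z", duration_h=10),
    ]
```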

CMS use cases (Tony Wildish), cont.

- PhEDEx has 3 use cases with different features: number of circuits, topologies, time evolution
- Scales: hours, terabytes, nothing smaller
- Start with T0 -> T1s; the ultimate goal is to support analysis flows too
- RESTful service; augment existing capabilities with circuits
- Expect occasional failure or refusal from the service
- Need priority (and ownership?)
- Budget/share management? Who provides that?
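
A companion sketch of the RESTful flavour, with a hypothetical endpoint and JSON fields (no such API exists yet), showing that refusal is a normal answer to handle rather than an error:

```python
import json
import urllib.request

# Hypothetical endpoint; the real one would be agreed between PhEDEx
# and the network service providers.
SERVICE = "https://circuits.example.org/api/v1/requests"

def post_request(payload: dict) -> dict:
    req = urllib.request.Request(
        SERVICE,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

reply = post_request({"src": "T0_CH_CERN", "dst": "T1_US_FNAL",
                      "min_gbps": 5, "max_gbps": 20,
                      "volume_tb": 50, "priority": 3})

# Occasional refusal is expected: fall back to the general-purpose path.
if not reply.get("options"):
    print("request refused or blocked; falling back to GPN/VRF")
```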

Testbed as a Service (Jerry Sobieski, NORDUnet)

- A new "Testbed as a Service" activity in GN3+
- Objective: provide rapid-prototype network testbeds to the network research community
- Draws on numerous predecessor projects: FEDERICA, Network Factory, OFELIA, NOVI, MANTICHORE, GEYSERS, DRAGON, GENI, PlanetLab, ...
- A potential platform for R&D in the LHCONE context?
- Also note the GEANT Open Calls

High Performance Transfers (Inder Monga, ESnet)

- Networks have not been an issue for the LHC so far, but people desire "better than best [effort]" service
- Network complexity exists, but is hidden from users
- If we want "better than best", we need a well-defined interface
- NSI provides consistent guaranteed bandwidth across multiple WAN domains
- "Dynamic" is not a requirement
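
For reference, the NSI two-phase reservation sequence could be modelled as below. Real NSI CS 2.0 is a SOAP/XML protocol between requester and provider agents; the client class here is hypothetical, and only the message order (reserve, reserveCommit, provision) reflects the actual protocol.

```python
class NsiClient:
    def reserve(self, src_stp: str, dst_stp: str,
                capacity_mbps: int, start: str, end: str) -> str:
        """Phase 1: ask the provider agent to hold resources along the
        path; returns a connection id if the hold succeeds."""
        ...

    def reserve_commit(self, connection_id: str) -> None: ...
    def provision(self, connection_id: str) -> None: ...
    def release(self, connection_id: str) -> None: ...
    def terminate(self, connection_id: str) -> None: ...

def set_up_circuit(nsi: NsiClient) -> str:
    # STP (Service Termination Point) names are illustrative.
    cid = nsi.reserve("urn:ogf:network:cern.ch:T0",
                      "urn:ogf:network:es.net:T1",
                      capacity_mbps=10_000,
                      start="2013-06-01T00:00Z", end="2013-06-02T00:00Z")
    nsi.reserve_commit(cid)   # phase 2: confirm the held reservation
    nsi.provision(cid)        # activate the data plane
    return cid
```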

Point-to-point Demo/Testbed

- Proposed by Inder Monga (ESnet)
- Choose a few interested sites
- Build a static mesh of P2P circuits with small but permanent bandwidth
- Use NSI 2.0 mechanisms to dynamically increase and reduce bandwidth:
  - based on job placement or the transfer queue
  - based on dynamic allocation of resources
- Define adequate metrics, for a meaningful comparison with the GPN and/or VRF
- Include both CMS and ATLAS
- Time scale: TBD ("this year")
- Participation: TBD ("any site/domain interested")
- More discussion at the next LHCONE/LHCOPN meeting in Paris (June 2013)
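
A rough sketch of the resizing policy under discussion, assuming the service exposes some way to modify a committed reservation's capacity (NSI 2.0 supports reservation modification; the resize() wrapper here is invented):

```python
# Keep a small permanent circuit per site pair and resize it from the
# transfer queue, instead of setting circuits up and tearing them down.

BASE_MBPS = 1_000      # small but permanent baseline
MAX_MBPS = 40_000      # per-site-pair ceiling

def target_bandwidth(queued_tb: float, deadline_h: float) -> int:
    """Bandwidth needed to drain the queue by the deadline."""
    mbps = queued_tb * 8e6 / (deadline_h * 3600)   # TB -> megabits, hours -> seconds
    return int(min(max(mbps, BASE_MBPS), MAX_MBPS))

def adjust(circuit, queued_tb: float, deadline_h: float) -> None:
    # Grow when the queue is deep, shrink back toward the baseline as it drains.
    circuit.resize(target_bandwidth(queued_tb, deadline_h))
```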

Point-to-point Demo/Testbed, cont.

- Site selection: a set of Tier 1 and Tier 2 sites, maybe the Tier 0
  - DYNES/ANSE project sites in the US; we also want sites in Europe and Asia
- Use bandwidth from the VRF? It might be necessary where circuit infrastructure does not exist. But: will shared bandwidth not taint the results?
- Transatlantic: 2 dedicated links for circuits already exist (include the 100G transatlantic wave?)
- API to the experiments' software stacks: here is where ANSE is particularly important

Other items discussed

- Much focus on network monitoring as a basis for decision taking in the experiments' software; choices need to be provided
- SURFnet is starting a project aimed at analysing LHCOPN and LHCONE flows: an interesting basis for discussion, and for caution
- Remote I/O:
  - a different traffic pattern from what networks have assumed until now (many small flows vs. "few" large flows)
  - Will it become significant? (not likely) But will it be critical, i.e. require better than best effort?
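
As a toy illustration of "monitoring as a basis for decision taking" (the measurement feed and function names are invented; perfSONAR-style data is assumed available):

```python
def pick_source(replicas, measured_gbps):
    """Choose the replica with the best recently measured throughput
    toward us; measured_gbps maps site -> rolling-average Gb/s taken
    from a monitoring feed such as perfSONAR (assumed available)."""
    return max(replicas, key=lambda site: measured_gbps.get(site, 0.0))

# Example: prefer whichever site monitoring says is currently fastest.
best = pick_source(["T1_US_FNAL", "T1_DE_KIT", "T2_DE_DESY"],
                   {"T1_US_FNAL": 4.2, "T1_DE_KIT": 7.9})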

SDN/OpenFlow meeting

One slide of introduction

- Software Defined Networking (SDN): simply put, the physical separation of the control and data planes
- OpenFlow: a protocol between a controller entity and the network devices
- The potential is clear: a network operator (or even a user) can write applications which determine how the network behaves

[Diagram: applications on top of a controller (a PC), which speaks the OpenFlow protocol to the flow tables of an OpenFlow switch; flow entries match on header fields such as MAC src and MAC dst and specify an ACTION]
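
The diagram's content can be modelled in a few lines of plain Python, with no controller framework: the controller installs match-to-action entries, the switch forwards by table lookup, and a table miss goes back to the controller.

```python
flow_table = {}   # (mac_src, mac_dst) -> action

def install_flow(mac_src: str, mac_dst: str, action: str) -> None:
    """What a controller does via the OpenFlow protocol: push a rule."""
    flow_table[(mac_src, mac_dst)] = action

def switch_packet(pkt: dict) -> str:
    """What the switch data plane does: match, then apply the action."""
    action = flow_table.get((pkt["mac_src"], pkt["mac_dst"]))
    return action if action else "send_to_controller"   # table miss

install_flow("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", "output:port2")
```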

SDN - Brief summary

- SDN/OpenFlow could enable solutions to problems where no commercial solution exists
- Identify possible issues/problems OpenFlow could solve, for which no other solution currently exists, e.g.:
  - The multitude of transatlantic circuits makes flow management difficult; this impacts the LHCONE VRF, but also the GPN
  - No satisfactory commercial solution has been found at layers 1-3, but the problem can be easily addressed at layer 2 using OpenFlow
  - Caltech has a DOE-funded project running that develops a multipath switching capability (OLiMPS); we will examine it for use in LHCONE
- Second use case: ATLAS is experimenting with OpenStack at several sites. OpenFlow is the natural virtualisation technology in the network and could be used to bridge the data centers. This needs some more thought; there is interest in ATLAS.
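
This is not OLiMPS itself, but a sketch of the layer-2 multipath idea such a project builds on: pin each flow to one of several parallel circuits, so a single flow never reorders, while the population of flows spreads across all paths. A controller would install the chosen path as a flow rule on the edge switch.

```python
import hashlib

# Parallel transatlantic circuits (names illustrative).
PATHS = ["circuit_A", "circuit_B", "circuit_C"]

def path_for_flow(mac_src: str, mac_dst: str) -> str:
    """Deterministically map a flow to one path: same flow -> same
    path (no packet reordering), different flows spread over paths."""
    h = hashlib.sha1(f"{mac_src}->{mac_dst}".encode()).digest()
    return PATHS[h[0] % len(PATHS)]
```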

Summary

- Good discussion between R&E networking and the LHC experiments
  - approaching an understanding of what the networks can or could provide, and what the experiments (may) need
- Point-to-point service: we will construct a trial infrastructure
  - with participation of several Tier 1/Tier 2 sites, ideally on at least 2 continents
  - comparison with the GPN and/or VRF service
  - more details to be discussed at the next LHCONE/LHCOPN meeting in Paris, June 17/18: http://indico.cern.ch/conferenceDisplay.py?confId=236955
- SDN: identified potential use cases
  - flow load balancing within the networks
  - an elastic virtual data center
- Need to continue the dialogue and keep the momentum