
LHCONE Point2Point Service ‘BGP solution’
From the Netherlands: Freek Dijkstra, Sander Boele, Hans Trompert and Gerben van Malenstein
LHCOPN - LHCONE meeting at LBL - June 2, 2015 – Berkeley, CA, USA

Earlier experience by SURFsara: Life Science Grid (NL, 2011). Regular IP connectivity between two sites.

Earlier experience by SURFsara: Life Science Grid (NL, 2011). Automatically (scripted) routing traffic into a dynamic circuit.
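The slides do not show the scripts themselves; as a minimal sketch of the idea, assuming a Linux host where the dynamic circuit terminates on a VLAN subinterface (the interface name, prefix and gateway below are placeholders, not the actual Life Science Grid values):

    #!/usr/bin/env python3
    """Sketch: steer traffic into a freshly provisioned dynamic circuit.

    Hypothetical values throughout; the interface, prefix and gateway are
    placeholders, not the actual Life Science Grid configuration.
    """
    import subprocess

    CIRCUIT_IFACE = "eth1.3901"      # VLAN subinterface on the dynamic circuit
    REMOTE_PREFIX = "192.0.2.0/24"   # far-end prefix (documentation address)
    GATEWAY = "198.51.100.1"         # far-end IP on the point-to-point link

    def circuit_is_up(iface: str) -> bool:
        """Report link state by reading the kernel's operstate file."""
        with open(f"/sys/class/net/{iface}/operstate") as f:
            return f.read().strip() == "up"

    def steer_into_circuit() -> None:
        """Install a more-specific route so traffic prefers the circuit."""
        subprocess.run(
            ["ip", "route", "replace", REMOTE_PREFIX,
             "via", GATEWAY, "dev", CIRCUIT_IFACE],
            check=True,
        )

    if __name__ == "__main__":
        if circuit_is_up(CIRCUIT_IFACE):
            steer_into_circuit()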

NL, 2011

Scenario and result, 2015: LHCONE Point2Point Service. Exchanging production traffic between Brookhaven National Laboratory (US) and SURFsara (NL) via a dynamic layer 2 path, using BGP to put traffic into the path. The test was executed in the last week of May 2015 and was successful: production traffic was routed over the newly created dynamic path.
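The slides do not include the router configuration; purely as an illustration of "using BGP to put traffic into the path", a sketch that renders the kind of per-circuit BGP stanza each side would need. The AS numbers 43 and 1162 come from the next slide; the peering address is a placeholder (the real /30 addresses do not survive in this transcript) and the IOS-style syntax is an assumption:

    # Render an illustrative BGP stanza for the circuit peering (SURFsara side).
    # AS numbers are from the slides; the peer IP is a documentation address.
    TEMPLATE = """\
    router bgp {local_as}
     neighbor {peer_ip} remote-as {peer_as}
     neighbor {peer_ip} description dynamic-circuit VLAN {vlan} to BNL
    """

    print(TEMPLATE.format(local_as=1162, peer_as=43,
                          peer_ip="192.0.2.1", vlan=3901))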

LHCONE P2P Experiment (BNL – SURFsara), test setup May 2015 (network diagram; IP addresses and several values elided in transcription). Recoverable details:
– Path: BNL (AS43) – ESnet (aofa-cr5, port 4/2/1, to amst-cr5) – NetherLight – SURFnet – SURFsara (AS1162): a guaranteed Ethernet VLAN-tagged multi-domain circuit on VLAN 3901 over 100G infrastructure.
– A /30 point-to-point subnet on VLAN 3901 carries the BGP session; AS43 announces three prefixes (addresses elided; one is a /24) and AS1162 announces three prefixes (one is a /25).
– NSI STP IDs: urn:ogf:network:es.net:2013::aofa-cr5:4_2_1:+ and urn:ogf:network:es.net:2013::amst-cr5:3_1_1:+
– Intermediate switch ports and STP URNs: Asd001A_5410_01 5/8 (urn:ogf:network:surfnet.nl:1990:production7:netherlight-1), Asd001A_5410_03 9/10 (urn:ogf:network:netherlight.net:2013:production7:esnet-1), Asd001A_5410_03 3/6 (urn:ogf:network:netherlight.net:2013:production7:surfnet-1), Asd001A_8700_07 5/12 (urn:ogf:network:surfnet.nl:1990:production7:96292); the ?vlan= values are elided.
– Physical connections: SURFNET:S145-ODF18/38:Asd001A_8700_07:10/2 and AMST-HUB:AMST-FDP:A7/8:FRONT.
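For reference, the parameters of the NSI reservation can be collected from the slide. A sketch that only assembles them; actually submitting the request needs an NSI CS 2.0 SOAP client against a provider or aggregator, which is out of scope here, and the capacity value is left open because it is elided in the transcript:

    # Circuit reservation parameters recoverable from the slide. This only
    # assembles and prints the data; actually reserving the circuit requires
    # an NSI Connection Service (SOAP) client, not shown here.
    reservation = {
        "source_stp": "urn:ogf:network:es.net:2013::aofa-cr5:4_2_1:+",
        "dest_stp": "urn:ogf:network:es.net:2013::amst-cr5:3_1_1:+",
        "vlan": 3901,
        "capacity_mbps": None,  # guaranteed bandwidth figure elided in transcript
        "directionality": "Bidirectional",
    }

    for key, value in reservation.items():
        print(f"{key:>15}: {value}")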

BNL–SURFsara: SURFsara L2 details (diagram). Elements shown: NetherLight, SURFsara routers rt-core-2 and grid-r1, grid storage, NIKHEF grid storage, the NL-T1 routing VRF and a perfSONAR host. VLAN 3901 is forwarded on layer 2: it is carried in a 30 Gb/s trunk and, with 4 Gb/s dedicated, in a 10 Gb/s MSP. Port details: Asd001A_5410_03 interface 3/4, ODF 18 port 41, S145/N17.

BNL–SURFsara: layer 3 details (diagram; IP addresses elided in transcription). Peers: BNL (AS 43) and SURFsara (AS 1162), with a /30 point-to-point subnet between them. SURFsara-side prefixes: a /22 (grid-storage-cluster), a /28 (perfSONAR-lhcopn-lan) and a /25 (NIKHEF-NL-T1-grid); BNL announces prefixes of its own (addresses elided). IPv6 for some inexplicable reason ignored... again.

Dynamic circuit created.

Traffic: a steady ~200 Mb/s over the dynamic circuit. Most traffic flowed from BNL to SURFsara, while the opposite was expected.

perfSONAR

How does BGP scale? The BNL-SURFsara BGP session in this scenario is essentially just a direct BGP peering over a circuit.
– A regular IP peering has larger latency between the two peers.
– The dynamic circuit, and thus its BGP session, may be down for prolonged periods of time.
Technically, BGP scales to hundreds of peers. Manual maintenance, however, only scales to roughly 10-20 peers; after that it becomes tedious, and one wants preset agreements on e.g. BGP peering IP addresses.
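To make the "preset agreements" point concrete: once peering addresses follow an agreed scheme, per-peer configuration can be generated rather than maintained by hand. A hypothetical sketch; the addressing scheme, the extra peers and the IOS-style syntax are all assumptions, not from the slides:

    # Generate per-peer BGP stanzas from a preset addressing agreement.
    # Assumed scheme: circuit i uses the i-th /30 out of 192.0.2.0/24,
    # with the remote peer on the second host address of that /30.
    PEERS = [
        ("BNL", 43),        # AS number from the slides
        ("SiteB", 64512),   # made-up private-AS examples
        ("SiteC", 64513),
    ]

    def stanza(index: int, name: str, asn: int) -> str:
        peer_ip = f"192.0.2.{4 * index + 2}"
        return (f" neighbor {peer_ip} remote-as {asn}\n"
                f" neighbor {peer_ip} description dynamic-circuit to {name}")

    print("router bgp 1162")  # SURFsara's AS, per the slides
    for i, (name, asn) in enumerate(PEERS):
        print(stanza(i, name, asn))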

How does BGP scale? Internet exchanges have faced the same scaling issues, and found solutions like route servers. A route server can't be used unchanged in this scenario, since route servers assume that all routers are in the same VLAN. The big advantages of circuits are that there is no fixed central infrastructure (unlike LHCONE), and that traffic engineering (e.g. keeping TCP congestion control from kicking in) is easier. Scalability falls in between:
– LHCONE: does not need configuration templates; configure once.
– (Dynamic) circuits: need configuration templates once more than 10-20 sites (BGP sessions) are connected.
– OpenFlow: always needs automated scripts to configure, even for a few flows.
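The mesh question is easy to quantify: a full mesh of N sites needs N(N-1)/2 circuits and BGP sessions, while a route-server-style hub needs only N. A quick calculation shows why the 10-20 site threshold matters:

    # Full-mesh vs hub (route-server-style) session counts.
    def full_mesh_sessions(n: int) -> int:
        return n * (n - 1) // 2

    for n in (5, 10, 20, 40):
        print(f"{n:>3} sites: {full_mesh_sessions(n):>4} full-mesh sessions"
              f" vs {n:>3} hub sessions")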

Discussion: To what extent does BGP scale when using dynamic circuits? How can this scenario be scaled to a partial mesh (including a route server)?