Network Engineering Services Group Update
Patrick Dorn, Network Engineer
ESnet Network Engineering Group
ESCC, July 15, 2013

Outline
Operational work completed since January
100G Testbed Status
ANA-100G
BayExpress Alien Wave Testing
Current network snapshot
100G testing
Operational work in progress

Since January
ANL and ORNL 100G connections in production (April)
− Production meaning in use for primary IP peering and available for OSCARS circuits
100G production connections to exchange points and R&E peers:
− MANLAN, Starlight, WIX, CENIC (PACWAVE)
− Internet2: Sunnyvale, Chicago, New York (via MANLAN), DC (via WIX)
ESnet5 consolidation across the national footprint, increasing consistency, reducing router count, power consumption and maintenance costs. For example:
− Decommissioned OC48 between Atlanta and ORNL
− Removed aofa-cr2, star-sdn1, star-cr1, ornl-rt2, pnwg-sdn1
− Eliminated MX-ALU interconnect bottlenecks at the hubs
− Removed all DNS entries referring to "ANI"

Since January
Swapped out our unsupported "third party" 10x10 MSA (aka "LR-10") CFPs in our Ciena interfaces with Ciena OEM'd CFPs
− Covered by 4-hour replacement maintenance contract
− Full PM / stats support
We were able to extract additional value from the third-party optics by using a pair of them in the TA Testbed in a test configuration at BNL.
Normalized 100G Testbed infrastructure
− Testbed ALUs reconfigured from scratch
− Moved out of ESnet backbone IP space
− Moved out of ESnet ASN (from 293 to 3432)

ESnet 100G Testbed Topology (diagram)
− ESnet 100G Testbed (AS3432) alongside ESnet5 (AS293)
− Testbed nodes star-tb1 and nersc-tb1 connected by a 100G link; star-tb1 connects to star-cr5 (ESnet5) and nersc-tb1 connects to nersc-mr2 via 2x10G
− The original figure labels each interface (e.g. 1/1/1:5 [to_star-cr5], xe-7/2/0.5) with its IPv4 /30 and IPv6 /127 point-to-point addressing

ESnet 100G Testbed Future (diagram)
− Adds aofa-cr5 and a 100G STAR-AOFA testbed link alongside the existing star-tb1, star-cr5, nersc-tb1, and nersc-mr2 elements (100G and 2x10G links)

ANA-100G
Advanced North Atlantic 100G Pilot (ANA-100G)
− Combined effort with Internet2 (USA), NORDUnet (Nordic countries), ESnet (USA DoE), SURFnet (Netherlands), CANARIE (Canada), and GÉANT (Europe)
100G wave from New York to Amsterdam
− For prototyping, experiments, etc.
− 12-month term
− Consortium purchased spectrum on submarine link
− Lit with consortium-owned cards slotted in provider's chassis

ANA-100G

ANA-100G (drawing courtesy of Ciena)
− North American terrestrial component
− Trans-Atlantic submarine component
− European submarine component

ANA-100G at TNC
TNC demos:
− Big data transfers with multipathing, OpenFlow and MPTCP
− Visualize 100G traffic
− How many modern servers can fill a 100Gbps Transatlantic Circuit?
− First European ExoGENI at Work
− Up and down North 100G

ANA-100G Futures
European side being relocated from Maastricht to Amsterdam
Circuit is shared among participating organizations
− Likely timesliced for 100G clear-channel use
− Also opportunities for multiple 20-40G experiments in parallel
Experiments are being planned for the next 12 months
− E.g. BNL-CERN demo

BayExpress Alien Wave Testing
Joint project with Juniper
PTX router/MPLS platform
Single PTX chassis at LBL
PTX carved into two logical systems
Beta PTX PICs with 100G coherent, colored optics
Ixia at LBL for traffic generation / error detection

BayExpress Alien Wave Testing (diagram)
− Alien wave: no transponder on the DWDM system; colored, tunable 100GE long-haul optics directly on the PTX router
− Conventional wave: transponder on the DWDM system, with CFP grey optics between the router and the transponder

BayExpress Alien Wave Testing
Phase 1
− Alien on 2-node Ciena lab system at LBL
Phase 2
− Alien on production BayExpress
− 1 segment, LBL-NERSC
− 12 km
Phase 3
− Alien on production BayExpress
− 1400 km (700 km each direction)

BayExpress Alien Wave Testing Phase 3 (map)
− Path takes the "long way around"
− Total distance: ~700 km
− 7 ROADMs
− PTX

BayExpress Alien Wave Testing Wrapup
Motivations
− Explore technology for future architectures
  Bit of experience with PTX platform
  Gained experience with router-based colored optics
  Proof of km operation
− Gain operational experience with alien wavelengths
  Greater understanding of provisioning steps and parameters
  Ciena 6500s behaved perfectly
    » No operational impact
    » Alien was balanced with existing native waves
    » BER of existing waves unaffected
Possible future testing
− Attempt alien wavelength in the backbone?

ESnet5 Routed Network (July 2013)
Routers
16 Alcatel-Lucent (ALU) 7750-SR12
− 10-slot router with up to 200G per slot
− 100G interfaces & 10G interfaces
30 existing Juniper MXs
− Used in 10G hubs, commercial exchange points, sites
16 existing Juniper M7i & M10i
− For terminating links slower than GE
4 very old Cisco 7206
− For terminating links slower than GE
Services
Standard routed IP (including full Internet services)
Point-to-point dynamic virtual circuits using OSCARS
Various overlay networks (private VPNs, LHCONE VRF)

ESnet5 Routed Network Map (July 2013)
− Shows major Office of Science (SC) sites, major non-SC DOE sites, commercial peering points, R&E network peering locations, ESnet PoP/hub locations, ESnet optical transport and optical node locations (only some shown), ESnet-managed 10G and 100G routers, and site-managed routers
− Link legend: routed IP 100 Gb/s; routed IP 4 x 10 Gb/s; 3rd-party 10 Gb/s; express/metro 100 Gb/s; express/metro 10G; express multipath 10G; lab-supplied links; other links; tail circuits
− Geography is only representational

ESnet Optical Network (July 2013)
341 Ciena 6500 nodes
− 57 add/drops
− 284 amps
ESnet waves deployed
− 100G
− 7 40G (muxes 4x10G client circuits)
− 18 10G metro (non-coherent)

ESnet Optical Footprint: Add/Drops

100G Testing
Review of ESnet process before links are placed into service:
Saturation test
− No internal bottlenecks to prevent running at capacity
− Meet or exceed 95% of line rate for 5 minutes
Loss test
− 50% of line-rate capacity over a 24-hour duration
− Ensure that the line is performing properly (no errors)
− Strive for zero loss on all circuits
At 100G, transfer 500 TB in <= 24 hours with 0 loss
100G site connections are extended DMZs
In general, our testing of site 100G connections has involved site equipment interfaces
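A quick back-of-the-envelope check (added here as an illustration, not part of the original slides) shows how the 500 TB figure follows from the loss-test parameters: 50% of a 100 Gb/s line rate sustained for 24 hours moves roughly 540 TB.

```python
# Sanity check of the 100G loss-test volume (illustrative arithmetic only).
LINE_RATE_BPS = 100e9        # 100 Gb/s link
LOSS_TEST_FRACTION = 0.5     # loss test runs at 50% of line rate
DURATION_S = 24 * 3600       # 24-hour test window

bits = LINE_RATE_BPS * LOSS_TEST_FRACTION * DURATION_S
terabytes = bits / 8 / 1e12
print(f"~{terabytes:.0f} TB moved in 24 hours")  # ~540 TB, consistent with the 500 TB / 24 h / zero-loss target
```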

100G Testing (cont'd)
Have a catalog of issues from our experiences
− Internal testing
− Community troubleshooting
− Site connection acceptance testing
Example equipment issues we've seen:
− 100G interfaces with a max single-flow rate of 12G
− 100G interfaces with 50G limits
  » Static VLAN mapping
  » Dynamic load balancing
− 100G interfaces with a 92.5G max single-flow rate
− 100G interfaces with data corruption on jumbo frames
All equipment has its own set of issues and quirks
− Important to understand what you deploy/support
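Per-flow limits like the 12G and 92.5G ceilings above typically only show up when a single TCP stream is compared against several parallel streams. The sketch below shows that kind of check, assuming an iperf3 server is already running on the far end (the hostname is hypothetical; this is not ESnet's acceptance-test tooling):

```python
# Compare single-flow vs. parallel-flow throughput with iperf3 to expose
# per-flow limits on a 100G path. Illustrative sketch only.
import json
import subprocess

SERVER = "test-host.example.net"   # hypothetical far-end iperf3 server

def iperf3_gbps(streams: int, seconds: int = 30) -> float:
    """Run iperf3 with the given number of parallel TCP streams and return Gb/s received."""
    out = subprocess.run(
        ["iperf3", "-c", SERVER, "-t", str(seconds), "-P", str(streams), "-J"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)["end"]["sum_received"]["bits_per_second"] / 1e9

single = iperf3_gbps(1)
parallel = iperf3_gbps(8)
print(f"1 stream:  {single:.1f} Gb/s")
print(f"8 streams: {parallel:.1f} Gb/s")
# A large gap (e.g. ~12 Gb/s single flow vs. 90+ Gb/s aggregate) points to a
# per-flow limit in a device along the path rather than at the end hosts.
```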

100G Testing (cont'd)
We were asked to help with this problem
LHC ATLAS issue
− "Urgent difficult problem we're seeing here bringing IU to the MWT2 group in Chicago"
− Problem: intermittent payload data corruption
This is corruption of the payload that is not being caught by the normal checksums in the underlying protocols.
We worked with IU to isolate the source of the problem by ruling out IU's Chicago-Indianapolis optical transport
IU worked with the network equipment manufacturer to identify the root cause of the problem
− Used a similar type of test equipment to prove it
− IU looking to buy test equipment
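The reason this class of corruption can slip through is that the IP and TCP checksums are only 16 bits, so some bit-error patterns introduced in flight still produce a valid checksum. One common way to catch it is to compare a strong checksum of the data computed independently at each end; the sketch below (an illustration, not the tooling used in this investigation, with hypothetical file paths) does that with SHA-256.

```python
# Detect silent payload corruption by comparing end-to-end SHA-256 digests.
# In practice the sender and receiver each compute the digest on their own
# copy and compare the results out of band.
import hashlib

def sha256_of_file(path: str, chunk_size: int = 4 * 1024 * 1024) -> str:
    """Stream the file through SHA-256 so large data sets need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of_file("/data/sender_copy.dat") != sha256_of_file("/data/received_copy.dat"):
    print("payload corrupted in transit, despite every TCP segment passing its checksum")
```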

100G Testing (cont'd)
Having 100G test equipment we can depend on is valuable
− Isolated problems quickly
− Easy to validate repairs
Similar test equipment was used turning up the transatlantic link for ANA-100G
Debugging these problems can be time-consuming
− Pattern corruption problem took weeks (days of dedicated effort)
− We don't have time to waste on debugging the test equipment

In Progress and Future Work
100G production connections to:
− BNL, FNAL, LBNL, LLNL, NERSC
40G transport into Equinix Ashburn & rearranging our Washington DC ring to provide diverse backbone connections
− Longer term: considering 10G for DC MAN
Complete provisioning of diverse fiber laterals and diverse optical nodes at ANL & FNAL

In Progress and Future Work (cont'd)
40G transport into Las Vegas and relocate to a Level 3 colo
− Enables us to deliver higher BW in the region and redundant connectivity
Expand optical capability in Atlanta
− Supports ORNL area and possible future JLAB redundancy
Ames Lab: upgrade to 10G
Continue cleanup and consolidation at the hubs, moving connections from the MXs to the ALUs

OSCARS Service Changes
Increasing the number of queues
− One benefit of this is to apply some of the suggestions in HNTES
Provision a queue for alpha flows
Support best-effort connectivity service if a guaranteed-bandwidth circuit fails
Migrate to a single queue* (avoiding the Scavenger queue) and the PLP bit to accommodate in-profile/out-of-profile OSCARS guaranteed-bandwidth traffic on Juniper routers
Support zero-bandwidth best-effort circuit requests
* On all ESnet5 ALU routers, use of a single queue and PLP for OSCARS guaranteed-bandwidth circuits has already been implemented
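To make the single-queue / PLP idea concrete, here is a conceptual sketch (class and parameter names are my own; this is not ESnet's or Juniper's actual configuration): a per-circuit token bucket polices the reserved rate, traffic within the reservation is forwarded normally, and traffic beyond it stays in the same queue but carries a high loss priority, so it is discarded first under congestion.

```python
# Conceptual sketch of in-profile/out-of-profile marking for a guaranteed-
# bandwidth circuit sharing a single queue: packets over the committed rate
# are still forwarded, but marked with high loss priority (the analogue of
# the PLP bit) so they are dropped first when the queue congests.
import time

class TokenBucketMarker:
    def __init__(self, committed_rate_bps: float, burst_bytes: int):
        self.rate_bytes_per_s = committed_rate_bps / 8
        self.burst_bytes = burst_bytes
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def mark(self, packet_len: int) -> bool:
        """Return True if the packet is in-profile, False if it should carry high loss priority."""
        now = time.monotonic()
        self.tokens = min(self.burst_bytes,
                          self.tokens + (now - self.last) * self.rate_bytes_per_s)
        self.last = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True       # in-profile: normal loss priority
        return False          # out-of-profile: set PLP, drop first on congestion

# Example: a circuit with a 1 Gb/s reservation; 1500-byte packets beyond the
# reserved rate still get through when the link is idle, but carry the PLP mark.
marker = TokenBucketMarker(committed_rate_bps=1e9, burst_bytes=1_500_000)
in_profile = marker.mark(1500)
```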

QoS Changes

Current Class of Service queues
− Network Control: for network control and management traffic
− Expedited Forwarding: for in-profile OSCARS guaranteed-bandwidth circuit traffic
− Best Effort: for general IP routing
− Scavenger: for out-of-profile OSCARS circuit traffic and scavenger IP traffic

Proposed Class of Service queues (late 3Q2013)
− Network Control: for network control and management traffic
− Expedited Forwarding Circuits: for in-profile/out-of-profile* OSCARS guaranteed-bandwidth circuit traffic
− Best Effort Circuits: for non-guaranteed OSCARS circuit traffic (i.e. zero-bandwidth circuits)
− Assured Forwarding: for testing elephant (alpha) flow isolation and routing (prototype deployment)
− Best Effort: for general IP routing
− Scavenger: for scavenger IP traffic

* In the proposed QoS change, the Packet Loss Priority (PLP) bit is used to determine in-profile/out-of-profile OSCARS traffic

Questions? Thanks!