100GE Upgrades at FNAL
Phil DeMar; Andrey Bobyshev
CHEP 2015, April 14, 2015

FNAL High-Impact Traffic Isolation Philosophy

If feasible, science data traffic is kept logically separate:
– Optimal performance is more likely over WAN science data network paths
– Easier to target internal LAN upgrades at high-impact science needs
– May facilitate more flexible perimeter security models
– More limited interaction with sensitive or interactive traffic (you don't want this to be your users' audio/video applications)

The routed IP network path still supports our general internet traffic:
– Also carries a significant percentage of our science data traffic

FNAL's Science Data WAN Network Paths

LHCOPN:
– T0-to-T1 data, and most T1-to-T1 data movement
– Supported by three virtual circuits (failover ordering sketched below):
    Primary – 40 Gb/s, guaranteed
    Secondary – 30 Gb/s (failover)
    Tertiary – 10 Gb/s (second failover)

LHCONE:
– Private routed virtual network
– FNAL LHCONE traffic rides ESnet, including the trans-Atlantic segment
– US LHC universities are starting to migrate to ESnet for LHCONE

Legacy point-to-point virtual circuits (static):
– Six US Tier-2s have circuits to FNAL

(Obligatory LHCOPN figure)
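For illustration, the failover ordering of the three LHCOPN circuits can be modeled as a simple preference list. This is a sketch of the behavior described above, not FNAL's actual provisioning logic; the capacities come from this slide.

```python
# Sketch of LHCOPN circuit failover ordering (illustrative only; not
# FNAL's actual provisioning logic). Capacities are from the slide above.
LHCOPN_CIRCUITS = [
    {"name": "primary",   "gbps": 40, "role": "guaranteed"},
    {"name": "secondary", "gbps": 30, "role": "failover"},
    {"name": "tertiary",  "gbps": 10, "role": "second failover"},
]

def active_circuit(circuits_up):
    """Return the highest-preference circuit that is currently up,
    or None if all are down (traffic then falls back to routed IP)."""
    for circuit in LHCOPN_CIRCUITS:
        if circuit["name"] in circuits_up:
            return circuit
    return None

# Example: with the primary down, LHCOPN traffic shifts to the secondary.
print(active_circuit({"secondary", "tertiary"})["name"])  # -> secondary
```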

Current Perimeter Architecture

Science data network services are supported with separate infrastructure:
– Separate border router
– Bypass path(s)
– Security model: known traffic from well-known systems at trusted sites

General routed internet traffic goes via the primary border router:
– Fails over to the secondary (not load-balanced)
– Some science data also travels the routed IP path

How Science Network Traffic Is Isolated

Keyed on Policy-Based Routing (PBR), modeled in the sketch below:
– Essentially source/destination ACLs
– Satisfies the security policy for bypass traffic:
    Ingress on the science data network border router
    Egress for bypass from the data center LANs (i.e., CMS Tier-1)

Object tracking & SLA monitoring check that the PBR path is functional.

Science network traffic not matching PBR is forwarded to the routed IP path:
– May create path asymmetry issues
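In spirit, the PBR rules act as a source/destination lookup gated by a liveness check. The sketch below models that logic; the prefixes, and the use of a single boolean for the tracked-path state, are illustrative assumptions, not FNAL's actual ACLs or monitoring.

```python
import ipaddress

# Hypothetical (source, destination) prefix pairs permitted onto the
# bypass path; real PBR ACLs would list actual trusted-site prefixes.
BYPASS_ACL = [
    ("131.225.188.0/24", "192.0.2.0/24"),  # illustrative pair only
]

def pbr_forward(src, dst, bypass_path_up):
    """Matched traffic takes the bypass only while object tracking /
    SLA monitoring reports the bypass path functional; everything
    else (and tracked-path failures) goes to the routed IP path."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    for src_net, dst_net in BYPASS_ACL:
        if s in ipaddress.ip_network(src_net) and d in ipaddress.ip_network(dst_net):
            if bypass_path_up:
                return "bypass"
            break  # tracked path down: fall through to routed IP
    return "routed-ip"  # non-PBR traffic; may create path asymmetry

print(pbr_forward("131.225.188.10", "192.0.2.5", bypass_path_up=True))   # bypass
print(pbr_forward("131.225.188.10", "192.0.2.5", bypass_path_up=False))  # routed-ip
```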

Now On To 100GE Upgrades

FNAL WAN Access – ChiExpress MAN

ChiExpress:
– ESnet local loop access for FNAL (and neighboring ANL)
– 100GE-based technology
– Full geographic redundancy
– Reasonable channel reconfiguration time (~1 hr)

FNAL channels:
– One 100GE
– A 2nd 100GE in place, but not in use yet
– Four 10GE channels, carrying routed IP and the backup science data path

New Perimeter Architecture

Consolidation to two border devices:
– Cost-driven
– Based on Brocade MLXe's
– In progress: 4-6 month cutover

Science traffic is still kept logically separate.
Routed IP traffic is still based on default & failover.
Network R&D is becoming a high-impact traffic factor.

WAN Traffic Distribution Strategy

One 100GE channel for production science data traffic (channel plan summarized in the sketch below):
– Circuits (including LHCOPN) and LHCONE traffic

Second 100GE for network research & failover of science data traffic (LHCONE):
– Will reexamine/readjust this strategy when science data traffic starts pushing up toward 100GE:
    Optimally, just upgrade that channel to 2x100GE
    Or load-balance LHCONE traffic across the existing 100GE channels
– Generally, circuit-based traffic falls back on the routed IP path

Migrating routed IP traffic over to the 100GE channel(s) is a goal:
– Starting to saturate the current 2x10GE capacity
– Support for >10GE flows
– But no specific timetable yet
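One way to summarize this distribution strategy is as a traffic-class-to-channel table with per-class failover. The class and channel names below are ours, invented to mirror the bullets above; this is a descriptive model, not configuration.

```python
# Descriptive summary of the WAN traffic distribution above (traffic-class
# and channel names are ours, invented to mirror the slide's bullets).
CHANNEL_PLAN = {
    "lhcopn-circuits": ("100GE-1", "routed-ip"),  # circuits fall back to routed IP
    "lhcone":          ("100GE-1", "100GE-2"),    # fails over to the R&D channel
    "network-rnd":     ("100GE-2", None),
    "general-ip":      ("2x10GE",  None),         # goal: migrate onto 100GE later
}

def channel_for(traffic_class, primary_channel_up=True):
    primary, fallback = CHANNEL_PLAN[traffic_class]
    return primary if primary_channel_up else fallback

print(channel_for("lhcone", primary_channel_up=False))  # -> 100GE-2
```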

VRFs

Separation of science data & general IP network routing is done with virtual routing & forwarding (VRF) technology (illustrated below):
– Shared layer-2 (SVIs)
– Separate layer-3
– Preserves the current security model

Current implementation problem:
– Only the default MLXe VRF has PBR capability
– Looking into work-arounds

(Figure: primary and secondary border routers)
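Conceptually, VRFs give the science and general networks separate routing tables over shared layer-2 interfaces. The toy lookup below illustrates that separation; the VRF names, prefixes, and next-hops are invented for illustration and are not the MLXe configuration.

```python
import ipaddress

# Toy model of VRF separation: one routing table per VRF over shared
# SVIs. VRF names, prefixes, and next-hops are invented for illustration.
VRF_TABLES = {
    "science": {"192.0.2.0/24": "science-bypass"},
    "general": {"0.0.0.0/0":    "routed-ip-default"},
}

def vrf_lookup(vrf, dst):
    """Longest-prefix match confined to one VRF's table; routes in one
    VRF are invisible to the other, preserving the security model."""
    d = ipaddress.ip_address(dst)
    best = None
    for prefix, next_hop in VRF_TABLES[vrf].items():
        net = ipaddress.ip_network(prefix)
        if d in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, next_hop)
    return best[1] if best else None

print(vrf_lookup("science", "192.0.2.7"))  # -> science-bypass
print(vrf_lookup("general", "192.0.2.7"))  # -> routed-ip-default
```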

100GE In the Data Centers

100GE Upgrades – CMS Tier-1

Interconnect between the dual core Nexus 7000s upgraded to 2x100GE:
– Replaces 16 x 10GE
– Additional 100GE links to aggregation switches

WAN connection upgraded to 100GE:
– Replaces 4 x 10GE

Wide-scale 10GBASE-T deployment:
– ~200 systems this year
– EOR/MOR, not TOR

100GE Upgrades – General Data Center(s)

Interconnect between the dual core Nexus 7710s upgraded to 100GE:
– Replaces 8 x 10GE
– 100GE link to one aggregation switch as well

WAN connection upgraded to 100GE:
– Replaces 2 x 10GE

10GBASE-T deployment scaling up:
– ~50 systems this year

Resiliency

100GE MAN channel up:
– LHCOPN & LHCONE follow their usual paths
– No PBR changes

100GE MAN channel down:
– LHC tertiary & LHCONE through the other 100GE
– No PBR changes

Primary border router down:
– LHC tertiary & LHCONE through the other LAN path
– PBR multihop tracking reroutes to the backup bypass

(The three scenarios are summarized in the sketch below.)
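The three scenarios read as a small decision table. The sketch below restates the slide; the path labels are ours, and this is a reading of the slide rather than operational failover code.

```python
# Decision-table reading of the resiliency slide above (path labels are
# ours; this summarizes the slide, it is not operational failover code).
def science_paths(man_channel_up, primary_border_up):
    if not primary_border_up:
        return {"lhc": "tertiary via other LAN path",
                "lhcone": "other LAN path",
                "pbr": "multihop tracking reroutes to backup bypass"}
    if not man_channel_up:
        return {"lhc": "tertiary via other 100GE",
                "lhcone": "other 100GE",
                "pbr": "no changes"}
    return {"lhc": "usual LHCOPN path",
            "lhcone": "usual path",
            "pbr": "no changes"}

print(science_paths(man_channel_up=False, primary_border_up=True)["pbr"])  # no changes
```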

Future Directions

Immediate term – finish the consolidation into two border devices.

Short term – consolidate science data network traffic:
– Migrate T1-to-T1 traffic over to LHCONE
– Phase out legacy static circuits in favor of LHCONE:
    Obviously requires coordinated actions with remote sites

Long term – evaluate SDN-based technologies (OpenFlow) to isolate on-site science data traffic:
– The evolution of inter-domain SDN is too unclear now to start planning SDN beyond our border

Questions?