NLR Layer2/Layer3 Users BOF NLR status update Albuquerque Internet2 Joint Techs 8 February 2006 Brent Sweeny, Indiana University Jon-Paul Herron, Indiana University.


NLR Layer2/Layer3 Users BOF NLR status update Albuquerque Internet2 Joint Techs 8 February 2006 Brent Sweeny, Indiana University Jon-Paul Herron, Indiana University John Moore, North Carolina State University

Agenda
1. Progress report
2. Questions (non how-do-I-connect)
3. What's the Layer2/3 connection process?
4. Experiment case studies
5. "Ask the experts": questions, special situations, discussion, "how-do-I", "I need…", etc.

NLR Engineering/Support Organization
A very distributed, coordinated organization:
- Service desk at Indiana
- Layer1 NOC and engineering at CENIC
- Layer2/3 NOC and engineering at Indiana
- Also: measurement, monitoring
- Tech mailing lists for Layer2 and Layer3 users
- Technical documentation
- Experiments support center at North Carolina

The Big News about NLR Layers 2 and 3: It's working! People are connected, and using it!

Who's Who: about 150 institutions
NLR Members (for participants see
- Corporation for Education Network Initiatives in California (CENIC)
- Pacific Northwest GigaPop (PNWGP)
- Pittsburgh Supercomputing Center and the University of Pittsburgh
- Duke University, representing a coalition of North Carolina universities
- Mid-Atlantic Terascale Partnership
- Internet2® (no participants currently)
- Florida LambdaRail, LLC
- Southern Light Rail, Inc.
- Committee on Institutional Cooperation (CIC)
- Cornell University
- Louisiana Board of Regents
- Oklahoma State Board of Regents
- Lonestar Education and Research Network (LEARN)
- University of New Mexico, on behalf of the state of New Mexico
- University Corporation for Atmospheric Research (UCAR), representing a coalition of universities and government agencies from Colorado, Wyoming, and Utah

NLR PacketNet/FrameNet Current Status

Review: NLR architecture and services

National LambdaRail design [map legend: NLR WaveNet PoP; NLR WaveNet & FrameNet PoP; NLR WaveNet, FrameNet & PacketNet PoP; NLR-owned fiber]

Generic NLR L1, L2 and L3 PoP Layout [diagram: DWDM east/west links, CRS-1 router, colo space, NLR demarc; legend: 1G wave, link or port; 10G wave, link or port]

NLR Layer 1 "WaveNet" [national map of WaveNet PoPs (SEA, POR, SVL, LAX, DEN, SLC, OGD, BOI, STA, CHI, KAN, TUL, DAL, HOU, SAA, ELP, PHO, ALB, RAT, BAT, JAC, ATL, RAL, PIT, CLE, SYR, NYC, WDC, PEN) over Level3 and WilTel fiber; legend: Cisco terminal, Cisco OADM]

Layer 1 Phase 2 Deployment [map of phase 2 PoPs (DAL, SYR, TUL, PEN, ELP, KAN, PHO, BAT, ALB, HOU, WDC, OGD, CLE, NYC, SAA, DEN, JAC, SLC, LAX, RAT) over Level3 and WilTel fiber; legend: Cisco terminal, Cisco OADM]

Layer 1 baseline
- Opportunity to connect into the lambda fabric
- Point to point; the other endpoint could be anywhere
- Early examples: HOPI, Ultralight, iGRID, SC05 (Supercomputing)

Layer 2 Network Design "FrameNet" [map of FrameNet switch sites (SEA, SVL, LAX, DEN, ALB, ELP, TUL, KAN, HOU, BAT, CHI, CLE, PIT, NYC, WDC, RAL, ATL, JAC) with Cisco 6509 switches; legend: 10GE wave, 10GE managed wave; yellow sites are done]

Layer 2 installation status
All switch installations completed: Sunnyvale, Denver, Kansas City, Chicago, Cleveland, Pittsburgh, Raleigh, Washington DC, Jacksonville, Atlanta, Los Angeles, Tulsa, El Paso, Houston, New York City, Baton Rouge, Albuquerque.
Layer2 backbone interconnection status: all layer2 backbone interconnects are done except:
- Tulsa-Kansas City and Tulsa-Houston (that is, both directions out of Tulsa)
- New York City-Washington DC and New York City-Cleveland (both directions out of NYC)
- Los Angeles-Sunnyvale

Layer 2 service baseline (see also FrameNet Technical Guide)
- 1GE connection into local GE
- 1GE access to the "national exchange fabric"
Additional options:
- Dedicated point-to-point Ethernet, Nx1GE
- Best-effort point-to-multipoint (no dedicated bandwidth)
Soon:
- 10GE ports
- Dedicated point-to-multipoint

Goal
- Provide circuit-like options, via point-to-point layer2 VLANs, for users who can't use, can't afford, or don't need a 10G Layer1 wave.
- Experiment with large-scale layer2 capabilities.

National Exchange Fabric
- Multipoint, public, best-effort, resilient, permanent, stable
- Non-dedicated bandwidth
- NLR-allocated addresses; peer with any other member across the layer 2 exchange, policy-free
- Possible to have more than one (say, by MTU)
- Ready to go today
- Current participants: Duke, MATP, SLR; several more coming
Backup connections to networks such as Abilene, commodity providers, etc.
- Point-to-point, private, permanent, stable
- Could be best-effort or guaranteed; could be nailed-up or resilient
- Load-balance or leave idle until needed
- Ready to go today, though we only have pricing for the guaranteed nailed-up case
- Example: NREN

To enable a flexible topology for NLR layer3
- Best-effort, private, temporary, experimental
- We have 8 layer3 nodes, but the topology between them can be made much more interesting by creating various connections over the layer2 network
- Enables layer3 experimentation
To provide members with a second path into the NLR layer3 network
- Point-to-point, private, permanent, experimental
- Connect to a second node on the layer3 backbone
- Load-balance or leave idle until needed
- Included in membership

Temporary connections for special projects
- Guaranteed, private, temporary, stable
- For remote instrumentation, where the member only has the remote resource reserved for a limited window
- For conferences, demos, and other special events
- Provides a low-latency/jitter path if needed
- Nailed-up if latency is critical, probably resilient if not
- Could be point-to-point or multipoint
- Technically, this is possible today, but we have no pricing model for it

Bootstrapping circuit-like research
- Point-to-point, private, guaranteed, temporary, nailed-up, experimental
- To enable a researcher to get started while waiting for funding or provisioning of a layer1 circuit
- Similar to a special event, but more experimental, and probably a stronger need for it to be nailed-up
- Technically, this is possible today, but we have no pricing model for it
Provide a control-plane network for optical experiments
- Permanent, resilient, experimental
- A topology could be created for the out-of-band management network needed for some dynamic optical networking experiments (GMPLS, etc.)

Cluster/Grid LAN
- Multipoint, private, guaranteed, experimental
- To enable remote clusters to appear on the same LAN
- It is not known whether spanning tree would be wanted
- It could evolve into a more production-like requirement
- Technically, this is possible today, but we have no pricing model for it
Experiment directly with Layer 2
- Could be of any type (experimental, obviously)
- Web-based provisioning, direct user requests, etc.
- Concern about interaction with more production-like requirements

Layer 3 Network "PacketNet" [map of Cisco CRS-1 router sites (SEA, LAX, DEN, HOU, CHI, ATL, NYC, WDC) with backhauls from ALB, BAT, RAL, JAC, TUL, PIT; 10GE waves; yellow sites are installed]

Layer 3 installation status
All layer3 router installations are complete. All interconnections between layer3 backbone routers are now complete except New York City (both directions).

Layer3 connections are up to:
- NLR member sites: MATP, UCAR, Duke/NC, PNW, CENIC, PSC
- Peer networks: CAnet, USGS, Transpac
- Coming peers: ESnet, DREN, StarLight, NREN
- Exchange points: StarLight (now), Pacific Wave (signed), others soon

PacketNet Peering Principles
Very simple: "AUP-free"

Layer3 Peering Details: Member Connections
- Prefix lists are used, with no approval process for updates
- The only routes NLR will normally prevent are:
  - Bogons and private addresses
  - Transit of other upstream providers
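The bogon/private-address screening described above can be sketched as a simple prefix check. This is an illustrative model only, not NLR's actual filter configuration; the bogon list here is a partial, assumed set of well-known martian ranges.

```python
import ipaddress

# Illustrative martian/private space that a route filter of this kind
# would typically reject (assumed list, not NLR's real configuration).
BOGONS = [
    ipaddress.ip_network(n)
    for n in (
        "0.0.0.0/8",       # "this" network
        "10.0.0.0/8",      # RFC 1918 private
        "127.0.0.0/8",     # loopback
        "169.254.0.0/16",  # link-local
        "172.16.0.0/12",   # RFC 1918 private
        "192.168.0.0/16",  # RFC 1918 private
        "224.0.0.0/3",     # multicast and reserved
    )
]

def accept_prefix(prefix: str) -> bool:
    """Accept an announced prefix unless it falls inside bogon/private space."""
    net = ipaddress.ip_network(prefix, strict=False)
    return not any(net.subnet_of(bogon) for bogon in BOGONS)
```

In a real deployment this logic lives in router prefix-lists rather than application code, but the check is the same: any announcement covered by a bogon or RFC 1918 range is dropped before it enters the routing table.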

Layer3 Peering Details: Non-Member R&E Peers
- ASN lists are used
- The only routes NLR will normally prevent are bogons and private addresses
- Other than that, NLR will tailor its peering to meet the expectations and needs of each peer

Layer3 service baseline (see also PacketNet Technical Guide)
Each member gets two routed connections:
- A "local" 10GE
- A VLAN backhauled to a 2nd node
BGP peering with the NLR L3 network:
- IPv4 unicast
- IPv4 multicast (MBGP/PIM/MSDP)
- IPv6 unicast (multicast later)
An 'experimental' (changeable, changing) L3 network
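The two routed connections give each member a primary/backup pair: traffic prefers the local 10GE, and the backhauled VLAN takes over if it fails. A minimal sketch of that selection logic, with hypothetical names and preference values that mimic BGP LOCAL_PREF (they are not NLR's configuration):

```python
from dataclasses import dataclass

@dataclass
class Connection:
    name: str
    local_pref: int  # higher is preferred, as with BGP LOCAL_PREF
    up: bool = True

def best_path(connections):
    """Pick the usable connection with the highest preference, if any."""
    live = [c for c in connections if c.up]
    return max(live, key=lambda c: c.local_pref) if live else None

# Hypothetical member setup: local 10GE preferred over the backhaul.
primary = Connection("local 10GE", local_pref=200)
backup = Connection("VLAN backhauled to 2nd node", local_pref=100)
```

When `primary.up` goes false, `best_path` falls back to the backhauled VLAN, which is the "load-balance or leave idle until needed" behavior described earlier in the deck.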

Layer 3 coming services
- Likely eventual logical routers
- More 1GE options
- More 10G options
- Pre-emptable connections
- MPLS
- More user control: scheduling, testing, etc.
- User access to measurement data

NOC webpages for Layer2/Layer3 (noc.nlr.net)
Tools:
- Proxy
- Weathermap (layer2, layer3)
- Utilization
Documents, notably:
- FrameNet Technical Guide
- PacketNet Technical Guide
- PacketNet BGP Communities

NLR User Resources