Redundancy for High-Performance Connectivity Dan Magorian Director of Engineering and Operations Mid-Atlantic Crossroads Internet2 Member Meeting September 21, 2005

What Do We Mean by Redundancy, Anyway? Hopefully not what the British mean by redundancy, which we Americans call being "laid off". From the user/customer perspective, it might be: "Whatever you net geeks need to do to keep my connection alive when Bad Things Are Happening. Please don't bother me with the details." From an admin and CIO point of view: "All that expensive stuff you keep asking us to pay for, that we're not convinced you really need, but since redundancy is a sacred cow we can't argue against it."

That's fine, but from a Techie Perspective: Traditionally, most RONs/gigapops and service providers in the industry have used layer 3 protection: Abilene has a partial mesh across the country, MAX has a ring around our region, most state nets have meshes, etc. Each segment on the ring or mesh terminates in a router. We usually pick up customers there, and can load balance using MPLS if needed. It's not just protection, but making the most use of expensive resources: why have paths sitting around unused just for protection? E.g., you might be able to postpone a 10G upgrade with multiple OC48 paths. So with DWDM serving these topologies, there are often more point-to-point drops than express lambda pass-throughs.
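
A minimal sketch of the idea behind ring/mesh L3 protection, in Python. This is purely illustrative; the POP names and links are hypothetical placeholders, not MAX's or Abilene's actual topology. It fails each ring segment in turn and checks that every POP can still reach the others the long way around, which is what the routers' rerouting relies on.

```python
from collections import deque

# Hypothetical ring: each segment terminates in a router at a POP.
ring_links = [("POP-A", "POP-B"), ("POP-B", "POP-C"), ("POP-C", "POP-D"), ("POP-D", "POP-A")]

def reachable(links, src):
    """Set of POPs reachable from src over the remaining segments (BFS)."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

all_pops = {p for link in ring_links for p in link}
for failed in ring_links:
    remaining = [l for l in ring_links if l != failed]
    ok = reachable(remaining, "POP-A") == all_pops
    print(f"segment {failed} down: all POPs still reachable = {ok}")
```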

What's Wrong with this Approach? The most obvious problem, well known to this community, is that not all applications are well served by best-effort IP. Some really work better with dedicated lambda resources. Also, it worked fine for years with OC48 networks, but is proving uneconomic at 10G: routers aren't getting cheaper, and optical is. Also, most customer circuits are now Ethernet, so much of the router functionality at the edges is no longer needed to pick up customer SONET, ATM, DS3s, etc. So the MAX production net has decommissioned routers at most POPs, and uses DWDM optical backhaul to fewer "Big Fat Routers" (Juniper T640s) in the middle. We can still only afford to give top-tier customers their own lambdas, so we use aggregation switches and L2 protection for customers below GigE.

So in the 10G world we're using L1/L2 protection. The traditional L3 approach with redundant router interfaces is just too expensive, at least from Juniper. So "switch routers" are winning: Force10, Cisco 6500, etc. The problem is that you lose functionality with non-carrier-class routers: e.g., can you do v6 multicast? Bigger question: is L2 the right layer to do protection at? Many "light paths" are being strung together out of a hybrid of L1 and L2: lambdas daisy-chained into 10G switches feeding VLANs over shared paths, which is not very robust. L1 protection can be economic; there are several schemes to trade off: one transponder laser with an optical protect switch, or one CPE interface with two transceiver lasers.
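
As a rough back-of-the-envelope illustration of why daisy-chained light paths are fragile: a path strung through several L1/L2 elements is a series system, so its end-to-end availability is the product of the per-element availabilities. The figures below are made-up placeholders, not measured values.

```python
# Series availability: every extra lambda segment or switch in the chain
# multiplies the end-to-end availability down.
def series_availability(availabilities):
    a = 1.0
    for x in availabilities:
        a *= x
    return a

# Placeholder per-element availabilities for a daisy-chained light path:
# lambda segment, 10G switch, shared VLAN path, another switch, another lambda.
chain = [0.9995, 0.9999, 0.999, 0.9999, 0.9995]
a = series_availability(chain)
print(f"end-to-end availability ~ {a:.5f} (~{(1 - a) * 8760:.1f} hours/year of downtime)")
```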

Generally, we've been promoting "high-9's" redundancy to customers: two customer routers, connected over two diverse fiber/lambda paths, to two MAX routers. We've had this topology for years with the University System of Maryland, and it has worked out really well for protection against failure of either side's router or path. We're working with a lot of larger customers, e.g. NIH, NASA, JHU, Census, HHMI, etc., to move to this topology. The problem is that it costs money and takes time, especially procuring diverse fiber paths to difficult locations. It still doesn't solve the problem of Abilene redundancy. MAX has actually had a redundant Abilene connection for years because we run NGIX/E, but to the same WASH router.
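
A quick sketch of the "high-9's" arithmetic, assuming the two legs fail independently (which is exactly what the diverse fiber/lambda paths are meant to buy you). All availability figures are placeholder assumptions, not measurements of any actual customer or MAX connection.

```python
def leg_availability(cust_router, path, max_router):
    # One leg is a series chain: customer router -> fiber/lambda path -> MAX router.
    return cust_router * path * max_router

single = leg_availability(0.9995, 0.999, 0.9995)   # roughly "three 9's"
dual = 1 - (1 - single) ** 2                       # down only if BOTH legs are down
for label, a in [("single router/path", single), ("dual router, dual path", dual)]:
    print(f"{label}: availability {a:.6f}, ~{(1 - a) * 525600:.0f} minutes/year down")
```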

So gigapops/RONs have been talking about backing each other up. There are lots of ways to do this, with varying costs: private circuits (e.g., NIH is getting an OC3 to Chicago); RON interconnects via procured fiber connections; interconnects using NLR lambdas, e.g. Atlantic Wave; and, just announced, Qwest backhaul to another Abilene node. We've also been talking with Qwest about a redundant ISP port offering to the Quilt for minimum cost. There are still lots of unanswered questions about provisioning for transit capacity and consolidation. How does MAX pay for increasing its Abilene connection to 10G to handle, e.g., PSC failover? Could we end up with fewer 10G-connected gigapops, and how will this affect Abilene NG finances?
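
One way to frame the transit-capacity question is a simple failover headroom check. The sketch below uses invented placeholder traffic numbers just to show the shape of the calculation: whether an existing connection can absorb a failed-over peer's traffic within a planning threshold, or whether that forces the 10G upgrade.

```python
OC48_GBPS = 2.488
TENGIG_GBPS = 10.0

own_peak_gbps = 1.8        # placeholder: our own peak toward Abilene
failover_peak_gbps = 1.5   # placeholder: traffic redirected from a failed-over peer
headroom = 0.8             # plan to keep the link under ~80% utilization

needed = own_peak_gbps + failover_peak_gbps
for name, capacity in [("OC48", OC48_GBPS), ("10GE", TENGIG_GBPS)]:
    fits = needed <= capacity * headroom
    print(f"{name}: need {needed:.1f} Gbps of {capacity * headroom:.2f} Gbps usable -> "
          f"{'fits' if fits else 'upgrade required'}")
```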

Ancient Chinese curse: “May you live in interesting times” We’ll see how it works out! Thanks!