“Client-side DWDM”: A model for next-gen Baltimore region optical fanout Dan Magorian Director of Engineering and Operations MidAtlantic Crossroads (MAX) Presentation to Joint Techs, Fermilab July 17, 2007

Here's the usual optical fanout nonstarter situation at most places
Almost all RONs have DWDM systems, either hard-provisioned with jumpers or with ROADMs. Hardly any campuses have DWDM systems, but many are fiber-rich. So most campuses figure they will bolt dark fiber onto RON lambdas to connect them to high-bandwidth researchers, if any actually materialize on campus as real demand with "check in hand". A few realize this may not scale and are thinking about white-light switches to conserve fiber. When they think about DWDM systems to carry lambdas to the edge, the cost is usually prohibitive, given their fiber resources and lack of perceived need for many DWDM system features.

So the Baltimore region wasn't much different from anyone else
Actual "check in hand" researcher 10G demand to JHU, UMBC, and others was uncertain until recently. The community had been well served by MAX and Internet2 high-performance layer 3 transit services. No one in Baltimore or DC had yet joined NLR, though MAX has been fanning out NLR lambdas across the region for other projects and customers for years. But recently, the Teraflow testbed and other projects appeared with actual needs for 10G lambdas to Baltimore researchers. There is also growing demand from less well-heeled researcher projects for VLANs over shared 10G lambdas, similar to NLR's FrameNet service. So, suddenly, Baltimore institutions had to get their act together.

Luckily, we had the resources needed for this
BERnet (the Baltimore Educational Region network) has a long history as a good forum for horse-trading assets and working mutual deals among its 7 participants: the state net, the university net, the libraries, and 4 universities. Many other regions have similar forums, but this level of cooperation is actually rather uncommon in the Mid-Atlantic, so BERnet is frequently touted as a good cooperative model. The participants had just built a cooperative DWDM regional ring the year before, run by the University System of Maryland (USM), and all 7 participants already had dark fiber lit with 1310 nm to the two main pop locations. MAX was already in the midst of procuring a 3rd-generation unified DWDM system to replace the 2 fragmented metro rings built earlier (more on that next time). The state net was willing to contribute a fiber spur to Baltimore, no longer used in the production net, for research needs.

[Diagram: high-level BERnet map, including the coming MIT-MAX RON-RON interconnection, which will mean at least 4 R&E paths to get 10G north: I2, NLR, Awave, and MIT. Nodes shown: BERnet DWDM at 6 St. Paul; MIT DWDM in Boston, Albany, NYC, and Baltimore; MAX DWDM at MCLN and CLPK; NLR and I2 lambdas; lambdas to Europe.]

[Diagram: BERnet regional layout including the new MAX DWDM. Sites: MCLN (NLR & I2), College Park, 6 St. Paul, 660 Redwood, UMBC, JHMI, JHU, MIT, 300 Lexington. Shows the 40-wavelength amplified line-side path, the 40-wavelength mux with ITU XFPs on the client-side path, and one transponder pair to pay for and provision end to end.]

Already BERnet had talked through L3 routed vs. L2 bypass tradeoffs
This is not news to anyone in this community, and it's the same most places: high-end researcher demand approximates circuits, with persistent large flows needing low latency. National R&E backbones like ESnet have moved to accommodate that by building circuit switching for the top flows. Upgrading campus L3 infrastructures (the "regular path") to accommodate this emerging demand involves very expensive router and switch replacement. The usual interim approach is for campuses to "special case" researchers with L2 bypass infrastructure until greater overall demand warrants 10G end to end for everyone.

Originally, the plan was to extend the USM DWDM system down to DC
But the new MAX DWDM wouldn't be the same system, which would have created an OEO boundary and the need for two transponder pairs per lambda. We didn't want to use "alien waves" across the core:
– problems with no demarc
– need for color coordination across diverse systems.
We wanted participants to understand the longer-term cost implications of 1 vs. 2 transponder pairs per lambda:
– One transponder pair instead of two means half the incremental cost to add 10G channels ($32K vs. $64K each). Over $1M saved if all 40 are populated.
– Transponders dominate costs over the long term!
– Unlike the state nets, we were within distance and didn't need OEO for regen.
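To make the transponder economics concrete, here is a minimal back-of-the-envelope sketch (Python) using only the per-pair and channel-count figures from the slide; everything else is straight arithmetic.

```python
# Back-of-the-envelope transponder economics, using the slide's figures.
TRANSPONDER_PAIR = 32_000          # $ per 10G channel with one transponder pair
CHANNELS = 40                      # fully populated 40-wavelength system

one_pair = CHANNELS * TRANSPONDER_PAIR         # single unified DWDM system
two_pairs = CHANNELS * 2 * TRANSPONDER_PAIR    # OEO handoff between two systems

print(f"1 pair/channel:  ${one_pair:,}")                   # $1,280,000
print(f"2 pairs/channel: ${two_pairs:,}")                  # $2,560,000
print(f"Saved at full fill: ${two_pairs - one_pair:,}")    # $1,280,000 (over $1M)
```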

So instead, I talked the BERnet participants into the DIY idea of "client DWDM"
Everyone is familiar with full-featured DWDM system features, and there are also lots of vendors selling low-cost bare-bones DWDM systems, e.g. Ekinops, JDSU, etc. The 3rd alternative: "do it yourself" minimal DWDM components (that aren't even systems) are perfect for regional fanout from full-featured systems. One XFP or SFP goes in the client (trib) pluggable port of the DWDM system, and the other side goes in the IT or researcher Ethernet switch, or even in a 10G NIC. Switch-to-switch also works. Instead of $30-60K DWDM cost per chassis for participants, the cost is only $22K for 40-lambda filter sets plus $6K per "colored" Finisar or equivalent XFP pair; 1G or 2.5G SFPs are under $1K per pair. Add $15K per pop for the newly released Aegis OLM-8000 optical power monitors, fed from 99/1 taps, to view 8 fibers. Lower costs mean even the small folks can play!
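A rough per-pop cost sketch using the slide's numbers; the split between shared costs (filters, monitor) and per-lambda costs (colored pluggables) is my reading of the slide rather than something it states explicitly.

```python
# Rough "client DWDM" per-pop cost model from the slide's numbers.
FILTER_SET_40CH = 22_000    # 40-lambda passive filter set (shared per fiber pair)
OPM_PER_POP     = 15_000    # Aegis OLM-8000 optical power monitor, per pop
XFP_PAIR_10G    = 6_000     # colored 10G XFP pair, per lambda
SFP_PAIR_1G     = 1_000     # colored 1G/2.5G SFP pair, per lambda (under $1K)
FULL_CHASSIS    = 30_000    # low end of the $30-60K full-DWDM-chassis alternative

def client_dwdm_cost(lambdas_10g: int, lambdas_1g: int = 0) -> int:
    """Shared filters + monitor, plus per-lambda colored pluggables, for one pop."""
    return (FILTER_SET_40CH + OPM_PER_POP
            + lambdas_10g * XFP_PAIR_10G
            + lambdas_1g * SFP_PAIR_1G)

print(client_dwdm_cost(1))       # 43000: first 10G lambda, filters and monitor included
print(client_dwdm_cost(4, 2))    # 63000: each added lambda is only a cheap pluggable pair
print(FULL_CHASSIS)              # 30000+: per full chassis, before any pluggables
```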

[Diagram: "client" and "line" DWDM provisioning example spanning MCLN (NLR & I2), College Park, 6 St. Paul, 660 Redwood, and JHU. Client side: instead of a "normal" 1310 nm optic, uses a "colored" DWDM XFP pair on an assigned wavelength (40 km reach, $6K). Line side: a transponder pair makes the 185 km reach with amps and holds the client XFP ($32K).]

Many commodity 40-channel 2U C-band passive parallel filters are available. We chose Bookham, who OEMs components for Adva's and others' DWDM systems, and had them packaged for us; we needed 99/1 tap ports for the optical power monitors. Beware: many DWDM system vendors mark up filters significantly, and some are still 32-channel. We could have also done 10-channel, but it's not much cheaper.
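For reference, a quick sketch of how 40 C-band channel wavelengths fall out of the standard ITU-T G.694.1 100 GHz grid anchored at 193.1 THz. The 100 GHz spacing and the particular 40-channel span are my assumptions; the real channel plan depends on the filters and XFPs actually purchased.

```python
# ITU-T G.694.1 100 GHz grid: f(n) = 193.1 THz + n * 0.1 THz.
# Assumption: 40 channels around the 193.1 THz anchor (roughly 192.1-196.0 THz).
C = 299_792_458  # speed of light, m/s

def channel_wavelength_nm(n: int) -> float:
    """Center wavelength in nm for grid channel offset n."""
    freq_hz = (193.1 + 0.1 * n) * 1e12
    return C / freq_hz * 1e9

for n in range(-10, 30):   # 40 channels
    print(f"n={n:+3d}  {193.1 + 0.1 * n:6.1f} THz  {channel_wavelength_nm(n):8.2f} nm")
```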

[Diagram: more "client DWDM" examples across 6 St. Paul, 660 Redwood, the JHU switch, 300 W. Lexington DWDM, and UMBC DWDM. XFP pairs sit on assigned wavelengths on 40-channel DWDM filters (40 km reach, $6K); the red lambda goes to DC, the blue to NYC, and the green locally to UMBC. All each participant fiber needs is a filter pair plus one optical power monitor at each pop, not a full DWDM chassis.]

This "client DWDM" approach has lots of advantages in many situations
It's a good interim approach where you have fiber and need to do more than just put 1310 nm over it, but don't have the budget or need for a full or even bare-bones DWDM system. It's easy to migrate existing 1310 nm services onto DWDM colored SFPs and XFPs, and it doesn't take a lot of optical expertise. Optical power monitors are very important for getting SNMP data so you can MRTG the power levels; remember, while some lambdas come from campus IT switches, some researcher lambdas from their own switches or NICs may not be visible to campus IT at all. Because one end sits in a full-featured DWDM transponder client port, you still have the benefit of support for 10G WAN PHY, SONET/SDH, OTU2 for international partners, Fibre Channel (DR sites), a GMPLS-enabled control plane, and dynamic provisioning for DCS service. Main caveat: Cisco and Ciena proprietary pluggable optics! Other folks are considering using it, e.g. NOAA and NASA/GSFC.
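Since the power monitors only pay off once their readings are actually graphed, here is a minimal sketch of the kind of glue that implies: poll a power-level OID with Net-SNMP's snmpget and emit MRTG's external-script format. The host, OID, and hundredths-of-dBm scaling are placeholders for whatever the monitor (e.g. the OLM-8000) actually exposes in its MIB, not its real interface.

```python
#!/usr/bin/env python3
"""Sketch of MRTG glue for an optical power monitor, assuming Net-SNMP's
snmpget is installed. POWER_OID is a placeholder, not the OLM-8000's real MIB;
the reading is assumed to be an integer in hundredths of a dBm."""
import subprocess

HOST = "opm.example.net"                  # hypothetical monitor address
COMMUNITY = "public"
POWER_OID = "1.3.6.1.4.1.99999.1.2.1"     # placeholder OID: check the monitor's MIB

value = subprocess.run(
    ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", HOST, POWER_OID],
    capture_output=True, text=True, check=True,
).stdout.strip()

dbm = int(value) / 100.0                  # assumed scaling

# MRTG external-script format: two values, an uptime line, and a target name.
# MRTG only graphs non-negative integers, so negative dBm readings need an offset.
offset = int(value) + 10_000              # e.g. -12.34 dBm -> 8766
print(max(offset, 0))                     # "in" value
print(max(offset, 0))                     # "out" value (same reading)
print("")                                 # uptime (unused)
print(f"optical power {dbm:.2f} dBm")     # target name
```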

We're deploying it in August and will let you know how it comes out! We'll forward info on the filters and optical power monitors we used to anyone interested. Thanks!