What's MAX Production Been Up to? Dan Magorian, Director of Engineering & Operations. Presentation to the MAX membership, Fall 07 Member Meeting.



What has the production side of MAX been doing since the Spring member meeting? The last 6 months have seen one of the biggest network changeovers in MAX's history ("The Big Move"):
–We interviewed many potential dwdm system vendors.
–Ran a pseudo-RFP process with help from UMD procurement.
–Worked with our technical advisory committee (TAC, thanks guys!).
–Took field trips to the short-list vendors' sites for lab testing.
–Selected a vendor (Fujitsu Flashwave 7500).
–Got the PO expedited through UMD Purchasing in record time.
–Got delivery expedited through Fujitsu.
–Installed in the MAX lab, configured, and out to the field in one month.
–Including a lot of prep work, ripping out the Movaz dwdm systems, customer coordination, and cutover in 3 main pops.
Phase 1 of the Fujitsu Flashwave 7500 system has been selected, procured, and installed in record time!

Example: Baltimore DWDM installation timetable as of July. Slipped about a month, but was still very aggressive.
7/27 Parts arrived, breakers changed, Fujitsu 7500 PO cut.
8/3 Norwin and DaveI move Force10, install MRV 10G.
8/3 Dan and MAX folks have 10G lambda to McLean ready.
8/10 Dan and MAX engineers at Fujitsu training in TX.
8/17 Baltimore peerings moved to Force10s/T640s.
8/17 Bookham filters arrived, Aegis power monitors installed.
8/24 M40e, Dell, 6 SP colo rack 3 cleared.
8/24 (depends on Fujitsu ship date) Fujitsu gear staged in lab.
8/31 Fujitsu DWDM and switch installed in colo rack 3.
9/7-14 Move participant peerings to lambdas on Bookhams.
9/21 Mop-up of Aegis power monitor MRTGs, etc.

Where are we today vs. April? The 3 main pops, in McLean (LVL3), College Park, and 6 St Paul Baltimore, are almost complete. This included a major move of our main UMD pop:
–into the NWMD colo area of UMD bldg 224, room 0302
–moving out of the OIT colo area of UMD 224
–involved renovating the racks and moving over the MD DHMH gear
–tie fibers, unfortunately coupled with huge fiber contractor hassles
–NWMD folks (Greg and Tim) were very helpful.
Still to be done:
–tie fiber in 6 St Paul (we'll do it ourselves next week with our new Sumitomo bulk fusion splicer)
–finish up the BERnet dwdm filter cutovers
–Phase 2 dwdm, replacing the ancient 2000-vintage Luxn dwdm DC ring.
Very proud that we moved all the customer and backbone lambdas with only tiny amounts of downtime for the cuts! Especially want to thank Quang, Dave, Matt, and Chris!

In addition to the dwdm changeover, the other pop moves have also been a huge piece of work.
In McLean, we had to move the HOPI rack to Internet2's suite.
In Baltimore, we're sharing USM's Force10 switches and removed the BERnet Juniper M40e. Lots of cutover work.
Moved the NGIX/E east coast Fednet peering point:
–Procured, tested in the lab, and installed in the new CLPK pop.
–Including a lot of RMA problems with 10G interfaces.
–Lots of jumper work, config moves, and night cuts to get it done.
–Monday we just moved out the CLPK Dell customers.
–Next up: moving the lab T640 to the new CLPK pop, new jumpers, config move and consolidation.
We had intended to have new dense 1U Force10 or Foundry switches selected and installed:
–But found that their OSes were immature/unstable.
–Had to do an initial one in the Equinix pop to support a new 10G link.
–So we decided to consolidate onto Cisco 6509s for Phase 1 and postpone the Phase 2 switches till spring 08.

RR402 before: 48V PDUs, Dell switch and inverters, Force10 (top), Juniper M40e (bottom).

RR402 after: Fujitsu ROADM optical shelf (top), transponder 1-16 shelf (bottom) with 2 10G lambdas installed, Cisco 2811 "out-of-band" DCC router with console cables, and Fujitsu XG2000 "color translator" XFP switch. Still to be installed: transponder shelf to hold space.

RR202 after: Force10 E300 relocated (top, still needs to be moved up), 3 Bookham 40ch filters, Aegis dwdm power monitor, tie fiber panel to RR402

[Diagram: MAX Production topology, Spring 07. Pops: CLPK, MCLN, BALT, ASHB, ARLG, DCGW, DCNE. Connections to Internet2 NewNet, NGIX, old Abilene, National LambdaRail, Cogent ISP, Qwest ISP, and R&E nets via the T640 and M40E routers. Rings: 1. Original Zhone dwdm over Qwest fiber; 2. Movaz dwdm over State Md fiber; 3. GigE on HHMI dwdm over Abovenet fiber; 4. Univ Sys Md MRV dwdm, various fiber.]

[Diagram: MAX Production topology, Fall 07. Pops: CLPK, MCLN, 660 RW, 6 St Paul, ASHB, ARLG, DCGW, DCNE. Connections to Internet2 NewNet, NGIX, old Abilene, National LambdaRail, Cogent ISP, Qwest ISP, and R&E nets via the T640. Rings: 1. Original Zhone dwdm over Qwest fiber; 2. Fujitsu dwdm over State Md fiber; 3. 10G on HHMI dwdm over Abovenet fiber; 4. 10G on Univ Sys Md MRV dwdm. Also shown: new research fiber, 10G backbone, 10G lambdas.]

[Diagram: BERnet client-side DWDM approach. Sites: MCLN (NLR & I2), College Park, 6 St. Paul, 660 Redwood, UMBC, JHMI, JHU, MIT, 300 Lex. New 40-wavelength Fujitsu dwdm; 40-wavelength MUX with ITU XFPs on the client-side path; one transponder pair to pay for and provision end to end. (UMBC is connected at 6 SP; Sailor and Morgan have also joined.)]

[Diagram: more "client dwdm" examples between participants (6 St. Paul, 660 Redwood, 300 W Lexington, UMBC, JHU switch). XFP pairs run on assigned wavelengths over 40-channel dwdm filters (40 km reach, ~$6K). The red lambda goes to DC, blue to NYC, green locally to UMBC. All each participant fiber needs is a filter pair, not a full dwdm chassis.]
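As an aside on how colored XFPs line up with filter ports, here is a minimal sketch of the standard ITU 100 GHz grid arithmetic. The channel-number convention and the 40-channel range shown are illustrative assumptions, not the actual Bookham filter channel plan.

```python
# Minimal sketch: map ITU 100 GHz grid channel numbers to frequency and
# wavelength, the kind of lookup needed when assigning colored XFPs to ports
# on a 40-channel dwdm filter.  Channel numbering follows the common
# convention f = 190.0 THz + (channel / 10) THz; the 21-60 range below is
# illustrative, not MAX's actual channel plan.

C_NM_THZ = 299_792.458  # speed of light, in nm*THz

def itu_channel(ch: int) -> tuple[float, float]:
    """Return (frequency in THz, wavelength in nm) for a 100 GHz grid channel."""
    freq_thz = 190.0 + ch / 10.0
    wavelength_nm = C_NM_THZ / freq_thz
    return freq_thz, wavelength_nm

if __name__ == "__main__":
    for ch in range(21, 61):  # an illustrative 40-channel C-band plan
        f, wl = itu_channel(ch)
        print(f"ch {ch:2d}: {f:6.1f} THz  {wl:8.2f} nm")
```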

[Diagram: BERnet production L1/L2 topology as of November. USM Force10s at 6 St Paul and Redwood St, USM MRV dwdm, the BERnet production 10G lambda, the MAX 6509s at CLPK and (new) MCLN, the CLPK and MCLN T640s, MAX Fujitsu dwdm, the Phase 2 DC ring, BERnet participants, participant production lambdas, cross-connects, the new BERnet research DWDM, and new 10G lambdas.]

Next steps for the Big Move:
–The NGIX 6509 chassis just freed up; it moves next week to the MCLN installation, with a connecting 10G lambda. This is the start of MAX's Layer 2 service offering.
–USM is finishing optical work on the 660-6SP MRV dwdm link. Will put in the BALT production 10G to MCLN, which allows protected double peerings with the MAX T640s.
–40-channel filter installs: the 6 SP/660 RW ends are in (except Sailor); need to install/test the participant ends and transition fibers from 1310 to dwdm: 660-6SP, JHU/JHMI, UMBC, Sailor, Morgan. Also Pat Gary's group at CLPK. Then bring up the sfp/xfp lambdas and set up the Aegis power mons/web pages.
–Move of the CLPK Juniper T640 to the new pop is next: a big one.
–Hope to have all the pop moves done by end of Dec/early Jan. Happy to give tours!

Phase 2 of the dwdm system: in spring we will continue unifying the new Fujitsu dwdm system. Ironic:
–Phase 2 is replacing our original Luxn/Zhone system from 2000,
–while Phase 1 replaced the Movaz/Advas that came later.
–Those were reversed due to the need to get the main Ring 2 changed first.
–So now we're moving on to change over the original Ring 1.
–The Luxn/Zhone dwdm is now completely obsolete and really unsupported.
Still an issue with less traffic to DC: one 10G will hold L3 traffic to participants for a while.
–Very interested in hearing about/collaborating on DC lambda needs.
–There is an initiative with the Quilt for low-cost lambdas, which we're hoping will result in a Qwest offering to MAX and the rest of the community, feeding lambdas from the original DC Qwest pop.
Get involved with the TAC to hear details:

Participant Redundant Peering Initiative
[Diagram: USM, NIH, and JHU double-peered over Fujitsu dwdm to both the MCLN router and the CLPK router.]
We have been promoting this for some time, but now want to really emphasize that with the new dwdm infrastructure we can easily double-peer your campus to both routers for high-9s availability. 8 folks so far.
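As a back-of-the-envelope illustration of the "high-9s" claim, here is a minimal sketch of how two independent peering paths multiply down the unavailability. The 99.9% per-path figure is a made-up placeholder, not a measured MAX number.

```python
# Back-of-the-envelope sketch of why double-peering helps: with two
# independent paths, the combined unavailability is roughly the product of
# the per-path unavailabilities.  The 99.9% per-path figure below is a
# made-up placeholder, not a measured MAX number.

def combined_availability(per_path_availability: float, paths: int = 2) -> float:
    """Availability of N independent parallel paths (any one path up suffices)."""
    unavailability = 1.0 - per_path_availability
    return 1.0 - unavailability ** paths

if __name__ == "__main__":
    single = 0.999                      # roughly 8.8 hours of downtime per year
    double = combined_availability(single, paths=2)
    minutes_per_year = (1.0 - double) * 365 * 24 * 60
    print(f"single-homed: {single:.3%}, double-peered: {double:.5%} "
          f"(~{minutes_per_year:.0f} min/yr expected downtime)")
```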

RFC2547 VRFs (separate routing tables): expansion gives participants choice.
With the Internet2/NLR merger not happening, a converged network does not appear to be in the cards. This means business as usual: I2 and NLR acting as competitors dividing the community, trying to pull RONs to "their" side, increasingly acrimonious.
We intend to do our best to handle this for folks (to the extent possible) by playing in both camps and offering participants choice. So we have traded with VT/MATP for an NLR Layer 3 PacketNet connection, in addition to (not replacing) the I2 connection.
Technically, we have implemented this on the Juniper T640 routers as additional "VRFs": separate routing tables into which we can move participant connections. Dave Diller is the "chief VRF wrangler" and did the tricky "blend" work.

MAX has run VRFs for years, and now has 5 participant peering VLANs on the MAX infrastructure: an NLR VRF, an I2 & NLR blended VRF, an I2 VRF, a Qwest VRF, and a Cogent VRF.
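A toy sketch of the idea only, not the actual Junos configuration: each VRF is modeled as its own routing table, and a participant connection is attached to whichever one matches the service mix they choose. The five VRF names come from the slide above; the prefixes, next hops, and VLAN name are made up.

```python
# Toy sketch of the VRF idea: each VRF is its own routing table, and a
# participant's peering VLAN is simply homed into whichever VRF matches the
# service mix they have chosen.  This models the concept only; the real
# implementation is routing instances on the Juniper T640s, not Python.

from dataclasses import dataclass, field

@dataclass
class Vrf:
    name: str
    routes: dict[str, str] = field(default_factory=dict)  # prefix -> next hop

    def add_route(self, prefix: str, next_hop: str) -> None:
        self.routes[prefix] = next_hop

# The five participant peering VRFs named on the slide above.
vrfs = {name: Vrf(name) for name in
        ["NLR", "I2-NLR-Blended", "I2", "Qwest", "Cogent"]}

# Illustrative routes only (made-up next-hop labels).
vrfs["I2"].add_route("0.0.0.0/0", "internet2-newnet")
vrfs["NLR"].add_route("0.0.0.0/0", "nlr-packetnet")
vrfs["I2-NLR-Blended"].add_route("0.0.0.0/0", "blend-of-both")

# Moving a participant between VRFs is just re-homing their VLAN.
participant_vrf = {"example-campus-vlan-101": vrfs["I2"]}
participant_vrf["example-campus-vlan-101"] = vrfs["I2-NLR-Blended"]
print(participant_vrf["example-campus-vlan-101"].name)
```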

New service: Layer 2 vlans. Announced at the Spring member meeting. MAX has traditionally run Layer 1 (optical) and Layer 3 (routed IP) services.
–Only the NGIX/E exchange point is a Layer 2 service.
–There continues to be demand for non-routed L2 service (vlans), similar to NLR's FrameNet service.
This means that folks will be able to stretch private vlans from DC to McLean to Baltimore over a shared 10G channel, and also to provision dedicated ethernets. Next week we're moving a Cisco 6509 out to McLean early to get this started, and will interconnect the two main switches with a 10G lambda. Haven't figured out service costs yet; that will involve the TAC. Your ideas and feedback are welcome.
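For illustration only, a small sketch of the bookkeeping such a service implies: point-to-point vlans share the 10G lambda between pops, and a dedicated ethernet simply reserves its full capacity. The capacities, VLAN IDs, and requests below are hypothetical.

```python
# Toy sketch of the bookkeeping behind an L2 vlan service: point-to-point
# vlans share a 10G lambda between pops, and dedicated ethernets reserve
# capacity outright.  Capacities, VLAN IDs, and requests are illustrative.

TRUNK_CAPACITY_MBPS = 10_000  # the shared 10G lambda between two pops

class SharedTrunk:
    def __init__(self, capacity_mbps: int) -> None:
        self.capacity = capacity_mbps
        self.committed = 0
        self.vlans: dict[int, int] = {}  # vlan id -> committed Mbps

    def provision(self, vlan_id: int, mbps: int) -> bool:
        """Reserve capacity for a vlan; refuse if the trunk would be oversubscribed."""
        if self.committed + mbps > self.capacity:
            return False
        self.vlans[vlan_id] = mbps
        self.committed += mbps
        return True

trunk = SharedTrunk(TRUNK_CAPACITY_MBPS)
print(trunk.provision(101, 1_000))   # a 1G private vlan, DC to Baltimore: fits
print(trunk.provision(102, 10_000))  # a dedicated 10G ethernet won't fit on this trunk
```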

[Diagram: long-distance high-level view of the new dwdm system, meeting MIT in Baltimore. BERnet dwdm at 6 St Paul; MIT Nortel dwdm in BALT, NYC, Albany, and Boston; MAX dwdm at MCLN and CLPK; BERnet participants; NLR and I2 lambdas; lambdas to Europe.]

New service: flow analysis. We announced this in the spring. It turns out it would be useful to characterize traffic flows passing through the MAX infrastructure for participant use.
–We bought Juniper hardware assists and a big Linux PC with lots of disk to crunch and store a year's worth of data.
–Using the open-source flow-tools analysis packages.
Not snooping packets: packet contents are not collected by NetFlow, but it does record source/destination addresses and ports. So there are some confidentiality issues; it is not anonymized yet. We have done a prototype for people to look at. Contact us for the url and login/password if you're interested in testing.
Ideally, we would like a web interface where people put in AS numbers:
–Then they could look at flows to/from their institutions.
–They could also look at protocol (traffic type), top talkers, etc.
Interested in people's ideas and feedback during the afternoon.
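As a minimal sketch of the kind of per-AS, top-talker summarization described above, here is an example that aggregates exported NetFlow records. It assumes the flow-tools data has already been dumped to a CSV; the file path and column names are hypothetical, not MAX's actual export format.

```python
# Minimal sketch: aggregate exported NetFlow records by source AS and report
# top talkers.  Assumes the flow-tools data was dumped to a CSV with the
# columns named below; path and column names are hypothetical placeholders.

import csv
from collections import Counter

FLOW_CSV = "flows-2007-11.csv"  # hypothetical export: src_as,dst_as,srcport,dstport,bytes,...

def top_source_ases(path: str, n: int = 10) -> list[tuple[str, int]]:
    """Return the n source ASes responsible for the most bytes."""
    bytes_by_as: Counter[str] = Counter()
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            bytes_by_as[row["src_as"]] += int(row["bytes"])
    return bytes_by_as.most_common(n)

if __name__ == "__main__":
    for asn, total in top_source_ases(FLOW_CSV):
        print(f"AS{asn}: {total / 1e9:.1f} GB")
```

The same aggregation keyed on ports or protocol numbers would give the traffic-type and top-talker views mentioned on the slide.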

FlowTools Graph Examples

Peering services. There has been some interest in Internet2's Commercial Peering Service and/or CENIC's TransitRail. Right now we offer Cogent at $16/Mbps/month through GWU and Qwest at $28/Mbps/month, soon to drop by several dollars per Mbps per month based on Quilt contracts. Demand for Cogent has been slow.
We have been thinking about mixing one or both peering services in with the Cogent offering, which might enable us to drop the price to around $10/Mbps/month, depending on traffic mix. The problem is that demand has to be enough to cover the "peering club" and additional infrastructure costs. We tried this long ago with a direct Cogent connection, and not enough folks signed up.
Would people be interested in this? Hope to hear discussion and feedback in the afternoon sessions.
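A toy sketch of the blended-price arithmetic behind the "around $10" estimate: only the $16/Mbps/month Cogent figure comes from the slide; the peering-service cost and the traffic splits below are made-up placeholders.

```python
# Toy blended-price sketch: if a fraction of traffic can be offloaded onto a
# cheaper peering service (Internet2 CPS / TransitRail style), the effective
# per-Mbps price drops toward the target.  Only the $16/Mbps/month Cogent
# figure comes from the slide; the peering cost and traffic splits below are
# made-up placeholders.

def blended_price(transit_price: float, peering_price: float,
                  peering_fraction: float) -> float:
    """Average $/Mbps/month given the fraction of traffic served via peering."""
    return peering_fraction * peering_price + (1 - peering_fraction) * transit_price

if __name__ == "__main__":
    cogent = 16.0   # $/Mbps/month, from the slide
    peering = 3.0   # hypothetical per-Mbps cost of the peering service
    for frac in (0.3, 0.5, 0.7):
        print(f"{frac:.0%} via peering -> ${blended_price(cogent, peering, frac):.2f}/Mbps/month")
```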

Closing thought: the new Fujitsu dwdm system is part of a sea change.
It is not just needed to create a unified infrastructure across the MAX region and replace aging vendor hardware; it also lays the foundation for the dynamic circuit and lambda services activities that are happening in the community.
Want to get people thinking about the implications of dynamic allocation and dedicated rather than shared resources:
–Circuit-like services for high-bandwidth, low-latency projects.
–Not a replacement of "regular" IP routing, but an addition to it.
–Possible campus strategies for fanout. Need to plan for how we will deliver this, just as BERnet is doing, facilitating researcher use of it as it comes about.
People may say, "We don't have any of those applications on our campus yet."
–But you may suddenly have researchers with check in hand.
–E.g., we're in the planning phase now for the DC ring and need to forecast.
–Talk to us about what you're doing and thinking!

Thanks!