1
Dan Magorian, Director of Engineering & Operations. What's MAX Production Been Up to? Presentation to the MAX membership, Fall 08 Member Meeting
2
MAX's new "baby picture": after Phase 3 of "the Big Move" in spring 09, replacing the legacy dwdm systems with a single unified carrier-class Fujitsu 7500 dwdm system reaching what are now 9 pops
3
What has the Production Side of MAX been doing since the Spring member meeting?
Finished Phase 2 (the DC ring) of the "Big Move" in Sept!
– Had expected to deploy the Fujitsu FW7500s replacing the 2000-vintage Luxn/Zhone dwdm system by the end of May.
– Staged and tested in the lab; fiber jumper issues to UMD slowed it down a week or two.
– Mainly, ran into a snag with a Fujitsu software bug: SONET timing for the Flexponder modules (newer versions of the Muxponder modules used in Phase 1) was not propagated from the head timing source. This required us to arrange a timing source at each DC ring location (thanks NWMD and GWU). Finally gave up on the Flexponders and went back to the tried-and-true Muxponders. Lesson learned: trial new modules first!
– The actual Sunday installation went cleanly; it was the only time we've needed to do a multihour "hard cut" instead of a few minutes.
4
What has the Production Side of MAX been doing since the Spring member meeting? (cont'd)
Phase 3: Extension of the VA ring to Ashburn and Reston
– MAX has used 1G and 10G lambdas to Equinix Ashburn courtesy of Howard Hughes Medical Institute. This has worked out very well, but we have been unable to get more for other customers and projects. We also needed to get our own rack at Equinix, out of shared colo.
– Because of good Fujitsu pricing and a standardized single-rack design with a small aggregator switch, the buildout payback is good.
– Fibergate turned out to be the winner on the diverse fiber ring quotes.
– Contract negotiations have been underway since the summer, but are finally complete and the order is placed. Expect installation in late Feb, operational in March.
– Not really part of Phase 3, but Juniper EX switches are to be installed in Jan, replacing the old Dell aggregator switches.
5
MAX VA pop locations
6
How did MAX enter into a colo deal at CRG West's new Reston Exchange for MAX members?
There has been significant pent-up demand for quality low-cost DR/colo in the region for many years, increasing with each disaster. We checked out 16 possibilities over 3 years to try to find a colo provider partner for MAX members whose location passed the screen of available space, power, cooling, low cost, access, ability to deliver MAX lambdas, and "compatible institutional mission."
While we were doing the engineering to extend our dwdm system to Equinix in Ashburn VA, it turned out that CRG West, operators of One Wilshire in LA and other colo facilities around the country, had acquired the old AOL colo facility at 12100 Sunrise Valley Drive in Reston VA.
The critical question then was: could we strike a favorable rate for our members at a time when commercial facilities like Level3's are becoming extremely expensive? CRG West prices their space so that you pay primarily for power/cooling costs, and they are striving to keep prices low for this class of facility.
7
What are the advantages of using the CRG West colo through MAX?
MAX protected lambdas have been very successful, and offer one of the best ways to interconnect data centers with dedicated layer 1 dwdm, making remote and local clusters much better integrated than with traditional layer 3 shared routing, as well as reducing security concerns.
Other gigapops with existing colo/DR centers, such as NyserNet in NY, have reported very good member uptake and success with private lambdas to their colo centers, exceeding research lambdas.
Many people actually want relatively close colo/DR space, to allow use of local staff instead of expensive "smart hands" service and to reduce staff travel time. There are many MAX members for whom Reston is sufficiently far away. In addition, we may be able to arrange reciprocal arrangements with other colo operators such as NyserNet once we are rolling.
8
The "Big Move" continues: Phase 4 of the Fujitsu dwdm cutover
– Currently MAX and BERnet have a 10G lambda courtesy of USM's MRV dwdm system. This has worked out well, including sharing their Force10 switches for participant lambda backhaul after the MAX router in BALT was retired.
– But we would like to ring out research fiber and be able to offer MAX protected lambdas to BALT just like the other pops; the current research fiber from the State of MD is unprotected.
– Found a diverse path through a fiber broker; dark fiber on the BALT-DC route is very hard to get, and is no longer available from Level3 or Qwest.
– Likely to add 111 Market Street Baltimore if we can get colo.
– Budgeted for the fiber in FY09; buildout likely next summer.
11
Other Activities
– Provisioned a NOAA 10G lambda from CO via MCLN back to CLPK and on to NOAA Silver Spring. Continuing to work with NOAA Silver Spring and Colorado on further planning and support for the NOAA research net.
– With an additional 200M of Qwest ISP to UMD, and soon 500M to USM to tide them over, we are now load balancing across both Qwest connections (see the sketch below). 500M of TransitRail available, very economical.
– Held IPv6 workshops; so many people signed up that we had to hold two back-to-back, with Bill Cerveny invited in to teach.
– Lots of fiber work: helping NLM engineer diverse 10G connections, and helping JHU bring up a second fiber path and lambdas. We're here to help folks with fiber issues (fiber tools available) and with expertise in designing and procuring paths, as well as at layers 2 and 3. Dave, Quang, Matt, Chris Ts
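A minimal Python sketch of the per-flow hashing idea behind balancing traffic across two upstream connections; this is not MAX's actual router configuration, and the upstream labels are hypothetical. The point is that every packet of a given flow sticks to one connection (avoiding reordering) while the aggregate load spreads over both.

```python
import hashlib

# Hypothetical labels for the two upstream Qwest connections.
UPSTREAMS = ["qwest-umd", "qwest-usm"]

def pick_upstream(src_ip: str, dst_ip: str, proto: int,
                  src_port: int, dst_port: int) -> str:
    """Hash the flow's 5-tuple and map it onto one of the upstreams."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return UPSTREAMS[digest % len(UPSTREAMS)]

if __name__ == "__main__":
    # Different flows may land on different upstreams, but every packet
    # of a given flow always maps to the same one.
    print(pick_upstream("10.1.1.5", "192.0.2.10", 6, 52000, 443))
    print(pick_upstream("10.1.2.7", "198.51.100.20", 6, 52001, 80))
```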
12
Closing Thought: Beyond 10G
Although MAX just did 40G interop testing, OC768 SONET is still far too expensive at $600k/interface. Affordable 100G Ethernet is still a long way off due to a standards fight that is delaying it. But growing requirements and traffic demands aren't waiting for new interfaces to become affordable.
In the R&E world, we have been fortunate that 10G interfaces have been reasonably economical and provide good headroom for current traffic levels. MAX has had two 10G participant connection upgrades recently, with a 3rd on the way. But many commercial providers have long outstripped single 10Gs on their backbones, and have had to link aggregate (LAG) many 10Gs together to get capacity for many flows. We are now starting to see R&E requirements for e.g. 20G paths (JHU grant) and R&E backbones needing to LAG 10Gs. How soon before we see single high-performance flows over 10G?
13
Thanks! magorian@maxgigapop.net