Dan Magorian Director of Engineering & Operations What’s MAX Production Been Up to? Presentation to MAX membership Fall 08 Member Meeting.



MAX’s new “baby picture”: after Phase 3 of “the Big Move” in spring ’09, replacing legacy dwdm systems with a single unified carrier-class Fujitsu 7500 dwdm system now reaching 9 pops

What has the Production Side of MAX been doing since the Spring member meeting?
Finished Phase 2 (DC ring) of the “Big Move” in Sept!
–Expected to have deployed Fujitsu FW7500s to replace the vintage Luxn/Zhone dwdm system by end of May.
–Staged in the lab for testing; fiber jumper issues to UMD slowed it down a week or two.
–Mainly, ran into a snag with a Fujitsu software bug with sonet timing for Flexponder modules (newer versions of the Muxponder modules used in Phase 1): timing from the head timing source was not being propagated, requiring us to arrange a timing source at each DC ring location (thanks NWMD and GWU). Finally gave up on Flexponders and went back to tried-and-true Muxponders. Lesson learned: trial new modules first!
–The actual Sunday installation went cleanly; this was the only time we’ve needed to do a multihour “hard cut” instead of a few minutes.

What has the Production Side of MAX been doing since the Spring member meeting? (cont’d)
Phase 3: Extension of VA ring to Ashburn and Reston
–MAX has used 1G and 10G lambdas to Equinix Ashburn courtesy of Howard Hughes Med Inst. This has worked out very well, but we have been unable to get more for other customers and projects. We also needed to get our own rack at Equinix, out of shared colo.
–Because of good Fujitsu pricing and a standardized single-rack design with a small aggregator switch, the buildout payback is good.
–Fibergate turned out the winner on diverse fiber ring quotes.
–Contract negotiations have been underway since the summer, but are finally complete and the build is ordered. Expect installation in late Feb, operational in March.
–Not really part of Phase 3, but Juniper EX switches are to be installed in Jan, replacing the old Dell aggregator switches.

MAX VA pop locations

How did MAX enter into a colo deal at CRG West’s new Reston Exchange for MAX members? There has been significant pent-up demand for quality low-cost DR/colo in the region for many years, increasing with each disaster. We checked out 16 possibilities over 3 years to try to find a colo provider partner for MAX members whose location passed the screen of available space, power, cooling, low cost, access, ability to deliver MAX lambdas, and “compatible institutional mission.” So while we were doing the engineering to extend our dwdm system to Equinix in Ashburn VA, it turned out that CRG West, operators of One Wilshire in LA and other colo facilities around the country, had acquired the old AOL colo facility at Sunrise Valley Drive in Reston VA. The critical question then was, could we strike a favorable rate for our members at a time when commercial facilities like Level3’s are becoming extremely expensive? CRG West uses a pricing model in which you pay primarily for power/cooling costs, and is striving to keep prices low for this class of facility.

What are the advantages of using the CRG West colo through MAX? MAX protected lambdas have been very successful, and offer one of the best ways to interconnect data centers with dedicated layer 1 dwdm, making remote and local clusters much better integrated than with traditional layer 3 shared routing, as well as reducing security concerns. Other gigapops with existing colo/DR centers such as NyserNet in NY have reported very good member uptake and success with private lambdas to their colo centers, exceeding research lambdas. Many people actually want relatively close colo/DR space, to allow use of local staff instead of expensive “smart hands” service and to reduce staff travel time; for many MAX members, Reston is sufficiently far away. In addition, we may be able to arrange reciprocal arrangements with other colo operators such as NyserNet once we are rolling.

The “Big Move” continues
Phase 4 of the Fujitsu dwdm cutover
–Currently MAX and BERnet have a 10G lambda courtesy of USM’s MRV dwdm system. This has worked out well, including sharing their Force10 switches for participant lambda backhaul after the MAX router in BALT was retired.
–But we would like to ring out research fiber and be able to offer MAX protected lambdas to BALT just like other pops; the current research fiber from the State of MD is unprotected.
–Found a diverse path through a fiber broker; dark fiber on the BALT-DC route is very hard to get, no longer available from Level3 or Qwest.
–Likely to add 111 Market Street Baltimore if we can get colo.
–Budgeted for fiber in FY09, likely buildout next summer.

Other Activities
–Provisioned a NOAA 10G lambda from CO via MCLN back to CLPK and on to NOAA Silver Spring. Continuing to work with NOAA Silver Spring and Colorado on further planning and support for the NOAA research net.
–With an additional 200M of Qwest ISP to UMD and soon 500M to USM to tide them over, now load balancing across both Qwest connections. 500M of TransitRail available, very economical.
–Held IPv6 workshops; so many people signed up that we had to hold two back-to-back, with Bill Cerveny invited in to teach.
–Lots of fiber work helping NLM engineer diverse 10G connections and helping JHU bring up a second fiber path and lambdas. We’re here to help folks with fiber issues (fiber tools available) and with expertise designing and procuring paths, as well as at layers 2 and 3. Dave, Quang, Matt, Chris Ts

Closing Thought: Beyond 10G
Although MAX just did 40G interop testing, OC768 SONET is still far too expensive at $600k/interface. Affordable 100G ethernet is still a long way off due to a standards fight that keeps delaying it. But growing requirements and traffic demands aren’t waiting for new interfaces to become affordable. In the R&E world, we have been fortunate that 10G interfaces have been reasonably economical and provide good headroom for current traffic levels. MAX has had two 10G participant connection upgrades recently, with a third on the way. But many commercial providers have long outstripped single 10Gs on their backbones, and have had to link aggregate (LAG) many 10Gs together to get capacity for many flows. We are now starting to see R&E requirements for e.g. 20G paths (JHU grant) and R&E backbones needing to LAG 10Gs. How soon before we see single high-performance flows over 10G?
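As an aside on why LAG helps aggregate capacity but not a single flow: standard link aggregation hashes each flow onto one member link to preserve packet ordering, so many flows spread across the bundle while any one flow stays pinned to a single 10G member. The following Python sketch (a hypothetical illustration, not any vendor’s implementation; the SHA-1-based hash and link names are assumptions) shows the idea:

```python
# Minimal sketch of hash-based LAG member selection (illustrative only).
import hashlib

MEMBER_LINKS = ["10G-member-0", "10G-member-1", "10G-member-2", "10G-member-3"]

def pick_member(src_ip, dst_ip, src_port, dst_port, proto):
    """Hash a flow's 5-tuple onto one LAG member (hypothetical hash scheme)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int(hashlib.sha1(key).hexdigest(), 16)
    return MEMBER_LINKS[digest % len(MEMBER_LINKS)]

# Many distinct flows (different source ports) spread across the members,
# so aggregate capacity scales with the number of flows...
for port in range(4):
    print(pick_member("10.0.0.1", "10.0.0.2", 40000 + port, 5001, "tcp"))

# ...but every packet of one flow hashes to the same member, so a single
# high-performance flow is still limited to one 10G link.
print(pick_member("10.0.0.1", "10.0.0.2", 40000, 5001, "tcp"))
```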

Thanks!