A new national network
Andrew Mackarel, Andrea Tognola, Dave Wilson
Agenda
–Evolution of the network
–Procurement process
–The new hardware and layout
–Transition and timeline
Evolution of the network
Star topology
First IP network: star topology
–Single core router in UCD (then a second for BGP)
–Typical speeds: 64k-2M to clients and peers
(Diagram: clients and peers including UCC, UCD, DIT, TCD, CIR, Forbairt, VCIL, MCI and Ebone around the HEAnet core)
National Backbone
–Bring the network out of Dublin
–Avoid duplication & cost
–Add resilience
–Tendered for PoP sites, won by client sites
–Typical speeds: clients up to ~10M, upstreams N*2M
Multi-PoP backbone
This model has scaled to 1Gbit/s
Overlaid on the National Backbone Extension, a dark fibre network providing Ethernet point-to-point
–Bring the network to the client
–Avoid duplication
–Reduce cost
–Increase flexibility
–Add resilience
These goals now addressed by NBE!
Scaling limit
–>1Gbit/s requires new hardware
–Software upgrades long and intensive
–Feature upgrades require new hardware…
Collapsed backbone
2008 onward, the new Best Current Practice: concentrate, then duplicate
–Concentrating reduces the hardware, and with it failure incidence and scaling cost (illustrated in the sketch after this slide)
–Duplicating provides resilience, and keeps the network consistent
Model now being followed by many NRENs
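The "concentrate, then duplicate" reasoning can be shown with rough arithmetic. The sketch below is illustrative only: the per-router availability figure and the five-router path are assumptions, not measured HEAnet numbers.

# Minimal sketch: why concentrating and then duplicating can improve availability.
# The per-router availability figure below is an assumption for illustration only.

ROUTER_AVAILABILITY = 0.999  # assumed availability of a single router

def chain_availability(n_routers, a=ROUTER_AVAILABILITY):
    """Traffic crossing n routers in series works only if all of them are up."""
    return a ** n_routers

def duplicated_pair_availability(a=ROUTER_AVAILABILITY):
    """A client homed to two routers stays connected unless both are down."""
    return 1 - (1 - a) ** 2

if __name__ == "__main__":
    print("Path across 5 PoP routers:", chain_availability(5))           # ~0.995
    print("Collapsed, duplicated pair:", duplicated_pair_availability())  # ~0.999999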
Procurement
Procurement
Chose procedure (competitive dialogue, open, restricted): PIN notice followed by open procedure
–PIN notice sought input on type of equipment and architecture, and set expectations
–Proceeded to open RfT based in part on the input received in response to the PIN
RfT issued December 2006
Procurement
–Responses arrived February 2007
–Thorough evaluation including on-site testing of candidate equipment
–Single vendor solution chosen: biggest single contract in HEAnet
–Extensive environmental requirements checked with colo sites
–Contract signed August 2007
–Delivery October 2007
–Implementation ongoing as we speak
To implementation
Detailed transition plan being developed
–HEAnet
–Lan Communications & Cisco
–Being customised with each client's input
Starting with HEAnet internal infrastructure
Introducing the project to clients now
–Contact taking place to agree individual transition plans over the coming weeks
The hardware and new architecture
Hardware
Chosen equipment: Cisco CRS-1
Scalability
–Up to 100Gbit/s interfaces
–1.5+ Terabits per second total
Full support for current and future services
New operating system platform: IOS-XR
Features
Support for critical new features
–IPv6 multicast, 4-byte ASN, … (4-byte ASN notation is sketched after this slide)
Excellent high availability
–Software is modular
–Possibility of hitless upgrades!
Top of the field for future growth
–10Gbit/s connections a matter of course
–Scale to 100Gbit/s peer, multi-10Gbit/s client
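As a small aside on the 4-byte ASN feature listed above: 4-byte AS numbers extend the range beyond 65535 and can be written either as a plain integer ("asplain") or in dotted form ("asdot"). The sketch below only illustrates the notation; the example AS numbers are arbitrary.

# Sketch: 4-byte ASN notation (RFC 5396). Example numbers are illustrative only.

def asplain_to_asdot(asn: int) -> str:
    """Render an AS number in asdot notation: values above 65535 become high.low."""
    if not 0 <= asn <= 0xFFFFFFFF:
        raise ValueError("AS number must fit in 32 bits")
    return f"{asn >> 16}.{asn & 0xFFFF}" if asn > 0xFFFF else str(asn)

print(asplain_to_asdot(1213))    # "1213"  (a 2-byte ASN is unchanged)
print(asplain_to_asdot(196629))  # "3.21"  (a 4-byte ASN in asdot form)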
Network Architecture
Two routers, both performing core and access functions
–One in Citywest, one in Kilcarbery Park
Where possible, the client gets connectivity via NBE to both routers
–Primary/backup connections (a dual-homing sketch follows this slide)
–Resilience a function of the underlying NBE
Ethernet only, burst up to 10Gbit/s per interface
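A minimal sketch of the dual-homed idea above: the client stays reachable as long as at least one of the two core routers, and a path to it, is up. The names and topology below are illustrative assumptions, not the actual NBE layout.

# Sketch: dual-homing to two core routers. Topology and names are illustrative.
from collections import deque

def reachable(links, src, dst):
    """Breadth-first search over an undirected adjacency list."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in links.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Client homed to both core routers over (hypothetical) NBE paths.
topology = {
    "client":        ["cr-citywest", "cr-kilcarbery"],
    "cr-citywest":   ["client", "cr-kilcarbery", "internet"],
    "cr-kilcarbery": ["client", "cr-citywest", "internet"],
    "internet":      ["cr-citywest", "cr-kilcarbery"],
}

print(reachable(topology, "client", "internet"))  # True with both routers up

# Simulate losing the Citywest router: the backup path keeps the client connected.
degraded = {k: [n for n in v if n != "cr-citywest"]
            for k, v in topology.items() if k != "cr-citywest"}
print(reachable(degraded, "client", "internet"))  # still True via Kilcarbery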
Bandwidth scaling
(Chart: traffic growth up to today against the effective limit of the old equipment)
10Gig connections to INEX, GÉANT and the general Internet now being commissioned
The transition and timelines
Transition
–Careful consultation with each client
–Minimise disturbance to connections
–Where possible, bring up new connections before deleting old; BGP policy will assist in this (thank you!)
–Preferentially route traffic over the new network for a test period (see the sketch after this slide)
–Can revert by shutting down new links
–Transition plan customised for each client's needs
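To make the "prefer the new network for a test period, revert by shutting the new links" step concrete, here is a minimal sketch of BGP-style path selection by local-preference. It is illustrative only; prefix handling, names and preference values are assumptions, and the actual policy will be agreed with each client.

# Sketch: steering traffic with local-preference during the test period.
# Next-hop names and preference values are illustrative assumptions.

def best_path(paths):
    """Pick the path with the highest local-preference (compared early in BGP best-path selection)."""
    return max(paths, key=lambda p: p["local_pref"]) if paths else None

paths = [
    {"via": "old-backbone", "local_pref": 100},  # existing connection
    {"via": "new-crs1",     "local_pref": 200},  # new link, preferred for the test
]

print(best_path(paths)["via"])                   # "new-crs1"

# Reverting is just shutting down the new link, i.e. withdrawing its path:
remaining = [p for p in paths if p["via"] != "new-crs1"]
print(best_path(remaining)["via"])               # back to "old-backbone"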
Transition
NBE resilience rollout in parallel
Clients not directly connected to NBE should still peer with both routers
–BGP preferred, to protect from a fibre cut or a reboot of one or other router
All clients now transitioning to Ethernet
–Extra flexibility of design & speed over ATM and serial links
–Connect directly with NBE
–Use VLANs to provide a second peering session (illustrated after this slide)
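A minimal sketch of carrying the second peering session on a separate VLAN subinterface. The VLAN IDs, addresses, interface names and AS numbers below are illustrative assumptions written in Cisco IOS-like syntax; the real values come from the individual transition plan agreed with HEAnet.

# Sketch: a second peering session carried on a separate VLAN subinterface.
# VLAN IDs, addresses, interface and AS numbers are illustrative only.

def vlan_peering(vlan, local_ip, peer_ip, peer_name, local_as, peer_as):
    """Emit illustrative IOS-style config for one dot1Q subinterface and BGP neighbour."""
    return "\n".join([
        f"interface GigabitEthernet0/1.{vlan}",
        f" description Peering to {peer_name} (VLAN {vlan})",
        f" encapsulation dot1Q {vlan}",
        f" ip address {local_ip} 255.255.255.252",
        "!",
        f"router bgp {local_as}",
        f" neighbor {peer_ip} remote-as {peer_as}",
    ])

# Primary session towards one core router, backup towards the other.
print(vlan_peering(201, "192.0.2.2", "192.0.2.1", "cr-citywest",   64500, 1213))
print(vlan_peering(202, "192.0.2.6", "192.0.2.5", "cr-kilcarbery", 64500, 1213))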
Transition
Testing during November
–Gbit/s interconnect with the current network
Slow start with handpicked connections
–HEAnet internal infrastructure first (MNS - ftp.heanet.ie, videoconf, streaming)
–INEX and backup GÉANT connections
–One client of each technical category, by agreement, with full support from all partners
Plan: production traffic by Christmas
–Transition in earnest starts in January 2008
–Majority by March 2008, decommission by June
Impact
Most transitions should be hitless or near-hitless
–We will discuss each connection in detail
If HEAnet manages your CPE
–We will contact you for scheduling
–We will manage the transition
If you manage your own CPE
–We will contact you for scheduling and planning of the changes
Thank you! Questions?
Bonus slides
Timelines
–Now: Installation
–Late Nov: Acceptance tests
–End Nov: HEAnet services
–Early Dec: First clients and peers
–Mid Dec: Acceptance and freeze for the Christmas period
–Jan-Feb: Next 10 clients
–Feb-Mar: Remaining clients and completion
Tasks for transition
–Inform customer of the new network design
–Identify local liaison at the site
–Determine local requirements, software changes, patching
–Agree time schedule for parallel running/test
–Implement switchover to the new connection
–Confirm the new connection conforms to requirements
–Terminate the old connection
New IP Backbone Rollout phase Nov 2007 – June 2008
Current topology “Rednet”
(Diagram: core routers cr1-kp, cr1-cwt, cr1-gal, cr1-lim, cr1-cork; access routers ar1-tcd, ar1-dcu, ar1-cwt, ar2-cwt)
Rollout steps (diagram series, core and PE routers in Citywest and Kilcarbery):
–1) Interconnection with the core (Nov 07), over ESBT and NTL circuits
–2) Interconnection with Bluenet (Nov): IP link via Bluenet
–3) Rehome ESBT dark fibre (Dec)
–4) Peer with route reflectors RR1/RR2 (Jan 08), with and without Bluenet
–5) Dealing with ATM and commodity (Feb 08)
–Transition phase 2008: INEX, GÉANT, GBLX and new IP transit connections, ESBT, clients via Bluenet
New IP Backbone Client’s access setup overview
Client access setup examples (diagram series, IP link & BGP in each case):
–NUI Maynooth: client with telco connection to a central PoP (Eircom DWDM and BT circuits, new Bluenet path)
–DIT Aungier St (planned): client access via Bluenet, with the telco (Eircom ATM) circuit cancelled
–EPA, Wexford: singlehomed client with a leased line to Citywest, static routing
–UL: BGP client with full resilience; will use Bluenet (not resilient though)
–Limerick IT (IoT): access via the Limerick PoP (lim-sw1/LIM-PE1 trunk)
–Tipperary Institute, Thurles and Clonmel (IoT): converted leased lines (STM-1 and 2Mb/s)