University of Oklahoma Network Infrastructure and National Lambda Rail
Why High Speed?
- Moving data (a rough transfer-time sketch follows below).
- Collaboration.
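A rough sense of why link speed matters for moving data (the dataset size, link rates, and 70% efficiency factor below are assumptions for illustration, not figures from the presentation):

    # Illustrative transfer-time arithmetic; size, rates, and efficiency are assumed.
    def transfer_hours(size_tb, link_gbps, efficiency=0.7):
        """Hours to move a dataset of size_tb terabytes over a link, allowing
        for protocol and host overhead via a simple efficiency factor."""
        bits = size_tb * 8 * 10**12                    # decimal TB -> bits
        return bits / (link_gbps * 10**9 * efficiency) / 3600

    for rate in (0.1, 1, 10):                          # 100 Mb/s, 1 Gb/s, 10 Gb/s
        print(f"2 TB at {rate} Gb/s ~ {transfer_hours(2, rate):.1f} hours")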
What’s Our Strategy?
- End-to-end “big picture” design.
- Constantly shifting target architecture.
- Consistent deployment methods.
- Supportable and sustainable resources.
How Do We Design for Today’s Needs and Tomorrow’s Requirements?
Cabling… Yesterday:
- Category 5
- Split-pair deployment for voice and data
- Cheapest vendor
- Poor performance for today’s demands
Cabling… (cont) Today:
- Category 6+
- Standardized on Krone TrueNet
- Gigabit capable
- Trained and certified installation team
- Issues with older installations still exist
Cabling… (cont) Tomorrow:
- Krone 10G
- 10-Gigabit capable
- Purchasing new test equipment
- Deployed at National Weather Center
- Upgrade of older installations to 6+ or 10G
Fiber Optics… Yesterday:
- Buy cheap
- Pull to nearest building
- Terminate what you need
Fiber Optics… (cont) Today:
- WDM-capable fiber
- Pull to geographic route node
- Terminate, test, and validate
- Issues with “old” fiber still exist
Fiber Optics… (cont) Tomorrow:
- Alternate cable paths
- Life-cycle replacement
- Inspection and re-validation
Network Equipment… Yesterday:
- 10 Mb/s or 10/100 Mb/s to the desktop
- 100 Mb/s or Gigabit to the building
- Buy only what you need (no port growth)
Network Equipment… (cont) Today:
- 10/100/1000 to the desktop
- Gigabit to the wiring closet
- 25% expansion space budgeted on purchase (sizing sketch below)
- PoE, per-port QoS, DHCP snooping, etc.
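The 25% figure is a sizing rule applied at purchase time; a minimal sketch of how it might drive a closet switch order (the closet names, active-port counts, and 48-port unit size are invented for the example):

    import math

    # Hedged sketch: budget 25% port headroom when ordering closet switches.
    # Closet names, port counts, and the 48-port unit size are invented examples.
    EXPANSION = 0.25
    PORTS_PER_SWITCH = 48

    def order(active_ports):
        """Return (budgeted ports, switches to buy) for one wiring closet."""
        budgeted = math.ceil(active_ports * (1 + EXPANSION))
        return budgeted, math.ceil(budgeted / PORTS_PER_SWITCH)

    for closet, ports in {"Closet A": 112, "Closet B": 61}.items():
        budgeted, switches = order(ports)
        print(f"{closet}: {ports} active -> {budgeted} budgeted -> "
              f"{switches} x {PORTS_PER_SWITCH}-port switches")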
Network Equipment… (cont) Tomorrow:
- 10-Gig to the wiring closet
- Non-blocking switch backplanes (sanity-check sketch below)
- Enhanced PoE, flow collection
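“Non-blocking” means the backplane can carry every port at line rate in both directions at once; a quick spec-sheet sanity check (the port mix and backplane rating are hypothetical numbers, not a specific OU switch):

    # Sketch: is this closet switch non-blocking for this port mix?
    # Port counts and the backplane rating are hypothetical spec-sheet values.
    ports = {1: 48, 10: 2}          # 48 x 1 Gb/s access ports, 2 x 10 Gb/s uplinks
    backplane_gbps = 160            # advertised switching capacity (both directions)

    demand_gbps = 2 * sum(rate * count for rate, count in ports.items())
    print(f"worst-case demand {demand_gbps} Gb/s vs backplane {backplane_gbps} Gb/s")
    print("non-blocking" if backplane_gbps >= demand_gbps else "oversubscribed")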
Servers… Yesterday:
- One application = one server
- Run it on whatever can be found
- No consideration for network, power, HVAC, redundancy, or spare capacity
Servers… (cont) Today:
- Virtualizing the environment
- Introducing VLANs to the server farm
- Clustering and load balancing (round-robin sketch below)
- Co-locating to take advantage of economies of scale (HVAC, power, rack space)
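Load balancing across the virtualized farm can be as simple as rotating requests through cluster members; a minimal round-robin sketch (the hostnames are placeholders, not real OU servers):

    from itertools import cycle

    # Minimal round-robin dispatcher; hostnames are placeholders.
    pool = cycle(["app-vm-01", "app-vm-02", "app-vm-03"])

    def dispatch(request_id):
        """Hand the next incoming request to the next server in the pool."""
        server = next(pool)
        print(f"request {request_id} -> {server}")
        return server

    for rid in range(1, 7):
        dispatch(rid)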
Servers… (cont) Tomorrow:
- Data center construction
- InfiniBand and iSCSI
- “Striping” applications across server platforms
- App environment “looks like” a computing cluster (opportunities to align support)
ISP (OneNet)… Yesterday:
- Two dark-fiber Gigabit connections
- Poor relationship between ISP and OU
ISP… (cont) Today:
- Excellent partnership between ISP & OU
- 10-Gigabit BGP peer over DWDM
- 10-Gig connection to NLR
- BGP peer points in disparate locations
ISP… (cont) Tomorrow:
- Dual 10-Gig peers, load shared
- Gigabit, FC, and 10-Gigabit “on-demand” anywhere on the optical network (wavelength-assignment sketch below)
- Additional ISP peering relationships to better support R&D tenants
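“On-demand” circuits over the optical network amount to keeping an inventory of wavelengths and handing a free one to each service request; a toy sketch of that bookkeeping (the 32-channel inventory and the service names are invented for illustration):

    # Toy wavelength-assignment sketch for "on-demand" services over DWDM.
    # The 32-channel inventory and the requests are invented for illustration.
    free_channels = list(range(1, 33))       # channel numbers 1..32
    assignments = {}

    def provision(service):
        """Assign the next free wavelength to a requested service."""
        if not free_channels:
            raise RuntimeError("no free wavelengths on this span")
        channel = free_channels.pop(0)
        assignments[service] = channel
        print(f"{service} -> channel {channel}")
        return channel

    provision("10-Gig Ethernet to NLR")
    provision("Gigabit research VLAN")
    provision("Fibre Channel to the co-lo")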
WAN… Yesterday:
- OC-12 to I2
- OC-12 and OC-3 to I1
- All co-located in the same facility
WAN… (cont) Today:
- 10-Gigabit (Chicago) and 1-Gigabit (Houston) “routed” connection to NLR
- OC-12 to I2, with route preference to NLR (path-selection sketch below)
- Multiple I1 connections
- Multiple I1 peers in disparate locations
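Preferring NLR over the I2 OC-12 is, in BGP terms, just ranking candidate exit paths, typically by local preference, before anything else is compared; a simplified selection sketch with made-up attribute values:

    # Simplified BGP-style path choice: higher local preference wins, then the
    # shorter AS path. The local-pref values and path lengths are made up.
    candidates = [
        {"via": "NLR 10-Gigabit (Chicago)", "local_pref": 200, "as_path_len": 3},
        {"via": "I2 OC-12",                 "local_pref": 100, "as_path_len": 2},
    ]

    best = max(candidates, key=lambda p: (p["local_pref"], -p["as_path_len"]))
    print("preferred exit:", best["via"])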
WAN… (cont) Tomorrow:
- LEARN connection for redundant NLR and I2 connectivity
- DWDM back-haul extensions to allow NLR and I2 terminations “on-campus”
To what end?
- “Condor” pool evolution
- “Real-time” data migrations and streaming
- Knowledge share
- Support share
- Ability to “spin up” bandwidth anytime, anywhere (within reason)
Questions? zgray@ou.edu