Internet2: Technology Innovation and Distributed Infrastructure
Guy Almes, Internet2 Project
NANOG Meeting, Denver, February 1, 1999

Overview
- Universities, Engineering, and Applications
- Technical Innovation
- Distributed Infrastructure

The challenge before us
- Universities, by their nature:
  - mix teaching and research
  - collaborate with scholars at other universities
- Thus, advanced applications for:
  - conferencing
  - remote instrument access
  - digital libraries
- What networks will these need?

Applications and engineering
[Diagram: applications motivate engineering; engineering enables applications]

What makes this hard?
- Combination of:
  - high bandwidth
  - wide area
  - intrinsically bursty applications
- Need for multicast
- Need for quality of service
- Need for measurements

Internet2 History / Status
- Initiated 1-Oct-96 by 34 research universities (NGI Program announced one week later)
- UCAID incorporated Oct-97
  - Board of Directors drawn from university presidents
  - staff mainly in three locations
- Compact, growing set of international partners

History / Status, continued
- We now have about 140 universities
- A few dozen corporate members also make key contributions
- Key goal: create and support advanced applications
- Key infrastructure tactic: campus / gigaPoP / backbone structure

Working Group Progress
- IPv6
- Measurement
- Multicast
- Network Management
- Network Storage
- Quality of Service
- Routing
- Security
- Topology

Technical Innovation: Measurement
- Co-chairs: David Wasley (University of California) and Matt Zekauskas (Internet2 staff)
- Focus:
  - Places to measure: at campuses, at gigaPoPs, within interconnect(s)
  - Things to measure:
    - traffic utilization
    - performance: delay and packet loss
    - traffic characterization

[Diagrams, three slides: two backbones, 'A' and 'B', illustrating where measurements can be taken along an inter-backbone path]

Active Measurements of Performance
- IETF IPPM WG defining one-way delay
- Take all delay to be due to:
  - propagation
  - transmission
  - queuing
- Variation in delay suggests congestion

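A minimal sketch of that decomposition (hypothetical names and sample timestamps, assuming sender and receiver clocks are synchronized, e.g. GPS-disciplined, as IPPM one-way metrics require): the minimum observed delay approximates the fixed propagation-plus-transmission cost, and what remains estimates the queuing component.

# Estimate per-packet queuing delay from one-way delay samples by
# subtracting the minimum observed delay (the fixed component).
def queuing_delays(send_times, recv_times):
    one_way = [r - s for s, r in zip(send_times, recv_times)]
    fixed = min(one_way)                 # best case: no queuing encountered
    return [d - fixed for d in one_way]

print(queuing_delays(
    send_times=[0.000, 1.000, 2.000, 3.000],
    recv_times=[0.031, 1.030, 2.058, 3.042],
))  # large, varying values suggest congestion on the path
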
Passive Measurements of Traffic Characterization
- OC3MON and OC12MON
  - developed by MCI vBNS engineering with the NLANR group at UCSD
  - passive taps into fiber links
  - extract IP packet headers
  - gradually improving maturity
- Help understand the nature of Internet use

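To illustrate the header-extraction step such a monitor performs, here is a minimal sketch (not OC3MON's actual code) that unpacks the fixed IPv4 header, per RFC 791, from captured packet bytes; the capture mechanism itself (optical tap, promiscuous NIC) is outside the sketch.

import struct

def parse_ipv4_header(packet):
    """Extract the fields a traffic-characterization monitor keeps."""
    ver_ihl, tos, total_len, _, _, ttl, proto, _, src, dst = struct.unpack(
        "!BBHHHBBH4s4s", packet[:20]
    )
    return {
        "version": ver_ihl >> 4,
        "header_len": (ver_ihl & 0x0F) * 4,   # in bytes
        "total_len": total_len,
        "protocol": proto,                    # 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }
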
Technical Innovation: Multicast
- Chair: Kevin Almeroth, University of California at Santa Barbara
- Focus: make native IP multicast scalable and operationally effective
  - must be coordinated across backbones, gigaPoPs, and campuses
  - must be coordinated with unicast routing

1999: A key year for multicast
- In the past, multicast has meant the 'MBone':
  - a core set of committed users and engineers
  - 'legacy' non-scalable approaches to routing
- Our hope: PIM Sparse Mode, MBGP, MSDP, etc. enable scalable use of high-speed multicast flows throughout the Internet2 structure (see the RPF sketch below)

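A minimal sketch of the reverse-path forwarding (RPF) check at the heart of PIM Sparse Mode: accept a multicast packet only if it arrived on the interface routing would use to reach the packet's source (this is where MBGP can supply a separate multicast topology). The route table and interface names are hypothetical, and the prefix match is a crude stand-in for longest-prefix lookup.

ROUTES = {"10.1.": "pos0", "10.2.": "pos1"}   # toy prefix -> upstream iface

def rpf_iface(source_ip):
    for prefix, iface in ROUTES.items():      # crude longest-prefix stand-in
        if source_ip.startswith(prefix):
            return iface
    return None

def rpf_check(source_ip, arrival_iface):
    """True only if the packet came in on the interface toward its source."""
    return arrival_iface == rpf_iface(source_ip)

print(rpf_check("10.1.5.9", "pos0"))   # True: forward to downstream joins
print(rpf_check("10.1.5.9", "pos1"))   # False: drop, prevents loops
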
Technical Innovation: Quality of Service
- Chair: Ben Teitelbaum, Internet2 staff
- Focus: multi-network IP-based QoS
  - relevant to advanced applications
  - interoperability: carriers and kit
  - architecture
  - QBone distributed testbed

Big Problem #1: Understanding Application Requirements
- Range of poorly understood needs
- Both intolerant and tolerant apps are important:
  - many apps need absolute, per-flow QoS assurances
  - adaptive apps may require a minimum level of QoS, but can exploit additional network resources if available

Big Problem #2: Scalability
- # flows through core >> # flows through edge
- Goal: keep per-flow state out of the core
- Design principles:
  - put the "smarts" in edge routers
  - allow core routers to be fast and dumb

Big Problem #3: Interoperability
[Diagram: campus networks, gigaPoPs, and backbone networks (vBNS, Abilene, …)]
Interoperability between separately administered and designed clouds, and between multiple implementations of network elements, is crucial if we are to provide end-to-end QoS.

DiffServ Architecture
[Diagram: source-to-destination path through a leaf router, ingress edge router, core routers, and egress edge router, with a bandwidth broker (BB) over each cloud]
- Leaf router: police, mark flows
- Ingress edge router: classify, police, mark aggregates
- Core routers
- Egress edge router: shape aggregates
- Bandwidth brokers: perform admission control, manage network resources, configure leaf and edge devices

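A minimal sketch of this division of labor, and of why it answers the scalability problem above: the edge does per-flow classification and writes a DSCP code point; the core queues on that single byte and keeps no per-flow state. The flow table is hypothetical, and 0x2E is an EF-style code point used only as an example.

PREMIUM, BEST_EFFORT = 0x2E, 0x00   # example code points

edge_rules = {("10.1.2.3", 5004): PREMIUM}   # (src ip, dst port) -> marking

def edge_mark(pkt):
    """Edge: per-flow classification, then mark the behavior aggregate."""
    pkt["dscp"] = edge_rules.get((pkt["src"], pkt["dport"]), BEST_EFFORT)
    return pkt

def core_queue(pkt):
    """Core: fast and dumb; queue selection from the DSCP alone."""
    return "priority" if pkt["dscp"] == PREMIUM else "default"

print(core_queue(edge_mark({"src": "10.1.2.3", "dport": 5004})))  # priority
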
Premium Service
- Emulates a leased line
- Contract: peak-rate profile
- PHB = "forward me first" (e.g., priority queuing, WFQ)
- Policing rule: drop out-of-profile packets
- On egress, clouds need to shape Premium aggregates to mask induced burstiness

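A minimal sketch of that policing rule, assuming a token bucket sized for the contracted peak rate; rates and bucket size are illustrative, not from the QBone specification.

import time

class PeakRatePolicer:
    """Token bucket at the peak rate; arrivals that find too few
    tokens are out-of-profile and dropped."""
    def __init__(self, rate_bps, bucket_bytes):
        self.rate = rate_bps / 8.0              # token fill rate, bytes/sec
        self.capacity = float(bucket_bytes)
        self.tokens = float(bucket_bytes)
        self.last = time.monotonic()

    def admit(self, pkt_len):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_len <= self.tokens:
            self.tokens -= pkt_len              # in profile: forward first
            return True
        return False                            # out of profile: drop

policer = PeakRatePolicer(rate_bps=1_000_000, bucket_bytes=1500)
print(policer.admit(1500), policer.admit(1500))  # True, then likely False

A shaper differs only in that it delays out-of-profile packets rather than dropping them, which is why egress clouds shape Premium aggregates to mask the burstiness induced upstream.
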
Internet2 "QBone"
- A "meta-testbed" for absolute diff-serv services
- Many Internet2 clouds already keenly interested in experimenting with diff-serv
- Objectives:
  - fostering interoperability among participant clouds
  - encouraging collective problem solving
  - creating opportunities for inter-disciplinary dialogue
  - growing a snowball of participating clouds:
    - technical diversity
    - topological diversity
    - contiguity

Summary
- Internet2's working groups focus on the project's needs
- They complement IETF working groups
- Membership is by invitation of the chair

Distributed Infrastructure
- Campuses: scalable 10/100 Mb/s, multicast
- GigaPoPs: scalable access to wide-area resources
- Backbones: vBNS, Abilene

Recent progress and challenges
- Early gigaPoPs getting stronger
- Recent major advances:
  - CalREN2
  - Great Plains Network
  - Northern Crossroads

JET Collaboration
- Joint Engineering Team: federal NGI agencies and Internet2
- NGIX effort: exchange points appropriate for Internet2 / NGI / similar non-US networks
- Ideal: connect universities and labs with advanced performance/functionality

Abilene: Design and Status
Guy Almes, Internet2 Project
NANOG Meeting, Denver, February 1, 1999

Abilene and Internet2
- Internet2 as infrastructure:
  - 140+ campus LANs
  - about 35 gigaPoPs
  - a few interconnect backbones
- Abilene is the 2nd backbone:
  - OC-48 trunks from Qwest
  - Cisco 12008 routers with IP over Sonet
  - OC-3 and OC-12 access to gigaPoPs

Abilene Core at 29-Jan-99
[Map: core nodes at Seattle, Sacramento, Los Angeles, Denver, Kansas City, Houston, Indianapolis, Cleveland, Atlanta, and New York]

Abilene Architecture
- Core architecture
- Access architecture
- Network Operations Center at Indiana University
- Schedule:
  - 14-Apr-98: announced
  - Sep-98: demonstrated
  - 29-Jan-99: operational

Abilene Architecture: Core
- Router Nodes located at Qwest PoPs:
  - Cisco 12008 GSR
  - ICS Unix PC: IPPM and network management
  - Cisco 3640 remote access for the NOC
  - 100BaseT LAN and 'console port' access
  - remote 48 V DC power controllers
- Initially, ten Router Nodes

Abilene: by end of February 1999
[Map: core nodes at Seattle, Sacramento, Los Angeles, Denver, Kansas City, Houston, Indianapolis, Cleveland, Atlanta, and New York]

Abilene Architecture: Access
- Access Nodes located at Qwest PoPs
- Sonet connects local to long-distance
- Initially, about 120 Access Nodes; this list grows as the Qwest Sonet plant grows

Abilene, with Some Access Nodes
[Map: Router Nodes plus Access Nodes, including Seattle, Eugene, Sacramento, Oakland, Los Angeles, Anaheim, Phoenix, Salt Lake City, Albuquerque, Denver, Lincoln, Kansas City, Oklahoma City, Dallas, Houston, New Orleans, Minneapolis, Chicago, Indianapolis, Detroit, Columbus, Cleveland, Nashville, Atlanta, Miami, Raleigh, Pittsburgh, Washington, Wilmington, Philadelphia, Trenton, Newark, New York, Westfield, New Haven, and Boston]

Abilene NOC
- Located at Indiana University
- Excellent operations and engineering skills
- Commitment evidenced in the Abilene rollout

Schedule
- Design work: Mar-98 and ongoing
- Rack design: May-98 to Jul-98
- Initial assembly / testing: Jul-98 to Aug-98
- Router Nodes / interior lines: Jul-98
- Demo network installed: Sep-98
- Production began: 29-Jan-99
- Completion of OC-48 core: mid-1999
- Continuing improvement: ongoing

Jun-99: Core Architecture
[Map: core nodes at Seattle, Sacramento, Los Angeles, Denver, Kansas City, Houston, Indianapolis, Cleveland, Atlanta, and New York]

Sep-99: Core Architecture
[Map: core nodes at Seattle, Sacramento, Los Angeles, Denver, Kansas City, Houston, Indianapolis, Cleveland, Atlanta, New York, and Washington]

Outline of Engineering Issues
- Routing: OSPF, BGP4, Routing Arbiter Database
- Multicast: PIM Sparse Mode, MBGP, MSDP
- Measurements (see the loss-accounting sketch below):
  - Surveyor: one-way delay and loss
  - traffic utilization
  - end-to-end flows, with gigaPoP help
  - OC3MON: passive measurements

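A minimal sketch of Surveyor-style loss accounting (the probe numbering is a hypothetical stand-in for Surveyor's actual packet format): probes carry sequence numbers, and gaps in the received sequence count as losses; the same probes' timestamps feed the one-way delay estimate shown earlier.

def loss_stats(sent, received_seqs):
    """Return (lost, loss_rate) for probes numbered 0 .. sent-1."""
    lost = sent - len(set(received_seqs))   # set() ignores duplicates
    return lost, lost / sent

print(loss_stats(1000, range(0, 1000, 2)))   # -> (500, 0.5)
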
Broader Internet2, NGI, and International Advanced Networks
- Initial NGIX sites
- Possible CA*net3 peering sites
- StarTap