1 Weren’t routers supposed to be simple?
ICSI, May 8th, 2002
Nick McKeown, Professor of Electrical Engineering and Computer Science, Stanford University

2 Background
We tell our students that Internet routers are simple: all a router does is make a forwarding decision, update a header, then forward the packet to the correct outgoing interface. But I don’t understand them anymore:
- The list of required features is huge and still growing,
- The software is complex and unreliable,
- The hardware is complex and power-hungry,
- Yet the throughput is still less than 100%.
Software: IOS is based on 8-10M lines of code (the 5ESS phone switch is about 18M); downtime is on the order of 500 min/yr compared to 5 min/yr for the phone system.
Hardware: a 10Gb/s linecard has about 30M gates and 2Gbits of memory, consumes 300W, and costs $200k. It is not surprising that, for a given size, cost, and power, a transport circuit switch has about 4-8x the capacity.

3 Outline
What limits the performance of a router?
What are the basic requirements?
- Basic functions: RFC 1812
- Throughput
- 0.25s of buffering
What are the “new” requirements?
- Multicast
- IPv6
- DiffServ, IntServ, priorities, WFQ, etc.
- Latency
- Packet sequence
- Others: drop policies, VPNs, ACLs, DoS traceback, measurement, statistics, …
What might be possible

4 Generic router architecture
[Diagram: header processing (IP address lookup against an address table of ~1M prefixes in off-chip DRAM, next-hop selection, header update), followed by queueing into a packet buffer memory holding ~1M packets.]
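As a rough sketch of the per-packet work this diagram implies (a simplified illustration, not the router's actual code; the table entries, interface names, and helper functions below are made up):

```python
# Simplified per-packet datapath (illustration only, not real router code):
# longest-prefix-match lookup, header update, then enqueue toward the chosen
# outgoing interface. Table entries and interface names are made up.
import ipaddress

TABLE = [(ipaddress.ip_network("10.0.0.0/8"), "if1"),
         (ipaddress.ip_network("10.1.0.0/16"), "if2"),
         (ipaddress.ip_network("0.0.0.0/0"), "if0")]   # stands in for ~1M prefixes

def lookup(dst):
    """Longest-prefix match: pick the matching entry with the longest prefix."""
    matches = [(net.prefixlen, hop) for net, hop in TABLE
               if ipaddress.ip_address(dst) in net]
    return max(matches)[1]

def forward(pkt, queues):
    pkt["ttl"] -= 1                              # header update (checksum omitted)
    if pkt["ttl"] <= 0:
        return                                   # drop expired packets
    queues[lookup(pkt["dst"])].append(pkt)       # queue in the packet buffer

queues = {"if0": [], "if1": [], "if2": []}
forward({"dst": "10.1.2.3", "ttl": 64}, queues)
print(queues["if2"])                             # the /16 wins over the /8 and default
```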

5 Generic router architecture
[Diagram: N linecards, each with its own header processing (IP address lookup, address table, header update) and packet buffer memory, interconnected through a central scheduler.]

6 Router linecard
[Diagram: OC192c linecard with optics, physical layer, framing & maintenance, packet processing, lookup tables, buffer & state memory, buffer management & scheduling, and a scheduler.]
About 30M gates, 2.5Gbits of memory, 2 square feet; $25k cost, $200k price.

7 Router vital statistics
Cisco GSR 12416: capacity 160Gb/s, power 4.2kW, 19" wide, 6ft tall, 2ft deep.
Juniper M160: capacity 80Gb/s, power 2.6kW, 19" wide, 3ft tall, 2.5ft deep.

8 [Chart comparing growth rates:]
- Internet traffic: x2 per year
- DWDM link speed: x2 per 8 months
- Router capacity: x2.2 per 18 months
- Moore’s law: x2 per 18 months
- DRAM access rate: x1.1 per 18 months

9 An Example: Packet buffers
A 40Gb/s router linecard needs about 10Gbits of buffer memory; the buffer manager must sustain a write rate R and a read rate R of one 40B packet every 8ns.
Use SRAM? + Fast enough random access time, but too low density to store 10Gbits of data.
Use DRAM? + High density means we can store the data, but it can’t meet the random access time.
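A quick back-of-the-envelope check of the slide’s numbers, assuming the usual rule of thumb that the buffer holds about one round-trip time (the 0.25s of buffering from the outline) of data at line rate:

```python
# Back-of-the-envelope check of the slide's numbers. Assumption: the buffer is
# sized for one round-trip time (0.25 s) of data at line rate.
line_rate_bps = 40e9            # 40 Gb/s linecard
rtt_s = 0.25                    # 0.25 s of buffering
min_packet_bits = 40 * 8        # minimum-length 40-byte packet

buffer_bits = line_rate_bps * rtt_s                       # 1e10 bits = 10 Gbits
packet_time_ns = min_packet_bits / line_rate_bps * 1e9    # 8 ns per packet

print(f"buffer = {buffer_bits / 1e9:.0f} Gbits, one 40B packet every {packet_time_ns:.0f} ns")
```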

10 An Example: Packet processing
[Chart: CPU instructions available per minimum-length packet, 1996 onwards.]
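As a rough illustration of the quantity this chart tracks (the CPU clock and line rates below are my own example numbers, not taken from the slide):

```python
# Illustration of the quantity the chart tracks: CPU instructions that fit in
# one minimum-length packet time (assuming ~1 instruction per cycle; the CPU
# clock and line rates are example values, not from the slide).
def instructions_per_packet(cpu_hz, line_rate_bps, packet_bytes=40):
    packet_time_s = packet_bytes * 8 / line_rate_bps
    return cpu_hz * packet_time_s

print(instructions_per_packet(1e9, 10e9))   # 1 GHz CPU, 10 Gb/s link: ~32
print(instructions_per_packet(1e9, 40e9))   # same CPU, 40 Gb/s link: ~8
```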

11 Will we need faster routers?
If in 10 years we have a 2^10 = 1024-fold increase in the capacity of the Internet, we won’t have 1024 times as much POP space to hold the routers, 1024 times as many batteries, or 1024 times as many fans (and the routers will have to implement some new features too).

12 Outline
What limits the performance of a router?
What are the basic requirements?
- Basic functions: RFC 1812
- Throughput
- 0.25s of buffering
What are the “new” requirements?
- Multicast
- IPv6
- DiffServ, IntServ, priorities, WFQ, etc.
- Latency
- Packet sequence
- Others: drop policies, VPNs, ACLs, DoS traceback, measurement, statistics, …
What might be possible

13 The Problem: Output-queued switches are impractical
[Diagram: N input lines, each at rate R, all writing into a single output’s DRAM packet buffer, so the buffer memory bandwidth must grow as NR. Can’t I just use N separate memory devices per output?]
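Plugging in representative numbers (mine, not the talk’s) shows why this scaling hurts: in the worst case every input sends to the same output at once, so that output’s memory must absorb N writes plus one read per packet time.

```python
# Worst-case memory bandwidth of one output queue: N simultaneous writes plus
# one read per packet time. N and R below are illustrative, not from the talk.
N = 32                  # switch ports
R = 10e9                # 10 Gb/s per port
write_bw = N * R        # all inputs target the same output at once
read_bw = R
print(f"per-output memory bandwidth ~ {(write_bw + read_bw) / 1e9:.0f} Gb/s")  # ~330 Gb/s
```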

14 Potted history
[Karol et al. 1987] Throughput limited to 58% by head-of-line blocking for Bernoulli IID uniform traffic.
[Tamir 1989] Observed that with “Virtual Output Queues” (VOQs), head-of-line blocking is reduced and throughput goes up.

15 Potted history
[Anderson et al. 1993] Observed the analogy to maximum size matching in a bipartite graph.
[M et al. 1995] (a) A maximum size match cannot guarantee 100% throughput; (b) but a maximum weight match can, at O(N^3) complexity.
[Mekkittikul and M 1998] A carefully picked maximum size match can give 100% throughput (maximum size matching is O(N^2.5)).
[Prabhakar and Dai 2000] 100% throughput possible for maximal matching with a speedup of two.
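A minimal sketch of the scheduling problem these results refer to (my own illustration, not code from the talk): inputs keep one Virtual Output Queue per output, and each time slot the scheduler picks the matching of inputs to outputs with the largest total queue occupancy. Brute force is used here for clarity; a real scheduler would use an O(N^3) assignment algorithm instead.

```python
# Sketch of maximum weight matching over VOQ occupancies for a tiny switch
# (brute force over permutations for clarity; a real scheduler would use an
# O(N^3) assignment algorithm). Not code from the talk.
from itertools import permutations

def max_weight_match(voq):
    """voq[i][j] = packets queued at input i for output j.
    Returns perm where perm[i] is the output matched to input i."""
    n = len(voq)
    best, best_w = None, -1
    for perm in permutations(range(n)):
        w = sum(voq[i][perm[i]] for i in range(n))
        if w > best_w:
            best, best_w = perm, w
    return best

voq = [[3, 0, 1],
       [0, 2, 2],
       [4, 1, 0]]
print(max_weight_match(voq))   # (2, 1, 0): serve the heaviest compatible queues
```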

16 Throughput results
Theory:
- Input Queueing (IQ): 58% [Karol, 1987]
- IQ + VOQ, maximum weight matching: 100% [M et al., 1995]
- IQ + VOQ, maximal size matching, speedup of two: 100% [Dai & Prabhakar, 2000]
- Randomized algorithms: 100% [Tassiulas, 1998]
- Different weight functions, incomplete information, pipelining: 100% [Various]
Practice:
- Input Queueing (IQ) with sub-maximal size matching, e.g. PIM, iSLIP
- Various heuristics, distributed algorithms, and amounts of speedup

17 Outline
What limits the performance of a router?
What are the basic requirements?
- Basic functions: RFC 1812
- Throughput
- 0.25s of buffering
What are the “new” requirements?
- Multicast: queues, bandwidth, backpressure, lookups, dropping.
- IPv6
- DiffServ, IntServ, priorities, WFQ, etc.
- Latency: 125us, pipelines, “cell size”.
- Packet sequence: parallelism and load-balancing.
- Others: drop policies, VPNs, ACLs, DoS traceback, measurement, statistics, …
What might be possible

18 What might be possible
[Diagram: N ports, each at rate R, connected through k parallel router elements; the switching core is bufferless.]

19 Characteristics
Advantages:
- Reduced memory bandwidth
- Reduced lookup/classification rate
- Reduced routing/classification table size
Problems:
- Throughput
- Multicast
- Packet order
- Latency
- Priorities and QoS

20 Intriguing possibility: Two-stage Load-Balancing Router
[Diagram: N external inputs, a first stage spreading packets across intermediate buffers, and a second stage delivering packets to the N external outputs.]
Recently shown to maximize throughput [C.S. Chang et al.: …]
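A minimal sketch of the two-stage load-balancing idea, under my own simplifying assumptions (not the design from the talk): stage 1 spreads each input’s arriving packets over the intermediate linecards regardless of destination, each intermediate linecard keeps a VOQ per output, and stage 2 serves those VOQs with a fixed rotating connection pattern.

```python
# Sketch of a two-stage load-balanced router (illustration under simplifying
# assumptions, not the talk's design). Stage 1 spreads each input's arrivals
# round-robin over the N intermediate linecards, ignoring the destination;
# each intermediate linecard keeps one VOQ per output; stage 2 connects
# intermediate linecards to outputs with a fixed rotating permutation.
from collections import deque

N = 4
voq = [[deque() for _ in range(N)] for _ in range(N)]   # voq[mid][out]
spread = [0] * N                                        # stage-1 pointer per input

def stage1_arrival(inp, out, pkt):
    mid = spread[inp]                    # next intermediate card, regardless of 'out'
    spread[inp] = (spread[inp] + 1) % N
    voq[mid][out].append(pkt)

def stage2_service(mid, slot):
    out = (mid + slot) % N               # rotating pattern: distinct outputs each slot
    return out, (voq[mid][out].popleft() if voq[mid][out] else None)

stage1_arrival(0, 2, "p0")
stage1_arrival(0, 2, "p1")
for t in range(N):
    print(t, [stage2_service(m, t) for m in range(N)])
```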

21 Optical two-stage router
[Diagram: linecards, each with lookup and buffering, interconnected by two optical switching phases (Phase 1 and Phase 2).]

22 100’s of Tb/s router project
Mark Horowitz, David Miller, Olav Solgaard, Nick McKeown
[Diagram: 625 electronic linecards, each with line termination, IP packet processing, and packet buffering on 40Gb/s interfaces, connected through a passive optical switch at 160-320Gb/s per linecard. 100Tb/s = 625 × 160Gb/s.]

23 What Seems Impractical
What hurts: maintaining packet order; buffering packets in external DRAM.
What seems impractical: low latency; multicast; delay guarantees.

