1
High Performance Networking with Little or No Buffers
Yashar Ganjali, on behalf of Prof. Nick McKeown
High Performance Networking Group, Stanford University
yganjali@stanford.edu
http://www.stanford.edu/~yganjali
Joint work with: Guido Appenzeller, Ashish Goel, Tim Roughgarden
February 17, 2005, Stanford Networking Research Center Final Project Review Workshop
2
Motivation: Networks with Little or No Buffers
Problem:
- Internet traffic doubles every year
- Disparity between traffic growth and router growth (space, power, cost)
Possible solution: all-optical networking
Consequences:
- Large capacity, hence large traffic
- Little or no buffers
3
Which would you choose?
DSL Router 1: $50, 4 x 10/100 Ethernet, 1.5Mb/s DSL connection, 1Mbit of packet buffer
DSL Router 2: $55, 4 x 10/100 Ethernet, 1.5Mb/s DSL connection, 4Mbit of packet buffer
Bigger buffers are better.
4
What we learn in school
- Packet switching is good
- Long-haul links are expensive
- Statistical multiplexing allows efficient sharing of long-haul links
- Packet switching requires buffers
- Packet loss is bad, so use big buffers
- Luckily, big buffers are cheap
5
Statistical Multiplexing
Observations:
1. The bigger the buffer, the lower the packet loss.
2. If the buffer never goes empty, the outgoing line is busy 100% of the time.
6
What we learn in school: Queueing Theory
[Figure: M/M/1 queue; plot of loss rate versus buffer size]
Observations:
1. We can pick a buffer size for a given loss rate.
2. The loss rate falls fast with increasing buffer size.
3. Bigger is better.
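A minimal sketch (not part of the original deck) of the textbook intuition behind this slide: for an M/M/1 queue truncated at B packets (M/M/1/B), the loss probability falls roughly geometrically with B. The load value below is an illustrative assumption.

```python
# Loss probability of an M/M/1 queue with a finite buffer of B packets
# (M/M/1/B): P_loss = (1 - rho) * rho**B / (1 - rho**(B + 1)) for rho != 1.
def mm1b_loss(rho: float, B: int) -> float:
    if rho == 1.0:
        return 1.0 / (B + 1)
    return (1 - rho) * rho**B / (1 - rho**(B + 1))

rho = 0.8  # assumed load; the slide does not give a number
for B in (5, 10, 20, 50):
    print(f"B = {B:2d} packets -> loss rate ~ {mm1b_loss(rho, B):.1e}")
# Loss drops fast as B grows, which is why "bigger is better" looks right here.
```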
7
What we learn in school
Moore's Law: memory is plentiful and halves in price every 18 months.
A 1Gbit memory holds 500k packets and costs $25.
Conclusion: make buffers big. Choose the $55 DSL router.
8
Why bigger isn't better
- Network users don't like buffers
- Network operators don't like buffers
- Router architects don't like buffers
- We don't need big buffers
- We'd often be better off with smaller ones
9
Backbone Router Buffers
Universally applied rule-of-thumb: a router needs a buffer of size B = 2T × C, where
- 2T is the two-way propagation delay (round-trip time)
- C is the capacity of the bottleneck line
Context:
- Mandated in backbone and edge routers; appears in RFPs and IETF architectural guidelines.
- Usually attributed to Villamizar and Song, "High Performance TCP in ANSNET", CCR, 1994.
- Already known to the inventors of TCP [Van Jacobson, 1988].
- Has major consequences for router design.
[Diagram: Source -> Router -> Destination over a bottleneck line of capacity C, with round-trip delay 2T]
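As a worked example of the rule of thumb (my sketch, not from the deck): with an assumed 250 ms round-trip time, a 10 Gb/s bottleneck needs 2.5 Gbits of buffer, which matches the figure that reappears on the "Impact on Router Design" slide below.

```python
# Rule-of-thumb buffer: B = 2T * C
# 2T: two-way propagation delay (seconds), C: bottleneck capacity (bits/s)
def rule_of_thumb_bits(two_t_s: float, capacity_bps: float) -> float:
    return two_t_s * capacity_bps

# Assumed values for illustration: 250 ms RTT, 10 Gb/s line.
buf = rule_of_thumb_bits(0.250, 10e9)
print(f"buffer = {buf / 1e9:.2f} Gbits")  # -> 2.50 Gbits
```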
10
Review: TCP Congestion Control
Only W packets may be outstanding.
Rule for adjusting W:
- If an ACK is received: W ← W + 1/W
- If a packet is lost: W ← W/2
[Figure: source/destination packet exchange and the sawtooth of window size over time t]
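A small sketch of the window rule above, just to make the additive-increase/multiplicative-decrease behavior concrete (the starting window and loop length are arbitrary, not from the slides):

```python
# TCP congestion-avoidance rule from the slide:
#   per ACK:  W <- W + 1/W   (about +1 packet per round-trip time)
#   on loss:  W <- W / 2
def on_ack(w: float) -> float:
    return w + 1.0 / w

def on_loss(w: float) -> float:
    return max(w / 2.0, 1.0)

w = 10.0                      # arbitrary starting window (packets)
for _ in range(10):           # roughly one round trip worth of ACKs at W ~ 10
    w = on_ack(w)
print(f"after ~1 RTT of ACKs: W = {w:.2f}")   # ~11
w = on_loss(w)
print(f"after a loss:         W = {w:.2f}")   # ~5.5
```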
11
Buffer Size in the Core
[Figure: probability distribution of buffer occupancy, ranging from 0 to the buffer size B]
12
Backbone router buffers
It turns out that the rule of thumb is wrong for core routers today.
With n long-lived TCP flows, the required buffer is 2T × C / √n instead of 2T × C.
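A hedged sketch of where the √n comes from, paraphrasing the central-limit argument behind the Sigcomm 2004 result (this is not text from the deck): with n long-lived flows sharing the link, each flow's average window is roughly (2T·C)/n, and when the flows are desynchronized the fluctuations of the aggregate window grow only like √n times the per-flow amplitude, so the buffer only has to absorb

$$ B \;\approx\; \sqrt{n}\cdot\Theta\!\Big(\tfrac{2T \cdot C}{n}\Big) \;=\; \Theta\!\Big(\tfrac{2T \cdot C}{\sqrt{n}}\Big). $$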
13
Simulation
[Figure: simulated required buffer size]
14
Impact on Router Design
10Gb/s linecard with 200,000 x 56kb/s flows:
- Rule-of-thumb: buffer = 2.5Gbits; requires external, slow DRAM
- Becomes: buffer = 6Mbits; can use on-chip, fast SRAM
- Completion time is halved for short flows
40Gb/s linecard with 40,000 x 1Mb/s flows:
- Rule-of-thumb: buffer = 10Gbits
- Becomes: buffer = 50Mbits
Many thanks to Guido Appenzeller at Stanford. For more details, see the Sigcomm 2004 paper available at: http://www.stanford.edu/~nickm/papers
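A quick arithmetic check of the numbers above (my sketch; the 250 ms round-trip time is inferred from the 2.5 Gbit and 10 Gbit rule-of-thumb figures, not stated on the slide):

```python
import math

# B_old = 2T * C (rule of thumb), B_new = 2T * C / sqrt(n) for n flows.
def buffers(two_t_s: float, capacity_bps: float, n_flows: int):
    b_old = two_t_s * capacity_bps
    return b_old, b_old / math.sqrt(n_flows)

for c, n in ((10e9, 200_000), (40e9, 40_000)):
    b_old, b_new = buffers(0.250, c, n)
    print(f"{c/1e9:.0f}Gb/s, {n} flows: "
          f"rule of thumb {b_old/1e9:.1f} Gbits -> revised {b_new/1e6:.0f} Mbits")
# 10Gb/s, 200000 flows: 2.5 Gbits -> ~6 Mbits
# 40Gb/s,  40000 flows: 10.0 Gbits -> ~50 Mbits
```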
15
How small can buffers be?
Imagine you want to build an all-optical router for a backbone network…
…and you can buffer a few dozen packets in delay lines.
Conventional wisdom: it's a routing problem (hence deflection routing, burst-switching, etc.)
Our belief: first, think about congestion control.
16
The chasm between theory and practice
Theory (benign conditions), for an M/M/1 queue at load ρ:
- ρ = 50%: E[X] = 1 packet; P[X > 10] < 10^-3
- ρ = 75%: E[X] = 3 packets; P[X > 10] < 0.06
Practice: a typical OC192 router linecard buffers over 2,000,000 packets.
Can we make the traffic arriving at the routers Poisson "enough" to get most of the benefit?
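These figures follow from the standard M/M/1 formulas; a quick check (my sketch, not from the deck):

```python
# M/M/1 with load rho (infinite buffer):
#   E[X] = rho / (1 - rho)       mean number of packets in the system
#   P[X > n] = rho ** (n + 1)    geometric tail of the occupancy
def mean_occupancy(rho: float) -> float:
    return rho / (1 - rho)

def tail_prob(rho: float, n: int) -> float:
    return rho ** (n + 1)

for rho in (0.50, 0.75):
    print(f"rho = {rho:.2f}: E[X] = {mean_occupancy(rho):.0f} packets, "
          f"P[X > 10] = {tail_prob(rho, 10):.1e}")
# rho = 0.50: E[X] = 1 packet,  P[X > 10] ~ 4.9e-04 (< 10^-3)
# rho = 0.75: E[X] = 3 packets, P[X > 10] ~ 4.2e-02 (< 0.06)
```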
17
Observations
1. The data rate of most flows is a small fraction of the line rate:
   - Access links throttle flows to low rates (1-2Mb/s)
   - Core:Access > 1000:1
   - (Today, TCP's window size is limited)
2. If packet arrivals were Poisson, a buffer of only 5-10 packets would give us 80-90% of the capacity of the link.
3. In an all-optical network, capacity is plentiful (presumably).
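A rough check of observation 2 using the finite-buffer M/M/1 formula (my sketch; the offered loads are assumptions, and real traffic is only approximately Poisson):

```python
# Throughput of an M/M/1/B queue is roughly rho * (1 - P_loss); with only
# 5-10 packets of buffer, most of the link capacity is already usable.
def mm1b_loss(rho: float, B: int) -> float:
    return (1 - rho) * rho**B / (1 - rho**(B + 1))

for rho in (0.95, 0.99):          # assumed offered loads
    for B in (5, 10):
        util = rho * (1 - mm1b_loss(rho, B))
        print(f"load {rho:.0%}, buffer {B:2d} pkts -> utilization ~ {util:.0%}")
# Utilization comes out around 80-90%, consistent with the slide's claim.
```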
18
What we know so far about very small buffers
Theory:
- Arbitrary injection process: any rate > 0 needs unbounded buffers.
- Poisson process with load < 1: need a buffer of approx. O(log D + log W), i.e. 20-30 pkts (D = # of hops, W = window size) [Goel 2004].
- Complete centralized control: constant-fraction throughput with constant buffers [Leighton].
Experiment:
- TCP pacing: results as good as or better than for Poisson.
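The "TCP pacing" entry refers to spacing a window's packets evenly over the round-trip time instead of sending them back to back; a minimal sketch of the idea (the function and parameters here are illustrative, not from the deck):

```python
import time

# Pacing: spread the W packets of a window evenly across one RTT so the
# aggregate arrival process at core routers is smoother (closer to Poisson).
def paced_send(packets: list, rtt_s: float, send) -> None:
    gap = rtt_s / max(len(packets), 1)   # inter-packet spacing
    for pkt in packets:
        send(pkt)
        time.sleep(gap)                  # real pacers use hardware/qdisc timers

# Example: a 30-packet window paced over a 100 ms RTT -> one packet every ~3.3 ms.
if __name__ == "__main__":
    window = [b"x" * 1500 for _ in range(30)]
    paced_send(window, rtt_s=0.100, send=lambda p: None)
```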
19
Early results
Congested core router with 10 packet buffers.
Average offered load = 80%; RTT = 100ms; each flow limited to 2.5Mb/s.
[Topology diagram: sources and a server connected through the core router; link speeds labeled 10Gb/s and >10Gb/s]
20
Slow access links, lots of flows, 10 packet buffers
Congested core router with 10 packet buffers.
RTT = 100ms; each flow limited to 2.5Mb/s.
[Topology diagram: sources and a server connected through the core router; access links 5Mb/s, core link 10Gb/s]
21
Conclusion
- We can reduce 1,000,000-packet buffers to 10,000 packets today.
- We can probably reduce to 10-20 packet buffers:
  - With many small flows, no change is needed.
  - With some large flows, pacing is needed in the access routers or at the edge devices.
- Need more experiments.
22
Questions?