1 Sizing Router Buffers
Sachin Katti, CS244. Slides courtesy: Nick McKeown

2 Routers need Packet Buffers
It is well known that routers need packet buffers; it is less clear why, and how much. The goal of this work is to answer the question: how much buffering do routers need? Given that queueing delay is the only variable part of packet delay in the Internet, you would think we knew the answer already!

3 How much Buffer does a Router need?
Source -> Router -> Destination; bottleneck link of capacity C, two-way propagation delay 2T. Universally applied rule-of-thumb: a router needs a buffer of size B = 2T×C, where 2T is the two-way propagation delay (or just 250 ms) and C is the capacity of the bottleneck link. Context: mandated in backbone and edge routers; appears in RFPs and IETF architectural guidelines. Usually referenced to Villamizar and Song, "High Performance TCP in ANSNET", CCR, 1994. Already known by the inventors of TCP [Van Jacobson, 1988]. Has major consequences for router design. Notes: the widely used rule-of-thumb is that a router needs, per interface, one delay-bandwidth product, i.e. 2T×C: a router serving a link of capacity C needs a buffer of 2T×C on that interface. The RFC is 3439, "Some Internet Architectural Guidelines".
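To make the rule concrete, here is a minimal sketch in Python that computes the delay-bandwidth product; the 250 ms delay and 10 Gb/s capacity are assumed example values, not taken from the slides.

```python
# Rule-of-thumb buffer: B = 2T x C (one delay-bandwidth product per interface).
# The two-way delay and capacity below are assumed example values.

def rule_of_thumb_buffer(two_way_delay_s: float, capacity_bps: float) -> float:
    """Return the buffer size in bits for a given two-way delay and link capacity."""
    return two_way_delay_s * capacity_bps

if __name__ == "__main__":
    buf_bits = rule_of_thumb_buffer(two_way_delay_s=0.250, capacity_bps=10e9)
    print(f"Buffer = {buf_bits / 1e9:.2f} Gbit")  # -> Buffer = 2.50 Gbit
```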

4 Only W=2 packets may be outstanding
TCP: only W = 2 packets may be outstanding. Source -> Router -> Dest, access link C' > C, bottleneck link C. The TCP congestion window controls the sending rate: the sender sends packets, the receiver sends ACKs, and the sending rate is controlled by the window W. At any time, only W unacknowledged packets may be outstanding, so the sending rate of TCP is W/RTT. Storyline: TCP controls its sending rate using the congestion window W. The rule is that you never have more than W packets outstanding; these outstanding packets can be in one of three places: in the buffer, on the wire, or dropped. Here is a practical example. The rate of TCP is W/RTT: send W packets, then wait one RTT before you can send the next batch. Don't: mention congestion; the outstanding-packet argument.
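A tiny numeric illustration of the rate = W/RTT relation (the packet size, window, and RTT below are assumed values for the example):

```python
# Rate = W / RTT: with W packets of 1500 bytes outstanding per round trip,
# the sending rate follows directly. All numbers here are assumed examples.

PACKET_SIZE_BITS = 1500 * 8

def tcp_rate_bps(window_pkts: int, rtt_s: float) -> float:
    """Sending rate in bits/s when W packets are sent per round-trip time."""
    return window_pkts * PACKET_SIZE_BITS / rtt_s

if __name__ == "__main__":
    # 100-packet window over a 100 ms RTT -> 12 Mb/s.
    print(f"{tcp_rate_bps(window_pkts=100, rtt_s=0.100) / 1e6:.1f} Mb/s")
```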

5 Single TCP Flow
Router with buffers large enough for full link utilization: for every W ACKs received, send W+1 packets. Buffer B; Source -> Dest; access link C' > C, bottleneck link C. (Figure: window size and RTT over time.) Storyline: with no buffer we cannot get full utilization, so let us assume we have enough buffer for full utilization and work backwards to how much that is. If utilization is full, the bottleneck link is always busy: ACKs arrive clocked (in packets/s), so sends are clocked too. Surprising: TCP then sends at a constant rate; it does not vary. It might stop occasionally (e.g. when the window scales down), but it either sends at C or not at all. Occasionally the sender increases W; what happens to those extra packets? They go into the buffer, and if the buffer is full they are dropped and the window size is reduced. Summary, if there is enough buffering: all links are filled (almost) all of the time; when the sender increases its window, the extra packets are absorbed by the buffer; the buffer compensates for changes in window size and therefore also follows the sawtooth pattern (and so does the RTT). Since the rate R is constant and W follows a sawtooth, the RTT follows the same sawtooth. When the number of outstanding packets is reduced (W -> W/2), the buffer has to compensate for the scale-down. A small simulation sketch of this dynamic follows below.
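The following toy model (not the paper's simulator; all parameter values are assumed) reproduces the dynamic described above: the window grows by one packet per RTT, packets beyond the pipe capacity wait in the buffer, and a full buffer causes a loss that halves the window.

```python
# Toy single-flow model: additive increase of one packet per RTT, multiplicative
# decrease on buffer overflow. With BUFFER_PKTS == PIPE_PKTS (the rule of thumb),
# the queue just reaches zero after each halving, so the link stays fully used.

PIPE_PKTS = 100      # 2T x C expressed in packets (assumed value)
BUFFER_PKTS = 100    # buffer size in packets (assumed equal to the pipe)

def simulate(rounds: int = 300):
    w, history = PIPE_PKTS // 2, []
    for _ in range(rounds):
        queue = max(0, w - PIPE_PKTS)      # packets not on the wire wait in the buffer
        if queue > BUFFER_PKTS:            # overflow -> loss -> window halves
            w //= 2
            queue = max(0, w - PIPE_PKTS)
        history.append((w, queue))
        w += 1                             # one extra packet per round trip
    return history

if __name__ == "__main__":
    for w, q in simulate()[145:156]:       # a slice around the first window halving
        print(f"window={w:3d} pkts  queue={q:3d} pkts")
```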

6 Required buffer is height of sawtooth

7 Buffer = rule of thumb
R is constant because the RTT follows the same sawtooth as the window (R = W/RTT).

8 Over-buffered Link
Story: too much buffering. When the window size drops, the buffer drains but does not empty, so the link is always fully utilized, but at the cost of additional latency; more buffering would only mean more latency.

9 Under-buffered Link Story:
Again the buffer empties when the window is decreased, but now there are not enough packets outstanding to fill the link, so the bottleneck link is underutilized.

10 Origin of rule-of-thumb
Before and after reducing the window size, the sending rate of the TCP sender is the same. Inserting the rate equation (rate = W/RTT) gives W/RTT_before = (W/2)/RTT_after. The RTT is part propagation delay 2T and part queueing delay B/C, and we know that after reducing the window the queueing delay is zero, so W/(2T + B/C) = (W/2)/(2T), which yields B = 2T×C. Don't: mention that this is only one of several ways to derive it; spend too much time.
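Written out, the derivation that was shown as an image on the slide amounts to the following (notation as on the slides: two-way propagation delay 2T, buffer B, bottleneck capacity C):

```latex
% Sending rate just before the loss (buffer full) equals the rate just after (buffer empty):
\frac{W}{\mathrm{RTT}_{\text{before}}} \;=\; \frac{W/2}{\mathrm{RTT}_{\text{after}}},
\qquad
\mathrm{RTT}_{\text{before}} = 2T + \frac{B}{C},
\qquad
\mathrm{RTT}_{\text{after}} = 2T.
% Substituting and solving for B:
\frac{W}{2T + B/C} = \frac{W/2}{2T}
\;\Longrightarrow\;
4T = 2T + \frac{B}{C}
\;\Longrightarrow\;
B = 2T \times C.
```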

11 Rule-of-thumb
The rule-of-thumb makes sense for one flow, but a typical backbone link carries > 20,000 flows. Does the rule-of-thumb still hold? Answer: if the flows are perfectly synchronized, then yes; if the flows are desynchronized, then no. Comment: today the rule-of-thumb is applied everywhere.

12 Outline
The rule of thumb
Buffer requirements for a congested router: synchronized flows, desynchronized flows, the 2T×C/sqrt(n) rule
Buffer requirements for short flows (slow-start)
Experimental verification
Conclusion

13 If flows are synchronized
Story: if we have several flows, the role of the buffer is to absorb fluctuations in the sum of the window sizes. In this example there are three flows, each with one third of the window of a single flow saturating the router. If the flows are synchronized, their windows add up to one big sawtooth: the aggregate window has the same dynamics as a single flow, therefore the buffer occupancy has the same dynamics, and the rule-of-thumb still holds.

14 When are Flows Synchronized?
Small numbers of flows tend to synchronize; large aggregates of flows are not synchronized. For > 200 flows, synchronization disappears, and measurements in the core give no indication of synchronization.

15 If flows are not synchronized
(Figure: probability distribution of buffer occupancy vs. buffer size B.) Story: what happens if the flows are not synchronized? By "not synchronized" I mean that the congestion windows W(t) evolve independently of each other. What can we say about the sum of the congestion windows? We would expect the fluctuation to be smaller: statistical multiplexing. There is a simple argument we can make. (a) Same distribution: the congestion windows have the same distribution because they see the same loss probability. (b) Independence: by the central limit theorem, the sum of independent random variables is approximately Gaussian. Measurements show this is actually the case. Parameters: 800 Mb/s, 2000 flows, each window size in the low tens of packets.

16 Quantitative Model
Model the congestion window of each flow as a random variable W_i, and model the aggregate window as the sum W = W_1 + W_2 + ... + W_n. For many desynchronized flows we assume the congestion windows are independent and that all congestion windows have the same probability distribution. The central limit theorem then gives us the distribution of the sum of the window sizes: W is approximately Gaussian, with mean n·E[W_i] and standard deviation sqrt(n)·σ_{W_i}.
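A small numerical sketch of this CLT argument (the per-flow window distribution and all parameter values below are assumed for illustration, not taken from the paper):

```python
# Sum n i.i.d. per-flow congestion windows and observe that the aggregate's
# relative fluctuation (std/mean) shrinks like 1/sqrt(n), as the CLT predicts.
# The per-flow window distribution and sample counts are assumed examples.
import random
import statistics

def sample_window() -> float:
    """One flow's window at a random instant of its sawtooth: uniform on [W/2, W]."""
    return random.uniform(20, 40)   # assumed per-flow window range, in packets

def aggregate_window(n_flows: int) -> float:
    return sum(sample_window() for _ in range(n_flows))

if __name__ == "__main__":
    random.seed(0)
    for n in (1, 100, 2_500):
        samples = [aggregate_window(n) for _ in range(400)]
        mean = statistics.mean(samples)
        std = statistics.pstdev(samples)
        print(f"n={n:5d}: mean={mean:9.1f} pkts  std={std:7.2f}  std/mean={std/mean:.4f}")
```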

17 Buffer vs. Number of Flows for a given Bandwidth
If for a single flow the window has mean E[W] and standard deviation σ_W, then for a given capacity C the per-flow window scales with 1/n, and thus the standard deviation of the sum of the windows decreases with 1/sqrt(n). Story: how does the required buffer depend on the number of flows? Assume we know the average window and the standard deviation for a single flow. If we have twice as many flows, each will have half the mean and half the standard deviation: both scale with 1/n. Plugging this into the formula from the last page, the standard deviation of the sum of the windows decreases with 1/sqrt(n), so we expect the required buffer to decrease with 1/sqrt(n): as n increases, the buffer size should decrease, giving the 2T×C/sqrt(n) rule.
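A minimal sketch of the resulting sizing rule, using assumed example values for the link (250 ms two-way delay, 10 Gb/s) to show how quickly the requirement shrinks with n:

```python
# Small-buffer rule: B = 2T x C / sqrt(n). The delay and capacity are assumed
# example values; n = 1 reproduces the old rule of thumb.
from math import sqrt

def small_buffer_bits(two_way_delay_s: float, capacity_bps: float, n_flows: int) -> float:
    return two_way_delay_s * capacity_bps / sqrt(n_flows)

if __name__ == "__main__":
    for n in (1, 100, 20_000):
        b = small_buffer_bits(0.250, 10e9, n)
        print(f"n={n:6d}: B = {b / 1e6:8.1f} Mbit")
    # n=1      -> 2500.0 Mbit (the 2T x C rule of thumb)
    # n=20,000 ->   17.7 Mbit
```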

18 Required buffer size
(Figure: simulation of required buffer size vs. number of flows.)

19 Summary
Flows in the core are desynchronized. For desynchronized flows, routers need only buffers of 2T×C/sqrt(n). Notes: mention that this is contrary to what was previously assumed.

20 Experimental Evaluation Overview
Simulation with ns2: over 10,000 simulations covering a range of settings; simulation time 30 s to 5 minutes; bandwidth 10 Mb/s to 1 Gb/s; latency 20 ms to 250 ms.
Physical router: Cisco GSR with an OC3 line card, in collaboration with the University of Wisconsin.
Experimental results presented here:
Long flows - utilization
Mixes of flows - flow completion time (FCT)
Mixes of flows - heavy-tailed flow distribution
Short flows - queue distribution

21 Long Flows - Utilization (I)
Small buffers are sufficient: OC3 line, ~100 ms RTT. (Figure: utilization vs. buffer size, with curves for 99.9%, 99.5%, and 98.0% utilization.)

30 Impact on Router Design
10 Gb/s linecard with 200,000 x 56 kb/s flows: rule-of-thumb buffer = 2.5 Gbit, which requires external, slow DRAM; this becomes buffer = 6 Mbit, which fits in on-chip, fast SRAM, and completion time is halved for short flows. 40 Gb/s linecard with 40,000 x 1 Mb/s flows: rule-of-thumb buffer = 10 Gbit; this becomes buffer = 50 Mbit. For more details: "Sizing Router Buffers", Guido Appenzeller, Isaac Keslassy and Nick McKeown, to appear at SIGCOMM 2004. Flavio Bonomi: 10%-40% less cost, power, and space on the board; allows building cards at higher line rates. If we can move the buffer on chip, it radically changes line card design.
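A quick back-of-the-envelope check of the linecard numbers above, assuming the 250 ms two-way delay used by the rule of thumb:

```python
# Reproduce the slide's linecard numbers: rule of thumb (2T x C) vs. the
# small-buffer rule (2T x C / sqrt(n)), assuming a 250 ms two-way delay.
from math import sqrt

TWO_WAY_DELAY_S = 0.250

def buffers_bits(capacity_bps: float, n_flows: int) -> tuple[float, float]:
    rule_of_thumb = TWO_WAY_DELAY_S * capacity_bps
    small = rule_of_thumb / sqrt(n_flows)
    return rule_of_thumb, small

if __name__ == "__main__":
    for capacity, n in ((10e9, 200_000), (40e9, 40_000)):
        rot, small = buffers_bits(capacity, n)
        print(f"{capacity / 1e9:.0f} Gb/s, n={n:,}: "
              f"rule of thumb = {rot / 1e9:.1f} Gbit, small buffer = {small / 1e6:.0f} Mbit")
    # -> 10 Gb/s: 2.5 Gbit vs ~6 Mbit;  40 Gb/s: 10 Gbit vs 50 Mbit
```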

