Slide 1: CS144, An Introduction to Computer Networks
Flow Control, Congestion Control, and the Size of Router Buffers (Section)
Nick McKeown, Professor of Electrical Engineering and Computer Science, Stanford University
Slide 2: Outline
- Flow Control
- Congestion Control
- The Size of Router Buffers
CS144, Stanford University
Slide 3: Sliding Window
The window divides the byte stream into: data already ACK'd; outstanding data (sent, but un-ACK'd); data OK to send; and data not OK to send yet. The window size spans the outstanding and OK-to-send regions.
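The four regions above can be sketched as sender-side bookkeeping. This is a minimal illustration, not TCP's actual implementation; class and method names are invented for the example.

```python
# Sketch of sender-side sliding-window bookkeeping (names are illustrative).
class SlidingWindowSender:
    def __init__(self, window_size, total_bytes):
        self.window_size = window_size  # max outstanding (un-ACK'd) bytes
        self.total = total_bytes        # total bytes to send
        self.acked = 0                  # bytes ACK'd (left edge of window)
        self.next_seq = 0               # next byte to send

    def can_send(self):
        # OK to send while inside the window and data remains
        return self.next_seq < min(self.acked + self.window_size, self.total)

    def send(self, n):
        # Send up to n bytes, limited by the window's right edge
        n = min(n, self.acked + self.window_size - self.next_seq,
                self.total - self.next_seq)
        self.next_seq += n
        return n

    def on_ack(self, ack_seq):
        # A cumulative ACK slides the window's left edge forward
        self.acked = max(self.acked, ack_seq)

s = SlidingWindowSender(window_size=4, total_bytes=10)
print(s.send(10))   # only 4 bytes fit in the window
s.on_ack(4)         # the ACK slides the window: 4 more bytes become OK to send
print(s.send(10))
```

Note how data becomes "OK to send" only as ACKs advance the left edge of the window.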
Slide 4: Flow Control Inside the Destination
[Figure: arriving packets enter the receive buffer (RcvBuffer), where the OS resequences them, generates ACKs, etc., and delivers data to the user.]
Slide 5: Dynamics of Flow Control
Animation at:
Slide 6: Outline
- Flow Control
- Congestion Control
- The Size of Router Buffers
Slide 7: TCP Sliding Window
Two cases for a sender A and receiver B, where R is the link rate:
(1) R x RTT > window size: A exhausts its window before the first ACK returns, so it idles for part of each round-trip time and the link is underutilized.
(2) R x RTT = window size: ACKs return just as the window is exhausted, so A transmits continuously and keeps the link full.
Slide 8: “Bag of packets”
If there is a single flow, TCP is probing to find out how big the “bag” is so it can fill it. In general, a TCP flow is trying to figure out how much room there is in the “bag” for its flow.
Slide 9: TCP Congestion Control
TCP varies the number of outstanding packets in the network by varying the window size:
Window size = min{Advertised window, Congestion window (“cwnd”)}
The advertised window is set by the receiver; cwnd is maintained by the transmitter.
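The min{} rule can be shown directly. A small sketch (the function name is invented for illustration):

```python
def effective_window(advertised_window, cwnd):
    """A TCP sender keeps at most min(advertised window, cwnd) bytes outstanding."""
    return min(advertised_window, cwnd)

# Network-limited: cwnd is the smaller of the two
print(effective_window(advertised_window=65535, cwnd=8000))   # 8000

# Receiver-limited: the advertised window is the smaller of the two
print(effective_window(advertised_window=4096, cwnd=8000))    # 4096
```

Flow control (the advertised window) protects the receiver; congestion control (cwnd) protects the network. Whichever is smaller binds the sender.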
Slide 10: AIMD
Additive Increase, Multiplicative Decrease
Slide 11: Leads to the AIMD “sawtooth”
[Figure: cwnd vs. time t; cwnd is halved at each drop, producing a sawtooth.]
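The sawtooth can be reproduced with a few lines of simulation. This is a toy model, not a real TCP implementation: the parameters (capacity, increase step, decrease factor) are illustrative, and a drop is modeled simply as cwnd exceeding a fixed capacity.

```python
# Toy AIMD model: cwnd in packets, one step per RTT (parameters illustrative).
def aimd(rtts, cwnd=1.0, capacity=100.0, add=1.0, mult=0.5):
    trace = []
    for _ in range(rtts):
        if cwnd > capacity:
            cwnd *= mult      # multiplicative decrease: halve on a drop
        else:
            cwnd += add       # additive increase: +1 packet per RTT
        trace.append(cwnd)
    return trace

trace = aimd(300)
# In steady state the sawtooth oscillates between about capacity/2 and capacity
print(min(trace[150:]), max(trace[150:]))
```

The steady-state oscillation between roughly half the capacity and the full capacity is exactly the “sawtooth” drawn on the slide.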
Slide 12: Dynamics of an AIMD Flow
Animation at:
Slide 13: Outline
- Flow Control
- Congestion Control
- The Size of Router Buffers
Slide 14: The Size of a Router Buffer
[Figure: a router buffer in front of an outgoing line of capacity C.]
Slide 15: Rule-of-Thumb
Buffer size = 2T x C, where:
- 2T = RTTmin = 2 x (propagation delay + packetization delay)
- C = capacity of the outgoing line.
Example: a 10 Gb/s interface with 2T = 250 ms needs about 300 MBytes of buffering, and must read and write a new packet every 32 ns.
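The slide's example numbers can be checked directly. The only assumption added here is a 40-byte minimum-size packet for the 32 ns figure:

```python
# Rule-of-thumb buffer for the slide's example: 10 Gb/s link, 2T = 250 ms.
C = 10e9          # link capacity, bits/s
two_T = 0.250     # round-trip time, seconds

buffer_bytes = two_T * C / 8
print(f"{buffer_bytes / 1e6:.1f} MB of buffering")   # 312.5 MB, ~300 MBytes

# Time budget per minimum-size packet (40 bytes assumed) at line rate:
packet_time = 40 * 8 / C
print(f"{packet_time * 1e9:.0f} ns per packet")      # 32 ns
```

So 2T x C = 2.5 Gbit, i.e. roughly 300 MBytes, and at 10 Gb/s a 40-byte packet occupies the line for only 32 ns, which is what makes large, fast buffers expensive to build.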
Slide 16: The Story
- Rule 1 (rule-of-thumb, 2T x C): ~1,000,000 packets at 10 Gb/s
- Rule 2 (2T x C / sqrt(N)): ~10,000 packets
- Rule 3 (O(log W)): ~20 packets

Speaker notes: After this relatively long introduction, let me give an overview of the rest of my presentation. I'll talk about three different rules for sizing router buffers.

The first rule is the rule-of-thumb I just described. As I mentioned, it is based on the assumption that we want 100% link utilization on the core links.

The second rule is a more recent result, proposed by Appenzeller, Keslassy, and McKeown, which challenges the original rule-of-thumb. It says that if we have N flows going through the router, we can reduce the buffer size by a factor of sqrt(N). The underlying assumption is that we have a large number of flows, and the flows are desynchronized.

Finally, the third rule, which I'll talk about today, says that if we are willing to sacrifice a very small amount of throughput (i.e., if a throughput below 100% is acceptable), we might be able to reduce buffer sizes significantly, to just O(log W) packets, where W is the maximum congestion window size.

If we apply each of these rules to a 10 Gb/s link, we will need to buffer 1,000,000 packets under the first rule, about 10,000 under the second, and only 20 under the third. For the rest of this presentation I'll show you the intuition behind each rule and provide some evidence that validates it. Let's start with the rule-of-thumb.

Assumption for rule 2: large number of desynchronized flows; 100% utilization.
Assumption for rule 3: large number of desynchronized flows; <100% utilization.
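The three headline numbers can be reproduced from the formulas. The packet size, flow count N, and window W below are assumptions chosen to match the slide's round figures, and the constant in the O(log W) rule is picked purely for illustration:

```python
import math

# The three buffer-sizing rules applied to a 10 Gb/s link.
# Parameters are assumptions chosen to reproduce the slide's round numbers.
C = 10e9          # link capacity, bits/s
two_T = 0.250     # round-trip time, seconds
pkt = 2500        # average packet size, bits (~300 bytes; an assumption)
N = 10_000        # number of concurrent flows (assumed large, desynchronized)
W = 1024          # maximum congestion window, packets (assumed)

rule_of_thumb = two_T * C / pkt                 # 2T x C, in packets
sqrt_rule = rule_of_thumb / math.sqrt(N)        # 2T x C / sqrt(N)
log_rule = 2 * math.log2(W)                     # O(log W); constant is illustrative

print(int(rule_of_thumb), int(sqrt_rule), int(log_rule))
# 1000000 10000 20
```

The point is the scale of the gap: each successive rule cuts the required buffering by roughly two orders of magnitude.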
Slide 17: Time Evolution of a Single TCP Flow
[Two figures: time evolution of a single TCP flow through a router, first with buffer = 2T x C, then with buffer < 2T x C.]
Slide 18: Buffer size = 2T x C
[Figure; interval magnified on the next slide.]
Slide 19: When the sender pauses, the buffer drains
[Figure: after a drop, the queue drains over one RTT while the sender pauses.]
Slide 20: Origin of the rule-of-thumb
Before and after reducing its window, the sending rate of the TCP sender is the same:
  W / RTT_before = (W/2) / RTT_after
The RTT is part propagation delay 2T and part queueing delay B/C, so RTT = 2T + B/C. Just before the drop the buffer is full (RTT_before = 2T + B/C); we know that after reducing the window, the queueing delay is zero (RTT_after = 2T). Substituting:
  W / (2T + B/C) = (W/2) / 2T  =>  2T = B/C  =>  B = 2T x C
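A quick numeric check of this derivation, using the earlier example's parameters (10 Gb/s, 2T = 250 ms): with B sized exactly at 2T x C, halving the window while the queue drains leaves the sending rate unchanged at the link capacity.

```python
# Numeric check of the rule-of-thumb derivation.
C = 10e9         # link capacity, bits/s
two_T = 0.250    # two-way propagation delay, s
B = two_T * C    # buffer sized by the rule of thumb, bits

W = C * (two_T + B / C)            # window just before the drop (queue full)
rate_before = W / (two_T + B / C)  # RTT includes queueing delay B/C
rate_after = (W / 2) / two_T       # after halving, the queue is empty

print(rate_before == rate_after == C)  # True: the link never goes idle
```

With any smaller buffer, the halved window could not keep the link busy while the queue is empty, which is exactly why the rule-of-thumb targets 100% utilization.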
Slide 21: Rule-of-Thumb
The rule-of-thumb makes sense for one flow, but a typical backbone link carries > 20,000 flows. Does the rule-of-thumb still hold?
Answer: if the flows are perfectly synchronized, yes; if the flows are desynchronized, no.
Slide 22: The Story (recap)
Rule 1 (2T x C): ~1,000,000 packets at 10 Gb/s. Rule 2 (2T x C / sqrt(N), assuming a large number of desynchronized flows and 100% utilization): ~10,000 packets. Rule 3 (O(log W), assuming a large number of desynchronized flows and <100% utilization): ~20 packets.
Slide 23: Synchronized Flows
The aggregate window has the same dynamics as a single flow's window, therefore the buffer occupancy has the same dynamics, and the rule-of-thumb still holds.
Slide 24: Many TCP Flows
[Figure: probability distribution of the required buffer size.]
Slide 25: Required Buffer Size (Simulations)
Slide 26: Level3 WAN Experiment
High link utilization was maintained for two weeks, with buffer sizes on three parallel links of 190 ms (190K packets), 10 ms (10K packets), and 1 ms (1K packets).
Note: This slide had a typo that caused some confusion (for me!) during class.
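The delay-to-packet-count conversion on this slide is consistent with a 10 Gb/s line rate and packets of about 1,250 bytes; both of those parameters are assumptions made here for the arithmetic, not stated on the slide.

```python
# Converting buffer delay to packet count: packets = delay * C / packet_size.
# Assumed parameters: 10 Gb/s link, ~1250-byte packets.
C = 10e9                     # bits/s
pkt_bits = 1250 * 8          # bits per packet
pkts_per_sec = C / pkt_bits  # 1,000,000 packets/s at line rate

for delay_ms in (190, 10, 1):
    packets = round(delay_ms / 1000 * pkts_per_sec)
    print(f"{delay_ms} ms -> {packets} packets")
```

This is the same 2T x C arithmetic as the rule-of-thumb, just expressed in packets rather than bytes.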
Slide 27: Drop vs. Load, Buffer = 190 ms and 10 ms
Slide 28: Drop vs. Load, Buffer = 1 ms
Slide 29: The Story (recap)
Rule 1 (2T x C): ~1,000,000 packets at 10 Gb/s. Rule 2 (2T x C / sqrt(N), assuming a large number of desynchronized flows and 100% utilization): ~10,000 packets. Rule 3 (O(log W), assuming a large number of desynchronized flows and <100% utilization): ~20 packets.
Slide 30: Buffer Size
[Figure: throughput (up to 100%) vs. number of packets buffered for a 10 Gb/s WAN link: ~1,000,000 packets under the rule-of-thumb vs. ~10,000 under the sqrt(N) rule; inset shows window size vs. time t. Buffers of ~10,000 packets fit on chip, giving a smaller design and lower power.]
Slide 31: Integrated All-Optical Buffer
[Figure: with only ~20-50 packets of buffering, the log(W) rule achieves ~90% throughput on a 10 Gb/s WAN link, vs. ~10,000 and ~1,000,000 packets for the other rules at 100%. Such small buffers allow on-chip buffering (smaller design, lower power) and even an integrated all-optical buffer (UCSB, 2008).]
Slide 32: Consequences?
10-50 packets of buffering on a chip.