1
EECS 122: Introduction to Computer Networks TCP Variations
Computer Science Division Department of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA
2
Today’s Lecture: 11
[Figure: protocol stack (Application, Transport, Network (IP), Link, Physical) annotated with the lecture numbers covering each layer; this lecture (11) is at the Transport layer]
3
Outline
Quick review
TCP congestion control: TCP flavors, equation-based congestion control, impact of losses, cheating
Router-based support: RED, ECN, Fair Queueing, XCP
4
Quick Review
Slow start: cwnd++ upon every new ACK
Congestion avoidance (AIMD), once cwnd > ssthresh: on each ACK, cwnd = cwnd + 1/cwnd; on a drop, ssthresh = cwnd/2 and cwnd = 1
Fast recovery: on duplicate ACKs, cwnd = cwnd/2
Timeout: cwnd = 1
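As a rough sketch (the class, method names, and Reno-style ssthresh handling are illustrative, not a real TCP stack), these rules might look like:

```python
# Sketch of the congestion-control rules above (window measured in segments).
class TcpCongestionState:
    def __init__(self):
        self.cwnd = 1.0        # congestion window
        self.ssthresh = 64.0   # slow-start threshold (illustrative initial value)

    def on_new_ack(self):
        if self.cwnd < self.ssthresh:       # slow start: +1 per ACK
            self.cwnd += 1
        else:                               # congestion avoidance (AIMD)
            self.cwnd += 1.0 / self.cwnd

    def on_triple_dupack(self):             # fast recovery
        self.ssthresh = self.cwnd / 2
        self.cwnd = self.cwnd / 2

    def on_timeout(self):                   # severe congestion signal
        self.ssthresh = self.cwnd / 2
        self.cwnd = 1.0
```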
5
TCP Flavors
TCP-Tahoe: cwnd = 1 whenever a drop is detected
TCP-Reno: cwnd = 1 on timeout; cwnd = cwnd/2 on duplicate ACKs
TCP-NewReno: TCP-Reno + improved fast recovery
TCP-Vegas, TCP-SACK
6
TCP Vegas
Improved timeout mechanism
Decrease cwnd only for losses of packets sent at the current rate, which avoids reducing the rate twice
Congestion avoidance phase: compare the actual rate (A) to the expected rate (E); if E - A > β, decrease cwnd linearly; if E - A < α, increase cwnd linearly
Rate measurements are essentially delay measurements; see the textbook for details!
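A minimal sketch of the Vegas congestion-avoidance decision, assuming the rates are estimated from cwnd and RTT samples and using illustrative values for the α/β thresholds:

```python
# Sketch of TCP Vegas congestion avoidance (run once per RTT).
# base_rtt: smallest RTT observed; rtt: current RTT sample; cwnd in segments.
ALPHA, BETA = 1, 3   # illustrative Vegas thresholds (segments of "extra" data)

def vegas_update(cwnd, base_rtt, rtt):
    expected = cwnd / base_rtt              # rate if there were no queueing
    actual = cwnd / rtt                     # rate actually achieved
    diff = (expected - actual) * base_rtt   # extra segments queued in the network
    if diff > BETA:
        cwnd -= 1                           # queue building up: back off linearly
    elif diff < ALPHA:
        cwnd += 1                           # link underused: probe linearly
    return cwnd
```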
7
TCP-SACK SACK = Selective Acknowledgements
ACK packets identify exactly which packets have arrived, which makes recovery from multiple losses much easier
8
Standards? How can all these algorithms coexist?
Don’t we need a single, uniform standard? What happens if I’m using Reno and you are using Tahoe, and we try to communicate?
9
Equation-Based CC
Simple scenario: assume a drop every k-th RTT (for some large k); the window evolves as w, w+1, w+2, ..., w+k-1, DROP, then (w+k-1)/2, (w+k-1)/2 + 1, ...
Observations:
In steady state w = (w+k-1)/2, so w = k-1
Average window: 1.5(k-1)
Total packets between drops: 1.5k(k-1)
Drop probability: p = 1/[1.5k(k-1)]
Throughput: T ~ (1/RTT) * sqrt(3/(2p))
10
Equation-Based CC
Idea: forget complicated increase/decrease algorithms; use the equation T(p) directly!
Approach: measure the drop rate (ACKs are not needed for this), send the drop rate p to the source, and have the source send at rate T(p)
Good for streaming audio/video that can’t tolerate the high variability of TCP’s sending rate
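A minimal sketch of the source's side, computing the sending rate directly from the simple equation above (real equation-based protocols such as TFRC use a more detailed formula; all names here are illustrative):

```python
import math

def tcp_friendly_rate(rtt, p, packet_size=1500):
    """Sending rate in bytes/sec from T ~ (1/RTT) * sqrt(3/(2p)) packets/sec."""
    if p <= 0:
        return float("inf")   # no observed loss: the equation does not bound the rate
    packets_per_sec = (1.0 / rtt) * math.sqrt(3.0 / (2.0 * p))
    return packets_per_sec * packet_size

# Example: RTT = 100 ms, loss rate 1%
print(tcp_friendly_rate(0.1, 0.01))   # ~122 packets/s * 1500 B ≈ 184 KB/s
```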
11
Question! Why use the TCP equation? Why not use any equation for T(p)?
12
Cheating
Three main ways to cheat:
increasing cwnd faster than 1 per RTT
using a large initial cwnd
opening many connections
13
Increasing cwnd Faster
[Figure: flows x and y sharing a bottleneck link]
x increases by 2 per RTT; y increases by 1 per RTT
Limit rates: x = 2y
14
Increasing cwnd Faster
[Figure: flows x and y sharing the bottleneck, as in the previous slide]
15
Larger Initial cwnd
[Figure: flows x (A to B) and y (D to E) sharing a bottleneck]
x starts slow start with cwnd = 4; y starts slow start with cwnd = 1
16
Open Many Connections
[Figure: hosts A to B and D to E sharing a bottleneck]
Assume A starts 10 connections to B, and D starts 1 connection to E
Each connection gets about the same throughput
Then A gets 10 times more throughput than D
17
Cheating and Game Theory
[Figure: flows x (from A) and y (from D) sharing a bottleneck. Payoff matrix, each cell showing the throughputs (x, y):]

                     D increases by 1    D increases by 5
A increases by 1         (22, 22)            (10, 35)
A increases by 5         (35, 10)            (15, 15)

Too aggressive: losses, and throughput falls
Individual incentives: cheating pays
Social incentives: better off without cheating
Classic Prisoner's Dilemma: the resolution depends on accountability
18
Lossy Links TCP assumes that all losses are due to congestion
What happens when the link itself is lossy? Recall that throughput ~ 1/sqrt(p), where p is the loss probability; this applies even when the losses are not due to congestion
19
Example
[Figure: TCP performance for loss rates p = 0, p = 1%, and p = 10%]
20
What can routers do to help?
21
Paradox
Routers are in the middle of the action, but traditional routers are very passive in terms of congestion control: FIFO scheduling and drop-tail buffer management
22
FIFO: First-In First-Out
Maintain a queue to store all packets; send the packet at the head of the queue
[Figure: FIFO queue showing the arriving packet, queued packets, and the next packet to transmit]
23
Tail-drop Buffer Management
Drop packets only when the buffer is full
[Figure: full queue; the arriving packet is dropped while the head of the queue is next to transmit]
24
Ways Routers Can Help
Packet scheduling: non-FIFO scheduling
Packet dropping: not drop-tail, i.e., not only when the buffer is full
Congestion signaling
25
Question! Why not use infinite buffers? no packet drops!
26
The Buffer Size Quandary
Small buffers: often drop packets due to bursts, but keep delays small
Large buffers: reduce the number of packet drops (due to bursts), but increase delays
Can we have the best of both worlds?
27
Random Early Detection (RED)
Basic premise: the router should signal congestion when the queue first starts building up (by dropping a packet), but should give flows time to reduce their sending rates before dropping more packets
Therefore, packet drops should be:
early: don’t wait for the queue to overflow
random: don’t drop all packets in a burst; space the drops out
28
RED
FIFO scheduling
Buffer management: probabilistically discard packets; the probability is computed as a function of the average queue length (why average?)
[Figure: discard probability vs. average queue length, with thresholds min_th and max_th]
29
RED (cont’d)
min_th – minimum threshold
max_th – maximum threshold
avg_len – average queue length, updated as avg_len = (1-w)*avg_len + w*sample_len
[Figure: discard probability vs. average queue length, with thresholds min_th and max_th]
30
RED (cont’d)
If (avg_len < min_th): enqueue the packet
If (avg_len > max_th): drop the packet
If (min_th <= avg_len < max_th): drop the packet with discard probability P, otherwise enqueue it
[Figure: discard probability P vs. average queue length]
31
RED (cont’d)
P = max_P * (avg_len - min_th) / (max_th - min_th)
Improvements to spread the drops out (see textbook)
[Figure: discard probability rising linearly from 0 at min_th to max_P at max_th]
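A compact sketch of the RED decision described on the last three slides (the threshold, weight, and max_P values are illustrative):

```python
import random

# Illustrative RED parameters
MIN_TH, MAX_TH = 5.0, 15.0   # thresholds on the *average* queue length
MAX_P = 0.1                  # drop probability as avg_len approaches max_th
W = 0.002                    # EWMA weight for the average queue length

avg_len = 0.0

def red_enqueue(queue, packet, capacity=30):
    """Return True if the packet is enqueued, False if it is dropped."""
    global avg_len
    avg_len = (1 - W) * avg_len + W * len(queue)   # update the moving average
    if avg_len < MIN_TH:
        pass                                       # no congestion: try to enqueue
    elif avg_len >= MAX_TH:
        return False                               # drop
    else:
        p = MAX_P * (avg_len - MIN_TH) / (MAX_TH - MIN_TH)
        if random.random() < p:
            return False                           # early, probabilistic drop
    if len(queue) >= capacity:                     # buffer genuinely full
        return False
    queue.append(packet)
    return True
```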
32
Average vs Instantaneous Queue
33
RED Advantages High network utilization with low delays
The average queue length stays small, yet the buffer can absorb large bursts
Many refinements to the basic algorithm make it more adaptive (requires less tuning)
34
Explicit Congestion Notification
Rather than dropping packets to signal congestion, the router can send an explicit signal
Explicit congestion notification (ECN): instead of (optionally) dropping the packet, the router sets a bit in the packet header
If a data packet has the bit set, then the ACK has its ECN bit set
Backward compatibility: a bit in the header indicates whether the host implements ECN; note that not all routers need to implement ECN
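A minimal sketch of the router-side choice, with an illustrative two-field packet representation (real ECN uses two bits in the IP header plus TCP echo/response flags):

```python
from dataclasses import dataclass

@dataclass
class Packet:
    ect: bool = False   # ECN-Capable Transport: the endpoints implement ECN
    ce: bool = False    # Congestion Experienced: set by a congested router

def signal_congestion(pkt: Packet) -> str:
    """Router action when it would otherwise drop a packet to signal congestion."""
    if pkt.ect:
        pkt.ce = True    # mark instead of dropping; the receiver echoes this in ACKs
        return "marked"
    return "dropped"     # non-ECN flow: keep the old drop behavior
```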
35
Picture
[Figure: congestion window oscillating between W/2 and W for a connection from A to B]
36
ECN Advantages
No need to retransmit packets that would otherwise have been (optionally) dropped
No confusion between congestion losses and corruption losses
37
Remaining Problem
The Internet is vulnerable to congestion-control cheaters!
A single congestion-control standard can’t satisfy all applications (equation-based congestion control might answer this point)
Goal: make the Internet invulnerable to cheaters, allowing end users to use whatever congestion control they want
How?
38
One Approach: Nagle (1987)
Round-robin among different flows, with one queue per flow
39
Round-Robin Discussion
Advantages: protection among flows
Misbehaving flows will not affect the performance of well-behaving flows (a misbehaving flow is one that does not implement any congestion control)
FIFO does not have such a property
Disadvantages:
More complex than FIFO: per-flow queues and state
Biased toward large packets: a flow receives service proportional to its number of packets
40
Solution? Bit-by-bit round robin
Can you do this in practice?
No, packets cannot be preempted (why?); we can only approximate it
41
Fair Queueing (FQ)
Define a fluid flow system: a system in which flows are served bit-by-bit
Then serve packets in increasing order of their deadlines (their finishing times in the fluid system)
Advantages: each flow receives exactly its fair rate
Note: FQ achieves max-min fairness
42
Max-Min Fairness
Denote: C – link capacity; N – number of flows; ri – arrival rate of flow i
Max-min fair rate computation:
1. Compute C/N
2. If there are flows i such that ri <= C/N, subtract their rates from C, remove them from N, and go to step 1
3. Otherwise, f = C/N; terminate
A flow can receive at most the fair rate, i.e., min(f, ri)
43
Example
C = 10; r1 = 8, r2 = 6, r3 = 2; N = 3
Iteration 1: C/3 = 3.33; r3 = 2 <= 3.33, so C = C - r3 = 8 and N = 2
Iteration 2: C/2 = 4; no remaining rate is below 4, so f = 4
Allocations with f = 4: min(8, 4) = 4, min(6, 4) = 4, min(2, 4) = 2
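A small sketch of the iteration, reproducing this example (function and variable names are illustrative):

```python
def max_min_fair_rate(capacity, rates):
    """Iterative max-min fair rate computation from the previous slide."""
    remaining = list(rates)
    C = float(capacity)
    while True:
        f = C / len(remaining)
        small = [r for r in remaining if r <= f]   # flows asking for less than the fair share
        if not small:
            return f
        C -= sum(small)                            # satisfy those flows fully
        remaining = [r for r in remaining if r > f]
        if not remaining:
            return f                               # capacity exceeds total demand

f = max_min_fair_rate(10, [8, 6, 2])
print(f)                                   # 4.0
print([min(f, r) for r in (8, 6, 2)])      # [4.0, 4.0, 2]
```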
44
Implementing Fair Queueing
Idea: serve packets in the order in which they would have finished transmission in the fluid flow system
45
Example
[Figure: arrival traffic for Flow 1 and Flow 2, the service order in the fluid flow system, and the resulting transmission order in the packet system]
46
System Virtual Time: V(t)
Measure service, instead of time
The slope of V(t) is the rate at which every active flow receives service, i.e., C/N(t)
C – link capacity; N(t) – number of active flows in the fluid flow system at time t
[Figure: V(t) over time for the fluid-flow-system example on the previous slide]
47
Fair Queueing Implementation
Define:
F(i, k) – finishing time of packet k of flow i (in the system virtual time reference system)
a(i, k) – arrival time of packet k of flow i
L(i, k) – length of packet k of flow i
The finishing time of packet k+1 of flow i is F(i, k+1) = max(F(i, k), V(a(i, k+1))) + L(i, k+1)
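A minimal sketch of this bookkeeping, assuming the virtual time V at each arrival is given (its update from N(t) is omitted) and all flows have equal weight:

```python
import heapq

# finish[i] holds F(i, k), the virtual finishing time of flow i's last packet.
finish = {}    # flow id -> virtual finish time of the flow's most recent packet
pending = []   # min-heap of (virtual finish time, flow id, packet length)

def on_arrival(flow, length, V):
    """Stamp the arriving packet with its virtual finishing time."""
    start = max(finish.get(flow, 0.0), V)   # max(F(i, k), V(a(i, k+1)))
    finish[flow] = start + length           # F(i, k+1) = start + L(i, k+1)
    heapq.heappush(pending, (finish[flow], flow, length))

def next_to_transmit():
    """Serve packets in increasing order of their virtual finishing times."""
    return heapq.heappop(pending) if pending else None
```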
48
FQ Advantages
FQ protects well-behaved flows from ill-behaved flows
Example: 1 UDP flow (10 Mbps) and 31 TCPs sharing a 10 Mbps link; with FQ, each flow gets roughly its fair share of the link regardless of how fast the UDP flow sends
49
Alternative Implementations of Max-Min
Deficit round-robin (a sketch appears below)
Core-stateless fair queueing: label packets with their flow's rate, drop according to the labeled rates, and check at the ingress to make sure the rates are truthful
Approximate fair dropping: keep a small sample of previous packets, estimate rates from the sample, and apply dropping as above; this wins because there are few large flows (per-elephant state, not per-mouse state)
RED-PD: not max-min, but punishes big cheaters
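As an illustration of the first alternative, a sketch of deficit round-robin with an illustrative quantum; each busy flow gets roughly the same number of bytes per round regardless of its packet sizes:

```python
from collections import deque

QUANTUM = 1500   # bytes of credit added to each busy flow per round (illustrative)

def drr_round(queues, deficits, send):
    """One round of Deficit Round-Robin over per-flow queues of byte strings."""
    for flow, q in queues.items():
        if not q:
            deficits[flow] = 0              # idle flows do not accumulate credit
            continue
        deficits[flow] += QUANTUM
        while q and len(q[0]) <= deficits[flow]:
            pkt = q.popleft()
            deficits[flow] -= len(pkt)      # spend credit equal to the packet size
            send(pkt)

# Example: one flow with large packets, one with small packets.
queues = {"a": deque([b"x" * 1500] * 3), "b": deque([b"y" * 500] * 9)}
deficits = {"a": 0, "b": 0}
drr_round(queues, deficits, send=lambda p: print("sent", len(p)))
```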
50
Big Picture
FQ does not eliminate congestion; it just manages the congestion
You need both end-host congestion control and router support for congestion control: end-host congestion control to adapt, router congestion control to protect/isolate
51
Explicit Rate Signaling (XCP)
Each packet carries: cwnd, RTT, and a feedback field
Routers indicate to flows whether to increase or decrease, giving explicit amounts for the increase/decrease; the feedback is carried back to the source in the ACK
Separation of concerns: aggregate load vs. allocation among flows
52
XCP (continued)
Aggregate: measures spare capacity and average queue size; computes the desired aggregate change D = a*R*S - b*Q (R: average RTT, S: spare capacity, Q: queue size)
Allocation: uses AIMD; positive feedback is the same for all flows, negative feedback is proportional to a flow's current rate; when D = 0, reshuffle bandwidth
All changes are normalized by RTT: we want equal rates, not equal windows
53
XCP (continued)
Challenge: how to give per-flow feedback without per-flow state? Do you keep track of which flows you've signaled and which you haven't?
Solution: figure out the desired change for a flow, divide it by the expected number of packets from that flow in the time interval, give each packet its share of the rate adjustment, and let the flow total up all of the per-packet adjustments
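A very rough sketch of that idea (all names are illustrative, and obtaining each flow's desired change from the aggregate D follows the allocation rules on the previous slide):

```python
from dataclasses import dataclass

@dataclass
class XcpHeader:
    cwnd: float        # sender's congestion window (packets), carried in the packet
    rtt: float         # sender's RTT estimate (seconds), carried in the packet
    feedback: float = 0.0

def expected_packets(hdr: XcpHeader, interval: float) -> float:
    """Packets the router expects to see from this flow during the control interval."""
    return max(1.0, hdr.cwnd * interval / hdr.rtt)

def stamp_feedback(hdr: XcpHeader, delta_flow: float, interval: float) -> None:
    """Give each packet its share; the source sums the shares echoed back in ACKs."""
    hdr.feedback += delta_flow / expected_packets(hdr, interval)

pkt = XcpHeader(cwnd=20, rtt=0.1)
stamp_feedback(pkt, delta_flow=4.0, interval=0.1)   # this flow should grow by 4 packets
print(pkt.feedback)   # 0.2 per packet; 20 packets in the interval sum to 4.0
```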