6.829 Computer Networks
Lecture 4: Router-Assisted Congestion Control
Mohammad Alizadeh, Fall 2016
Recap of last time: end-to-end congestion control
- Congestion signal: packet loss (TCP); sources dynamically set their window
- Goals: efficiency & "fairness"
- Additive increase, multiplicative decrease (AIMD): the window is halved on each loss
(Figure: the AIMD sawtooth; the window grows linearly over time and is halved at each loss.)
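The AIMD rule above can be sketched as a per-RTT window update (a minimal sketch; the `increase` and `decrease` constants are the standard TCP choices, but are parameters here for illustration):

```python
def aimd_update(cwnd, loss, increase=1.0, decrease=0.5):
    """One AIMD step: additive increase per RTT, multiplicative decrease on loss."""
    if loss:
        return max(1.0, cwnd * decrease)  # multiplicative decrease: halve the window
    return cwnd + increase                # additive increase: +1 segment per RTT
```

Repeated over many RTTs, this update produces exactly the sawtooth in the figure.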
Routers can do more than drop packets
- Congestion happens at the routers, which have a lot of visibility: the extent of congestion, queue sizes, which flows are misbehaving, etc.
- E2E congestion control cannot enforce isolation, where the actions of one flow do not affect others
- Protecting against uncooperative (e.g., malicious or buggy) sources needs router support
- Points 2 and 3 generally require scheduling mechanisms in routers (next lecture)
Outline: how can routers help with congestion control?
- Active queue management: Random Early Detection (RED), PI / PIE
- Explicit congestion control: ECN, XCP
How can routers help?
- Buffer management: which packets to drop? (today)
- Congestion signaling: when to signal congestion? Drop, mark, or send explicit messages (today)
- Scheduling: which flow's packet to send next? (Thursday)
Active Queue Management
Drop-tail queues
- Losses occur due to buffer overflow
- The de-facto mechanism today; very simple to implement
- Drawbacks: filled buffers (large delay) and synchronization of flows
(Figure: two senders sharing a drop-tail queue in front of one receiver.)
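The drop-tail behavior is simple enough to sketch directly: accept packets until the buffer is full, then drop every new arrival (a minimal sketch, not any particular router's implementation):

```python
from collections import deque

class DropTailQueue:
    """Minimal drop-tail queue: accept until full, then drop new arrivals."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.q = deque()

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:
            return False          # buffer full: tail drop
        self.q.append(pkt)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None
```

Loss is the only congestion signal here, and it appears only after the buffer has already filled, which is exactly the "large delay" drawback above.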
Queuing delay in the wild: "Bufferbloat"
Mark Allman, "Comments on Bufferbloat," SIGCOMM CCR, 43(1), Jan. 2013
RTT affects soft-real-time apps
R. Peon & W. Chan, "SPDY Essentials," Google Tech Talk, 12/8/11
Random Early Detection (RED)
- Proposed in 1993
- Proactively drops packets probabilistically to:
  - prevent the onset of congestion by reacting early
  - remove synchronization between flows
RED Operation
- Drop probability is based on the average queue length x(t) (an EWMA is used for averaging)
- Every interval T:  x(t) ← (1 − wq) · x(t − T) + wq · q(t)
  x(t) is a smoothed, time-averaged version of the instantaneous queue length q(t)
(Figure: the drop probability P(drop) is 0 below min_th, rises linearly to max_P at max_th, then jumps to 1.0, as a function of x(t).)
Based on slide by Vishal Misra (Columbia)
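The EWMA update and the piecewise-linear drop curve can be sketched as two small functions (the parameter values are illustrative defaults, not recommendations):

```python
def red_update_avg(avg, q, wq=0.002):
    """EWMA of the queue length: x(t) = (1 - wq) * x(t - T) + wq * q(t)."""
    return (1.0 - wq) * avg + wq * q

def red_drop_prob(avg, min_th=5.0, max_th=15.0, max_p=0.1):
    """Piecewise-linear RED drop probability as a function of the average queue."""
    if avg < min_th:
        return 0.0            # below min_th: never drop
    if avg >= max_th:
        return 1.0            # above max_th: drop everything
    return max_p * (avg - min_th) / (max_th - min_th)
```

Each arriving packet is then dropped with probability `red_drop_prob(avg)`, so drops are spread out probabilistically rather than occurring in bursts at overflow.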
RED Problems
- Many parameters: min_th, max_th, max_P, wq (EWMA averaging), ...
- Performance is very sensitive to these parameters; tuning is difficult (control theory provides a framework)
- A poorly tuned system can be worse than drop-tail
- Implemented in routers, but usually turned off
RED queue length depends on the number of flows, RTT, ...
- How does the average queue length change as the number of (long) TCP flows increases?
(Figure: the same P(drop) curve, with min_th, max_th, and max_P marked.)
Proportional Integral (PI)
Three ideas:
- Remove the EWMA: faster response than RED
- Integral control: decouples queue length from the number of flows
- Use the derivative of the queue: improves stability (less oscillation)
Proportional Integral (PI) Algorithm
- Goal: drive the error e(t) = q(t) − q_ref to zero
- Every interval T:  p(t) ← p(t − T) + α · (q(t) − q_ref)   ("integral control")
(Figure: the instantaneous queue q(t) relative to the target q_ref.)
Proportional Integral (PI) Algorithm
- What's different about two points with the same queue length? Whether the queue is growing or shrinking, i.e., the sign of the derivative
- Every interval T:  p(t) ← p(t − T) + α · (q(t) − q_ref) + β · (q(t) − q(t − T))
(Figure: q(t) crossing q_ref at two points, once while rising and once while falling.)
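The full update, with both the integral term and the derivative term, can be sketched as follows (the gain values α and β here are purely illustrative; in practice they must be tuned for the link speed and RTT):

```python
def pi_update(p, q, q_prev, q_ref, alpha=0.125e-3, beta=1.25e-3):
    """PI drop-probability update, run once per control interval T:
    p(t) = p(t - T) + alpha * (q(t) - q_ref) + beta * (q(t) - q(t - T))."""
    p = p + alpha * (q - q_ref) + beta * (q - q_prev)
    return min(1.0, max(0.0, p))  # clamp to a valid probability
```

The α term pushes the queue toward q_ref regardless of how many flows share the link; the β term reacts to the queue's trend, damping oscillations.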
PIE: PI "Enhanced"
- Controls delay instead of queue length
- Auto-tunes the parameters α and β based on the value of p
- Other heuristics (e.g., a burst allowance)
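The "control delay instead of queue length" idea can be sketched by estimating the queuing delay from the backlog via Little's law and running the PI update on that estimate. This is a simplified sketch: real PIE (RFC 8033) also auto-tunes α and β and includes the burst-allowance heuristics mentioned above, all omitted here.

```python
def pie_update(p, q_bytes, depart_rate, delay_old, target_delay=0.015,
               alpha=0.125, beta=1.25):
    """Simplified PIE step: estimate queuing delay from the backlog
    (Little's law: delay = backlog / departure rate), then apply the
    PI update to delay rather than to queue length."""
    delay = q_bytes / depart_rate                       # current queuing delay (s)
    p = p + alpha * (delay - target_delay) + beta * (delay - delay_old)
    return min(1.0, max(0.0, p)), delay
```

Controlling delay directly makes the target independent of link speed: a 15 ms target means the same user experience on a 10 Mbps link and a 10 Gbps link, even though the corresponding queue lengths in bytes differ by three orders of magnitude.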
Example: varying TCP traffic intensity on a 10 Mbps link
(Figure: the number of concurrent flows steps between 50, 100, and 150 over time.)
Credit: Rong Pan (Cisco)
PIE vs. RED Queuing Latency
Credit: Rong Pan (Cisco)
PIE Drop Probability
Credit: Rong Pan (Cisco)
Explicit Congestion Control
ECN: "mark" instead of drop
- ECN = Explicit Congestion Notification
- When congested, the router sets a 1-bit ECN mark in the packet header instead of dropping the packet; the receiver echoes the mark back to the sender, which reacts as it would to a loss
(Figure: two senders, a marking router, and one receiver.)
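The change relative to the drop-tail queue is small: above a marking threshold, set a bit instead of dropping. A hypothetical sketch (the `ce` field stands in for the Congestion Experienced codepoint in the IP header; a real implementation would also check that the packet is ECN-capable):

```python
# Hypothetical ECN-capable queue: instead of dropping when the queue exceeds
# a marking threshold, set the packet's CE ("Congestion Experienced") bit.
class EcnQueue:
    def __init__(self, capacity, mark_threshold):
        self.capacity = capacity
        self.mark_threshold = mark_threshold
        self.q = []

    def enqueue(self, pkt):
        """pkt is a dict; returns False only on a genuine buffer overflow."""
        if len(self.q) >= self.capacity:
            return False                  # still drop when the buffer is full
        if len(self.q) >= self.mark_threshold:
            pkt["ce"] = True              # mark instead of drop
        self.q.append(pkt)
        return True
```

The sender gets the same congestion signal as a loss would provide, but the packet is delivered, so no retransmission or timeout is needed.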
Beyond binary feedback
- Marks and drops are crude binary signals: they don't tell the source the extent of congestion
- As a result, TCP must increase its window slowly and back off aggressively upon "congestion"
- Especially problematic in high-BDP networks: it can take many RTTs to ramp up, and big buffers are needed to absorb TCP's oscillations
- How can routers provide more information?
XCP: an eXplicit Control Protocol
- The router logic is split into an efficiency controller and a fairness controller
Credit: Dina Katabi (MIT)
Motivation for decoupling
- Option 1: the router computes each flow's fair rate explicitly. To make a decision, it needs the state of all flows: unscalable.
- Option 2: put a flow's state in its packets [Stoica, CSFQ] and shuffle bandwidth in aggregate to converge to fair rates. To make a decision, the router needs only this flow's state: scalable.
- Nice consequence: the dynamics of the aggregate do not depend on the number of flows.
Credit: Dina Katabi (MIT)
How does XCP work?
- Each packet carries a congestion header with three fields: the sender's round-trip time, its congestion window, and a feedback field (in packets)
- The sender fills in its RTT and congestion window; routers along the path compute and update the feedback; the receiver returns the feedback to the sender
(Figures: the congestion header traveling from senders through routers to the receiver. Credit: Dina Katabi, MIT)
How does XCP work? (cont.)
- On receiving the feedback, the sender sets: Congestion Window ← Congestion Window + Feedback
- Routers compute the feedback without keeping any per-flow state
Credit: Dina Katabi (MIT)
How does an XCP router compute the feedback?

Efficiency Controller
- Goal: match input traffic to the link capacity and drain the queue
- Looks only at the aggregate traffic and the queue
- Algorithm: the desired change Δ in aggregate traffic is proportional to the spare bandwidth and to minus the queue size:
  Δ = α · d_avg · Spare − β · Queue
  (d_avg is the average RTT; α and β are constant gains)

Fairness Controller
- Goal: divide Δ between flows to converge to fairness
- Looks at each flow's state in the congestion header
- Algorithm (AIMD): if Δ > 0, divide it equally between flows; if Δ < 0, divide it between flows proportionally to their current rates
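The efficiency controller's aggregate computation is a one-liner; the defaults below are the stability gains chosen in the XCP paper:

```python
def xcp_aggregate_feedback(spare_bw, queue, d_avg, alpha=0.4, beta=0.226):
    """Aggregate feedback per control interval:
    Delta = alpha * d_avg * Spare - beta * Queue.
    spare_bw and queue are in bytes/s and bytes; d_avg is the average RTT (s)."""
    return alpha * d_avg * spare_bw - beta * queue
```

When there is spare bandwidth Δ is positive and the fairness controller hands out equal per-flow increases; when the queue builds up Δ goes negative and decreases are apportioned in proportion to current rates, which is what drives the system toward fair shares.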
Rate Control Protocol (RCP)
- The switch stamps each packet with a rate R(t); the receiver feeds R(t) back to the source; the source sends at rate R(t)
RCP Algorithm
- Every control interval T (set to the average RTT d), the router updates the single rate R(t) it offers to all flows:
  R(t) = R(t − T) + (T/d) · [ α · (C − y(t)) − β · q(t)/d ] / N̂(t)
  where C is the link capacity, y(t) the input traffic rate, q(t) the queue size, and N̂(t) = C / R(t − T) is an estimate of the number of flows
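A sketch of this update, assuming the form above (the gains `alpha` and `beta` are stability parameters; the values here are illustrative, not the ones any deployment uses):

```python
def rcp_update(R_prev, C, y, q, d, T, alpha=0.1, beta=1.0):
    """RCP rate update, run once per control interval T:
    R(t) = R(t-T) + (T/d) * (alpha*(C - y) - beta*q/d) / N_hat,
    with N_hat = C / R(t-T) estimating the number of active flows."""
    n_hat = C / R_prev                      # flows needed to fill C at rate R_prev
    R = R_prev + (T / d) * (alpha * (C - y) - beta * q / d) / n_hat
    return max(0.0, min(C, R))              # keep the rate within [0, C]
```

Note that the router keeps no per-flow state at all: dividing the aggregate adjustment by the estimate N̂(t) = C / R(t − T) is what turns an aggregate correction into a per-flow rate.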