1 Router-assisted congestion control Lecture 8 CS 653, Fall 2010

2 TCP congestion control performs poorly as bandwidth or delay increases
[Figure: average TCP utilization vs. bottleneck bandwidth (Mb/s) with 50 flows in both directions, buffer = BW x delay, RTT = 80 ms; and average TCP utilization vs. round-trip delay (sec) with 50 flows in both directions, buffer = BW x delay, BW = 155 Mb/s.]
Shown analytically in [Low01] and via simulations.
Because TCP lacks fast response:
- Spare bandwidth is available, yet TCP increases by only 1 pkt/RTT even if the spare bandwidth is huge.
- When a TCP flow starts, it increases exponentially, causing too many drops; flows then ramp up by 1 pkt/RTT, taking forever to grab the large bandwidth.

3 Proposed Solution: Decouple Congestion Control from Fairness
- Congestion control: high utilization, small queues, few drops.
- Fairness: bandwidth allocation policy.

4 Proposed Solution: Decouple Congestion Control from Fairness
Example: in TCP, Additive-Increase Multiplicative-Decrease (AIMD) controls both; they are coupled because a single mechanism controls both.
How does decoupling solve the problem?
1. To control congestion: use MIMD, which shows fast response.
2. To control fairness: use AIMD, which converges to fairness.
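To make the contrast concrete, here is a minimal sketch (constants are illustrative, not from the slides) of the two per-RTT window update rules: MIMD scales the window multiplicatively in both directions, so it grabs or releases large bandwidth quickly, while AIMD adds a constant and halves on congestion, which is what drives convergence to fairness.

```python
def mimd_update(cwnd: float, congested: bool, up: float = 1.5, down: float = 0.5) -> float:
    """MIMD: multiply the window in both directions (fast response)."""
    return cwnd * (down if congested else up)

def aimd_update(cwnd: float, congested: bool, add: float = 1.0, mult: float = 0.5) -> float:
    """AIMD: add a constant per RTT, halve on congestion (converges to fairness)."""
    return cwnd * mult if congested else cwnd + add

# Five uncongested RTTs starting from a 10-packet window.
w_mimd = w_aimd = 10.0
for _ in range(5):
    w_mimd = mimd_update(w_mimd, congested=False)
    w_aimd = aimd_update(w_aimd, congested=False)
print(w_mimd, w_aimd)   # MIMD ramps to ~76 packets while AIMD reaches only 15
```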

5 Characteristics of Solution
1. Improved congestion control (in high bandwidth-delay and conventional environments): small queues, almost no drops.
2. Improved fairness: flexible bandwidth allocation (min-max fairness, proportional fairness, differential bandwidth allocation, ...).
3. Scalable (no per-flow state).

6 XCP: An eXplicit Control Protocol 1. Congestion Controller 2. Fairness Controller

7 How does XCP Work?
Each packet carries a congestion header with three fields: Round Trip Time, Congestion Window, and Feedback. The sender fills in its current RTT and congestion window and initializes the feedback to the window change it wants, e.g. Feedback = +0.1 packet.

8 How does XCP Work?
Routers along the path update the Feedback field in the congestion header; a congested router can reduce it, e.g. Feedback = -0.3 packet.

9 How does XCP Work?
The sender applies the feedback it gets back: Congestion Window = Congestion Window + Feedback.
Routers compute the feedback without any per-flow state.
XCP extends ECN and CSFQ.
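As a rough sketch of this exchange (field and function names are illustrative, not XCP's wire format), each packet carries the sender's RTT and congestion window plus a feedback field that downstream routers may only reduce; on the returned feedback the sender adjusts its window.

```python
from dataclasses import dataclass

@dataclass
class CongestionHeader:
    rtt: float        # sender's current RTT estimate (seconds)
    cwnd: float       # sender's current congestion window (bytes)
    feedback: float   # requested window change (bytes); routers may only reduce it

def router_annotate(header: CongestionHeader, granted_feedback: float) -> CongestionHeader:
    """A router overwrites the feedback only if it grants less than routers upstream."""
    header.feedback = min(header.feedback, granted_feedback)
    return header

def sender_on_feedback(cwnd: float, feedback: float, mss: float = 1500.0) -> float:
    """Congestion Window = Congestion Window + Feedback (never below one packet)."""
    return max(mss, cwnd + feedback)

# Hypothetical round trip: the sender asks for +1500 B, a congested router cuts it to -450 B.
hdr = CongestionHeader(rtt=0.08, cwnd=64_000.0, feedback=1500.0)
hdr = router_annotate(hdr, granted_feedback=-450.0)
print(sender_on_feedback(64_000.0, hdr.feedback))   # 63550.0
```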

10 How Does an XCP Router Compute the Feedback?
Congestion Controller (MIMD):
- Goal: match input traffic to link capacity and drain the queue.
- Looks at aggregate traffic and the queue.
- Algorithm: aggregate traffic changes by Φ, where Φ ~ spare bandwidth and Φ ~ - queue size, so Φ = α · d_avg · Spare - β · Queue.
Fairness Controller (AIMD):
- Goal: divide Φ between flows to converge to fairness.
- Looks at a flow's state in the congestion header.
- Algorithm: if Φ > 0, divide Φ equally between flows; if Φ < 0, divide Φ between flows proportionally to their current rates.
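A minimal sketch of the congestion controller's per-control-interval computation, using the constants α = 0.4 and β = 0.226 from the XCP paper; variable names and the example numbers are illustrative.

```python
ALPHA, BETA = 0.4, 0.226    # gains from the XCP paper

def aggregate_feedback(capacity: float, input_rate: float, queue: float,
                       avg_rtt: float, alpha: float = ALPHA, beta: float = BETA) -> float:
    """Phi = alpha * d_avg * spare_bandwidth - beta * persistent_queue (in bytes).
    Positive Phi asks flows to speed up; negative Phi asks them to slow down."""
    spare = capacity - input_rate            # spare bandwidth, bytes/sec
    return alpha * avg_rtt * spare - beta * queue

# Example: a 155 Mb/s link running at 80% load with a 100 KB standing queue, avg RTT 80 ms.
link = 155e6 / 8                             # bytes/sec
phi = aggregate_feedback(capacity=link, input_rate=0.8 * link, queue=100_000, avg_rtt=0.08)
print(f"aggregate feedback this interval: {phi:.0f} bytes")   # ~101400
```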

11 Getting the devil out of the details ...
Congestion Controller: Φ = α · d_avg · Spare - β · Queue.
Theorem: the system converges to optimal utilization (i.e., is stable) for any link bandwidth, delay, and number of sources if 0 < α < π / (4√2) and β = α²√2. (Proof based on the Nyquist criterion.)
Fairness Controller: no per-flow state.
- Algorithm: if Φ > 0, divide Φ equally between flows; if Φ < 0, divide Φ between flows proportionally to their current rates.
- Need to estimate the number of flows N, using only the header fields and a counting interval: RTT_pkt (Round Trip Time in the header), Cwnd_pkt (Congestion Window in the header), and T (the counting interval).
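One way to see how N can be estimated from just RTT_pkt, Cwnd_pkt and T is the sketch below (my reconstruction, with cwnd measured in packets): a flow sends roughly T * Cwnd / RTT packets in an interval of length T, so weighting each packet by RTT_pkt / (T * Cwnd_pkt) makes every flow contribute about 1 to the sum, and the total approximates N without keeping per-flow state.

```python
def estimate_num_flows(packets, interval):
    """Sum RTT_pkt / (T * Cwnd_pkt) over all packets seen in the interval.
    Each flow's packets sum to ~1, so the total is ~N (cwnd here is in packets)."""
    return sum(p["rtt"] / (interval * p["cwnd"]) for p in packets)

# Hypothetical trace over T = 0.1 s: flow 1 (cwnd 10 pkts, RTT 50 ms) sends ~20 packets,
# flow 2 (cwnd 20 pkts, RTT 100 ms) also sends ~20 packets.
T = 0.1
trace = [{"rtt": 0.05, "cwnd": 10}] * 20 + [{"rtt": 0.10, "cwnd": 20}] * 20
print(estimate_num_flows(trace, T))   # ~2.0
```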

12 Implementation
- Practical: the implementation uses only a few multiplications and additions per packet.
- Gradual deployment: XCP can co-exist with TCP and can be deployed gradually.
- Liars? Use policing agents at the edges of the network or statistical monitoring; cheaters are easier to detect than in TCP.

13 Performance

14 Subset of Results
[Figure: dumbbell topology with senders S1, S2, ..., Sn sharing a bottleneck link to receivers R1, R2, ..., Rn.]
Similar behavior over:

15 XCP Remains Efficient as Bandwidth or Delay Increases
[Figure: average utilization as a function of bottleneck bandwidth (Mb/s), and average utilization as a function of round-trip delay (sec).]

16 XCP Remains Efficient as Bandwidth or Delay Increases
[Figure: average utilization as a function of bottleneck bandwidth (Mb/s), and average utilization as a function of round-trip delay (sec).]
XCP increases proportionally to spare bandwidth; α and β are chosen to make XCP robust to delay.

17 XCP Shows Faster Response than TCP
[Figure: behavior over time as 40 flows start and the 40 flows later stop.]
XCP shows fast response!

18 XCP Deals Well with Short Web-Like Flows
[Figure: average utilization, average queue, and drops as a function of the arrival rate of short flows (flows/sec).]

19 XCP is Fairer than TCP
[Figure: average throughput per flow ID, for flows with the same RTT and for flows with different RTTs (RTTs range from 40 ms to 330 ms).]

20 XCP Summary
XCP outperforms TCP:
- Efficient for any bandwidth
- Efficient for any delay
- Scalable
Benefits of decoupling:
- Use MIMD for congestion control, which can grab/release large bandwidth quickly
- Use AIMD for fairness, which converges to fair bandwidth allocation

21 XCP Pros and Cons
Long-lived flows: works well
- Convergence to fair share rates, high link utilization, small queues, low loss
Mix of flow lengths: deviates from processor sharing
- Non-trivial convergence time
- Flow durations are longer than under ideal processor sharing

30 ATM ABR congestion control
ABR (available bit rate): an "elastic service"
- If the sender's path is underloaded, the sender may use the available bandwidth.
- If the sender's path is congested, the sender is throttled to its minimum guaranteed rate.
RM (resource management) cells:
- Sent by the sender, interspersed with data cells.
- Bits in RM cells are set by switches ("network-assisted"): the NI bit means no increase in rate (mild congestion); the CI bit is a congestion indication.
- RM cells are returned to the sender by the receiver, with the bits intact.

31 ATM ABR congestion control
- Two-byte ER (explicit rate) field in the RM cell: a congested switch may lower the ER value in the cell, so the sender's send rate is at most the minimum supportable rate on the path.
- EFCI bit in data cells: set to 1 by a congested switch; if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell.
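A rough sketch of this RM-cell round trip, with illustrative field names and rate-adjustment factors (the slides do not specify how much the source changes its rate): each switch may only lower ER and may set the NI/CI bits, and the source clamps its rate between its guaranteed MCR and the returned ER.

```python
from dataclasses import dataclass

@dataclass
class RMCell:
    er: float          # explicit rate (e.g. cells/sec); switches may only lower it
    ni: bool = False   # no-increase bit (mild congestion)
    ci: bool = False   # congestion-indication bit

def switch_process(cell: RMCell, supportable_rate: float,
                   mildly_congested: bool, congested: bool) -> RMCell:
    """Each switch on the path lowers ER to what it can support and sets the bits."""
    cell.er = min(cell.er, supportable_rate)
    cell.ni |= mildly_congested
    cell.ci |= congested
    return cell

def source_adjust(rate: float, cell: RMCell, mcr: float) -> float:
    """On the returned RM cell: decrease on CI, hold on NI, otherwise increase,
    but never exceed ER and never drop below the guaranteed minimum cell rate."""
    if cell.ci:
        rate *= 0.9        # illustrative decrease factor
    elif not cell.ni:
        rate *= 1.1        # illustrative increase factor
    return max(mcr, min(rate, cell.er))

# Hypothetical path: source asks for 10,000; a mildly congested switch supports only 6,000.
cell = switch_process(RMCell(er=10_000.0), supportable_rate=6_000.0,
                      mildly_congested=True, congested=False)
print(source_adjust(rate=8_000.0, cell=cell, mcr=1_000.0))   # 6000.0
```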

32 ATM ERICA Switch Algorithm
ERICA (Explicit Rate Indication for Congestion Avoidance) goals:
- Utilization: allocate all available capacity to ABR flows.
- Queueing delay: keep the queue small.
- Fairness: max-min fairness, "sought only after utilization achieved" (decoupled from utilization?).
- Stability, i.e. the system reaches a steady state, and robustness, i.e. graceful degradation when there is no steady state.

33 ERICA: Setting explicit rate (ER)
Initialization:
- MaxAllocPrev = MaxAllocCur = FairShare
End of averaging interval:
- Total ABR Cap. = Link Cap. - VBR Cap.
- Target ABR Cap. = Fraction * Total ABR Cap.
- Z = ABR Input Rate / Target ABR Cap. (overload factor)
- FairShare = Target ABR Cap. / # Active VCs
- Go to Initialization
During congestion:
- VCShare = VCRate / Z
- If (Z > 1 + δ): ER = max(FairShare, VCShare)
- Else: ER = max(MaxAllocPrev, VCShare)
- MaxAllocCur = max(MaxAllocCur, ER)
- If (ER > FairShare and VCRate < FairShare): ER = FairShare
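A Python sketch of the computation above, assuming Z is the overload factor (ABR input rate divided by target ABR capacity, which is what makes the Z > 1 + δ test meaningful); the function names, the fraction, and δ are illustrative.

```python
def end_of_interval(link_cap: float, vbr_cap: float, abr_input_rate: float,
                    n_active_vcs: int, fraction: float = 0.95):
    """End of the averaging interval: recompute the overload factor Z and FairShare."""
    target_abr_cap = fraction * (link_cap - vbr_cap)   # Target ABR Cap. = Fraction * (Link - VBR)
    z = abr_input_rate / target_abr_cap                # overload factor
    fair_share = target_abr_cap / max(1, n_active_vcs)
    return z, fair_share

def er_for_vc(vc_rate: float, z: float, fair_share: float,
              max_alloc_prev: float, delta: float = 0.1) -> float:
    """ER stamped into a VC's backward RM cell (MaxAllocCur bookkeeping omitted)."""
    vc_share = vc_rate / z
    if z > 1 + delta:                                  # link overloaded
        er = max(fair_share, vc_share)
    else:                                              # link underloaded
        er = max(max_alloc_prev, vc_share)
    if er > fair_share and vc_rate < fair_share:       # don't overshoot a slow VC
        er = fair_share
    return er

# Example: 155 Mb/s link, 10 Mb/s of VBR traffic, 140 Mb/s of ABR input, 10 active VCs.
z, fs = end_of_interval(155e6, 10e6, 140e6, 10)
print(er_for_vc(vc_rate=20e6, z=z, fair_share=fs, max_alloc_prev=fs))
```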

34 ABR vs. XCP or RCP?
- Similarities?
- Differences?

