Slide 1: Re-engineering TCP Vegas
Neal Cardwell, Boris Bak
6/15/99

Slide 2: Building a Better TCP
- In deciding when to send, TCP errs on the side of simplicity
  - Doesn't find the right rate
  - Sends in bursts
- Some more aggressive approaches:
  - TCP Vegas: finding the right rate
  - Pacing: smoothing the bursts
- Old ideas, but not deployed
  - Their benefits and costs are not fully understood

Slide 3: Overview
- Congestion control review
- The current approach and its problems
- An alternative: TCP Vegas
- Implementing Vegas in the real world:
  - The devil is in the details
  - Problems and patches

Slide 4: Congestion Control: The Problem
- How fast should TCP send?
- Assumptions:
  - IP's best-effort service model
  - Primary feedback:
    - ACKs from the receiver
    - Packet loss
  - No information from routers
  - Perhaps information from past or current flows along the same path

Slide 5: The Sub-Problems
- Finding the right rate at startup
- Staying at the right rate
- Reacting to changes:
  - Path or traffic changes
  - More or less bandwidth available

Slide 6: TCP Reno
- cwnd: bounds the number of un-ACKed packets
- ssthresh: a guess at a safe cwnd
- By default, probe for more bandwidth:
  - Slow start: cwnd *= 1.5 each RTT
  - Congestion avoidance: cwnd += 0.5 each RTT
- Packet loss signals congestion (see the sketch below):
  - Fast retransmit, fast recovery: cwnd /= 2
  - Retransmission timeout: cwnd = 1
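A toy per-ACK version of these rules, in C. The struct and function names are ours, not kernel code, and a real stack tracks the window in bytes with many more cases; since roughly cwnd ACKs arrive per RTT, the per-ACK increments below reproduce the per-RTT rates on the slide.

    #include <stdio.h>

    typedef struct {
        double cwnd;      /* congestion window, in packets */
        double ssthresh;  /* slow-start threshold */
    } reno_t;

    /* Called once per ACK; ~cwnd ACKs arrive per RTT. */
    void reno_on_ack(reno_t *r) {
        if (r->cwnd < r->ssthresh)
            r->cwnd += 0.5;              /* slow start: ~1.5x per RTT */
        else
            r->cwnd += 0.5 / r->cwnd;    /* cong. avoidance: ~+0.5 pkt per RTT */
    }

    void reno_on_fast_retransmit(reno_t *r) {
        r->cwnd /= 2;                    /* fast recovery halves the window */
        r->ssthresh = r->cwnd;
    }

    void reno_on_timeout(reno_t *r) {
        r->ssthresh = r->cwnd / 2;
        r->cwnd = 1;                     /* RTO: back to one packet */
    }

    int main(void) {
        reno_t r = { 1, 32 };
        for (int ack = 0; ack < 500; ack++)
            reno_on_ack(&r);
        printf("cwnd after 500 ACKs: %.1f\n", r.cwnd);
        reno_on_fast_retransmit(&r);
        printf("after fast retransmit: %.1f\n", r.cwnd);
        return 0;
    }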

Slide 7: Problems with TCP Reno
- During slow start:
  - Underutilizes and then swamps the path
- No "right rate": cwnd traces a sawtooth
  - Underutilizes the path
  - Increases queuing delay
  - Causes loss, reducing throughput
  - Inherently biased against long RTTs

Slide 8: Alternatives: Startup
- Original [Jacobson 88]: always cwnd *= 1.5
- Several studied in [Allman, Paxson 99]:
  - Tracking slow-start flights [BP95]
  - Closely-spaced ACKs [Hoe 96]
  - Tracking closely-spaced ACKs [AD98]
  - Receiver-side estimation [AP99]
- But no studies with typical paths

Slide 9: Alternatives: Steady State (congestion signal in parentheses)
- Original [Jacobson 88]: always cwnd++
- CARD [Jain 89] (RTT)
- Tri-S [Wang, Crowcroft 91] (rate)
- DUAL [Wang, Crowcroft 92] (RTT)
- TCP Vegas [Brakmo, Peterson 94] (rate)
- Buffer-Fill Avoidance [Awadallah, Rai 98] (RTT)
- New, from UCB [Mo, Walrand 99] (RTT, rate)

Slide 10: TCP Vegas
- In congestion avoidance (sketched below):
  - cwnd = (actual rate) x (baseRTT) + 2 pkts
  - Each RTT, tweak cwnd by 1 pkt if needed
- During slow start:
  - To reduce overshoot, increase cwnd only every other RTT
  - Exit slow start when:
    - cwnd > (actual rate) x (baseRTT) + 1 pkt
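A minimal once-per-RTT sketch of the congestion-avoidance rule as stated on this slide (the Arizona code phrases the same idea as a difference between expected and actual rates). Names and units are ours; the target keeps roughly 2 packets queued at the bottleneck.

    #include <stdio.h>

    typedef struct {
        double cwnd;      /* congestion window, in packets */
        double base_rtt;  /* min RTT seen, ~propagation delay, in seconds */
    } vegas_t;

    /* actual_rate: measured delivery rate this RTT, in packets/second. */
    void vegas_once_per_rtt(vegas_t *v, double actual_rate) {
        double target = actual_rate * v->base_rtt + 2.0;  /* ~2 pkts queued */
        if (v->cwnd + 1.0 <= target)
            v->cwnd += 1.0;       /* tweak by at most 1 pkt per RTT */
        else if (v->cwnd - 1.0 >= target)
            v->cwnd -= 1.0;
    }

    int main(void) {
        vegas_t v = { 10, 0.100 };   /* 10 pkts, 100 ms baseRTT */
        /* Path delivers 120 pkt/s: target = 120 * 0.1 + 2 = 14 pkts. */
        for (int rtt = 0; rtt < 8; rtt++) {
            vegas_once_per_rtt(&v, 120.0);
            printf("RTT %d: cwnd = %.0f\n", rtt, v.cwnd);
        }
        return 0;
    }

Note how cwnd converges to the target and then stays put, instead of tracing Reno's sawtooth.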

Slide 11: Vegas in Theory and Practice
- In isolation, 3-70% higher bandwidth than Reno
- Loses to Reno because it queues few packets
- In steady state, 2 pkts queued per flow
  - No loss
- Stable
- Basically fair w.r.t. other Vegas flows
  - No RTT bias, unlike Reno
  - Slight bias toward newer connections

Slide 12: Our Experience
- Plan: implement Vegas for Linux
- Spent 2 months fixing Linux TCP bugs
- Found problems with Vegas
  - Some fixed, some are works in progress...
- This congestion control business is tricky stuff...

Slide 13: Timestamp Granularity
- Arizona Vegas used 1 ms timestamps
- Linux keeps kernel time at 10 ms granularity
- Nice for smoothing out noise? Nope!
  - The RTT to UCB jumps from 2 ticks to 3 - whoa!
- Need resolution much smaller than typical RTTs to avoid granularity artifacts (see the toy below)
- We now use microsecond resolution
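A toy illustration of why 10 ms ticks are too coarse: a fixed 25 ms RTT measures as either 2 or 3 ticks depending on where the send falls within a tick, a 50% swing in the estimate. The specific numbers are hypothetical.

    #include <stdio.h>

    int main(void) {
        const double tick_ms = 10.0, rtt_ms = 25.0;
        for (double send = 0.0; send < tick_ms; send += 2.5) {
            long start = (long)(send / tick_ms);           /* tick at send */
            long end = (long)((send + rtt_ms) / tick_ms);  /* tick at ACK */
            printf("send at %4.1f ms -> measured RTT = %ld ticks (%.0f ms)\n",
                   send, end - start, (end - start) * tick_ms);
        }
        return 0;
    }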

Slide 14: Slow Start Problems
- Vegas slow start is too slow...
  - Increases by 1.5x only every other RTT
  - Most flows are short, so... ouch!
- But it still eventually overshoots...
  - Suppose a cwnd of W packets exactly fills the bottleneck
  - Vegas will send 1.5W for two RTTs before slowing down
- Our implementation increases every RTT
  - Need a better heuristic for exiting slow start (see the sketch after this list)
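A sketch of our variant under stated assumptions: grow 1.5x every RTT and exit using the test from slide 10, against a hypothetical bottleneck of 100 pkt/s with a 100 ms baseRTT (so the "right" window is 10 packets). The exit test only fires after cwnd has already overshot, which is exactly the problem.

    #include <stdio.h>

    typedef struct { double cwnd, base_rtt; } ss_t;

    /* One RTT of slow start; returns 1 once slow start should end. */
    int slow_start_rtt(ss_t *s, double actual_rate) {
        if (s->cwnd > actual_rate * s->base_rtt + 1.0)
            return 1;             /* queuing more than 1 pkt: stop growing */
        s->cwnd *= 1.5;           /* grow every RTT, not every other RTT */
        return 0;
    }

    int main(void) {
        ss_t s = { 2, 0.100 };
        for (int rtt = 0; rtt < 20; rtt++) {
            double rate = s.cwnd / s.base_rtt;   /* rate if nothing queued */
            if (rate > 100.0) rate = 100.0;      /* capped at the bottleneck */
            if (slow_start_rtt(&s, rate)) {
                printf("exit slow start at RTT %d, cwnd = %.1f\n", rtt, s.cwnd);
                break;
            }
        }
        return 0;   /* exits with cwnd ~15.2 pkts, vs. an ideal 10 */
    }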

Slide 15: Delayed ACKs
- When the receiver's delayed ACK timer fires:
  - Causes a ~100 ms delay that looks just like queuing
  - Makes the actual rate look artificially small
  - Can compensate by min filtering (sketched below)
- When the receiver ACKs every other packet:
  - The sender sends a burst of two packets
  - We only get an RTT sample for the 2nd packet of the burst
  - But this packet was queued behind the 1st...
  - Can compensate by allowing more than 1 packet queued
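One way to realize the min-filtering compensation: keep a short window of RTT samples and use the minimum, so samples inflated by the ~100 ms delayed-ACK timer are discarded. The window size of 8 is an arbitrary choice for illustration, not a value from the talk.

    #include <stdio.h>

    #define WIN 8

    static double win[WIN];
    static int idx = 0, count = 0;

    /* Feed in one RTT sample (seconds); get back the windowed minimum. */
    double min_filtered_rtt(double sample) {
        win[idx] = sample;
        idx = (idx + 1) % WIN;
        if (count < WIN) count++;
        double m = win[0];
        for (int i = 1; i < count; i++)
            if (win[i] < m) m = win[i];
        return m;
    }

    int main(void) {
        /* True RTT 50 ms; every 4th sample hits the delayed-ACK timer. */
        for (int i = 0; i < 16; i++) {
            double s = (i % 4 == 3) ? 0.150 : 0.050;
            printf("sample %3.0f ms -> filtered %3.0f ms\n",
                   s * 1000, min_filtered_rtt(s) * 1000);
        }
        return 0;
    }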

Slide 16: Idleness Bug
- Arizona Vegas assumes the flow is never idle...
  - ...and thus always has recent actual-rate info
- On restart from idle, we must wait for fresh info before making a decision
- Our implementation incorporates this fix (sketched below)
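A sketch of the fix, with field names of our own: mark the rate estimate stale on restart from idle, and hold off Vegas cwnd decisions until a fresh RTT of data has been ACKed.

    #include <stdio.h>

    typedef struct {
        int rate_valid;      /* do we have a recent actual-rate sample? */
        double actual_rate;  /* packets/second */
    } vflow_t;

    void on_idle_restart(vflow_t *f) {
        f->rate_valid = 0;   /* old rate info is stale; don't trust it */
    }

    void on_rtt_complete(vflow_t *f, double measured_rate) {
        f->actual_rate = measured_rate;
        f->rate_valid = 1;   /* fresh info: Vegas may adjust cwnd again */
    }

    int vegas_may_adjust(const vflow_t *f) { return f->rate_valid; }

    int main(void) {
        vflow_t f = { 1, 100.0 };
        on_idle_restart(&f);
        printf("may adjust right after idle? %d\n", vegas_may_adjust(&f));
        on_rtt_complete(&f, 80.0);
        printf("after one fresh RTT? %d\n", vegas_may_adjust(&f));
        return 0;
    }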

Slide 17: Loss Hurts Vegas More
- TCP uses duplicate ACKs to detect losses
- So big windows help loss recovery
- But Vegas tries to keep a small window
  - For my 512 Kbps line, Vegas keeps cwnd = 4
  - If Vegas loses 2 packets to my house: RTO
- Could compensate with NetReno [Lin, Kung 98] (sketched below):
  - Send a new packet out for every duplicate ACK
  - No RTO unless all packets in flight are lost
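A sketch of the NetReno-style rule as summarized here: each duplicate ACK means one packet left the network, so transmit a new packet to keep the ACK clock alive even when cwnd is tiny. send_new_packet() is a hypothetical stub; real recovery code must also respect the receiver window and retransmit the lost segments.

    #include <stdio.h>

    static int in_flight = 4;   /* a Vegas-sized window on a 512 Kbps line */

    /* Hypothetical stub; a real stack would build and transmit a segment. */
    static void send_new_packet(void) { in_flight++; puts("sent a new packet"); }

    void on_duplicate_ack(void) {
        in_flight--;            /* the dup ACK shows one packet was delivered */
        send_new_packet();      /* keep ACKs flowing so recovery can proceed */
    }

    int main(void) {
        /* 2 of 4 packets lost: the 2 delivered packets still generate dup
         * ACKs, which trigger new data instead of stalling into an RTO. */
        on_duplicate_ack();
        on_duplicate_ack();
        printf("packets in flight: %d\n", in_flight);
        return 0;
    }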

Slide 18: Route Changes
- Long noted: if the new route is longer, the added delay looks like queuing delay
- Vegas slows down when it should speed up!
- Simulated proposals: choose baseRTT as the min of RTTs over the last N ACKs or T seconds (sketched below)
- Our experience: tricky; depends on bandwidth and delay
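One concrete reading of the "min over the last T seconds" proposal, with T = 10 s as an arbitrary illustration; the proposals leave N and T open. After a route change to a longer path, the estimate rises to the new propagation delay once the old samples age out of the window.

    #include <stdio.h>

    #define CAP 256

    typedef struct { double t[CAP], rtt[CAP]; int head, tail; } basertt_t;

    /* Record (now, sample), evict samples older than T, return the min. */
    double basertt_update(basertt_t *b, double now, double sample, double T) {
        b->t[b->tail % CAP] = now;
        b->rtt[b->tail % CAP] = sample;
        b->tail++;
        while (b->t[b->head % CAP] < now - T)
            b->head++;                       /* drop samples older than T */
        double m = b->rtt[b->head % CAP];
        for (int i = b->head; i < b->tail; i++)
            if (b->rtt[i % CAP] < m) m = b->rtt[i % CAP];
        return m;
    }

    int main(void) {
        basertt_t b = { {0}, {0}, 0, 0 };
        for (int s = 0; s < 25; s++) {
            double rtt = (s < 12) ? 0.040 : 0.080;  /* route change at t=12 s */
            printf("t=%2d s  baseRTT = %.0f ms\n",
                   s, basertt_update(&b, s, rtt, 10.0) * 1000);
        }
        return 0;   /* baseRTT rises from 40 ms to 80 ms at t=22 s */
    }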

Slide 19: Persistent Queuing
- Long noted: Vegas can't distinguish propagation delay from persistent queuing
- New flows measure a bigger baseRTT and so use a bigger cwnd
- Two ways to view this:
  - OK: it prioritizes short flows
  - Not OK: it causes unacceptable queuing delay
- Proposal: treat it as a route change
- We have yet to try this

Slide 20: Conclusions
- Hard to fine-tune and stay robust to:
  - Random queuing delay
  - Delayed ACKs
  - Route changes
  - Link-layer loss, retransmission, and compression
- Reno may be dumb, but it's robust
- I still have high hopes for Vegas...

Slide 21: Future Directions
- Lots of experiments and tweaking:
  - Different approaches to slow start
  - Different heuristics for detecting route changes
  - Low bandwidth, and high bandwidth-delay paths
- Integrate Vegas with FACK
- Competing with Reno: should Vegas be more aggressive?
- What if we could get help from routers?...

Slide 22: Bibliography: Experience
- L. Brakmo, S. O'Malley, and L. Peterson. "TCP Vegas: New Techniques for Congestion Detection and Avoidance." SIGCOMM 1994. ftp://ftp.cs.arizona.edu/xkernel/Papers/vegas.ps
- L. S. Brakmo and L. L. Peterson. "TCP Vegas: End to End Congestion Avoidance on a Global Internet." IEEE JSAC, Vol. 13, No. 8, October 1995. ftp://ftp.cs.arizona.edu/xkernel/Papers/jsac.ps.Z
- J. S. Ahn, P. B. Danzig, Z. Liu, and L. Yan. "Evaluation of TCP Vegas: Emulation and Experiment." SIGCOMM 1995. http://catarina.usc.edu/yaxu/Vegas/Reference/vegas95.ps

Slide 23: Bibliography: Modeling
- T. Bonald. "Comparison of TCP Reno and TCP Vegas via Fluid Approximation." http://www.inria.fr/mistral/personnel/Thomas.Bonald/postscript/vegas.ps.gz
- J. Mo, R. La, V. Anantharam, and J. Walrand. "Analysis and Comparison of TCP Reno and Vegas." INFOCOM 1999. http://walrandpc.eecs.berkeley.edu/Papers/vegas.pdf
- G. Hasegawa, M. Murata, and H. Miyahara. "Fairness and Stability of Congestion Control Mechanisms of TCP." INFOCOM 1999.

