TCP Westwood (with Faster Recovery) Claudio Casetti Mario Gerla Scott Seongwook Lee Saverio Mascolo Medy Sanadidi Computer Science Department University of California, Los Angeles, USA
TCP Congestion Control Based on a sliding-window algorithm Two stages: –Slow Start: initial probing for available bandwidth (“exponential” window increase until a threshold is reached) –Congestion Avoidance: “linear” window increase of one segment per RTT Upon loss detection (coarse timeout expiration or duplicate ACKs), the window is reduced to 1 segment (TCP Tahoe)
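The two stages and the Tahoe loss reaction above can be sketched as follows; this is an assumed simplification for illustration (window in segments, one update per ACK), not the kernel implementation.

```python
def on_ack(cwnd, ssthresh):
    """Window growth per ACK, in segments (illustrative sketch)."""
    if cwnd < ssthresh:
        return cwnd + 1          # Slow Start: +1 per ACK, i.e. ~doubling per RTT
    return cwnd + 1.0 / cwnd     # Congestion Avoidance: ~+1 segment per RTT

def on_loss(cwnd, ssthresh):
    """Tahoe reaction to any loss (timeout or duplicate ACKs)."""
    return 1, max(cwnd // 2, 2)  # window back to 1 segment; threshold halved
```

Slow Start adds one segment per ACK, which compounds to roughly a doubling of the window every RTT; above `ssthresh` the per-ACK increment `1/cwnd` sums to about one segment per RTT.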
Congestion Window of a TCP Connection Over Time
Shortcomings of Current TCP Congestion Control After a sporadic loss, the connection needs several RTTs to be restored to full capacity It is not possible to distinguish between a packet loss caused by congestion (for which a window reduction is in order) and a packet loss caused by wireless interference The window size selected after a loss may NOT reflect the actual bandwidth available to the connection at the bottleneck
New Proposal: TCP with “Faster Recovery” Estimation of the available bandwidth (BWE): –performed by the source –computed from the arrival rate of ACKs, smoothed through exponential averaging Use BWE to set the congestion window and the Slow Start threshold
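A minimal sketch of the BWE idea: each ACK yields an instantaneous delivery-rate sample, which is smoothed with an exponential average. The smoothing constant `ALPHA` and the helper names are assumptions for illustration, not the filter coefficients from the paper.

```python
ALPHA = 0.9  # assumed smoothing factor, not the paper's value

def bwe_sample(acked_bytes, interval_s):
    """Instantaneous rate implied by one ACK:
    bytes acknowledged / time since the previous ACK."""
    return acked_bytes / interval_s

def bwe_update(bwe_prev, sample):
    """Exponential averaging of successive rate samples."""
    return ALPHA * bwe_prev + (1 - ALPHA) * sample
```

Because the estimate is driven by the ACK stream actually returning from the network, it tracks the rate the connection is achieving through the bottleneck rather than the rate the sender is attempting.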
TCP FR: Algorithm Outline When three duplicate ACKs are detected: –set ssthresh=BWE*RTT (instead of ssthresh=cwin/2 as in Reno) –if (cwin > ssthresh) set cwin=ssthresh When a TIMEOUT expires: –set ssthresh=BWE*RTT (instead of ssthresh=cwin/2 as in Reno) and cwin=1
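The outline above can be written directly as code; here BWE is taken in segments per second and the RTT in seconds (an assumption about units), so BWE*RTT is a window in segments.

```python
def on_three_dup_acks(cwin, bwe, rtt):
    """Faster Recovery on three duplicate ACKs."""
    ssthresh = bwe * rtt        # pipe size from the bandwidth estimate,
                                # not cwin/2 as in Reno
    if cwin > ssthresh:
        cwin = ssthresh         # shrink only down to the estimated pipe size
    return cwin, ssthresh

def on_timeout(cwin, bwe, rtt):
    """Faster Recovery on a coarse timeout."""
    ssthresh = bwe * rtt        # same threshold rule as for duplicate ACKs
    return 1, ssthresh          # but restart from a one-segment window
```

The key difference from Reno: after a loss the window collapses only to the BWE-based pipe size, so a loss that is not due to congestion (e.g. wireless interference) does not force a blind halving.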
Experimental Results Compare the behavior of TCP Faster Recovery with Reno and Sack Compare the goodputs of TCP with Faster Recovery, TCP Reno and TCP Sack –with bursty traffic (e.g., UDP traffic) –over lossy links
FR/Reno Comparison [Plot: normalized throughput vs. time (sec); curves: link capacity (Mb/s), FR, Reno] Scenario: 1 TCP + 1 On/Off UDP (ON=OFF=100 s), 5 MB buffer, 1.2 s RTT
Goodput in Presence of UDP, Different Bottleneck Sizes [Plot: goodput (Mb/s) vs. bottleneck bandwidth (Mb/s); curves: FR, Reno, Sack]
Wireless and Satellite Networks [Plot: goodput (bits/s) vs. bit error rate (log scale); curves: Tahoe, Reno, FR] Link capacity = 1.5 Mb/s; single “one-hop” connection
Experiment Environment A new version of TCP FR, called “TCP Westwood” TCP Westwood is implemented in the Linux kernel The link emulator can emulate: –link delay –loss events Sources share the bottleneck through a router to the destination
Goodput Comparison with Reno (Sack) –Bottleneck capacity 5 Mb/s –Packet loss rate 0.01 –Link delay 300 ms –Concurrent on-off UDP traffic A larger pipe size corresponds to a longer delay
Friendliness with Reno Goodput comparison when TCP-W and Reno share the same bottleneck: –over a perfect link –5 Reno connections start first –5 Westwood connections start 5 seconds later –100 ms link delay Goodput comparison when TCP-W and Reno share the same bottleneck: –over a lossy link (1% loss) –3 Reno connections start first, then 2 Westwood –100 ms link delay TCP-W improves performance over the lossy link but does not fully utilize the link capacity
Current Status & Open Issues Extended testing of TCP Westwood Friendliness/greediness towards other TCP schemes Refinements of the bandwidth estimation process Behavior with short-lived flows, and with a large number of flows
Extra slides follow
Losses Caused by UDP, Different RTTs [Plot: goodput (Mb/s) vs. one-way RTT (s); curves: FR, Reno, Sack]
Losses Caused by UDP, Different Number of Connections [Plot: goodput (Mb/s) vs. no. of connections; curves: FR, Reno, Sack]
TCP over Lossy Links, Different Bottleneck Sizes [Plot: goodput (Mb/s) vs. bottleneck bandwidth (Mb/s); curves: FR, Reno, Sack]
Bursty Traffic, Different Number of Connections [Plot: goodput (Mb/s) vs. no. of connections; curves: FR, Reno, Sack]
Fairness of TCP Westwood Cwnds of two TCP Westwood connections: –over a lossy link –with concurrent UDP traffic –time-shifted starts –link delay 100 ms Goodput of concurrent TCP-W connections: –5 connections (the other 2 are similar) –link delay 100 ms