TCP Westwood: Efficient Transport for High-Speed Wired/Wireless Networks (2008)
TCP Westwood (Mobicom 2001)
Key idea: enhance congestion control via a Rate Estimate (RE)
- The estimate is computed at the sender by sampling and exponential filtering
- Samples are derived from ACK inter-arrival times and the information in ACKs about the amount of bytes delivered
- The sender uses RE to properly set cwnd and ssthresh after a packet loss (indicated by 3 DUPACKs or a timeout)
Rate Estimation (BE -> RE)
Ideally, we would like to determine the connection's fair share of the bottleneck bandwidth.
Since the fair share is difficult to define or determine, we instead estimate the achieved rate: the Rate Estimate (RE).
[Diagram: the sender measures the rate at which its packets cross the Internet bottleneck to the receiver by observing the returning ACKs.]
"Original" rate estimation (BE -> RE)
The first TCPW version used a "bandwidth-like" estimator (BE):
- sample: b_k = d_k / (t_k - t_k-1), where d_k is the amount of data acknowledged by the ACK arriving at time t_k
- exponential filter: BE_k = a * BE_k-1 + (1 - a) * b_k, where a is the filter gain
BE/RE estimation is similar to Keshav's Packet Pair estimation.
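The BE update above can be sketched in a few lines of Python (a minimal illustration, not the original implementation; the class and variable names and the gain value 0.9 are assumptions):

```python
class BandwidthEstimator:
    """BE_k = a * BE_{k-1} + (1 - a) * b_k, with b_k = d_k / (t_k - t_{k-1})."""

    def __init__(self, gain=0.9):
        self.gain = gain           # exponential filter gain 'a' (assumed value)
        self.be = 0.0              # filtered bandwidth estimate (bytes/sec)
        self.last_ack_time = None  # t_{k-1}

    def on_ack(self, now, acked_bytes):
        # Each ACK yields one sample: bytes acked over the ACK inter-arrival time.
        if self.last_ack_time is not None and now > self.last_ack_time:
            sample = acked_bytes / (now - self.last_ack_time)  # b_k
            self.be = self.gain * self.be + (1 - self.gain) * sample
        self.last_ack_time = now
        return self.be
```

With ACKs of 1500 bytes arriving every 10 ms, the estimate converges toward 150,000 bytes/sec, i.e., the achieved rate.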
TCP Westwood: the control algorithm
When three duplicate ACKs are detected:
- set ssthresh = BE * RTTmin (instead of ssthresh = cwin/2 as in Reno)
- if (cwin > ssthresh), set cwin = ssthresh
When a timeout expires:
- set ssthresh = BE * RTTmin (instead of ssthresh = cwin/2 as in Reno) and cwin = 1
Note: RTTmin is the minimum round-trip delay experienced by the connection.
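The loss-handling rules above can be sketched as follows (an illustrative fragment, not kernel code; the function names and segment-based units are assumptions, and the floor of 2 segments is a safeguard added here, not part of the slide):

```python
def on_three_dupacks(cwin, be, rtt_min):
    """Three DUPACKs: ssthresh tracks the estimated pipe size, not cwin/2."""
    ssthresh = max(2, int(be * rtt_min))  # Reno would use cwin // 2
    if cwin > ssthresh:
        cwin = ssthresh
    return cwin, ssthresh

def on_timeout(be, rtt_min):
    """Timeout: same pipe-size estimate for ssthresh, but restart from cwin = 1."""
    ssthresh = max(2, int(be * rtt_min))
    cwin = 1
    return cwin, ssthresh
```

For example, with BE = 400 segments/sec and RTTmin = 100 ms, a loss sets ssthresh to 40 segments regardless of how inflated cwin was, so a single random loss does not halve the sending rate.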
At equilibrium, RE -> fair RE
Initially, two connections have different windows W_i and rates R_i; in the increase phase, the windows grow at the same rate.
Just before overflow: W_i = R_i * RTT = R_i * (Buf/Cap + RTTmin), for i = 1, 2
At overflow, the RE estimate reduces the windows back to the "zero backlog" line, i.e.:
W_i' = RE_i * RTTmin = R_i * RTTmin = W_i * (RTTmin / RTT)
Note: RTT = RTTmin + T is the same for both connections.
[Diagram: Connection 1 window vs. Connection 2 window, showing the zero-backlog line, the bottleneck overflow line, and the fair-rate-share point.]
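A quick numeric check of the reduction step (all parameter values below are made-up examples): with RTT = Buf/Cap + RTTmin just before overflow, scaling each window by RTTmin/RTT lands it exactly on the zero-backlog line W_i' = R_i * RTTmin:

```python
rtt_min = 0.05            # propagation round-trip delay (s), example value
buf, cap = 50, 1000       # bottleneck buffer (pkts) and capacity (pkts/s), example values
rtt = buf / cap + rtt_min # RTT just before overflow, same for both connections

rates = {"conn1": 600.0, "conn2": 400.0}  # achieved rates R_i (pkts/s)
for name, r in rates.items():
    w = r * rtt                      # window just before overflow: W_i = R_i * RTT
    w_new = w * (rtt_min / rtt)      # reduction at overflow, since RE_i = R_i
    assert abs(w_new - r * rtt_min) < 1e-9  # lands on the zero-backlog line
```

Here both windows shrink by the same factor (0.05/0.1 = 0.5), so their ratio, and hence the rate split, is preserved while the queue backlog is drained.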
Related TCP + bandwidth-estimation work
Bandwidth estimates are used by TCP Vegas and by Keshav's Packet Pair (PP):
- TCP Vegas monitors bandwidth and RTT to infer the bottleneck queue; from the queue it derives feedback for the congestion window. Target bottleneck-queue thresholds TH1 and TH2 are used.
- Keshav's Packet Pair scheme also monitors bandwidth to estimate the bottleneck queue, compares it to a common queue target, and adjusts the source rate.
Main difference: TCP Westwood does not need a common queue target; it enforces the fair share instead.
TCP Westwood benefits
What do we gain by using RE "feedback" in addition to packet loss?
(a) better performance under random loss (i.e., loss caused by random errors as opposed to overflow)
(b) the ability to distinguish random loss from buffer loss
(c) the use of RE to estimate the bottleneck bandwidth during slow start
TCPW and random loss
Reno overreacts to random loss (cwin is cut in half). TCPW is less sensitive to random loss:
(1) a small fraction of "randomly" lost packets has minimal impact on the rate estimate RE
(2) thus cwin = RE * RTT remains essentially unchanged
As a result, TCPW throughput is higher than that of Reno and SACK.
TCPW in a wireless lossy environment
- Efficiency: the improvement is significant on high (bandwidth x length) paths
- Fairness: better fairness than Reno under varying RTTs
- Friendliness: TCPW is friendly to TCP Reno
NASA Workshop Demo (from Steve Schultz, NASA)
[Figure: Internet throughput measurement.]
TCPW in the Presence of Random Loss: Analysis and Simulation
TCPW friendliness
Friendliness: fairness across different TCP flavors.
"Friendly share" principle: TCPW is allowed to recover the bandwidth that NewReno wastes through its "blind" window reduction.
The original TCPW RE filter has a friendliness problem: with 5 connections total (TCPW + Reno), the average throughput per connection is shown on the next slide.
TCPW & Reno friendliness
[Plots: average TCPW and Reno throughput (Mbps, 0 to 1.4) vs. the number of Reno connections (0 to 5) in a TCPW/Reno mix, 5 connections total; one panel with no link errors, one with a lossy link (1% packet loss).]
TCPW original estimation (BE)
Recall that the first TCPW version used a "bandwidth-like" estimator (BE):
- sample: b_k = d_k / (t_k - t_k-1), where d_k is the amount of data acknowledged by the ACK arriving at time t_k
- exponential filter: BE_k = a * BE_k-1 + (1 - a) * b_k, where a is the filter gain
TCPW Rate Estimation (RE)
The rate estimate (RE) is obtained by aggregating the data ACKed during a sample interval T (typically T = RTT):
- sample: r_k = d_k / T, where d_k is the amount of data acknowledged during the interval T ending at time t_k
- exponential filter: RE_k = a * RE_k-1 + (1 - a) * r_k, where a is the filter gain
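The interval-based estimator can be sketched like this (a minimal illustration, not the original implementation; the names, the gain 0.9, and the 100 ms interval are assumptions):

```python
class RateEstimator:
    """RE_k = a * RE_{k-1} + (1 - a) * (bytes ACKed during last T) / T."""

    def __init__(self, gain=0.9, interval=0.1):
        self.gain = gain              # exponential filter gain 'a' (assumed)
        self.interval = interval      # sample interval T, typically one RTT
        self.re = 0.0                 # filtered rate estimate (bytes/sec)
        self.acked_in_interval = 0    # d_k accumulator
        self.interval_start = 0.0

    def on_ack(self, now, acked_bytes):
        # Aggregate all data ACKed during T, rather than using one ACK pair.
        self.acked_in_interval += acked_bytes
        elapsed = now - self.interval_start
        if elapsed >= self.interval:
            sample = self.acked_in_interval / elapsed  # r_k
            self.re = self.gain * self.re + (1 - self.gain) * sample
            self.acked_in_interval = 0
            self.interval_start = now
        return self.re
```

Because each sample averages a whole interval of ACKs, RE is less inflated by ACK compression than the per-ACK-pair BE, which is why it behaves more conservatively under congestion.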
TCPW RE or BE interaction with Reno
Setup: one TCPW (RE or BE) connection and one Reno connection share a 5 Mbps bottleneck.
- No errors (the bottleneck gets saturated): BE overestimates the fair rate (= 2.5 Mbps); TCPW BE is not friendly to NewReno!
- Errors (0.5%), no congestion: RE underestimates the fair rate (= 3.6 Mbps (*)); TCPW RE does not improve throughput!
(*) The TCPW fair share exceeds 50% because NewReno is incapable of getting 50%.
TCPW with Adaptive Filter (AF)
Neither the RE nor the BE estimator is optimal in all situations:
- BE is more effective under random loss
- RE is more appropriate under congestion loss (i.e., buffer overflow)
Key idea: dynamically select the aggressive estimate (BE) or the conservative estimate (RE) depending on the current channel status (congestion or random loss?).
Needed: a "congestion measure" that indicates the most probable cause of packet loss (congestion or random).
The adaptive filter actually provides a smooth transition from the aggressive to the conservative measure.
TCPW AF: sampling
The size of the sampling interval T_k adapts to the measured congestion level:
- congestion: T_k grows
- no congestion: T_k = the inter-ACK interval
The adaptation is continuous.
[Diagram: rate samples taken over intervals T_k of varying size.]
TCPW AF: sampling (cont.)
Upon each ACK receipt, the sample interval T_k is adjusted according to the current congestion level, obtained by comparing the actual achieved throughput with the maximum throughput achievable if there were no congestion in the network:
- severe congestion: T_k -> RTT
- link underutilized: T_k -> 0 (i.e., the inter-ACK interval)
ABE computes the congestion level.
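One way to realize this smooth adjustment is a linear interpolation of T_k driven by a normalized congestion level (the linear form and all names here are illustrative assumptions; the actual ABE computation may differ):

```python
def congestion_level(achieved_rate, max_rate):
    """Congestion measure in [0, 1]: how far the achieved throughput falls
    below the maximum throughput expected with no congestion in the network."""
    if max_rate <= 0:
        return 0.0
    return min(1.0, max(0.0, 1.0 - achieved_rate / max_rate))

def sample_interval(inter_ack, rtt, achieved_rate, max_rate):
    """level = 0 -> T_k = inter-ACK interval; level = 1 -> T_k = RTT."""
    level = congestion_level(achieved_rate, max_rate)
    return inter_ack + level * (rtt - inter_ack)
```

When the link is underutilized the estimator behaves like BE (per-ACK samples); as congestion builds, samples stretch toward one RTT and the estimator behaves like RE.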
TCP Westwood with Agile Probing: Handling Dynamic Large Leaky Pipes (problems addressed; Infocom 2004)
- Leaky pipes: packet loss due to errors -> unjustified cwnd cuts and premature slow-start exit
- Large pipes: large capacity and long delay -> the control scheme may not scale
- Dynamic pipes: dynamic load and changing link bandwidth (due to changes of technology, e.g., 802.11, Bluetooth, GPRS) -> linear increase limits efficiency
Persistent Non-Congestion Detection (PNCD)
[Plot: the dominant flows leave at around 50 s; persistent non-congestion is detected and Agile Probing is invoked.]
Agile Probing
Objective: guided by the ERE (Eligible Rate Estimate), converge faster to a more appropriate ssthresh.
- adaptively and repeatedly reset ssthresh to ERE * RTTmin
- increase cwnd exponentially while ssthresh > cwnd
- increase cwnd linearly if ERE < ssthresh
- exit Agile Probing when a packet loss is detected
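A per-RTT sketch of the growth rule above (names and units are hypothetical; the switch to linear growth is expressed here as cwnd reaching ssthresh, one interpretation of the slide's ERE condition):

```python
def agile_probing_step(cwnd, ere, rtt_min):
    """One per-RTT Agile Probing update.
    cwnd in segments, ere in segments/sec, rtt_min in seconds (illustrative)."""
    ssthresh = ere * rtt_min  # repeatedly reset ssthresh from the rate estimate
    if cwnd < ssthresh:
        cwnd = min(2 * cwnd, ssthresh)  # exponential (slow-start-like) growth
    else:
        cwnd += 1                       # linear growth once the estimate is reached
    return cwnd, ssthresh
```

Starting from cwnd = 1 with ERE * RTTmin = 640 segments, the window doubles up to the estimated pipe size in about ten RTTs and then probes linearly, instead of overshooting until loss as plain slow start would.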
Agile Probing
Performance Evaluation (1)
Throughput vs. bottleneck capacity during the first 20 seconds (RTT = 100 ms).
Performance Evaluation (2)
Throughput vs. delay: 100 flows (each lasting 30 s) randomly spread over 20 minutes (bottleneck capacity = 45 Mbps).
Convergence/Friendliness
[Plots: 2 connections (one TCPW, one NewReno) and 10 connections (5 of each).]
TCPW with Agile Probing converges to the same rate as NewReno, demonstrating friendliness.
Lab Measurement Results (FreeBSD implementation)
- Startup invokes Agile Probing
- Persistent Non-Congestion Detection (PNCD) invokes Agile Probing after the dominant flow leaves
Performance Evaluation (3)
Friendliness and convergence.
Summary
- Introduced the concept of rate estimation and related work
- Reviewed end-to-end, estimation-based congestion control methods
- Presented TCP Westwood and the evolution of the "fair rate" estimate used to improve performance; showed simulation results evaluating the method
- Compared TCPW with other methods