
1 Multirate Congestion Control Using TCP Vegas Throughput Equations. Anirban Mahanti, Department of Computer Science, University of Calgary, Calgary, Alberta, Canada T2N 1N4

2 Problem Overview. Context: live or scheduled multicast of popular content to thousands of clients. "Layered encoding" to serve heterogeneous clients. Employ a "multirate" congestion control protocol, receiver-driven for scalability. [Figure: Internet video server streaming to ADSL, dial-up, and high-speed access clients]

3 The Multirate CC Wish List: 1. "TCP friendly" 2. Operate without inducing packet losses while probing for bandwidth 3. Receivers behind a common bottleneck link receive media of the same quality 4. Responsive to congestion, yet achieve consistent playback quality

4 TCP Friendliness for Multimedia Streams. What is a TCP-friendly bandwidth share? As much as a TCP flow under similar conditions (e.g., RLC, Infocom'98), or a function of the number of receivers (e.g., WEBRC, Sigcomm'02). Equation-based approach: fair sharing of bandwidth, with lower variation in reception rate compared to TCP-like AIMD approaches.

5 Objective. Develop a new multirate congestion control protocol using the TCP Vegas throughput model – "Adaptive Vegas Multicast Rate Control". Less oscillatory throughput? Fewer packet losses? Reduced RTT bias? Prior work: Reno-like rate control (e.g., RLM Sigcomm'96, RLC, FLID-DL NGC'00, etc.)

6 TCP Reno Throughput Model. Reno (Mathis et al., ACM CCR 1997; Padhye et al., Sigcomm'98)
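The equation itself is not reproduced in this transcript. For reference, the simplified steady-state form due to Mathis et al. (the Padhye et al. model adds retransmission-timeout terms) is:

```latex
% Simplified Reno steady-state throughput (Mathis et al. 1997);
% MSS = packet size, RTT = round-trip time, p = loss event rate.
T_{Reno} \approx \frac{MSS}{RTT}\sqrt{\frac{3}{2p}}
```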

7 TCP Vegas Window Evolution. [Figure: window size vs. time in the no-loss regime, showing a stable backlog] Window evolution between loss events [Samios & Vernon'03]

8 TCP Throughput Models

9 TCP Vegas Throughput Model [Samios & Vernon'03]
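The model's equations are not in this transcript. Purely as an illustration of the no-loss regime (not the full Samios & Vernon model), Vegas keeps the estimated backlog between the thresholds alpha and beta, which pins the steady-state rate:

```latex
% No-loss equilibrium only: Vegas keeps the backlog estimate
% \delta = W\left(\tfrac{1}{BaseRTT} - \tfrac{1}{RTT}\right) within [\alpha, \beta],
% so the throughput is approximately
T_{Vegas} = \frac{W}{RTT} \approx \frac{\delta \cdot BaseRTT}{RTT - BaseRTT},
\qquad \alpha \le \delta \le \beta
```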

10 TCP Throughput Models: Summary. RTT bias: none when packet losses are negligible; in the presence of packet losses there is some RTT bias, but lower than that of TCP Reno. Relative aggressiveness of TCP Vegas flows depends on the Vegas threshold parameters and on the buffer space available at the bottleneck router! How to adaptively set the TCP Vegas threshold parameters?

11 Online Estimation of Parameters: RTT. E.g., an Exponentially Weighted Moving Average (EWMA) for RTT. What "weights" should be used?
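A minimal sketch of the EWMA update the slide alludes to; the weight value (1/8, as in TCP's SRTT) is an illustrative choice, not one taken from the protocol:

```python
def ewma_rtt(prev_estimate, sample, weight=0.125):
    """Exponentially weighted moving average: new = (1 - w) * old + w * sample."""
    if prev_estimate is None:          # first sample seeds the estimate
        return sample
    return (1.0 - weight) * prev_estimate + weight * sample

# Example: feed in per-slot delay samples (seconds)
est = None
for sample in [0.11, 0.13, 0.12, 0.20, 0.12]:
    est = ewma_rtt(est, sample)
print(round(est, 4))
```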

12 Average Loss Interval (ALI) Method. [Figure: packet sequence divided into loss intervals s1, s2, s3, ...; obtained vs. lost packets]
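A sketch of the ALI computation as used in TFRC-style equation-based protocols: a weighted average over the n most recent loss intervals, with equal weights for the newest half and linearly decaying weights after that. The weights shown are the usual TFRC choice and are an assumption here:

```python
def average_loss_interval(intervals):
    """
    intervals: most recent loss intervals first, e.g. [s1, s2, ..., sn].
    Returns the weighted average interval; loss event rate p ~= 1 / result.
    """
    n = len(intervals)
    # TFRC-style weights: 1 for the newest half, linearly decaying afterwards
    weights = [1.0 if i < n / 2 else 2.0 * (n - i) / (n + 2) for i in range(n)]
    return sum(w * s for w, s in zip(weights, intervals)) / sum(weights)

s = [120, 90, 150, 80, 100, 110, 95, 130]   # packets between loss events (n = 8)
p = 1.0 / average_loss_interval(s)
print(round(p, 5))
```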

13 AVMRC Protocol

14 Adaptive Vegas Multicast Rate Control: an end-to-end protocol. The server transmits data for a media object using multiple multicast channels. Clients independently determine their reception rate using the TCP Vegas model and subscribe to multiple multicast channels such that the client reception rate approximately matches the estimated fair share (see the sketch below).
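A hypothetical sketch of the receiver-side matching step just described: given a throughput estimate from the Vegas model, the client subscribes to the largest set of cumulative layers whose rate fits within that estimate. The layer rates are the defaults listed later in the deck; the function name and the treatment of the rates as cumulative are illustrative assumptions, not the actual implementation.

```python
LAYER_RATES_KBPS = [256, 384, 576, 864, 1296, 1944, 2916, 4374, 6561]  # cumulative rates

def target_subscription(fair_share_kbps):
    """Return the number of layers whose cumulative rate fits the estimated fair share."""
    layers = 0
    for cumulative_rate in LAYER_RATES_KBPS:
        if cumulative_rate <= fair_share_kbps:
            layers += 1
        else:
            break
    return layers

print(target_subscription(1000))   # -> 4 layers (864 Kbps <= 1000 < 1296)
```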

15 AVMRC Overview Continued … Dynamically vary the Vegas threshold parameters using short-term and long-term averages of the loss event rate and delay. RTT is approximated as the average queuing delay along the path from server to client plus some "aggressiveness constant". Clients are "weakly" synchronized.

16 AVMRC: Dynamic TCP Vegas Thresholds
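The threshold equations on this slide are not in the transcript. Purely as an illustration of the idea from the previous slide (vary alpha and beta using short- and long-term congestion estimates), one plausible shape is to shrink the thresholds when the long-term loss estimate worsens; the scaling rule below is an assumption, not the AVMRC formula.

```python
def dynamic_vegas_thresholds(p_short, p_long, alpha_base=2.0, beta_base=4.0):
    """
    Illustrative only: back off the Vegas thresholds (target backlog in packets)
    when the loss estimates indicate persistent congestion, and allow more
    aggressiveness when both estimates are low.  NOT the actual AVMRC rule.
    """
    congestion = max(p_short, p_long)
    scale = 1.0 / (1.0 + 100.0 * congestion)   # assumed scaling factor
    alpha = max(1.0, alpha_base * scale)
    beta = max(alpha + 1.0, beta_base * scale)
    return alpha, beta

print(dynamic_vegas_thresholds(p_short=0.001, p_long=0.01))
```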

17 Time Slot: Protocol Invocation Granularity. How often do clients compute new throughput estimates? Once every T seconds (a time slot). T = ??? Time slot dilemma: longer slots give reliable estimates of RTT & p; smaller slots enable a quick channel drop in the event of an aggressive add!

18 AVMRC: Time Slot Dilemma. AVMRC default: T = 100 ms. Maintain short-term & long-term estimates: smaller slots enable quick channel drops based on short-term estimates, while channel adds are governed by stable long-term estimates.

19 AVMRC: Receiver Synchronization. Add operations can impede convergence to the fair share; a quick drop by a client, however, does not impede convergence of other receivers. AVMRC solution: weak synchronization. The server inserts a marker in the data stream once every T seconds; is this enough? [Figure: receivers A and B behind a common bottleneck; congestion caused by A causes B to drop below its fair share]

20 AVMRC: Channel Add/Drop Frequency. Reception rate choices may be coarse-grained, resulting in client reception rate oscillations. Allow add operations only every T_add = nT; this clusters channel additions behind a common bottleneck when n×T is larger than the network delay variations. Channel drops are allowed every T seconds (every time slot). [Figure: with layer rates of 200, 300, and 500 Kbps and a fair share between them, the subscription oscillates]

21 AVMRC: RTT Estimation. How to define RTT for multicast traffic, with little or no reverse traffic? Options: obtain RTT by end-to-end exchange of control information, or use a fixed RTT (e.g., FLID-DL, RLC). AVMRC default: fixed RTT + queuing delay; the queuing delay calculation doesn't require synchronized clocks.
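The slide notes that the queuing-delay calculation does not need synchronized clocks. The standard trick is to work with one-way delay differences: a constant sender-receiver clock offset cancels when the minimum observed one-way delay is subtracted. A sketch under that assumption (names are illustrative):

```python
class QueuingDelayEstimator:
    """
    Estimate queuing delay from sender timestamps without synchronized clocks:
    one_way = recv_time - send_time includes an unknown constant clock offset,
    but (one_way - min_one_way) cancels the offset, leaving queuing delay.
    """
    def __init__(self):
        self.min_one_way = None

    def update(self, send_timestamp, recv_timestamp):
        one_way = recv_timestamp - send_timestamp      # propagation + queuing + offset
        if self.min_one_way is None or one_way < self.min_one_way:
            self.min_one_way = one_way                 # approx. propagation + offset
        return one_way - self.min_one_way              # approx. queuing delay

est = QueuingDelayEstimator()
for send_ts, recv_ts in [(0.0, 5.020), (0.1, 5.125), (0.2, 5.260)]:
    q = est.update(send_ts, recv_ts)
print(round(q, 3))   # queuing delay relative to the minimum-delay packet
```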

22 AVMRC: Rules for Changing Subscription
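The rule table from this slide is not in the transcript. The sketch below is only a composite of what the preceding slides say: drops may happen every slot and are driven by the short-term estimate, while adds happen only on T_add = nT boundaries and are driven by the stable long-term estimate. The names, the exact comparisons, and the treatment of the rates as cumulative are assumptions.

```python
def subscription_decision(current_layers, rate_short_kbps, rate_long_kbps,
                          slot_index, n_add=10,
                          rates=(256, 384, 576, 864, 1296, 1944, 2916, 4374, 6561)):
    """
    Illustrative composite of the AVMRC rules described on earlier slides,
    not the actual rule table.  Returns the new number of subscribed layers.
    """
    # Drop check every slot: use the short-term (reactive) estimate.
    while current_layers > 0 and rates[current_layers - 1] > rate_short_kbps:
        current_layers -= 1

    # Add check only on add boundaries: use the stable long-term estimate.
    if slot_index % n_add == 0:
        while (current_layers < len(rates)
               and rates[current_layers] <= rate_long_kbps):
            current_layers += 1

    return current_layers

print(subscription_decision(current_layers=4, rate_short_kbps=700,
                            rate_long_kbps=900, slot_index=10))
```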

23 Evaluation Methodology

24 Performance Evaluation - Goals. Explore properties of AVMRC. Compare AVMRC with an analogous protocol (RMRC) that uses the TCP Reno throughput model. Other factors of AVMRC considered: synchronization policy, RTT estimation policy, data transmission policy (bursty vs. smooth), protocol reactivity. Evaluation using the Network Simulator (ns-2).

25 AVMRC: Default Protocol Parameters. Slot duration T = 0.1 s. RTT: fixed value (0.1 s) + variable queuing delay. ALI with n = 8 for loss event rate computation. Weak synchronization. Bursty transmissions – once every 0.1 s. Cumulative layered encoding with the following rates: 256, 384, 576, 864, 1296, 1944, 2916, 4374, 6561 Kbps. RMRC uses the same parameters.

26 Network Model. Dumbbell topology with a single bottleneck, 3 Mbps to 100 Mbps. Drop-tail FIFO buffering of approx. 50 to 250 ms. Background traffic: simulated HTTP, FTP, UDP. Round-trip propagation delay in [20, 460] ms.

27 AVMRC Performance Evaluation

28 No Background Traffic. [Figure: (a) AVMRC, (b) RMRC]

29 No Background Traffic: Scalability (1). Bottleneck = 3 Mbps, Buffer = 80 packets.

30 No Background Traffic: Scalability (2). Bottleneck = 3 Mbps, Buffer = 80 packets.

31 UDP Background Traffic. Bottleneck = 3 Mbps, Buffer = 80 packets. If the bottleneck link is lightly loaded, AVMRC operates without inducing packet losses.

32 FTP Background Traffic. Bottleneck = 3 Mbps, Buffer = 80 packets. AVMRC experiences no packet losses in a majority of the experiments.

33 Dynamic Vegas Thresholds (1). Bottleneck = 45 Mbps, Buffer = 250 packets. Background flows: 90% HTTP, 10% FTP; RTT in [20, 420] ms.

34 Dynamic Vegas Thresholds (2). Scaling the bottleneck link capacity & background traffic mix. The dynamic thresholds work!

35 RTT Estimation Policy. Bottleneck capacity = 10 Mbps, Buffer = 150 packets, 90 background HTTP sessions.

36 Protocol Reactivity: Session Scalability. Bottleneck capacity = 3 Mbps, Buffer = 80 packets, no background traffic.

37 Protocol Reactivity: HTTP Background Traffic. Bottleneck capacity = 10 Mbps, Buffer = 150 packets, background traffic is HTTP.

38 Conclusions & Future Work. AVMRC, a new multirate congestion control protocol based on the TCP Vegas throughput model: can operate without inducing losses; no feedback to the source; no explicit coordination among clients; no constraints on the data transmission policy; fair sharing with TCP Reno; dynamic TCP Vegas threshold estimation. Future work: incremental deployment of Vegas? Unicast rate control?

39 For Details … Anirban Mahanti, "Scalable Reliable On-Demand Media Streaming Protocols", Ph.D. Thesis, Dept. of Computer Science, Univ. of Saskatchewan, March 2004. Anirban Mahanti, Derek L. Eager, and Mary K. Vernon, "Improving Multirate Congestion Control Using TCP Vegas Throughput Equations", Computer Networks Journal, to appear 2004. Email: mahanti@cpsc.ucalgary.ca http://www.cpsc.ucalgary.ca/~mahanti

