
1 OverQoS: An Overlay Based Architecture for Enhancing Internet QoS
L. Subramanian*, I. Stoica*, H. Balakrishnan+, R. Katz*
*UC Berkeley, +MIT
USENIX NSDI'04, 2004
Presented by Alok Rakkhit, Ionut Trestian

2
• Introduction
• OverQoS Architecture
• Controlled-Loss Virtual Link (CLVL)
• OverQoS Implementation
• Two Sample Applications
• Evaluation
• Conclusions

3
• Despite many QoS proposals, today's Internet still provides only a best-effort service. The main reason is that these proposals require all network elements to implement QoS mechanisms.
• The authors propose OverQoS, an overlay-based architecture for enhancing Internet QoS.

4
• Enhancements:
  • Smoothing losses
    ▪ Reduce or even eliminate loss bursts by smoothing packet losses across time
  • Packet prioritization
    ▪ Protect important packets
  • Statistical bandwidth and loss guarantees

5
• Assumptions
  ▪ The placement of overlay nodes is pre-specified
  ▪ The end-to-end path on top of the overlay network is fixed
  ▪ Use existing approaches like RON to determine the overlay path
• Terms
  ▪ Virtual link – the IP path between two overlay nodes
  ▪ Bundle – a stream of application data packets carried across the virtual link

6
• Overlay-based QoS challenges
  • Node placement and cross traffic
  • Fairness
    ▪ Should not hurt the cross traffic
  • Stability
    ▪ Many virtual links overlapping on congested physical links should be able to co-exist

7
• The solution builds on two principles
  • Bundle loss control
    ▪ A controlled-loss virtual link (CLVL) bounds the bundle's loss rate
  • Resource management within a bundle
    ▪ Controls the loss and bandwidth allocations of the flows in the bundle

8
• The CLVL provides a loss rate bound, q
  ▪ Achieved using a combination of FEC and ARQ
  ▪ The bandwidth overhead should be minimized
• The total traffic consists of:
  ▪ The traffic of the bundle
  ▪ The redundancy traffic
• The available bandwidth for the flows in the bundle is c(t) = b(t)(1 − r(t))
  ▪ b(t): traffic bound at time t
  ▪ r(t): fraction of redundancy traffic

9
• If the traffic arrival rate is larger than the available bandwidth c, the extra traffic is dropped at the entry overlay node
  ▪ Drops are made according to packet priority
• Statistical bandwidth guarantees: a bandwidth c_min is guaranteed with probability at least 1 − u, where u is the probability of not meeting the guarantee
  ▪ Guarantees hold as long as the total allocated bandwidth is less than c_min
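
A minimal sketch of how an entry node could derive c_min from a history of available-bandwidth samples; the quantile-based estimator and the sample values below are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def estimate_c_min(c_samples, u=0.01):
        """Estimate c_min such that the available bandwidth c fell below
        c_min in at most a fraction u of the observed samples."""
        # c_min is (approximately) the u-quantile of the sampled bandwidth:
        # P(c < c_min) <= u over the measurement history.
        return float(np.quantile(np.asarray(c_samples), u))

    # Example: samples of c (in Kbps) collected over a measurement window.
    history = [450, 500, 480, 300, 520, 410, 390, 460, 475, 505]
    print(estimate_c_min(history, u=0.01))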

10
• Application–OverQoS interface
  ▪ An application tunnels its packets through the overlay network using an OverQoS proxy
  ▪ The proxy is responsible for signaling the application-specific requirements to OverQoS
  ▪ The OverQoS proxy is application specific

11
• End-to-end recovery vs. overlay CLVL
  ▪ Applying FEC-based loss control end-to-end is far more expensive than applying it at the aggregate level
  ▪ With a good distribution of overlay nodes, overlay links are expected to have much smaller RTTs than end-to-end paths, so ARQ recovery works better at the overlay level
• Delay guarantees
  ▪ The overlay has no control over queuing delays
• Over-provisioning
  ▪ Overlays are the right platform for translating intra-domain QoS into end-to-end QoS guarantees

12
• Estimating b
  ▪ Based on an N-TCP pipe abstraction, which provides a bandwidth N times the throughput of a single TCP connection
  ▪ MulTCP is used to emulate this behavior
  ▪ N is equal to the number of flows in the bundle
• Node architecture (figure)
  ▪ q: target loss rate, c: available bandwidth, p: loss rate, b: maximum sending rate
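
The paper drives b with a MulTCP-style controller; as a rough illustration of the N-TCP abstraction, the sketch below instead uses the simplified TCP throughput formula (one flow ≈ (MSS/RTT)·sqrt(1.5/p)) scaled by N. The function name and the formula choice are assumptions, not the actual OverQoS code.

    import math

    def n_tcp_rate(n_flows, rtt_s, loss_rate, mss_bytes=1500):
        """Illustrative estimate of the N-TCP sending rate b: N times the
        steady-state throughput of a single TCP flow (sqrt(3/2p) model)."""
        if loss_rate <= 0:
            raise ValueError("loss rate must be positive for this model")
        single_tcp_bps = (mss_bytes * 8 / rtt_s) * math.sqrt(1.5 / loss_rate)
        return n_flows * single_tcp_bps

    # Example: 10 flows in the bundle, 50 ms virtual-link RTT, 1% loss.
    print(n_tcp_rate(10, 0.05, 0.01) / 1e6, "Mbps")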

13
• Achieving the target loss rate q
  ▪ FEC vs. ARQ trade-off: bandwidth overhead vs. packet recovery time
• FEC+ARQ based CLVL (r is the redundancy factor)
  ▪ Restrict the number of retransmissions to at most one, i.e., two transmission rounds
  ▪ Compute the expected packet loss rate after the two rounds
  ▪ Goal: minimize the expected bandwidth overhead while meeting q
  ▪ The optimal solution is r1 = 0 (no FEC in the first round)
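
A sketch of the relations this slide refers to, assuming \(\bar{p}(r)\) denotes the residual loss rate of a transmission round protected with FEC redundancy r; the notation is reconstructed and may differ from the paper's exact formulas.

    \[
      q \;\approx\; \bar{p}(r_1)\,\bar{p}(r_2)
      \qquad \text{(expected loss rate after the two rounds)}
    \]
    \[
      \text{overhead} \;\approx\; r_1 + \bar{p}(r_1)\,(1 + r_2)
      \qquad \text{(minimize subject to meeting the target } q\text{)}
    \]

Under these relations, putting no FEC in the first round and protecting only the retransmitted batch minimizes the overhead, which matches the slide's statement that the optimal solution is r1 = 0.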

14
• Application-dependent proxy
• Choosing parameters
  ▪ N is the average number of flows observed over a long period of time
  ▪ q = 0.1%
• Startup phase
  ▪ A slow-start phase is used to estimate the initial value of b
• FEC implementation
  ▪ Operates on small window sizes (n < 1000), so coding is not a bottleneck
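
The paper's FEC operates as an erasure code over windows of up to roughly 1000 packets; as a minimal stand-in that only illustrates the window-based encode/recover structure, the sketch below adds a single XOR parity packet per window (so it can repair at most one loss per window).

    from functools import reduce

    def xor_parity(window):
        """One XOR parity packet over a window of equal-length packets."""
        return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), window))

    def recover_single_loss(survivors, parity):
        """Recover exactly one missing packet from the survivors plus the parity."""
        return xor_parity(survivors + [parity])

    window = [bytes([i] * 8) for i in range(4)]   # 4 data packets of 8 bytes
    parity = xor_parity(window)
    lost = window.pop(2)                          # simulate losing packet 2
    assert recover_single_loss(window, parity) == lost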

15
• Two enhancements
  ▪ Converting bursty losses into smooth losses can enhance quality – for streaming audio
  ▪ Recovering packets preferentially can improve quality – for MPEG streaming
• Prioritization does not consume any additional bandwidth
  ▪ OverQoS retransmits an important lost packet and drops a later, less important packet (see the sketch below)
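
A minimal sketch of the bandwidth-neutral prioritization idea; the class and field names are illustrative, not the paper's implementation. Each retransmission of an important lost packet is paid for by dropping one later, less important packet from the send queue.

    from collections import deque

    class PriorityRecovery:
        def __init__(self):
            self.send_queue = deque()      # (priority, packet) pairs awaiting transmission

        def enqueue(self, priority, packet):
            self.send_queue.append((priority, packet))

        def on_loss(self, priority, packet):
            if priority != "high":
                return                     # less important losses are not recovered
            # Drop the most recently queued low-priority packet, if any, so the
            # retransmission does not consume bandwidth beyond the CLVL rate.
            for i in range(len(self.send_queue) - 1, -1, -1):
                if self.send_queue[i][0] == "low":
                    del self.send_queue[i]
                    break
            self.send_queue.appendleft(("high", packet))   # retransmit as soon as possible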

16
• Streaming audio (figure: Perceptual Evaluation of Speech Quality, PESQ, where 5 is ideal)
  ▪ PESQ increases by 0.15 – 0.2
  ▪ Average loss rates: Mazu-Korea 2%, Intel-Lulea 3%
• MPEG streaming (figure)
  ▪ OverQoS not only improves the quality in the average case but also the minimum quality of a stream

17
• Problem
  ▪ Bursty losses can leave the client unable to connect to the server, or cause skips and disconnections
• OverQoS alleviates the problem of bursty losses by (see the sketch below):
  ▪ Recovering from bursty network losses using the FEC+ARQ based CLVL
  ▪ Smoothly dropping data packets equivalent to the size of the burst at the overlay node
  ▪ Identifying control packets based on packet size and never dropping them
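
A sketch of the drop policy described above; the 100-byte threshold used to identify control packets is an assumed value, and the function is only illustrative.

    def smooth_drops(packets, burst_size, control_max_bytes=100):
        """After recovering a loss burst of `burst_size` packets, drop the same
        number of subsequent *data* packets so no extra bandwidth is consumed,
        while never dropping small control packets."""
        to_drop = burst_size
        forwarded = []
        for pkt in packets:
            if to_drop > 0 and len(pkt) > control_max_bytes:
                to_drop -= 1            # drop this data packet to pay back the burst
                continue
            forwarded.append(pkt)       # control packets and remaining data pass through
        return forwarded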

18
• Sequence number plot illustrating the smoothing of packet losses using OverQoS (figure)
  ▪ Smoothing losses works well only when the bursty loss periods are short enough to compensate for
  ▪ During congestion periods with very high loss rates (around 10%), OverQoS is unable to achieve the target loss rate

19
• Methodology
  • Wide-area evaluation testbed
    ▪ RON and PlanetLab – 19 diverse nodes
  • Simulation environment
    ▪ ns-2 – a single congested 10 Mbps link with varying background traffic
    ▪ Long-lived TCP connections
    ▪ Self-similar traffic
    ▪ Web traffic

20
• Simulations
• Wide-area evaluation (q = 0.1%)
  ▪ The target loss rate is achieved on 80 of the 83 virtual links
  ▪ Causes of failure on the other 3 virtual links:
    ▪ Short outages – periods (< 5 s) during which all packets are lost
    ▪ Bi-modal loss distributions – bursty losses

21
• Figure: 83 unique virtual links monitored, u = 0.01 and u = 0.005, N-TCP with N = 10
  ▪ The value of c_min is greater than 100 Kbps for more than 80% of the links
  ▪ c_min is calculated from a history of 200 seconds; the average N-TCP sending rate is between 120 Kbps and 2 Mbps
• Stability of c_min
  1) The value of c_min is very stable: it does not deviate more than 10% around its mean
  2) With the target violation probability set to 1%, the measured value is no more than 1.3%

22
• Overhead characteristics (figure)
  ▪ The difference between the average loss rate and the FEC+ARQ overhead is the amount of FEC used in the second round
  ▪ The burstier the background traffic, the higher the amount of FEC required to recover from the losses

23
• Delay characteristics
  ▪ Two reasons for increased delay: the recovery process, and support for in-sequence delivery of packets
• Three models compared (figure):
  (a) No packet ordering
  (b) End-to-end ordering
  (c) Hop-by-hop ordering
  1) End-to-end ordering performs better than hop-by-hop ordering
  2) Adding new OverQoS nodes increases delay only by a limited amount

24
• Three OverQoS bundles (with N=2, N=4, N=8) compete on a shared bottleneck under two different scenarios
  ▪ No cross-traffic
  ▪ Cross-traffic consisting of five long-lived TCPs
  1) The three OverQoS bundles co-exist with each other and with the background traffic
  2) The ratio of throughputs of the three bundles is preserved

25
• OverQoS can enhance Internet QoS without any support from the underlying IP network
• OverQoS achieves the three enhancements with little (about 5%) or no extra bandwidth
• Future work
  ▪ Combine admission control and path selection
  ▪ Determine the "optimal" placement of the OverQoS nodes in the network

