
1 A Measurement Study of Internet Bottlenecks
N. Hu (CMU), L. Li (Bell Labs), Z. M. Mao (U. Michigan), P. Steenkiste (CMU), J. Wang (AT&T). Infocom 2005.
Presented by Mohammad Malli, PhD student seminar, Planete Project

2 Goals (November 28, 2005)
Recently, many active probing tools have been developed for measuring and locating bandwidth bottlenecks, but:
Q1. How persistent are Internet bottlenecks?
–Important for choosing the measurement frequency
Q2. Are bottlenecks shared by end users within the same prefix?
–Useful for path bandwidth inference
Q3. What is the relationship between bottlenecks, packet loss, and queuing delay?
–Useful for congestion identification
Q4. What is the relationship between bottlenecks and router and link properties?
–Important for traffic engineering

3 Related Work
Persistence of Internet path properties
–Zhang [IMW-01], Paxson [TR-2000], Labovitz [TON-1998, Infocom-1999]: loss, delay, packet ordering, …
–The persistence of the bottleneck location was not considered
Congestion point sharing
–Katabi [TR-2001], Rubenstein [Sigmetrics-2000]: flow-based studies, not based on end-to-end paths
Correlation among Internet path properties
–Paxson [1996]: at the end-to-end level, not at the location level
Correlation between router and link properties
–Agarwal [PAM 2004]

4 Data collection
Probing
–Source: a CMU host
–Destinations: 960 IP addresses
–10 consecutive probings for each destination (1.5 minutes)
–Repeated for 38 days (for the persistence study)
[Diagram: source S at CMU probing 960 Internet destinations D, repeated on Day-1, Day-2, …, Day-38]

5 Pathneck
An active probing tool that detects Internet bottleneck locations
–For details, see "Locating Internet Bottlenecks: Algorithms, Measurements, and Implications" [SIGCOMM'04]
–Source code: www.cs.cmu.edu/~hnn/pathneck
Pathneck characteristics
–Low overhead (on the order of tens to hundreds of KB)
–Single-end control (sender only)
Pathneck output used in this work
–Bottleneck link location
–Route

6 Recursive Packet Train (RPT) in Pathneck
The train consists of 30 measurement packets (60 B each, TTLs 1, 2, …, 30), followed by 60 load packets (500 B each, TTL 255), followed by 30 more measurement packets (TTLs 30, …, 2, 1); all are UDP packets
–Load packets are used to measure available bandwidth
–Measurement packets are used to obtain location information
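The train layout on this slide can be sketched as follows; this is a minimal illustration of the packet sizes and TTL sequence only, not the real Pathneck sender, and the function and field names are my own:

```python
# Sketch of the Recursive Packet Train (RPT) layout described on this
# slide. Each packet is represented as a (size_bytes, ttl) tuple.

def build_rpt(n_measure=30, n_load=60, load_size=500, measure_size=60):
    """Return the packet train as a list of (size_bytes, ttl) tuples."""
    # Head measurement packets: TTLs 1, 2, ..., 30, so the i-th router
    # drops one packet and returns an ICMP time-exceeded reply.
    head = [(measure_size, ttl) for ttl in range(1, n_measure + 1)]
    # Load packets: large TTL so they traverse the whole path.
    load = [(load_size, 255)] * n_load
    # Tail measurement packets: TTLs mirrored, 30, ..., 2, 1.
    tail = [(measure_size, ttl) for ttl in range(n_measure, 0, -1)]
    return head + load + tail

train = build_rpt()
print(len(train))           # 120 packets in total
print(train[0], train[-1])  # (60, 1) at both ends of the train
```

Because the head and tail TTLs mirror each other, every router on the first 30 hops drops exactly one head packet and one tail packet, which is what makes the per-hop gap measurement on the next slides possible.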

7 Gap value
[Diagram: the sender emits the packet train toward a router, shown along a time axis]

8 Gap value
[Animation: the router drops the head measurement packet whose TTL expires and sends an ICMP time-exceeded reply]

9 Gap value
[Animation: the sender receives the first ICMP reply]

10 Gap value
[Animation: the router drops the matching tail measurement packet and sends a second ICMP reply]

11 Gap value
[Animation: the sender receives the second ICMP reply; the interval between the two ICMP arrivals is the gap value]
RPT probing is repeated 10 times for each pair of nodes

12 Terminology
A persistent probing set is a probing set in which all n probings follow the same route

13 Route Persistence
Route change is very common and must be considered in the bottleneck persistence analysis
–Consistent with the results of Zhang et al. [IMW-01] on route persistence
[Figure: route persistence at the AS level and at the location level over 9 days]

14 Bottleneck Persistence
Persistence of a bottleneck R:
Persist(R) = (# of persistent probing sets where R is the bottleneck) / (# of persistent probing sets where R appears)
Bottleneck persistence of a path: max(Persist(R)) over all bottlenecks R
Two views:
1. End-to-end view, per (src, dst) pair
–Includes the impact of route change
2. Route-based view, per route
–Removes the impact of route change
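The Persist(R) ratio above can be computed directly once each persistent probing set is reduced to its route and detected bottleneck. A minimal sketch, assuming that per-set data is available as (route, bottleneck) pairs (a data layout of my own, not from the paper):

```python
# Sketch of the slide's persistence metric:
#   Persist(R) = (# persistent probing sets where R is the bottleneck)
#              / (# persistent probing sets where R appears on the route)

from collections import Counter

def persistence(probing_sets):
    """probing_sets: list of (route, bottleneck) pairs, one pair per
    persistent probing set; route is a tuple of router IDs."""
    appears, is_bottleneck = Counter(), Counter()
    for route, bottleneck in probing_sets:
        for router in route:
            appears[router] += 1
        is_bottleneck[bottleneck] += 1
    return {r: is_bottleneck[r] / appears[r] for r in appears}

sets = [
    (("A", "B", "C"), "B"),
    (("A", "B", "C"), "B"),
    (("A", "B", "D"), "D"),
]
p = persistence(sets)
print(p["B"])  # 2/3: B is the bottleneck in 2 of the 3 sets where it appears
```

The path-level bottleneck persistence on the slide is then max(p[R]) over the bottlenecks R observed on that path.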

15 Bottleneck Persistence
1. Bottleneck persistence in the route-based view is higher than in the end-to-end view
2. AS-level bottleneck persistence is very similar to location-level persistence
3. 20% of bottlenecks have perfect persistence in the end-to-end view, and 30% in the route-based view

16 Results summary
Only 20-30% of Internet bottlenecks have perfect persistence
–Applications should be ready for bottleneck location changes
Bottleneck locations have a strong (60%) correlation with packet loss locations (within 2 hops)
–Bottleneck and loss detection should be used together for congestion detection
Fewer than 10% of the destinations in a prefix cluster share a bottleneck more than half of the time
–End users cannot assume common bottlenecks
Bottlenecks have no clear relationship with link capacity, router CPU load, or memory usage, but a clear correlation with link load
–Network engineers should focus on traffic load to eliminate bottlenecks

17 Limitations
An interesting study, but how representative are the obtained statistics of the whole Internet, given that
–the few sources used for probing are a CMU node, 8 PlanetLab nodes, and 13 RON nodes
–the 960 probed destinations are far fewer than the number of Internet paths
Pathneck limitations
–Load packets are larger than what some firewalls permit; such firewalls only forward the 60-byte UDP packets
–In any case, Pathneck cannot measure the packet train length on the last link because of ICMP rate limiting (theoretically, the destination must send a 'destination port unreachable' reply for each packet)

18 Thank you for listening

19 Backup

20 Bottleneck vs. loss | delay
Possible congestion indications
–Large queuing delay
–Packet loss
–Bottleneck
They do not always occur together
–Packet scheduling algorithm → large queuing delay
–Traffic burstiness or RED → packet loss
–Small link capacity → bottleneck
Bottleneck →? link loss | large link delay

21 Trace
Collected on the same set of 960 paths, but with independent measurements
1. Detect bottleneck locations using Pathneck
2. Detect loss locations using Tulip
–Only the forward-path results are used
3. Detect link queuing delay using Tulip
–medianRTT – minRTT
[Tulip was developed at the University of Washington, SOSP '03]
The analysis is based on the 382 paths for which both the bottleneck location and packet loss are detected
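The medianRTT – minRTT estimate in step 3 is simple enough to show directly; a minimal sketch using the Python standard library, with illustrative RTT samples of my own:

```python
# Sketch of the Tulip-style per-hop queuing delay estimate used on this
# slide: queuing delay ≈ median RTT minus minimum RTT over repeated
# probes, since the minimum approximates the queue-free baseline.

from statistics import median

def queuing_delay(rtts_ms):
    """Estimate queuing delay (ms) from repeated RTT samples to one hop."""
    return median(rtts_ms) - min(rtts_ms)

samples = [12.1, 12.0, 15.3, 12.4, 18.9]
print(round(queuing_delay(samples), 1))  # 0.4
```

Using the median rather than the mean keeps occasional very large RTTs (like the 18.9 ms sample above) from inflating the estimate.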

22 Bottleneck vs. Packet Loss

23 Bottleneck vs. Queueing Delay

