1
Internet Performance Measurements and Measurement Techniques
Jim Kurose
Department of Computer Science, University of Massachusetts/Amherst
http://www.cs.umass.edu/~kurose
2
Overview
- Introduction: why and what to measure
- Measuring per-hop performance
  – tricks, successes, "failures"
- End-to-end measurements
  – correlation in end-end loss, delay
  – "confidence" in measurements
- What lies ahead?
3
What "performance" to measure?
- packet delay
- packet loss
- link or path capacity/availability
- where? end-to-end, or per-hop?
- over what time scale? sub-second, minutes, hours?
4
Why measure?
End-end measurements:
- benchmarking, monitoring (e.g., Imeter)
- fault identification (e.g., routing instabilities)
- understanding end-end performance
  – misordering, loss (e.g., TCP studies by Paxson)
  – correlation time scale for end-end loss, delay
  – use in adaptive applications
Per-hop measurements:
- network operations (proprietary?)
- understanding where in the end-end path performance impairments occur
- use in reliable multicast protocols, active services, network modeling
5
Measuring per-hop performance
- Question: what is the loss, delay, capacity at a given hop?
- Question: what per-hop delays does a packet see?
- Complication: routers do not report performance stats to end users, so we need to infer performance statistics
  – cleverly use the little "machinery" that we have
  – develop an inferencing methodology
6
Clever use of existing protocols
traceroute, pathchar:
- use ICMP packets and the time-to-live (TTL) field
- each router decrements TTL on forwarding
- TTL = 0 results in an ICMP error msg back to the sender
- used to discover all routers on the path to a destination
[figure: probes sent with ttl=1, ttl=2, ttl=3 draw ICMP errors from successive routers, out to router x]
7
Clever use of existing protocols (cont.)
The ICMP/TTL-field trick also gives link bandwidth:
- find the minimum round-trip delay to hop x-1 (use many probe pkts)
- find the minimum round-trip delay to hop x
- the difference gives the propagation delay plus the transmission delay
- vary the pkt size to get the link bandwidth (a sketch follows below)
The probes also see the variable queueing delay and loss of the whole path to x, so isolating hop x behavior is difficult.
[figure: timing diagram; the data packet (d bits) and the ICMP reply (r bits) contribute d/bw + 2*prop + r/bw to the round trip]
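To make the arithmetic concrete, here is a minimal pathchar-style sketch in Python. It assumes the minimum-RTT differences between hop x-1 and hop x have already been measured for several probe sizes; the numbers below are synthetic, generated from the very model being fit, so they are illustrative only.

```python
# Fit (min RTT to hop x) - (min RTT to hop x-1) against probe size: the slope
# gives the link bandwidth, the intercept roughly 2*prop (the fixed-size ICMP
# reply adds a small constant r/bw that also lands in the intercept).
import numpy as np

sizes_bytes = np.array([64, 256, 512, 1024, 1500])
# synthetic per-size min-RTT differences (seconds): 2*prop + 8*size/bandwidth,
# with prop = 2 ms and bandwidth = 10 Mb/s
rtt_diff = 2 * 2e-3 + 8 * sizes_bytes / 10e6

slope, intercept = np.polyfit(sizes_bytes, rtt_diff, 1)
bandwidth_bps = 8.0 / slope       # seconds-per-byte slope -> bits per second
prop_delay_s = intercept / 2.0    # one-way propagation delay

print(f"estimated bandwidth: {bandwidth_bps / 1e6:.1f} Mb/s")
print(f"estimated propagation delay: {prop_delay_s * 1e3:.2f} ms")
```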
8
Can we measure per-hop delays?
Motivation: a typical modeling paper begins "We model the network as a single link ..."
- is this a valid assumption?
- does a packet generally experience "most" of its delay at one link?
9
Measuring per-hop delays
- send unicast probes along the path
- use "IP options" on the probes to gather timestamps
  – a packet passing through a specified router is timestamped
- problem: only 4 timestamps fit in each packet
- solution: send multiple probes at one time
[figure: probe 1 is timestamped at router x, probe 2 at router y]
10
Measuring per-hop delays
- problem: IP options packets are treated differently
  – data packets are forwarded on the fast path
  – IP options packets are detoured (hopefully briefly)
- solution: send a non-option packet along with the probes
  – only analyze probes whose delay is close to the non-option packet's delay (hope: negligible options-processing delays)
[figure: option probes take the options-forwarding path; the non-option pkt does not]
11
Analyzing the per-hop data
- consider only probes with e-e queueing delays > 100 ms
- filter: keep the cases where the probe and non-option pkt delays are "close" (within 20 ms)
- hypothesis: the e-e delays of the filtered probes come from the same distribution as those of all probes (a sketch of this test follows below)
  – the hypothesis is rejected with negligible probability of being wrong :-(
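A sketch of what that filter-and-test step might look like, assuming the trace is held as two arrays: the delay of each option probe and the delay of the non-option packet sent with it. The data are synthetic, and the mechanism that biases the filtered sample (an options-processing detour that grows with load) is purely hypothetical; the hypothesis test shown is a standard two-sample Kolmogorov-Smirnov test, one reasonable choice rather than necessarily the one used in the study.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
base = rng.exponential(0.08, 5000)          # delay of the non-option packet (s)
detour = rng.exponential(0.5 * base)        # hypothetical options-processing detour (s)
probe_delay = base + detour                 # e-e delay of the option probe
nonoption_delay = base

big = probe_delay > 0.100                            # e-e queueing delay > 100 ms
close = (probe_delay - nonoption_delay) < 0.020      # probe "close" to non-option pkt: < 20 ms
filtered = probe_delay[big & close]

# H0: delays of the filtered probes come from the same distribution as all
# (large-delay) probes; a tiny p-value rejects H0, as happened in the study.
stat, p_value = ks_2samp(filtered, probe_delay[big])
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.2g}")
```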
12
Can we measure per-hop packet delay/loss?
- the timestamping approach is not statistically valid
- inspiration (!) from another ongoing effort: multicast loss
- question: where in a multicast tree does loss occur?
  – backbone? edges?
  – implications for the design of reliable multicast protocols
13
Using multicast to infer per-hop performance
- correlation among the mcast pkts received at different receivers provides a glimpse inside
- simple loss model: independent loss probability α_k on link k
- method:
  – multicast n packets from the source
  – data: the list of packets received at each receiver
  – check consistency of the data with the independent loss model
  – analysis: Maximum Likelihood Estimation, i.e. find the α that maximizes Prob[data | α] (a two-receiver sketch follows below)
[figure: two-receiver tree from the source to R1 and R2, links labeled with loss probabilities α1, α2, α3]
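For the simplest case in the figure, one shared link feeding two leaf links, the maximum-likelihood estimates have a well-known closed form in terms of the fractions of probes seen by R1, by R2, and by both. The sketch below checks that form against simulated receive records; the pass probabilities and probe count are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000
alpha_shared, alpha_1, alpha_2 = 0.98, 0.95, 0.90   # true per-link pass probabilities

on_shared = rng.random(n) < alpha_shared            # probe survived the shared link
r1 = on_shared & (rng.random(n) < alpha_1)          # probe reached receiver R1
r2 = on_shared & (rng.random(n) < alpha_2)          # probe reached receiver R2

g1, g2, g12 = r1.mean(), r2.mean(), (r1 & r2).mean()

est_shared = g1 * g2 / g12      # estimated pass prob. of the shared link
est_leaf1 = g12 / g2            # estimated pass prob. of R1's leaf link
est_leaf2 = g12 / g1            # estimated pass prob. of R2's leaf link

print(f"shared link: true {alpha_shared:.3f}, estimate {est_shared:.3f}")
print(f"leaf to R1 : true {alpha_1:.3f}, estimate {est_leaf1:.3f}")
print(f"leaf to R2 : true {alpha_2:.3f}, estimate {est_leaf2:.3f}")
```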
14
Multicast inference: evaluation
- through ns simulations
  – 2-8 receivers
  – different topologies
  – TCP and on/off background sources
- the approach tracks probe loss well
- gives a good estimate of the background traffic's loss
15
Multicast Inference: to-do list
Observations:
- multicast-based inference is promising for loss
- applicable to delays
Research questions:
- what if the topology is partially unknown?
- can we identify bottleneck links?
Potential applications:
- Internet weather map
- use in adaptive applications
UMass collaboration with AT&T, LBNL
16
End-End Loss/Delay Characteristics
Question: what is the time correlation of e-e loss and delay?
Applications:
- adjustment of FEC for audio, video, data
- playout delay adjustment for audio
- analytic models: how many "states" are needed in Markovian models?
Approach: collect/analyze point-to-point and multicast traces of periodically generated UDP probes
17
Analysis Issues
- stationarity of traces:
  – look for increasing trends in the average and variance over the trace
  – non-stationary traces are not considered
- removal of clock skew
  – algorithm for removing constant clock drift (a rough sketch follows below)
- how "confident" are we in the measured value?
- 150 hours of measurement data
  – there's an exception to every "typical" result
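The constant-drift removal can be illustrated very simply: if the receiver's clock runs fast or slow at a constant rate, measured one-way delays acquire a linear trend over the trace. The sketch below strips a least-squares linear trend from synthetic data; this is a crude stand-in (the algorithm actually used fits a line lying below the delay samples), but the correction step has the same shape. All numbers here are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0.0, 600.0, 0.02)                        # probe send times (s), every 20 ms
true_delay = 0.03 + rng.exponential(0.01, t.size)      # hypothetical one-way delays (s)
drift_ppm = 50e-6                                      # 50 ppm constant relative clock drift
measured = true_delay + drift_ppm * t                  # what the mismatched clocks report

slope, intercept = np.polyfit(t, measured, 1)
corrected = measured - slope * t                       # remove the constant-drift component

print(f"estimated drift: {slope * 1e6:.1f} ppm (true {drift_ppm * 1e6:.0f} ppm)")
print(f"residual trend after correction: {np.polyfit(t, corrected, 1)[0] * 1e6:.2f} ppm")
```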
18
Analysis Metrics
- delay autocorrelation, computed from d_j, the measured delay of pkt j
- loss autocorrelation, computed from x_j, where x_j = 0 if pkt j is received and x_j = 1 if pkt j is lost
- conditional average delay given loss: the average delay of packets sent shortly after a lost packet
(a sketch of all three follows below)
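A small sketch of how the three metrics could be computed, assuming the trace is stored as a per-probe array of delays (NaN where the packet was lost) and a parallel loss-indicator array. The data below are synthetic (AR(1) delays, with loss made more likely at high delay) so the example runs end to end and shows, qualitatively, the behavior reported on the following slides; the variable names are illustrative.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(3)
n = 20000
delay = 0.02 + lfilter([1.0], [1.0, -0.9], rng.exponential(0.002, n))  # d_j (s), AR(1)
lost = rng.random(n) < np.clip(5.0 * (delay - 0.03), 0.0, 1.0)         # x_j = 1 if pkt j lost
delay[lost] = np.nan                           # no delay sample for lost packets

def autocorr(x, lag):
    """Sample autocorrelation at the given lag, ignoring missing (NaN) values."""
    a, b = x[:-lag], x[lag:]
    ok = ~np.isnan(a) & ~np.isnan(b)
    return np.corrcoef(a[ok], b[ok])[0, 1]

for lag in (1, 5, 10, 50):                     # 1 lag = 1 probe interval
    print(f"lag {lag:3d}: delay autocorr {autocorr(delay, lag):+.2f}, "
          f"loss autocorr {autocorr(lost.astype(float), lag):+.2f}")

# conditional average delay given loss: mean of d_{j+k} over packets j that were lost
for k in (1, 5, 20):
    cond = np.nanmean(delay[k:][lost[:-k]])
    print(f"mean delay {k} pkts after a loss: {cond * 1e3:.1f} ms "
          f"(overall {np.nanmean(delay) * 1e3:.1f} ms)")
```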
19
Delay Autocorrelation
Note: typically the autocorrelation dies down quickly.
20
Conditional Delay Given Loss
Interesting behavior! Loss appears to be a predictor of near-term, higher-than-average delays.
21
Loss Autocorrelation
- generally: the loss correlation timescale is < 500 ms
- modeling: the lengths of consecutive losses and of consecutive successful receptions can be modeled accurately by a 2- or 3-state Markov process
22
How many states are needed in an analytic model?
- for an n-state Markov model, determine the transition probabilities from the observed data (a 2-state sketch follows below)
- needed: rigorous hypothesis testing of the agreement between the model and the observed distributions
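For the 2-state case (state 0 = packet received, state 1 = packet lost), the transition-probability estimates are simply the transition counts divided by the number of visits to the originating state. The sketch below fits such a model to a simulated loss sequence; the "true" probabilities and trace length are made up, and on real data the x array would come from the measured probe traces.

```python
import numpy as np

rng = np.random.default_rng(4)
p01, p10 = 0.02, 0.30            # true Pr[ok -> lost], Pr[lost -> ok] used for simulation
x = np.empty(50000, dtype=int)   # simulated loss indicator sequence
x[0] = 0
for j in range(1, x.size):
    prob_lost = p01 if x[j - 1] == 0 else 1.0 - p10
    x[j] = rng.random() < prob_lost

# count transitions and normalize each row to get the estimated transition matrix
counts = np.zeros((2, 2))
for a, b in zip(x[:-1], x[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)

print("estimated Pr[ok -> lost]  =", round(P[0, 1], 4))
print("estimated Pr[lost -> ok]  =", round(P[1, 0], 4))
print("implied mean loss burst length =", round(1.0 / P[1, 0], 2))
```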
23
"Confidence" in loss probability estimates
Suppose we send 10 packets and see 3 lost:
- view loss as a random process
- is the loss rate "really" 30%?
- the true loss rate could be 20% or 50%!
- if we sampled more, we'd have more "confidence" in the estimate
Goal: an interval estimator for the loss rate
- e.g., 95% confident that the true loss rate is in the range [p1, p2]
- use: adaptive applications (e.g., using RTCP)
24
Example: Bernoulli loss process
- each pkt is lost independently with probability p
- n = 10 packets sent, k = 3 lost, MLE = k/n = 0.3
- 95% confidence interval around the MLE: find [p1, p2] such that
  Pr{ number of losses in [k, n] | p = p1 } = Pr{ number of losses in [0, k] | p = p2 } = 0.025
- for this data: p1 ≈ 0.07, p2 ≈ 0.65
[figure: binomial distributions of the number of losses (0 to 10) under p1 and under p2]
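An interval defined this way is the classic Clopper-Pearson ("exact") binomial confidence interval, which has a standard closed form in terms of Beta-distribution quantiles. A short sketch, reproducing the slide's numbers for 3 losses in 10 packets:

```python
from scipy.stats import beta

def loss_confidence_interval(k, n, conf=0.95):
    """Clopper-Pearson interval for a loss rate, given k losses in n packets."""
    a = (1.0 - conf) / 2.0
    p1 = beta.ppf(a, k, n - k + 1) if k > 0 else 0.0        # lower endpoint
    p2 = beta.ppf(1.0 - a, k + 1, n - k) if k < n else 1.0  # upper endpoint
    return p1, p2

p1, p2 = loss_confidence_interval(3, 10)
print(f"MLE = 0.30, 95% interval = [{p1:.2f}, {p2:.2f}]")   # about [0.07, 0.65]
```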
25
Loss probability estimation: intervals
[figure: width of the 95% confidence interval, relative to the MLE, versus the number of packets sent n]
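The trend behind that plot can be reproduced with the same Clopper-Pearson form, assuming, purely for illustration, that the observed loss rate stays at 30% as more packets are sent:

```python
from scipy.stats import beta

for n in (10, 50, 100, 500, 1000):
    k = int(0.3 * n)                               # observed losses at a fixed 30% rate
    p1 = beta.ppf(0.025, k, n - k + 1)             # lower endpoint of the 95% interval
    p2 = beta.ppf(0.975, k + 1, n - k)             # upper endpoint
    print(f"n={n:5d}: interval [{p1:.3f}, {p2:.3f}], "
          f"width relative to MLE = {(p2 - p1) / (k / n):.2f}")
```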
26
What's ahead?
- a need for statistically rigorous, empirically verified, end-user-oriented performance measurement tools and techniques
  – research is just beginning
- middleware: network-to-user performance feedback?
  – when, and in what form?
- informed use of performance measurements in:
  – adaptive applications
  – active services
27
For More Information …
This talk: ftp://gaia.cs.umass.edu/pub/kurose/intel98.ps
Group publications: http://gaia.cs.umass.edu/papers
WWW sites:
- Cooperative Association for Internet Data Analysis: www.caida.org
- National Laboratory for Applied Network Research: www.nlanr.net