Internet Performance Measurements and Measurement Techniques
Jim Kurose
Department of Computer Science
University of Massachusetts/Amherst

Overview
– Introduction: why and what to measure
– Measuring per-hop performance
  – tricks, successes, “failures”
– End-to-end measurements
  – correlation in end-end loss, delay
  – “confidence” in measurements
– What lies ahead?

What “performance” to measure?
– packet delay
– packet loss
– link or path capacity/availability
Where? Over what time scale?
– sub-second, minute, hours?
– end-to-end or per-hop?

Why measure?
End-end measurements:
– benchmarking, monitoring (e.g., Imeter)
– fault identification (e.g., routing instabilities)
– understanding end-end performance
  – misordering, loss (e.g., TCP studies by Paxson)
  – correlation time scale of end-end loss, delay
  – use in adaptive applications
Per-hop measurements:
– network operations (proprietary?)
– understanding where in the end-end path performance impairments occur
– use in reliable multicast protocols, active services, network modeling

Measuring per-hop performance
Question: what is the loss, delay, capacity at a given hop?
Question: what per-hop delays does a packet see?
Complication: routers do not report performance stats to end users
– need to infer performance statistics
– cleverly use the little “machinery” that we have
– develop an inferencing methodology

Clever use of existing protocols
traceroute, pathchar:
– use ICMP packets and the time-to-live (TTL) field
– each router decrements TTL on forwarding
– TTL = 0 results in an ICMP error msg back to the sender
Used to discover all routers on the path to a destination.
[figure: probes sent with ttl=1, ttl=2, ttl=3, ...; the router x where TTL expires returns the ICMP error]
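A minimal sketch of this TTL trick in Python using the scapy packet library (not the original tools; the destination hostname, probe port, and timeout are illustrative):

from scapy.all import ICMP, IP, UDP, sr1

def trace(dst, max_hops=30):
    # Send UDP probes with increasing TTL to an unlikely port, as classic
    # traceroute does; each expiring probe elicits an ICMP time-exceeded
    # message (type 11) from the router at that hop.
    for ttl in range(1, max_hops + 1):
        reply = sr1(IP(dst=dst, ttl=ttl) / UDP(dport=33434),
                    timeout=2, verbose=0)
        if reply is None:
            print(ttl, "*")  # no reply before the timeout
        elif reply.haslayer(ICMP) and reply[ICMP].type == 11:
            print(ttl, reply.src)  # router at this hop
        else:
            print(ttl, reply.src, "(destination reached)")
            break

trace("example.net")  # hypothetical destination; requires root privileges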

Clever use of existing protocols (cont.)
The ICMP/TTL-field trick also gives link bandwidth:
– find the min roundtrip delay to hop x-1 (use many probe pkts)
– find the min roundtrip delay to hop x
– the difference gives the propagation delay plus the transmission delay
– vary the pkt size to get the link bandwidth
Measurements include the variable queueing delay and loss of the whole path to x:
– isolating hop x behavior is difficult
[figure: timing diagram for a d-bit data packet and an r-bit ICMP reply: min RTT difference = d/bw + 2*prop + r/bw]
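A sketch of the bandwidth calculation under the slide's assumptions: minimum-filtered RTTs grow linearly in probe size, so fitting a line to the per-size min-RTT differences between consecutive hops yields the link bandwidth from the slope. All measurement values below are made up for illustration:

import numpy as np

def link_bandwidth(min_rtt_prev, min_rtt_curr):
    # min_rtt_*: dict mapping probe size (bytes) -> min observed RTT (seconds),
    # for the paths ending at hop x-1 and at hop x respectively.
    sizes = sorted(min_rtt_curr)
    extra = [min_rtt_curr[s] - min_rtt_prev[s] for s in sizes]
    # Min-filtered extra delay is linear in packet size:
    # slope = 1/bandwidth (sec/byte), intercept = fixed (propagation) delay.
    slope, intercept = np.polyfit(sizes, extra, 1)
    return 8.0 / slope, intercept  # bits/sec, seconds

# Made-up measurements for illustration:
prev = {64: 0.0102, 512: 0.0105, 1024: 0.0110}
curr = {64: 0.0112, 512: 0.0139, 1024: 0.0172}
bw, fixed = link_bandwidth(prev, curr)
print(f"estimated link bandwidth: {bw / 1e6:.1f} Mb/s")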

Can we measure per-hop delays?
Motivation: a typical modeling paper says “We model the network as a single link ….”
Is this a valid assumption?
– does a packet generally experience “most” of its delay at one link?

Measuring per-hop delays
– send unicast probes along the path
– use “IP options” on the probes to gather timestamps
  – a packet passing through a specified router is timestamped
– problem: only 4 timestamps fit in each packet
– solution: send multiple probes at one time, each requesting timestamps from a different subset of routers
[figure: probe 1 collects ts(x) at router x, probe 2 collects ts(y) at router y]

Measuring per-hop delays (cont.)
– problem: IP options packets are treated differently
  – data packets are forwarded on the fast path
  – IP options packets are detoured (hopefully briefly)
– solution: send a non-option packet with the probes
  – only analyze probes when the non-option packet's delay is close to the probe delay (hope: negligible options-processing delays)
[figure: probes take the options-forwarding path; the non-option pkt takes the fast path]

Analyzing the per-hop data
– consider only probes with e-e queueing delays > 100 ms
– keep only cases where the probe and non-option pkt delays are “close” (within 20 ms)
– Hypothesis: the e-e delays of the filtered probes come from the same distribution as all probes
  – hypothesis rejected with negligible probability of being wrong :-(
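The slide does not name the statistical test used; as one plausible choice, a two-sample Kolmogorov-Smirnov test can compare the filtered probes' delays against the delays of all probes:

from scipy import stats

def consistent_with_all_probes(filtered_delays, all_delays, alpha=0.05):
    # H0: the filtered probes' e-e delays are drawn from the same
    # distribution as the e-e delays of all probes.
    stat, p_value = stats.ks_2samp(filtered_delays, all_delays)
    return p_value >= alpha  # False means H0 is rejected at level alpha

On the talk's data this hypothesis was rejected, i.e., the filtering biases the delay sample.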

Can we measure per-hop packet delay/loss?
– the timestamping approach is not statistically valid
– inspiration (!) from another ongoing effort: multicast loss
Question: where in a multicast tree does loss occur?
– backbone? edges?
– implications for the design of reliable multicast protocols

Using multicast to infer per-hop performance
Correlation of received mcast pkts provides a glimpse inside.
Simple loss model:
– independent loss probability α_k on link k
Method:
– multicast n packets from the source
– data: the list of packets received at each receiver
– check consistency of the data with the independent loss model
– analysis: Maximum Likelihood Estimator: find the α which maximizes Prob[data | α]
[figure: two-receiver tree: shared link 1 (α_1), link 2 to R1 (α_2), link 3 to R2 (α_3)]
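For the two-receiver tree shown, the MLE works out in closed form. A sketch, assuming each probe's fate is observed at both receivers and the joint reception fraction is nonzero:

def infer_two_receiver(recv1, recv2):
    # recv1[i], recv2[i] are 1 if probe i was received at R1 / R2, else 0.
    # Tree: shared link 1 (success prob a1), link 2 to R1 (a2), link 3 to R2 (a3).
    # Under independent loss: P(R1) = a1*a2, P(R2) = a1*a3, P(both) = a1*a2*a3.
    n = len(recv1)
    p1 = sum(recv1) / n                                  # fraction received at R1
    p2 = sum(recv2) / n                                  # fraction received at R2
    p12 = sum(a & b for a, b in zip(recv1, recv2)) / n   # received at both (assumed > 0)
    a1 = p1 * p2 / p12   # shared link success probability
    a2 = p12 / p2        # link to R1
    a3 = p12 / p1        # link to R2
    return 1 - a1, 1 - a2, 1 - a3  # per-link loss probability estimates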

Multicast inference: evaluation
Through ns simulations:
– 2-8 receivers
– different topologies
– TCP, on/off background sources
Results: the approach tracks probe loss well and gives a good estimate of background-traffic loss.

Multicast inference: to-do list
Observations:
– multicast-based inference is promising for loss
– applicable to delays
Research questions:
– what if the topology is partially unknown?
– can we identify bottleneck links?
Potential applications:
– Internet weather map
– use in adaptive applications
UMass collaboration with AT&T, LBNL.

End-End Loss, Delay Characteristics
Question: what is the time correlation of e-e loss and delay?
Applications:
– adjustment of FEC for audio, video, data
– playout delay adjustment for audio
– analytic models: how many “states” are needed in Markovian models?
Approach: collect/analyze point-to-point and multicast traces of periodically generated UDP probes.

Analysis Issues
– stationarity of traces:
  – look for increasing trends in the avg and variance over a trace
  – non-stationary traces are not considered
– removal of clock skew:
  – algorithm for removing constant clock drift (see the sketch below)
– how “confident” are we in the measured value?
– 150 hours of measurement data: there's an exception to every “typical” result
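A simplified sketch of constant-skew removal (the talk does not spell out its algorithm; this linear-programming formulation, which fits a line lying below all one-way-delay samples and subtracts its slope, is similar in spirit to published skew-removal methods):

import numpy as np
from scipy.optimize import linprog

def remove_constant_skew(send_times, delays):
    # Fit the line a*t + b that stays below every measured one-way delay
    # while staying as close to the samples as possible; the slope a is the
    # constant clock drift, which we subtract. (The offset b cannot be
    # resolved without synchronized clocks, so it is left in place.)
    t = np.asarray(send_times, float)
    d = np.asarray(delays, float)
    n = len(t)
    # Minimize sum(d_i - a*t_i - b)  <=>  minimize -a*sum(t) - b*n,
    # subject to a*t_i + b <= d_i for all i.
    c = [-t.sum(), -float(n)]
    A_ub = np.column_stack([t, np.ones(n)])
    res = linprog(c, A_ub=A_ub, b_ub=d, bounds=[(None, None), (None, None)])
    a, b = res.x
    return d - a * t  # drift removed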

Analysis Metrics
– delay autocorrelation: d_j is the measured delay of pkt j
– loss autocorrelation: x_j = 1 if pkt j is received, x_j = 0 if pkt j is lost
– conditional average delay given loss: the average delay of pkt j+k, given that pkt j was lost
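A sketch of these metrics using the standard sample-autocorrelation definition (the slide's exact formulas were figures and are reconstructed here; the handling of missing delay samples for lost packets is simplified):

import numpy as np

def autocorrelation(series, max_lag):
    # Sample autocorrelation r(k) = E[(y_j - m)(y_{j+k} - m)] / var(y),
    # applied to the delay series d_j or to the loss indicators x_j.
    y = np.asarray(series, float)
    y = y - y.mean()
    var = (y * y).mean()
    return [float((y[:len(y) - k] * y[k:]).mean() / var)
            for k in range(max_lag + 1)]

def cond_avg_delay_given_loss(delays, x, lag):
    # Average delay of pkt j+lag, given pkt j was lost (x_j = 0) and
    # pkt j+lag was received. delays[j] is only meaningful when x[j] == 1.
    d = np.asarray(delays, float)
    idx = [j + lag for j in range(len(x) - lag)
           if x[j] == 0 and x[j + lag] == 1]
    return float(d[idx].mean()) if idx else float("nan")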

Delay Autocorrelation
Note: typically the autocorrelation dies down quickly.

Conditional Delay Given Loss
Interesting behavior! Loss appears to be a predictor of near-term higher-than-average delays.

Loss Autocorrelation
– generally: the loss correlation timescale is < 500 ms
– modeling: the lengths of consecutive losses and of successful receptions can be modeled accurately by a 2- or 3-state Markov process

How many states are needed in an analytic model?
– for an n-state Markov model, determine the transition probabilities from the observed data
– needed: rigorous hypothesis testing of the agreement between the model and the observed distributions
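A sketch of the estimation step for the simplest, 2-state (received/lost) chain: transition probabilities are just normalized counts of observed state pairs, and an n-state version (e.g., states encoding the current loss-run length) follows the same counting idea:

from collections import Counter

def transition_probs(x):
    # x: loss indicator sequence, x_j = 1 if pkt j received, 0 if lost.
    pairs = Counter(zip(x, x[1:]))  # counts of (state_j, state_{j+1}) pairs
    p = {}
    for s in (0, 1):
        total = pairs[(s, 0)] + pairs[(s, 1)]
        if total:
            p[s] = {t: pairs[(s, t)] / total for t in (0, 1)}
    return p  # p[s][t] estimates Pr{state_{j+1} = t | state_j = s}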

“Confidence” in loss probability estimates
Suppose we send 10 packets and see 3 lost:
– view loss as a random process
– is the loss rate “really” 30%?
– the true loss rate could be 20% or 50%!
– if we sample more, we'd have more “confidence” in the estimate
Goal: an interval estimator for the loss rate
– e.g.: 95% confident that the true loss rate is in the range [p1, p2]
Use: adaptive applications (e.g., using RTCP)

Example
Bernoulli loss process: each pkt is lost independently with probability p.
95% confidence interval around the MLE: find [p1, p2] such that
  Pr{losses in [k, n] | p = p1} = Pr{losses in [0, k] | p = p2} = 0.025
For n = 10, k = 3: MLE = k/n = 0.3, p1 ≈ 0.07, p2 ≈ 0.65.
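This construction is the exact (Clopper-Pearson) binomial interval, whose endpoints have closed forms as beta-distribution quantiles. A sketch reproducing the slide's example:

from scipy.stats import beta

def loss_rate_ci(k, n, conf=0.95):
    # p1 solves Pr{losses >= k | p1} = (1-conf)/2,
    # p2 solves Pr{losses <= k | p2} = (1-conf)/2.
    a = (1 - conf) / 2
    p1 = beta.ppf(a, k, n - k + 1) if k > 0 else 0.0
    p2 = beta.ppf(1 - a, k + 1, n - k) if k < n else 1.0
    return p1, p2

print(loss_rate_ci(3, 10))  # ~(0.067, 0.652): a wide interval for so few samples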

Loss probability estimation: intervals
[figure: width of the 95% confidence interval relative to the MLE vs. the number of packets sent, n]

What’s ahead?
– need for statistically rigorous, empirically verified, end-user-oriented performance measurement tools and techniques
  – research is just beginning
– middleware: network-to-user performance feedback?
  – when, and in what form?
– informed use of performance measurements in:
  – adaptive applications
  – active services

For More Information …
This talk: ftp://gaia.cs.umass.edu/pub/kurose/intel98.ps
Group publications:
WWW sites:
– Cooperative Association for Internet Data Analysis
– National Laboratory for Applied Network Research