Inferring TCP Connection Characteristics Through Passive Measurements Sharad Jaiswal, Gianluca Iannaccone, Christophe Diot, Jim Kurose, Don Towsley Proceedings of Infocom 2004

Outline 1. Introduction 2. Related Work 3. Tracking The Congestion Window 4. Round-Trip Time Estimation 5. Sources of Estimation Uncertainty 6. Evaluation 7. Backbone Measurements 8. Conclusions

Introduction Motivation: To infer the congestion window (cwnd) and round-trip time (RTT) using a passive measurement methodology that only observes the sender-to-receiver and receiver-to-sender segments in a TCP connection.

Introduction Contributions: The authors develop a passive methodology to infer a sender’s congestion window by observing TCP segments passing through a measurement point. Their methodology can be applied to examine a remarkably large and diverse set of TCP connections (10 million connections from a tier-1 network). TCP congestion control flavors (Tahoe, Reno and NewReno) generally have a minimal impact on the sender’s throughput.

Related Work 1. J. Padhye and S. Floyd developed a tool that actively sends requests to web servers and drops strategically chosen response packets to observe each server’s response to loss. 2. V. Paxson described a tool, tcpanaly, to analyze traces captured by tcpdump and reported on the behavioral differences of 8 different TCP implementations. 3. Y. Zhang et al. passively monitored TCP connections to study the rate-limiting factors of TCP.

Tracking The Congestion Window A “replica” of the TCP sender’s state is constructed for each TCP connection observed at the measurement point. The replica takes the form of a finite state machine (FSM) that updates its current estimate of the sender’s cwnd based on observed receiver-to-sender ACKs. Connection states: DEFAULT, FAST_RECOVERY (for Reno and NewReno). Connection variables: cwnd, ssthresh, awnd.
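The replica logic can be pictured as a small state machine driven by the ACK stream. Below is a minimal sketch in Python, assuming simplified Reno-style update rules and segment-based accounting; the class name and interfaces are our own illustration, not the authors’ exact FSM.

```python
# Minimal sketch of a cwnd "replica", assuming simplified Reno-style rules
# (segment-based accounting); an illustration, not the authors' exact FSM.
DEFAULT, FAST_RECOVERY = "DEFAULT", "FAST_RECOVERY"

class CwndReplica:
    """Tracks an estimate of a distant sender's cwnd from observed ACKs."""

    def __init__(self, init_cwnd=1, init_ssthresh=64):
        self.state = DEFAULT
        self.cwnd = init_cwnd          # congestion window, in segments
        self.ssthresh = init_ssthresh  # slow-start threshold, in segments
        self.awnd = None               # receiver's advertised window, in segments
        self.last_ack = None
        self.dup_acks = 0

    def on_ack(self, ack_no, awnd):
        """Update the cwnd estimate for one observed receiver-to-sender ACK."""
        self.awnd = awnd
        if ack_no == self.last_ack:
            self.dup_acks += 1
            if self.state == DEFAULT and self.dup_acks == 3:
                # Fast retransmit: halve the window and enter fast recovery.
                self.ssthresh = max(int(self.cwnd) // 2, 2)
                self.cwnd = self.ssthresh + 3
                self.state = FAST_RECOVERY
            elif self.state == FAST_RECOVERY:
                self.cwnd += 1         # window inflation for each further dup ACK
            return
        # ACK for new data.
        self.last_ack, self.dup_acks = ack_no, 0
        if self.state == FAST_RECOVERY:
            self.cwnd = self.ssthresh  # Reno: deflate and leave fast recovery
            self.state = DEFAULT
        elif self.cwnd < self.ssthresh:
            self.cwnd += 1             # slow start: one segment per new ACK
        else:
            self.cwnd += 1 / self.cwnd # congestion avoidance: ~one segment per RTT

    def usable_window(self):
        """Sender may have at most min(cwnd, awnd) unacknowledged segments."""
        return min(self.cwnd, self.awnd) if self.awnd is not None else self.cwnd
```

A data packet observed at the measurement point would then be checked against usable_window() to decide whether it is consistent with the current estimate.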

Tracking The Congestion Window Challenges of estimating the state of a distant sender: The replica can only perform limited processing and maintain minimal state because of the large amount of data; state transitions cannot be backtracked or reversed. The replica may not observe the same sequence of packets as the sender. The modification of cwnd after packet loss is dictated by the flavor of the sender’s congestion control algorithm; the authors consider only three congestion control algorithms – Tahoe, Reno and NewReno. Implementation details of the TCP sender are invisible to the replica.

Tracking The Congestion Window A. TCP flavor identification The usable window size of the sender = min(cwnd, awnd). 1. For every data packet sent by the sender, they check whether the packet is allowed by the current FSM estimate for each candidate flavor. 2. Given a flavor, if the packet is not allowed, the observed data packet represents a “violation”. 3. A counter tracks the number of such violations incurred by each candidate flavor. 4. The sender’s flavor is inferred to be the flavor with the minimum number of violations.
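The violation-counting step can be sketched by running one such replica per candidate flavor over the same packet stream; the allows() check and the event format below are hypothetical placeholders for details the slides do not spell out.

```python
# Hypothetical sketch of flavor identification by violation counting.
# Assumes each candidate flavor provides a replica with on_ack() and allows();
# the event format ('data', seq) / ('ack', ack_no, awnd) is ours, not the tool's.
from collections import defaultdict

def identify_flavor(events, replicas):
    """events: time-ordered packets seen at the measurement point.
    replicas: dict mapping flavor name ('Tahoe', 'Reno', 'NewReno') to its FSM."""
    violations = defaultdict(int)
    for event in events:
        if event[0] == 'ack':
            _, ack_no, awnd = event
            for replica in replicas.values():
                replica.on_ack(ack_no, awnd)
        else:  # data packet from the sender
            _, seq_no = event
            for flavor, replica in replicas.items():
                if not replica.allows(seq_no):
                    # Packet exceeds this flavor's current usable-window estimate.
                    violations[flavor] += 1
    # Infer the flavor that incurred the fewest violations.
    return min(replicas, key=lambda flavor: violations[flavor])
```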

Tracking The Congestion Window B. Use of SACK and ECN The measurement point does not have access to SACK (Selective Acknowledgement) blocks and cannot infer how SACK information is used during fast recovery. The measurement point could estimate the congestion window of the sender just by looking at the ECN bits in the TCP header; however, only 0.14% of the connections were ECN-aware.

Round-Trip Time Estimation Fig. 1. TCP running-sample-based RTT estimation

Sources of Estimation Uncertainty A. Under-estimation of cwnd (Sequence diagram: sender, measurement point and receiver; the sender transmits segments x-1 through x+3, receives ACK x-1, and then retransmits segment x.)

Sources of Estimation Uncertainty B. Over-estimation of cwnd Acknowledgements lost after the measurement point. (Sequence diagram: segment x is sent and ACK x passes the measurement point but is lost before reaching the sender.)

Sources of Estimation Uncertainty An entire window of data packets lost before the measurement point. (Sequence diagram: segments x+1 through x+3 are sent but lost before reaching the measurement point; segment x+1 is later retransmitted.)

Sources of Estimation Uncertainty C. Window Scaling They only collect the first 44 bytes of each packet and thus cannot track the advertised window if the window scaling option is enabled for the connection. New window size = window size in the header × 2^(window scale factor). (Figure: TCP header.)
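For reference, the window scale option (RFC 1323) shifts the 16-bit window field left by the negotiated scale factor; a minimal illustration:

```python
def effective_window(header_window, scale_factor):
    """Advertised window with RFC 1323 window scaling applied:
    the 16-bit header field shifted left by the negotiated scale factor."""
    return header_window << scale_factor  # header_window * 2**scale_factor

# Example: a header value of 5840 bytes with scale factor 2 yields 23360 bytes.
assert effective_window(5840, 2) == 23360
```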

Sources of Estimation Uncertainty Identifying connections likely to have the window scaling option enabled: 1. They infer which connections have window scaling enabled from the size of the SYN and SYN+ACK packets, which must accommodate the 3 bytes of the window scale option in the TCP header. 2. Among those connections, they count the ones for which cwnd could exceed awnd.

Sources of Estimation Uncertainty D. Issues with TCP implementation 1. Several previous works ([15] On Inferring TCP Behavior; [16] Automated Packet Trace Analysis of TCP Implementations, 1997) have uncovered bugs in the TCP implementations of various OS stacks, such as not cutting the window down after a loss. 2. The initial ssthresh value may differ. Some TCP implementations cache the value of the sender’s cwnd just before a connection to a particular destination IP address terminates, and reuse this value to initialize subsequent connections to that destination.

Sources of Estimation Uncertainty E. Impact on RTT estimation RTT estimation is directly affected by estimation inaccuracies in cwnd. The methodology needs to know cwnd in order to identify the data packet whose transmission by the sender is triggered by the receipt of the ACK used in the estimation. Therefore, over- (under-) estimation of cwnd will result in an over- (under-) estimation of the RTT.
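As a rough sketch of how a running RTT sample is taken at the measurement point, the code below pairs each observed ACK with the next data segment seen afterwards and treats it as the segment that ACK triggered; the actual methodology uses the cwnd estimate to make this pairing, which the sketch glosses over.

```python
# Rough sketch of running RTT samples at the measurement point.
# Simplifying assumption: the first data segment observed after an ACK is the
# one that ACK triggered; the real method uses the cwnd estimate for this pairing.

def rtt_samples(events):
    """events: time-ordered (time, kind, number) tuples seen at the measurement
    point, where kind is 'data' (number = sequence) or 'ack' (number = ack)."""
    data_times = {}   # sequence number -> time the data segment was observed
    pending = None    # observation time of the data segment just acknowledged
    samples = []
    for t, kind, num in events:
        if kind == 'data':
            if pending is not None:
                # data -> ACK -> triggered data spans one full round trip.
                samples.append(t - pending)
                pending = None
            data_times[num] = t
        else:  # ACK acknowledging all segments with sequence number < num
            acked = max((s for s in data_times if s < num), default=None)
            if acked is not None:
                pending = data_times[acked]
    return samples
```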

Evaluation A. Simulations They generated long-lived flows for analysis and cross traffic consisting of 5,700 short-lived flows (40 packets each) with arrival times uniformly distributed over the length of the simulation. The bottleneck link is located either between the sender and the measurement point or after the measurement point. Different parameters are set for the bottleneck link, varying the bandwidth, buffer size and propagation delay across the simulations. The average loss rate in the various scenarios varied from 2% to 4%.

Evaluation A. Simulations Fig. 2. Mean relative error of cwnd and RTT estimates in the simulations

Evaluation A. Simulations 1. Out of the 280 senders, the TCP flavor of 271 senders was identified correctly. 2. Of the remaining senders, 4 either had zero violations for all flavors (i.e., they did not suffer a specific loss scenario that allows the flavors to be distinguished) or had an equal number of violations for more than one flavor (including the correct one). 3. Five connections were misclassified; this can happen if the FSM corresponding to the TCP sender’s flavor underestimates the sender’s congestion window.

Evaluation B. Experiments over the network Experiments are run between the Univ. of Massachusetts in Amherst, MA and Sprint ATL in Burlingame, CA, over an OC-3 link monitored by the IPMON system. The PCs run either FreeBSD 4.3 or 4.7 with a modified kernel to export the connection variables. 200 TCP connections (divided between Reno and NewReno flavors) are set up for the experiments.

Evaluation B. Experiments over the network Fig. 3. Mean relative error of cwnd and RTT estimates with losses induced by dummynet

Backbone Measurements Table I. Summary of the traces

Backbone Measurements A. Congestion window Fig. 4. Cumulative fraction of senders as a function of the maximum sender window

Backbone Measurements A. Congestion window Fig. 5. Cumulative fraction of packets as a function of the sender’s maximum window

Backbone Measurements B. TCP flavors Table II. TCP Flavors

Backbone Measurements B. TCP flavors Fig. 6. Percentage of Reno/NewReno senders (above) and packets (below) as a function of the threshold on the number of data packets to transmit

Backbone Measurements C. Greedy senders A sender is defined as “greedy” if at all times the number of unacknowledged packets in the network equals the available window size. (Diagram: sender, receiver and measurement points mp1, mp2, mp3, showing the inferred RTT and the ACK-time.) Proximity indication = ACK-time / RTT
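Both quantities on this slide reduce to simple checks once per-connection state is available; the helper functions below are hypothetical and merely restate the definitions.

```python
def is_greedy(samples):
    """samples: (unacknowledged_packets, usable_window) pairs over the
    connection's lifetime. Greedy = the sender always fills its usable window."""
    return all(outstanding == window for outstanding, window in samples)

def proximity_indication(ack_time, rtt):
    """Proximity indication = ACK-time / RTT, used in the slides to gauge the
    distance between the measurement point and the receiver."""
    return ack_time / rtt
```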

Backbone Measurements C. Greedy senders Fig. 7. Fraction of greedy senders as a function of the distance between measurement point and receiver (ACK-time / RTT)

Backbone Measurements Fig. 8. QQ-plot of flow size, log10(size in packets), comparing senders with ACK-time/RTT > 0.75 against all senders

Backbone Measurements D. Round trip times Fig. 9. Top: CDF of the minimum RTT (in msec); Bottom: CDF of the median RTT (in msec)

Backbone Measurements D. Round trip times Fig. 10. Variability of RTT. Top: ratio of the 95th to the 5th percentile RTT; Bottom: difference between the 95th and 5th percentile RTT

Backbone Measurements E. Efficiency of slow-start Fig. 11. Cumulative fraction of senders as a function of the ratio of the maximum sender window to the window size before exiting slow-start

Conclusions 1. A passive measurement methodology is presented that observes the segments in a TCP connection and infers/tracks the time evolution of two critical sender variables: the sender’s congestion window (cwnd) and the connection round-trip time (RTT). 2. The authors also identify the difficulties involved in tracking the state of a distant sender and describe the network events that may introduce uncertainty into their estimation, given the location of the measurement point. 3. Observations: The sender throughput is often limited by lack of data to send rather than by network congestion. In the few cases where the TCP flavor is distinguishable, NewReno appears to be the dominant congestion control algorithm. Connections do not generally experience large RTT variations over their lifetime.