The Gaussian Nature of TCP Mark Shifrin Supervisor: Dr. Isaac Keslassy M.Sc Seminar, Faculty of Electrical Engineering

2 Large Network Characterizing traffic is essential to understanding the behavior of large Internet networks. The network is large and complex, containing millions of flows, and their interactions and mutual impact are difficult to understand.

3 Large Network – Related Challenges What do we pursue? Ability to analyze problems:  Congestion  High packet losses Ability to make planning decisions:  Bottleneck capacity  Buffer size  RTT (Round Trip Time) distribution

4 Obstacles and Difficulties Today there is no efficient large-network model, because of numerous problems:  Many TCP feedback behaviors (congestion avoidance, slow start, …)  Complex interactions among flows  Complex queueing analysis  Many topologies  Many protocols (TCP, UDP, …)  Complex interactions between flows and ACKs

5 Reducing the Network into Basic Components Bottlenecks dictate the behavior of the flows. Assumption: the network can be subdivided into basic dumbbell topologies, each with a single forward bottleneck.

6 Previous Work – Rule-of-Thumb Universally applied rule-of-thumb for TCP flows: a router needs a buffer of size B = RTT × C, where RTT is the two-way propagation delay and C is the capacity of the bottleneck link. Context:  Mandated in backbone and edge routers.  Appears in RFPs and IETF architectural guidelines.  Usually referenced as [Villamizar and Song, ’94]  Already known to the inventors of TCP [Van Jacobson, ‘88]  Has major consequences for router design (figure: source → router with capacity C → destination, round-trip time RTT)

7 Previous Work – Stanford Model [Appenzeller et al., ’04 | McKeown and Wischik, ’05 | McKeown et al., ‘06] Assumption 1: TCP flows are modeled as i.i.d.  the total window W has a Gaussian distribution Assumption 2: the queue is the only variable part  the queue is Gaussian too: Q = W − const  a smaller buffer than the rule of thumb suffices We also consider small buffers. But with small buffers:  Buffers lose the Gaussian property  Link variations cannot be neglected
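The two buffer-sizing rules on these slides can be compared numerically. A minimal sketch with made-up link parameters (250 ms RTT, 10 Gb/s, 10,000 flows); the RTT·C/√n formula is the small-buffer result of [Appenzeller et al., ’04]:

```python
# Hedged sketch: rule-of-thumb buffer vs. the Stanford small-buffer
# result. All numbers below are made-up examples, not from the talk.
rtt = 0.250           # two-way propagation delay, seconds (assumed)
capacity_bps = 10e9   # bottleneck capacity, bits/s (assumed)
n_flows = 10_000      # long-lived TCP flows sharing the link (assumed)

rule_of_thumb = rtt * capacity_bps            # B = RTT * C
stanford = rule_of_thumb / n_flows ** 0.5     # B = RTT * C / sqrt(n)

print(f"rule of thumb : {rule_of_thumb / 8e6:.1f} MB")
print(f"Stanford model: {stanford / 8e6:.2f} MB")
```

With these numbers the Stanford model shrinks the buffer by a factor of √n = 100.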

8 Previous Work – Total Arrival is Poisson Distributed Assumption: M/M/1/K model of the buffer with TCP [Avrachenkov et al., ‘01], using the model of [Padhye et al., ’98] for the M/M/1/K analysis. We show that the Poisson model is far from reality.

9 What is new in our analysis? Dividing the dumbbell topology into components (Lines).  By separately finding the pdf of the traffic on each component, we can describe the statistics of the entire network.  We show the impact of one component on another. We find a new model of the TCP congestion window (cwnd) distribution that accounts for correlated losses.  We use it to find the pdf of the traffic on the Lines. We prove that the distributions of the packets on the Lines are Gaussian. We find the transmission rate, the queue (Q) distribution and the packet loss.

10 Contents of the Dumbbell Topology (figure: Lines L1–L6 and the bottleneck queue Q)

12 Outline Models of single objects  Finding the model of cwnd  Correlated losses  Packet loss event derivation  Model of l_i^1 Transmission rate and packet loss derivation  Arrival rate of a single flow  Total arrival rate  Q distribution  Packet loss Gaussian models of Lines  L1 model, Lindeberg Central Limit Theorem  Models of the other Lines and of W Conclusions

13 Model Development (diagram: flow cwnd → l_i^1 pdf → per-flow arrival rates → total arrival rate → Q pdf → packet loss)

14 TCP NewReno Congestion Control (Reminder) Slow Start: w → 2w Congestion Avoidance: w → w+1 Fast Retransmit and Fast Recovery: w → w/2  Treats multiple losses in the same window as one event Timeout (TO): w → 1
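The four update rules above can be sketched as a single function. This is a simplified illustration of the reminder slide, not the full NewReno state machine; the event names and the `ssthresh` handling are assumptions:

```python
def newreno_update(w, event, ssthresh):
    """One illustrative NewReno window update (simplified sketch;
    real TCP also tracks ssthresh changes, ACK clocking, etc.)."""
    if event == "ack":
        # Slow start doubles cwnd per RTT (w -> 2w); congestion
        # avoidance grows it by one segment per RTT (w -> w + 1).
        return w * 2 if w < ssthresh else w + 1
    if event == "triple_dupack":
        # Fast retransmit / fast recovery halves the window, treating
        # multiple losses in the same window as one event.
        return max(w // 2, 1)
    if event == "timeout":
        return 1  # a timeout (TO) resets the window to one segment
    raise ValueError(event)
```

For example, `newreno_update(8, "ack", 16)` doubles the window, while `newreno_update(20, "ack", 16)` only adds one segment.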

15 cwnd Simple Markov Chain Solution Simple Markov chain.  Assumption: the timeout (TO) probability is small compared with the halving probability. Model 1: independent losses [Altman et al., ’04]

16 Model 2: Packet Loss Event Derivation p: packet loss probability. The original transition probability was p·n; we cannot use p because the losses are correlated. Instead, the probability of a loss event for window size n is p_e(n)·n. Assumption: in a window of size n that suffers losses, each packet has equal probability to experience the packet loss event. Intuition: p_e is the effective packet loss probability, which is independent for every packet. Fast Retransmit and Fast Recovery recover in almost the same way no matter how many packets are lost in the window. (diagram: transition w=n → w=n/2 with probability p_e(n)·n)

17 Correlated Losses Packet losses are strongly correlated: given that one packet was lost in a TCP window, the probability that other packets in the same window are lost is higher. Assumption: for a given window size n, given at least one lost packet, the distribution of the number of lost packets is independent of the network conditions (no actual impact of p, the RTT distribution, or the buffer size). Denote this distribution Pr(l(n)=k), for k=1…n.

18 Packet Loss Burst Size Distribution

19 Packet Loss Burst Size Distribution

20 Example of the Q behaviour

21 Expected Packet Loss Burst Size Pr(l(n)=k) = Pr(k lost packets for w=n | loss event happened) E[l(n)] is the expected size of the bursty loss, given that at least one packet was lost, for window size n. At every loss event in a window of size n, E[l(n)] packets are lost on average. Every loss event is effectively treated as a single loss (approximation), with p_e the effective packet loss probability.

22 What is this p_e actually? Theorem: p_e(n)/p = 1/E[l(n)], i.e. independent of p for a given n. Proof outline:  For an arbitrary lost packet B, denote by A_B the event that some packets are lost in the window containing B.  p = P(B|A_B)·P(A_B) + P(B|A_B^c)·P(A_B^c), where P(B|A_B^c) = 0  P(B|A_B) = E[l(n)]/n  P(A_B) = p_e(n)·n Conclusion: p_e(n) = p/E[l(n)] Intuition: E[l(n)] losses are treated as a single loss!
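The proof outline condenses to one line of algebra:

```latex
\begin{align*}
p &= \Pr(B \mid A_B)\Pr(A_B) + \Pr(B \mid A_B^{c})\Pr(A_B^{c})
   \qquad \text{with } \Pr(B \mid A_B^{c}) = 0 \\
  &= \frac{E[l(n)]}{n}\cdot p_e(n)\,n \;=\; p_e(n)\,E[l(n)]
   \quad\Longrightarrow\quad p_e(n) = \frac{p}{E[l(n)]}.
\end{align*}
```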

23 p_e/p – simulation results 500 flows traced by NS2 tools, event by event.

24 MC Transitions (Approaches 1 & 2) Approach 1:  Pr(w(t+1)=w(t)/2 | w(t)=n) = n·p_e(n)  Pr(TO) = const (known)  Pr(w(t+1)=w(t)+1 | w(t)=n) = 1 − Pr(TO) − Pr(w(t+1)=w(t)/2 | w(t)=n) Approach 2:  Pr(w(t+1)=w(t)/2 | w(t)=n) = P_h1 + P_h2, where P_h1 is the probability of a loss in the first half of the window and P_h2 in the second half. Improvement: introduce Slow Start into the model
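Approach 1 can be sketched numerically: build the one-step transition and iterate it to the stationary cwnd distribution. The loss model `p_e(n)` and the timeout probability below are placeholder assumptions, not values from the talk:

```python
# Illustrative stationary-distribution computation for the cwnd Markov
# chain of Approach 1 (grow by 1, halve on loss event, reset on TO).
W_MAX = 64
P_TO = 0.001                     # assumed constant timeout probability

def p_e(n):                      # assumed effective per-packet loss
    return min(0.01, 1.0 / n) if n else 0.0

def step_distribution(pi):
    """Apply one step of the chain to a pmf over window sizes."""
    nxt = [0.0] * (W_MAX + 1)
    for n in range(1, W_MAX + 1):
        p_halve = min(n * p_e(n), 1.0 - P_TO)   # n * p_e(n)
        p_grow = 1.0 - P_TO - p_halve
        nxt[min(n + 1, W_MAX)] += pi[n] * p_grow
        nxt[max(n // 2, 1)] += pi[n] * p_halve
        nxt[1] += pi[n] * P_TO
    return nxt

pi = [0.0] * (W_MAX + 1)
pi[1] = 1.0
for _ in range(5000):            # power iteration to the fixed point
    pi = step_distribution(pi)
```

Probability mass is conserved at each step, so `pi` stays a valid pmf while converging to the stationary distribution.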

25 Simple Markov Chain (Approach 2 – details) First half: the loss happened in the first half of the window, so the window transmission is accomplished up to the half; the position of the loss within the first half does not matter. Second half: summation over all cases in which more than half of the packets were ACKed first and then the loss event occurred. The loss event ends the window in this case, so the remaining ACKs are treated in the next one, during Fast Recovery. The last packet is not counted because it refers to the Fast Retransmit in the next window.

26 Results (pdf, low p)

27 Results (cdf, low p)

28 Results (pdf, medium p)

29 Results (cdf, medium p)

30 Results (pdf, high p)

31 Results (cdf, high p)

32 Model of l_i^1 (diagram: cwnd → l_i^1 pdf → packet loss)

33 l_i^1 Distribution – Uniform vs. Bursty (figure: flow i's links with latencies tp_i^1…tp_i^6 from source to destination, rtt_i = tp_i^1 + tp_i^2 + tp_i^3 + tp_i^4 + tp_i^5 + tp_i^6) Approach 1: uniform packet distribution, l_i^1(t) = w_i(t)·tp_i^1/rtt_i. Approach 2: bursty packet distribution.

34 Bursty Transmission and the l_i^1 Model Assumption: all packets of a flow move in a single burst  The transmission and queueing times of the entire burst are negligible compared with the propagation latencies Conclusion 1: all packets belonging to an arbitrary flow i are almost always present on the same link (l_i^1,…,l_i^6). Conclusion 2: the probability that the burst of flow i is present on a certain link equals the ratio of that link's propagation latency to the total rtt_i.
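Conclusion 2 amounts to one division per link. A tiny sketch with hypothetical per-link latencies:

```python
# Conclusion 2 as a one-liner: the probability that flow i's burst sits
# on link k is that link's share of the round-trip time.
# The latencies below (in ms) are hypothetical.
tp = [10.0, 25.0, 5.0, 30.0, 20.0, 10.0]   # tp_i^1 .. tp_i^6
rtt = sum(tp)                              # rtt_i = 100 ms here
presence = [t / rtt for t in tp]           # Pr(burst of flow i on link k)
```

The presence probabilities sum to one, since the burst is assumed to always be on exactly one link.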

35 l_i^1 probability density function We compare the measured results vs. models 1 and 2. Reminder of model 1: l_i^1(t) = w_i(t)·tp_i^1/rtt_i.

36 Model results for l_i^1, logarithmic scale

37 Model results for l_i^1, linear scale

38 Outline Models of single objects  Finding the model of cwnd  Correlated losses  Packet loss event derivation  Model of l_i^1 Transmission rate and packet loss derivation  Arrival rate of a single flow  Total arrival rate  Q distribution  Packet loss Gaussian models of Lines  L1 model, Lindeberg Central Limit Theorem  Models of the other Lines and of W Conclusions

39 Model Development (diagram: flow cwnd → l_i^1 pdf → per-flow arrival rates → total arrival rate)

40 Transmission Rate Derivation The objective is to find the pdf of r_i, the number of packets sent on link i in a time unit δt. Assumptions:  The rates on the links in L1 are statistically independent  The transmissions are bursty  The rate is proportional to the distribution of l_i^1 and to the ratio δt/tp_i^1, where tp_i^1 is the latency of the corresponding link.

41 Arrival Rate of a Single Flow (figure: a burst of l_i^1 packets on a link with latency tp_i^1 contributes l_i^1·δt/tp_i^1 packets in an interval δt) δt is the same for all flows. We find the arrival distribution for every flow in δt msec.

42 Rate pdf on a single Line The distribution is the same for all flows, but with different parameters. To find the total rate, the Lindeberg Central Limit Theorem is needed.

43 Total Rate Theorem: the total arrival rate R = Σ_i r_i is asymptotically Gaussian. The proof uses the Lindeberg condition.  The idea: the Central Limit Theorem holds if the share of each flow in the sum becomes negligible as the number of flows grows.  Argument for the proof: cwnd is bounded by a maximum value, so l_i^1 and r_i are bounded too. An alternative, a Poisson arrival rate, gives too small a variance (the effect of the bursty transmission is not reflected).
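The bounded-support argument can be illustrated numerically: the sum of many independent, bounded, non-identically distributed per-flow rates has the mean and spread the CLT predicts. The uniform stand-in distributions below are assumptions, not the model's actual r_i:

```python
import random
import math
import statistics

random.seed(1)
# Per-flow rates r_i drawn from bounded but non-identical distributions
# (uniforms with flow-dependent ranges) -- a stand-in for the real r_i,
# whose support is bounded because cwnd is capped.
bounds = [(0.0, 5.0 + (i % 7)) for i in range(500)]

samples = [sum(random.uniform(a, b) for a, b in bounds)
           for _ in range(2000)]

mu = sum((a + b) / 2 for a, b in bounds)                    # E[sum]
sigma = math.sqrt(sum((b - a) ** 2 / 12 for a, b in bounds))  # std of sum
# Standardized error of the sample mean; should be O(1) if the CLT
# description of the sum is accurate.
z = (statistics.mean(samples) - mu) / (sigma / math.sqrt(len(samples)))
```

The empirical standard deviation of the samples also lands close to the theoretical `sigma`, which is the quantitative content of the Gaussian claim.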

44 Lindeberg Condition Denote η_i = E(r_i) and σ_i the standard deviation of r_i, with s_n^2 = Σ_{i=1}^n σ_i^2. The condition: for any ε > 0 we choose, (1/s_n^2) Σ_{i=1}^n E[(r_i − η_i)^2 · 1{|r_i − η_i| > ε·s_n}] → 0 as n → ∞.

45 Rate Model – Results (figure: pdf of the total arrival rate; x-axis: packets per δt, y-axis: probability)

46 Model Development (diagram: flow cwnd → l_i^1 pdf → per-flow arrival rates → total arrival rate → Q pdf → packet loss)

47 Q pdf Two possible ways to find the Q distribution:  Running an MC simulation using samples of R  G/D/1/K analysis using R for the arrivals [Tran-Gia and Ahmadi, ‘88] Both ways yield almost the same results. The packet loss p is derived from the Q distribution.
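The first of the two ways, a Monte Carlo simulation driven by samples of R, can be sketched as a simple queue recursion. The service rate and the Gaussian stand-in for the total-rate samples are assumptions; only the buffer size 585 echoes the Q-state range on the results slide:

```python
import random

random.seed(0)
K = 585                 # buffer size in packets (Q-state range, slide 48)
C = 50                  # packets served per time unit delta-t (assumed)

def simulate_queue(arrivals, K, C):
    """Drive Q(t+1) = min(K, max(0, Q(t) + A(t) - C)), tally the Q
    histogram, and count overflow drops to estimate the packet loss."""
    q, dropped, total = 0, 0, 0
    hist = [0] * (K + 1)
    for a in arrivals:
        total += a
        q = q + a - C
        if q > K:
            dropped += q - K   # packets that do not fit are lost
            q = K
        q = max(q, 0)
        hist[q] += 1
    p_loss = dropped / total if total else 0.0
    return hist, p_loss

# Arrival samples standing in for draws from the Gaussian total-rate pdf.
arrivals = [max(0, round(random.gauss(50, 15))) for _ in range(100_000)]
hist, p_loss = simulate_queue(arrivals, K, C)
```

Normalizing `hist` gives an empirical Q pmf, and `p_loss` is the loss estimate that feeds the fixed-point step described on slide 50.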

48 Q pdf – results (figure: x-axis: Q state, 0 to 585 packets; y-axis: probability)

49 Fixed Point Solution: p = f(p) (diagram: cwnd → l_i^1 pdf → per-flow arrival rates → total arrival rate → Q pdf → packet loss → back to cwnd)

50 Packet Loss Rate Solution by a gradient algorithm: 1. Start with an approximate p 2. Find cwnd and all l_i^1 3. Find the rates of all flows 4. Find the total rate and the Q pdf 5. Find the new p 6. Apply a correction to p according to the error weighted by the step size, and go to 2 Repeat until the error is small enough – the fixed point is found
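The six-step procedure is a damped fixed-point iteration. A sketch with a toy stand-in for f(p); the real f(p) would rerun the whole cwnd → rates → Q pdf → loss pipeline:

```python
def solve_fixed_point(f, p0=0.01, step=0.5, tol=1e-9, max_iter=10_000):
    """Damped iteration for p = f(p): move a step-size-weighted
    fraction of the error, as in the gradient procedure above."""
    p = p0
    for _ in range(max_iter):
        err = f(p) - p
        if abs(err) < tol:
            return p           # fixed point found
        p += step * err        # correction weighted by the step size
    raise RuntimeError("no convergence")

# Toy stand-in for f(p): loss decreases as the flows back off.
p_star = solve_fixed_point(lambda p: 0.02 / (1.0 + 10.0 * p))
```

The damping step keeps the iteration stable even when f reacts strongly to p, which is the usual failure mode of naive substitution p ← f(p).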

51 Packet loss – results The model gives about 10%–25% discrepancy. Case 1: measured p=2.7%, model p=3% Case 2: measured p=0.8%, model p=0.98% Case 3: measured p=1.4%, model p=1.82% Case 4: measured p=0.452%, model p=0.56%

52 Outline Models of single objects  Finding the model of cwnd  Correlated losses  Packet loss event derivation  Model of l_i^1 Transmission rate and packet loss derivation  Arrival rate of a single flow  Total arrival rate  Q distribution  Packet loss Gaussian models of Lines  L1 model, Lindeberg Central Limit Theorem  Models of the other Lines and of W Conclusions

53 L1 (reminder) (figure: forward-path Lines L1, L3, L5 and the queue Q)

54 Lindeberg Central Limit Theorem All l_i^1 are statistically independent. The distribution is the same for all flows but with different parameters (because of the different RTTs), so the simple Central Limit Theorem does not fit. Theorem: L1 = Σ_i l_i^1 is asymptotically Gaussian. Conclusion: using the model for cwnd and then for l_i^1, we are able to find the L1 distribution.

55 Lindeberg Condition Denote μ_i = E(l_i^1) and σ_i the standard deviation of l_i^1, with s_n^2 = Σ_{i=1}^n σ_i^2. The theorem for L1 holds if the Lindeberg condition holds: for any ε > 0 we choose, (1/s_n^2) Σ_{i=1}^n E[(l_i^1 − μ_i)^2 · 1{|l_i^1 − μ_i| > ε·s_n}] → 0 as n → ∞.

56 L1 Model Results

57 What else can we know from our model? (figure: dumbbell topology with Lines L1–L6 and queue Q) Stanford model: Gaussian queue, Gaussian W, constant traffic on the links. Our model: non-Gaussian queue, Gaussian W, Gaussian traffic on the links.

58 Packets on other Lines (figure: client–server path with per-link occupancies l_i^1…l_i^6, the bottleneck B, cwnd at the source and bcwnd past the bottleneck)

59 Model Development (diagram: flow cwnd → W; bcwnd → l_i^2, l_i^4, l_i^5, l_i^6 → L2, L4, L5, L6 → total)

60 cwnd on the Lines We know the pdf of cwnd – the window at the source. This window travels the Lines as a single burst. A burst that passes through Q loses some packets. How can we know the distribution of the window after it passes Q? Suggestion: use the l(n) distribution for a window of size n. Reminder: Pr(l(n)=k) = Pr(k lost packets for w=n | loss event happened)

61 Packet Loss Burst Size Distribution (Reminder)

62 Markov Chain of bcwnd (diagram: states w=1…n; from w=n, remain with probability 1−P_loss, and move to w=i with probability P_loss·P(l(n)=n−i) for i=1…n−1)

63 Back-cwnd Algorithm (bcwnd) The fall probability for a window of size n: P_loss = p_e(n)·n (TO is small and omitted). For each n from 1 to 64 update:  Pr(bcwnd=n) = P(cwnd=n) − P(cwnd=n)·P_loss  For each i, i=0 to n−1, update: P(bcwnd=i) = P(cwnd=i) + P(cwnd=n)·P_loss·P(l(n)=n−i) The update is done from the lowest to the highest state.
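The update loop can be written out directly. The cwnd pmf, `p_e(n)` and the burst-size distribution P(l(n)=k) below are uniform placeholders; the real ones come from the Markov chain model and the measured burst sizes:

```python
# A direct sketch of the back-cwnd update from this slide.
W_MAX = 64
P = 0.02                                  # assumed packet loss probability

def p_e(n):                               # assumed effective loss prob.
    return P / 2.0                        # e.g. E[l(n)] = 2

def p_burst(n, k):                        # P(l(n)=k): uniform placeholder
    return 1.0 / n

def back_cwnd(cwnd_pmf):
    """Shift probability mass from window n down to n-k for k lost
    packets, weighted by the loss-event probability p_e(n)*n."""
    b = list(cwnd_pmf)                    # start from P(cwnd=i)
    for n in range(1, W_MAX + 1):
        p_loss = min(p_e(n) * n, 1.0)     # TO is small and omitted
        moved = cwnd_pmf[n] * p_loss
        b[n] -= moved
        for k in range(1, n + 1):         # k = n - i packets lost
            b[n - k] += moved * p_burst(n, k)
    return b
```

Because the burst distribution sums to one for every n, the returned `b` is still a valid pmf.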

64 bcwnd and cwnd comparison

65 Models for L2, L4, L5, L6 Typical case: tp_i^1, tp_i^2, tp_i^4, tp_i^5, tp_i^6 all have different distributions. Approach 1: use the model of L1 with cwnd and scale by the ratio of the means of tp.  The tp ratio gives unreliable results for both the expected value and the variance. Approach 2: use bcwnd in the model for l_i^2, l_i^4, l_i^5, l_i^6.  The model is exactly the same as for l_i^1, but with bcwnd instead of cwnd.  The corresponding latencies of each Line are used.  Use the Lindeberg Central Limit Theorem, exactly as for L1!

66 L2 Model – result demonstration

67 Topology – Reminder of the Lines (figure: Lines L1–L6 and the queue Q)

68 The Gaussian distributions NS2 simulation of 500 flows, all with different propagation times (figure: distributions of L1, L2, L4, L5, L6 and W)

69 Are these lines really Gaussian?

70 W – Sum of All cwnd Two approaches to find W:  The cwnd model is known, and all TCP connections are i.i.d., so the classical Central Limit Theorem is applicable.  Counting by summing all the components: W = L1+L2+L3+L4+L5+L6+Q. L3 and Q are non-Gaussian components; the cwnd_i of different flows have some small correlation; L1, L2, L4, L5, L6 are all Gaussian, and L1 is almost perfectly Gaussian.

71 What Size of Buffer Do We Need? (figure: Case 1 and Case 2, results in %)

72 Outline Models of single objects  Finding the model of cwnd  Correlated losses  Packet loss event derivation  Model of l_i^1 Transmission rate and packet loss derivation  Arrival rate of a single flow  Total arrival rate  Q distribution  Packet loss Gaussian models of Lines  L1 model, Lindeberg Central Limit Theorem  Models of the other Lines and of W Conclusions

73 Conclusions We know the probabilistic models of the different components of the network:  Single flows  General components We can now plan routers or networks – by choosing the capacity C and the buffer size B, given a topology and a packet loss limit.

74 Thanks go to: Dr. I. Keslassy for guidance and inspiration The lab staff for relentless support: Hai Vortman, Yoram Yihyie, Yoram Or-Chen, Alex The students for some simulations The listeners in this class for their patience