Queueing analysis of a feedback-controlled (TCP/IP) network
Gaurav Raina (Cambridge), Damon Wischik (UCL), Mark Handley (UCL)

Some Internet History
1974: First draft of TCP/IP. "A Protocol for Packet Network Intercommunication", Vint Cerf and Robert Kahn.
1983: ARPANET switches over to TCP/IP.
1986: Congestion collapse.
1988: Congestion control for TCP. "Congestion Avoidance and Control", Van Jacobson.
Source: "A Brief History of the Internet", the Internet Society.

Sizing router buffers (SIGCOMM 2004)
Guido Appenzeller, Isaac Keslassy, Nick McKeown (Stanford University)
Abstract. All Internet routers contain buffers to hold packets during times of congestion. Today, the size of the buffers is determined by the dynamics of TCP's congestion control algorithm. In particular, the goal is to make sure that when a link is congested, it is busy 100% of the time; which is equivalent to making sure its buffer never goes empty. A widely used rule-of-thumb states that each link needs a buffer of size B = RTT*C, where RTT is the average round-trip time of a flow passing across the link, and C is the data rate of the link. For example, a 10Gb/s router linecard needs approximately 250ms*10Gb/s = 2.5Gbits of buffers; and the amount of buffering grows linearly with the line-rate. Such large buffers are challenging for router manufacturers, who must use large, slow, off-chip DRAMs. And queueing delays can be long, have high variance, and may destabilize the congestion control algorithms. In this paper we argue that the rule-of-thumb (B = RTT*C) is now outdated and incorrect for backbone routers. This is because of the large number of flows (TCP connections) multiplexed together on a single backbone link. Using theory, simulation and experiments on a network of real routers, we show that a link with N flows requires no more than B = (RTT*C)/√N, for long-lived or short-lived TCP flows. The consequences on router design are enormous: A 2.5Gb/s link carrying 10,000 flows could reduce its buffers by 99% with negligible difference in throughput; and a 10Gb/s link carrying 50,000 flows requires only 10Mbits of buffering, which can easily be implemented using fast, on-chip SRAM.
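As a quick check of the abstract's arithmetic (our worked example, using only the figures quoted above):

rule of thumb:  B = RTT*C = 0.25 s × 10 Gb/s = 2.5 Gbit
revised rule:   B = RTT*C/√N = 2.5 Gbit/√50000 ≈ 2.5 Gbit/224 ≈ 11 Mbit

which is the "only 10Mbits of buffering" claimed in the abstract.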

TCP
// ACK-processing logic of a TCP sender with NewReno-style fast recovery,
// as implemented in a packet-level simulator (body of the ACK handler).
if (seqno > _last_acked) {              // new data acknowledged
    if (!_in_fast_recovery) {
        // Normal ACK: slide the window and grow cwnd.
        _last_acked = seqno;
        _dupacks = 0;
        inflate_window();
        send_packets(now);
        _last_sent_time = now;
        return;
    }
    if (seqno < _recover) {
        // Partial ACK during fast recovery: deflate by the data acked,
        // then retransmit the next hole.
        uint32_t new_data = seqno - _last_acked;
        _last_acked = seqno;
        if (new_data < _cwnd) _cwnd -= new_data; else _cwnd = 0;
        _cwnd += _mss;
        retransmit_packet(now);
        send_packets(now);
        return;
    }
    // Full ACK: leave fast recovery and deflate the window.
    uint32_t flightsize = _highest_sent - seqno;
    _cwnd = min(_ssthresh, flightsize + _mss);
    _last_acked = seqno;
    _dupacks = 0;
    _in_fast_recovery = false;
    send_packets(now);
    return;
}
if (_in_fast_recovery) {
    // Duplicate ACK while recovering: inflate the window by one segment.
    _cwnd += _mss;
    send_packets(now);
    return;
}
_dupacks++;
if (_dupacks != 3) {                    // wait for the third duplicate ACK
    send_packets(now);
    return;
}
// Third duplicate ACK: halve the window and enter fast recovery.
_ssthresh = max(_cwnd / 2, (uint32_t)(2 * _mss));
retransmit_packet(now);
_cwnd = _ssthresh + 3 * _mss;
_in_fast_recovery = true;
_recover = _highest_sent;
[Figure: flow bandwidth (0-100 kB/sec) against time (0-8 sec).]

How TCP shares capacity
[Figure: individual flow bandwidths and their sum plotted against time, compared with the available bandwidth.]

Macroscopic description of TCP
Let x be the mean bandwidth of a flow [pkts/sec]
Let RTT be the flow's round-trip time [sec]
Let p be the packet loss probability
The TCP algorithm increases x at rate 1/RTT² [pkts/sec²] and reduces x by x/2 for every packet loss.
Average increase in rate = average decrease in rate: 1/RTT² = (px)(x/2)
Consider a link with N identical flows. Let NC be the capacity of the link [pkts/sec].
Packet loss ratio = fraction of work that exceeds the service rate: p = (Nx−NC)⁺/Nx = (x−C)⁺/x
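Solving the balance equation for x gives the familiar inverse-square-root throughput law (a worked step, not spelled out on the slide):

1/RTT² = p·x²/2  ⟹  x = (1/RTT)·√(2/p)

so throughput falls off as 1/√p; combining this with p = (x−C)⁺/x pins down the operating point found by the fixed-point analysis below.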

Fixed-Point Models for the End-to-End Performance Analysis of IP Networks (ITC 2000)
R.J. Gibbens, S.K. Sargood, C. Van Eijl, F.P. Kelly, H. Azmoodeh, R.N. Macfadyen, N.W. Macfadyen (Statistical Laboratory, Cambridge; and BT, Adastral Park)
Abstract. This paper presents a new approach to modeling end-to-end performance for IP networks. Unlike earlier models, in which end stations generate traffic at a constant rate, the work discussed here takes the adaptive behaviour of TCP/IP into account. The approach is based on a fixed-point method which determines packet loss, link utilization and TCP throughput across the network. Results are presented for an IP backbone network, which highlight how this new model finds the natural operating point for TCP, which depends on route lengths (via round-trip times and number of resources), end-to-end packet loss and the number of user sessions.

Fixed-point analysis
[Figure: log₁₀ of packet loss probability against traffic intensity x/C, for C*RTT = 4, 20 and 100 pkts.]
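The fixed point can be found numerically by combining the two relations above; a minimal bisection sketch (our illustrative code, not from the talk, using the loss formula p = (x−C)⁺/x from the macroscopic model):

#include <cmath>
#include <cstdio>
#include <initializer_list>

// Solve 1/RTT^2 = p(x) * x^2 / 2 with p(x) = (x - C)+ / x by bisection.
// Any root has x > C, and f below is increasing there, so bisection applies.
int main() {
    const double C = 1.0;                       // per-flow capacity [pkts/sec]
    for (double rtt : {4.0, 20.0, 100.0}) {     // so C*RTT = 4, 20, 100 pkts
        auto f = [&](double x) {                // excess of loss over growth
            double p = (x > C) ? (x - C) / x : 0.0;
            return p * x * x / 2.0 - 1.0 / (rtt * rtt);
        };
        double lo = C, hi = 100.0 * C;
        for (int i = 0; i < 60; ++i) {          // bisection
            double mid = 0.5 * (lo + hi);
            (f(mid) < 0.0 ? lo : hi) = mid;
        }
        double x = 0.5 * (lo + hi), p = (x - C) / x;
        printf("C*RTT = %5.1f pkts: x/C = %.4f, log10(p) = %.2f\n",
               C * rtt, x / C, std::log10(p));
    }
}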

Queue simulations
What is the queueing theory behind p = (x−C)⁺/x? Where does buffer size come in?
Simulate a queue fed by N Poisson flows, each of rate x pkts/sec (x = 0.95, then 1.05 pkts/sec), served at rate NC (C = 1 pkt/sec), with buffer size √N·B (B = 3 pkts).
[Figure: queue size and arrival rate x against time, for N = 50, 100, 500 and 1000.]
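A minimal sketch of this experiment (our illustrative code, not the authors' simulator): the superposition of N Poisson flows of rate x is simulated as one Poisson stream of rate N·x, the queue is tracked via the Lindley recursion on unfinished work, and for simplicity the two loads are run separately rather than switching mid-run as on the slide.

#include <cmath>
#include <cstdio>
#include <random>
#include <initializer_list>

// N Poisson flows of rate x pkts/sec each, served at rate N*C,
// finite buffer of sqrt(N)*B packets; report the packet loss ratio.
int main() {
    std::mt19937 rng(42);
    const double C = 1.0, B = 3.0;              // C [pkts/sec/flow], B [pkts]
    for (int N : {50, 100, 500, 1000}) {
        for (double x : {0.95, 1.05}) {
            std::exponential_distribution<double> gap(N * x); // inter-arrivals
            const double svc = 1.0 / (N * C);   // service time per packet [sec]
            const double cap = std::sqrt((double)N) * B * svc; // buffer as workload [sec]
            double work = 0.0;                  // unfinished work in queue [sec]
            long drops = 0;
            const long pkts = 2000000;
            for (long i = 0; i < pkts; ++i) {
                work = std::max(0.0, work - gap(rng)); // server drains between arrivals
                if (work + svc > cap) ++drops;         // buffer full: packet lost
                else work += svc;
            }
            printf("N=%4d x=%.2f: loss ratio p = %.4f\n", N, x, (double)drops / pkts);
        }
    }
}

As N grows the loss ratio should approach (x−C)⁺/x: roughly 0 at x = 0.95 and about 0.048 at x = 1.05.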

A Poisson Limit for Buffer Overflow Probabilities (SIGCOMM 2002)
Jin Cao, Kavita Ramanan (Bell Labs)
Abstract. A key criterion in the design of high-speed networks is the probability that the buffer content exceeds a given threshold. We consider n independent traffic sources modelled as point processes, which are fed into a link with speed proportional to n. Under fairly general assumptions on the input processes we show that the steady state probability of the buffer content exceeding a threshold b>0 tends to the corresponding probability assuming Poisson input processes. We verify the assumptions for a large class of long-range dependent sources commonly used to model data traffic. Our results show that with superposition, significant multiplexing gains can be achieved for even smaller buffers than suggested by previous results, which consider O(n) buffer size. Moreover, simulations show that for realistic values of the exceedance probability and moderate utilisations, convergence to the Poisson limit takes place at reasonable values of the number of sources superposed. This is particularly relevant for high-speed networks in which the cost of high-speed memory is significant.

Mean-field limit
Consider a link with N flows, capacity NC, and buffer √N·B.
Let x_t be the average bandwidth at time t.
Let p_t be the packet loss probability at time t.
As N → ∞, we believe a mean-field limit holds.
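The limiting dynamics meant here are of the standard delayed TCP fluid-model form (as in the Misra-Gong-Towsley paper cited on the next slide; a sketch, since the slide leaves the equation implicit):

dx_t/dt = 1/RTT² − p_{t−RTT} · x_{t−RTT} · x_t / 2

The rate grows at 1/RTT², and is cut by half the current rate whenever a loss, signalled one RTT ago at rate p·x, arrives; the loss process p_t is determined by the queue, and hence by the buffer-sizing rule.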

Mean-field limit
"Fluid-based Analysis of a Network of AQM Routers Supporting TCP Flows with an Application to RED", Vishal Misra, Wei-Bo Gong, Don Towsley, SIGCOMM 2000.
"Rate-based versus queue-based models of congestion control", Supratim Deb, R. Srikant, ACM Sigmetrics 2004.
"Mean field convergence of a rate model of multiple TCP connections through a buffer implementing RED", David McDonald, Julien Reynier, to appear in Annals of Applied Probability.

Stability/instability of the fluid model
For some values of C*RTT the differential equation is stable; for others it is unstable and there are oscillations (i.e. the flows are partially synchronized).
When it is unstable, we can calculate the amplitude of the oscillations.
[Figure: arrival rate x/C against time, showing stable and oscillatory solutions.]
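One way to probe this numerically is to integrate the delayed fluid model with Euler steps; a rough sketch (our code; the loss formula p = (x−C)⁺/x, the initial condition and the step size are illustrative stand-ins for the talk's buffer-dependent loss model):

#include <cmath>
#include <cstdio>
#include <vector>
#include <algorithm>
#include <initializer_list>

// Euler integration of dx/dt = 1/RTT^2 - p(x(t-RTT)) * x(t-RTT) * x(t) / 2,
// keeping one RTT of history in a circular buffer. Prints the range of x/C
// over the final stretch: a narrow range means the solution settled,
// a wide range means sustained oscillation.
int main() {
    const double C = 1.0, dt = 0.001;
    for (double RTT : {4.0, 20.0, 100.0}) {
        const int lag = (int)(RTT / dt);          // delay measured in steps
        std::vector<double> h(lag + 1, 1.2 * C);  // one RTT of history
        int head = 0;                             // h[head] is x(t - RTT)
        const long steps = (long)(200 * RTT / dt);
        double xmin = 1e9, xmax = -1e9;
        for (long i = 0; i < steps; ++i) {
            double xdel = h[head];                          // x(t - RTT)
            double xnow = h[(head + lag) % (lag + 1)];      // x(t)
            double p = (xdel > C) ? (xdel - C) / xdel : 0.0;
            double xnew = xnow + dt * (1.0 / (RTT * RTT) - p * xdel * xnow / 2.0);
            h[head] = xnew;                       // overwrite the oldest point
            head = (head + 1) % (lag + 1);
            if (i > steps - 20 * (long)lag) {     // record the last ~20 RTTs
                xmin = std::min(xmin, xnew);
                xmax = std::max(xmax, xnew);
            }
        }
        printf("C*RTT = %5.1f pkts: x/C in [%.3f, %.3f] at the end\n",
               C * RTT, xmin / C, xmax / C);
    }
}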

Instability plot
[Figure: log₁₀ of packet loss probability against traffic intensity x/C, for C*RTT = 4, 20 and 100 pkts.]

Illustration: 20 flows
Standard TCP, single bottleneck link, no AQM. Service C = 60 pkts/sec/flow, RTT = 200 ms, #flows N = 20.
B = 20 pkts (Kelly rule)
B = 54 pkts (Stanford rule)
B = 240 pkts (rule of thumb)

Illustration: 200 flows
Standard TCP, single bottleneck link, no AQM. Service C = 60 pkts/sec/flow, RTT = 200 ms, #flows N = 200.
B = 20 pkts (Kelly rule)
B = 170 pkts (Stanford rule)
B = 2,400 pkts (rule of thumb)

Illustration: 2000 flows
Standard TCP, single bottleneck link, no AQM. Service C = 60 pkts/sec/flow, RTT = 200 ms, #flows N = 2000.
B = 20 pkts (Kelly rule)
B = 537 pkts (Stanford rule)
B = 24,000 pkts (rule of thumb)

Alternative buffer-sizing rules
Stanford rule: buffer = bandwidth*delay / sqrt(#flows)
Rule of thumb, no AQM: buffer = bandwidth*delay
Rule of thumb with RED: buffer = bandwidth*delay*{¼, 1, 4}
Kelly rule, no AQM: buffer = {10, 20, 50} pkts
Kelly rule, no AQM, Scalable TCP: buffer = {50, 1000} pkts
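For the illustration slides above, the first two rules work out as follows; a small sketch (our code) computing them from the slide parameters:

#include <cmath>
#include <cstdio>
#include <initializer_list>

// Buffer sizes under the rule of thumb and the Stanford rule, for the
// parameters of the illustration slides (C = 60 pkts/sec per flow,
// RTT = 200 ms). The Kelly rule is a flat 10-50 pkts regardless of N.
int main() {
    const double C = 60.0, RTT = 0.2;           // pkts/sec/flow, sec
    for (int N : {20, 200, 2000}) {
        double bdp = N * C * RTT;               // bandwidth*delay [pkts]
        printf("N=%5d: rule of thumb = %6.0f pkts, Stanford = %4.0f pkts\n",
               N, bdp, bdp / std::sqrt((double)N));
    }
}
// Output matches the slides: 240/54, 2400/170 and 24000/537 pkts.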

Scalable TCP: improving performance in highspeed wide area networks (SIGCOMM CCR 2003)
Tom Kelly (CERN, IT division)
Abstract. TCP congestion control can perform badly in highspeed wide area networks because of its slow response with large congestion windows. The challenge for any alternative protocol is to better utilize networks with high bandwidth-delay products in a simple and robust manner without interacting badly with existing traffic. Scalable TCP is a simple sender-side alteration to the TCP congestion window update algorithm. It offers a robust mechanism to improve performance in highspeed wide area networks using traditional TCP receivers. Scalable TCP is designed to be incrementally deployable and behaves identically to traditional TCP stacks when small windows are sufficient. The performance of the scheme is evaluated through experimental results gathered using a Scalable TCP implementation for the Linux operating system and a gigabit transatlantic network. The preliminary results gathered suggest that the deployment of Scalable TCP would have negligible impact on existing network traffic at the same time as improving bulk transfer performance in highspeed wide area networks.
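The sender-side change is a two-line tweak to the window update; a hedged sketch using the constants a = 0.01 and b = 0.125 proposed in the paper (the struct and handler names are ours, and the real protocol also falls back to standard TCP behaviour at small windows):

// Scalable TCP window update versus traditional TCP.
// Traditional TCP: cwnd += 1/cwnd per ACK, cwnd /= 2 per loss event.
struct ScalableTcp {
    double cwnd = 2.0;                 // congestion window [pkts]
    void on_ack()  { cwnd += 0.01; }   // fixed increment: recovery time after
                                       // a loss scales with RTT, not with cwnd
    void on_loss() { cwnd -= 0.125 * cwnd; }  // gentler than halving
};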

Rate control in communication networks: shadow prices, proportional fairness and stability (Journal of the Operational Research Society, 1998)
F.P. Kelly, A.K. Maulloo, D.K.H. Tan (Statistical Laboratory, Cambridge)
Abstract. This paper analyses the stability and fairness of two classes of rate control algorithm for communication networks. The algorithms provide natural generalizations to large-scale networks of simple additive increase/multiplicative decrease schemes, and are shown to be stable about a system optimum characterized by a proportional fairness criterion. Stability is established by showing that, with an appropriate formulation of the overall optimization problem, the network's implicit objective function provides a Lyapunov function for the dynamical system defined by the rate control algorithm. The network's optimization problem may be cast in primal or dual form: this leads naturally to two classes of algorithm, which may be interpreted in terms of either congestion indication feedback signals or explicit rates based on shadow prices. Both classes of algorithm may be generalized to include routing control, and provide natural implementations of proportionally fair pricing.

Teleological description
Consider several TCP flows sharing a single link.
Let x_r be the mean bandwidth of flow r [pkts/sec]
Let y be the total bandwidth of all flows [pkts/sec]
Let C be the total available capacity [pkts/sec]
TCP and the network act so as to solve
  maximise Σ_r U(x_r) − P(y, C) over x_r ≥ 0, where y = Σ_r x_r
[Figure: the utility U(x) as a function of x, and the penalty P(y, C) as a function of y, with the capacity C marked.]
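The utility U is pinned down by the macroscopic model earlier in the deck (a standard derivation, hedged since the slide only draws the curve): at equilibrium a flow balances marginal utility against the loss rate, U′(x) = p, and the balance equation 1/RTT² = p·x²/2 gives

U′(x) = 2/(RTT²·x²)  ⟹  U(x) = −2/(RTT²·x)

which is the familiar utility that standard TCP implicitly maximises.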

Teleological description
As before, but with a sharper penalty function P′:
  maximise Σ_r U(x_r) − P′(y, C) over x_r ≥ 0, where y = Σ_r x_r
By reducing buffer size, we increase the penalty for high utilization.
[Figure: the utility U(x) as a function of x, and the steeper penalty P′(y, C) as a function of y, with the capacity C marked.]

Conclusion
Analysis:
– Use the fixed-point model to find the equilibrium point;
– Find a mean-field limit, and calculate how stable it is.
Three rules for choosing buffer size lead to three different mean-field limits:
– Rule of thumb, e.g. 10 Gbytes
– Stanford rule, e.g. 100 Mbytes
– Kelly rule, e.g. 20 kbytes
The network acts to solve an optimization problem:
– It may or may not attain the solution.
– We can choose which optimization problem, by choosing the right buffer size and changing TCP's code.