Understanding TCP Cubic Performance in the Cloud: a Mean-field Approach
Sonia Belhareth*, Lucile Sassatelli◊, Denis Collange*, Dino Lopez-Pacheco◊, Guillaume Urvoy-Keller◊
*Orange Labs, Sophia Antipolis, France
◊Laboratoire I3S, Université Nice Sophia Antipolis – CNRS, France
IEEE CloudNet 2013

Motivation
Preliminary: TCP is (obviously) the dominant transport protocol in cloud and data-center scenarios.
We focus on the following scenario: N long-lived TCP connections sharing a bottleneck link.
Two flavors of TCP:
TCP Cubic (the default congestion control of Linux)
TCP NewReno, as a legacy reference

Contributions
A mean-field approach yields a fluid model of the interactions of TCP connections.
Validation against ns-2 simulations.
Extensive comparison between Cubic and NewReno in cloud scenarios.

TCP Cubic
For large-BDP (bandwidth-delay product) networks – long fat pipes – the congestion window grows as a cubic function of the time since the last loss:
wc(t) = C (t − K)^3 + wmax,  with K = (β wmax / C)^(1/3)
where:
t is the time since the last loss
C is a constant scaling factor
wmax is the largest congestion window prior to the last loss
β is the multiplicative-decrease factor
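The cubic growth curve is easy to sketch in code. A minimal illustration (the slide only names C; the values C = 0.4 and β = 0.3 below are the defaults used by the Linux implementation and should be treated as assumptions):

```python
def cubic_window(t, w_max, C=0.4, beta=0.3):
    """Congestion window t seconds after the last loss (Cubic mode).

    K is the time at which the window climbs back to w_max; beyond K
    the cubic term turns positive and the window probes past w_max.
    """
    K = ((w_max * beta) / C) ** (1.0 / 3.0)
    return C * (t - K) ** 3 + w_max
```

Right after a loss (t = 0) this gives wmax(1 − β), i.e. the multiplicative decrease, and at t = K it returns exactly wmax: a fast, concave climb back to the previous maximum, then a convex probe beyond it.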

TCP Cubic
Advantages of Cubic:
In the Linux kernel since 2.6.19
Window growth is independent of the RTT; it depends only on the time t since the last loss
Fast increase up to the last maximum congestion window, followed by smooth probing for additional bandwidth

TCP Cubic
Cubic can also operate in low-BDP networks through its TCP-friendly mode:
wtcp(t) = wmax (1 − β) + 3β/(2 − β) · t/R(t)
where R(t) is the estimated RTT at time t.
In practice, w(t) = max(wc(t), wtcp(t)), and the state of a Cubic connection is <w(t), wmax>.
Key remark: for a given scenario (latency, capacity and buffer size), Cubic is either in Cubic mode or in TCP mode [see paper for details].
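The max() rule can be sketched directly. A toy illustration (β = 0.3 and C = 0.4 are assumed Linux defaults, not values given on the slide):

```python
def tcp_friendly_window(t, w_max, rtt, beta=0.3):
    """Window a NewReno-like AIMD flow would reach t seconds after a loss."""
    return w_max * (1 - beta) + 3 * beta / (2 - beta) * (t / rtt)

def effective_window(t, w_max, rtt, C=0.4, beta=0.3):
    """w(t) = max(wc(t), wtcp(t)): whichever of Cubic mode or TCP mode wins."""
    K = ((w_max * beta) / C) ** (1.0 / 3.0)
    w_c = C * (t - K) ** 3 + w_max
    return max(w_c, tcp_friendly_window(t, w_max, rtt, beta))
```

With a 1 ms RTT and a small wmax, wtcp dominates and the connection stays in TCP mode; with a 50 ms RTT and a large wmax, the cubic term eventually wins, consistent with the key remark above.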

Target scenarios: FTTH, intra-DC and inter-DC
Scenario A: FTTH client → DC
Scenario B: intra-DC
Scenario C: inter-DC
Clouds: most traffic (75%) stays within the rack, due to colocation of applications and their dependent components
Other DCs: > 50% of traffic leaves the rack

              Bandwidth   RTT     BDP         Buffer size
FTTH client   100 Mbps    20 ms   166 pkts    50 pkts
intra DC      1 Gbps      1 ms    83 pkts     –
inter DC      1 Gbps      50 ms   4150 pkts   500 pkts
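The BDP column can be sanity-checked with a one-liner (the 1500-byte packet size is an assumption; the inter-DC entry appears rounded):

```python
def bdp_packets(bandwidth_bps, rtt_s, pkt_bytes=1500):
    """Bandwidth-delay product in packets, truncated to an integer."""
    return int(bandwidth_bps * rtt_s / (pkt_bytes * 8))

print(bdp_packets(100e6, 0.020))  # FTTH client: 166 pkts
print(bdp_packets(1e9, 0.001))    # intra DC:    83 pkts
print(bdp_packets(1e9, 0.050))    # inter DC:    4166 pkts (the slide rounds to 4150)
```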

Network model
N TCP Cubic connections share a bottleneck link of capacity NL pkts/s and buffer size NB.
The state of a connection is <w(t), wmax>; the state of the queue is its occupancy.
The current RTT and the current loss probability are both functions of the queue occupancy [see paper for the equations].
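In fluid models of this kind, the instantaneous RTT is typically propagation delay plus queueing delay at the bottleneck. A minimal sketch under that standard assumption (not the paper's exact equations):

```python
def current_rtt(base_rtt_s, queue_pkts, capacity_pps):
    """RTT seen by a connection: propagation delay plus the time for
    queue_pkts queued packets to drain at capacity_pps packets/s."""
    return base_rtt_s + queue_pkts / capacity_pps
```

For example, 50 packets queued at a 100 Mbps bottleneck (about 8333 pkts/s at 1500 bytes) add roughly 6 ms to a 20 ms base RTT.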

Performance analysis
The state of the system, Y, is a mean-field interaction system with N objects; Y is a homogeneous Markov chain.
The occupancy measure is the fraction of connections in each state at time t.
Theorem 3.1 of [K70] ensures that the occupancy measure converges, uniformly and almost surely, to the solution of a set of coupled ODEs.
[K70] T. G. Kurtz, "Solutions of Ordinary Differential Equations as Limits of Pure Jump Markov Processes," Journal of Applied Probability, vol. 7, no. 1, pp. 49–58, 1970.
[BL08] M. Benaïm and J.-Y. Le Boudec, "A Class of Mean Field Interaction Models for Computer and Communication Systems," Performance Evaluation, vol. 65, no. 11–12, pp. 823–838, 2008.
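The occupancy measure itself is simple to compute. A toy sketch (states here are arbitrary labels rather than the paper's <w, wmax> pairs):

```python
from collections import Counter

def occupancy_measure(states):
    """Fraction of the N objects (connections) found in each state."""
    n = len(states)
    return {s: c / n for s, c in Counter(states).items()}
```

As N grows, the convergence result above says this empirical measure tracks the deterministic solution of the coupled ODEs.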

Performance analysis
(Figure: transition diagram of the model – a connection (cx) detects a loss, a connection receives an ACK, and the input rate into the queue.)

Performance analysis
Our model derives from the NewReno model proposed in F. Baccelli, D. R. McDonald, and J. Reynier, "A Mean-field Model for Multiple TCP Connections through a Buffer Implementing RED," Performance Evaluation, vol. 49, no. 1–4, Sep. 2002.
Our extensions:
Extension to Cubic, whose window growth rate depends on time
The need to account for the time of loss (the loss process is assumed Poisson, as in Baccelli et al.)

Numerical validation
Comparison against ns-2 simulations (note that we do not model slow start).
Very good accuracy for the FTTH → DC and intra-DC scenarios.

Numerical validation
Accuracy is lower for the inter-DC scenario, the only scenario in pure Cubic mode, because the connections synchronize.
This synchronization was also studied by Hassayoun et al. through simulations; it persists even with RED, traffic on the reverse path, or a higher multiplexing level.

Performance analysis
Question 1: is TCP Cubic as fair as NewReno? At least in the TCP mode of Cubic, i.e. in the first two scenarios.
Question 2: how efficient is TCP Cubic with small buffer sizes? [Lei07] observed, through experimentation, detrimental effects of small buffers on Cubic. Hence the question: is this due to an (early) implementation of Cubic, or is it intrinsic to Cubic itself?

Fairness
CoV (std/mean) of the congestion window:
A CoV close to 0 means very good fairness; the larger the CoV, the lower the fairness (the CoV is directly related to the Jain fairness index).
Speaker note: Cubic fills the buffer all the time, and heavily; Reno often does not fill it at all => Cubic performs better when buffers are small, which is what we want.
Take-away: Cubic is more fair than TCP NewReno (in TCP mode).
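The link between the two metrics is the identity J = 1 / (1 + CoV²) (with the population standard deviation), which is why a CoV near 0 means a Jain index near 1. A quick check:

```python
import statistics

def cov(xs):
    """Coefficient of variation: population std / mean."""
    return statistics.pstdev(xs) / statistics.mean(xs)

def jain_index(xs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2)."""
    return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))
```

Equal windows give CoV = 0 and J = 1; the more dispersed the windows, the larger the CoV and the smaller J.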

Impact of buffer size
Better utilization by Cubic.
Both Cubic and NewReno are greedy, which is bad for newly arriving connections; Cubic is more greedy than NewReno.
TCP NewReno is clearly less efficient than Cubic for buffer sizes smaller than 60% of the BDP.
Our model suggests that Cubic can survive with buffer sizes as small as 20% of the BDP.

Conclusions and future work
A model for the TCP mode of Cubic (and for NewReno), valid for a large set of cloud-related scenarios (on a 1 Gb/s link, an RTT of 16 ms is needed to trigger Cubic mode).
It allows us to investigate some fundamental features related to fairness and to the impact of buffer sizes.
Future work:
Introduction of heterogeneity: a mix of short- and long-lived connections, different RTTs, other TCP versions (Compound)
Investigation of the synchronization effects of Cubic mode