Removing Exponential Backoff from TCP


Removing Exponential Backoff from TCP Amit Mondal Aleksandar Kuzmanovic EECS Department Northwestern University http://networks.cs.northwestern.edu

TCP Congestion Control
Slow-start phase: double the sending rate each round-trip time to reach high throughput quickly

TCP Congestion Control
Additive Increase / Multiplicative Decrease (AIMD): fairness among flows

TCP Congestion Control
Exponential backoff: system stability
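
As a rough sketch of the mechanism this slide refers to: under classic exponential backoff, the retransmission timer doubles after every consecutive timeout, up to a cap. Illustrative Python follows; the constants and names are my assumptions, not taken from any particular stack.

```python
# Illustrative sketch of RFC 6298-style exponential RTO backoff.
# MIN_RTO / MAX_RTO are common choices, not mandated constants.
MIN_RTO = 0.2   # seconds
MAX_RTO = 60.0  # seconds

def backed_off_rto(base_rto, n_timeouts):
    """RTO after n_timeouts consecutive timeouts: double per timeout, capped."""
    rto = max(base_rto, MIN_RTO) * (2 ** n_timeouts)
    return min(rto, MAX_RTO)
```

A few consecutive losses already push the timer toward the 60 s cap; these long shutoffs are exactly what the talk argues against.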

Our breakthrough
Exponential backoff is fundamentally wrong! In the rest of this presentation, I will argue (and hopefully convince you) why.

Contribution
Untangle the retransmit-timer backoff mechanism
Challenge the need for exponential backoff in TCP
Demonstrate that exponential backoff can be removed from TCP without causing congestion collapse
Incrementally deployable two-step task

Implications
Dramatically improve the performance of short-lived and interactive applications
Increase TCP's resiliency against low-rate (shrew) and high-rate (bandwidth-flooding) DoS attacks
Other benefits from avoiding unnecessary backoffs

Background: origin of RTO backoff
Adopted from the classical Ethernet protocol; the IP gateway is similar to the 'ether' in a shared-medium Ethernet network
Exponential backoff is essential for Internet stability: "an unstable system (a network subject to random load shocks and prone to congestion collapse) can be stabilized by adding some exponential damping (exponential timer backoff) to its primary excitation (senders, traffic sources)" [Jacobson88]

Rationale behind revisions
No admission control in the Internet: no bound on the number of active flows
Stability results for the Ethernet protocol are not applicable
IP gateway vs. classical Ethernet: classical Ethernet throughput drops to zero in overloaded scenarios, whereas an IP gateway forwards packets at full capacity even in extremely congested scenarios
Dynamic network environment: finite flow sizes and skewed traffic distribution
Increased bottleneck capacities

Implicit Packet Conservation Principle
RTO > RTT: the Karn-Partridge and Jacobson algorithms ensure this
End-to-end performance cannot suffer if endpoints uphold the principle
Formal proof for the single-bottleneck case in the paper
Extensive evaluation on a network testbed: single bottleneck, multiple bottlenecks, complex topologies
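
The RTO > RTT property comes from how the retransmission timer is computed. A minimal sketch of Jacobson's estimator (per RFC 6298; the class name and the 0.2 s floor are illustrative assumptions):

```python
ALPHA, BETA = 1 / 8, 1 / 4   # RFC 6298 smoothing gains
MIN_RTO = 0.2                # illustrative floor, in seconds

class RttEstimator:
    """Sketch of Jacobson's SRTT/RTTVAR estimator; RTO = SRTT + 4*RTTVAR."""
    def __init__(self):
        self.srtt = None
        self.rttvar = None

    def sample(self, rtt):
        """Feed one RTT measurement; return the resulting RTO."""
        if self.srtt is None:                 # first measurement
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:                                 # RTTVAR updated before SRTT
            self.rttvar = (1 - BETA) * self.rttvar + BETA * abs(self.srtt - rtt)
            self.srtt = (1 - ALPHA) * self.srtt + ALPHA * rtt
        return max(self.srtt + 4 * self.rttvar, MIN_RTO)
```

Because the 4*RTTVAR term is non-negative, the computed RTO never falls below the smoothed RTT, which is the property the principle relies on.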

Experimental methodology
Testbed: Emulab, 64-bit Intel Xeon machines + FreeBSD 6.1
RTTs from 10 ms to 200 ms; 10 Mbps bottleneck; TCP Sack + RED
Workload: Trace-II: synthetic HTTP traffic based on an empirical distribution; Trace-I: skewed towards shorter file sizes; Trace-III: skewed towards longer file sizes
NS2 simulations

Evaluation
TCP*(n): sub-exponential backoff algorithms with no backoff for the first n consecutive timeouts
Impact of the RTO backoff mechanism on response times
Impact of minRTO and initRTO on end-to-end performance
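
A sketch of how the TCP*(n) family could compute its timer, as I read the definition on this slide (illustrative Python, not the authors' implementation): the first n consecutive timeouts reuse the base RTO, and only later timeouts back off exponentially. TCP*(0) is standard TCP; TCP*(∞) never backs off.

```python
MAX_RTO = 60.0  # illustrative cap, in seconds

def tcp_star_rto(base_rto, n_timeouts, n):
    """RTO under TCP*(n): no backoff for the first n consecutive timeouts.

    n may be float('inf') for the fully backoff-less TCP*(inf) variant,
    in which case the grace window never runs out and extra stays 0.
    """
    extra = max(0, n_timeouts - n)   # backoffs beyond the grace window
    return min(base_rto * 2 ** extra, MAX_RTO)
```

With n = 0 this reduces to the standard exponential-backoff formula, which makes the family convenient for the incremental-deployment argument later in the talk.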

Sub-exponential backoff algorithms
End-to-end performance does not degrade after removing exponential backoff from TCP
(Figure panels: Trace-I, Trace-II, Trace-III)

Impact of (minRTO, initRTO) parameters
RFC 2988 recommendation: (1.0s, 3.0s)
Current practice: (0.2s, 3.0s)
Aggressive version: (0.2s, 0.2s)
Our key hypothesis is that setting these parameters more aggressively will not hurt end-to-end performance as long as the endpoints uphold the implicit packet conservation principle.

Impact of minRTO and initRTO
Poor performance of the (1.0s, 3.0s) RTO pair: its CCDF tail is heaviest
Improved performance for both the (0.2s, 3.0s) and (0.2s, 0.2s) pairs
Aggressive minRTO and initRTO parameters do not hurt e2e performance as long as endpoints uphold the implicit packet conservation principle
(Figure panels: TCP, TCP*(3), TCP*(∞))
Speaker notes: The figures depict the CCDF of response-time profiles for TCP, TCP*(3), and TCP*(∞) for different combinations of the (minRTO, initRTO) parameters. First, the (1.0s, 3.0s) RTO pair performs poorly; its CCDF tail is heaviest. Second, all figures show improved performance for the (0.2s, 3.0s) and (0.2s, 0.2s) pairs over the (1s, 3s) pair, although the ordering is not uniform. For plain TCP, choosing more aggressive minRTO and initRTO while leaving the backoff untouched makes the aggressive choice worse: packet-loss probability increases under the aggressive parameters, so active connections can be pushed into long shutoffs, reducing their performance relative to the (0.2s, 3.0s) scenario.

Role of bottleneck capacity
TCP*(∞) outperforms classical TCP independent of bottleneck capacity

Dynamic environments
ON-OFF flow arrival pattern; inter-burst period: 50 ms to 10 s

Dynamic environments
ON-OFF flow arrival pattern; inter-burst period: 1 s
(Figure: time series of active connections)

TCP variants and queuing disciplines
TCP Tahoe, TCP Reno, TCP Sack; DropTail, RED
The backoff-less TCP stacks outperform the regular stacks irrespective of TCP version and queuing discipline

Multiple bottlenecks
Dead packets: packets that exhaust network resources upstream but are then dropped downstream
In a multiple-bottleneck scenario, dead packets may impact the performance of flows sharing the upstream bottleneck; we use modeling and extensive experiments to explore such scenarios
(Topology figure: routers R1, R2, R3, R4; sender/client pairs S0/C0, S1/C1, S2/C2; links L0, L1, L2; p1, p2)
Speaker notes: So far we have shown that, independent of the location of the bottleneck link (upstream or downstream), the backoff-less TCP stack can only improve end-to-end performance, irrespective of TCP version and queuing discipline. The concern is that when the bottleneck is downstream, near the receiver, a more aggressive endpoint can generate a large number of dead packets that are then dropped downstream, and in a multiple-bottleneck scenario those dead packets may impact the performance of flows sharing the upstream bottleneck.

Impact on network efficiency
Fraction of dead packets at the upstream bottleneck is very small: α = 0.002475 for (1%, 5%)
< 5% of flows experience multiple bottlenecks

Impact on end-to-end performance
What happens if the fraction of multiple-bottleneck flows increases dramatically? What is the impact of the backoff-less TCP approach on end-to-end performance in such scenarios?
Emulab experiment: set L0/(L0+L1) = 0.25, far above the current situation

Impact on end-to-end performance
Trace-I: removing backoff improves the response-time distributions of both sets of flows
Trace-II: multiple-bottlenecked flows improve their response times, while upstream single-bottlenecked flows degrade only marginally
Trace-III: similar result to Trace-II
Multiple-bottlenecked flows improve their response times without causing catastrophic effects on other flows, even when their presence is significant
Speaker notes: These figures show the response-time distributions of both the two-bottleneck flows and the flows sharing only the upstream bottleneck, for all three traces. The left figure shows that removing backoff altogether improves the response times of both sets of flows: in an environment dominated by long pauses, exponential backoff only degrades overall response times. With Trace-II, the multiple-bottlenecked flows improve their aggregate response times, while flows with only the single upstream bottleneck see only a marginal degradation. The result is similar with Trace-III.

Realistic network topologies
Orbis-scaled HOT topology: 10 Gbps core links, 100 Mbps server edge links, 1-10 Mbps client-side links, 10 ms link delays
Workload: HTTP, and HTTP + P2P
Response-time distributions improve even in the absence of p2p traffic; the improvement is more significant in the presence of p2p traffic

Incremental deployment
TCP's performance degrades non-negligibly when coexisting with TCP*(∞)
Two-step task: first TCP to TCP*(3), then TCP*(3) to TCP*(∞)

Summary
Challenged the need for RTO backoff in TCP
End-to-end performance can only improve if endpoints uphold the implicit packet conservation principle
Extensive testbed evaluation for single- and multiple-bottleneck scenarios, and with complex topologies
Incrementally deployable two-step task

Thank you

Impact of minRTO and initRTO
Aggressive minRTO and initRTO parameters do not hurt e2e performance as long as endpoints uphold the implicit packet conservation principle
(Figure panels: TCP, TCP*(3), TCP*(∞))
