Congestion Avoidance and Control
Van Jacobson, Michael J. Karels
Presenter: Shegufta Bakht Ahsan
09 October 2014

Published in ACM SIGCOMM, 1988, and cited many thousands of times. This paper also made a major contribution to shaping today's networks. How? We will explore that in this presentation.

Congestion Collapse
Happens in a packet-switched computer network: little or no useful communication gets through because of congestion. It generally occurs at "choke points" in the network. It was identified as a possible problem as far back as 1984 (RFC 896, 6 January) and first observed on the early Internet in October 1986, when data throughput from Lawrence Berkeley Lab to UC Berkeley (separated by only 400 yards and two IMP hops!) dropped from 32 Kbps to 40 bps. It continued to occur until end nodes started implementing Van Jacobson's congestion control between 1987 and 1988.

Congestion Collapse: Reason
When more packets were sent than the intermediate routers could handle, the routers discarded many packets, expecting the end points of the network to retransmit the information. Early TCP implementations had poor retransmission behavior: when this packet loss occurred, the end points sent extra packets repeating the lost information, doubling the sending rate, exactly the opposite of what should be done during congestion. This pushed the entire network into a "congestion collapse" in which most packets were lost and the resulting throughput was negligible.

Investigation
Was the 4.3BSD TCP misbehaving? Yes.
Could it be tuned to work better under abysmal network conditions? Yes.

Steps Taken
Since that time, seven new algorithms were put into the 4BSD TCP:
i. Round-trip-time variance estimation
ii. Exponential retransmit timer backoff
iii. Slow-start
iv. More aggressive receiver ACK policy
v. Dynamic window sizing on congestion
vi. Karn's clamped retransmit backoff
vii. Fast retransmit
We will focus on algorithms i to v, as they all evolve from one common observation: the flow on a TCP connection should obey a "conservation of packets" principle.

Conservation of Packets
Conservation of packets means the connection is "in equilibrium"; in other words, it runs stably with a full window of data in transit. Such a flow is what a physicist would call "conservative": a new packet is not put into the network until an old packet leaves.
Packet conservation can fail in only three ways:
1. The connection doesn't get to equilibrium, or
2. A sender injects a new packet before an old packet has exited, or
3. The equilibrium can't be reached because of resource limits along the path.

Getting to Equilibrium: Slow-start
The sender can use ACKs as a "clock" to strobe new packets into the network. The receiver cannot generate ACKs faster than data packets get through the network, so the protocol is "self-clocking".

Getting to Equilibrium: Slow-start
Figure 1 is a schematic of a sender and a receiver on high-bandwidth networks connected by a slower long-haul net. The vertical dimension is bandwidth and the horizontal dimension is time, so each shaded box is a packet and, since bandwidth x time = bits, the area of each box is the packet size. The packet size does not change, so a packet squeezed into the smaller long-haul bandwidth must spread out in time.
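As a purely illustrative example (the numbers are not from the paper): an 8,000-bit packet takes 0.8 ms to transmit on a 10 Mb/s LAN but about 143 ms on a 56 Kb/s long-haul link; its area, 8,000 bits, is the same in both cases.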

Getting to Equilibrium: Slow-start
The sender has just started sending its "first burst" of packets back-to-back, and the ACK for the first of those packets is about to arrive back at the sender.
Pb: minimum packet spacing on the slowest link.
Pr: incoming packet spacing at the receiver.
There is no queuing, therefore Pb = Pr.

Getting to Equilibrium: Slow-start
Ar: spacing between ACKs on the receiver's side; Ar = Pr = Pb.
A time slot of Pb is big enough for a packet, so it is certainly big enough for an ACK; the ACK spacing is therefore preserved along the return path: Ab = Ar.
As: spacing between ACKs on the sender's side; As = Ab.
Hence, As = Pb.

Getting to Equilibrium: Slow-start
Hence, if packets after the "first burst" are sent only in response to an ACK, the sender's packet spacing will exactly match the packet time on the slowest link in the path. But how do we manage the "first burst"? It sounds like a chicken-and-egg problem: to get data flowing there must be ACKs to clock out packets, but to get ACKs there must be data flowing.

Getting to Equilibrium: Slow-start
The slow-start algorithm was developed to start the clock by gradually increasing the amount of data in transit:
1. Add a congestion window, cwnd, to the per-connection state.
2. When starting, or restarting after a loss, set cwnd to one packet.
3. On each ACK for new data, increase cwnd by one packet.
4. When sending, send the minimum of the receiver's advertised window and cwnd.
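A minimal C sketch of steps 1-3 (the variable and function names are illustrative assumptions, not the actual 4.3BSD code); step 4 simply limits the sender to min(cwnd, advertised window) packets in flight:

    double cwnd;   /* congestion window, in packets (step 1) */

    /* Step 2: when starting, or restarting after a loss, fall back to one packet. */
    void slow_start_restart(void) { cwnd = 1.0; }

    /* Step 3: each ACK for new data opens the window by one packet,
       so cwnd roughly doubles every round-trip time.               */
    void slow_start_on_ack(void)  { cwnd += 1.0; }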

Getting to Equilibrium: Slow-start
(Figure: the congestion window growing over the 0th, 1st, 2nd, and 3rd round-trip times; the axes mark one round-trip time and one packet time.)

Getting to Equilibrium: Slow-start
Slow start gradually takes up the channel's idle bandwidth. According to the authors, the slow-start window increase isn't that slow: it takes R log2 W, where R is the round-trip time and W is the window size in packets. This means the window opens quickly enough to have a negligible effect on performance, and the algorithm guarantees that a connection will generate data at a rate at most twice the maximum possible on the path.
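For example (illustrative numbers, not from the paper): with a round-trip time R of 100 ms and a window of W = 32 packets, slow start reaches the full window after about R log2 W = 100 ms x 5 = 500 ms.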

Getting to Equilibrium: Slow-start
On a side note: in the paper "Why Flow-Completion Time is the Right Metric for Congestion Control and why this means we need new algorithms", the authors argue that TCP's slow-start mechanism increases the Flow Completion Time (FCT). Nowadays, end-users care more about a network that guarantees a small FCT. In that paper, the authors explain how a new protocol called Rate Control Protocol (RCP) can achieve much better FCT than TCP or XCP.

Getting to Equilibrium: Slow-start

Conservation at equilibrium: Round-Trip Timing
Packet conservation can fail in only three ways:
1. The connection doesn't get to equilibrium, or
2. A sender injects a new packet before an old packet has exited, or
3. The equilibrium can't be reached because of resource limits along the path.
Once data is flowing reliably, problems (2) and (3) should be addressed.

Conservation at equilibrium: Round-Trip Timing
If the protocol is correct, (2) represents a failure of the sender's retransmit timer. The TCP specification suggests estimating the mean round-trip time via the low-pass filter
R = αR + (1 - α)M
where R is the average RTT estimate, M is the round-trip-time measurement from the most recently ACKed data packet, and α is a filter gain constant with a suggested value of 0.9. Once the R estimate is updated, the retransmit timeout interval RTO for the next packet is set to βR, where β accounts for the RTT variation.
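A compact C sketch of this estimator; the function name and the choice beta = 2 are illustrative assumptions, not the exact BSD code:

    #define ALPHA 0.9   /* filter gain suggested by the spec          */
    #define BETA  2.0   /* allowance for RTT variation (illustrative) */

    static double rtt_estimate;   /* smoothed round-trip-time estimate R */

    /* Update R with a new measurement M from the most recently
       ACKed data packet and return the retransmit timeout (RTO). */
    double update_rto(double M)
    {
        rtt_estimate = ALPHA * rtt_estimate + (1.0 - ALPHA) * M;  /* R <- alpha*R + (1-alpha)*M */
        return BETA * rtt_estimate;                               /* RTO = beta*R               */
    }

Algorithm (i) in the paper replaces the fixed beta with an estimate of the RTT variation, which tracks the timeout much better when the load (and hence the RTT) varies; this slide covers only the original filter.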

Conservation at equilibrium: Round-Trip Timing

Adapting to the path: congestion avoidance
Packet conservation can fail in only three ways:
1. The connection doesn't get to equilibrium, or
2. A sender injects a new packet before an old packet has exited, or
3. The equilibrium can't be reached because of resource limits along the path.
A timeout most probably indicates a lost packet, provided the timers are in good shape. Packets get lost for two reasons:
1. They are damaged in transit, or
2. The network is congested and somewhere on the path there is insufficient buffer capacity.

Adapting to the path: congestion avoidance
According to the authors, "on most network paths, loss due to damage is rare (<< 1%)", so it is highly probable that a packet loss is due to congestion in the network. The "congestion avoidance" strategy should have two components:
1. The network must be able to signal the transport endpoints that congestion is occurring (or is about to occur).
2. The endpoints must have a policy that decreases utilization if this signal is received and increases utilization if the signal isn't received.

Adapting to the path: congestion avoidance
In this scheme, if the source detects congestion, it reduces its window size:
W_i = d * W_{i-1}, with d < 1.
This is a multiplicative decrease of the window size; d is chosen as 0.5 (why?).

Adapting to the path: congestion avoidance
Suppose Alice and Bob are equally sharing a 10 Mbps channel. If Alice suddenly shuts down her computer, 5 Mbps of bandwidth is wasted unless Bob increases his window size and gradually captures the remaining idle bandwidth. How should the window size be increased?

Adapting to the path: congestion avoidance
Should it be W_i = b * W_{i-1}, with 1 < b < 1/d? No: the result would oscillate wildly and, on average, deliver poor throughput. Rather, the paper states that the best increase policy is to make small, constant changes to the window size:
W_i = W_{i-1} + u, with u = 1.
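As an illustration (numbers not from the paper): if the window is 8 packets when a timeout occurs, multiplicative decrease halves it to 4, and additive increase then grows it by one packet per round trip (5, 6, 7, ...), giving the familiar sawtooth window trajectory.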

Adapting to the path: congestion avoidance
Congestion avoidance algorithm:
1. On any timeout, set cwnd to half the current window size (multiplicative decrease).
2. On each ACK for new data, increase cwnd by 1/cwnd (additive increase).
3. When sending, send the minimum of the receiver's advertised window and cwnd.

Adapting to the path: congestion avoidance
Packet conservation can fail in only three ways:
1. The connection doesn't get to equilibrium, or
2. A sender injects a new packet before an old packet has exited, or
3. The equilibrium can't be reached because of resource limits along the path.

Slow Start + Congestion Avoidance
The sender keeps two state variables for congestion control:
1. A slow-start/congestion-avoidance window, 'cwnd'.
2. A threshold size, 'ssthresh', used to switch between the two algorithms.
The sender always sends the minimum of cwnd and the window advertised by the receiver. On a timeout, half of the current window size is recorded in ssthresh (multiplicative decrease), after which cwnd is set to 1:
    ssthresh = cwnd / 2;
    cwnd = 1;   // initiates slow start

Slow Start + Congestion Avoidance
When new data is ACKed, the sender does:
    if (cwnd < ssthresh)
        cwnd += 1;          // slow start: cwnd doubles every RTT (multiplicative increase)
    else
        cwnd += 1.0 / cwnd; // congestion avoidance: increment cwnd by about 1 packet per RTT (additive increase)
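Putting the two handlers together, here is a minimal, self-contained C sketch of the sender-side logic from the last two slides; the function names, initial values, and the double-typed windows are illustrative assumptions, not the actual 4.3BSD code:

    /* Sender-side congestion control state, in units of packets. */
    static double cwnd     = 1.0;   /* slow-start / congestion-avoidance window */
    static double ssthresh = 64.0;  /* threshold for switching algorithms       */
    static double rcv_window;       /* receiver's advertised window             */

    /* Retransmit timeout: treated as a congestion signal. */
    void on_timeout(void)
    {
        ssthresh = cwnd / 2.0;   /* multiplicative decrease: remember half the window */
        cwnd     = 1.0;          /* restart with slow start                           */
    }

    /* ACK that acknowledges new data. */
    void on_ack_of_new_data(void)
    {
        if (cwnd < ssthresh)
            cwnd += 1.0;          /* slow start: doubles per RTT                  */
        else
            cwnd += 1.0 / cwnd;   /* congestion avoidance: about +1 packet per RTT */
    }

    /* At most min(cwnd, rcv_window) packets may be outstanding. */
    double usable_window(void)
    {
        return cwnd < rcv_window ? cwnd : rcv_window;
    }

Because each of the roughly cwnd ACKs in a round trip adds 1/cwnd during congestion avoidance, the window grows by about one packet per round-trip time, which is exactly the additive-increase policy described earlier.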

Slow Start + Congestion Avoidance
(Figure: window growth over the 0th, 1st, and 2nd round-trip times; the axes mark one round-trip time and one packet time.)
Worked example: with cwnd = 2 at the start of a round trip, each of the two ACKs arriving in that round trip adds about 1/cwnd = 1/2, so cwnd grows from 2 to 2.5 and then to roughly 3, i.e. about one extra packet per round-trip time.

Evaluation
Test setup to examine the interaction of multiple simultaneous TCP conversations sharing a bottleneck link: 1-MByte transfers were initiated 3 seconds apart from four machines at LBL to four machines at UCB, one conversation per machine pair (the dotted lines in the figure show the pairing). All traffic went over a microwave link with roughly 25 KB/s of usable bandwidth (the figure quoted on the later slides), whose queue can hold up to 50 packets. Each connection was given a window of 16 KB. Thus any two connections could overflow the available buffering, and the four connections together exceeded the queue capacity by 160%.
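To see where the 160% figure comes from: if, as in the original paper, each packet carried 512 data bytes, a 16 KB window is 32 packets, so two connections alone (64 packets) already overflow the 50-packet queue, and all four together (128 packets) exceed it by about (128 - 50)/50, roughly 160%.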

Evaluation
Without the new algorithms: 4,000 of the 11,000 packets sent were retransmissions. Of the 25 KB/s, one TCP conversation got 8 KB/s, two got 5 KB/s, and one got 0.5 KB/s.

Evaluation
With congestion avoidance: 89 out of 8,281 packets were retransmitted. Of the 25 KB/s, two TCP conversations got 8 KB/s and the other two got 4.5 KB/s. The difference between the high- and low-bandwidth senders was due to the receivers.
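In other words, retransmissions dropped from roughly 36% of all packets sent (4,000 of 11,000) to about 1% (89 of 8,281).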

Evaluation
Throughput normalized to the 25 KB/s link bandwidth.
Thin line, without congestion avoidance: the senders send 25% more than the link bandwidth.
Thick line, with congestion avoidance: the first 5 seconds are low (slow start); the large oscillation from 5 to 20 seconds is the congestion control searching for the correct window size; for the rest of the time the connections run at the wire bandwidth; around second 110 there is a bandwidth 're-negotiation' as connection one shuts down.

Evaluation
Thin line, without congestion avoidance: 75% of the bandwidth was used for data; the rest went to retransmissions.
Thick line: with congestion avoidance.

Thank You! Questions?
Discussion topics: increased Flow Completion Time; the TCP global synchronization problem; RED (Random Early Detection); "Maybe we can try different initial values for cwnd and evaluate slow-start's performance in different settings."