Handout # 14: Congestion Control


Handout # 14: Congestion Control
CSC 458/CSC 2209 – Computer Networks, University of Toronto – Fall 2014
Professor Yashar Ganjali
Department of Computer Science, University of Toronto
yganjali@cs.toronto.edu
http://www.cs.toronto.edu/~yganjali

Announcements
- Problem Set 2: posted on the class web site. Due Nov. 7th at 5pm; submit electronically as ps2.pdf.
- Programming Assignment 2: will be posted next week. Due Nov. 28th; more information on this next week.

Announcements
- Problem Set 1: marks have been emailed to you; please check your CDF account.
- Midterm: has been marked; an email has been sent to your CDF account. You can pick up your papers TODAY!

Today’s Lecture
- Principles of congestion control: learning that congestion is occurring; adapting to alleviate the congestion.
- TCP congestion control: additive increase, multiplicative decrease; slow start and slow-start restart.
- Related TCP mechanisms: Nagle’s algorithm and delayed acknowledgments.
- Active Queue Management (AQM): Random Early Detection (RED); Explicit Congestion Notification (ECN).

What is Congestion?
[Figure: hosts H1 (10 Mb/s link) and H2 (100 Mb/s link) send traffic A1(t) and A2(t) into router R1, whose outgoing 1.5 Mb/s link leads to H3. A plot of cumulative bytes over time t shows arrivals A1(t) + A2(t) exceeding departures D(t); the excess X(t) builds up in the router.]

Flow Control vs. Congestion Control
- Flow control: keeping one fast sender from overwhelming a slow receiver.
- Congestion control: keeping a set of senders from overloading the network.
- Different concepts, but similar mechanisms:
  - TCP flow control: receiver window
  - TCP congestion control: congestion window
  - TCP window: min{congestion window, receiver window}

Time Scales of Congestion
- Too many users using a link during a peak hour (hours: 7:00, 8:00, 9:00).
- TCP flows filling up all available bandwidth (seconds: 1s, 2s, 3s).
- Two packets colliding at a router, also referred to as contention (microseconds: 100µs, 200µs, 300µs).

Dealing with Congestion
Example: two flows arriving at a router R1. Strategies:
- Drop one of the flows.
- Buffer one flow until the other has departed, then send it.
- Re-schedule one of the two flows for a later time.
- Ask both flows to reduce their rates.

Congestion is Unavoidable
- If two packets arrive at the same time, the node can only transmit one ... and must either buffer or drop the other.
- If many packets arrive in a short period of time, the node cannot keep up with the arriving traffic ... and the buffer may eventually overflow.

Arguably, Congestion is Good!
- We use packet switching because it makes efficient use of the links. Therefore, buffers in the routers are frequently occupied.
- If buffers are always empty, delay is low, but our usage of the network is low.
- If buffers are always occupied, delay is high, but we are using the network more efficiently.
- So how much congestion is too much?

Congestion Collapse
Definition: an increase in network load results in a decrease in useful work done. Many possible causes:
- Spurious retransmissions of packets still in flight (classical congestion collapse). Solution: better timers and TCP congestion control.
- Undelivered packets: packets consume resources and are dropped elsewhere in the network. Solution: congestion control for ALL traffic.

What Do We Want, Really?
- High throughput. Throughput: measured performance of a system, e.g., the number of bits/second of data that get through.
- Low delay. Delay: time required to deliver a packet or message, e.g., the number of msec to deliver a packet.
- These two metrics are sometimes at odds: if you drive a link as hard as possible, throughput will be high, but delay will be, too.

Load, Delay, and Power
- Typical behavior of queuing systems with random arrivals: average packet delay rises sharply as the load approaches capacity.
- A simple metric of how well the network is performing: power, commonly defined as load divided by delay.
- [Figure: average packet delay vs. load, and power vs. load; power peaks at an "optimal load".]
- Goal: maximize power.
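As a sketch of why power peaks at an intermediate load, assume an M/M/1-style normalized delay of 1/(1 - load). The slide does not name a queueing model, so this delay formula is an illustrative assumption:

```python
# Sketch: power curve under an assumed M/M/1-style normalized delay.
# Here power = load / delay = load * (1 - load), which peaks at load = 0.5.
def normalized_delay(load):
    assert 0 <= load < 1          # delay diverges as load approaches capacity
    return 1.0 / (1.0 - load)

def power(load):
    return load / normalized_delay(load)

# Scan loads 0.00 .. 0.99 and find the "optimal load".
best = max((round(i * 0.01, 2) for i in range(100)), key=power)
```

Under this assumption the optimum sits halfway to capacity: pushing load higher keeps throughput growing, but delay grows faster.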

Fairness
- Effective utilization is not the only goal; we also want to be fair to the various flows ... but what the heck does that mean?
- Simple definition: equal shares of the bandwidth, i.e., N flows each get 1/N of the bandwidth?
- But what if the flows traverse different paths?

Resource Allocation vs. Congestion Control
- Resource allocation: how nodes meet competing demands for resources, e.g., link bandwidth and buffer space; when to say no, and to whom.
- Congestion control: how nodes prevent or respond to overload conditions, e.g., persuading hosts to stop sending, or to slow down; typically has notions of fairness (i.e., sharing the pain).

Simple Resource Allocation
Simplest approach: FIFO queue and drop-tail.
- Link bandwidth: first-in first-out queue; packets are transmitted in the order they arrive.
- Buffer space: drop-tail queuing; if the queue is full, drop the incoming packet.
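The FIFO drop-tail behavior above can be sketched in a few lines (class and method names are my own, for illustration):

```python
from collections import deque

# Sketch of a FIFO queue with drop-tail buffer management.
class DropTailQueue:
    def __init__(self, capacity):
        self.capacity = capacity   # buffer space, in packets
        self.q = deque()
        self.drops = 0

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:
            self.drops += 1        # queue full: drop the arriving packet
            return False
        self.q.append(pkt)
        return True

    def dequeue(self):
        # FIFO service: transmit packets in arrival order.
        return self.q.popleft() if self.q else None
```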

Simple Congestion Detection
- Packet loss: a packet gets dropped along the way.
- Packet delay: a packet experiences high delay.
How does the TCP sender learn this?
- Loss: timeout, or triple-duplicate acknowledgment.
- Delay: round-trip time estimate.

Options for Congestion Control
- Implemented by host versus network
- Reservation-based versus feedback-based
- Window-based versus rate-based

TCP Congestion Control
- TCP implements host-based, feedback-based, window-based congestion control.
- TCP sources attempt to determine how much capacity is available.
- TCP sends packets, then reacts to observable events (loss).

Idea of TCP Congestion Control
- Each source determines the available capacity ... so it knows how many packets to have in transit.
- Congestion window: maximum # of unacknowledged bytes to have in transit; the congestion-control equivalent of the receiver window.
- MaxWindow = min{congestion window, receiver window}: send at the rate of the slowest component, receiver or network.
- Adapting the congestion window: decrease upon losing a packet (backing off); increase upon success (optimistically exploring).
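The window arithmetic can be sketched as follows (function names are my own; the usable-window form is the standard bytes-in-flight bookkeeping):

```python
# Sketch: the sender's limit is the smaller of the congestion window
# (network limit) and the receiver's advertised window (receiver limit).
def max_window(cwnd, rwnd):
    return min(cwnd, rwnd)

def usable_window(cwnd, rwnd, last_byte_sent, last_byte_acked):
    # Bytes the sender may still put in flight right now.
    return max_window(cwnd, rwnd) - (last_byte_sent - last_byte_acked)
```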

Additive Increase, Multiplicative Decrease
How much to increase and decrease?
- Increase linearly, decrease multiplicatively: a necessary condition for the stability of TCP.
- The consequences of an over-sized window are much worse than those of an under-sized window: over-sized window: packets dropped and retransmitted; under-sized window: somewhat lower throughput.
- Multiplicative decrease: on loss of a packet, divide the congestion window in half.
- Additive increase: on success for the last window of data, increase linearly.
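AIMD can be sketched in units of packets (TCP actually counts bytes, as a later slide explains; the function name is my own):

```python
# Sketch of AIMD in packet units.
def aimd_update(cwnd, loss):
    if loss:
        return max(cwnd / 2.0, 1.0)   # multiplicative decrease, floor of 1
    return cwnd + 1.0                  # additive increase per window of data
```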

Additive Increase
[Figure: timeline between Src and Dest showing data segments (D) and ACKs (A); each fully acknowledged window grows the window.]
Actually, TCP uses bytes, not segments, to count: when an ACK is received, the congestion window grows by MSS * (MSS / CWND) bytes.

Leads to the TCP “Sawtooth”
[Figure: congestion window vs. time t; the window grows linearly and is halved at each loss, tracing a sawtooth.]

Congestion Window Evolution
Rule for adjusting W:
- If an ACK is received: W ← W + 1/W
- If a packet is lost: W ← W/2
Only W packets may be outstanding.
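The rule above can be run as a tiny simulation to reproduce the sawtooth. The bottleneck capacity and packet units here are illustrative assumptions:

```python
# Sketch: evolve W per the slide's rule, with a loss whenever W reaches an
# assumed bottleneck capacity (in packets).
def evolve(w, acks, capacity):
    trace = [w]
    for _ in range(acks):
        if w >= capacity:
            w = w / 2.0          # packet lost: W <- W/2
        else:
            w = w + 1.0 / w      # ACK received: W <- W + 1/W
        trace.append(w)
    return trace
```

Note that W + 1/W per ACK adds roughly one packet per window (W ACKs), which is the additive increase of the previous slide.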

Congestion Window Evolution
[Figure only.]

Practical Details
- Congestion window: represented in bytes, not in packets (why?); packets have MSS (Maximum Segment Size) bytes.
- Increasing the congestion window: increase by MSS on success for the last window of data. In practice, increase by a fraction of MSS per received ACK: # packets per window = CWND / MSS; increment per ACK = MSS * (MSS / CWND).
- Decreasing the congestion window: never drop the congestion window below 1 MSS.
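The byte-counted rules above can be sketched directly (the MSS value is a common Ethernet-derived choice, assumed here for illustration):

```python
MSS = 1460  # bytes; an assumed, typical Maximum Segment Size

def on_ack(cwnd):
    # Increment per ACK: MSS * (MSS / CWND). After a full window's worth of
    # ACKs, cwnd has grown by about one MSS.
    return cwnd + MSS * MSS // cwnd

def on_loss(cwnd):
    # Halve the window, but never drop below 1 MSS.
    return max(cwnd // 2, MSS)
```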

TCP Sending Rate
- What is the sending rate of TCP?
- The acknowledgment for a sent packet is received after one RTT, and the amount of data sent until the ACK is received is the current window size W.
- Therefore, the sending rate is R = W/RTT.
- Is the TCP sending rate sawtooth-shaped as well?
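The formula R = W/RTT is easy to put numbers into (the window and RTT values below are illustrative assumptions):

```python
# Sketch: sending rate in bits per second from window size and RTT.
def rate_bps(window_bytes, rtt_seconds):
    return window_bytes * 8 / rtt_seconds

# Example: a 64 KB window over a 100 ms RTT gives about 5.2 Mb/s.
r = rate_bps(65536, 0.1)
```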

TCP Sending Rate and Buffers
[Figure only.]

Getting Started
Need to start with a small CWND to avoid overloading the network.
[Figure: window vs. time t under pure additive increase from a small CWND.]
But it could take a long time to get started!

“Slow Start” Phase
- Start with a small congestion window: initially, CWND is 1 MSS, so the initial sending rate is MSS/RTT.
- That could be pretty wasteful: it might be much less than the actual bandwidth, and linear increase takes a long time to accelerate.
- Slow-start phase (really “fast start”): the sender starts at a slow rate (hence the name) ... but increases the rate exponentially ... until the first loss event.

Slow Start in Action
Double CWND per round-trip time = increase CWND by 1 for each ACK received.
[Figure: timeline between Src and Dest showing data (D) and ACK (A) segments; the window grows 1, 2, 4, 8 in successive RTTs.]
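The equivalence on this slide — per-ACK increments of one segment double the window each round trip — can be checked with a small sketch (packet units, an abstraction):

```python
# Sketch: one round of slow start. Each of the cwnd packets in flight is
# ACKed, and each ACK grows CWND by one segment, doubling it per RTT.
def slow_start_round(cwnd):
    for _ in range(int(cwnd)):  # one ACK per packet in flight
        cwnd += 1               # +1 segment per received ACK
    return cwnd
```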

Slow Start and the TCP Sawtooth
[Figure: exponential “slow start” ramp, then the sawtooth of window vs. time t, with the window halved at each loss.]
Why is it called slow start? Because TCP originally had no congestion control mechanism: the source would just start by sending a whole window’s worth of data.

Two Kinds of Loss in TCP
- Triple duplicate ACK: packet n is lost, but packets n+1, n+2, etc. arrive; the receiver sends duplicate acknowledgments ... and the sender retransmits packet n quickly. Do a multiplicative decrease and keep going.
- Timeout: packet n is lost and detected via a timeout, e.g., because all packets in flight were lost. After the timeout, blasting away for the entire CWND would trigger a very large burst in traffic, so it is better to start over with a low CWND.
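The two reactions can be sketched side by side (packet units; simplified, ignoring the fast-recovery and ssthresh bookkeeping of real TCP variants):

```python
# Sketch: the sender's reaction to each kind of loss signal.
def on_triple_dup_ack(cwnd):
    return max(cwnd / 2.0, 1.0)   # multiplicative decrease, keep going

def on_timeout(cwnd):
    return 1.0                    # start over with a low CWND (slow start)
```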

Repeating Slow Start After Timeout
[Figure: window vs. time t; after a timeout, slow start operates until the window reaches half of the previous CWND, then growth turns linear.]
Slow-start restart: go back to a CWND of 1, but take advantage of knowing the previous value of CWND.

Repeating Slow Start After Idle Period
- Suppose a TCP connection goes idle for a while, e.g., a Telnet session where you don’t type for an hour.
- Eventually, the network conditions change: maybe many more flows are traversing the link, e.g., maybe everybody has come back from lunch!
- It is dangerous to start transmitting at the old rate: a previously idle TCP sender might blast the network ... causing excessive congestion and packet loss.
- So some TCP implementations repeat slow start: slow-start restart after an idle period.

Congestion Control in the Internet
- The default maximum window sizes of most TCP implementations are very small: Windows XP: 12 packets; Linux/Mac: 40 packets (higher in newer implementations).
- Often the buffer of a link is larger than the maximum window size of TCP: a typical DSL line has 200 packets’ worth of buffer, while a TCP session has at most 40 packets outstanding.
- As a result, the buffer can never fill up, and the router will never drop a packet.

Other TCP Mechanisms
Nagle’s Algorithm and Delayed ACK

Motivation for Nagle’s Algorithm
- Interactive applications: Telnet and rlogin generate many small packets (e.g., keystrokes).
- Small packets are wasteful: mostly header (e.g., 40 bytes of header, 1 of data).
- It is appealing to reduce the number of packets: we could force every packet to have some minimum size ... but what if the person doesn’t type more characters?
- Need to balance competing trade-offs: send larger packets ... but don’t introduce much delay by waiting.

Nagle’s Algorithm
- Wait if the amount of data is small (smaller than the Maximum Segment Size, MSS) and some other packet is already in flight, i.e., we are still awaiting the ACKs for previous packets.
- That is, send at most one small packet per RTT ... by waiting until all outstanding ACKs have arrived.
- Influence on performance: interactive applications: enables batching of bytes; bulk transfer: transmits in MSS-sized packets anyway.
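The send decision above can be sketched as a predicate (simplified; real implementations also consider flags such as PSH/FIN and the TCP_NODELAY socket option):

```python
# Sketch: may the sender transmit this segment now, under Nagle's rule?
def nagle_ok_to_send(data_len, mss, unacked_bytes):
    if data_len >= mss:
        return True               # a full-sized segment is always sent
    return unacked_bytes == 0     # a small segment waits while data is in flight
```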

Motivation for Delayed ACK
- TCP traffic is often bidirectional: data traveling in both directions, and ACKs traveling in both directions.
- ACK packets have high overhead: 40 bytes for the IP header and TCP header ... and zero bytes of data.
- Piggybacking is appealing: host B can send an ACK to host A ... as part of a data packet from B to A.

TCP Header Allows Piggybacking
[Figure: TCP header layout: source port, destination port, sequence number, acknowledgment, header length, flags (SYN, FIN, RST, PSH, URG, ACK), advertised window, checksum, urgent pointer, options (variable), data. The ACK flag and acknowledgment field let a data segment carry an acknowledgment.]

Example of Piggybacking
[Figure: timeline between A and B. When B has data to send, it replies with Data+ACK; when B has no data to send, it sends a bare ACK; when A has data to send, it likewise sends Data+ACK.]

Increasing Likelihood of Piggybacking
- To increase piggybacking, TCP allows the receiver to wait before sending the ACK ... in the hope that the host will have data to send.
- Example: rlogin or telnet. Host A types characters at a UNIX prompt; host B receives the character and executes a command ... and then data are generated. It would be nice if B could send the ACK with the new data.
[Figure: timeline between A and B showing Data, Data+ACK, and bare ACK exchanges.]

Delayed ACK
- Delay sending an ACK: upon receiving a packet, host B sets a timer, typically 200 msec or 500 msec.
- If B’s application generates data, go ahead and send, and piggyback the ACK bit.
- If the timer expires, send a (non-piggybacked) ACK.
- Limiting the wait: a timer of 200 msec or 500 msec; ACK every other full-sized packet.
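The delayed-ACK rules above can be sketched as a small state machine. The 200 ms timer value comes from the slide; the event-handler structure and names are my own simplification:

```python
DELAY = 0.2  # 200 ms ACK timer, per the slide

# Sketch: receiver-side delayed-ACK bookkeeping.
class DelayedAck:
    def __init__(self):
        self.pending = 0      # full-sized segments received but not yet ACKed
        self.deadline = 0.0

    def on_segment(self, now):
        self.pending += 1
        if self.pending >= 2:          # ACK every other full-sized packet
            return self._ack()
        self.deadline = now + DELAY    # otherwise arm the 200 ms timer
        return None

    def on_timer(self, now):
        if self.pending and now >= self.deadline:
            return self._ack()         # timer expired: send a bare ACK
        return None

    def on_app_data(self):
        if self.pending:
            return self._ack()         # piggyback the ACK on outgoing data
        return None

    def _ack(self):
        self.pending = 0
        return "ACK"
```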

Conclusions
- Congestion is inevitable: the Internet does not reserve resources in advance, and TCP actively tries to push the envelope.
- Congestion can be handled: additive increase, multiplicative decrease; slow start, and slow-start restart.
- Active Queue Management can help: Random Early Detection (RED); Explicit Congestion Notification (ECN).