
Administrivia
Review 1: specific instructions to structure the review are in lecture 1, slides 26/27
Office hours moving to Thursday 10—12
MUD: send me your top 1—3 questions on this lecture
Think about use cases and directions for your project!

Congestion Control
Lecture 4, Computer Networks (198:552)

Edge vs. core division of labor
Core: best-effort packet delivery
Edge: end-to-end guarantees to applications
Transport should figure out how to use the core effectively

The role of the endpoint
Network discovery and bootstrapping: How does the endpoint join the network? How does it get an address?
Interface to networked applications: What interface to higher-level applications? How does the host realize that abstraction?
Distributed resource sharing: What roles does the host play in network resource allocation decisions?

Network model
Packet-switched core network carrying flows
Each link has a rate (capacity) and a queuing delay
Bottleneck queue of maximum size B

TCP: Reliable & ordered transport
Reliable delivery using ACKs
Data ordered using sequence numbers
Sender holds data in a transmit buffer; receiver assembles data in order in a receive buffer before delivering it to the application

But what about performance?
Throughput: the width of the pipe (MBytes/sec)
Delay: the length of the pipe (milliseconds)
Packet drop: leaks from the pipe (percentage of packets)

Persistent queue buildup: what happens at a queue?
Knee: the load beyond which throughput increases only very slowly while delay increases fast
Cliff: the load beyond which throughput decreases very fast toward zero and delay approaches infinity
For an M/M/1 queue, delay = 1/(1 − utilization)
(Figure: throughput and delay vs. load, with the knee, the cliff, packet loss, and congestion collapse marked)
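
The M/M/1 delay formula on the slide can be checked numerically; a minimal sketch (delay normalized so that one service time = 1 unit):

```python
def mm1_delay(utilization: float) -> float:
    """Normalized mean delay of an M/M/1 queue: 1 / (1 - utilization)."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return 1.0 / (1.0 - utilization)

# Delay grows gently below the knee, then explodes as load nears capacity.
for rho in (0.1, 0.5, 0.9, 0.99):
    print(f"utilization={rho:.2f}  delay={mm1_delay(rho):.1f}")
```

At 50% load the delay has merely doubled; at 99% load it is a hundredfold, which is the "delay approaches infinity" behavior near the cliff.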

Flow control vs. congestion control vs. congestion avoidance
Flow control: receiver tells sender how much it can accept in its buffer
Congestion avoidance: trying to stay to the left of the knee
Congestion control: trying to stay to the left of the cliff
Final window = min(receiver window, congestion window)

How should an endpoint transmit data … and not overwhelm queues?

Control loop
Congestion window: transmitted yet unACKed data
How should an endpoint adapt its congestion window based on congestion signals provided by the network?

Congestion signals
Explicit network signals:
Send a packet back to the source (e.g., ICMP source quench)
Set a bit in the header (e.g., ECN)
Send direct feedback (e.g., a sending rate)
Unless implemented on every router, still need an end-to-end signal
Implicit network signals:
Loss (e.g., TCP New Reno, SACK)
Delay (e.g., TCP Vegas)
Easily deployable, but how robust? What about wireless?

Exercise
Buffer size B at the bottleneck queue, link capacity C, propagation delay T
What's the congestion window for a flow at the knee? At the cliff?
Is there a network buffer size that's always sufficient?
Is there a receiver buffer size that's always sufficient?
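
One common way to reason about this exercise (a sketch, not the official solution): at the knee a single flow just fills the pipe, so cwnd ≈ C × RTT = C × 2T, the bandwidth-delay product; at the cliff the bottleneck buffer is also full, so cwnd ≈ C × 2T + B.

```python
def knee_window(capacity_bps: float, prop_delay_s: float) -> float:
    """Window (in bits) that just fills the pipe: the bandwidth-delay product."""
    rtt = 2 * prop_delay_s                  # forward + return propagation
    return capacity_bps * rtt

def cliff_window(capacity_bps: float, prop_delay_s: float,
                 buffer_bits: float) -> float:
    """Window beyond which the bottleneck buffer overflows and packets drop."""
    return knee_window(capacity_bps, prop_delay_s) + buffer_bits

# Example values (illustrative): 10 Mbit/s link, 50 ms one-way delay,
# 125 kbit of bottleneck buffering.
print(knee_window(10e6, 0.05))              # 1,000,000 bits
print(cliff_window(10e6, 0.05, 125e3))      # 1,125,000 bits
```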

Congestion Avoidance and Control ACM SIGCOMM ‘88 Van Jacobson and Michael Karels

One possible set of goals
Equilibrium: stay close to the "cliff"
At equilibrium, don't send a new packet until the current one leaves the network
Return to equilibrium if you go "over" the cliff
Try to approach the cliff if below it
Share bandwidth "fairly" across flows
(Figure: throughput vs. load, with the knee, cliff, packet loss, and congestion collapse marked)

TCP New Reno: Mechanisms
Packet conservation
Reacting to loss: multiplicative decrease
Probing for more throughput: additive increase
Getting started: slow start
A better retransmission timeout
Fast retransmit
Fast recovery

TCP congestion control
Additive increase, multiplicative decrease (AIMD)
On packet loss, divide the congestion window in half
On success for the last window, increase the window linearly
(Figure: the window-vs.-time "sawtooth", with the window halved at each loss)
Mechanisms not shown: slow start, fast retransmit, recovery, timeout loss, etc.
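
The sawtooth can be reproduced with a toy simulation; the loss model here (drop whenever the window exceeds a fixed capacity) is an assumption for illustration, not part of the slide:

```python
def aimd_trace(capacity: int, rounds: int, cwnd: int = 1):
    """Per-RTT congestion window under AIMD with a fixed-capacity loss model."""
    trace = []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd > capacity:          # loss: multiplicative decrease
            cwnd = max(1, cwnd // 2)
        else:                        # success: additive increase
            cwnd += 1
    return trace

print(aimd_trace(capacity=8, rounds=16))
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 4, 5, 6, 7, 8, 9, 4]
```

The trace climbs linearly, overshoots the capacity, halves, and repeats: the classic sawtooth.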

Discussion of Jacobson's paper
Is loss always congestive? Wireless networks? Errors and corruption?
Is loss the best congestion signal? How about delays? Explicit congestion notifications from the network?
Is packet conservation always a good thing? Differences in flow propagation delays? Detect incipient congestion?
Why AIMD?

Why AIMD? Some thoughts from Chiu and Jain's '89 paper, [CJ89]: "Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks"

Efficient allocation
Too slow? Underutilize the network
Too fast? High delays, lose packets
Every endpoint is doing it: may all under/overshoot, causing large oscillations
Optimal: Σᵢ xᵢ = Xgoal, e.g., the link capacity
Efficiency = 1 − distance from the efficiency line
(Figure: two-user example; the efficiency line x1 + x2 = Xgoal separates underload from overload)

Fair allocation
Max-min fairness: flows which share the same bottleneck get the same amount of bandwidth
Assumes no knowledge of priorities
Fairness = 1 − distance from the fairness line
(Figure: two-user example; the fairness line x1 = x2, with regions where user 1 or user 2 gets too much)

Linear window adaptation rules
xi(t): window or rate of the ith user at time t
aI, aD, bI, bD: constant increase/decrease coefficients
Increase: xi(t+1) = aI + bI·xi(t); decrease: xi(t+1) = aD + bD·xi(t)
All users receive the same network feedback
Binary feedback: sense congestion or available capacity
All users increase or decrease simultaneously

Multiplicative increase, additive decrease (MIAD)
Does not converge to fairness; not stable at all
Does not converge to efficiency
(Figure: two-user trajectory moving away from the fairness line)

Additive increase, additive decrease (AIAD)
Does not converge to fairness
Does not converge to efficiency, but is stable
(Figure: two-user trajectory moving parallel to the fairness line, oscillating around the efficiency line)

Multiplicative increase, multiplicative decrease (MIMD)
Does not converge to fairness, but is stable
Converges to efficiency
(Figure: two-user trajectory moving along a ray through the origin, oscillating around the efficiency line)

Additive increase, multiplicative decrease (AIMD)
Converges to fairness and converges to efficiency
Increments get smaller as fairness increases; effect on metrics?
(Figure: two-user trajectory; additive increase moves (x1, x2) to (x1 + aI, x2 + aI), multiplicative decrease to (bD·x1, bD·x2), spiraling in toward the intersection of the efficiency and fairness lines)
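
Chiu and Jain's vector-diagram argument can be checked with a small simulation (the coefficients aI = 1, bD = 0.5 are illustrative assumptions): starting from a very unfair allocation, synchronized AIMD drives Jain's fairness index toward 1 while keeping the link near capacity.

```python
def jain_fairness(x1: float, x2: float) -> float:
    """Jain's fairness index for two users (1.0 means perfectly fair)."""
    return (x1 + x2) ** 2 / (2 * (x1 ** 2 + x2 ** 2))

def aimd_two_users(x1: float, x2: float, capacity: float,
                   a_i: float = 1.0, b_d: float = 0.5, steps: int = 200):
    """Synchronized binary feedback: both users add a_i while the link is
    underloaded, and both multiply by b_d when it is overloaded."""
    for _ in range(steps):
        if x1 + x2 > capacity:       # overload: multiplicative decrease
            x1, x2 = b_d * x1, b_d * x2
        else:                        # underload: additive increase
            x1, x2 = x1 + a_i, x2 + a_i
    return x1, x2

x1, x2 = aimd_two_users(1.0, 30.0, capacity=40.0)
print(jain_fairness(1.0, 30.0), jain_fairness(x1, x2))
```

Each additive step improves the ratio between the users while each multiplicative step preserves it, which is exactly why the trajectory spirals into the fair, efficient point.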

Significance of AIMD / [CJ89]
Converges to efficiency and fairness
Easily deployable: feedback is binary
Fully distributed: no need to know the full state of the system (e.g., number of users, link rates)
The [CJ89] result enabled the Internet to grow beyond the '80s: a scalable Internet required fully distributed congestion control
Formed the basis of TCP as we know it

Modeling
Critical to understanding complex systems
The [CJ89] model is relevant even today (since '89!), despite a 10^6 increase in bandwidth and a 1000x increase in the number of users
Criteria for good models: realistic, simple, easy to work with, easy for others to understand
A realistic but complex model may be useless; an unrealistic but simple model may still teach something!

Congestion Control for High Bandwidth-Delay Product Networks ACM SIGCOMM ‘02 Dina Katabi, Mark Handley, and Charlie Rohrs

Key ideas
An explicit control system running in the network
Give direct, multi-bit feedback on transmission windows/rates to endpoints
Separate efficiency control from fairness
(Figure: a router tells the sender "use window X"; the receiver carries X back in the ACK)

XCP efficiency control loop
Router computes aggregate feedback from the spare bandwidth S and the persistent queue Q; in the XCP paper, φ = α·d·S − β·Q, with d the average RTT
Sender (roughly): cwnd = cwnd + feedback
Why do you need both terms of this equation?

XCP fairness rules
If the aggregate feedback is > 0, the increase in throughput of all flows must be the same
If it is < 0, the decrease in throughput must be proportional to a flow's current throughput
Bandwidth shuffling: if too close to convergence, perturb by adding a small window to over-utilize the link

Discussion of XCP
What window increase/decrease rules are used by XCP? Is it fair?
Why couldn't you separate efficiency & fairness in TCP New Reno?
Is generating multi-bit feedback harder than binary? What about the end-to-end argument?
Deployment concerns? Ensuring the control loop runs at every bottleneck router? Selfish or rogue endpoints?
Should you necessarily push queue size to zero? Is it always possible to keep the pipe full?
How to accommodate noise in measurements or window sizes?

Some questions to think about…
What should networks provide as feedback for endpoints?
Should the network operate at the cliff, the knee, or elsewhere?
What is a good definition of fairness? Take into account: demands? usage of multiple resources? app goals?
What about hosts who cheat to hog resources? How to detect cheating? How to prevent/punish?
What about wireless networks? Loss caused by interference, not just congestion; difficulty of detecting collisions (due to fading)

Backup slides

Proposition 1 from [Chiu and Jain ’89] In order to satisfy the requirements of distributed convergence to efficiency and fairness without truncation, the linear decrease policy should be multiplicative, and the linear increase policy should always have an additive component, and optionally it may have a multiplicative component with the coefficient no less than one.
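
In the paper's linear-control notation (increase: x(t+1) = aI + bI·x(t); decrease: x(t+1) = aD + bD·x(t)), the proposition's conditions can be written out as follows (a transcription, assuming [CJ89]'s coefficient conventions):

```latex
% Proposition 1 coefficient conditions, [CJ89] notation.
\begin{align*}
  \text{Decrease (multiplicative, no additive part):} \quad & a_D = 0,\; 0 < b_D < 1 \\
  \text{Increase (additive part required):} \quad & a_I > 0,\; b_I \ge 1
\end{align*}
```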

Fast Retransmit Don’t wait for window to drain Resend a segment after 3 duplicate ACKs remember a duplicate ACK means that an out-of sequence segment was received Notes: duplicate ACKs due to packet reordering why reordering? iwindow may be too small to get duplicate ACKs segment 1 cwnd = 1 ACK 2 cwnd = 2 segment 2 segment 3 ACK 3 ACK 4 cwnd = 4 segment 4 segment 5 segment 6 ACK 4 segment 7 ACK 4 3 duplicate ACKs ACK 4

Fast Recovery
After a fast retransmit, set cwnd to half its previous value (the new ssthresh); i.e., don't reset cwnd to 1
But when the RTO expires, still do cwnd = 1
Fast retransmit and fast recovery are implemented by TCP Reno, the most widely used version of TCP today

Ethernet back-off mechanism
Carrier sense: wait for the link to be idle; if idle, start sending, otherwise wait until idle
Collision detection: listen while transmitting; on a collision, abort the transmission and send a jam signal
Exponential back-off: wait a random time before retransmitting, exponentially larger on each retry
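
The exponential back-off rule can be sketched as follows (classic Ethernet caps the exponent at 10; the slot time is omitted for simplicity):

```python
import random

def backoff_slots(attempt: int, max_exponent: int = 10) -> int:
    """Slots to wait before retry `attempt` under binary exponential backoff.

    After the n-th collision, classic Ethernet waits a uniform random number
    of slot times in [0, 2^min(n, 10) - 1], so the worst-case wait doubles
    with every collision.
    """
    k = min(attempt, max_exponent)
    return random.randrange(2 ** k)

for n in range(1, 5):
    print(f"collision {n}: wait in [0, {2 ** min(n, 10) - 1}] slots")
```

Doubling the random range halves the chance of repeated collisions on each retry, which is what makes the scheme stable under contention.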