EECS 122: Introduction to Computer Networks TCP Variations


EECS 122: Introduction to Computer Networks TCP Variations Computer Science Division Department of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA 94720-1776

Today’s Lecture: 11 (diagram: the protocol stack – Application, Transport, Network (IP), Link, Physical – annotated with the lecture numbers that cover each layer)

Outline
TCP congestion control: quick review, TCP flavors, equation-based congestion control, impact of losses, cheating
Router-based support: RED, ECN, Fair Queueing, XCP

Quick Review
Slow start: cwnd++ upon every new ACK
Congestion avoidance (AIMD), once cwnd > ssthresh: on each ACK, cwnd = cwnd + 1/cwnd; on drop, ssthresh = cwnd/2 and cwnd = 1
Fast recovery: on duplicate ACKs, cwnd = cwnd/2; on timeout, cwnd = 1
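The window rules above can be sketched as follows. This is a minimal model counting cwnd in segments, not a real TCP implementation; the initial ssthresh of 64 is an arbitrary illustration.

```python
class TcpWindow:
    """Toy model of the slow-start / AIMD rules reviewed above."""

    def __init__(self):
        self.cwnd = 1.0
        self.ssthresh = 64.0   # illustrative initial value

    def on_new_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0               # slow start: cwnd++ per new ACK
        else:
            self.cwnd += 1.0 / self.cwnd   # congestion avoidance: ~+1 per RTT

    def on_dup_acks(self):                 # fast recovery on duplicate ACKs
        self.ssthresh = self.cwnd / 2
        self.cwnd = self.cwnd / 2

    def on_timeout(self):                  # timeout: back to cwnd = 1
        self.ssthresh = self.cwnd / 2
        self.cwnd = 1.0
```

Starting from cwnd = 1, six new ACKs in slow start grow the window to 7; a timeout then halves ssthresh and resets cwnd to 1.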

TCP Flavors
TCP-Tahoe: cwnd = 1 whenever a drop is detected
TCP-Reno: cwnd = 1 on timeout; cwnd = cwnd/2 on duplicate ACKs
TCP-newReno: TCP-Reno + improved fast recovery
TCP-Vegas, TCP-SACK: next slides

TCP Vegas
Improved timeout mechanism
Decrease cwnd only for losses of packets sent at the current rate: avoids reducing the rate twice
Congestion avoidance phase: compare the actual rate (A) to the expected rate (E); if E - A > β, decrease cwnd linearly; if E - A < α, increase cwnd linearly
Rate measurements ~ delay measurements; see textbook for details!
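A rough sketch of the Vegas decision rule above, under the usual convention that the α/β thresholds are expressed in packets, so the rate difference E - A is scaled by the base RTT; the default values of 1 and 3 here are illustrative assumptions.

```python
def vegas_adjust(cwnd, base_rtt, rtt, alpha=1.0, beta=3.0):
    """One Vegas congestion-avoidance step (sketch, not a real stack)."""
    expected = cwnd / base_rtt             # E: rate if no queueing at all
    actual = cwnd / rtt                    # A: rate actually achieved
    diff = (expected - actual) * base_rtt  # extra packets queued in the network
    if diff < alpha:
        return cwnd + 1                    # too little in the queue: increase linearly
    if diff > beta:
        return cwnd - 1                    # too much in the queue: decrease linearly
    return cwnd                            # within [alpha, beta]: hold steady
```

With RTT equal to the base RTT nothing is queued, so the window grows; if the RTT has doubled, about half the window sits in queues and Vegas backs off.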

TCP-SACK SACK = Selective Acknowledgements ACK packets identify exactly which packets have arrived Makes recovery from multiple losses much easier

Standards? How can all these algorithms coexist? Don’t we need a single, uniform standard? What happens if I’m using Reno and you are using Tahoe, and we try to communicate?

Equation-Based CC
Simple scenario: assume a drop every k-th RTT (for some large k); the window evolves w, w+1, w+2, ..., w+k-1, DROP, (w+k-1)/2, (w+k-1)/2 + 1, ...
Observations:
In steady state: w = (w+k-1)/2, so w = k-1
Average window: 1.5(k-1)
Total packets between drops: 1.5k(k-1)
Drop probability: p = 1/[1.5k(k-1)]
Throughput: T ~ (1/RTT)*sqrt(3/(2p))
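The derivation above can be checked numerically: for a given k, compute p and the exact sawtooth throughput, and compare against the closed-form T(p). The RTT value is an arbitrary illustration.

```python
import math

def sawtooth_model(k, rtt=0.1):
    """One drop every k-th RTT: returns (p, exact throughput, formula throughput)."""
    w = k - 1                                # steady state: w = (w + k - 1)/2
    avg_window = 1.5 * (k - 1)               # window averages between w and 2w
    pkts_between_drops = 1.5 * k * (k - 1)   # sum of the sawtooth
    p = 1.0 / pkts_between_drops             # one drop per cycle
    t_exact = avg_window / rtt               # packets per second
    t_formula = (1.0 / rtt) * math.sqrt(3.0 / (2.0 * p))
    return p, t_exact, t_formula
```

For large k the two throughput numbers agree closely (within about 1% at k = 100), which is why the equation can stand in for the AIMD dynamics.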

Equation-Based CC
Idea: forget complicated increase/decrease algorithms; use the equation T(p) directly!
Approach: measure the drop rate (don’t need ACKs for this); send the drop rate p to the source; the source sends at rate T(p)
Good for streaming audio/video that can’t tolerate the high variability of TCP’s sending rate

Question! Why use the TCP equation? Why not use any equation for T(p)?

Cheating
Three main ways to cheat:
increasing cwnd faster than 1 per RTT
using a large initial cwnd
opening many connections

Increasing cwnd Faster
(diagram: flows x and y sharing a bottleneck link of capacity C)
x increases by 2 per RTT; y increases by 1 per RTT. Limit rates: x = 2y

Larger Initial cwnd
(diagram: hosts A and D sending flows x and y to B and E over a shared link)
x starts slow start with cwnd = 4; y starts slow start with cwnd = 1

Open Many Connections
(diagram: hosts A and D sending to B and E over a shared link)
Assume A starts 10 connections to B, and D starts 1 connection to E. Each connection gets about the same throughput; then A gets 10 times more throughput than D
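The arithmetic above, as a small helper. The host names and the capacity normalized to 1 are illustrative assumptions.

```python
def per_host_share(conns_per_host, capacity=1.0):
    """If every connection gets an equal share of the bottleneck,
    a host's throughput is proportional to its connection count."""
    total = sum(conns_per_host.values())
    return {host: capacity * n / total for host, n in conns_per_host.items()}

shares = per_host_share({"A": 10, "D": 1})
# A receives 10/11 of the link, D receives 1/11: a 10x advantage
```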

Cheating and Game Theory
Payoffs (x, y) when flows A and D each increase their window by 1 or by 5 per RTT:

                      D increases by 1    D increases by 5
A increases by 1          22, 22              10, 35
A increases by 5          35, 10              15, 15

Too aggressive: losses, and throughput falls
Individual incentives: cheating pays. Social incentives: better off without cheating
Classic prisoner’s dilemma: resolution depends on accountability

Lossy Links TCP assumes that all losses are due to congestion What happens when the link is lossy? Recall that Tput ~ 1/sqrt(p) where p is loss prob. This applies even for non-congestion losses

Example
(plot: TCP throughput over a lossy link for p = 0, p = 1%, and p = 10%)
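Plugging the two nonzero loss rates into T ~ (1/RTT)*sqrt(3/(2p)) shows the gap; this is illustrative only, and p = 0 is omitted because the formula diverges there (no loss means no congestion signal at all).

```python
import math

def model_tput(p, rtt=0.1):
    """TCP throughput model T(p) in packets/s; rtt is an arbitrary example."""
    return (1.0 / rtt) * math.sqrt(3.0 / (2.0 * p))

t_1pct = model_tput(0.01)
t_10pct = model_tput(0.10)
# Going from 1% to 10% loss cuts throughput by a factor of sqrt(10) ~ 3.16,
# even if none of those losses were caused by congestion.
```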

What can routers do to help?

Paradox
Routers are in the middle of the action, but traditional routers are very passive in terms of congestion control: FIFO scheduling and drop-tail buffer management

FIFO: First-In First-Out
Maintain a queue to store all packets; send the packet at the head of the queue
(diagram: arriving packet joins the tail of the queued packets; the head-of-queue packet is next to transmit)

Tail-drop Buffer Management
Drop packets only when the buffer is full: the arriving packet is dropped
(diagram: a full queue; the arriving packet is dropped at the tail)

Ways Routers Can Help
Packet scheduling: non-FIFO scheduling
Packet dropping: not drop-tail, i.e., not only when the buffer is full
Congestion signaling

Question! Why not use infinite buffers? no packet drops!

The Buffer Size Quandary Small buffers: often drop packets due to bursts but have small delays Large buffers: reduce number of packet drops (due to bursts) but increase delays Can we have the best of both worlds?

Random Early Detection (RED) Basic premise: router should signal congestion when the queue first starts building up (by dropping a packet) but router should give flows time to reduce their sending rates before dropping more packets Therefore, packet drops should be: early: don’t wait for queue to overflow random: don’t drop all packets in burst, but space drops out

RED
FIFO scheduling. Buffer management: probabilistically discard packets; the probability is computed as a function of the average queue length (why average?)
(figure: discard probability vs. average queue length, with thresholds min_th and max_th)

RED (cont’d)
min_th – minimum threshold
max_th – maximum threshold
avg_len – average queue length: avg_len = (1-w)*avg_len + w*sample_len
(figure: discard probability vs. average queue length)

RED (cont’d)
If avg_len < min_th → enqueue the packet
If avg_len > max_th → drop the packet
If min_th <= avg_len < max_th → drop the packet with probability P, otherwise enqueue it
(figure: discard probability P vs. average queue length)

RED (cont’d)
P = max_P*(avg_len – min_th)/(max_th – min_th)
Improvements to spread the drops (see textbook)
(figure: discard probability rising linearly from 0 at min_th to max_P at max_th)
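A compact sketch of the RED logic from the last few slides: the EWMA of the queue length, the two thresholds, and the linear drop probability between them. The threshold and weight values are illustrative, not recommended settings.

```python
import random

class Red:
    """Sketch of RED buffer management (not tuned for any real link)."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, w=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.w = max_p, w
        self.avg_len = 0.0

    def on_arrival(self, queue_len):
        # EWMA of the instantaneous queue length (why average? to absorb bursts)
        self.avg_len = (1 - self.w) * self.avg_len + self.w * queue_len
        if self.avg_len < self.min_th:
            return "enqueue"
        if self.avg_len > self.max_th:
            return "drop"
        # between the thresholds: drop probability grows linearly
        p = self.max_p * (self.avg_len - self.min_th) / (self.max_th - self.min_th)
        return "drop" if random.random() < p else "enqueue"
```

An empty router enqueues everything; once the average queue length sits above max_th, every arrival is dropped.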

Average vs Instantaneous Queue

RED Advantages High network utilization with low delays Average queue length small, but capable of absorbing large bursts Many refinements to basic algorithm make it more adaptive (requires less tuning)

Explicit Congestion Notification Rather than drop packets to signal congestion, router can send an explicit signal Explicit congestion notification (ECN): instead of optionally dropping packet, router sets a bit in the packet header If data packet has bit set, then ACK has ECN bit set Backward compatibility: bit in header indicates if host implements ECN note that not all routers need to implement ECN

Picture
(figure: the congestion window sawtooth oscillating between W/2 and W for hosts A and B)

ECN Advantages No need for retransmitting optionally dropped packets No confusion between congestion losses and corruption losses

Remaining Problem
The Internet is vulnerable to CC cheaters! And a single CC standard can’t satisfy all applications (equation-based CC might answer this point)
Goal: make the Internet invulnerable to cheaters while allowing end users to use whatever congestion control they want. How?

One Approach: Nagle (1987)
Round-robin among different flows, with one queue per flow

Round-Robin Discussion Advantages: protection among flows Misbehaving flows will not affect the performance of well-behaving flows Misbehaving flow – a flow that does not implement any congestion control FIFO does not have such a property Disadvantages: More complex than FIFO: per flow queue/state Biased toward large packets – a flow receives service proportional to the number of packets

Solution? Bit-by-bit round robin Can you do this in practice? No, packets cannot be preempted (why?) …we can only approximate it

Fair Queueing (FQ) Define a fluid flow system: a system in which flows are served bit-by-bit Then serve packets in the increasing order of their deadlines Advantages Each flow will receive exactly its fair rate Note: FQ achieves max-min fairness

Max-Min Fairness
Denote: C – link capacity; N – number of flows; r_i – arrival rate of flow i
Max-min fair rate computation:
1. compute C/N
2. if there are flows i such that r_i <= C/N, remove them (subtract their r_i from C, decrease N) and go to step 1
3. otherwise, f = C/N; terminate
A flow can receive at most the fair rate, i.e., min(f, r_i)

Example
C = 10; r1 = 8, r2 = 6, r3 = 2; N = 3
C/3 = 3.33, so flow 3 is satisfied: C = C – r3 = 8; N = 2
C/2 = 4 and no remaining flow is below it, so f = 4
Allocations: min(8, 4) = 4; min(6, 4) = 4; min(2, 4) = 2
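The computation above, as a short function; a sketch of the iterative algorithm, not production code.

```python
def max_min_fair_rate(capacity, rates):
    """Iterative max-min fair rate: remove satisfied flows, recompute the share."""
    if not rates:
        return 0.0
    c, remaining = float(capacity), sorted(rates)
    share = c / len(remaining)
    while remaining:
        share = c / len(remaining)                      # current C/N
        satisfied = [r for r in remaining if r <= share]
        if not satisfied:
            break                                       # f = C/N; terminate
        for r in satisfied:                             # satisfied flows keep r_i
            c -= r
        remaining = [r for r in remaining if r > share]
    return share
```

Running the slide's example, `max_min_fair_rate(10, [8, 6, 2])` returns f = 4, and each flow receives min(f, r_i): 4, 4, and 2, which exactly fills the 10-unit link.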

Implementing Fair Queueing Idea: serve packets in the order in which they would have finished transmission in the fluid flow system

Example
(figure: arrival traffic for flows 1 and 2, their service in the fluid flow system, and the resulting transmission order in the packet system)

System Virtual Time: V(t)
Measure service, instead of time. The slope of V(t) is the rate at which every active flow receives service: dV/dt = C/N(t), where C is the link capacity and N(t) is the number of active flows in the fluid flow system at time t
(figure: V(t) over time for the fluid-flow service example)

Fair Queueing Implementation
Define:
F_i^k – finishing time of packet k of flow i (in the system virtual time reference system)
a_i^k – arrival time of packet k of flow i
L_i^k – length of packet k of flow i
The finishing time of packet k+1 of flow i is F_i^{k+1} = max(V(a_i^{k+1}), F_i^k) + L_i^{k+1}
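The update rule above in code form. The virtual time V(a) must come from a surrounding virtual-time computation (advancing at rate C/N(t)), which is omitted here; the two-flow numbers below are a made-up illustration.

```python
def finish_time(prev_finish, virtual_arrival, length):
    """F_i^{k+1} = max(F_i^k, V(a_i^{k+1})) + L_i^{k+1} (virtual time units)."""
    return max(prev_finish, virtual_arrival) + length

# Two flows, unit-length packets. Flow 1 sends two packets back-to-back
# starting at virtual time 0; flow 2's packet arrives at virtual time 0.5.
f1_p1 = finish_time(0.0, 0.0, 1.0)    # flow 1, packet 1
f1_p2 = finish_time(f1_p1, 0.0, 1.0)  # flow 1, packet 2
f2_p1 = finish_time(0.0, 0.5, 1.0)    # flow 2, packet 1
# Serving in increasing finish-time order interleaves the flows:
# f1_p1, then f2_p1, then f1_p2.
```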

FQ Advantages
FQ protects well-behaved flows from ill-behaved flows
Example: 1 UDP flow (10 Mbps) and 31 TCPs sharing a 10 Mbps link

Alternative Implementations of Max-Min
Deficit round-robin
Core-stateless fair queueing: label packets with a rate; drop according to rates; check at the ingress to make sure rates are truthful
Approximate fair dropping: keep a small sample of previous packets; estimate rates from the sample; apply dropping as above; wins because there are few large flows (per-elephant state, not per-mouse state)
RED-PD: not max-min, but punishes big cheaters
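As a concrete illustration of the deficit round-robin bullet, a minimal sketch: each flow accumulates a quantum of credit per round and may send packets only while it has enough credit. The function name, quantum value, and interface are hypothetical.

```python
from collections import deque

def drr_schedule(queues, quantum, rounds):
    """queues: list of deques of packet sizes (bytes).
    Returns the service order as (flow index, packet size) pairs."""
    deficits = [0] * len(queues)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0            # idle flows do not bank credit
                continue
            deficits[i] += quantum         # one quantum of credit per round
            while q and q[0] <= deficits[i]:
                pkt = q.popleft()
                deficits[i] -= pkt
                sent.append((i, pkt))
    return sent
```

With a 500-byte quantum, a flow of 500-byte packets sends one packet per round, while a flow of 1000-byte packets must save up for two rounds; over time both get equal byte-rates, which removes round-robin's bias toward large packets.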

Big Picture FQ does not eliminate congestion  it just manages the congestion You need both end-host congestion control and router support for congestion control end-host congestion control to adapt router congestion control to protect/isolate

Explicit Rate Signaling (XCP)
Each packet contains: cwnd, RTT, and a feedback field
Routers indicate to flows whether to increase or decrease, giving explicit amounts; the feedback is carried back to the source in the ACK
Separation of concerns: aggregate load vs. allocation among flows

XCP (continued)
Aggregate: measures the spare capacity S and the average queue size Q; computes the desired aggregate change D = a*R*S – b*Q, where R is the average RTT
Allocation: uses AIMD; positive feedback is the same for all flows; negative feedback is proportional to the current rate; when D = 0, reshuffle bandwidth; all changes are normalized by RTT (want equal rates, not equal windows)
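The aggregate computation above, sketched directly. The constants a = 0.4 and b = 0.226 are the stability values from the XCP paper; treat their use here, and the unit choices, as assumptions of this sketch.

```python
def xcp_aggregate_feedback(avg_rtt, spare_capacity, queue, a=0.4, b=0.226):
    """Desired aggregate rate change D = a*R*S - b*Q.
    avg_rtt in seconds, spare_capacity in packets/s, queue in packets."""
    return a * avg_rtt * spare_capacity - b * queue
```

When there is spare capacity and no standing queue, D is positive and flows are told to speed up; when the link is full and a queue persists, D goes negative and flows are told to slow down.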

XCP (continued)
Challenge: how to give per-flow feedback without per-flow state? Do you keep track of which flows you’ve signaled and which you haven’t?
Solution: figure out the desired change; divide it by the expected number of packets from the flow in the time interval; give each packet its share of the rate adjustment; the flow totals up all the rate adjustments