TCP, XCP and Fair Queueing

TCP, XCP and Fair Queueing
Supplemental slides, 02/23/2007
Aditya Akella

How does XCP Work?
Each packet carries a congestion header with three fields: Round Trip Time, Congestion Window, and Feedback.
The sender fills in its current RTT and congestion window and requests the window change it wants, e.g. Feedback = +0.1 packet.

How does XCP Work?
Routers along the path adjust the Feedback field as the packet travels; a congested router can reduce the request, e.g. from +0.1 packet to -0.3 packet, while the Round Trip Time and Congestion Window fields pass through unchanged.

How does XCP Work?
The receiver echoes the final Feedback value back to the sender, which applies it directly:
Congestion Window = Congestion Window + Feedback
XCP extends ECN and CSFQ.
Routers compute the feedback without any per-flow state.
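As a rough illustration of the end-host side, here is a minimal Python sketch. The field names and the packet-denominated feedback are simplifying assumptions; the real XCP header carries byte-denominated feedback and additional fields.

```python
from dataclasses import dataclass

@dataclass
class CongestionHeader:
    rtt: float        # sender's current RTT estimate (seconds)
    cwnd: float       # sender's current congestion window (packets)
    feedback: float   # desired window change (packets); routers may only reduce it

def sender_stamp(rtt, cwnd, desired_increase=0.1):
    """Sender fills the header with its state and an optimistic request."""
    return CongestionHeader(rtt=rtt, cwnd=cwnd, feedback=desired_increase)

def sender_on_ack(cwnd, echoed_feedback, min_cwnd=1.0):
    """Receiver echoes the (possibly reduced) feedback; the sender applies
    Congestion Window = Congestion Window + Feedback."""
    return max(cwnd + echoed_feedback, min_cwnd)

# Example: a router along the path reduced the request from +0.1 to -0.3 packets.
new_cwnd = sender_on_ack(cwnd=10.0, echoed_feedback=-0.3)   # -> 9.7 packets
```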

How Does an XCP Router Compute the Feedback?
Congestion Controller (MIMD)
Goal: match input traffic to link capacity and drain the queue.
Looks at aggregate traffic and queue size.
Algorithm: change aggregate traffic by Φ, where Φ ~ spare bandwidth and Φ ~ - queue size, so
Φ = α · d_avg · Spare - β · Queue
Fairness Controller (AIMD)
Goal: divide Φ between flows to converge to fairness.
Looks at a flow's state in the congestion header.
Algorithm:
–If Φ > 0, divide Φ equally between flows.
–If Φ < 0, divide Φ between flows proportionally to their current rates.

Getting the devil out of the details …
Congestion Controller:
Φ = α · d_avg · Spare - β · Queue
Theorem: the system converges to optimal utilization (i.e., is stable) for any link bandwidth, delay, and number of sources if 0 < α < π/(4√2) and β = α²√2. (Proof based on the Nyquist criterion.)
No parameter tuning.
Fairness Controller:
–If Φ > 0, divide Φ equally between flows.
–If Φ < 0, divide Φ between flows proportionally to their current rates.
Needs to estimate the number of flows N:
N = (1/T) · Σ over packets seen in T of (RTT_pkt / Cwnd_pkt)
where RTT_pkt is the Round Trip Time in the header, Cwnd_pkt is the Congestion Window in the header, and T is the counting interval.
No per-flow state.
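A minimal sketch of the router-side computation, assuming the per-control-interval quantities (d_avg, spare bandwidth, queue size) are already measured; the per-packet shuffling and feedback-allocation machinery of the real protocol is omitted, and split_feedback is a per-flow simplification of the paper's per-packet rule.

```python
ALPHA, BETA = 0.4, 0.226   # constants from the XCP paper; satisfy the stability condition

def aggregate_feedback(d_avg, spare_bw, queue):
    """Congestion controller: Phi = alpha * d_avg * Spare - beta * Queue."""
    return ALPHA * d_avg * spare_bw - BETA * queue

def estimate_flows(headers, T):
    """Fairness controller's flow-count estimate over a counting interval T:
    each flow's packets contribute about T in total to the sum, so
    N ~= (1/T) * sum(RTT_pkt / Cwnd_pkt)."""
    return sum(h.rtt / h.cwnd for h in headers) / T

def split_feedback(phi, flow_rates):
    """Divide Phi across flows: equally when positive, proportionally to
    current rates when negative."""
    if phi >= 0:
        share = phi / len(flow_rates)
        return {f: share for f in flow_rates}
    total = sum(flow_rates.values())
    return {f: phi * r / total for f, r in flow_rates.items()}
```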

Fairness Goals
Allocate resources fairly.
Isolate ill-behaved users:
–Router does not send explicit feedback to the source.
–Still needs e2e congestion control.
Still achieve statistical muxing:
–One flow can fill the entire pipe if there are no contenders.
–Work conserving: the scheduler never idles the link if it has a packet.

What is Fairness?
At what granularity? Flows, connections, domains?
What if users have different RTTs/links/etc.? Should the link be shared fairly or be TCP fair?
Maximize the fairness index?
Fairness = (Σ x_i)² / (n · Σ x_i²),  0 < Fairness ≤ 1
Basically a tough question to answer – typically we design mechanisms instead of policy.
User = arbitrary granularity.
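For concreteness, the index above (Jain's fairness index) can be evaluated directly; this small sketch just computes the formula on a list of per-flow allocations.

```python
def fairness_index(allocs):
    """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2).
    Equals 1.0 when all allocations are equal; approaches 1/n when one flow hogs."""
    n = len(allocs)
    return sum(allocs) ** 2 / (n * sum(x * x for x in allocs))

print(fairness_index([10, 10, 10, 10]))   # 1.0  (perfectly fair)
print(fairness_index([40, 0, 0, 0]))      # 0.25 (one flow gets everything)
```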

Max-min Fairness
Allocate users with "small" demands what they want; evenly divide the unused resources among the "big" users.
Formally:
–Resources are allocated in order of increasing demand.
–No source gets a resource share larger than its demand.
–Sources with unsatisfied demands get an equal share of the resource.

Max-min Fairness Example
Assume sources 1..n, with resource demands X1..Xn in ascending order, and channel capacity C.
Give C/n to X1; if this is more than X1 wants, divide the excess (C/n - X1) among the other sources: each gets C/n + (C/n - X1)/(n-1).
If this is larger than what X2 wants, repeat the process.
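The same procedure can be written as a short progressive-filling loop; a sketch (names and the example demands are illustrative):

```python
def max_min_fair(demands, capacity):
    """Allocate capacity max-min fairly: satisfy small demands first, then
    split the leftover capacity evenly among the still-unsatisfied sources."""
    alloc = [0.0] * len(demands)
    remaining = sorted(range(len(demands)), key=lambda i: demands[i])
    cap = capacity
    while remaining:
        share = cap / len(remaining)
        i = remaining[0]
        if demands[i] <= share:
            alloc[i] = demands[i]          # small demand fully satisfied
            cap -= demands[i]
            remaining.pop(0)
        else:
            for j in remaining:            # everyone left gets an equal share
                alloc[j] = share
            remaining = []
    return alloc

max_min_fair([2, 2.6, 4, 5], 10)   # -> approximately [2, 2.6, 2.7, 2.7]
```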

Implementing Max-min Fairness
Generalized processor sharing:
–Fluid fairness
–Bitwise round robin among all queues
Why not simple (packet-by-packet) round robin?
–Variable packet length: a flow can get more service by sending bigger packets.
–Unfair instantaneous service rate: what if a packet arrives just before/after another departs?

Bit-by-bit RR
Single flow: the clock ticks when a bit is transmitted. For packet i:
–Pi = length, Ai = arrival time, Si = begin transmit time, Fi = finish transmit time
–Fi = Si + Pi = max(Fi-1, Ai) + Pi
Multiple flows: the clock ticks when one bit from every active flow has been transmitted – the round number.
Can calculate Fi for each packet if the number of active flows is known at all times: this is needed to express arrival times A as round numbers, and it can be complicated.
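A tiny worked example of the finish-time recurrence for a single flow, with arrivals and lengths expressed in bit-times; for multiple flows the same recurrence would be applied in round-number units.

```python
def finish_times(packets):
    """F_i = S_i + P_i = max(F_{i-1}, A_i) + P_i for one flow,
    given (arrival, length) pairs in bit-times."""
    f_prev = 0.0
    out = []
    for arrival, length in packets:
        f_prev = max(f_prev, arrival) + length
        out.append(f_prev)
    return out

# Back-to-back packets queue up; a late arrival restarts from its arrival time.
finish_times([(0, 100), (0, 100), (500, 100)])   # -> [100, 200, 600]
```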

Bit-by-bit RR Illustration
Not feasible to interleave bits on real networks.
FQ simulates bit-by-bit RR.

Fair Queuing
Mapping the bit-by-bit schedule onto a packet transmission schedule:
–Transmit the packet with the lowest Fi at any given time.
How do you compute Fi?
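A minimal sketch of a packetized FQ scheduler, assuming the router separately tracks the current round number R(t) (whose growth rate dR/dt depends on the number of active flows, as noted in the tradeoffs slide below); class and parameter names are illustrative.

```python
import heapq

class FairQueueScheduler:
    """Packetized approximation of bit-by-bit RR: stamp each packet with its
    finish number and always transmit the packet with the smallest stamp."""
    def __init__(self):
        self.heap = []            # (finish_number, seq, packet)
        self.last_finish = {}     # per-flow F_{i-1}
        self.seq = 0              # tie-breaker for equal finish numbers

    def enqueue(self, flow, packet, length, round_now):
        f = max(self.last_finish.get(flow, 0.0), round_now) + length
        self.last_finish[flow] = f
        heapq.heappush(self.heap, (f, self.seq, packet))
        self.seq += 1

    def dequeue(self):
        """Return the packet with the lowest finish number, or None if idle."""
        return heapq.heappop(self.heap)[2] if self.heap else None
```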

Delay Allocation
Reduce delay for flows using less than their fair share: advance the finish times of sources whose queues drain temporarily.
Schedule based on Bi instead of Fi:
Fi = Pi + max(Fi-1, Ai)  →  Bi = Pi + max(Fi-1, Ai - δ)
–If Ai < Fi-1, the conversation is active and δ has no effect.
–If Ai > Fi-1, the conversation is inactive and δ determines how much history to take into account.
Infrequent senders do better when history is used.
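The bid is a one-line change to the finish-time formula; a sketch with an illustrative delta parameter:

```python
def bid(prev_finish, arrival_round, length, delta):
    """B_i = P_i + max(F_{i-1}, A_i - delta): for an active flow (A_i < F_{i-1})
    delta has no effect; an inactive flow gets credit for up to delta of recent
    idleness, lowering its bid and hence its delay."""
    return length + max(prev_finish, arrival_round - delta)
```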

Fair Queuing Tradeoffs
FQ can control congestion by monitoring flows: non-adaptive flows can still be a problem – why?
Complex state:
–Must keep a queue per flow.
–Hard in routers with many flows (e.g., backbone routers).
–Flow aggregation is a possibility (e.g., do fairness per domain).
Complex computation:
–Classification into flows may be hard.
–Must keep queues sorted by finish times.
–dR/dt changes whenever the flow count changes.