Congestion Control in Multicast ELEG 667, May 7, 2003, by Keyur Shah, CIS, University of Delaware

TCP-Friendly Congestion Control Schemes
Single-rate: window-based or rate-based
Multi-rate: window-based or rate-based

Single Rate and Multi Rate Single-rate: data is sent to all receivers at the same rate. The rate is typically limited by the slowest receiver; thus a single slow receiver can drag down the data rate of the whole group.

Multi-rate: allows a more flexible allocation of bandwidth along different network paths. The sender generates the same data at different rates over multiple streams, and receivers listen to one or more streams depending on their capacity. Receivers with different needs can thus each be served at a rate closer to their needs.

Window-based and Rate-based Window-based: uses a congestion window. Each transmitted packet consumes a slot in the congestion window, and the acknowledgment of a received packet frees one slot. The sender is allowed to transmit packets only if a slot is free. Rate-based: dynamically adapts the transmission rate according to some network feedback mechanism that indicates congestion.

Issues Consider a single-rate multicast congestion control scheme between one sender and one or more receivers. In multicast, it is typical to use NAKs instead of ACKs to avoid ACK implosion. In addition, NAK suppression is performed so that the network elements forward only one NAK per group of receivers to the upstream router. This results in delayed feedback to the sender.

These delays make the system unresponsive to variations in network conditions, which leads to instability in the network. Another issue besides stability is TCP FRIENDLINESS: unresponsive flows that are slow to react to congestion can drive competing responsive flows (such as TCP) down to a very low throughput.

Pragmatic General Multicast Congestion Control (PGMCC)

PGMCC tries to achieve a faster response and also uses positive ACKs. But having every receiver send an ACK causes a scalability problem. Solution: elect a group representative (called the ACKER) that is responsible for sending positive ACKs. The ACKER is dynamically selected to be the receiver that would have the lowest throughput. Note: due to the dynamics of the network, the receiver with the lowest throughput may change from time to time, i.e. the ACKER will also change.

Window-based controller used by pgmcc A window-based congestion control scheme is run between the sender and the ACKER. It uses AIMD and two variables: window size (W), the current window size in packets, and token count (T), the number of packets that can still be transmitted.

On startup or after a timeout: W = T = 1
On transmit: T = T - 1
On ACK reception: W = W + 1/W, T = T + 1 + 1/W
On loss detection (by dupacks): W = W/2
Like TCP, pgmcc does some exponential opening of the window (slow start), limited to a fixed size of 4-6 packets, primarily to quickly open the window beyond the dupack threshold.
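To make the update rules concrete, here is a minimal Python sketch of the controller described above. Only the W/T update rules come from the slide; the class name and the slow-start cap of 4 packets are illustrative assumptions.

```python
class PgmccWindow:
    """Minimal sketch of the pgmcc window/token rules above (illustrative names)."""

    SLOW_START_CAP = 4  # slides: exponential opening limited to ~4-6 packets (assumed value)

    def __init__(self):
        self.W = 1.0  # window size, in packets
        self.T = 1.0  # token count: packets that may still be transmitted

    def can_send(self):
        return self.T >= 1.0

    def on_transmit(self):
        self.T -= 1.0

    def on_ack(self):
        if self.W < self.SLOW_START_CAP:
            # limited slow start: open the window exponentially up to the cap
            self.W += 1.0
            self.T += 2.0
        else:
            # AIMD additive increase, exactly as on the slide
            self.W += 1.0 / self.W
            self.T += 1.0 + 1.0 / self.W

    def on_loss(self):
        # multiplicative decrease on loss detected via duplicate ACKs
        self.W = self.W / 2.0

    def on_timeout(self):
        self.W = self.T = 1.0
```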

ACKER selection Aim: determine the receiver that would have the lowest throughput. The throughput of each receiver can be estimated from its round-trip time and loss rate. Initial ACKER selection: when receivers get a data packet with no ACKER selected, all receivers generate a dummy NAK report, and the sender selects the source of the first incoming NAK as the new ACKER.
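A tiny sketch of the initial election on the sender side, assuming a simple per-session state dictionary (the names are hypothetical):

```python
# Initial ACKER election: the source of the first NAK that arrives while
# no ACKER is selected becomes the ACKER.
def on_nak_received(session, nak_source):
    if session.get("acker") is None:
        session["acker"] = nak_source

# Example: session = {}; on_nak_received(session, "receiver-R2")
```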

Whenever an ACK or NAK arrives from any receiver i: the sender computes the expected throughput T_i for that receiver using its RTT and loss rate. The sender has already stored the expected throughput of the current ACKER, T_ACKER. If T_i < C * T_ACKER, node i is selected as the new ACKER. Note: C is a constant between 0 and 1, used to avoid too-frequent ACKER changes.
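A sketch of the switch rule. The slide only says the estimate uses RTT and loss rate, so the simplified TCP-friendly model 1/(RTT·sqrt(p)) and the value of C below are assumptions:

```python
import math

C = 0.75  # damping constant, 0 < C < 1; the exact value is an assumption

def estimated_throughput(rtt, loss_rate):
    # simplified TCP-friendly model: throughput ~ 1 / (RTT * sqrt(loss))
    if loss_rate <= 0:
        return float("inf")  # no observed loss: path looks unconstrained
    return 1.0 / (rtt * math.sqrt(loss_rate))

def select_acker(current_acker, i, reports):
    """reports maps receiver id -> (rtt, loss_rate) from its latest ACK/NAK."""
    t_i = estimated_throughput(*reports[i])
    t_acker = estimated_throughput(*reports[current_acker])
    return i if t_i < C * t_acker else current_acker
```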

RTT measurement Explicit timestamp: transmit a timestamp with every data packet. The receiver echoes the most recent timestamp in the ACK or NAK, and the sender computes the RTT by subtracting the echoed timestamp from the current value of its clock.
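For example (a sketch; the packet format and clock source are assumptions):

```python
import time

def stamp_packet(seq, payload):
    # the sender transmits a timestamp with every data packet
    return {"seq": seq, "ts": time.monotonic(), "data": payload}

def rtt_from_echo(echoed_ts):
    # RTT = current clock value minus the timestamp echoed in the ACK/NAK
    return time.monotonic() - echoed_ts
```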

Implicit timestamp: the sender records a timestamp for each data packet, but the timestamp is not transmitted. The receiver reports the most recent sequence number in the ACK or NAK, from which the sender can look up the corresponding timestamp.
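A sketch of the implicit variant, with the timestamp kept only on the sender:

```python
import time

send_times = {}  # per-packet send times kept locally at the sender

def on_send(seq):
    send_times[seq] = time.monotonic()  # recorded but never transmitted

def on_feedback(echoed_seq):
    # the receiver only echoes a sequence number; the sender looks up its timestamp
    return time.monotonic() - send_times[echoed_seq]
```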

Using sequence numbers: the least precise technique, but it does not require a high-resolution clock on the nodes. The sender does not compute any timestamp; the receiver echoes the most recently received sequence number, and the sender computes the RTT as the difference between the most recently sent sequence number and the one echoed in the ACK or NAK. Thus the RTT is expressed in sequence numbers (packets) rather than seconds.
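This variant reduces to a single subtraction; a one-line sketch:

```python
def rtt_in_packets(most_recent_sent_seq, echoed_seq):
    # RTT measured in packets, not seconds: no clock needed on either node
    return most_recent_sent_seq - echoed_seq
```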

Timeouts In TCP, the timeout value is calculated by accumulating SRTT and RTTVAR statistics. PGMCC can use a similar scheme, except that whenever the ACKER changes, the computation of SRTT and RTTVAR must be restarted. An ACKER may also leave the group without notifying the sender; to avoid many successive timeouts due to the absence of an ACKER, a new ACKER election should be performed after TWO successive timeouts.
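A sketch of the accumulation the slide refers to, using the standard TCP smoothing constants (RFC 6298); the reset on ACKER change and the two-timeout rule are the pgmcc-specific parts described above:

```python
class RtoEstimator:
    ALPHA, BETA = 1 / 8, 1 / 4  # standard TCP (RFC 6298) smoothing constants

    def __init__(self):
        self.srtt = None
        self.rttvar = None
        self.successive_timeouts = 0

    def on_rtt_sample(self, r):
        if self.srtt is None:
            self.srtt, self.rttvar = r, r / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - r)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * r
        self.successive_timeouts = 0

    def rto(self):
        return self.srtt + 4 * self.rttvar

    def on_acker_change(self):
        # restart the SRTT/RTTVAR statistics for the new ACKER
        self.__init__()

    def on_timeout(self):
        self.successive_timeouts += 1
        # after TWO successive timeouts, trigger a new ACKER election
        return self.successive_timeouts >= 2
```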

Fairness !!!

(Graphs: intra-protocol fairness over a non-lossy link and over a lossy link.)

(Graphs: inter-protocol fairness over a non-lossy link and over a lossy link.)

Another approach A source-based scheme is limited by the slowest receiver: the conflicting bandwidth requirements of all receivers cannot be met simultaneously with one transmission rate. They can be met if we transfer the burden of rate adaptation to the receivers: the source generates data at different rates over multiple streams, and the receivers listen to one or more streams depending on their capacity.

Receiver-driven Layered Multicast (RLM) The source simply transmits each layer of its signal on a separate multicast group. The receiver holds the key functionality: it adapts by joining and leaving groups. Conceptually the receiver, on congestion, drops a layer; on spare capacity, adds a layer. The receiver searches for the optimal level of subscription, much as TCP searches for its optimal rate using AIMD.
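Conceptually the receiver logic is a short loop; here is a minimal sketch in which join_group/leave_group stand in for the host's real multicast join/leave calls (both are placeholders, as are the congestion/spare-capacity triggers):

```python
def join_group(group):   # placeholder for the real multicast join (e.g. IGMP membership)
    print("join", group)

def leave_group(group):  # placeholder for the real multicast leave
    print("leave", group)

class RlmReceiver:
    def __init__(self, layer_groups):
        self.layer_groups = layer_groups  # one multicast group per layer, base layer first
        self.level = 1                    # always keep at least the base layer
        join_group(self.layer_groups[0])

    def on_congestion(self):
        if self.level > 1:
            self.level -= 1
            leave_group(self.layer_groups[self.level])  # drop the offending top layer

    def on_spare_capacity(self):
        if self.level < len(self.layer_groups):
            join_group(self.layer_groups[self.level])   # join-experiment on the next layer
            self.level += 1
```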

Example: the S-R1 path has high capacity, so R1 subscribes to all 3 layers and gets the highest-quality signal. R2 and R3 have to drop layer 3 because the 512 kb/s link becomes congested.

How many layers to subscribe to? The receiver needs to determine whether its current level of subscription is too high or too low. Too high is easy to detect, since it causes congestion. Too low is not: there is no signal to the receiver to indicate that its subscription is too low.

Subscribing to layers in RLM RLM performs join-experiments: it spontaneously adds layers at “well-chosen” times. If a join-experiment causes congestion, the receiver quickly drops the offending layer. If a join-experiment is successful, the receiver is one step closer to the optimal operating point.

Join-experiments cause transient congestion, so we need to minimize their frequency and duration. Solution: a learning strategy that performs join-experiments infrequently when they are likely to fail and readily when they are likely to succeed. This is done by managing a separate join timer for each level of subscription.
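A sketch of per-layer join timers with multiplicative backoff on failure; the constants, the jitter, and the relaxation on success are assumptions, and only the overall strategy comes from the slide:

```python
import random

class JoinTimers:
    MIN_T, MAX_T, BACKOFF = 5.0, 600.0, 2.0  # seconds / factor: illustrative values

    def __init__(self, num_layers):
        self.timer = [self.MIN_T] * num_layers  # one join timer per subscription level

    def next_experiment_delay(self, layer):
        # add jitter so receivers do not synchronize their join-experiments
        return self.timer[layer] * random.uniform(1.0, 1.5)

    def on_failure(self, layer):
        # a failed experiment on this layer makes future attempts less frequent
        self.timer[layer] = min(self.timer[layer] * self.BACKOFF, self.MAX_T)

    def on_success(self, layer):
        # a successful experiment lets the timer relax back toward its minimum
        self.timer[layer] = max(self.timer[layer] / self.BACKOFF, self.MIN_T)
```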

(Figure: layer number versus time, showing a sequence of join-experiments labeled A-F.)

Scalability of RLM If each receiver carries out the adaptation algorithm independently, the system scales poorly: as the session membership grows, the aggregate frequency of join-experiments increases, and network congestion increases with it.

Also, join-experiments can interfere with each other: if another receiver's experiment causes congestion, R1 can misinterpret that congestion as a failure of its own and back off its layer-2 join timer.

Solution – Shared learning A receiver notifies the entire group that it is now performing a join-experiment on layer ‘x’. All receivers can then learn from the failed join-experiments of other receivers.
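A standalone sketch of the shared-learning idea: any receiver that hears the announcement attributes subsequent congestion to that experiment and backs off its own timer for that layer (the data structures and constants are assumptions):

```python
class SharedLearning:
    def __init__(self, num_layers, base_timer=5.0):
        self.timer = [base_timer] * num_layers  # per-layer join timers (seconds)
        self.announced_layer = None             # layer someone in the group is probing

    def on_announcement(self, layer):
        # another receiver multicasts "I am join-experimenting on layer x"
        self.announced_layer = layer

    def on_congestion_observed(self):
        if self.announced_layer is not None:
            # treat the congestion as a failed experiment on that layer,
            # even if this receiver did not start the experiment itself
            self.timer[self.announced_layer] *= 2
            self.announced_layer = None
```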