CMPT 371 Data Communications and Networking: Congestion Control
© Janice Regan, CMPT 128, 2007-2012
Simple Congestion Scenario
- Two pairs of hosts share a connection through a common router.
- Host A and host B each send data at λin bytes/sec through a shared link (through the router) with capacity R.
- The router has an infinite buffer that can queue packets if they arrive faster than the outgoing link can deliver them.
- Large queuing delays result.
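A quick way to see why the delays blow up (a standard queueing observation, not stated explicitly on the slide): with both hosts sending, data arrives at the shared link at an aggregate rate of 2·λin bytes/sec but leaves at no more than R bytes/sec, so once 2·λin exceeds R the infinite queue grows at roughly

    2·λin − R  bytes/sec

and the queuing delay seen by newly arriving packets grows without bound.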
[Figures 3.43 and 3.44 from the text (Kurose and Ross, 6th ed.)]
Another Scenario
- Now assume the buffer is finite: packets arriving at a full buffer will be dropped.
- Dropped packets will eventually be retransmitted.
- Host A and host B each send original data at λin bytes/sec through a shared link (through the router) with capacity R.
- The total sending rate, original data plus retransmissions, is written λ'in and is known as the offered load.
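As a small illustration of the drop behaviour described above (a toy sketch only: the buffer size, the arrival pattern, and names such as packet_arrives are invented for this example, not taken from the slides or the textbook):

    from collections import deque

    BUFFER_SLOTS = 4                  # finite router buffer (example size)
    buffer = deque()
    dropped = 0

    def packet_arrives(pkt):
        """Queue the packet if there is room; otherwise drop it."""
        global dropped
        if len(buffer) < BUFFER_SLOTS:
            buffer.append(pkt)
        else:
            dropped += 1              # a dropped packet will later be retransmitted

    def link_sends_one():
        """The outgoing link (capacity R) forwards one queued packet."""
        if buffer:
            buffer.popleft()

    # Offer load at twice the service rate: two arrivals per departure
    for i in range(20):
        packet_arrives(2 * i)
        packet_arrives(2 * i + 1)
        link_sends_one()

    print(len(buffer), dropped)       # the queue stays near full and many arrivals are dropped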
[Figure 3.45 from the text (Kurose and Ross, 6th ed.)]
Scenarios
- Scenario a: offered load = application sending rate = R/2. Host A transmits only when buffer space happens to be available, so no packets are lost.
- Scenario b: offered load = application sending rate + retransmission rate = R/2. Retransmission occurs only when a packet is definitely lost (e.g. on expiry of a very long RTT timer). One out of every two original packets sent is retransmitted.
- ADDITIONAL COST: RETRANSMISSION OF PACKETS THAT ARE DROPPED DUE TO CONGESTION
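A rough worked check of scenario b using the figures quoted on this slide (the one-in-two retransmission ratio comes from the textbook figure, so treat the numbers as illustrative): if one retransmission is sent for every two original packets, then two of every three transmitted packets carry new data, so at an offered load of R/2 the useful throughput at the receiver is about

    λout ≈ (2/3) · (R/2) = R/3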
Scenarios (continued)
- Scenario c: as for scenario b, but retransmission occurs when a packet may be lost (e.g. the retransmission timer expires before a delayed ACK arrives). The retransmission rate now includes some duplicate transmissions.
- Duplicate transmissions are transmissions of packets that were delayed rather than lost.
- ADDITIONAL COST: RETRANSMISSION OF DUPLICATE PACKETS
- In the diagram each packet is assumed to be forwarded twice.
[Figure 3.46 from the text (Kurose and Ross, 6th ed.)]
TCP Congestion Window
- The congestion window (CongWindow) is usually smaller than the RecWindow (the advertised size of the sliding window).
- It allows the size of the sliding window to be reduced to deal with congestion.
- The sliding window uses the smallest of RecWindow and CongWindow.
- Consider the RTT to be the time from when transmission of a CongWindow of data begins until all of that CongWindow has been acknowledged.
- The send rate is then about CongWindow / RTT.
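A minimal sketch, in Python, of the two relations on this slide (the function and parameter names are illustrative, not part of any TCP implementation):

    def effective_window(rec_window: int, cong_window: int) -> int:
        """The sliding window uses the smaller of the two limits (bytes)."""
        return min(rec_window, cong_window)

    def approx_send_rate(cong_window: int, rtt_seconds: float) -> float:
        """Approximate sending rate in bytes/sec: one CongWindow per RTT."""
        return cong_window / rtt_seconds

    # Example: 64 KB receiver window, 16 KB congestion window, 100 ms RTT
    print(effective_window(64 * 1024, 16 * 1024))   # 16384 bytes
    print(approx_send_rate(16 * 1024, 0.100))       # ~163840 bytes/sec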
TCP: Detecting Congestion
- Two types of 'loss events' indicate congestion is occurring:
  - a third duplicate ACK arrives, causing retransmission
  - the retransmission timer expires
- These events indicate high levels of congestion because they are caused by packets being lost, and packets are lost when queuing buffers fill due to congestion and overflow.
- When acknowledgements arrive as expected and packets are not lost, TCP assumes there is no congestion.
  - Faster arrival of ACKs indicates possibly available bandwidth.
  - Slower arrival of ACKs indicates possible (low levels of) congestion.
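A hedged sketch of how a sender could classify the two loss events listed above; a real TCP stack keeps much more state, and the class and method names here are invented for illustration:

    class LossEventDetector:
        def __init__(self):
            self.last_ack = None
            self.dup_count = 0

        def on_ack(self, ack_no: int) -> str:
            """Classify an arriving ACK number."""
            if ack_no == self.last_ack:
                self.dup_count += 1
                if self.dup_count == 3:            # triple duplicate ACK
                    return "loss event: triple duplicate ACK (fast retransmit)"
                return "duplicate ACK (not yet a loss event)"
            self.last_ack, self.dup_count = ack_no, 0
            return "new data ACKed: no congestion signal"

        def on_timeout(self) -> str:
            """Retransmission timer expiry: the stronger congestion signal."""
            return "loss event: timeout"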
Managing the CongWindow: Slow Start
- Begins with a CongWindow length of 1 Maximum Segment Size (MSS), so the initial throughput is MSS / RTT.
- If data is transmitted successfully, double the length of the CongWindow each RTT (or increase it only to RecWindow if that is the smaller increase).
- Continue doubling the CongWindow each RTT until the RecWindow is reached or a loss event occurs.
- If a loss event (triple duplicate ACK) occurs, cut the CongWindow in half, exit the slow start procedure, and move to CA mode (fast recovery). The CongWindow will not be decreased below 1 MSS.
- If a loss event (timeout) occurs, reinitialize SS mode and continue in SS mode until the CongWindow reaches a threshold size equal to half the CongWindow size when the loss occurred.
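A short sketch of the doubling rule described above, with the window measured in whole MSS units (cwnd and rec_window are illustrative names):

    def slow_start_step(cwnd: int, rec_window: int) -> int:
        """One successful RTT in slow start: double cwnd, but never
        exceed the advertised receiver window."""
        return min(cwnd * 2, rec_window)

    cwnd = 1            # slow start begins at 1 MSS
    rec_window = 64     # receiver window in MSS (example value)
    for _ in range(6):
        print(cwnd)     # prints 1, 2, 4, 8, 16, 32
        cwnd = slow_start_step(cwnd, rec_window)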
Effect of Slow Start
Managing the CongWindow: AIMD
- AIMD: additive increase, multiplicative decrease.
- If data is transmitted successfully, increase the length of the CongWindow by one MSS each RTT (or increase it only to RecWindow if that is the smaller increase).
- Continue increasing the CongWindow each RTT until the RecWindow is reached or a loss event occurs.
Managing the CongWindow: AIMD (continued)
- If a loss event (in particular a triple duplicate ACK) occurs, cut the CongWindow in half (and add 3 MSS), then move to the fast recovery state and resume linear increases.
- If a loss event (timeout) occurs, enter SS mode and continue in SS mode until the CongWindow reaches a threshold size equal to half the CongWindow size when the loss occurred.
- While running this algorithm TCP is in congestion avoidance (CA) mode.
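A sketch of the AIMD reactions described on this and the previous slide, again in MSS units (cwnd, ssthresh, and rec_window are illustrative names; the "+3" mirrors the 3-MSS fast-recovery adjustment mentioned above):

    def additive_increase(cwnd: int, rec_window: int) -> int:
        """Successful RTT in CA mode: grow by one MSS, capped at RecWindow."""
        return min(cwnd + 1, rec_window)

    def on_triple_dup_ack(cwnd: int) -> tuple[int, int]:
        """Multiplicative decrease: halve the window (never below 1 MSS),
        add 3 MSS for the duplicate ACKs, and remember half the old
        window as the slow-start threshold."""
        ssthresh = max(cwnd // 2, 1)
        return ssthresh + 3, ssthresh        # then resume linear increases

    def on_timeout(cwnd: int) -> tuple[int, int]:
        """Timeout: fall back to slow start from 1 MSS; slow start runs
        until cwnd reaches the threshold (half the window at the loss)."""
        ssthresh = max(cwnd // 2, 1)
        return 1, ssthresh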
[Figure 3.51 from the text (Kurose and Ross, 6th ed.)]
Fairness (1)
- Consider N connections through a single router to the same host. All connections have the same MSS and RTT.
- If TCP is fair, all connections should end up with the same throughput.
- For simplicity, consider 2 connections, both operating in CA mode.
- If both connections are operating below R/2 (where R is the capacity of the shared link), loss should not occur and both connections will increase their CongWindow at the same rate.
Fairness (2)
- At some point the combined load will become larger than R, queues will fill, and a packet will be lost.
- This will reduce the combined offered load. Once the load has been reduced (by one or more reductions) so that packets are no longer lost, the combined offered load will again be less than R.
- This cycle of increase followed by decrease will continue.
Fairness (3)
- For any given packet arriving at the router, the probability that it will be dropped is the same.
- If the offered load of connection 1 is larger than that of connection 2, more packets are arriving from connection 1, so it is more likely that a dropped packet belongs to connection 1.
- This means the connection with the larger throughput is more likely to have its CongWindow reduced.
- The net effect is that the loads of the two connections converge over time.
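A toy simulation of this convergence argument (all numbers and names are invented for illustration; this is not a model from the textbook): two AIMD connections share a link of capacity R, and whenever their combined load exceeds R the busier connection is proportionally more likely to be the one that loses a packet and halves its rate.

    import random

    R = 100.0
    rates = [10.0, 60.0]        # connection 1 starts far behind connection 2

    for _ in range(2000):
        rates = [r + 1.0 for r in rates]              # additive increase per RTT
        if sum(rates) > R:                            # queue overflow: one loss
            p1 = rates[0] / sum(rates)                # chance the loss hits flow 1
            victim = 0 if random.random() < p1 else 1
            rates[victim] /= 2.0                      # multiplicative decrease

    print(rates)    # over many cycles the two rates converge toward equal shares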
[Figures 3.54 and 3.55 from the text]
Fairness: UDP
- UDP does not have built-in congestion control mechanisms.
- ICMP choke packets may be used to control congestion if ICMP is activated at all points along the path (ICMP is commonly disabled by hosts/routers as a security measure).
- A protocol that uses UDP can keep pumping data into the Internet at a rate as large as its maximum transmission rate.
- Thus it is possible for UDP traffic to fill the available bandwidth of the connection and to prevent the transmission of TCP traffic.
Fairness: TCP and UDP
- The discussion of fairness and TCP assumes that each user makes a single connection.
- Many TCP applications make multiple TCP connections to increase their data throughput.
- This means that per-user fairness is skewed.