Data Communication and Networks
Lecture 11: Network Congestion: Causes, Effects, Controls. November 16, 2006. Transport Layer
What Is Congestion?
Congestion occurs when the number of packets being transmitted through the network approaches the packet-handling capacity of the network.
Congestion control aims to keep the number of packets below the level at which performance falls off dramatically.
A data network is a network of queues.
Generally, 80% utilization is critical.
Finite queues mean data may be lost.
A top-10 problem!
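The "network of queues" view can be made concrete with a standard queueing-theory sketch. Assuming an M/M/1 queue (an illustrative assumption; the slide does not name a model), the mean time in the system is 1/(μ − λ), which shows why delay explodes as utilization approaches the ~80% critical point:

```python
# Sketch: mean time in system for an M/M/1 queue, T = 1 / (mu - lam).
# The M/M/1 model and the numeric service rate are illustrative
# assumptions, not part of the lecture.

def mm1_delay(lam, mu):
    """Mean time in system for arrival rate lam < service rate mu."""
    if lam >= mu:
        raise ValueError("queue is unstable when lam >= mu")
    return 1.0 / (mu - lam)

mu = 100.0  # service rate, packets/sec (assumed value)
for rho in (0.5, 0.8, 0.95):
    lam = rho * mu
    print(f"utilization {rho:.0%}: mean delay {mm1_delay(lam, mu) * 1000:.0f} ms")
```

At 50% utilization the mean delay is 20 ms, at 80% it is 50 ms, and at 95% it is 200 ms: the step from 80% to 95% quadruples the delay, which is why roughly 80% is treated as critical.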
Queues at a Node
Effects of Congestion
Arriving packets are stored at input buffers.
A routing decision is made, and the packet moves to an output buffer.
Packets queued for output are transmitted as fast as possible (statistical time-division multiplexing).
If packets arrive too fast to be routed, or too fast to be output, the buffers will fill. The node can then discard packets, use flow control, or propagate the congestion through the network.
Interaction of Queues
Causes/costs of congestion: scenario 1
Two senders, two receivers; one router with infinite buffers; no retransmission.
Result: large delays when congested, and a hard cap on achievable throughput.
Hosts A and B both send at rate λin. Packets pass through the router and over a shared link of capacity C. The router has buffers to store outgoing packets.
Clearly, the per-connection rate of delivery from the router to a destination (from A or B) cannot exceed C/2, no matter what the value of λin is. That is, λout ≤ C/2.
Note that when λin > C/2, delay grows without bound, since the queue depth in the router is infinite.
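The scenario-1 throughput curve is just a saturating minimum; a minimal sketch (function name and units are my own):

```python
# Sketch of scenario 1: two senders share one outgoing link of
# capacity C, so per-connection throughput tracks the send rate up
# to C/2 and then saturates. Names and units are illustrative.

def scenario1_throughput(lam_in, capacity):
    """Per-connection delivery rate: min(lam_in, C/2)."""
    return min(lam_in, capacity / 2)
```

For example, with C = 1, sending at 0.3 delivers 0.3, but sending at 0.8 still delivers only 0.5.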
Causes/costs of congestion: scenario 2
One router with finite buffers; senders retransmit lost packets.
The router in this case has a finite number of buffers. Assume the transport layer is "reliable": if the router drops a packet, the sender (A or B) will retransmit it.
The "offered load" now includes both "original" packets (at rate λin) and "retransmitted" packets. The rate of all packets transmitted by A or B (original + retransmitted) is λ'in.
Causes/costs of congestion: scenario 2
Performance depends on how A and B retransmit.
Suppose A and B send only when they know the router has a free buffer, so no loss occurs. Then λ'in = λin and λout = λin (the "perfect" case).
Somewhat more realistically, suppose A and B retransmit only when they know for sure that a packet is lost ("perfect retransmission"). Then packets arrive at the destination at a rate below C/2, because part of the offered load is retransmissions.
Even more realistically, some packets are sent twice because the sender's timer is occasionally not quite long enough, so a delayed (not lost) packet is retransmitted. This makes λ'in larger than in the perfect case for the same λout, and the delivery rate is even worse, since the receiver discards duplicates.
"Costs" of congestion: more work (retransmissions) for a given "goodput", and unneeded retransmissions mean the link carries multiple copies of a packet.
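The goodput penalty can be sketched with a toy fluid model. The capacity-share rule and names below are my own simplifications, not the lecture's:

```python
# Toy model of scenario 2: offered load = originals plus retransmissions.
# Once the offered load exceeds the router's C/2 share, only C/2 gets
# through, and only the original packets within it count as goodput.
# The proportional-share saturation rule is an illustrative assumption.

def scenario2_goodput(lam_in, retx_per_original, capacity):
    """Goodput (originals delivered) for offered load lam'_in."""
    offered = lam_in * (1 + retx_per_original)
    if offered <= capacity / 2:
        return lam_in                        # no saturation: all originals arrive
    delivered = capacity / 2                 # router output is capped
    return delivered / (1 + retx_per_original)  # originals' share of what got through
```

With λin = 0.5, one retransmission per original, and C = 1, goodput drops to 0.25: half of the delivered traffic is duplicates, i.e. wasted transmission.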
Causes/costs of congestion: scenario 3
Four senders; multihop paths; timeout/retransmit.
Q: what happens as λin and λ'in increase?
The A-C path shares router R1 with the D-B path and shares router R2 with the B-D path. Connections between routers have transmission capacity C. When λin is small, no retransmission occurs.
Causes/costs of congestion: scenario 3
Consider this case: since R2 is the second hop for A-C traffic, packets on the A-C path cannot arrive at R2 from R1 at a rate greater than C, regardless of λin.
If λ'in is very large for all connections, the arrival rate of B packets at R2 can be much greater than that of A packets forwarded from R1, so more of R2's buffers are used for B packets than for A packets.
As the offered load grows even higher, eventually only B packets occupy the buffers. There is no room for A packets, so R2 drops them, and throughput on the A-C path (measured at C) goes to ZERO!
Another "cost" of congestion: when a packet is dropped at a downstream node, all of the upstream transmission capacity used to carry that packet has been WASTED!
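One way to see the A-C collapse numerically, using an assumed proportional-share policy at R2 (the slide does not specify R2's scheduling; this is one plausible simplification):

```python
# Toy fluid model of R2 in scenario 3: when combined arrivals exceed
# the output capacity C, each flow keeps a share proportional to its
# arrival rate. As B's rate grows without bound, A's share tends to
# zero. The proportional-share policy is an illustrative assumption.

def a_to_c_throughput(lam_a, lam_b, capacity):
    """A-C throughput out of R2 given A and B arrival rates at R2."""
    total = lam_a + lam_b
    if total <= capacity:
        return lam_a
    return capacity * lam_a / total
```

With C = 4 and A's arrival rate fixed at 1, A's throughput is 1.0 when B also sends at 1, but only 0.04 when B floods at 99.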
Approaches towards congestion control
Two broad approaches towards congestion control:
End-to-end congestion control: no explicit feedback from the network; congestion is inferred from loss and delay observed by the end systems. This is the approach taken by TCP.
Network-assisted congestion control: routers provide feedback to end systems, either as a single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM) or as an explicit rate at which the sender should send.
Case study: ATM ABR congestion control
ABR (available bit rate) is an "elastic service": if the sender's path is underloaded, the sender should use the available bandwidth; if the sender's path is congested, the sender is throttled to a minimum guaranteed rate.
RM (resource management) cells are sent by the sender, interspersed with data cells. Bits in the RM cell are set by switches ("network-assisted"):
NI bit: no increase in rate (mild congestion)
CI bit: congestion indication
RM cells are returned to the sender by the receiver, with the bits intact.
Case study: ATM ABR congestion control
A two-byte ER (explicit rate) field in the RM cell: a congested switch may lower the ER value in the cell, so the sender's send rate becomes the minimum supportable rate on the path.
An EFCI bit in data cells is set to 1 by a congested switch: if the data cell preceding an RM cell has EFCI set, the receiver sets the CI bit in the returned RM cell.
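One way to picture the sender's reaction to a returned RM cell. The slides specify only the direction of each adjustment and the ER and minimum-rate bounds; the 0.9/1.1 factors below are assumptions for illustration, not from the ATM specification:

```python
# Sketch of an ABR sender reacting to a returned RM cell.
# CI set -> decrease; NI set -> hold; neither -> increase.
# The rate is never raised above the explicit rate ER and never
# throttled below the guaranteed minimum. The 0.9/1.1 factors
# are illustrative assumptions.

def abr_rate(current, min_rate, ni, ci, er):
    """New allowed cell rate after processing one returned RM cell."""
    if ci:
        new = max(current * 0.9, min_rate)  # congestion: back off
    elif ni:
        new = current                       # mild congestion: hold
    else:
        new = current * 1.1                 # no congestion: probe upward
    return min(new, er)                     # obey the explicit rate
```

Note how the ER bound dominates: even a sender that is allowed to increase is clamped to the minimum supportable rate advertised along the path.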
TCP Congestion Control
End-to-end control (no network assistance).
The sender limits transmission: LastByteSent - LastByteAcked ≤ CongWin.
Roughly, rate = CongWin / RTT bytes/sec.
CongWin is dynamic, a function of perceived network congestion.
How does the sender perceive congestion? A loss event is a timeout or 3 duplicate ACKs. The TCP sender reduces its rate (CongWin) after a loss event.
Three mechanisms: AIMD, slow start, and conservative behavior after timeout events.
TCP AIMD
Multiplicative decrease: cut CongWin in half after a loss event.
Additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events (probing for usable bandwidth).
A long-lived TCP connection therefore traces a sawtooth pattern.
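The AIMD rule can be written as a one-line update per RTT (a sketch; the window is measured in whole MSS units, and the function name is my own):

```python
# One RTT of AIMD, window measured in whole MSS units (a sketch).
# Loss event -> multiplicative decrease (halve, but never below 1 MSS);
# otherwise -> additive increase of 1 MSS.

def aimd(congwin_mss, loss_event):
    """Next congestion window after one RTT."""
    if loss_event:
        return max(congwin_mss // 2, 1)
    return congwin_mss + 1
```

Starting from 10 MSS, a loss drops the window to 5; ten loss-free RTTs then climb it back past 10, producing the sawtooth.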
TCP Slow Start
When a connection begins, CongWin = 1 MSS, and the rate is increased exponentially fast until the first loss event.
Example: with MSS = 500 bytes and RTT = 200 msec, the initial rate is only 20 kbps.
The available bandwidth may be much greater than MSS/RTT, so it is desirable to quickly ramp up to a respectable rate.
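The 20 kbps figure follows directly from rate ≈ CongWin/RTT with CongWin = 1 MSS (the helper name is my own):

```python
# Check of the slide's example: initial rate = 1 MSS / RTT.
# 500 bytes * 8 bits/byte / 0.2 s = 20,000 bits/sec = 20 kbps.

def initial_rate_bps(mss_bytes, rtt_sec):
    """Slow-start initial sending rate in bits per second."""
    return mss_bytes * 8 / rtt_sec

print(initial_rate_bps(500, 0.2))  # 20000.0
```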
TCP Slow Start (more)
When a connection begins, increase the rate exponentially until the first loss event: double CongWin every RTT. This is done by incrementing CongWin by 1 MSS for every ACK received.
Summary: the initial rate is slow, but it ramps up exponentially fast.
(Figure: Host A sends one segment to Host B, then two segments, then four, one batch per RTT.)
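Because incrementing CongWin by 1 MSS per ACK doubles the window each RTT (every segment in the window is ACKed, so the window gains its own size again), any target window is reached in logarithmically many round trips. A sketch (function name my own):

```python
# Sketch: counting RTTs of slow start until a target window is reached,
# with the window doubling once per RTT.

def slow_start_rtts(target_mss):
    """RTTs for CongWin to grow from 1 MSS to at least target_mss."""
    congwin, rtts = 1, 0
    while congwin < target_mss:
        congwin *= 2  # one RTT of slow start
        rtts += 1
    return rtts
```

Reaching a 16-MSS window takes only 4 RTTs; a 1024-MSS window takes 10.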
Refinement
Philosophy: 3 duplicate ACKs indicate the network is still capable of delivering some segments; a timeout before 3 duplicate ACKs is "more alarming."
After 3 duplicate ACKs: CongWin is cut in half, and the window then grows linearly.
But after a timeout event: CongWin is instead set to 1 MSS; the window then grows exponentially up to a threshold, and linearly after that.
Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin reaches 1/2 of its value before the timeout.
Implementation: a variable Threshold. At a loss event, Threshold is set to 1/2 of the CongWin value just before the loss event.
Summary: TCP Congestion Control
When CongWin is below Threshold, the sender is in the slow-start phase and the window grows exponentially.
When CongWin is above Threshold, the sender is in the congestion-avoidance phase and the window grows linearly.
When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.
When a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.
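The four rules above form a small state machine, sketched here with the window in MSS units and per-RTT granularity (the names, integer units, and granularity are my own simplifications):

```python
# Sketch of the summary's rules as one transition function.
# State: (congwin, threshold) in MSS units; growth events are per RTT.

def tcp_step(congwin, threshold, event):
    """Return the new (congwin, threshold) after one event."""
    if event == "timeout":
        return 1, max(congwin // 2, 1)      # restart from 1 MSS
    if event == "3dup":
        half = max(congwin // 2, 1)
        return half, half                    # halve; resume linear growth
    if congwin < threshold:
        return congwin * 2, threshold        # slow start: exponential growth
    return congwin + 1, threshold            # congestion avoidance: linear growth
```

For example, from (8, 16) a loss-free RTT doubles the window to 16; from (16, 16) growth turns linear; a triple duplicate ACK at (20, 16) yields (10, 10), while a timeout there yields (1, 10).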
TCP Fairness
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.
(Figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R.)
Why is TCP fair?
Consider two competing sessions: additive increase gives a slope of 1 as throughput increases, while multiplicative decrease cuts throughput proportionally.
(Figure: connection 1 throughput vs. connection 2 throughput, each axis from 0 to R. Repeated cycles of congestion avoidance (additive increase) and loss (window cut by a factor of 2) move the operating point toward the equal-bandwidth-share line.)
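The convergence argument can be simulated directly. This is a sketch under an idealizing assumption of synchronized losses: each round both flows add 1 unit, and when their sum exceeds capacity both halve, which also halves the gap between them:

```python
# Sketch of two AIMD flows sharing a bottleneck of given capacity.
# Additive increase of 1 unit per round; when the combined rate exceeds
# capacity, both flows see a loss and halve (synchronized losses are an
# idealizing assumption). Each halving halves the difference between
# the flows, so their shares converge toward equality.

def converge(x1, x2, capacity, rounds):
    """Evolve two AIMD flow rates for a number of rounds."""
    for _ in range(rounds):
        x1 += 1.0
        x2 += 1.0
        if x1 + x2 > capacity:   # loss event for both flows
            x1 /= 2.0
            x2 /= 2.0
    return x1, x2
```

Starting from a very unfair split (10, 90) on a capacity-100 link, 200 rounds bring the two rates within a fraction of a unit of each other.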
Fairness (more)
Fairness and parallel TCP connections: nothing prevents an app from opening parallel connections between 2 hosts, and web browsers do this. Example: a link of rate R supports 9 connections; a new app that asks for 1 TCP connection gets rate R/10, but a new app that asks for 11 TCP connections gets about R/2!
Fairness and UDP: multimedia apps often do not use TCP, because they do not want their rate throttled by congestion control. Instead they use UDP, pumping audio/video at a constant rate and tolerating packet loss. TCP-friendly congestion control is an active research area.
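The parallel-connection example is just an equal per-connection split of the link rate (sketch; function name my own):

```python
# Sketch of the slide's example: if the link splits rate R equally
# among all TCP connections, an app's share is proportional to how
# many connections it opens.

def app_share(existing_conns, new_app_conns, rate):
    """Bandwidth the new app receives under an equal per-connection split."""
    total = existing_conns + new_app_conns
    return rate * new_app_conns / total
```

With 9 existing connections, asking for 1 connection yields R/10 = 0.1R, while asking for 11 yields 11R/20 = 0.55R, roughly R/2.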