Transport Layer

Flow and Congestion Control
Ram Dantu (compiled from various textbooks)
TCP Flow Control

- receive side of TCP connection has a receive buffer
- flow control: sender won't overflow receiver's buffer by transmitting too much, too fast
- speed-matching service: matching the send rate to the receiving app's drain rate
- app process may be slow at reading from the buffer
TCP Flow Control: how it works

(Suppose the TCP receiver discards out-of-order segments.)
- spare room in buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]
- receiver advertises spare room by including the value of RcvWindow in segments
- sender limits unACKed data to RcvWindow
  - guarantees receive buffer doesn't overflow
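The bookkeeping above can be sketched in a few lines. This is an illustrative model, not a real TCP stack; the Receiver/Sender classes and the concrete byte counts are assumptions chosen to mirror the slide's variable names.

```python
# Sketch of TCP flow-control bookkeeping; variable names follow the slide.

class Receiver:
    def __init__(self, rcv_buffer):
        self.rcv_buffer = rcv_buffer   # total buffer size in bytes
        self.last_byte_rcvd = 0
        self.last_byte_read = 0

    def rcv_window(self):
        # spare room = RcvBuffer - [LastByteRcvd - LastByteRead]
        return self.rcv_buffer - (self.last_byte_rcvd - self.last_byte_read)

class Sender:
    def __init__(self):
        self.last_byte_sent = 0
        self.last_byte_acked = 0

    def can_send(self, nbytes, rcv_window):
        # keep unACKed data <= RcvWindow advertised by the receiver
        unacked = self.last_byte_sent - self.last_byte_acked
        return unacked + nbytes <= rcv_window

rx = Receiver(rcv_buffer=4096)
rx.last_byte_rcvd = 3000
rx.last_byte_read = 1000
tx = Sender()
tx.last_byte_sent = 1500
tx.last_byte_acked = 500
print(rx.rcv_window())                     # 2096 spare bytes
print(tx.can_send(1000, rx.rcv_window()))  # fits in the advertised window
```

Because the sender never lets (LastByteSent - LastByteAcked) exceed the advertised RcvWindow, the receive buffer cannot overflow even if the app drains it slowly.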
Principles of Congestion Control

Congestion:
- informally: "too many sources sending too much data too fast for the network to handle"
- different from flow control!
- manifestations:
  - lost packets (buffer overflow at routers)
  - long delays (queueing in router buffers)
- a top-10 problem!
Congestion: A Close-up View

- knee: point after which
  - throughput increases very slowly
  - delay increases fast
- cliff: point after which
  - throughput decreases very fast toward zero (congestion collapse)
  - delay approaches infinity
- note: in an M/M/1 queue, delay = 1 / (1 - utilization)

[Figure: throughput and delay vs. load, marking the knee, the cliff, packet loss, and congestion collapse]
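A quick numeric check of the M/M/1 formula shows why delay explodes near the cliff; the utilization values below are arbitrary sample points.

```python
# M/M/1 normalized delay from the slide: delay = 1 / (1 - utilization).
# Delay grows without bound as utilization approaches 1.

def mm1_delay(utilization):
    assert 0 <= utilization < 1      # formula only valid for a stable queue
    return 1.0 / (1.0 - utilization)

for u in (0.5, 0.9, 0.99):
    print(u, mm1_delay(u))
```

At 50% load delay is only 2x the service time, at 90% it is roughly 10x, and at 99% roughly 100x: the curve is gentle left of the knee and nearly vertical near the cliff.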
Congestion Control vs. Congestion Avoidance

- congestion control goal: stay left of the cliff
- congestion avoidance goal: stay left of the knee
- right of the cliff: congestion collapse

[Figure: throughput vs. load, marking the knee, the cliff, and congestion collapse]
Congestion Collapse: How Bad Is It?

- definition: an increase in network load results in a decrease of useful work done
- many possible causes:
  - spurious retransmissions of packets still in flight
  - undelivered packets: packets consume resources and are dropped elsewhere in the network
  - fragments: mismatch of transmission and retransmission units
  - control traffic: a large percentage of traffic is for control
  - stale or unwanted packets: packets that are delayed on long queues
Solution Directions

Problem: demand (λ1 … λn) outstrips available capacity (μ).
- if information about the λi and μ were known in a central location where control of λi or μ could be effected with zero time delay, the congestion problem would be solved!
- capacity (μ) cannot be provisioned very fast => demand must be managed
- perfect callback: admit packets into the network from the user only when the network has the capacity (bandwidth and buffers) to get the packet across

[Figure: n sources with demands λ1 … λn feeding a link of capacity μ]
Causes/Costs of Congestion: Scenario 3

- four senders
- multihop paths
- timeout/retransmit
- finite shared output link buffers

Q: what happens as λin and λ'in increase?
(λin: original data; λ'in: original data plus retransmitted data)

[Figure: Host A sending to Host B through routers with finite shared output link buffers]
Causes/Costs of Congestion: Scenario 3 (cont.)

Another "cost" of congestion:
- when a packet is dropped, any upstream transmission capacity used for that packet was wasted!

[Figure: λout collapsing as λ'in grows, for Host A through Host B]
Approaches Toward Congestion Control

Two broad approaches:

End-to-end congestion control:
- no explicit feedback from the network
- congestion inferred from end-system observed loss and delay
- approach taken by TCP

Network-assisted congestion control:
- routers provide feedback to end systems
  - single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
  - explicit rate at which the sender should send
TCP Congestion Control

- end-to-end control (no network assistance)
- sender limits transmission: LastByteSent - LastByteAcked <= CongWin
- roughly: rate = CongWin / RTT bytes/sec
- CongWin is dynamic, a function of perceived network congestion

How does the sender perceive congestion?
- loss event = timeout or 3 duplicate ACKs
- TCP sender reduces rate (CongWin) after a loss event

Three mechanisms:
- AIMD
- slow start
- conservative behavior after timeout events
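The rate formula above is easy to sanity-check numerically. The window size and RTT below are made-up example values, and the formula itself is only the slide's rough approximation (it ignores slow start and loss).

```python
# Rough throughput estimate from the slide: rate ≈ CongWin / RTT.

def tcp_rate_bytes_per_sec(congwin_bytes, rtt_sec):
    return congwin_bytes / rtt_sec

# e.g. a 10-segment window of 1460-byte segments over a 100 ms RTT
print(tcp_rate_bytes_per_sec(10 * 1460, 0.1))
```

This also shows why long-RTT connections are at a disadvantage: for the same CongWin, doubling the RTT halves the achievable rate.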
TCP AIMD (Additive Increase, Multiplicative Decrease)

- multiplicative decrease: cut CongWin in half after a loss event
- additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events (probing)

[Figure: sawtooth CongWin pattern of a long-lived TCP connection]
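The two AIMD rules produce the classic sawtooth, which a toy simulation can reproduce. This is not real TCP: the starting window and the assumption of a loss every 8th RTT are arbitrary choices for illustration.

```python
# Toy AIMD simulation: +1 MSS per RTT, halve on loss (assumed every 8th RTT).

MSS = 1  # measure CongWin in units of MSS

def aimd_trace(rtts, loss_every=8):
    congwin = 10 * MSS
    trace = []
    for t in range(1, rtts + 1):
        if t % loss_every == 0:
            congwin = max(MSS, congwin // 2)   # multiplicative decrease
        else:
            congwin += MSS                     # additive increase
        trace.append(congwin)
    return trace

print(aimd_trace(16))
```

The trace climbs linearly, drops by half, then climbs again: the "probing" behavior the slide describes, repeatedly testing for spare bandwidth.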
TCP Slow Start

- when the connection begins, CongWin = 1 MSS
  - example: MSS = 500 bytes, RTT = 200 msec
  - initial rate = 20 kbps
- available bandwidth may be >> MSS/RTT
  - desirable to quickly ramp up to a respectable rate
- when the connection begins, increase rate exponentially fast until the first loss event
TCP Slow Start (more)

- when the connection begins, increase rate exponentially until the first loss event:
  - double CongWin every RTT
  - done by incrementing CongWin for every ACK received
- summary: initial rate is slow but ramps up exponentially fast

[Figure: Host A sends one segment, then two, then four, doubling each RTT as ACKs return from Host B]
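The per-ACK increment is what makes the per-RTT growth exponential: a window of w segments yields w ACKs in one RTT, each adding 1 MSS, so w becomes 2w. A minimal sketch (in MSS units, not a real implementation):

```python
# Why per-ACK increments double CongWin each RTT during slow start.

def slow_start(rtts, mss=1):
    congwin = mss
    history = [congwin]
    for _ in range(rtts):
        acks = congwin // mss        # one ACK per segment in the window
        for _ in range(acks):
            congwin += mss           # +1 MSS per ACK received
        history.append(congwin)
    return history

print(slow_start(4))  # doubles each RTT: [1, 2, 4, 8, 16]
```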
Refinement

- after 3 duplicate ACKs:
  - CongWin is cut in half
  - window then grows linearly
- but after a timeout event:
  - CongWin is instead set to 1 MSS
  - window then grows exponentially to a threshold, then grows linearly

Philosophy:
- 3 duplicate ACKs indicate the network is capable of delivering some segments
- a timeout before 3 duplicate ACKs is "more alarming"
Refinement (more)

Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before the timeout.

Implementation:
- variable Threshold
- at a loss event, Threshold is set to 1/2 of CongWin just before the loss event
Summary: TCP Congestion Control

- when CongWin is below Threshold, sender is in the slow-start phase; window grows exponentially
- when CongWin is above Threshold, sender is in the congestion-avoidance phase; window grows linearly
- when a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold
- when a timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS
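The four summary rules fit into a tiny state machine. The sketch below is a Reno-style caricature in MSS units under assumed event timing, not a real implementation; the class name and initial threshold of 8 are made up for the example.

```python
# The slide's four rules as a minimal state machine (MSS units).

class TcpCongestion:
    def __init__(self, mss=1, threshold=8):
        self.mss = mss
        self.congwin = mss
        self.threshold = threshold

    def on_rtt_no_loss(self):
        if self.congwin < self.threshold:
            self.congwin *= 2                # slow start: exponential growth
        else:
            self.congwin += self.mss         # congestion avoidance: linear

    def on_triple_dup_ack(self):
        self.threshold = max(self.mss, self.congwin // 2)
        self.congwin = self.threshold        # halve, continue linearly

    def on_timeout(self):
        self.threshold = max(self.mss, self.congwin // 2)
        self.congwin = self.mss              # restart from slow start

tcp = TcpCongestion()
for _ in range(5):
    tcp.on_rtt_no_loss()
print(tcp.congwin)   # 10: doubled 1->2->4->8, then two linear steps
tcp.on_timeout()
print(tcp.congwin, tcp.threshold)   # back to 1 MSS, Threshold = 5
```

Note how the two loss events differ exactly as the slide's philosophy says: a triple duplicate ACK only halves the window, while a timeout resets it all the way to 1 MSS.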
TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.

[Figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R]
Why Is TCP Fair?

Two competing sessions:
- additive increase gives a slope of 1 as throughput increases
- multiplicative decrease reduces throughput proportionally
- each session: congestion avoidance (additive increase), then loss (decrease window by factor of 2)

[Figure: connection 1 vs. connection 2 throughput converging toward the equal-bandwidth-share line at R/2]
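The convergence argument can be seen numerically: additive increase preserves the gap between two flows, while each synchronized multiplicative decrease halves it. The capacity R, starting rates, and synchronized-loss assumption below are simplifications for illustration.

```python
# Toy two-flow AIMD experiment: repeated additive increase plus
# synchronized halving on loss drives unequal flows toward the equal share.

R = 100.0              # bottleneck capacity (arbitrary units)
x, y = 80.0, 10.0      # very unequal starting rates

for _ in range(200):
    if x + y > R:              # bottleneck overloaded: both flows see loss
        x, y = x / 2, y / 2    # multiplicative decrease halves the gap
    else:
        x, y = x + 1.0, y + 1.0    # additive increase keeps the gap fixed

print(round(x, 2), round(y, 2))    # rates end up nearly equal
```

Each loss event halves the difference x - y, so after a handful of sawtooth cycles the two flows are essentially at the equal-bandwidth share, matching the phase-plot argument on the slide.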
Fairness (more)

Fairness and UDP:
- multimedia apps often do not use TCP
  - do not want their rate throttled by congestion control
- instead use UDP: pump audio/video at a constant rate, tolerate packet loss
- research area: TCP-friendly congestion control

Fairness and parallel TCP connections:
- nothing prevents an app from opening parallel connections between 2 hosts
- web browsers do this
- example: link of rate R supporting 9 connections
  - a new app asking for 1 TCP connection gets rate R/10
  - a new app asking for 11 TCP connections gets R/2!
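The arithmetic behind the parallel-connections example: TCP fairness is per connection, so a newcomer opening k connections alongside 9 existing ones gets a total share of k·R/(9+k). The helper function below is just this calculation, with R normalized to 1.

```python
from fractions import Fraction

def newcomer_share(k, existing=9, R=Fraction(1)):
    # per-connection fair share times the newcomer's k connections
    return k * R / (existing + k)

print(newcomer_share(1))    # 1/10 of R
print(newcomer_share(11))   # 11/20 of R, just over half
```

So by opening 11 connections instead of 1, the new app's share jumps from R/10 to roughly R/2, which is exactly the unfairness the slide warns about.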