cs/ee/ids 143 Communication Networks Chapter 4 Transport


1 cs/ee/ids 143 Communication Networks Chapter 4 Transport
Text: Walrand & Parekh, 2010. Steven Low, CMS, EE, Caltech.

2 Agenda
Internetworking: routing across LANs (layer 2 / layer 3), DHCP, NAT
Transport layer: connection setup; error recovery (retransmission); congestion control

3

4 Transport services
UDP: datagram service; no congestion control; no error/loss recovery; lightweight
TCP: connection-oriented service; congestion control; error/loss recovery; heavyweight

5 UDP
Ports range from 1 to 65535 (2^16 − 1). The UDP header is 8 bytes. Maximum payload ≤ 65535 Bytes − 8 Bytes (UDP header) − 20 Bytes (IP header); usually smaller to avoid IP fragmentation (e.g., Ethernet MTU is 1500 Bytes).
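As a concrete illustration of the fixed 8-byte header above, here is a minimal sketch that packs and parses a UDP header with the standard library; `parse_udp_header` and the example ports are illustrative choices, not part of the lecture.

```python
import struct

def parse_udp_header(datagram: bytes):
    """Parse the fixed 8-byte UDP header: source port, destination port,
    length (header + payload, in bytes), and checksum.
    All four fields are 16-bit big-endian ("network order")."""
    src, dst, length, checksum = struct.unpack("!HHHH", datagram[:8])
    return {"src_port": src, "dst_port": dst, "length": length,
            "checksum": checksum, "payload": datagram[8:]}

# Build a DNS-style datagram: source port 53000, destination port 53.
# Checksum is left as 0 here (legal for IPv4 UDP, meaning "not computed").
payload = b"example-query"
header = struct.pack("!HHHH", 53000, 53, 8 + len(payload), 0)
info = parse_udp_header(header + payload)
```

Note that the length field counts the 8 header bytes plus the payload, which is why the maximum payload is 65535 − 8 bytes (before the IP header is even accounted for).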

6 TCP header (figure)

7 TCP states
Example TCP states: 3-way handshake (connection setup), 4-way handshake (connection teardown).
Possible issue: SYN flood attack, which results in large numbers of half-open connections so that no new connections can be made.
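The handshake states can be sketched as a small transition table; this is a minimal subset of the RFC 793 state machine covering only the 3-way handshake path (the event labels are illustrative shorthand, not protocol syntax).

```python
# Transition table for the connection-setup portion of the TCP state machine
# (state names follow RFC 793; only the 3-way-handshake path is modeled).
TRANSITIONS = {
    ("CLOSED",   "active_open/send SYN"):  "SYN_SENT",
    ("LISTEN",   "recv SYN/send SYN+ACK"): "SYN_RCVD",
    ("SYN_SENT", "recv SYN+ACK/send ACK"): "ESTABLISHED",
    ("SYN_RCVD", "recv ACK"):              "ESTABLISHED",
}

def step(state, event):
    return TRANSITIONS[(state, event)]

# Client and server walking through the 3-way handshake:
client = step("CLOSED", "active_open/send SYN")
server = step("LISTEN", "recv SYN/send SYN+ACK")
client = step(client, "recv SYN+ACK/send ACK")
server = step(server, "recv ACK")
# A SYN flood parks many connections in SYN_RCVD (half-open): the final ACK
# never arrives, so those connections never reach ESTABLISHED.
```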

8 Window Flow Control
(Figure: source sends a window of packets 1..W; destination returns ACKs.)
~W packets per RTT. A lost packet is detected by a missing ACK.

9 ARQ (Automatic Repeat Request)
Go-back-N vs. selective repeat.
TCP: sender & receiver negotiate whether or not to use Selective ACK (SACK). The receiver can ACK up to 4 blocks of contiguous bytes that it got correctly, e.g. [3; 10, 14; 16, 20; 25, 33].
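To make the go-back-N behavior concrete, here is a deliberately simplified simulation (names and the loss model are my own, not from the slides): each listed sequence number is dropped on its first transmission, and after a loss the sender restarts from the lost packet. One simplification is flagged in the comments.

```python
def go_back_n(n, window, lost):
    """Go-Back-N over packets 0..n-1 with window size `window`.
    Each sequence number in `lost` is dropped the first time it is sent.
    The receiver accepts only in-order packets (cumulative ACK), so after
    a loss the sender goes back and re-sends from the lost packet.
    Returns the full transmission log."""
    lost = set(lost)
    base = 0                     # oldest unacknowledged packet
    transmissions = []
    while base < n:
        for seq in range(base, min(base + window, n)):
            transmissions.append(seq)
            if seq in lost:
                lost.remove(seq)   # dropped once; simplification: we stop the
                break              # window here (in reality later in-flight
                                   # packets are sent but discarded anyway)
            base = seq + 1         # delivered in order -> cumulative ACK
    return transmissions

log = go_back_n(6, window=3, lost={1})
# -> [0, 1, 1, 2, 3, 4, 5]: packet 1 is lost once, so the sender goes back
# and re-sends from 1; selective repeat would re-send only packet 1.
```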

10 Window control
Limit the number of packets in the network to window W. Source rate ≈ (W × MSS × 8)/RTT bps. If W is too small, rate « capacity; if W is too big, rate > capacity ⇒ congestion. Adapt W to network conditions.
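The window-to-rate relation above (a window of W MSS-sized packets is sent per round trip) can be checked with a one-line helper; the function name and the example numbers are mine:

```python
def source_rate_bps(window_pkts, mss_bytes, rtt_s):
    """Rate of a window-controlled source in bits per second:
    W packets of MSS bytes delivered per round-trip time."""
    return window_pkts * mss_bytes * 8 / rtt_s

# 10 packets of 1500 bytes per 100 ms RTT -> about 1.2 Mbps.
rate = source_rate_bps(10, 1500, 0.100)
```

This is also why a fixed W cannot fit all paths: the same window gives ten times the rate on a 10 ms path as on a 100 ms path, which is exactly the motivation for adapting W.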

11 TCP window control
Receiver flow control: avoid overloading the receiver; set by the receiver; awnd = receiver (advertised) window.
Network congestion control: avoid overloading the network; set by the sender, which infers available network capacity; cwnd = congestion window.
Set W = min(cwnd, awnd).

12 TCP congestion control
Source calculates cwnd from indications of network congestion.
Congestion indications: losses, delay, marks.
Algorithms to calculate cwnd: Tahoe, Reno, Vegas, …

13 TCP Congestion Controls
Tahoe (Jacobson 1988): slow start, congestion avoidance, fast retransmit.
Reno (Jacobson 1990): fast recovery.
Vegas (Brakmo & Peterson 1994): new congestion avoidance.

14 TCP Tahoe (Jacobson 1988)
(Figure: window vs. time, showing Slow Start (SS), Congestion Avoidance (CA), and the ssthresh threshold.)

15 Slow Start
Start with cwnd := 1 (slow start). On each successful ACK, increment cwnd: cwnd := cwnd + 1. This gives exponential growth of cwnd each RTT: cwnd := 2 × cwnd. Enter CA when cwnd ≥ ssthresh.

16 Congestion Avoidance
Starts when cwnd ≥ ssthresh. On each successful ACK: cwnd := cwnd + 1/cwnd. This gives linear growth of cwnd each RTT: cwnd := cwnd + 1.

17 Packet Loss
Assumption: loss indicates congestion. Packet loss is detected by retransmission timeouts (RTO timer) or by duplicate ACKs (at least 3; fast retransmit).

18 Fast Retransmit
Waiting for a timeout is quite long, so the sender retransmits immediately after 3 dupACKs, without waiting for the timeout, and adjusts ssthresh:
flightsize := min(awnd, cwnd)
ssthresh := max(flightsize/2, 2)
then enters Slow Start (cwnd := 1).

19 Summary: Tahoe
Basic ideas: gently probe the network for spare capacity; drastically reduce rate on congestion. Windowing is self-clocking.
for every ACK {
    if (W < ssthresh) then W++    (SS)
    else W += 1/W                 (CA)
}
for every loss {
    ssthresh := W/2
    W := 1
}
Seems a little too conservative?
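The per-ACK pseudocode above can be collapsed into a per-RTT trace of the window (each RTT of slow start doubles W, each RTT of congestion avoidance adds 1). This sketch is my own simplification at RTT granularity; capping slow start at ssthresh is an assumed detail for a clean SS-to-CA handoff.

```python
def tahoe_window(rounds, ssthresh, loss_rounds):
    """Per-RTT evolution of the Tahoe window: exponential growth in slow
    start (W < ssthresh), +1 per RTT in congestion avoidance, and on a
    loss ssthresh := W/2 and W := 1 (back to slow start)."""
    w, history = 1.0, []
    for r in range(rounds):
        history.append(w)
        if r in loss_rounds:
            ssthresh = max(w / 2, 2)
            w = 1.0                    # drastic reduction: restart slow start
        elif w < ssthresh:
            w = min(2 * w, ssthresh)   # slow start: doubles each RTT
        else:
            w += 1                     # congestion avoidance: +1 per RTT
    return history

h = tahoe_window(8, ssthresh=8, loss_rounds={5})
# -> [1, 2, 4, 8, 9, 10, 1, 2]: SS doubles up to ssthresh = 8, CA adds 1
# per RTT, and the loss at round 5 (W = 10) resets W to 1 (ssthresh -> 5).
```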

20 TCP Reno (Jacobson 1990)
for every ACK { W += 1/W }    (AI)
for every loss { W := W/2 }   (MD)
How to halve W without emptying the pipe? Fast Recovery.

21 Fast recovery
Idea: each dupACK represents a packet having left the pipe (successfully received).
Enter FR/FR after 3 dupACKs:
Set ssthresh := max(flightsize/2, 2)
Retransmit the lost packet
Set cwnd := ssthresh + ndup (window inflation)
Wait till W := min(awnd, cwnd) is large enough; transmit new packet(s)
On a non-dup ACK, set cwnd := ssthresh (window deflation) and enter CA
After FR/FR, when CA is entered, cwnd is half of the window at the time the loss was detected, so the net effect of a loss is halving the window. [Source: RFC 2581; Fall & Floyd, "Simulation-based Comparison of Tahoe, Reno, and SACK TCP"]

22 Example: FR/FR
(Figure: packet trace showing retransmission on 3 dupACKs and the evolution of cwnd and ssthresh until exit from FR/FR.)
Fast retransmit: retransmit on 3 dupACKs.
Fast recovery: inflate window while repairing the loss, to fill the pipe.

23 Summary: Reno
Basic ideas: on dupACKs, halve W and avoid slow start (fast retransmit + fast recovery); on timeout, slow start.
State transitions: congestion avoidance → FR/FR on dupACKs; congestion avoidance → retransmit + slow start on timeout.
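At per-RTT granularity, the only change from the Tahoe sketch earlier is the loss branch: a dupACK-detected loss halves the window instead of resetting it to 1. This is my own simplification; timeouts (which still trigger slow start in Reno) are not modeled.

```python
def reno_window(rounds, ssthresh, loss_rounds):
    """Per-RTT Reno sketch: slow start below ssthresh, +1 per RTT in
    congestion avoidance, and on a dupACK-detected loss the net effect of
    fast retransmit / fast recovery: ssthresh := W/2 and W := ssthresh.
    Timeout losses (which would reset W to 1) are not modeled."""
    w, history = 1.0, []
    for r in range(rounds):
        history.append(w)
        if r in loss_rounds:           # loss detected by 3 dupACKs
            ssthresh = max(w / 2, 2)
            w = ssthresh               # halve instead of W := 1
        elif w < ssthresh:
            w = min(2 * w, ssthresh)   # slow start
        else:
            w += 1                     # congestion avoidance
    return history

h = reno_window(8, ssthresh=8, loss_rounds={5})
# -> [1, 2, 4, 8, 9, 10, 5, 6]: after the loss at W = 10, Reno resumes
# additive increase from 5 rather than restarting slow start from 1.
```

Comparing this with the Tahoe trace for the same scenario shows exactly why Reno recovers faster after an isolated loss: it stays in additive increase instead of re-probing from W = 1.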

24 Delay-based TCP: Vegas (Brakmo & Peterson 1994)
Reno with a new congestion avoidance algorithm. Converges (provided the buffer is large)!
(Figure: window vs. time, SS followed by CA.)

25 Congestion avoidance
Each source estimates the number of its own packets in the pipe from RTT, and adjusts the window to maintain the estimated # of packets in queues between a and b:
for every RTT {
    if W/RTTmin − W/RTT < a / RTTmin then W ++
    if W/RTTmin − W/RTT > b / RTTmin then W --
}
for every loss { W := W/2 }

26 Implications
Congestion measure = end-to-end queueing delay.
At equilibrium: zero loss; stable window at full utilization; nonzero queue, larger for more sources.
Convergence to equilibrium: converges if there is sufficient network buffering; oscillates like Reno otherwise.

27 Theory-guided design: FAST
We will study these further in TCP modeling in the following weeks.

28 Summary
UDP header / TCP header; TCP 3-way / 4-way handshake; ARQ: go-back-N / selective repeat; Tahoe / Reno / New Reno / Vegas / FAST -- useful details for your project.

29 Why both TCP and UDP?
Most applications use TCP, as this avoids re-inventing error recovery in every application. But some applications do not need TCP:
Voice applications: some packet loss is fine, and packet retransmission introduces too much delay.
Applications that send just one message (e.g. DNS/SNMP/RIP): TCP sends several packets before the useful one; we may add reliability at the application layer instead.

30 Mathematical model

31 TCP/AQM
TCP (adapts source rates x_i(t)): Reno, Vegas, FAST. AQM (adjusts congestion measures p_l(t)): DropTail, RED, REM/PI, AVQ.
Congestion control is a distributed asynchronous algorithm to share bandwidth. It has two components: TCP adapts the sending rate (window) to congestion; AQM adjusts & feeds back congestion information. Together they form a distributed feedback control system whose equilibrium & stability depend on both TCP and AQM, and on delay, capacity, routing, and # of connections.

32 Network model
Links l with capacities c_l and congestion measures p_l(t). Sources i with rates x_i(t). Routing matrix R (e.g., sources x_1(t), x_2(t), x_3(t) sharing links with measures p_1(t), p_2(t)).
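The routing matrix couples sources and links in both directions: y = R x gives the aggregate rate on each link, and q = R^T p gives the end-to-end congestion measure seen by each source. A small sketch with an assumed 2-link, 3-source topology (chosen to match the x_1..x_3, p_1, p_2 labels above; the specific R is my own example):

```python
def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

def transpose(m):
    return [list(col) for col in zip(*m)]

# R[l][i] = 1 if source i uses link l. Here: source 1 uses both links,
# source 2 uses only link 1, source 3 uses only link 2.
R = [[1, 1, 0],
     [1, 0, 1]]
x = [2.0, 1.0, 3.0]          # source rates x_i(t), e.g. in Mbps
p = [0.1, 0.2]               # link congestion measures p_l(t)

y = matvec(R, x)             # aggregate link rates:     y = R x
q = matvec(transpose(R), p)  # per-source price sums:    q = R^T p
# y = [3.0, 5.0]; q sums each source's link prices along its path.
```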

33 Model structure
(Block diagram: TCP dynamics F_1, …, F_N map end-to-end prices q to source rates x; the network maps x to link rates y = Rx and link prices p to q = R^T p; AQM dynamics G_1, …, G_L map y to p.)
The TCP CC model consists of specs for F_i (e.g., Reno, Vegas), G_l (e.g., Droptail, RED), and R (IP routing).

34 Examples
Derive the (F_i, G_l) model for Reno/RED, Vegas/Droptail, and FAST/Droptail, focusing on congestion avoidance.

35 Model: Reno
for every ACK (CA) { W += 1/W }
for every loss { W := W/2 }

36 Model: Reno
for every ACK (CA) { W += 1/W }; for every loss { W := W/2 }
x_i(t) = W_i(t)/T_i  (throughput = window size / round-trip time)
q_i(t) = 1 − ∏_{l ∈ L(i)} (1 − p_l(t)) ≈ Σ_{l ∈ L(i)} p_l(t)  (round-trip loss probability from the link loss probabilities p_l)

37 Model: Reno
for every ACK (CA) { W += 1/W }; for every loss { W := W/2 }
Combining the per-ACK and per-loss updates gives the window dynamics F_i: ACKs arrive at rate x_i(1 − q_i), each adding 1/W_i, and losses at rate x_i q_i, each removing W_i/2.

38 Model: RED
G_l: the queue length integrates the excess of the aggregate link rate over capacity, and the marking probability is a piecewise-linear increasing function of queue length, saturating at 1.

39 Model: Reno/RED

40 Decentralization structure
(Block diagram: F_1, …, F_N (TCP) and G_1, …, G_L (AQM) connected through R and R^T.)
Each source i computes F_i from only its own end-to-end measure q_i; each link l computes G_l from only its own aggregate rate y_l; the routing matrices R and R^T couple them: y = Rx, q = R^T p.

41 Validation: Reno/REM
Setup: 3 groups of 10 sources each (30 sources); link capacity = 64 Mbps, buffer = 50 kB. The slide lists group RTTs of 3, 5, 7 ms; the notes give measured RTTs (propagation + queueing) of 9, 11 & 13 ms. Group 1 starts at time 0, group 2 at time 500, group 3 at time 1000. The graph shows the mean window process, averaged over the 10 sources of each group; windows are in units of packets, each of size 1 kByte.
Observations: windows are equalized, independent of RTTs; measured windows match well with those predicted by the model; windows are smaller due to the small RTTs (~0 queueing delay). Note: individual windows were halved on each decrement, but the graph shows the window process averaged over the 10 sources in each group.

42 Model: Vegas/Droptail
F_i: for every RTT {
    if W/RTTmin − W/RTT < a then W ++
    if W/RTTmin − W/RTT > a then W --
}
for every loss { W := W/2 }
G_l (queue size): p_l(t+1) = [p_l(t) + y_l(t)/c_l − 1]^+
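The G_l update above can be iterated directly to see the Droptail price (normalized queue size) grow while the link is overloaded and drain once it is underutilized; the function name and traffic trace are my own illustration.

```python
def droptail_price(p, y, c):
    """One step of the queue/price update at a Droptail link:
    p_l(t+1) = max(p_l(t) + y_l(t)/c_l - 1, 0).
    The price grows when the link is overloaded (y_l > c_l) and drains
    at rate (1 - y_l/c_l) per step when underutilized."""
    return max(p + y / c - 1, 0.0)

# Link capacity c = 10; two overloaded steps (y = 12), then three
# underloaded steps (y = 8): price builds to 0.4, then drains to 0.
history, p = [], 0.0
for y in [12.0, 12.0, 8.0, 8.0, 8.0]:
    p = droptail_price(p, y, 10.0)
    history.append(p)
```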

43 Model: FAST/Droptail
periodically { … }
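The periodic update itself is left blank on the slide. A commonly cited form of the FAST window update from the FAST TCP literature is W := (1 − γ)W + γ((baseRTT/RTT)·W + α); the sketch below assumes that rule (the parameter values γ = 0.5, α = 10 are illustrative, not from the slide).

```python
def fast_update(w, rtt, base_rtt, alpha=10.0, gamma=0.5):
    """Assumed periodic FAST window update:
    W := (1 - gamma)*W + gamma*(base_rtt/rtt * W + alpha).
    At the fixed point W*(1 - base_rtt/rtt) = alpha, i.e. each flow keeps
    alpha packets queued in the network (like Vegas with a = b = alpha)."""
    return (1 - gamma) * w + gamma * (base_rtt / rtt * w + alpha)

# With baseRTT = 100 ms and a constant RTT of 125 ms, the iteration
# converges to W = alpha / (1 - baseRTT/RTT) = 10 / 0.2 = 50 packets.
w = 20.0
for _ in range(200):
    w = fast_update(w, rtt=0.125, base_rtt=0.100)
```

Unlike Reno's sawtooth, this update is an equation-based step toward the fixed point, which is why FAST maintains a stable window at equilibrium.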

44 Before ~2000, no one in the world could predict the throughputs of a set of TCP flows sharing any network more complex than a single bottleneck, despite a decade of research. With the theory developed over the last decade or so, we can now predict the throughput, delay, and loss in an arbitrary TCP network (under appropriate assumptions). Here is one example: Low, Peterson & Wang, JACM 2002.

45 Validation: matching transients [Jacobsson et al 2009]
Cases: same RTT, no cross traffic; same RTT, cross traffic; different RTTs, no cross traffic.

46 Recap
Protocol (Reno, Vegas, FAST, Droptail, RED, …) determines:
Equilibrium: performance (throughput, loss, delay); fairness (utility).
Dynamics: local stability; global stability.

