1 Lecture 14
High-speed TCP connections
Wraparound
Keeping the pipeline full
Estimating RTT
Fairness of TCP congestion control
Internet resource allocation and QoS
2 Protection against wraparound
What is wraparound: a byte with sequence number x is sent at one time, and later on the same connection a different byte with the same sequence number x is sent again. Wraparound is controlled by the 32-bit SequenceNum field.
The maximum lifetime of an IP datagram is 120 sec, so the time until the sequence number space wraps must be at least 120 sec. This holds comfortably for slow links but is no longer sufficient for high-speed optical networks.

Bandwidth            Time Until Wrap Around
T1 (1.5 Mbps)        6.4 hours
Ethernet (10 Mbps)   57 minutes
T3 (45 Mbps)         13 minutes
FDDI (100 Mbps)      6 minutes
STS-3 (155 Mbps)     4 minutes
STS-12 (622 Mbps)    55 seconds
STS-24 (1.2 Gbps)    28 seconds
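The wraparound times in the table come from dividing the 2^32-byte sequence number space by the link bandwidth. A minimal sketch of that calculation (bandwidth values as in the table):

#include <stdio.h>

/* Time until the 32-bit sequence number space wraps:
 * 2^32 bytes can be numbered before a sequence number repeats,
 * so wrap time = 2^32 bytes / bandwidth (in bytes per second). */
int main(void) {
    const double seq_space_bytes = 4294967296.0;   /* 2^32 */
    const double bandwidths_bps[] = { 1.5e6, 10e6, 45e6, 100e6, 155e6, 622e6, 1.2e9 };
    const char  *names[] = { "T1", "Ethernet", "T3", "FDDI", "STS-3", "STS-12", "STS-24" };

    for (int i = 0; i < 7; i++) {
        double seconds = seq_space_bytes / (bandwidths_bps[i] / 8.0);
        printf("%-8s %8.0f s (%.1f minutes)\n", names[i], seconds, seconds / 60.0);
    }
    return 0;
}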
3 Keeping the pipe full
The sequence number space (32-bit SequenceNum) must be at least twice as large as the window size (16-bit AdvertisedWindow), and it is. The window size, the number of bytes that can be in transit, is given by the AdvertisedWindow field.
The higher the bandwidth, the larger the window must be to keep the pipe full. Essentially we regard the network as a storage system, and the amount of data it can hold equals the product (bandwidth x delay).
4 Required window size for a 100 msec RTT

Bandwidth            Delay x Bandwidth Product
T1 (1.5 Mbps)        18 KB
Ethernet (10 Mbps)   122 KB
T3 (45 Mbps)         549 KB
FDDI (100 Mbps)      1.2 MB
STS-3 (155 Mbps)     1.8 MB
STS-12 (622 Mbps)    7.4 MB
STS-24 (1.2 Gbps)    14.8 MB
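Each entry in this table is just the bandwidth multiplied by the 100 ms RTT, converted to bytes. A minimal sketch of the calculation for one row (STS-3):

#include <stdio.h>

/* Required window to keep the pipe full: bandwidth x RTT, in bytes.
 * Example row: STS-3 at 155 Mbps with a 100 ms round-trip time. */
int main(void) {
    double bandwidth_bps = 155e6;    /* bits per second */
    double rtt_s = 0.100;            /* 100 ms RTT */
    double window_bytes = bandwidth_bps * rtt_s / 8.0;
    printf("window = %.2f MB\n", window_bytes / (1024.0 * 1024.0));   /* about 1.8 MB */
    return 0;
}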
5 Original Algorithm for Adaptive Retransmission
Measure SampleRTT for each segment/ACK pair.
Compute a weighted average of RTT:
EstimatedRTT = α x EstimatedRTT + (1 - α) x SampleRTT, where 0.8 ≤ α ≤ 0.9
Set the timeout based on EstimatedRTT:
TimeOut = 2 x EstimatedRTT
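A minimal sketch of this estimator in C; the initial value and the exact choice of α within 0.8 to 0.9 are illustrative:

/* Original TCP adaptive retransmission (sketch).
 * EstimatedRTT is an exponentially weighted moving average of samples. */
#define ALPHA 0.875                   /* weight of history, typically 0.8..0.9 */

static double estimated_rtt = 0.5;    /* seconds; illustrative initial value */

void rtt_sample(double sample_rtt) {
    estimated_rtt = ALPHA * estimated_rtt + (1.0 - ALPHA) * sample_rtt;
}

double retransmit_timeout(void) {
    return 2.0 * estimated_rtt;       /* TimeOut = 2 x EstimatedRTT */
}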
6 Karn/Partridge Algorithm
Do not sample RTT when retransmitting.
Double the timeout after each retransmission.
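A minimal sketch of how these two rules change the timer logic, reusing the rtt_sample and retransmit_timeout helpers from the previous sketch; the flag and variable names are illustrative:

/* Karn/Partridge rules (sketch):
 * 1. Ignore RTT samples for segments that were retransmitted
 *    (the ACK may be for the original or for the retransmission).
 * 2. Exponential backoff: double the timeout on each retransmission. */

extern void   rtt_sample(double sample_rtt);   /* feeds the RTT estimator */
extern double retransmit_timeout(void);

static double current_timeout;

void on_ack(double sample_rtt, int segment_was_retransmitted) {
    if (!segment_was_retransmitted)
        rtt_sample(sample_rtt);                /* only sample unambiguous ACKs */
    current_timeout = retransmit_timeout();    /* reset to the estimated value */
}

void on_timeout(void) {
    current_timeout *= 2.0;                    /* back off before retransmitting */
}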
7 Karn/Partridge Algorithm
8 Jacobson/Karels Algorithm
New calculation for the average RTT:
Diff = SampleRTT - EstimatedRTT
EstimatedRTT = EstimatedRTT + (δ x Diff)
Deviation = Deviation + δ x (|Diff| - Deviation)
where δ is a fraction between 0 and 1.
Consider the variance when setting the timeout value:
TimeOut = μ x EstimatedRTT + φ x Deviation, where μ = 1 and φ = 4
Notes: the algorithm is only as good as the granularity of the clock (500 ms on Unix); an accurate timeout mechanism is important to congestion control (later).
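A sketch of the same calculation in C, with δ = 1/8 as an illustrative gain (any fraction between 0 and 1 works) and μ = 1, φ = 4 as on the slide:

#include <math.h>

/* Jacobson/Karels RTT estimation (sketch). */
#define DELTA 0.125                   /* gain, a fraction between 0 and 1 */
#define MU    1.0
#define PHI   4.0

static double estimated_rtt = 0.5;    /* seconds; illustrative initial values */
static double deviation     = 0.25;

void jk_rtt_sample(double sample_rtt) {
    double diff = sample_rtt - estimated_rtt;
    estimated_rtt += DELTA * diff;
    deviation     += DELTA * (fabs(diff) - deviation);
}

double jk_timeout(void) {
    return MU * estimated_rtt + PHI * deviation;
}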
9 Congestion Control Mechanisms
The sender must perform retransmissions to compensate for packets lost to buffer overflow.
Unneeded retransmissions by the sender, caused by large delays, make routers use link bandwidth to forward unneeded copies of a packet.
When a packet is dropped along a path, the capacity used at each upstream router to forward it to the point where it was dropped is wasted.
10 Delay/Throughput Tradeoffs
12 Router with infinite buffer capacity
13 Fairness of TCP congestion mechanism
14 Flows and resource allocation
Flow: a sequence of packets sharing a common characteristic.
For a layer-N flow, the common attribute is a layer-N attribute.
All packets exchanged between two hosts form a network-layer flow; all packets exchanged between two processes form a transport-layer flow.
16 Max-min fair bandwidth allocation
Goal: fairness in a best-effort network. Consider:
Unidirectional flows.
Routers with infinite buffer space.
Link capacity is the only limiting factor.
17 Algorithm
Start with an allocation of zero Mbps for each flow.
Increment the allocation of every flow equally until one of the links of the network becomes saturated. All flows passing through that saturated link now get an equal fraction of its capacity.
Increment equally the allocation of each flow that does not pass through the first saturated link until a second link becomes saturated. All flows passing through that link now get an equal fraction of its capacity.
Continue incrementing equally the allocations of all flows that do not yet use a saturated link, until every flow uses at least one saturated link (a sketch of this procedure appears below).
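A minimal sketch of this progressive-filling procedure; the two-link, three-flow topology and the capacities are made up for illustration:

#include <stdio.h>

#define NFLOWS 3
#define NLINKS 2

/* uses[f][l] = 1 if flow f traverses link l (illustrative topology). */
static const int    uses[NFLOWS][NLINKS] = { {1, 0}, {1, 1}, {0, 1} };
static const double capacity[NLINKS]     = { 10.0, 4.0 };   /* Mbps */

int main(void) {
    double alloc[NFLOWS] = { 0 };
    int frozen[NFLOWS]   = { 0 };   /* flow already crosses a saturated link */

    for (;;) {
        /* Smallest equal increment that saturates some link. */
        double inc = -1.0;
        for (int l = 0; l < NLINKS; l++) {
            double used = 0; int active = 0;
            for (int f = 0; f < NFLOWS; f++)
                if (uses[f][l]) { used += alloc[f]; active += !frozen[f]; }
            if (active > 0) {
                double step = (capacity[l] - used) / active;
                if (inc < 0 || step < inc) inc = step;
            }
        }
        if (inc < 0) break;                     /* every flow is frozen */

        for (int f = 0; f < NFLOWS; f++)
            if (!frozen[f]) alloc[f] += inc;

        /* Freeze flows that now cross a saturated link. */
        for (int l = 0; l < NLINKS; l++) {
            double used = 0;
            for (int f = 0; f < NFLOWS; f++) if (uses[f][l]) used += alloc[f];
            if (used >= capacity[l] - 1e-9)
                for (int f = 0; f < NFLOWS; f++) if (uses[f][l]) frozen[f] = 1;
        }
    }

    for (int f = 0; f < NFLOWS; f++) printf("flow %d: %.2f Mbps\n", f, alloc[f]);
    return 0;
}

With these numbers the shared 4 Mbps link gives flows 1 and 2 a fair share of 2 Mbps each, and flow 0 then takes the remaining 8 Mbps of the first link.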
18 QoS in a datagram network?
Buffer acceptance algorithms.
Explicit Congestion Notification.
Packet classification.
Flow measurements.
19 Buffer acceptance algorithms
Tail Drop.
RED – Random Early Detection (sketched below).
RIO – Random Early Detection with In and Out packet-dropping strategies.
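A sketch of the RED idea from the list above: keep a weighted running average of the queue length and drop arriving packets with a probability that rises between a minimum and a maximum threshold. The constants and the simple probability rule are illustrative; the full algorithm has further refinements.

#include <stdlib.h>

/* Random Early Detection (sketch). */
#define WEIGHT   0.002     /* gain of the average-queue EWMA */
#define MIN_TH   5.0       /* packets: start dropping above this average */
#define MAX_TH  15.0       /* packets: drop everything above this average */
#define MAX_P    0.1       /* drop probability as the average reaches MAX_TH */

static double avg_qlen = 0.0;

/* Returns 1 if the arriving packet should be dropped. */
int red_enqueue_decision(int instantaneous_qlen) {
    avg_qlen = (1.0 - WEIGHT) * avg_qlen + WEIGHT * instantaneous_qlen;

    if (avg_qlen < MIN_TH)
        return 0;                                   /* queue short: accept */
    if (avg_qlen >= MAX_TH)
        return 1;                                   /* queue long: drop */

    double p = MAX_P * (avg_qlen - MIN_TH) / (MAX_TH - MIN_TH);
    return ((double)rand() / RAND_MAX) < p;         /* drop with probability p */
}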
21 Explicit Congestion Notification (ECN)
The TCP congestion control mechanism discussed earlier has a major flaw: it detects congestion only after routers have already started dropping packets. Network resources are wasted because packets are dropped at some point along their path, after consuming link bandwidth as well as router buffers and CPU cycles up to the point where they are discarded.
The question that comes to mind is: could routers prevent congestion by informing the source of the packets when they become lightly congested, but before they start dropping packets? This strategy is called source quench.
22 Source quench
Send explicit notifications to the source, e.g., using ICMP source quench messages. Yet sending more packets into a network that already shows signs of congestion may not be the best idea.
Alternative (ECN): set a congestion notification flag in the IP header to inform the destination; the destination then informs the source by setting a flag in the TCP header of the segments carrying acknowledgments.
23 Problems with ECN
(1) TCP must be modified to support the new flag.
(2) Routers must be modified to distinguish between ECN-capable flows and those that do not support ECN.
(3) IP must be modified to support the congestion notification flag.
(4) TCP should allow the sender to confirm the congestion notification to the receiver, because acknowledgments could be lost.
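Despite these issues, the router-side behavior is simple: if the flow advertised ECN support in its IP header, mark the packet instead of dropping it. A minimal sketch, with a simplified ECN field and an illustrative congestion test:

/* Explicit Congestion Notification at a router (sketch).
 * The sender marks packets as ECN-capable (ECT); a lightly congested
 * router changes the field to Congestion Experienced (CE) instead of
 * dropping, and the receiver echoes the signal back in TCP ACKs. */

enum ecn { NOT_ECT, ECT, CE };        /* simplified codepoints */

struct packet {
    enum ecn ecn_field;
    /* ... addresses, payload, etc. ... */
};

/* Returns 1 if the packet should be dropped, 0 if it is forwarded. */
int handle_arrival(struct packet *p, int queue_lightly_congested) {
    if (!queue_lightly_congested)
        return 0;                     /* no congestion: just forward */
    if (p->ecn_field == NOT_ECT)
        return 1;                     /* non-ECN flow: fall back to dropping */
    p->ecn_field = CE;                /* ECN-capable flow: mark, don't drop */
    return 0;
}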
24 Maximum and minimum bandwidth guarantees
A. Packet classification: identify the flow a packet belongs to. At what layer should this be done?
At the network layer? Doing it at every router is too expensive; the edge routers may be able to do it.
At the application layer? Difficult.
MPLS – Multiprotocol Label Switching: add an extra header in front of the IP header. A router then decides the output link based on the input link and the MPLS header.
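A minimal sketch of the forwarding decision just described: the output link (and the outgoing label) is a table lookup keyed by the input link and the incoming MPLS label. The table layout and names are illustrative:

/* MPLS-style label switching (sketch): forwarding is a table lookup
 * keyed by (input link, incoming label); the entry also gives the
 * label to write into the outgoing packet's MPLS header. */

#define NPORTS  4
#define NLABELS 16

struct lfib_entry {
    int out_port;
    int out_label;
};

/* Label forwarding table, indexed by input port and incoming label. */
static struct lfib_entry lfib[NPORTS][NLABELS];

void forward(int in_port, int in_label, int *out_port, int *out_label) {
    struct lfib_entry *e = &lfib[in_port][in_label];
    *out_port  = e->out_port;     /* which link to send the packet on   */
    *out_label = e->out_label;    /* label swapped into the MPLS header */
}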
25 Maximum and minimum bandwidth guarantees
B. Flow measurements: how to choose the measurement interval to accommodate bursty traffic? The token bucket.
26 The token bucket filter
Characterized by: (1) a token rate R, and (2) the depth of the bucket, B.
Basic idea: the sender is allocated tokens at a given rate and can accumulate tokens in the bucket until the bucket is full. To send a byte, the sender must spend a token. The maximum burst is of size B, because at most B tokens can be accumulated.
27 Example
Flow A generates data at a constant rate of 1 Mbps. Its filter will support a rate of 1 Mbps and a bucket depth of 1 byte.
Flow B alternates between 0.5 and 2.0 Mbps. Its filter will support a rate of 1 Mbps and a bucket depth of 1 Mbit: tokens accumulated while sending below the token rate let it burst above that rate later.
Note: a single flow can be described by many token buckets.
28 Example
30 Token bucket
L = packet length
C = # of tokens in the bucket
---------------------------------------------------
if (L <= C) {
    accept the packet;
    C = C - L;
} else
    drop the packet;
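The check above assumes C already holds the current token count. A fuller sketch also adds tokens at rate R and caps them at the bucket depth B (names follow the slides; the explicit time argument is an assumption of this sketch):

/* Token bucket with refill (sketch). R tokens (bytes) per second are added,
 * up to a maximum of B; a packet of length L is accepted only if at least
 * L tokens are available. */

static double R = 125000.0;   /* token rate: 1 Mbps expressed in bytes/s */
static double B = 125000.0;   /* bucket depth in bytes */
static double C = 0.0;        /* current token count */
static double last_time = 0.0;

/* now: current time in seconds; returns 1 to accept, 0 to drop. */
int token_bucket(double now, double L) {
    C += R * (now - last_time);         /* accumulate tokens since last packet */
    if (C > B) C = B;                   /* the bucket cannot hold more than B */
    last_time = now;

    if (L <= C) { C -= L; return 1; }   /* enough tokens: accept */
    return 0;                           /* otherwise: drop (policing) */
}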
31 A shaping buffer delays packets that do not conform to the traffic shape
if (L <= C) {
    accept the packet;
    C = C - L;
} else {
    /* the packet arrived early, delay it */
    while (C < L) { wait; }
    transmit the packet;
    C = C - L;
}
32 A QoS Capable Router