
1 Samrat Sen, Computer Science and Engineering

2 Outline: Introduction, Abbreviations, TCP Friendliness, Existing TCP, Ongoing Work, Single-Rate Congestion Control Protocols, Multi-Rate Congestion Control Protocols, Conclusion, References

3 Introduction
New trends in communication, driven by audio and video streaming applications, have increased the amount of non-TCP traffic in the Internet.
These new applications do not share bandwidth fairly with applications built on TCP.
TCP has an end-to-end congestion control mechanism, and hence TCP flows with similar round-trip times share the bandwidth of a common bottleneck fairly.

4 Introduction (cont'd)
To reduce this unfairness, many rate-adaptation rules and mechanisms for non-TCP traffic that are compatible with TCP have been proposed.

5 TCP Friendliness
What is TCP friendliness? It concerns the effect that a non-TCP flow has on competing TCP flows.
Unicast: a non-TCP flow is TCP-friendly if it does not reduce the long-term throughput of any TCP flow more than another TCP flow on the same path would.

6 TCP Friendliness (cont'd)
Multicast: defined in terms of "bounded fairness":
  a · r_TCP <= r <= b · r_TCP
where r is the rate of the multicast flow, r_TCP is the rate of a TCP flow under the same conditions, and a and b are functions of the number of receivers of the flow.
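A small illustration (not from the slides) of the bounded-fairness check above; the function name and the example bounds a = 0.5, b = 2.0 are assumptions.

    def is_bounded_fair(r, r_tcp, a, b):
        """Return True if the multicast rate r lies within the bounded-fairness
        window [a * r_tcp, b * r_tcp]."""
        return a * r_tcp <= r <= b * r_tcp

    # Example: a multicast flow at 1.2 Mb/s against a TCP rate of 1.0 Mb/s,
    # with illustrative bounds a = 0.5 and b = 2.0.
    print(is_bounded_fair(1.2e6, 1.0e6, a=0.5, b=2.0))  # True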

7 Existing TCP Mechanics
Slow start: the start-up phase.
AIMD:
  Additive increase: the congestion window grows by one segment per RTT.
  Multiplicative decrease: the congestion window is halved.
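A minimal sketch (not from the slides) of the AIMD rule described on this slide, with window sizes measured in segments; the update points per RTT are simplified assumptions.

    def aimd_update(cwnd, rtt_completed=False, loss=False):
        """One AIMD step: grow by one segment per RTT without loss,
        halve the congestion window when congestion is detected."""
        if loss:
            return max(1.0, cwnd / 2.0)   # multiplicative decrease
        if rtt_completed:
            return cwnd + 1.0             # additive increase, one segment per RTT
        return cwnd

    # Example: two loss-free RTTs followed by a loss.
    w = 10.0
    w = aimd_update(w, rtt_completed=True)   # 11.0
    w = aimd_update(w, rtt_completed=True)   # 12.0
    w = aimd_update(w, loss=True)            # 6.0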

8 TCP Developments
Reno and New Reno: remain in fast recovery (FRCV) when packets are lost, and hence avoid expensive timeouts.
SACK: recovers more than one loss per RTT and estimates more precisely the number of packets in the pipe.

9 Ongoing Work
Improving slow start: changing the initial window to greater than 1, together with limited byte counting.
Establishing many parallel TCP connections for the same transfer, with an adaptive mechanism (depending on congestion).
Eliminating the long-delay link (spoofing): the long-delay link is replaced by an optimized transport protocol such as STP.

10 Ongoing Work (cont'd)
Avoiding unnecessary retransmit timeouts: limited transmit [proposed in RFC 2760]. Instead of waiting for three duplicate ACKs, the sender transmits new data after only one or two duplicate ACKs.

11 Ongoing Work (cont'd)
Avoiding unnecessary congestion control reactions to reordered or delayed packets: the sender needs to distinguish between delayed/reordered and lost packets.

12 Ongoing Work (cont'd)
To identify and separate delayed or reordered segments, D-SACK has been proposed (RFC 2883): the receiver reports duplicate segments, so the sender can correctly identify them and the unnecessary halving of the window is avoided.

13 Ongoing Work (cont'd)
Explicit Congestion Notification (ECN): for small flows, marking packets instead of dropping them prevents timeouts.

14 Ongoing Work (cont'd)
Asymmetric networks:
  Use header compression.
  Delay ACKs at destination A: an ACK is sent after every d packets (based on the congestion).
  Scan the buffer at A and, if it is full, replace old ACKs with new ones.

15 Congestion Control Schemes
Classification of congestion control schemes:
Window-based vs. rate-based: window-based schemes use the same sliding-window logic as TCP; rate-based schemes achieve friendliness by mimicking AIMD or are model-based.
Single-rate vs. multi-rate: in single-rate schemes data is sent to all receivers at the same rate; multi-rate congestion control allows more flexible allocation of bandwidth along different network paths.

16 Classification Schemes for TCP-Friendly Protocols

17 Single-Rate Congestion Control Protocols: Rate-Based

18 Rate Adaptation Protocol (RAP)
The source sends data packets with sequence numbers, and the receiver acknowledges them.
Congestion detection:
  Indicated by lost packets.
  Variables such as the RTT are calculated similarly to the way TCP calculates them.
  The sender maintains a transmission history.
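A minimal sketch (an assumption, not the RAP specification) of AIMD-style rate adaptation driven by the loss detection described above; RAP itself adjusts the inter-packet gap, approximated here as packet_size / rate.

    def rap_adjust_rate(rate, packet_size, rtt, loss_detected):
        """AIMD-style sender rate update: roughly one extra packet per RTT
        while no loss is detected, half the rate when a loss is detected."""
        if loss_detected:
            return rate / 2.0                  # multiplicative decrease
        return rate + packet_size / rtt        # additive increase per RTT

    # The sender would then space packets by the inter-packet gap:
    # ipg = packet_size / rate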

19 RAP Performance
Simulation shows TCP-friendliness, though performance differs slightly from TCP:
  TCP is more sensitive to the number of outstanding packets.
  RAP is more aggressive due to clustered loss detection and fine-grained rate adaptation.

20 RAP – Simulation Results

21 RAP – Simulation Results
The results show that the deviation of RAP with respect to TCP depends on the deviation of the TCP protocol itself from AIMD behavior.

22 RAP – Simulation Results

23 TFRC Protocol
TCP-Friendly Rate Control (TFRC) is a congestion control mechanism designed for unicast flows operating in an Internet environment and competing with TCP traffic.
TFRC is a receiver-based mechanism: the congestion control information (i.e., the loss event rate) is calculated at the data receiver rather than at the data sender.

24 TFRC Protocol (cont'd)
TFRC's congestion control mechanism works as follows:
  The receiver measures the loss event rate and feeds this information back to the sender.
  The sender also uses these feedback messages to measure the round-trip time (RTT).
  The loss event rate and RTT are then fed into TFRC's throughput equation, giving the acceptable transmit rate.

25 TFRC Protocol (cont'd)
The sender then adjusts its transmit rate to match the calculated rate. (A loss event consists of one or more packets dropped within a single round-trip time; TFRC uses the TCP response function.)

26 TFRC Protocol (cont'd)
X: the transmit rate in bytes/second.
S: the packet size in bytes.
R: the round-trip time in seconds.
P: the loss event rate, between 0 and 1.0, i.e. the number of loss events as a fraction of the number of packets transmitted.
t_RTO: the TCP retransmission timeout value in seconds.
b: the number of packets acknowledged by a single TCP acknowledgement.
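The throughput equation itself appears only as a figure on the slide; the sketch below uses the TCP response function in the form given in the TFRC specification (RFC 3448) together with the parameter names defined above.

    from math import sqrt

    def tfrc_rate(S, R, P, t_RTO, b=1):
        """TCP response function used by TFRC (RFC 3448 form):
        X = S / (R*sqrt(2*b*P/3) + t_RTO*(3*sqrt(3*b*P/8))*P*(1 + 32*P**2)).
        Returns the allowed transmit rate X in bytes/second."""
        denom = (R * sqrt(2.0 * b * P / 3.0)
                 + t_RTO * (3.0 * sqrt(3.0 * b * P / 8.0)) * P * (1.0 + 32.0 * P ** 2))
        return S / denom

    # Example: 1000-byte packets, 100 ms RTT, 1% loss event rate, t_RTO = 4 * RTT.
    print(tfrc_rate(S=1000, R=0.1, P=0.01, t_RTO=0.4))   # roughly 1.1e5 bytes/s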

27 TFRC Protocol – Simulation Results

28 LDA+ Protocol
Loss-Delay based Adaptation algorithm (LDA+): the rate is increased or reduced dynamically based on the current network situation.
During loss situations, the flow bandwidth is calculated from:
  L – loss fraction
  M – packet size
  t_out – retransmission timeout
  D – number of packets acknowledged by each acknowledgement packet
  R – transmission rate

29 LDA+ Protocol (cont'd)
No-loss condition: the additive increase rate A is calculated from:
  R – bottleneck bandwidth
  r – transmission rate
so as to limit A to the bottleneck bandwidth.

30 LDA+ Protocol (cont'd)
Finally, the rate should not exceed that of a TCP connection sharing the same link, computed from:
  T – round-trip delay
  the time between two receiver reports
The additive increase value A_m is set based on these quantities.

31 LDA+ Protocol – Simulation Results

32 LDA+ Protocol – Simulation Results

33 TEAR Protocol
TCP Emulation At Receivers (TEAR) shifts most flow-control functions to the receivers:
  Instead of reporting congestion signals, receivers process them immediately.
  Receivers emulate the TCP window adjustment protocol.
    Increase: congestion avoidance and slow start.
    Decrease: fast recovery and timeout.

34 TEAR Protocol (cont'd)
The receiver emulates TCP window adjustment, maintaining a CWND, and reports the resulting rate R; the sender sets its transmission rate to R. (Figure: sender and receiver exchanging the reported rate.)

35 TEAR Protocol (cont'd)
Instead of reporting an instantaneous (oscillating) rate, the receiver can find the equilibrium operating point: a smoother, averaged rate obtained by weighted averaging of the emulated rate, which is then reported to the sender.
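A minimal sketch (an assumption, not TEAR's exact algorithm) of the receiver-side idea above: convert the emulated window into a rate and report a weighted (exponential) average rather than the oscillating instantaneous value. The smoothing weight is an assumed parameter.

    class TearReceiverSketch:
        """Receiver-side rate estimation: rate = cwnd / RTT, smoothed by an
        exponentially weighted average before being reported to the sender."""

        def __init__(self, alpha=0.125):
            self.alpha = alpha            # smoothing weight (assumed value)
            self.smoothed_rate = None

        def report_rate(self, cwnd, rtt):
            instantaneous = cwnd / rtt    # rate of the emulated TCP window
            if self.smoothed_rate is None:
                self.smoothed_rate = instantaneous
            else:
                self.smoothed_rate = ((1 - self.alpha) * self.smoothed_rate
                                      + self.alpha * instantaneous)
            return self.smoothed_rate     # value fed back to the sender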

36 TEAR – Results

37 TEAR Protocol (cont'd)
In TCP, after an initial packet loss in a window, at least cwnd packets are sent (including the lost packet); this holds no matter which packet in that window is lost. If the TCP sender does not trigger fast recovery by the time these packets would be acknowledged (some of them may also be lost), a timeout will occur. (Figure example: cwnd = 5; only two duplicate ACKs are received, so TCP times out.)

38 TEAR Protocol (cont'd)
If the TEAR receiver does not detect fast recovery before receiving a packet numbered x + cwnd - 1 or higher after the initial loss of packet x (counting the lost packet), then TEAR enters timeout. Timeout also occurs if T_timeout (= T_interarrival · cwnd · 2DEV) has expired after the initial packet loss. Packet losses in the next RTT period are then ignored. (Figure example: cwnd = 5, packets x through x+4; the TEAR receiver detects the timeout.)

39 Single-Rate Congestion Control Protocols: Window-Based

40 RLA Protocol
Random Listening Algorithm (RLA): targets multicast fairness; simple and similar to TCP.
Upon receiving a congestion signal, the sender reduces its window with probability 1/n, where n is the number of receivers reporting frequent losses.
If all receivers experience the same average congestion, the sender reacts as if listening to one representative receiver.

41 RLA Protocol
(Figure: sender S with a multicast connection to receivers R1 ... RN, alongside competing TCP connections.)

42 RLA Protocol (cont'd)
Each receiver stores the smoothed RTT and the measured congestion probability.
If congestion is detected, the window is halved only if:
  the previous window cut was made long enough ago, and
  a generated random number is <= 1/n.
When a packet has been ACKed by all receivers, the congestion window grows: cwnd = cwnd + 1/cwnd.
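A minimal sketch (assumed names and timing check) of the sender-side window rules listed on this slide.

    import random
    import time

    def rla_on_congestion(cwnd, n, last_cut_time, min_cut_interval):
        """Halve the window only if the previous cut is old enough and a
        random draw is <= 1/n, where n is the number of receivers
        reporting frequent losses."""
        now = time.time()
        if now - last_cut_time >= min_cut_interval and random.random() <= 1.0 / n:
            return cwnd / 2.0, now
        return cwnd, last_cut_time

    def rla_on_full_ack(cwnd):
        """Linear increase once a packet has been ACKed by all receivers."""
        return cwnd + 1.0 / cwnd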

43 Performance of RLA
(Figure: simulation topology with sender S, gateways G, receiver R1, and links L1, L21, L31, L41.)

44 Results: Drop-Tail and RED Gateways
Throughput under congestion is essentially fair to TCP:
  RED: (1/3) λ_TCP <= λ_RLA <= 3n λ_TCP
  Drop-tail: (1/4) λ_TCP <= λ_RLA <= 2n λ_TCP

45 LPR Protocol
Linear Proportional Response (LPR): an improvement over the RLA mechanism.
The probability with which the multicast sender reduces its cwnd is proportional to the loss probability at the receiver, where x_i is the number of losses at receiver i.
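A minimal sketch of a reduction probability proportional to each receiver's losses; the normalization by the total loss count is an assumption, since the slide gives the exact expression only as a figure.

    import random

    def lpr_should_reduce(losses, i):
        """Reduce the window on a report from receiver i with probability
        proportional to x_i, receiver i's share of the observed losses."""
        total = sum(losses)
        if total == 0:
            return False
        return random.random() <= losses[i] / total

    # Example: receiver 0 reported 3 losses, receiver 1 reported 1 loss.
    print(lpr_should_reduce([3, 1], i=0))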

46 MTCP Protocol
Hierarchical congestion reports: internal tree nodes act as sender's agents (SAs); receivers send feedback to their SAs, and each SA sends a summary of the congestion level of its children to its parent.

47 MTCP Protocol (cont'd)
Window-based control: the sender controls its rate based on the aggregated summary.
The congestion window is adjusted downward on:
  Retransmission timeout.
  Fast retransmission (in conjunction with selective acknowledgment): three NACKs for the same packet reduce the window (note that not every loss halves CWND).
  A TCP Vegas-based scheme (i.e., a long RTT causes the window to go down).
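A minimal sketch (an assumption, since the slides do not define the summary function) of the hierarchical reporting: each sender's agent forwards one summary of its children's congestion reports up the tree.

    def sa_summarize(child_reports):
        """Aggregate the children's reports into one summary; here the
        summary simply reflects the most congested child (highest loss
        rate, smallest window), an assumed aggregation rule."""
        return {
            "loss_rate": max(r["loss_rate"] for r in child_reports),
            "window": min(r["window"] for r in child_reports),
        }

    # Example: an SA with two children forwards a single summary to its parent.
    reports = [{"loss_rate": 0.01, "window": 20}, {"loss_rate": 0.05, "window": 8}]
    print(sa_summarize(reports))   # {'loss_rate': 0.05, 'window': 8}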

48 MTCP – Results

49 MTCP – Results

50 NCA Protocol
Nominee-based Congestion Avoidance (NCA): to achieve TCP friendliness, it tries to find the path in the multicast tree on which a TCP session would receive the least bandwidth, i.e. the receiver with the highest value of the nominee selection metric.
Once a nominee has been found, the source unicasts a message to this receiver soliciting a per-packet ACK from it.

51 NCA Protocol – Results

52 PGMCC Protocol
(Figure: sender, receivers, and an acker exchanging Data, NACK, and ACK messages.) The sender sends a packet and the acker responds with an ACK.

53 PGMCC Protocol (cont'd)
The sender sends the next packet, which is lost by one receiver. That receiver sends a NACK while the acker sends an ACK. If the NACK comes from a receiver that is worse off than the current acker, that receiver is designated as the new acker.
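A minimal sketch of the acker-switch rule above; comparing receivers by the simplified TCP-like throughput estimate 1 / (RTT * sqrt(p)) is an assumption here, not taken from the slides.

    from math import sqrt

    def throughput_estimate(rtt, p):
        """Rough TCP-like throughput estimate from RTT and loss rate p."""
        return float("inf") if p == 0 else 1.0 / (rtt * sqrt(p))

    def on_nack(nack_rtt, nack_p, acker_rtt, acker_p):
        """Switch the acker if the NACKing receiver looks worse off than
        the current acker."""
        if throughput_estimate(nack_rtt, nack_p) < throughput_estimate(acker_rtt, acker_p):
            return "switch acker"
        return "keep acker"

    print(on_nack(nack_rtt=0.2, nack_p=0.05, acker_rtt=0.1, acker_p=0.01))  # switch acker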

54 PGMCC Protocol (cont'd)
The sender sends the next packet, and the new acker sends the ACK.

55 PGMCC – Unnecessary Starvation
The sender sends a packet, but the acker has left the group; the sender waits until a timeout occurs, causing unnecessary starvation.

56 PGMCC – NACK Suppression
(Figure: TCP sender TS and TCP receiver TR compete with PGMCC sender PS and PGMCC receivers PR1 and PR2 across routers R and a congested link, illustrating NACK suppression.)

57 Multi-Rate Congestion Control Protocols: Rate-Based

58 RLC Protocol
Receiver-driven Layered Congestion control (RLC): a layered method in which the bandwidth of each new layer increases exponentially.
The time a receiver has to wait before joining a new layer is also exponential, so the increase in bandwidth is proportional to the amount of time that must pass without any packet loss.
Dropping one layer corresponds to a multiplicative decrease.

59 RLC – Results

60 FLID-DL Protocol
Fair Layered Increase/Decrease with Dynamic Layering (FLID-DL): the sender calculates increase signals according to FLID, and the receiver reacts to these signals using dynamic layering (DL).

61 FLID Scheme
The Fair Layered Increase/Decrease scheme increases and decreases reception rates to achieve the same average throughput as a TCP flow.
It uses RLC's sender-initiated synchronization-point method to coordinate receivers.
FLID uses probabilistic increase signals (rather than packet bursts, as in RLC) to tell receivers when to subscribe to additional layers.

62 FLID Scheme (cont'd)
RLC works with fixed download rates:
  The rate of each layer is twice that of the previous layer.
  Possible download rates therefore grow by factors of two.
FLID works with arbitrary download rates:
  The rate of each layer is independent of the previous layer.
  Possible download rates can be much more fine-grained.
  Example: the scheme can be designed so that the cumulative rate increases by a factor of 1.3 for each additional layer, instead of 2.0 as in RLC (see the sketch below).
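A small sketch of the layer-sizing comparison mentioned above; the base-layer rate and the assumption that every additional layer multiplies the cumulative rate by a fixed factor are illustrative.

    def cumulative_rates(base_rate, factor, num_layers):
        """Cumulative reception rate after subscribing to 1..num_layers layers,
        when each new layer multiplies the cumulative rate by `factor`."""
        return [base_rate * factor ** i for i in range(num_layers)]

    base = 32_000   # assumed base-layer rate in bytes/s
    print(cumulative_rates(base, 2.0, 5))   # RLC-style doubling per layer
    print(cumulative_rates(base, 1.3, 5))   # finer-grained FLID example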

63 FLID-DL Results
As the queue size increases, FLID-DL is not able to adjust.

64 MLDA Protocol
Multicast Loss-Delay based Adaptation algorithm (MLDA): the sender periodically transmits reports about the sent layers.
Each receiver measures the loss and delay of these reports for some time and determines the TCP-friendly bandwidth share between itself and the sender.
Based on the calculated share and the layer rates (obtained from the sender), the receiver decides to leave, join, or stay on a layer.

65 MLDA Protocol (cont'd)
The receivers schedule transmission of reports indicating their calculated bandwidth share. Based on these receiver reports, the sender adjusts the sizes of the different layers.
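A minimal sketch (an assumption, not MLDA's exact rules) of the receiver-side join/leave decision: subscribe to the highest layer whose cumulative rate still fits within the calculated TCP-friendly share.

    def choose_layers(layer_rates, tcp_friendly_share):
        """Return how many layers to subscribe to so that the cumulative
        rate does not exceed the TCP-friendly bandwidth share."""
        cumulative = 0.0
        layers = 0
        for rate in layer_rates:
            if cumulative + rate > tcp_friendly_share:
                break
            cumulative += rate
            layers += 1
        return layers

    # Example: layer rates advertised by the sender (bytes/s) and the
    # share computed by the receiver.
    print(choose_layers([64_000, 128_000, 256_000], tcp_friendly_share=200_000))  # 2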

66 MLDA – Results

67 PLM Protocol
Packet-pair receiver-driven cumulative Layered Multicast (PLM): it uses packet-pair probes to estimate available bandwidth.
A rate decrease is carried out by unsubscribing from an appropriate number of layers.
A rate increase is limited to the minimum bandwidth estimated during the measurement interval.
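A minimal sketch (assumed names and structure) of the two ingredients described above: a packet-pair bandwidth estimate, and subscribing up to the minimum estimate observed during the interval.

    def packet_pair_estimate(packet_size, gap_seconds):
        """Bottleneck bandwidth estimate from one packet pair: the second
        packet's size divided by the inter-arrival gap at the receiver."""
        return packet_size / gap_seconds

    def target_layers(estimates, cumulative_layer_rates):
        """Subscribe up to the highest layer whose cumulative rate does not
        exceed the minimum bandwidth estimate of the interval."""
        bw = min(estimates)
        return sum(1 for rate in cumulative_layer_rates if rate <= bw)

    # Example: three packet-pair estimates in one interval (bytes/s), and the
    # cumulative rates of layers 1..4.
    est = [packet_pair_estimate(1000, g) for g in (0.004, 0.005, 0.006)]
    print(target_layers(est, [100_000, 150_000, 200_000, 400_000]))   # 2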

68 PLM – Results

69 PLM – Results

70 Multi-Rate Congestion Control Protocols: Window-Based

71 Rainbow Protocol
Receiver-based: the receiver requests transmission of each individual data packet, each marked with a label.
The receiver maintains a cwnd. When a packet loss is detected, cwnd = cwnd / 2. The cwnd is increased by 1 for each received packet during slow start, or by 1 when a full window has been received.
Uses digital fountain encoding.
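A minimal sketch of the receiver-maintained window described above; the slow-start threshold and the exact increase points are assumptions.

    def rainbow_cwnd_update(cwnd, ssthresh, loss=False, full_window_received=False):
        """Halve the window on a detected loss; otherwise grow by one per
        received packet in slow start, or by one per full window received."""
        if loss:
            return max(1.0, cwnd / 2.0)
        if cwnd < ssthresh:               # slow-start region: +1 per packet
            return cwnd + 1.0
        if full_window_received:          # otherwise: +1 per full window
            return cwnd + 1.0
        return cwnd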

72 Rainbow Protocol (cont'd)
The intermediate routers store information about the requests they have received. When the reply packet is forwarded towards the receiver, the intermediate routers delete this information.

73 Rainbow Protocol (cont'd)

74 Conclusion
The various protocols presented are difficult to compare because of the lack of standard methods for comparing congestion control protocols.
The congestion control methods use simple formulas, which need to be improved.
Router-based mechanisms may open up new dimensions; they have not been considered in this survey.

75 References
1. W. Richard Stevens, TCP/IP Illustrated, Vol. 1.
2. J. Widmer, R. Denda, and M. Mauve, "A Survey on TCP-Friendly Congestion Control," University of Mannheim, Germany, IEEE Network, May/June 2001.
3. S. Floyd, "A Report on Recent Developments in TCP Congestion Control," AT&T Center for Internet Research at ICSI, IEEE Communications Magazine, April 2001.
4. C. Barakat, E. Altman, and W. Dabbous, "TCP Performance in a Heterogeneous Network: A Survey," INRIA, IEEE Communications Magazine, January 2000.
5. "Simulation-based Comparisons of Tahoe, Reno and SACK TCP," Lawrence Berkeley National Laboratory.

76 Thank You

