TCP-LP: A Distributed Algorithm for Low-Priority Data Transfer
Group 5, ECE 4605
Neha Jain, Shashwat Yadav
Motivation
- Service prioritization:
  - Best-effort service: all of the Internet
  - Low-priority service: bulk transfer (FTP downloads, P2P); utilizes excess network bandwidth
- End-point protocol: no network support required; sender-side modification only
Objectives
- Utilize unused bandwidth
  - WHY does unused bandwidth exist? TCP congestion control, cross traffic, and ACK delays due to reverse traffic leave capacity idle
- TCP transparency
- Fairness among simultaneous TCP-LP flows
TCP-LP at Work
Implementation
- Low-priority service runs transparently alongside (simultaneously with) TCP
- Two-class hierarchical scheduling model:
  - TCP packets have strict priority over TCP-LP packets
  - Fairness among microflows within each class
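The two-class model above is a conceptual reference: TCP-LP approximates it end-to-end, without router support. A minimal sketch of the reference scheduler (class and method names are illustrative, not from the paper):

```python
from collections import deque

class TwoClassScheduler:
    """Sketch of the two-class hierarchical model: TCP packets have
    strict priority over TCP-LP packets."""

    def __init__(self):
        self.tcp_queue = deque()  # high-priority (TCP) class
        self.lp_queue = deque()   # low-priority (TCP-LP) class

    def enqueue(self, packet, low_priority=False):
        (self.lp_queue if low_priority else self.tcp_queue).append(packet)

    def dequeue(self):
        # TCP-LP traffic is served only when no TCP packet is waiting,
        # so it consumes only the excess bandwidth.
        if self.tcp_queue:
            return self.tcp_queue.popleft()
        if self.lp_queue:
            return self.lp_queue.popleft()
        return None
```

Within each class, fairness among microflows would be handled by a second scheduling level, omitted here.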
Congestion Notification
- Early Congestion Indication (ECI): congestion is signaled before TCP reacts
- Based on one-way packet delays d_i, smoothed as
    sd_i = (1 - gamma) * sd_{i-1} + gamma * d_i
- Early congestion indication condition:
    sd_i > d_min + (d_max - d_min) * delta
  where gamma is the delay smoothing parameter and delta is the delay threshold
- Advantages:
  - Sender and receiver clocks need not be synchronized
  - Unperturbed by reverse traffic
Congestion Avoidance (state machine)
- Default: linear increase, as with TCP
- On an initial early congestion indication: W = W/2, enter the inference state
- In the inference state:
  - Another ECI before the inference time-out: W = 1
  - Inference time-out expires with no further ECI: resume linear increase
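The state machine above can be sketched as window-update logic (a simplified sketch; event handler names are illustrative):

```python
class LpWindow:
    """Sketch of TCP-LP congestion avoidance: additive increase plus an
    inference phase entered after an early congestion indication."""

    def __init__(self):
        self.W = 1.0            # congestion window (segments)
        self.inference = False  # currently in the inference state?

    def on_ack(self):
        if not self.inference:
            self.W += 1.0 / self.W  # linear (additive) increase, as in TCP

    def on_eci(self):
        if self.inference:
            # a second ECI during inference: congestion is attributed to
            # higher-priority TCP traffic, so concede all bandwidth
            self.W = 1.0
        else:
            # initial ECI: halve the window and start the inference phase
            self.W = max(self.W / 2.0, 1.0)
            self.inference = True

    def on_inference_timeout(self):
        # no further ECI within the time-out: resume additive increase
        self.inference = False
```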
Mechanisms
- Congestion avoidance: modifies LIMD (linear increase, multiplicative decrease) by adding an inference phase
- Observes network responses before reclaiming bandwidth
Parameter Settings
- Delay smoothing parameter (gamma):
  - Small values degrade the ability to detect congestion
  - Higher values may cause false ECIs
  - An intermediate value balances the two requirements
- Inference time-out timer (itt):
  - Smaller values correspond to a throughput-aggressive nature
  - Higher values increase congestion responsiveness
  - An intermediate setting strikes the balance
Delay Threshold (delta)
- Smaller values increase TCP-LP transparency
- Higher values increase throughput
- Tradeoff between TCP transparency and throughput
Bulk Data Transfer
- Simultaneous FTP downloads, no reverse traffic
- No excess capacity available
- TCP-LP slightly perturbs the TCP flow >> WHY?
Reverse Background Traffic
- 10 FTP/TCP flows in the reverse direction
- In the forward direction, ACK delays and losses decrease TCP throughput
- Excess capacity is utilized by TCP-LP flows
- Nearly perfect TCP transparency
Experimental Results
- TCP-LP increases additively while the TCP congestion window is collapsed
- TCP-LP is already in the inference phase when TCP reaches its maximum capacity
- TCP-LP encounters an ECI and backs off
>> WHY do the TCP-LP peaks occur before TCP's?
HTTP Background Traffic
Impact on HTTP response times:
- Case I: TCP used for HTTP vs. TCP used for HTTP with a simultaneous bulk transfer using TCP-LP
  - Slight increase in mean retrieval time
- Case II: TCP used for both HTTP and bulk transfer vs. TCP used for HTTP with the bulk transfer using TCP-LP
  - Web retrieval times decreased by 80%
  - Bulk transfer time increased by less than 10%
RTT Heterogeneity
- Case 1: single bottleneck; HTTP uses TCP; when the bulk transfer uses TCP-LP, the web retrieval rate increases by 80%
- Case 2: multiple bottlenecks; HTTP uses TCP; when the bulk transfer uses TCP-LP, the web retrieval rate increases by 40%
>> WHY the difference?
Critique
- AIMD is unsuitable for high-speed connections, which produce:
  - Large window sizes
  - Faster ACKs, and hence a shorter span in which TCP-LP can transmit
- TCP-LP needs slow start to quickly make use of the excess bandwidth
Conclusion
- An end-to-end protocol, easy to deploy: a sender-side modification of TCP
- Requires no functionality from network routers and no other protocol changes
- Suitable for applications such as FTP and P2P
- Utilizes excess network bandwidth without significantly perturbing TCP