TCP-LP: A Distributed Algorithm for Low Priority Data Transfer
Aleksandar Kuzmanovic & Edward W. Knightly
Rice Networks Group
http://www.ece.rice.edu/networks
Motivation
- Traditional view of service differentiation:
  - High priority: real-time service
  - Best-effort: everything else
- What's missing? Low priority (receiving only excess bandwidth)
  - Lower than best-effort!
  - Non-interactive apps, bulk downloads
  - Speeds up best-effort service
  - Inference of available bandwidth for resource selection
- Routers could achieve this via a low (strict) priority queue
- Objective: realize low priority via endpoint control
  - Premise: routers will not help
Applications for Low Priority Service
- LP vs. rate limiting: P2P file sharing
  - Often rate limited
  - Isolation vs. sharing
- LP vs. fair share: bulk downloads
  - Improve my other applications
  - Database replication across the Internet
Problem Formulation & Design Objectives
Low-priority service objectives:
- Utilize the "excess/available" capacity: what no other flows are using
- TCP-transparency (non-intrusiveness)
- Inter-LP flow fairness (fair share of the available bandwidth)
Origins of the Available Bandwidth
Why is excess bandwidth available when TCP is greedy?
- TCP is imperfect: cross-traffic burstiness
  - Delayed ACKs due to reverse traffic free up available bandwidth
- Short-lived flows
  - The majority of traffic consists of short-lived flows (web browsing)
  - Bandwidth gaps between short-lived flows
Illustration of TCP Transparency
- LP flow utilizes only excess bandwidth
- Does not reduce the throughput of TCP flows
How Is This Different from TCP?
In the presence of TCP cross-traffic:
- TCP achieves fairness
- LP achieves TCP-transparency
Fairness Among LP Flows
- Inter-LP fairness is essential for simultaneous file transfers
- Requires consistent estimates of available bandwidth
TCP-LP: A Congestion Control Protocol
Key concepts:
- Early congestion indication: one-way delay thresholds
- Modified congestion avoidance policy: less aggressive than TCP
Implication: sender-side modification of TCP
- Incrementally deployable and easy to implement
Early Congestion Indication
- For transparency, TCP-LP must detect congestion before TCP does
- Idealized objective: buffer threshold indication
- Endpoint inference: one-way delay threshold
  - RFC 1323 source-destination timestamping
  - Synchronized clocks not needed
  - Eliminates bias due to reverse traffic
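The one-way delay threshold above can be sketched as follows. This is an illustrative sketch, not the slides' implementation: the EWMA weight `gamma`, the threshold fraction `delta`, and all names are assumptions.

```python
# Sketch of endpoint early congestion indication from one-way delay samples.
# gamma (EWMA weight) and delta (threshold fraction) are assumed values.

def smoothed_delay(prev_sd, sample, gamma=1/8):
    """Exponentially weighted moving average of one-way delay samples."""
    return (1 - gamma) * prev_sd + gamma * sample

def early_congestion(sd, d_min, d_max, delta=0.15):
    """Signal congestion once the smoothed one-way delay exceeds the
    minimum delay plus a fraction delta of the observed delay range,
    i.e. before queues grow enough to trigger TCP's own loss signal."""
    return sd > d_min + delta * (d_max - d_min)
```

Because only the *difference* between delay samples matters for the threshold test, the sender and receiver clocks need not be synchronized, matching the slide's point about RFC 1323 timestamps.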
TCP-LP Congestion Avoidance
Objectives: LP-flow fairness and TCP-transparency
- LP-flow fairness: AIMD with early congestion indication
- Transparency: early congestion indication; "MD" alone is not conservative enough
- Inference phase goals:
  - Infer the cross-traffic
  - Improve dynamic properties
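The modified congestion avoidance policy can be sketched as a small state machine: halve CWND on the first early congestion indication and enter an inference phase; a second indication during that phase drops CWND to one. The CWND/2 and CWND=1 reactions come from the slides; the class name, the inference-phase duration `itt`, and the time handling are assumptions for illustration.

```python
class TcpLpCwnd:
    """Illustrative sketch of TCP-LP's congestion window policy."""

    def __init__(self, itt=1.0):
        self.cwnd = 1.0            # congestion window, in packets
        self.inference_until = 0.0 # end time of current inference phase
        self.itt = itt             # inference phase duration (assumed)

    def on_ack(self, now):
        # Outside the inference phase, grow additively as TCP does.
        if now >= self.inference_until:
            self.cwnd += 1.0 / self.cwnd

    def on_early_congestion(self, now):
        if now < self.inference_until:
            # 2nd indication within the inference phase: back off fully.
            self.cwnd = 1.0
        else:
            # 1st indication: halve CWND and start inferring cross-traffic.
            self.cwnd = max(1.0, self.cwnd / 2)
            self.inference_until = now + self.itt
```

The inference phase is what makes the backoff more conservative than plain multiplicative decrease: a single persistent TCP competitor will trigger a second indication and push the LP flow all the way down to one packet per RTT.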
TCP-LP Timeline Illustration
- Inference phase: send 1 pkt/RTT; ensure available bandwidth > 0
- AI (additive increase) phase
- CWND/2 upon early congestion indication, then inference phase
- 2nd congestion indication => CWND = 1, then inference phase
Low-Aggregation Regime
- Hypothesis: TCP cannot attain the full 1.5 Mb/s throughput due to reverse cross-traffic
- How much capacity remains, and can TCP-LP utilize it?
TCP-LP in Action
- TCP alone: 745.5 Kb/s
- TCP vs. TCP-LP: 739.5 Kb/s (TCP), 109.5 Kb/s (TCP-LP)
- TCP-LP is invisible to TCP traffic!
High-Aggregation Regime with Short-Lived Flows
- Bulk FTP flow using TCP-LP vs. TCP
- Explore delay improvement to web traffic
- Explore throughput penalty to the FTP/TCP-LP flow
TCP Background Bulk Data Transfer
- Web response times are normalized
TCP-LP Background Bulk Data Transfer
- Web response times improved 3-5 times
- FTP throughput: TCP 58.2%, TCP-LP 55.1%
Conclusions
http://www.ece.rice.edu/networks/TCP-LP
- TCP-LP adds a new service to the Internet: a general low-priority service (compared to "best-effort")
- TCP-LP is easy to deploy and use: sender-side modification of TCP, without changes to routers
- TCP-LP is attractive for many applications: FTP, web updates, overlay networks, P2P
- Significant benefits for best-effort traffic, minimal throughput loss for bulk flows