TCP-LP: A Distributed Algorithm for Low Priority Data Transfer
Aleksandar Kuzmanovic & Edward W. Knightly Rice Networks Group
Motivation Traditional view of service differentiation:
- High priority: real-time service
- Best-effort: everything else
- What’s missing? Low priority: receives only excess bandwidth
- Lower than best-effort!
- Non-interactive apps, bulk downloads
- Speeds up best-effort service
- Enables inference of available bandwidth for resource selection
- Routers could achieve this via a low (strict) priority queue
- Objective: realize low priority via end-point control
- Premise: routers will not help
Applications for Low Priority Service
LP vs. rate-limiting: P2P file sharing
- Often rate limited
- Isolation vs. sharing
LP vs. fair-share: bulk downloads
- Improves my other applications
- Database replication across the Internet
Problem Formulation & Design Objectives
Low-priority service objectives:
- Utilize the “excess/available” capacity (what no other flows are using)
- TCP-transparency (non-intrusiveness)
- Inter-LP-flow fairness (fair share of the available bandwidth)
Origins of the Available Bandwidth
Why is excess bandwidth available when TCP is greedy?
- TCP is imperfect
- Cross-traffic burstiness: delayed ACKs due to reverse traffic free up available bandwidth
- Short-lived flows: the majority of traffic consists of short-lived flows (web browsing), leaving bandwidth gaps between short-lived flows
Illustration of TCP Transparency
- The LP flow utilizes only excess bandwidth
- It does not reduce the throughput of TCP flows
How Is This Different from TCP?
In the presence of TCP cross-traffic:
- TCP achieves fairness
- LP achieves TCP-transparency
Fairness Among LP Flows
Inter-LP fairness is essential for:
- Simultaneous file transfers
- Estimates of available bandwidth
TCP-LP: A Congestion Control Protocol
Key concepts:
- Early congestion indication: one-way delay thresholds
- Modified congestion avoidance policy: less aggressive than TCP
Implication: a sender-side modification of TCP, incrementally deployable and easy to implement
Early Congestion Indication
For transparency, TCP-LP must detect congestion before TCP does
- Idealized objective: buffer threshold indication
- Endpoint inference: one-way delay threshold
- RFC 1323 source-to-destination timestamping
- Synchronized clocks not needed
- Eliminates bias due to reverse traffic
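The delay-threshold inference above can be sketched as follows. This is an illustrative sketch, not the TCP-LP reference code: the class name, the EWMA smoothing gain, and the 0.15 threshold fraction between the minimum and maximum observed delay are assumptions made for the example. Note that the raw receiver-minus-sender timestamp difference includes an unknown fixed clock offset, which is why synchronized clocks are not needed: only the variation in the difference matters.

```python
# Sketch of early congestion inference from one-way delay samples.
# DELTA and GAMMA are illustrative parameters, not the exact constants
# of the TCP-LP implementation.

DELTA = 0.15   # threshold fraction between min and max observed delay
GAMMA = 1 / 8  # EWMA smoothing gain for the delay estimate

class DelayMonitor:
    def __init__(self):
        self.d_min = None     # minimum raw one-way delay seen so far
        self.d_max = None     # maximum raw one-way delay seen so far
        self.smoothed = None  # EWMA of the raw one-way delay

    def on_ack(self, send_ts, recv_ts):
        """Return True if this RFC 1323 timestamp pair signals early congestion."""
        d = recv_ts - send_ts  # one-way delay plus a fixed clock offset
        self.d_min = d if self.d_min is None else min(self.d_min, d)
        self.d_max = d if self.d_max is None else max(self.d_max, d)
        if self.smoothed is None:
            self.smoothed = d
        else:
            self.smoothed = (1 - GAMMA) * self.smoothed + GAMMA * d
        # Congestion is inferred when the smoothed delay crosses a
        # threshold placed DELTA of the way from d_min toward d_max.
        threshold = self.d_min + DELTA * (self.d_max - self.d_min)
        return self.smoothed > threshold
```

Because the fixed clock offset shifts d_min, d_max, and the samples equally, the comparison against the threshold is unaffected by unsynchronized clocks.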
TCP-LP Congestion Avoidance
Objectives: LP-flow fairness and TCP-transparency
- LP-flow fairness: AIMD with early congestion indication
- Transparency: early congestion indication plus an inference phase (“MD” alone is not conservative enough)
- Inference phase goals: infer the cross-traffic; improve dynamic properties
TCP-LP Timeline Illustration
- Send 1 pkt/RTT
- Ensure available bandwidth > 0
TCP-LP Timeline Illustration
- AI phase
- CWND/2 upon early congestion indication
- Inference phase
TCP-LP Timeline Illustration
- 2nd congestion indication => CWND = 1
- Inference phase
TCP-LP Timeline Illustration
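The behavior illustrated across the timeline slides (additive increase; CWND/2 plus an inference phase on the first early congestion indication; CWND = 1 on a second indication during inference; 1 pkt/RTT probing while inferring) can be sketched as a small RTT-driven state machine. The class, the method names, and the inference-phase length are illustrative assumptions for the example; the actual protocol is a sender-side modification of TCP's congestion avoidance.

```python
# Sketch of the TCP-LP congestion avoidance policy from the timeline
# slides. ITT_RTTS (inference-phase length) is an illustrative value.

ITT_RTTS = 3  # how many RTTs the inference phase lasts (assumed)

class TcpLpController:
    def __init__(self):
        self.cwnd = 1.0
        self.inference_rtts_left = 0  # > 0 while in the inference phase

    def in_inference(self):
        return self.inference_rtts_left > 0

    def on_rtt(self, early_congestion):
        """Advance one RTT given whether early congestion was indicated."""
        if self.in_inference():
            if early_congestion:
                # 2nd indication while inferring: back off to minimum.
                self.cwnd = 1.0
            self.inference_rtts_left -= 1
        elif early_congestion:
            # 1st indication: halve CWND ("MD") and enter the inference
            # phase to check that available bandwidth is still > 0.
            self.cwnd = max(1.0, self.cwnd / 2)
            self.inference_rtts_left = ITT_RTTS
        else:
            # No congestion inferred: additive increase, as in TCP.
            self.cwnd += 1.0

    def packets_this_rtt(self):
        # During inference, probe with a single packet per RTT.
        return 1 if self.in_inference() else int(self.cwnd)
```

Reacting to the delay-based indication before any loss occurs, and probing at 1 pkt/RTT after backing off, is what lets the flow yield the bandwidth to TCP while still reclaiming it once the cross-traffic departs.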
Low-Aggregation Regime
Hypothesis: TCP cannot attain 1.5 Mb/s throughput due to reverse cross-traffic
How much capacity remains, and can TCP-LP utilize it?
TCP-LP in Action
[Figure: throughput (Kb/s) of TCP alone vs. TCP running alongside TCP-LP]
TCP-LP is invisible to TCP traffic!
High-Aggregation Regime with Short-Lived Flows
Bulk FTP flow using TCP-LP vs. TCP:
- Explore delay improvement to web traffic
- Explore throughput penalty to the FTP/TCP-LP flow
TCP Background Bulk Data Transfer
Web response times are normalized
TCP-LP Background Bulk Data Transfer
- Web response times improved 3-5 times
- FTP throughput: TCP 58.2%, TCP-LP 55.1%
Conclusions
http://www.ece.rice.edu/networks/TCP-LP
- TCP-LP adds a new service to the Internet: a general low-priority service (compared to “best-effort”)
- TCP-LP is easy to deploy and use: a sender-side modification of TCP, with no changes to routers
- TCP-LP is attractive for many applications: FTP, web updates, overlay networks, P2P
- Significant benefits for best-effort traffic, minimal throughput loss for bulk flows