1
Adaptive Transmission Protocols for the Future Internet
Hari Balakrishnan
MIT Lab for Computer Science
http://www.sds.lcs.mit.edu/~hari
2
Internet Service Model
The Internet is a best-effort network: losses and reordering can occur, and congestion due to overload causes losses
Transmission protocols provide end-to-end data transport
– Loss recovery (if reliability is important)
– Congestion management (to reduce instability)
– Connection setup/teardown
3
Transmission Protocols
User Datagram Protocol (UDP)
– Simple datagram delivery
– Other protocols built on top (e.g., RTP for video)
Transmission Control Protocol (TCP)
– Reliable, in-order byte stream delivery
– Loss recovery & congestion control
TCP is dominant today, and is tuned for:
– Long-running transfers
– Wired links and symmetric topologies
4
Problem #1: The Web!
[Diagram: a client opens multiple reliable streams (r1 … r-n) to a server across the Internet]
Multiple reliable streams; individual objects are small
So what? Far too inefficient! Far too aggressive!
5
Problem #2: Application Heterogeneity
[Diagram: a client exchanges unreliable streams (u1 … u-m) and reliable streams (r1 … r-n) with a server across the Internet]
New applications (e.g., real-time streams)
– The world isn’t only about HTTP or even TCP!
So what? Applications do not adapt to congestion, and long-term Internet stability is threatened
6
Problem #3: Technology Heterogeneity
Tremendous diversity: in-building, campus-area packet radio, metro-area, and regional-area wireless networks (plus asymmetry), with examples such as Cellular Digital Packet Data (CDPD), Metricom Ricochet, Lucent WaveLAN, and IBM Infrared
So what? Awful performance and mobility-related inefficiencies
7
Why is Efficient Transport Hard?
Congestion
Channel errors
Asymmetry
Latency variability
Packet reordering
Mobility
Large and small network “pipes”
8
Solution: Adaptive Transmissions
A framework to adapt to various network conditions
Guiding principle: the end-to-end argument
– Do only the “right” amount inside the network
– Expose important information to applications
Algorithms to adapt to different conditions
Wanted: a grand unified architecture for Internet data transport
9
This Talk
Congestion
Channel errors
Asymmetry
Latency variability
Packet reordering
Mobility
Large and small network “pipes”
10
TCP Overview
[Diagram: sliding-window transfer in which a lost segment is later retransmitted]
Loss recovery
– Timeouts based on mean round-trip time (RTT) and deviation
– Fast retransmissions based on duplicate ACKs
Congestion control
– Window-based algorithm to determine sustainable rate
– Upon congestion, reduce window
– “ACK clocking” sends data smoothly
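The "timeouts based on mean RTT and deviation" bullet refers to the standard retransmission-timer estimator used by TCP. Below is a minimal sketch of that estimator with the usual gains (1/8, 1/4) and a floor on the timeout; the struct and function names, and the sample RTT values, are illustrative rather than taken from any particular implementation.

```c
#include <math.h>
#include <stdio.h>

struct rtt_estimator {
    double srtt;    /* smoothed round-trip time (s) */
    double rttvar;  /* RTT deviation (s)            */
    double rto;     /* retransmission timeout (s)   */
    int    first;   /* no sample seen yet           */
};

static void rtt_update(struct rtt_estimator *e, double sample)
{
    const double alpha = 0.125, beta = 0.25, min_rto = 1.0;

    if (e->first) {
        e->srtt   = sample;
        e->rttvar = sample / 2.0;
        e->first  = 0;
    } else {
        e->rttvar = (1 - beta) * e->rttvar + beta * fabs(e->srtt - sample);
        e->srtt   = (1 - alpha) * e->srtt + alpha * sample;
    }
    e->rto = e->srtt + 4.0 * e->rttvar;   /* mean plus four deviations */
    if (e->rto < min_rto)
        e->rto = min_rto;
}

int main(void)
{
    struct rtt_estimator e = { .first = 1 };
    double samples[] = { 0.20, 0.24, 0.18, 0.60 };   /* made-up RTTs (s) */
    for (int i = 0; i < 4; i++) {
        rtt_update(&e, samples[i]);
        printf("srtt=%.3f  rttvar=%.3f  rto=%.3f\n", e.srtt, e.rttvar, e.rto);
    }
    return 0;
}
```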
11
TCP Dynamics
[Plot: sequence number (bytes) vs. time (s), showing data and ACKs, the window, the RTT, and a fast retransmission triggered by duplicate ACKs]
12
Congestion Management Challenges
Heterogeneous traffic mix
Multiple concurrent streams
Variety of applications and transports
Control algorithms must be stable
Clean separation from other tasks like loss recovery
13
“Solution” #1: Persistent Connections
Put all objects on the same ordered byte stream
While this fixes some of the problems of independent connections, it really is a step in the wrong direction!
1. Far too much coupling between objects
2. Far too application-specific
3. Does not enable application adaptation
14
“Solution” #2: Web Accelerators
Is your Web experience too slow? Chances are, it’s because of pesky TCP congestion control and those annoying timeouts
Web accelerators will greatly speed up your transfers… by just “adjusting” TCP’s congestion control!
Who cares if the Internet is stable or not?
15
“Solution” #3: Integrated TCP Sessions
Independent TCP connections, but shared control parameters [BPS+98, Touch98]
– Shared congestion windows, round-trip estimates
But this approach doesn’t accommodate non-TCP traffic
16
What is the World Heading Toward?
The world won’t be just HTTP
The world won’t be just TCP
Logically different streams (objects) should be kept separate, yet efficient congestion management must be performed
17
What We Really Need…
An integrated approach to end-to-end congestion management for the Internet, using the CM
[Diagram: applications (HTTP, Video1, Video2, Audio) run over transports (TCP1, TCP2, UDP), which all share one Congestion Manager layered above IP]
18
CM: Some Salient Features
Shared learning
– Maintains host- and domain-specific information
Heterogeneous application support
Simple application interfaces to CM
Robust and stable rate control algorithms
Flexible bandwidth apportioning using receiver hints
Enables application adaptation to congestion and changing bandwidth
19
The CM API
A simple but powerful application-to-CM API
Three classes of functions
– Query
– Control
– Application callback
Design principle: Application-Level Framing (ALF)
– Feed information up to the application
– The application decides what to send; the CM tells it how fast
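To make the three classes concrete, here is an illustrative sketch of what such an application-to-CM interface could look like in C. The function names and signatures are assumptions chosen for exposition; they are not presented as the CM's actual API.

```c
/*
 * Illustrative sketch only: names and signatures are assumptions that
 * show the three classes of calls, not the CM's real interface.
 */
#include <stddef.h>
#include <sys/socket.h>

/* Query: ask the CM what it currently knows about the path. */
void cm_query(int cm_handle, double *rate_bps, double *srtt_sec);

/* Control: create/destroy flows and feed back what happened. */
int  cm_open(const struct sockaddr *peer);          /* join a macroflow          */
void cm_request(int cm_handle);                     /* ask to send one data unit */
void cm_notify(int cm_handle, size_t bytes_sent);   /* report what was sent      */
void cm_update(int cm_handle, size_t bytes_acked,
               int loss_occurred, double rtt_sample);
void cm_close(int cm_handle);

/* Application callback: the CM grants permission to send.  Following
 * ALF, the application decides *what* to send at that instant. */
void cmapp_send(int cm_handle);
```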
20
How the API Works
The CM does not buffer any data; it exposes a request/callback/notify API
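A minimal sender skeleton, reusing the hypothetical names from the sketch above, shows how request, callback, and notify might fit together when the CM holds no application data itself.

```c
/*
 * Hedged sketch of the request/callback/notify flow, reusing the
 * hypothetical names declared above.  The CM never buffers data; it
 * only tells the application when it may transmit.
 */
#include <stddef.h>

extern void cm_request(int cm_handle);
extern void cm_notify(int cm_handle, size_t bytes_sent);

static int handle;                          /* obtained earlier via cm_open() */

static size_t send_most_useful_adu(void)    /* ALF: the app chooses what to   */
{                                           /* send at this very instant      */
    /* ...pick and transmit the currently most valuable data unit... */
    return 1460;                            /* bytes actually put on the wire */
}

void app_has_data(void)
{
    cm_request(handle);                     /* 1. ask for permission to send  */
}

void cmapp_send(int h)                      /* 2. CM callback: “send now”     */
{
    size_t sent = send_most_useful_adu();
    cm_notify(h, sent);                     /* 3. tell the CM what was sent   */
}
```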
21
Preliminary Results
Simulation results show significant improvements in performance predictability
– E.g., TCP with CM reduces timeouts and shares bandwidth well between connections
CM’s internal congestion algorithm is rate-based
– Great platform for experimenting with new control schemes
Experiments with scheduling algorithms planned
Proxy receiver hosts are problematic
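As a hedged illustration of what "rate-based" could mean here (this is not claimed to be the CM's actual internal algorithm), a simple rate-based AIMD controller might adjust a shared sending rate as below; all constants and names are made up for the example.

```c
/*
 * Hedged sketch, not the CM's actual algorithm: a rate-based AIMD
 * controller adjusting the macroflow's permitted sending rate.
 */
struct rate_ctl {
    double rate_bps;      /* current permitted aggregate sending rate  */
    double srtt_sec;      /* smoothed round-trip time of the macroflow */
};

void rate_on_feedback(struct rate_ctl *c, int loss_occurred)
{
    const double pkt_bits = 8.0 * 1460;     /* one full-sized segment */

    if (loss_occurred)
        c->rate_bps *= 0.5;                        /* multiplicative decrease       */
    else
        c->rate_bps += pkt_bits / c->srtt_sec;     /* ~1 packet/RTT additive increase */
}
```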
22
Summary & Status
The CM provides a simple API to make applications adaptive and network-aware
– Enables all traffic to adhere to basic congestion control principles
– Improves performance predictability
– Enables shared state learning
ns-2 experiments in progress
Linux implementation coming soon (including rate-adaptive applications)
23
This Talk
Congestion
Channel errors
Asymmetry
Latency variability
Packet reordering
Mobility
Large and small network “pipes”
24
TCP/Wireless Performance Today
Goal: to bridge the gap between perceived and rated performance
25
Channel Errors
[Diagram: packets dropped by channel errors on the wireless hop look, to the sender, exactly like congestion losses at a router inside the Internet]
Loss ==> Congestion
Burst losses lead to coarse-grained timeouts
Result: low throughput
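A short sketch of TCP's standard response to a loss signal helps explain why mistaking corruption for congestion is so costly: every loss shrinks the window, and a coarse timeout collapses it to one segment. The structure below is illustrative; the constants follow the usual Reno behavior.

```c
/*
 * Illustrative sketch (names are hypothetical) of TCP's standard loss
 * response.  Because any loss is treated as congestion, wireless
 * bit-errors also trigger these window reductions.
 */
enum loss_signal { DUPLICATE_ACKS, TIMEOUT };

struct tcp_state {
    unsigned cwnd;       /* congestion window, in segments  */
    unsigned ssthresh;   /* slow-start threshold, in segments */
};

void on_loss(struct tcp_state *s, enum loss_signal sig)
{
    s->ssthresh = (s->cwnd / 2 > 2) ? s->cwnd / 2 : 2;
    if (sig == DUPLICATE_ACKS)
        s->cwnd = s->ssthresh;   /* fast recovery: halve the window       */
    else
        s->cwnd = 1;             /* coarse timeout: back to one segment   */
}
```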
26
Performance Degradation
2 MB wide-area TCP transfer over a 2 Mbps Lucent WaveLAN link
[Plot: sequence number (bytes) vs. time (s); TCP Reno achieves 280 Kbps, while the best possible TCP with no errors achieves 1.30 Mbps]
27
Conventional Approaches
Link-layer protocols [LC83] and end-to-end ARQ/FEC
– Adverse interactions with the transport layer: timer interactions [DCY93], interactions with fast retransmissions, large end-to-end round-trip time variation
Split connections [YB94, BB95]: a wired connection to the base station plus a separate wireless connection to the mobile host
– Wireless connection need not be TCP
– Hard state at the base station complicates mobility and is vulnerable to failures
– Violates end-to-end semantics
28
Our Solution: Snoop Protocol
Shield the TCP sender from wireless vagaries
– Eliminate adverse interactions between protocol layers
– Congestion control only when congestion occurs
The End-to-End Argument [SRC84]
– Preserve the TCP/IP service model: end-to-end semantics
– Is connection splitting fundamentally important?
Eliminate non-TCP protocol messages
– Is link-layer messaging fundamentally important?
Fixed to mobile: transport-aware link protocol
Mobile to fixed: link-aware transport protocol
29
Snoop Protocol: FH to MH
[Diagram: fixed-host (FH) sender, base station running the snoop agent, and mobile host (MH)]
Snoop agent: an active interposition agent at the base station
– Snoops on TCP segments and ACKs
– Detects losses by duplicate ACKs and timers
– Suppresses duplicate ACKs so they never reach the FH sender
Cross-layer protocol design: snoop agent state is soft
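The bullets above translate naturally into per-packet logic at the base station. The following is a hedged sketch of that logic, assuming hypothetical helper functions for the cache, the forwarding path, and the local retransmission timer; it shows the structure, not the actual Snoop implementation.

```c
/*
 * Hedged sketch of the snoop agent's per-packet processing at the base
 * station.  All of this state is soft: losing it costs performance,
 * never correctness.
 */
#include <stdint.h>

struct pkt_cache;
struct tcp_segment;
struct tcp_ack { uint32_t ackno; };

extern void cache_insert(struct pkt_cache *c, struct tcp_segment *seg);
extern void cache_drop_upto(struct pkt_cache *c, uint32_t ackno);
extern void retransmit_from_cache(struct pkt_cache *c, uint32_t ackno);
extern void forward_to_mobile(struct tcp_segment *seg);
extern void forward_to_fixed_host(struct tcp_ack *ack);
extern void start_local_rexmit_timer(struct tcp_segment *seg);

struct snoop_state {
    struct pkt_cache *cache;   /* unacked segments headed to the MH   */
    uint32_t last_ack;         /* highest cumulative ACK seen so far  */
    int      dup_acks;
};

/* Data from the fixed host toward the mobile host: cache, then forward. */
void snoop_data(struct snoop_state *s, struct tcp_segment *seg)
{
    cache_insert(s->cache, seg);       /* keep a copy for local recovery        */
    start_local_rexmit_timer(seg);     /* local timer, much shorter than the RTO */
    forward_to_mobile(seg);
}

/* ACKs from the mobile host toward the fixed host. */
void snoop_ack(struct snoop_state *s, struct tcp_ack *ack)
{
    if (ack->ackno > s->last_ack) {                 /* new ACK           */
        cache_drop_upto(s->cache, ack->ackno);      /* clean the cache   */
        s->last_ack = ack->ackno;
        s->dup_acks = 0;
        forward_to_fixed_host(ack);
    } else {                                        /* duplicate ACK     */
        if (++s->dup_acks == 1)
            retransmit_from_cache(s->cache, ack->ackno);  /* local retransmission */
        /* Suppress the duplicate ACK: the FH sender never sees the
         * wireless loss, so it never invokes congestion control. */
    }
}
```

Because the cached segments and counters are only soft state, a base-station failure or handoff simply falls back to normal end-to-end TCP recovery.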
30–39
Snoop Protocol: FH to MH (walk-through)
[Animation: the FH sender transmits segments, which the snoop agent caches at the base station and forwards to the mobile host. When a segment is lost on the wireless hop, duplicate ACKs arrive at the base station; the agent retransmits the missing segment from its cache at higher priority and suppresses the duplicate ACKs. New ACKs clean the cache and are forwarded to the FH sender.]
Active soft state agent at base station
Transport-aware reliable link protocol
Preserves end-to-end semantics
40
Snoop Performance Improvement
2 MB wide-area TCP transfer over a 2 Mbps Lucent WaveLAN link
[Plot: sequence number (bytes) vs. time (s); Snoop achieves 1.11 Mbps and TCP Reno 280 Kbps, against a best possible TCP of 1.30 Mbps]
41
Benefits of TCP-Awareness
30–35% improvement for Snoop over LL (a link-layer scheme without duplicate-ACK suppression): under LL the congestion window stays small, although no coarse timeouts occur
Connection bandwidth-delay product = 25 KB
[Plot: congestion window (bytes) vs. time (sec) for LL and Snoop]
Suppressing duplicate acknowledgments and TCP-awareness lead to better utilization of link bandwidth and better performance
42
Snoop Protocol Status
BSD/OS implementation
– Integrated with Daedalus low-latency handoff software
Version 1 released 1996; Version 2 released 1998
In daily production use at Berkeley and UC Santa Cruz
Several hundred downloads
– Ports to Linux, FreeBSD, NetBSD
43
Summary: Wireless Bit-Errors
Problem: wireless corruption mistaken for congestion
Solution: the Snoop Protocol
General lessons
– Lightweight soft-state agent in the network infrastructure: guided by the End-to-End Argument, fully conforming to the IP service model
– Cross-layer protocol design & optimizations: a transport-aware link layer (Snoop agent at the base station) and a link-aware transport layer (Explicit Loss Notification)
44
Conclusions
Efficient data transport is a hard problem: congestion, errors, asymmetry, ...
Adaptive transmission schemes are essential in the future Internet
Architectural components should include
– Congestion Manager (CM)
– Error handlers (e.g., the Snoop protocol)
– (And many other features)
Wanted: a grand unified transmission architecture for resource management and application adaptation