1
FAST TCP. Cheng Jin, David Wei, Steven Low. netlab.CALTECH.edu. GNEW, CERN, March 2004.
2
Acknowledgments
- Caltech: Bunn, Choe, Doyle, Jin, Newman, Ravot, Singh, J. Wang, Wei
- UCLA: Paganini, Z. Wang
- CERN/DataTAG: Martin, Martin-Flatin
- Internet2: Almes, Shalunov
- SLAC: Cottrell, Mount
- Cisco: Aiken, Doraiswami, Yip
- Level(3): Fernes
- LANL: Wu
3
FAST project
- Theory: performance, stability, fairness, TCP/IP, noise, randomness
- Implementation: Linux TCP kernel, other platforms, monitoring, debugging
- Experiment: Abilene, PlanetLab, DummyNet, HEP networks, WAN in Lab, UltraLight testbed
- Deployment: TeraGrid, HEP networks, Abilene, IETF, GGF
- Funding: NSF ITR (2001), NSF STI (2002), NSF RI (2003)
4
Outline
- Experiments: results, future plan
- Status: open issues, code release mid 04
- Unified framework: Reno, FAST, HSTCP, STCP, XCP, …
- Implementation issues
5
FAST TCP vs. Linux TCP (no tuning) on a 1 Gbps path, 180 ms RTT, 1 flow: utilization 95% (FAST) vs. 19% (Linux). Jin, Wei, Ravot, et al. (Caltech, Nov 02). DataTAG: CERN – StarLight – Level3/SLAC.
6
Aggregate throughput, FAST with standard MTU: average utilization from 95% (1 flow) down to 88% (10 flows), with 2, 7, and 9 flows in between; utilization averaged over > 1 hr (runs of 1–6 hr). DataTAG: CERN – StarLight – Level3/SLAC (Jin, Wei, Ravot, et al., SC2002).
7
Dynamic sharing: 3 flows. Panels: FAST vs. Linux. Dynamic sharing on Dummynet: capacity = 800 Mbps, delay = 120 ms, 3 flows, iperf throughput, Linux 2.4.x (HSTCP: UCL).
8
Dynamic sharing: 3 flows. Panels: FAST, Linux, HSTCP, STCP. Steady throughput.
9
Panels: FAST, Linux, HSTCP, STCP, each showing throughput, loss, and queue over 30 min. Dynamic sharing on Dummynet: capacity = 800 Mbps, delay = 120 ms, 14 flows, iperf throughput, Linux 2.4.x (HSTCP: UCL).
10
Panels: FAST, Linux, HSTCP, STCP, each showing throughput, loss, and queue over 30 min. Annotation: room for mice!
11
Aggregate throughput, plotted against ideal performance. Dummynet: capacity = 800 Mbps; delay = 50–200 ms; #flows = 1–14; 29 experiments.
12
Aggregate throughput: small window (800 pkts) vs. large window (8000 pkts). Dummynet: capacity = 800 Mbps; delay = 50–200 ms; #flows = 1–14; 29 experiments.
13
Fairness (Jain’s index): HSTCP ~ Reno. Dummynet: capacity = 800 Mbps; delay = 50–200 ms; #flows = 1–14; 29 experiments.
14
Stability: stable in diverse scenarios. Dummynet: capacity = 800 Mbps; delay = 50–200 ms; #flows = 1–14; 29 experiments.
15
Outline
- Experiments: results, future plan
- Status: open issues, code release
- Unified framework: Reno, FAST, HSTCP, STCP, XCP, …
- Implementation issues
16
Benchmarking TCP
- Not just static throughput: dynamic sharing, what the protocol does to the network, …
- Tests that zoom in on specific properties (throughput, delay, loss, fairness, stability, …): critical for basic design, though the test scenarios may not be realistic
- Tests with realistic scenarios, same performance metrics: critical for refinement toward deployment; just started
- Input solicited: what’s realistic for your applications?
17
Open issues: well understood
- baseRTT estimation: route changes, dynamic sharing; does not upset stability (illustrative sketch below)
- Small network buffers: performs at least like TCP; adapt on a slow timescale, but how?
- TCP-friendliness: friendly at least at small window; tunable, but how to tune?
- Reverse-path congestion: should FAST react? rare for large transfers?
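On the baseRTT bullet: FAST uses the minimum observed RTT as its propagation-delay estimate, and a route change can leave that minimum stale. Below is a minimal sketch of one common mitigation, windowed-minimum tracking; the class name, window length, and API are illustrative assumptions, not the slides' or the FAST code's.

```python
from collections import deque

class BaseRTTEstimator:
    """Track baseRTT as the minimum RTT over a sliding sample window.

    A plain running minimum never forgets, so after a route change to a
    longer path the estimate stays stale forever. Keeping only the last
    `window` samples lets baseRTT adapt on a slow timescale. (Sketch
    only; not the FAST implementation.)
    """

    def __init__(self, window=8192):
        self.samples = deque(maxlen=window)  # recent RTT samples, seconds

    def update(self, rtt_sample):
        self.samples.append(rtt_sample)

    def base_rtt(self):
        # O(n) scan; a monotonic deque would make this O(1) amortized
        return min(self.samples) if self.samples else float("inf")

est = BaseRTTEstimator()
for s in (0.102, 0.100, 0.101):
    est.update(s)
print(est.base_rtt())  # 0.100
```

The tension the slide names remains: too short a window and queueing delay from dynamic sharing pollutes the estimate; too long and route changes go unnoticed.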
18
Status: code release
- Source release mid 2004, for any non-profit purpose
- Re-implementation of FAST TCP completed
- Extensive testing to complete by April 04
- Pre-release trials: CFP for high-performance sites!
- Incorporate into Web100 with Matt Mathis
19
Status: IPR
- Caltech will license royalty-free if FAST TCP becomes an IETF standard
- IPR covers more broadly than TCP
- Leave all options open
20
Outline
- Experiments: results, future plan
- Status: open issues, code release mid 04
- Unified framework: Reno, FAST, HSTCP, STCP, XCP, …
- Implementation issues
21
Packet & flow level: Reno TCP
- Packet level:
  - ACK: W ← W + 1/W
  - Loss: W ← W − 0.5 W
- Flow level: equilibrium and dynamics
  - Equilibrium throughput (Mathis formula): x = (1/T) · sqrt(3/(2p)) pkts/sec, i.e. an average window of about 1.225/sqrt(p) pkts
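The two per-ACK/per-loss rules above pin down Reno's flow-level equilibrium. A quick simulation sketch (the Bernoulli per-ACK loss model and the function name are my assumptions): apply the rules at loss rate p and compare the average window with the Mathis prediction sqrt(3/(2p)) ≈ 1.225/sqrt(p). The random-loss model shifts the constant slightly but keeps the 1/sqrt(p) scaling.

```python
import random

def reno_avg_window(p, acks=2_000_000, seed=1):
    """Average Reno window under the per-ACK rules, Bernoulli loss rate p."""
    rng = random.Random(seed)
    w, total = 10.0, 0.0
    for _ in range(acks):
        if rng.random() < p:
            w = max(1.0, w - 0.5 * w)   # Loss: W <- W - 0.5 W
        else:
            w += 1.0 / w                # ACK:  W <- W + 1/W
        total += w
    return total / acks

p = 0.001
print(reno_avg_window(p))        # simulated average window, roughly 40 pkts
print((3 / (2 * p)) ** 0.5)      # Mathis prediction: 38.7 pkts
```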
22
Reno TCP
- Packet level: designed and implemented first
- Flow level: understood afterwards
- Flow-level dynamics determine: equilibrium (performance, fairness) and stability
- Approach: design equilibrium & stability at the flow level, then implement the flow-level goals at the packet level
23
Reno TCP
- Packet level: designed and implemented first
- Flow level: understood afterwards
- Flow-level dynamics determine: equilibrium (performance, fairness) and stability
- Packet-level design of FAST, HSTCP, STCP, H-TCP, … is guided by flow-level properties
24
Packet level (restated as code below)
- Reno (AIMD with a = 1, b = 0.5): ACK: W ← W + 1/W; Loss: W ← W − 0.5 W
- HSTCP (AIMD with a(w), b(w)): ACK: W ← W + a(w)/W; Loss: W ← W − b(w) W
- STCP (MIMD with a = 0.01, b = 0.125): ACK: W ← W + 0.01; Loss: W ← W − 0.125 W
- FAST: periodic delay-based update (given on the window-adjustment slide below)
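The three loss-based rules above, restated as code (a sketch; the function names are mine, and HSTCP's increase/decrease functions a(w), b(w) are left abstract since their lookup table isn't on the slide):

```python
def reno(w, loss):
    # AIMD(1, 0.5)
    return w - 0.5 * w if loss else w + 1.0 / w

def hstcp(w, loss, a, b):
    # AIMD(a(w), b(w)); a and b grow/shrink with w per HSTCP's tables
    return w - b(w) * w if loss else w + a(w) / w

def stcp(w, loss):
    # MIMD(0.01, 0.125): +0.01 per ACK is about +1% of w per RTT, so the
    # increase is multiplicative at the RTT timescale, like the decrease
    return w - 0.125 * w if loss else w + 0.01

print(reno(38.7, loss=False))  # one ACK step at the Mathis-average window
```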
25
Flow level: Reno, HSTCP, STCP, FAST
- Similar flow-level equilibrium: throughput of the form x = α / (T · p^β) MSS/sec, with α = 1.225 (Reno), 0.120 (HSTCP), 0.075 (STCP)
26
Flow level: Reno, HSTCP, STCP, FAST
- Different gain κ_i and utility function U_i: they determine equilibrium and stability
- Different congestion measure p_i: loss probability (Reno, HSTCP, STCP); queueing delay (Vegas, FAST)
- Common flow-level dynamics: window adjustment = control gain × flow-level goal, i.e. ẇ_i(t) = κ_i(t) · (1 − p_i(t) / U′_i(x_i(t)))
27
FAST TCP
- Reno, HSTCP, and FAST share the common flow-level dynamics: window adjustment = control gain × flow-level goal
- Equation-based: need to estimate the “price” p_i(t); for FAST, p_i(t) = queueing delay, which is easier to estimate at large window
- κ_i(t) and U′_i(t) explicitly designed for: performance, fairness, stability
28
Architecture: each component designed independently, upgraded asynchronously.
29
Architecture: each component designed independently, upgraded asynchronously. Highlighted component: Window Control.
30
Window control algorithm
- Full utilization: regardless of bandwidth-delay product
- Globally stable: exponential convergence
- Intra-protocol fairness: weighted proportional fairness, parameter α (equilibrium sketch below)
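The fairness and utilization claims can be tied together with the known FAST equilibrium (a compressed sketch in the spirit of the cited Infocom paper; the compression is mine): each flow i holds α_i packets in the bottleneck queue, so with common queueing delay q its rate is α_i/q, which is exactly the weighted proportionally fair allocation:

```latex
\[
  x_i = \frac{\alpha_i}{q}
  \qquad\Longleftrightarrow\qquad
  x \ \text{solves}\ \max_{x \ge 0}\ \sum_i \alpha_i \log x_i
  \quad \text{s.t.} \quad \sum_i x_i \le c .
\]
% Weighted proportional fairness with weights alpha_i; and since the
% equilibrium queueing delay q is positive, the bottleneck link is
% fully utilized (sum_i x_i = c) for any bandwidth-delay product.
```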
31
FAST tunes to the knee of the throughput–delay curve: TCP oscillates, FAST stabilizes. Goal: less delay, less jitter.
32
Window adjustment (FAST TCP): periodically,
w ← min{ 2w, (1 − γ) w + γ ((baseRTT / RTT) w + α) }
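A minimal runnable sketch of this update (the parameter values, the toy single-link model, and all names are illustrative assumptions, not from the slides):

```python
def fast_update(w, base_rtt, rtt, alpha=20.0, gamma=0.5):
    """One periodic FAST update: w <- min{2w, (1-g)w + g((baseRTT/RTT)w + a)}."""
    target = (base_rtt / rtt) * w + alpha
    return min(2.0 * w, (1.0 - gamma) * w + gamma * target)

# Toy single-link model: capacity c pkts/s, one flow, so
# rtt = max(base_rtt, w / c) and w - c*base_rtt packets sit in the queue.
c, base_rtt, w = 1000.0, 0.1, 10.0
for _ in range(40):
    rtt = max(base_rtt, w / c)
    w = fast_update(w, base_rtt, rtt)
print(round(w, 1))  # -> 120.0 = BDP (100 pkts) + alpha (20 pkts queued)
```

The fixed point satisfies w · (1 − baseRTT/RTT) = α, i.e. each flow keeps exactly α packets buffered regardless of the bandwidth-delay product, which is where the full-utilization and weighted-fairness claims on the previous slide come from.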
33
FAST TCP: motivation, architecture, algorithms, performance. IEEE Infocom, March 2004.
FAST TCP: from theory to experiments. Submitted for publication, April 2003.
netlab.caltech.edu/FAST
34
Panel 1: Lessons in Grid Networking
35
Metrics
- Performance: throughput, loss, delay, jitter, stability, responsiveness
- Availability, reliability
- Simplicity: application, management
- Evolvability, robustness
36
Constraints
- Scientific community: small & fixed set of major sites; few & large transfers; relatively simple traffic characteristics and quality requirements
- General public: large, dynamic sets of users; diverse traffic characteristics & quality requirements; evolving/unpredictable applications
37
Mechanisms and timescales
- Fiber infrastructure: months – years
- Lightpath configuration: minutes – days
- Resource provisioning: service, sec – hrs
- Traffic engineering, admission control: flow, sec – mins
- Congestion/flow control: RTT, ms – sec
Timescale: desired, instead of feasible. Balance: cost/benefit, simplicity, evolvability.