High-speed TCP

Presentation transcript:

High-speed TCP
• FAST TCP: Motivation, Architecture, Algorithms, Performance (by Cheng Jin, David X. Wei and Steven H. Low)
• Modifying TCP's Congestion Control for High Speeds (by S. Floyd, S. Ratnasamy and S. Shenker)
• Scalable TCP: Improving Performance in High-Speed WANs (by Tom Kelly)

Problem with TCP
• Sending rate: T ~ 1.2 / sqrt(p) packets per RTT, where p is the packet loss rate
• Example: 1500-byte packets, 100 ms RTT, 10 Gbps pipe
  - requires a window of W = 83,333 packets
  - tolerates at most 1 drop every ~5,000,000,000 packets
  - i.e., at most 1 drop every ~6000 seconds
  - (from W = sqrt(1.5) / sqrt(p) and N = W^2 / 1.5, where N is the number of packets between drops)
• Realistic drop rates are far higher, so TCP itself becomes the bottleneck, leading to poor network utilization
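As a quick sanity check on these figures, here is a small sketch in plain Python (the link parameters are just the ones from the example above) that derives the required window, the tolerable loss rate, and the time between drops:

```python
# Link parameters from the example above (assumptions for the calculation).
bandwidth_bps = 10e9    # 10 Gbps bottleneck
rtt_s = 0.100           # 100 ms round-trip time
packet_bytes = 1500     # MTU-sized packets

# Window needed to fill the pipe: the bandwidth-delay product in packets.
W = bandwidth_bps * rtt_s / (packet_bytes * 8)

# Steady-state TCP response function W = sqrt(1.5 / p)
# => tolerable loss rate p and packets between drops N = W^2 / 1.5.
p = 1.5 / W**2
N = W**2 / 1.5

# Time between drops when sending W packets per RTT.
drop_interval_s = N / (W / rtt_s)

print(f"required window W       ~ {W:,.0f} packets")
print(f"tolerable loss rate p   ~ {p:.1e}")
print(f"packets between drops N ~ {N:,.0f}")
print(f"time between drops      ~ {drop_interval_s:,.0f} s")
```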

Problem with TCP
• AIMD (congestion avoidance)
  - on each ACK: w = w + a / w (about +a per RTT)
  - on a drop: w = w - b * w
• Slow start
  - on each ACK: w = w + c
• where a = 1, b = 0.5, c = 1
• These constants determine TCP's steady-state response function (derived below)
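For completeness, a short sketch of where the response function T ~ 1.2 / sqrt(p) comes from, using the standard AIMD steady-state argument with the constants above:

```latex
% AIMD steady state: the window oscillates between (1-b)W_max and W_max,
% growing by a per RTT, so a loss epoch lasts b*W_max/a RTTs and carries
% W_max^2 * b(2-b)/(2a) packets; one loss per epoch gives
% p = 2a / (b(2-b) W_max^2).
\[
  T \;=\; \Big(1-\tfrac{b}{2}\Big) W_{\max}
    \;=\; \sqrt{\frac{a\,(2-b)}{2b}}\cdot\frac{1}{\sqrt{p}}
  \qquad\Longrightarrow\qquad
  T\big|_{a=1,\,b=0.5} \;\approx\; \frac{1.22}{\sqrt{p}}
  \ \text{packets per RTT}.
\]
```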

HSTCP: Goals
• Performance
  - sustain high speeds without requiring unrealistically low loss rates
  - reach high speeds reasonably quickly during slow start
  - recover from congestion without huge delays

HSTCP: Goals
• Compatibility
  - deployable without router involvement
  - fair treatment of unmodified TCP (strict fairness is unrealistic)
  - in practice, unmodified TCP gets roughly as much bandwidth as it would on its own, since HSTCP deviates from standard TCP only when the packet loss rate is very small

HSTCP: Approach
• Leave the slow-start phase as it is
  - since slow start doubles the window each RTT, it needs only ceil(log2(83,333)) = 17 RTTs to reach W = 83,333 packets
• Change the response function by tweaking the parameters a and b
  - same as standard TCP for p > P (the threshold loss rate corresponding to W = 31)
  - for smaller p, treat a and b as functions of the current window size

HSTCP: Response function
• Proposed new response function for reaching high speeds:
  w = 10^( S*(log p - log P) + log W ),  with  S = (log W1 - log W) / (log P1 - log P)
  which simplifies to  w = (p / P)^S * W
• This is a log-log linear interpolation through the two points (P, W) and (P1, W1):
  P ~ 1.5 * 10^-3 (the loss rate at which the standard response function gives W = 31), W = 31
  P1 = 10^-7, W1 ~ 83,000
• Result: w ~ 0.15 / p^0.82 (verified in the sketch below)
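A small check in plain Python that the log-log interpolation above reproduces the slide's w ~ 0.15 / p^0.82 (up to rounding); the anchor values P ~ 1.5 * 10^-3 for W = 31 and W1 ~ 83,000 are inferred from the standard response function and the 10 Gbps example, so treat them as assumptions:

```python
import math

# Anchor points (assumed, see lead-in): P is derived from the standard
# response function W = sqrt(1.5 / p) evaluated at W = 31.
W, P = 31, 1.5 / 31**2
W1, P1 = 83_000, 1e-7        # high-speed point from the 10 Gbps example

# Slope of the log-log linear interpolation between the two anchors.
S = (math.log10(W1) - math.log10(W)) / (math.log10(P1) - math.log10(P))

def hstcp_window(p: float) -> float:
    """HSTCP response function w(p) = (p / P)^S * W."""
    return (p / P) ** S * W

# Express w(p) in the form C / p^E and compare with the slide's 0.15 / p^0.82.
E, C = -S, W * P ** (-S)
print(f"S = {S:.3f}  =>  w(p) ~ {C:.2f} / p^{E:.2f}")
print(f"w(1e-7) = {hstcp_window(1e-7):,.0f} packets")
```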

HSTCP: Response function

HSTCP: Fairness

HSTCP: Tweaking of a and b
• For w <= W: a(w) = 1, b(w) = 0.5 (standard TCP behaviour)
• For w > W: choose a(w) and b(w) so that the resulting steady-state response function w(p) passes through the two anchor points, i.e. p(W) = P and p(W1) = P1 (one such construction is sketched below)
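For concreteness, a sketch of one standard way to build such a(w) and b(w), following the RFC 3649-style recipe: b is interpolated linearly in log w between 0.5 at W and an assumed high-speed value B = 0.1 at W1, and a is then back-solved from the AIMD steady-state relation w^2 p = a (2 - b) / (2 b). The constants here are illustrative assumptions, not values taken from the slides:

```python
import math

# Assumed HSTCP parameters (RFC 3649-style; treat as an illustrative choice).
W_LOW, P_LOW = 31, 1.5 / 31**2      # below this window, behave like standard TCP
W_HIGH, P_HIGH = 83_000, 1e-7       # target high-speed operating point
B_HIGH = 0.1                        # decrease factor at W_HIGH (assumed)

S = (math.log(W_HIGH) - math.log(W_LOW)) / (math.log(P_HIGH) - math.log(P_LOW))

def p_of_w(w: float) -> float:
    """Target loss rate at window w, from the HSTCP response function."""
    return P_LOW * (w / W_LOW) ** (1.0 / S)

def b_of_w(w: float) -> float:
    """Decrease factor: 0.5 at W_LOW, B_HIGH at W_HIGH, linear in log(w)."""
    if w <= W_LOW:
        return 0.5
    frac = (math.log(w) - math.log(W_LOW)) / (math.log(W_HIGH) - math.log(W_LOW))
    return 0.5 + frac * (B_HIGH - 0.5)

def a_of_w(w: float) -> float:
    """Increase parameter solving the AIMD steady state w^2 p = a(2-b)/(2b)."""
    if w <= W_LOW:
        return 1.0
    b = b_of_w(w)
    return w**2 * p_of_w(w) * 2.0 * b / (2.0 - b)

for w in (31, 1_000, 10_000, 83_000):
    print(f"w={w:>6}  a(w)={a_of_w(w):6.1f}  b(w)={b_of_w(w):.2f}")
```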

HSTCP: Testing
• Not available

STCP: Scalable TCP
• Same goals
  - more aggressive increase
  - less aggressive decrease
  - fair treatment of unmodified TCP
• Approach (sketched below)
  - on each ACK: W = W + 0.01
  - on a drop: W = W - [0.125 * W]
• Doubles the sending rate in about 70 RTTs (since 1.01^70 ~ 2)
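A minimal sketch of the per-event rules just described, in plain Python (0.01 and 0.125 are Scalable TCP's published constants):

```python
import math

STCP_A = 0.01    # additive increment per ACK
STCP_B = 0.125   # multiplicative decrease per loss event

def on_ack(w: float) -> float:
    """Scalable TCP: fixed increment per ACK, regardless of window size."""
    return w + STCP_A

def on_loss(w: float) -> float:
    """Scalable TCP: cut the window by 12.5% on a loss event."""
    return w - STCP_B * w

# With one window's worth of ACKs per RTT, the window grows by a factor
# (1 + STCP_A) each RTT, so doubling takes log(2) / log(1.01) RTTs:
print(f"RTTs to double: {math.log(2) / math.log(1 + STCP_A):.1f}")   # ~69.7
```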

STCP: Scaling properties
In original TCP the recovery time after a loss depends on the sending rate (figure: sending rate = c vs. sending rate = C, with c < C)

STCP: Scaling properties
In Scalable TCP there is no such dependence; the recovery time is the same at any sending rate (figure: sending rate = c vs. sending rate = C, with c < C)

STCP: Response function
For loss rates above a threshold P (corresponding to a window of W = 15), the native TCP response function is used

STCP: Scaling properties
Environment: 1500-byte packets, 200 ms RTT

  Rate        TCP recovery time    STCP recovery time
  1 Mbps      1.7 s                2.7 s
  10 Mbps     17 s                 2.7 s
  100 Mbps    2 min                2.7 s
  1 Gbps      28 min               2.7 s
  10 Gbps     4 hr 43 min          2.7 s
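These recovery times can be roughly reproduced from first principles: after a loss, standard TCP needs about half a bandwidth-delay product's worth of RTTs to climb back, while Scalable TCP needs log(1/0.875) / log(1.01) ~ 13.4 RTTs at any rate. A small sketch in plain Python, using the packet size and RTT from the slide:

```python
import math

PACKET_BYTES = 1500
RTT_S = 0.200

def tcp_recovery_s(rate_bps: float) -> float:
    """Standard TCP: after halving, the window grows by ~1 packet per RTT,
    so it needs about W/2 RTTs to get back to the full window W (the
    bandwidth-delay product in packets)."""
    w = rate_bps * RTT_S / (PACKET_BYTES * 8)
    return (w / 2) * RTT_S

def stcp_recovery_s() -> float:
    """Scalable TCP: after a 12.5% cut, growing by a factor 1.01 per RTT
    takes log(1/0.875) / log(1.01) RTTs, independent of the rate."""
    rtts = math.log(1 / (1 - 0.125)) / math.log(1.01)
    return rtts * RTT_S

for rate in (1e6, 10e6, 100e6, 1e9, 10e9):
    print(f"{rate / 1e6:>7,.0f} Mbps: TCP {tcp_recovery_s(rate):>9,.1f} s,"
          f"  STCP {stcp_recovery_s():.1f} s")
```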

STCP: Experiments
• Implemented as a modification to the Linux kernel
• Competitors: standard TCP, TCP with gigabit kernel modifications, STCP
• Topology and environment
  - 2.4 GHz Xeon, 2 GB RAM
  - gigabit Ethernet cards x 12

STCP: Experiments
• Experiment #1
  - 4 pairs of hosts exchanging 2 Gb files
  - metric: number of 2 Gb transfers completed in 1200 s

STCP: Experiments
• Experiment #2
  - 3 pairs of web-traffic emulators (1400 users each)
  - 2 pairs of hosts exchanging 2 Gb files
  - metric: concurrent run of 4200 web users and 8 bulk transfers within 1200 s

Problems with TCP
1. Packet level: AIMD provides slow increase and drastic decrease
2. Flow level: maintaining large congestion windows requires a small equilibrium loss probability
3. Packet level: the binary congestion measure leads to oscillation
4. Flow level: the dynamics are unstable; the resulting oscillations can be reduced only by accurate estimation of the packet loss probability and a stable design of the flow dynamics

HSTCP and STCP vs TCP Reno
HSTCP and STCP increase more aggressively and decrease less drastically, so they can tolerate larger loss probabilities than TCP Reno; they therefore achieve larger equilibrium windows, solving problems 1 and 2

TCP Oscillations
• Loss-based approach: full utilization comes with large delays and oscillations
• Delay-based approach: full utilization with a stabilized window, predictable delays, and no oscillations

FAST TCP: Strategy
• Window adjustment depends on the distance from equilibrium
• Use queueing delay as the congestion measure
  - a multi-bit measure eliminates packet-level oscillations
  - the window stabilizes near a point where most of the buffer remains free and the queueing delay is small
  - flow dynamics are stabilized, since queueing-delay dynamics scale with network capacity

FAST TCP: Window adjustment
(Figure) The size of the adjustment depends only on the distance from equilibrium and is independent of where that equilibrium is

FAST TCP: Design
• Feedback model: (equation shown on slide; see the sketch below)
• Flow level: design u(w_i, T_i) and k(w_i, T_i) such that the feedback model above has an equilibrium that is fair, efficient, and stable in the presence of feedback delay
• Packet level: take care of issues ignored at the flow level, such as burstiness control, loss recovery, and parameter estimation
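For context, the kind of flow-level model meant here is a window update driven by the end-to-end queueing delay q_i against a marginal-utility-like target u_i; the following is a sketch of the general form used in the FAST TCP paper, and the exact notation there may differ:

```latex
% General flow-level window dynamics: each source i with RTT T_i adjusts its
% window w_i based on its queueing delay q_i(t); kappa sets the speed of
% adjustment and u the equilibrium target, both possibly depending on (w_i, T_i).
\[
  \dot{w}_i(t) \;=\; \kappa\big(w_i(t), T_i(t)\big)
    \left(1 \;-\; \frac{q_i(t)}{u\big(w_i(t), T_i(t)\big)}\right)
\]
```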

FAST TCP: Architecture
• Data control - decides which packets to transmit
• Window control - decides how many
• Burstiness control - decides when
• Estimation - provides information to the three components above

FAST TCP: Window update
• The update rule (shown on the slide; see the sketch below) uses the following quantities:
  - gamma is in (0, 1]
  - baseRTT is the minimum RTT observed so far
  - qdelay is the average end-to-end queueing delay
  - alpha is the number of packets each flow attempts to maintain in the network buffers at equilibrium; it can be held constant when qdelay is nonzero, giving a linear window increase, while when qdelay is zero the increase is exponential
• The window is updated once every two RTTs
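A minimal sketch of the update rule these parameters belong to, in the form given in the FAST TCP paper (here alpha is treated as a plain constant, and the default gamma is just an illustrative choice):

```python
def fast_window_update(w: float, base_rtt: float, avg_rtt: float,
                       alpha: float, gamma: float = 0.5) -> float:
    """One FAST TCP window update (applied once every two RTTs).

    w        current congestion window (packets)
    base_rtt minimum RTT observed so far (propagation delay estimate)
    avg_rtt  current average RTT (base_rtt + average queueing delay)
    alpha    packets the flow tries to keep queued in the network
    gamma    smoothing gain in (0, 1]
    """
    target = (base_rtt / avg_rtt) * w + alpha
    # Move a fraction gamma of the way toward the target, but never more
    # than double the window in one step.
    return min(2 * w, (1 - gamma) * w + gamma * target)

# Example: no queueing delay yet -> the window grows by gamma * alpha.
print(fast_window_update(w=100, base_rtt=0.1, avg_rtt=0.1, alpha=200))  # 200.0
```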

FAST TCP: Events and computations
• On acknowledgement: estimate qdelay; decide whether to inject packets into the network
• After packet transmission: record a timestamp for each packet; compute the new window size
• At the end of each RTT: compute the target throughput
• On packet loss: decide when to retransmit dropped packets
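As an illustration of the per-ACK estimation step, a small sketch of how qdelay can be derived from RTT samples (the exponential-averaging weight is an illustrative assumption, not the constant used in the real implementation):

```python
class DelayEstimator:
    """Tracks baseRTT and an averaged queueing delay from per-ACK RTT samples."""

    def __init__(self, avg_weight: float = 0.125):
        self.base_rtt = float("inf")   # minimum RTT observed so far
        self.avg_rtt = None            # exponentially averaged RTT
        self.avg_weight = avg_weight   # illustrative smoothing weight

    def on_ack(self, rtt_sample: float) -> float:
        """Feed one RTT sample; return the current queueing-delay estimate."""
        self.base_rtt = min(self.base_rtt, rtt_sample)
        if self.avg_rtt is None:
            self.avg_rtt = rtt_sample
        else:
            self.avg_rtt += self.avg_weight * (rtt_sample - self.avg_rtt)
        return max(0.0, self.avg_rtt - self.base_rtt)   # qdelay = avgRTT - baseRTT

est = DelayEstimator()
for sample in (0.102, 0.100, 0.108, 0.115):
    print(f"qdelay ~ {est.on_ack(sample) * 1000:.1f} ms")
```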

FAST TCP: Performance
• Testbed and instrumentation
  - 2.6 GHz Xeon, 2 GB RAM
  - dual onboard gigabit Ethernet interfaces
  - network bottleneck: 800 Mbps capacity, 2000-packet buffer
• Environment: one static scenario and two types of dynamic scenarios

FAST TCP: Static test X - # of flows, Y - propagation delay, Z – aggregate throughput

FAST TCP: Dynamic test #1 Throughput and window trajectory Queue size, packet losses, link utilization

FAST TCP: Dynamic test #2 Throughput and window trajectory Queue size, packet losses, link utilization

FAST TCP: Overall evaluation - throughput and fairness

FAST TCP: Overall evaluation - stability and responsiveness