Restricted Slow-Start for TCP
William Allcock 1,2, Sanjay Hegde 3 and Rajkumar Kettimuthu 1,2
1 Argonne National Laboratory, 2 The University of Chicago, 3 California Institute of Technology

Congestion Control Algorithm

Slow-start: the sender window begins at one segment and is incremented by one segment every time an ACK is received. This opens the window exponentially.

Congestion-avoidance: the sender window is incremented by at most one segment per round-trip time (RTT), regardless of the number of ACKs received in that RTT.

The congestion control algorithm starts in the slow-start phase. Whenever congestion is detected, TCP reduces the sender window to half of its value and enters congestion avoidance.

Limitations

This multiplicative decrease on each congestion event is too drastic, and the linear increase of one packet per RTT in the congestion-avoidance phase is too slow, for networks with a large bandwidth-delay product (BDP). Numerous approaches, both loss-based and delay-based, have been formulated to address this limitation, but they focus on the congestion-avoidance phase. The current slow-start procedure can increase the sender window by thousands of segments in a single RTT on a large-BDP network, which can result in thousands of packets being dropped in one RTT. This is often counter-productive for the TCP flow itself and is also hard on the rest of the traffic sharing the congested link. We propose a modification to the slow-start procedure to solve this problem and improve network utilization.

Abstract

A common goal in network protocol research is optimal bandwidth utilization while still being network friendly. The drawback of TCP in networks with large bandwidth-delay products (BDP), due to its Additive Increase Multiplicative Decrease (AIMD) congestion control mechanism, is well known. The congestion control algorithm of TCP has two phases: the slow-start phase and the congestion-avoidance phase.
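The two phases described above can be sketched as a per-ACK window update. This is an illustrative simulation in units of segments, not the poster's kernel code:

```python
# Sketch of standard TCP window evolution: slow-start grows the window
# by one segment per ACK (doubling per RTT), congestion avoidance grows
# it by roughly one segment per RTT, and a congestion event halves it.

def update_cwnd(cwnd, ssthresh, congestion_detected):
    """Return (new_cwnd, new_ssthresh) after one ACK or loss event."""
    if congestion_detected:
        # Multiplicative decrease: halve the window and remember the
        # threshold so slow-start is not re-entered at this size.
        ssthresh = max(cwnd // 2, 2)
        return ssthresh, ssthresh
    if cwnd < ssthresh:
        # Slow-start: +1 segment per ACK.
        cwnd += 1
    else:
        # Congestion avoidance: +1/cwnd per ACK, ~ +1 segment per RTT.
        cwnd += 1.0 / cwnd
    return cwnd, ssthresh
```

On a large-BDP path, `cwnd` can reach thousands of segments while still in the `cwnd < ssthresh` branch, which is exactly the overshoot the poster targets.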
We propose a modification to the slow-start phase of the algorithm to achieve better performance. The restricted slow-start algorithm is a simple sender-side alteration to the TCP congestion window update algorithm.

Restricted Slow-Start

A brief description of the tuning method is as follows:
1. Select proportional control alone.
2. Increase the proportional gain until the point of instability (sustained oscillations) is reached; this is the critical value of gain, K_c.
3. Measure the period of oscillation to obtain the critical time constant, T_c.
Once K_c and T_c are obtained, the PID parameters are calculated as follows: K_p = 0.33 K_c, T_i = 0.5 T_c, and T_d = 0.33 T_c.

Experimental Results

Our scheme is implemented in a Linux kernel, and its performance is evaluated through experiments conducted over a 100 Mbps link between Argonne National Laboratory and Lawrence Berkeley National Laboratory, with an RTT of 60 ms. We use Web100 to get detailed statistics of the TCP state information. Preliminary results show that our scheme achieves a 40% improvement in throughput compared to standard TCP:
- Average throughput achieved with standard TCP: 60 Mbps
- Average throughput achieved with the proposed scheme: 85 Mbps

Background

Congestion occurs when the traffic offered to a communication network exceeds its available transmission capacity. Congestion events, however, do not always correspond to congestion in the network. In some operating systems (for example, Linux), congestion events (send-stalls) are generated by the saturation of soft network components such as buffers and queues in the host. These are resource constraints at the sending host and are not in any way indicative of congestion in the network, yet Linux TCP treats these events the same way it would treat network congestion.

Motivation

The impact of these send-stall events was reflected in the demo that we conducted at iGrid 2002.
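The parameter-calculation step above is a direct arithmetic rule; a minimal sketch, using the poster's stated constants (K_p = 0.33 K_c, T_i = 0.5 T_c, T_d = 0.33 T_c), with K_c and T_c assumed to have been measured as in steps 1-3:

```python
# Derive PID gains from the critical gain K_c and critical period T_c,
# using the tuning constants stated in the poster.

def ziegler_nichols(k_c, t_c):
    """Return (K_p, T_i, T_d) for measured critical gain and period."""
    k_p = 0.33 * k_c   # proportional gain
    t_i = 0.5 * t_c    # integral time constant
    t_d = 0.33 * t_c   # derivative time constant
    return k_p, t_i, t_d
```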
Further analysis revealed that these congestion events (send-stalls) are generated in the slow-start phase rather than in the congestion-avoidance phase. We propose a control-theory approach that appropriately paces the TCP sender during the slow-start phase to avoid the saturation of soft components such as the device queue.

Alternate Solutions

One alternative is to increase the size of these soft components. Deployment of such solutions revealed that a considerable amount of available bandwidth still goes unutilized. Also, increasing the size of the soft components increases memory usage.

Restricted Slow-Start

We use a PID control algorithm to determine the rate of increase during the slow-start phase. In the PID control approach, the gain is calculated using a first-order differential equation, and the controller gains are configurable. Ninety percent of the maximum value of the interface queue (IFQ) size is used as the set point, and the current value of the IFQ is used as the process variable in the controller. The controller compares the process variable (current IFQ) to its set point (90% of the maximum IFQ) and calculates the error E. Based on the error, a few adjustable settings, and its internal structure, the controller calculates an output that determines the new value of the sender window. The PID control law used is

u(t) = K_p * ( E(t) + (1/T_i) * integral from 0 to t of E(tau) dtau + T_d * dE(t)/dt )

We use the Ziegler-Nichols tuning method to calculate the PID parameters (K_p, T_i and T_d).
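The controller just described can be sketched in discrete form. This is an illustrative userspace model under an assumed fixed sampling interval, not the poster's kernel implementation; the class and parameter names are hypothetical:

```python
# Discrete PID controller paced by interface-queue (IFQ) occupancy.
# Set point = 90% of the maximum IFQ size, as described above; the
# output would steer the growth of the sender window in slow-start.

class PIDPacer:
    def __init__(self, k_p, t_i, t_d, max_ifq, dt=1.0):
        self.k_p, self.t_i, self.t_d = k_p, t_i, t_d
        self.setpoint = 0.9 * max_ifq   # 90% of max IFQ size
        self.dt = dt                    # sampling interval (assumed)
        self.integral = 0.0
        self.prev_error = 0.0

    def output(self, current_ifq):
        """Return the controller output for one sampling step."""
        error = self.setpoint - current_ifq      # E(t)
        self.integral += error * self.dt         # integral of E
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.k_p * (error
                           + self.integral / self.t_i
                           + self.t_d * derivative)
```

A positive output (queue below the set point) would allow the window to keep growing; a negative output (queue near saturation) would throttle it before a send-stall occurs.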