Automatic TCP Buffer Tuning


Automatic TCP Buffer Tuning
Jeffrey Semke, Jamshid Mahdavi & Matthew Mathis
Presented By: Heather Heiman
11/24/2018 Cal Poly Network Performance Research Group

Problem
A single host may have multiple connections open at any one time, and each connection may have a different available bandwidth. Maximum transfer rates are often not achieved on these connections. To improve transfer rates, systems are often tuned by hand, but this requires an expert or system administrator.

Problem (cont.)
Even when a system is manually tuned, TCP performance can still suffer, because some connections' buffers will exceed the bandwidth-delay product while others will fall below it. "The bandwidth-delay product is the buffer space required at sender and receiver to obtain maximum throughput on the TCP connection over the path."
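As a concrete illustration of the bandwidth-delay product (the figures below are illustrative assumptions, not measurements from the paper), the BDP is simply the path bandwidth multiplied by the round-trip time:

```python
# Bandwidth-delay product: the buffer space needed to keep the path full.
# Example figures are illustrative, not from the paper.

def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product in bytes (bandwidth is in bits/s)."""
    return int(bandwidth_bps * rtt_seconds / 8)

# A 100 Mb/s path with a 70 ms round-trip time:
print(bdp_bytes(100e6, 0.070))   # 875000 bytes, i.e. ~875 kB
```

A socket buffer much smaller than this value caps throughput well below the path capacity, while one much larger wastes memory without improving throughput.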

Auto-Tuning
Auto-tuning dynamically sizes each connection's socket buffers to track its bandwidth-delay product, based on network conditions and on the system memory available. Before implementing auto-tuning, the following TCP features should be in place:
- TCP Extensions for High Performance (RFC 1323)
- TCP Selective Acknowledgement Options (SACK, RFC 2018)
- Path MTU Discovery

Auto-Tuning Implementation
The receive socket buffer size is simply set to the operating system's maximum socket buffer size. The send socket buffer size is determined by three cooperating algorithms:
1. The first sizes the buffer based on network conditions.
2. The second balances memory usage across connections.
3. The third sets a limit to prevent excessive memory use.
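The three-algorithm structure above can be sketched as follows. This is a minimal illustration of the idea, not the paper's code: the variable names, the 2×cwnd headroom target, and the equal fair-share policy are assumptions.

```python
# Sketch of the three send-buffer sizing algorithms described above.
# Names and constants are illustrative assumptions, not the paper's code.

def tune_sndbuf(cwnd, pool_bytes, n_connections, hard_cap):
    # 1. Network conditions: track the congestion window, with headroom
    #    so the sender is never starved for data to transmit.
    target = 2 * cwnd

    # 2. Memory balancing: no connection may exceed its fair share
    #    of the global socket-buffer memory pool.
    fair_share = pool_bytes // n_connections

    # 3. Hard limit: never exceed the absolute per-socket cap.
    return min(target, fair_share, hard_cap)

# A connection whose cwnd has grown to 64 kB, with a 2 MB pool
# shared by 8 connections and a 1 MB per-socket cap:
print(tune_sndbuf(64 * 1024, 2 * 1024 * 1024, 8, 1024 * 1024))  # 131072
```

Here the connection is congestion-window-limited (2×cwnd = 128 kB is below both the 256 kB fair share and the 1 MB cap), so the buffer tracks the network rather than the memory limits.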

Types of Connections
- default: used the NetBSD 1.2 static default socket buffer size of 16 kB.
- hiperf: hand-tuned for performance, with a static socket buffer size of 400 kB. This was adequate for connections to the remote receiver but overbuffered for local connections.
- auto: used dynamically adjusted socket buffer sizes, following the implementation described in section 2 of the paper.
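The consequence of each static choice follows from the window-limited throughput bound: a connection can move at most one buffer's worth of data per round-trip time. The RTT below is an illustrative assumption, not a measurement from the paper.

```python
# Window-limited throughput bound: at most one send-buffer of data
# per round-trip time. The 70 ms RTT is an illustrative assumption.

def max_throughput_mbps(buffer_bytes, rtt_seconds):
    return buffer_bytes * 8 / rtt_seconds / 1e6

# The 16 kB "default" buffer on a 70 ms wide-area path:
print(round(max_throughput_mbps(16 * 1024, 0.070), 1))   # 1.9 (Mb/s)
# The 400 kB "hiperf" buffer on the same path:
print(round(max_throughput_mbps(400 * 1024, 0.070), 1))  # 46.8 (Mb/s)
```

This is why the default configuration starves wide-area transfers while hiperf's 400 kB, applied statically to every connection, wastes memory on short local paths that need only a few kilobytes.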

Testing Results
Only one connection type was run at any one time, so that the performance and memory usage of each type could be examined in isolation.

Testing Results (cont.)
Concurrent data transfers were run from the sender to both the remote receiver and the local receiver.

Remaining Issues
- In some implementations of TCP, cwnd is allowed to grow even when the connection is not limited by the congestion window, causing the dynamically sized send buffers to expand unnecessarily and waste memory.
- Allowing large windows in TCP could slow the control system's response, because of the long queues of packets they permit.

Conclusion
TCP needs to use resources more efficiently so that connections do not starve other connections of memory. Auto-tuning does not allow a connection to take more than its fair share of bandwidth.