The TIME-WAIT State in TCP and its Effect on Busy Servers
Theodore Faber, University of Southern California
Xindian Long

Outline
- The TIME-WAIT state and its function
- Performance problems on busy web servers
- Solution: move TIME-WAIT to the clients
  - Endpoint negotiation
  - TCP and HTTP modifications
- Experiments
- Conclusion

Delayed Packet Problem

TIME-WAIT State
- New connections between the same address/port pair are blocked by holding a TIME-WAIT state at one endpoint for 2*MSL (twice the maximum segment lifetime), so delayed packets from the old connection cannot be accepted by a new one
- The end that performs the active close holds the TIME-WAIT state

TIME-WAIT State

TIME-WAIT State in the Server
- The server actively closes the connection, so it maintains the TIME-WAIT state
- The server sends the data and uses TCP connection closure as the end-of-transaction marker
  - Simple protocol, fast response
  - Otherwise the client must know the content length, or an extra in-band end-of-transaction marker must be masked and restored in the data
- HTTP and FTP servers use TCP connection closure as an unambiguous end-of-transaction marker
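
As a concrete illustration of this pattern (a minimal sketch, not the paper's code), the server below answers each request and then calls close() first, so every completed transaction leaves a TIME-WAIT TCB on the server. The port number and canned response are arbitrary.

```c
/* Sketch: HTTP/1.0-style responder that signals end-of-transaction by
 * closing the connection.  Because the server calls close() first (the
 * active close), its end of every connection lands in TIME-WAIT. */
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                  /* example port */
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 16);

    for (;;) {
        int cfd = accept(lfd, NULL, NULL);
        if (cfd < 0)
            continue;
        char req[4096];
        (void)read(cfd, req, sizeof(req));        /* read (and ignore) the request */
        const char *resp =
            "HTTP/1.0 200 OK\r\n"
            "Content-Type: text/plain\r\n"
            "\r\n"
            "hello\n";
        write(cfd, resp, strlen(resp));
        close(cfd);   /* active close: this socket now sits in TIME-WAIT for 2*MSL */
    }
}
```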

Performance Problems on a Busy Server
- Too many connections in the TIME-WAIT state
  - Each TCB consumes memory
  - Extra TCBs slow lookups for active connections
- A shorter MSL weakens the protection TIME-WAIT provides
- Solutions such as persistent connections are not enough
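
The slowdown comes from demultiplexing: incoming segments are matched against the list of TCBs (the worst-case experiment later exploits this), so stale TIME-WAIT entries add to every lookup. A simplified sketch of such a linear lookup, with invented structure and function names, is shown below.

```c
/* Simplified sketch (assumption: a BSD-style linear PCB list) of why extra
 * TIME-WAIT TCBs slow active connections: every incoming segment walks the
 * list, and entries that exist only to hold TIME-WAIT state sit in front of
 * the TCBs that are actually carrying data. */
#include <stddef.h>
#include <stdint.h>

struct tcb {
    uint32_t laddr, faddr;    /* local / foreign IP address */
    uint16_t lport, fport;    /* local / foreign port */
    int      state;           /* e.g. ESTABLISHED, TIME_WAIT, ... */
    struct tcb *next;
};

static struct tcb *tcb_lookup(struct tcb *head,
                              uint32_t laddr, uint16_t lport,
                              uint32_t faddr, uint16_t fport)
{
    /* Cost grows linearly with the number of TCBs, TIME-WAIT ones included. */
    for (struct tcb *t = head; t != NULL; t = t->next)
        if (t->laddr == laddr && t->lport == lport &&
            t->faddr == faddr && t->fport == fport)
            return t;
    return NULL;
}
```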

Solution: TIME-WAIT at the Clients
- Blocking at one end of the connection is enough
- Move the TIME-WAIT state to the client
  - Endpoint negotiation
  - TCP modification
  - HTTP modification
- Scales better than persistent connections

TIME-WAIT Negotiation: non-simultaneous connection establishment
- Client -> Server: SYN carrying a TW-Negotiate option
- Server -> Client: SYN, ACK with TW-Negotiate set to the server's choice
- Client -> Server: ACK with TW-Negotiate set to the same value if the choice is acceptable, or RST if it is not
- If the server does not respond with the TW-Negotiate option, current TCP semantics apply
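
The slides give no wire format for TW-Negotiate, so the sketch below is purely illustrative: a possible option layout (using an experimental TCP option kind) carrying the address of the end that will hold TIME-WAIT, and the client's accept-or-reset decision on the third step of the handshake.

```c
/* Hypothetical TW-Negotiate layout and client decision; field sizes, the
 * option kind, and the function name are assumptions, not the paper's code. */
#include <stdbool.h>
#include <stdint.h>

struct tw_negotiate_opt {
    uint8_t  kind;        /* e.g. 253, an experimental option kind (assumed) */
    uint8_t  length;      /* total option length in bytes */
    uint32_t holder_ip;   /* IPv4 address of the end that will hold TIME-WAIT */
};

/* Third step of the non-simultaneous handshake: echo the server's choice in
 * the final ACK if it is acceptable, otherwise abort the open with an RST. */
static bool client_acks_choice(uint32_t server_choice, uint32_t client_ip,
                               bool willing_to_hold_tw)
{
    if (server_choice == client_ip && !willing_to_hold_tw)
        return false;     /* not acceptable: respond with RST */
    return true;          /* acceptable: ACK with TW-Negotiate = server_choice */
}
```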

TIME-WAIT Negotiation: simultaneous connection establishment
Both endpoints send SYN (TW-Negotiate) and enter SYN-RCVD; the value to use is resolved as follows:

Values known                         Value to use
Either SYN contains no option        No option (current TCP semantics)
Both carry the same IP address       That IP address
They carry different IP addresses    The larger IP address
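
The resolution rule in the table above is small enough to write out directly; the representation here (IPv4 addresses in host byte order, with 0 standing for "no option") is an assumption made for illustration.

```c
/* Tie-break for simultaneous opens: returns the address that will hold
 * TIME-WAIT, or 0 if either SYN carried no TW-Negotiate option (fall back
 * to current TCP semantics). */
#include <stdint.h>

static uint32_t tw_negotiate_resolve(uint32_t opt1, uint32_t opt2)
{
    if (opt1 == 0 || opt2 == 0)
        return 0;                        /* either contains no option */
    if (opt1 == opt2)
        return opt1;                     /* same IP address: use it */
    return (opt1 > opt2) ? opt1 : opt2;  /* different: the larger address */
}
```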

TIME-WAIT Negotiation: Advantages and Disadvantages
Advantages
- Makes the post-connection memory requirement explicit
- Transparent to applications
- No hidden performance penalty
Disadvantages
- Significant change to the TCP state machine
  - Information from connection establishment affects the closure
  - Negotiating at closure instead would reduce the endpoints' control over their resources; negotiating at establishment lets either endpoint decline the connection if the overhead is unacceptable, so only clients willing to accept the overhead are moved to TIME-WAIT
- Significant programming and testing effort
- Correctness proofs based on the TCP state machine would be invalidated

TCP-Level Solution
- Modify the clients' TCP implementation
  - The client sends an RST packet to the server, removing the server's TIME-WAIT state
  - The client puts itself into the TIME-WAIT state instead
- (Sequence diagram: the server does the active close and enters TIME-WAIT after the final ACK; the client's RST moves the server to CLOSED, and the client holds TIME-WAIT)
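
The paper's change lives inside the client's TCP stack, so no portable socket call reproduces it exactly. The closest userspace approximation, shown only to illustrate the idea, is SO_LINGER with a zero timeout: close() then aborts the connection with an RST, so neither end is left in TIME-WAIT. Unlike the modified stack, this skips the graceful FIN handshake, so it is a sketch of the effect rather than of the mechanism.

```c
/* Sketch: abortive close via SO_LINGER.  This is not the paper's kernel
 * modification (RST after the close handshake, client keeps TIME-WAIT); it
 * only demonstrates that an RST-terminated connection leaves no TIME-WAIT. */
#include <sys/socket.h>
#include <unistd.h>

static int abortive_close(int fd)
{
    struct linger lg = { .l_onoff = 1, .l_linger = 0 };
    if (setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg)) < 0)
        return -1;
    return close(fd);   /* kernel sends RST instead of FIN */
}
```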

Modified TCP State Machine

TCP-Level Solution: Limitations
- Only works with systems that accept an <RST> in the TIME-WAIT state
- Performance is limited by the way the <RST> is processed
- Changes the meaning of the <RST> packet

Application-Level Solution
- Decouple the end-of-connection from the end-of-transaction indication
- HTTP/1.1 modification: use Content-Length to mark the end of a transaction
- New extension request: CLIENT_CLOSE
  - The client opens a connection
  - Sends a series of requests and collects the responses
  - Sends a CLIENT_CLOSE request and closes the connection
  - The server closes its end without responding, so the client does the active close and holds TIME-WAIT

CLIENT_CLOSE Extension vs. the [Connection: close] mechanism
- CLIENT_CLOSE: the server sends the data; the client detects the end of the data, sends a CLIENT_CLOSE request, and closes; the client sends the FIN, so the client does the active close
- Connection: close: the last request carries a Connection: close header; the server sends the data and then closes; the server sends the FIN, so the server does the active close
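
A minimal sketch of a client following the CLIENT_CLOSE exchange above. The request text, the server address, and the treatment of CLIENT_CLOSE as an ordinary request line are assumptions; the slides do not give the exact syntax.

```c
/* Sketch: client issues a request, reads the response delimited by
 * Content-Length, then sends CLIENT_CLOSE and closes first, so the client
 * (not the server) ends up holding the TIME-WAIT state. */
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in srv = {0};
    srv.sin_family = AF_INET;
    srv.sin_port = htons(80);
    inet_pton(AF_INET, "192.0.2.1", &srv.sin_addr);   /* example server */
    if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0)
        return 1;

    const char *req =
        "GET /index.html HTTP/1.1\r\nHost: example.test\r\n\r\n";
    write(fd, req, strlen(req));

    char buf[8192];
    (void)read(fd, buf, sizeof(buf));   /* sketch: a real client parses the
                                           headers and reads Content-Length
                                           bytes of body */

    /* Tell the server we are done, then perform the active close ourselves. */
    const char *bye = "CLIENT_CLOSE / HTTP/1.1\r\nHost: example.test\r\n\r\n";
    write(fd, bye, strlen(bye));
    close(fd);    /* this endpoint now holds the TIME-WAIT state */
    return 0;
}
```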

Application-Level Solution: Properties
- CLIENT_CLOSE notifies the server that the client will close the connection
- Requires changes only on the client side
- Conforms to HTTP/1.1 and requires no change to other protocols
- Creates no new security vulnerability
- Only reduces the TIME-WAIT load generated by HTTP

Experiments
Environment
- Implemented under SunOS 4.1.3
- Evaluated with a custom benchmark program and the WebSTONE benchmark
- 640 Mb/s Myrinet LAN
Three tests
- Demonstration of worst-case server loading
- HTTP load experiment
- TIME-WAIT avoidance and persistent connections

Demonstration of Worst-Case Server Loading
Test configuration
- One server; two clients performing simultaneous bulk transfers
- The server is loaded with TIME-WAIT states by a fourth machine
Constructing a worst-case scenario
- The clients' connections are put at the end of the server's list of TCBs
- Two clients are used to neutralize the stack's simple TCB caching behavior
- Two distinct clients allow bursts from different clients to interleave
This experiment was designed to determine whether TCB load reduces server throughput and whether the modifications alleviate that effect.

Demonstration of Worst-Case Server Loading

HTTP Load Experiments
Two workstations, each running 20 clients; file sizes 9 KB to 5 MB:

System       Throughput (Mb/s)   Conn/s   TCP Mem (KB)
Unmodified        20.97           49.09      722.7
TCP Mods          26.40           62.02       23.1
HTTP Mods         31.73           74.70       23.4

8 clients on 4 workstations, small files:

System       Throughput (Mb/s)   Conn/s   TCP Mem (KB)
Unmodified   Fails
TCP Mods           1.14          223.8       16.1
HTTP Mods                        222.4

TIME-WAIT Avoidance and Persistent Connections
- 2 client machines, 1 server
- HTTP requests issued in bursts of 5; 10 requests per connection
- The TIME-WAIT avoidance methods increase per-connection throughput as the client load increases
(Figure: Throughput vs. number of clients)

TIME-WAIT Avoidance and Persistent Connections
(Figure: Connection rate vs. number of clients)

TIME-WAIT Avoidance and Persistent Connections
(Figure: Memory vs. number of clients)

Conclusion

                                                     TCP with      TCP with    CLIENT_CLOSE
                                                     TIME-WAIT     client      HTTP
                                                     negotiation   <RST>       extension
Reduces TIME-WAIT loading                            Yes           Yes         Yes
Compatible with current protocols                    Yes           Yes         Yes
Changes are effective if only the client
  is modified                                        No            Yes         Yes
Allows system to prevent TIME-WAIT assassination     Yes           No          Yes
No changes to the transport protocol                 No            No          Yes
No changes to application protocols                  Yes           Yes         No
Adds no packet exchanges to the modified protocol    Yes           No          No
TIME-WAIT allocation is a requirement of
  connection establishment                           Yes           No          No