Transport layer UDP/TCP.


Transport Protocols Provide End-to-End Data Transfer Services
- Create a logical communication channel between two communicating parties: a user does not need to worry about how to route packets through the network.
- May provide various transport services: error control (packet dropping, duplication, corruption), in-sequence delivery, message-boundary preservation, out-of-band delivery, flow control, congestion control.
- Examples: UDP and TCP.

Port Numbers Identify a Communicating Party on a Machine
- A communicating party may be an application program such as ftp, telnet, or a web server running on a machine.
- IP addresses alone only let us forward and deliver packets to a machine.
- To further identify which application program should receive a packet, we use port numbers in addition to IP addresses.
- Example: a web server listens on TCP port 80.

(Figure: a communicating party is identified by an IP address plus a port number.)

UDP Protocol
- Connectionless: no connection needs to be set up before sending a UDP packet.
- "Get it or lose it" service: no error, flow, or congestion control; no packet resequencing; preserves message boundaries.
- Primary applications: multimedia streaming applications such as video conferencing and video playback; NFS (because there is no need to set up a connection before sending a request/reply).

TCP Protocol
- Connection-oriented: a TCP connection must be set up before sending a TCP packet. A TCP connection is a full-duplex logical communication channel: the sending and receiving nodes need to be configured, but the routers on the connection path do not. It is not the same as a virtual circuit or a physical circuit.
- Provides a reliable, in-sequence byte-stream delivery service: error, flow, and congestion control; resequences arrived packets; does not preserve message boundaries.
- Applications: used almost everywhere (e.g., www, telnet, ftp).

UDP/TCP Header Formats

Berkeley Sockets Are the Most Popular Transport Primitives
The typical pattern: the client sends a request over a socket, and the server sends back a reply.
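
The request/reply pattern can be sketched with Python's socket module, which wraps the Berkeley sockets API (socket/bind/listen/accept on the server, connect/send/recv on the client). This is a minimal illustrative echo service, not a production server; the host, port choice, and message are arbitrary.

```python
import socket
import threading

def run_echo_server(host="127.0.0.1"):
    """Minimal TCP server: accept one connection, echo one request back."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # socket()
    srv.bind((host, 0))                                      # bind(); port 0 = pick a free port
    srv.listen(1)                                            # listen()
    port = srv.getsockname()[1]

    def serve():
        conn, _addr = srv.accept()                           # accept() the client
        data = conn.recv(1024)                               # read the request
        conn.sendall(data)                                   # send the reply
        conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return port

def echo_client(port, message, host="127.0.0.1"):
    """Client side: connect(), send a request, read the reply."""
    with socket.create_connection((host, port)) as cli:
        cli.sendall(message)
        return cli.recv(1024)

port = run_echo_server()
reply = echo_client(port, b"hello TCP")
```

Note that the three-way handshake and connection teardown discussed next happen implicitly inside `create_connection()` and the socket close.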

TCP Connection Setup and Termination
One 3-way handshake sets up a connection; two 2-way handshakes close a full-duplex connection (each direction is closed separately).

A Small Data Transfer Spends Most of Its Time on Connection Setup and Termination
According to measurement results, 70% of Internet traffic is WWW traffic, and the average web page is only about 4 KB (about three 1500-byte Ethernet packets). Therefore, if the network is not congested, most of a user's waiting time is spent on TCP connection setup and termination: with an RTT of about 200 ms to a U.S. server, (1.5 + 2) * 200 = 700 ms is needed for setup and teardown. The transmission time of the 4 KB page itself is negligible (3 * 1.2 = 3.6 ms on a 10 Mbps Ethernet).
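
The arithmetic on this slide can be packaged as a small helper. The function below is a back-of-the-envelope sketch only: it counts 1.5 RTTs for setup plus 2 RTTs for teardown and ignores queueing, slow start, and ACK traffic.

```python
def transfer_overhead_ms(page_bytes, rtt_ms, link_mbps, mtu_bytes=1500):
    """Return (handshake overhead, transmission time), both in ms, for a
    small fetch: TCP setup (1.5 RTT) + teardown (2 RTT) vs. raw send time."""
    setup_teardown = (1.5 + 2.0) * rtt_ms
    packets = -(-page_bytes // mtu_bytes)                      # ceiling division
    tx = packets * (mtu_bytes * 8) / (link_mbps * 1e6) * 1e3   # seconds -> ms
    return setup_teardown, tx

# The slide's example: a 4 KB page, 200 ms RTT, 10 Mbps Ethernet
overhead, tx = transfer_overhead_ms(4 * 1024, rtt_ms=200, link_mbps=10)
# overhead = 700.0 ms for 3.5 RTTs; tx = 3.6 ms for 3 packets
```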

TCP State Diagram

TCP Uses Sliding Windows to Implement Error, Flow, and Congestion Controls

Using Sliding Windows to Implement Error, Flow, and Congestion Control
- Error control: do not advance the sending sliding window until the expected ACK comes back.
- Flow control: keep the sending sliding window always smaller than the receiving node's buffer size.
- Congestion control: dynamically adjust the size of the sending sliding window based on the bandwidth currently available in the network.
- The instantaneous throughput of a TCP connection is (sending sliding window size / round-trip time).
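
The throughput relation on the last bullet is worth making concrete: one window of data is delivered per round trip, so the window size caps the rate regardless of link speed. A one-line sketch (the 64 KB window and 200 ms RTT are illustrative values):

```python
def tcp_throughput_bps(window_bytes, rtt_s):
    """Instantaneous TCP throughput: one window of data per round-trip time."""
    return window_bytes * 8 / rtt_s

# A 64 KB window over a 200 ms RTT caps throughput at about 2.6 Mbit/s,
# no matter how fast the underlying link is.
rate = tcp_throughput_bps(64 * 1024, 0.2)
```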

TCP Uses a Cumulative Acknowledgement Scheme
- Sending back ACK(n) means that all data packets with sequence numbers up to n have been correctly received. The other option is an individual acknowledgement scheme, in which ACK(n) only means that the data packet with sequence number n was correctly received.
- Advantage: it solves the ACK packet loss problem. We use ACKs to detect data packet losses; should we then use ACKACKs to detect ACK packet losses?
- Disadvantage: if a packet is lost, the ACK number sent back to the sender cannot advance even though newer data packets have been received. The sending window sticks, no new data packets can be sent out, and TCP throughput suffers.
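
The "sticking" behavior can be seen in a tiny model of a cumulative-ACK receiver. For simplicity this sketch numbers whole packets rather than bytes, as TCP actually does, and uses the common convention that ACK(n) names the next packet expected.

```python
def cumulative_ack(received, next_expected=0):
    """Return the cumulative ACK number: the first sequence number with a
    gap before it. Everything below the returned value has arrived."""
    got = set(received)
    while next_expected in got:
        next_expected += 1
    return next_expected

# Packets 0, 1, 2 arrive; packet 3 is lost; 4 and 5 still arrive:
ack = cumulative_ack([0, 1, 2, 4, 5])
# The ACK sticks at 3 even though 4 and 5 were received -- the sender
# cannot advance its window until 3 is retransmitted and acknowledged.
```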

A TCP Connection Has Five Timers
- Retransmit timer: if a data packet is lost, it needs to be resent.
- Persist timer (500 ms): if a window-open update from the receiver is lost, the sender and receiver may deadlock. The sender sends a probe if the connection has been idle for 500 ms.
- Keepalive timer (2 hours): checks whether the other communicating party is still alive. Good for servers that want to release resources held by dead clients.
- 2MSL (maximum segment lifetime) time-wait timer (a few minutes): the closing side may need to resend the final ACK. It also prevents a new connection using the same port/IP address pair from accepting packets meant for a previous one.
- Delayed ACK timer (200 ms): waits in the hope of piggybacking the ACK on outgoing data to save network bandwidth.

RTT Estimation Is Important for Correctly Setting the Retransmit Timer
- If the estimated RTT is too large: when a data packet is lost, we wait too long before resending it, increasing delay and decreasing throughput.
- If the estimated RTT is too small: we may prematurely resend data packets that are not lost but simply have not reached the receiver yet. This wastes network bandwidth and exacerbates the current congestion (note: if a data packet is lost, most likely the network is congested right now and the packet was dropped due to buffer overflow).
- So current TCP RTT estimation is quite conservative: on most OSes, the value used for the retransmit timer cannot be less than 1 second.

Karn's Algorithm Filters Out Invalid RTT Samples
If a data packet is resent and then a corresponding ACK is received, there are two possibilities that the TCP sender cannot distinguish: the ACK may be for the original transmission, or for the retransmission. Karn's algorithm: do not use the measured RTT of a retransmitted data packet to update the RTT estimate.

Exponential Average of RTT Samples and Its Deviation
Given the (k+1)'th RTT sample rtt(k+1), the smoothed RTT estimate and its smoothed deviation are updated as:
  srtt(k+1) = (1 - g) * srtt(k) + g * rtt(k+1),  with g = 0.125
  rttvar(k+1) = (1 - h) * rttvar(k) + h * |rtt(k+1) - srtt(k)|,  with h = 0.25
The value that should be used for the retransmit timer is:
  RTO = srtt + f * rttvar,  with f = 4
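
A direct transcription of these update rules, using the slide's constants g = 0.125, h = 0.25, f = 4. The initial deviation (half the first sample) and the 1-second floor on the timer follow common practice but are assumptions here, not part of the slide.

```python
class RttEstimator:
    """Exponential averages of the RTT and its deviation, Jacobson-style."""

    def __init__(self, first_sample):
        self.srtt = first_sample          # smoothed RTT, seeded with sample 1
        self.rttvar = first_sample / 2    # smoothed deviation (assumed seed)

    def update(self, sample):
        err = sample - self.srtt
        self.srtt += 0.125 * err                         # g = 0.125
        self.rttvar += 0.25 * (abs(err) - self.rttvar)   # h = 0.25

    def rto(self, floor=1.0):
        # Retransmit timer = srtt + f * rttvar, floored at 1 s as on most OSes
        return max(floor, self.srtt + 4 * self.rttvar)   # f = 4

est = RttEstimator(0.2)
for sample in [0.2, 0.25, 0.22, 0.5]:    # seconds; one late outlier
    est.update(sample)
```

Even with the 0.5 s outlier inflating the deviation, the computed timer stays below the 1-second floor, illustrating how conservative the effective retransmit timeout is for typical sub-second RTTs.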

Exponential Retransmit Timeout Backoff
The calculated srtt(k) is used when sending new data packets. For retransmitted data packets, the timeout used is srtt(k) * 2^n if this is the nth time the same packet is resent. This shares the spirit of Ethernet's MAC exponential backoff.

TCP Congestion Control Is Window-Based
- The purpose of congestion control is to decrease a traffic source's sending rate when the network is congested, and to increase it when the network is not.
- TCP uses packet drops as the signal of network congestion.
- Congestion control can be either window-based or rate-based. Window-based: control the maximum number of outstanding data packets per RTT, i.e., the size of the congestion window (cwnd). Rate-based: directly control the sending rate.

TCP's Congestion Window Uses Additive Increase and Multiplicative Decrease
- When there is no congestion (no packet drops), the congestion window (CW) increases by a constant amount per RTT to probe for more available bandwidth. (For TCP the constant is one packet.)
- When a packet drop occurs, the CW halves itself: CW = CW/2.
- Research results show that only AIMD makes the network stable; MIMD, AIAD, and MIAD do not work.
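
The AIMD dynamics produce the familiar sawtooth, which a few lines of simulation make visible. This is a deliberately crude sketch: one step per RTT, drops modeled simply as "window exceeded a fixed capacity", and the window counted in whole packets.

```python
def aimd(rounds, capacity, cwnd=1):
    """Additive-increase/multiplicative-decrease sketch: +1 packet per RTT
    while under capacity; halve the window when a drop is simulated."""
    trace = []
    for _ in range(rounds):
        if cwnd > capacity:          # buffer overflow: a packet is dropped
            cwnd = max(1, cwnd // 2)  # multiplicative decrease
        else:
            cwnd += 1                # additive increase: probe for bandwidth
        trace.append(cwnd)
    return trace

trace = aimd(rounds=20, capacity=8)
# Sawtooth: the window climbs past the capacity, halves, and climbs again.
```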

To Probe Available Bandwidth, TCP Congestion Control Itself Causes Packet Drops
Suppose the bottleneck router's buffer can store N packets, the links on the path of a TCP connection can store 0 packets, and only one greedy TCP connection uses the bottleneck link's bandwidth. The CW then cycles between N/2 and N with one drop per cycle, so the packet drop rate induced by TCP congestion control at the router is
  1 / (N/2 + (N/2 + 1) + (N/2 + 2) + ... + N)

The Packet Drop Rate Increases with the Number of Competing TCP Connections
Using the previous example, suppose now M competing TCP connections share the bottleneck link's bandwidth (and the bottleneck router's buffer space). Let B = N/M. The packet drop rate becomes
  1 / (B/2 + (B/2 + 1) + (B/2 + 2) + ... + B)
This suggests that as the number of TCP users in the Internet increases, the packet drop rates on routers will increase as well if we do not increase the link bandwidth to hold more packets on links.
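
Both slides' formulas reduce to the same computation, parameterized by the number of competing flows. The sketch below assumes, as the slides do, an idealized sawtooth with exactly one drop per cycle and an even split of the buffer among flows.

```python
def tcp_drop_rate(buffer_pkts, flows=1):
    """One drop per sawtooth cycle: the per-flow window climbs from B/2 to B,
    sending cwnd packets per RTT, so drop rate = 1 / (B/2 + (B/2+1) + ... + B)."""
    b = buffer_pkts // flows                 # B = N/M, each flow's buffer share
    sent_per_cycle = sum(range(b // 2, b + 1))
    return 1 / sent_per_cycle

# Same 32-packet buffer: more competing flows means a higher drop rate.
one_flow = tcp_drop_rate(32, flows=1)        # 1 / (16 + 17 + ... + 32) = 1/408
eight_flows = tcp_drop_rate(32, flows=8)     # 1 / (2 + 3 + 4)          = 1/9
```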

TCP Congestion Control's Slow Start and Congestion Avoidance Phases
The goals of congestion control are:
- High link utilization: when the network has available bandwidth, we want to use it immediately. Slow start: double the CW every RTT. Implementation: increase the CW by one packet for each ACK received.
- Low packet drop rate: when the network is stable, traffic sources should adjust their sending rates until the aggregate sending rate matches the available bandwidth. Congestion avoidance: increase the CW by one packet every RTT. Implementation: increase the CW by one packet once a full window of CW packets has been acknowledged.

The slow-start threshold, which separates the slow start and congestion avoidance phases, is set to one half of the CW at the time packets are dropped.
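
The two phases and the threshold rule can be combined into one window trace. This sketch works in whole packets, one step per RTT, and models every drop the same way (halve and resume linearly); it does not distinguish fast recovery from a timeout.

```python
def cwnd_trace(rounds, ssthresh, drop_at=None):
    """Slow start (double per RTT) while cwnd < ssthresh, then congestion
    avoidance (+1 per RTT). On a drop, set ssthresh to half the current CW
    and continue from there."""
    cwnd, trace = 1, []
    for r in range(rounds):
        if drop_at is not None and r == drop_at:
            ssthresh = max(2, cwnd // 2)  # threshold = half the CW at the drop
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start: exponential growth
        else:
            cwnd += 1                     # congestion avoidance: linear growth
        trace.append(cwnd)
    return trace

trace = cwnd_trace(rounds=10, ssthresh=8, drop_at=6)
# Doubles 2, 4, 8; grows 9, 10, 11; drops to 11//2 = 5; then 6, 7, 8.
```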

TCP Uses Self-Clocking to Increase Its Congestion Window and Send Out Its Packets Because a new data packet is sent out only when an ACK is received, the sender naturally paces its packets at the bottleneck link's bandwidth.

TCP Retransmit Timeout (RTO) Will Resend Lost Data Packets If a data packet is dropped, it needs to be retransmitted: the retransmit timer expires if the corresponding ACK does not come back soon. Most operating systems (BSD, Linux, Windows) enforce a 1-second lower bound on the retransmit timer value. If every dropped packet had to be resent via RTO, TCP throughput would be very poor.

TCP Fast Retransmit and Recovery Quickly Resend Lost Packets
- If the sender receives three duplicate ACKs from the receiver, it cuts the CW in half and immediately resends the lost data packet indicated by those duplicate ACKs. This scheme greatly improves a TCP connection's throughput.
- Problem: packet reordering in the network may trigger Fast Retransmit unnecessarily (cutting the CW in half), causing poor throughput.
- Problem: small TCP transfers cannot compete with long ones. A small transfer's CW cannot grow large before the transfer finishes, so it is hard for it to receive 3 duplicate ACKs; most of the time, RTO is needed instead.
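
The trigger condition is simple enough to model directly: count repeats of the same ACK number and fire on the third duplicate. This sketch uses packet-numbered ACKs and only detects the trigger; it does not model the window halving or the recovery phase.

```python
def fast_retransmit_events(acks, threshold=3):
    """Return the ACK numbers that trigger fast retransmit: the sender
    resends segment n after seeing `threshold` duplicate ACKs for n."""
    dup_count, last_ack, events = 0, None, []
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == threshold:   # 3rd duplicate ACK -> resend now
                events.append(ack)
        else:
            last_ack, dup_count = ack, 0
    return events

# Packet 3 is lost; each later arrival makes the receiver re-ACK 3,
# so the sender sees duplicates long before the retransmit timer fires.
events = fast_retransmit_events([1, 2, 3, 3, 3, 3, 7])
```

This also shows why small transfers lose out: a transfer of only two or three packets never generates enough later arrivals to produce three duplicate ACKs.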

(Figure: with Fast Retransmit and Recovery, the CW oscillates between N/2 and N; with a retransmit timeout, each drop idles the connection for more than 1 second.)

Where Does the Time Go When You Download a Web Page?
- Signal propagation delay between your machine and the server (RTT): no way to improve it, short of using proxy servers.
- Packet transmission time on each link: dump your 56 Kbps modem and subscribe to 1.5 Mbps ADSL.
- Queueing delay at each router: download between 2 and 6 am in the early morning.
- TCP connection setup (1.5 RTTs) and termination (2 RTTs): use proxy servers if available.
- TCP slow start and congestion avoidance.
- TCP retransmit timeout (> 1 second) and exponential retransmit timer backoff: you had better press the "reload" button. :-)

The Congestion Collapse Problem of the Internet
- UDP traffic sources do not use any congestion control. The bad news is that multimedia streaming applications, which use UDP to transport audio/video, are becoming more and more popular.
- Even if all traffic sources use TCP, the packet drop rates at routers caused by TCP congestion control increase with the number of TCP users.
- Because the average web page is so small (only 4-8 KB), most small transfers finish before TCP congestion control has a chance to take effect.

The Unfairness Problem of the Internet
- UDP traffic (e.g., RealPlayer) competes with TCP traffic (e.g., FTP), but UDP does not use any congestion control.
- Long TCP transfers (e.g., downloading IE 5.0 from Microsoft) compete with short ones (e.g., downloading a 4 KB web page). TCP Fast Retransmit does not work well for small transfers because their CW cannot grow large.
- A site with a large number of TCP connections competes with a site with a small number. TCP exhibits per-flow fairness: if N TCP connections compete for the available bandwidth, each roughly achieves 1/N of it.

TCP Problems
- Poor performance on lossy wireless links: every packet loss, even one caused by packet corruption, is assumed to be a congestion drop, so TCP congestion control is triggered unnecessarily and performance is unnecessarily poor. The root problem is that TCP cannot distinguish congestion losses from corruption losses; it couples error control and congestion control together.
- TCP's congestion control cannot be used to regulate the sending rate of a UDP packet stream, which is commonly used by multimedia streaming applications such as RealPlayer.
- The generated traffic is too bursty, and bursty traffic is hard to manage because it can cause massive packet drops at routers.