High Performance TCP/IP Networking, Hassan-Jain (Prentice Hall). Chapter 13: TCP Implementation.

Objectives
 Understand the structure of a typical TCP implementation
 Outline the implementation of extended standards for TCP over high-performance networks
 Understand the sources of end-system overhead in typical TCP implementations, and techniques to minimize them
 Quantify the effect of end-system overhead and buffering on TCP performance
 Understand the role of Remote Direct Memory Access (RDMA) extensions for high-performance IP networking

Contents
 Overview of TCP implementation
 High-performance TCP
 End-system overhead
 Copy avoidance
 TCP offload

Implementation Overview

Overall Structure (RFC 793)
 Internal structure specified in RFC 793
 Fig. 13.1

Data Structures of a TCP Endpoint
 Transmission control block: stores the connection state and related variables
 Transmit queue: buffers containing outstanding data
 Receive queue: buffers for received data not yet forwarded to the higher layer
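The endpoint state above can be sketched as a plain record. The field names follow RFC 793 conventions (SND.UNA, SND.NXT, and so on), but the structure itself is illustrative, not the book's code:

```python
from dataclasses import dataclass, field

@dataclass
class TransmissionControlBlock:
    """Per-connection state in the spirit of RFC 793 (illustrative sketch)."""
    state: str = "CLOSED"   # position in the connection state machine
    snd_una: int = 0        # oldest unacknowledged sequence number
    snd_nxt: int = 0        # next sequence number to send
    snd_wnd: int = 0        # send window (peer's advertised window)
    rcv_nxt: int = 0        # next sequence number expected from the peer
    rcv_wnd: int = 0        # receive window we advertise
    transmit_queue: list = field(default_factory=list)  # outstanding data
    receive_queue: list = field(default_factory=list)   # data not yet read by the user

tcb = TransmissionControlBlock(state="ESTABLISHED", snd_una=100, snd_nxt=150)
# Bytes in flight = data sent but not yet acknowledged.
in_flight = tcb.snd_nxt - tcb.snd_una
```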

Buffering and Data Movement
 Buffer queues reside in the protocol-independent socket layer within the operating system kernel
 The TCP sender upcalls to the transmit queue to obtain data
 The TCP receiver notifies the receive queue of the correct arrival of incoming data
 BSD-derived kernels implement buffers as chains of mbufs
 Move data by reference, reducing the need to copy
 Most implementations commit buffer space to the queues lazily
 Queues consume memory only when the bandwidth of the network does not match the rate at which the TCP user produces or consumes data
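The move-by-reference idea behind mbufs can be illustrated in miniature: a view onto an existing buffer is queued instead of a fresh copy of the bytes. The queue and function names here are invented for the sketch, not BSD's:

```python
# Passing buffered data by reference, in the spirit of BSD mbufs:
# a memoryview hands another queue a window onto the same underlying
# buffer rather than a fresh copy of the bytes.
payload = bytearray(b"segment data from the wire")

def enqueue_without_copy(queue, buf, offset, length):
    # Store a reference (a view), not a copy of the bytes.
    queue.append(memoryview(buf)[offset:offset + length])

receive_queue = []
enqueue_without_copy(receive_queue, payload, 0, 7)
view = receive_queue[0]

before = bytes(view)           # the view reads b'segment'
payload[0:7] = b"SEGMENT"      # a change to the buffer is visible through the view
after = bytes(view)            # the same view now reads b'SEGMENT'
```

Because the queue holds only references, "moving" data between protocol layers costs a pointer, not a byte-by-byte copy.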

User Memory Access
 Provides for movement of data to and from the memory of the TCP user
 Copy semantics
 SEND and RECEIVE are defined with copy semantics
 The user can modify a send buffer at the time the SEND is issued
 Direct access
 Allows TCP to access the user's buffers directly
 Bypasses copying of data

TCP Data Exchange
 TCP endpoints cooperate by exchanging segments
 Each segment contains: sequence number seg.seq, segment data length seg.len, status bits, acknowledgment sequence number seg.ack, and advertised receive window size seg.wnd
 Fig. 13.3
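The fields named above can be pulled out of the 20-byte TCP base header with a fixed-format unpack; a minimal sketch (the hand-built header values are made up for illustration):

```python
import struct

def parse_tcp_header(hdr: bytes):
    """Extract the segment fields named above from a 20-byte TCP base header.
    Note seg.len is not carried in the TCP header; it is derived from the
    IP total length minus the IP and TCP header lengths."""
    src, dst, seq, ack, off_flags, wnd, cksum, urg = struct.unpack("!HHIIHHHH", hdr[:20])
    return {
        "seg.seq": seq,              # sequence number
        "seg.ack": ack,              # acknowledgment sequence number
        "flags": off_flags & 0x3F,   # status bits (URG/ACK/PSH/RST/SYN/FIN)
        "seg.wnd": wnd,              # advertised receive window
    }

# A hand-built header: seq=1000, ack=2000, data offset 5 words, ACK flag, window=65535.
hdr = struct.pack("!HHIIHHHH", 12345, 80, 1000, 2000, (5 << 12) | 0x10, 65535, 0, 0)
fields = parse_tcp_header(hdr)
```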

Data Retransmissions
 The TCP sender uses a retransmission timer to drive retransmission of unacknowledged data
 Retransmits a segment if the timer fires
 Retransmission timeout (RTO)
 RTO < RTT: aggressive; too many spurious retransmissions
 RTO > RTT: conservative; low utilization because the connection sits idle waiting for the timer
 In practice, an adaptive retransmission timer with back-off is used (specified in RFC 2988)
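The adaptive timer with back-off can be sketched from the RFC 2988 recipe: a smoothed RTT and its variance are updated from each sample, the RTO is derived from both, and each timeout doubles the RTO. The class name and time units (seconds) are choices made for the sketch:

```python
class RtoEstimator:
    """Adaptive retransmission timer in the style of RFC 2988
    (gains alpha=1/8 and beta=1/4, minimum RTO of 1 second)."""
    ALPHA, BETA, MIN_RTO, MAX_RTO = 1 / 8, 1 / 4, 1.0, 60.0

    def __init__(self):
        self.srtt = None   # smoothed round-trip time
        self.rttvar = None # round-trip time variance
        self.rto = 3.0     # initial RTO before any sample, as RFC 2988 recommends

    def on_rtt_sample(self, r):
        if self.srtt is None:
            # First measurement seeds both estimators.
            self.srtt, self.rttvar = r, r / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - r)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * r
        self.rto = min(self.MAX_RTO, max(self.MIN_RTO, self.srtt + 4 * self.rttvar))

    def on_timeout(self):
        # Exponential back-off: each expiry doubles the timeout.
        self.rto = min(self.MAX_RTO, self.rto * 2)

est = RtoEstimator()
est.on_rtt_sample(0.100)  # 100 ms sample; RTO clamps to the 1 s floor
```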

Congestion Control
 A retransmission event indicates to the TCP sender that the network is congested
 Congestion management is a function of the end-systems
 RFC 2581 requires that TCP end-systems respond to congestion by reducing the sending rate
 AIMD: Additive Increase, Multiplicative Decrease
 The TCP sender probes for available bandwidth on the network path
 Upon detecting congestion, the TCP sender multiplicatively reduces cwnd
 Achieves fairness among competing TCP connections
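One step of the AIMD rule can be written down directly. This is the congestion-avoidance view from RFC 2581 in simplified form, ignoring slow start and fast recovery:

```python
def aimd_update(cwnd, mss, event):
    """One AIMD step (simplified congestion-avoidance view, per RFC 2581).
    Additive increase grows cwnd by about one MSS per round trip of acks;
    multiplicative decrease halves cwnd when loss signals congestion."""
    if event == "ack":
        # Per-ack increment MSS*MSS/cwnd sums to roughly +1 MSS per window.
        return cwnd + mss * mss / cwnd
    elif event == "loss":
        # Halve the window, but keep at least two segments outstanding.
        return max(2 * mss, cwnd / 2)
    return cwnd

cwnd = 10 * 1460.0                        # ten full-size segments
cwnd = aimd_update(cwnd, 1460, "loss")    # multiplicative decrease halves it
```

The asymmetry (gentle probing up, sharp cut down) is what lets competing connections converge toward equal shares of the bottleneck.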

High-Performance TCP

TCP with a High Bandwidth-Delay Product
 High bandwidth-delay product paths:
 High-speed networks (e.g. optical networks)
 High-latency networks (e.g. satellite networks)
 Collectively called Long Fat Networks (LFNs)
 LFNs require window sizes larger than the 16-bit field originally defined for TCP allows
 The window scale option allows a TCP endpoint to advertise a large window (up to about 1 GB)
 Negotiated at connection setup
 Scales the advertised window in units of up to 16 KB (shift count of at most 14)
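The scaling itself is just a left shift of the 16-bit header field by the count agreed at connection setup (RFC 1323 caps the count at 14), which is why the effective window stays just under 1 GB:

```python
def effective_window(seg_wnd, shift_count):
    """Effective receive window under the window scale option:
    the 16-bit seg.wnd field is left-shifted by the negotiated count.
    RFC 1323 limits the shift to 14, so the window stays below 2**30."""
    assert 0 <= seg_wnd <= 0xFFFF and 0 <= shift_count <= 14
    return seg_wnd << shift_count

plain = effective_window(65535, 0)    # no scaling: at most 64 KB - 1
scaled = effective_window(65535, 14)  # maximum shift: nearly 1 GB
```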

Round-Trip Time Estimation
 Accuracy of RTT estimation depends on frequent sample measurements of RTT
 The percentage of segments sampled decreases with larger windows
 May be insufficient for LFNs
 Timestamp option
 Enables the sender to compute RTT samples
 Provides a safeguard against accepting segments with old, wrapped sequence numbers
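Both uses of the timestamp option reduce to small arithmetic on the carried clock values; a sketch in the spirit of RFC 1323 (the function names and the tick-based clock are choices made here):

```python
def rtt_from_timestamp(now, ts_ecr):
    """RTT sample from the timestamp option: the echoed TSecr is the
    sender's clock value from the segment being acknowledged, so every
    valid ack yields a sample regardless of window size."""
    return now - ts_ecr

def paws_accept(ts_val, ts_recent, modulus=2**32):
    """PAWS-style check: accept only segments whose timestamp is not older
    than the last one seen, guarding against old duplicates that would
    otherwise be mistaken for new data after sequence numbers wrap."""
    return ((ts_val - ts_recent) % modulus) < modulus // 2

sample = rtt_from_timestamp(now=1_000_120, ts_ecr=1_000_000)  # 120 clock ticks
```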

Path MTU Discovery
 TCP is most efficient when it uses the largest MSS that can be carried without fragmentation
 Path MTU discovery enables the TCP sender to discover the largest acceptable MSS automatically
 The TCP implementation must correctly handle dynamic changes to the MSS
 Never leaves more than 2*MSS bytes of data unacknowledged
 The TCP sender may need to re-segment data for retransmission

End-System Overhead

Reducing End-System Overhead
 TCP imposes processing overhead in the operating system
 Adds directly to latency
 Consumes a significant share of CPU cycles and memory
 Reducing this overhead can improve application throughput

Relationship Between Bandwidth and CPU Utilization

Achievable Throughput for Host-Limited Systems

Sources of Overhead for TCP/IP
 Per-transfer overhead
 Per-packet overhead
 Per-byte overhead
 Fig. 13.5

Per-Packet Overhead
 Increasing packet size can mitigate the impact of per-packet and per-segment overhead (Fig. )
 Increasing the segment size S increases the achievable bandwidth
 As the packet size grows, the effect of per-packet overhead becomes less significant
 Interrupts are a significant source of per-packet overhead
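The amortization argument can be made concrete with a simple cost model. The model and the cost constants below are assumptions for illustration, not figures from the chapter: each segment costs a fixed per-packet term plus a per-byte term, so throughput is S divided by the total cost and approaches the per-byte limit as S grows:

```python
def achievable_bandwidth(seg_size, per_packet_cost, per_byte_cost):
    """Illustrative host-overhead model (assumed, not from the text):
    sending one segment of S bytes costs c_packet + c_byte * S seconds,
    so throughput = S / (c_packet + c_byte * S) bytes per second,
    which rises with S toward the asymptote 1 / c_byte."""
    return seg_size / (per_packet_cost + per_byte_cost * seg_size)

# With an assumed 50 us fixed cost per packet and 10 ns per byte:
small = achievable_bandwidth(1460, 50e-6, 10e-9)   # standard Ethernet MSS
large = achievable_bandwidth(9000, 50e-6, 10e-9)   # jumbo frames amortize the fixed cost
```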

Relationship between Packet Size and Achievable Bandwidth

Relationship between Packet Overhead and Bandwidth

Checksum Overhead
 A source of per-byte overhead
 Ways to reduce checksum overhead:
 Complete multiple steps in a single traversal of the data to reduce per-byte overhead
 Integrate checksumming with the data copy
 Compute the checksum in hardware
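The copy-integration idea can be sketched directly: one loop both copies the data and accumulates the Internet checksum (the one's-complement 16-bit sum of RFC 1071), so the bytes are traversed once instead of twice:

```python
def copy_with_checksum(src: bytes):
    """Single-traversal copy plus Internet checksum (RFC 1071 style):
    each 16-bit word is copied and added in the same pass, folding the
    two per-byte operations into one traversal of the data."""
    data = src + b"\x00" if len(src) % 2 else src  # pad odd length for 16-bit sums
    dst = bytearray(len(data))
    total = 0
    for i in range(0, len(data), 2):
        dst[i:i + 2] = data[i:i + 2]               # the copy happens here...
        total += (data[i] << 8) | data[i + 1]      # ...and the sum in the same step
    while total >> 16:                             # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return bytes(dst[:len(src)]), (~total) & 0xFFFF

copied, cksum = copy_with_checksum(b"\x45\x00\x00\x3c")
```

In a kernel the same trick is applied where the copy is already unavoidable (for example between user and kernel buffers), making the checksum nearly free.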

Copy Avoidance

Copy Avoidance for High-Performance TCP
 Page remapping
 Uses virtual memory to reduce copying across the TCP/user interface
 Typically resides at the socket layer in the OS kernel
 Scatter/gather I/O
 Does not require copy semantics
 Entails a comprehensive restructuring of OS and I/O interfaces
 Remote Direct Memory Access (RDMA)
 Steers incoming data directly into user-specified buffers
 IETF standardization under way
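Scatter/gather I/O is visible even at the POSIX system-call layer: a gather write hands the kernel a list of separate buffers in one call, so the application never copies them into one contiguous region first. A minimal sketch using `os.writev` over a pipe (Unix-only; the buffer contents are invented for the example):

```python
import os

# Gather write: a protocol header and a payload live in separate buffers,
# and os.writev submits both to the kernel in a single call, with no
# user-level copy to make them contiguous.
header = b"HDR:"
payload = b"application bytes"

r, w = os.pipe()
written = os.writev(w, [header, payload])  # two buffers, one system call
os.close(w)
received = os.read(r, 1024)
os.close(r)
```

The scatter direction (`os.readv`) is symmetric: the kernel fills a list of caller-supplied buffers from a single read.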

TCP Offload

TCP Offload
 Supports TCP/IP protocol functions directly on the network adapter (NIC)
 Protocol processing
 TCP checksum offloading
 Significantly reduces per-packet overheads for TCP/IP protocol processing
 Helps to avoid expensive copy operations