15-441 Computer Networking Lecture 10 – TCP & Routers.

Computer Networking Lecture 10 – TCP & Routers

Lecture 10: Overview
- TCP & router queuing
- TCP details
- Workloads

Lecture 10: TCP Performance
- Can TCP saturate a link?
- Congestion control
  - Increase utilization until… the link becomes congested
  - React by decreasing the window by 50%
  - Window is proportional to rate * RTT
- Doesn't this mean that the network oscillates between 50% and 100% utilization?
  - Average utilization = 75%?? No… this is *not* right!

Lecture 10: TCP Performance
- In the real world, router queues play an important role
- Window is proportional to rate * RTT
  - But RTT changes along with the window
- Window to fill the link = propagation RTT * bottleneck bandwidth
- If the window is larger, packets sit in the queue on the bottleneck link

Lecture 10: TCP Performance
- If we have a large router queue → can get 100% utilization
  - But router queues can cause large delays
- How big does the queue need to be?
  - Windows vary from W → W/2
  - Must make sure that the link is always full
  - W/2 > RTT * BW
  - W = RTT * BW + Qsize
  - Therefore, Qsize > RTT * BW
  - Ensures 100% utilization
  - Delay? Varies between RTT and 2 * RTT
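As a concrete example of the Qsize > RTT * BW rule of thumb above, the sketch below works through the buffer needed for a 45 Mbps bottleneck with a 100 ms propagation RTT; the link numbers are illustrative, not taken from the slides.

```python
# Buffer sizing sketch: Qsize >= RTT * BW keeps the bottleneck busy
# while the TCP window saws between W and W/2.
# Illustrative numbers (45 Mbps link, 100 ms RTT), not from the slides.

def buffer_for_full_utilization(rtt_s: float, bw_bps: float) -> float:
    """Return the queue size (bytes) needed so the link stays full."""
    return rtt_s * bw_bps / 8          # bandwidth-delay product in bytes

rtt = 0.100            # 100 ms propagation RTT
bw = 45e6              # 45 Mbps bottleneck
q = buffer_for_full_utilization(rtt, bw)
print(f"BDP = {q/1024:.0f} KB, so Qsize >= {q/1024:.0f} KB")
print(f"Total delay then varies between {rtt*1e3:.0f} ms and {2*rtt*1e3:.0f} ms")
```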

Lecture 10: Queuing Disciplines
- Each router must implement some queuing discipline
- Queuing allocates both bandwidth and buffer space:
  - Bandwidth: which packet to serve (transmit) next
  - Buffer space: which packet to drop next (when required)
- Queuing also affects latency

Lecture 10: Typical Internet Queuing
- FIFO + drop-tail
  - Simplest choice
  - Used widely in the Internet
- FIFO (first-in, first-out)
  - Implies a single class of traffic
- Drop-tail
  - Arriving packets get dropped when the queue is full, regardless of flow or importance
- Important distinction:
  - FIFO: scheduling discipline
  - Drop-tail: drop policy

Lecture 10: FIFO + Drop-tail Problems
- Leaves responsibility for congestion control completely to the edges (e.g., TCP)
- Does not distinguish between different flows
- No policing: send more packets → get more service
- Synchronization: end hosts react to the same events

Lecture 10: FIFO + Drop-tail Problems
- Full queues
  - Routers are forced to have large queues to maintain high utilization
  - TCP detects congestion from loss
  - Forces the network to have long-standing queues in steady state
- Lock-out problem
  - Drop-tail routers treat bursty traffic poorly
  - Traffic gets synchronized easily → allows a few flows to monopolize the queue space

Lecture 10: Active Queue Management
- Design active router queue management to aid congestion control
- Why?
  - Router has a unified view of queuing behavior
  - Routers can distinguish between propagation and persistent queuing delays
  - Routers can decide on transient congestion, based on workload

Lecture 10: Design Objectives
- Keep throughput high and delay low
  - High power (throughput/delay)
- Accommodate bursts
  - Queue size should reflect the ability to accept bursts rather than steady-state queuing
- Improve TCP performance with minimal hardware changes

Lecture 10: Lock-out Problem
- Random drop
  - A packet arriving when the queue is full causes some random packet to be dropped
- Drop front
  - On a full queue, drop the packet at the head of the queue
- Random drop and drop front solve the lock-out problem but not the full-queues problem

Lecture 10: Full Queues Problem
- Drop packets before the queue becomes full (early drop)
- Intuition: notify senders of incipient congestion
- Example: early random drop (ERD):
  - If qlen > drop level, drop each new packet with fixed probability p
  - Does not control misbehaving users
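A minimal sketch of the ERD drop decision described above; the threshold and probability values are illustrative placeholders, not from the slides.

```python
import random

# Early random drop (ERD) sketch: once the queue passes a fixed
# threshold, each arriving packet is dropped with a fixed probability.
DROP_LEVEL = 50      # packets (placeholder)
P_DROP = 0.02        # fixed drop probability (placeholder)

def erd_should_drop(qlen: int) -> bool:
    return qlen > DROP_LEVEL and random.random() < P_DROP
```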

Lecture 10: Random Early Detection (RED)
- Detect incipient congestion
  - Assume hosts respond to lost packets
- Avoid window synchronization
  - Randomly mark packets
- Avoid bias against bursty traffic

Lecture 10: RED Algorithm
- Maintain a running average of the queue length
- If avg < min_th, do nothing
  - Low queuing, send packets through
- If avg > max_th, drop the packet
  - Protection from misbehaving sources
- Else, mark the packet with a probability proportional to the average queue length
  - Notify sources of incipient congestion
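Here is a sketch of the RED marking decision as described on this slide, using an EWMA of the queue length; the weight and threshold constants are illustrative and not taken from the Floyd/Jacobson paper.

```python
import random

# RED sketch: EWMA of queue length, then no-action / probabilistic-mark /
# forced-drop regions.  Constants are illustrative placeholders.
W_Q    = 0.002   # EWMA weight for the average queue length
MIN_TH = 5       # packets
MAX_TH = 15      # packets
MAX_P  = 0.1     # marking probability when avg reaches MAX_TH

avg_qlen = 0.0

def red_on_arrival(instant_qlen: int) -> str:
    """Return 'enqueue', 'mark', or 'drop' for an arriving packet."""
    global avg_qlen
    avg_qlen = (1 - W_Q) * avg_qlen + W_Q * instant_qlen
    if avg_qlen < MIN_TH:
        return "enqueue"                       # low queuing: do nothing
    if avg_qlen > MAX_TH:
        return "drop"                          # protect against misbehaving sources
    # Linear ramp between the thresholds: mark proportionally to the average
    p = MAX_P * (avg_qlen - MIN_TH) / (MAX_TH - MIN_TH)
    return "mark" if random.random() < p else "enqueue"
```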

Lecture 10: RED Operation
[Figure: RED drop probability P(drop) vs. average queue length — zero below min_th, rising linearly to max_P at max_th, then jumping to 1.0]

Lecture 10: Overview
- TCP & router queuing
- TCP details
- Workloads

Lecture 10: Observed TCP Problems
- Too many small packets
  - Delayed ACKs
  - Silly window syndrome
  - Nagle's algorithm

Lecture 10: Delayed ACKs
- Problem: in request/response programs, you send separate ACK and data packets for each transaction
- Solution: don't ACK data immediately
  - Wait 200 ms (must be less than 500 ms – why?)
  - Must ACK every other packet
  - Must not delay duplicate ACKs

Lecture 10: TCP ACK Generation [RFC 1122, RFC 2581]
- In-order segment arrival, no gaps, everything else already ACKed → Delayed ACK: wait up to 500 ms for the next segment; if no next segment arrives, send the ACK
- In-order segment arrival, no gaps, one delayed ACK pending → Immediately send a single cumulative ACK
- Out-of-order segment arrival, higher-than-expected seq. # (gap detected) → Send a duplicate ACK indicating the seq. # of the next expected byte
- Arrival of a segment that partially or completely fills a gap → Immediate ACK if the segment starts at the lower end of the gap
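A compact sketch of the receiver ACK policy summarized above (delayed ACK for in-order data, immediate ACK when one is already pending, duplicate ACK on a gap). Out-of-order buffering is omitted, so the "fills a gap" case is not modeled, and the names are illustrative.

```python
# Sketch of the receiver-side ACK generation rules listed above.
# Timer handling is simplified; all names are illustrative.

class Receiver:
    def __init__(self):
        self.rcv_nxt = 0                  # next expected byte
        self.delayed_ack_pending = False

    def on_segment(self, seq: int, length: int) -> str:
        if seq == self.rcv_nxt:           # in-order segment, no gap
            self.rcv_nxt += length
            if self.delayed_ack_pending:  # a delayed ACK was already pending
                self.delayed_ack_pending = False
                return "send one cumulative ACK immediately"
            self.delayed_ack_pending = True
            return "delay ACK (wait up to 500 ms for the next segment)"
        if seq > self.rcv_nxt:            # gap detected: out-of-order arrival
            return f"send duplicate ACK for byte {self.rcv_nxt}"
        return "send immediate ACK"       # old/duplicate data: re-ACK rcv_nxt
```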

Lecture 10: Delayed ACK Impact
- TCP congestion control is triggered by ACKs
  - If you receive half as many ACKs → the window grows half as fast
- Slow start with window = 1
  - Will trigger the delayed ACK timer
  - First exchange will take at least 200 ms
- Start with an initial window > 1
  - Bug in BSD, now a "feature"/standard

Lecture 10: Silly Window Syndrome
- Problem (Clark, 1982): if the receiver advertises small increases in the receive window, the sender may waste time sending lots of small packets
- Solution
  - Receiver must not advertise small window increases
  - Increase the window only by min(MSS, RecvBuffer/2)
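A small sketch of the receiver-side rule on this slide: hold back window advertisements until the window can grow by at least min(MSS, RecvBuffer/2). The buffer and MSS values, and the interface, are assumptions for illustration.

```python
# Receiver-side silly window syndrome avoidance (sketch).
# Advertise a larger window only when the increase is at least
# min(MSS, RecvBuffer/2).  Values below are illustrative.
MSS = 1460                 # bytes
RECV_BUFFER = 64 * 1024    # bytes

def advertised_window(free_space: int, last_advertised: int) -> int:
    threshold = min(MSS, RECV_BUFFER // 2)
    if free_space - last_advertised >= threshold:
        return free_space          # big enough increase: advertise it
    return last_advertised         # otherwise keep the old advertisement
```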

Lecture 10: Nagle's Algorithm
- Small packet problem: don't want to send a 41-byte packet for each keystroke
  - How long to wait for more data?
- Solution: allow only one outstanding small (not full-sized) segment that has not yet been acknowledged
- Can be disabled for interactive applications
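A sketch of the sender-side decision Nagle's algorithm makes, as described above; the interface is hypothetical.

```python
# Nagle's algorithm (sender-side sketch): a full-sized segment may always
# be sent, but a small segment goes out only if nothing unacknowledged is
# outstanding; otherwise the bytes wait and coalesce.
MSS = 1460  # bytes (illustrative)

def nagle_can_send(pending_bytes: int, unacked_bytes: int,
                   nagle_disabled: bool = False) -> bool:
    if nagle_disabled:                 # e.g., disabled for interactive apps
        return True
    if pending_bytes >= MSS:           # a full-sized segment can always go
        return True
    return unacked_bytes == 0          # small segment only if nothing is in flight
```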

Lecture 10: TCP Extensions
- Implemented using TCP options
  - Timestamp
  - Protection from sequence number wraparound
  - Large windows
  - Maximum segment size

Lecture 10: Large Windows
- Delay-bandwidth product for 100 ms delay
  - 1.5 Mbps: 18 KB
  - 10 Mbps: 122 KB
  - 45 Mbps: 549 KB
  - 100 Mbps: 1.2 MB
  - 622 Mbps: 7.4 MB
  - 1.2 Gbps: 14.8 MB
- Already at 10 Mbps the product exceeds the maximum 16-bit window (64 KB)
- Scaling factor on the advertised window
  - Specifies how many bits the window must be shifted to the left
  - Scaling factor exchanged during connection setup
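The sketch below reproduces the delay-bandwidth arithmetic above and derives the smallest window-scale shift each rate would need once the 64 KB limit is exceeded; the link rates are the ones listed on the slide.

```python
# Delay-bandwidth product at 100 ms RTT, and the window-scale shift
# needed to advertise it within TCP's 16-bit window field.
RTT = 0.100                      # seconds
MAX_16BIT_WINDOW = 65_535        # bytes

for name, bps in [("1.5 Mbps", 1.5e6), ("10 Mbps", 10e6), ("45 Mbps", 45e6),
                  ("100 Mbps", 100e6), ("622 Mbps", 622e6), ("1.2 Gbps", 1.2e9)]:
    bdp = bps * RTT / 8          # bytes needed in flight to fill the pipe
    shift = 0
    while (MAX_16BIT_WINDOW << shift) < bdp:   # smallest usable scale factor
        shift += 1
    print(f"{name}: BDP = {bdp/1024:.0f} KB, window scale needed = {shift}")
```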

Lecture 10: Window Scaling: Example Use of Options
- "Large window" option (RFC 1323)
  - Negotiated by the hosts during connection establishment
  - Option 3 specifies the number of bits by which to shift the value in the 16-bit window field
  - Independently set for the two transmit directions
- The scaling factor specifies the bit shift of the window field in the TCP header
  - A scaling value of 2 translates into a factor of 4
- Old TCP implementations will simply ignore the option
[Figure: definition of the option and its negotiation during the handshake — the SYN proposes a scale of 3, the SYN-ACK accepts 3 and proposes a scale of 2, the ACK accepts 2]
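To make the "scaling value of 2 means a factor of 4" point concrete, here is a tiny illustration of how an advertised window is interpreted once a scale shift has been negotiated; the values are made up.

```python
# Interpreting the 16-bit window field with a negotiated scale shift.
# A shift of 2 means the receiver's real window is the field value << 2.
def effective_window(window_field: int, scale_shift: int) -> int:
    return window_field << scale_shift

print(effective_window(0xFFFF, 0))   # 65535 bytes: the unscaled maximum
print(effective_window(0xFFFF, 2))   # 262140 bytes: shift of 2 = factor of 4
print(effective_window(16_384, 4))   # 262144 bytes: another illustrative pair
```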

Lecture 10: Maximum Segment Size (MSS)
- Exchanged at connection setup
- Typically pick the MTU of the local link
- What all does this affect?
  - Efficiency
  - Congestion control
  - Retransmission
- Path MTU discovery
  - Why should the MTU match the MSS?

Lecture 10: Protection From Wraparound
- Wraparound time vs. link speed (32-bit sequence space)
  - 1.5 Mbps: 6.4 hours
  - 10 Mbps: 57 minutes
  - 45 Mbps: 13 minutes
  - 100 Mbps: 6 minutes
  - 622 Mbps: 55 seconds
  - 1.2 Gbps: 28 seconds
- Why is this a problem? 55 seconds < MSL!
- Use the timestamp option to distinguish sequence number wraparound
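The wraparound times above follow directly from the 32-bit sequence space; this short sketch reproduces the arithmetic for the rates listed on the slide.

```python
# Time to consume the 32-bit sequence space (wraparound) at each line rate.
SEQ_SPACE_BYTES = 2 ** 32

for name, bps in [("1.5 Mbps", 1.5e6), ("10 Mbps", 10e6), ("45 Mbps", 45e6),
                  ("100 Mbps", 100e6), ("622 Mbps", 622e6), ("1.2 Gbps", 1.2e9)]:
    seconds = SEQ_SPACE_BYTES * 8 / bps
    print(f"{name}: wraps after {seconds:.0f} s (~{seconds/60:.1f} min)")
```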

Lecture 10: Overview
- TCP & router queuing
- TCP details
- Workloads

Lecture 10: Changing Workloads
- New applications are changing the way TCP is used
- 1980s Internet
  - Telnet & FTP → long-lived flows
  - Well-behaved end hosts
  - Homogeneous end-host capabilities
  - Simple symmetric routing
- 2000s Internet
  - Web & more Web → large numbers of short transfers
  - Wild west – everyone is playing games to get bandwidth
  - Cell phones and toasters on the Internet
  - Policy routing

Lecture 10: Short Transfers
- Fast retransmission needs at least a window of 4 packets
  - To detect reordering
- Short-transfer performance is limited by slow start → RTT

Lecture 10: Short Transfers
- Start with a larger initial window
  - What is a safe value?
  - TCP already bursts 3 packets into the network during slow start
- Large initial window = min(4*MSS, max(2*MSS, 4380 bytes)) [RFC 2414]
  - Not a standard yet
  - Enables fast retransmission
  - Only used in the initial slow start, not in any subsequent slow start
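A one-liner for the RFC 2414 initial-window formula quoted above, evaluated for a couple of common MSS values (the MSS choices are illustrative).

```python
# RFC 2414 larger initial window: min(4*MSS, max(2*MSS, 4380 bytes)).
def initial_window(mss: int) -> int:
    return min(4 * mss, max(2 * mss, 4380))

for mss in (536, 1460):                       # illustrative MSS values
    iw = initial_window(mss)
    print(f"MSS {mss}: initial window {iw} bytes = {iw // mss} segments")
```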

Lecture 10: Well Behaved vs. Wild West
- How do we ensure hosts/applications do proper congestion control?
- Who can we trust?
  - Only routers that we control
- Can we ask routers to keep track of each flow?
  - Per-flow information at routers tends to be expensive
  - Fair queuing: later in the semester

Lecture 10: TCP Fairness Issues
- Multiple TCP flows sharing the same bottleneck link do not necessarily get the same bandwidth
  - Factors such as round-trip time, small differences in timeouts, and start time affect how bandwidth is shared
  - The bandwidth ratio typically does stabilize
- Users can grab more bandwidth by using parallel flows
  - Each flow gets a share of the bandwidth, so the user gets more bandwidth than users who use only a single flow

Lecture 10: TCP (Summary)
- General loss recovery
  - Stop and wait
  - Selective repeat
- TCP sliding window flow control
- TCP state machine
- TCP loss recovery
  - Timeout-based, RTT estimation
  - Fast retransmit
  - Selective acknowledgements

Lecture 10: TCP (Summary)
- Congestion collapse
  - Definition & causes
- Congestion control
  - Why AIMD?
  - Slow start & congestion avoidance modes
  - ACK clocking
  - Packet conservation
- TCP performance modeling
- TCP interaction with routers/queuing