EECS 122: Introduction to Computer Networks Midterm II Review


EECS 122: Introduction to Computer Networks Midterm II Review Computer Science Division Department of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA 94720-1776

Topics to be Covered Transport Protocols Congestion Control and Congestion Avoidance concepts, TCP variants QoS (Packet Scheduling) and Congestion Control (RED) Mechanisms Fluid Model and Token Bucket method Essential DiffServ and IntServ Concepts P&D: Sections 5.1, 5.2, 6.1–6.3, 6.4.2, 6.5.1–6.5.3 Project #2 Specification


Transport Layer Provide a way to decide which packets go to which applications (multiplexing/demultiplexing) Can provide reliability, in-order delivery, at-most-once delivery Can support messages of arbitrary length Governs when hosts should send data → can implement congestion and flow control

UDP User Datagram Protocol Minimalist transport protocol Same best-effort service model as IP Messages up to 64KB Provides multiplexing/demultiplexing to IP Does not provide flow and congestion control Application examples: video/audio streaming

TCP Transmission Control Protocol Reliable, in-order, and at most once delivery Messages can be of arbitrary length Provides multiplexing/demultiplexing to IP Provides congestion control and avoidance Application examples: file transfer, chat

TCP Service Open connection Reliable byte stream transfer from (IPa, TCP Port1) to (IPb, TCP Port2) Indication if connection fails: Reset Close connection

Flow Control vs. Congestion Control Flow control: regulates how much data a sender can send without overflowing the receiver Congestion control: regulates how much data a sender can send without congesting the network Congestion: packets dropped inside the network due to buffer overflow at routers

Congestion Window (cwnd) Limits how much data can be in transit, i.e., how many unacknowledged bytes the sender may have outstanding AdvertisedWindow is used for flow control MaxWindow = min(cwnd, AdvertisedWindow) EffectiveWindow = MaxWindow – (LastByteSent – LastByteAcked) [Figure: the sender's sliding window over the byte stream, from LastByteAcked to LastByteSent, with sequence numbers increasing to the right]
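The window arithmetic above can be checked with a small sketch (the byte values below are made up for illustration):

```python
def effective_window(cwnd, advertised_window, last_byte_sent, last_byte_acked):
    """How many more bytes the sender may transmit right now (slide's formulas)."""
    max_window = min(cwnd, advertised_window)
    return max_window - (last_byte_sent - last_byte_acked)

# 10 KB cwnd, 8 KB advertised window, 3 KB already in flight:
print(effective_window(10_000, 8_000, 5_000, 2_000))  # 5000
```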

Topics to be Covered Transport Protocols Congestion Control and Congestion Avoidance concepts, TCP variants QoS (Packet Scheduling) and Congestion Control (RED) Mechanisms Fluid Model and Token Bucket method Essential DiffServ and IntServ Concepts

Why Do You Care About Congestion Control? Without it you get congestion collapse How might this happen? Assume the network is congested (a router drops packets) You learn the receiver didn't get the packet, either by ACK or by timeout What do you do? Retransmit the packet The receiver still doesn't get the packet (it is dropped again) Retransmit again... and so on And now assume that everyone is doing the same! The network becomes more and more congested, and with duplicate packets rather than new packets!

Solutions? Increase buffer size? Why not? Larger buffers eventually fill up too, and only add queueing delay Slow down: if you know that your packets are not delivered because of network congestion, slow down Questions: How do you detect network congestion? Use packet loss as the indication! By how much do you slow down?

TCP: Slow Start Goal: discover congestion quickly How? Quickly increase cwnd until the network is congested → get a rough estimate of the optimal size of cwnd Whenever starting traffic on a new connection, or whenever increasing traffic after congestion was experienced: Set cwnd = 1 Each time a segment is acknowledged, increment cwnd by one (cwnd++)

Slow Start Example The congestion window size grows very rapidly TCP slows down the increase of cwnd when cwnd >= ssthresh [Figure: segment/ACK exchange over successive RTTs; cwnd grows 1 → 2 → 4 → 8 as each ACK increments it]
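The per-ACK increment is what makes cwnd double every round trip; a minimal sketch of that growth (window counted in segments, one loss-free RTT per step):

```python
def slow_start_cwnd_per_rtt(rtts):
    """cwnd (in segments) at the start of each RTT under slow start:
    every acknowledged segment increments cwnd by 1, so cwnd doubles per RTT."""
    cwnd = 1
    history = []
    for _ in range(rtts):
        history.append(cwnd)
        cwnd += cwnd  # one ACK arrives per outstanding segment; each does cwnd += 1
    return history

print(slow_start_cwnd_per_rtt(4))  # [1, 2, 4, 8]
```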

Congestion Avoidance Additive increase: starting from the rough estimate, slowly increase cwnd to probe for additional available bandwidth Multiplicative decrease: cut the congestion window size aggressively if a timeout occurs If cwnd > ssthresh, then each time a segment is acknowledged, increment cwnd by 1/cwnd (cwnd += 1/cwnd)

Slow Start/Congestion Avoidance Example Assume that ssthresh = 8 [Figure: cwnd (in segments) vs. round-trip times; cwnd doubles each RTT until it reaches ssthresh = 8, then grows linearly]

Putting Everything Together: TCP Pseudocode Initially: cwnd = 1; ssthresh = infinite; New ACK received: if (cwnd < ssthresh) /* Slow Start */ cwnd = cwnd + 1; else /* Congestion Avoidance */ cwnd = cwnd + 1/cwnd; Timeout: /* Multiplicative decrease */ ssthresh = cwnd/2; cwnd = 1;
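The pseudocode translates almost directly into a toy simulation; the event sequence below is illustrative, and the model tracks nothing but cwnd and ssthresh:

```python
def tcp_tahoe_sim(events):
    """Toy cwnd trace (in segments) for the slide's pseudocode.
    events: sequence of 'ack' / 'timeout' strings; returns cwnd after each event."""
    cwnd, ssthresh = 1.0, float('inf')
    trace = []
    for ev in events:
        if ev == 'ack':
            if cwnd < ssthresh:
                cwnd += 1            # slow start
            else:
                cwnd += 1.0 / cwnd   # congestion avoidance
        elif ev == 'timeout':
            ssthresh = cwnd / 2      # multiplicative decrease
            cwnd = 1.0               # restart from slow start
        trace.append(cwnd)
    return trace

# Four ACKs, a timeout, then three ACKs: slow start, collapse to 1, recovery.
print(tcp_tahoe_sim(['ack'] * 4 + ['timeout'] + ['ack'] * 3))
```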

The Big Picture [Figure: cwnd vs. time; slow start until cwnd reaches ssthresh, congestion avoidance above it; on timeout, ssthresh is halved and cwnd restarts from 1]

Packet Loss Detection Wait for the Retransmission Time Out (RTO) What's the problem with this? The RTO is a performance killer In the BSD TCP implementation, the RTO is usually more than 1 second The granularity of the RTT estimate is 500 ms The retransmission timeout is at least two times the RTT Solution: Don't wait for the RTO to expire

Fast Retransmit Resend a segment after 3 duplicate ACKs Notes: Remember, a duplicate ACK means that an out-of-sequence segment was received Duplicate ACKs can also be due to packet reordering! If the window is small, you don't get enough duplicate ACKs! [Figure: while cwnd grows from 1 to 4, one segment is lost; each later segment triggers another duplicate ACK 4, and after 3 duplicate ACKs the missing segment is resent]

Fast Recovery After a fast retransmit, set cwnd to half its previous value (the new ssthresh), i.e., don't reset cwnd to 1 But when the RTO expires, still set cwnd = 1 Fast Retransmit and Fast Recovery → implemented by TCP Reno; the most widely used version of TCP today

Fast Retransmit and Fast Recovery [Figure: cwnd vs. time; slow start, then congestion avoidance, with fast retransmit halving cwnd instead of a timeout] Retransmit after 3 duplicate ACKs Prevents expensive timeouts No need to slow start again At steady state, cwnd oscillates around the optimal cwnd size
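The difference between the two loss signals can be summarized in a small helper (a simplified sketch of Reno's reaction; it ignores the temporary window inflation real Reno performs during recovery):

```python
def tcp_reno_on_loss(cwnd, ssthresh, three_dup_acks):
    """Reno's reaction to a loss signal (window sizes in segments).
    3 dup ACKs -> fast recovery (halve cwnd); RTO expiry -> full slow-start restart."""
    ssthresh = cwnd / 2
    if three_dup_acks:
        cwnd = ssthresh        # fast recovery: no return to cwnd = 1
    else:
        cwnd = 1               # RTO expired: restart from slow start
    return cwnd, ssthresh

print(tcp_reno_on_loss(16, 8, True))   # (8.0, 8.0)
print(tcp_reno_on_loss(16, 8, False))  # (1, 8.0)
```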

TCP Flavors TCP-Tahoe: cwnd = 1 whenever a drop is detected TCP-Reno: cwnd = 1 on timeout; cwnd = cwnd/2 on duplicate ACKs TCP-NewReno: TCP-Reno + improved fast recovery TCP-Vegas, TCP-SACK

TCP Vegas Improved timeout mechanism Decrease cwnd only for losses sent at the current rate Avoids reducing the rate twice Congestion avoidance phase: Compare the Actual rate (A) to the Expected rate (E) If E − A > β, decrease cwnd linearly If E − A < α, increase cwnd linearly Rate measurements ~ delay measurements See textbook for details!
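A sketch of the Vegas comparison; the thresholds alpha and beta and the RTT values below are illustrative, and real Vegas measures rates once per RTT rather than per call:

```python
def vegas_adjust(cwnd, base_rtt, rtt, alpha=1.0, beta=3.0):
    """One congestion-avoidance step of a simplified TCP Vegas.
    Expected rate E = cwnd/base_rtt, actual rate A = cwnd/rtt; the difference,
    scaled to segments, estimates the backlog queued in the network."""
    expected = cwnd / base_rtt
    actual = cwnd / rtt
    diff = (expected - actual) * base_rtt   # backlog estimate in segments
    if diff > beta:
        return cwnd - 1    # queues building up: back off linearly
    if diff < alpha:
        return cwnd + 1    # spare capacity: probe linearly
    return cwnd            # in the sweet spot: hold steady

print(vegas_adjust(10, base_rtt=0.1, rtt=0.2))  # 9: RTT doubled, back off
print(vegas_adjust(10, base_rtt=0.1, rtt=0.1))  # 11: no queueing, probe up
```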

TCP-SACK SACK = Selective Acknowledgements ACK packets identify exactly which packets have arrived Makes recovery from multiple losses much easier

Topics to be Covered Transport Protocols: UDP, TCP and its many variations Congestion Control and Congestion Avoidance concepts QoS (Packet Scheduling) and Congestion Control (RED) Mechanisms Fluid Model and Token Bucket method Essential DiffServ and IntServ Concepts

Packet Scheduling Decide when and which packet to send on the output link Usually implemented at the output interface of a router [Figure: flows 1..n pass through a classifier into per-flow buffers (buffer management); a scheduler picks which queue transmits next]

Goals of Packet Scheduling Provide per-flow/aggregate QoS guarantees in terms of delay and bandwidth Provide per-flow/aggregate protection A flow/aggregate is identified by a subset of the following fields in the packet header: source/destination IP address (32 bits) source/destination port number (16 bits) protocol type (8 bits) type of service (8 bits) Examples: All packets from machine A to machine B All packets from Berkeley All packets between Berkeley and MIT All TCP packets from EECS-Berkeley

Random Early Detection (RED) Basic premise: The router should signal congestion when the queue first starts building up (by dropping a packet) But the router should give flows time to reduce their sending rates before dropping more packets Therefore, packet drops should be: Early: don't wait for the queue to overflow Random: don't drop all packets in a burst, but space the drops out
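The drop decision is commonly a function of the average queue length; the sketch below uses the standard RED parameters min_th, max_th, and max_p, which the slide does not spell out (gentle RED and the inter-drop count correction are omitted):

```python
def red_drop_prob(avg_q, min_th, max_th, max_p):
    """RED's drop probability vs. the (EWMA-averaged) queue length:
    never drop below min_th, always drop at or above max_th, linear in between."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

print(red_drop_prob(10, min_th=5, max_th=15, max_p=0.1))  # 0.05, halfway between thresholds
```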

RED Advantages High network utilization with low delays The average queue length stays small, but the queue is capable of absorbing large bursts Many refinements to the basic algorithm make it more adaptive (requires less tuning)

Explicit Congestion Notification Rather than dropping packets to signal congestion, the router can send an explicit signal Explicit congestion notification (ECN): Instead of probabilistically dropping the packet, the router sets a bit in the packet header If the data packet has the bit set, then the ACK has its ECN bit set Backward compatibility: a bit in the header indicates whether the host implements ECN Note that not all routers need to implement ECN

ECN Advantages No need to retransmit the packets that would otherwise have been dropped No confusion between congestion losses and corruption losses

Topics to be Covered Transport Protocols: UDP, TCP and its many variations Congestion Control and Congestion Avoidance concepts QoS (Packet Scheduling) and Congestion Control (RED) Mechanisms Fluid Model and Token Bucket method Essential DiffServ and IntServ Concepts

Token Bucket and Arrival Curve Parameters: r – average rate, i.e., the rate at which tokens fill the bucket b – bucket depth R – maximum link capacity or peak rate (optional parameter) A bit is transmitted only when there is an available token Arrival curve – maximum number of bits transmitted within an interval of time of size t [Figure: a token bucket (rate r, depth b) regulating a link of capacity R; the resulting arrival curve has slope R up to b·R/(R−r) bits, then slope r]
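A slot-by-slot sketch of the regulator (a deliberately simplified fluid model: one token per bit, integer time slots, infinite input buffer):

```python
def token_bucket(arrivals, r, b, R=float('inf')):
    """Token-bucket regulator: tokens refill at rate r per slot, the bucket
    holds at most b, and at most min(backlog, tokens, R) bits leave each slot."""
    tokens, backlog, out = b, 0.0, []
    for a in arrivals:
        backlog += a
        send = min(backlog, tokens, R)
        out.append(send)
        backlog -= send
        tokens = min(b, tokens - send + r)  # refill, capped at bucket depth b
    return out

# A burst of 4 bits into an (r=1, b=2) bucket: 2 leave at once, the rest drain at r.
print(token_bucket([4, 0, 0], r=1, b=2))  # [2, 1, 1]
```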

Source Traffic Characterization Arrival curve – maximum amount of bits transmitted during any interval of time Δt Use a token bucket to bound the arrival curve [Figure: a bursty rate (bps) over time and the arrival curve (bits vs. Δt) that bounds it]

Source Traffic Characterization: Example Arrival curve – maximum amount of bits transmitted during any interval of time Δt Use a token bucket (R=2, b=1, r=1) to bound the arrival curve [Figure: the arrival curve rises with slope R=2 until Δt = 1, when the bucket empties, then with slope r=1: 2 bits by Δt = 1, 3 by Δt = 2, 4 by Δt = 3]
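The example's curve is just the minimum of two lines, which can be computed directly:

```python
def arrival_curve(t, r, b, R):
    """Maximum bits an (r, b) token bucket with peak rate R can emit in any
    interval of length t: the lower of the peak-rate line R*t and b + r*t."""
    return min(R * t, b + r * t)

# With (R=2, b=1, r=1) as in the example: slope 2 until t = 1, slope 1 after.
print([arrival_curve(t, r=1, b=1, R=2) for t in (0.5, 1, 2, 3)])  # [1.0, 2, 3, 4]
```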

QoS Guarantees: Per-hop Reservation End-host: specifies the arrival curve characterized by a token bucket with parameters (b, r, R) and the maximum admissible delay D, with no losses Router: allocates bandwidth ra and buffer space Ba such that no packet is dropped no packet experiences a delay larger than D [Figure: the arrival curve (slope R, then slope r after b·R/(R−r) bits) and the service line of slope ra; the horizontal gap between them bounds the delay D, the vertical gap the buffer Ba]

Packet Scheduling and Fair Queueing Make sure that at any time each flow receives at least its allocated rate ra The canonical example of such a scheduler: Weighted Fair Queueing (WFQ) Fair Queueing implements max-min fairness: each flow receives min(ri, f), where ri – flow arrival rate f – link fair rate (see next slide) WFQ associates a weight with each flow

Fluid Flow System: Example [Figure: one link shared by six flows with weights 5, 1, 1, 1, 1, 1] The red flow has packets backlogged between time 0 and 10 Backlogged flow → the flow's queue is not empty The other flows have packets continuously backlogged All packets have the same size

Packet System: Example Select the first packet that finishes in the fluid flow system [Figure: the service shares in the fluid flow system over time 0–10, and the resulting packet-by-packet transmission order in the packet system]

Example: Problem 6.14 (a) Link capacity C = 1; packet size = 1 [Figure: arrival patterns of flows A, B, and C over wall-clock time 1–15; the fluid-system schedule and the packet-system transmission order (A,1; C,1; B,2; A,2; C,2; A,4; B,6; C,3; …) over wall-clock time 1–21]

Solution: Virtual Time Key observation: while the finish times of packets may change when a new packet arrives, the order in which packets finish doesn't! Only the order is important for scheduling Solution: instead of the packet finish time, maintain the number of rounds needed to send the remaining bits of the packet (the virtual finishing time) The virtual finishing time doesn't change when later packets arrive System virtual time – the index of the round in the bit-by-bit round-robin scheme
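In WFQ, each packet's virtual finish time follows from the previous packet of its flow and the system virtual time at arrival; a minimal sketch (the specific numbers are illustrative):

```python
def wfq_finish_time(prev_finish, virtual_arrival, length, weight):
    """Virtual finish time of a packet in WFQ:
    F = max(F_prev_of_flow, V(arrival_time)) + length / weight,
    where the system virtual time V(t) advances at rate C / N(t)
    (C = link capacity, N(t) = total weight of backlogged flows)."""
    return max(prev_finish, virtual_arrival) + length / weight

# Two back-to-back unit-length packets on a weight-1 flow arriving at V = 0:
f1 = wfq_finish_time(0, 0, 1, 1)   # 1.0
f2 = wfq_finish_time(f1, 0, 1, 1)  # 2.0: queued behind the first packet
print(f1, f2)
```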

Example: Problem 6.14 (a) V(t) – virtual system time C – link capacity N(t) – total weight of the backlogged flows [Figure: V(t) advances at rate C/N(t), so its slope changes (e.g., 1/3, 1/2) as flows become backlogged or drain; each packet of A, B, and C is stamped with its virtual finish time over wall-clock time 1–21]

Properties of WFQ Guarantees that any packet is transmitted within packet_length/link_capacity of its transmission time in the fluid flow system Can be used to provide guaranteed services Achieves max-min fair allocation Can be used to protect well-behaved flows against malicious flows

What You Need to Know Basic concepts: Arrival curve Token-bucket specification System virtual time / virtual finish time WFQ properties Link sharing requirements and challenges Bandwidth-delay coupling problem Mechanisms: WFQ implementation in the fluid flow & packet systems

Topics to be Covered Transport Protocols: UDP, TCP and its many variations Congestion Control and Congestion Avoidance concepts QoS (Packet Scheduling) and Congestion Control (RED) Mechanisms Fluid Model and Token Bucket method Essential DiffServ and IntServ Concepts

Differentiated Services Some traffic should get better treatment Application requirements: interactive vs. bulk transfer Economic arrangements: first class versus coach What kind of better service could you give? Measured by drops, or by delay (and drops) How do you know which packets to give better service? Bits in the packet header

Advantages of DiffServ Very simple to implement Can be applied at different granularities Flows Institutions Traffic types Marking can be done at edges or by hosts Allows easy peering (bilateral SLAs)

Disadvantages of DiffServ Service is still “best effort”, just a “better” class of best effort Except for EF, which has terrible efficiency All traffic accepted (within SLAs) Some applications need better than this Certainly some apps need better service than today’s Internet delivers But perhaps if DiffServ were widely deployed premium traffic would get great service (recall example)

Integrated Services Attempt to integrate service for "real-time" applications into the Internet Known as IntServ Total, massive, and humiliating failure: 1000s of papers, IETF standards, and STILL no deployment...

Resource Reservation Protocol: RSVP Establishes end-to-end reservations over a datagram network Designed for multicast (which will be covered later) Sources: send a TSpec Receivers: respond with an RSpec Network: responds to reservation requests

Advantages of IntServ Precise QoS delivered at flow granularity Good service, given exactly to those who need it Decisions made by hosts, who know what they need, not by organizations, egress/ingress points, etc. Fits multicast and unicast traffic equally well

Disadvantages of IntServ Scalability: per-flow state, classification, etc. Goofed big time Aggregation/encapsulation techniques can help Can overprovision big links; per-flow is OK on small links Scalability can be fixed, but there's no second chance Economic arrangements: Need sophisticated settlements between ISPs Right now, settlements are primitive (barter) User charging mechanisms: QoS pricing is needed

What You Need to Know Three kinds of QoS approaches Link sharing, DiffServ, IntServ Some basic concepts: Differentiated dropping versus service priority Per-flow QoS (IntServ) versus per-aggregate QoS (DiffServ) Admission control: parameter versus measurement Control plane versus data plane Controlled load versus guaranteed service Codepoints versus explicit signaling Various mechanisms: Playback points Token bucket RSVP PATH/RESV messages