EECS 122: Introduction to Computer Networks Midterm II Review

1 EECS 122: Introduction to Computer Networks Midterm II Review
Computer Science Division
Department of Electrical Engineering and Computer Sciences
University of California, Berkeley
Berkeley, CA

2 Topics to be Covered
Transport Protocols
Congestion Control and Congestion Avoidance concepts, TCP variants
QoS (Packet Scheduling) and Congestion Control (RED) Mechanisms
Fluid Model and Token Bucket method
Essential DiffServ and IntServ Concepts
P&D: Sections 5.1, 5.2, , 6.4.2, Project #2 Specification

3 Topics to be Covered
Transport Protocols
Congestion Control and Congestion Avoidance concepts, TCP variants
QoS (Packet Scheduling) and Congestion Control (RED) Mechanisms
Fluid Model and Token Bucket method
Essential DiffServ and IntServ Concepts

4 Transport Layer
Provide a way to decide which packets go to which applications (multiplexing/demultiplexing)
Can provide more reliability, in-order delivery, at-most-once delivery
Can support messages of arbitrary length
Govern when hosts should send data → can implement congestion and flow control

5 UDP: User Datagram Protocol
Minimalist transport protocol
Same best-effort service model as IP
Messages up to 64KB
Provides multiplexing/demultiplexing to IP
Does not provide flow and congestion control
Application examples: video/audio streaming

6 TCP: Transmission Control Protocol
Reliable, in-order, and at-most-once delivery
Messages can be of arbitrary length
Provides multiplexing/demultiplexing to IP
Provides congestion control and avoidance
Application examples: file transfer, chat

7 TCP Service
Open connection
Reliable byte-stream transfer from (IPa, TCP Port1) to (IPb, TCP Port2)
Indication if connection fails: Reset
Close connection

8 Flow Control vs. Congestion Control
Flow control: regulates how much data a sender can send without overflowing the receiver
Congestion control: regulates how much data a sender can send without congesting the network
Congestion: packets dropped in the network due to buffer overflow at routers

9 Congestion Window (cwnd)
Limits how much data can be in transit, i.e., how many unacknowledged bytes the sender can send
AdvertisedWindow is used for flow control
MaxWindow = min(cwnd, AdvertisedWindow)
EffectiveWindow = MaxWindow – (LastByteSent – LastByteAcked)
[Figure: the byte stream with LastByteAcked and LastByteSent markers; sequence numbers increase to the right]
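The window arithmetic above can be sketched in a few lines (Python used for illustration; the function name is an assumption, but the variables mirror the slide's formulas):

```python
# Sketch of TCP's sending-window arithmetic from the slide above.
def effective_window(cwnd, advertised_window, last_byte_sent, last_byte_acked):
    """How many more bytes the sender may put in flight right now."""
    max_window = min(cwnd, advertised_window)      # congestion + flow control
    in_flight = last_byte_sent - last_byte_acked   # unacknowledged bytes
    return max(0, max_window - in_flight)

# Example: cwnd = 8000, receiver advertises 6000, 2000 bytes already unacked
print(effective_window(8000, 6000, 12000, 10000))  # -> 4000
```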

10 Topics to be Covered
Transport Protocols
Congestion Control and Congestion Avoidance concepts, TCP variants
QoS (Packet Scheduling) and Congestion Control (RED) Mechanisms
Fluid Model and Token Bucket method
Essential DiffServ and IntServ Concepts

11 Why Do You Care About Congestion Control?
Otherwise you get congestion collapse. How might this happen?
Assume the network is congested (a router drops packets)
You learn the receiver didn't get the packet, either by ACK or timeout
What do you do? Retransmit the packet
The receiver still doesn't get the packet (because it is dropped again)
Retransmit again... and so on...
Now assume that everyone is doing the same!
The network becomes more and more congested
And all this with duplicate packets rather than new packets!

12 Solutions?
Increase buffer size. Why not?
Slow down: if you know that your packets are not delivered because of network congestion, slow down
Questions:
How do you detect network congestion? Use packet loss as the indication!
By how much do you slow down?

13 TCP: Slow Start
Goal: discover congestion quickly
How? Quickly increase cwnd until the network is congested → get a rough estimate of the optimal size of cwnd
Whenever starting traffic on a new connection, or whenever increasing traffic after congestion was experienced:
Set cwnd = 1
Each time a segment is acknowledged, increment cwnd by one (cwnd++)

14 Slow Start Example
The congestion window size grows very rapidly
TCP slows down the increase of cwnd when cwnd >= ssthresh
[Figure: cwnd = 1 (segment 1), cwnd = 2 (segments 2–3), cwnd = 4 (segments 4–7), cwnd = 8; cwnd doubles every round-trip time]

15 Congestion Avoidance
Additive increase: starting from the rough estimate, slowly increase cwnd to probe for additional available bandwidth
Multiplicative decrease: cut the congestion window size aggressively if a timeout occurs
If cwnd >= ssthresh, then each time a segment is acknowledged, increment cwnd by 1/cwnd (cwnd += 1/cwnd)

16 Slow Start/Congestion Avoidance Example
Assume that ssthresh = 8
[Figure: cwnd (in segments) vs. round-trip times, with the ssthresh line at 8]

17 Putting Everything Together: TCP Pseudocode
Initially:
    cwnd = 1;
    ssthresh = infinite;
New ack received:
    if (cwnd < ssthresh)
        /* Slow Start */
        cwnd = cwnd + 1;
    else
        /* Congestion Avoidance */
        cwnd = cwnd + 1/cwnd;
Timeout:
    /* Multiplicative decrease */
    ssthresh = cwnd/2;
    cwnd = 1;
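The pseudocode above can be exercised as a small simulation (a sketch in Python; cwnd is measured in segments and the class name is an assumption):

```python
# Minimal simulation of TCP's slow start / congestion avoidance / timeout rules.
class TcpSender:
    def __init__(self):
        self.cwnd = 1.0
        self.ssthresh = float("inf")

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1              # slow start: +1 per ACK (doubles per RTT)
        else:
            self.cwnd += 1 / self.cwnd  # congestion avoidance: ~+1 per RTT

    def on_timeout(self):
        self.ssthresh = self.cwnd / 2   # multiplicative decrease
        self.cwnd = 1.0                 # restart from slow start

s = TcpSender()
for _ in range(10):
    s.on_ack()
print(s.cwnd)      # -> 11.0 (still in slow start: ssthresh is infinite)
s.on_timeout()
print(s.ssthresh)  # -> 5.5
```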

18 The Big Picture
[Figure: cwnd over time; slow start up to ssthresh, then congestion avoidance; on timeout, ssthresh is halved and slow start restarts]

19 Packet Loss Detection
Wait for the Retransmission Time Out (RTO)
What's the problem with this? RTO is a performance killer:
In the BSD TCP implementation, RTO is usually more than 1 second
The granularity of the RTT estimate is 500 ms
The retransmission timeout is at least two times the RTT
Solution: don't wait for the RTO to expire

20 Fast Retransmit
Resend a segment after 3 duplicate ACKs
Remember, a duplicate ACK means that an out-of-sequence segment was received
Notes:
Duplicate ACKs can also be due to packet reordering!
If the window is small, you don't get enough duplicate ACKs!
[Figure: segments 1–7 with ACKs; one segment is lost, so each later segment triggers another duplicate ACK 4 until 3 duplicates have arrived]
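The duplicate-ACK counting that triggers fast retransmit can be sketched as follows (illustrative Python; the function name is an assumption, and the threshold of 3 is the one the slide uses):

```python
# Sketch of the fast-retransmit trigger: fire once the same ACK number
# has been seen `threshold` extra times in a row.
def find_fast_retransmits(acks, threshold=3):
    """Return ACK numbers that accumulated `threshold` duplicates."""
    dup_count = {}
    triggered = []
    prev = None
    for ack in acks:
        if ack == prev:
            dup_count[ack] = dup_count.get(ack, 0) + 1
            if dup_count[ack] == threshold:
                triggered.append(ack)   # segment `ack` is retransmitted now
        else:
            dup_count[ack] = 0          # a new ACK resets the duplicate run
        prev = ack
    return triggered

# ACK stream from the figure: ACKs 2, 3, 4, then three duplicate ACK 4s
print(find_fast_retransmits([2, 3, 4, 4, 4, 4]))  # -> [4]
```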

21 Fast Recovery
After a fast retransmit, set cwnd to cwnd/2 (the new ssthresh)
i.e., don't reset cwnd to 1
But when the RTO expires, still do cwnd = 1
Fast Retransmit and Fast Recovery → implemented by TCP Reno; the most widely used version of TCP today

22 Fast Retransmit and Fast Recovery
Retransmit after 3 duplicate ACKs
Prevents expensive timeouts
No need to slow start again
At steady state, cwnd oscillates around the optimal cwnd size
[Figure: cwnd over time; slow start, then congestion avoidance, with fast retransmit halving cwnd instead of resetting it]

23 TCP Flavors
TCP-Tahoe: cwnd = 1 whenever a drop is detected
TCP-Reno: cwnd = 1 on timeout; cwnd = cwnd/2 on duplicate ACKs
TCP-NewReno: TCP-Reno + improved fast recovery
TCP-Vegas, TCP-SACK

24 TCP Vegas
Improved timeout mechanism
Decrease cwnd only for losses sent at the current rate
Avoids reducing the rate twice
Congestion avoidance phase: compare the Actual rate (A) to the Expected rate (E)
If E – A > β, decrease cwnd linearly
If E – A < α, increase cwnd linearly
Rate measurements ~ delay measurements
See the textbook for details!
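One Vegas congestion-avoidance step might look like this sketch (Python for illustration; the function name, the default α and β, and expressing the E − A gap in packets via the base RTT are assumptions consistent with the textbook description):

```python
# Illustrative TCP Vegas congestion-avoidance step.
def vegas_adjust(cwnd, base_rtt, current_rtt, alpha=1, beta=3):
    expected = cwnd / base_rtt              # rate with no queueing (E)
    actual = cwnd / current_rtt             # rate actually achieved (A)
    diff = (expected - actual) * base_rtt   # ~ extra packets queued in network
    if diff > beta:
        return cwnd - 1                     # too much queueing: decrease linearly
    if diff < alpha:
        return cwnd + 1                     # too little: increase linearly
    return cwnd                             # in the sweet spot: hold

print(vegas_adjust(10, base_rtt=1.0, current_rtt=2.0))  # -> 9 (queueing detected)
print(vegas_adjust(10, base_rtt=1.0, current_rtt=1.0))  # -> 11 (room to grow)
```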

25 TCP-SACK
SACK = Selective Acknowledgments
ACK packets identify exactly which packets have arrived
Makes recovery from multiple losses much easier

26 Topics to be Covered
Transport Protocols: UDP, TCP and its many variations
Congestion Control and Congestion Avoidance concepts
QoS (Packet Scheduling) and Congestion Control (RED) Mechanisms
Fluid Model and Token Bucket method
Essential DiffServ and IntServ Concepts

27 Packet Scheduling
Decide when and what packet to send on the output link
Usually implemented at the output interface of a router
[Figure: flows 1..n pass through a Classifier into per-flow queues (buffer management); a Scheduler picks the next packet for the output link]

28 Goals of Packet Scheduling
Provide per-flow/aggregate QoS guarantees in terms of delay and bandwidth
Provide per-flow/aggregate protection
A flow/aggregate is identified by a subset of the following packet header fields:
source/destination IP address (32 bits)
source/destination port number (16 bits)
protocol type (8 bits)
type of service (8 bits)
Examples:
All packets from machine A to machine B
All packets from Berkeley
All packets between Berkeley and MIT
All TCP packets from EECS-Berkeley
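Classification against such a header-field subset (with unspecified fields as wildcards) can be sketched like this (Python for illustration; the field names and exact-match-only logic are assumptions, since real classifiers also support prefix and range matches):

```python
# Sketch of a flow classifier: a "spec" maps header fields to required
# values, with None meaning "wildcard" (field not part of this flow's identity).
def matches(packet, spec):
    return all(v is None or packet.get(k) == v for k, v in spec.items())

# "All TCP packets from this source address" (ports and destination wildcarded)
spec = {"src_ip": "128.32.0.1", "dst_ip": None,
        "src_port": None, "dst_port": None, "proto": "tcp"}
pkt = {"src_ip": "128.32.0.1", "dst_ip": "18.0.0.5",
       "src_port": 5501, "dst_port": 80, "proto": "tcp"}
print(matches(pkt, spec))  # -> True
```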

29 Random Early Detection (RED)
Basic premise:
The router should signal congestion when the queue first starts building up (by dropping a packet)
But the router should give flows time to reduce their sending rates before dropping more packets
Therefore, packet drops should be:
Early: don't wait for the queue to overflow
Random: don't drop all packets in a burst, but space the drops out
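RED's drop decision can be sketched as a linear ramp between two queue-length thresholds (Python for illustration; min_th, max_th, and max_p are the conventional parameter names, and the EWMA averaging of the queue length is omitted here as an assumption):

```python
# Sketch of RED's per-packet drop probability as a function of the
# (averaged) queue length.
def red_drop_probability(avg_queue, min_th=5, max_th=15, max_p=0.1):
    if avg_queue < min_th:
        return 0.0      # queue short: no early drops
    if avg_queue >= max_th:
        return 1.0      # queue too long: drop every arrival
    # linear ramp from 0 to max_p between the thresholds
    return max_p * (avg_queue - min_th) / (max_th - min_th)

print(red_drop_probability(3))    # -> 0.0
print(red_drop_probability(10))   # -> 0.05
print(red_drop_probability(20))   # -> 1.0
```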

30 RED Advantages
High network utilization with low delays
Average queue length is small, but capable of absorbing large bursts
Many refinements to the basic algorithm make it more adaptive (requires less tuning)

31 Explicit Congestion Notification
Rather than drop packets to signal congestion, a router can send an explicit signal
Explicit Congestion Notification (ECN): instead of (optionally) dropping a packet, the router sets a bit in the packet header
If a data packet has the bit set, then its ACK has the ECN bit set
Backward compatibility: a bit in the header indicates whether the host implements ECN
Note that not all routers need to implement ECN

32 ECN Advantages
No need to retransmit optionally dropped packets
No confusion between congestion losses and corruption losses

33 Topics to be Covered
Transport Protocols: UDP, TCP and its many variations
Congestion Control and Congestion Avoidance concepts
QoS (Packet Scheduling) and Congestion Control (RED) Mechanisms
Fluid Model and Token Bucket method
Essential DiffServ and IntServ Concepts

34 Token Bucket and Arrival Curve
Parameters:
r – average rate, i.e., the rate at which tokens fill the bucket
b – bucket depth
R – maximum link capacity or peak rate (optional parameter)
A bit is transmitted only when there is an available token
Arrival curve – maximum number of bits transmitted within an interval of time of size t
[Figure: a token-bucket regulator (tokens arrive at r bps into a bucket of b bits; output is at most R bps) and its arrival curve: slope R up to b·R/(R–r) bits, then slope r]
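The (r, b, R) regulator and its arrival curve can be sketched as follows (a fluid-model sketch in Python; class and function names are assumptions):

```python
# Illustrative token-bucket regulator with the (r, b, R) parameters above.
class TokenBucket:
    def __init__(self, r, b):
        self.r = r            # token fill rate (bits/sec)
        self.b = b            # bucket depth (bits)
        self.tokens = b       # bucket starts full
        self.t = 0.0

    def conforms(self, bits, now):
        """Can `bits` be sent at time `now` without violating (r, b)?"""
        self.tokens = min(self.b, self.tokens + self.r * (now - self.t))
        self.t = now
        if bits <= self.tokens:
            self.tokens -= bits
            return True
        return False

def arrival_curve(t, r, b, R):
    """Upper bound on bits sent in any interval of length t: min(R*t, b + r*t)."""
    return min(R * t, b + r * t)

tb = TokenBucket(r=1, b=2)
print(tb.conforms(2, now=0.0))          # -> True  (a burst of b bits is allowed)
print(tb.conforms(2, now=0.5))          # -> False (only r*0.5 tokens refilled)
print(arrival_curve(1, r=1, b=2, R=4))  # -> 3     (min(4, 3))
```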

35 Source Traffic Characterization
Arrival curve – maximum amount of bits transmitted during an interval of time Δt
Use a token bucket to bound the arrival curve
[Figure: a source rate pattern (bps vs. time) and the corresponding arrival curve (bits vs. Δt)]

36 Source Traffic Characterization: Example
Arrival curve – maximum amount of bits transmitted during an interval of time Δt
Use a token bucket to bound the arrival curve: (R=2, b=1, r=1)
[Figure: a sample rate pattern over time and its arrival curve, which rises with slope 2 up to 2 bits at Δt = 1, then with slope 1]

37 QoS Guarantees: Per-hop Reservation
End-host: specify
the arrival rate, characterized by a token bucket with parameters (b, r, R)
the maximum admissible delay D, no losses
Router: allocate bandwidth ra and buffer space Ba such that
no packet is dropped
no packet experiences a delay larger than D
[Figure: the arrival curve (slope R up to b·R/(R–r) bits, then slope r) against a service line of slope ra; D is the maximum horizontal gap and Ba the maximum vertical gap between them]
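In the fluid model, the worst-case delay and backlog for a (b, r, R) source served at rate ra occur at the arrival curve's knee, t* = b/(R − r). A sketch of that computation (assuming r <= ra <= R and ignoring packetization; the function name is an assumption):

```python
# Hedged computation of per-hop bounds for a (b, r, R) token-bucket source
# served at a constant allocated rate ra, with r <= ra <= R.
def per_hop_bounds(b, r, R, ra):
    assert r <= ra <= R
    t_knee = b / (R - r)                     # when the bucket runs dry
    bits_at_knee = R * t_knee                # = b*R/(R-r), the curve's break point
    max_delay = bits_at_knee / ra - t_knee   # horizontal gap to the service line
    max_buffer = bits_at_knee - ra * t_knee  # vertical gap: backlog bound (Ba)
    return max_delay, max_buffer

# Example: b=2, r=1, R=4, allocated rate ra=2
print(per_hop_bounds(2, 1, 4, 2))  # (delay, buffer) of roughly (0.667, 1.333)
```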

38 Packet Scheduling and Fair Queueing
Make sure that at any time the flow receives at least the allocated rate ra
The canonical example of such a scheduler: Weighted Fair Queueing (WFQ)
Implements max-min fairness: each flow receives min(ri, f), where
ri – flow arrival rate
f – link fair rate (see next slide)
Weighted Fair Queueing (WFQ) – associate a weight with each flow
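The fair rate f can be found by progressive filling: give every unsatisfied flow an equal share of the remaining capacity until some flow's demand falls below its share. A sketch (Python for illustration; equal weights assumed, and the function name is an assumption):

```python
# Sketch of max-min fairness: find f such that each flow gets min(r_i, f)
# and the link capacity is fully used (if it is saturated at all).
def fair_rate(demands, capacity):
    demands = sorted(demands)
    remaining = capacity
    left = len(demands)
    for d in demands:
        share = remaining / left
        if d <= share:
            remaining -= d       # this flow is fully satisfied below f
            left -= 1
        else:
            return share         # all remaining flows are capped at f = share
    return float("inf")          # link underloaded: no flow is capped

# Example: demands 1, 4, 6 on a link of capacity 9
f = fair_rate([1.0, 4.0, 6.0], 9)
print(f)                                       # -> 4.0
print([min(d, f) for d in [1.0, 4.0, 6.0]])    # -> [1.0, 4.0, 4.0]
```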

39 Fluid Flow System: Example
The red flow has packets backlogged between time 0 and 10
Backlogged flow → the flow's queue is not empty
Other flows have packets continuously backlogged
All packets have the same size
[Figure: a link shared by flows with weights 5, 1, 1, 1, 1, 1; per-flow service over time 0–15]

40 Packet System: Example
Select the first packet that finishes in the fluid flow system
[Figure: service in the fluid flow system vs. the packet system over time 0–10]

41 Example: Problem 6.14 (a)
Link capacity C = 1; packet size = 1
[Figure: arrival patterns of flows A, B, and C over wall-clock time 1–15, and the resulting transmission orders in the fluid system and the packet system over wall-clock time 1–21]

42 Solution: Virtual Time
Key observation: while the finish times of packets may change when a new packet arrives, the order in which packets finish doesn't!
Only the order is important for scheduling
Solution: instead of the packet finish time, maintain the number of rounds needed to send the remaining bits of the packet (virtual finishing time)
The virtual finishing time doesn't change when a new packet arrives
System virtual time – index of the round in the bit-by-bit round-robin scheme

43 Example: Problem 6.14 (a) with Virtual Time
V(t) – virtual system time
C – link capacity
N(t) – total weight of the backlogged flows
[Figure: V(t) grows with slope C/N(t), changing slope (e.g., 1/3, 1/2) as flows become backlogged or empty; packets of flows A, B, and C are served in order of virtual finishing time over wall-clock time 1–21]

44 Properties of WFQ
Guarantees that any packet is transmitted within packet_length/link_capacity of its transmission time in the fluid flow system
Can be used to provide guaranteed services
Achieves max-min fair allocation
Can be used to protect well-behaved flows against malicious flows

45 What You Need to Know
Basic concepts:
Arrival curve
Token-bucket specification
System virtual time / finish virtual time
WFQ properties
Link sharing requirements and challenges
Bandwidth-delay coupling problem
Mechanisms:
WFQ implementation in the fluid flow & packet system

46 Topics to be Covered
Transport Protocols: UDP, TCP and its many variations
Congestion Control and Congestion Avoidance concepts
QoS (Packet Scheduling) and Congestion Control (RED) Mechanisms
Fluid Model and Token Bucket method
Essential DiffServ and IntServ Concepts

47 Differentiated Services
Some traffic should get better treatment
Application requirements: interactive vs. bulk transfer
Economic arrangements: first class versus coach
What kind of better service could you give? Measured by drops, or delay (and drops)
How do you know which packets to give better service? Bits in the packet header

48 Advantages of DiffServ
Very simple to implement
Can be applied at different granularities: flows, institutions, traffic types
Marking can be done at the edges or by hosts
Allows easy peering (bilateral SLAs)

49 Disadvantages of DiffServ
Service is still "best effort", just a "better" class of best effort
Except for EF, which has terrible efficiency
All traffic accepted (within SLAs)
Some applications need better than this
Certainly some apps need better service than today's Internet delivers
But perhaps if DiffServ were widely deployed, premium traffic would get great service (recall the example)

50 Integrated Services
Attempt to integrate service for "real-time" applications into the Internet
Known as IntServ
Total, massive, and humiliating failure
1000s of papers, IETF standards, and STILL no deployment...

51 Resource Reservation Protocol: RSVP
Establishes end-to-end reservations over a datagram network
Designed for multicast (which will be covered later)
Sources: send a TSpec
Receivers: respond with an RSpec
Network: responds to reservation requests

52 Advantages of IntServ
Precise QoS delivered at flow granularity
Good service, given exactly to those who need it
Decisions made by hosts, who know what they need
Not by organizations, egress/ingress points, etc.
Fits multicast and unicast traffic equally well

53 Disadvantages of IntServ
Scalability: per-flow state, classification, etc.
Goofed big time
Aggregation/encapsulation techniques can help
Can overprovision big links; per-flow is OK on small links
Scalability can be fixed, but there is no second chance
Economic arrangements: need sophisticated settlements between ISPs
Right now, settlements are primitive (barter)
User charging mechanisms: need QoS pricing

54 What You Need to Know
Three kinds of QoS approaches: link sharing, DiffServ, IntServ
Some basic concepts:
Differentiated dropping versus service priority
Per-flow QoS (IntServ) versus per-aggregate QoS (DiffServ)
Admission control: parameter versus measurement
Control plane versus data plane
Controlled load versus guaranteed service
Codepoints versus explicit signaling
Various mechanisms:
Playback points
Token bucket
RSVP PATH/RESV messages

