CS244 Packet Scheduling (Generalized Processor Sharing) Some slides created by: Ion Stoica and Mohammad Alizadeh

1 CS244 Packet Scheduling (Generalized Processor Sharing). Some slides created by: Ion Stoica (istoica@cs.berkeley.edu) and Mohammad Alizadeh (alizadeh@csail.mit.edu)

2 Context. Paper: 3,699 citations (Google Scholar). Abhay Parekh: student of Robert Gallager at MIT; entrepreneur, adjunct prof at Berkeley. Robert "Bob" Gallager: Professor Emeritus, MIT. Weighted Fair Queueing (WFQ) paper: "Analysis and simulation of a fair queueing algorithm," Demers, Keshav, Shenker (SIGCOMM '89; 3,717 citations). WFQ and PGPS are essentially the same ideas, developed independently.

3 Problem: How to share a link? What is the "right" rate allocation? [Figure: hosts A, B, C; links of 1 Mb/s, 10 Mb/s, and 5 Mb/s] Proposed allocation: 0.5 Mb/s and 0.5 Mb/s (fair?)

4 Problem: How to share a link? What is the "right" rate allocation? [Figure: same topology; hosts A, B, C; links of 1 Mb/s, 10 Mb/s, and 5 Mb/s] Candidate allocations: 0.5 Mb/s and 0.5 Mb/s (fair?), or 0.67 Mb/s and 0.33 Mb/s

5 Problem: How to share a link? What is the "right" rate allocation? [Figure: hosts A, B, C; a bulk file transfer and a voice call share the 1 Mb/s link, e.g. 0.9 Mb/s and 0.1 Mb/s] Tough to answer in general (depends on traffic needs, payment, policy…)

6 Max-Min Fairness: a common way to allocate flows. 1. Allocate in order of increasing demand. 2. No flow gets more than its demand. 3. The excess, if any, is equally shared. [Figure: rate demanded/allocated, flows sorted by demand] Example: demands 2, 8, 10 with total capacity 10. With fair share f = 4: min(2, 4) = 2, min(8, 4) = 4, min(10, 4) = 4; total allocated = 10.
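The three allocation steps above can be sketched as a progressive-filling routine (a minimal sketch; the flow names and dictionary interface are illustrative):

```python
def max_min_allocation(demands, capacity):
    """Max-min fairness: allocate in order of increasing demand;
    no flow gets more than its demand; excess is shared equally."""
    alloc = {}
    remaining = capacity
    unsatisfied = sorted(demands.items(), key=lambda kv: kv[1])
    while unsatisfied:
        fair_share = remaining / len(unsatisfied)
        flow, demand = unsatisfied[0]
        if demand <= fair_share:
            # Smallest remaining demand fits under the fair share: grant it fully.
            alloc[flow] = demand
            remaining -= demand
            unsatisfied.pop(0)
        else:
            # Every remaining flow wants at least the fair share: split equally.
            for flow, _ in unsatisfied:
                alloc[flow] = fair_share
            remaining = 0
            unsatisfied = []
    return alloc
```

On the slide's example, `max_min_allocation({"A": 2, "B": 8, "C": 10}, 10)` yields 2, 4, and 4, matching f = 4.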

7 What is the GPS paper about? A scheduling algorithm for (weighted) max-min fairness; it decides which flow's packet to send next. [Figure: a classifier splits traffic arriving on the ingress of links 1, 2, 3 into flows 1 … n; a scheduler serves them onto each link's egress]

8 Properties of GPS Work conserving –Link is busy if there are packets to send Max-min fair allocation of bandwidth for best effort service Provides isolation –Traffic hogs cannot overrun others –[Why is this surprising?] End-to-end delay bounds for guaranteed service 8

9 What you said: "This paper almost feels like it was the result of network designers feeling like they had to have bounds on runtime because they are valued in theory and in other fields, but provides no context as to why these bounds are actually important." "Why this paper rather than the WFQ paper by Demers, Shenker, Keshav?"

10 Why is this an important paper? Jang et al. “Silo: Predictable Message Completion Time in the Cloud” (accepted to SIGCOMM 2015) “…We identify three key requirements for such predictability: guaranteed network bandwidth, guaranteed per-packet delay and guaranteed burst allowance… Silo leverages the fact that guaranteed bandwidth and delay are tightly coupled: controlling tenant bandwidth yields deterministic bounds on network queuing delay. Silo builds upon network calculus… using a novel packet pacing mechanism to ensure the requirements are met.” Thousands of citations; several best paper awards; on nearly every “must-read networking papers” list… why? 10

11 Outline for Rest of Today GPS (for fluid flows) Packet-by-packet GPS (PGPS) Implementing PGPS Rate and delay guarantees with GPS 11

12 Bit-by-Bit Fair Queueing. 1. Packets belonging to a flow are placed in a FIFO. This is called "per-flow queueing". 2. FIFOs are scheduled one bit at a time, in round-robin fashion. [Figure: classification into per-flow FIFOs (flow 1 … flow N), served by bit-by-bit round robin] Question: What is a "flow"? Question: How can we give weights?

13 Bit-by-Bit Weighted Fair Queueing. Flows can be allocated different rates by servicing a different number of bits for each flow during each round. Generalized Processor Sharing (GPS): serve an "infinitesimal amount of flow" instead of "bits". [Figure: four queues with weights w1 = 0.1, w2 = 0.3, w3 = 0.3, w4 = 0.3 sharing a link of rate C] Order of service for the four queues: … f1, f2, f2, f2, f3, f3, f3, f4, f4, f4, f1, …
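The service order shown can be reproduced by scaling each weight to an integer number of per-round quanta (a sketch; it assumes the smallest weight divides the others approximately evenly):

```python
def weighted_round_order(weights, rounds=1):
    """Per-round service order when each flow is served a number of
    quanta proportional to its weight (weights scaled so that the
    smallest weight gets exactly one quantum per round)."""
    smallest = min(weights.values())
    quanta = {f: round(w / smallest) for f, w in weights.items()}
    order = []
    for _ in range(rounds):
        for flow in weights:          # fixed flow order within a round
            order.extend([flow] * quanta[flow])
    return order
```

With the slide's weights (0.1, 0.3, 0.3, 0.3), one round is f1 once and f2, f3, f4 three times each, matching the order above.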

14 GPS Formal Definition. N flows with weights w1, …, wN; link rate = r. S_j(t1, t2): amount of service received by flow j between times t1 and t2. Then, if flow i is continually backlogged in [t1, t2], for all j: S_i(t1, t2) / S_j(t1, t2) ≥ w_i / w_j. It follows that a backlogged flow i is served at rate at least (w_i / Σ_j w_j) · r. Basically, a rate guarantee.

15 GPS Example. [Figure: per-flow service rates over time on a shared link; the red flow is backlogged between time 0 and 10, the other flows are continuously backlogged]
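The instantaneous rates behind such an example follow directly from the GPS definition (a minimal sketch; the weights and link rate used below are assumptions for illustration, not values taken from the figure):

```python
def gps_rates(weights, backlogged, link_rate):
    """Instantaneous GPS service rates: a backlogged flow i is served at
    w_i / (sum of backlogged weights) * r; non-backlogged flows get 0."""
    total = sum(weights[f] for f in backlogged)
    return {f: (weights[f] / total * link_rate if f in backlogged else 0.0)
            for f in weights}

# Hypothetical setting: one weight-5 "red" flow plus five weight-1 flows
# on a 10 Mb/s link.
w = {"red": 5, "a": 1, "b": 1, "c": 1, "d": 1, "e": 1}
while_red = gps_rates(w, set(w), 10)           # red backlogged: red gets 5 Mb/s
after_red = gps_rates(w, {"a", "b", "c", "d", "e"}, 10)  # others split the link
```

When the red flow empties, the other flows' rates jump from 1 Mb/s to 2 Mb/s each: GPS is work conserving.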

16 Outline GPS (for fluid flows) Packet-by-packet GPS (PGPS) Implementing PGPS Rate and delay guarantees with GPS 16

17 Packet vs. Fluid System GPS is not implementable [why?] –Fluid system: multiple queues can be serviced simultaneously In real packet-based systems –One queue is served at any given time –Packet transmission cannot be preempted Goal: A packet scheme close to fluid system –Bound performance w.r.t fluid GPS 17

18 First Cut: Simple Round Robin Serve a packet from non-empty queues in turn –Let’s assume all flows have equal weight [Is this fair?] Variable packet length  can get more service by sending bigger packets [How might one mitigate this?] Unfair instantaneous service rate (esp. with variable weights) – E.g. 500 flows: 250 with weight 1, 250 with weight 10 – Each round takes 2,750 packet times – What if a packet arrives right after its “turn”? 18
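The slide's arithmetic, under the assumption that a weight-w flow gets w one-packet turns per round:

```python
# Simple round robin with per-packet turns and weight-proportional visits.
n_light, weight_light = 250, 1     # 250 flows of weight 1
n_heavy, weight_heavy = 250, 10    # 250 flows of weight 10
round_len = n_light * weight_light + n_heavy * weight_heavy
print(round_len)  # 2750 packet transmission times per round
# A weight-1 packet arriving just after its turn waits nearly a full round,
# even though its long-run share is "fair": the instantaneous service is not.
```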

19 PGPS or Weighted Fair Queueing (WFQ). Deals better with variable-size packets and weights. Key idea: 1. Determine the finish time of packets in GPS, assuming no more arrivals. 2. Serve packets in order of finish times. Question: Why not some other metric (largest finish time, earliest start time)? Theorem: a packet finishes transmission in PGPS no later than its GPS finish time plus L_max / r, where L_max is the maximum packet length and r the link rate.

20 Intuition for the PGPS result. The finishing order of packets currently in the GPS system is independent of future arrivals [why?]. A packet is delayed more by PGPS only if, when it arrives, a packet that is later in the GPS order is already being transmitted. There can be at most one such packet.

21 Outline GPS (for fluid flows) Packet-by-packet GPS (PGPS) Implementing PGPS Rate and delay guarantees with GPS 21

22 Virtual Time Implementation Assign a “virtual finish time” to each packet upon arrival  serve packets in order of virtual times Key observation: Virtual finish time can be calculated at packet arrival. Real finish time cannot. Question: Why use virtual time? 22

23 Virtual Time in GPS. Virtual time V_GPS(t): the service that a backlogged flow with weight = 1 would receive in GPS, i.e. dV/dt = r / Σ_{j ∈ B(t)} w_j, where r is the link rate and B(t) is the set of backlogged flows. A backlogged flow i therefore receives service at rate w_i · dV/dt. Question: How much virtual time does it take to serve a packet of length L of flow i? Answer: L / w_i, regardless of which other flows are backlogged.

24 Virtual Time in GPS. [Figure: virtual time V(t) vs. real time; the slope (e.g. 2·r, r, …) changes whenever the set of backlogged flows changes]

25 Virtual Start and Finish Times. [Figure: virtual times at which packets start (S_i^k) and finish (F_i^k) in the fluid system]

26 Putting it together: PGPS (WFQ) Implementation. For the kth packet of flow i arriving at time a: virtual start time S_i^k = F_i^{k-1} if flow i is backlogged (packet k queues behind packet k-1), or S_i^k = V(a) if flow i's queue is empty; equivalently, S_i^k = max(F_i^{k-1}, V(a)). Virtual finish time F_i^k = S_i^k + L_i^k / w_i, where L_i^k is the packet's length. Serve packets in increasing order of F_i^k.
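The update rules above can be folded into a small scheduler sketch (the class and method names are illustrative; the virtual-time advance is simplified by treating the backlogged set as constant between events, so it approximates, rather than exactly tracks, GPS virtual time):

```python
import heapq

class WFQ:
    """Sketch of PGPS/WFQ using virtual finish times."""

    def __init__(self, weights, link_rate):
        self.w = weights
        self.r = link_rate
        self.V = 0.0                 # current virtual time
        self.t_last = 0.0            # real time of last virtual-time update
        self.last_finish = {f: 0.0 for f in weights}  # F_i^{k-1} per flow
        self.backlogged = set()
        self.queue = []              # heap of (finish, seq, flow, length)
        self.seq = 0                 # tie-breaker for equal finish times

    def _advance(self, now):
        # dV/dt = r / sum of backlogged weights (V frozen while idle).
        if self.backlogged:
            wsum = sum(self.w[f] for f in self.backlogged)
            self.V += (now - self.t_last) * self.r / wsum
        self.t_last = now

    def arrive(self, now, flow, length):
        self._advance(now)
        # S_i^k = max(F_i^{k-1}, V(a));  F_i^k = S_i^k + L / w_i
        start = max(self.last_finish[flow], self.V)
        finish = start + length / self.w[flow]
        self.last_finish[flow] = finish
        self.backlogged.add(flow)
        heapq.heappush(self.queue, (finish, self.seq, flow, length))
        self.seq += 1

    def next_packet(self):
        # Serve in increasing order of virtual finish time.
        finish, _, flow, length = heapq.heappop(self.queue)
        if not any(entry[2] == flow for entry in self.queue):
            self.backlogged.discard(flow)
        return flow, length
```

For example, if equal-weight flows "a" and "b" each enqueue one packet at t = 0 with lengths 100 and 50, "b" has the smaller virtual finish time and is served first.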

27 Summary of PGPS (WFQ) Pros –Excellent approximation of GPS –Preserves good properties of GPS: isolation, rate and delay guarantees [Why?] –Gives users incentive to use intelligent flow control [Why?] Cons –Needs per-flow queues (potentially millions) –Requires sorting packets by virtual finish time –Virtual time hard to implement exactly 27

28 Outline GPS (for fluid flows) Packet-by-packet GPS (PGPS) Implementing PGPS Rate and delay guarantees with GPS 28

29 GPS Rate Guarantee. [Figure: same setting as the earlier GPS example, link rate = r; service rates change when the red flow's backlog ends]

30 How do we turn rate guarantees into delay guarantees? 30

31 Deterministic model of a router queue. [Figure: cumulative bytes vs. time for a FIFO queue served at rate R; A(t) = cumulative arrivals, D(t) = cumulative departures; the vertical gap is the queue occupancy Q(t), the horizontal gap is the FIFO delay d(t)] Properties of A(t), D(t): 1. A(t), D(t) are non-decreasing. 2. A(t) ≥ D(t).

32 [Figure: classification into per-flow queues (flow 1 … flow N) feeding a WFQ scheduler; flow i has arrivals A_i(t), guaranteed rate R(f_i), and departures D_i(t); cumulative-bytes plot of A_1(t) and D_1(t) with service slope R(f_1)] Key idea: Constrain the arrival process A(t).

33 Leaky bucket: the "(σ, ρ)" regulator. [Figure: tokens arrive at rate ρ into a token bucket of size σ; packets wait in a packet buffer and depart by consuming one token per byte] The number of bytes released in any period of length t is bounded by σ + ρ·t.
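A minimal (σ, ρ) regulator sketch (class and method names are illustrative):

```python
class TokenBucket:
    """(sigma, rho) regulator: a bucket of sigma tokens refilled at rate rho.
    A packet of L bytes may depart only when L tokens are available, so the
    bytes released in any interval of length t are bounded by sigma + rho*t."""

    def __init__(self, sigma, rho):
        self.sigma = sigma        # bucket depth (bytes)
        self.rho = rho            # token fill rate (bytes/sec)
        self.tokens = sigma       # start with a full bucket
        self.t_last = 0.0

    def conforms(self, now, length):
        """Return True (and consume tokens) if the packet may be sent now."""
        # Refill tokens for elapsed time, capped at the bucket depth.
        self.tokens = min(self.sigma,
                          self.tokens + (now - self.t_last) * self.rho)
        self.t_last = now
        if length <= self.tokens:
            self.tokens -= length
            return True
        return False
```

For example, with σ = 1000 bytes and ρ = 100 bytes/s, an 800-byte packet at t = 0 conforms, a further 300-byte packet does not until tokens accumulate.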

34 (σ, ρ)-Constrained Arrivals and Minimum Service Rate. [Figure: cumulative bytes vs. time; arrivals A_1(t) bounded by the σ + ρ·t envelope, departures D_1(t) at guaranteed rate R(f_1); for ρ ≤ R(f_1), the maximum delay is d_max = σ / R(f_1) and the maximum backlog is B_max = σ] Theorem: If flows are leaky-bucket constrained, and routers use WFQ, then end-to-end delay guarantees are possible.
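Plugging assumed numbers into the single-node bound (the standard result for a (σ, ρ)-constrained flow served at a guaranteed rate R ≥ ρ; the values below are illustrative):

```python
# Assumed parameters for one flow at one WFQ router.
sigma = 10_000      # burst size, bytes
rho = 125_000       # average rate, bytes/sec (1 Mb/s)
R = 250_000         # guaranteed service rate, bytes/sec (2 Mb/s)

assert R >= rho     # bound requires the guaranteed rate to cover the average
d_max = sigma / R   # worst-case queueing delay, seconds
b_max = sigma       # worst-case backlog, bytes
print(d_max)        # 0.04 s: the whole burst drained at rate R
```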

35 Question: How do you get delay bounds for multi-hop networks? Subject of "A generalized processor sharing …: The multiple node case" [Parekh, Gallager, 1994]. Key: an "output burstiness" characterization of GPS, showing that traffic leaving a GPS server is itself leaky-bucket constrained.

36 Is this practical? 36

37 What you said: "I had one main question when I read the paper; do people actually use leaky bucket in practice? …it seems like it would require a lot of dynamic coordination between the senders in the multiple flow case to achieve reasonable link utilization."

38 Is this practical? Difficult to make work; requires a lot of coordination. How do networks get reasonable delay today? Google's B4 SDN WAN uses global rate limiting from end-hosts! See: B4 (SIGCOMM 2013) and BwE (SIGCOMM 2015). * Figure from B4 [SIGCOMM 2013]

39 [image-only slide; no transcript text]

40 Question: Are delay guarantees possible even when R(f_i) < ρ_i for some 1 ≤ i ≤ N? Yes! As long as Σ_i ρ_i ≤ r (r is the link speed), i.e. the link is not overbooked on average. Theorem: the worst case is when all flows are "greedy" at time 0. [Figure: cumulative bytes vs. time; A_i(t) bounded by its (σ_i, ρ_i) envelope]

41 Question What kind of fairness does TCP provide? Statistical, over a large number of round-trip times 41

42 Rate Allocation vs. Delay. [Figure: hosts A, B, C with a 1 Mb/s link; a voice call needs 0.1 Mb/s; a guarantee of 1/7 Mb/s averaged over Mon–Sun could mean 1 Mb/s on Monday and nothing the rest of the week] The time-scale of the guarantee is critical for delay guarantees.

