Scheduling: Buffer Management


1 Scheduling: Buffer Management
Lecture 4: Nov 13, 2013

2 The setting

3 Buffer Scheduling
Who to send next?
What happens when the buffer is full? Who to discard?

4 Requirements of scheduling
An ideal scheduling discipline:
is easy to implement
is fair and protective
provides performance bounds
Each scheduling discipline makes a different trade-off among these requirements.

5 Ease of implementation
A scheduling discipline has to make a decision once every few microseconds!
Should be implementable in a few instructions or in hardware; for hardware, the critical constraint is VLSI space.
Complexity of the enqueue + dequeue processes.
Work per packet should scale less than linearly with the number of active connections.

6 Fairness
Intuitively: each connection should get no more than its demand; the excess, if any, is equally shared.
Fairness also provides protection:
traffic hogs cannot overrun others
automatically isolates heavy users

7 Max-min Fairness: Single Buffer
Allocate bandwidth equally among all users.
If anyone doesn't need its full share, redistribute the excess: maximize the minimum bandwidth provided to any flow not receiving its request.
To increase the smallest allocation, we would have to take from a larger one. Consider a fluid example.
Ex: Compute the max-min fair allocation for a set of four sources with demands 2, 2.6, 4, 5 when the resource has a capacity of 10.
s1 = 2; s2 = 2.6; s3 = s4 = 2.7
This is more complicated in a network.
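A minimal Python sketch of this single-resource max-min computation (function name is mine, not from the slides); it reproduces the allocation above for demands 2, 2.6, 4, 5 and capacity 10.

```python
def max_min_fair(demands, capacity):
    """Max-min fair allocation of `capacity` among `demands` (single resource).

    Repeatedly split the remaining capacity equally among unsatisfied flows;
    a flow whose demand fits within its equal share gets exactly its demand,
    and the leftover is redistributed among the rest.
    """
    alloc = [0.0] * len(demands)
    remaining = float(capacity)
    unsatisfied = list(range(len(demands)))
    while unsatisfied:
        share = remaining / len(unsatisfied)
        satisfied = [i for i in unsatisfied if demands[i] <= share]
        if not satisfied:
            # Everyone wants more than the equal share: split it equally.
            for i in unsatisfied:
                alloc[i] = share
            break
        for i in satisfied:
            alloc[i] = demands[i]
            remaining -= demands[i]
            unsatisfied.remove(i)
    return alloc

print(max_min_fair([2, 2.6, 4, 5], 10))  # -> [2, 2.6, 2.7, 2.7] (up to rounding)
```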

8 FCFS / FIFO Queuing
Simplest algorithm, widely used.
Scheduling is done using the first-in first-out (FIFO) discipline.
All flows are fed into the same queue.

9 FIFO Queuing (cont'd)
First-In First-Out (FIFO) queuing: first arrival, first transmission.
Completely dependent on arrival time.
No notion of priority or allocated buffers.
No space in the queue → packet discarded.
Flows can interfere with each other: no isolation; malicious monopolization is possible.
Various hacks exist for priority, random drops, ...

10 Priority Queuing
A priority index is assigned to each packet upon arrival.
Packets are transmitted in ascending order of priority index (priorities 0 through n-1).
Priority 0 is always serviced first; priority i is serviced only if queues 0 through i-1 are empty.
The highest priority class gets the lowest delay, highest throughput, and lowest loss.
Lower priority classes may be starved by higher priority ones.
Preemptive and non-preemptive versions exist.
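A minimal sketch of non-preemptive strict priority queuing as described above (class and method names are illustrative, not from the slides):

```python
from collections import deque

class StrictPriorityScheduler:
    """Non-preemptive strict priority queuing: classes 0..n-1, 0 = highest."""

    def __init__(self, num_classes):
        self.queues = [deque() for _ in range(num_classes)]

    def enqueue(self, packet, priority):
        self.queues[priority].append(packet)

    def dequeue(self):
        # Serve class i only if classes 0 .. i-1 are all empty.
        for q in self.queues:
            if q:
                return q.popleft()
        return None  # all queues empty
```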

11 Priority Queuing
[Diagram: high-priority and low-priority packet queues feed a transmission link; the low-priority queue is served only when the high-priority queue is empty; packets are discarded when a queue is full.]

12 Round Robin: Architecture
Round robin: scan the class queues, serving one packet from each class that has a non-empty queue.
[Diagram: flows 1, 2 and 3 feed separate queues; a round-robin scheduler serves them onto the transmission link.]
Also called: cyclic polling with limited-1 service.
Hardware requirement: jump to the next non-empty queue.

13 Round Robin Scheduling
Round robin: scan the class queues, serving one packet from each class that has a non-empty queue.

14 Round Robin (cont'd)
Characteristics:
Classify incoming traffic into flows (source-destination pairs).
Round-robin among the flows.
Problems:
Ignores packet length (addressed by GPS, fair queuing).
Inflexible allocation of weights (addressed by WRR, WFQ).
Benefits:
Protection against heavy users (why?)

15 Weighted Round-Robin
Weighted round-robin: each flow i has its own weight wi.
Flow j can send wj packets in one period; the period has length Σj wj.
Also called: cyclic polling with limited-wj service.
Disadvantages:
Variable packet sizes are not accounted for.
Fair only over time scales longer than one period; if a connection has a small weight, or the number of connections is large, this may lead to long periods of unfairness.
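A short Python sketch of the per-period weighted round-robin described above (naming is mine); each pass over the flows is one period of length at most Σj wj packets.

```python
from collections import deque

class WeightedRoundRobin:
    """Weighted round-robin: flow j may send up to weights[j] packets per period."""

    def __init__(self, weights):
        self.weights = list(weights)
        self.queues = [deque() for _ in weights]

    def enqueue(self, flow, packet):
        self.queues[flow].append(packet)

    def one_period(self):
        """Serve the flows in order; return the packets sent in this period."""
        sent = []
        for flow, w in enumerate(self.weights):
            for _ in range(w):
                if not self.queues[flow]:
                    break          # fewer than w packets queued for this flow
                sent.append(self.queues[flow].popleft())
        return sent
```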

16 DRR (Deficit RR) algorithm
Like RR but over bits, so it handles variable packet sizes.
Each connection has a deficit counter (to store credit), with initial value zero.
Choose a quantum of bits to serve from each connection, in order.
For each HoL (Head of Line) packet: credit := credit + quantum; if the packet size is ≤ credit, send it and save the excess; otherwise save the entire credit.
If a connection has no packet to send, reset its counter (to remain fair).
If some packet was sent: counter = min{excess, quantum}.
Resetting/capping the counter prevents a volume attack by a flow that sends many small packets.
Easier to implement than other fair policies such as WFQ.
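A runnable Python sketch of these DRR rules (class and method names are mine). It follows the slide's bookkeeping, including the counter reset and the min{excess, quantum} cap, and reproduces the worked example on the next slide.

```python
from collections import deque

class DeficitRoundRobin:
    """Deficit round-robin: serve up to quantum + saved credit bytes per flow per round."""

    def __init__(self, num_flows, quantum):
        self.quantum = quantum
        self.queues = [deque() for _ in range(num_flows)]  # queued packet sizes
        self.deficit = [0] * num_flows                     # per-flow saved credit

    def enqueue(self, flow, packet_size):
        self.queues[flow].append(packet_size)

    def one_round(self):
        """One pass over all flows; returns the (flow, packet_size) pairs sent."""
        sent = []
        for flow, q in enumerate(self.queues):
            if not q:
                self.deficit[flow] = 0           # no packet to send: reset counter
                continue
            self.deficit[flow] += self.quantum
            served = False
            while q and q[0] <= self.deficit[flow]:
                size = q.popleft()
                self.deficit[flow] -= size       # keep the excess as credit
                sent.append((flow, size))
                served = True
            if served:
                # Cap the saved credit: counter = min{excess, quantum}
                self.deficit[flow] = min(self.deficit[flow], self.quantum)
        return sent

# Worked example from the next slide: quantum 1000; A = 1500, 1000, 2000 (head first),
# B = 300, 500 (head first), C = 1200.
drr = DeficitRoundRobin(3, 1000)
for flow, sizes in enumerate([[1500, 1000, 2000], [300, 500], [1200]]):
    for s in sizes:
        drr.enqueue(flow, s)
drr.one_round()  # B sends 300 and 500; counters: A = 1000, B = 200, C = 1000
drr.one_round()  # A sends 1500, C sends 1200; counters: A = 500, B = 0, C = 800
```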

17 Deficit Round-Robin
DRR can handle variable packet sizes.
Quantum size: 1000 bytes.
Queues (head of queue listed first): A: 1500, 1000, 2000; B: 300, 500; C: 1200
1st round: A's count: 1000; B's count: 200 (served twice); C's count: 1000
2nd round: A's count: 500 (served); B's count: 0; C's count: 800 (served)

18 DRR: performance
Handles variable-length packets fairly: backlogged sources share bandwidth equally.
Preferably, packet size < quantum.
Simple to implement, similar to round robin.

19 Generalized Processor Sharing

20 Generalized Processor Sharing (GPS)
The methodology: assume we can send infinitesimally small packets (single bits) and perform round robin at the bit level.
An idealized policy to split bandwidth; GPS is not implementable.
Used mainly to evaluate and compare real approaches.
Weights give the flows' relative shares.

21 GPS: Example 1 (PS)
Packets of size 10, 20 and 30 arrive at time 0 (one per flow).
[Figure: under bit-by-bit processor sharing they complete at times 30, 50 and 60, respectively.]

22 GPS: Example 2 (PS)
Packets (one per flow): at time 0, size 15; at time 5, size 20; at time 15, size 10.
[Figure: under PS the completion times are 30, 45 and 40, respectively.]

23 GPS: Example 3 (PS)
Packets (one per flow): at time 0, size 15; at time 5, size 20; at time 15, size 10; at time 18, size 15.
[Figure: the PS service of the four packets over the interval 0 to 60.]

24 GPS: Adding weights
Flow j has weight wj.
The output rate of flow j, Rj(t), obeys: [formula, originally an image]
For the un-weighted case (wj = 1): [formula, originally an image]
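The two formulas were images on the original slide; a standard reconstruction, consistent with the fairness slide below (Active(t) = the set of backlogged flows, R = the link rate), is:

```latex
% Rate of a backlogged flow j under weighted GPS:
R_j(t) \;=\; \frac{w_j}{\sum_{k \in \mathrm{Active}(t)} w_k}\; R
% Un-weighted case (w_j = 1 for every flow):
R_j(t) \;=\; \frac{R}{|\mathrm{Active}(t)|}
```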

25 Fairness using GPS
Non-backlogged connections receive exactly what they asked for.
Backlogged connections share the remaining bandwidth in proportion to their assigned weights.
Every backlogged connection i receives a service rate of: [formula, originally an image]
Active(t): the set of backlogged flows at time t.

26 GPS: Measuring unfairness
No packet discipline can be as fair as GPS: while a packet is being served, we are unfair to the others.
The degree of unfairness can be bounded.
Define: workA(i, a, b) = number of bits transmitted for flow i in the interval [a, b] by policy A.
Absolute fairness bound for policy S: max |workGPS(i, a, b) - workS(i, a, b)|
Relative fairness bound for policy S: max |workS(i, a, b) - workS(j, a, b)|, assuming both i and j are backlogged in [a, b].

27 GPS: Measuring unfairness
Assume fixed packet sizes and round robin:
Relative bound: 1
Absolute bound: 1 - 1/n, where n is the number of flows
Challenge: handle variable-size packets.

28 Weighted Fair Queueing

29 GPS to WFQ
We can't implement GPS, so let's see how to emulate it.
We want to be as fair as possible, but also have an efficient implementation.

30 [figure]

31 GPS vs WFQ (equal length)
Two equal-length packets, one in queue 1 and one in queue 2, both present at t=0.
GPS: both packets are served at rate 1/2; both complete service at t=2.
Packet-by-packet system (WFQ): the packet from queue 1 is served first at rate 1 while the packet from queue 2 waits; then queue 2 is served at rate 1.

32 GPS vs WFQ (different length)
Packets of different lengths in queue 1 and queue 2, both present at t=0 (queue 1's packet is shorter).
GPS: both packets are served at rate 1/2; once queue 1's packet completes, queue 2's packet is served at rate 1.
WFQ: the packet from queue 1 is served first at rate 1 while the packet from queue 2 waits; then queue 2's packet is served at rate 1.
Note: nobody is hurt (no packet finishes later than under GPS).

33 GPS vs WFQ
Weights: queue 1 = 1, queue 2 = 3; one packet in each queue at t=0.
GPS: the packet from queue 1 is served at rate 1/4 and the packet from queue 2 at rate 3/4; once queue 2's packet completes, queue 1's packet is served at rate 1.
WFQ: queue 2 is served first at rate 1 while the packet from queue 1 waits; then queue 1 is served at rate 1.
Note: nobody is hurt.

34 Completion times
Emulating a policy: assign each packet p a value time(p) and send packets in increasing order of time(p).
FIFO: on arrival of a packet p from flow j: last = last + size(p); time(p) = last.
This gives a perfect emulation of FIFO.
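A tiny sketch of this timestamp-based emulation for the FIFO case (names are illustrative):

```python
import heapq

class FIFOEmulator:
    """Assign each arriving packet a timestamp time(p) = cumulative size;
    sending packets in increasing order of time(p) reproduces FIFO exactly."""

    def __init__(self):
        self.last = 0
        self.pending = []            # min-heap of (time(p), packet)

    def arrive(self, packet, size):
        self.last += size
        heapq.heappush(self.pending, (self.last, packet))

    def next_to_send(self):
        return heapq.heappop(self.pending)[1] if self.pending else None
```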

35 Round Robin Emulation
Round robin (equal-size packets).
On arrival of packet p from flow j: last(j) = last(j) + 1; time(p) = last(j).
Problem: an idle queue is not handled properly!
Fix: when sending packet q, set round = time(q); on arrival: last(j) = max{round, last(j)} + 1.

36 Round Robin Emulation
Round robin (equal-size packets).
When sending packet q: round = time(q); flow_num = flow(q).
On arrival of packet p for flow j:
last(j) = max{round, last(j)} + 1
IF (j >= flow_num) AND (last(j) = round + 1) THEN last(j) = last(j) - 1
time(p) = last(j)

37 GPS emulation (WFQ)
On arrival of packet p from flow j: last(j) = max{last(j), round} + size(p)
Using weights: last(j) = max{last(j), round} + size(p)/wj
How should we compute the round (the virtual clock)? We want to simulate GPS:
round(t+x) = round(t) + x/B(t), where x is a period of time in which the set of active flows did not change
B(t) = the number of active flows (unweighted case), or the sum of the weights of the active flows (weighted case)
A flow j is active while round(t) < last(j)
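A minimal sketch of this virtual-time bookkeeping (class and variable names are mine); sending packets in increasing order of the returned last(j) values gives WFQ. With four unit-weight flows and unit-size packets arriving as in the example on the following slides (flows 1 and 2 at t=0, flow 3 at t=1, flow 4 at t=2), it reproduces round(1) = 1/2, last(3) = 3/2, round(2) = 5/6 and last(4) = 11/6.

```python
class WFQClock:
    """WFQ virtual-time bookkeeping for the weighted case.
    Flow j is active (backlogged in the reference GPS system) while round < last[j]."""

    def __init__(self, weights):
        self.w = dict(weights)                   # flow -> weight
        self.last = {j: 0.0 for j in self.w}     # virtual finish time per flow
        self.round = 0.0                         # GPS virtual time
        self.now = 0.0                           # real time of the last update

    def _advance(self, t):
        """Advance the round from real time self.now to t, piecewise:
        it grows at rate 1/B(t), and B(t) changes whenever some active flow's
        last(j) is reached (that flow empties in the GPS system)."""
        while self.now < t:
            active = [j for j in self.w if self.last[j] > self.round]
            if not active:
                self.now = t        # idle: the round is simply frozen here
                break
            B = sum(self.w[j] for j in active)
            next_finish = min(self.last[j] for j in active)
            dt = min(t - self.now, (next_finish - self.round) * B)
            self.round += dt / B
            self.now += dt

    def arrive(self, t, flow, size):
        """Packet of `size` from `flow` arrives at real time t; return its time(p)."""
        self._advance(t)
        self.last[flow] = max(self.last[flow], self.round) + size / self.w[flow]
        return self.last[flow]

clk = WFQClock({1: 1, 2: 1, 3: 1, 4: 1})
clk.arrive(0, 1, 1)   # last(1) = 1
clk.arrive(0, 2, 1)   # last(2) = 1
clk.arrive(1, 3, 1)   # round(1) = 1/2, last(3) = 3/2
clk.arrive(2, 4, 1)   # round(2) = 5/6, last(4) = 11/6
```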

38 WFQ: Example (GPS view)
[Figure: the round (virtual time) as a function of real time t, passing through 1/2, 5/6, 1, 7/6, 3/2 and 11/6 as the four flows arrive and empty.]
Note that if, over a time interval, the round progresses by an amount x, then every non-empty buffer is emptied by an amount x during that interval (its "derivative" is always -1).

39 WFQ: Example (GPS view)
round(t+x) = round(t) + x/B(t); last(j) = max{last(j), round} + size(p)/wj
Time 0: packets arrive to flows 1 & 2.
last(1) = 1; last(2) = 1; Active = 2; round(0) = 0; send packet 1.
Packets 1 and 2 terminate exactly at round = 1.

40 WFQ: Example (GPS view)
Time 1: a packet arrives to flow 3.
round(1) = 1/2; Active = 3; last(3) = 3/2.
Packet 1 finished service → send packet 2 (last(2) = 1).
Packet 3 terminates exactly at round = 3/2.

41 WFQ: Example (GPS view)
Time 2: a packet arrives to flow 4.
round(2) = 1/2 + 1/3 = 5/6; Active = 4; last(4) = 5/6 + 1 = 11/6.
Send packet 3 (last(3) = 3/2).

42 WFQ: Example (GPS view)
Time 2+2/3: round = 1; Active = 2.
Time 3: round = 1 + 1/3 · 1/2 = 7/6; send packet 4.
Time 3+2/3: round = 7/6 + 1/3 = 3/2; Active = 1.
Time 4: round = 11/6; Active = 0.

43 WFQ: Delay
Termination(WFQ) ≤ Termination(GPS) + max packet time.
Argument: by time T(GPS), WFQ has completed all the work that ended before T(GPS) under GPS; at T(GPS) the packet is still in the WFQ system (if not already sent), so WFQ must schedule it, finishing at most one maximum packet time later.

44 WFQ: Example (equal size)
Time 0: packets arrive to flows 1 & 2; last(1) = 1; last(2) = 1; Active = 2; round(0) = 0; send packet 1.
Time 1: a packet arrives to flow 3; round(1) = 1/2; Active = 3; last(3) = 3/2; send packet 2.
Time 2: a packet arrives to flow 4; round(2) = 5/6; Active = 4; last(4) = 11/6; send packet 3.
Time 2+2/3: round = 1; Active = 2.
Time 3: round = 7/6; send packet 4.
Time 3+2/3: round = 3/2; Active = 1.
Time 4: round = 11/6; Active = 0.

45 Worst Case Fair Weighted Fair Queuing (WF2Q)

46 Worst Case Fair Weighted Fair Queuing (WF2Q)
WF2Q fixes an unfairness problem in WFQ.
WFQ: among the packets waiting in the system, pick the one that will finish service first under GPS.
WF2Q: among the packets waiting in the system that have already started service under GPS, select the one that will finish service first under GPS.
WF2Q provides service closer to GPS: the difference in packet service time is bounded by the maximum packet size (not earlier, not later).

47 [figure]

48 [figure]

49 [Figure: most packets complete less than 1/2 a time unit earlier, but one packet finishes 5 units earlier, which can hurt fairness when entering another node.]

50 [figure]

51 [figure]

52 Multiple Buffers

53 Buffer locations
Buffers can be placed at:
Input ports
Output ports
Inside the fabric
Shared memory
A combination of all of the above

54 Input Queuing
[Diagram: per-input queues in front of the fabric connecting inputs to outputs.]

55 Input Buffer: properties
Input speed of the queue: no more than the input line rate.
Needs an arbiter (running N times faster than the inputs).
With a FIFO queue per input: Head of Line (HoL) blocking.
Utilization with random (uniform) destinations: limited to roughly 59% (2 - √2 ≈ 0.586 for large N) due to HoL blocking.
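A quick Monte Carlo sanity check of that figure (function name is mine, not from the slides): saturate every input with a single FIFO, give each head-of-line packet a uniform random destination, and let each output serve one contending packet per slot.

```python
import random

def hol_saturation_throughput(n_ports, n_slots=20000, seed=0):
    """Estimate the saturation throughput of an N x N input-queued switch with
    one FIFO per input under uniform traffic; only the HoL packet matters."""
    rng = random.Random(seed)
    hol = [rng.randrange(n_ports) for _ in range(n_ports)]   # HoL destinations
    delivered = 0
    for _ in range(n_slots):
        contenders = {}
        for inp, dst in enumerate(hol):
            contenders.setdefault(dst, []).append(inp)
        for dst, inputs in contenders.items():
            winner = rng.choice(inputs)            # one packet per output per slot
            delivered += 1
            hol[winner] = rng.randrange(n_ports)   # a fresh packet reaches the head
    return delivered / (n_slots * n_ports)

print(hol_saturation_throughput(32))   # ≈ 0.59, approaching 2 - sqrt(2) ≈ 0.586
```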

56 Head of Line Blocking

57 [figure]

58 [figure]

59 Overcoming HoL blocking: look-ahead
The fabric looks ahead into the input buffer for packets that could be transferred if they were not blocked by the head of line.
The improvement depends on the depth of the look-ahead.
In the limit this corresponds to virtual output queues, where each input port keeps a separate buffer for each output port.

60 Input Queuing: Virtual output queues
[Figure: each input maintains a separate queue per output port.]

61 Overcoming HoL blocking: output expansion
Each output port is expanded to L output ports.
The fabric can transfer up to L packets to the same output in a slot, instead of one.
Karol and Morgan, IEEE Transactions on Communications, 1987.

62 Input Queuing with Output Expansion
[Diagram: input queues feeding a fabric with L links per output port.]

63 Output Queuing
The "ideal" architecture.
[Diagram: packets queued at the output ports.]

64 Output Buffer: properties
No HoL problem.
The output queue (not the output line) needs to run faster than the input lines.
Must provide for up to N packets arriving to the same queue in one slot.
Solution: limit the number of input lines that can be destined to the same output.

65 Shared Memory
[Diagram: fabric - shared memory - fabric.]
A common pool of buffers, divided into linked lists indexed by output port number.

66 Shared Memory: properties
Packets are stored in memory as they arrive.
Resource sharing; easy to implement priorities.
The memory must be accessed at a speed equal to the sum of the input (or output) speeds.
How to divide the space between the sessions?

