




1 Reminder
- No lecture next week (Sunday, November 15, 2009)
- Make-up class: Friday, December 11, 2009
Lecture 4: Nov 8, 2009

2 Scheduling: Buffer Management

3 The setting

4 Buffer Scheduling
- Who to send next?
- What happens when the buffer is full?
- Which packet to discard?

5 Requirements of scheduling
- An ideal scheduling discipline:
  - is easy to implement
  - is fair and protective
  - provides performance bounds
- Each scheduling discipline makes a different trade-off among these requirements

6 Ease of implementation
- A scheduling discipline has to make a decision once every few microseconds!
- It should be implementable in a few instructions or in hardware
  - for hardware, the critical constraint is VLSI space
  - complexity of the enqueue + dequeue processes
- Work per packet should scale less than linearly with the number of active connections

7 Fairness
- Intuitively:
  - each connection should get no more than its demand
  - the excess, if any, is shared equally
- Fairness also provides protection:
  - traffic hogs cannot overrun others
  - heavy users are automatically isolated

8 Max-min Fairness: Single Buffer
- Allocate bandwidth equally among all users
- If a user does not need its full share, redistribute the excess
- Maximize the minimum bandwidth provided to any flow not receiving its full request
- To increase the smallest allocation, we would have to take from a larger one
- Consider a fluid example
- Example: compute the max-min fair allocation for four sources with demands 2, 2.6, 4, 5 when the resource has capacity 10:
  s1 = 2; s2 = 2.6; s3 = s4 = 2.7
- The problem is more complicated in a network
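The allocation above can be computed by "water-filling": satisfy the smallest demand if it fits under the current equal share, redistribute the excess, and repeat. A minimal sketch (the function name `max_min_fair` is illustrative, not from the slides):

```python
def max_min_fair(demands, capacity):
    """Water-filling max-min fair allocation of `capacity` among `demands`."""
    alloc = [0.0] * len(demands)
    remaining = sorted(range(len(demands)), key=lambda i: demands[i])
    cap = float(capacity)
    while remaining:
        share = cap / len(remaining)      # equal share of what is left
        i = remaining[0]                  # flow with the smallest demand
        if demands[i] <= share:           # satisfied flow releases its excess
            alloc[i] = demands[i]
            cap -= demands[i]
            remaining.pop(0)
        else:                             # all remaining flows are capped equally
            for j in remaining:
                alloc[j] = share
            remaining = []
    return alloc

print(max_min_fair([2, 2.6, 4, 5], 10))   # approximately [2, 2.6, 2.7, 2.7]
```

Running it on the slide's example reproduces s1 = 2, s2 = 2.6, s3 = s4 = 2.7: the fair share starts at 2.5, source 1 releases 0.5, then source 2 fits under 8/3, and the remaining 5.4 is split between the last two sources.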

9 FCFS / FIFO Queuing
- Simplest algorithm, widely used
- Scheduling is done using the first-in first-out (FIFO) discipline
- All flows are fed into the same queue

10 FIFO Queuing (cont'd)
- First-In First-Out (FIFO) queuing:
  - first arrival, first transmission
  - completely dependent on arrival time
  - no notion of priority or allocated buffers
  - if there is no space in the queue, the arriving packet is discarded
  - flows can interfere with each other: no isolation, so malicious monopolization is possible
  - various hacks exist for priority, random drops, ...

11 Priority Queuing
- A priority index is assigned to each packet upon arrival
- Packets are transmitted in ascending order of priority index:
  - priorities 0 through n-1
  - priority 0 is always serviced first
  - priority i is serviced only if queues 0 through i-1 are empty
- The highest priority class has the lowest delay, highest throughput, and lowest loss
- Lower priority classes may be starved by higher-priority traffic
- Preemptive and non-preemptive versions exist
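The non-preemptive variant can be sketched as a scan over per-class FIFO queues, always starting from index 0 (class names and the `PriorityScheduler` name are illustrative assumptions):

```python
from collections import deque

class PriorityScheduler:
    """Non-preemptive strict-priority scheduler: class 0 is always served first."""
    def __init__(self, num_classes):
        self.queues = [deque() for _ in range(num_classes)]

    def enqueue(self, packet, priority):
        self.queues[priority].append(packet)

    def dequeue(self):
        for q in self.queues:          # scan from the highest priority (index 0)
            if q:
                return q.popleft()
        return None                    # all queues are empty

s = PriorityScheduler(3)
s.enqueue("low", 2); s.enqueue("high", 0); s.enqueue("mid", 1)
print(s.dequeue(), s.dequeue(), s.dequeue())  # high mid low
```

Note how starvation arises directly from the scan order: as long as queue 0 is non-empty, `dequeue` never reaches the lower classes.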

12 Priority Queuing
(diagram: high-priority and low-priority queues feed the transmission link; each queue discards packets when full; the low-priority queue is served only when the high-priority queue is empty)

13 Round Robin: Architecture
(diagram: Flows 1-3, each in its own queue, feed the transmission link through a round-robin scanner)
- Round robin: scan the class queues, serving one packet from each class that has a non-empty queue
- Hardware requirement: jump to the next non-empty queue

14 Round Robin Scheduling
- Round robin: scan the class queues, serving one packet from each class that has a non-empty queue
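The scan above can be sketched in a few lines; a run-to-completion toy (real schedulers interleave with arrivals, which this sketch omits):

```python
from collections import deque

def round_robin(queues):
    """Serve one packet from each non-empty queue per scan, until all are empty."""
    sent = []
    while any(queues):
        for q in queues:
            if q:                      # jump over empty queues
                sent.append(q.popleft())
    return sent

flows = [deque(["a1", "a2"]), deque(["b1"]), deque(["c1", "c2"])]
print(round_robin(flows))  # ['a1', 'b1', 'c1', 'a2', 'c2']
```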

15 Round Robin (cont'd)
- Characteristics:
  - classify incoming traffic into flows (source-destination pairs)
  - round-robin among the flows
- Problems:
  - ignores packet length (addressed by GPS, fair queuing)
  - inflexible allocation of weights (addressed by WRR, WFQ)
- Benefits:
  - protection against heavy users (why?)

16 Weighted Round-Robin
- Weighted round-robin:
  - a different weight w_i per flow
  - flow j can send w_j packets in a period
  - the period has length sum of the w_j
- Disadvantages:
  - variable packet sizes
  - fair only over time scales longer than one period; if a connection has a small weight, or the number of connections is large, this may lead to long periods of unfairness
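The per-period rule can be sketched as follows (the flow names and `wrr_order` are illustrative; this naive version sends each flow's quota as one burst, which is exactly why fairness only holds over whole periods):

```python
def wrr_order(weights):
    """One weighted round-robin period: flow j sends w_j packets;
    the period length is the sum of the weights."""
    order = []
    for flow, w in weights.items():
        order += [flow] * w
    return order

# Hypothetical flows x, y, z with weights 3, 2, 1:
print(wrr_order({"x": 3, "y": 2, "z": 1}))  # ['x', 'x', 'x', 'y', 'y', 'z']
```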

17 DRR (Deficit Round-Robin) algorithm
- Choose a quantum of bits to serve from each connection, in order
- Each connection has a deficit counter (to store credits), with initial value zero
- For each HoL (Head of Line) packet:
  - credit := credit + quantum
  - if the packet size is <= credit: send it and save the excess
  - otherwise, save the entire credit
  - if there is no packet to send, reset the counter (to remain fair)
  - if some packet was sent: counter = min{excess, quantum}
- Easier to implement than other fair policies such as WFQ

18 Deficit Round-Robin
- DRR can handle variable packet sizes
(diagram: quantum size 1000 bytes; queue A's head packet is 1500 bytes, queue B holds 300- and 500-byte packets, queue C's head packet is 1200 bytes followed by a 2000-byte packet)
- 1st round:
  - A's count: 1000 (head packet too large to send)
  - B's count: 200 (served twice)
  - C's count: 1000
- 2nd round:
  - A's count: 500 (served)
  - B's count: 0
  - C's count: 800 (served)
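The DRR rules can be sketched directly; the queue contents below are the assumed reconstruction of the slide-18 figure (A = [1500], B = [300, 500], C = [1200, 2000], quantum 1000):

```python
from collections import deque

def drr(queues, quantum, rounds):
    """Deficit Round-Robin sketch over packet sizes in bytes: add `quantum` to
    a queue's deficit counter each round, send head-of-line packets while the
    credit suffices, and reset the counter when the queue is idle."""
    deficit = [0] * len(queues)
    order = []                                   # (queue index, packet size)
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficit[i] = 0                   # no packet to send: reset
                continue
            deficit[i] += quantum
            while q and q[0] <= deficit[i]:      # send while credit suffices
                pkt = q.popleft()
                deficit[i] -= pkt
                order.append((i, pkt))
    return order

A, B, C = deque([1500]), deque([300, 500]), deque([1200, 2000])
print(drr([A, B, C], quantum=1000, rounds=2))
# [(1, 300), (1, 500), (0, 1500), (2, 1200)]
```

This reproduces the slide's trace: in round 1 only B sends (twice, leaving credit 200), while A and C accumulate 1000 each; in round 2 A sends its 1500-byte packet (credit 500) and C sends 1200 bytes (credit 800).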

19 DRR: performance
- Handles variable-length packets fairly
- Backlogged sources share bandwidth equally
- Preferably, packet size < quantum
- Simple to implement, similar to round robin

20 Generalized Processor Sharing

21 Generalized Processor Sharing (GPS)
- The methodology:
  - assume we can send infinitesimal packets (single bits)
  - perform round robin at the bit level
- An idealized policy to split bandwidth
- GPS is not implementable; it is used mainly to evaluate and compare real approaches
- Weights give flows relative frequencies

22 GPS: Example 1
(timeline: packets of sizes 10, 20, and 30 arrive at time 0; under GPS at unit rate they complete at times 30, 50, and 60 respectively)

23 GPS: Example 2
(timeline: packets arrive at time 0 (size 15), time 5 (size 20), and time 15 (size 10); under GPS they complete at times 30, 45, and 40 respectively)

24 GPS: Example 3
(timeline: packets arrive at time 0 (size 15), time 5 (size 20), time 15 (size 10), and time 18 (size 15); the link stays busy until time 60)

25 GPS: Adding weights
- Flow j has weight w_j
- The output rate of a backlogged flow j, R_j(t), obeys:
  R_j(t) = ( w_j / sum of w_k over k in Active(t) ) * R, where R is the link rate and Active(t) is the set of backlogged flows
- For the un-weighted case (w_j = 1):
  R_j(t) = R / |Active(t)|

26 Fairness using GPS
- Non-backlogged connections receive what they ask for
- Backlogged connections share the remaining bandwidth in proportion to their assigned weights
- Every backlogged connection i receives a service rate of:
  R_i(t) = ( w_i / sum of w_j over j in Active(t) ) * R
- Active(t): the set of backlogged flows at time t
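The rate formula is a one-liner; a sketch with assumed flow indices and link rate (here all contending flows are backlogged, so the full link rate is shared):

```python
def gps_rate(i, weights, active, link_rate):
    """Service rate of backlogged flow i under weighted GPS:
    its weight divided by the total weight of the backlogged flows."""
    total = sum(weights[j] for j in active)
    return weights[i] / total * link_rate

# Flows 0, 1, 2 with weights 1, 3, 1; only flows 0 and 1 backlogged
# on a (hypothetical) 10 Mb/s link:
print(gps_rate(0, [1, 3, 1], {0, 1}, 10))  # 2.5
print(gps_rate(1, [1, 3, 1], {0, 1}, 10))  # 7.5
```

With unit weights the same call reduces to the unweighted case, R / |Active(t)|.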

27 GPS: Measuring unfairness
- No packet discipline can be as fair as GPS:
  - while a packet is being served, we are unfair to the others
- The degree of unfairness can be bounded
- Define: work_A(i,a,b) = number of bits transmitted for flow i in the interval [a,b] by policy A
- Absolute fairness bound for policy S:
  max ( work_GPS(i,a,b) - work_S(i,a,b) )
- Relative fairness bound for policy S:
  max ( work_S(i,a,b) - work_S(j,a,b) ), assuming both i and j are backlogged in [a,b]

28 GPS: Measuring unfairness
- Assume fixed packet sizes and round robin
- Relative bound: 1
- Absolute bound: 1 - 1/n, where n is the number of flows
- Challenge: handle variable-size packets

29 Weighted Fair Queueing

30 GPS to WFQ
- We can't implement GPS
- So, let's see how to emulate it
- We want to be as fair as possible, but also have an efficient implementation


32 GPS vs WFQ (equal length)
- Both queues hold one packet at t=0
- GPS: both packets are served at rate 1/2; both complete service at t=2
- Packet-by-packet system (WFQ): queue 1 is served first at rate 1 while queue 2's packet waits; then queue 2 is served at rate 1

33 GPS vs WFQ (different length)
- Queue 1 holds a packet of size 2 and queue 2 a packet of size 1, both at t=0
- GPS: both packets are served at rate 1/2; queue 2's packet completes at t=2, then queue 1's packet is served alone at rate 1 and completes at t=3
- WFQ: the packets are sent one at a time at rate 1, in order of their GPS finish times; while one is being transmitted the other waits

34 GPS vs WFQ (weights: queue 1 = 1, queue 2 = 3)
- Both queues hold one packet at t=0
- GPS: queue 1's packet is served at rate 1/4 and queue 2's packet at rate 3/4
- WFQ: queue 2 is served first at rate 1, then queue 1 is served at rate 1

35 Completion times
- Emulating a policy:
  - assign each packet p a value time(p)
  - send packets in order of time(p)
- FIFO:
  - on arrival of a packet p from flow j: last = last + size(p); time(p) = last
  - a perfect emulation...

36 Round Robin Emulation
- Round robin (equal-size packets), first attempt:
  - on arrival of packet p from flow j: last(j) = last(j) + 1; time(p) = last(j)
  - an idle queue is not handled properly!
- Fix: when sending packet q, set round = time(q)
  - on arrival: last(j) = max{round, last(j)} + 1; time(p) = last(j)
- What kind of low-level scheduling is this?

37 Round Robin Emulation
- Round robin (equal-size packets):
  - when sending packet q: round = time(q); flow_num = flow(q)
  - on arrival of packet p for flow j:
    last(j) = max{round, last(j)} + 1
    IF (j >= flow_num) AND (last(j) = round) THEN last(j) = last(j) - 1
    time(p) = last(j)
- What kind of low-level scheduling is this?

38 GPS emulation (WFQ)
- On arrival of packet p from flow j:
  - last(j) = max{last(j), round} + size(p)
  - with weights: last(j) = max{last(j), round} + size(p)/w_j
- How should we compute the round? We want to simulate GPS:
  - if x is a period of time in which the number of active flows did not change, then round(t+x) = round(t) + x/B(t)
  - B(t) = number of active flows (unweighted case); B(t) = sum of the weights of the active flows (weighted case)
- A flow j is active while round(t) < last(j)
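The arrival rule above is a single assignment; a sketch (`wfq_stamp` is an illustrative name) that reproduces the virtual finish times of the equal-size example on slide 40, given the round values computed there:

```python
def wfq_stamp(last, rnd, flow, size, weight=1.0):
    """Virtual finish time for a packet arriving to `flow` when the GPS round
    value is `rnd`. `last` maps each flow to its latest finish-time stamp."""
    last[flow] = max(last.get(flow, 0.0), rnd) + size / weight
    return last[flow]

# Slide-40 arrivals (unit-size packets): flows 1 and 2 at round 0,
# flow 3 at round 1/2, flow 4 at round 5/6.
last = {}
print(wfq_stamp(last, 0, 1, 1))      # 1.0
print(wfq_stamp(last, 0, 2, 1))      # 1.0
print(wfq_stamp(last, 1/2, 3, 1))    # 1.5
print(wfq_stamp(last, 5/6, 4, 1))    # 11/6 = 1.8333...
```

Packets are then transmitted in order of these stamps; tracking the round value itself requires the piecewise-linear update round(t+x) = round(t) + x/B(t) from the slide.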

39 WFQ: Example (GPS view)
(figure: four equal-size packets; the round value progresses 0 -> 1/2 -> 5/6 -> 7/6 -> 11/6)
- Note: if the round progresses by an amount x during a time interval, then every non-empty buffer is drained by an amount x during that interval

40 WFQ: Example (equal size)
- Time 0: packets arrive for flows 1 and 2. last(1) = 1; last(2) = 1; Active = 2; round(0) = 0; send flow 1's packet
- Time 1: a packet arrives for flow 3. round(1) = 1/2; Active = 3; last(3) = 3/2; send flow 2's packet
- Time 2: a packet arrives for flow 4. round(2) = 5/6; Active = 4; last(4) = 11/6; send flow 3's packet
- Time 2+2/3: round = 1; Active = 2
- Time 3: round = 7/6; send flow 4's packet
- Time 3+2/3: round = 3/2; Active = 1
- Time 4: round = 11/6; Active = 0

41 WFQ: Example (GPS view)
(figure: four equal-size packets; the round value progresses 0 -> 1/2 -> 5/6 -> 7/6 -> 11/6)
- Note: if the round progresses by an amount x during a time interval, then every non-empty buffer is drained by an amount x during that interval

42 Worst-Case Fair Weighted Fair Queuing (WF2Q)

43 Worst-Case Fair Weighted Fair Queuing (WF2Q)
- WF2Q fixes an unfairness problem in WFQ:
  - WFQ: among the packets waiting in the system, pick the one that will finish service first under GPS
  - WF2Q: among the packets waiting in the system that have already started service under GPS, pick the one that will finish service first under GPS
- WF2Q provides service closer to GPS:
  - the difference in packet service time is bounded by the maximum packet size
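The selection rules differ only in an eligibility filter; a sketch under the assumption that each queued packet carries its GPS virtual start and finish times (the dict layout is illustrative):

```python
def wf2q_pick(packets, round_now):
    """WF2Q selection: among queued packets whose GPS service has already
    started (virtual start <= current round), pick the smallest GPS
    virtual finish time. Returns None if no packet is eligible."""
    eligible = [p for p in packets if p["start"] <= round_now]
    return min(eligible, key=lambda p: p["finish"]) if eligible else None

# At round 1, WFQ would pick the packet finishing at 2.5, but its GPS
# service has not started yet, so WF2Q skips it:
pkts = [{"start": 0.0, "finish": 3.0}, {"start": 2.0, "finish": 2.5}]
print(wf2q_pick(pkts, 1.0))  # {'start': 0.0, 'finish': 3.0}
```

Dropping the filter (taking the minimum finish time over all packets) recovers plain WFQ, which is exactly the behavior WF2Q is correcting.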


48 Multiple Buffers

49 Buffers
- Possible buffer locations:
  - input ports
  - output ports
  - inside the fabric
  - shared memory
  - a combination of all of these

50 Input Queuing
(diagram: each input line has its own queue feeding the fabric, which connects to the outputs)

51 Input Buffer: properties
- The queue's input speed: no more than the input line rate
- Needs an arbiter (running N times faster than the inputs)
- A FIFO queue suffers Head of Line (HoL) blocking
- Utilization: with random destinations, HoL blocking limits utilization to 2 - sqrt(2), about 59%
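The roughly-59% figure can be checked with a small Monte-Carlo sketch (port count, slot count, and seed are arbitrary assumptions; the classical limit 2 - sqrt(2) ≈ 0.586 holds as the number of ports grows, so a finite switch lands slightly above it):

```python
import random

def hol_throughput(n_ports=16, slots=20000, seed=1):
    """Estimate the saturation throughput of a FIFO input-queued switch:
    every input always has a head-of-line (HoL) cell with a uniformly random
    destination; each output serves one contending HoL cell per slot, and
    only the winner's input draws a fresh cell (the losers stay blocked)."""
    rng = random.Random(seed)
    hol = [rng.randrange(n_ports) for _ in range(n_ports)]  # HoL destinations
    delivered = 0
    for _ in range(slots):
        contenders = {}
        for inp, dest in enumerate(hol):
            contenders.setdefault(dest, []).append(inp)
        for dest, inps in contenders.items():
            winner = rng.choice(inps)            # losers remain HoL-blocked
            hol[winner] = rng.randrange(n_ports)
            delivered += 1
    return delivered / (slots * n_ports)

print(hol_throughput())  # close to 2 - sqrt(2) ~ 0.586 for large N
```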

52 Head of Line Blocking


55 Overcoming HoL blocking: look-ahead
- The fabric looks ahead into the input buffer for packets that could be transferred if they were not blocked by the head of line
- The improvement depends on the depth of the look-ahead
- The limiting case corresponds to virtual output queues, where each input port has a buffer for each output port

56 Input Queuing: Virtual Output Queues
(diagram: each input port maintains a separate queue per output port)

57 Overcoming HoL blocking: output expansion
- Each output port is expanded to L output ports
- The fabric can transfer up to L packets to the same output instead of one cell
(Karol, Hluchyj, and Morgan, IEEE Transactions on Communications, 1987, pp. 1347-1356)

58 Input Queuing with Output Expansion
(diagram: the fabric delivers up to L packets per output port)

59 Output Queuing: the "ideal"
(diagram: arriving packets are moved immediately across the fabric and queued at their destination output ports)

60 Output Buffer: properties
- No HoL problem
- The output queue needs to run faster than the input lines
- Must provide for N packets arriving to the same queue in one slot
  - solution: limit the number of input lines that can be destined to one output

61 Shared Memory
- A common pool of buffers, divided into linked lists indexed by output port number
(diagram: memory shared between the input and output sides of the fabric)

62 Shared Memory: properties
- Packets are stored in memory as they arrive
- Resource sharing
- Easy to implement priorities
- Memory must be accessed at a speed equal to the sum of the input or output speeds
- Question: how to divide the space between the sessions?




