1
COMP680E by M. Hamdi 1 Can we make these scheduling algorithms simpler? Using a Simpler Architecture
2
COMP680E by M. Hamdi 2 Buffered Crossbar Switches A buffered crossbar switch is a switch with a buffered fabric (memory inside the crossbar). A pure buffered crossbar architecture has buffering only inside the fabric and none anywhere else. Because of the HoL blocking problem, VOQs are used on the input side.
3
COMP680E by M. Hamdi 3 Buffered Crossbar Architecture (diagram: input cards 1..N with VOQs, an N x N crossbar containing internal buffers and arbiters, data and flow-control paths between the inputs and the crossbar, and output cards 1..N)
4
COMP680E by M. Hamdi 4 Scheduling Process Scheduling is divided into three steps:
– Input scheduling: each input selects, in a certain way, one cell from the HoL of an eligible queue and sends it to the corresponding internal buffer.
– Output scheduling: each output selects, in a certain way, one of the internally buffered cells in the crossbar and delivers it to the output port.
– Delivery notifying: for each delivered cell, inform the corresponding input of the internal buffer status.
5
COMP680E by M. Hamdi 5 Advantages Total independence between input and output arbiters (distributed design; roughly 1/N the complexity of a centralized scheduler). Switch performance is much better because there is much less output contention – a combination of IQ and OQ switching. Disadvantage: the crossbar itself is more complicated.
6
COMP680E by M. Hamdi 6 I/O Contention Resolution (example: inputs 1–4 holding cells destined to outputs 1–4)
7
COMP680E by M. Hamdi 7 I/O Contention Resolution (example continued)
8
COMP680E by M. Hamdi 8 The Round-Robin Algorithm (InRr-OutRr) Input scheduling: InRr (round-robin) – each input selects the next eligible VOQ, based on its highest-priority pointer, and sends its HoL cell to the internal buffer. Output scheduling: OutRr (round-robin) – each output selects the next non-empty internal buffer, based on its highest-priority pointer, and sends its cell to the output link.
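To make the two round-robin steps concrete, here is a minimal, illustrative Python sketch of one InRr/OutRr cycle for an N x N buffered crossbar with one-cell crosspoint buffers. The data structures and names (voq, xbuf, in_ptr, out_ptr) are my own, not from the slides.

```python
# One InRr/OutRr scheduling cycle for a 4x4 buffered crossbar (sketch).

N = 4
voq = [[[] for _ in range(N)] for _ in range(N)]   # voq[i][j]: cells at input i for output j
xbuf = [[None] * N for _ in range(N)]              # xbuf[i][j]: one-cell crosspoint buffer
in_ptr = [0] * N                                   # per-input round-robin pointer
out_ptr = [0] * N                                  # per-output round-robin pointer

def input_schedule():
    """Each input sends the HoL cell of the next eligible VOQ to its crosspoint buffer."""
    for i in range(N):
        for k in range(N):
            j = (in_ptr[i] + k) % N
            # eligible: VOQ non-empty and the corresponding crosspoint buffer is free
            if voq[i][j] and xbuf[i][j] is None:
                xbuf[i][j] = voq[i][j].pop(0)
                in_ptr[i] = (j + 1) % N            # advance pointer past the served VOQ
                break

def output_schedule():
    """Each output drains the next non-empty crosspoint buffer in its column."""
    delivered = []
    for j in range(N):
        for k in range(N):
            i = (out_ptr[j] + k) % N
            if xbuf[i][j] is not None:
                delivered.append(xbuf[i][j])
                xbuf[i][j] = None                  # flow control: input i learns the buffer is free
                out_ptr[j] = (i + 1) % N
                break
    return delivered

# Tiny usage example:
voq[0][2].append("cell(0->2)")
input_schedule()
print(output_schedule())   # -> ['cell(0->2)']
```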
9
COMP680E by M. Hamdi 9 Input Scheduling (InRr) – example continued on the 4 x 4 switch
10
COMP680E by M. Hamdi 10 Output Scheduling (OutRr) – example continued
11
COMP680E by M. Hamdi 11 Output pointer update and delivery notification – example continued
12
COMP680E by M. Hamdi 12 Performance Study Delay/throughput under Bernoulli uniform and bursty uniform arrivals; stability performance.
13
COMP680E by M. Hamdi 13 Bernoulli Uniform Arrivals
14
COMP680E by M. Hamdi 14 Bursty Uniform Arrivals
15
COMP680E by M. Hamdi 15 Scheduling Process Because the arbitration is simple:
– We can afford algorithms based on weights (e.g., LQF, OCF).
– We can afford algorithms that provide QoS.
16
COMP680E by M. Hamdi 16 Buffered Crossbar Solution: Scheduler The MVF-RR algorithm is composed of two parts:
– Input scheduler – MVF (Most Vacancies First): each input selects the column of internal buffers (those destined to the same output) with the most vacancies (non-full buffers).
– Output scheduler – round-robin: each output chooses the internal buffer that appears next on its static round-robin schedule, starting from the highest-priority one, and updates the pointer to one location beyond the chosen buffer.
17
COMP680E by M. Hamdi 17 Buffered Crossbar Solution: Scheduler The ECF-RR algorithm is composed of two parts (a sketch of the ECF rule follows):
– Input scheduler – ECF (Empty Column First): each input first looks for an empty column of internal buffers (those destined to the same output); if there is no empty column, it selects on a round-robin basis.
– Output scheduler – round-robin: each output chooses the internal buffer that appears next on its static round-robin schedule, starting from the highest-priority one, and updates the pointer to one location beyond the chosen buffer.
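One possible reading of the ECF input-selection rule, sketched in Python and reusing the voq/xbuf/in_ptr structures of the earlier round-robin sketch. Interpreting "first empty column" as the round-robin-first output among those whose whole column of internal buffers is empty is my assumption, not something the slides state.

```python
# ECF (Empty Column First) input selection (illustrative sketch).

def column_empty(j):
    """True if every crosspoint buffer destined to output j is empty."""
    return all(xbuf[i][j] is None for i in range(N))

def ecf_input_schedule():
    for i in range(N):
        eligible = [j for j in range(N) if voq[i][j] and xbuf[i][j] is None]
        if not eligible:
            continue
        # Prefer an output whose whole column of internal buffers is empty ...
        empty_cols = [j for j in eligible if column_empty(j)]
        candidates = empty_cols if empty_cols else eligible
        # ... otherwise fall back to plain round-robin among the eligible VOQs.
        j = min(candidates, key=lambda out: (out - in_ptr[i]) % N)
        xbuf[i][j] = voq[i][j].pop(0)
        in_ptr[i] = (j + 1) % N
```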
18
COMP680E by M. Hamdi 18 Buffered Crossbar Solution: Scheduler The RR-REMOVE algorithm is composed of two parts:
– Input scheduler – round-robin (with remove-request signalling): each input chooses the non-empty VOQ that appears next on its static round-robin schedule, starting from the highest-priority one, updates the pointer to one location beyond the chosen VOQ, and then sends at most one remove-request signal to the outputs.
– Output scheduler – REMOVE: if an output receives any remove-request signals, it chooses one of them based on its highest-priority pointer and removes the cell; if no signal is received, it performs simple round-robin arbitration.
19
COMP680E by M. Hamdi 19 Buffered Crossbar Solution: Scheduler The ECF-REMOVE algorithm is composed of two parts:
– Input scheduler – ECF (with remove-request signalling): each input first looks for an empty column of internal buffers (those destined to the same output); if there is no empty column, it selects on a round-robin basis. It then sends at most one remove-request signal to the outputs.
– Output scheduler – REMOVE: if an output receives any remove-request signals, it chooses one of them based on its highest-priority pointer and removes the cell; if no signal is received, it performs simple round-robin arbitration.
20
COMP680E by M. Hamdi 20 Hardware Implementation of ECF-RR: An Input Scheduling Block (diagram: selectors 0..N-1 feeding a round-robin arbiter; inputs are the grants and the highest-priority pointer, outputs are the arbitration results and an any-grant signal)
21
COMP680E by M. Hamdi 21 Performance Evaluation: Simulation Study Uniform Traffic
22
COMP680E by M. Hamdi 22 Performance Evaluation: Simulation Study ECF-REMOVE over RR-RR (uniform traffic):
Load:                              0.5    0.6   0.7    0.8    0.9    0.95   0.99
Improvement percentage:            1%     –     3%     6%     13%    17%    12%
Normalized improvement percentage: 1%     –     3%     6%     12%    15%    11%
Improvement factor:                1.01   –     1.03   1.06   1.13   1.17   1.12
23
COMP680E by M. Hamdi 23 Performance Evaluation: Simulation Study Bursty Traffic
24
COMP680E by M. Hamdi 24 Performance Evaluation: Simulation Study ECF-REMOVE over RR-RR (bursty traffic):
Load:                              0.5    0.6    0.7    0.8    0.9    0.95   0.99
Improvement percentage:            10%    13%    16%    20%    22%    18%    11%
Normalized improvement percentage: 9%     12%    14%    16%    18%    16%    10%
Improvement factor:                1.10   1.13   1.16   1.20   1.22   1.18   1.11
25
COMP680E by M. Hamdi 25 Performance Evaluation: Simulation Study Hotspot Traffic
26
COMP680E by M. Hamdi 26 Performance Evaluation: Simulation Study ECF-REMOVE over RR-RR (hotspot traffic):
Load:                              0.31    0.36    0.41    0.45    0.49    0.51
Improvement percentage:            0.2%    0.3%    0.5%    0.8%    1%      0.7%
Normalized improvement percentage: 0.2%    0.3%    0.5%    0.8%    1%      0.7%
Improvement factor:                1.002   1.003   1.005   1.008   1.01    1.007
27
COMP680E by M. Hamdi 27 Quality of Service Mechanisms for Switches/Routers and the Internet
28
COMP680E by M. Hamdi 28 Recap High-Performance Switch Design
– We need scalable switch fabrics – crossbars, bit-sliced crossbars, Clos networks.
– We need to solve the memory bandwidth problem; our conclusion is to go for input-queued switches, using VOQs instead of FIFO queues.
– For these switches to function at high speed, we need efficient and practically implementable scheduling/arbitration algorithms.
29
COMP680E by M. Hamdi 29 Algorithms for VOQ Switching We analyzed several algorithms for matching inputs and outputs:
– Maximum size matching: based on bipartite maximum matching, which can be solved with max-flow techniques in O(N^2.5). These are not practical for high-speed implementation. They are stable (100% throughput) for uniform traffic but not for non-uniform traffic.
– Maximal size matching: approximations of maximum size matching (PIM, iSLIP, SRR, etc.). These are practical – they can be executed in parallel in O(log N) or even O(1). They are stable for uniform traffic and unstable for non-uniform traffic.
30
COMP680E by M. Hamdi 30 Algorithms for VOQ Switching
– Maximum weight matching: maximum matchings based on weights such as queue length (LQF, LPF) or cell age (OCF), with a complexity of O(N^3 log N). These are not practical for high-speed implementation and are much harder to implement than maximum size matching. They are stable (100% throughput) under any admissible traffic.
– Maximal weight matching: approximations of maximum weight matching that use an RGA mechanism like iSLIP (iLQF, iLPF, iOCF, etc.). These are "somewhat" practical – they can be executed in parallel in O(log N) or even O(1) like iSLIP, but the arbiters are much more complex to build. They have recently been shown to be stable under any admissible traffic.
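As an illustration of the "weight" idea (not of iLQF itself), the following sketch builds a maximal weight matching greedily: repeatedly take the heaviest remaining VOQ whose input and output are both still free. The function name and the example matrix are invented for the illustration.

```python
# Greedy (maximal, not maximum) weight matching in the LQF spirit (sketch).

def greedy_lqf_match(queue_len):
    """queue_len[i][j] = occupancy of VOQ(i, j); returns {input: output}."""
    n = len(queue_len)
    edges = sorted(
        ((queue_len[i][j], i, j) for i in range(n) for j in range(n) if queue_len[i][j] > 0),
        reverse=True,                       # heaviest VOQs first
    )
    match, used_out = {}, set()
    for w, i, j in edges:
        if i not in match and j not in used_out:
            match[i] = j                    # both endpoints free: add the edge
            used_out.add(j)
    return match

# Example: input 0 has a long queue for output 1, etc.
print(greedy_lqf_match([[0, 7, 1], [3, 0, 2], [4, 4, 0]]))   # -> {0: 1, 2: 0, 1: 2}
```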
31
COMP680E by M. Hamdi 31 Algorithms for VOQ Switching
– Randomized algorithms: they approximate maximum weight matching in a smart way while avoiding an iterative process. They are stable under any admissible traffic, their time complexity is small (depending on the algorithm), and their hardware complexity is as yet untested.
– No schedulers – deal with the resulting mis-sequencing of packets.
– Distributed schedulers – buffered crossbars.
– Two important points to remember: the time complexity of an algorithm is not a "true" indication of its hardware cost, and 100% throughput does not mean low delay ("weak" vs. "strong" stability).
32
COMP680E by M. Hamdi 32 VOQ Algorithms and Delay But delay is key:
– Users don't care about throughput alone; they care (more) about delays.
– Delay = QoS (= $ for the network operator).
Why is delay difficult to approach theoretically? Mainly because it is a statistical quantity: it depends on the traffic statistics at the inputs and on the particular scheduling algorithm used. The last point makes it difficult to analyze delays in IQ switches. In VOQ switches, for example, it is almost impossible to give any guarantee on delay; all you can hope for is high throughput and a bounded queue length, hence bounded average delay (and even the bound on queue length is beyond the control of the algorithm – we cannot insist that a queue never exceed, say, 10 cells).
33
COMP680E by M. Hamdi 33 VOQ Algorithms and Delay This does not mean that such an algorithm cannot exist – it means that none exists at the moment. For this exact reason, almost all quality-of-service schemes (whether for delay or bandwidth guarantees) assume an output-queued switch. (Figure: a 4-port output-queued switch with ingress and egress sides of links 1–4.)
34
COMP680E by M. Hamdi 34 VOQ Algorithms and Delay WHY? Because an OQ switch has no "fabric" scheduling/arbitration algorithm – delay simply depends on the traffic statistics. Researchers have shown that many QoS guarantees can be provided (e.g., with WFQ) using a single server and the traffic statistics. But OQ switches are extremely expensive to build:
– The memory bandwidth requirement is very high.
– As a result, these QoS scheduling algorithms have little practical significance for scalable, high-performance switches/routers.
35
COMP680E by M. Hamdi 35 Output Queueing: The "Ideal" (animation: cells destined to outputs 1 and 2 arriving and departing through their output queues)
36
COMP680E by M. Hamdi 36 How to Get Good Delay Cheaply? Enter speedup…
– The fabric speedup of an IQ switch equals 1 (memory bandwidth = 2).
– The fabric speedup of an OQ switch equals N (memory bandwidth = N + 1).
– Suppose we consider switches with fabric speedup S, 1 < S << N. Such a switch requires buffers both at the input and at the output; call these combined input- and output-queued (CIOQ) switches.
Such switches could help if, with very small values of S, we get the performance – both delay and throughput – of an OQ switch.
37
COMP680E by M. Hamdi 37 A CIOQ Switch It consists of:
– an (internally non-blocking, e.g. crossbar) fabric with speedup S > 1,
– input and output buffers,
– a scheduler to determine matchings.
38
COMP680E by M. Hamdi 38 A CIOQ Switch For concreteness, suppose S = 2. The operation of the switch consists of:
– transferring no more than 2 cells from (to) each input (output) per time slot;
– logically, each time slot consists of two phases;
– arrivals to (departures from) the switch occur at most once per time slot;
– the transfer of cells from inputs to outputs can occur in each phase.
39
COMP680E by M. Hamdi 39 Using Speedup (figure: cells labelled with their destination outputs 1 and 2 being transferred across the fabric using the speedup)
40
COMP680E by M. Hamdi 40 Performance of CIOQ Switches Now that we have a higher speedup, do we get a handle on delay?
– Can we say something about delay (e.g., every packet of a given flow should be delayed less than 15 msec)?
– There is one way of doing this: competitive analysis – the idea is to compete with the performance of an OQ switch.
41
COMP680E by M. Hamdi 41 Intuition With speedup = 1 the fabric throughput is 0.58 and the average input queue is too large; with speedup = 2 the fabric throughput is 1.16 and the average input queue drops to 6.25.
42
COMP680E by M. Hamdi 42 Intuition (continued) With speedup = 3 the fabric throughput is 1.74 and the average input queue is 1.35; with speedup = 4 the fabric throughput is 2.32 and the average input queue is 0.75.
43
COMP680E by M. Hamdi 43 Performance of CIOQ Switches The setup:
– Under arbitrary, but identical, inputs (packet by packet), is it possible to replace an OQ switch by a CIOQ switch and schedule the CIOQ switch so that the outputs are identical packet by packet – i.e., to exactly mimic an OQ switch?
– If yes, what is the scheduling algorithm?
44
COMP680E by M. Hamdi 44 What is exact mimicking? Apply the same inputs to an OQ and a CIOQ switch – packet by packet – and obtain the same outputs – packet by packet.
45
COMP680E by M. Hamdi 45 Why is a speedup of N not necessary? It is useless to bring all packets to the output if they then have to wait there; packets only need to reach the output before they are due to leave.
46
COMP680E by M. Hamdi 46 Consequences Suppose, for now, that a CIOQ is competitive wrt an OQ switch. Then –We get perfect emulation of an OQ switch –This means we inherit all its throughput and delay properties –Most importantly – all QoS scheduling algorithms originally designed for OQ switches can be directly used on a CIOQ switch –But, at the cost of introducing a scheduling algorithm – which is the key
47
COMP680E by M. Hamdi 47 Emulating OQ Switches with CIOQ Consider an N x N switch with (integer) speedup S > 1 –We’re going to see if this switch can emulate an OQ switch We’ll apply the same inputs, cell-by-cell, to both switches –We’ll assume that the OQ switch sends out packets in FIFO order –And we’ll see if the CIOQ switch can match cells on the output side
48
COMP680E by M. Hamdi 48 Key Concept: Urgency The urgency of a cell at any time = its departure time from the OQ switch being emulated minus the current time. It indicates when this cell would depart the OQ switch. The value is decremented after each time slot; when it reaches 0, the cell must depart (it is at the HoL of its output queue in the OQ switch).
49
COMP680E by M. Hamdi 49 Key Concept: Urgency Algorithm: Most Urgent Cell First (MUCF). In each "phase":
1. Outputs try to obtain their most urgent cells from the inputs.
2. Each input grants to the output whose cell is most urgent; in case of a tie, output i takes priority over output i + k.
3. Losing outputs try to obtain their next most urgent cell from another (unmatched) input.
4. When no more matches are possible, the cells are transferred.
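A sketch of one MUCF matching phase, assuming urgencies are available as a matrix (urgency[i][j] for the HoL cell of VOQ(i, j), None if the VOQ is empty); smaller values mean more urgent, and ties are broken in favour of the lower output index as in step 2. The structure is my own reading of the four steps above, not code from the slides.

```python
# One MUCF matching phase (sketch).

def mucf_match(urgency):
    n = len(urgency)
    free_in, free_out = set(range(n)), set(range(n))
    match = {}                                     # output -> input
    progress = True
    while progress:
        progress = False
        # 1. Each unmatched output requests its most urgent cell among the free inputs.
        requests = {}                              # input -> list of (urgency, output)
        for j in sorted(free_out):
            cands = [(urgency[i][j], i) for i in free_in if urgency[i][j] is not None]
            if cands:
                u, i = min(cands)                  # smallest value = most urgent
                requests.setdefault(i, []).append((u, j))
        # 2. Each input grants to the requesting output with the most urgent cell
        #    (ties broken in favour of the lower output index).
        for i, reqs in requests.items():
            u, j = min(reqs)
            match[j] = i
            free_in.discard(i)
            free_out.discard(j)
            progress = True
        # 3. Losing outputs stay unmatched and retry on the next iteration.
    return match
```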
50
COMP680E by M. Hamdi 50 Key Concept: Urgency – Example At the beginning of phase 1, both outputs 1 and 2 request input 1 in order to obtain their most urgent cells. Since there is a tie, input 1 grants to output 1 (the lowest port number). Output 2 then proceeds to get its next most urgent cell (from input 2, with urgency 3).
51
COMP680E by M. Hamdi 51 Key concept: Urgency Observation: A cell is not forwarded from input to output for one of two (and only two) reasons… –Input contention: its input sends a more urgent cell (output 2 cannot receive its most urgent cell in phase 1 because input 1 wants to send to output 1 a more urgent cell) –Output contention: its output receives a more urgent cell (Input 2 cannot send its most urgent cell because output 3 wants to receive from input 3)
52
COMP680E by M. Hamdi 52 Implementing MUCF The way in which MUCF matches inputs to outputs is similar to the “stable marriage problem” (SMP) The SMP finds “stable” matchings in bipartite graphs –There are N women and N men –Each woman (man) ranks each man (woman) in order of preference for marriage
53
COMP680E by M. Hamdi 53 The Gale-Shapley Algorithm (GSA) What is a stable matching?
– A matching is a pairing (i, p(i)) of each i with a partner p(i).
– An unstable matching is one containing matched pairs (i, p(i)) and (j, p(j)) such that i prefers p(j) to p(i), and p(j) prefers i to j.
– The GSA is guaranteed to produce a stable matching, and its complexity is O(N^2).
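For reference, a compact textbook-style sketch of the proposer-optimal Gale-Shapley algorithm; the O(N^2) bound comes from each man proposing to each woman at most once.

```python
# Gale-Shapley stable matching (proposer-optimal), O(N^2) sketch.
# men_pref[m] and women_pref[w] are preference lists, most preferred first.

def gale_shapley(men_pref, women_pref):
    n = len(men_pref)
    rank = [{m: r for r, m in enumerate(women_pref[w])} for w in range(n)]
    next_choice = [0] * n                   # next woman each man will propose to
    fiance = [None] * n                     # fiance[w] = man currently engaged to w
    free_men = list(range(n))
    while free_men:
        m = free_men.pop()
        w = men_pref[m][next_choice[m]]
        next_choice[m] += 1
        if fiance[w] is None:
            fiance[w] = m                   # w was free: engage
        elif rank[w][m] < rank[w][fiance[w]]:
            free_men.append(fiance[w])      # w trades up; her old partner is free again
            fiance[w] = m
        else:
            free_men.append(m)              # w rejects m; he proposes again later
    return {fiance[w]: w for w in range(n)} # man -> woman
```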
54
COMP680E by M. Hamdi 54 An example Consider the example we have already seen Executing GSA… –With men proposing we get the matching (1, 1), (2, 4), (3, 2), (4, 3) – this takes 7 proposals (iterations) –With women proposing we get the matching (1, 1), (2, 3), (3, 2), (4, 4) – this takes 7 proposals (iterations) –Both matchings are stable –The first is man-optimal – men get the best partners of any stable matching –Likewise the second is woman-optimal
55
COMP680E by M. Hamdi 55 Implementing MUCF by the GSA MUCF can be implemented using the GSA with preference lists built as follows:
– Output j assigns to each input i a preference value equal to the urgency of the cell at the HoL of VOQ(i, j); if VOQ(i, j) is empty, the preference value is set to infinity. The output's preference list is the ordered set of its preference values for the inputs.
– Each input assigns a preference value to each output based on the urgency of its cells, and builds its preference list accordingly.
56
COMP680E by M. Hamdi 56 Theorem A CIOQ switch with a speedup of 4 operating under the MUCF algorithm exactly matches, cell by cell, a FIFO output-queued switch. Similar results hold even for non-FIFO OQ scheduling schemes (e.g., WFQ, strict priority), and can be achieved with S = 2.
57
COMP680E by M. Hamdi 57 Implementation – A Closer Look Main sources of difficulty:
– estimating urgency;
– the matching process – too many iterations?
Estimating urgency depends on what is being emulated:
– FIFO, strict priorities – no problem;
– WFQ, etc. – problems (and communicating this information among inputs and outputs).
58
COMP680E by M. Hamdi 58 Other Work Relax the stringent requirement of exact emulation:
– Least Occupied Output First Algorithm (LOOFA);
– can provide QoS and keeps outputs busy whenever packets are present.
A lot of work has been done in this direction. Conclusion: we must have a speedup if we want to approach the performance of OQ switches or provide QoS.
59
COMP680E by M. Hamdi 59 QoS Scheduling Algorithms
60
COMP680E by M. Hamdi 60 QoS Differentiation: Two Options
– Stateful (per flow): IETF Integrated Services (IntServ) / RSVP.
– Stateless (per class): IETF Differentiated Services (DiffServ).
61
COMP680E by M. Hamdi 61 The Building Blocks (may contain more functions): classifier, policer, shaper, scheduler, dropper.
62
COMP680E by M. Hamdi 62 QoS Mechanisms Admission Control – determines whether a flow can/should be allowed to enter the network. Packet Classification – classifies the data, based on admission control, for the desired treatment through the network. Traffic Policing – measures the traffic to determine whether it is out of profile; out-of-profile packets can be dropped or marked differently (so they may be dropped later if needed). Traffic Shaping – provides some buffering, and therefore delays some of the data, to make the traffic fit into its profile (it may affect only bursts, or all traffic, to make it resemble a constant bit rate). Queue Management – determines the behavior of data within a queue; parameters include queue depth and drop policy. Queue Scheduling – determines how the different queues empty onto the outbound link.
63
COMP680E by M. Hamdi 63 QoS Router (diagram: per-input classifier and policer, per-flow queues with a shaper and queue management, and a scheduler at each output)
64
COMP680E by M. Hamdi 64 Queue Scheduling Algorithms
65
COMP680E by M. Hamdi 65 Scheduling at the Output Link of an OQ Switch Sharing always results in contention. A scheduling discipline resolves contention: it decides when and which packet to send on the output link.
– Usually implemented at the output interface.
– Scheduling is key to fairly sharing resources and providing performance guarantees.
66
COMP680E by M. Hamdi 66 Output Scheduling (a scheduler at the output link) serves two purposes: allocating output bandwidth and controlling packet delay.
67
COMP680E by M. Hamdi 67 Types of Queue Scheduling Strict Priority – empties the highest-priority non-empty queue first, before servicing lower-priority queues; it can cause starvation of lower-priority queues. Round Robin – services each queue by emptying a certain amount of data and then moving to the next queue in order. Weighted Fair Queuing (WFQ) – empties an amount of data from a queue based on the queue's relative weight (driven by reserved bandwidth) before servicing the next queue. Earliest Deadline First – determines the latest time each packet must leave to meet its delay requirement and services the queues in that order.
68
COMP680E by M. Hamdi 68 Scheduling: Deterministic Priority A packet is served from a given priority level only if no packets exist at higher levels (multilevel priority with exhaustive service). The highest level gets the lowest delay. Watch out for starvation! Priority levels are usually mapped to delay classes, e.g. low-bandwidth urgent messages, realtime, and non-realtime traffic (a small sketch follows).
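A minimal sketch of such a multilevel strict-priority scheduler (class and method names are illustrative); note how a steady stream of level-0 packets would starve the lower levels.

```python
# Strict-priority scheduler sketch: always serve the highest-priority
# non-empty queue; lower priorities are served only when all higher ones
# are empty, which is exactly how starvation can arise.

from collections import deque

class StrictPriorityScheduler:
    def __init__(self, levels):
        self.queues = [deque() for _ in range(levels)]   # index 0 = highest priority

    def enqueue(self, level, packet):
        self.queues[level].append(packet)

    def dequeue(self):
        for q in self.queues:
            if q:
                return q.popleft()
        return None                                      # all queues empty

sched = StrictPriorityScheduler(3)
sched.enqueue(2, "non-realtime")
sched.enqueue(1, "realtime")
sched.enqueue(0, "urgent")
print(sched.dequeue(), sched.dequeue(), sched.dequeue())  # -> urgent realtime non-realtime
```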
69
COMP680E by M. Hamdi 69 Scheduling: Work-Conserving vs. Non-Work-Conserving A work-conserving discipline is never idle when packets await service. Why bother with non-work-conserving disciplines? They are sometimes useful, for example to minimize delay jitter.
70
COMP680E by M. Hamdi 70 Scheduling: Requirements An ideal scheduling discipline
– is easy to implement (preferably in hardware);
– is fair (each connection gets no more than what it wants; the excess, if any, is equally shared);
– provides performance bounds (deterministic or statistical) on common parameters such as bandwidth, delay, delay jitter, and loss;
– allows easy admission control decisions (the choice of scheduling discipline affects how easily the admission control algorithm can decide whether a new flow can be allowed).
71
COMP680E by M. Hamdi 71 Scheduling: No Classification (FIFO) First come, first served. This is the simplest possible discipline, but we cannot provide any guarantees. With FIFO queues, if the depth of the queue is not bounded, there is very little that can be done. We can perform preferential dropping, and we can use other service disciplines on a single queue (e.g., EDF).
72
COMP680E by M. Hamdi 72 Scheduling: Class-Based Queuing At each output port, packets of the same class are queued in distinct queues (e.g., classes 1–4). The service discipline within each queue can vary (FIFO, EDF, etc.) but is usually FIFO; the service discipline between classes can vary as well (strict priority, some kind of sharing, etc.).
73
COMP680E by M. Hamdi 73 Per-Flow Packet Scheduling Each flow is allocated a separate "virtual queue":
– the lowest level of aggregation;
– the service discipline between the flows can vary (FIFO, SP, etc.).
(Diagram: a classifier feeding per-flow queues 1..n, buffer management, and a scheduler.)
74
COMP680E by M. Hamdi 74 The Problems Caused by FIFO Queues in Routers
1. In order to maximize its chances of success, a source has an incentive to maximize the rate at which it transmits.
2. (Related to #1) When many flows pass through it, a FIFO queue is "unfair" – it favors the most greedy flow (fairness).
3. It is hard to control the delay of packets through a network of FIFO queues (delay guarantees).
75
COMP680E by M. Hamdi 75 Round Robin (RR) RR avoids starvation. (Example: sessions A, B, C with the same weight and the same packet length are served one packet each per round.)
76
COMP680E by M. Hamdi 76 RR with Variable Packet Length (Example: with variable packet lengths, plain round-robin gives sessions with longer packets more bandwidth – but the weights are equal!)
77
COMP680E by M. Hamdi 77 Solution… (Example continued: rounds #1–#4 for sessions A, B, C.)
78
COMP680E by M. Hamdi 78 Weighted Round Robin (WRR) (Example: weights W_A = 3, W_B = 1, W_C = 4 give a round length of 8 packets.)
79
COMP680E by M. Hamdi 79 WRR – Non-Integer Weights (Example: weights W_A = 1.4, W_B = 0.2, W_C = 0.8 are normalized to the integers W_A = 7, W_B = 1, W_C = 4, giving a round length of 12 packets.)
80
COMP680E by M. Hamdi 80 Weighted Round Robin Serve a packet from each non-empty queue in turn:
– provides protection against starvation;
– is easy to implement in hardware;
– but is unfair if packets have different lengths or the weights are not equal.
What is the solution? For different weights and fixed packet size: serve more than one packet per visit, after normalizing the weights to integers.
81
COMP680E by M. Hamdi 81 Problems with Weighted Round Robin For different weights and variable-size packets, normalize the weights by the mean packet size; e.g. weights {0.5, 0.75, 1.0} and mean packet sizes {50, 500, 1500} give {0.5/50, 0.75/500, 1.0/1500} = {0.01, 0.0015, 0.000666}, which normalize again to the integers {60, 9, 4} (see the sketch below). With variable-size packets we need to know the mean packet size in advance, and fairness is only provided at time scales larger than the schedule.
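The normalization above can be reproduced exactly with rational arithmetic; this sketch (function name mine) scales the per-packet weights weight/mean_size to the smallest equivalent integers.

```python
# WRR weight normalization by mean packet size (sketch).

from fractions import Fraction
from math import gcd, lcm      # multi-argument gcd/lcm need Python 3.9+

def wrr_packets_per_round(weights, mean_sizes):
    """Turn per-flow rate weights into integer packet counts per WRR round."""
    per_packet = [Fraction(w).limit_denominator(10_000) / size
                  for w, size in zip(weights, mean_sizes)]
    scale = lcm(*(f.denominator for f in per_packet))   # clear the denominators
    counts = [int(f * scale) for f in per_packet]
    g = gcd(*counts)                                    # reduce to the smallest integers
    return [c // g for c in counts]

print(wrr_packets_per_round([0.5, 0.75, 1.0], [50, 500, 1500]))   # -> [60, 9, 4]
```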
82
COMP680E by M. Hamdi 82 Fairness (Example: flow A, e.g. an HTTP flow identified by (IP SA, IP DA, TCP SP, TCP DP), enters router R1 over a 10 Mb/s link and flow B over a 100 Mb/s link; they share a 1.1 Mb/s output link to C. What is the "fair" allocation: (0.55 Mb/s, 0.55 Mb/s) or (0.1 Mb/s, 1 Mb/s)?)
83
COMP680E by M. Hamdi 83 Fairness (Example continued: a third flow C with rate 0.2 Mb/s also crosses R1 toward D. What is the "fair" allocation now?)
84
COMP680E by M. Hamdi 84 Max-Min Fairness The minimum of the flows should be as large as possible. Max-min fairness for a single resource: bottlenecked (unsatisfied) connections share the residual bandwidth equally, and their share is >= the share held by connections not constrained by this bottleneck. (Example: C = 10, F1 requests 25, F2 requests 6; the max-min allocation is F1' = 5, F2' = 5.)
85
COMP680E by M. Hamdi 85 Max-Min Fairness A common way to allocate flows: N flows share a link of rate C; flow f wishes to send at rate W(f) and is allocated rate R(f).
1. Pick the flow, f, with the smallest requested rate.
2. If W(f) < C/N, then set R(f) = W(f).
3. If W(f) > C/N, then set R(f) = C/N.
4. Set N = N - 1 and C = C - R(f).
5. If N > 0, go to 1.
86
COMP680E by M. Hamdi 86 Max-Min Fairness: An Example Four flows share a link of rate C = 1, with requests W(f1) = 0.1, W(f2) = 0.5, W(f3) = 10, W(f4) = 5. Round 1: set R(f1) = 0.1. Round 2: set R(f2) = 0.9/3 = 0.3. Round 3: set R(f4) = 0.6/2 = 0.3. Round 4: set R(f3) = 0.3/1 = 0.3.
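A direct transcription of the five-step procedure on the previous slide; run on this example it returns 0.1 for f1 and (up to floating-point rounding) 0.3 for the other three flows.

```python
# Max-min fair allocation on a single link (sketch of the slide's procedure).

def max_min_fair(demands, capacity):
    """demands: {flow: requested rate W(f)}; returns {flow: allocated rate R(f)}."""
    remaining = dict(demands)
    alloc = {}
    C, N = capacity, len(remaining)
    while N > 0:
        f = min(remaining, key=remaining.get)     # smallest requested rate first
        share = C / N
        alloc[f] = min(remaining.pop(f), share)   # W(f) if it fits, else the fair share C/N
        C -= alloc[f]
        N -= 1
    return alloc

print(max_min_fair({"f1": 0.1, "f2": 0.5, "f3": 10, "f4": 5}, capacity=1.0))
# -> f1: 0.1, f2: 0.3, f4: 0.3, f3: 0.3 (floating point)
```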
87
COMP680E by M. Hamdi 87 Max-Min Fairness How can an Internet router “allocate” different rates to different flows? First, let’s see how a router can allocate the “same” rate to different flows…
88
COMP680E by M. Hamdi 88 Fair Queueing
1. Packets belonging to a flow are placed in a FIFO; this is called "per-flow queueing".
2. The FIFOs are scheduled one bit at a time, in a round-robin fashion.
3. This is called bit-by-bit fair queueing.
(Diagram: classification into flows 1..N followed by bit-by-bit round-robin scheduling.)
89
COMP680E by M. Hamdi 89 Weighted Bit-by-Bit Fair Queueing Likewise, flows can be allocated different rates by servicing a different number of bits from each flow during each round. (Example: rates R(f1) = 0.1, R(f2) = 0.3, R(f3) = 0.3, R(f4) = 0.3 give the order of service … f1, f2, f2, f2, f3, f3, f3, f4, f4, f4, f1, ….) This is also called Generalized Processor Sharing (GPS).
90
COMP680E by M. Hamdi 90 Understanding Bit-by-Bit WFQ Four queues share 4 bits/s of bandwidth with equal weights 1:1:1:1. At time 0 the queues hold A1 = 4, B1 = 3, C1 = 1, C2 = 1, D1 = 1, D2 = 2 bits; A2 = 2 and C3 = 2 arrive later. D1 and C1 depart at round R = 1; C2 departs at R = 2.
91
COMP680E by M. Hamdi 91 Understanding Bit-by-Bit WFQ (continued, equal weights) D2 and B1 depart at R = 3, A1 at R = 4, and C3 and A2 last (by R = 6). Packet-by-packet WFQ sorts packets by their bit-by-bit finish rounds, giving the departure order C1, D1, C2, B1, D2, A1, C3, A2.
92
COMP680E by M. Hamdi 92 Understanding Bit-by-Bit WFQ Four queues share 4 bits/s of bandwidth with weights 3:2:2:1 (A:B:C:D) and the same packets as before. D1, C2 and C1 depart at round R = 1.
93
COMP680E by M. Hamdi 93 Understanding Bit-by-Bit WFQ (continued, weights 3:2:2:1) B1, A2 and A1 depart at R = 2, followed by D2 and C3. Packet-by-packet WFQ sorts packets by their bit-by-bit finish times, giving the departure order C1, C2, D1, A1, A2, B1, C3, D2.
94
COMP680E by M. Hamdi 94 Packetized Weighted Fair Queueing (WFQ) Problem: we need to serve a whole packet at a time. Solution:
1. Determine the time at which a packet p would complete if we served the flows bit by bit; call this the packet's finishing time F_p.
2. Serve packets in order of increasing finishing time.
This is also called Packetized Generalized Processor Sharing (PGPS).
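A small sketch of PGPS for the simplified case where all packets are already queued at time 0 (later arrivals, as in the walkthrough above, need a virtual-time computation that this sketch omits). Under that assumption, the finish "round" of the k-th packet of a flow is just the flow's cumulative bits divided by its weight, and packets are served in increasing finish order.

```python
# Packetized WFQ order for flows backlogged at time 0 (sketch).

def wfq_order(queues, weights):
    """queues: {flow: [packet sizes in FIFO order]}, weights: {flow: weight}.
    Returns (flow, position) pairs in packet-by-packet WFQ service order."""
    finish = []
    for flow, sizes in queues.items():
        served_bits = 0
        for pos, size in enumerate(sizes):
            served_bits += size
            # finish "round" of this packet under bit-by-bit weighted service
            finish.append((served_bits / weights[flow], flow, pos))
    return [(flow, pos) for _, flow, pos in sorted(finish)]

# Equal weights, packet sizes as in the walkthrough (ignoring the late arrivals):
queues = {"A": [4], "B": [3], "C": [1, 1], "D": [1, 2]}
weights = {"A": 1, "B": 1, "C": 1, "D": 1}
print(wfq_order(queues, weights))
# -> C1 and D1 (round 1), C2 (round 2), B1 and D2 (round 3), A1 (round 4)
```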
95
COMP680E by M. Hamdi 95 WFQ is Complex There may be hundreds to millions of flows, and the linecard needs to manage a FIFO queue for each of them. The finishing time must be calculated for each arriving packet, and packets must be sorted by their departure time. Most of the effort in QoS scheduling algorithms goes into practical algorithms that approximate WFQ. (Diagram: an egress linecard that calculates F_p for each arriving packet and always departs the packet with the smallest F_p.)
96
COMP680E by M. Hamdi 96 When can we Guarantee Delays? Theorem If flows are leaky bucket constrained and all nodes employ GPS (WFQ), then the network can guarantee worst-case delay bounds to sessions.
97
COMP680E by M. Hamdi 97 Deterministic Analysis of a Router Queue (FIFO case) (Figure: cumulative arrivals A(t) and departures D(t) for a queue served at rate R; the vertical gap is the backlog B(t) and the horizontal gap is the FIFO delay d(t).)
98
COMP680E by M. Hamdi 98 (Figure: flows 1..N are classified into per-flow queues with cumulative arrivals A_1(t)..A_N(t) and served by a WFQ scheduler at rates R(f_1)..R(f_N), producing departures D_1(t)..D_N(t).) Key idea: in general we don't know the arrival process, so let's constrain it.
99
COMP680E by M. Hamdi 99 Let's say we can bound the arrival process: the number of bytes of A_1(t) that can arrive in any period of length t is bounded by σ + ρt. This is called "(σ, ρ) regulation".
100
COMP680E by M. Hamdi 100 The leaky bucket "(σ, ρ)" regulator: tokens arrive at rate ρ into a token bucket of size σ; packets wait in a packet buffer and consume one token per byte (or per packet) when they depart.
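A minimal token-bucket sketch of the (σ, ρ) regulator (class and method names are mine): tokens accumulate at rate ρ up to the bucket size σ, and a packet may depart only if it finds enough tokens, so any interval of length t releases at most σ + ρt bytes.

```python
# (sigma, rho) token-bucket regulator (sketch).

class TokenBucket:
    def __init__(self, sigma, rho):
        self.sigma = sigma          # bucket size (burst allowance, bytes)
        self.rho = rho              # token rate (bytes per second)
        self.tokens = sigma
        self.last = 0.0

    def conforms(self, size, now):
        """True (and tokens consumed) if a packet of `size` bytes may leave at time `now`."""
        self.tokens = min(self.sigma, self.tokens + self.rho * (now - self.last))
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False                # non-conforming: buffer it (shaper) or drop/mark it (policer)

tb = TokenBucket(sigma=1500, rho=1000)
print(tb.conforms(1500, 0.0), tb.conforms(1500, 0.5), tb.conforms(1500, 2.0))
# -> True False True
```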
101
COMP680E by M. Hamdi 101 (σ, ρ)-Constrained Arrivals and a Minimum Service Rate (Figure: cumulative arrivals A_1(t) and departures D_1(t) at guaranteed rate R(f_1), with maximum delay d_max and maximum backlog B_max.) Theorem [Parekh, Gallager '93]: if flows are leaky-bucket constrained and routers use WFQ, then end-to-end delay guarantees are possible.