1
Lecture Notes on Scheduling Algorithms
2
What is scheduling? A scheduling discipline resolves contention: who goes next? Goals: fairness and low latency. –Fairness: isolate different users and reduce the impact of malicious users. –Latency: reduce the latency seen by time-critical users.
3
Where? Anywhere contention may occur. Examples: –Serving customers waiting in line. –A CPU managing multiple tasks. –Multiplexing multiple flows at one node, for example before traffic enters a switch or after it leaves the switches or routers.
4
Why do we need a scheduler? QoS enforcement. Multiplexing in integrated services networks: thousands of flows share the same physical infrastructure. Prioritize traffic –Time-critical applications: VoIP, video streaming –Non-real-time applications: Web browsing, e-mail, FTP –ATM: CBR, real-time VBR, non-real-time VBR, ABR, UBR –IP: Diff-Serv (Gold, Silver, Bronze, and best effort) Support diverse traffic profiles with different traffic characteristics.
5
How can a scheduler help? Guarantees bandwidth according to weight. Isolates different users, so a well-behaved source is protected from a malicious one. Guarantees a delay bound for real-time applications. Enables statistical multiplexing.
6
QoS Components Scheduler: the key component –Guarantees fair bandwidth allocation and a latency bound. Policing: filters traffic according to a pre-defined traffic profile (peak rate, average rate, burst length, delay variance, etc.) Traffic shaper: re-shapes the outgoing traffic according to a pre-defined traffic profile for the downstream node. Admission control: prevents the bandwidth from being overbooked.
7
Design Requirements Complexity Fairness Performance bounds
8
Complexity Simple computation. Fast decision High scalability
9
Fairness Ideal fairness: the Relative Fairness Bound (RFB) is zero. –RFB is the maximum difference in the normalized service received by any two backlogged flows over all possible intervals of time. –If the bandwidth is over-subscribed, all flows have their service reduced in proportion to their weights. –Leftover bandwidth is shared proportionally as well.
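In symbols (a restatement of the definition above, with notation of my own: S_i(t1, t2) is the service flow i receives in the interval [t1, t2] and w_i is its weight):

    \mathrm{RFB} \;=\; \max_{[t_1,\,t_2]} \; \max_{i,\,j\ \text{backlogged}} \left| \frac{S_i(t_1,t_2)}{w_i} - \frac{S_j(t_1,t_2)}{w_j} \right|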
10
Performance Bounds Some are statistical: –Average rate –Loss rate Some are deterministic: –Delay bound –Jitter bound
11
Types of schedulers Work-conserving or non-work-conserving Timestamp-based or frame-based
12
Work-conserving vs. non-work-conserving A non-work-conserving scheduler controls traffic-pattern distortion inside the network –Key idea: delay a packet until it becomes eligible It reconstructs the traffic pattern at each switch –Reduces delay-jitter => fewer buffers in the network How to choose the eligibility time (see the sketch below) –rate-jitter regulator: partial reconstruction, bounds the maximum outgoing rate –delay-jitter regulator: full reconstruction, compensates for variable delay
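A sketch of the two regulator types in formulas (the notation is mine, not from the slides: A_j^k is the arrival time of packet k at node j, E_j^k its eligibility time there, L^{k-1} the length of the previous packet, r the flow's reserved rate, and D_{j-1} a fixed bound on the delay through node j-1 and its outgoing link):

    Rate-jitter regulator (bounds the outgoing rate):
        E_j^k = \max\!\left( A_j^k,\; E_j^{k-1} + \frac{L^{k-1}}{r} \right)

    Delay-jitter regulator (fully reconstructs the upstream pattern):
        E_j^k = E_{j-1}^k + D_{j-1}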
13
Do we need non-work-conservation? Delay-jitter can be removed at an endpoint instead –but non-work-conservation also reduces the size of switch buffers… It increases the mean delay –not a problem for playback applications It wastes bandwidth –the idle capacity can serve best-effort packets instead It always punishes a misbehaving source –yes???
14
List of schedulers Work-conserving policies –Virtual Clock –WFQ –Delay-EDD –SCED: service curve earliest due date first –FSC: fair service curve –Rate-controlled servers with standby queues Non-work-conserving policies –Stop-and-Go –Hierarchical round robin –Jitter-EDD –Rate-controlled static priority
15
How to deal with traffic at different priorities Always favor the higher-priority traffic –Pre-emptive: higher-priority traffic can cut through lower-priority traffic –Non-pre-emptive Watch out for starvation –Guarantee a minimum bandwidth for lower-priority traffic (see the sketch below) A scheduler serving multiple priorities can be decomposed into several schedulers, one per priority, plus a priority selector.
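The sketch below is a hypothetical illustration of the last two bullets, not an algorithm from the slides: a non-preemptive two-level priority scheduler that normally favors the high-priority queue but serves the low-priority queue at least once every `guard` decisions to avoid starvation.

    from collections import deque

    class PriorityScheduler:
        """Non-preemptive two-level priority scheduler with a starvation guard (sketch)."""

        def __init__(self, guard=4):
            self.high = deque()
            self.low = deque()
            self.guard = guard        # serve low priority at least once per `guard` decisions
            self.since_low = 0        # decisions made since low priority was last served

        def enqueue(self, packet, high_priority):
            (self.high if high_priority else self.low).append(packet)

        def dequeue(self):
            serve_low = bool(self.low) and (not self.high or self.since_low >= self.guard - 1)
            if serve_low:
                self.since_low = 0
                return self.low.popleft()
            if self.high:
                self.since_low += 1
                return self.high.popleft()
            return None               # both queues empty

In practice the minimum-bandwidth guarantee is usually expressed as a rate or weight rather than a fixed decision ratio; the ratio-based guard here is only for illustration.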
16
GPS: an ideal scheduling model In GPS service, the server serves all backlogged connections simultaneously, in proportion to their guaranteed rates.
17
GPS Example [figure: GPS service over time for connections A–F; the shares shown include 50% and 10%, with service plotted on a time axis running from about 2 to 15]
18
Generalized Processor Sharing An idealized policy that can split bandwidth among multiple connections simultaneously –each connection has a queue and a service share –at any given time, GPS serves all nonempty queues simultaneously, in proportion to their service shares Observations: –each backlogged connection has a guaranteed service rate –the fewer the backlogged connections, the larger the service rate each one receives
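In symbols, with link rate R, service share w_i for connection i, and B(t) the set of backlogged connections at time t, GPS gives connection i the instantaneous service rate

    r_i(t) = \frac{w_i}{\sum_{j \in B(t)} w_j}\, R

so a backlogged connection never receives less than its guaranteed share of R, and receives more whenever other connections are idle.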
19
More about GPS (1) An idealized fluid system Cannot be implemented directly Serves as a theoretical reference for performance comparison Multiple flows can be served simultaneously Each flow can be divided infinitely Emulation model for other schedulers End-to-end delay bound for guaranteed service Fair allocation of bandwidth for best-effort service Work-conserving for high link utilization
20
How to emulate GPS GPS: idealized fluid model –multiple connections can be serviced simultaneously –no packet boundaries: service is infinitely divisible Real system: packetized system –one connection is served at any given time –packet transmission is not preempted; packets are stored and forwarded whole Goal –find packet algorithms that approximate the fluid system while maintaining most of its important properties
21
Packet Approximation of Fluid System Standard mechanism of approximating fluid GPS –select packet that will finish first in GPS assuming that there are no future arrivals Important property of GPS –finishing order of packets currently in system is independent of future arrivals Implementation based on virtual time –assign virtual finish time to each packet upon arrival –packets served in increasing order of virtual finish time
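Written out, this is the standard WFQ tagging rule (notation: a_i^k and L_i^k are the arrival time and length of the k-th packet of flow i, w_i its weight, R the link rate, V(t) the GPS system virtual time, and B(t) the set of backlogged flows):

    F_i^k = \max\!\left( F_i^{k-1},\, V(a_i^k) \right) + \frac{L_i^k}{w_i},
    \qquad
    \frac{dV}{dt} = \frac{R}{\sum_{j \in B(t)} w_j}

Packets are then transmitted in increasing order of their finish tags F_i^k.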
22
Approximating GPS with PGPS PGPS (also called Weighted Fair Queueing) –select the first packet that would finish in GPS [figure: packet service order under GPS and under PGPS for packets 1–10 on a shared time axis]
23
WFQ example: virtual time calculation [figure: three flows with weights 50%, 40%, and 10% sharing a server whose rate is assumed to be 1; the GPS and WFQ service orders are shown together with the piecewise-linear system virtual time V(t), whose slope changes as flows become active or idle]
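As a concrete illustration of timestamp-based scheduling, here is a minimal Python sketch. To stay short it approximates the system virtual time by the finish tag of the packet last served (a self-clocked, SCFQ-style shortcut) instead of emulating GPS exactly, so it is not true WFQ; all names are illustrative.

    import heapq

    class Flow:
        def __init__(self, weight):
            self.weight = weight      # service share of this flow
            self.last_finish = 0.0    # finish tag of the flow's most recent packet

    class FairQueue:
        """Timestamp-based fair queueing sketch (self-clocked approximation)."""

        def __init__(self):
            self.flows = {}
            self.heap = []            # (finish_tag, seq, flow_id, length)
            self.v = 0.0              # approximate system virtual time
            self.seq = 0              # tie-breaker preserving arrival order

        def add_flow(self, flow_id, weight):
            self.flows[flow_id] = Flow(weight)

        def enqueue(self, flow_id, length):
            f = self.flows[flow_id]
            start = max(f.last_finish, self.v)          # start tag
            f.last_finish = start + length / f.weight   # finish tag
            self.seq += 1
            heapq.heappush(self.heap, (f.last_finish, self.seq, flow_id, length))

        def dequeue(self):
            if not self.heap:
                return None
            finish, _, flow_id, length = heapq.heappop(self.heap)
            self.v = finish           # self-clocking: V tracks the served packet's tag
            return flow_id, length

    # Example: flow A has five times the weight of flow B.
    q = FairQueue()
    q.add_flow("A", weight=0.5)
    q.add_flow("B", weight=0.1)
    for _ in range(3):
        q.enqueue("A", 1000)
        q.enqueue("B", 1000)
    print([q.dequeue()[0] for _ in range(6)])   # A's packets carry earlier finish tags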
24
Improvements for WFQ Improve worst-case fairness –use the Smallest Eligible virtual Finish time First policy –examples: WF²Q, WF²Q+ Reduce complexity –use simpler virtual time functions –examples: SCFQ, SFQ, leap-forward Virtual Clock, WF²Q+
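Using the start tag S_i^k = F_i^k - L_i^k / w_i and the system virtual time V(t) defined earlier, the eligibility rule can be stated as: at time t, WF²Q considers only packets whose service would already have started under GPS, i.e. those with

    S_i^k \le V(t),

and among those it serves the one with the smallest finish tag F_i^k, which prevents any flow from running far ahead of its GPS schedule.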
25
WFQ Approximating GPS Fluid GPS system service order vs. Weighted Fair Queueing –serve the first packet that finishes in GPS Observations –a packet finishes in WFQ no later than it does in GPS (the basis for proving the WFQ delay bound) –some packets finish service much earlier in WFQ than in GPS [figure: GPS vs. WFQ service order for packets 1–10 on a shared time axis]
26
Two Approximations of GPS Problem with WFQ: WFQ can run too far ahead of GPS WF²Q service order: [figure: WF²Q service order on the same time axis]
27
Again: Problems with WFQ Packets can be served much earlier in WFQ than in GPS The amount of service received under WFQ can be far ahead of that under GPS Possibility of a long period of no service even when backlogged
28
Virtual Time in the Fluid GPS System Virtual time represents the cumulative fair amount of service A previously idle connection starts to receive service at the virtual-time level at which it becomes active –no compensation for the idle period –it starts receiving service immediately after becoming active Accurate but difficult to compute –requires emulating the fluid system, worst-case complexity O(N)
29
Virtual Time in the Packet System Can we compute the system virtual time based only on information in the packet system? –no need to emulate the fluid system Problem: the normalized amounts of service received by backlogged connections differ At which level should a previously idle connection be set?
30
Problems with timestamp-based schedulers Computing the virtual time is complex They require a sorting function –not scalable to a large number of flows at high speed
31
Scheduling Design Considerations Complexity –Easy to implement in hardware –Support thousands of flows –Scale to high speed An OC-48 line leaves only about 160 ns to make a scheduling decision for ATM traffic.
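As a rough check of that budget (assuming standard 53-byte ATM cells and the 2.488 Gbit/s OC-48 line rate, ignoring framing overhead), each cell lasts about

    \frac{53 \times 8\ \text{bits}}{2.488\ \text{Gbit/s}} \approx 170\ \text{ns},

so the scheduler indeed has well under 200 ns per decision, on the order of the 160 ns quoted above.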
32
Timing Wheel Scheme
33
Features of the timing-wheel scheduler Complexity is O(1). –Easy to implement. Cannot provide guarantees when the bandwidth is over-subscribed. Complexity increases dramatically as the weight granularity becomes finer.
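A minimal Python sketch of the timing-wheel idea (the names and the fixed-size-cell assumption are mine): each flow's rate is mapped to a slot spacing, enqueue and dequeue are O(1), but the wheel cannot keep its promises once slots are over-committed, and finer rate granularity requires a proportionally larger wheel.

    class TimingWheel:
        """Calendar-queue / timing-wheel transmitter sketch."""

        def __init__(self, num_slots):
            self.slots = [[] for _ in range(num_slots)]  # one packet list per time slot
            self.num_slots = num_slots
            self.now = 0                                  # index of the current slot

        def schedule(self, packet, spacing):
            """Place a packet `spacing` slots after the current position (O(1))."""
            self.slots[(self.now + spacing) % self.num_slots].append(packet)

        def tick(self):
            """Advance one slot time and return the packets due in that slot (O(1))."""
            self.now = (self.now + 1) % self.num_slots
            due, self.slots[self.now] = self.slots[self.now], []
            return due

    # Example: two cells spaced 4 and 8 slots from now.
    wheel = TimingWheel(num_slots=16)
    wheel.schedule("cell-1", spacing=4)
    wheel.schedule("cell-2", spacing=8)
    for t in range(8):
        for cell in wheel.tick():
            print(t, cell)   # cell-1 appears on the 4th tick, cell-2 on the 8th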
34
Frame-Based Scheduling WRR QLWFQ DRR Nested DRR
35
Traditional WRR
36
WRR (continued) Simple to implement Maintains fairness for cell (fixed-size) traffic Long delay Short-term unfairness Not suitable for variable-length packets (see the sketch below)
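A minimal sketch of traditional WRR over fixed-size cells (the flow setup and names are illustrative): each flow may send up to its weight in cells per round, which is fair for equal-size cells but skews toward flows with larger packets once lengths vary, the problem DRR later corrects.

    from collections import deque

    class WeightedRoundRobin:
        """Weighted round robin for fixed-size cells (sketch)."""

        def __init__(self):
            self.flows = []                  # list of (queue, weight) pairs

        def add_flow(self, queue, weight):
            self.flows.append((queue, weight))

        def next_round(self):
            """Yield the cells transmitted in one full round."""
            for queue, weight in self.flows:
                for _ in range(weight):
                    if not queue:
                        break                # work-conserving: skip an empty flow
                    yield queue.popleft()

    # Example: flow A (weight 2) gets twice as many opportunities per round as flow B.
    wrr = WeightedRoundRobin()
    wrr.add_flow(deque(["a1", "a2", "a3"]), weight=2)
    wrr.add_flow(deque(["b1", "b2"]), weight=1)
    print(list(wrr.next_round()))   # ['a1', 'a2', 'b1']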
37
Improvement [figure 2(b), prior art: an interleaved WRR service order over three rounds for flows A–F with weights W_A=3, W_B=2, W_C=1, W_D=3, W_E=1, W_F=2; service opportunities 1–12 are spread across rounds 1–3]