Scheduling Periodic Real-Time Tasks with Heterogeneous Reward Requirements
I-Hong Hou and P.R. Kumar
Presenter: Qixin Wang
Problem Overview
Imprecise Computation Model: tasks generate jobs periodically, each job with a deadline.
Jobs that miss their deadlines degrade system performance rather than cause serious failure.
Partially-completed jobs are still useful and generate some reward.
Previous work: maximize the total reward of all tasks.
This assumes the rewards of different tasks are equivalent, may result in serious unfairness, and does not allow tradeoffs between tasks.
This work: provide guarantees on the reward of each task.
Example: Video Streaming
A server serves several video streams.
Each stream generates a group of video-frames (GOF) periodically.
Video-frames must be delivered on time, or they are not useful; lost video-frames result in glitches in the video.
Video-frames of the same flow are not equally important: MPEG has three types of video-frames, I, P, and B, where I-frames are more important than P-frames, which are more important than B-frames.
Goal: provide guarantees on the perceived video quality of each stream.
System Model
Discrete time, basic unit: time-slot.
A system with several tasks (task = video stream).
Each task X generates one job every τ_X time-slots, with deadline τ_X (job = GOF).
All tasks release one job at the first time-slot.
[Timeline figure: release patterns of tasks A, B, C with τ_A = 4, τ_B = 6, τ_C = 3]
System Model
Cycle = least common multiple of τ_A, τ_B, …
[Timeline figure: for τ_A = 4, τ_B = 6, τ_C = 3, the cycle length is T = 12]
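In Python, the cycle length for the example periods (τ_A = 4, τ_B = 6, τ_C = 3) is a one-line computation:

```python
from math import lcm

# Periods of the example tasks, in time-slots.
periods = {"A": 4, "B": 6, "C": 3}

# Cycle length T = least common multiple of all task periods.
T = lcm(*periods.values())
print(T)  # 12
```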
Model for Rewards
A job can run for several time-slots before its deadline.
Upon finishing its k-th time-slot of execution, a job of task X earns a marginal reward r_X^k, where r_X^1 ≥ r_X^2 ≥ r_X^3 ≥ … (i.e., the reward of the k-th video-frame in a GOF is r_X^k).
[Timeline figure: tasks A, B, C with τ_A = 4, τ_B = 6, τ_C = 3]
Scheduling Example
[Figure: one cycle of a schedule, showing the slot-by-slot assignment of the CPU among A, B, and C]
Reward of A per cycle = 3 r_A^1 + 2 r_A^2 + r_A^3
Reward of B per cycle = 2 r_B^1 + r_B^2
Reward of C per cycle = 3 r_C^1
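The per-cycle reward of a task follows directly from how many slots each of its jobs received. A minimal sketch (the marginal-reward values for r_A are hypothetical placeholders):

```python
def cycle_reward(slots_per_job, marginal_rewards):
    """Total reward of one task over a cycle.

    slots_per_job[i]      = time-slots job i received before its deadline
    marginal_rewards[k-1] = r^k, the non-increasing reward of the k-th slot
    """
    return sum(sum(marginal_rewards[:n]) for n in slots_per_job)

# Task A in the example has 3 jobs per cycle, served 3, 2, and 1 slots,
# giving 3*r1 + 2*r2 + 1*r3. With hypothetical rewards r_A = [5, 3, 1, 0]:
r_A = [5, 3, 1, 0]
print(cycle_reward([3, 2, 1], r_A))  # 3*5 + 2*3 + 1*1 = 22
```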
Reward Requirements
Task X requires an average reward per cycle of at least q_X.
Q: Is [q_A, q_B, …] feasible? How should we schedule to meet the reward requirements?
Extension for Imprecise Computation Models
Imprecise Computation Model: each job may have a mandatory part and an optional part.
The mandatory part must run to completion, or a serious failure happens; an incomplete optional part only harms performance.
Our model: set the reward of each mandatory time-slot to M, where M is larger than any finite number.
The reward requirement of a task is then aM + b, where a is the length of the mandatory part in time-slots and b is the total reward required from the optional part.
The mandatory part must be completed before the optional part can start.
Feasibility Condition
f_X^k := average number of jobs of task X per cycle that run for at least k time-slots.
Obviously, 0 ≤ f_X^k ≤ T/τ_X.
Average reward of X = Σ_k f_X^k r_X^k, so the reward requirement is Σ_k f_X^k r_X^k ≥ q_X.
The average number of time-slots that the CPU spends on X per cycle is Σ_k f_X^k.
Hence, Σ_X Σ_k f_X^k ≤ T.
Admission Control
Theorem: A system is feasible if and only if there exists a vector [f_X^k] such that
1. 0 ≤ f_X^k ≤ T/τ_X
2. Σ_k f_X^k r_X^k ≥ q_X for every task X
3. Σ_X Σ_k f_X^k ≤ T
Feasibility can be checked by linear programming.
The complexity of admission control can be further reduced by noting that r_X^1 ≥ r_X^2 ≥ r_X^3 ≥ …
Theorem: feasibility can be checked in O(Σ_X τ_X) time.
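Because the marginal rewards are non-increasing, the cheapest way for a task to earn q_X is to fill its highest-reward slots first, which yields a fast check without a general LP solver. This sketch is my reading of why an O(Σ_X τ_X) check is possible; the paper's exact algorithm may differ:

```python
def min_time(tau, rewards, q, T):
    """Minimum average CPU time per cycle for one task to earn reward q,
    exploiting r^1 >= r^2 >= ...; returns None if q is unreachable."""
    cap = T / tau                 # each f^k is bounded by T/tau
    time = 0.0
    for r in rewards:             # highest rewards come first
        if q <= 1e-12:
            return time
        if r <= 0:
            return None           # remaining slots earn nothing
        f = min(cap, q / r)       # fill this reward level as needed
        time += f
        q -= f * r
    return time if q <= 1e-12 else None

def feasible(tasks, T):
    """tasks: list of (tau, rewards, q) with rewards non-increasing.
    True iff a vector [f_X^k] satisfying the theorem's conditions exists."""
    total = 0.0
    for tau, rewards, q in tasks:
        t = min_time(tau, rewards, q, T)
        if t is None:
            return False
        total += t
    return total <= T + 1e-9      # condition 3: total CPU time fits in T

print(feasible([(4, [2, 1, 1, 0], 7), (6, [3, 1], 6)], 12))   # True
print(feasible([(4, [2, 1, 1, 0], 100)], 12))                 # False
```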
Scheduling Policy
Q: Given a feasible system, how do we design a scheduling policy that fulfills all reward requirements?
We propose a framework for designing scheduling policies, propose an on-line scheduling policy, and analyze the performance of the on-line policy.
A Condition Based on Debts
Let s_X(l) be the reward obtained by X in the l-th cycle.
The (accumulated) debt of task X right after the l-th cycle: d_X(l) := [d_X(l−1) + q_X − s_X(l)]^+, where x^+ := max{x, 0}.
The requirement of task X is met if d_X(l)/l → 0 as l → ∞.
Theorem: A scheduling policy that maximizes Σ_X d_X(l) s_X(l) in every cycle fulfills every feasible system.
Such a policy is called an optimal policy.
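The debt recursion is a one-liner; a quick sketch with hypothetical numbers (q = 10 per cycle):

```python
def update_debt(d_prev, q, s):
    """Accumulated debt after one cycle: d(l) = [d(l-1) + q - s(l)]^+."""
    return max(d_prev + q - s, 0.0)

# A cycle that earns only s = 7 builds debt 3;
# a surplus cycle (s = 14) then clears it.
d = update_debt(0.0, 10, 7)
print(d)  # 3.0
d = update_debt(d, 10, 14)
print(d)  # 0.0
```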
Approximation Policy
The computation overhead of an optimal policy may be high, so we study performance guarantees of suboptimal policies.
Theorem: If a policy's resulting Σ_X d_X(l) s_X(l) is at least 1/p of the optimal policy's Σ_X d_X(l) s_X(l), then the policy achieves reward requirements [q_X] as long as the requirements [p·q_X] are feasible.
Such a policy is called a p-approximation policy.
An On-Line Scheduling Policy
At some time-slot, let (j_X − 1) be the number of time-slots the CPU has worked on the current job of X so far.
If the CPU schedules X in this time-slot, X obtains a reward of r_X^{j_X}.
Greedy Maximizer: in every time-slot, schedule the task X that maximizes r_X^{j_X} d_X(l).
The Greedy Maximizer can be implemented efficiently.
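One cycle of the Greedy Maximizer can be sketched slot by slot; the task parameters in the example call are hypothetical, and `debts` stands in for the d_X(l) carried over from previous cycles:

```python
def greedy_cycle(tasks, debts, T):
    """tasks: {name: (tau, rewards)}; debts: {name: d_X(l)}.
    Simulates one cycle of the Greedy Maximizer; returns the schedule."""
    progress = {x: 0 for x in tasks}   # slots already given to current job
    schedule = []
    for t in range(T):
        # A new job of X is released whenever t is a multiple of tau_X.
        for x, (tau, _) in tasks.items():
            if t % tau == 0:
                progress[x] = 0
        # Pick the task maximizing r_X^{j_X} * d_X(l) for this slot.
        def gain(x):
            _, rewards = tasks[x]
            j = progress[x]
            return debts[x] * (rewards[j] if j < len(rewards) else 0)
        best = max(tasks, key=gain)
        schedule.append(best)
        progress[best] += 1
    return schedule

# Hypothetical two-task instance: tau_A = 4, tau_B = 2, equal debts.
sched = greedy_cycle({"A": (4, [2, 1, 1, 0]), "B": (2, [3, 0])},
                     {"A": 1, "B": 1}, T=4)
print(sched)  # ['B', 'A', 'B', 'A']
```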
Performance of Greedy Maximizer
The Greedy Maximizer is optimal when all tasks have the same period length, τ_A = τ_B = …
However, when tasks have different period lengths, the Greedy Maximizer is not necessarily optimal.
Example of Suboptimality
A system with two tasks:
Task A: τ_A = 6, r_A^1 = r_A^2 = r_A^3 = r_A^4 = 100, r_A^5 = r_A^6 = 1
Task B: τ_B = 3, r_B^1 = 10, r_B^2 = r_B^3 = 0
Suppose d_A(l) = d_B(l) = 1.
d_A(l)s_A(l) + d_B(l)s_B(l) of the Greedy Maximizer = 411
d_A(l)s_A(l) + d_B(l)s_B(l) of an optimal policy = 420
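Both numbers can be reproduced by evaluating one T = 6 cycle. Greedy picks A whenever its marginal reward is 100 and only yields to B once; the better schedule serves each of B's two jobs for one slot:

```python
r_A = [100, 100, 100, 100, 1, 1]   # tau_A = 6
r_B = [10, 0, 0]                   # tau_B = 3
taus = {"A": 6, "B": 3}
rews = {"A": r_A, "B": r_B}

def cycle_value(schedule):
    """Weighted reward sum d_A*s_A + d_B*s_B with d_A = d_B = 1."""
    progress, total = {"A": 0, "B": 0}, 0
    for t, x in enumerate(schedule):
        for y, tau in taus.items():  # job releases reset progress
            if t % tau == 0:
                progress[y] = 0
        total += rews[x][progress[x]]
        progress[x] += 1
    return total

print(cycle_value(["A", "A", "A", "A", "B", "A"]))  # greedy choices: 411
print(cycle_value(["B", "A", "A", "A", "A", "B"]))  # better schedule: 420
```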
Approximation Bound
We analyze the worst-case performance of the Greedy Maximizer and show that its resulting Σ_X d_X(l) s_X(l) is at least 1/2 of that achieved by any other policy.
Theorem: The Greedy Maximizer is a 2-approximation policy.
Hence, the Greedy Maximizer achieves reward requirements [q_X] as long as the requirements [2q_X] are feasible.
Simulation Setup: MPEG Streaming
MPEG: one GOF consists of 1 I-frame, 3 P-frames, and 8 B-frames.
Two groups of tasks, A and B, with 3 tasks in each group.
Tasks in A treat both I-frames and P-frames as mandatory parts, while tasks in B only require I-frames to be mandatory.
B-frames are optional for tasks in A; both P-frames and B-frames are optional for tasks in B.
Reward Function for Optional Part
Each task gains some reward when its optional parts are executed.
We consider three types of optional-part reward functions: exponential, logarithmic, and linear.
Exponential: X obtains a total reward of (5+i)(1 − e^(−k/5)) if its job is executed for k time-slots, where i is the index of task X.
Logarithmic: X obtains a total reward of (5+i) log(10k+1) if its job is executed for k time-slots.
Linear: X obtains a total reward of (5+i)k if its job is executed for k time-slots.
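The three total-reward curves are easy to tabulate. A sketch, assuming a natural logarithm for the logarithmic curve (the slides do not state the base):

```python
import math

def exponential(i, k):
    """Total reward (5+i)(1 - e^(-k/5)) after k executed slots."""
    return (5 + i) * (1 - math.exp(-k / 5))

def logarithmic(i, k):
    """Total reward (5+i)log(10k+1); base assumed natural."""
    return (5 + i) * math.log(10 * k + 1)

def linear(i, k):
    """Total reward (5+i)k."""
    return (5 + i) * k

# Exponential and logarithmic curves have diminishing marginal rewards,
# so r^1 >= r^2 >= ... holds; the linear curve has constant marginals.
for f in (exponential, logarithmic, linear):
    print([round(f(1, k), 2) for k in (1, 2, 3)])
```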
Performance Comparison
Assume all tasks in A require an average reward of α, and all tasks in B require an average reward of β.
We plot all pairs (α, β) that are achieved by each policy.
Three policies are considered:
Feasible: the feasible region characterized by the feasibility conditions
Greedy Maximizer
MAX: a policy that aims to maximize the total reward in the system
Simulation Results: Same Frame Rate
All streams generate one GOF every 30 time-slots.
Exponential reward functions: Greedy = Feasible, so the Greedy Maximizer is indeed feasibility optimal.
Greedy is much better than MAX.
Simulation Results: Same Frame Rate
All streams generate one GOF every 30 time-slots.
Greedy = Feasible, and is always better than MAX.
[Plots: logarithmic and linear reward functions]
Simulation Results: Heterogeneous Frame Rate
Different tasks generate GOFs at different rates; a period may be 20, 30, or 40 time-slots.
The performance of Greedy is close to optimal, and Greedy is much better than MAX.
[Plot: exponential reward functions]
Simulation Results: Heterogeneous Frame Rate
Different tasks generate GOFs at different rates; a period may be 20, 30, or 40 time-slots.
[Plots: logarithmic and linear reward functions]
Conclusions
We propose a model based on imprecise computation models that supports per-task reward guarantees.
The model achieves better fairness and allows fine-grained tradeoffs between tasks.
We derive a sharp condition for feasibility.
We propose an on-line scheduling policy, the Greedy Maximizer.
The Greedy Maximizer is feasibility optimal when all tasks have the same period length, and is a 2-approximation policy otherwise.