Sporadic Server Scheduling in Linux: Theory vs. Practice
Mark Stanovich, Theodore Baker, Andy Wang
Real-Time Scheduling Theory
Analysis techniques to design a system to meet timing constraints
Schedulability analysis
– Workload models
– Processor models
– Scheduling algorithms
Periodic Task
(timeline figure) Task = {T, C, D}: period T, computation time (WCET) C, and deadline D; jobs j1, j2, j3, … are released at periodic release times.
A periodic task in Linux: sched_setscheduler(SCHED_FIFO) for a fixed real-time priority, clock_nanosleep() to sleep until the next release.
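As a rough illustration of those two calls (not from the original slides; the priority of 50 and the 10 ms period are arbitrary example values), a periodic task loop might look like:

/* Minimal sketch of a periodic task using SCHED_FIFO and clock_nanosleep().
 * The priority (50) and period (10 ms) are illustrative values. */
#include <sched.h>
#include <time.h>

#define PERIOD_NS 10000000L            /* 10 ms period, chosen for illustration */

static void do_job(void) { /* one job's worth of work */ }

int main(void)
{
    struct sched_param sp = { .sched_priority = 50 };
    sched_setscheduler(0, SCHED_FIFO, &sp);   /* fixed real-time priority */

    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);    /* first release time */

    for (;;) {
        do_job();

        /* Advance the absolute release time by one period and sleep until it. */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}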
Periodic Task Assumptions
The WCET estimate is reliable
Arrivals are periodic
These assumptions are not realistic for most tasks
Polling Server
(timeline figure: job arrivals, initial budget, replenishment period)
Polling Server
A type of aperiodic server
Its CPU usage is no worse than that of an equivalent periodic task, so it can be modeled as a periodic task with WCET = initial budget and period = replenishment period
Budget is consumed as CPU time is used
Unused budget is forfeited
Budget is replenished every period
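A minimal sketch of the accounting rules just listed (illustrative only, not the authors' implementation):

/* Polling-server budget accounting, as described on this slide. */
struct polling_server {
    long budget_ns;        /* budget remaining in the current period */
    long init_budget_ns;   /* initial budget (the WCET in the periodic model) */
    long period_ns;        /* replenishment period (the period in the model) */
};

/* At every period boundary the budget is reset to its initial value. */
void ps_replenish(struct polling_server *s)
{
    s->budget_ns = s->init_budget_ns;
}

/* Budget is consumed as the server uses CPU time. */
void ps_consume(struct polling_server *s, long ran_ns)
{
    s->budget_ns -= ran_ns;
    if (s->budget_ns < 0)
        s->budget_ns = 0;
}

/* If the server finds no pending work, the remaining budget is forfeited
 * until the next replenishment. */
void ps_idle(struct polling_server *s)
{
    s->budget_ns = 0;
}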
Polling Server
Good: bounds CPU time, analyzable workload, simplicity
Can be better: faster response time if unused budget is not forfeited
Sporadic Server
(timeline figure: job arrivals, initial budget, replenishment period, replenishments)
Sporadic Server
Originally proposed by Sprunt et al.
Parameters:
– Initial budget
– Replenishment period
Bounds CPU interference for other tasks
Fits into the periodic task workload model
Better average response time than a polling server
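A simplified sketch of the sporadic-server replenishment rule (based on the general idea of Sprunt et al.'s algorithm; the data layout and the 16-entry limit are illustrative assumptions):

/* Sporadic-server accounting: consumed budget is returned one replenishment
 * period after the time the server became active, rather than at fixed
 * period boundaries. */
#include <stdint.h>

#define MAX_REPL 16               /* illustrative limit on pending replenishments */

struct ss_repl {
    int64_t when_ns;              /* time at which the budget comes back */
    int64_t amount_ns;            /* amount of budget returned */
};

struct sporadic_server {
    int64_t budget_ns;            /* currently available budget */
    int64_t period_ns;            /* replenishment period */
    int64_t activation_ns;        /* time the server last became active */
    struct ss_repl pending[MAX_REPL];
    int npending;
};

/* Called when the server stops running or exhausts its budget: charge the
 * consumed CPU time and schedule its return one period after activation. */
void ss_charge(struct sporadic_server *s, int64_t consumed_ns)
{
    s->budget_ns -= consumed_ns;
    if (s->budget_ns < 0)
        s->budget_ns = 0;

    if (consumed_ns > 0 && s->npending < MAX_REPL) {
        s->pending[s->npending].when_ns = s->activation_ns + s->period_ns;
        s->pending[s->npending].amount_ns = consumed_ns;
        s->npending++;
    }
}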
Sporadic Server
A scheduling algorithm for fixed-task-priority systems
Can be used within the UNIX priority model
SCHED_SPORADIC is a version of SS defined by the POSIX standard
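For reference, POSIX exposes the server parameters through struct sched_param where the Sporadic Server option is supported; mainline Linux does not implement SCHED_SPORADIC, so the sketch below illustrates the standard interface rather than the implementation discussed in this talk (the 1 ms / 10 ms values are examples):

/* Illustrative use of the POSIX SCHED_SPORADIC interface. These sched_param
 * fields exist only where _POSIX_SPORADIC_SERVER is supported; mainline
 * Linux does not provide this policy, so this call would fail there. */
#include <sys/types.h>
#include <sched.h>

int make_sporadic_server(pid_t pid)
{
    struct sched_param sp = {0};
    sp.sched_priority        = 50;   /* server's (high) foreground priority */
    sp.sched_ss_low_priority = 10;   /* priority while the budget is exhausted */
    sp.sched_ss_repl_period.tv_sec  = 0;
    sp.sched_ss_repl_period.tv_nsec = 10000000;  /* 10 ms replenishment period */
    sp.sched_ss_init_budget.tv_sec  = 0;
    sp.sched_ss_init_budget.tv_nsec = 1000000;   /* 1 ms initial budget */
    sp.sched_ss_max_repl     = 8;    /* max pending replenishments */
    return sched_setscheduler(pid, SCHED_SPORADIC, &sp);
}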
Implementation
Linux 2.6.38
Softirq threading patch ported from an earlier RT patch
Sporadic server implementation
Uniprocessor
Sporadic Server Performance Metrics
Interference for lower-priority tasks
Average response time
An Experiment (two nodes, A and B)
One node sends UDP packets containing the current timestamp
The other node receives the UDP packets
Response time is calculated based on arrival at the UDP layer
CPU time is measured for a 10-second burst
Measuring CPU Time
Regehr's "hourglass" technique: constantly read the timestamp counter
– Detect preemptions as larger gaps between successive reads
– Sum the execution chunks
The hourglass thread runs at lower priority than the SS thread
– It therefore measures the interference caused by the SS thread
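A rough sketch of the hourglass idea (x86-specific; the gap threshold, assumed 3 GHz clock, and run length are arbitrary illustrative values):

/* Hourglass-style measurement: spin reading the TSC, count small gaps as our
 * own execution time, and treat large gaps as preemptions by higher-priority
 * threads. Interference is wall-clock time minus time we actually ran. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>                       /* __rdtsc() */

#define GAP_THRESHOLD 10000ULL               /* cycles: larger gaps = preemption */
#define RUN_CYCLES (30ULL * 1000000000ULL)   /* ~10 s at an assumed 3 GHz */

int main(void)
{
    uint64_t start = __rdtsc();
    uint64_t last = start;
    uint64_t mine = 0;                       /* cycles this thread actually ran */

    while (last - start < RUN_CYCLES) {
        uint64_t now = __rdtsc();
        if (now - last < GAP_THRESHOLD)
            mine += now - last;              /* still running: count the chunk */
        /* else: preempted; the gap is interference from higher priorities */
        last = now;
    }

    uint64_t wall = last - start;
    printf("interference: %llu of %llu cycles\n",
           (unsigned long long)(wall - mine), (unsigned long long)wall);
    return 0;
}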
Measuring CPU Time
Network receive thread: sporadic and polling server (budget = 1 msec, period = 10 msec), SCHED_FIFO
Hourglass thread: SCHED_FIFO, lower priority than the network receive thread
CPU Utilization
Response Time
Interference
The SS budget limits the server's CPU demand
But the server imposes additional overheads on lower-priority tasks:
– Context switch time
– Cache eviction and reloading
These overheads are not in the theoretical workload model
The guarantees of the theory hold only if this interference is included in the analysis
Polling Server
(timeline figure; legend: aperiodic job CPU time, aperiodic job arrival)
Sporadic Server
(timeline figure; legend: aperiodic job CPU time, aperiodic job arrival, replenishment period)
Over-Provisioning
Not all of the budgeted context-switch time may be used (e.g., only one replenishment per period)
Instead, account for context-switch time on-line:
– Charge the SS for each preemption (sketched below)
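A sketch of that on-line charging; the per-preemption cost here is an assumed constant chosen for illustration (real costs include cache effects and vary):

/* Charge the server's budget for each preemption it causes, so the
 * context-switch overhead imposed on lower-priority tasks is paid from
 * the server's own budget. CS_COST_NS is an assumed, illustrative value. */
#define CS_COST_NS 20000L   /* 20 us per preemption (illustrative) */

void charge_preemption(long *server_budget_ns)
{
    *server_budget_ns -= CS_COST_NS;
    if (*server_budget_ns < 0)
        *server_budget_ns = 0;
}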
CPU Utilization
Response Time
Analysis
Light load:
– Sporadic server: low response time
– Polling server: high response time
Heavy load:
– Sporadic server: high response time, dropped packets
– Polling server: low response time, no dropped packets
Can we get the best of both?
Sporadic server for light loads
Polling server for heavy loads
Hybrid Server
How to switch:
– Ensure bounded interference
– An SS restricted to one replenishment behaves the same as a polling server
– Coalesce replenishments, pushing them further into the future (sketched below)
Switching point: the server has work but no budget
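A sketch of replenishment coalescing (illustrative data layout, not the kernel code): merging all pending replenishments into a single one at the latest pending time pushes budget further into the future, so the SS temporarily behaves like a polling server.

/* Coalesce pending replenishments: combine their amounts into one
 * replenishment at the latest pending time. */
struct repl {
    long long when_ns;    /* when the budget is returned */
    long      amount_ns;  /* how much budget is returned */
};

int coalesce_replenishments(struct repl *pending, int n)
{
    if (n <= 1)
        return n;

    long total = 0;
    long long latest = pending[0].when_ns;
    for (int i = 0; i < n; i++) {
        total += pending[i].amount_ns;
        if (pending[i].when_ns > latest)
            latest = pending[i].when_ns;
    }
    pending[0].when_ns = latest;
    pending[0].amount_ns = total;
    return 1;                        /* only one pending replenishment remains */
}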
Sporadic Server
(timeline figures)
Response Time
CPU Utilization
Switching
Immediate coalescing may be too extreme; that CPU time could be used for better response time
Gradual approach: coalesce only a few replenishments at a time
Sporadic Server
(timeline figures)
Response Time
CPU Utilization
Conclusion
Theoretical analysis provides solid guarantees
The implementation must match the abstract models
Additional interference terms need to be considered
SS can fit into the theoretical analysis
Deferrable Server
Bandwidth Preserving
Allow the server to retain unused budget
Periodically replenish the budget
WCET != budget: because retained budget can be spent back-to-back across a period boundary, the server cannot be modeled as a periodic task whose WCET equals the budget
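A minimal sketch of this deferrable-server style accounting, contrasted with the polling server earlier (illustrative; field names are assumptions): the budget is retained while the server is idle and simply topped up at each period boundary.

/* Deferrable-server style accounting: unused budget is retained rather than
 * forfeited, and is topped back up at each period boundary. */
struct deferrable_server {
    long budget_ns;
    long init_budget_ns;
    long period_ns;
};

void ds_replenish(struct deferrable_server *s)
{
    s->budget_ns = s->init_budget_ns;   /* top up; no forfeiture beforehand */
}

void ds_consume(struct deferrable_server *s, long ran_ns)
{
    s->budget_ns -= ran_ns;
    if (s->budget_ns < 0)
        s->budget_ns = 0;
}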
Response Time
Replenishment Policy
(timeline figure: arrival time of work available for the server, initial budget, replenishment period, replenishment)
Bandwidth Preservation
(timeline figure: arrival time of work available for the server, initial budget, replenishment period, replenishment)
Sporadic Server
(timeline figure)