
1 Processes and operating systems
- Priority-driven scheduling
- Scheduling policies:
  - RMS
  - EDF
- Interprocess communication
- Operating system performance
- Power management

2 Embedded vs. general-purpose scheduling
- Workstations try to avoid starving processes of CPU access.
  - Fairness = access to the CPU.
- Embedded systems must meet deadlines.
  - Low-priority processes may not run for a long time.

3 Priority-driven scheduling
- Each process has a priority.
- The CPU goes to the highest-priority process that is ready.
- Priorities determine the scheduling policy:
  - fixed priorities;
  - time-varying priorities.

4 Priority-driven scheduling example
- Rules:
  - each process has a fixed priority (1 = highest);
  - the highest-priority ready process gets the CPU;
  - a process continues until done.
- Processes:
  - P1: priority 1, execution time 10
  - P2: priority 2, execution time 30
  - P3: priority 3, execution time 20

5 Priority-driven scheduling example
[Timeline figure: P2 ready at t=0, P1 ready at t=15, P3 ready at t=18; time axis 0-60 showing P2, then P1, then P3 executing.]
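As a hedged illustration of the selection rule on slides 3-4 (not code from the original deck), here is a minimal C sketch that picks the highest-priority ready process; the struct and function names are invented for this example:

    #include <stddef.h>

    /* Hypothetical process descriptor for this sketch only. */
    struct process {
        int priority;   /* 1 is the highest priority */
        int ready;      /* nonzero if the process is ready to run */
    };

    /* Return the index of the highest-priority ready process, or -1 if none. */
    int pick_next(const struct process *procs, size_t n) {
        int best = -1;
        for (size_t i = 0; i < n; i++) {
            if (procs[i].ready &&
                (best < 0 || procs[i].priority < procs[best].priority)) {
                best = (int)i;
            }
        }
        return best;
    }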

6 Metrics
- How do we evaluate a scheduling policy?
  - Ability to satisfy all deadlines.
  - CPU utilization---percentage of time devoted to useful work.
  - Scheduling overhead---time required to make a scheduling decision.
- Algorithms:
  - Rate-monotonic scheduling (RMS)
  - Earliest-deadline-first (EDF)

7 Rate-monotonic scheduling
- RMS (Liu and Layland): a widely used, analyzable scheduling policy.
- Its analysis is known as rate-monotonic analysis (RMA).

8 RMA model
- All processes run periodically on a single CPU.
- Zero context-switch time.
- No data dependencies between processes.
- Process execution time is constant.
- The deadline is at the end of the period.
- The highest-priority ready process runs.

9 Process parameters
- T_i is the computation time of process i.
- τ_i is the period of process i.
[Figure: one period τ_i of process Pi, containing its computation time T_i.]

10 RMS priorities
- Optimal (fixed) priority assignment:
  - the shortest-period process gets the highest priority.

11 RMS example
[Timeline figure: P1 and P2 periods marked on a time axis from 0 to 10; execution order P1, P2, P1.]

12 Example

Process  Execution time  Period
P1       1               4
P2       2               6
P3       3               12

- Priority order: P1, P2, P3 (shortest period first).
- Over the hyperperiod of length 12, P1 executes 3 times, P2 executes 2 times, and P3 executes once.

13 CPU utilization: (1*3 + 2*2 + 3*1)/12 = 10/12 = 83.3%
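For the slide-12 task set, a minimal C sketch (the array contents come from the example, everything else is invented for this illustration) that computes the same utilization by summing T_i/τ_i:

    #include <stdio.h>

    int main(void) {
        /* Execution times and periods from the slide-12 example. */
        double exec[]   = {1.0, 2.0, 3.0};
        double period[] = {4.0, 6.0, 12.0};
        double u = 0.0;
        for (int i = 0; i < 3; i++)
            u += exec[i] / period[i];                 /* U = sum of T_i / tau_i */
        printf("Utilization = %.1f%%\n", u * 100.0);  /* prints 83.3% */
        return 0;
    }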

14 Another example

Process  Execution time  Period
P1       2               4
P2       3               6
P3       3               12

- Over the hyperperiod of length 12, P1 executes 3 times, P2 executes 2 times, and P3 executes once.
- P1: 3 executions x 2 = 6 CPU time units; P2: 2 executions x 3 = 6 units; P3: 1 execution x 3 = 3 units.
- The total demand is 15 CPU time units, but the hyperperiod is only 12 units, so no scheduling algorithm can meet all deadlines.

15 RMS CPU utilization
- Utilization for n processes is U = Σ_i T_i / τ_i.
- As the number of tasks approaches infinity, the maximum guaranteed-schedulable utilization approaches ln 2 ≈ 69%.

16 RMS CPU utilization, cont'd.
- RMS cannot use 100% of the CPU, even with zero context-switch overhead.
- Idle cycles must be kept available to handle the worst-case scenario.
- However, RMS guarantees that all processes will always meet their deadlines.
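As a hedged numeric check of the 69% figure, the following C sketch evaluates the standard Liu-Layland bound n(2^(1/n) - 1) for growing n; the bound formula itself is a well-known result rather than something stated explicitly in the deck:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Liu-Layland worst-case schedulable utilization bound for RMS. */
        for (int n = 1; n <= 64; n *= 2) {
            double bound = n * (pow(2.0, 1.0 / n) - 1.0);
            printf("n = %2d: bound = %.3f\n", n, bound);
        }
        printf("limit as n -> infinity: ln 2 = %.3f\n", log(2.0)); /* about 0.693 */
        return 0;
    }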

17 Earliest-deadline-first scheduling (EDF)
- EDF: a dynamic-priority scheduling scheme.
- The process closest to its deadline has the highest priority.
- Requires recalculating process priorities at every timer interrupt.

18 EDF analysis
- EDF can use 100% of the CPU.
- But EDF may still fail to meet a deadline.

19 EDF example

Process  Execution time  Period
P1       1               3
P2       1               4
P3       2               5

Time  Running process  Time to deadlines
0     P1               d(P2)=3, d(P3)=4
1     P2               d(P3)=3
2     P3               d(P1)=3, d(P3)=2
3     P3               d(P1)=2, d(P2)=4
4     P1               d(P2)=3, d(P3)=5
5     P2               d(P1)=3, d(P3)=4
6     P1               d(P3)=3
7     P3               d(P3)=2, d(P2)=4
8     P3               d(P2)=3, d(P1)=3
9     P1               d(P2)=2, d(P3)=5
10    P2               d(P3)=4
11    P3               d(P1)=3, d(P3)=3, d(P2)=4
12    P1               d(P3)=2, d(P2)=3
13    P3               d(P2)=2, d(P1)=3
14    P2               d(P1)=2, d(P3)=5
15    P1               d(P2)=4, d(P3)=4

20 EDF implementation
- On each timer interrupt:
  - compute the time to each deadline;
  - choose the process closest to its deadline.
- Generally considered too expensive to use in practice.
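To make the selection step concrete, here is a minimal sketch (the task struct and names are invented for this illustration, not taken from the deck) of choosing the ready task with the earliest absolute deadline:

    #include <stddef.h>

    /* Hypothetical task record for this sketch only. */
    struct task {
        unsigned long deadline; /* absolute deadline, in timer ticks */
        int ready;              /* nonzero if the task is ready to run */
    };

    /* Return the index of the ready task with the earliest deadline, or -1 if none. */
    int edf_pick(const struct task *tasks, size_t n) {
        int best = -1;
        for (size_t i = 0; i < n; i++) {
            if (tasks[i].ready &&
                (best < 0 || tasks[i].deadline < tasks[best].deadline)) {
                best = (int)i;
            }
        }
        return best;
    }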

21 Fixing scheduling problems
- What do you do if your set of processes is unschedulable?
  - Change the deadlines in the requirements.
  - Reduce the execution times of the processes.
  - Get a faster CPU.

22 Priority inversion
- Priority inversion: a low-priority process keeps a high-priority process from running.
- Improper use of system resources can cause scheduling problems:
  - a low-priority process grabs an I/O device;
  - the high-priority process needs the I/O device, but can't get it until the low-priority process is done.
- Can cause deadlock.

23 Solving priority inversion
- Temporarily raise the low-priority process (the one holding the resource) to the highest priority.
- After that process has finished with the resource, restore its original low priority.
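A minimal sketch of this priority-boosting idea (commonly known as priority inheritance); the structures and helper names are hypothetical and not from the deck or any particular RTOS API:

    /* Hypothetical process and resource records for this sketch only. */
    struct proc {
        int priority;        /* current (possibly boosted) priority; lower value = higher priority */
        int base_priority;   /* original priority to restore */
    };

    struct resource {
        struct proc *holder; /* process currently holding the resource, or NULL */
    };

    /* Called when 'waiter' blocks on 'res': boost the holder if it has lower priority. */
    void inherit_priority(struct resource *res, struct proc *waiter) {
        if (res->holder && res->holder->priority > waiter->priority)
            res->holder->priority = waiter->priority;   /* temporary boost */
    }

    /* Called when the holder releases 'res': restore its original priority. */
    void release_resource(struct resource *res) {
        if (res->holder) {
            res->holder->priority = res->holder->base_priority;
            res->holder = 0;
        }
    }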

24 Data dependencies
- Data dependencies allow us to improve utilization.
  - They restrict the combinations of processes that can run simultaneously.
- P1 and P2 can't run simultaneously.
[Figure: dependency graph with an edge from P1 to P2.]

25 Context-switching time
- Non-zero context-switch time can push the limits of a tight schedule.
- Its effects are hard to calculate---they depend on the order of context switches.
- In practice, OS context-switch overhead is small (hundreds of clock cycles) relative to many common task periods (ms – μs).

26 Interprocess communication
- Interprocess communication (IPC): the OS provides mechanisms so that processes can pass data.
- Two types of semantics:
  - blocking: the sending process waits for a response;
  - non-blocking: the sending process continues.

27 IPC styles
- Shared memory:
  - processes have some memory in common;
  - they must cooperate to avoid destroying or missing messages.
- Message passing:
  - processes send messages along a communication channel---there is no common address space.

28 Shared memory
- Shared memory on a bus:
[Figure: a CPU, an I/O device, and a memory connected to a common bus.]

29 Race condition in shared memory
- Problem when the CPU and an I/O device try to write the same location:
  - the CPU reads the flag and sees 0;
  - the I/O device reads the flag and sees 0;
  - the CPU sets the flag to one and writes the location;
  - the I/O device also sets the flag to one and overwrites the location.

30 Atomic test-and-set
- The problem is solved by an atomic test-and-set:
  - a single bus operation reads the memory location, tests it, and writes it.
- The ARM test-and-set is provided by SWP (register-to-memory swap):

        ADR     r0, SEMAPHORE   ; r0 points to the semaphore location
        MOV     r1, #1          ; value used to claim the flag
    GETFLAG
        SWP     r1, r1, [r0]    ; atomically swap r1 with the semaphore
        CMP     r1, #0          ; was the old value 0 (flag free)?
        BNE     GETFLAG         ; no: someone else holds it, spin
    HASFLAG
        ...                     ; critical region
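For comparison (not from the deck), a hedged C11 sketch of the same spin-until-acquired idea using the standard atomic_flag test-and-set operation:

    #include <stdatomic.h>

    /* Shared flag playing the role of SEMAPHORE; 'clear' means free. */
    static atomic_flag lock = ATOMIC_FLAG_INIT;

    void get_flag(void) {
        /* atomic_flag_test_and_set atomically sets the flag and returns its old value. */
        while (atomic_flag_test_and_set(&lock)) {
            /* old value was 'set': someone else holds it, keep spinning */
        }
        /* critical region */
    }

    void release_flag(void) {
        atomic_flag_clear(&lock);
    }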

31 Critical regions
- Critical region: a section of code that cannot be interrupted by another process.
- Examples:
  - writing shared memory;
  - accessing an I/O device.

32 Semaphores
- Semaphore: an OS primitive for controlling access to critical regions.
- Protocol:
  - get access to the semaphore with P();
  - perform the critical-region operations;
  - release the semaphore with V().
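As a concrete, hedged illustration of the same P()/V() protocol using POSIX semaphores, where sem_wait plays the role of P() and sem_post the role of V(); the shared counter is just a stand-in for any data accessed in a critical region:

    #include <semaphore.h>

    static sem_t sem;
    static int shared_counter;   /* stand-in for data protected by the semaphore */

    void init(void) {
        sem_init(&sem, 0, 1);    /* binary semaphore, initially available */
    }

    void update(void) {
        sem_wait(&sem);          /* P(): block until the semaphore is available */
        shared_counter++;        /* critical region */
        sem_post(&sem);          /* V(): release the semaphore */
    }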

33 Message passing
- Message passing on a network:
[Figure: CPU 1 and CPU 2 exchanging a message over a communication link.]

34 Evaluating RTOS performance
- Simplifying assumptions:
  - a context switch costs no CPU time;
  - we know the exact execution time of processes;
  - WCET/BCET don't depend on context switches.

35 Scheduling and context-switch overhead

Process  Execution time  Deadline
P1       3               5
P2       3               10

- With a context-switch overhead of 1, there is no feasible schedule:
  2*T_P1 + T_P2 = 2*(1+3) + (1+3) = 12 > 10.
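A small, hedged check of the arithmetic on this slide; the variable names are invented for the illustration:

    #include <stdio.h>

    int main(void) {
        int c1 = 3, c2 = 3;   /* execution times of P1 and P2 */
        int ctx = 1;          /* context-switch overhead */
        int window = 10;      /* interval over which the demand is checked (P2's deadline) */

        /* P1 runs twice in the window, P2 once, each paying one context switch. */
        int demand = 2 * (ctx + c1) + (ctx + c2);
        printf("demand = %d, available = %d -> %s\n",
               demand, window, demand <= window ? "feasible" : "infeasible");
        return 0;
    }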

36 Process execution time
- Process execution time is not constant.
- Extra CPU time can be good.
- Extra CPU time can also be bad:
  - the next process may run earlier, causing a new preemption.

37 Processes and caches
- Processes can cause additional caching problems.
  - Even if individual processes are well-behaved, processes may interfere with each other in the cache.
- Worst-case execution time with bad cache behavior is usually much worse than execution time with good cache behavior.

38 Effects of scheduling on the cache

Process  WCET  Avg. CPU time
P1       8     6
P2       4     3
P3       4     3

- Schedule 1 (LRU cache shared by all processes).
- Schedule 2 (half of the cache reserved for P1): each process effectively works out of half of the cache, and on repeated iterations the retained cache contents can be reused.

39 Power optimization
- Power management: determining how system resources are scheduled and used in order to control power consumption.
- The OS can manage for power just as it manages for time.
- The OS reduces power by shutting down units.
  - There may be partial-shutdown modes.

40 Power management and performance
- Power management and performance are often at odds.
- Entering a power-down mode consumes:
  - energy,
  - time.
- Leaving a power-down mode consumes:
  - energy,
  - time.

41 Simple power management policies
- Request-driven: power up once a request is received. This adds delay to the response.
- Predictive shutdown: try to predict how long you have before the next request.
  - May power up in advance, in anticipation of a new request.
  - If the prediction is wrong, additional delay is incurred while starting up.

42 Probabilistic shutdown
- Assume service requests are probabilistic.
- Optimize expected values of:
  - power consumption;
  - response time.
- Simple probabilistic policy: shut down after time T_on, turn back on after waiting for T_off.
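A minimal sketch of the simple T_on/T_off policy described above, assuming a periodic timer tick; the state machine and all names are invented for this illustration:

    /* Hypothetical power-state controller for the simple T_on/T_off policy. */
    enum power_state { POWERED_ON, POWERED_OFF };

    struct power_ctl {
        enum power_state state;
        unsigned int timer;   /* ticks spent in the current state */
        unsigned int t_on;    /* stay on this long before shutting down */
        unsigned int t_off;   /* stay off this long before turning back on */
    };

    /* Call once per timer tick. */
    void power_tick(struct power_ctl *pc) {
        pc->timer++;
        if (pc->state == POWERED_ON && pc->timer >= pc->t_on) {
            pc->state = POWERED_OFF;   /* shut down after T_on */
            pc->timer = 0;
        } else if (pc->state == POWERED_OFF && pc->timer >= pc->t_off) {
            pc->state = POWERED_ON;    /* turn back on after T_off */
            pc->timer = 0;
        }
    }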

43 Advanced Configuration and Power Interface (ACPI)
- ACPI: an open standard for power-management services.
[Figure: layered stack---applications and power management on top of the OS kernel, which uses device drivers and the ACPI BIOS to control the hardware platform.]

44 ACPI global power states
- G3: mechanical off
- G2: soft off
- G1: sleeping state
  - S1: low wake-up latency with no loss of system context
  - S2: low latency with loss of CPU/cache state
  - S3: low latency with loss of all state except memory
  - S4: lowest-power sleeping state with all devices off
- G0: working state

