1 Scheduling

2 Main Points
Scheduling policy: what to do next, when there are multiple threads ready to run
– Or multiple packets to send, or web requests to serve, or …
Definitions
– Response time, throughput, predictability
Uniprocessor policies
– FIFO, round robin, optimal
– Multilevel feedback as an approximation of optimal
Multiprocessor policies
– Affinity scheduling, gang scheduling
Queueing theory
– Can you predict a system's response time?

3 Diagram of Process State
Maximum CPU utilization is obtained with multiprogramming.
[Figure: process-state diagram for two processes, P1 and P2]

4 Dispatcher
The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
– switching context
– switching to user mode
– jumping to the proper location in the user program to restart that program
Dispatch latency: the time it takes for the dispatcher to stop one process and start another running.

5 Example
You manage a web site that suddenly becomes wildly popular. What do you do?
– Buy more hardware?
– Implement a different scheduling policy?
– Turn away some users? Which ones?
How much worse will performance get if the web site becomes even more popular?

6 Definitions
Task/Job: a user request, e.g., a mouse click, web request, shell command, …
Latency/response time: how long does a task take to complete?
Throughput: how many tasks can be done per unit of time?
Overhead: how much extra work is done by the scheduler?
Fairness: how equal is the performance received by different users?
Predictability: how consistent is the performance over time?

7 More Definitions
Workload: the set of tasks for the system to perform
Preemptive scheduler: can take resources away from a running task
Work-conserving: the resource is used whenever there is a task to run
– For non-preemptive schedulers, work-conserving is not always better
Scheduling algorithm
– Takes a workload as input
– Decides which tasks to do first
– Performance metric (throughput, latency) as output
We consider only preemptive, work-conserving schedulers.

8 Scheduling Criteria
CPU utilization: keep the CPU as busy as possible
Throughput: # of processes that complete their execution per time unit
Turnaround time: amount of time to execute a particular process
Waiting time: amount of time a process has been waiting in the ready queue
Response time: amount of time from when a request was submitted until the first response is produced, not until the output is finished (for time-sharing environments)

9 Scheduling Criteria
CPU utilization – maximize
Throughput – maximize
Turnaround time – minimize
Waiting time – minimize
Response time – minimize

10 First In First Out (FIFO)
Schedule tasks in the order they arrive
– Continue running them until they complete or give up the processor
Example: memcached
– Facebook's cache of friend lists, …
On what workloads is FIFO particularly bad?

11 First-Come, First-Served (FCFS) Scheduling

Process   Burst Time
P1        24
P2        3
P3        3

Suppose that the processes arrive at (or close to) the same time, in the order P1, P2, P3. The Gantt chart for the schedule is:

P1: 0–24, P2: 24–27, P3: 27–30

Waiting time: P1 = 0, P2 = 24, P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
Turnaround time: P1 = ?, P2 = ?, P3 = ?

12 FCFS Scheduling (Cont.)
Now suppose that the processes arrive (at or close to the same time) in the order P2, P3, P1. The Gantt chart for the schedule is:

P2: 0–3, P3: 3–6, P1: 6–30

Waiting time: P1 = 6, P2 = 0, P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than the previous case.
Convoy effect: short processes stuck behind a long process.
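
As a worked check, here is a minimal Python sketch (not from the slides; the function name and task lists are illustrative) that reproduces both average waiting times:

```python
# Minimal sketch: FCFS waiting and turnaround times for tasks that all
# arrive at (close to) time 0 and run in list order.

def fcfs(bursts):
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)      # time spent waiting in the ready queue
        clock += burst
        turnaround.append(clock)   # completion time = waiting + burst
    return waiting, turnaround

for order, bursts in [("P1,P2,P3", [24, 3, 3]), ("P2,P3,P1", [3, 3, 24])]:
    w, _ = fcfs(bursts)
    print(order, "waiting:", w, "average:", sum(w) / len(w))
# P1,P2,P3 waiting: [0, 24, 27] average: 17.0
# P2,P3,P1 waiting: [0, 3, 6] average: 3.0
```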

13 Shortest Job First (SJF)
Always do the task that has the shortest remaining amount of work to do
– Often called Shortest Remaining Time First (SRTF)
Suppose five tasks arrive one right after another, but the first one is much longer than the others
– Which completes first in FIFO? Next?
– Which completes first in SJF? Next?

14 Example of SJF

Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3

SJF scheduling chart: P4: 0–3, P1: 3–9, P3: 9–16, P2: 16–24
(The waiting times below follow the chart, which treats all four processes as ready when the scheduler chooses.)

Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
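
A minimal sketch reproducing the slide's arithmetic, assuming (as the chart does) that all four processes are ready at time 0:

```python
# Minimal sketch: non-preemptive SJF over the slide's burst times,
# assuming all processes are ready at time 0.

bursts = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}

clock, waiting = 0, {}
for name in sorted(bursts, key=bursts.get):  # shortest burst first
    waiting[name] = clock                    # starts after shorter jobs
    clock += bursts[name]

print(waiting)                               # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(waiting.values()) / len(waiting))  # 7.0
```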

15 Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its next CPU burst; use these lengths to schedule the process with the shortest time.
SJF is optimal: it gives the minimum average waiting time for a given set of processes.
– The difficulty is knowing the length of the next CPU request.

16 Determining Length of Next CPU Burst
We can only estimate the length.
It can be estimated from the lengths of previous CPU bursts, using exponential averaging.

17 Prediction of the Length of the Next CPU Burst

18 Examples of Exponential Averaging
The estimate is τ_{n+1} = α t_n + (1 − α) τ_n, where t_n is the length of the n-th CPU burst and τ_n its predicted length.
α = 0:
– τ_{n+1} = τ_n
– Recent history does not count
α = 1:
– τ_{n+1} = t_n
– Only the actual last CPU burst counts
If we expand the formula, we get:
τ_{n+1} = α t_n + (1 − α) α t_{n−1} + … + (1 − α)^j α t_{n−j} + … + (1 − α)^{n+1} τ_0
Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.
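
A minimal sketch of this predictor; the initial guess τ_0 = 10, α = 1/2, and the burst sequence are illustrative values:

```python
# Minimal sketch of the exponential-averaging burst predictor:
#   tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n

def predictions(bursts, tau0=10.0, alpha=0.5):
    """Yield the prediction made before each observed burst."""
    tau = tau0
    for t in bursts:
        yield tau
        tau = alpha * t + (1 - alpha) * tau  # fold in the new observation

observed = [6, 4, 6, 4, 13, 13, 13]          # illustrative burst lengths
for t, tau in zip(observed, predictions(observed)):
    print(f"predicted {tau:5.2f}, observed {t:2d}")
```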

19 FIFO vs. SJF

20 Starvation and Sample Bias
Suppose you want to compare FIFO and SJF on some sequence of arriving tasks
– Compute average response time as the average over tasks that start/end in some window
Is this valid or invalid?

21 Round Robin
Each task gets the resource for a fixed period of time (a time quantum)
– If the task doesn't complete, it goes back in line
Need to pick a time quantum
– What if the time quantum is too long? Infinite?
– What if the time quantum is too short? One instruction?
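
To make the mechanism concrete, here is a minimal sketch of round robin with a fixed quantum, ignoring context-switch cost; the task names and lengths are illustrative:

```python
# Minimal sketch: round robin with a fixed time quantum, ignoring
# context-switch overhead.

from collections import deque

def round_robin(tasks, quantum):
    """tasks: dict of {name: work remaining}; returns completion times."""
    queue, clock, done = deque(tasks.items()), 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))  # back in line
        else:
            done[name] = clock
    return done

print(round_robin({"A": 5, "B": 2, "C": 4}, quantum=2))
# {'B': 4, 'C': 10, 'A': 11}
```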

22 Round Robin

23 Round Robin vs. FIFO Assuming zero-cost time slice, is Round Robin always better than FIFO?

24 Round Robin vs. FIFO

25 Round Robin vs. Fairness Is Round Robin always fair?

26 Mixed Workload

27 Max-Min Fairness
How do we balance a mixture of repeating tasks?
– Some are I/O bound and need only a little CPU
– Some are compute bound and can use as much CPU as they are assigned
One approach: maximize the minimum allocation given to a task
– Schedule the smallest task first, then split the remaining time using max-min (see the sketch below)
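
A minimal sketch of max-min fair allocation of a single CPU; the demands and helper name are illustrative:

```python
# Minimal sketch of max-min fairness: grant small demands in full,
# then split what is left evenly among the remaining tasks.

def max_min(demands, capacity=1.0):
    """demands: {task: fraction of CPU wanted}; returns allocations."""
    alloc, remaining = {}, dict(demands)
    while remaining:
        fair_share = capacity / len(remaining)
        small = {t: d for t, d in remaining.items() if d <= fair_share}
        if not small:                      # everyone wants more than an
            for t in remaining:            # equal split: divide evenly
                alloc[t] = fair_share
            return alloc
        for t, d in small.items():         # small demands get exactly
            alloc[t] = d                   # what they asked for
            capacity -= d
            del remaining[t]
    return alloc

print(max_min({"io_bound": 0.1, "cpu_a": 1.0, "cpu_b": 1.0}))
# {'io_bound': 0.1, 'cpu_a': 0.45, 'cpu_b': 0.45}
```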

28 Priority Scheduling
A priority number (integer) is associated with each process.
The CPU is allocated to the process with the highest priority (smallest integer = highest priority)
– Preemptive or non-preemptive
SJF is priority scheduling where priority is the predicted next CPU burst time.
Problem: starvation – low-priority processes may never execute.
Solution: aging – as time progresses, increase the priority of the process.

29 Multi-level Feedback Queue (MFQ)
Goals:
– Responsiveness
– Low overhead
– Starvation freedom
– Some tasks are high/low priority
– Fairness (among equal-priority tasks)
Not perfect at any of them!
– Used in Linux (and probably Windows, MacOS)

30 MFQ
A set of Round Robin queues
– Each queue has a separate priority
High-priority queues have short time slices
– Low-priority queues have long time slices
The scheduler picks the first thread in the highest-priority queue
Tasks start in the highest-priority queue
– If its time slice expires, a task drops one level (see the sketch below)
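
A minimal sketch of the MFQ bookkeeping just described; the number of levels, quanta, and task names are illustrative:

```python
# Minimal sketch of MFQ: new tasks enter the top queue; a task that
# uses its whole slice is demoted one level.

from collections import deque

QUANTA = [1, 2, 4]                     # short slices at high priority
queues = [deque() for _ in QUANTA]

def admit(task):
    queues[0].append(task)             # tasks start at highest priority

def pick():
    """Return (level, task) from the highest non-empty queue, or None."""
    for level, q in enumerate(queues):
        if q:
            return level, q.popleft()
    return None

def slice_expired(level, task):
    lower = min(level + 1, len(queues) - 1)
    queues[lower].append(task)         # drop one level

admit("editor"); admit("compiler")
level, task = pick()                   # (0, 'editor')
slice_expired(level, task)             # editor used its full slice
print(queues[0], queues[1])            # deque(['compiler']) deque(['editor'])
```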

31 MFQ

32 Uniprocessor Summary
FIFO is simple and minimizes overhead. If tasks are variable in size, then FIFO can have very poor average response time. If tasks are equal in size, FIFO is optimal in terms of average response time. Considering only the processor, SJF is optimal in terms of average response time. SJF is pessimal in terms of variance in response time.

33 Uniprocessor Summary
If tasks are variable in size, Round Robin approximates SJF. If tasks are equal in size, Round Robin will have very poor average response time. Tasks that intermix processor and I/O benefit from SJF and can do poorly under Round Robin. Max-min fairness can improve response time for I/O-bound tasks. Round Robin and max-min fairness both avoid starvation. By manipulating the assignment of tasks to priority queues, an MFQ scheduler can achieve a balance between responsiveness, low overhead, and fairness.

34 Scheduling Applications on Multiprocessors
Make effective use of multiple cores for running sequential tasks.
Adapt scheduling algorithms for parallel applications.

35 Scheduling Sequential Applications on Multiprocessors
One option: use a centralized MFQ
– The centralized lock can be a bottleneck
– Cache effects of a single ready list: cache-coherence overhead; the MFQ data structure would ping-pong between caches; fetching data from another cache can be even slower than re-fetching it from DRAM
– Cache reuse: a thread's data from the last time it ran is often still in its old cache
Alternative: an MFQ for each processor
– Once a thread is scheduled on a processor, it is returned to the same processor
– Balancing the workload across processors must be managed

36 Per-Processor Multi-level Feedback: Affinity Scheduling

37 Amdahl's Law
Speedup on a multiprocessor is limited by whatever runs sequentially.
Runtime >= sequential portion + parallel portion / # CPUs
Example:
– Suppose the scheduler lock is used 0.1% of the time
– Suppose the scheduler lock is 50x slower because of cache effects
– Runtime >= 5% + 95% / # CPUs
– The system is only ~2.5x faster with 100 processors than with 10 (worked check below)
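
A worked check of the slide's arithmetic (the helper name is illustrative):

```python
# Worked check: a lock held 0.1% of the time that is 50x slower
# acts like a 5% sequential portion under Amdahl's law.

def speedup(seq_frac, cpus):
    runtime = seq_frac + (1 - seq_frac) / cpus  # Amdahl's law
    return 1 / runtime

seq = 0.001 * 50                                # 5% effectively sequential
print(speedup(seq, 10))                         # ~6.9x over one CPU
print(speedup(seq, 100))                        # ~16.8x over one CPU
print(speedup(seq, 100) / speedup(seq, 10))     # ~2.44: only ~2.5x better
```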

38 Scheduling Parallel Applications on Multiprocessors
There might be a natural decomposition of a parallel application onto a set of processors
– Ex.: an image split across processors
Processes compete for processor resources.
Techniques:
– Oblivious scheduling
– Gang scheduling

39 Oblivious Scheduling of Parallel Programs
Each thread is scheduled as a completely independent entity.
Each processor time-slices its ready list independently of the other processors.

40 Oblivious Scheduling Design Patterns
Bulk synchronous parallelism
– Split work into roughly equal-sized chunks
– At each step, computation is limited by the slowest processor
Producer-consumer parallelism
– The results of one thread are fed to the next thread
– Preempting a thread can stall all threads in the chain

41 Bulk Synchronous Parallel Program

42 Oblivious Scheduling Problems
Considerations:
– Critical path delay
– Locks
– Parallel I/O
In each of these, the scheduler may not know enough about the application's decomposition to best split the work across processors.

43 Gang Scheduling of Parallel Programs
Schedule all the tasks of a program together.
The application picks some decomposition of work into some number of threads, and those threads run together or not at all.

44 Co-Scheduling

45 Space Sharing
Scheduler activations: the kernel informs the user-level library of the number of processors assigned to that application, with upcalls every time the assignment changes.

46 Energy-Aware Scheduling
Considers the effect of scheduling decisions on battery life.
Scheduling optimizations:
– Processor high/low power modes; the OS can decide which is appropriate
– Disable one or more chips to conserve power
– Power off I/O devices

47 Real-Time Scheduling
Real-time constraints: computation that must be completed by a deadline.
Widely used techniques:
– Over-provisioning
– Earliest Deadline First
– Priority donation

48 Real-Time Scheduling
Over-provisioning
– Ensure that all real-time tasks, in aggregate, use only a fraction of the system's processing power.
Earliest Deadline First (EDF)
– Run processes in deadline order, earliest first (see the sketch below).
Priority donation
– When a high-priority task waits for a lower-priority one (with equal real-time deadlines), it donates its priority to make sure the lower-priority task can finish.
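
A minimal sketch of EDF for a set of tasks that are all ready at once; the tasks are illustrative:

```python
# Minimal sketch of EDF: run ready tasks in deadline order and report
# whether each deadline is met.

import heapq

def edf(tasks):
    """tasks: list of (deadline, name, burst); returns (name, finish, met)."""
    heap = list(tasks)
    heapq.heapify(heap)                 # min-heap keyed on deadline
    clock, schedule = 0, []
    while heap:
        deadline, name, burst = heapq.heappop(heap)
        clock += burst
        schedule.append((name, clock, clock <= deadline))
    return schedule

print(edf([(10, "A", 4), (4, "B", 2), (7, "C", 3)]))
# [('B', 2, True), ('C', 5, True), ('A', 9, True)]
```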

49 Queueing Theory
Can we predict what will happen to user performance:
– If a service becomes more popular?
– If we buy more hardware?
– If we change the implementation to provide more features?

50 Queueing Model

51 Definitions
Queueing delay: wait time in the queue
Service time: time to service the request
Response time = queueing delay + service time
Utilization: fraction of time the server is busy
– Utilization = service time × arrival rate
Throughput: rate of task completions
– If there is no overload, throughput = arrival rate

52 Queueing
What is the best-case scenario for minimizing queueing delay?
– Holding arrival rate and service time constant
What is the worst-case scenario?

53 Queueing: Best Case

54 Queueing: Worst Case

55 Queueing: Average Case?
Gaussian: arrivals are spread out around a mean value
Exponential: arrivals are memoryless
Heavy-tailed: arrivals are bursty

56 Exponential Distribution
Permits a closed-form solution for the state probabilities, as a function of arrival rate and service rate.

57 Response Time vs. Utilization
R = S / (1 − U) (worked example below)
– Better if arrivals are Gaussian
– Worse if arrivals are heavy-tailed
Variance in R = S / (1 − U)^2
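
A worked example of this formula (the 10 ms service time is chosen for illustration):

```python
# Worked example of R = S / (1 - U): response time explodes as
# utilization approaches 1.

S = 0.010                                   # service time in seconds
for U in (0.1, 0.5, 0.9, 0.99):
    R = S / (1 - U)
    print(f"U = {U:4}:  R = {R * 1000:7.1f} ms")
# U =  0.1:  R =    11.1 ms
# U =  0.5:  R =    20.0 ms
# U =  0.9:  R =   100.0 ms
# U = 0.99:  R =  1000.0 ms
```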

58 What if There Are Multiple Resources?
Response time = Σ over all resources i of S_i / (1 − U_i), where S_i is the service time and U_i the utilization of resource i.
Implication: if you fix one bottleneck, the next most highly utilized resource will limit performance (see the sketch below).
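
A minimal sketch of this sum; the resource numbers are illustrative:

```python
# Minimal sketch: total response time summed over resources. After the
# CPU is upgraded, the disk becomes the limiting resource.

def response_time(resources):
    """resources: {name: (service_time_s, utilization)}"""
    return sum(s / (1 - u) for s, u in resources.values())

before = {"cpu": (0.005, 0.95), "disk": (0.010, 0.60)}
after  = {"cpu": (0.005, 0.50), "disk": (0.010, 0.60)}  # faster CPU

print(response_time(before))   # 0.125  (dominated by the CPU term)
print(response_time(after))    # 0.035  (now limited by the disk term)
```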

59 Overload Management
What if arrivals occur faster than the service can handle them?
– If we do nothing, response time will become infinite.
Turn users away?
– Which ones? Average response time is best if we turn away the users with the highest service demand.
Degrade service?
– Compute the result with fewer resources
– Example: CNN's static front page on 9/11
– Counterexample: highway congestion

60 Why Do Metro Buses Cluster?
Suppose two Metro buses start 15 minutes apart
– Why might they arrive at the same time?

