1 Processes and Threads Chapter 2: 2.1 Processes 2.2 Threads 2.3 Interprocess communication 2.4 Classical IPC problems 2.5 Scheduling

2 Processes The Process Model Multiprogramming of four programs Conceptual model of 4 independent, sequential processes Only one program active at any instant

3 Process Creation Principal events that cause process creation: 1. System initialization 2. Execution of a process-creation system call by a running process 3. A user request to create a new process 4. Initiation of a batch job

4 Process Termination Conditions which terminate processes 1.Normal exit (voluntary) 2.Error exit (voluntary) 3.Fatal error (involuntary) 4.Killed by another process (involuntary)

5 Process Hierarchies Parent creates a child process; child processes can in turn create their own processes Forms a hierarchy –UNIX calls this a "process group" Windows has no concept of a process hierarchy –all processes are created equal

6 Process States (1) Possible process states –running –blocked –ready Transitions between states shown

7 Process States (2) Lowest layer of process-structured OS –handles interrupts, scheduling Above that layer are sequential processes

8 Implementation of Processes (1) Fields of a process table entry

9 Implementation of Processes (2) Skeleton of what lowest level of OS does when an interrupt occurs

10 Threads The Thread Model (1) (a) Three processes each with one thread (b) One process with three threads

11 The Thread Model (2) Items shared by all threads in a process Items private to each thread

12 The Thread Model (3) Each thread has its own stack

13 Thread Usage (1) A word processor with three threads

14 Thread Usage (2) A multithreaded Web server

15 Thread Usage (3) Rough outline of code for previous slide (a) Dispatcher thread (b) Worker thread

16 Thread Usage (4) Three ways to construct a server

17 Implementing Threads in User Space A user-level threads package

18 Implementing Threads in the Kernel A threads package managed by the kernel

19 Hybrid Implementations Multiplexing user-level threads onto kernel-level threads

20 Scheduler Activations Goal – mimic functionality of kernel threads –gain performance of user space threads Avoids unnecessary user/kernel transitions Kernel assigns virtual processors to each process –lets runtime system allocate threads to processors Problem: Fundamental reliance on kernel (lower layer) calling procedures in user space (higher layer)

21 Pop-Up Threads Creation of a new thread when message arrives (a) before message arrives (b) after message arrives

22 Making Single-Threaded Code Multithreaded (1) Conflicts between threads over the use of a global variable

23 Making Single-Threaded Code Multithreaded (2) Threads can have private global variables

24 Interprocess Communication Race Conditions Two processes want to access shared memory at same time

25 Critical Regions (1) Four conditions to provide mutual exclusion 1. No two processes simultaneously in critical region 2. No assumptions made about speeds or numbers of CPUs 3. No process running outside its critical region may block another process 4. No process must wait forever to enter its critical region

26 Critical Regions (2) Mutual exclusion using critical regions

27 Algorithm 1 Shared variables: –int turn; initially turn = 0 –turn == i means Pi can enter its critical section Process Pi: do { while (turn != i) ; /* busy wait */ critical section turn = j; remainder section } while (1);

28 Mutual Exclusion with Busy Waiting (1) Proposed solution to critical region problem (a) Process 0. (b) Process 1.

29 Mutual Exclusion with Busy Waiting (2) Peterson's solution for achieving mutual exclusion
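The figure with Peterson's code is not reproduced in this transcript. Below is a minimal compilable sketch of the two-process algorithm; the use of C11 atomics, the worker threads, and the shared counter are illustrative additions rather than the book's exact listing.

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

static atomic_bool interested[2];  /* interested[i] is true while process i wants the region */
static atomic_int  turn;           /* whose turn it is to wait if both are interested */
static long        counter;        /* shared data protected by the lock */

static void enter_region(int self)            /* self is 0 or 1 */
{
    int other = 1 - self;
    atomic_store(&interested[self], true);    /* announce interest */
    atomic_store(&turn, self);                /* offer to be the one who waits */
    while (atomic_load(&interested[other]) && atomic_load(&turn) == self)
        ;                                     /* busy wait */
}

static void leave_region(int self)
{
    atomic_store(&interested[self], false);   /* done with the critical region */
}

static void *worker(void *arg)
{
    int self = (int)(long)arg;
    for (int i = 0; i < 100000; i++) {
        enter_region(self);
        counter++;                            /* critical region */
        leave_region(self);
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}

Compiled with something like cc -std=c11 -pthread, the final count is exact; removing the enter/leave calls makes lost updates visible.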

30 Synchronization Hardware Test and modify the content of a word atomically. boolean TestAndSet(boolean &target) { boolean rv = target; target = true; return rv; }

31 Mutual Exclusion with Test-and-Set Shared data: boolean lock = false; Process P i do { while (TestAndSet(lock)) ; critical section lock = false; remainder section }

32 Mutual Exclusion with Busy Waiting (3) Entering and leaving a critical region using the TSL instruction

33 Synchronization Hardware Atomically swap two variables. void Swap(boolean &a, boolean &b) { boolean temp = a; a = b; b = temp; }

34 Mutual Exclusion with Swap Shared data (initialized to false): boolean lock; Each process also has a local variable: boolean key; Process Pi do { key = true; while (key == true) Swap(lock, key); critical section lock = false; remainder section }

35 Semaphores Synchronization tool that does not require busy waiting. Semaphore S – integer variable can only be accessed via two indivisible (atomic) operations wait/down(S): while S <= 0 do no-op; S--; signal/up(S): S++;

36 Critical Section of n Processes Shared data: semaphore mutex; //initially mutex = 1 Process Pi: do { wait(mutex); critical section signal(mutex); remainder section } while (1);

37 Semaphore Implementation Define a semaphore as a record typedef struct { int value; struct process *L; } semaphore; Assume two simple operations: –block suspends the process that invokes it. –wakeup(P) resumes the execution of a blocked process P.

38 Implementation Semaphore operations now defined as wait/down(S): S.value--; if (S.value < 0) { add this process to S.L; block; } signal/up(S): S.value++; if (S.value <= 0) { remove a process P from S.L; wakeup(P); }

39 Mutexes Implementation of mutex_lock and mutex_unlock

40 Semaphore as a General Synchronization Tool Execute B in Pj only after A executed in Pi Use semaphore flag initialized to 0 Code: Pi: A; signal/up(flag) Pj: wait/down(flag); B
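As a concrete illustration of this ordering pattern, here is a small sketch with POSIX semaphores; the thread names and printed messages are made up for the example.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t flag;                     /* initialized to 0: B must wait for A */

static void *pi(void *arg)
{
    printf("A: statement in Pi\n");    /* A */
    sem_post(&flag);                   /* signal/up(flag) */
    return NULL;
}

static void *pj(void *arg)
{
    sem_wait(&flag);                   /* wait/down(flag) */
    printf("B: statement in Pj\n");    /* B runs only after A */
    return NULL;
}

int main(void)
{
    pthread_t ti, tj;
    sem_init(&flag, 0, 0);
    pthread_create(&tj, NULL, pj, NULL);
    pthread_create(&ti, NULL, pi, NULL);
    pthread_join(ti, NULL);
    pthread_join(tj, NULL);
    sem_destroy(&flag);
    return 0;
}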

41 Deadlock and Starvation Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes. Let S and Q be two semaphores initialized to 1 P0: wait(S); wait(Q); … signal(S); signal(Q); P1: wait(Q); wait(S); … signal(Q); signal(S); Starvation – indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.

42 Two Types of Semaphores Counting semaphore – integer value can range over an unrestricted domain. Binary semaphore – integer value can range only between 0 and 1; can be simpler to implement. Can implement a counting semaphore S as a binary semaphore.

43 Implementing S as a Binary Semaphore Data structures: binary-semaphore S1, S2; int C; Initialization: S1 = 1 S2 = 0 C = initial value of semaphore S

44 Implementing S wait operation wait(S1); C--; if (C < 0) { signal(S1); wait(S2); } signal(S1); signal operation wait(S1); C ++; if (C <= 0) signal(S2); else signal(S1);

45 Sleep and Wakeup Producer-consumer problem with fatal race condition

46 Semaphores The producer-consumer problem using semaphores

47 Semaphores The producer-consumer problem using semaphores
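Since the transcript does not reproduce the code in the figure, here is a minimal sketch of the same idea with POSIX semaphores; the buffer size N, the item counts, and the inlined buffer handling are illustrative assumptions, not the book's exact listing.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 100                       /* number of slots in the buffer */

static int buffer[N];
static int in, out;                 /* next free slot / next full slot */
static sem_t mutex;                 /* binary semaphore guarding the buffer, starts at 1 */
static sem_t empty;                 /* counts empty slots, starts at N */
static sem_t full;                  /* counts full slots, starts at 0 */

static void *producer(void *arg)
{
    for (int item = 0; item < 1000; item++) {
        sem_wait(&empty);           /* down(empty): wait for a free slot */
        sem_wait(&mutex);           /* down(mutex): enter critical region */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);           /* up(mutex): leave critical region */
        sem_post(&full);            /* up(full): one more full slot */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    long sum = 0;
    for (int i = 0; i < 1000; i++) {
        sem_wait(&full);            /* down(full): wait for an item */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);           /* up(empty): one more empty slot */
        sum += item;
    }
    printf("consumed sum = %ld\n", sum);
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&mutex, 0, 1);
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}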

48 Monitors (1) Example of a monitor

49 Monitors (2) Outline of producer-consumer problem with monitors –only one monitor procedure active at one time –buffer has N slots
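C has no monitor construct, so a common approximation is sketched below: a pthread mutex stands in for the monitor's implicit lock (so only one "monitor procedure" is active at a time) and condition variables stand in for its wait/signal. The buffer size and item counts are illustrative, and this is an emulation of the monitor idea, not the book's Pascal-like or Java listing.

#include <pthread.h>
#include <stdio.h>

#define N 100

static int buffer[N], count, in, out;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

static void insert(int item)                  /* "monitor procedure" insert */
{
    pthread_mutex_lock(&lock);
    while (count == N)
        pthread_cond_wait(&not_full, &lock);  /* wait until a slot frees up */
    buffer[in] = item;
    in = (in + 1) % N;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

static int remove_item(void)                  /* "monitor procedure" remove */
{
    pthread_mutex_lock(&lock);
    while (count == 0)
        pthread_cond_wait(&not_empty, &lock);
    int item = buffer[out];
    out = (out + 1) % N;
    count--;
    pthread_cond_signal(&not_full);
    pthread_mutex_unlock(&lock);
    return item;
}

static void *producer(void *arg)
{
    for (int i = 0; i < 1000; i++)
        insert(i);
    return NULL;
}

static void *consumer(void *arg)
{
    long sum = 0;
    for (int i = 0; i < 1000; i++)
        sum += remove_item();
    printf("sum = %ld\n", sum);
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}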

50 Monitors (3) Solution to producer-consumer problem in Java (part 1)

51 Monitors (4) Solution to producer-consumer problem in Java (part 2)

52 Monitors

53 Message Passing The producer-consumer problem with N messages

54 Barriers Use of a barrier –processes approaching a barrier –all processes but one blocked at barrier –last process arrives, all are let through
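POSIX exposes this primitive directly as pthread_barrier_t (an optional POSIX extension, so availability varies by platform). A minimal sketch follows; the thread count and phase messages are chosen only for illustration.

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static pthread_barrier_t barrier;

static void *phase_worker(void *arg)
{
    long id = (long)arg;
    printf("thread %ld: finished phase 1\n", id);
    pthread_barrier_wait(&barrier);   /* block until all NTHREADS have arrived */
    printf("thread %ld: starting phase 2\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    pthread_barrier_init(&barrier, NULL, NTHREADS);
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, phase_worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&barrier);
    return 0;
}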

55 Dining Philosophers (1) Philosophers eat/think Eating needs 2 forks Pick one fork at a time How to prevent deadlock

56 Dining Philosophers (2) A nonsolution to the dining philosophers problem

57 Dining Philosophers (3) Solution to dining philosophers problem (part 1)

58 Dining Philosophers (4) Solution to dining philosophers problem (part 2)
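A compilable sketch following the same structure as the figure's solution (a per-philosopher state, one semaphore per philosopher, and a mutex around the state array) is given below; the pthread scaffolding, the round count, and the short sleep used for "thinking" are illustrative additions.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define N        5
#define LEFT(i)  (((i) + N - 1) % N)
#define RIGHT(i) (((i) + 1) % N)
enum { THINKING, HUNGRY, EATING };

static int   state[N];
static sem_t mutex;      /* protects state[] */
static sem_t s[N];       /* one per philosopher, blocks while forks are unavailable */

static void test(int i)  /* can philosopher i start eating? */
{
    if (state[i] == HUNGRY && state[LEFT(i)] != EATING && state[RIGHT(i)] != EATING) {
        state[i] = EATING;
        sem_post(&s[i]);
    }
}

static void take_forks(int i)
{
    sem_wait(&mutex);
    state[i] = HUNGRY;
    test(i);             /* try to acquire both forks */
    sem_post(&mutex);
    sem_wait(&s[i]);     /* block if the forks were not acquired */
}

static void put_forks(int i)
{
    sem_wait(&mutex);
    state[i] = THINKING;
    test(LEFT(i));       /* a neighbour may now be able to eat */
    test(RIGHT(i));
    sem_post(&mutex);
}

static void *philosopher(void *arg)
{
    int i = (int)(long)arg;
    for (int round = 0; round < 3; round++) {
        usleep(1000);    /* think */
        take_forks(i);
        printf("philosopher %d eating\n", i);
        put_forks(i);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    sem_init(&mutex, 0, 1);
    for (int i = 0; i < N; i++)
        sem_init(&s[i], 0, 0);
    for (long i = 0; i < N; i++)
        pthread_create(&t[i], NULL, philosopher, (void *)i);
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}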

59 The Readers and Writers Problem A solution to the readers and writers problem
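The classic reader-priority solution keeps a reader count protected by one semaphore and uses a second semaphore for exclusive access to the shared data. A sketch is shown below; the single shared integer and the three demo threads are illustrative, not the figure's exact code.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t mutex;   /* protects rc */
static sem_t db;      /* exclusive access to the shared data */
static int   rc;      /* number of readers currently reading */
static int   data_value;

static void *reader(void *arg)
{
    sem_wait(&mutex);
    if (++rc == 1) sem_wait(&db);      /* first reader locks out writers */
    sem_post(&mutex);

    printf("read %d\n", data_value);   /* read the shared data */

    sem_wait(&mutex);
    if (--rc == 0) sem_post(&db);      /* last reader lets writers back in */
    sem_post(&mutex);
    return NULL;
}

static void *writer(void *arg)
{
    sem_wait(&db);                     /* exclusive access */
    data_value++;                      /* update the shared data */
    sem_post(&db);
    return NULL;
}

int main(void)
{
    pthread_t r1, r2, w;
    sem_init(&mutex, 0, 1);
    sem_init(&db, 0, 1);
    pthread_create(&r1, NULL, reader, NULL);
    pthread_create(&w,  NULL, writer, NULL);
    pthread_create(&r2, NULL, reader, NULL);
    pthread_join(r1, NULL);
    pthread_join(w, NULL);
    pthread_join(r2, NULL);
    return 0;
}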

60 The Sleeping Barber Problem (1)

61 The Sleeping Barber Problem (2) Solution to sleeping barber problem.

62 Scheduling Introduction to Scheduling (1) Bursts of CPU usage alternate with periods of I/O wait –a CPU-bound process –an I/O bound process

63 Introduction to Scheduling (2) Scheduling Algorithm Goals

64 Histogram of CPU-burst Times

65 CPU Scheduler Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them. CPU scheduling decisions may take place when a process: 1.Switches from running to waiting state. 2.Switches from running to ready state. 3.Switches from waiting to ready. 4.Terminates. Scheduling under 1 and 4 is nonpreemptive. All other scheduling is preemptive.

66 Dispatcher Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves: –switching context –switching to user mode –jumping to the proper location in the user program to restart that program Dispatch latency – time it takes for the dispatcher to stop one process and start another running.

67 Scheduling Criteria CPU utilization – keep the CPU as busy as possible Throughput – # of processes that complete their execution per time unit Turnaround time – amount of time to execute a particular process Waiting time – amount of time a process has been waiting in the ready queue Response time – amount of time from when a request was submitted until the first response is produced, not the final output (for time-sharing environments)

68 Optimization Criteria Max CPU utilization Max throughput Min turnaround time Min waiting time Min response time

69 First-Come, First-Served (FCFS) Scheduling Process / Burst Time: P1 = 24, P2 = 3, P3 = 3 Suppose that the processes arrive in the order: P1, P2, P3 The Gantt chart for the schedule is: | P1: 0–24 | P2: 24–27 | P3: 27–30 | Waiting time for P1 = 0; P2 = 24; P3 = 27 Average waiting time: (0 + 24 + 27)/3 = 17

70 FCFS Scheduling (Cont.) Suppose that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is: | P2: 0–3 | P3: 3–6 | P1: 6–30 | Waiting time for P1 = 6; P2 = 0; P3 = 3 Average waiting time: (6 + 0 + 3)/3 = 3 Much better than the previous case. Convoy effect: short processes stuck behind a long process
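The arithmetic above generalizes easily; the small helper below (the function name fcfs_avg_wait is just an illustrative choice, and all processes are assumed to arrive at time 0, as in the slides) reproduces both averages.

#include <stdio.h>

/* FCFS waiting time: each process waits for the sum of the bursts ahead of it. */
static double fcfs_avg_wait(const int burst[], int n)
{
    int wait = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += wait;    /* waiting time of process i */
        wait += burst[i];      /* everyone behind also waits for this burst */
    }
    return (double)total_wait / n;
}

int main(void)
{
    int order1[] = {24, 3, 3};   /* arrival order P1, P2, P3 */
    int order2[] = {3, 3, 24};   /* arrival order P2, P3, P1 */
    printf("P1,P2,P3: %.1f\n", fcfs_avg_wait(order1, 3));  /* prints 17.0 */
    printf("P2,P3,P1: %.1f\n", fcfs_avg_wait(order2, 3));  /* prints 3.0 */
    return 0;
}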

71 Shortest-Job-First (SJF) Scheduling Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time. Two schemes: –nonpreemptive – once the CPU is given to the process, it cannot be preempted until it completes its CPU burst. –preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF). SJF is optimal – gives the minimum average waiting time for a given set of processes.

72 Example of Non-Preemptive SJF Process / Arrival Time / Burst Time: P1 0.0 7, P2 2.0 4, P3 4.0 1, P4 5.0 4 SJF (non-preemptive) Gantt chart: | P1: 0–7 | P3: 7–8 | P2: 8–12 | P4: 12–16 | Average waiting time = (0 + 6 + 3 + 7)/4 = 4

73 Example of Preemptive SJF Process / Arrival Time / Burst Time: P1 0.0 7, P2 2.0 4, P3 4.0 1, P4 5.0 4 SJF (preemptive) Gantt chart: | P1: 0–2 | P2: 2–4 | P3: 4–5 | P2: 5–7 | P4: 7–11 | P1: 11–16 | Average waiting time = (9 + 1 + 0 + 2)/4 = 3

74 Determining Length of Next CPU Burst Can only estimate the length. Can be done by using the length of previous CPU bursts, using exponential averaging.
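The defining recurrence appears only in the figure; in the usual notation, with t_n the measured length of the n-th CPU burst, τ_{n+1} the predicted value for the next burst, and a weight 0 ≤ α ≤ 1, it is:

τ_{n+1} = α t_n + (1 − α) τ_n

Slide 76 below expands exactly this recurrence.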

75 Prediction of the Length of the Next CPU Burst

76 Examples of Exponential Averaging α = 0 –τ_{n+1} = τ_n –Recent history does not count. α = 1 –τ_{n+1} = t_n –Only the actual last CPU burst counts. If we expand the formula, we get: τ_{n+1} = α t_n + (1 − α) α t_{n−1} + … + (1 − α)^j α t_{n−j} + … + (1 − α)^{n+1} τ_0 Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.

77 Priority Scheduling A priority number (integer) is associated with each process The CPU is allocated to the process with the highest priority (smallest integer = highest priority). –Preemptive –Nonpreemptive SJF is priority scheduling where priority is the predicted next CPU burst time. Problem: Starvation – low-priority processes may never execute. Solution: Aging – as time progresses, increase the priority of the process.

78 Round Robin (RR) Each process gets a small unit of CPU time (time quantum), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n − 1)q time units. Performance: –q large: behaves like FIFO –q small: q must be large with respect to the context-switch time, otherwise overhead is too high.

79 Example of RR with Time Quantum = 20 Process / Burst Time: P1 = 53, P2 = 17, P3 = 68, P4 = 24 The Gantt chart is: | P1: 0–20 | P2: 20–37 | P3: 37–57 | P4: 57–77 | P1: 77–97 | P3: 97–117 | P4: 117–121 | P1: 121–134 | P3: 134–154 | P3: 154–162 | Typically, higher average turnaround than SJF, but better response.
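A tiny simulation, written only to reproduce the trace above (all four processes are assumed to arrive at time 0), prints the same sequence of time slices.

#include <stdio.h>

int main(void)
{
    int remaining[] = {53, 17, 68, 24};    /* burst times of P1..P4 */
    const int n = 4, quantum = 20;
    int time = 0, left = n;

    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0)
                continue;                  /* this process already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            printf("t=%3d..%3d  P%d\n", time, time + slice, i + 1);
            time += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0)
                left--;
        }
    }
    return 0;
}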

80 Time Quantum and Context Switch Time

81 Turnaround Time Varies With The Time Quantum

82 Multilevel Queue Ready queue is partitioned into separate queues: foreground (interactive) and background (batch) Each queue has its own scheduling algorithm: foreground – RR, background – FCFS Scheduling must also be done between the queues: –Fixed-priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation. –Time slice – each queue gets a certain amount of CPU time which it can schedule among its processes; e.g., 80% to foreground in RR, 20% to background in FCFS

83 Multilevel Queue Scheduling

84 Multilevel Feedback Queue A process can move between the various queues; aging can be implemented this way. Multilevel-feedback-queue scheduler defined by the following parameters: –number of queues –scheduling algorithms for each queue –method used to determine when to upgrade a process –method used to determine when to demote a process –method used to determine which queue a process will enter when that process needs service

85 Example of Multilevel Feedback Queue Three queues: –Q 0 – time quantum 8 milliseconds –Q 1 – time quantum 16 milliseconds –Q 2 – FCFS Scheduling –A new job enters queue Q 0 which is served FCFS. When it gains CPU, job receives 8 milliseconds. If it does not finish in 8 milliseconds, job is moved to queue Q 1. –At Q 1 job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q 2.

86 Multilevel Feedback Queues

87 Multiple-Processor Scheduling CPU scheduling more complex when multiple CPUs are available. Homogeneous processors within a multiprocessor. Load sharing Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing.

88 Real-Time Scheduling Hard real-time systems – required to complete a critical task within a guaranteed amount of time. Soft real-time computing – requires that critical processes receive priority over less fortunate ones.

89 Dispatch Latency

90 Algorithm Evaluation Deterministic modeling – takes a particular predetermined workload and defines the performance of each algorithm for that workload. Queueing models Implementation

91 Evaluation of CPU Schedulers by Simulation

92 Scheduling in Batch Systems (1) An example of shortest job first scheduling

93 Scheduling in Batch Systems (2) Three level scheduling

94 Scheduling in Interactive Systems (1) Round Robin Scheduling –list of runnable processes –list of runnable processes after B uses up its quantum

95 Scheduling in Interactive Systems (2) A scheduling algorithm with four priority classes

96 Scheduling in Real-Time Systems Schedulable real-time system Given –m periodic events –event i occurs within period Pi and requires Ci seconds Then the load can only be handled if C1/P1 + C2/P2 + … + Cm/Pm ≤ 1
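Checking the condition is just a one-line sum; in the sketch below, the three example periods and CPU demands are illustrative values, not data taken from the slide.

#include <stdio.h>

/* Checks the schedulability condition: sum of C_i / P_i must not exceed 1. */
int main(void)
{
    double period[] = {100, 200, 500};   /* P_i in msec (example values) */
    double cpu[]    = { 50,  30, 100};   /* C_i in msec (example values) */
    const int m = 3;

    double load = 0.0;
    for (int i = 0; i < m; i++)
        load += cpu[i] / period[i];

    printf("load = %.3f -> %s\n", load,
           load <= 1.0 ? "schedulable" : "not schedulable");
    return 0;
}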

97 Policy versus Mechanism Separate what is allowed to be done with how it is done –a process knows which of its children threads are important and need priority Scheduling algorithm parameterized –mechanism in the kernel Parameters filled in by user processes –policy set by user process

98 Thread Scheduling (1) Possible scheduling of user-level threads 50-msec process quantum threads run 5 msec/CPU burst

99 Thread Scheduling (2) Possible scheduling of kernel-level threads 50-msec process quantum threads run 5 msec/CPU burst