
1 MODERN OPERATING SYSTEMS Third Edition ANDREW S. TANENBAUM Chapter 2 Processes and Threads Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

2 Outline Process Process scheduling Threads Race condition/synchronization

3 Pseudoparallelism In a uniprocessor system, at any instant, the CPU is running only one process. In a multiprogramming system, however, the CPU switches between processes rapidly, running each for tens or hundreds of milliseconds.

4 Figure 2-1. (a) Multiprogramming of four programs. (b) Conceptual model of four independent, sequential processes. (c) Only one program is active at once. The Process Model Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

5 Execution is non-reproducible
Example: two programs share the variable N
    Program A:  repeat N := N + 1;
    Program B:  repeat print(N); N := 0;
Suppose that at some instant N = n. Depending on the execution sequence:
    N := N + 1; print(N); N := 0;   then N is: n + 1; n + 1; 0
    print(N); N := 0; N := N + 1;   then N is: n; 0; 1
    print(N); N := N + 1; N := 0;   then N is: n; n + 1; 0
Another example: an I/O process starts a tape streamer and wants to read the 1st record. It decides to loop 10,000 times to wait for the streamer to get up to speed; if the process is preempted during that loop, the timing assumption no longer holds.
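A minimal pthread-based C rendering of programs A and B above (the thread setup and iteration counts are illustrative assumptions, not part of the slide); because nothing synchronizes the two threads, the values printed differ from run to run:

#include <pthread.h>
#include <stdio.h>

int N = 0;                          /* shared variable, no synchronization */

void *program_a(void *arg)          /* repeat N := N + 1 */
{
    for (int i = 0; i < 1000000; i++)
        N = N + 1;
    return NULL;
}

void *program_b(void *arg)          /* repeat print(N); N := 0 */
{
    for (int i = 0; i < 5; i++) {
        printf("N = %d\n", N);
        N = 0;
    }
    return NULL;
}

int main(void)                      /* build with: gcc -pthread race.c */
{
    pthread_t a, b;
    pthread_create(&a, NULL, program_a, NULL);
    pthread_create(&b, NULL, program_b, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;                       /* the output depends on the interleaving */
}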

6 Events which cause process creation: System initialization. Execution of a process creation system call by a running process. A user request to create a new process. Initiation of a batch job. Process Creation Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

7 Events which cause process termination: Normal exit (voluntary). Error exit (voluntary). Fatal error (involuntary). Killed by another process (involuntary). Process Termination Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

8 Figure 2-2. A process can be in running, blocked, or ready state. Transitions between these states are as shown. Process States Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

9 Figure 2-3. The lowest layer of a process-structured operating system handles interrupts and scheduling. Above that layer are sequential processes. Implementation of Processes (1) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

10 Figure 2-4. Some of the fields of a typical process table entry. Implementation of Processes (2) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

11 Figure 2-5. Skeleton of what the lowest level of the operating system does when an interrupt occurs. Implementation of Processes (3) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

12 Figure 2-6. CPU utilization as a function of the number of processes in memory. Modeling Multiprogramming Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

13 Figure 2-38. Bursts of CPU usage alternate with periods of waiting for I/O. (a) A CPU-bound process. (b) An I/O-bound process. Scheduling – Process Behavior Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

14 Batch Interactive Real time Categories of Scheduling Algorithms Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

15 Figure 2-39. Some goals of the scheduling algorithm under different circumstances. Scheduling Algorithm Goals Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

16 First-come first-served Shortest job first Shortest remaining Time next Scheduling in Batch Systems Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

17 FCFS is advantageous for which kind of jobs? FCFS example:

Process   Arrive time   Service time   Start time   Finish time   Turnaround   Weighted turnaround
A         0             1              0            1             1            1
B         1             100            1            101           100          1
C         2             1              101          102           100          100
D         3             100            102          202           199          1.99
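A short C program that recomputes the table above under FCFS (arrival and service times taken from the slide):

#include <stdio.h>

int main(void)
{
    const char *name[] = { "A", "B", "C", "D" };
    int arrive[]  = { 0, 1, 2, 3 };
    int service[] = { 1, 100, 1, 100 };
    int clock = 0;

    for (int i = 0; i < 4; i++) {
        int start  = clock > arrive[i] ? clock : arrive[i];  /* wait for the CPU */
        int finish = start + service[i];
        int turnaround = finish - arrive[i];
        printf("%s: start=%d finish=%d turnaround=%d weighted=%.2f\n",
               name[i], start, finish, turnaround,
               (double)turnaround / service[i]);
        clock = finish;                                      /* job runs to completion */
    }
    return 0;
}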

18 Figure 2-40. An example of shortest job first scheduling. (a) Running four jobs in the original order. (b) Running them in shortest job first order. Shortest Job First Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

19 Shortest job first. Compare: average turnaround time, short jobs, long jobs.

Process        A      B      C      D      E      Average
Arrive time    0      1      2      3      4
Service time   4      3      5      2      4

SJF    Finish time    4      9      18     6      13
       Turnaround     4      8      16     3      9      8
       Weighted       1      2.67   3.2    1.5    2.25   2.1

FCFS   Finish time    4      7      12     14     18
       Turnaround     4      6      10     11     14     9
       Weighted       1      2      2      5.5    3.5    2.8

20 Round-robin scheduling Priority scheduling Multiple queues Shortest process next Guaranteed scheduling Lottery scheduling Fair-share scheduling Scheduling in Interactive Systems Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

21 Figure 2-41. Round-robin scheduling. (a) The list of runnable processes. (b) The list of runnable processes after B uses up its quantum. Round-Robin Scheduling Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

22 Example: RR with Time Quantum = 20
All processes arrive at time 0; time quantum = 20.

Process   Burst time
P1        53
P2        17
P3        68
P4        24

The Gantt chart is:

| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3 |
0    20   37   57   77   97   117  121  134  154  162

Typically RR gives higher average turnaround than SJF, but better response time.
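A small C simulation that reproduces the Gantt chart above (burst times and quantum taken from the slide; cycling over the array in order is equivalent to a FIFO ready queue here because all four processes arrive at time 0):

#include <stdio.h>

int main(void)
{
    int burst[] = { 53, 17, 68, 24 };       /* remaining time of P1..P4 */
    int n = 4, quantum = 20, time = 0, remaining = n;

    while (remaining > 0) {
        for (int i = 0; i < n; i++) {
            if (burst[i] == 0)
                continue;                   /* already finished */
            int slice = burst[i] < quantum ? burst[i] : quantum;
            printf("P%d runs %d-%d\n", i + 1, time, time + slice);
            time += slice;
            burst[i] -= slice;
            if (burst[i] == 0)
                remaining--;
        }
    }
    return 0;
}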

23 Size of time quantum The performance of RR depends heavily on the size of the time quantum. If the quantum is too big, RR degenerates to FCFS. If it is too small: in hardware terms, processor sharing; in software terms, frequent context switches, high overhead, low CPU utilization. The quantum must be large with respect to the context-switch time.

24 How a Smaller Time Quantum Increases Context Switches

25 Turnaround Time Varies With The Time Quantum Will the average turnaround time improve as q increases? Rule of thumb: 80% of CPU bursts should be shorter than q.

26 Priority scheduling Each process has a priority number. The process with the highest priority is scheduled first. If all priorities are equal, it reduces to FCFS.

27 Example: priority scheduling (nonpreemptive)

Process   Burst time   Priority
P1        10           3
P2        1            1
P3        2            3
P4        1            4
P5        5            2

Average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 8.2
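A short C program that replays this schedule (nonpreemptive, all jobs arriving at time 0, a lower number meaning higher priority, ties going to the lower process index) and reproduces the 8.2 average:

#include <stdio.h>

int main(void)
{
    int burst[]    = { 10, 1, 2, 1, 5 };    /* P1..P5 burst times */
    int priority[] = { 3, 1, 3, 4, 2 };     /* P1..P5 priorities  */
    int done[5] = { 0 }, time = 0, total_wait = 0;

    for (int k = 0; k < 5; k++) {
        int best = -1;
        for (int i = 0; i < 5; i++)         /* pick the highest-priority job left */
            if (!done[i] && (best < 0 || priority[i] < priority[best]))
                best = i;
        printf("P%d runs %d-%d (waited %d)\n",
               best + 1, time, time + burst[best], time);
        total_wait += time;                 /* every job arrived at time 0 */
        time += burst[best];
        done[best] = 1;
    }
    printf("average waiting time = %.1f\n", total_wait / 5.0);
    return 0;
}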

28 Difficulty How to define the priorities: internally or externally. Possible starvation: low-priority processes may never execute. Solution: aging, i.e. as time progresses, increase the priority of the process.

29 Multilevel Queue The ready queue is partitioned into separate queues, e.g. foreground (interactive, RR) and background (batch, FCFS). A process is permanently assigned to one queue, and each queue has its own scheduling algorithm. Scheduling must also be done between the queues: fixed-priority preemptive scheduling (serve all from foreground, then from background) carries a possibility of starvation; alternatively, time slicing gives each queue a certain amount of CPU time which it can schedule among its processes, e.g. 80% vs. 20%.

30 Multilevel Queue Scheduling

31 Multilevel Feedback Queue A process can move between the various queues; aging can be implemented this way. A multilevel-feedback-queue scheduler is defined by the following parameters: number of queues; scheduling algorithm for each queue; method used to determine when to upgrade a process; method used to determine when to demote a process; method used to determine which queue a process will enter when that process needs service.

32 Example of Multilevel Feedback Queue Three queues: Q0 – time quantum 8 milliseconds, FCFS; Q1 – time quantum 16 milliseconds, FCFS; Q2 – FCFS.

33 Scheduling A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish within 8 milliseconds, it is moved to queue Q1. At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
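A small C sketch of the demotion rule just described (the job structure and function names are hypothetical; queue management and dispatching are omitted):

enum queue_level { Q0, Q1, Q2 };

struct job {
    int remaining;              /* CPU time still needed (ms) */
    enum queue_level level;     /* queue the job currently sits in */
};

static int quantum_for(enum queue_level level)
{
    if (level == Q0) return 8;
    if (level == Q1) return 16;
    return -1;                  /* Q2: run to completion (FCFS) */
}

/* Give the job one turn on the CPU; demote it if it used the whole quantum. */
void run_one_turn(struct job *j)
{
    int q = quantum_for(j->level);
    int slice = (q < 0 || j->remaining <= q) ? j->remaining : q;

    j->remaining -= slice;
    if (j->remaining > 0)       /* used its whole quantum without finishing */
        j->level = (j->level == Q0) ? Q1 : Q2;
}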

34 Priority inversion When a higher-priority process needs to read or modify kernel data that are currently being accessed by another, lower-priority process, the high-priority process ends up waiting for the lower-priority one to finish.

35 E.g.: priorities P1 < PRT3 < PRT2. PRT3 preempts P1; PRT2 waits for P1 (which holds the data PRT2 needs); therefore PRT2 effectively waits for PRT3.

36 Figure: PRT3 preempts P1, leaving PRT2 waiting (priority inversion timeline).

37 Priority inversion (cont.) Solution: priority inheritance (lending). PRT2 lends its priority to P1, so that PRT3 cannot preempt P1. Priority inheritance must be transitive. E.g.: priorities P1 < PRT3 < PRT2.

38 Priority Inversion Ceiling Protocol One way to solve priority inversion is to use the priority ceiling protocol, which gives each shared resource a predefined priority ceiling. When a task acquires a shared resource, the task is hoisted (has its priority temporarily raised) to the priority ceiling of that resource. The priority ceiling must be higher than the highest priority of all tasks that can access the resource, thereby ensuring that a task owning a shared resource won't be preempted by any other task attempting to access the same resource. When the hoisted task releases the resource, the task is returned to its original priority level.
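A toy C sketch of the hoisting described above (all structures and names are hypothetical; a real RTOS performs these steps inside its mutex or resource manager):

struct resource {
    int ceiling;                /* >= highest priority of any task using it */
    int locked;
};

struct task {
    int priority;               /* current (possibly hoisted) priority */
    int base_priority;          /* original priority */
};

void acquire(struct task *t, struct resource *r)
{
    /* ... block here until r->locked == 0 ... */
    r->locked = 1;
    if (r->ceiling > t->priority)
        t->priority = r->ceiling;       /* hoist to the resource's ceiling */
}

void release(struct task *t, struct resource *r)
{
    r->locked = 0;
    t->priority = t->base_priority;     /* return to the original priority */
}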

39 Threads: The Thread Model (1) (a) Three processes each with one thread (b) One process with three threads

40 The Thread Model (2) Items shared by all threads in a process Items private to each thread

41 Single- and multi-threaded processes

42 Thread Has its own program counter, register set, and stack. Shares code (text), global data, and open files with other threads within a single Heavy-Weight Process (HWP), but not with other HWPs. May have its own PCB, depending on the operating system; its context involves a thread ID, program counter, register set, and stack pointer. The RAM address space is shared with the other threads in the same process, and memory management information is shared.

43 Thread Usage (1) A word processor with three threads

44 Thread Usage (2) A multithreaded Web server

45 Thread Usage (3) Rough outline of code for previous slide (a) Dispatcher thread (b) Worker thread

46 Thread Usage (4) Three ways to construct a server

47 Implementing Threads in User Space A user-level threads package

48 Implementing Threads in the Kernel A threads package managed by the kernel

49 Making Single-Threaded Code Multithreaded (1) Conflicts between threads over the use of a global variable

50 Making Single-Threaded Code Multithreaded (2) Threads can have private global variables

51 Threads: Benefits User responsiveness: when one thread blocks, another may handle user I/O (but this depends on the threading implementation). Resource sharing, economy: memory (the address space) is shared, and open files and sockets are shared. Speed: e.g. on Solaris, thread creation is about 30x faster than heavyweight process creation, and a context switch is about 5x faster between threads. Utilizing hardware parallelism: like heavyweight processes, threads can also make use of multiprocessor architectures.

52 Threads: Drawbacks Synchronization: access to shared memory or shared variables must be controlled if the memory or variables are changed; this can add complexity and bugs to program code, e.g. one needs to be very careful to avoid race conditions, deadlocks and other problems. Lack of independence: threads within a Heavy-Weight Process (HWP) are not independent; the RAM address space is shared, with no memory protection from each other. The stacks of the threads are intended to be in separate RAM, but if one thread has a problem (e.g. with pointers or array addressing), it can write over the stack of another thread.

53 Linux pthreads: one-to-one threading model. Linux jargon talks about tasks (not processes or threads); there is one PCB (struct task_struct) per task, see include/linux/sched.h in the Linux source code. The clone system call creates a new thread: it is a more refined version of fork, and its flags determine what is shared between the parent and child task.
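A hedged user-space sketch of clone(2) via the glibc wrapper. The flag combination shown (CLONE_VM | CLONE_FS | CLONE_FILES, plus SIGCHLD so the child can be reaped with waitpid) is only one illustrative choice and is not the exact set the pthreads library passes:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

static int shared = 0;

static int child_fn(void *arg)
{
    shared = 42;                 /* visible to the parent: memory is shared */
    return 0;
}

int main(void)
{
    const size_t STACK_SIZE = 1024 * 1024;
    char *stack = malloc(STACK_SIZE);
    if (stack == NULL) { perror("malloc"); return 1; }

    /* The child task shares memory, filesystem info and open files with us. */
    int pid = clone(child_fn, stack + STACK_SIZE,     /* stack grows downward */
                    CLONE_VM | CLONE_FS | CLONE_FILES | SIGCHLD, NULL);
    if (pid == -1) { perror("clone"); return 1; }

    waitpid(pid, NULL, 0);
    printf("shared = %d\n", shared);                  /* prints 42 */
    free(stack);
    return 0;
}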

54 Interprocess Communication Race Conditions Two processes want to access shared memory at the same time

55 Therac-25 Between June 1985 and January 1987, some cancer patients being treated with radiation were injured and killed due to faulty software: massive overdoses to 6 patients, killing 3. The software had a race condition associated with the command screen; it was improperly synchronized. See also: pp. 340-341, Quinn (2006), Ethics for the Information Age; Nancy G. Leveson, Clark S. Turner, "An Investigation of the Therac-25 Accidents," Computer, vol. 26, no. 7, pp. 18-41, July 1993, http://doi.ieeecomputersociety.org/10.1109/MC.1993.274940

56 Concepts Race condition: two or more processes are reading or writing some shared data and the final result depends on who runs precisely when.

57 Concepts Mutual exclusion: prohibit more than one process from reading and writing the shared data at the same time. Critical region: the part of the program where the shared memory is accessed.

58 Critical Regions (1) Four conditions to provide mutual exclusion 1. No two processes simultaneously in critical region 2. No assumptions made about speeds or numbers of CPUs 3. No process running outside its critical region may block another process 4. No process must wait forever to enter its critical region

59 Critical Section Problem: general structure of a typical process Pi

do {
    entry section
        critical section
    exit section
        remainder section
} while (TRUE);

60 Critical Regions (2) Mutual exclusion using critical regions

61 Mutual Exclusion with Busy Waiting Disabling interrupts: after entering its critical region, a process disables all interrupts. Since the clock is just an interrupt, no CPU preemption can then occur. Disabling interrupts is useful for the OS itself, but not for user processes...

62 Mutual Exclusion with Busy Waiting Lock variable: a software solution. A single shared variable (the lock) is tested by a program before it enters its critical region: if it is 0, the region is free, so the process sets the lock to 1 and enters; if it is 1, the critical region is occupied. What is the problem?
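A sketch of the flawed lock-variable idea (function names hypothetical). The problem: the test and the set are two separate steps, so two processes can both read lock == 0 before either one stores 1, and both end up inside the critical region:

int lock = 0;                     /* 0 = free, 1 = occupied */

void enter_region(void)
{
    while (lock == 1)             /* test: wait until the region looks free */
        ;                         /* busy wait */
    lock = 1;                     /* set: mark the region occupied */
                                  /* RACE: another process may have read 0  */
                                  /* between our test and this assignment   */
}

void leave_region(void)
{
    lock = 0;
}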

63 Mutual Exclusion with Busy Waiting Proposed solution to critical region problem (a) Process 0. (b) Process 1.

64 Concepts Busy waiting: continuously testing a variable until some value appears. Spin lock: a lock that uses busy waiting is called a spin lock.

65 Mutual Exclusion with Busy Waiting (2) Peterson's solution for achieving mutual exclusion
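A plain-C sketch of Peterson's two-process algorithm, in the style of the figure (on modern out-of-order hardware the shared variables would additionally need memory barriers or C11 atomics, omitted here for clarity):

#define FALSE 0
#define TRUE  1
#define N     2                       /* number of processes */

int turn;                             /* whose turn is it? */
int interested[N];                    /* all values initially FALSE */

void enter_region(int process)        /* process is 0 or 1 */
{
    int other = 1 - process;          /* number of the other process */

    interested[process] = TRUE;       /* show that you are interested */
    turn = process;                   /* set flag */
    while (turn == process && interested[other] == TRUE)
        ;                             /* busy wait until the other leaves */
}

void leave_region(int process)
{
    interested[process] = FALSE;      /* indicate departure from critical region */
}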

66 Mutual Exclusion with Busy Waiting (3) Entering and leaving a critical region using the TSL instruction
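The same enter_region/leave_region pair sketched in C, with a GCC/Clang atomic-exchange builtin standing in for the TSL instruction (a toolchain assumption; the book's version is written in assembly):

int lock = 0;                                        /* 0 = free, 1 = taken */

void enter_region(void)
{
    /* Atomically store 1 into lock and return its old value, like TSL REGISTER,LOCK. */
    while (__atomic_exchange_n(&lock, 1, __ATOMIC_ACQUIRE) != 0)
        ;                                            /* lock was already set: busy wait */
}

void leave_region(void)
{
    __atomic_store_n(&lock, 0, __ATOMIC_RELEASE);    /* release the lock */
}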

67 Sleep and wakeup Drawback of busy waiting: suppose a lower-priority process has entered its critical region, and a higher-priority process then becomes ready and preempts it. The higher-priority process wastes CPU time busy waiting, while the lower-priority process never gets the CPU to leave its critical region: priority inversion, effectively a deadlock.

68 Sleep and Wakeup Producer-consumer problem with fatal race condition

69 Semaphore Atomic action: a single, indivisible action. Down (P): checks a semaphore to see whether it is 0; if so, the process sleeps; otherwise it decrements the value and goes on. Up (V): increments the semaphore; if processes are sleeping on it, the OS chooses one of them to proceed and complete its down.

70 Semaphore Solving the producer-consumer problem: full counts the slots that are full, initial value 0; empty counts the slots that are empty, initial value N; mutex prevents concurrent access to the buffer, initial value 1 (a binary semaphore). The semaphores are used both for synchronization and for mutual exclusion.
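A compilable sketch of this scheme using POSIX semaphores and pthreads (buffer size, item counts and names are illustrative assumptions; the book presents the same idea with its own down/up primitives):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 100                        /* number of slots in the buffer */

int buffer[N], in = 0, out = 0;
sem_t empty, full, mutex;            /* empty = N, full = 0, mutex = 1 */

void *producer(void *arg)
{
    for (int item = 0; item < 1000; item++) {
        sem_wait(&empty);            /* down(empty): wait for a free slot */
        sem_wait(&mutex);            /* down(mutex): enter critical region */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);            /* up(mutex): leave critical region */
        sem_post(&full);             /* up(full): one more full slot */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (int i = 0; i < 1000; i++) {
        sem_wait(&full);             /* down(full): wait for an item */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);            /* up(empty): one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, N);          /* N empty slots initially */
    sem_init(&full, 0, 0);           /* no full slots initially */
    sem_init(&mutex, 0, 1);          /* binary semaphore: initial value 1 */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}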

71 Semaphores The producer-consumer problem using semaphores

72 Mutexes Implementation of mutex_lock and mutex_unlock

73 A problem What would happen if the downs in producer’s code were reversed in order?

74 Figure 2-30. Some of the Pthreads calls relating to mutexes. Mutexes in Pthreads (1) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

75 Figure 2-31. Some of the Pthreads calls relating to condition variables. Mutexes in Pthreads (2) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

76 Figure 2-32. Using threads to solve the producer-consumer problem. Mutexes in Pthreads (3) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639...
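For reference, a compilable sketch in the spirit of that figure: a one-slot buffer protected by a Pthreads mutex and two condition variables (MAX and the identifiers are assumptions for illustration, not the figure's exact code):

#include <pthread.h>

#define MAX 1000                       /* how many numbers to produce */

pthread_mutex_t the_mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  condc = PTHREAD_COND_INITIALIZER;   /* consumer waits on this */
pthread_cond_t  condp = PTHREAD_COND_INITIALIZER;   /* producer waits on this */
int buffer = 0;                        /* one-slot buffer; 0 means empty */

void *producer(void *ptr)
{
    for (int i = 1; i <= MAX; i++) {
        pthread_mutex_lock(&the_mutex);
        while (buffer != 0)            /* slot still full: wait */
            pthread_cond_wait(&condp, &the_mutex);
        buffer = i;                    /* put an item in the slot */
        pthread_cond_signal(&condc);   /* wake up the consumer */
        pthread_mutex_unlock(&the_mutex);
    }
    return NULL;
}

void *consumer(void *ptr)
{
    for (int i = 1; i <= MAX; i++) {
        pthread_mutex_lock(&the_mutex);
        while (buffer == 0)            /* slot empty: wait */
            pthread_cond_wait(&condc, &the_mutex);
        buffer = 0;                    /* take the item out */
        pthread_cond_signal(&condp);   /* wake up the producer */
        pthread_mutex_unlock(&the_mutex);
    }
    return NULL;
}

int main(void)
{
    pthread_t pro, con;
    pthread_create(&con, NULL, consumer, NULL);
    pthread_create(&pro, NULL, producer, NULL);
    pthread_join(pro, NULL);
    pthread_join(con, NULL);
    return 0;
}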

77 Figure 2-33. A monitor. Monitors (1) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

78 Figure 2-34. An outline of the producer-consumer problem with monitors. Monitors (2) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

79 Figure 2-35. A solution to the producer-consumer problem in Java. Message Passing (1) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639...

80 Figure 2-35. A solution to the producer-consumer problem in Java. Message Passing (2) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639...

81 Figure 2-35. A solution to the producer-consumer problem in Java. Message Passing (3) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639...

82 Figure 2-36. The producer-consumer problem with N messages. Producer-Consumer Problem with Message Passing (1) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639...

83 Figure 2-36. The producer-consumer problem with N messages. Producer-Consumer Problem with Message Passing (2) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639...

84 Figure 2-37. Use of a barrier. (a) Processes approaching a barrier. (b) All processes but one blocked at the barrier. (c) When the last process arrives at the barrier, all of them are let through. Barriers Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

85 Figure 2-43. (a) Possible scheduling of user-level threads with a 50-msec process quantum and threads that run 5 msec per CPU burst. Thread Scheduling (1) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

86 Figure 2-43. (b) Possible scheduling of kernel-level threads with the same characteristics as (a). Thread Scheduling (2) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

87 Figure 2-44. Lunch time in the Philosophy Department. Dining Philosophers Problem (1) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

88 Figure 2-45. A nonsolution to the dining philosophers problem. Dining Philosophers Problem (2) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

89 Dining Philosophers Problem

90 Figure 2-46. A solution to the dining philosophers problem. Dining Philosophers Problem (3) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639...

91 Figure 2-46. A solution to the dining philosophers problem. Dining Philosophers Problem (4) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639...

92 Figure 2-46. A solution to the dining philosophers problem. Dining Philosophers Problem (5) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639...
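A C sketch of the approach in Figure 2-46: a state array guarded by a mutex, plus one semaphore per philosopher (thread setup and the think/eat bodies are omitted; the identifiers mirror the book's pseudocode but this is a sketch, not the figure's exact code):

#include <semaphore.h>

#define N        5                     /* number of philosophers */
#define LEFT     ((i + N - 1) % N)     /* i's left neighbor  */
#define RIGHT    ((i + 1) % N)         /* i's right neighbor */
#define THINKING 0
#define HUNGRY   1
#define EATING   2

int   state[N];                        /* state of each philosopher */
sem_t mutex;                           /* protects state[]; initialize to 1 */
sem_t s[N];                            /* one per philosopher; initialize to 0 */

/* Let philosopher i eat if she is hungry and neither neighbor is eating. */
static void test(int i)
{
    if (state[i] == HUNGRY && state[LEFT] != EATING && state[RIGHT] != EATING) {
        state[i] = EATING;
        sem_post(&s[i]);               /* unblock philosopher i */
    }
}

void take_forks(int i)
{
    sem_wait(&mutex);
    state[i] = HUNGRY;
    test(i);                           /* try to acquire both forks */
    sem_post(&mutex);
    sem_wait(&s[i]);                   /* block if the forks were not acquired */
}

void put_forks(int i)
{
    sem_wait(&mutex);
    state[i] = THINKING;
    test(LEFT);                        /* a neighbor may now be able to eat */
    test(RIGHT);
    sem_post(&mutex);
}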

93 Figure 2-47. A solution to the readers and writers problem. The Readers and Writers Problem (1) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639...

94 Figure 2-47. A solution to the readers and writers problem. The Readers and Writers Problem (2) Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639...
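A C sketch of the reader-priority solution in Figure 2-47, using POSIX semaphores (the read/write bodies are left as comments; the identifiers follow the usual rc/mutex/db naming but this is a sketch rather than the figure's exact code):

#include <semaphore.h>

sem_t mutex;            /* protects rc; initialize to 1 */
sem_t db;               /* controls access to the database; initialize to 1 */
int   rc = 0;           /* number of processes currently reading */

void reader(void)
{
    sem_wait(&mutex);
    rc = rc + 1;
    if (rc == 1)        /* the first reader in locks out writers */
        sem_wait(&db);
    sem_post(&mutex);

    /* read_data_base(); ... access the data ... */

    sem_wait(&mutex);
    rc = rc - 1;
    if (rc == 0)        /* the last reader out lets writers in again */
        sem_post(&db);
    sem_post(&mutex);
}

void writer(void)
{
    sem_wait(&db);      /* writers get exclusive access */
    /* write_data_base(); */
    sem_post(&db);
}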

95 Figure 2-37. Use of a barrier. (a) Processes approaching a barrier. (b) All processes but one blocked at the barrier. (c) When the last process arrives at the barrier, all of them are let through. Barriers Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639

96 Linux Synch Primitives Memory barriers: avoid compiler and CPU instruction reordering. Atomic operations: memory bus lock, read-modify-write ops. Interrupt/softirq disabling/enabling: local, global. Spin locks: general, read/write, big reader. Semaphores: general, read/write.

97 Barriers: Motivation The compiler can: reorder code as long as it correctly maintains data flow dependencies within a function and with called functions; reorder the execution of code to optimize performance. The processor can: reorder instruction execution as long as it correctly maintains register flow dependencies; reorder memory modification as long as it correctly maintains data flow dependencies; reorder the execution of instructions (for performance optimization).

98 Barriers: Definition Barriers are used to prevent a processor and/or the compiler from reordering instruction execution and memory modification. Barriers are instructions to the hardware and/or compiler to complete all pending accesses before issuing any more: a read memory barrier acts on read requests; a write memory barrier acts on write requests. On Intel x86: certain instructions act as barriers (lock, iret, accesses to control registers); rmb is asm volatile("lock;addl $0,0(%esp)":::"memory"), i.e. add 0 to the top of the stack with a lock prefix; wmb: Intel never reorders writes, so it is just a compiler barrier.

99 Barrier Operations barrier – prevents only compiler reordering; mb – prevents load and store reordering; rmb – prevents load reordering; wmb – prevents store reordering; smp_mb – prevents load and store reordering only in SMP kernels; smp_rmb – prevents load reordering only in SMP kernels; smp_wmb – prevents store reordering only in SMP kernels; set_mb – performs an assignment and prevents load and store reordering.

100 Example of barrier

Thread 1        Thread 2
a = 3;          -
mb();           -
b = 4;          c = b;
-               rmb();
-               d = a;

If Thread 2 observes the new value of b, the paired barriers guarantee that its following read of a sees the new value as well.

101 Choosing Synch Primitives Avoid synch if possible! (clever instruction ordering) Example: inserting in a linked list (still needs a barrier). Use atomics or rw spinlocks if possible. Use semaphores if you need to sleep; you can't sleep in interrupt context, and don't sleep holding a spinlock! Complicated matrix of choices for protecting data structures accessed by deferred functions.

102 The implementation of the synchronization primitives is extremely architecture dependent. This is because only the hardware can guarantee atomicity of an operation. Each architecture must provide a mechanism for doing an operation that can examine and modify a storage location atomically. Some architectures do not guarantee atomicity, but inform whether the operation attempted was atomic. Architectural Dependence

