1
Chapter 5 Mutual Exclusion(互斥) and Synchronization(同步)
Operating System Chapter 5 Mutual Exclusion(互斥) and Synchronization(同步)
2
Chapter 5 Mutual Exclusion and Synchronization
5.1 Principles of Concurrency 5.2 Mutual Exclusion 5.3 Semaphores 5.4 Monitors 5.5 Message Passing 5.6 Readers/Writers Problem 5.7 Summary
3
5.1 Principles of Concurrency
5.1.0 What is concurrency 5.1.1 A Simple Example 5.1.2 Race Condition 5.1.3 Operating System Concerns 5.1.4 Process Interaction 5.1.5 Requirements for Mutual Exclusion
4
5.1.0 What is concurrency(1/5)
Operating System design is concerned with the management of processes and threads: Multiprogramming multiple processes within a uniprocessor system Multiprocessing multiple processes within a multiprocessor Distributed Processing multiple processes within distributed computer systems, such as clusters
5
5.1.0 What is concurrency(2/5)
The execution of two or more processes whose executions overlap in time (on one processor this means their instructions are interleaved).
6
5.1.0 What is concurrency(3/5)
Concurrent and Parallel Programming Concurrent: 2 queues and 1 coffee machine Parallel: 2 queues and 2 coffee machines
7
5.1.0 What is concurrency(4/5)
Difficulties of Concurrency:
- Sharing of variables
- Optimal management of resource allocation
- Locating programming errors (they cannot be reproduced reliably), because of the following facts:
  - the relative speed of execution of processes is not predictable
  - system interrupts are not predictable
  - scheduling policies may vary
8
5.1.0 What is concurrency(5/5)
Deadlock: like two people meeting on a single-plank bridge (独木桥), neither can move forward until the other backs off. Livelock: like two people walking toward each other who keep stepping aside in the same direction (相向让路) and never get past each other.
9
5.1 Principles of Concurrency
5.1.0 What is concurrency 5.1.1 A Simple Example of a shared variable 5.1.2 Race Condition 5.1.3 Operating System Concerns 5.1.4 Process Interaction 5.1.5 Requirements for Mutual Exclusion
10
5.1.1 A Simple Example(1/4) Consider the following procedure echo, which is global to all applications:

char chin, chout;

void echo()
{
    chin = getchar();
    chout = chin;
    putchar(chout);
}
11
5.1.1 A Simple Example(2/4) Uniprocessor: Process P1 and Process P2 both call echo; P1 inputs the character X and P2 inputs Y. One possible interleaving on a single processor:

P1: chin = getchar();                 (chin = 'X'; P1 is then interrupted)
P2: chin = getchar();                 (chin = 'Y', overwriting 'X')
P2: chout = chin; putchar(chout);     (Y is displayed)
P1: chout = chin; putchar(chout);     (Y is displayed again; X is lost)

The essence of the problem is the shared global variable chin. The bottom line is that for both uniprocessor multiprogramming and multiprocessing we must control access to the shared variable.
12
5.1.1 A Simple Example(3/4) Uniprocessor: the same interleaving of Process P1 (input X) and Process P2 (input Y) as on the previous slide: X is lost and Y is echoed twice. The essence of the problem is the shared global variable chin.
13
5.1.1 A Simple Example(4/4) Multiprocessor: Process P1 and Process P2 run simultaneously on different processors; P1 inputs X and P2 inputs Y. One possible overlap in time:

P1: chin = getchar();                 (chin = 'X')
P2: chin = getchar();                 (chin = 'Y', overwriting 'X')
P1: chout = chin; putchar(chout);     (Y is displayed)
P2: chout = chin; putchar(chout);     (Y is displayed)

X is lost before it is ever displayed; again the cause is the shared global variable chin.
14
5.1 Principles of Concurrency
5.1.0 What is concurrency 5.1.1 A Simple Example 5.1.2 Race Condition 5.1.3 Operating System Concerns 5.1.4 Process Interaction 5.1.5 Requirements for Mutual Exclusion
15
5.1.2 Race Condition(1/1) Race Condition(竞争条件)
A race condition occurs when multiple processes or threads read and write shared data items, and the final result depends on the order of execution. The "loser" of the race is the process that updates last and therefore determines the final value of the variable.
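To make the definition concrete, here is a minimal, self-contained C demo (not from the slides; thread and variable names are illustrative). Two threads race on an unprotected shared counter, so the printed total is usually less than the expected value and changes from run to run:

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

long counter = 0;                       /* shared data item */

void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++)
        counter++;                      /* read-modify-write, not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected %d)\n", counter, 2 * ITERATIONS);
    return 0;
}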
16
5.1 Principles of Concurrency
5.1.0 What is concurrency 5.1.1 A Simple Example 5.1.2 Race Condition 5.1.3 Operating System Concerns 5.1.4 Process Interaction 5.1.5 Requirements for Mutual Exclusion
17
5.1.3 Operating System Concerns(1/1)
Design and management issues raised by the existence of concurrency. The OS must:
- be able to keep track of the various processes (this is done with the process control block, PCB, as described in Chapter 4)
- allocate and deallocate resources: processor time, memory, files, I/O devices
- protect data and resources against interference by other processes
- ensure that processes and their outputs are independent of the processing speed
18
5.1 Principles of Concurrency
5.1.0 What is concurrency 5.1.1 A Simple Example 5.1.2 Race Condition 5.1.3 Operating System Concerns 5.1.4 Process Interaction 5.1.5 Requirements for Mutual Exclusion
19
5.1.4 Process Interaction(1/5)
Three possible ways in which processes can interact:
- Processes unaware of each other: independent processes that are not intended to work together; they compete for resources.
- Processes indirectly aware of each other: they share access to some object; cooperation by sharing.
- Processes directly aware of each other: they communicate with each other by name; cooperation by communication.
20
5.1.4 Process Interaction(2/5)
21
5.1.4 Process Interaction(3/5)
Resource Competition: Concurrent processes come into conflict when they compete for use of the same resource, for example an I/O device, memory, or processor time. In the case of competing processes, three control problems must be faced:
- the need for mutual exclusion
- deadlock
- starvation
22
5.1.4 Process Interaction(4/5)
Illustration of mutual exclusion by critical sections: entercritical (enter the critical section, 进入临界区) and exitcritical (exit the critical section, 退出临界区) bracket the code that accesses the shared resource.
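A sketch of the structure the illustration depicts, in the textbook's pseudocode style; entercritical and exitcritical are abstract primitives whose possible implementations are the subject of the rest of this chapter:

/* Sketch only: entercritical/exitcritical are abstract primitives. */
void P(void)                       /* body of any one of the processes */
{
    while (1) {
        /* entercritical(R): wait until resource R is free, then claim it */
        /* critical section: access the shared resource R                 */
        /* exitcritical(R): release R so that another process may enter   */
        /* remainder: code that does not touch R                          */
    }
}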
23
5.1.4 Process Interaction(5/5)
Critical section: a code segment that accesses shared variables and has to be executed as an atomic action; that is, in a group of cooperating processes, at any given point in time only one process may be executing its critical section.
Atomic: an atomic operation is performed entirely or not at all, without any other process being able to read or change state that is read or changed during the operation.
24
5.1 Principles of Concurrency
5.1.0 What is concurrency 5.1.1 A Simple Example 5.1.2 Race Condition 5.1.3 Operating System Concerns 5.1.4 Process Interaction 5.1.5 Requirements for Mutual Exclusion
25
5.1.5 Requirements for Mutual Exclusion(1/2)
- Only one process at a time is allowed in the critical section for a resource (一次只允许一个进程进入临界区,忙则等待)
- A process that halts in its noncritical section must do so without interfering with other processes (阻塞于临界区外的进程不能干涉其它进程)
- No deadlock or starvation (不会发生饥饿和死锁,有限等待)
26
5.1.5 Requirements for Mutual Exclusion(2/2)
- A process must not be delayed access to a critical section when there is no other process using it (闲则让进)
- No assumptions are made about relative process speeds or the number of processors (对相关进程的执行速度和处理器数目没有要求)
- A process remains inside its critical section for a finite time only (有限占用)
27
Chapter 5 Mutual Exclusion and Synchronization
5.1 Principles of Concurrency 5.2 Mutual Exclusion 5.3 Semaphores 5.4 Monitors 5.5 Message Passing 5.6 Readers/Writers Problem 5.7 Summary
28
5.2 Mutual Exclusion 5.2.1 Hardware approaches
5.2.2 Software approaches
29
5.2.1 Hardware approaches 5.2.1.1 Interrupt Disabling(禁用中断)
5.2.1.2 Special Machine Instructions(特殊机器指令)
30
5.2.1.1 Interrupt Disabling(1/4)
Prior to entering a CS, disable interrupts. This guarantees that another process cannot be dispatched.
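A sketch of the resulting structure on a uniprocessor; the disable/enable primitives are shown as comments because they are privileged, kernel-mode operations:

/* Mutual exclusion by disabling interrupts: sketch, uniprocessor only. */
void P(void)
{
    while (1) {
        /* disable interrupts */
        /* critical section   */
        /* enable interrupts  */
        /* remainder          */
    }
}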
31
5.2.1.1 Interrupt Disabling(2/4)
Disabling interrupts is a simple solution. It works because it prevents other processes from running, and therefore prevents other processes from entering their critical sections. However:
- it requires access to hardware registers (possible only in kernel mode)
- it may interfere with higher-priority tasks and interrupts
- it doesn't work with multiple CPUs
32
5.2.1.1 Interrupt Disabling(3/4)
In the flags register there is an Interrupt Enable Flag (IF). STI sets IF to 1, i.e. enables interrupts; CLI clears IF to 0, i.e. disables interrupts.
33
5.2.1.1 Interrupt Disabling(4/4)
In practice, the OS may disable interrupts for very short periods of time, within the kernel code. For example, the Amiga OS disables interrupts while updating various internal structures, but provides signals, message-passing and semaphores for user-level processes to achieve mutual exclusion.
34
5.2.1 Hardware approaches 5.2.1.1 Interrupt Disabling(禁用中断)
5.2.1.2 Special Machine Instructions(特殊机器指令)
35
5.2.1.2 Special Machine Instructions(1/10)
CAS instruction: a compare-and-swap operation is an atomic version of the following pseudocode. The atomic counter and atomic bitmask operations in the Linux kernel typically use a CAS instruction in their implementation; on x86 the corresponding instruction is CMPXCHG.

int compare_and_swap(int *word, int testval, int newval)
{
    int oldval;
    oldval = *word;
    if (oldval == testval)
        *word = newval;
    return oldval;
}
36
5.2.1.2 Special Machine Instructions(2/10)
Use of the CAS instruction (textbook p. 212). Spot the error in the listing: the call should pass &bolt (the address of bolt), not bolt.
37
5.2.1.2 Special Machine Instructions(3/10)
Updated version of CAS (compare-and-set):

bool CAS(int *word, int testval, int newval)
{
    if (*word != testval)
        return false;
    *word = newval;
    return true;
}

/* program mutualexclusion */
const int n = /* number of processes */;
int bolt;
void P(int i)
{
    while (true) {
        while (!CAS(&bolt, 0, 1))
            /* do nothing */;           /* busy waiting */
        /* critical section */;
        bolt = 0;
        /* remainder */;
    }
}

Compare&Swap instruction (also called a "compare and exchange" instruction): a compare is made between a memory value and a test value; if the values are the same, a swap occurs; the whole operation is carried out atomically.
38
5.2.1.2 Special Machine Instructions(4/10)
CAS instruction on x86: CMPXCHG / CMPXCHGL. Compare-and-exchange: the first operand is compared with AL/AX/EAX; if they are equal, ZF is set to 1 and the second operand is stored into the first operand; otherwise ZF is cleared to 0 and the first operand is loaded into AL/AX/EAX. It is multiprocessor-safe and supported on the 80486 and later CPUs.
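As a portable illustration (an assumption of this note, not something on the slide): a C11 program can get the same effect with <stdatomic.h>, and on x86 the compare-exchange below typically compiles down to a LOCK CMPXCHG instruction.

#include <stdatomic.h>

atomic_int bolt = 0;                    /* 0 = free, 1 = held */

void lock(void)
{
    int expected = 0;
    /* retry until bolt is atomically changed from 0 to 1 */
    while (!atomic_compare_exchange_strong(&bolt, &expected, 1)) {
        expected = 0;                   /* on failure, CAS stored the current value here */
        /* busy waiting */
    }
}

void unlock(void)
{
    atomic_store(&bolt, 0);
}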
39
5.2.1.2 Special Machine Instructions(5/10)
Exchange instruction: the XCHG instruction, XCHG OPRD1, OPRD2, swaps its two operands. In C-like pseudocode (register is used here as a name, not as the C keyword):

void exchange(int register, int memory)
{
    int temp;
    temp = memory;
    memory = register;
    register = temp;
}
40
5.2.1.2 Special Machine Instructions(6/10)
Use of Exchange Instruction
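A sketch of the pattern the referenced figure shows; exchange is written here with pointer arguments so the swap is visible to the caller, and in real use it would be the atomic XCHG instruction rather than this plain C function:

#include <stdbool.h>

volatile int bolt = 0;                  /* shared; 0 means the critical section is free */

void exchange(int *reg, int *mem)       /* stand-in for the atomic XCHG instruction */
{
    int temp = *mem;
    *mem = *reg;
    *reg = temp;
}

void P(int i)                           /* body of process i */
{
    (void)i;
    while (true) {
        int keyi = 1;
        do {
            exchange(&keyi, &bolt);     /* atomically swap keyi and bolt */
        } while (keyi != 0);            /* busy wait until bolt was observed as 0 */
        /* critical section */
        bolt = 0;                       /* release */
        /* remainder */
    }
}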
41
5.2.1.2 Special Machine Instructions(7/10)
The exchange is performed in a single instruction cycle: access to the memory location is blocked for any other instruction, so the swap is carried out atomically.
42
5.2.1.2 Special Machine Instructions(8/10)
HLOCK: Atomicity cannot be guaranteed by software alone; hardware support is required, so it is architecture-dependent. On the x86 platform, the CPU provides a way to lock the bus while an instruction executes. The CPU chip has a #HLOCK pin: if an assembly instruction is written with the LOCK prefix, the assembled machine code makes the CPU pull the #HLOCK pin low while that instruction executes and release it when the instruction finishes, locking the bus. Other CPUs on the same bus temporarily cannot access memory through the bus, which guarantees the atomicity of that instruction in a multiprocessor environment.
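An illustration of the LOCK prefix in use (this assumes GCC-style inline assembly on x86-64 and is not taken from the slide):

static volatile long counter;           /* shared across CPUs */

static inline void atomic_inc(void)
{
    /* The LOCK prefix locks the bus (or cache line) for the duration of this
       single INC instruction, so the read-modify-write is atomic on SMP x86. */
    __asm__ __volatile__("lock; incq %0" : "+m"(counter));
}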
43
5.2.1.2 Special Machine Instructions(9/10)
Advantages:
- Because they work through shared main memory, they are applicable to any number of processes, on a single processor or on multiple processors
- They are simple and therefore easy to verify
- They can be used to support multiple critical sections
44
5.2.1.2 Special Machine Instructions(10/10)
Disadvantages:
- Busy-waiting (忙等待) consumes processor time.
- Starvation (饥饿) is possible: when a process leaves a critical section and more than one process is waiting, the selection of the next process is arbitrary, so some process could be denied access indefinitely.
- Deadlock (死锁) is possible: on a uniprocessor with priority scheduling, if a low-priority process holds the critical region and a higher-priority process needs it, the higher-priority process obtains the processor only to busy-wait for the critical region, and the low-priority process never runs to release it.
45
5.2 Mutual Exclusion 5.2.1 Hardware approaches
5.2.2 Software approaches
46
5.2.2 Software approaches 5.2.2.1 Dekker's Algorithm
5.2.2.2 Peterson's Algorithm
47
Dekker's Algorithm(1/20) Dekker's algorithm is the first known algorithm that solves the mutual exclusion problem in concurrent programming. It is credited to Th. J. Dekker, a Dutch mathematician who created the algorithm for another context. Dekker's algorithm is used in process queuing, and allows two different threads to share the same single-use resource without conflict by using shared memory for communication.
48
Dekker's Algorithm(2/20)
- Mutual exclusion: the critical section statements must not be interleaved.
- Deadlock free: if some processes are trying to enter their CS's, then one must eventually succeed.
- Starvation free: if any process tries to enter its CS, then that process must succeed.
49
Dekker's Algorithm(3/20) First attempt
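The first attempt is usually presented as strict alternation on a shared turn variable. The sketch below uses the process names p and q and the turn values 1 and 2 that appear in the state labels such as (p1, q1, 1); the exact code on the slide may differ.

#include <stdbool.h>

volatile int turn = 1;                  /* whose turn it is to enter: 1 = p, 2 = q */

void p(void)                            /* process q is symmetric: it waits for turn == 2
                                           and sets turn = 1 after its critical section */
{
    while (true) {
        while (turn != 1)
            ;                           /* busy wait for my turn */
        /* critical section */
        turn = 2;                       /* hand the turn to q */
        /* non-critical section: if q stays in ITS non-critical section forever,
           p is stuck busy-waiting above even though the CS is free */
    }
}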
50
Dekker's Algorithm(4/20) State Diagram for the First Attempt
51
5.2.2.1 Dekker's Algorithm(5/20) Analysis of mutual exclusion
Does either of the states (p3, q3, 1) or (p3, q3, 2) appear in the state transition diagram? No! Conclusion: we have mutual exclusion (ME).
52
Dekker's Algorithm(6/20)
53
5.2.2.1 Dekker's Algorithm(7/20) Analysis of mutual exclusion
Does either of the states (p2, q2, 1) or (p2, q2, 2) appear in the state transition diagram? No! Conclusion: we have mutual exclusion (ME).
54
5.2.2.1 Dekker's Algorithm(8/20) Analysis of deadlock
Deadlock free: If some try to enter, one must succeed. Question: In what state are p and q both trying to enter? Answer: In states (p1, q1, 1) and (p1, q1, 2).
55
Dekker's Algorithm(9/20) Analysis: deduce what must happen from one of these states, say (p1, q1, 2).
(p1, q1, 2) ⇒ (p1, q2, 2): q is selected, by weak fairness.
(p1, q2, 2) ⇒ (p1, q1, 1): q must complete its CS, selected by weak fairness.
Now it is p's turn and p can enter. Conclusion: deadlock-free.
56
5.2.2.1 Dekker's Algorithm(10/20) Analysis of starvation (p2, q1, 2)
Starvation free: if any process tries to enter, it must succeed. Analysis: see state (p2, q1, 2) in the non-abbreviated state diagram: p is trying to enter its CS, q is in its non-CS, and q need not make progress. The CS is free and p is waiting to enter, but it is not p's turn, so p just waits; p is starved. Conclusion: the first attempt is not starvation-free.
57
Dekker's Algorithm(11/20) Second attempt
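The second attempt is usually presented with one "want" flag per process; the state (p1, q1, F, F) in the next exercise refers to these two booleans. Sketch (process q is symmetric; the slide's code may differ in detail):

#include <stdbool.h>

volatile bool wantp = false, wantq = false;

void p(void)
{
    while (true) {
        while (wantq)
            ;                           /* busy wait while q wants to enter */
        wantp = true;                   /* too late: q may already have passed its own test */
        /* critical section */
        wantp = false;
        /* non-critical section */
    }
}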
58
5.2.2.1 Dekker's Algorithm(12/20) Analysis of mutual exclusion
Class exercise: Starting at state (p1, q1, F, F), show state transitions that get to state (p3, q3, _, _). Conclusion: Second attempt does not enforce ME.
59
Dekker's Algorithm(13/20) Third attempt
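The third attempt reverses the order: a process raises its own flag first and only then waits for the other's flag to drop. Sketch (process q is symmetric; the slide's code may differ in detail):

#include <stdbool.h>

volatile bool wantp = false, wantq = false;

void p(void)
{
    while (true) {
        wantp = true;                   /* announce intention first */
        while (wantq)
            ;                           /* if q did the same, both wait here forever: deadlock */
        /* critical section */
        wantp = false;
        /* non-critical section */
    }
}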
60
Dekker's Algorithm(14/20) Class exercise: Starting at state (p1, q1, F, F), show state transitions that get to state (p3, q3, T, T) with no possibility of progress. Conclusion: Third attempt is not deadlock-free.
61
Dekker's Algorithm(15/20) Fourth attempt
62
Dekker's Algorithm(16/20) Fourth attempt
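The fourth attempt keeps the third attempt's ordering but lets a process back off (briefly lower its flag) when it sees contention. Sketch (process q is symmetric; the slide's code may differ in detail):

#include <stdbool.h>

volatile bool wantp = false, wantq = false;

void p(void)
{
    while (true) {
        wantp = true;
        while (wantq) {                 /* q also wants to enter */
            wantp = false;              /* back off ...                          */
            /* short, random delay */
            wantp = true;               /* ... then insist again; a perfectly    */
        }                               /* interleaved back-off loop is livelock */
        /* critical section */
        wantp = false;
        /* non-critical section */
    }
}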
63
5.2.2.1 Dekker's Algorithm(17/20) Fourth attempt
Mutual exclusion: yes (proof omitted). Deadlock-free: yes (proof omitted). Starvation-free: no; there is a perfect interleaving that starves both processes (a livelock).
64
5.2.2.1 Dekker's Algorithm(18/20) Dekker's algorithm
Dekker's algorithm is a combination of the first and fourth attempts: the turn variable records whose turn it is to insist on entering if both processes want to enter at the same time (see the sketch below).
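A sketch of Dekker's algorithm in the same style and with the same variable names as the attempt sketches above (this is the standard formulation; the slide's own listing may differ cosmetically):

#include <stdbool.h>

volatile bool wantp = false, wantq = false;
volatile int  turn  = 1;                /* who may insist when both want in: 1 = p, 2 = q */

void p(void)                            /* q is symmetric, with wantp/wantq and 1/2 swapped */
{
    while (true) {
        wantp = true;
        while (wantq) {                 /* q also wants to enter */
            if (turn == 2) {            /* it is q's turn to insist */
                wantp = false;          /* back off */
                while (turn == 2)
                    ;                   /* busy wait until q gives the turn back */
                wantp = true;
            }
        }
        /* critical section */
        turn  = 2;                      /* give q the right to insist next time */
        wantp = false;
        /* non-critical section */
    }
}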
65
Dekker's Algorithm(19/20)
66
Dekker's Algorithm(20/20)
67
5.2.2 Software approaches 5.2.2.1 Dekker's Algorithm
5.2.2.2 Peterson's Algorithm
68
5.2.2.2 Peterson's Algorithm(1/2)
69
5.2.2.2 Peterson's Algorithm(2/2)
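A sketch of Peterson's algorithm for two processes, in the same notation as the Dekker sketch above (the standard two-process formulation; the slides' own listing may differ):

#include <stdbool.h>

volatile bool wantp = false, wantq = false;
volatile int  turn  = 1;                /* which process gets priority when both want in: 1 = p, 2 = q */

void p(void)                            /* q is symmetric: it sets turn = 1 and then */
{                                       /* waits while (wantp && turn == 1)          */
    while (true) {
        wantp = true;                   /* I want to enter ...                   */
        turn  = 2;                      /* ... but let q go first if it wants to */
        while (wantq && turn == 2)
            ;                           /* busy wait */
        /* critical section */
        wantp = false;
        /* non-critical section */
    }
}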