COT 5611 Operating Systems Design Principles Spring 2014
Dan C. Marinescu Office: HEC 304 Office hours: M-Wd 3:30 – 5:30 PM
Lecture 20
Reading assignment: Chapter 9 from the on-line text
Last time:
- Thread management
- Address spaces and multi-level memories
- Kernel structures for the management of multiple cores/processors and threads/processes
12/5/2018 Lecture 20
Today:
- More on YIELD
- Thread coordination with shared (bounded) buffers
- Deadlocks
YIELD
A system call executed by the kernel at the request of an application; it allows an active thread A to voluntarily release control of the processor.
YIELD invokes the ENTER_PROCESSOR_LAYER procedure, which:
- locks the thread table and unlocks it when it finishes its work
- changes the state of thread A from RUNNING to RUNNABLE
- invokes the SCHEDULER
The SCHEDULER searches the thread table to find another thread B in the RUNNABLE state:
- the state of thread B is changed from RUNNABLE to RUNNING
- the registers of the processor are loaded with the ones saved on the stack for thread B
- thread B becomes active
Why is it necessary to lock the thread table?
- We may have multiple cores/processors, so another thread may be active.
- An interrupt may occur.
The pseudocode assumes that we have a fixed number of threads, 7.
The flow of control: YIELD -> ENTER_PROCESSOR_LAYER -> SCHEDULER -> EXIT_PROCESSOR_LAYER -> YIELD
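The flow above can be sketched in C. This is a single-threaded simulation of the state transitions only (the names yield_sim and table_locked are illustrative, not from the text; real code would also save and restore registers and stack pointers):

```c
#include <assert.h>

#define NTHREADS 7               /* the pseudocode assumes a fixed table of 7 threads */

enum state { RUNNING, RUNNABLE, FREE };

/* Simplified thread table: each entry holds only the scheduling state. */
static enum state thread_table[NTHREADS];
static int table_locked = 0;     /* stand-in for the thread-table lock */

/* Mark thread 'a' RUNNABLE and return the index of the next RUNNABLE
   thread, which becomes RUNNING (the SCHEDULER's table scan). */
int yield_sim(int a)
{
    table_locked = 1;                       /* ENTER_PROCESSOR_LAYER locks the table */
    thread_table[a] = RUNNABLE;             /* A gives up the processor */
    for (int i = 0; i < NTHREADS; i++) {    /* SCHEDULER scans the table */
        int b = (a + 1 + i) % NTHREADS;
        if (thread_table[b] == RUNNABLE) {
            thread_table[b] = RUNNING;      /* B becomes the active thread */
            table_locked = 0;               /* EXIT_PROCESSOR_LAYER unlocks */
            return b;
        }
    }
    table_locked = 0;
    return a;                               /* nobody else runnable: A continues */
}
```

Note that A itself is marked RUNNABLE before the scan, so if no other thread is runnable the scheduler simply picks A again.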
Information in the processor and thread table
The following information is maintained in the processor table:
- topstack // integer, the value of the stack pointer
- stack // preallocated processor stack
- thread_id // integer, identity of the thread currently running on the processor
The following information is maintained in the thread table for each thread:
- topstack // integer, the value of the stack pointer for the thread
- state // the state of the thread, e.g., RUNNING, RUNNABLE
- stack // stack for this thread
- kill_or_continue // boolean indicating if the thread should be terminated
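The two tables can be written down as C structures. This is a sketch under assumptions: STACK_SIZE and the field types are illustrative choices, not values given in the text:

```c
#include <assert.h>
#include <stdint.h>
#include <stdbool.h>

#define STACK_SIZE 4096          /* illustrative size, not from the text */
#define NTHREADS   7

/* One entry per processor. */
struct processor_entry {
    uintptr_t topstack;          /* value of the stack pointer            */
    char      stack[STACK_SIZE]; /* preallocated processor stack          */
    int       thread_id;         /* thread currently running here         */
};

enum thread_state { RUNNING, RUNNABLE };

/* One entry per thread. */
struct thread_entry {
    uintptr_t         topstack;          /* stack pointer for the thread       */
    enum thread_state state;             /* RUNNING, RUNNABLE, ...             */
    char              stack[STACK_SIZE]; /* stack for this thread              */
    bool              kill_or_continue;  /* should the thread be terminated?   */
};

struct thread_entry thread_table[NTHREADS];
```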
Thread coordination with bounded buffers
Bounded buffer: the virtualization of a communication channel.
Thread coordination:
- Locks for serialization
- Bounded buffers for communication
The producer thread writes data into the buffer; the consumer thread reads data from the buffer.
Basic assumptions:
- We have only two threads.
- Threads proceed concurrently at independent speeds/rates.
- Bounded buffer: only N buffer cells.
- Messages are of fixed size and occupy only one buffer cell.
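A minimal C sketch of this producer/consumer scheme, using the in/out counters discussed below (the names bb_send and bb_receive are illustrative). It is only correct under the implicit assumptions on the next slide: one sender, one receiver, each counter updated by a single thread:

```c
#include <assert.h>

#define N 8                      /* bounded buffer with N cells */

typedef struct {
    long in;                     /* number of messages ever sent     */
    long out;                    /* number of messages ever received */
    int  cell[N];                /* one fixed-size message per cell  */
} bounded_buffer;

/* Producer: spin while the buffer is full, then write one item. */
void bb_send(bounded_buffer *b, int msg)
{
    while (b->in - b->out == N)  /* full: the sender must wait */
        ;
    b->cell[b->in % N] = msg;
    b->in++;                     /* publish the item after writing it */
}

/* Consumer: spin while the buffer is empty, then read one item. */
int bb_receive(bounded_buffer *b)
{
    while (b->in == b->out)      /* empty: the receiver must wait */
        ;
    int msg = b->cell[b->out % N];
    b->out++;
    return msg;
}
```

Only the producer writes in and only the consumer writes out, which is why this works without a lock when the assumptions hold.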
Implicit assumptions for the correctness of the implementation
- One sending and one receiving thread; only one thread updates each shared variable.
- Sender and receiver threads run on different processors, to allow spin locks.
- in and out are implemented as integers large enough that they do not overflow (e.g., 64-bit integers).
- The shared memory used for the buffer provides read/write coherence.
- The memory provides before-or-after atomicity for the shared variables in and out.
- The result of executing a statement becomes visible to all threads in program order.
- No compiler optimizations that reorder memory operations.
In practice...
Threads run concurrently and race conditions may occur: data in the buffer may be overwritten.
The remedy is a lock for the bounded buffer:
- the producer acquires the lock before writing
- the consumer acquires the lock before reading
We have to avoid deadlocks
If a producer thread cannot write because the buffer is full, it has to release the lock to allow the consumer thread to acquire the lock and read; otherwise we have a deadlock.
If a consumer thread cannot read because there is no new item in the buffer, it has to release the lock to allow the producer thread to acquire the lock and write; otherwise we have a deadlock.
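The producer side of this rule can be sketched as follows (a single-threaded demonstration with a toy lock; try_send and the acquire/release helpers are illustrative names, not from the text). The key point is that on a full buffer the lock is released before giving up, instead of being held while waiting:

```c
#include <assert.h>

#define N 4

static int lock_taken = 0;                 /* toy spin lock for the demo */
static void acquire(void) { while (lock_taken) ; lock_taken = 1; }
static void release(void) { lock_taken = 0; }

static long in = 0, out = 0;
static int  cell[N];

/* Producer: if the buffer is full, release the lock so the consumer
   can acquire it and make room; holding it would cause a deadlock.
   Returns 1 if the item was written, 0 if the buffer was full. */
int try_send(int msg)
{
    acquire();
    if (in - out == N) {    /* full: give the lock back and retry later */
        release();
        return 0;
    }
    cell[in % N] = msg;
    in++;
    release();
    return 1;
}
```

The consumer's try_receive is symmetric: it releases the lock and returns when the buffer is empty.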
In practice...
We have to ensure atomicity of some operations, e.g., updating the in and out pointers.
One more pitfall of the previous implementation of bounded buffer
If in and out are long integers (64 or 128 bit), then a load requires two registers, e.g., R1 and R2:
  in = "FFFFFFFF"
  L R1, in     /* R1 <- first half of in            */
  L R2, in     /* R2 <- second half of in, FFFFFFFF */
Race conditions could affect a load or a store of the long integer.
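This "torn read" can be made concrete in C. The sketch below reads a 64-bit value as two 32-bit halves, as a 32-bit processor would, and simulates the other thread updating the value between the two loads (torn_read and the writer_runs_between flag are illustrative devices, not from the text):

```c
#include <assert.h>
#include <stdint.h>

/* Read a 64-bit counter stored as two 32-bit halves (half[0] = low,
   half[1] = high), as when a load needs two registers R1 and R2.
   If writer_runs_between is set, simulate the other thread incrementing
   0x00000000FFFFFFFF to 0x0000000100000000 between the two loads. */
uint64_t torn_read(volatile uint32_t half[2], int writer_runs_between)
{
    uint32_t r1 = half[1];               /* first load: high half */
    if (writer_runs_between) {
        half[0] = 0;                     /* the writer updates both halves */
        half[1] = 1;
    }
    uint32_t r2 = half[0];               /* second load: low half */
    return ((uint64_t)r1 << 32) | r2;
}
```

With the race, the value assembled from R1 and R2 is neither the old value nor the new one, exactly the pitfall the slide describes.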
In practice the threads may run on the same processor....
We cannot use spinlocks for a thread to wait until an event occurs: the spinning thread would keep the processor busy and prevent the other thread from ever running. That's why we have spent time on YIELD...
Coordination with events and signals
We introduce two events:
- p_room: signals that there is room in the buffer
- p_notempty: signals that there is a new item in the buffer
We also introduce two new system calls:
- WAIT(ev): wait until the event ev occurs
- NOTIFY(ev): notify the other process that event ev has occurred
SEND will wait if the buffer is full until it is notified that RECEIVE has created more room: SEND -> WAIT(p_room) and RECEIVE -> NOTIFY(p_room).
RECEIVE will wait if there is no new item in the buffer until it is notified by SEND that a new item has been written: RECEIVE -> WAIT(p_notempty) and SEND -> NOTIFY(p_notempty).
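This WAIT/NOTIFY scheme can be sketched with POSIX condition variables standing in for the two events (a sketch, not the text's kernel implementation; send_msg and receive_msg are illustrative names):

```c
#include <pthread.h>
#include <assert.h>

#define N 8

static int  cell[N];
static long in = 0, out = 0;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  p_room     = PTHREAD_COND_INITIALIZER;   /* room in buffer    */
static pthread_cond_t  p_notempty = PTHREAD_COND_INITIALIZER;   /* new item in buffer */

void send_msg(int msg)
{
    pthread_mutex_lock(&m);
    while (in - out == N)                    /* buffer full: WAIT(p_room)   */
        pthread_cond_wait(&p_room, &m);
    cell[in % N] = msg;
    in++;
    pthread_cond_signal(&p_notempty);        /* NOTIFY(p_notempty)          */
    pthread_mutex_unlock(&m);
}

int receive_msg(void)
{
    pthread_mutex_lock(&m);
    while (in == out)                        /* buffer empty: WAIT(p_notempty) */
        pthread_cond_wait(&p_notempty, &m);
    int msg = cell[out % N];
    out++;
    pthread_cond_signal(&p_room);            /* NOTIFY(p_room)              */
    pthread_mutex_unlock(&m);
    return msg;
}
```

Note that pthread_cond_wait releases the mutex while waiting, which is exactly the "release the lock to avoid deadlock" rule from the earlier slide.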
Deadlocks
Happen quite often in real life, and the proposed solutions are not always logical: "When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone." (a pearl from Kansas legislation)
Other examples: a deadlocked jury; a deadlocked legislative body.
12/5/2018 Lecture 21
Examples of deadlock
Traffic that can flow in only one direction at a time.
Solution: one car backs up (preempt resources and rollback). Several cars may have to back up. Starvation is possible.
Thread deadlock
Deadlocks prevent sets of concurrent threads/processes from completing their tasks.
How does a deadlock occur? A set of blocked threads, each holding a resource and waiting to acquire a resource held by another thread in the set.
Example: locks A and B, initialized to 1
  P0: wait(A); wait(B)
  P1: wait(B); wait(A)
Aim: prevent or avoid deadlocks.
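One standard way to prevent the P0/P1 deadlock above is a global lock ordering: every thread acquires A before B, so no circular wait can form. A minimal pthread sketch (run_both and worker are illustrative names):

```c
#include <pthread.h>
#include <assert.h>

static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;
static int critical_runs = 0;

/* Both threads acquire the locks in the SAME order, A then B,
   unlike P0 and P1 in the example, so neither can block the other forever. */
static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&A);
    pthread_mutex_lock(&B);
    critical_runs++;                 /* protected by both locks */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return 0;
}

/* Run two concurrent workers; returns how many completed the critical section. */
int run_both(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, 0, worker, 0);
    pthread_create(&t1, 0, worker, 0);
    pthread_join(t0, 0);
    pthread_join(t1, 0);
    return critical_runs;
}
```

With the original opposite orderings, the same program could block forever; with a single global order it always terminates.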
System model
Resource types R1, R2, . . ., Rm (CPU cycles, memory space, I/O devices).
Each resource type Ri has Wi instances.
Resource access model: request, use, release.
Simultaneous conditions for deadlock
- Mutual exclusion: only one process at a time can use a resource.
- Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
- No preemption: a resource can be released only voluntarily by the process holding it (presumably after that process has finished).
- Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn–1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
Wait-for graphs
Processes are represented as nodes; an edge from thread Ti to thread Tj means that Tj holds a resource that Ti needs, so Ti is waiting for Tj to release its lock on that resource. A deadlock exists if the graph contains a cycle.
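Detecting a deadlock is then cycle detection on the wait-for graph, for instance with a depth-first search (a sketch; waits_for, has_deadlock, and the small fixed MAXT are illustrative choices):

```c
#include <assert.h>
#include <string.h>

#define MAXT 8

/* waits_for[i][j] = 1 means thread Ti waits for a resource held by Tj. */
static int waits_for[MAXT][MAXT];
static int color[MAXT];          /* 0 = unvisited, 1 = on the DFS stack, 2 = done */

/* DFS from node u over n threads; a back edge to a node on the stack is a cycle. */
static int dfs(int u, int n)
{
    color[u] = 1;
    for (int v = 0; v < n; v++)
        if (waits_for[u][v]) {
            if (color[v] == 1) return 1;            /* back edge: cycle found */
            if (color[v] == 0 && dfs(v, n)) return 1;
        }
    color[u] = 2;
    return 0;
}

/* A deadlock exists iff the wait-for graph over n threads has a cycle. */
int has_deadlock(int n)
{
    memset(color, 0, sizeof color);
    for (int u = 0; u < n; u++)
        if (color[u] == 0 && dfs(u, n))
            return 1;
    return 0;
}
```

For the earlier P0/P1 example, the edges T0 -> T1 (P0 waits for B, held by P1) and T1 -> T0 (P1 waits for A, held by P0) form a cycle, so has_deadlock reports 1.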