
Operating Systems Concepts


1 Operating Systems Concepts
Lectures 17, 18, 19, 20 – Inter-process Communication, Multithreading, Parallel Processing, Concurrency, Critical Sections, Mutual Exclusion

2 Can Processes Interact?
Consider a web application with two processes: a web server and a database. These processes communicate with the help of IPC mechanisms.

3 Inter-process Communication Mechanisms (IPC)
Transfer data/information between address spaces while still maintaining protection and isolation. Provide flexibility and performance.

4 Message Passing IPC Mechanism
The OS provides a communication channel, such as a shared buffer. Processes Send() and Receive() messages to and from the channel.

5 Message Passing: Advantage and Disadvantage
Advantage: the OS manages the communication, hence standard APIs and system calls are used.
Disadvantage: overhead – messages are copied between user space and kernel space.

6 Shared Memory IPC Mechanism
The OS establishes a shared channel and maps it into the address space of each process. Processes directly read/write from this memory – the OS is out of the way!

7 Shared Memory: Advantage and Disadvantage
Advantage: the OS is out of the way – no copying of messages between user and kernel space.
Disadvantage: no standard APIs or system calls; processes must implement their own communication code.

8 Threads The unit of dispatching is referred to as a thread or lightweight process

9 Multithreading Multithreading refers to the ability of an OS to support multiple, concurrent paths of execution within a single process.

10 Single Thread Approaches
MS-DOS supports a single user process and a single thread. Some UNIX systems support multiple user processes but only one thread per process. This is the traditional single-threaded approach, with one thread of execution per process, in which the concept of a thread is not recognized; examples are MS-DOS (single process, single thread) and UNIX (multiple single-threaded processes).

11 Multithreading The Java run-time environment is an example of a single process with multiple threads. Of main interest here is the use of multiple processes, each of which supports multiple threads; examples include Windows, Solaris, and many modern versions of UNIX.

12 Processes
In a multithreaded environment, a process is defined as the unit of resource allocation and the unit of protection. A process has: a virtual address space which holds the process image, and protected access to processors, other processes, files, and I/O resources.

13 One or More Threads in a Process
Within a process there may be one or more threads, each with: an execution state (Running, Ready, etc.); a saved thread context when not running; an execution stack; some per-thread static storage for local variables; and access to the memory and resources of its process (shared by all threads of the process).

14 One view… One way to view a thread is as an independent program counter operating within a process.

15 Threads vs. Processes
Distinction between threads and processes from the point of view of process management:
In a single-threaded process model, the representation of a process includes its process control block, user address space, and user and kernel stacks to manage the call/return behaviour of the execution of the process. While the process is running, it controls the processor registers; the contents of these registers are saved when the process is not running.
In a multithreaded environment, there is still a single process control block and user address space associated with the process, but there are separate stacks for each thread, as well as a separate control block for each thread containing register values, priority, and other thread-related state information.
Thus, all of the threads of a process share the state and resources of that process. They reside in the same address space and have access to the same data. When one thread alters an item of data in memory, other threads see the results if and when they access that item. If one thread opens a file with read privileges, other threads in the same process can also read from that file.

16 Benefits of Threads It takes less time to create a new thread than a process, and less time to terminate a thread than a process. Switching between two threads takes less time than switching between processes, and threads can communicate with each other without invoking the kernel. If there is an application or function that should be implemented as a set of related units of execution, it is far more efficient to do so as a collection of threads rather than as a collection of separate processes.

17 Threads Several actions affect all of the threads in a process,
and the OS must manage these at the process level. Examples: suspending a process suspends all threads of the process; terminating a process terminates all threads within the process. Suspension involves swapping the address space of one process out of main memory to make room for the address space of another process. Because all threads in a process share the same address space, all threads are suspended at the same time. Similarly, termination of a process terminates all threads within that process.

18 Parallel Processing Parallel processing is a method of breaking up program tasks and running them simultaneously on multiple processors, thereby reducing processing time. Parallel processing may be accomplished via a computer with two or more processors or via a computer network. Parallel processing is also called parallel computing.

19 Concurrency Concurrency is the execution of processes in such a way that they are either actually executing simultaneously or at least give the impression of simultaneous execution. The central themes of operating system design are all concerned with the management of processes and threads: Multiprogramming: the management of multiple processes within a uniprocessor system. Multiprocessing: the management of multiple processes within a multiprocessor. Distributed processing: the management of multiple processes executing on multiple, distributed computer systems.

20 Concurrency
Concurrency arises in:
Multiple applications – sharing processing time. Multiprogramming was invented to allow processing time to be dynamically shared among a number of active applications.
Structured applications – an extension of modular design. As an extension of the principles of modular design and structured programming, some applications can be effectively programmed as a set of concurrent processes.
Operating system structure – the same structuring advantages apply to systems programs, and operating systems are themselves often implemented as a set of processes or threads.

21 Key Terms

22 Interleaving and Overlapping Processes
Earlier we saw that processes may be interleaved on uniprocessors

23 Interleaving and Overlapping Processes
And not only interleaved but overlapped on multi-processors

24 Difficulties of Concurrency
Sharing of global resources: if two processes both make use of the same global variable and both perform reads and writes on that variable, then the order in which the various reads and writes are executed is critical.
Optimally managing the allocation of resources: it is difficult for the OS to manage the allocation of resources optimally. For example, a process may request use of, and be granted control of, a particular I/O channel and then be suspended before using that channel. It may be undesirable for the OS simply to lock the channel and prevent its use by other processes; indeed, this may lead to a deadlock condition.
Locating programming errors: it becomes very difficult to locate a programming error because results are typically not deterministic and reproducible.

25 A Simple Example
void echo()
{
    chin = getchar();   // get input
    chout = chin;
    putchar(chout);     // display value
}
A program that provides a character echo procedure; input is obtained from a keyboard one keystroke at a time. Each input character is stored in variable chin; it is then transferred to variable chout and finally sent to the display. Any program can call this procedure repeatedly to accept user input and display it on the user's screen. Each application needs to use the procedure echo, so it makes sense for it to be a shared procedure that is loaded into a portion of memory global to all applications. Thus, only a single copy of the echo procedure is used, saving space.

26 A Simple Example: On a Multiprocessor
Process P1                 Process P2
chin = getchar();          .
.                          chin = getchar();
chout = chin;              chout = chin;
putchar(chout);            .
.                          putchar(chout);
The result is that the character input to P1 is lost before being displayed, and the character input to P2 is displayed by both P1 and P2.

27 Race Condition
A race condition occurs when multiple processes or threads read and write shared data items in such a way that the final result depends on the order of execution of the processes. The output depends on which process finishes the race last.

28 Operating System Concerns
What design and management issues are raised by the existence of concurrency? The OS must Keep track of various processes Allocate and de-allocate resources Protect the data and resources against interference by other processes. Ensure that the processes and outputs are independent of the processing speed

29 Need for Mutual Exclusion
Mutual Exclusion: the requirement that when one process is in a critical section, no other process can be in the same critical section. Suppose two or more processes require access to a single non-sharable resource, such as a printer. During the course of execution, each process will be sending commands to the I/O device, receiving status information, sending data, and/or receiving data. We will refer to such a resource as a critical resource, and the portion of the program that uses it is called a critical section of the program. It is important that only one program at a time be allowed in its critical section. We cannot simply rely on the OS to understand and enforce this restriction because the detailed requirements may not be obvious. In the case of the printer, for example, we want any individual process to have control of the printer while it prints an entire file. Otherwise, lines from competing processes will be interleaved.

30 Critical Section The section of code where shared resources are accessed is called the critical section, and that resource is called a critical resource.
Critical section properties (these are also the requirements for mutual exclusion):
There must be some mechanism for implementing mutual exclusion.
A thread outside the critical section cannot stop another thread from entering it.
Bounded waiting – a process remains inside its critical section for a finite time only.

31 Example
void echo()
{
    // critical section entry
    chin = getchar();   // get input
    chout = chin;
    putchar(chout);     // display value
    // end of critical section
}

32 Process Synchronization
Process synchronization is defined as a mechanism which ensures that two or more concurrent processes do not simultaneously execute their critical sections. When one process starts executing the critical section, the other processes/threads should wait.

33 Mutex Locks – Process Synchronization Tool
A locking mechanism used to synchronize access to a resource. Only one process/thread can acquire the mutex lock, and the process that acquires it has the authority to release it. When a process enters the critical section, it acquires the lock, changing the value of the lock to 1 (true). When the process leaves the critical section, it releases the lock, changing its value to 0 (false). Other processes waiting for the lock to be released continuously test the value of the lock to see if it has changed, wasting CPU time. This phenomenon is called busy waiting, and because of it, mutex locks are also called spin locks.

34 Example
void echo()
{
    acquire_lock();     // critical section entry
    chin = getchar();   // get input
    chout = chin;
    putchar(chout);     // display value
    release_lock();     // end of critical section
}

35 Semaphore – Process Synchronization Tool
An integer value used for signalling among processes. Only three operations may be performed on a semaphore, all of which are atomic:
Initialize() – a semaphore may be initialized to a nonnegative integer value (the number of resources of the same type).
Decrement (semWait) – the semWait operation decrements the semaphore value. If the value becomes negative, then the process executing the semWait is blocked; otherwise, the process continues execution.
Increment (semSignal) – the semSignal operation increments the semaphore value. A process blocked by a semWait operation, if any, can be unblocked by this operation.

36 Types of Semaphore
Binary Semaphore: if the value of the semaphore is initialized to 1, it behaves the same as a mutex lock and is called a binary semaphore.
Counting Semaphore: if the value of the semaphore is initialized to a value greater than 1, it is called a counting semaphore.

37 Mutex vs. Semaphore A mutex (spin lock) has the problem of busy waiting, which does not exist with semaphores. Semaphores can also be used to provide mutual exclusion when the number of resources is greater than 1.

38 Thank You!

