Multiprocessor OS

The functional capabilities often required in an OS for a multiprogrammed computer include resource allocation and management schemes, memory and dataset protection, prevention of system deadlocks, and handling of abnormal process termination. The OS also needs techniques for efficient utilization of resources, and it must provide I/O and load-balancing schemes.

The presence of more than one processing unit in the system introduces a new dimension into the design of the OS.

Three organizations have been utilized in the design of OSs for multiprocessors: the master-slave configuration, a separate supervisor for each processor, and floating supervisor control.

In master-slave mode, one processor, called the master, maintains the status of all processors in the system and apportions the work to the slave processors. An example of master-slave mode is the Cyber-170, where the OS is executed by one peripheral processor, P0; all other processors are treated as slaves to P0.
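The master-slave idea can be sketched in ordinary code. The following is a minimal illustration, not the Cyber-170's actual mechanism: Python threads stand in for processors, and only the "master" thread touches scheduling state, handing out work and shutdown signals to the slaves.

```python
import queue
import threading

def run_master_slave(tasks, num_slaves=3):
    """Master apportions work; slaves only execute what they are given."""
    work = queue.Queue()
    results = queue.Queue()

    def slave():
        while True:
            item = work.get()
            if item is None:          # master's shutdown signal
                break
            results.put(item * item)  # stand-in for real work

    workers = [threading.Thread(target=slave) for _ in range(num_slaves)]
    for w in workers:
        w.start()

    # Only the master decides who gets work and when the slaves stop.
    for t in tasks:
        work.put(t)
    for _ in workers:
        work.put(None)
    for w in workers:
        w.join()

    return sorted(results.get() for _ in range(results.qsize()))

print(run_master_slave([1, 2, 3, 4]))  # -> [1, 4, 9, 16]
```

The asymmetry is the point: a slave never examines system status or allocates work, which keeps the supervisor simple but makes the master a bottleneck and a single point of failure.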

When there is a separate supervisor system (kernel) running in each processor, the OS characteristics are very different from those of master-slave systems. This is similar to the approach taken in computer networks, where each processor contains a copy of a basic kernel. Resource sharing occurs at a higher level, and each processor services its own needs.

Since there is some interaction between the processors, it is necessary for some of the supervisory code to be reentrant, or else replicated to provide a separate copy for each processor. Although each supervisor has its own set of private tables, some tables are common and shared by the whole system, which creates table-access problems. The method used for accessing the shared resources depends on the degree of coupling among the processors. A separate-supervisor OS is not as sensitive to a catastrophic failure as a master-slave system.

The floating supervisor control scheme treats all the processors, as well as the other resources, symmetrically, as an anonymous pool of resources. This is the most difficult mode of operation and also the most flexible. In this mode the supervisor routine floats from one processor to another, and several of the processors may be executing supervisory service routines simultaneously. Systems of this type can attain better load balancing over all types of resources.

Examples of OSs that execute in this mode are MVS and VM on the IBM 3081, and Hydra on C.mmp.

Software requirements for multiprocessors

Program control structures are provided to aid programmers in developing efficient parallel algorithms. Three basic nonsequential program control structures have been identified. These control structures are characterized by the fact that the programmer need only focus on a small program unit, not on the overall control of the computation.

The first is the message-based organization, which was used in the Cm* OS. Here, computation is performed by multiple homogeneous processes that execute independently and interact via messages; the grain size of a typical process depends on the system. The second is the chore structure, in which all code is broken into small units. The process that executes such a unit of code is called a chore. An important characteristic of a chore is that once it begins execution, it runs to completion; to avoid long waits, chores are kept small.
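The message-based organization can be sketched as follows. This is an illustrative analogue, not Cm* code: two independent activities share no variables and interact only through a message channel.

```python
import queue
import threading

# Two independent processes (threads here) that interact only via messages.
def producer(outbox):
    for n in range(5):
        outbox.put(n)        # send a message
    outbox.put(None)         # end-of-stream marker

def consumer(inbox, results):
    while True:
        msg = inbox.get()    # receive a message; no shared state is touched
        if msg is None:
            break
        results.append(msg * 2)

channel = queue.Queue()
results = []
t1 = threading.Thread(target=producer, args=(channel,))
t2 = threading.Thread(target=consumer, args=(channel, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # -> [0, 2, 4, 6, 8]
```

Because all interaction flows through the channel, each process can be written and reasoned about as an ordinary sequential program.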

The third control structure is the production system, now often used in AI. Productions are expressions of the form antecedent → consequent: whenever the Boolean antecedent evaluates to true, the consequent may be performed. In contrast to chores, production consequents may or may not include code that blocks.

In a production system, four scheduling strategies are required: (a) to control the selection of the antecedents to be evaluated next; (b) to order the execution of the selected antecedents; (c) to select the subset of runnable consequents to be executed; and (d) to order the execution of the selected consequents.

OS requirements

Basic goals of an OS:
--To provide a programmer interface to the machine
--To manage resources
--To provide mechanisms to implement policies
--To facilitate matching applications to the machine

The sharing of the multiple processors may be achieved by placing several processes together in shared memory and providing a mechanism for rapidly switching the attention of a processor from one process to another. This operation is often called context switching.

Sharing of the processors introduces three subordinate problems:
1. The protection of the resources of one process from willful or accidental damage by other processes.
2. The provision for communication among processes, and between user processes and supervisor processes.
3. The allocation of resources among processes so that resource demands can always be fulfilled.

Exploiting concurrency for multiprocessing

A parallel program for a multiprocessor consists of two or more interacting processes. A process is a sequential program that executes concurrently with other processes.

Language features to exploit parallelism

Processes are concurrent if their executions overlap in time. No prior knowledge is available about the relative speeds at which concurrent processes execute. One way to denote concurrency is to use FORK and JOIN statements.
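FORK starts a new concurrent activity and JOIN waits for it to finish. A rough analogue, using Python threads in place of the classical FORK/JOIN statements (the halving of the data is just an example decomposition):

```python
import threading

partial = {}

def summer(name, numbers):
    partial[name] = sum(numbers)   # each branch works on its own slice

data = list(range(10))
# FORK: start a concurrent branch for each half of the data.
left = threading.Thread(target=summer, args=("left", data[:5]))
right = threading.Thread(target=summer, args=("right", data[5:]))
left.start()
right.start()
# JOIN: wait for both branches before combining their results.
left.join()
right.join()
total = partial["left"] + partial["right"]
print(total)  # -> 45
```

Note that nothing may be assumed about which branch finishes first; the JOIN is what makes it safe to read both partial results.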

A very common problem occurs when two or more concurrent processes share data that is modifiable. If a process is allowed to access a set of variables that is being updated by another process concurrently, erroneous results will occur in the computation.

So controlled access to the shared variables is required, to guarantee that a process has mutually exclusive access to the sections of program and data that are nonreentrant or modifiable. Such segments of programs are called critical sections.

The following assumptions are made regarding critical sections:
1. Mutual exclusion: at most one process can be in a critical section at a time.
2. Termination: the critical section is executed in a finite time.
3. Fair scheduling: a process attempting to enter the critical section will eventually do so in a finite time.
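A minimal sketch of a critical section protected by a mutual-exclusion lock, assuming Python threads as the concurrent processes. The increment of the shared counter is exactly the kind of nonreentrant read-modify-write that the text warns about:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        # Critical section: without the lock, two threads' read-modify-write
        # sequences on `counter` can interleave and lose updates.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # -> 40000
```

The `with lock:` block satisfies assumption 1 directly, and because the body is a single short statement it trivially meets the finite-execution-time assumption as well.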

Deadlock occurs when two processes enter their critical sections in opposite order and create a situation in which each process is waiting indefinitely for the completion of a region within the other process. Circular wait is one of the conditions for deadlock. Deadlock is possible only when it is assumed that a resource cannot be released by a process that is waiting for the allocation of another resource.
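One standard way to rule out the circular wait is to impose a single global ordering on lock acquisition. The sketch below (illustrative names, Python threads for processes) has both threads take the two locks in the same order, so the opposite-order entry described above can never arise:

```python
import threading

# Two locks representing two resources.
lock_a = threading.Lock()
lock_b = threading.Lock()

def acquire_in_order(first, second, log, name):
    with first:
        with second:
            log.append(name)   # both resources held; do the work

log = []
# Both threads follow the same global order (lock_a before lock_b),
# so no circular wait can form.
t1 = threading.Thread(target=acquire_in_order, args=(lock_a, lock_b, log, "t1"))
t2 = threading.Thread(target=acquire_in_order, args=(lock_a, lock_b, log, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(log))  # -> ['t1', 't2']
```

Had one thread been given `(lock_b, lock_a)` instead, each could end up holding one lock while waiting for the other, which is exactly the indefinite mutual wait described in the text.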

From this technique, an algorithm can be designed to find the subset of resources that would incur the minimum cost if preempted. This approach means that after each preemption, the detection algorithm must be reinvoked to check whether a deadlock still exists.
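The detection step itself can be sketched as cycle detection in a wait-for graph. This is a generic illustration, not the specific algorithm the text refers to: an edge p → q means process p is waiting for a resource held by process q, and a cycle means a set of processes is deadlocked.

```python
# Detect a cycle (circular wait) in a wait-for graph given as a dict
# mapping each process to the processes it is waiting on.
def has_deadlock(wait_for):
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in wait_for.get(node, []):
            if nxt in on_stack:
                return True                 # back edge: circular wait
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(p) for p in wait_for if p not in visited)

print(has_deadlock({"P1": ["P2"], "P2": ["P1"]}))  # -> True
print(has_deadlock({"P1": ["P2"], "P2": []}))      # -> False
```

After preempting a victim, its edges are removed from the graph and this check is run again, matching the reinvocation requirement stated above.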

A process that has had a resource preempted from it must make a subsequent request for the resource to be reallocated to it.

Synchronization is a general term for timing constraints of this type imposed on interactions between concurrent processes. The simplest form of interaction is an exchange of timing signals between two processes.

An example is the use of interrupts to signal the completion of asynchronous peripheral operations to the processor. Another type of timing signal, the event, was used in early multiprocessing systems to synchronize concurrent processes.
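The exchange of timing signals can be sketched with an event object: one activity stands in for the peripheral operation and signals completion, while the other blocks until the signal arrives. Names here are illustrative.

```python
import threading

done = threading.Event()
result = []

def peripheral_operation():
    result.append("data")   # stand-in for an asynchronous I/O transfer
    done.set()              # signal completion, like an interrupt or event

worker = threading.Thread(target=peripheral_operation)
worker.start()
done.wait()                 # the waiting process blocks until signaled
print(result)  # -> ['data']
worker.join()
```

The signal carries no data itself; it only constrains timing, guaranteeing that the waiter does not proceed until the operation has completed.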

Program and algorithm restructuring

Two major issues in decomposing an algorithm can be identified: partitioning and assignment. Partitioning is the division of an algorithm into procedures, modules, and processes. Assignment refers to the allocation of these units to processors.
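The two steps can be made concrete with a toy example. Here the algorithm is assumed to be already partitioned into named units, and assignment maps each unit to a processor; round-robin is used purely as an example policy, not as a recommended one.

```python
# Assignment step: map already-partitioned units onto processors.
def assign(units, num_processors):
    mapping = {}
    for i, unit in enumerate(units):
        mapping[unit] = i % num_processors   # round-robin placement
    return mapping

units = ["parse", "transform", "reduce", "report"]
print(assign(units, 2))
# -> {'parse': 0, 'transform': 1, 'reduce': 0, 'report': 1}
```

Real assignment policies weigh communication costs between units against processor load, which is why partitioning and assignment interact and are usually considered together.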