Operating Systems (OS) Threads, SMP, and Microkernel, Unix Kernel

CSCI2413 Lecture 10 (phones off, please)

Your assignment is due on Friday 25th February before 4.00pm. Your second test is on Thursday 3rd March at 4.00pm in GH3.75. © De Montfort University, 2004/5 csci2413 - L10

Lecture Outline
- Threads and Parallel Processing
- SMP
- Microkernels
- Unix Kernel

Consider
A process is a unit of:
- resource ownership
- dispatching (execution path, state)
What if we treat each independently?
- process: resource ownership
- thread: dispatch and execution unit

Threads
Sometimes called a lightweight process: a smaller execution unit than a process.
A thread consists of:
- program counter
- register set
- stack space
Threads in a process share:
- memory space
- code section
- OS resources (open files, signals, etc.)
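
This split between per-thread state and shared state can be sketched as follows (an illustrative Python sketch; the names `shared` and `worker` are ours, and the same idea holds for any threads package):

```python
import threading

shared = []  # one memory space: every thread in the process sees this list

def worker(name, count):
    # each thread has its own stack (the locals name, count, i live there),
    # but its appends land in the shared memory space
    for i in range(count):
        shared.append((name, i))

t1 = threading.Thread(target=worker, args=("A", 3))
t2 = threading.Thread(target=worker, args=("B", 3))
t1.start(); t2.start()
t1.join(); t2.join()
# shared now holds entries contributed by both threads
```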

Uses of Threads: Examples
- Foreground and background work (spreadsheet: one thread reading user input, another executing the commands and updating the spreadsheet, yet another making periodic backups)
- Asynchronous processing (e.g. a thread performing periodic backups against power failures in a word processor)
- Fast execution (on a multiprocessor system, multiple threads can execute in parallel)
- Modular program structure (different tasks/activities in a program may be implemented using different threads)
- Client/server computing

User-Level Threads (ULT)
All thread management is done by the application; the kernel is not aware of the existence of threads.
Disadvantages:
- when a ULT executes a blocking system call, all of the threads within the same process are blocked
- cannot take advantage of multiprocessing

Kernel-Level Threads (KLT)
W2K, Linux, and OS/2 are examples of this approach.
- The kernel maintains context information for the process and its threads
- Scheduling is done on a thread basis

Multithreading
Most operating systems support multiple threads of execution within a single process.
- MS-DOS supports a single thread
- Traditional UNIX supports multiple user processes but only one thread per process
- Windows NT, Solaris, Linux, Mac OS, and OS/2 support multiple threads

Threads
[Figure: two processes; within each, every thread has its own registers, while the memory and code areas are shared.]
All threads in a process share the same memory space.

Single-Threaded and Multithreaded Process Models
[Figure: in the single-threaded model, one process control block, user address space, user stack, and kernel stack; in the multithreaded model, the process control block and user address space are shared, while each thread has its own thread control block, user stack, and kernel stack.]

Why use threads?
- Parallelism
- Fewer resources than processes
- Faster context switch
- Threads can cooperate more easily
- Modularity

Parallel Processing
The processing of program instructions by dividing them among multiple processors, with the objective of running a program in less time. Rather than speeding up the hardware, you can add more processors to speed up the system.
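
Dividing a program's work among several workers can be sketched like this (a hypothetical `parallel_sum` in Python; with true hardware parallelism each chunk would run on its own processor):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    # divide the data into one chunk per worker, sum the chunks
    # concurrently, then combine the partial results
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, chunks))
```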

Flynn’s Classification
Based on the number of instruction streams and the number of data streams.

Categories of Computer Systems
- Single Instruction, Single Data (SISD): a single processor executes a single instruction stream to operate on data stored in a single memory
- Single Instruction, Multiple Data (SIMD): each instruction is executed on a different set of data by the different processors

Categories of Computer …
- Multiple Instruction, Single Data (MISD): a sequence of data is transmitted to a set of processors, each of which executes a different instruction sequence; never implemented
- Multiple Instruction, Multiple Data (MIMD): a set of processors simultaneously execute different instruction sequences on different data sets
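
The SISD and SIMD cases can be pictured in miniature (illustrative Python only; real SIMD hardware applies the one instruction to all elements simultaneously, not in a loop):

```python
# SISD: a single instruction stream operating on a single data stream
def sisd_sum(data):
    total = 0
    for x in data:        # one instruction at a time, one datum at a time
        total += x
    return total

# SIMD in miniature: one operation (addition) applied across many data
# elements; each position plays the role of a separate processor
def simd_add(a, b):
    return [x + y for x, y in zip(a, b)]
```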

Symmetric Multiprocessing
SMP is the processing of programs by multiple processors that share a common operating system and memory.
- The kernel can execute on any processor
- Typically, each processor does self-scheduling from the pool of available processes or threads
- Timer interrupt
- Ready queue
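
Self-scheduling from a shared pool can be sketched as follows (an illustrative Python sketch; each worker thread pulls its own next task from a shared ready queue, much as each SMP processor dispatches work for itself rather than being assigned it by a master):

```python
import queue
import threading

def run_self_scheduled(tasks, n_workers=3):
    ready = queue.Queue()          # the shared pool of runnable work
    for task in tasks:
        ready.put(task)
    results = []
    results_lock = threading.Lock()

    def worker():
        # self-scheduling: each worker takes the next task from the
        # shared queue itself, with no central dispatcher
        while True:
            try:
                task = ready.get_nowait()
            except queue.Empty:
                return             # pool drained: this worker stops
            value = task()
            with results_lock:     # synchronisation on the shared result
                results.append(value)

    workers = [threading.Thread(target=worker) for _ in range(n_workers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return results
```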

Multiprocessor Operating System Design Considerations
- Simultaneous concurrent processes or threads
- Scheduling
- Synchronization
- Memory management
- Reliability and fault tolerance

Microkernel
A small operating system core containing only essential OS functions.
Many services traditionally included in the operating system are now external subsystems:
- device drivers
- file systems
- virtual memory manager
- windowing system and security services

Microkernel OS
[Figure: layered microkernel structure. In user mode: user processes and the servers for the file system, IPC, I/O and device management, virtual memory, and process management. In kernel mode: the microkernel. Beneath everything: the hardware.]
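
The structure in the figure can be sketched in a few lines (a toy model, not any real microkernel API: user-mode servers register with a tiny kernel whose only job is delivering messages):

```python
def file_server(msg):
    # user-mode file system server: answers read requests
    return ("ok", f"contents of {msg['path']}")

def process_server(msg):
    # user-mode process management server: pretends to start a program
    return ("ok", f"started {msg['program']}")

SERVERS = {"file": file_server, "process": process_server}

def kernel_send(msg):
    # the microkernel itself does almost nothing: it validates the
    # message and delivers it to the addressed user-mode server
    handler = SERVERS.get(msg.get("to"))
    if handler is None:
        return ("error", "no such server")
    return handler(msg)
```

Adding a new service is just registering another server, which is the extensibility benefit listed later in the lecture.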

Unix Kernel
The hardware is surrounded by the OS, which is called the kernel.
Unix comes with a number of services and interfaces, e.g.:
- shell
- C compiler
Strictly speaking, only the kernel is Unix. It is monolithic in design and has grown into a massive block.

Unix …
The kernel can be split into two sections:
- hardware dependent: hardware control, incorporating low-level I/O, device drivers, interrupt handlers, context switching, and some memory management
- hardware independent: the rest

Kernel Structures
The key structure is the process table:
- contains an entry for every process
- contains information accessible to the kernel
The user area contains fields relating to the current process and can be swapped to disk.

Process table entries:
- Scheduling details: priority, CPU usage, sleeping, etc.
- Identifiers: process id, parent's id, user id
- Memory usage
- Signals: signals sent to the process but not yet dealt with
- Synchronisation
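
Some of the identifier fields are visible from user space through standard system calls (sketched here with Python's `os` wrappers; Unix-specific):

```python
import os

pid = os.getpid()    # this process's process-table entry: its process id
ppid = os.getppid()  # the parent's id, also recorded in the process table
```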

The user area:
- machine registers in user state
- system call state: parameters and results
- file descriptor table; current directory
- permission modes for creating files
- accounting information: user time, system time
- kernel stack
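
Several of these user-area fields can likewise be inspected from user space (an illustrative, Unix-specific sketch; the `resource` module exposes the accounting times):

```python
import os
import resource

cwd = os.getcwd()           # current directory (kept alongside the fd table)
old_mask = os.umask(0o022)  # permission mode applied to newly created files
os.umask(old_mask)          # restore the previous mask
usage = resource.getrusage(resource.RUSAGE_SELF)
user_time = usage.ru_utime    # accounting information: user time
system_time = usage.ru_stime  # accounting information: system time
```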

User processes: pre-emptive scheduling; unprivileged.
Kernel, hardware independent: runs until completed or blocked; privileged.
Kernel, hardware dependent: never scheduled; cannot block; deals with interrupts; runs on the kernel stack, in kernel address space.

Interrupt priorities (highest to lowest):
- Machine errors
- Clock
- Disk
- Network
- Terminals
- Software interrupt

Summary
Symmetric multiprocessing is a method of organising a multiprocessing system such that any process (or thread) can run on any processor; this includes kernel code and processes. An SMP architecture raises new operating system design issues and provides greater performance than a uniprocessor system under similar conditions.
In recent years there has been much interest in the microkernel approach to operating system design. In its pure form, a microkernel operating system consists of a very small microkernel that runs in kernel mode and contains only the most essential and critical operating system functions.

Benefits of Microkernel Organization (Appendix A)
- Uniform interface on requests made by a process: all services are provided by means of message passing
- Extensibility: allows the addition of new services
- Flexibility: existing features can be subtracted
- Portability: changes needed to port the system to a new processor are made in the microkernel, not in the other services
- Reliability: modular design; a small microkernel can be rigorously tested
- Distributed system support: messages are sent without knowing what the target machine is
- Object-oriented operating system: components are objects with clearly defined interfaces that can be interconnected to form software

Solaris
Solaris is the computer operating system that Sun Microsystems provides for its family of Scalable Processor Architecture-based processors as well as for Intel-based processors. Sun has historically dominated the large Unix workstation market. As the Internet grew in the early 1990s, Sun's SPARC/Solaris systems became the most widely installed servers for Web sites. Sun emphasizes the system's availability (meaning it seldom crashes), its large number of features, and its Internet-oriented design. Sun advertises that its latest version, the Solaris 8 Operating Environment, is "the leading UNIX environment" today.
SPARC (Scalable Processor Architecture) is a 32- and 64-bit microprocessor architecture from Sun Microsystems that is based on reduced instruction set computing (RISC).

Appendices …
Linux is a Unix-like operating system that was designed to provide personal computer users a free or very low-cost operating system comparable to traditional and usually more expensive Unix systems. Linux has a reputation as a very efficient and fast-performing system.
Linux's kernel (the central part of the operating system) was developed by Linus Torvalds at the University of Helsinki in Finland. To complete the operating system, Torvalds and other team members made use of system components developed by members of the Free Software Foundation for the GNU Project.
Linux is a remarkably complete operating system, including a graphical user interface, an X Window System, TCP/IP, the Emacs editor, and other components usually found in a comprehensive Unix system. Although copyrights are held by various creators of Linux's components, Linux is distributed under the Free Software Foundation's copyleft stipulations, which mean that any modified version that is redistributed must in turn be freely available.

Parallel Processing
In computers, parallel processing is the processing of program instructions by dividing them among multiple processors with the objective of running a program in less time. In the earliest computers, only one program ran at a time. A computation-intensive program that took one hour to run and a tape-copying program that took one hour to run would take a total of two hours to run. An early form of parallel processing allowed the interleaved execution of both programs together: the computer would start an I/O operation, and while it was waiting for the operation to complete, it would execute the processor-intensive program. The total execution time for the two jobs would be a little over one hour.
The next improvement was multiprogramming. In a multiprogramming system, multiple programs submitted by users were each allowed to use the processor for a short time. To users it appeared that all of the programs were executing at the same time. Problems of resource contention first arose in these systems: explicit requests for resources led to the problem of deadlock, and competition for resources on machines with no tie-breaking instructions led to the critical section routine.
Vector processing was another attempt to increase performance by doing more than one thing at a time. In this case, capabilities were added to machines to allow a single instruction to add (or subtract, or multiply, or otherwise manipulate) two arrays of numbers. This was valuable in certain engineering applications where data naturally occurred in the form of vectors or matrices. In applications with less well-formed data, vector processing was not so valuable.

Parallel …
The next step in parallel processing was the introduction of multiprocessing. In these systems, two or more processors shared the work to be done. The earliest versions had a master/slave configuration: one processor (the master) was programmed to be responsible for all of the work in the system; the other (the slave) performed only those tasks it was assigned by the master. This arrangement was necessary because it was not then understood how to program the machines so they could cooperate in managing the resources of the system.
Solving these problems led to the symmetric multiprocessing (SMP) system. In an SMP system, each processor is equally capable and responsible for managing the flow of work through the system. Initially, the goal was to make SMP systems appear to programmers to be exactly the same as single-processor, multiprogramming systems. (This standard of behavior is known as sequential consistency.) However, engineers found that system performance could be increased by somewhere in the range of 10-20% by executing some instructions out of order and requiring programmers to deal with the increased complexity. (The problem can become visible only when two or more programs simultaneously read and write the same operands; thus the burden of dealing with the increased complexity falls on only a very few programmers, and then only in very specialized circumstances.) The question of how SMP machines should behave on shared data is not yet resolved.

Parallel …
As the number of processors in SMP systems increases, the time it takes for data to propagate from one part of the system to all other parts grows also. When the number of processors is somewhere in the range of several dozen, the performance benefit of adding more processors to the system is too small to justify the additional expense.
To get around the problem of long propagation times, message passing systems were created. In these systems, programs that share data send messages to each other to announce that particular operands have been assigned a new value. Instead of a broadcast of an operand's new value to all parts of a system, the new value is communicated only to those programs that need to know the new value. Instead of a shared memory, there is a network to support the transfer of messages between programs. This simplification allows hundreds, even thousands, of processors to work together efficiently in one system. (In the vernacular of systems architecture, these systems "scale well.") Hence such systems have been given the name of massively parallel processing (MPP) systems.

Parallel …
The most successful MPP applications have been for problems that can be broken down into many separate, independent operations on vast quantities of data. In data mining, there is a need to perform multiple searches of a static database. In artificial intelligence, there is the need to analyze multiple alternatives, as in a chess game.
Often MPP systems are structured as clusters of processors. Within each cluster the processors interact as in an SMP system; it is only between the clusters that messages are passed. Because operands may be addressed either via messages or via memory addresses, some MPP systems are called NUMA machines, for Non-Uniform Memory Addressing.
SMP machines are relatively simple to program; MPP machines are not. SMP machines do well on all types of problems, providing the amount of data involved is not too large. For certain problems, such as data mining of vast databases, only MPP systems will serve.