Module 1: Introduction. What is an operating system?
Topics: Multiprogramming Systems, Time-Sharing Systems, Parallel Systems, Distributed Systems, Real-Time Systems, System Calls and APIs
K. Salah
What is an Operating System?
A program that acts as an intermediary between a user of a computer and the computer hardware. Operating system goals:
- Execute user programs and make solving user problems easier.
- Make the computer system convenient to use.
- Use the computer hardware in an efficient manner.
Computer System Components
1. Hardware – provides the basic computing resources (CPU, memory, I/O devices).
2. Operating system – controls and coordinates the use of the hardware among the various application programs for the various users.
3. Application programs – define the ways in which the system resources are used to solve the computing problems of the users (compilers, database systems, video games, business programs).
4. Users (people, machines, other computers).
Abstract View of System Components
Multiprogramming and Timesharing
Multiprogramming: the CPU is multiplexed (shared) among a number of jobs; while one job waits for I/O, another can use the CPU.
- Advantage: the CPU is kept busy.
- Disadvantage: the hardware and the OS become significantly more complex in order to handle and schedule multiple jobs.
Timesharing: switch the CPU among jobs at a pre-defined time interval.
Most OS issues arise from trying to support multiprogramming: CPU scheduling, deadlock, protection, memory management, virtual memory, etc.
Why do we need multiprogramming? I/O times vs. CPU times
A 400 MHz Pentium II runs 400 million cycles/second. At 10 cycles/instruction, that is 40 million instructions/second. Reading one disk block takes 20 msec, and the CPU can do (40 x 10^6) / 10^3 = 40,000 instructions/msec. Thus, in the time it takes to read one disk block, the CPU can execute 20 * 40,000 = 800,000 instructions!
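The arithmetic above can be sketched as a small helper function (a back-of-the-envelope check using the slide's hypothetical figures: 400 MHz clock, 10 cycles per instruction, 20 ms per disk read):

```c
/* Instructions the CPU can retire while one disk block is being read.
   All figures are the slide's hypothetical numbers, passed in as
   parameters so the calculation is explicit. */
long instructions_during_disk_read(long cycles_per_sec,
                                   long cycles_per_instr,
                                   long read_time_ms)
{
    long instr_per_sec = cycles_per_sec / cycles_per_instr; /* 40,000,000 */
    long instr_per_ms  = instr_per_sec / 1000;              /* 40,000 */
    return instr_per_ms * read_time_ms;                     /* 800,000 */
}
```

With the slide's numbers, the function reproduces the 800,000-instruction result, which is exactly the idle capacity multiprogramming tries to reclaim.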
Parallel Systems
Multiprocessor systems have more than one CPU in close communication. In a tightly coupled system, the processors share memory and a clock; communication usually takes place through the shared memory. Advantages of parallel systems:
- Increased throughput
- Economical
- Increased reliability: graceful degradation (fault tolerance), the ability to continue providing service proportional to the level of surviving hardware
Parallel Systems (Cont.)
Symmetric multiprocessing (SMP):
- Each processor runs an identical copy of the operating system.
- Many processes can run at once without performance deterioration.
- Most modern operating systems support SMP.
Asymmetric multiprocessing:
- Each processor is assigned a specific task; a master processor schedules and allocates work to slave processors.
- More common in extremely large systems.
- Also used in the PS3 and Xbox; the PS3 has 8 CPUs, with one CPU acting as the master.
Symmetric Multiprocessing Architecture
Distributed Systems
Distribute the computation among several physical processors. In a loosely coupled system, each processor has its own local memory; processors communicate with one another through various communication lines, such as high-speed buses or telephone lines. Advantages of distributed systems:
- Resource sharing: sharing and printing files at remote sites, processing information in a distributed database, using remote specialized hardware devices.
- Computation speedup – load sharing.
- Reliability – detect and recover from site failure, transfer functions, reintegrate the failed site.
- Communication – message passing.
Network vs. Distributed OS
Network OS
- A configuration with a network of application machines, typically workstations, plus multiple server machines. Server machines can be file servers, print servers, mail servers, etc.
- Each computer has its own OS. The user must be aware that there are multiple independent machines and must deal with them explicitly.
- A network OS allows machines to interact with each other through a common communication architecture.
Distributed OS
- A common OS shared by a network of computers.
- Offers the illusion of a unified system: a pool of interconnected computers appears as a single unified computing resource, a Single System Image (SSI) [Buyya vol. 1, 1999].
- Provides the user with transparent access to the resources of multiple machines.
- Therefore: less autonomy between computers; gives the impression that a single operating system controls the network.
- Mostly research vehicles. Examples: Bell Labs' Inferno and Plan 9, Mach (CMU), Amoeba (Tanenbaum), and Chorus.
Computing Architectures
Vector Computers (VC) – proprietary systems: provided the breakthrough needed for the emergence of computational science, but they were only a partial answer.
Massively Parallel Processors (MPP) – proprietary systems: high cost and a low performance/price ratio.
Symmetric Multiprocessors (SMP): suffer from limited scalability.
Distributed Systems: difficult to use and hard to extract parallel performance from.
Clusters – gaining popularity:
- High Performance Computing – commodity supercomputing
- High Availability Computing – mission-critical applications
Grid Computing
Cloud Computing
Grid computing (the use of computational grids) is the combination of computer resources from multiple administrative domains applied to a common task, usually a scientific, technical, or business problem that requires a great number of processing cycles or needs to process large amounts of data. One of the main strategies of grid computing is using software to divide and apportion pieces of a program among several computers, sometimes up to many thousands. Grid computing is distributed, large-scale cluster computing, as well as a form of network-distributed parallel processing [1]. A grid may range from small (confined to a network of workstations within a corporation, for example) to large (public collaboration across many companies and networks).
Cloud Computing
Cloud computing is the e-business on-demand of the future: huge VPS servers. Cloud computing is the provision of dynamically scalable and often virtualised resources as a service over the Internet. [1][2] Users need not have knowledge of, expertise in, or control over the technology infrastructure in the "cloud" that supports them. [3] Cloud computing services often provide common business applications online, accessed from a web browser, while the software and data are stored on the servers. The term "cloud" is a metaphor for the Internet, based on how the Internet is depicted in computer network diagrams, and is an abstraction for the complex infrastructure it conceals. [4] A technical definition: "cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." [5] This definition states that clouds have five essential characteristics: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service.
Service models:
- Software as a Service (SaaS)
- Hardware as a Service (HaaS)
- Infrastructure as a Service (IaaS)
- Platform as a Service (PaaS)
Examples: beta.cloudos.com, eyeos.info, eyeos.mobi (includes an Internet browser), oos.cc
Scalability vs. SSI
In distributed computing, a single system image cluster is a cluster of machines that appears to be one single system. [1][2] Interest in SSI clusters is based on the perception that they may be simpler to use and administer than more specialized clusters. Different SSI systems may provide a more or less complete illusion of a single system. Virtually all NUMA (Non-Uniform Memory Access) computers sold commercially use special-purpose hardware to maintain cache coherence and are therefore classed as cache-coherent NUMA, or ccNUMA.
Real-Time Systems
Often used as a control device in a dedicated application, such as controlling scientific experiments, medical imaging systems, industrial control systems, and some display systems. Well-defined, fixed time constraints.
Hard real-time systems:
- Deadline support.
- Secondary storage is limited or absent; data is stored in short-term memory or read-only memory (ROM).
- Conflicts with time-sharing systems; not supported by general-purpose operating systems.
Soft real-time systems:
- No deadline guarantee.
- Limited utility in industrial control or robotics.
- Useful in applications (multimedia, virtual reality) requiring advanced operating-system features.
Operating System Services
- Program execution – system capability to load a program into memory and run it.
- I/O operations – since user programs cannot execute I/O operations directly, the operating system must provide some means to perform I/O.
- File-system manipulation – program capability to read, write, create, and delete files.
- Communications – exchange of information between processes executing either on the same computer or on different systems tied together by a network; implemented via shared memory or message passing.
- Error detection – ensure correct computing by detecting errors in the CPU and memory hardware, in I/O devices, or in user programs.
System Programs
System programs provide a convenient environment for program development and execution. They can be divided into:
- File manipulation
- Status information
- File modification
- Programming-language support
- Program loading and execution
- Communications
- Application programs
Most users' view of the operating system is defined by system programs, not the actual system calls.
System Calls
System calls provide the interface between a running program and the operating system.
- Generally available as assembly-language instructions.
- Languages designed to replace assembly language for systems programming allow system calls to be made directly.
Three general methods are used to pass parameters between a running program and the operating system:
1. Pass the parameters in registers.
2. Store the parameters in a table in memory, and pass the table's address as a parameter in a register.
3. Push (store) the parameters onto the stack in the program, and pop them off the stack in the operating system.
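On Linux, method 1 (parameters in registers) is visible through the C library's generic syscall() wrapper, which loads the call number and its arguments into registers before trapping into the kernel. A minimal sketch, assuming a POSIX/Linux environment:

```c
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

/* Invoke the write system call directly by number: syscall() places
   SYS_write and the three parameters into registers, then traps into
   the kernel. Returns the number of bytes written, or -1 on error. */
long raw_write(int fd, const void *buf, unsigned long count)
{
    return syscall(SYS_write, fd, buf, count);
}
```

Ordinary programs would call the libc wrapper write() instead; the raw form just makes the register-passing convention explicit.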
System Calls
System calls provide "direct access" to operating system services (e.g., the file system, I/O routines, memory allocate and free routines) for user programs. System calls execute instructions that control the resources of the computer system, e.g., I/O instructions for devices. We want such privileged instructions to be executed only by a system routine, under the control of the OS! As we will see, system calls are special and are in fact treated as a special case of interrupts. Programs that make system calls were traditionally called "system programs" and were traditionally implemented in assembly language.
System Call Scenario (cont.)
[Diagram: the user program is confined; it reaches the file system, memory, and I/O devices only through the operating system.]
Dual-Mode Operation
Sharing system resources requires the operating system to ensure that an incorrect program cannot cause other programs to execute incorrectly. Hardware support differentiates between at least two modes of operation:
1. User mode – execution done on behalf of a user.
2. Monitor mode (also supervisor mode or system mode) – execution done on behalf of the operating system.
Dual-Mode Operation (Cont.)
A mode bit is added to the computer hardware to indicate the current mode: monitor (0) or user (1).
- Part of EFLAGS in the x86 architecture.
- Part of the PSW in the Motorola architecture.
When an interrupt or fault occurs, the hardware switches to monitor mode; returning to the user program sets user mode again. Privileged instructions can be issued only in monitor mode, and all I/O instructions are privileged instructions. We must ensure that a user program can never gain control of the computer in monitor mode (i.e., a user program traps if it executes a privileged instruction).
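The mode-bit rule can be illustrated with a toy model (this is not real hardware; a user-mode attempt at a privileged operation simply reports a trap here):

```c
/* Toy model of dual-mode operation, using the slide's encoding:
   monitor = 0, user = 1. A privileged operation succeeds only in
   monitor mode; in user mode the hardware would trap to the OS. */
enum mode { MONITOR = 0, USER = 1 };

int try_privileged_op(enum mode current_mode)
{
    if (current_mode != MONITOR)
        return -1;   /* trap: privileged instruction in user mode */
    return 0;        /* instruction executes */
}
```

The real check is wired into the CPU's instruction decoder, but the logic is the same single comparison against the mode bit.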
API vs. System Calls
System calls are mostly accessed by programs via a high-level Application Program Interface (API) rather than by direct system-call use. The three most common APIs are the Win32 API for Windows, the POSIX API for POSIX-based systems (including virtually all versions of UNIX, Linux, and Mac OS X), and the Java API for the Java virtual machine (JVM). Why use APIs rather than system calls?
Example of Standard API
Consider the ReadFile() function in the Win32 API, a function for reading from a file. The parameters passed to ReadFile():
- HANDLE file – the file to be read
- LPVOID buffer – a buffer where the data will be read into and written from
- DWORD bytesToRead – the number of bytes to be read into the buffer
- LPDWORD bytesRead – the number of bytes read during the last read
- LPOVERLAPPED ovl – indicates if overlapped I/O is being used
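For comparison, a POSIX sketch of the same operation: open() and read() play the roles of the HANDLE, buffer, and bytesToRead parameters, and the return value plays the role of bytesRead (overlapped I/O has no analog in this simple form; read_from_file is an illustrative helper, not a standard function):

```c
#include <fcntl.h>
#include <unistd.h>

/* POSIX analog of ReadFile(): read up to bytes_to_read bytes from the
   file at `path` into `buffer`. Returns the number of bytes actually
   read (ReadFile's bytesRead output), or -1 on error. */
long read_from_file(const char *path, void *buffer,
                    unsigned long bytes_to_read)
{
    int fd = open(path, O_RDONLY);            /* HANDLE file */
    if (fd < 0)
        return -1;
    long n = read(fd, buffer, bytes_to_read); /* buffer, bytesToRead */
    close(fd);
    return n;                                 /* bytesRead */
}
```

Laying the two APIs side by side makes the point of the slide: different operating systems expose the same underlying service through differently shaped interfaces.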
API – System Call – OS Relationship
Standard C Library Example
A C program invokes the printf() library call, which in turn calls the write() system call.
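What printf() ultimately does can be sketched by skipping the library layer and handing the bytes to write() on file descriptor 1 (standard output) directly; this is a POSIX sketch of the idea, not libc's actual implementation:

```c
#include <string.h>
#include <unistd.h>

/* Hand a string straight to the write() system call on standard
   output, bypassing printf()'s formatting and buffering layers.
   Returns the number of bytes written. */
long say_greetings(void)
{
    const char *msg = "Greetings\n";
    return (long)write(STDOUT_FILENO, msg, strlen(msg));
}
```

The library call adds formatting and buffering on top, but every byte that reaches the terminal eventually passes through write().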
System Call Implementation
Typically, a number is associated with each system call, and the system-call interface maintains a table indexed by these numbers. The system-call interface invokes the intended system call in the OS kernel and returns the status of the call and any return values. The caller needs to know nothing about how the system call is implemented; it just needs to obey the API and understand what the OS will do as a result of the call. Most details of the OS interface are hidden from the programmer by the API and managed by the run-time support library (the set of functions built into libraries included with the compiler).
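The number-indexed table can be sketched as a toy dispatcher (sys_sum and the single-entry table are illustrative inventions, not a real kernel's layout):

```c
/* Toy system-call dispatch: the kernel keeps a table of handlers
   indexed by system-call number and validates the number before
   jumping through the table. */
typedef long (*syscall_fn)(long, long, long);

static long sys_sum(long a, long b, long c) { return a + b + c; }

static syscall_fn syscall_table[] = { sys_sum };

long dispatch(unsigned long num, long a, long b, long c)
{
    if (num >= sizeof syscall_table / sizeof syscall_table[0])
        return -1;  /* bad system-call number */
    return syscall_table[num](a, b, c);
}
```

Validating the number before indexing is the essential step: the table is the kernel's only sanctioned entry points, so an out-of-range number must be rejected rather than dereferenced.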
System Calls (cont.)
Now system calls can be made from high-level languages, such as C and (to a degree) Modula-2. Unix has about 32 system calls: read(), write(), open(), close(), fork(), ...
A call such as i = read(fd, 80, buffer) compiles to a trap sequence:
    push buffer
    push 80
    push fd
    trap read
    pop i
Each system call has a particular number, and the instruction set has a special instruction for making system calls:
- SVC (IBM 360/370)
- trap (PDP-11)
- tw (PowerPC) – trap word
- tcc (SPARC)
- break (MIPS)
User vs. System Mode
[Diagram: the user program (text segment) executes "trap n"; control transfers through a case statement in system (kernel) memory to the code for read.]
A special mode bit is kept in the PSW register: mode-bit = 1 means a user program is executing, and mode-bit = 0 means a system routine is executing (the monitor = 0, user = 1 convention). Privileged instructions are possible only when mode-bit = 0!
System Call Scenario
Starting state: the user program is executing (mode-bit = 1).
1. The user makes a system call.
2. The hardware sets the mode bit to 0 (monitor mode).
3. The system saves the state of the user process.
4. Branch to the case statement in system code.
5. Branch to the code for the system routine, based on the system-call number.
6. Copy the parameters from the user stack.
7. Execute the system call (using privileged instructions).
8. Restore the state of the user program.
9. The hardware resets the mode bit to 1 (user mode).
10. Return to the user process.