Course outline:
Introduction, concepts, review & historical perspective
Processes
◦ Synchronization
◦ Scheduling
◦ Deadlock
Memory management, address translation, and virtual memory
Operating system management of I/O
File systems
Distributed systems (as time permits)
CS 1550, cs.pitt.edu (originally modified by Ethan L. Miller and Scott A. Brandt)
An operating system is a program that runs on the "raw" hardware and supports
◦ Resource abstraction
◦ Resource sharing
Abstracts and standardizes the interface to the user across different types of hardware
◦ The virtual machine it presents hides the messy hardware details
Manages the hardware resources
◦ Each program gets time with the resource
◦ Each program gets space on the resource
May have potentially conflicting goals:
◦ Use hardware efficiently
◦ Give maximum performance to each user
Goal: really large memory with very low latency
◦ Latencies are smaller at the top of the hierarchy
◦ Capacities are larger at the bottom of the hierarchy
Solution: move data between levels to create the illusion of a large memory with low latency
Access latency and capacity by level:
◦ Registers: 1 ns, < 1 KB
◦ Cache (SRAM): 2–5 ns, 1 MB
◦ Main memory (DRAM): 50 ns, 256 MB
◦ Magnetic disk: 5 ms, 40 GB
◦ Magnetic tape: 50 sec, > 1 TB
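To make the "illusion" concrete, here is a small back-of-the-envelope sketch of effective access time for the cache/DRAM pair. The hit rate is an assumed illustrative value, not a number from the slides; the latencies come from the table above.

```python
# Effective access time for a two-level cache/memory pair, assuming a
# hypothetical cache hit rate (for illustration only).
cache_latency_ns = 3        # SRAM cache, within the 2-5 ns range above
dram_latency_ns = 50        # main memory (DRAM)
cache_hit_rate = 0.95       # assumed value, not from the slides

effective_ns = (cache_hit_rate * cache_latency_ns
                + (1 - cache_hit_rate) * (cache_latency_ns + dram_latency_ns))
print(f"effective access time = {effective_ns:.1f} ns")   # 5.5 ns
```

With a high enough hit rate, most accesses see near-cache latency even though most of the data lives in the slower, larger levels.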
Process: a program in execution
◦ Address space (memory) the program can use
◦ State (registers, including program counter & stack pointer)
The OS keeps track of all processes in a process table
Processes can create other processes
◦ A process tree tracks these relationships
◦ A is the root of the tree
◦ A created three child processes: B, C, and D
◦ C created two child processes: E and F
◦ D created one child process: G
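The parent/child relationships above can be sketched as a tiny data structure. This is only an illustration of the idea of a process tree, not how a real OS lays out its process table.

```python
# Minimal sketch of the process tree described above:
# A created B, C, D; C created E, F; D created G.
process_tree = {
    "A": ["B", "C", "D"],
    "B": [],
    "C": ["E", "F"],
    "D": ["G"],
    "E": [], "F": [], "G": [],
}

def print_tree(node, depth=0):
    """Walk the tree depth-first, indenting children under their parent."""
    print("  " * depth + node)
    for child in process_tree[node]:
        print_tree(child, depth + 1)

print_tree("A")
```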
Deadlock (figure): potential deadlock vs. actual deadlock
Scheduling the processor among all ready processes
The goal is to achieve:
◦ High processor utilization
◦ High throughput: the number of processes completed per unit time
◦ Low response time: the time elapsed from the submission of a request until the first response is produced
Long-term: which process to admit?
Medium-term: which process to swap in or out?
Short-term: which ready process to execute next?
The selection function determines which ready process is selected next for execution
The decision mode specifies the instants in time at which the selection function is exercised
◦ Nonpreemptive: once a process is in the running state, it continues until it terminates or blocks for I/O
◦ Preemptive: the currently running process may be interrupted and moved to the Ready state by the OS; this prevents one process from monopolizing the processor
First-Come, First-Served (FCFS) Scheduling
Shortest-Job-First (SJF) Scheduling
◦ Also referred to as Shortest Process Next
Round-Robin Scheduling
Process  Arrival Time  Service Time
1        0             3
2        2             6
3        4             4
4        6             5
5        8             2
Service time = total processor time needed in one (CPU-I/O) cycle
Jobs with long service times are CPU-bound jobs and are referred to as "long jobs"
Selection function: the process that has been waiting the longest in the ready queue (hence, FCFS)
Decision mode: non-preemptive
◦ a process runs until it blocks for I/O
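A minimal simulation of FCFS over the workload table above makes the selection function concrete. The variable names and the turnaround-time metric printed here are choices of this sketch, not part of the slides.

```python
# FCFS sketch over the workload from the table above:
# (process id, arrival time, service time), already sorted by arrival.
workload = [(1, 0, 3), (2, 2, 6), (3, 4, 4), (4, 6, 5), (5, 8, 2)]

clock = 0
for pid, arrival, service in workload:      # FCFS = run in arrival order
    start = max(clock, arrival)             # may have to wait for the arrival
    finish = start + service                # non-preemptive: run to completion
    print(f"P{pid}: start={start} finish={finish} turnaround={finish - arrival}")
    clock = finish
```

Long jobs that arrive early (such as P2) delay every later arrival, which is why FCFS tends to favor CPU-bound processes.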
Selection function: the process with the shortest expected CPU burst time
◦ I/O-bound processes will be selected first
Decision mode: non-preemptive
The required processing time, i.e., the CPU burst time, must be estimated for each process
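A sketch of Shortest Process Next over the same workload follows. Here the service times are assumed to be known exactly, whereas the slide notes they must be estimated in practice; the tie-handling is this sketch's choice.

```python
# Shortest Process Next sketch (non-preemptive): among the processes that
# have already arrived, pick the one with the shortest service time.
workload = [(1, 0, 3), (2, 2, 6), (3, 4, 4), (4, 6, 5), (5, 8, 2)]

clock, pending = 0, list(workload)
while pending:
    ready = [p for p in pending if p[1] <= clock]
    if not ready:                            # CPU idle until the next arrival
        clock = min(p[1] for p in pending)
        continue
    pid, arrival, service = min(ready, key=lambda p: p[2])
    pending.remove((pid, arrival, service))
    clock += service                         # run the chosen job to completion
    print(f"P{pid}: finish={clock} turnaround={clock - arrival}")
```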
Selection function: same as FCFS
Decision mode: preemptive
◦ a process is allowed to run until the time slice (quantum, typically 10 to 100 ms) expires
◦ a clock interrupt then occurs and the running process is put back on the ready queue
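A round-robin sketch over the same workload is shown below. The quantum of 4 time units is an assumed illustrative value (the slides only give the typical 10-100 ms range), and the rule of admitting new arrivals before re-queuing a preempted process is this sketch's choice.

```python
# Round-robin sketch: each process runs for at most one quantum, then is
# preempted and placed at the back of the ready queue.
from collections import deque

workload = [(1, 0, 3), (2, 2, 6), (3, 4, 4), (4, 6, 5), (5, 8, 2)]
quantum = 4                                   # assumed quantum, in time units

clock, queue = 0, deque()
remaining = {pid: service for pid, _, service in workload}
arrivals = list(workload)                     # sorted by arrival time
while queue or arrivals:
    while arrivals and arrivals[0][1] <= clock:   # admit processes that arrived
        queue.append(arrivals.pop(0)[0])
    if not queue:                                 # CPU idle until next arrival
        clock = arrivals[0][1]
        continue
    pid = queue.popleft()
    run = min(quantum, remaining[pid])
    clock += run
    remaining[pid] -= run
    while arrivals and arrivals[0][1] <= clock:   # arrivals during this slice
        queue.append(arrivals.pop(0)[0])
    if remaining[pid] > 0:
        queue.append(pid)                         # quantum expired: requeue
    else:
        print(f"P{pid} finished at t={clock}")
```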
Subdividing memory to accommodate multiple processes
Memory needs to be allocated to ensure a reasonable supply of ready processes to consume available processor time
Equal-size partitions
◦ Any process whose size is less than or equal to the partition size can be loaded into an available partition
◦ If all partitions are full, the operating system can swap a process out of a partition
◦ A program may not fit in a partition; the programmer must then design the program with overlays
Main memory use is inefficient: any program, no matter how small, occupies an entire partition. This is called internal fragmentation.
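A quick sketch of the waste involved: the 8 MB partition size and the process sizes below are hypothetical values chosen only to illustrate internal fragmentation.

```python
# Internal fragmentation under equal-size partitions: every process occupies
# a whole partition, and the unused remainder of that partition is wasted.
partition_mb = 8                      # assumed partition size
process_sizes_mb = [1, 3, 5, 7]       # hypothetical process sizes

for size in process_sizes_mb:
    wasted = partition_mb - size
    print(f"{size} MB process -> {wasted} MB internal fragmentation")
```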
Base register
◦ Starting address for the process
Bounds register
◦ Ending location of the process
These values are set when the process is loaded or when the process is swapped in
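A minimal sketch of base/bounds translation follows; the register values are made up for illustration. Every logical address is relocated by the base and checked against the bounds before the access is allowed.

```python
# Base/bounds translation sketch with hypothetical register values.
BASE = 0x4000      # starting physical address of the process
BOUNDS = 0x5FFF    # ending physical location the process may touch

def translate(logical_addr):
    physical = BASE + logical_addr
    if physical > BOUNDS:              # outside the process's partition
        raise MemoryError(f"protection fault at logical {logical_addr:#x}")
    return physical

print(hex(translate(0x10)))    # 0x4010
```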
Partition memory into small, equal, fixed-size chunks and divide each process into chunks of the same size
The chunks of a process are called pages and the chunks of memory are called frames
The operating system maintains a page table for each process
◦ It contains the frame location for each page in the process
◦ A memory address consists of a page number and an offset within the page
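The split into page number and offset can be sketched directly; the 4 KB page size and the page-to-frame assignments below are hypothetical.

```python
# Paging translation sketch: split the address into (page, offset),
# look up the frame in the page table, and rebuild the physical address.
PAGE_SIZE = 4096                     # assumed page size
page_table = {0: 5, 1: 2, 2: 7}      # hypothetical page -> frame assignments

def translate(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]                     # page-table lookup
    return frame * PAGE_SIZE + offset            # frame base + offset

print(translate(4100))   # page 1, offset 4 -> frame 2 -> 8196
```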
The segments of a program do not all have to be the same length
There is a maximum segment length
An address consists of two parts: a segment number and an offset
Since segments are not all equal in length, segmentation is similar to dynamic partitioning
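Because segments have different lengths, the translation must check the offset against the length of that particular segment. The segment table below is hypothetical.

```python
# Segmentation translation sketch: each segment has its own (base, length).
segment_table = {          # segment number -> (base address, length)
    0: (0x1000, 0x400),    # e.g. a code segment (hypothetical)
    1: (0x3000, 0x200),    # e.g. a data segment (hypothetical)
}

def translate(segment, offset):
    base, length = segment_table[segment]
    if offset >= length:                      # beyond the segment's end
        raise MemoryError("segmentation fault")
    return base + offset

print(hex(translate(1, 0x10)))   # 0x3010
```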
Logical address
◦ A reference to a memory location independent of the current assignment of data to memory
◦ A translation must be made to a physical address
Relative address
◦ An address expressed as a location relative to some known point
Physical address
◦ The absolute address or actual location in main memory
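A small sketch tying these together: a relative address is an offset from the start of the program, and binding it to a physical address adds the load address chosen when the process was placed in memory. The load address here is assumed for illustration.

```python
# Relative -> physical binding sketch with an assumed load address.
load_address = 0x8000          # where the loader happened to place the program

def to_physical(relative_addr):
    return load_address + relative_addr

# The same relative address maps to a different physical address if the
# process is later swapped back in at a different load address.
print(hex(to_physical(0x24)))  # 0x8024
```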
Symmetric multiprocessing (SMP)
◦ Each processor runs an identical copy of the operating system
◦ Many processes can run at once without performance deterioration
◦ Most modern operating systems support SMP
Asymmetric multiprocessing
◦ Each processor is assigned a specific task; a master processor schedules and allocates work to slave processors
◦ More common in extremely large systems
Distribute the computation among several physical processors
Loosely coupled system: each processor has its own local memory; processors communicate with one another through various communication lines, such as high-speed buses or telephone lines
Advantages of distributed systems:
◦ Resource sharing
◦ Computation speed-up (load sharing)
◦ Reliability
◦ Communication
Definition: tightly coupled CPUs that do not share memory
Also known as:
◦ cluster computers
◦ clusters of workstations (COWs)
CS 1550, cs.pitt.edu (originally modified from MOS2 slides by A. Tanenbaum)
Interconnection topologies (figure): (a) single switch, (b) ring, (c) grid, (d) double torus, (e) cube, (f) hypercube