1
Chapter 8: Operating System Support (Focus on Architecture)
2
Functions of Operating System
- Managing resources: access to system utilities, access to files, access to I/O devices, managing interrupts & bus control (DMA), error detection and response, accounting
- Scheduling processes (or tasks): program creation, program execution, I/O processing
- Managing memory utilization: partitioning, paging, virtual memory, segmentation
3
Types of Operating Systems
What do we want to optimize in each?
- Interactive
- Batch
- Uni-tasking
- Multi-tasking
- Real-Time
4
Batch Operating System Resident Monitor
- Jobs are batched in a queue
- The Monitor handles scheduling
- The Monitor controls the sequence of events to process the batch
- When one job is finished, control returns to the Monitor, which loads the next job
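As an illustration of that control flow, here is a minimal, hypothetical sketch of a resident monitor's main loop in C; the job_t type, the batch array, and the load/run helpers are invented placeholders, not part of any real monitor.

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical job descriptor and batch queue, for illustration only. */
typedef struct { const char *name; } job_t;

static job_t batch[] = { { "payroll" }, { "report" }, { "backup" } };
static size_t next_job = 0;

static job_t *next_job_from_queue(void) {
    return next_job < sizeof batch / sizeof batch[0] ? &batch[next_job++] : NULL;
}
static void load_into_user_memory(job_t *job)   { printf("loading %s\n", job->name); }
static void run_until_done_or_fault(job_t *job) { printf("running %s\n", job->name); }

/* Resident monitor: process the batch one job at a time.  When a job
   finishes, control returns here and the monitor loads the next job. */
int main(void) {
    for (job_t *job; (job = next_job_from_queue()) != NULL; ) {
        load_into_user_memory(job);
        run_until_done_or_fault(job);
    }
    return 0;
}
```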
5
Desirable Hardware Support for OS
- Memory protection: to protect the Monitor and utilities
- Timer: to prevent a job from monopolizing the system
- Privileged instructions: executed only by the Monitor, e.g. I/O and file operations
- Interrupts: allow the Monitor to relinquish and regain control
- DMA: allows bus usage to be optimized
6
The Case for Multi-programmed Batch Systems
- I/O devices are very slow
- Waiting is inefficient use of the computer
- When one program is waiting for I/O, another can use the CPU
7
Multi-Programming with Three Programs
8
Utilization: Uni-programmed vs Multi-programmed
9
Multiprogramming Resource Utilization
10
Scheduling: What are the challenges of scheduling?
- When do we start a new program?
- When a program is blocked, what program is executed next?
- When several programs are blocked, which gets first dibs at the resource needed?
- How long can one process monopolize a resource?
11
How can we model processing? A Five-State Process Model
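The five states named in the review slide at the end of the deck (new, ready, running, blocked, exit) can be written down as a small state machine. The sketch below is a minimal C encoding of which transitions the model allows; the event names (admit, dispatch, timeout, and so on) are illustrative labels, not terms taken from the slides.

```c
#include <stdbool.h>
#include <stdio.h>

/* Five-state process model: New -> Ready -> Running -> {Blocked, Exit},
   and Blocked -> Ready when the awaited event occurs. */
typedef enum { NEW, READY, RUNNING, BLOCKED, EXIT } pstate_t;
typedef enum { ADMIT, DISPATCH, TIMEOUT, EVENT_WAIT, EVENT_OCCURS, RELEASE } pevent_t;

/* Apply an event; returns true and updates *s only for legal transitions. */
static bool transition(pstate_t *s, pevent_t e) {
    switch (*s) {
    case NEW:     if (e == ADMIT)        { *s = READY;   return true; } break;
    case READY:   if (e == DISPATCH)     { *s = RUNNING; return true; } break;
    case RUNNING: if (e == TIMEOUT)      { *s = READY;   return true; }
                  if (e == EVENT_WAIT)   { *s = BLOCKED; return true; }
                  if (e == RELEASE)      { *s = EXIT;    return true; } break;
    case BLOCKED: if (e == EVENT_OCCURS) { *s = READY;   return true; } break;
    case EXIT:    break;
    }
    return false;
}

int main(void) {
    pstate_t s = NEW;
    transition(&s, ADMIT);
    transition(&s, DISPATCH);
    printf("after admit + dispatch the process is running: %d\n", s == RUNNING);
    return 0;
}
```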
12
Types of Scheduling Needed
13
Keeping Track of Process Information
If processes can be interrupted and restarted, what information must be kept to be able to stop and restart them?
14
Process Control Block Layout
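A process control block gathers exactly the information the previous slide asks about: everything needed to suspend a process and resume it later. The field set below is a typical sketch (identifier, state, saved program counter and registers, memory-management context, I/O status, accounting), not the layout of any particular operating system.

```c
#include <stdint.h>

enum proc_state { P_NEW, P_READY, P_RUNNING, P_BLOCKED, P_EXIT };

/* Minimal process control block sketch; the exact fields and sizes are
   assumptions for illustration. */
struct pcb {
    int              pid;              /* process identifier               */
    enum proc_state  state;            /* position in the five-state model */
    int              priority;         /* scheduling information           */
    uint64_t         program_counter;  /* where to resume execution        */
    uint64_t         regs[16];         /* saved general-purpose registers  */
    uint64_t         page_table_base;  /* memory-management context        */
    int              open_files[16];   /* I/O status information           */
    uint64_t         cpu_time_used;    /* accounting information           */
    struct pcb      *next;             /* link for a scheduling queue      */
};
```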
15
Scheduling Time Sequence Example:
16
Some Key Elements of O/S
17
Process Scheduling: What happened to the Medium-term Queue?
18
Memory Management: What are the Memory Management Issues?
- How can we keep the optimum “portions” of programs in memory to optimize the use of resources?
- How can we protect one program from corrupting other programs?
- What do we do when all programs in memory are stalled and memory is full?
19
Memory Management: Uni-programming vs Multi-programming
Uni-programming:
- Memory is split into two parts: one for the Operating System (monitor), one for the currently executing program
Multi-programming:
- The “user” part of memory is sub-divided and shared among active processes
Note the memory size implications:
- 16 bits: 64K memory addresses
- 24 bits: 16M memory addresses
- 32 bits: 4G memory addresses
- 64 bits: ? memory addresses (2^64, about 16 exabytes)
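These figures are just powers of two, as the quick check below confirms (the 64-bit value is written out directly because shifting a 64-bit integer by 64 is undefined in C):

```c
#include <stdio.h>

int main(void) {
    int bits[] = { 16, 24, 32 };
    for (int i = 0; i < 3; i++)
        printf("%2d address bits -> %llu addresses\n",
               bits[i], (unsigned long long)1 << bits[i]);
    /* 2^64 = 18,446,744,073,709,551,616 addresses, i.e. 16 exa-addresses. */
    printf("64 address bits -> 18446744073709551616 addresses\n");
    return 0;
}
```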
20
Swapping
Problem: I/O is so slow compared with the CPU that even in a multi-programming system, the CPU can be idle most of the time.
Solutions:
- Increase the amount of main memory (expensive)
- Swapping
21
How Does Swapping Work? Long-term queue of processes stored on disk
- Processes are moved in as space becomes available
- As a process completes, it is moved out of main memory to make room for other process(es)
- If none of the processes in memory is ready (i.e. all are blocked on I/O):
  - swap out a blocked process (to the intermediate queue)
  - swap in a ready process or a new process
- But swapping is an I/O process… Isn’t I/O slow? So why does swapping make sense?
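The decision those bullets describe can be sketched as below: when nothing resident is runnable, trade a blocked resident process for a ready one from disk so the CPU has work again. The toy process table and the rebalance routine are illustrative assumptions, not code from any OS; one common answer to the slide's question is that a disk transfer is generally much faster than the device I/O a blocked process is waiting for.

```c
#include <stdio.h>

/* Toy swapping model, for illustration only: each process is either
   resident in memory or swapped out, and either ready or blocked on I/O. */
struct proc { const char *name; int resident; int ready; };

static struct proc procs[] = {
    { "A", 1, 0 },   /* resident, blocked  */
    { "B", 1, 0 },   /* resident, blocked  */
    { "C", 0, 1 },   /* swapped out, ready */
};
enum { NPROCS = sizeof procs / sizeof procs[0] };

/* If no resident process is ready, swap one blocked resident process out
   (to the intermediate queue) and one ready process in. */
static void rebalance(void) {
    for (int i = 0; i < NPROCS; i++)
        if (procs[i].resident && procs[i].ready)
            return;                                   /* CPU already has work */
    for (int i = 0; i < NPROCS; i++)
        if (procs[i].resident && !procs[i].ready) {
            procs[i].resident = 0;
            printf("swap out %s\n", procs[i].name);
            break;
        }
    for (int i = 0; i < NPROCS; i++)
        if (!procs[i].resident && procs[i].ready) {
            procs[i].resident = 1;
            printf("swap in %s\n", procs[i].name);
            break;
        }
}

int main(void) { rebalance(); return 0; }
```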
22
Implementation of Swapping
23
Other Structures to help with Memory Management
- Partitioning
- Paging
24
Partitioning
Splitting memory into sections to allocate to processes (including the Operating System!)
Fixed-size partitions:
- Equal-sized partitions
- Potentially a lot of wasted memory
Variable-sized (dynamic) partitions:
- A process is stored in the smallest reasonable “hole”
- Dynamic partitions suffer a “memory leak” (external fragmentation) and need periodic compaction
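The “smallest reasonable hole” rule is best-fit placement: scan the free holes and choose the smallest one that is still big enough. A minimal sketch, assuming the holes are given as an array of sizes rather than a real free list:

```c
#include <stdio.h>
#include <stddef.h>

/* Best-fit placement: index of the smallest hole that can hold the
   request, or -1 if none fits. */
static int best_fit(const size_t holes[], int n, size_t request) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (best < 0 || holes[i] < holes[best]))
            best = i;
    return best;
}

int main(void) {
    size_t holes[] = { 300, 120, 600, 200 };          /* free holes, in KB */
    int i = best_fit(holes, 4, 150);
    if (i >= 0)
        printf("place the 150 KB process in hole %d (%zu KB)\n", i, holes[i]);
    return 0;
}
```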
25
Fixed Partitioning
26
Effect of Dynamic Partitioning – memory leaks
27
What about Relocation Challenges?
- We can’t expect that a process will load into the same place in memory as last time, but instructions contain addresses:
  - locations of data
  - addresses of instructions (branching)
- How about having logical addresses and physical addresses?
  - Logical address: relative to the beginning of the program
  - Physical address: actual location in memory (this time)
- Mechanism: use a base address with automatic (hardware) conversion
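With a base register, the hardware adds the base to every logical (program-relative) address at run time; checking against a limit gives the memory protection mentioned earlier. A minimal sketch, assuming one contiguous partition per process:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Per-process relocation registers: where its partition starts (base)
   and how large it is (limit). */
struct relocation { uint32_t base; uint32_t limit; };

/* Logical address -> physical address, with the protection check that
   relocation hardware performs on every reference. */
static uint32_t translate(struct relocation r, uint32_t logical) {
    if (logical >= r.limit) {
        fprintf(stderr, "protection fault: offset %u out of bounds\n", (unsigned)logical);
        exit(EXIT_FAILURE);
    }
    return r.base + logical;
}

int main(void) {
    struct relocation r = { 0x40000, 0x10000 };   /* loaded at 256 KB, 64 KB long */
    printf("logical 0x1234 -> physical 0x%X\n", (unsigned)translate(r, 0x1234));
    return 0;
}
```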
28
Paging: Split memory into equal-sized, “small” chunks
- Page frames
- The Operating System maintains a list of free frames
- Allocate the required number of page frames to a process
- A process does not require contiguous page frames
- Each process has its own page table
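Because the frames need not be contiguous, a logical address is split into a page number and an offset, and the per-process page table maps the page number to a frame number. A minimal sketch, assuming a 1 KB page size and a tiny four-entry table:

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 1024u                 /* assumed page/frame size */

/* Tiny per-process page table: page number -> frame number. */
static const uint32_t page_table[] = { 5, 6, 1, 2 };   /* pages 0..3 */

static uint32_t translate(uint32_t logical) {
    uint32_t page   = logical / PAGE_SIZE;
    uint32_t offset = logical % PAGE_SIZE;
    uint32_t frame  = page_table[page];      /* no validity check in this sketch */
    return frame * PAGE_SIZE + offset;
}

int main(void) {
    /* Logical address 2100 = page 2, offset 52 -> frame 1, physical 1076. */
    printf("logical 2100 -> physical %u\n", (unsigned)translate(2100));
    return 0;
}
```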
29
Allocation of Free Frames
30
Paging Addresses: Logical and Physical
31
Paging Implementation Issues
Demand paging:
- Do not require all pages of a process to be in memory
- Bring in pages as required
Page fault:
- The required page is not in memory
- The Operating System must swap in the required page
- It may need to swap out a page to make space
- Perhaps select the page to throw out based on recent history
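“Select the page to throw out based on recent history” is what a least-recently-used (LRU) policy does: on a fault with no free frame, evict the resident page whose last reference is oldest. A minimal sketch over three frames, using a logical clock for recency (an illustration of the policy, not a real OS implementation):

```c
#include <stdio.h>

#define NFRAMES 3

static int  frame_page[NFRAMES];   /* which page each frame holds (-1 = free) */
static long frame_used[NFRAMES];   /* logical time of last reference          */
static long now = 0;

/* Reference a page; on a fault, fill a free frame or evict the LRU page. */
static void reference(int page) {
    now++;
    for (int i = 0; i < NFRAMES; i++)
        if (frame_page[i] == page) { frame_used[i] = now; return; }   /* hit */

    int victim = 0;                             /* fault: least recently used frame */
    for (int i = 1; i < NFRAMES; i++)
        if (frame_used[i] < frame_used[victim]) victim = i;
    if (frame_page[victim] >= 0)
        printf("fault on page %d: evict page %d from frame %d\n",
               page, frame_page[victim], victim);
    else
        printf("fault on page %d: load into free frame %d\n", page, victim);
    frame_page[victim] = page;
    frame_used[victim] = now;
}

int main(void) {
    for (int i = 0; i < NFRAMES; i++) { frame_page[i] = -1; frame_used[i] = 0; }
    int refs[] = { 0, 1, 2, 0, 3, 1 };          /* reference string */
    for (int i = 0; i < 6; i++) reference(refs[i]);
    return 0;
}
```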
32
Paging has a Potential for “Thrashing”
- Too many processes in too little memory
- Operating System spends all its time swapping
- Little or no real work is done
Solutions:
- Good page replacement algorithms
- Reduce number of processes running
- Add more memory
33
Virtual Memory
- We do not need all pages of a process in memory for it to run; we can swap in pages only as required
- So we can now run processes that are bigger than the total memory available!
Differentiating “memory models”:
- Main memory is called real memory or physical memory
- The user/programmer can see a much bigger memory space: virtual memory
Implications:
- Page tables can become huge and can’t fit into memory: need multi-level tables, or inverted tables (Why inverted tables? An inverted table has one entry per physical frame, so its size is bounded by real memory rather than by the virtual address space.)
34
Alternate Inverted Page Table Structure
35
Problem with Inverted Page Table
- Every virtual memory reference causes two physical memory accesses:
  - fetch the page table entry
  - fetch the data
- Solution: use a Translation Lookaside Buffer (TLB), a special cache for page table entries
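The TLB removes the extra access on most references: the page number is first looked up in a small cache of recent translations, and only on a miss is the page table consulted. A minimal sketch with a tiny fully associative TLB and simple FIFO refill (simplifications assumed for illustration: no process tags, no validity checks on the page table):

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 1024u
#define TLB_SLOTS 4

struct tlb_entry { int valid; uint32_t page, frame; };
static struct tlb_entry tlb[TLB_SLOTS];
static const uint32_t page_table[] = { 5, 6, 1, 2, 9, 8 };   /* page -> frame */
static unsigned next_slot = 0;

static uint32_t translate(uint32_t logical) {
    uint32_t page = logical / PAGE_SIZE, offset = logical % PAGE_SIZE, frame;

    for (int i = 0; i < TLB_SLOTS; i++)          /* 1. look in the TLB first */
        if (tlb[i].valid && tlb[i].page == page) {
            printf("TLB hit for page %u\n", (unsigned)page);
            return tlb[i].frame * PAGE_SIZE + offset;
        }

    frame = page_table[page];                    /* 2. miss: extra memory access */
    printf("TLB miss for page %u, walking the page table\n", (unsigned)page);
    tlb[next_slot] = (struct tlb_entry){ 1, page, frame };
    next_slot = (next_slot + 1) % TLB_SLOTS;     /* simple FIFO refill */
    return frame * PAGE_SIZE + offset;
}

int main(void) {
    translate(2100);   /* page 2: miss, then cached */
    translate(2200);   /* page 2 again: hit         */
    return 0;
}
```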
36
TLB and Cache Operation (special Cache for tables)
37
Segmentation
- Segmentation is “partitioning” memory in a way that is visible to the programmer (note: paging is not visible to the programmer)
- Usually different segments are allocated to program and data
- There may be a number of program and data segments per process (program), e.g. to support protection levels, priority levels, organization, flexibility, etc.
38
Advantages of Segmentation
- Simplifies handling of growing data structures
- Allows programs to be altered and recompiled independently, without re-linking and re-loading
- Lends itself to sharing among processes
- Lends itself to protection
- Can paging and segmentation be combined?
39
Example: Pentium II Address Translation Mechanism
The segment selector uses 2 bits to provide 4 levels of protection, typically:
- 0: OS kernel
- 1: OS
- 2: apps needing special security
- 3: general apps
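On the x86 family those 2 bits are the low two bits of the 16-bit segment selector, the requested privilege level (ring 0 through 3); bit 2 selects the descriptor table and the remaining bits index into it. A small sketch of pulling the fields apart (the example selector value is arbitrary):

```c
#include <stdint.h>
#include <stdio.h>

/* x86 segment selector layout: bits 1:0 = RPL (privilege ring 0-3),
   bit 2 = table indicator (GDT/LDT), bits 15:3 = descriptor index. */
static unsigned rpl(uint16_t sel)        { return sel & 0x3; }
static unsigned table_ind(uint16_t sel)  { return (sel >> 2) & 0x1; }
static unsigned desc_index(uint16_t sel) { return sel >> 3; }

int main(void) {
    uint16_t sel = 0x001B;   /* arbitrary example: index 3, GDT, ring 3 */
    printf("index=%u TI=%u RPL=%u\n", desc_index(sel), table_ind(sel), rpl(sel));
    return 0;
}
```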
40
Pentium II (Uses hardware for segmentation & paging)
- Unsegmented, unpaged: virtual address = physical address; used in low-complexity, high-performance systems
- Unsegmented, paged: memory viewed as a paged linear address space; protection and management via paging (e.g. Berkeley UNIX)
- Segmented, unpaged: a collection of local address spaces; protection down to the single-byte level; the translation table needed is on chip when the segment is in memory, giving predictable access times
- Segmented, paged: segmentation defines logical memory partitions subject to access control; paging manages the allocation of memory within partitions (e.g. UNIX System V)
41
OS Review
Scheduling: uni-programming, multi-programming, time-sharing
- long-term scheduler (queue of all jobs potentially schedulable)
- short-term scheduler (queue of processes that are ready to execute)
- medium-term scheduling (queue of jobs that can reside in memory)
- blocked monitoring (queue of processes blocked for resources)
- new – ready – running – blocked – exit state machine
Memory management:
- partitioning
- paging – frames, pages, page fault, page table, logical/physical addresses
- virtual memory – inverted page table, Translation Lookaside Buffer
- segmentation