Virtual Memory (Chapter 8)
Memory management involves an intimate and complex interrelationship between processor hardware and operating system software. Paging and segmentation are the keys: All memory references within a process are logical addresses that are dynamically translated into physical addresses at run time. A process may be swapped in and out of main memory, so that it occupies different regions of main memory at different times during the course of execution. A process may be broken up into a number of pieces (pages or segments), and these pieces need not be contiguously located in main memory during execution. The combination of dynamic run-time address translation and the use of a page or segment table permits this. If these characteristics are present, then it is not necessary that all of the pages or all of the segments of a process be in main memory during execution. Only the piece that holds the next instruction to be fetched and the piece that holds the next data location to be accessed need to be in main memory.
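To make the dynamic run-time translation concrete, here is a minimal Python sketch (the page table contents, page size, and function name are illustrative assumptions, not taken from the chapter): a logical address is split into a page number and an offset, the page number is looked up in a page table entry carrying a present bit and a frame number, and a missing page raises a fault.

PAGE_SIZE = 4096              # assume 4 KiB pages
OFFSET_BITS = 12              # log2(PAGE_SIZE)

# Hypothetical page table: page number -> (present bit, frame number)
page_table = {0: (True, 7), 1: (True, 3), 2: (False, None)}

def translate(logical_addr):
    # Split the logical address into page number and offset
    page = logical_addr >> OFFSET_BITS
    offset = logical_addr & (PAGE_SIZE - 1)
    present, frame = page_table.get(page, (False, None))
    if not present:
        # In a real OS this is a page fault: the OS loads the page, then retries
        raise RuntimeError(f"page fault on page {page}")
    return (frame << OFFSET_BITS) | offset

print(hex(translate(0x1234)))     # page 1, offset 0x234 -> frame 3 -> 0x3234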
Virtual Memory for Improving System Utility
More processes may be maintained in main memory. Because only some of the pieces of any particular process are loaded, there is room for more processes. This leads to more efficient use of the processor: it is more likely that at least one of the more numerous processes will be in a Ready state at any particular time. A process may also be larger than all of main memory; without virtual memory, the programmer would have to use techniques such as overlays. Instead of main memory, the programmer deals with a huge memory whose size is associated with disk storage. We refer to main memory as “real memory” and to the disk storage as “virtual memory”. Table 7.1 summarizes the characteristics of paging and segmentation, with and without the use of virtual memory.
Locality and Virtual Memory
The benefit of virtual memory is attractive, but will it work? Scenario: a large program/process with a few large arrays. To be efficient, only a portion of the code and data is kept in main memory, and a fault is triggered when the needed code or data is not in main memory. However, this demands that the system be clever enough to swap out the right page when bringing in new ones. If the system throws out a piece just before it is about to be used, it will have to fetch that piece again almost immediately. Too much of this leads to a condition known as thrashing: the processor spends most of its time swapping pieces rather than executing programs. Many algorithms have been invented to avoid thrashing, and they are all based on the principle of locality. The principle of locality states that program and data references within a process tend to cluster. (Figure 8.1)
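A small, entirely hypothetical sketch of the principle at work (array size, element size, and page size are assumptions chosen for illustration): counting how many distinct pages a window of references touches shows that sequential references cluster on a few pages, while strided references scatter across many.

PAGE_SIZE = 4096
ELEM_SIZE = 8                          # assume 8-byte array elements
N = 1 << 16                            # 65536 elements spread over 128 pages

def pages_touched(indices, window=512):
    # Count distinct pages referenced in the first `window` accesses
    return len({(i * ELEM_SIZE) // PAGE_SIZE for i in indices[:window]})

sequential = list(range(N))                  # good spatial locality
strided = [i * 512 % N for i in range(N)]    # poor locality: a new page almost every access

print(pages_touched(sequential))   # 1 page: references cluster
print(pages_touched(strided))      # 128 pages: references are spread out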
Principle of Locality
Virtual Memory (Combined Segmentation & Paging)
Paging only
–Virtual address: Page_number + Offset
–Page Table Entry: Present_bit, Modified_bit, Other_control_bits, Frame_number
Segmentation only
–Virtual address: Segment_number + Offset
–Segment Table Entry: Present_bit, Modified_bit, Other_control_bits, Segment_length, Segment_base_address
Combined segmentation and paging (a decomposition sketch follows this list)
–Virtual address: Segment_number + Page_number + Offset
–Segment Table Entry: Control_bits, Segment_length, Segment_base_address
–Page Table Entry: Present_bit, Modified_bit, Other_control_bits, Frame_number
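The combined address format above can be illustrated with a short sketch; the field widths (4-bit segment number, 6-bit page number, 12-bit offset) are arbitrary assumptions chosen for the example, not values from the chapter.

# Assumed field widths for a combined segmentation-and-paging virtual address
SEG_BITS, PAGE_BITS, OFFSET_BITS = 4, 6, 12

def split_virtual_address(vaddr):
    # Decompose Segment_number + Page_number + Offset from a 22-bit virtual address
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    page = (vaddr >> OFFSET_BITS) & ((1 << PAGE_BITS) - 1)
    segment = (vaddr >> (OFFSET_BITS + PAGE_BITS)) & ((1 << SEG_BITS) - 1)
    return segment, page, offset

print(split_virtual_address(0x2A5ABC))   # -> (10, 37, 2748)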
Virtual Memory Paging System & Segmentation System
Paging System
–Page Table Structure (Figure 8.3)
–Two-Level Paging System (Figures 8.4 & 8.5)
–Inverted Page Table (Figure 8.6)
–Translation Lookaside Buffer -- TLB (Figures 8.7, 8.8, 8.9, & 8.10; a lookup sketch follows this list)
–Page Size (Figure 8.11 & Table 8.2)
Segmentation System
Virtual Memory Implications:
–Simplifies the handling of growing data structures (dynamic segment size).
–Allows programs to be altered and recompiled independently, without requiring the entire set of programs to be relinked or reloaded.
–Lends itself to sharing among processes.
–Lends itself to protection.
Organization:
–Combined Segmentation & Paging (Figures 8.12 & 8.13)
–Protection & Sharing (Figure 8.14)
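As a rough illustration of how a TLB front-ends a multi-level page table (the table contents, field widths, and dictionary-based structures below are assumptions for the sketch, not the layout shown in the figures): translation first checks the TLB, and only on a miss walks the two levels of the page table, then caches the result.

# 12-bit offset, 6-bit level-2 index, remaining bits index the level-1 (directory) table
OFFSET_BITS, L2_BITS = 12, 6

level1 = {0: {0: 5, 1: 9}}          # directory 0 -> {page 0: frame 5, page 1: frame 9}
tlb = {}                            # (directory, page) -> frame, filled on demand

def translate(vaddr):
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    page = (vaddr >> OFFSET_BITS) & ((1 << L2_BITS) - 1)
    directory = vaddr >> (OFFSET_BITS + L2_BITS)
    key = (directory, page)
    if key in tlb:                  # TLB hit: skip the table walk
        frame = tlb[key]
    else:                           # TLB miss: walk both levels, then cache the mapping
        frame = level1[directory][page]
        tlb[key] = frame
    return (frame << OFFSET_BITS) | offset

print(hex(translate(0x1ABC)))       # miss: walks the tables -> frame 9 -> 0x9abc
print(hex(translate(0x1DEF)))       # hit in the TLB this time -> 0x9def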
Operating System Policies for Virtual Memory
Fetch Policy -- Demand vs. Pre-paging
Placement Policy -- non-uniform memory access (NUMA) multiprocessor
Replacement Policy
–Basic Algorithms: Optimal, Least Recently Used (LRU), First In First Out (FIFO), Clock (see the sketch after this list)
–Page Buffering
Resident Set Management
–Resident set size -- Fixed vs. Variable
–Replacement scope -- Global vs. Local
Cleaning Policy -- Demand vs. Precleaning
Load Control -- Degree of multiprogramming
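A minimal sketch of the Clock replacement algorithm listed above (the frame count and reference string are made up for illustration): each frame carries a use bit; on a fault the clock hand sweeps past frames whose use bit is set, clearing the bit, until it finds a frame with the bit clear to replace.

class ClockReplacer:
    def __init__(self, num_frames):
        self.frames = [None] * num_frames      # page loaded in each frame
        self.use = [0] * num_frames            # use (reference) bits
        self.hand = 0                          # clock pointer

    def access(self, page):
        if page in self.frames:                # hit: set the use bit
            self.use[self.frames.index(page)] = 1
            return "hit"
        while True:                            # fault: find a victim frame
            if self.frames[self.hand] is None or self.use[self.hand] == 0:
                self.frames[self.hand] = page
                self.use[self.hand] = 1
                self.hand = (self.hand + 1) % len(self.frames)
                return "fault"
            self.use[self.hand] = 0            # clear the bit: a second chance
            self.hand = (self.hand + 1) % len(self.frames)

clock = ClockReplacer(3)
for p in [2, 3, 2, 1, 5, 2]:
    print(p, clock.access(p))

With three frames, the reference to page 5 sweeps the full circle, clears every use bit, and evicts page 2, so the final reference to page 2 faults again.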
Page Table Structure
Two-Level Paging System
Inverted Page Table
Translation Lookaside Buffer
Page Size
Combined Segmentation & Paging
Protection & Sharing