1
Chapter 9 Virtual Memory
2
Reading Assignment Read Chap , 9.9
3
Giving Credit Where It Is Due
Some of the lecture notes are borrowed from Dr. Jonathan Walpole at Portland State University. I have modified them and added new slides.
4
Memory Management Techniques
Fixed Partitioning
Dynamic Partitioning
Simple Paging
Simple Segmentation
Virtual Memory Paging
Virtual Memory Segmentation
5
Simple vs. Virtual Memory Management Techniques
The memory management techniques discussed so far all require a process to be loaded into main memory completely. Virtual memory allows the execution of processes that are not completely in main memory.
6
Hardware and Control Structures (I)
Memory references are dynamically translated into physical addresses at run time. A process may be swapped in and out of main memory such that it occupies different regions.
7
Hardware and Control Structures (II)
A process may be broken up into pieces that do not need to be located contiguously in main memory. Not all pieces of a process need to be loaded into main memory during execution.
8
Execution of a Program (I)
The operating system brings only a few pieces of the program into main memory. Resident set: the portion of the process that is in main memory. What happens if an address that is not in main memory is needed? An interrupt is generated, and the operating system places the process in a blocked state.
9
Execution of a Program (II)
The piece of the process that contains the required logical address is brought into main memory: the operating system issues a disk I/O read request, and another process is dispatched to run while the disk I/O takes place. An interrupt is issued when the disk I/O completes, which causes the operating system to place the affected process in the Ready state. What are the advantages of not having all pieces of a process in memory?
10
Improved System Utilization
More processes may be maintained in main memory, since only some pieces of each process are loaded. With so many processes in main memory, it is very likely that some process will be in the Ready state at any particular time. In addition, a process may be larger than all of main memory.
11
Types of Memory
Real memory: main memory
Virtual memory: memory on disk
Virtual memory allows for effective multiprogramming and relieves the user of the tight constraints of main memory.
12
Thrashing
In the steady state, main memory is fully occupied to accommodate as many processes as possible. Thus, to swap in a process piece, the OS must swap out a piece. Thrashing is the problem where a process piece is swapped out just before it is needed, so the processor spends most of its time swapping pieces rather than executing user instructions.
13
Principle of Locality
Program and data references within a process tend to cluster, so only a few pieces of a process will be needed over a short period of time. This makes it possible to make intelligent guesses about which pieces will be needed in the future, and suggests that virtual memory can work efficiently.
14
Support Needed for Virtual Memory
Hardware must support paging and/or segmentation. The operating system must be able to manage the movement of pages and/or segments between secondary memory and main memory.
15
Paging Each process has its own page table
Each page table entry contains the frame number of the corresponding page in main memory. A present bit is needed to indicate whether or not the page is in main memory.
16
Paging Other control bits include access-control (protection) bits and a reference bit that indicates whether the page has been accessed since it was last brought into memory.
17
Modify Bit in Page Table
The modify bit indicates whether the page has been altered since it was last loaded into main memory. If no change has been made, the page does not have to be written back to disk when it is replaced.
18
Address Translation
19
Page Table Size Consider a system with the following parameters: a 4-GByte (2^32-byte) virtual address space and a page size of 4 KBytes (2^12 bytes). How many entries does the page table of a process have? 2^32 / 2^12 = 2^20 entries. If we use 4 bytes for each entry, the page table is 2^22 bytes, occupying 2^10 pages!
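The sizing arithmetic above can be checked with a short sketch (the 4-byte entry size is the one stated on the slide):

```python
# Page-table sizing for a 4-GByte (2^32-byte) virtual address space
# with 4-KByte (2^12-byte) pages and 4-byte page table entries.
virtual_space = 2**32
page_size = 2**12
entry_size = 4

entries = virtual_space // page_size      # page table entries per process
table_bytes = entries * entry_size        # total page table size in bytes
table_pages = table_bytes // page_size    # pages occupied by the table itself

# entries = 2^20, table_bytes = 2^22 (4 MBytes), table_pages = 2^10
print(entries, table_bytes, table_pages)
```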
20
Page Tables Page tables are also stored in virtual memory
When a process is running, part of its page table is in main memory
21
Two-Level Hierarchical Page Table
22
Address Translation
23
In-Class Exercise Consider a system with memory mapping done on a page basis and using a single-level page table. Assume that the necessary page table is always in memory. a. If a memory reference takes 200 ns, how long does a paged memory reference take? Answer: 400 ns; 200 ns to get the page table entry, and 200 ns to access the memory location.
24
Address Translation
25
Address Translation
26
Translation Lookaside Buffer
Each virtual memory reference can cause two physical memory accesses: one to fetch the page table entry, and one to fetch the data. To overcome this problem, a high-speed cache is set up for page table entries, called a Translation Lookaside Buffer (TLB).
27
Translation Lookaside Buffer
The TLB contains the page table entries that have been most recently used. Given a virtual address, the processor examines the TLB. If the page table entry is present (TLB hit), the frame number is retrieved and the real address is formed.
28
Translation Lookaside Buffer
If the page table entry is not found in the TLB (TLB miss), the page number is used to index the process page table. The processor first checks whether the page is already in main memory; if not, a page fault is issued. The TLB is then updated to include the new page table entry.
29
Translation Lookaside Buffer
30
Translation Lookaside Buffer
31
Translation Lookaside Buffer
32
Translation Lookaside Buffer
33
In-Class Exercise Consider a system with memory mapping done on a page basis and using a single-level page table. Assume that the necessary page table is always in memory.
a. If a memory reference takes 200 ns, how long does a paged memory reference take?
b. Now we add an MMU TLB that imposes an overhead of 20 ns on a hit or a miss. If we assume that 85% of all memory references hit in the MMU TLB, what is the Effective Memory Access Time (EMAT)?
c. Explain how the TLB hit rate affects the EMAT.
Answers:
a. 400 ns: 200 ns to get the page table entry, and 200 ns to access the memory location.
b. This is a familiar effective-time calculation: (220 × 0.85) + (420 × 0.15) = 250 ns. There are two cases. First, when the TLB contains the required entry, we pay the 20 ns overhead on top of the 200 ns memory access time. Second, when the TLB does not contain the entry, we pay an additional 200 ns to get the required entry into the TLB.
c. The higher the TLB hit rate, the smaller the EMAT, because the additional 200 ns penalty to get the entry into the TLB contributes less to the EMAT.
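The EMAT calculation in part b can be reproduced directly; the 20 ns TLB overhead is paid on every reference, and a miss adds one extra 200 ns page-table access:

```python
mem_ns = 200        # one physical memory access
tlb_ns = 20         # TLB overhead, paid on hit and miss alike
hit_rate = 0.85

hit_time = tlb_ns + mem_ns            # 220 ns: TLB hit, then the data access
miss_time = tlb_ns + mem_ns + mem_ns  # 420 ns: miss adds a page-table access
emat = hit_rate * hit_time + (1 - hit_rate) * miss_time
print(emat)   # 250 ns
```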
34
Inverted page tables Problem:
Page table overhead increases with address space size; page tables get too big to fit in memory! Using hierarchical page tables for address translation also requires multiple memory accesses. Consider a computer with 64-bit addresses and 4-KByte pages (12 bits for the offset). The virtual address space contains 2^52 pages, so the page table needs 2^52 entries! This page table is much too large for memory: many petabytes per process page table.
35
Inverted page tables How many mappings do we need (at most) at any time? We only need mappings for pages that are in memory! A 4-GByte memory can hold 2^20 4-KByte pages. So instead of a large page table of 2^52 entries for every process, we need a total of only 2^20 page table entries on this computer!
36
Inverted page tables An inverted page table
Has one entry for every frame of memory. Records which page is in that frame. Is indexed by frame number, not page number! So how can we search an inverted page table on a TLB miss?
37
Inverted page tables If we have a page number and want to find its page table entry, do we do an exhaustive search of all entries? No, that's too slow! Why not maintain a hash table to allow fast access given a page number? That gives O(1) lookup time with a good hash function.
38
Hash Tables Data structure for associating a key with a value
Perform a hash function on the key to produce a hash, a number that is used as an array index. Each element of the array can be a linked list of entries (to handle collisions). The list must be searched to find the entry whose key matches the search key. With a good hash function, the list length will be very short.
39
Inverted Page Table
Each entry contains: the page number, the process identifier, control bits, and a chain pointer. The page number and process identifier together form the lookup key.
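A toy sketch of a hashed inverted page table with chain pointers can illustrate the lookup (all names here, ipt_insert, ipt_lookup, NFRAMES, are illustrative and not from the slides; control bits are omitted for brevity):

```python
NFRAMES = 8                      # one entry per physical frame (toy size)

# table[frame] = (process_id, page_number, chain_pointer) or None
table = [None] * NFRAMES
buckets = {}                     # hash value -> first frame in the chain

def ipt_insert(pid, page, frame):
    h = hash((pid, page)) % NFRAMES
    # The new entry's chain pointer links to the previous head of this bucket.
    table[frame] = (pid, page, buckets.get(h))
    buckets[h] = frame

def ipt_lookup(pid, page):
    """Return the frame holding (pid, page), or None on a page fault."""
    frame = buckets.get(hash((pid, page)) % NFRAMES)
    while frame is not None:
        p, pg, nxt = table[frame]
        if (p, pg) == (pid, page):
            return frame
        frame = nxt              # follow the collision chain
    return None

ipt_insert(pid=1, page=0x42, frame=3)
print(ipt_lookup(1, 0x42))   # 3
print(ipt_lookup(2, 0x42))   # None: that process's page is not resident
```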
40
Inverted Page Table
41
Fetch Policy Determines when a page should be brought into memory
Demand paging brings a page into main memory only when a reference is made to a location on that page (chap 9.2); many page faults occur when a process first starts. Prepaging brings in more pages than needed (chap 9.9.1); it is more efficient to bring in pages that reside contiguously on the disk.
42
Replacement Policy Which page should be replaced?
The page removed should be the page least likely to be referenced in the near future. Most policies predict future behavior on the basis of past behavior.
43
Replacement Policy Frame Locking (chap 9.9.6)
Associate a lock bit with each frame; if a frame is locked, it may not be replaced. Examples: the kernel of the operating system, key control structures, and I/O buffers. Pages used for copying a file from a device must be locked to keep them from being selected for eviction by the page replacement algorithm.
44
Basic Page Replacement
1. Find the location of the desired page on disk.
2. Find a free frame: if there is a free frame, use it; if not, use a page replacement algorithm to select a victim frame, and write the victim frame to disk if it is dirty.
3. Bring the desired page into the (newly) free frame; update the page and frame tables.
4. Continue the process by restarting the instruction that caused the page fault.
Note that there are now potentially 2 page transfers per page fault, increasing the EAT (Effective Access Time).
45
Page Replacement
46
Page and Frame Replacement Algorithms
Frame-allocation algorithm (chap 9.5): determines how many frames to give each process and which frames to replace. Page-replacement algorithm (chap 9.4): we want the lowest page-fault rate. Evaluate an algorithm by running it on a particular string of memory references (reference string) and computing the number of page faults on that string. The string is just page numbers, not full addresses, and repeated access to the same page does not cause a page fault. In the textbook examples, the reference string of referenced page numbers is 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1.
47
First-In-First-Out (FIFO) Algorithm
Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1. How do we track the ages of pages? Just use a FIFO queue. With 3 frames (3 pages can be in memory at a time per process), FIFO incurs 15 page faults on this string. Results can vary by reference string: consider 1,2,3,4,1,2,5,1,2,3,4,5 in a 3-frame and in a 4-frame system; how many page faults does FIFO encounter in each case? Adding more frames can cause more page faults! This is Belady's Anomaly.
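A small simulation can answer the question above (fifo_faults is an illustrative helper name): on the string 1,2,3,4,1,2,5,1,2,3,4,5, FIFO incurs 9 faults with 3 frames but 10 faults with 4 frames, exhibiting Belady's Anomaly.

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    frames = deque()               # oldest page at the left
    faults = 0
    for page in refs:
        if page in frames:
            continue               # hit: FIFO order does not change
        faults += 1
        if len(frames) == nframes:
            frames.popleft()       # evict the oldest page
        frames.append(page)
    return faults

belady = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(belady, 3))   # 9
print(fifo_faults(belady, 4))   # 10: more frames, more faults
```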
48
FIFO Illustrating Belady’s Anomaly
49
Optimal Algorithm Replace the page that will not be used for the longest period of time. 9 faults is optimal for the example reference string. How do you know this? You can't read the future. The optimal algorithm is therefore used for comparison, to measure how well another algorithm performs.
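The optimal policy can be simulated offline by looking ahead in the reference string (opt_faults is an illustrative name; the string is the textbook example):

```python
def opt_faults(refs, nframes):
    """Count faults for the optimal policy by looking ahead in the string."""
    frames = []
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
            continue
        # Evict the resident page whose next use is farthest away (or never).
        def next_use(p):
            try:
                return refs.index(p, i + 1)
            except ValueError:
                return len(refs)      # never referenced again
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

string = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(opt_faults(string, 3))   # 9
```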
50
Least Recently Used (LRU) Algorithm
Use past knowledge rather than the future: replace the page that has not been used for the longest amount of time. 12 faults on the example string: better than FIFO but worse than OPT. Generally a good algorithm and frequently used. But how can it be implemented?
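The 12-fault LRU count can be reproduced with a short simulation (lru_faults is an illustrative helper; a Python list ordered by recency stands in for the hardware stack or counters discussed next):

```python
def lru_faults(refs, nframes):
    """Count faults for LRU; 'frames' is kept ordered, LRU page first."""
    frames = []
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)      # hit: refresh this page's recency
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)        # evict the least recently used page
        frames.append(page)          # most recently used goes to the end
    return faults

string = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(string, 3))   # 12
```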
51
LRU Algorithm (Cont.) Counter implementation
Every page table entry has a counter; every time the page is referenced through this entry, copy the clock into the counter. When a page needs to be replaced, look at the counters to find the smallest value; a search through the table is needed. Stack implementation: keep a stack of page numbers in doubly linked form; when a page is referenced, move it to the top.
52
Use Of A Stack to Record Most Recent Page References
53
LRU Algorithm (Cont.) Stack implementation (cont.)
Moving a referenced page to the top of the doubly linked stack requires 6 pointers to be changed, so each update is more expensive, but no search is needed for replacement. LRU and OPT are examples of stack algorithms, which do not suffer from Belady's Anomaly.
54
LRU Approximation Algorithms
LRU implementations have high overhead and are slow! Reference bit: associate a bit with each page, initially 0; when the page is referenced, the bit is set to 1. Replace any page with reference bit = 0 (if one exists); however, we do not know the order among them. Second-chance algorithm: generally FIFO, plus a hardware-provided reference bit (clock replacement). If the page to be replaced has reference bit = 0, replace it; if the reference bit = 1, set the bit to 0, leave the page in memory, and consider the next page, subject to the same rules.
55
Second-Chance (clock) Page-Replacement Algorithm
56
In-Class Exercise Consider the following page reference string:
7,0,1,2,0,3,0,4,2,3,0,3,2. Assuming demand paging with three frames, how many page faults would occur with the Second-Chance (Clock) replacement algorithm? Recall: generally FIFO, plus a hardware-provided reference bit. If the page to be replaced has reference bit = 0, replace it; if the reference bit = 1, set the bit to 0, leave the page in memory, and consider the next page, subject to the same rules.
57
Answer to the In-Class Exercise
There are 9 page faults in total.
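The answer can be checked with a small simulation of the clock algorithm (clock_faults is an illustrative name):

```python
def clock_faults(refs, nframes):
    """Count faults for the second-chance (clock) algorithm."""
    frames = [None] * nframes
    ref_bit = [0] * nframes
    hand = 0                                  # clock hand position
    faults = 0
    for page in refs:
        if page in frames:
            ref_bit[frames.index(page)] = 1   # hit: set the reference bit
            continue
        faults += 1
        while ref_bit[hand] == 1:             # give the page a second chance
            ref_bit[hand] = 0
            hand = (hand + 1) % nframes
        frames[hand] = page                   # replace the victim
        ref_bit[hand] = 1
        hand = (hand + 1) % nframes
    return faults

print(clock_faults([7,0,1,2,0,3,0,4,2,3,0,3,2], 3))   # 9
```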
58
Enhanced Second-Chance Algorithm
Improve the algorithm by using the reference bit and modify bit (if available) in concert. Take the ordered pair (reference, modify): (0, 0) neither recently used nor modified, the best page to replace. (0, 1) not recently used but modified, not quite as good; it must be written out before replacement. (1, 0) recently used but clean; probably will be used again soon. (1, 1) recently used and modified; probably will be used again soon, and needs to be written out before replacement. When page replacement is called for, use the clock scheme but with these four classes: replace a page in the lowest non-empty class. The circular queue might need to be searched several times.
59
Page-Buffering Algorithms
Always keep a pool of free frames; then a frame is available when needed, rather than found at fault time. Read the page into a free frame, select a victim to evict and add it to the free pool, and evict the victim when convenient. Possibly keep a list of modified pages: when the backing storage device becomes idle, write the pages there and mark them non-dirty. Possibly keep free frame contents intact, noting what is in them: if a page is referenced again before its frame is reused, there is no need to load its contents again from disk. This is generally useful to reduce the penalty if the wrong victim frame was selected.
60
In-Class Exercise Consider the following page reference string:
2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2. Assuming demand paging with three frames, 1) how do the following four replacement algorithms behave, and 2) how many page faults are encountered for each? OPT: LRU: FIFO: Clock:
61
Appendix
62
Segmentation Segments may be of unequal, dynamic size.
Segmentation simplifies the handling of growing data structures, allows programs to be altered and recompiled independently, lends itself to sharing data among processes, and lends itself to protection.
63
Segment Tables Each entry contains the starting address of the corresponding segment in main memory and the length of the segment. A bit is needed to determine whether the segment is already in main memory. Another bit is needed to determine whether the segment has been modified since it was last loaded into main memory.
64
Segment Table Entries
65
Segmentation
66
Combined Paging and Segmentation
Paging is transparent to the programmer. Segmentation is visible to the programmer. Each segment is broken into fixed-size pages.
67
Address Translation