Chapter 9: Virtual Memory – Part I
Modified by Dr. Neerja Mhaskar for CS 3SH3
Virtual Memory
All the memory management strategies discussed so far required the entire program to be in memory before it executes. Facts:
- Code needs to be in memory to execute, but the entire program is rarely used (e.g., error-handling code, unusual routines, large data structures).
- The entire program code is not needed at the same time.
Virtual memory is a technique that allows the execution of processes that are not completely in memory. Therefore:
- A program is no longer constrained by the limits of physical memory.
- Each program takes less memory while running, so more programs can run at the same time.
- Address spaces can be shared by several processes.
- Less I/O is needed to load or swap programs into memory, so each user program runs faster.
Virtual-address Space
The slide figure shows the logical address space of a process. The unused address space between the stack and the heap is called a hole; virtual address spaces with holes are called sparse address spaces. No physical memory is needed until the heap or stack grows into a given new page. Sparse address spaces enable:
- sharing of libraries,
- sharing of memory by processes, etc.
Demand Paging
Virtual memory can be implemented via demand paging or demand segmentation.
Demand paging: when a process is swapped in from disk, its pages are not all swapped in at once; rather, they are swapped in only when the process needs them.
- Lazy swapper: never swaps a page into memory unless the page will be needed. A swapper that deals with pages is a pager.
- The valid-invalid bit scheme (discussed in the previous lecture) distinguishes pages in memory from pages on disk in the page table.
Pure demand paging: the process is started with no pages in memory. The OS sets the instruction pointer to the first instruction of the process, which is non-memory-resident, so a page fault occurs; likewise, the first access to every other page causes a page fault.
Basic Concepts
The basic idea behind demand paging is that when a process is swapped in, the pager loads into memory only those pages that it expects the process to need right away. The valid-invalid bit scheme distinguishes pages that are in memory (marked valid in the page table) from pages on disk that are not loaded in memory (marked invalid). If the process only ever accesses pages that are loaded in memory (memory-resident pages), then it runs exactly as if all its pages were loaded into memory. On the other hand, if a page is needed that was not originally loaded, a page-fault trap is generated.
Page Table When Some Pages Are Not in Main Memory
Page Fault
The first reference to a page that is not in memory traps to the operating system: a page fault. The operating system looks at another table to decide:
- Invalid reference: abort the process.
- Valid reference, but the page is just not in memory: page it in, as follows.
1. Find a free frame.
2. Swap the page into the frame via a scheduled disk operation from secondary memory (swap space).
3. Reset the tables to indicate the page is now in memory; set the valid bit = v.
4. Restart the instruction that caused the page fault.
Secondary memory (a swap device with swap space) is a high-speed disk that holds the pages not in main memory.
Steps in Handling a Page Fault
Performance of Demand Paging
Let p = probability of a page fault (0 ≤ p ≤ 1):
- if p = 0, there are no page faults
- if p = 1, every reference is a fault
Effective Access Time (EAT):
EAT = (1 - p) × memory access time + p × page-fault time
Page-fault time is based on three major components: servicing the page-fault interrupt, reading in the page, and restarting the process.
Demand Paging Example
Memory access time = 200 nanoseconds; average page-fault service time = 8 milliseconds.
EAT = (1 - p) × 200 + p × 8 milliseconds
    = (1 - p) × 200 + p × 8,000,000
    = 200 + p × 7,999,800 (in nanoseconds)
If one access out of 1,000 causes a page fault, then p = 10⁻³ and EAT = 8.2 microseconds. This is a slowdown by a factor of 40!
If we want the slowdown factor to be less than 10 percent, then at most one page fault may occur in every 400,000 memory accesses.
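The arithmetic above can be checked with a short calculation. This is a minimal sketch using the slide's figures (a 200 ns memory access and an 8 ms = 8,000,000 ns fault-service time); the function name is invented for illustration:

```python
def effective_access_time(p, mem_ns=200, fault_ns=8_000_000):
    """EAT = (1 - p) * memory access time + p * page-fault service time."""
    return (1 - p) * mem_ns + p * fault_ns

# One fault per 1,000 accesses: p = 10^-3
eat = effective_access_time(1 / 1000)
print(eat)          # about 8,199.8 ns, i.e. roughly 8.2 microseconds
print(eat / 200)    # roughly a 40x slowdown over a plain 200 ns access
```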
Copy-on-Write
Consider the fork() system call used to create a new child process: it creates a copy of the parent's address space for the child. Since most fork() calls are immediately followed by an exec() system call, this copying is unnecessary.
Copy-on-Write (COW) allows the parent and child processes to initially share the same pages in memory. Only if either process modifies a shared page is that page copied. COW allows more efficient process creation because only modified pages are copied.
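The share-then-copy behaviour can be sketched in a few lines. This is an illustrative toy model, not a real kernel API; `CowSpace`, `fork`, and `write` are invented names:

```python
class CowSpace:
    """Toy model of a COW address space: a list of shared 'pages'."""
    def __init__(self, pages):
        self.pages = pages

    def fork(self):
        # The child gets its own page list, but the entries reference
        # the parent's page objects -- nothing is copied yet.
        return CowSpace(list(self.pages))

    def write(self, i, offset, value):
        # Copy-on-write: make a private copy of just the modified page.
        self.pages[i] = bytearray(self.pages[i])
        self.pages[i][offset] = value

parent = CowSpace([bytearray(b"AAAA"), bytearray(b"BBBB")])
child = parent.fork()
child.write(1, 0, ord("C"))
# Page 0 is still shared; only page 1 was copied for the child,
# and the parent's page 1 is unchanged.
```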
COW example Before Process 1 Modifies Page C
After Process 1 Modifies Page C
Over-Allocation of Memory
Processes usually do not need all of their memory at a given instant in time, so we could allocate less memory per process and keep more processes in memory. This increases the level of multiprogramming, but it over-allocates memory: all processes in memory may request their full memory requirement, leaving no free frame to satisfy every process. Additionally, memory is also needed for the kernel, I/O buffers, etc.
Possible solutions to over-allocation:
- Swap out a process entirely, freeing up its frames.
- Page replacement: find some page in memory that is not really in use and page it out. This is achieved by modifying the page-fault service routine.
- Use a modify (dirty) bit to reduce the overhead of page transfers: only modified pages are written back to disk.
Basic Page Replacement
1. Find the location of the desired page on disk.
2. Find a free frame:
   - If there is a free frame, use it.
   - If there is no free frame, use a page-replacement algorithm to select a victim frame; write the victim frame to disk if it is dirty (modified).
3. Bring the desired page into the (newly) free frame; update the page and frame tables.
4. Continue the process by restarting the instruction that caused the trap.
Note: there are now potentially two page transfers per page fault, increasing the EAT.
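The replacement steps above can be sketched in miniature. This is a hedged illustration, not an OS implementation: `service_fault`, the sets, and the `stats` dictionary are all invented stand-ins, and the victim policy (lowest page number) is arbitrary:

```python
def service_fault(page, resident, capacity, dirty, pick_victim, stats):
    """Evict a victim if memory is full (writing it back only when
    dirty), then read the desired page in."""
    if len(resident) >= capacity:                # no free frame
        victim = pick_victim(resident)           # page-replacement algorithm
        if victim in dirty:
            stats["writes"] += 1                 # 2nd transfer: write-back
            dirty.discard(victim)
        resident.discard(victim)
    resident.add(page)
    stats["reads"] += 1                          # 1st transfer: page-in

resident, dirty = set(), set()
stats = {"reads": 0, "writes": 0}
for page, is_write in [(1, True), (2, False), (3, False)]:
    if page not in resident:
        service_fault(page, resident, 2, dirty, min, stats)
    if is_write:
        dirty.add(page)
# Faulting on page 3 evicts dirty page 1, costing two transfers for
# that one fault -- exactly the EAT-increasing case noted above.
```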
Page Replacement
Page Replacement Algorithms
We must solve two major problems to implement demand paging:
- Develop a frame-allocation algorithm, which determines how many frames to give each process and which frames to replace.
- Use a page-replacement algorithm, which should provide the lowest page-fault rate on both first access and re-access.
We evaluate a page-replacement algorithm by running it on a particular reference string (a list of page numbers) and computing the number of page faults it incurs. In all the examples, the reference string of referenced page numbers is:
7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
First-In-First-Out (FIFO) Algorithm
Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
With 3 frames (3 pages can be in memory at a time per process): 15 page faults.
The number of page faults can vary with the reference string: consider 1,2,3,4,1,2,5,1,2,3,4,5 (check this as an exercise with 3 and 4 frames). Adding more frames can cause more page faults! This is Belady's Anomaly.
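A short simulation reproduces both counts, including the Belady's Anomaly exercise; this is a sketch, with the `fifo_faults` helper invented for illustration:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    frames, faults = deque(), 0       # oldest resident page at the left
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()      # evict the page loaded earliest
            frames.append(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(refs, 3))          # 15, as on the slide

belady = [1,2,3,4,1,2,5,1,2,3,4,5]
print(fifo_faults(belady, 3))        # 9 faults with 3 frames
print(fifo_faults(belady, 4))        # 10 faults with 4 frames: Belady's Anomaly
```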
Optimal Algorithm
Replace the page that will not be used for the longest period of time. Since this requires future knowledge of the reference string, it is used as a yardstick for measuring how well other algorithms perform. 9 page faults is optimal for the example reference string.
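Because the reference string is fixed, the "future knowledge" OPT needs is available to a simulator. A sketch, with the `opt_faults` helper invented for illustration:

```python
def opt_faults(refs, nframes):
    """OPT: on a fault with no free frame, evict the resident page whose
    next use lies farthest in the future (or that is never used again)."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            future = refs[i + 1:]
            # Pages never referenced again rank past every future index.
            victim = max(frames,
                         key=lambda q: future.index(q) if q in future
                         else len(future))
            frames.remove(victim)
        frames.add(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(opt_faults(refs, 3))   # 9, matching the slide
```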
Least Recently Used (LRU) Algorithm
Replace the page that has not been used for the longest period of time; associate the time of last use with each page. 12 faults: better than FIFO but worse than OPT. LRU is generally a good algorithm and is frequently used.
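LRU can be simulated by keeping the resident pages ordered by recency of use; a sketch, with the `lru_faults` helper invented for illustration:

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """LRU: evict the resident page unused for the longest time."""
    frames, faults = OrderedDict(), 0   # least recently used page first
    for page in refs:
        if page in frames:
            frames.move_to_end(page)    # refresh recency on a hit
            continue
        faults += 1
        if len(frames) == nframes:
            frames.popitem(last=False)  # evict the least recently used
        frames[page] = None
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(refs, 3))   # 12: better than FIFO's 15, worse than OPT's 9
```

Real hardware cannot afford an exact recency list per access, which is why practical systems approximate LRU (e.g., with reference bits), but the ordered-dictionary model captures the policy itself.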
End of Chapter 9 – Part 1