1
Operating Systems Virtual Memory Alok Kumar Jagadev
2
Background Virtual memory is a technique that allows the execution of processes that are not completely in memory. The main visible advantage of this scheme is that programs can be larger than physical memory. Virtual memory is the separation of user logical memory from physical memory; this separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available.
3
Background Following are situations in which the entire program is not required to be fully loaded in main memory: user-written error-handling routines are used only when an error occurs in the data or computation; certain options and features of a program may be used rarely; many tables are assigned a fixed amount of address space even though only a small amount of the table is actually used. The ability to execute a program that is only partially in memory would confer many benefits.
4
Background Fewer I/O operations would be needed to load or swap each user program into memory. A program would no longer be constrained by the amount of physical memory that is available. Because each user program could take less physical memory, more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.
5
Background Virtual memory can be implemented via: Demand paging
Demand segmentation
6
Virtual Memory That is Larger Than Physical Memory
7
Virtual-address Space
Refers to the logical view of how a process is stored in memory. A process begins at a certain logical address (such as 0) and appears to occupy contiguous memory, even though the physical frames may not be contiguous.
8
Shared Library Using Virtual Memory
The stack or heap can grow if we wish to dynamically link libraries during program execution. A system library can be shared by several processes by mapping the shared object into each process's virtual address space.
9
Demand Paging A demand-paging system is quite similar to a paging system with swapping: when a process is to be executed, it is swapped into memory. Rather than swapping the entire process into memory, however, a lazy swapper called a pager is used. When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again, and instead of swapping in a whole process, it brings only those necessary pages into memory. Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.
10
Demand Paging Hardware support is required to distinguish between the pages that are in memory and the pages that are on disk; the valid–invalid bit scheme is used, so valid and invalid pages can be identified by checking the bit. Marking a page invalid has no effect if the process never attempts to access that page. While the process executes and accesses pages that are memory resident, execution proceeds normally.
11
Valid-Invalid Bit With each page-table entry a valid–invalid bit is associated (v: in memory / memory resident; i: not in memory). Initially the valid–invalid bit is set to i on all entries. During address translation, if the valid–invalid bit in the page-table entry is i, a page fault occurs. (Page-table snapshot: each entry holds a frame number and a valid–invalid bit.)
12
Transfer of a Paged Memory to Contiguous Disk Space
13
Steps
Step 1: Check an internal table for this process to determine whether the reference was a valid or an invalid memory access.
Step 2: If the reference was invalid, terminate the process. If it was valid but the page has not yet been brought in, page it in.
Step 3: Find a free frame.
Step 4: Schedule a disk operation to read the desired page into the newly allocated frame.
Step 5: When the disk read is complete, modify the internal table kept with the process and the page table to indicate that the page is now in memory.
Step 6: Restart the instruction that was interrupted by the illegal-address trap. The process can now access the page as though it had always been in memory.
Thus, the operating system reads the desired page into memory and restarts the process as though the page had always been in memory.
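To make the six steps concrete, here is a minimal Python sketch of a page-fault service routine. The Process record, free-frame list, and read_from_disk placeholder are illustrative assumptions, not a real operating-system interface.

```python
# Minimal sketch of the six page-fault service steps above.
# All structures here are simplified stand-ins, not a real OS API.
from dataclasses import dataclass, field

@dataclass
class Process:
    valid_pages: set                                   # pages that are legal references
    page_table: dict = field(default_factory=dict)     # page -> frame, for resident pages

def read_from_disk(page, frame):
    print(f"disk read: page {page} -> frame {frame}")  # placeholder for real I/O

def service_page_fault(proc, page, free_frames):
    # Step 1: consult the internal table to see whether the reference is valid.
    if page not in proc.valid_pages:
        # Step 2 (invalid reference): terminate the process.
        raise MemoryError(f"invalid reference to page {page}: terminate the process")
    frame = free_frames.pop()        # Step 3: find a free frame
    read_from_disk(page, frame)      # Step 4: schedule the disk read into that frame
    proc.page_table[page] = frame    # Step 5: mark the page as now in memory
    return frame                     # Step 6: caller restarts the faulting instruction

p = Process(valid_pages={0, 1, 2, 3})
service_page_fault(p, 2, free_frames=[5, 6, 7])   # loads page 2 into frame 7
```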
14
Advantages/Disadvantages
Following are the advantages of demand paging: large virtual memory; more efficient use of memory; unconstrained multiprogramming, since there is no limit on the degree of multiprogramming. Following are the disadvantages of demand paging: the number of tables and the amount of processor overhead for handling page interrupts are greater than with simple paged management techniques, and there is a lack of explicit constraints on a job's address-space size.
15
Page Table When Some Pages Are Not in Main Memory
16
Page Fault If there is a reference to a page, the first reference to that page will trap to the operating system: a page fault. The operating system looks at another table to decide: an invalid reference means abort; a valid reference to a page that is just not in memory means get an empty frame, swap the page into the frame via a scheduled disk operation, reset the tables to indicate the page is now in memory, set the validation bit to v, and restart the instruction that caused the page fault.
17
Steps in Handling a Page Fault
18
Performance of Demand Paging
Stages in Demand Paging
1. Trap to the operating system.
2. Save the user registers and process state.
3. Determine that the interrupt was a page fault.
4. Check that the page reference was legal and determine the location of the page on the disk.
5. Issue a read from the disk to a free frame: wait in a queue for this device until the read request is serviced; wait for the device seek and/or latency time; begin the transfer of the page to a free frame.
6. While waiting, allocate the CPU to some other user.
7. Receive an interrupt from the disk I/O subsystem (I/O completed).
8. Save the registers and process state for the other user.
9. Determine that the interrupt was from the disk.
10. Correct the page table and other tables to show the page is now in memory.
11. Wait for the CPU to be allocated to this process again.
12. Restore the user registers, process state, and new page table, and then resume the interrupted instruction.
19
Paging If a page is not in physical memory: find the page on disk, find a free frame, and bring the page into memory.
What if there is no free frame in memory?
20
Page Replacement Basic idea: if there is a free frame in memory, use it; if not, select a victim frame, write the victim out to disk, read the desired page into the now-free frame, update the page tables, and restart the process.
21
Page Replacement The main objective of a good replacement algorithm is to achieve a low page-fault rate: ensure that heavily used pages stay in memory, and the replaced page should not be needed for some time. A secondary objective is to reduce the latency of a page fault: efficient code, and replacing pages that do not need to be written out.
22
Reference String A reference string is the sequence of pages being referenced. If a user has the following sequence of addresses: 123, 215, 600, 1234, 76, 96, and the page size is 100, then the reference string is 1, 2, 6, 12, 0, 0.
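The conversion from addresses to a reference string is just integer division by the page size; a short sketch using the numbers from this slide:

```python
# Sketch: deriving a reference string from a sequence of addresses,
# assuming the page size of 100 used in the example above.
addresses = [123, 215, 600, 1234, 76, 96]
page_size = 100

reference_string = [addr // page_size for addr in addresses]
print(reference_string)   # [1, 2, 6, 12, 0, 0]
```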
23
Page and Frame Replacement Algorithms
A frame-allocation algorithm determines how many frames to give each process and which frames to replace. A page-replacement algorithm should give the lowest page-fault rate on both first access and re-access. Evaluate an algorithm by running it on a particular string of memory references (a reference string) and computing the number of page faults on that string. The string contains just page numbers, not full addresses, and repeated access to a page that is already in memory does not cause a page fault. In the examples that follow, the reference string is 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1.
24
Graph of Page Faults Versus The Number of Frames
25
First-In, First-Out (FIFO)
The oldest page in physical memory is the one selected for replacement. Very simple to implement: keep a list; victims are chosen from the tail, and new pages are placed at the head.
26
First-In-First-Out (FIFO) Algorithm
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5. With 3 frames (3 pages can be in memory at a time per process): 9 page faults. With 4 frames: 10 page faults. FIFO replacement manifests Belady's anomaly: more frames can mean more page faults.
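A small simulation, assuming a simple FIFO queue of resident pages (the function name and structure are illustrative only), reproduces the counts quoted above and thereby shows Belady's anomaly:

```python
# Sketch of FIFO page replacement on the reference string above, counting
# faults to reproduce Belady's anomaly (9 faults with 3 frames, 10 with 4).
from collections import deque

def fifo_faults(reference, num_frames):
    frames = deque()          # leftmost = oldest page (next victim)
    faults = 0
    for page in reference:
        if page in frames:
            continue                      # hit: FIFO order is not updated
        faults += 1
        if len(frames) == num_frames:
            frames.popleft()              # evict the oldest page
        frames.append(page)               # new page enters at the newest end
    return faults

reference = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(reference, 3))   # 9
print(fifo_faults(reference, 4))   # 10 -- more frames, more faults
```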
27
FIFO Issues Poor replacement policy
FIFO evicts the oldest page in the system, but a heavily used variable is usually around for a long time, so FIFO may replace the very page that holds it. FIFO does not consider page usage.
28
FIFO Illustrating Belady’s Anomaly
29
Optimal Page Replacement
Often called Belady's MIN. Basic idea: replace the page that will not be referenced for the longest time. This gives the lowest possible fault rate. It is impossible to implement, but it does provide a good measure for other techniques.
30
Optimal Algorithm An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms. Such an algorithm exists and has been called OPT or MIN: replace the page that will not be used for the longest period of time. It requires knowing the future time at which each page will be used.
31
Optimal Page Replacement
32
Optimal Algorithm Replace the page that will not be used for the longest period of time. A 4-frame example with reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 produces 6 page faults. How do you know which page this is? In practice you cannot, so the optimal algorithm is used for measuring how well other algorithms perform.
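A sketch of the same idea in code, assuming we can look ahead in the reference string (which is exactly what makes OPT unimplementable in practice); run on the 4-frame example above it reports 6 faults:

```python
# Sketch of the optimal (OPT/MIN) policy: on a fault with no free frame,
# evict the resident page whose next use lies furthest in the future
# (pages never used again are preferred victims).
def opt_faults(reference, num_frames):
    frames = set()
    faults = 0
    for i, page in enumerate(reference):
        if page in frames:
            continue                      # hit
        faults += 1
        if len(frames) < num_frames:
            frames.add(page)              # free frame available
            continue
        future = reference[i + 1:]
        # Distance to each resident page's next use; "never used again" = infinity.
        victim = max(frames,
                     key=lambda p: future.index(p) if p in future else float("inf"))
        frames.remove(victim)
        frames.add(page)
    return faults

reference = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(opt_faults(reference, 4))   # 6 page faults
```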
33
Least Recently Used (LRU)
Basic idea: replace the page in memory that has not been accessed for the longest time. This is the optimal policy looking back in time, as opposed to forward in time; fortunately, programs tend to behave in the future much as they did in the past.
34
Least Recently Used (LRU) Algorithm
Use past knowledge rather than future knowledge: replace the page that has not been used for the longest amount of time, associating a time of last use with each page. On the reference string above this gives 12 faults – better than FIFO but worse than OPT. It is generally a good algorithm and frequently used. But how can it be implemented?
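A minimal simulation of LRU, keeping the time of last use for each page, reproduces the 12 faults quoted above when run with 3 frames (an assumption matching the accompanying diagrams, since the frame count is not stated on this slide):

```python
# Sketch of LRU replacement: track each page's most recent reference and,
# on a fault with no free frame, evict the least recently used page.
def lru_faults(reference, num_frames):
    last_used = {}       # page -> index of its most recent reference
    frames = set()
    faults = 0
    for i, page in enumerate(reference):
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                victim = min(frames, key=lambda p: last_used[p])  # least recently used
                frames.remove(victim)
            frames.add(page)
        last_used[page] = i              # update the time of last use
    return faults

reference = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(reference, 3))   # 12
```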
35
LRU Issues How to keep track of last page access?
Keeping track requires special hardware support. Two major solutions: counters – a hardware clock "ticks" on every memory reference, the referenced page is marked with this "time", and the page with the smallest "time" value is replaced; stack – keep a stack of references, move a page to the top of the stack on every reference to it, and the page at the bottom of the stack is the next one to be replaced.
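A sketch of the stack solution, using Python's OrderedDict as the stack (a referenced page moves to the top; the bottom page is the next victim). The class and method names are illustrative only:

```python
# Sketch of the "stack" approach to LRU described above.
from collections import OrderedDict

class LRUStack:
    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.stack = OrderedDict()       # first key = bottom (LRU), last key = top (MRU)

    def reference(self, page):
        if page in self.stack:
            self.stack.move_to_end(page)  # move the referenced page to the top
            return None                   # hit: no replacement needed
        victim = None
        if len(self.stack) == self.num_frames:
            victim, _ = self.stack.popitem(last=False)   # replace the bottom page
        self.stack[page] = True           # new page enters at the top
        return victim

lru = LRUStack(3)
for page in [7, 0, 1, 2, 0, 3]:
    print(page, "->", lru.reference(page))   # prints the victim, if any
```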
36
LRU Issues Both techniques just listed require additional hardware
Remember, memory references are very common, and it is impractical to invoke software on every memory reference. True LRU is therefore not used very often; instead, we will try to approximate LRU.
37
Replacement Hardware Support
Most systems will simply provide a reference bit in the page table for each page. On a reference to a page, this bit is set to 1. The bit can be cleared by the OS. This simple hardware has led to a variety of algorithms that approximate LRU.
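One possible policy built on this reference bit (an illustrative assumption; the slide does not name a specific algorithm): the OS periodically clears all bits and, at replacement time, prefers a page whose bit is still 0, i.e., one not referenced since the last clearing.

```python
# Hedged sketch of an LRU approximation using per-page reference bits.
class ReferenceBits:
    def __init__(self, pages):
        self.bit = {p: 0 for p in pages}    # one reference bit per resident page

    def on_reference(self, page):
        self.bit[page] = 1                  # hardware sets the bit on every access

    def clear_all(self):
        for p in self.bit:                  # OS clears the bits periodically
            self.bit[p] = 0

    def pick_victim(self):
        # Prefer a page not referenced since the last clearing; otherwise any page.
        for p, b in self.bit.items():
            if b == 0:
                return p
        return next(iter(self.bit))

rb = ReferenceBits(pages=[0, 1, 2, 3])
for p in [0, 2, 0]:
    rb.on_reference(p)
print(rb.pick_victim())   # a page whose reference bit is still 0, e.g. page 1
```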
38
Thrashing If a process does not have "enough" pages, the page-fault rate is very high, causing low CPU utilization; the OS thinks it needs to increase multiprogramming and adds another process to the system. Thrashing is when a process is busy swapping pages in and out.
39
Thrashing If a process does not have "enough" pages, the page-fault rate is very high: the process faults to get a page and replaces an existing frame, but quickly needs the replaced frame back. This leads to low CPU utilization, the operating system thinking that it needs to increase the degree of multiprogramming, and another process being added to the system. Thrashing: a process is busy swapping pages in and out.
40
Thrashing (Cont.)
41
Cause of Thrashing Why does paging work? Locality model
A process migrates from one locality to another, and localities may overlap. Why does thrashing occur? The sum of the localities exceeds the total memory size. How do we fix thrashing? The working-set model and page-fault frequency.
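A tiny sketch of the working-set idea named above, using the common definition of the working set as the distinct pages referenced in the most recent Δ (delta) references; the definition and window size here are standard background rather than details from this slide:

```python
# Sketch: working set at time t = distinct pages in the last `delta` references.
def working_set(reference, t, delta):
    window = reference[max(0, t - delta + 1): t + 1]   # last `delta` references up to t
    return set(window)

reference = [1, 2, 1, 3, 4, 4, 3, 4, 1, 2]
print(working_set(reference, t=7, delta=4))   # {3, 4}: pages referenced in the window
```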
42
Demand Paging and Thrashing
Why does demand paging work? Locality model: a process migrates from one locality to another, and localities may overlap. Why does thrashing occur? The size of the locality is greater than the total memory size. Limit its effects by using local or priority page replacement.