
1 Lecture 30: Virtual Memory - Page Replacement Algorithms

2 Review: Page Fault
What if the required page is not in memory?
The MMU reports a page fault to the CPU
The CPU gives control to the OS
The OS fetches the page from disk
If there is no free frame, the OS must evict an existing page from memory (page replacement policy)
The faulting instruction is restarted

3 OS Issues
Fetch policy - when to fetch pages into memory?
Placement policy - where to place fetched pages?
Replacement policy - which pages to remove?
All three are combined in the handling of a page fault
The operating-system issues related to paging are traditionally split into three areas: the fetch policy, which governs when pages are fetched from secondary storage and which pages are fetched; the placement policy, which governs where the fetched pages are placed in primary storage; and the replacement policy, which governs when and which pages are removed from primary storage (and perhaps written back to secondary storage). Copyright © 2002 Thomas W. Doeppner. All rights reserved.

4 A Simple Paging Scheme
Fetch policy: start the process off with no pages in primary storage and bring in pages on demand (and only on demand); this is known as demand paging
Placement policy: it doesn't matter - put the incoming page in the first available page frame
Replacement policy: replace the page that has been in primary storage the longest (FIFO policy)
Let's first consider a fairly simple scheme for paging. The fetch policy is based on demand paging: pages are fetched only when needed. So, when we start a process, none of its pages are in primary memory, and pages are fetched in response to page faults. The idea is that, after a somewhat slow start, the process soon has enough pages in memory that it can execute fairly quickly. The big advantage of this approach is that no primary storage is wasted on unnecessary pages. As is usually the case, the placement policy is irrelevant: it doesn't matter where pages are placed in primary storage. We start with a much too simple-minded replacement policy: when we want to fetch a page but there is no room for it, we replace the page that has been in primary memory the longest. (This is known as a FIFO, or first-in-first-out, approach.)
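
To make the scheme concrete, here is a minimal user-space simulation of demand paging with FIFO replacement. The reference string and the three-frame memory are invented for illustration; they are not the lecture's example.

#include <stdio.h>

#define NFRAMES 3

int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int nrefs = sizeof refs / sizeof refs[0];
    int frames[NFRAMES];
    int used = 0;       /* frames filled so far */
    int next = 0;       /* FIFO: index of the oldest page, replaced next */
    int faults = 0;

    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (!hit) {
            faults++;                       /* "fetch" the page on demand */
            if (used < NFRAMES)
                frames[used++] = refs[i];   /* placement: first free frame */
            else {
                frames[next] = refs[i];     /* replacement: oldest resident */
                next = (next + 1) % NFRAMES;
            }
        }
    }
    printf("%d faults, %d hits out of %d references\n",
           faults, nrefs - faults, nrefs);  /* prints: 9 faults, 3 hits */
    return 0;
}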

5 Improving the Fetch Policy
(Figure: arrows mark "Fault here" on one page and "Bring these in as well" on the following pages.)
One way to improve the performance of our paging system is to reduce the number of page faults. Ideally, we'd like to be able to bring a page into primary storage before it is referenced (but only bring in those pages that will be referenced soon). In certain applications this might be possible, but for the general application we don't have enough knowledge of what the program will be doing. However, we can make the reasonable assumption that programs don't reference their pages randomly; furthermore, if a program has just referenced page i, there is a pretty good chance it will soon reference page i+1. We aren't confident enough of this to go out of our way to fetch page i+1, but if it doesn't cost us very much to fetch this next page, we might as well do it. We know that most of the time required for a disk transfer is spent in seek and rotational latency delays; the actual data transfer takes relatively little time. So, if we are fetching page i from disk and page i+1 happens to be stored on disk adjacent to page i, it does not take appreciably more time to fetch both pages than to fetch page i alone. Furthermore, if page i+2 follows page i+1, we might as well fetch it too. By using this approach, we are effectively increasing the page size: we are grouping hardware pages into larger, operating-system pages. However, we only fetch more pages than are required if there are free page frames available. Furthermore, we can arrange for such "pre-paged" pages to be the first candidates for replacement if there is a memory shortage and they have not been referenced.
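
The sketch below illustrates this clustered fetch under stated assumptions: a made-up four-page disk extent defines adjacency, and all names (disk_adjacent, fetch_from_disk, free_frames) are invented for illustration, not any real kernel API.

#include <stdio.h>

#define MAX_CLUSTER 4

/* Hypothetical layout: pages are disk-adjacent if in the same 4-page extent. */
static int disk_adjacent(int p, int q) { return p / 4 == q / 4; }

static int free_frames = 2;   /* toy global free-frame count */

static void fetch_from_disk(int page, int prefetched) {
    printf("fetch page %d%s\n", page, prefetched ? " (prefetched)" : "");
}

void fault(int p) {
    fetch_from_disk(p, 0);                    /* the demanded page */
    for (int q = p + 1; q < p + MAX_CLUSTER; q++) {
        if (free_frames == 0 || !disk_adjacent(p, q))
            break;                            /* only cheap, nearby reads */
        free_frames--;
        fetch_from_disk(q, 1);                /* first candidate for eviction */
    }
}

int main(void) {
    fault(5);   /* pages 5, 6, 7 share the extent; fetches 5, 6, 7 */
    return 0;
}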

6 Page Replacement
Problem statement: a page is being brought into a memory that has no free space. Which page should we replace to make space?

7 Page Replacement - Mental Exercise
What would the optimal policy be if we knew the future page accesses?
Policy: choose the page that will be referenced farthest in the future
However, we don't know the future
We can only hope that the next few references will be to recently referenced pages
What is the use of knowing about this policy?
It gives us a baseline against which to assess the performance of real algorithms
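
Since the whole reference string must be known in advance, the optimal policy can only be computed offline. Here is a small sketch of the victim choice, with an invented reference string:

#include <stdio.h>

/* Return the index in frames[] of the page whose next use in
 * refs[pos..nrefs-1] is farthest away (or which is never used again). */
int opt_victim(const int frames[], int nframes,
               const int refs[], int nrefs, int pos) {
    int victim = 0, farthest = -1;
    for (int j = 0; j < nframes; j++) {
        int next = nrefs;                 /* "never used again" */
        for (int i = pos; i < nrefs; i++)
            if (refs[i] == frames[j]) { next = i; break; }
        if (next > farthest) { farthest = next; victim = j; }
    }
    return victim;
}

int main(void) {
    int refs[] = {1, 2, 3, 4, 2, 1, 5};
    int frames[] = {1, 2, 3};             /* resident pages */
    /* At position 3 (reference to page 4): page 1 is next used at 5,
     * page 2 at 4, page 3 never again, so frame 2 is the optimal victim. */
    printf("evict frame %d\n", opt_victim(frames, 3, refs, 7, 3));
    return 0;
}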

8 Replacement Algorithms
FIFO (First-In-First-Out)
NRU (Not-Recently-Used)
Second Chance
LRU (Least-Recently-Used)
Clock algorithm(s)
Two issues:
How good is the decision?
What is the overhead? The cost per memory access should be very small; the cost per replacement can be larger
Determining which pages to replace is a decision that can have significant ramifications for system performance. If the wrong pages are replaced, i.e. pages that are referenced again fairly soon, then not only must those pages be fetched again, but a new set of pages must be tagged for removal. The FIFO scheme has the benefit of being easy to implement, but often removes the wrong pages. Now, we can't assume that we have perfect knowledge of a process's reference pattern, but we can assume that most processes are reasonably "well behaved." In particular, it is usually the case that pages referenced recently will be referenced again soon, and that pages that haven't been referenced for a while won't be referenced for a while. This is the basis for the scheme known as LRU: least recently used. We order pages in memory by the time of their last access; the next page to be removed is the page whose last access is farthest in the past. A variant of this approach is LFU (least frequently used): pages are ordered by their frequency of use, and we replace the page that has been used least frequently of those still in primary storage. Both of these policies behave better for average programs than FIFO, but they are costly to implement. It is straightforward to implement LRU for the Unix buffer cache, since the time needed to order the buffers is insignificant compared to the time required to access them, but for paging we cannot afford to maintain pages in order of last reference.

9 FIFO
(Worked FIFO example; the reference string and frame contents did not survive transcription. 17 of the 33 references fault, marked F.) Hit ratio = 16/33

10 Help from Hardware
For each page frame, the hardware maintains two bits:
Referenced bit (R) - 1 if the page frame has been referenced recently
Modified bit (M) - 1 if the page has been modified since it was loaded; also known as the "dirty bit"

11 Not Recently Used Algorithm (NRU)
Pages are classified into 4 classes:
Class 0: not referenced, not modified (R=0, M=0)
Class 1: not referenced, modified (R=0, M=1)
Class 2: referenced, not modified (R=1, M=0)
Class 3: referenced, modified (R=1, M=1)
NRU removes a page at random from the lowest-numbered non-empty class
The R bit is cleared periodically
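
A minimal sketch of NRU victim selection, assuming the hardware R and M bits from the previous slide are mirrored per frame. The values loosely follow the exercise on slide 18 below; the M bits of pages 2 and 3 are assumed, since they do not affect the choice here.

#include <stdio.h>

#define NFRAMES 4

struct frame { int page; unsigned r : 1; unsigned m : 1; };

struct frame frames[NFRAMES] = {
    {0, 0, 1},   /* class 1: not referenced, modified */
    {1, 0, 0},   /* class 0: not referenced, not modified */
    {2, 1, 0},   /* class 2 (M bit assumed) */
    {3, 1, 1},   /* class 3 (M bit assumed) */
};

int nru_victim(void) {
    int best = -1, best_cls = 4;
    for (int i = 0; i < NFRAMES; i++) {
        int cls = 2 * frames[i].r + frames[i].m;   /* class number 0..3 */
        if (cls < best_cls) { best_cls = cls; best = i; }
        /* A fuller version would collect the whole lowest class and pick
         * one member at random; first-found keeps the sketch short. */
    }
    return best;
}

int main(void) {
    int v = nru_victim();
    printf("evict frame %d (page %d)\n", v, frames[v].page);  /* page 1 */
    return 0;
}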

12 Second Chance
Based on FIFO: the oldest pages are inspected for replacement
But a page is given a "second chance" if it has been used recently

13 Second Chance Algorithm
Example: page fault at time = 20; pages are sorted in FIFO order (by time of arrival)
If the earliest page has R=1, give it a second chance: clear its R bit and move it to the end of the list, as if it had just arrived
If the earliest page has R=0, evict it
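
A compact sketch of the list-based algorithm, using an array as the FIFO queue with the oldest page at index 0. The page order and R bits follow the exercise on slide 18 below.

#include <stdio.h>

#define NFRAMES 4

struct entry { int page; unsigned r : 1; };

struct entry queue[NFRAMES] = {        /* oldest first */
    {3, 1}, {2, 1}, {0, 0}, {1, 0},
};

/* Pick a victim and return its page number. */
int second_chance_evict(void) {
    for (;;) {
        struct entry head = queue[0];
        for (int i = 0; i < NFRAMES - 1; i++)
            queue[i] = queue[i + 1];   /* dequeue the oldest page */
        if (!head.r)
            return head.page;          /* old and unused: evict it; the
                                          incoming page takes the last slot */
        head.r = 0;                    /* used recently: second chance */
        queue[NFRAMES - 1] = head;     /* re-queue as if newly arrived */
    }
}

int main(void) {
    /* Pages 3 and 2 have R=1 and are re-queued; page 0 is evicted. */
    printf("evict page %d\n", second_chance_evict());
    return 0;
}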

14 Clock Algorithm - Another Implementation of Second Chance
Order the pages in a circular list; the "hand" of the clock points to the current candidate for replacement
When required to evict a page:
If the page pointed to has R=0, evict it
If R=1, clear R and move the hand forward
The clock algorithm can also be combined with NRU (basing the decision on both the R and M bits)
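
The same policy with a clock hand in place of list manipulation; the values again follow the slide 18 exercise, so page 0 is evicted.

#include <stdio.h>

#define NFRAMES 4

int page[NFRAMES] = {3, 2, 0, 1};      /* resident pages, circular order */
unsigned r[NFRAMES] = {1, 1, 0, 0};    /* referenced bits */
int hand = 0;                          /* next frame to consider */

/* Return the frame to replace; R bits are cleared as the hand sweeps. */
int clock_evict(void) {
    for (;;) {
        if (r[hand] == 0) {
            int victim = hand;
            hand = (hand + 1) % NFRAMES;   /* hand moves past the victim */
            return victim;
        }
        r[hand] = 0;                       /* second chance */
        hand = (hand + 1) % NFRAMES;
    }
}

int main(void) {
    int v = clock_evict();
    printf("evict frame %d (page %d)\n", v, page[v]);
    return 0;
}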

15 Least Recently Used (LRU)
Replace the page in memory that has been unused for the longest time
Locality of reference: pages used in the recent past will likely be used in the near future (true in typical cases)
Example execution:

16 Least Recently Used (LRU)
(Example execution, continued; the figure did not survive transcription.)
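
Since the slide's example is lost, here is a small timestamp-based sketch of the LRU decision instead, reusing the reference times from the exercise on slide 18 below.

#include <stdio.h>

#define NFRAMES 4

int page[NFRAMES]     = {0, 1, 2, 3};
int last_ref[NFRAMES] = {161, 160, 162, 163};   /* time of last reference */

int lru_victim(void) {
    int victim = 0;
    for (int i = 1; i < NFRAMES; i++)
        if (last_ref[i] < last_ref[victim])     /* earliest last use */
            victim = i;
    return victim;
}

int main(void) {
    int v = lru_victim();
    printf("evict frame %d (page %d, last used at %d)\n",
           v, page[v], last_ref[v]);            /* page 1, time 160 */
    return 0;
}

Real kernels cannot afford to update a timestamp or a sorted list on every memory access, which is what motivates the aging approximation on the next slide.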

17 Aging - Approximating LRU
Also known as Not Frequently Used (NFU) with aging
The aging algorithm simulates LRU in software
It keeps an 8-bit counter for each page frame: on every clock tick the counter is shifted right one bit, and the R bit is shifted in as the new leftmost bit
The counter thus provides an estimate of the usage history of the page; the frame with the smallest counter is the eviction candidate
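
A sketch of one aging step per clock tick; the R-bit samples observed over four ticks are invented.

#include <stdio.h>
#include <stdint.h>

#define NFRAMES 3
#define NTICKS  4

uint8_t age[NFRAMES];                  /* per-frame aging counters */

/* One clock tick: shift right, insert this interval's R bit on the left. */
void tick(const unsigned r_now[]) {
    for (int i = 0; i < NFRAMES; i++)
        age[i] = (uint8_t)((age[i] >> 1) | (r_now[i] ? 0x80 : 0));
}

int main(void) {
    /* Referenced bits for three frames, sampled at four ticks. */
    unsigned r[NTICKS][NFRAMES] = {
        {1, 1, 0}, {1, 0, 0}, {0, 1, 1}, {1, 0, 1},
    };
    for (int t = 0; t < NTICKS; t++)
        tick(r[t]);
    for (int i = 0; i < NFRAMES; i++)
        printf("frame %d: age = 0x%02x\n", i, age[i]);
    /* Prints 0xB0, 0x50, 0xC0: frame 1 has the smallest counter and is
     * the least recently used, so it would be evicted first. */
    return 0;
}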

18 Example
Page frame   Time loaded   Time referenced   R bit   M bit
0            60            161               0       1
1            130           160               0       0
2            26            162               1       -
3            20            163               1       -
(The R and M bits for frames 2 and 3 did not survive transcription; their R bits are inferred from the answers on slide 20, and their M bits are unrecoverable.)

19 Questions
Given the table above, which page frame will be replaced under each policy?
First-In-First-Out (FIFO)
Not-Recently-Used (NRU)
Second Chance
Least-Recently-Used (LRU)

20 Answer
First-In-First-Out (FIFO): page 3 (loaded earliest, at time 20)
Not-Recently-Used (NRU): page 1 (the only page in class 0: R=0, M=0)
Second Chance: page 0 (pages 3 and 2 have R=1 and get second chances; page 0 is the oldest page with R=0)
Least-Recently-Used (LRU): page 1 (last referenced at time 160, the earliest of the four)

21 Design Issues
The pageout daemon
Global vs. local page replacement

22 The "Pageout Daemon"
(Figure: the pageout daemon moves page frames from the in-use list, via the disk when dirty, to the free list.)
The (kernel) thread that maintains the free page-frame list is typically called the pageout daemon. Its job is to make certain that the free page-frame list has enough page frames on it. If the size of the list drops below some threshold, the pageout daemon examines the page frames that are in use and selects a number of them to be freed. Before freeing a page, it must make certain that a copy of the page's current contents exists on secondary storage. So, if the page has been modified since it was brought into primary storage (easily determined if there is a hardware-supported modified bit), it must first be written out to secondary storage. In many systems, the pageout daemon groups such pageouts into batches, so that a number of pages can be written out in a single operation, thus saving disk time. Unmodified pages that are selected are transferred directly to the free page-frame list; modified pages are put there after they have been written out. In most systems, pages in the free list get a "second chance": if a thread references such a page, there is a page fault (the page frame has been freed and could be used to hold another page), but the page-fault handler checks whether the desired page is still in primary storage on the free list. If it is, it is removed from the free list and given back to the faulting process. We still suffer the overhead of a trap, but there is no wait for I/O.
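
A toy sketch of the daemon's main loop as described above: while the free list is below a low-water mark, write out dirty victims and move frames to the free list. The frame table, threshold, and victim order are invented; a real daemon would pick victims using one of the replacement policies above and batch the writes.

#include <stdio.h>

#define NFRAMES   8
#define LOW_WATER 3

struct frame { int in_use; int dirty; };

struct frame frames[NFRAMES] = {
    {1,0},{1,1},{1,0},{1,1},{1,0},{0,0},{0,0},{1,1},
};

int count_free(void) {
    int n = 0;
    for (int i = 0; i < NFRAMES; i++)
        if (!frames[i].in_use) n++;
    return n;
}

void pageout_daemon(void) {
    for (int i = 0; i < NFRAMES && count_free() < LOW_WATER; i++) {
        if (!frames[i].in_use)
            continue;
        if (frames[i].dirty) {
            printf("writing frame %d to disk\n", i);  /* batched in practice */
            frames[i].dirty = 0;
        }
        frames[i].in_use = 0;           /* move to the free list */
        printf("freed frame %d\n", i);
    }
}

int main(void) {
    pageout_daemon();   /* frees frame 0, then stops at 3 free frames */
    return 0;
}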

23 Global vs. Local Page Replacement
Global allocation
All processes compete for page frames from a single pool
No need to decide how many frames each process gets
High-priority processes may take page frames away from lower-priority ones
Local allocation
Each process has its own private pool of page frames
Equal allocation: every process gets the same number of page frames
Proportional allocation: the number of page frames is proportional to the size of the virtual memory used
Another paging-related problem is the allocation of page frames among different processes (i.e., address spaces). There are two simple approaches. The first is to have all processes compete with one another for page frames: processes that reference a lot of pages tend to acquire a lot of page frames, and processes that reference fewer pages get fewer page frames. The other approach is to assign each process a fixed pool of page frames, thereby avoiding competition. This has the benefit that the actions of one process don't have quite such an effect on others, but it leaves the problem of determining how many page frames to give each process.
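
A small sketch of the proportional-allocation arithmetic, with invented process sizes and frame total.

#include <stdio.h>

int main(void) {
    int vm_pages[] = {10, 40, 150};        /* per-process VM sizes, in pages */
    int nprocs = 3, total_frames = 64;
    long total_pages = 0;
    for (int i = 0; i < nprocs; i++)
        total_pages += vm_pages[i];
    for (int i = 0; i < nprocs; i++) {
        /* share_i = (size_i / total size) * total frames */
        int share = (int)((long)vm_pages[i] * total_frames / total_pages);
        printf("process %d: %d of %d frames\n", i, share, total_frames);
    }
    /* 10/200 * 64 = 3, 40/200 * 64 = 12, 150/200 * 64 = 48 (rounding down;
     * a real allocator would also enforce a minimum per process). */
    return 0;
}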

24 Thrashing
Consider a system that has exactly two page frames:
Process A has a page in frame 1; process B has a page in frame 2
Process B faults; the page in frame 1 is removed
Process A resumes execution and faults again; the page in frame 2 is removed
... and so on: the processes spend most of their time waiting for disk reads
A program causing page faults every few instructions is said to be thrashing [Denning, 1968]
The main argument against the global allocation of page frames is thrashing. The situation described above is an extreme case, but it illustrates the general problem. If demand for page frames is too great, then while one process is waiting for a page to be fetched, the pages it already has are "stolen" from it to satisfy the demands of other processes. Thus, when the first process finally resumes execution, it immediately faults again. A symptom of thrashing is that the processor spends a significant amount of time doing nothing: all of the processes are waiting on page-in requests. A perhaps apocryphal but amusing story is that the batch monitor on early paging systems would see that the idle time of the processor had dramatically increased and would respond by admitting more processes, thus aggravating an already terrible situation.

25 Working Set
Locality of reference: during any phase of execution, a process references only a relatively small fraction of its pages
Definition: the working set of a process is the set of pages referenced in the last k memory references
Each process should have its working set in memory
Keep track of the working set, and make sure it is in memory before letting the process run; loading the pages in advance like this is called prepaging
This reduces the page-fault rate and avoids thrashing
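
A direct sketch of the definition: the working set is the set of distinct pages among the last k references. The reference string and k are invented.

#include <stdio.h>

/* Store the distinct pages of refs[n-k..n-1] into ws[]; return the count. */
int working_set(const int refs[], int n, int k, int ws[]) {
    int count = 0;
    int start = n > k ? n - k : 0;
    for (int i = start; i < n; i++) {
        int seen = 0;
        for (int j = 0; j < count; j++)
            if (ws[j] == refs[i]) { seen = 1; break; }
        if (!seen)
            ws[count++] = refs[i];     /* first time in this window */
    }
    return count;
}

int main(void) {
    int refs[] = {1, 2, 1, 3, 2, 2, 1, 4, 4, 1};
    int ws[10];
    int size = working_set(refs, 10, 5, ws);   /* k = 5: last five refs */
    printf("working set size = %d:", size);    /* pages 2, 1, 4 */
    for (int i = 0; i < size; i++)
        printf(" %d", ws[i]);
    printf("\n");
    return 0;
}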

26 Belady's Anomaly
Intuitively, the more page frames a memory has, the fewer page faults a program will get
Belady [1969] showed that this is not always true: under FIFO replacement, some reference strings cause more page faults when more frames are available
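
A runnable demonstration, assuming nothing beyond the FIFO policy itself: the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5 incurs 9 faults with three frames but 10 with four.

#include <stdio.h>

int fifo_faults(const int refs[], int nrefs, int nframes) {
    int frames[16];
    int used = 0, next = 0, faults = 0;
    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit)
            continue;
        faults++;
        if (used < nframes)
            frames[used++] = refs[i];
        else {
            frames[next] = refs[i];     /* replace the oldest page */
            next = (next + 1) % nframes;
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    printf("3 frames: %d faults\n", fifo_faults(refs, 12, 3));  /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(refs, 12, 4));  /* 10 */
    return 0;
}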

