Chapter 21 Virtual Memory: Policies


Chapter 21 Virtual Memory: Policies Chien-Chung Shen CIS/UD cshen@udel.edu

Which Page to Evict?
With VM, not all pages of a process need to be in physical memory.
Life is easy when there is plenty of free memory: on a page fault, the OS finds a page on the free-page list and assigns it to the faulting page.
When little memory is free, the OS starts paging out pages to make room for actively-used ones; deciding which page (or pages) to evict is encapsulated in the page replacement policy.
Because physical memory holds only a subset of all the pages in the system, it can rightly be viewed as a cache for virtual memory pages.
The question: which page(s) should be evicted to minimize the number of cache misses?

Average Memory Access Time (AMAT)
Goal: a cache replacement policy that minimizes the number of cache misses (equivalently, maximizes the number of cache hits).
Example: a 4 KB address space with 256-byte pages, i.e., a 4-bit VPN (16 pages) and an 8-bit offset.
Assume every page except virtual page 3 is already in memory.
Memory references: 0x000, 0x100, 0x200, 0x300, 0x400, 0x500, 0x600, 0x700, 0x800, 0x900
Outcomes: hit, hit, hit, miss, hit, hit, hit, hit, hit, hit, so the hit rate is 90% (miss rate 10%).
With memory access time T_M = 100 ns and disk access time T_D = 10 ms:
AMAT = P_Hit · T_M + P_Miss · T_D = 0.9 · 100 ns + 0.1 · 10 ms ≈ 1.00009 ms (about 1 ms)
At a 99.9% hit rate, AMAT ≈ 10.1 µs, roughly 100 times faster.
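
To make the arithmetic concrete, here is a minimal Python sketch of the AMAT computation, using the T_M and T_D values assumed on this slide:

T_M = 100e-9   # memory access time: 100 ns
T_D = 10e-3    # disk access time: 10 ms

def amat(hit_rate):
    """Average memory access time: weight T_M and T_D by hit/miss probability."""
    return hit_rate * T_M + (1.0 - hit_rate) * T_D

print(amat(0.90))   # ~1.00009e-03 s, i.e., about 1 ms
print(amat(0.999))  # ~1.01e-05 s, i.e., about 10.1 us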

The Optimal Replacement Policy
The best possible replacement policy, due to Belady: evict the page that will be accessed furthest in the future.
Example: the stream of virtual page references 0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1 against a cache (physical memory) that fits three pages, beginning with an empty cache.
The first accesses to pages 0, 1, and 2 are cold-start misses (also called compulsory misses).
The one big issue with the optimal policy: the future is not generally known, so it mainly serves as a point of comparison for realistic policies.
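
Since a simulator does know the whole trace in advance, the optimal policy is easy to simulate offline. Below is a rough Python sketch; the trace and cache size are the ones from this slide:

def opt_hits(trace, size):
    cache, hits = set(), 0
    for i, page in enumerate(trace):
        if page in cache:
            hits += 1
            continue
        if len(cache) == size:
            # Evict the resident page whose next use is furthest in the
            # future (or that is never used again).  The repeated forward
            # scan is O(n^2) overall, which is fine for a small sketch.
            def next_use(p):
                future = trace[i + 1:]
                return future.index(p) if p in future else float("inf")
            cache.remove(max(cache, key=next_use))
        cache.add(page)
    return hits

trace = [0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1]
print(opt_hits(trace, 3))  # 6 hits out of 11 references (hit rate ~54.5%)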

One Simple Policy: FIFO
Pages are evicted in the order they were brought into memory: the page that has been resident longest goes first.
Example: the same stream of virtual page references 0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1 with a cache that fits three pages (see the sketch below).
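
A minimal FIFO simulation on the same trace, again as a hedged sketch; the deque keeps pages in arrival order:

from collections import deque

def fifo_hits(trace, size):
    cache, hits = deque(), 0   # left end of the deque = oldest resident page
    for page in trace:
        if page in cache:
            hits += 1
        else:
            if len(cache) == size:
                cache.popleft()      # evict the first-in page
            cache.append(page)
    return hits

trace = [0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1]
print(fifo_hits(trace, 3))  # 4 hits out of 11 (vs. 6 for optimal)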

Belady's Anomaly
Given the memory-reference stream 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: using FIFO, how does the cache hit rate change when moving from a cache size of 3 pages to 4?
In general, you would expect the hit rate to increase (get better) as the cache gets larger.
But in this case, with FIFO, it gets worse (see the sketch below).
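
The anomaly is easy to reproduce by rerunning the FIFO simulation sketched above on this slide's reference stream:

from collections import deque

def fifo_hits(trace, size):
    cache, hits = deque(), 0
    for page in trace:
        if page in cache:
            hits += 1
        else:
            if len(cache) == size:
                cache.popleft()      # evict the first-in page
            cache.append(page)
    return hits

trace = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_hits(trace, 3))  # 3 hits (9 misses)
print(fifo_hits(trace, 4))  # 2 hits (10 misses): more frames, a worse hit rate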

Random Policy
Another simple policy: pick a random page to evict when memory pressure demands it.
Running Random 10,000 times, each trial with a different random seed, shows that how well it does depends entirely on the luck of the draw.
Just over 40% of the time, Random is as good as optimal, achieving 6 hits on the example trace; sometimes it does much worse, achieving 2 hits or fewer.
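
A sketch of that experiment: rerun Random with 10,000 different seeds on the example trace and tally the hit counts. The exact percentages depend on the random number generator, so they will not match the original figure exactly:

import random
from collections import Counter

def random_hits(trace, size, seed):
    rng, cache, hits = random.Random(seed), set(), 0
    for page in trace:
        if page in cache:
            hits += 1
        else:
            if len(cache) == size:
                cache.remove(rng.choice(sorted(cache)))  # evict a random resident page
            cache.add(page)
    return hits

trace = [0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1]
tally = Counter(random_hits(trace, 3, seed) for seed in range(10_000))
for hits, count in sorted(tally.items()):
    print(f"{hits} hits: {count / 100:.1f}% of trials")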

History Matters
The problem with FIFO and Random: either might kick out an important page that is about to be referenced again.
To improve our guess at the future, we once again lean on the past and use history as our guide: if a program has accessed a page in the near past, it is likely to access it again in the near future.
What "history"?
Frequency: if a page has been accessed many times, perhaps it should not be replaced, as it clearly has some value (LFU).
Recency: the more recently a page has been accessed, perhaps the more likely it is to be accessed again (LRU).
Both are based on locality: programs tend to access certain code sequences (e.g., in a loop) and data structures (e.g., an array accessed by the loop) quite frequently.
The idea: use history to figure out which pages are important, and keep those pages in memory when it comes time to evict.

History-based Policies
The Least-Frequently-Used (LFU) policy replaces the least-frequently-used page when an eviction must take place.
The Least-Recently-Used (LRU) policy replaces the least-recently-used page (see the sketch below).
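
A minimal LRU sketch, using Python's OrderedDict as the recency list (front = least recently used, back = most recently used):

from collections import OrderedDict

def lru_hits(trace, size):
    cache, hits = OrderedDict(), 0
    for page in trace:
        if page in cache:
            hits += 1
            cache.move_to_end(page)          # page is now the most recently used
        else:
            if len(cache) == size:
                cache.popitem(last=False)    # evict the least recently used
            cache[page] = True
    return hits

trace = [0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1]
print(lru_hits(trace, 3))  # 6 hits: LRU matches optimal on this particular trace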

LRU and Belady's Anomaly
LRU does not suffer from Belady's anomaly. Why? LRU has the stack property: for algorithms with this property, a cache of size N + 1 naturally includes the contents of a cache of size N.
Thus, when the cache size increases, the hit rate will either stay the same or improve.
FIFO and Random (among others) clearly do not obey the stack property, and thus are susceptible to anomalous behavior.

The No-Locality Workload
Each reference is to a random page within the set of accessed pages: 100 unique pages touched across 10,000 references.
When there is no locality in the workload, LRU, FIFO, and Random all perform the same, with the hit rate determined exactly by the size of the cache.
When the cache is large enough to fit the entire workload, it also doesn't matter which policy you use.
Optimal performs noticeably better than the realistic policies, because it gets to peek into the future.

The 80-20 Workload
80% of the references are made to 20% of the pages (the "hot" pages); the remaining 20% of the references are made to the other 80% of the pages (the "cold" pages).
Result: Optimal > LRU > FIFO/Random (see the sketch below).
Question: is LRU's improvement over Random and FIFO really that big of a deal?
Answer: it depends. If each miss is very costly, then even a small increase in hit rate can make a huge difference in performance; if misses are not so costly, the benefits of LRU are not nearly as important.
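
One rough way to reproduce this experiment; the 80/20 split, 100 pages, and 10,000 references follow the slide, while the policy implementations mirror the earlier sketches:

import random
from collections import deque

def make_8020_trace(n_refs=10_000, n_pages=100, seed=0):
    rng, hot = random.Random(seed), n_pages // 5   # 20% of pages are "hot"
    return [rng.randrange(hot) if rng.random() < 0.8        # 80% of references
            else hot + rng.randrange(n_pages - hot)         # the rest go to cold pages
            for _ in range(n_refs)]

def hit_rate(trace, size, policy, seed=1):
    rng, cache, hits = random.Random(seed), deque(), 0
    for page in trace:
        if page in cache:
            hits += 1
            if policy == "LRU":              # refresh recency on a hit
                cache.remove(page)
                cache.append(page)
        else:
            if len(cache) == size:
                if policy == "RAND":
                    cache.remove(rng.choice(list(cache)))
                else:                        # FIFO and LRU both evict the front
                    cache.popleft()
            cache.append(page)
    return hits / len(trace)

trace = make_8020_trace()
for policy in ("FIFO", "LRU", "RAND"):
    print(policy, [round(hit_rate(trace, size, policy), 2)
                   for size in (10, 30, 50, 80, 100)])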

The Looping-Sequential Workload
Refer to 50 pages in sequence: page 0, then 1, ..., up to page 49, and then loop, repeating those accesses, for a total of 10,000 accesses to 50 unique pages.
This pattern is common in many database applications.
It is a worst case for LRU and FIFO: with a cache of size 49, a looping-sequential workload over 50 pages yields a 0% hit rate, because each page is evicted just before it is needed again (see the sketch below).
Random, by contrast, does not have such weird corner-case behaviors.
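
The worst case is easy to demonstrate with the LRU sketch from before: 49 frames against the 50-page loop:

from collections import OrderedDict

def lru_hit_rate(trace, size):
    cache, hits = OrderedDict(), 0
    for page in trace:
        if page in cache:
            hits += 1
            cache.move_to_end(page)
        else:
            if len(cache) == size:
                cache.popitem(last=False)    # evict the least recently used
            cache[page] = True
    return hits / len(trace)

trace = [i % 50 for i in range(10_000)]      # 0, 1, ..., 49, 0, 1, ...
print(lru_hit_rate(trace, 49))  # 0.0: the page needed next was just evicted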

How to Implement "History"?
For LRU, some accounting work must be done on every memory reference.
Hardware support: a time field (in memory) per physical page (frame) of the system.
When a page is accessed, the time field is set, by hardware, to the current time.
When replacing a page, the OS scans all the time fields to find the least-recently-used page.
Any problem? Consider a 4 GB memory with 4 KB pages: that is 1 million pages, and scanning a million time fields on every replacement is far too expensive.

Approximating LRU
Hardware support: one "use" (or "reference") bit per page; whenever a page is referenced (i.e., read or written), its use bit is set to 1 by hardware.
The clock algorithm [by F. J. Corbató]:
All pages are arranged in a circular list, and a clock hand points to some particular page to begin with.
When a replacement must occur, the OS checks whether the currently pointed-to page P has a use bit of 1 or 0.
If 1, page P was recently used and thus is not a good candidate for replacement; the OS clears its use bit to 0 and advances the clock hand to the next page.
The sweep continues until it finds a page whose use bit is 0, implying that page has not been recently used (or, in the worst case, that all pages were recently used and the hand has swept through the entire set, clearing all the bits). See the sketch below.
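
A sketch of the clock sweep in Python; the frame list and use bits here stand in for the state that the hardware and OS would actually maintain:

class Clock:
    def __init__(self, frames):
        self.pages = [None] * frames   # page resident in each frame
        self.use = [0] * frames        # per-frame use ("reference") bit
        self.hand = 0

    def access(self, page):
        if page in self.pages:                 # hit: hardware sets the use bit
            self.use[self.pages.index(page)] = 1
            return "hit"
        # Miss: sweep until a frame with use bit 0 is found, clearing use
        # bits along the way (a "second chance" for recently used pages).
        while self.use[self.hand] == 1:
            self.use[self.hand] = 0
            self.hand = (self.hand + 1) % len(self.pages)
        self.pages[self.hand] = page           # evict the victim, install page
        self.use[self.hand] = 1
        self.hand = (self.hand + 1) % len(self.pages)
        return "miss"

clock = Clock(frames=3)
print([clock.access(p) for p in [0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1]])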

A Clock Algorithm Variant
Randomly scan pages when doing replacement: when the scan encounters a page whose use bit is 1, clear the bit (i.e., set it to 0); when it finds a page whose use bit is 0, choose that page as the victim.
Not as good as perfect LRU, but better than approaches that don't consider history at all.

Considering Dirty Pages
If a page has been modified and is thus dirty, it must be written back to disk before eviction, which is expensive.
If it has not been modified (and is thus clean), the eviction is free: the physical frame can simply be reused for other purposes without additional I/O.
Thus, some VM systems prefer to evict clean pages over dirty pages.
Hardware support: a "dirty" bit, set any time a page is written.
The clock algorithm can be changed to scan first for pages that are both unused and clean (see the sketch below).
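
One hedged way to fold the dirty bit into the sweep, in the spirit of the "enhanced second-chance" idea; the two-pass structure here is an illustrative choice, not the only way to do it:

def pick_victim(use, dirty, hand, nframes):
    # First pass: prefer a frame that is both unused and clean, since
    # evicting it requires no write-back I/O.
    for i in range(nframes):
        f = (hand + i) % nframes
        if use[f] == 0 and dirty[f] == 0:
            return f
    # Second pass: settle for any unused frame, even if dirty (the evicted
    # page must then be written back); clear use bits as the hand passes.
    f = hand
    while use[f] == 1:
        use[f] = 0
        f = (f + 1) % nframes
    return f

use   = [1, 0, 0, 1]
dirty = [0, 1, 0, 0]
print(pick_victim(use, dirty, hand=0, nframes=4))  # frame 2: unused and clean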

Other VM Policies (1)
Page selection: deciding when to bring a page into memory (paging in).
Demand paging: the OS brings a page into memory when it is accessed, i.e., on demand.
Prefetching: guess that a page is about to be used and bring it in ahead of time (e.g., if code page P is brought in, code page P+1 may well be accessed soon).

Other VM Policies (2)
Page-out: how does the OS write pages out to disk?
Naively, pages are written out one at a time.
Better: collect a number of pending writes together in memory and write them to disk in a single, more efficient write (because of the nature of disk drives); this is called clustering, or grouping, of writes.

Thrashing
When memory is simply oversubscribed, i.e., the memory demands of the set of running processes exceed the available physical memory, the system will constantly be paging: this is thrashing.
Working set: W(t, Δ) = the set of pages that a process has referenced in the last Δ virtual time units (see the sketch below).
Admission control: given a set of processes, the system decides not to run some subset of them, in the hope that the reduced set of processes' working sets (the pages they are actively using) fit in memory and can thus make progress.
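
The working-set definition translates directly into code; the trace and Δ below are made-up illustrative values:

def working_set(trace, t, delta):
    """Pages referenced in the window (t - delta, t] of the reference trace."""
    return set(trace[max(0, t - delta):t])

trace = [1, 2, 1, 3, 4, 4, 2, 5, 5, 5]
print(working_set(trace, t=10, delta=4))  # {2, 5}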