Demand Paged Virtual Memory Andy Wang Operating Systems COP 4610 / CGS 5765

Up to this point… We have assumed that a process must load its entire address space (e.g., 0x0 to 0xFFFFFFFF) before it can run. Observation: 90% of execution time is spent in 10% of the code.

Demand Paging Demand paging loads only the pages that are actively referenced into memory; the remaining pages stay on disk. This provides the illusion of infinite physical memory.

Demand Paging Mechanism Page table entries sometimes need to point to disk locations rather than memory locations. Each entry therefore needs a present (valid) bit: present means the page is in memory; not present means a reference to the page causes a page fault.

Page Fault The hardware traps to the OS, which performs the following steps while other processes keep running (analogy: firing one employee and hiring another); a code sketch follows.
1. Choose a victim page
2. If the page has been modified, write its contents to disk
3. Change the corresponding page table entry and TLB entry
4. Load the new page into memory from disk
5. Update the faulting page table entry
6. Continue the thread
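A minimal sketch of these steps in C, assuming hypothetical data structures and helpers (struct pte, pick_victim, disk_read/disk_write, tlb_invalidate are illustrative placeholders, not a real kernel API):

```c
#include <stdint.h>
#include <stdbool.h>

/* Minimal, hypothetical data structures for illustration only. */
struct pte {
    bool     present;     /* page is resident in memory           */
    bool     modified;    /* page was written since it was loaded */
    uint32_t frame;       /* frame number, meaningful if present  */
    uint32_t disk_block;  /* backing-store location otherwise     */
};

/* Placeholder helpers provided elsewhere in this hypothetical kernel. */
struct pte *pick_victim(void);                    /* choose a page to evict   */
void disk_write(uint32_t block, uint32_t frame);  /* write a frame to disk    */
void disk_read(uint32_t block, uint32_t frame);   /* read a page into a frame */
void tlb_invalidate(struct pte *entry);           /* drop the stale TLB entry */

/* The fault-handling steps from the slide, expressed as code. */
void handle_page_fault(struct pte *faulting)
{
    struct pte *victim = pick_victim();             /* 1. choose a page           */

    if (victim->modified)                           /* 2. write it back if dirty  */
        disk_write(victim->disk_block, victim->frame);

    victim->present = false;                        /* 3. fix old PTE and TLB     */
    tlb_invalidate(victim);

    disk_read(faulting->disk_block, victim->frame); /* 4. load new page from disk */

    faulting->frame = victim->frame;                /* 5. update the new PTE      */
    faulting->present = true;

    /* 6. the faulting thread can now be resumed and retry the instruction */
}
```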

Transparent Page Faults The mechanism is transparent (invisible): a process does not know that a fault happened. To achieve this, the OS needs to save the processor state and the faulting instruction so execution can resume exactly where it stopped.

More on Transparent Page Faults An instruction may have side effects; the hardware needs to either unwind or finish off those side effects. Example: ld r1, x // page fault

More on Transparent Page Faults Hardware designers need to understand virtual memory, because unwinding instructions is not always possible. Example: a block transfer instruction that copies the range [source begin, source end] to [dest begin, dest end] may fault after the destination has already been partially overwritten.

Page Replacement Policies
Random replacement: replace a random page
 + Easy to implement in hardware (e.g., TLB)
 - May toss out useful pages
First in, first out (FIFO): toss out the oldest page
 + Fair to all pages
 - May toss out pages that are heavily used

More Page Replacement Policies
Optimal (MIN): replace the page that will not be used for the longest time
 + Optimal
 - Does not know the future
Least recently used (LRU): replace the page that has not been used for the longest time
 + Good if past use predicts future use
 - Tricky to implement efficiently

More Page Replacement Policies
Least frequently used (LFU): replace the page that is used least often, by tracking a usage count for each page
 + Good if past use predicts future use
 - Pages that built up high counts in the past are hard to evict, even once they stop being used

Example A process makes references to 4 pages: A, B, E, and R. Reference stream: B E E R B A R E B E A R (BEERBAREBEAR). Physical memory size: 3 pages.
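As a rough illustration, this reference stream can be replayed by a small simulator. The sketch below counts FIFO faults (fifo_faults is a hypothetical helper written for this example, not part of the lecture); the tables that follow show the same stream frame by frame.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: replay a reference string under FIFO replacement
 * and return the number of page faults.  nframes is the number of
 * physical page frames (assumed to be at most 8 here). */
static int fifo_faults(const char *refs, int nframes)
{
    char frames[8] = {0};      /* resident pages, one char per frame        */
    int next = 0;              /* index of the frame loaded longest ago     */
    int loaded = 0, faults = 0;

    for (size_t i = 0; i < strlen(refs); i++) {
        char page = refs[i];
        int hit = 0;
        for (int f = 0; f < loaded; f++)
            if (frames[f] == page) { hit = 1; break; }
        if (hit)
            continue;

        faults++;
        if (loaded < nframes) {            /* a free frame is available     */
            frames[loaded++] = page;
        } else {                           /* evict the oldest resident page */
            frames[next] = page;
            next = (next + 1) % nframes;
        }
    }
    return faults;
}

int main(void)
{
    /* Reference stream and frame count from the slides. */
    printf("FIFO faults: %d\n", fifo_faults("BEERBAREBEAR", 3));  /* prints 7 */
    return 0;
}
```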

FIFO (a letter marks the page loaded into that frame on a fault; * marks a hit)

Frame | B | E | E | R | B | A | R | E | B | E | A | R
  1   | B |   |   |   | * | A |   |   |   |   | * | R
  2   |   | E | * |   |   |   |   | * | B |   |   |
  3   |   |   |   | R |   |   | * |   |   | E |   |

7 page faults, 4 of which are compulsory misses (the first reference to each of the 4 pages must fault)

MIN

Frame | B | E | E | R | B | A | R | E | B | E | A | R
  1   | B |   |   |   | * | A |   |   |   |   | * | R
  2   |   | E | * |   |   |   |   | * |   | * |   |
  3   |   |   |   | R |   |   | * |   | B |   |   |

6 page faults

LRU

Frame | B | E | E | R | B | A | R | E | B | E | A | R
  1   | B |   |   |   | * |   |   | E |   | * |   |
  2   |   | E | * |   |   | A |   |   | B |   |   | R
  3   |   |   |   | R |   |   | * |   |   |   | A |

8 page faults

LFU (a number shows the page's updated usage count on a hit)

Frame | B | E | E | R | B | A | R | E | B | E | A | R
  1   | B |   |   |   | 2 |   |   |   | 3 |   |   |
  2   |   | E | 2 |   |   |   |   | 3 |   | 4 |   |
  3   |   |   |   | R |   | A | R |   |   |   | A | R

7 page faults

Does adding RAM always reduce misses? Yes for LRU and MIN: the memory content with X pages is always a subset of the content with X + 1 pages. No for FIFO, due to its modulo-style replacement order. Belady's anomaly: getting more page faults after increasing the memory size.
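Assuming the hypothetical fifo_faults sketch from the earlier example, the anomaly can be checked directly with the reference string used in the next two tables:

```c
/* Uses the fifo_faults() sketch defined in the earlier FIFO example. */
printf("3 frames: %d faults\n", fifo_faults("ABCDABEABCDE", 3));  /* prints 9  */
printf("4 frames: %d faults\n", fifo_faults("ABCDABEABCDE", 4));  /* prints 10 */
```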

Belady's Anomaly (FIFO, 3 memory pages)

Frame | A | B | C | D | A | B | E | A | B | C | D | E
  1   | A |   |   | D |   |   | E |   |   |   |   | *
  2   |   | B |   |   | A |   |   | * |   | C |   |
  3   |   |   | C |   |   | B |   |   | * |   | D |

9 page faults

Belady's Anomaly (FIFO, 4 memory pages)

Frame | A | B | C | D | A | B | E | A | B | C | D | E
  1   | A |   |   |   | * |   | E |   |   |   | D |
  2   |   | B |   |   |   | * |   | A |   |   |   | E
  3   |   |   | C |   |   |   |   |   | B |   |   |
  4   |   |   |   | D |   |   |   |   |   | C |   |

10 page faults

Implementing LRU Perfect LRU requires updating a timestamp on every reference to a page, which is too expensive. Common practice: approximate LRU behavior.
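For concreteness, here is a sketch (not from the slides) of what exact LRU bookkeeping looks like, using a global counter as the timestamp; the cost is that lru_touch must run on every single reference:

```c
#include <stdint.h>

#define NFRAMES 64   /* assumed number of physical frames */

/* Exact LRU: stamp every reference with a counter and evict the frame
 * with the smallest (oldest) stamp.  The per-reference update is what
 * makes perfect LRU too expensive in practice. */
static uint64_t stamp[NFRAMES];   /* last-use time of each frame  */
static uint64_t now_ticks;        /* global reference counter     */

void lru_touch(int frame)
{
    stamp[frame] = ++now_ticks;   /* must happen on every reference */
}

int lru_victim(void)
{
    int victim = 0;
    for (int f = 1; f < NFRAMES; f++)
        if (stamp[f] < stamp[victim])   /* oldest stamp wins */
            victim = f;
    return victim;
}
```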

Clock Algorithm Replaces an old page, but not necessarily the oldest page. Physical pages are arranged in a circle with a clock hand. Each page has a used bit, set to 1 on reference. On a page fault, sweep the clock hand:
 - If the used bit == 1, set it to 0 and move on
 - If the used bit == 0, pick that page for replacement

[Figure: pages arranged in a circle; the clock hand sweeps, clearing used bits until it reaches a page with used bit 0, which is replaced]

Clock Algorithm The clock hand cannot sweep indefinitely, because each used bit is eventually cleared. A slow-moving hand means few page faults; a quick-moving hand means many page faults. A sketch of the sweep follows.
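A minimal sketch of the clock sweep in C, assuming a fixed number of frames and a software-maintained used bit (in a real system the bit lives in the page table entry and is set by hardware):

```c
#include <stdbool.h>

#define NFRAMES 64   /* assumed number of physical frames */

static bool used[NFRAMES];   /* per-frame used bit, set on reference */
static int  hand;            /* current position of the clock hand   */

/* Called on a page fault: returns the frame to replace. */
int clock_victim(void)
{
    for (;;) {
        if (used[hand]) {
            used[hand] = false;            /* give the page a second chance */
            hand = (hand + 1) % NFRAMES;
        } else {
            int victim = hand;             /* used bit already 0: evict it  */
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
    }
}
```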

Nth Chance Algorithm A variant of the clock algorithm: a page has to be swept past N times before being replaced. As N → ∞, the Nth chance algorithm approaches LRU. Common implementation: N = 2 for modified pages, N = 1 for unmodified pages.

States for a Page Table Entry
 - Used bit: set when the page is referenced; cleared by the clock algorithm
 - Modified bit: set when the page is modified; cleared when the page is written to disk
 - Valid bit: set when a program can legitimately use this entry
 - Read-only bit: set when a program may read the page but not modify it (e.g., code pages)
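One possible way to encode these bits, shown purely as an illustration (real hardware page table formats differ):

```c
#include <stdint.h>

/* Hypothetical layout of a page table entry's control bits; the roles
 * match the list above, but the bit positions are made up. */
#define PTE_VALID     (1u << 0)   /* entry may be used by the program     */
#define PTE_READONLY  (1u << 1)   /* page may be read but not modified    */
#define PTE_USED      (1u << 2)   /* set on reference, cleared by clock   */
#define PTE_MODIFIED  (1u << 3)   /* set on write, cleared on write-back  */

typedef uint32_t pte_t;

/* Example predicate: a page is writable if it is valid and not read-only. */
static inline int pte_is_writable(pte_t entry)
{
    return (entry & PTE_VALID) && !(entry & PTE_READONLY);
}
```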

Thrashing Occurs when memory is overcommitted: pages that are still needed are tossed out. Example: a process needs 50 memory pages but the machine has only 40, so pages must constantly move between memory and disk.

Thrashing Avoidance Programs should minimize their maximum memory requirement at any given time (e.g., a matrix multiplication can be broken into sub-matrix multiplications, as sketched below). The OS figures out the memory needed by each process and runs only the computations that fit in RAM.
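A sketch of the sub-matrix idea, assuming square N x N matrices and a block size BLK chosen so that three BLK x BLK tiles fit comfortably in memory; only the tiles currently being multiplied are touched at any moment, which keeps the working set small:

```c
#define N   512   /* assumed matrix dimension              */
#define BLK 64    /* assumed block size, must divide N     */

/* Blocked (tiled) matrix multiply: C += A * B.  Working on BLK x BLK
 * sub-matrices limits how many pages are touched at once, in contrast
 * to a naive triple loop that sweeps entire rows and columns. */
void matmul_blocked(const double A[N][N], const double B[N][N], double C[N][N])
{
    for (int ii = 0; ii < N; ii += BLK)
        for (int jj = 0; jj < N; jj += BLK)
            for (int kk = 0; kk < N; kk += BLK)
                /* multiply one tile of A with one tile of B */
                for (int i = ii; i < ii + BLK; i++)
                    for (int j = jj; j < jj + BLK; j++) {
                        double sum = C[i][j];
                        for (int k = kk; k < kk + BLK; k++)
                            sum += A[i][k] * B[k][j];
                        C[i][j] = sum;
                    }
}
```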

Working Set The set of pages referenced in the previous T seconds. As T → ∞, the working set grows to the size of the entire process. Observation: beyond a certain threshold, adding more memory only slightly reduces the number of page faults.
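A simple sketch of working-set bookkeeping, assuming the OS can record the time of each page reference (the names and layout are illustrative, not from the slides):

```c
#include <stdint.h>

#define NPAGES 1024   /* assumed number of virtual pages tracked */

/* Remember the last reference time of every page and count how many
 * pages were touched within the last T time units. */
static uint64_t last_ref[NPAGES];   /* 0 means "never referenced" */

void record_reference(unsigned page, uint64_t now)
{
    if (page < NPAGES)
        last_ref[page] = now;
}

/* Working-set size over the window [now - T, now]. */
unsigned working_set_size(uint64_t now, uint64_t T)
{
    unsigned count = 0;
    for (unsigned p = 0; p < NPAGES; p++)
        if (last_ref[p] != 0 && now - last_ref[p] <= T)
            count++;
    return count;
}
```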

Working Set

Frame | A | B | C | D | A | B | C | D | E | F | G | H
  1   | A |   |   | D |   |   | C |   |   | F |   |
  2   |   | B |   |   | A |   |   | D |   |   | G |
  3   |   |   | C |   |   | B |   |   | E |   |   | H

LRU, 3 memory pages: 12 page faults

Working Set

Frame | A | B | C | D | A | B | C | D | E | F | G | H
  1   | A |   |   |   | * |   |   |   | E |   |   |
  2   |   | B |   |   |   | * |   |   |   | F |   |
  3   |   |   | C |   |   |   | * |   |   |   | G |
  4   |   |   |   | D |   |   |   | * |   |   |   | H

LRU, 4 memory pages: 8 page faults

Working Set

Frame | A | B | C | D | A | B | C | D | E | F | G | H
  1   | A |   |   |   | * |   |   |   |   | F |   |
  2   |   | B |   |   |   | * |   |   |   |   | G |
  3   |   |   | C |   |   |   | * |   |   |   |   | H
  4   |   |   |   | D |   |   |   | * |   |   |   |
  5   |   |   |   |   |   |   |   |   | E |   |   |

LRU, 5 memory pages: 8 page faults

Global and Local Replacement Policies
Global replacement policy: all pages are in a single pool (e.g., UNIX). A process that needs more memory grabs it from another process that needs less.
 + Flexible
 - One process can drag down the entire system
Per-process (local) replacement policy: each process has its own pool of pages.