
Demand Paging

Virtual memory = indirect addressing + demand paging
–Without demand paging, indirect addressing by itself is valuable because it reduces external fragmentation

Demand paging attributes:
–A process may run with less than all of its pages in memory
–The unused portion of the process' address space resides on backing store (on disk)
–Pages are loaded into memory from the backing store, and written back to it upon eviction
–Pages are loaded into memory as they are referenced (demanded); less useful pages are evicted to make room

Demand Paging Solves …

Insufficient memory for a single process
Insufficient memory for several processes during multiprogramming
Relocation during execution, via paging
Efficient use of memory: only the active subset of a process' pages is in memory (unavoidable waste is internal to pages)
Ease of programming – no need to be concerned with partition starting addresses and limits
Protection – separate address spaces enforced by hardware address translation
Sharing – map the same physical page into more than one address space
Less disk activity than with swapping alone
Fast process start-up – a process can run with as little as one page resident
Higher CPU utilization, since many partially loaded processes can be in memory simultaneously

Data Structures for Demand Paging

Page tables – location of pages that are in memory
Backing store map – location of pages that are not in memory
Physical memory map (frame map):
–Fast physical-to-virtual mapping (to invalidate translations for a page that is no longer in memory)
–Allocation/use of page frames – in use or free
–Number of page frames allotted to each process
Cache of page table entries – the Translation Lookaside Buffer (TLB)
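These structures can be sketched as plain records. The field and type names below are illustrative, not taken from any particular OS:

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class PageTableEntry:              # one entry per virtual page
    frame: Optional[int] = None    # physical frame number, if resident
    present: bool = False
    referenced: bool = False       # R bit
    modified: bool = False         # M bit

@dataclass
class Frame:                       # one entry per physical frame (frame map)
    pid: Optional[int] = None      # fast physical-to-virtual mapping:
    vpn: Optional[int] = None      # which (process, page) occupies the frame
    free: bool = True

# Backing store map: (pid, vpn) -> disk block holding the non-resident page
backing_store: Dict[Tuple[int, int], int] = {}
```

A real kernel packs the PTE bits into a single word that the MMU understands; the dataclass form just makes the bookkeeping explicit.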

Hardware Influence on Demand Paging

The TLB is special hardware
Base/limit registers describe the kernel's and the current process' page table area
The size of a PTE, and the meaning of certain bits in the PTE (present/absent, protection, referenced, modified bits)

Virtual Memory Policies

Fetch policy – which pages to load, and when?
Page replacement policy – which pages to remove/overwrite, and when, in order to free up memory space?

Fetch Policy

Demand paging – pages are loaded on demand, not in advance
–A process starts up with none of its pages in memory; page faults load pages as needed
–Initially there are many page faults
–Context switching may reclaim a process' pages and will cause faults when the process runs again

Fetch Policy

Pre-paging – load pages before the process runs
–Needs a working set of pages to load on context switches
Few systems use pure demand paging; most do some pre-paging instead
Thrashing – occurs when a program causes a page fault every few instructions
–What can be done to alleviate the problem?

Page Replacement Policy

Question: which page should be evicted when memory is full and a new page is demanded?
Goal: reduce the number of page faults
–Why? A page fault is very expensive – roughly 10 msec to fetch a page from disk; on a 100 MIPS machine, 10 msec equals 1 million instructions
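The 1-million-instruction figure follows directly from the numbers on the slide:

```python
# Back-of-envelope cost of one page fault, using the figures above:
# 100 MIPS = 100 million instructions per second; a disk fetch ~ 10 msec.
mips = 100
fault_service_sec = 0.010

instructions_lost = int(mips * 1_000_000 * fault_service_sec)
print(instructions_lost)  # 1000000: a million instructions forgone per fault
```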

Page Replacement Policy

Demand paging is likely to cause a large number of page faults, especially when a process starts up
Locality of reference is what makes demand paging workable
–Locality of reference – the next reference is likely to be near the last one
–Reasons:
The next instruction in the stream is close to the last
Small loops
Common subroutines
Locality in data layout (arrays, matrices, and the fields of a structure are contiguous)
Sequential processing of files and data
With locality of reference, a page brought in by one instruction is likely to contain the data required by the next instruction

Page Replacement Policy

Policy can be global (inter-process) or local (per-process)
Policies:
–Optimal – replace the page that will be used farthest in the future (or never used again)
–FIFO and variants – might throw out important pages
Second chance, clock
–LRU and variants – difficult to implement exactly
NFU – crude approximation to LRU
Aging – efficient and a good approximation to LRU
–NRU – crude, simplistic version of LRU
–Working set – expensive to implement
Working set clock – efficient and the most widely used in practice

Page Replacement Policy: NRU

Not recently used – a poor man's LRU
Uses 2 bits per page (can be implemented in hardware):
–R – page has been referenced
–M – page has been modified
–The OS clears the R bit periodically
R=0 means the page is "cold"
R=1 means the page is "hot", i.e. recently referenced
When free pages are needed, sweep through all of memory and reclaim pages based on (R, M) classes:
–00 = not referenced, not modified
–01 = not referenced, modified
–10 = referenced, not modified
–11 = referenced and modified
–In what order should pages be removed?
–How can class 01 exist – if the page was modified, shouldn't it have been referenced? (It was: the periodic clearing of R erases the evidence, while M must survive so the page can still be written back.)
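A minimal sketch of the class-based selection: the victim comes from the lowest non-empty (R, M) class. Real NRU picks a random page within that class; this sketch simply takes the first minimum:

```python
# NRU victim selection: class order is 00 < 01 < 10 < 11, so a
# not-referenced, clean page is evicted before anything recently used.
def nru_victim(pages):
    """pages: list of (page_id, R, M) tuples; returns the victim's id."""
    return min(pages, key=lambda p: (p[1], p[2]))[0]

pages = [("a", 1, 1), ("b", 0, 1), ("c", 1, 0)]
print(nru_victim(pages))  # "b": class 01 beats classes 10 and 11
```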

Page Replacement Policy: FIFO

First-in, first-out
Easy to implement, but does not consider page usage – frequently referenced pages might be removed
Keep a linked list representing the order in which pages were brought into memory
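FIFO is simple enough to simulate in a few lines. This sketch counts page faults over a reference string:

```python
from collections import deque

# FIFO page replacement over a reference string; counts page faults.
def fifo_faults(refs, nframes):
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()   # evict the oldest page, regardless of use
            frames.append(page)
    return faults

print(fifo_faults([1, 2, 3, 1, 4, 1], 3))  # 5: page 1 is evicted despite use
```

The example shows the weakness from the slide: page 1 is referenced often, yet FIFO evicts it anyway because it arrived first.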

Page Replacement Policy: Second Chance

Variant of FIFO
When free page frames are needed, examine pages in FIFO order starting from the front:
–If R=0, reclaim the page
–If R=1, set R=0 and move the page to the end of the FIFO list (hence the second chance)
–If not enough pages are reclaimed on the first pass, revert to pure FIFO on the second pass
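A sketch of one eviction, with the queue held as [page, R] pairs (an illustrative layout, not a real kernel structure):

```python
from collections import deque

# Second-chance sketch: a FIFO queue of [page, R] pairs.
def second_chance_victim(queue):
    """Evicts and returns the first page found with R == 0; pages with
    R == 1 have R cleared and move to the back (their second chance)."""
    for _ in range(len(queue)):        # first pass over the original queue
        page, r = queue.popleft()
        if r == 0:
            return page
        queue.append([page, 0])        # clear R, move to the back
    return queue.popleft()[0]          # second pass: pure FIFO

q = deque([["a", 1], ["b", 0], ["c", 1]])
print(second_chance_victim(q))  # "b": first page with R == 0
```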

Page Replacement Policy: Clock

Variant of FIFO, and a better implementation of second chance
Pages are arranged in a circular list; pages are never moved around the list
When free page frames are needed, examine pages in clockwise order from the current hand position:
–If R=0, reclaim the page
–If R=1, set R=0 and advance the hand
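The same eviction logic as second chance, but with a hand over a fixed circular list instead of list reshuffling (a minimal sketch):

```python
# Clock sketch: pages stay in place in a circular list; only the hand moves.
def clock_victim(pages, r_bits, hand):
    """Returns (victim_index, new_hand_position). r_bits[i] is the R bit
    of pages[i]; bits are cleared in place as the hand sweeps."""
    while True:
        if r_bits[hand] == 0:
            return hand, (hand + 1) % len(pages)   # reclaim this frame
        r_bits[hand] = 0                           # second chance
        hand = (hand + 1) % len(pages)

r = [1, 1, 0, 1]
print(clock_victim(["a", "b", "c", "d"], r, 0))  # (2, 3): "c" is reclaimed
```

Note that "a" and "b" are not moved anywhere; only their R bits are cleared, which is why clock is the cheaper implementation.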

Page Replacement Policy: Clock

Two-hand clock variant:
–One hand changes R from 1 to 0
–The second hand reclaims pages
How is this different from the one-hand clock?
–With one hand, the time between cycles of the hand is proportional to memory size
–With two hands, the time between clearing R and reclaiming a page can be varied dynamically, depending on the momentary need to reclaim page frames

Page Replacement Policy: LRU

Keep a linked list representing the order in which pages have been used
On every reference, find the referenced page in the list and move it to the front – very expensive!
LRU can be done faster with special hardware:
–On each reference, store the time (or a counter) in the PTE; evict the page with the oldest time (or lowest counter)
–N×N matrix algorithm:
Initially set all N×N bits to 0
When page frame k is referenced:
–Set all bits of row k to 1
–Set all bits of column k to 0
The row with the lowest binary value belongs to the LRU frame
–But such hardware is often not available!
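The matrix algorithm can be simulated in software to see why the lowest row value identifies the LRU frame (a sketch for n = 3 frames):

```python
# N x N matrix LRU sketch (n page frames), following the rules above:
# on a reference to frame k, set row k to all 1s, then column k to all 0s.
def reference(m, k):
    n = len(m)
    for j in range(n):
        m[k][j] = 1            # row k := all 1s
    for i in range(n):
        m[i][k] = 0            # column k := all 0s

def lru_frame(m):
    # The row with the lowest binary value belongs to the LRU frame.
    value = lambda row: int("".join(map(str, row)), 2)
    return min(range(len(m)), key=lambda i: value(m[i]))

n = 3
m = [[0] * n for _ in range(n)]
for k in [0, 1, 2, 0]:         # reference string over frames 0..2
    reference(m, k)
print(lru_frame(m))  # 1: frame 1 was referenced least recently
```

Each reference makes row k dominate every row it zeroed, so row values always rank frames by recency.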

Page Replacement Policy: NFU and Aging

Simulating LRU in software

NFU:
–At each clock interrupt, add the R bit (0 or 1) to a counter associated with each page
–Reclaim the page with the lowest counter
–Disadvantage – no aging: pages heavily used long ago keep their high counters forever

Aging:
–Shift the counter to the right, then add R on the left – the most recent R bit becomes the most significant bit, thereby dominating the counter value
–Reclaim the page with the lowest counter
–Acceptable disadvantages:
Cannot distinguish references early in a clock interval from later ones, since the shift-and-add is applied to all counters at once
With an 8-bit counter, the memory extends only 8 clock ticks into the past
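The aging update is one shift and one OR per page. A minimal sketch with 8-bit counters:

```python
# Aging sketch with 8-bit counters: at each clock interrupt, shift every
# counter right by one and add that page's R bit as the new high-order bit.
def tick(counters, r_bits, nbits=8):
    for i in range(len(counters)):
        counters[i] = (counters[i] >> 1) | (r_bits[i] << (nbits - 1))

counters = [0, 0, 0]
tick(counters, [1, 0, 1])    # interval 1: pages 0 and 2 referenced
tick(counters, [1, 1, 0])    # interval 2: pages 0 and 1 referenced
print(counters)              # [192, 128, 64]
print(min(range(3), key=lambda i: counters[i]))  # 2: evict page 2
```

Page 2's earlier reference has already been shifted down to a low-order bit, so the more recent references on pages 0 and 1 dominate, exactly as the slide describes.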

Page Replacement Policy: Working Set

Working set – the set of pages a process is currently using
–As a function of the k most recent memory references, w(k, t) is the working set at any time t
–As a function of the past e msec of execution time, w(e, t) is the set of pages the process referenced in the last e msec of its execution time
Replacement policy: find a page not in the working set and evict it

Page Replacement Policy: Working Set Implementation

If R=0, the page is a candidate for removal:
–Calculate age = current time – time of last use
–If age > threshold, the page is reclaimed
–If age < threshold, the page is still in the working set, but may be removed if it is the oldest page in the working set
If R=1, set time of last use = current time
–The page was recently referenced, so it is in the working set
If no page has R=0, choose a random page for removal (preferably one that requires no writeback)
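The rules above can be sketched as one scan over the page frames. The dict layout ('R', 'last_use', 'M') is illustrative, not a real PTE format:

```python
import random

# Working-set scan sketch, following the rules above.
def ws_scan(pages, now, threshold):
    """Returns the index of a victim page frame."""
    oldest, candidate = None, None
    for i, p in enumerate(pages):
        if p["R"]:
            p["last_use"], p["R"] = now, 0   # recently used: stays in WS
            continue
        age = now - p["last_use"]
        if age > threshold:
            return i                         # fell out of the working set
        if oldest is None or p["last_use"] < oldest:
            oldest, candidate = p["last_use"], i
    if candidate is not None:
        return candidate                     # oldest page still in the WS
    clean = [i for i, p in enumerate(pages) if not p["M"]]
    return random.choice(clean or range(len(pages)))  # no R == 0 page at all

pages = [{"R": 1, "last_use": 90, "M": 0},
         {"R": 0, "last_use": 10, "M": 1},
         {"R": 0, "last_use": 80, "M": 0}]
print(ws_scan(pages, now=100, threshold=50))  # 1: its age (90) > threshold
```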

Page Replacement Policy: Working Set Clock

Uses a circular list of page frames
If R=0 and age > threshold:
–If the page is clean (M=0), reclaim it
–If the page is dirty (M=1), schedule a write but advance the hand to check other pages
If R=1, set R=0 and advance the hand
At the end of the first pass, if no page has been reclaimed:
–If a write has been scheduled, keep moving the hand until the write completes and the page is clean; evict the first clean page
–If no writes are scheduled, reclaim any clean page, even though it is in the working set

Page Replacement Policy: Summary

FIFO is easy to implement, but does not account for page usage
Because of locality of reference, LRU is a good approximation to optimal
–Naïve LRU has high overhead – it must update some data structure on every reference!
Practical algorithms:
–Approximate LRU
–Maintain a list of free page frames for quick allocation
–When the number of free page frames falls below a low-water mark (threshold), invoke the page replacement policy until the number of free frames rises above a high-water mark
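The watermark scheme can be sketched as a small reclaim loop; the thresholds and the evict_one callback (standing in for whatever replacement policy is in use) are illustrative:

```python
# Low/high watermark sketch: the reclaimer kicks in only when the free
# list drops below the low mark, then refills up to the high mark.
def reclaim(free_frames, evict_one, low=4, high=16):
    """evict_one() runs the replacement policy and returns one freed frame."""
    if len(free_frames) >= low:
        return 0                         # enough free frames: do nothing
    evicted = 0
    while len(free_frames) < high:
        free_frames.append(evict_one())  # run the replacement policy once
        evicted += 1
    return evicted

free = [101, 102]
frames = iter(range(200, 300))           # pretend victims
print(reclaim(free, lambda: next(frames)))  # 14: free list grows from 2 to 16
```

The gap between the two marks is what lets reclamation run in batches instead of on every allocation.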

Page Replacement Policy: Evaluation Metrics

Results: page fault rate over some workload
Speed: work that must be done on each reference
Speed: work that must be done when a page is reclaimed
Overhead: storage required by the algorithm's bookkeeping, and any special hardware required