Virtual Memory

Readings: Silberschatz et al., Sections 9.1-9.6

Program

    #include <stdio.h>
    ….

    main() {
        /* Lots and lots of initialization code */
        for (i = 0; i < N; i++)
            …
    }

Program Typically, once you initialize, you don’t go back to that part of the code. Do you need to keep that part of the code in main memory once it has been used?

Virtual Memory: Main Idea Processes use a virtual (logical) address space, and every process has its own address space. The virtual address space can be larger than physical memory. Only part of the virtual address space is mapped to physical memory at any time; the rest of a process’s memory contents resides on disk. Hardware and the OS collaborate to move memory contents to and from disk.

Demand Paging Bring a page into memory only when it is needed. Why? Less I/O needed: if a process of 10 pages actually uses only half of them, then demand paging saves the I/O necessary to load the 5 pages not used. Less memory needed. Faster response. More multiprogramming is possible.

Demand Paging We need hardware support to distinguish between pages that are in memory and pages that are on disk. A valid-invalid bit is part of each page-table entry: when the bit is set to “valid”, the associated page is in memory; if the bit is set to “invalid”, the page is on disk.
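
One way to picture such an entry is a struct with a frame number and a valid bit. This is only an illustrative sketch; real page-table entry layouts are architecture-specific and contain more fields (protection, dirty bit, and so on).

    #include <stdint.h>

    /* Illustrative sketch only: real PTE layouts are architecture-specific. */
    typedef struct {
        uint32_t frame  : 20;  /* physical frame number                      */
        uint32_t valid  : 1;   /* 1 = page is in memory, 0 = page is on disk */
        uint32_t unused : 11;  /* protection, dirty bit, etc. omitted here   */
    } pte_t;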

Demand Paging In the figure, the valid-invalid bit for page 1 is set to “i” since that page is not in physical memory; the bit for page 0 is “v” since that page is in memory.

Page Fault What happens if a process tries to access a page that was not brought into memory? Example: a user opens a PowerPoint file. Access to a page marked invalid causes a page fault: the paging hardware, in translating the address through the page table, notices that the invalid bit is set, causing a trap to the operating system.

Steps in Handling a Page Fault

Steps in Handling a Page Fault
- Trap to the OS
- Prepare for a context switch: save the user registers and process state
- Read in the page:
  - Issue a read from the disk to a free frame
  - Wait in the queue for this device until the read request is serviced
  - Wait for the device seek and/or latency time
  - Begin the transfer of the page to a free frame

Steps in Handling a Page Fault
- While waiting, allocate the CPU to some other user
- Receive an interrupt from the disk I/O subsystem when the page has been placed in physical memory

Steps in Handling a Page Fault
- Handle the interrupt from the disk I/O subsystem: save the registers and process state for the other user
- Correct the page table
- Put the process that had the page fault into the ready queue
- Wait for the CPU to be allocated to that process again
- Restore the user registers, process state, and new page table, then resume the interrupted instruction

Challenge: Performance Let p be the probability of a page fault, 0 ≤ p ≤ 1.0: if p = 0 there are no page faults; if p = 1, every reference is a fault. Effective access time = (1 - p) * memory access time + p * page fault time. Assuming memory access time = 200 ns, page fault time = 8 milliseconds, and p = 0.1%: effective access time = 99.9% * 200 ns + 0.1% * 8,000,000 ns ≈ 8,200 ns. The effective access time is directly proportional to the page fault rate: if one access out of 1,000 causes a page fault, the effective access time is 8.2 microseconds. We need to keep the page fault rate small!
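
The arithmetic above can be checked with a few lines of C; the 200 ns and 8 ms figures are just the assumptions stated on this slide.

    #include <stdio.h>

    int main(void) {
        double mem_ns   = 200.0;       /* memory access time (assumed)        */
        double fault_ns = 8000000.0;   /* page fault service time: 8 ms in ns */
        double p        = 0.001;       /* page fault probability (0.1%)       */

        double eat = (1.0 - p) * mem_ns + p * fault_ns;
        printf("effective access time = %.1f ns\n", eat);   /* ~8199.8 ns */
        return 0;
    }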

Page Replacement Recall why we use demand paging: if a process of 10 pages actually uses only half of them, demand paging saves the I/O necessary to load the 5 pages that are never used, and it allows us to increase the level of multiprogramming.

Page Replacement Let’s assume that our physical memory consists of 40 frames, and we have 8 processes with 10 pages each; that is 80 pages. Obviously 80 pages is more than 40 frames. But if each process is only using half of its pages, is this really a problem? There is a reason why there are 10 pages: the process may eventually need them. Memory (frames) has been over-allocated, i.e., overbooked.

Page Replacement What do we do when a process needs a frame and there isn’t one free? Essentially, we choose a frame and free it of the page that is currently residing in it.

Page Replacement A page replacement algorithm decides which page (frame) becomes the victim. Designing an appropriate algorithm is important, since disk I/O is expensive: slight improvements in the algorithm yield large gains in system performance.

Page Replacement We will discuss several algorithms. The examples assume 3 frames and the reference string 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1, where each number refers to a page.

Optimal Page Replacement Algorithm Replace the page needed at the farthest point in the future, i.e., replace the page that will not be used for the longest period of time. This gives the lowest possible page fault rate.

Optimal Page Replacement

Optimal Page Replacement Optimal is easy to describe but impossible to implement: at the time of the page fault, the OS has no way of knowing when each of the pages will be referenced next.
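
Although OPT cannot be implemented online, it can be simulated after the fact when the whole reference string is known. Below is a minimal C sketch (not from the slides) that replays the 3-frame example and reference string used here and counts the faults; for this string it reports 9 faults.

    #include <stdio.h>

    #define FRAMES 3

    /* Position of the next use of 'page' after index 'pos', or n+1 if it
     * is never used again. */
    static int next_use(const int *refs, int n, int pos, int page) {
        for (int k = pos + 1; k < n; k++)
            if (refs[k] == page) return k;
        return n + 1;
    }

    int main(void) {
        int refs[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
        int n = (int)(sizeof refs / sizeof refs[0]);
        int frame[FRAMES], used = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int f = 0; f < used; f++)
                if (frame[f] == refs[i]) hit = 1;
            if (hit) continue;

            faults++;
            if (used < FRAMES) {
                frame[used++] = refs[i];   /* free frame available */
            } else {                       /* evict the page used farthest in the future */
                int victim = 0;
                for (int f = 1; f < FRAMES; f++)
                    if (next_use(refs, n, i, frame[f]) >
                        next_use(refs, n, i, frame[victim]))
                        victim = f;
                frame[victim] = refs[i];
            }
        }
        printf("OPT page faults: %d\n", faults);  /* 9 for this string */
        return 0;
    }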

FIFO Page Replacement Algorithm Maintain a linked list of all pages, where each page is associated with the time when it was brought into memory. The page chosen to be replaced is the oldest page. Implementation: a FIFO queue, where a variable head points to the oldest page and a variable tail points to the newest page brought in.
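
A minimal C sketch (not from the slides) of FIFO replacement on the same 3-frame example and reference string; a single index into a circular array stands in for the queue head. It reports 15 faults for this string.

    #include <stdio.h>

    #define FRAMES 3

    int main(void) {
        int refs[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
        int n = (int)(sizeof refs / sizeof refs[0]);
        int frame[FRAMES];
        int used = 0, oldest = 0, faults = 0;     /* 'oldest' acts as the queue head */

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int f = 0; f < used; f++)
                if (frame[f] == refs[i]) hit = 1;
            if (hit) continue;

            faults++;
            if (used < FRAMES) {
                frame[used++] = refs[i];          /* fill a free frame */
            } else {
                frame[oldest] = refs[i];          /* replace the oldest page */
                oldest = (oldest + 1) % FRAMES;
            }
        }
        printf("FIFO page faults: %d\n", faults); /* 15 for this string */
        return 0;
    }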

FIFO Page Replacement Note: in the figure, the red arrow points to the oldest page.

FIFO Page Replacement Advantage: easy to understand and program. Disadvantage: performance is not always good. The page replaced may be an initialization module that was used a long time ago and is no longer needed, but on the other hand, it may contain a heavily used variable that was initialized early and is in constant use.

LRU Page Replacement The FIFO replacement algorithm uses the time when a page was brought into memory; the optimal replacement algorithm uses the time when a page will next be used. Can we use the recent past as an approximation of the near future? That would mean replacing the page that has not been used for the longest period of time. This approach is the Least-Recently-Used (LRU) algorithm.

LRU Replacement Algorithm LRU replacement associates with each page the time of that page’s last use. When a page must be replaced, LRU chooses the page that has not been used for the longest period of time.

LRU Page Replacement

LRU Page Replacement LRU is often used and is considered to be good. The challenge is how to implement LRU.

LRU Implementation (1) Implementation using counters: associate each page-table entry with a time-of-use field and add a logical clock or counter. The clock is incremented for every memory reference, and each time a page is referenced its time-of-use field is updated with the logical clock; this requires an interrupt. A search of the page table is needed to find the least recently used page.
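
A hedged C sketch (not from the slides) of the counter approach: each resident page carries a last-used timestamp from a logical clock, and on a fault the victim is the frame with the smallest timestamp. On the 3-frame example and reference string from these slides it reports 12 faults.

    #include <stdio.h>

    #define FRAMES 3

    int main(void) {
        int refs[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
        int n = (int)(sizeof refs / sizeof refs[0]);
        int page[FRAMES], last_used[FRAMES];      /* time-of-use field per frame  */
        int used = 0, clock = 0, faults = 0;      /* 'clock' is the logical clock */

        for (int i = 0; i < n; i++) {
            clock++;                              /* tick on every memory reference */
            int hit = -1;
            for (int f = 0; f < used; f++)
                if (page[f] == refs[i]) hit = f;
            if (hit >= 0) { last_used[hit] = clock; continue; }

            faults++;
            int victim;
            if (used < FRAMES) {
                victim = used++;                  /* free frame available */
            } else {                              /* search for the smallest timestamp */
                victim = 0;
                for (int f = 1; f < FRAMES; f++)
                    if (last_used[f] < last_used[victim]) victim = f;
            }
            page[victim] = refs[i];
            last_used[victim] = clock;
        }
        printf("LRU page faults: %d\n", faults);  /* 12 for this string */
        return 0;
    }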

LRU Implementation (2) Implementation using a stack: keep a stack of page numbers. When a page is referenced, it is removed from the stack and put on the top, so the most recently used page is always at the top of the stack and the least recently used is at the bottom. A doubly-linked list should be used, since entries can be removed from the middle of the stack; removing a page and putting it on top of the stack requires changing multiple pointers.

LRU Implementation Issues For each memory reference, either the clock values or the stack must be updated. This is a lot of overhead, so operating systems often use an approximation algorithm instead.

LRU Approximation (1) Implementation using reference bits: each page-table entry has a set of reference bits, e.g., 8 bits. A memory reference causes the first (high-order) bit to be set to 1; this does not require an interrupt. The bits represent the history of page use for the last 8 time periods: 00000000 implies that the page has not been used in the last 8 intervals, and a page with history bits 11000100 has been used more recently than a page with bits 01110111.

LRU Approximation At regular intervals (e.g., every 100 milliseconds) a timer interrupt transfers control to the OS, and the bits are shifted to the right by one; the last bit falls off. Why is this an approximation? We know which pages have been used in a time interval, but we do not know the order. What if multiple pages have the lowest number? Randomly select a page, use FIFO among them, or swap out all pages with the lowest number.
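
A minimal sketch of this aging scheme (the structure and function names are illustrative, not a real kernel API): on each timer tick the OS shifts every page's history byte right by one and copies the hardware reference bit into the high-order position, and the page with the smallest history value is the approximate LRU victim.

    #include <stdint.h>

    #define NPAGES 64

    struct page_info {
        uint8_t referenced;  /* set to 1 by the hardware when the page is used  */
        uint8_t history;     /* 8 bits of reference history, newest on the left */
    };

    /* Called from the periodic timer interrupt (e.g., every 100 ms). */
    void age_pages(struct page_info pages[], int n) {
        for (int i = 0; i < n; i++) {
            pages[i].history >>= 1;                                  /* oldest bit falls off        */
            pages[i].history |= (uint8_t)(pages[i].referenced << 7); /* record the latest interval  */
            pages[i].referenced = 0;                                 /* clear for the next interval */
        }
    }

    /* Victim = page with the smallest history value (ties broken arbitrarily). */
    int pick_victim(const struct page_info pages[], int n) {
        int victim = 0;
        for (int i = 1; i < n; i++)
            if (pages[i].history < pages[victim].history)
                victim = i;
        return victim;
    }

With this encoding, a page whose byte is 11000100 compares greater than one whose byte is 01110111, matching the example above.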

Other Algorithms Least frequently used (LFU), most frequently used (MFU). Most OSes use LRU.

Least Recently Used (LRU) Algorithm Why does LRU work? Consider the following code segment:

    sum = 0;
    for (i = 0; i < n; i++) {
        sum = sum + a[i];
    }

What do we see here? We see that a[i+1] is accessed soon after a[i], and that sum is periodically referenced.

Least Recently Used (LRU) Algorithm What else do we see? We are cycling through the for-loop repeatedly. The program exhibits temporal locality: recently referenced items are likely to be referenced in the near future (e.g., the variable sum and the for-loop instructions). It also exhibits spatial locality: items with nearby addresses tend to be referenced close together in time (e.g., a[i+1] is accessed after a[i]).

The Working Set Model Processes tend to exhibit locality of reference: during any phase of execution, a process references only a relatively small fraction of its pages. The set of pages that a process is currently using is called its working set. If the entire working set is in memory, the process will run without causing many faults until it moves into another execution phase (for example, another loop).

LRU Replacement Algorithm Locality suggests that memory references cluster on the same set of pages, and studies suggest that programs exhibit high spatial and temporal locality.

Summary We have studied the need for page replacement algorithms. Several algorithms were discussed, including Optimal, FIFO, and LRU.