Virtual Memory



Virtual Memory

- All memory addresses within a process are logical addresses; they are translated into physical addresses at run time.
- The process image is divided into small pieces (pages) that do not need to be allocated contiguously.
- A process image can be swapped in and out of memory, occupying different regions of main memory during its lifetime.
- When the OS supports virtual memory, a process need not have all of its pages loaded in main memory while it executes.

Virtual Memory

- At any time, the portion of the process image that is loaded in main memory is called the resident set of the process.
- If the CPU tries to access an address belonging to a page that is not currently loaded in main memory, a page fault interrupt is generated, and:
  1. The interrupted process changes to the blocked state.
  2. The OS issues a disk I/O read request.
  3. The OS tries to dispatch another process while the I/O request is being served.
  4. Once the disk completes the page transfer, an I/O interrupt is issued.
  5. The OS handles the I/O interrupt and moves the faulting process back to the ready state.
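The page-fault sequence above can be sketched in code. This is a toy model, not real OS code: all names (`Process`, `access`, `resident_set`) are invented for illustration, the "disk" is just a request list, and a free frame is assumed to be available.

```python
# Toy model of the demand-paging flow: the resident set maps
# virtual page numbers to physical frame numbers.

BLOCKED, READY = "blocked", "ready"

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.state = READY
        self.resident_set = {}         # virtual page # -> frame #

def access(proc, vpage, free_frames, disk):
    """Return the frame holding vpage, faulting it in if necessary."""
    if vpage in proc.resident_set:
        return proc.resident_set[vpage]
    # Page fault: block the process and issue a disk read request.
    proc.state = BLOCKED
    frame = free_frames.pop()          # assume a free frame exists
    disk.append((proc.pid, vpage))     # stands in for the disk I/O request
    # ... the OS would dispatch another process here ...
    # Disk I/O completes: install the mapping, unblock the process.
    proc.resident_set[vpage] = frame
    proc.state = READY
    return frame

p = Process(pid=1)
frames = [0, 1, 2]
disk_requests = []
f = access(p, vpage=7, free_frames=frames, disk=disk_requests)
print(f, p.state, disk_requests)       # first access to page 7 faults
```

A second `access(p, 7, ...)` would hit the resident set and return immediately, which is exactly the difference between a resident and a non-resident page.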

Virtual Memory

- Since the OS loads only some pages of each process, more processes can be resident in main memory and ready for execution.
- Virtual memory gives the programmer the impression of a huge main memory (backed by available disk space); the OS automatically loads pages of the running process on demand.
- A process image may be larger than the entire main memory.

Virtual Memory

- Each page table entry (PTE) has some control bits, including a P-bit and an M-bit.
- The P-bit indicates whether the PTE is valid, i.e., whether the corresponding page is present in main memory.
- The M-bit indicates whether the content of the page has been modified since it was loaded into memory. If the page has not been modified, its content need not be written back to disk when the page is replaced.
- The page table is too big to be stored in registers, so it has to reside in main memory. If each process has a 4-MB page table, the amount of memory required to store page tables would be unacceptably high.
- How can we reduce the memory overhead of the paging mechanism?

[Figure: virtual address = page number | offset; PTE = P | M | other control bits | frame number]
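Decoding a PTE is plain bit manipulation. The layout below is an assumption made up for this sketch (P in bit 0, M in bit 1, frame number in the high bits), not any real architecture's format:

```python
# Hypothetical PTE layout for illustration only:
# bit 0 = P (present), bit 1 = M (modified), bits 12..31 = frame number.
P_BIT = 1 << 0
M_BIT = 1 << 1

def decode_pte(pte):
    present  = bool(pte & P_BIT)
    modified = bool(pte & M_BIT)
    frame    = pte >> 12
    return present, modified, frame

pte = (5 << 12) | M_BIT | P_BIT      # frame 5, present, modified
print(decode_pte(pte))               # (True, True, 5)
```

On replacement, checking the M-bit this way is what lets the OS skip the write-back for clean pages.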

Virtual Memory

- How can we reduce the memory overhead of the paging mechanism?
- Most virtual memory schemes use a two-level (or deeper) scheme to store large page tables, with the second level stored in virtual memory rather than physical memory.
- The first level is a root page table (always in main memory); each of its entries points to a second-level page table (stored in virtual memory).
- If the root page table has X entries and each second-level page table has Y entries, then each process can have up to X*Y pages.
- The size of each second-level page table equals the page size.

[Figure: virtual address = page number | offset; PTE = P | M | other control bits | frame number]

Two-level hierarchical page table

Example of a two-level scheme with 32-bit virtual addresses:
- Assume byte-level addressing and 4-KB pages (2^12 bytes).
- The 4-GB (2^32 byte) virtual address space consists of 2^20 pages.
- Assume each page table entry (PTE) is 4 bytes.
- The full user page table would require 4 MB (2^22 bytes); it can be divided into 2^10 pages (the second-level page tables), kept in virtual memory and mapped by a root table with 2^10 PTEs, requiring only 4 KB.
- The root page table always remains in main memory.
- The 10 most significant bits of a virtual address index the root page table to identify a second-level page table. If that page table is not in main memory, a page fault occurs.
- The next 10 bits of the virtual address index the second-level page table to map the virtual address to a physical address.

Virtual address (32 bits -> 4-GB virtual address space): 10-bit root table index | 10-bit page table index | offset
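The 10/10/12 split above can be sketched directly. The table contents here are invented for illustration; a real walk would also check P-bits and possibly fault in the second-level table:

```python
# Split a 32-bit virtual address per the 10/10/12 scheme described above.

def split(vaddr):
    root_index = (vaddr >> 22) & 0x3FF   # top 10 bits
    pt_index   = (vaddr >> 12) & 0x3FF   # next 10 bits
    offset     = vaddr & 0xFFF           # low 12 bits (4-KB pages)
    return root_index, pt_index, offset

def translate(vaddr, root_table):
    ri, pi, off = split(vaddr)
    page_table = root_table[ri]          # second-level table (may fault in a real OS)
    frame = page_table[pi]
    return (frame << 12) | off

# Root entry 1 points to a table whose entry 2 maps to frame 42.
root = {1: {2: 42}}
vaddr = (1 << 22) | (2 << 12) | 0x0AB
print(hex(translate(vaddr, root)))       # 0x2a0ab
```

Note that only the root table and the one second-level table actually touched need to be resident, which is the whole point of the two-level design.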

Two-level hierarchical page table

[Figure: a 32-bit virtual address (10-bit root table index | 10-bit page table index | offset) walks a 4-KB root page table into a 4-MB user page table, which maps the 4-GB user address space]

Two-level hierarchical page table

[Figure: two-level hierarchical page table for a 32-bit virtual address space]

Virtual-to-Physical Lookups

- Programs only know virtual addresses, and each virtual address must be translated.
- The page table can be extremely large, and translation may involve walking a hierarchical page table.
- Since the page table is stored in memory, each program memory access requires several actual memory accesses.
- Solution: cache the "active" part of the page table in a translation lookaside buffer (TLB).
- The TLB is an associative mapping: the processor can query all TLB entries in parallel to determine whether there is a match.
- The TLB works like a memory cache, exploiting the principle of locality.

Translation Lookaside Buffer (TLB)

[Figure: a virtual address (VPage # | offset) is looked up in the TLB; on a hit, the PPage # is combined with the offset to form the physical address; on a miss, the page table is consulted. Note that each TLB entry must include the virtual page # as well as the corresponding PTE.]

TLB Function

- When a virtual address is presented to the MMU, the hardware checks the TLB by comparing all entries simultaneously (in parallel).
- On a valid match, the frame # is taken from the TLB without going through the page table.
- If no match is found, the MMU detects the miss and performs a regular page table lookup. It then evicts an old TLB entry and replaces it with the new one, so that the next access to that page hits in the TLB.
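The hit/miss/evict behavior can be modeled in a few lines. This is a software sketch only: a dict stands in for the hardware's parallel associative lookup, and FIFO eviction is an arbitrary choice for the example (real TLBs typically use LRU-like policies):

```python
from collections import OrderedDict

class TLB:
    """Toy TLB: vpage -> frame cache with FIFO eviction."""
    def __init__(self, capacity, page_table):
        self.capacity = capacity
        self.entries = OrderedDict()       # insertion order = age
        self.page_table = page_table
        self.hits = self.misses = 0

    def lookup(self, vpage):
        if vpage in self.entries:
            self.hits += 1                 # TLB hit: no page-table access
            return self.entries[vpage]
        self.misses += 1
        frame = self.page_table[vpage]     # regular page-table lookup
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict the oldest entry
        self.entries[vpage] = frame        # install so the next access hits
        return frame

tlb = TLB(capacity=2, page_table={0: 8, 1: 9, 2: 10})
for vp in [0, 1, 0, 2, 0]:
    tlb.lookup(vp)
print(tlb.hits, tlb.misses)                # 1 hit, 4 misses
```

The single hit on the repeated access to page 0 is locality at work; a larger TLB or a more local reference stream would raise the hit ratio h used in the next slide.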

Effective Access Time with TLB

- TLB lookup time = s time units
- Memory access time = m time units
- Assume the page table walk needs a single memory access, and the TLB hit ratio is h.
- Effective access time: EAT = (m + s)h + (2m + s)(1 - h) = 2m + s - mh
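A worked instance of the formula, with illustrative numbers (m = 100 ns, s = 20 ns, h = 0.9, chosen only for the example): EAT = 2(100) + 20 - 100(0.9) = 130 ns, versus 2m + s = 220 ns with no TLB hits at all.

```python
def eat(m, s, h):
    """Effective access time: hit path costs m + s, miss path 2m + s."""
    return (m + s) * h + (2 * m + s) * (1 - h)

print(eat(m=100, s=20, h=0.9))   # ~130 ns
```

As h approaches 1, EAT approaches m + s, i.e., the TLB nearly hides the extra memory access of the page-table walk.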

Inverted Page Tables

- As the virtual address space grows, additional levels must be added to multilevel page tables to keep the root page table from becoming too large.
- Assuming a 64-bit address space, a 4-KB page size, and 4-byte PTEs, each page table can store 1024 entries, covering 10 bits of address space. Thus ceil(52/10) = 6 levels are required, i.e., 6 memory accesses for each address translation.
- However, physical memory is much smaller than the virtual address space; this motivates the inverted page table.
- The number of entries in an inverted page table equals the number of physical memory frames.

Inverted Page Tables

- Consider a simple inverted page table: there is one entry per physical memory frame.
- The table is shared among the processes, so each entry must contain the pair <process ID, virtual page #>.
- The physical frame # is not stored, since an entry's index in the table is its frame #.
- To translate a virtual address, the virtual page # and current process ID are compared against each entry, scanning the array sequentially. If a match is found, its index in the inverted page table yields the physical address; if no match is found, a page fault occurs.
- The search can be very inefficient, since finding a match may require scanning the entire table. To speed up the search, hashed inverted page tables are used.
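The naive linear scan above can be sketched as follows; the table contents are invented for illustration:

```python
# Naive inverted page table: one (pid, vpage) entry per physical frame.
# The matching entry's index IS the frame number.

def translate(ipt, pid, vpage, offset, page_size=4096):
    for frame, entry in enumerate(ipt):
        if entry == (pid, vpage):
            return frame * page_size + offset
    raise KeyError("page fault")         # no match anywhere in the table

ipt = [(2, 5), (1, 3), (1, 7)]           # a machine with 3 physical frames
print(hex(translate(ipt, pid=1, vpage=7, offset=0x10)))   # 0x2010
```

The table has only as many entries as there are frames, but every translation is O(number of frames) in the worst case, which is exactly the inefficiency the hashed variant on the next slide addresses.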

Hashed Inverted Page Tables

- Main idea: one PTE for each physical frame; hash (pid, vpage) to a frame #.
- Pros: small page table even for a large address space.
- Cons: lookup is more involved; there is overhead in managing hash chains, etc. [p. 794 of the course textbook]

[Figure: the virtual address (PID | vpage # | offset) is hashed; the hash function indexes an inverted page table whose entries hold (PID, page #, next); the matching entry's index k combines with the offset to form the physical address]
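The chained lookup in the figure can be sketched as follows. Everything here is illustrative: the toy hash function, the table sizes, and the anchor-table layout are assumptions, not the scheme of any particular machine.

```python
NFRAMES = 4
# Inverted table: one entry per frame -> (pid, vpage, next frame in chain).
ipt = [None] * NFRAMES
# Hash anchor table: bucket -> first frame in that bucket's chain.
anchors = {}

def bucket(pid, vpage):
    return (pid * 31 + vpage) % NFRAMES      # toy hash function

def map_page(pid, vpage, frame):
    b = bucket(pid, vpage)
    ipt[frame] = (pid, vpage, anchors.get(b))  # link onto the chain head
    anchors[b] = frame

def lookup(pid, vpage):
    frame = anchors.get(bucket(pid, vpage))
    while frame is not None:                 # follow the `next` links
        p, v, nxt = ipt[frame]
        if (p, v) == (pid, vpage):
            return frame                     # entry index = frame number
        frame = nxt
    raise KeyError("page fault")

map_page(pid=21, vpage=1, frame=3)
map_page(pid=11, vpage=5, frame=0)
print(lookup(21, 1), lookup(11, 5))          # 3 0
```

A lookup now costs one hash plus the length of one chain rather than a scan of the whole table; the chain-maintenance bookkeeping in `map_page` is the overhead the slide's "Cons" refers to.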