Virtual Memory: Concepts

Virtual Memory: Concepts
- Address spaces
- VM as a tool for caching
- VM as a tool for memory management
- VM as a tool for memory protection
- Address translation

Recall: Byte-Oriented Memory Organization
[Figure: memory drawn as one long array of bytes, with addresses running from 00•••0 to FF•••F.]
- Programs refer to data by address
  - Conceptually, envision memory as a very large array of bytes (in reality it's not, but you can think of it that way)
  - An address is like an index into that array, and a pointer variable stores an address
- Note: the system provides a private address space to each "process"
  - Think of a process as a program being executed
  - So a program can clobber its own data, but not that of others
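
To make the "array of bytes" picture concrete, here is a minimal C sketch (not from the slides) that reads the bytes of an int through a pointer, treating each address as an index into memory:

    #include <stdio.h>

    int main(void) {
        int x = 0x11223344;
        unsigned char *p = (unsigned char *)&x;   /* a pointer holds an address */

        /* The address acts like an index into the big byte array that is memory:
         * p[0], p[1], ... name successive bytes of x. */
        for (size_t i = 0; i < sizeof x; i++)
            printf("byte %zu at address %p = 0x%02x\n", i, (void *)(p + i), p[i]);
        return 0;
    }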

Recall: Simple Addressing Modes
- Normal: (R) means Mem[Reg[R]]
  - Register R specifies the memory address
  - Example: movl (%ecx),%eax
- Displacement: D(R) means Mem[Reg[R]+D]
  - Register R specifies the start of a memory region
  - Constant displacement D specifies the offset
  - Example: movl 8(%ebp),%edx

Let's think about this: physical memory?
[Figure: again, memory as one long array of bytes from 00•••0 to FF•••F.]
- How does everything fit?
  - 32-bit addresses: ~4,000,000,000 (4 billion) bytes
  - 64-bit addresses: ~16,000,000,000,000,000,000 (16 quintillion) bytes
- What if another process stores data into your memory?
- How could you debug your program?

So, we add a level of indirection
- One simple trick solves all three problems
- Each process gets its own private image of memory
  - It appears to be a full-sized private memory range
  - This fixes "how to choose" and "others shouldn't mess w/ yours"
  - Surprisingly, it also fixes "making everything fit"
- Implementation: translate addresses transparently
  - Add a mapping function that maps private addresses to physical addresses
  - Do the mapping on every load or store
- This mapping trick is the heart of virtual memory

Address Spaces
- Linear address space: ordered set of contiguous non-negative integer addresses {0, 1, 2, 3, …}
- Virtual address space: set of N = 2^n virtual addresses {0, 1, 2, 3, …, N-1}
- Physical address space: set of M = 2^m physical addresses {0, 1, 2, 3, …, M-1}
- Clean distinction between data (bytes) and their attributes (addresses)
  - Each datum can now have multiple addresses
  - Every byte in main memory has one physical address and one (or more) virtual addresses
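
For example (numbers chosen for illustration, not from the slides): with n = 32 and 4 KB pages (p = 12), the virtual address space contains N = 2^32, roughly 4 billion, byte addresses organized as 2^(32-12) = 2^20 virtual pages; a machine with 1 GB of DRAM has M = 2^30 physical addresses and 2^18 physical pages.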

A System Using Physical Addressing
[Figure: the CPU puts a physical address (PA), here 4, directly on the memory bus; main memory (addresses 0 … M-1) returns the data word stored there.]
- Used in some "simple" systems, like embedded microcontrollers in cars, elevators, and digital picture frames

A System Using Virtual Addressing
[Figure: the CPU issues a virtual address (VA), which the MMU on the CPU chip translates into a physical address (PA) sent to main memory (addresses 0 … M-1); memory returns the data word.]
- Used in all modern servers, desktops, and laptops
- One of the great ideas in computer science

Why Virtual Memory?
(1) VM allows efficient use of limited main memory (RAM)
  - Use RAM as a cache for the parts of a virtual address space
    - Some non-cached parts are stored on disk
    - Some (unallocated) non-cached parts are stored nowhere
  - Keep only active areas of the virtual address space in memory
    - Transfer data back and forth as needed
(2) VM simplifies memory management for programmers
  - Each process gets a full, private linear address space
(3) VM isolates address spaces
  - One process can't interfere with another's memory, because they operate in different address spaces
  - A user process cannot access privileged information
    - Different sections of the address space have different permissions

Virtual Memory: Concepts
- Address spaces
- (1) VM as a tool for caching
- (2) VM as a tool for memory management
- (3) VM as a tool for memory protection
- Address translation

(1) VM as a Tool for Caching
- Conceptually, virtual memory is an array of N contiguous bytes stored on disk
- The contents of the array on disk are cached in physical memory (the DRAM cache)
  - These cache blocks are called pages (size P = 2^p bytes)
[Figure: virtual pages VP 0 … VP 2^(n-p)-1 live on disk and are each either unallocated, cached, or uncached; cached virtual pages occupy physical pages PP 0 … PP 2^(m-p)-1 in DRAM.]
- Virtual pages (VPs) are stored on disk; physical pages (PPs) are cached in DRAM

DRAM Cache Organization
- DRAM cache organization is driven by the enormous miss penalty
  - DRAM is about 10x slower than SRAM
  - Disk is about 10,000x slower than DRAM
- Consequences
  - Large page (block) size: typically 4-8 KB, sometimes 4 MB
  - Fully associative: any VP can be placed in any PP
    - Requires a "large" mapping function (different from CPU caches)
  - Highly sophisticated, expensive replacement algorithms
    - Too complicated and open-ended to be implemented in hardware
  - Write-back rather than write-through

Enabling data structure: Page Table
- A page table is an array of page table entries (PTEs) that maps virtual pages to physical pages
  - Per-process kernel data structure in DRAM
[Figure: a memory-resident page table (PTE 0 … PTE 7) in DRAM. Each PTE holds a valid bit plus a physical page number or disk address: valid PTEs point to physical pages in DRAM (here VP 1, VP 2, VP 7, VP 4 are cached), while invalid PTEs hold either null (unallocated page) or the disk address of an uncached virtual page.]
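
A minimal C sketch of what such a page table might look like; the field names, page size, and address-space size are illustrative assumptions, not the slide's or any real OS's layout:

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SHIFT 12              /* assume 4 KB pages (p = 12)           */
    #define NUM_VPAGES (1u << 20)      /* assume a 32-bit virtual address space */

    /* One page table entry: a valid bit plus either a physical page
     * number (if cached in DRAM) or a disk address (if paged out). */
    typedef struct {
        bool     valid;                /* 1: page is resident in DRAM          */
        uint64_t ppn_or_disk_addr;     /* PPN if valid, else disk location     */
    } pte_t;

    /* Per-process page table: one PTE per virtual page, indexed by VPN. */
    typedef struct {
        pte_t entries[NUM_VPAGES];
    } page_table_t;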

Page Hit
- Page hit: reference to a VM word that is in physical memory (DRAM cache hit)
[Figure: the virtual address selects a PTE whose valid bit is 1; the PTE's physical page number points to the page in DRAM, so the reference is satisfied from physical memory.]

Page Fault
- Page fault: reference to a VM word that is not in physical memory (DRAM cache miss)
[Figure: the virtual address selects a PTE whose valid bit is 0; the PTE holds only the disk address of the page, which is not resident in DRAM.]

Handling Page Fault
- Page miss causes a page fault (an exception)
- The page fault handler selects a victim page to be evicted (here VP 4)
- The missing page (here VP 3) is paged in from disk and the PTE is updated
- The offending instruction is restarted: page hit!
[Figure: before the fault, VP 4 occupies PP 3 in DRAM; after handling, VP 3 has replaced VP 4 in PP 3 and the page table reflects the change.]
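
The sequence above can be summarized in a hedged C sketch of what a page fault handler does; helper names such as pte_for_vpn, choose_victim_ppn, write_page_to_disk, and read_page_from_disk are hypothetical placeholders, not real kernel APIs:

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        bool     valid;
        bool     dirty;
        uint64_t ppn;         /* physical page number, if valid       */
        uint64_t disk_addr;   /* where the page lives on disk, if not */
    } pte_t;

    /* Hypothetical helpers standing in for real OS machinery. */
    extern pte_t   *pte_for_vpn(uint64_t vpn);
    extern uint64_t choose_victim_ppn(uint64_t *victim_vpn);
    extern void     write_page_to_disk(uint64_t ppn, uint64_t disk_addr);
    extern void     read_page_from_disk(uint64_t disk_addr, uint64_t ppn);

    /* Invoked by the page fault exception; on return, the faulting
     * instruction is re-executed and (now) hits in DRAM. */
    void handle_page_fault(uint64_t faulting_vpn)
    {
        pte_t *pte = pte_for_vpn(faulting_vpn);

        /* 1. Select a victim physical page to evict. */
        uint64_t victim_vpn;
        uint64_t ppn = choose_victim_ppn(&victim_vpn);

        /* 2. If the victim is dirty, write it back to disk; mark it not resident. */
        pte_t *victim = pte_for_vpn(victim_vpn);
        if (victim->dirty)
            write_page_to_disk(ppn, victim->disk_addr);
        victim->valid = false;

        /* 3. Page in the missing page and update its PTE. */
        read_page_from_disk(pte->disk_addr, ppn);
        pte->ppn   = ppn;
        pte->valid = true;
        /* 4. Return: the OS restarts the offending instruction. */
    }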

Locality to the Rescue Again!
- Virtual memory works because of locality
- At any point in time, programs tend to access a set of active virtual pages called the working set
  - Programs with better temporal locality have smaller working sets
- If (working set size < main memory size)
  - Good performance for one process, after the compulsory misses
- If (SUM of working set sizes > main memory size)
  - Thrashing: performance meltdown where pages are moved (copied) in and out continuously

(2) VM as a Tool for Memory Management
- Key idea: each process has its own virtual address space
  - It can view memory as a simple linear array
  - The mapping function scatters addresses through physical memory
  - Well-chosen mappings simplify memory allocation and management
[Figure: the virtual address spaces of Process 1 and Process 2 are translated into one physical address space (DRAM); their virtual pages map to scattered physical pages, and both processes map a page onto PP 6, which holds read-only library code.]

Simplifying allocation and sharing
- Memory allocation
  - Each virtual page can be mapped to any physical page
  - A virtual page can be stored in different physical pages at different times
- Sharing code and data among processes
  - Map multiple virtual pages to the same physical page (here: PP 6)
[Figure: same picture as before; both processes' address spaces map one of their virtual pages to the shared physical page PP 6, e.g. read-only library code.]
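
As a user-level illustration of sharing through the VM system (an analogy, not what the slide's kernel-level picture literally shows): two processes that mmap the same file read-only typically end up with virtual pages backed by the same physical pages. The file name below is a made-up example; any readable file would do.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        /* Hypothetical example file. */
        int fd = open("/usr/lib/libexample.so", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Map the file read-only. If another process maps the same file,
         * the OS can back both mappings with the same physical pages. */
        void *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        printf("mapped %lld bytes at virtual address %p\n",
               (long long)st.st_size, p);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }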

Simplifying Linking and Loading
- Linking
  - Each program has a similar virtual address space
  - Code, stack, and shared libraries always start at the same address
- Loading
  - execve() allocates virtual pages for the .text and .data sections, i.e. creates PTEs marked as invalid
  - The .text and .data sections are copied, page by page, on demand by the virtual memory system
[Figure: the (IA32 Linux) process address space, from high to low addresses: kernel virtual memory (invisible to user code) above 0xc0000000; the user stack, created at runtime, growing down from 0xc0000000 (%esp is the stack pointer); the memory-mapped region for shared libraries near 0x40000000; the run-time heap, created by malloc, growing up to brk; the read/write segment (.data, .bss) and the read-only segment (.init, .text, .rodata), loaded from the executable file starting at 0x08048000; unused memory below.]
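
A small C sketch (not from the slides) that lets you observe this layout by printing the virtual addresses of objects in different segments; exact values vary by system and are randomized by ASLR on modern machines:

    #include <stdio.h>
    #include <stdlib.h>

    int global_initialized = 1;     /* read/write .data segment  */
    int global_uninitialized;       /* .bss                      */
    const int global_readonly = 2;  /* read-only segment         */

    int main(void) {
        int on_stack = 3;                        /* user stack   */
        int *on_heap = malloc(sizeof *on_heap);  /* run-time heap */

        printf("code  (.text) : %p\n", (void *)main);
        printf("rodata        : %p\n", (void *)&global_readonly);
        printf("data          : %p\n", (void *)&global_initialized);
        printf("bss           : %p\n", (void *)&global_uninitialized);
        printf("heap          : %p\n", (void *)on_heap);
        printf("stack         : %p\n", (void *)&on_stack);

        free(on_heap);
        return 0;
    }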

(3) VM as a Tool for Memory Protection
- Extend PTEs with permission bits
- The page fault handler checks these before remapping
  - If they are violated, the process is sent SIGSEGV (segmentation fault)

Process i:    SUP   READ  WRITE  Address
  VP 0:       No    Yes   No     PP 6
  VP 1:       No    Yes   Yes    PP 4
  VP 2:       Yes   Yes   Yes    PP 2

Process j:    SUP   READ  WRITE  Address
  VP 0:       No    Yes   No     PP 9
  VP 1:       Yes   Yes   Yes    PP 6
  VP 2:       No    Yes   Yes    PP 11

[Figure: the physical address space column shows the referenced physical pages; note that PP 6 is mapped by both processes.]
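
To see these permission bits from user space, here is a hedged C sketch using POSIX mmap/mprotect (an illustration of the mechanism; the slide itself is about the hardware/OS-level PTE bits, and MAP_ANONYMOUS is a common extension assumed to be available):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        long page_size = sysconf(_SC_PAGESIZE);

        /* Get one private, anonymous page that is readable and writable. */
        char *page = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(page, "hello");               /* write is allowed */

        /* Drop write permission: conceptually, the kernel clears the
         * WRITE bit in the page's protections (its PTE). */
        if (mprotect(page, page_size, PROT_READ) < 0) { perror("mprotect"); return 1; }

        printf("still readable: %s\n", page);
        /* page[0] = 'H';  <- this store would now fault and the process
         *                    would receive SIGSEGV (segmentation fault). */

        munmap(page, page_size);
        return 0;
    }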

VM Address Translation
- Virtual address space: V = {0, 1, …, N-1}
- Physical address space: P = {0, 1, …, M-1}
- Address translation: MAP: V → P ∪ {∅}
  - For virtual address a:
    - MAP(a) = a′ if data at virtual address a is at physical address a′ in P
    - MAP(a) = ∅ if data at virtual address a is not in physical memory (either invalid or stored on disk)

Summary of Address Translation Symbols
- Basic parameters
  - N = 2^n : number of addresses in the virtual address space
  - M = 2^m : number of addresses in the physical address space
  - P = 2^p : page size (bytes)
- Components of the virtual address (VA)
  - VPO: virtual page offset
  - VPN: virtual page number
  - TLBI: TLB index
  - TLBT: TLB tag
- Components of the physical address (PA)
  - PPO: physical page offset (same as VPO)
  - PPN: physical page number
  - CO: byte offset within cache line
  - CI: cache index
  - CT: cache tag
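
These components are just bit fields of the address. A minimal C sketch; the 4 KB page size and the example values are assumptions for illustration:

    #include <stdint.h>
    #include <stdio.h>

    #define P_BITS 12u                      /* assume P = 2^12 = 4 KB pages */

    static uint64_t vpn(uint64_t va) { return va >> P_BITS; }               /* VPN */
    static uint64_t vpo(uint64_t va) { return va & ((1ull << P_BITS) - 1); } /* VPO */

    /* The physical address is the PPN concatenated with the PPO (== VPO). */
    static uint64_t make_pa(uint64_t ppn, uint64_t ppo) {
        return (ppn << P_BITS) | ppo;
    }

    int main(void) {
        uint64_t va = 0x12345678;   /* made-up virtual address */
        printf("VA = 0x%llx  VPN = 0x%llx  VPO = 0x%llx\n",
               (unsigned long long)va,
               (unsigned long long)vpn(va),
               (unsigned long long)vpo(va));
        /* If the page table said this VPN maps to PPN 0x42: */
        printf("PA = 0x%llx\n", (unsigned long long)make_pa(0x42, vpo(va)));
        return 0;
    }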

Address Translation With a Page Table
[Figure: the n-bit virtual address is split into a virtual page number (VPN, bits n-1 … p) and a virtual page offset (VPO, bits p-1 … 0). The page table base register (PTBR) holds the address of the current process's page table; the VPN indexes the page table to select a PTE. If the PTE's valid bit is 0, the page is not in memory (page fault). Otherwise the PTE supplies the physical page number (PPN, bits m-1 … p), which is concatenated with the VPO (equal to the physical page offset, PPO) to form the m-bit physical address.]
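
Putting the previous sketches together, a hedged C model of this lookup; page_table stands in for the structure the PTBR points to and is not a real API:

    #include <stdbool.h>
    #include <stdint.h>

    #define P_BITS 12u                      /* assume 4 KB pages        */

    typedef struct {
        bool     valid;
        uint64_t ppn;                       /* physical page number     */
    } pte_t;

    /* Translate a virtual address using a single-level page table whose
     * base address (conceptually, the PTBR) is passed in as page_table.
     * Returns true and fills *pa on a hit; returns false on a page fault. */
    static bool translate(const pte_t *page_table, uint64_t va, uint64_t *pa)
    {
        uint64_t vpn = va >> P_BITS;
        uint64_t vpo = va & ((1ull << P_BITS) - 1);

        const pte_t *pte = &page_table[vpn];
        if (!pte->valid)
            return false;                   /* valid bit 0: page fault  */

        *pa = (pte->ppn << P_BITS) | vpo;   /* PPN concatenated with VPO */
        return true;
    }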

Address Translation: Page Hit
[Figure: CPU → MMU → cache/memory data flow, steps numbered 1-5.]
1) Processor sends the virtual address (VA) to the MMU
2-3) MMU sends the PTE address (PTEA) to memory and fetches the PTE from the page table
4) MMU sends the physical address (PA) to cache/memory
5) Cache/memory sends the data word to the processor

Address Translation: Page Fault
[Figure: CPU, MMU, cache/memory, disk, and the page fault handler, steps numbered 1-7.]
1) Processor sends the virtual address (VA) to the MMU
2-3) MMU fetches the PTE from the page table in memory
4) The valid bit is zero, so the MMU triggers a page fault exception
5) The handler identifies a victim page (and, if dirty, pages it out to disk)
6) The handler pages in the new page and updates the PTE in memory
7) The handler returns to the original process, restarting the faulting instruction

Views of virtual memory
- Programmer's view of virtual memory
  - Each process has its own private linear address space
  - It cannot be corrupted by other processes
- System view of virtual memory
  - Uses memory efficiently by caching virtual memory pages
    - Efficient only because of locality
  - Simplifies memory management and programming
  - Simplifies protection by providing a convenient interpositioning point to check permissions

Integrating VM and Cache
[Figure: the MMU issues the PTE address (PTEA) to the L1 cache; on a PTEA hit the PTE comes from L1, on a PTEA miss it is fetched from memory. The resulting physical address (PA) is then looked up in the L1 cache, with PA misses also going to memory, and the data returns to the CPU.]
- VA: virtual address, PA: physical address, PTE: page table entry, PTEA: PTE address

Speeding up Translation with a TLB
- Page table entries (PTEs) are cached in L1 like any other memory word
  - PTEs may be evicted by other data references
  - A PTE hit still requires a small L1 delay
- Solution: Translation Lookaside Buffer (TLB)
  - Small hardware cache in the MMU
  - Maps virtual page numbers to physical page numbers
  - Contains complete page table entries for a small number of pages
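
A hedged C sketch of a tiny direct-mapped TLB; the entry count and the software model are illustrative assumptions, while real TLBs are set-associative hardware inside the MMU:

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_ENTRIES 16u                     /* assume 16 entries        */

    typedef struct {
        bool     valid;
        uint64_t tag;                           /* TLBT: upper bits of VPN  */
        uint64_t ppn;                           /* cached translation       */
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_ENTRIES];

    /* Split the VPN into a TLB index (low bits) and a TLB tag (high bits),
     * then check the selected entry. Returns true on a TLB hit. */
    static bool tlb_lookup(uint64_t vpn, uint64_t *ppn)
    {
        uint64_t index = vpn % TLB_ENTRIES;     /* TLBI */
        uint64_t tag   = vpn / TLB_ENTRIES;     /* TLBT */

        if (tlb[index].valid && tlb[index].tag == tag) {
            *ppn = tlb[index].ppn;              /* hit: no memory access     */
            return true;
        }
        return false;                           /* miss: walk the page table */
    }

    /* On a miss, the MMU fetches the PTE from memory and installs it here. */
    static void tlb_insert(uint64_t vpn, uint64_t ppn)
    {
        uint64_t index = vpn % TLB_ENTRIES;
        tlb[index] = (tlb_entry_t){ .valid = true,
                                    .tag   = vpn / TLB_ENTRIES,
                                    .ppn   = ppn };
    }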

TLB Hit
- A TLB hit eliminates a memory access
[Figure: 1) the CPU sends the VA to the MMU; 2-3) the MMU sends the VPN to the TLB, which returns the PTE directly; 4) the MMU sends the PA to cache/memory; 5) the data word returns to the CPU.]

TLB Miss
[Figure: 1) the CPU sends the VA to the MMU; 2) the MMU sends the VPN to the TLB, which misses; 3-4) the MMU sends the PTEA to cache/memory and fetches the PTE; 5) the MMU sends the PA to cache/memory; 6) the data word returns to the CPU.]
- A TLB miss incurs an additional memory access (the PTE)
- Fortunately, TLB misses are rare. Why? Because of locality: the pages in a program's working set are referenced over and over, so their translations stay resident in the TLB