Virtual Memory.  Next in memory hierarchy  Motivations:  to remove programming burdens of a small, limited amount of main memory  to allow efficient.

Slides:



Advertisements
Similar presentations
Fabián E. Bustamante, Spring 2007
Advertisements

C SINGH, JUNE 7-8, 2010IWW 2010, ISATANBUL, TURKEY Advanced Computers Architecture, UNIT 2 Advanced Computers Architecture Virtual Memory By Rohit Khokher.
COMP 3221: Microprocessors and Embedded Systems Lectures 27: Virtual Memory - III Lecturer: Hui Wu Session 2, 2005 Modified.
Virtual Memory October 25, 2006 Topics Address spaces Motivations for virtual memory Address translation Accelerating translation with TLBs class16.ppt.
Carnegie Mellon 1 Virtual Memory: Concepts : Introduction to Computer Systems 15 th Lecture, Oct. 14, 2010 Instructors: Randy Bryant and Dave O’Hallaron.
Virtual Memory Oct. 21, 2003 Topics Motivations for VM Address translation Accelerating translation with TLBs class17.ppt “The course that gives.
Virtual Memory Adapted from lecture notes of Dr. Patterson and Dr. Kubiatowicz of UC Berkeley.
Virtual Memory Chapter 8.
Virtual Memory Nov 27, 2007 Slide Source: Topics Motivations for VM Address translation Accelerating translation with TLBs class12.ppt.
Virtual Memory Adapted from lecture notes of Dr. Patterson and Dr. Kubiatowicz of UC Berkeley and Rabi Mahapatra & Hank Walker.
Virtual Memory.
Virtual Memory October 2, 2000
Virtual Memory Topics Motivations for VM Address translation Accelerating translation with TLBs CS213.
Virtual Memory October 30, 2001 Topics Motivations for VM Address translation Accelerating translation with TLBs class19.ppt “The course that gives.
Virtual Memory Chapter 8.
©UCB CS 162 Ch 7: Virtual Memory LECTURE 13 Instructor: L.N. Bhuyan
Virtual Memory October 29, 2007 Topics Address spaces Motivations for virtual memory Address translation Accelerating translation with TLBs class16.ppt.
CSCI2413 Lecture 6 Operating Systems Memory Management 2 phones off (please)
Virtual Memory May 19, 2008 Topics Motivations for VM Address translation Accelerating translation with TLBs EECS213.
Virtual Memory Topics Motivations for VM Address translation Accelerating translation with TLBs CS 105 “Tour of the Black Holes of Computing!”
– 1 – , F’02 Conventional DRAM Organization d x w DRAM: dw total bits organized as d supercells of size w bits cols rows internal row.
CML CML CS 230: Computer Organization and Assembly Language Aviral Shrivastava Department of Computer Science and Engineering School of Computing and Informatics.
Virtual Memory: Concepts
Carnegie Mellon /18-243: Introduction to Computer Systems Instructors: Bill Nace and Gregory Kesden (c) All Rights Reserved. All work.
1 Virtual Memory: Concepts Andrew Case Slides adapted from Jinyang Li, Randy Bryant and Dave O’Hallaron.
Virtual Memory. Virtual Memory: Topics Why virtual memory? Virtual to physical address translation Page Table Translation Lookaside Buffer (TLB)
Virtual Memory March 23, 2000 Topics Motivations for VM Address translation Accelerating address translation with TLBs Pentium II/III memory system
Virtual Memory April 3, 2001 Topics Motivations for VM Address translation Accelerating translation with TLBs class20.ppt.
University of Amsterdam Computer Systems – virtual memory Arnoud Visser 1 Computer Systems Virtual Memory.
Virtual Memory Additional Slides Slide Source: Topics Address translation Accelerating translation with TLBs class12.ppt.
Review °Apply Principle of Locality Recursively °Manage memory to disk? Treat as cache Included protection as bonus, now critical Use Page Table of mappings.
Carnegie Mellon 1 Bryant and O’Hallaron, Computer Systems: A Programmer’s Perspective, Third Edition Virtual Memory: Concepts Slides adapted from Bryant.
Carnegie Mellon 1 Bryant and O’Hallaron, Computer Systems: A Programmer’s Perspective, Third Edition Virtual Memory: Concepts CENG331 - Computer Organization.
Virtual Memory Topics Motivations for VM Address translation Accelerating translation with TLBs CS 105 “Tour of the Black Holes of Computing!”
CS 105 “Tour of the Black Holes of Computing!”
Virtual Memory Topics Address spaces Motivations for virtual memory Address translation Accelerating translation with TLBs.
Virtual Memory Topics Motivations for VM Address translation Accelerating translation with TLBs VM1 CS 105 “Tour of the Black Holes of Computing!”
1 Virtual Memory (I). 2 Outline Physical and Virtual Addressing Address Spaces VM as a Tool for Caching VM as a Tool for Memory Management VM as a Tool.
Virtual Memory CS740 October 13, 1998 Topics page tables TLBs Alpha 21X64 memory system.
1 Virtual Memory. 2 Outline Virtual Space Address translation Accelerating translation –with a TLB –Multilevel page tables Different points of view Suggested.
Virtual Memory 1 Computer Organization II © McQuain Virtual Memory Use main memory as a “cache” for secondary (disk) storage – Managed jointly.
University of Washington Indirection in Virtual Memory 1 Each process gets its own private virtual address space Solves the previous problems Physical.
CS203 – Advanced Computer Architecture Virtual Memory.
Memory Management memory hierarchy programs exhibit locality of reference - non-uniform reference patterns temporal locality - a program that references.
Virtual Memory Alan L. Cox Some slides adapted from CMU slides.
Computer Architecture Lecture 12: Virtual Memory I
Virtual Memory Chapter 8.
ECE232: Hardware Organization and Design
Memory COMPUTER ARCHITECTURE
CS161 – Design and Architecture of Computer
Section 9: Virtual Memory (VM)
CS703 - Advanced Operating Systems
Section 9: Virtual Memory (VM)
Today How was the midterm review? Lab4 due today.
CS 704 Advanced Computer Architecture
Virtual Memory.
CS 105 “Tour of the Black Holes of Computing!”
CS 105 “Tour of the Black Holes of Computing!”
CSE 153 Design of Operating Systems Winter 2018
Virtual Memory: Concepts /18-213/14-513/15-513: Introduction to Computer Systems 17th Lecture, October 23, 2018.
CS 105 “Tour of the Black Holes of Computing!”
Instructors: Majd Sakr and Khaled Harras
Morgan Kaufmann Publishers Memory Hierarchy: Virtual Memory
Virtual Memory Nov 27, 2007 Slide Source:
CS 105 “Tour of the Black Holes of Computing!”
CS 105 “Tour of the Black Holes of Computing!”
CSE 153 Design of Operating Systems Winter 2019
Virtual Memory Use main memory as a “cache” for secondary (disk) storage Managed jointly by CPU hardware and the operating system (OS) Programs share main.
Instructor: Phil Gibbons
Virtual Memory 1 1.
Presentation transcript:

Virtual Memory

 Next level in the memory hierarchy
 Motivations:
   To remove the programming burden of a small, limited amount of main memory
   To allow efficient and safe sharing of memory among multiple programs
 The total memory required by all programs exceeds the amount of main memory
   Only a fraction of the total required memory is active at any time
   Bring only those active portions into physical memory
 Each program uses its own address space
 VM implements the mapping of a program's virtual address space to physical addresses

Motivations  Use physical DRAM as a cache for the disk  Address space of a process can exceed physical memory size  Sum of address spaces of multiple processes can exceed physical memory  Simplify memory management  Multiple processes resident in main memory  Each process with its own address space  Only “active” code and data is actually in memory  Allocate more memory to process as needed  Provide virtually contiguous memory space  Provide protection  One process can’t interfere with another  Because they operate in different address spaces  User process cannot access privileged information  Different sections of address spaces have different permissions. 3

Motivation 1: Caching
 Use DRAM as a cache for the disk
 Full address space is quite large:
   32-bit addresses: 4 billion (~4 x 10^9) bytes
   64-bit addresses: 16 quintillion (~1.6 x 10^19) bytes
 Disk storage is ~300X cheaper than DRAM storage
   160 GB of DRAM: ~$32,000
   160 GB of disk: ~$100
 To access large amounts of data in a cost-effective manner, the bulk of the data must be stored on disk
[Figure: cost per capacity -- SRAM 4 MB: ~$500, DRAM 1 GB: ~$200, disk 160 GB: ~$100]

DRAM vs. Disk
 DRAM vs. disk is more extreme than SRAM vs. DRAM
 Access latencies:
   DRAM: ~10X slower than SRAM
   Disk: ~100,000X slower than DRAM
 Importance of exploiting spatial locality:
   Disk: first byte is ~100,000X slower than successive bytes
   DRAM: ~4X improvement for fast page-mode vs. regular accesses
 Bottom line: design decisions for DRAM caches are driven by the enormous cost of misses

Cache for Disk
 Design parameters
   Line (block) size? Large, since disk is better at transferring large blocks
   Associativity? (Block placement) High, to minimize miss rate
   Block identification? Using tables
   Write-through or write-back? (Write strategy) Write-back, since we can't afford to perform small writes to disk
   Block replacement? Ideally, the block not to be used for the longest time in the future; in practice, approximated with LRU (Least Recently Used)
 Impact of design decisions for the DRAM cache (main memory)
   Miss rate: extremely low, << 1%
   Hit time: matches DRAM performance
   Miss latency: very high, ~20 ms
   Tag storage overhead: low, relative to block size
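The LRU replacement policy mentioned above can be sketched in a few lines. This is an illustrative model, not how an OS implements it; the class and method names (`LRUCache`, `access`) are invented for this sketch:

```python
from collections import OrderedDict

# Toy LRU replacement for a fully associative "cache" of disk pages.
class LRUCache:
    def __init__(self, nframes):
        self.nframes = nframes
        self.frames = OrderedDict()  # vpn -> page data, least recently used first

    def access(self, vpn, load_from_disk):
        if vpn in self.frames:
            self.frames.move_to_end(vpn)       # hit: mark most recently used
            return self.frames[vpn]
        if len(self.frames) >= self.nframes:
            self.frames.popitem(last=False)    # miss with full cache: evict LRU
        self.frames[vpn] = load_from_disk(vpn)
        return self.frames[vpn]
```

An `OrderedDict` keeps insertion order, so moving a hit entry to the end leaves the least recently used page at the front for eviction.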

Locating an Object
 DRAM cache
   Each allocated page of virtual memory has an entry in the page table
   Mapping from virtual pages to physical pages
   A page table entry exists even if the page is not in memory
     In that case it specifies the disk address
   On a miss, the OS page fault handler retrieves the page from disk and places it in DRAM
[Figure: page table mapping object names to locations either in "main memory" or on disk]
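The lookup described above can be modeled with a toy page table; the entries, field names, and addresses here are hypothetical:

```python
# Toy model of the slide's page table: each entry for an allocated page
# records either the physical page it occupies or its disk address.
page_table = {
    0: {"resident": True,  "location": 9},      # VP 0 is in physical page 9
    1: {"resident": False, "location": 0x4A0},  # VP 1 is out on disk
}

def locate(vpn):
    """Return ('memory', PPN) for resident pages, ('disk', addr) otherwise."""
    entry = page_table[vpn]
    return ("memory" if entry["resident"] else "disk", entry["location"])
```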

A System with Physical Memory Only
 Examples: most Cray machines, early PCs, nearly all embedded systems, etc.
 Addresses generated by the CPU correspond directly to bytes in physical memory
[Figure: CPU issues physical addresses 0 to N-1 directly to memory]

A System with Virtual Memory
 Examples: workstations, servers, modern PCs, etc.
 Address translation: hardware converts virtual addresses to physical addresses via an OS-managed lookup table (page table)
[Figure: CPU issues virtual addresses; the page table maps them to physical addresses 0 to P-1 in memory, or to disk]

Page Fault (Cache Miss)
 What if an object is on disk rather than in memory?
   The page table entry indicates that the virtual address is not in memory
   The OS exception handler is invoked to move data from disk into memory
     The current process suspends; others can resume
     The OS has full control over placement, etc.
[Figure: page table before and after the fault -- afterward, the entry points into physical memory]

Servicing a Page Fault
 Processor signals the I/O controller
   Read block of length P starting at disk address X and store starting at memory address Y
 Read occurs
   Direct Memory Access (DMA)
   Under control of the I/O controller
 I/O controller signals completion
   Interrupts the processor
   OS resumes the suspended process
[Figure: (1) initiate block read, (2) DMA transfer from disk over the memory-I/O bus, (3) "read done" interrupt]
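The sequence above can be sketched as a toy service routine. Everything here is invented for illustration: the "disk" is a dictionary, and the DMA transfer and interrupt are collapsed into ordinary statements:

```python
# Toy page-fault service routine: read the faulting page from a fake
# disk into a memory frame, then mark the process runnable again.
disk = {0x4A0: b"page contents"}          # disk address -> block of length P
memory = {}                               # frame address -> block

def service_page_fault(disk_addr, frame_addr, process):
    process["state"] = "suspended"        # current process suspends
    memory[frame_addr] = disk[disk_addr]  # steps (1)-(2): block read via DMA
    process["state"] = "runnable"         # step (3): completion interrupt; OS resumes it

proc = {"state": "running"}
service_page_fault(0x4A0, 0x2000, proc)
```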

Motivation 2: Management
 Multiple processes can reside in physical memory
 Simplifying linking
   Each process sees the same basic format of its memory image
[Figure: Linux/x86 process memory image, from address 0 upward -- forbidden region, program text (.text), initialized data (.data), uninitialized data (.bss), runtime heap (via malloc, bounded by the "brk" pointer), memory-mapped region for shared libraries, stack (at %esp), and kernel virtual memory invisible to user code]

Virtual Address Space
 Virtual and physical address spaces are divided into equal-sized blocks called "pages" (both virtual and physical)
 Each process has its own virtual address space (and its own page table)
 The OS controls how virtual pages are assigned to physical memory
 Two processes can share physical pages by mapping them in both page tables (e.g., read-only library code)
[Figure: process 1 maps VP 1 -> PP 2 and VP 2 -> PP 7; process 2 maps VP 1 -> PP 7 and VP 2 -> PP 10, so PP 7 is shared]
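The sharing in this slide is easy to model: two private page tables that happen to name the same physical page. The VP/PP numbers are taken from the slide's figure:

```python
# Each process has its own page table (VPN -> PPN); physical page 7
# (e.g., read-only library code) appears in both, so it is shared.
page_table_p1 = {1: 2, 2: 7}    # process 1: VP 1 -> PP 2, VP 2 -> PP 7
page_table_p2 = {1: 7, 2: 10}   # process 2: VP 1 -> PP 7, VP 2 -> PP 10

shared = set(page_table_p1.values()) & set(page_table_p2.values())
```

Note the two processes use different virtual page numbers (VP 2 vs. VP 1) for the same physical page; the mapping is private even when the memory is shared.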

Memory Management
 Allocating, deallocating, and moving memory
   Can be done by manipulating page tables
 Allocating contiguous chunks of memory
   Can map a contiguous range of virtual addresses to disjoint ranges of physical addresses
 Loading executable binaries
   Just fix up the page tables for the process
   Data in the binaries are paged in on demand
 Protection
   Store protection information in page table entries
   Usually checked by hardware

Motivation 3: Protection
 Page table entries contain access rights information
 Hardware enforces this protection (traps into the OS if a violation occurs)
[Figure: per-process page tables with Read?, Write?, and SUP? (supervisor-only) bits on each entry; e.g., process i's VP 0 maps to PP 9 with read but not write permission]
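The hardware check can be sketched as below. The function and field names are invented; real hardware performs this check in parallel with address translation and raises a fault rather than a Python exception:

```python
def check_access(entry, op, supervisor=False):
    """Raise (i.e., 'trap') if the access violates the entry's permission bits."""
    if entry.get("sup") and not supervisor:
        raise PermissionError("supervisor-only page")
    if not entry.get(op):
        raise PermissionError(f"{op} access not permitted")

# Process i, VP 0 from the slide: readable, not writable, not supervisor-only
vp0 = {"read": True, "write": False, "sup": False}
check_access(vp0, "read")    # allowed; a write attempt would raise
```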

VM Address Translation  Page size = 2 12 = 4 KB  # Physical pages = 2 18 => 2 (12+18) = 1 GB Main Mem.  Virtual address space = 2 32 = 4 GB 16

VM Address Translation: Hit
[Figure: the processor issues virtual address a; the hardware address translation mechanism (part of the on-chip memory management unit, MMU) converts it to physical address a', which goes to main memory]

VM Address Translation: Miss
[Figure: on a page fault, the hardware address translation mechanism (part of the on-chip MMU) invokes the fault handler; the OS transfers the page from secondary memory to main memory (only on a miss), after which translation produces physical address a']

Page Table
 Virtual address = virtual page number (VPN, bits n-1..p) + page offset (bits p-1..0)
 Physical address = physical page number (PPN, bits m-1..p) + the same page offset
 A page table base register points to the current page table
 The VPN acts as the table index; each entry holds a valid bit, the PPN, and access bits
 If valid = 0, the page is not in memory
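The bit-level split described above can be written as a minimal sketch. Here the page table is just a dict from VPN to PPN, and a missing entry stands in for a valid bit of 0 (a page fault):

```python
P = 12                                    # page offset bits (4 KB pages)

def translate(vaddr, page_table):
    vpn = vaddr >> P                      # high bits index the page table
    offset = vaddr & ((1 << P) - 1)       # low bits pass through unchanged
    ppn = page_table[vpn]                 # a KeyError here models a page fault
    return (ppn << P) | offset
```

For example, with a table mapping VPN 3 to PPN 5, virtual address 0x3ABC translates to 0x5ABC: only the page-number bits change.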

Multi-Level Page Table
 Given:
   4 KB (2^12) page size
   32-bit address space
   4-byte PTE
 Problem:
   Would need a 4 MB page table! (2^20 entries x 4 bytes)
 Common solution: multi-level page tables
   e.g., 2-level table (P6)
   Level 1 table: 1024 entries, each of which points to a Level 2 page table
   Level 2 table: 1024 entries, each of which points to a page
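A sketch of the 2-level walk for this configuration: the 32-bit address splits into 10 + 10 + 12 bits. Tables are modeled as dicts, and an absent Level 2 table stands in for a whole unmapped 4 MB region (which is the space saving):

```python
def walk(vaddr, level1):
    l1_idx = vaddr >> 22                 # top 10 bits pick the Level 2 table
    l2_idx = (vaddr >> 12) & 0x3FF       # next 10 bits pick the entry in it
    offset = vaddr & 0xFFF               # low 12 bits are the page offset
    level2 = level1[l1_idx]              # KeyError: entire 4 MB region unmapped
    ppn = level2[l2_idx]                 # KeyError: this one page unmapped
    return (ppn << 12) | offset
```

A process that touches only a few regions of its address space needs only a few Level 2 tables, instead of one flat 4 MB table.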

TLB
 "Translation Lookaside Buffer" (TLB)
   Small hardware cache in the MMU
   Maps virtual page numbers to physical page numbers
   Contains complete page table entries for a small number of pages
[Figure: the CPU sends the VA to the TLB; on a hit the PA goes straight to the cache; on a miss, translation walks the page table in main memory first]
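The hit/miss flow in this slide, as a toy model. Both the TLB and the page table are dicts here; a real TLB is a small associative hardware structure that also checks process context and permission bits:

```python
def tlb_lookup(vpn, tlb, page_table):
    """Return (ppn, hit). On a miss, walk the page table and fill the TLB."""
    if vpn in tlb:
        return tlb[vpn], True      # TLB hit: no page-table access needed
    ppn = page_table[vpn]          # TLB miss: walk the page table in memory
    tlb[vpn] = ppn                 # cache the translation for next time
    return ppn, False
```

The first access to a page misses and pays for a page-table walk; repeated accesses to the same page hit in the TLB, which is why locality makes translation cheap.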

Address Translation with TLB
[Figure: the virtual page number indexes the TLB (valid bit, tag, physical page number); on a TLB hit, the physical page number is concatenated with the page offset to form the physical address, whose tag, index, and byte offset then access the cache]

MIPS R2000 TLB

VM Summary
 Programmer's view
   Large "flat" address space
   Can allocate large blocks of contiguous addresses
   Process "owns" the machine
     Has a private address space
     Unaffected by the behavior of other processes
 System view
   Uses a virtual address space created by mapping to a set of pages
     Need not be contiguous
     Allocated dynamically
   Enforces protection during address translation
   OS manages many processes simultaneously
     Continually switching among processes
     Especially when one must wait for resources, e.g., disk I/O to handle page faults