EECS 470 Virtual Memory Lecture 15

Why Use Virtual Memory?
– Decouples the size of physical memory from the programmer-visible virtual address space
– Provides a convenient point to implement memory protection
– Creates a flexible mechanism for shared-memory communication

Address Translation
– Partition memory into pages
  – Typically 4 KB or 8 KB – trade-offs?
– The virtual page number (VPN) is the index into the page table
  – Produces a page table entry (PTE)
  – Holds the address, permissions, and availability
– The physical page number (PPN) replaces the VPN
  – To form the physical address
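As a rough illustration of this lookup, here is a minimal C sketch assuming 32-bit addresses, 4 KB pages, and a single-level page table; the pte_t fields and the translate function are illustrative, not any particular ISA's format.

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 12                    /* 4 KB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_VPNS   (1u << (32 - PAGE_SHIFT))

typedef struct {
    uint32_t ppn;       /* physical page number */
    bool     valid;     /* is the page resident in physical memory? */
    bool     writable;  /* permission bit */
} pte_t;

static pte_t page_table[NUM_VPNS];       /* one PTE per virtual page */

/* Translate a 32-bit virtual address to a physical address. */
uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;      /* VPN indexes the page table */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* page offset passes through untranslated */
    pte_t pte = page_table[vpn];
    if (!pte.valid)
        return 0;                               /* a real MMU would raise a page fault here */
    return (pte.ppn << PAGE_SHIFT) | offset;    /* PPN replaces VPN to form the physical address */
}
```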

Fast Address Translation
– Cache recent translations
  – In the translation lookaside buffer (TLB)
  – Provides single-cycle access
– The TLB is typically small
  – 32 to 128 entries
  – As a result, it can be highly associative
– Loaded with PTEs as page table translations occur
  – What is replaced?
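Conceptually the TLB is just an associative match on the VPN; a software sketch follows (the 64-entry size and field names are assumptions, and hardware performs all comparisons in parallel in a single cycle rather than looping).

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 64

typedef struct {
    uint32_t vpn;
    uint32_t ppn;
    bool     valid;
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* Return true and set *ppn on a hit; false means fall back to the page walk. */
bool tlb_lookup(uint32_t vpn, uint32_t *ppn)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *ppn = tlb[i].ppn;
            return true;
        }
    }
    return false;
}
```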

TLB Miss Handling
– If the TLB access misses
  – This is either a new translation or a previously replaced one
– Initiate the TLB miss handler
  – Walk the page tables
  – Replace an entry in the TLB with the PTE
  – The page walker can be implemented in hardware or software – trade-offs?
– If the PTE is marked invalid
  – The page is not resident in physical memory
  – Declare a page fault exception
  – The OS takes over from there…
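A software miss handler for an assumed two-level table might look roughly like this; tlb_insert and raise_page_fault are hypothetical helpers standing in for the TLB refill and the exception mechanism.

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT 12

typedef struct { uint32_t ppn; unsigned valid : 1; } pte_t;

/* second-level tables, indexed by the top 10 bits of the VPN; NULL means not allocated */
static pte_t *page_directory[1024];

void tlb_insert(uint32_t vpn, pte_t pte);   /* hypothetical: refills a TLB entry (victim chosen by policy) */
void raise_page_fault(uint32_t vpn);        /* hypothetical: hands control to the OS fault handler */

void tlb_miss_handler(uint32_t vaddr)
{
    uint32_t vpn = vaddr >> PAGE_SHIFT;
    uint32_t dir = vpn >> 10;               /* top 10 bits pick the second-level table */
    uint32_t idx = vpn & 0x3FF;             /* low 10 bits index within it */

    pte_t *table = page_directory[dir];
    if (table == NULL || !table[idx].valid) {
        raise_page_fault(vpn);              /* page not resident: the OS takes over */
        return;
    }
    tlb_insert(vpn, table[idx]);            /* refill the TLB, then retry the faulting access */
}
```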

Maintaining a Coherent TLB
– The TLB must reflect changes to the address mapping
  – Physical page replacement [use TLB entry invalidation]
  – Physical page allocation [use TLB entry initialization]
– Context switches
  – Essentially replace every entry in the TLB
  – How does the hardware recognize a context switch?
  – Naïve approaches can lead to expensive context switches, due to the many accompanying TLB misses
  – Optimization: tag translations with address space IDs (ASIDs)
    – Processor control state records the current process's ASID, updated by the OS at each context switch
    – An ASID field is included in each TLB entry; only entries whose ASID matches the current process's ASID can hit
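With ASIDs the TLB hit condition simply gains one extra compare; a sketch under the same assumptions as before (the 8-bit ASID width and the current_asid register name are illustrative).

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t vpn;
    uint32_t ppn;
    uint8_t  asid;    /* address space ID of the owning process */
    bool     valid;
} tlb_entry_t;

static tlb_entry_t tlb[64];
static uint8_t current_asid;   /* processor control state, written by the OS at a context switch */

bool tlb_lookup(uint32_t vpn, uint32_t *ppn)
{
    for (int i = 0; i < 64; i++) {
        /* an entry only hits if it belongs to the currently running address space */
        if (tlb[i].valid && tlb[i].asid == current_asid && tlb[i].vpn == vpn) {
            *ppn = tlb[i].ppn;
            return true;
        }
    }
    return false;
}
```

Because entries are tagged, a context switch only updates current_asid; stale translations from other processes simply stop matching instead of having to be flushed.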

Implementing VM with Caches
– Uses a virtually indexed, physically tagged (VIPT) cache
– What is the advantage of a virtually indexed cache?
– What is the disadvantage of a virtually tagged cache?
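The advantage of virtual indexing can be seen in a small sketch: if the index is drawn entirely from untranslated page-offset bits, set selection can begin before the TLB produces the PPN. The geometry here (direct-mapped, 64 sets of 64-byte blocks, 4 KB total) and the helpers tlb_translate and read_tag are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT  12
#define BLOCK_SHIFT 6     /* 64-byte blocks */
#define INDEX_BITS  6     /* 64 sets; index + block offset fit inside the 12-bit page offset */

uint32_t tlb_translate(uint32_t vpn);   /* hypothetical: returns the PPN for this VPN */
uint32_t read_tag(uint32_t set);        /* hypothetical: physical tag stored in that set */

bool cache_hit(uint32_t vaddr)
{
    /* the set index uses only untranslated bits, so the data and tag arrays
       can be read while the TLB is still translating */
    uint32_t set = (vaddr >> BLOCK_SHIFT) & ((1u << INDEX_BITS) - 1);

    /* the tag comparison uses the physical page number, so it waits for the TLB */
    uint32_t ppn = tlb_translate(vaddr >> PAGE_SHIFT);
    return read_tag(set) == ppn;
}
```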

Virtual Address Synonyms
– Problem: processes can share physical memory at different virtual address locations
  – What if these virtual addresses map to different locations in the cache? (cache aliases)
– Solutions:
  – Don't let processes share memory (e.g., no DLLs)
  – Use a physically indexed cache (S L O W)
  – Force shared memory to be aligned to the set size of the cache (i.e., the translated bits used to index the cache will equal the corresponding physical address bits)
  – Force all cache set sizes <= the page size (i.e., no translated bits are allowed to index the cache) – the most popular solution
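To see why the last solution removes aliasing, a quick back-of-the-envelope check with assumed cache parameters: if the bytes indexed by one way fit within a page, every index bit comes from the page offset, so two virtual aliases of the same physical page always land in the same set.

```c
#include <stdio.h>

int main(void)
{
    unsigned page_size  = 4096;               /* 4 KB pages */
    unsigned cache_size = 32 * 1024;          /* 32 KB cache (illustrative) */
    unsigned ways       = 8;
    unsigned per_way    = cache_size / ways;  /* bytes indexed by one way = "set size" */

    printf("bytes indexed per way: %u, page size: %u -> %s\n",
           per_way, page_size,
           per_way <= page_size ? "no translated bits index the cache"
                                : "aliases may map to different sets");
    return 0;
}
```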

Case Study – Pentium 4

Pentium 4 Page Directory/Table Entries
– Global pages are not flushed from the TLB
  – Reduces the impact of context switches
– The Accessed bit is used by the OS to implement the clock algorithm
– The Cache Disabled bit is used to mark memory-mapped I/O
– The Present bit is used to implement swapping
– The Dirty bit tracks writes to the page
– The U/S and R/W bits implement page permissions
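These flags correspond to the low bits of an IA-32 page-directory/page-table entry. The bitfield below is a reading aid only: bitfield layout is compiler-dependent, so real code manipulates the raw 32-bit word, and bit 7 is the page-size bit in directory entries rather than PAT.

```c
#include <stdint.h>

/* IA-32 4 KB page-table entry: low 12 bits are flags, high 20 bits are the frame number */
typedef struct {
    uint32_t present         : 1;  /* 0 = not resident; an access raises a page fault (swapping) */
    uint32_t read_write      : 1;  /* 0 = read-only, 1 = writable */
    uint32_t user_supervisor : 1;  /* 0 = kernel only, 1 = user accessible */
    uint32_t write_through   : 1;
    uint32_t cache_disabled  : 1;  /* set for memory-mapped I/O regions */
    uint32_t accessed        : 1;  /* set by hardware on any access; cleared by the OS (clock algorithm) */
    uint32_t dirty           : 1;  /* set by hardware on a write; page must be written back if evicted */
    uint32_t pat             : 1;
    uint32_t global          : 1;  /* translation survives TLB flushes at context switches */
    uint32_t available       : 3;  /* ignored by hardware, free for OS use */
    uint32_t frame           : 20; /* physical page number */
} ia32_pte_t;
```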

Application-Specific Superpage Entries
– Used to map large objects treated as a single unit
  – Operating system code
  – Video frame buffer
– A superpage works just like a normal page, but maps a larger space
  – Reduces "pressure" on the TLB
  – Implications for TLB design?
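A rough sense of the "pressure" argument, with an assumed 64-entry TLB and IA-32-style 4 MB superpages: the same number of entries covers vastly more memory.

```c
#include <stdio.h>

int main(void)
{
    unsigned long long entries    = 64;
    unsigned long long small_page = 4ULL << 10;   /* 4 KB pages */
    unsigned long long super_page = 4ULL << 20;   /* 4 MB superpages */

    /* "TLB reach": how much memory is mapped without taking a miss */
    printf("reach with 4 KB pages: %llu KB\n", entries * small_page / 1024);
    printf("reach with 4 MB pages: %llu MB\n", entries * super_page / (1024 * 1024));
    return 0;
}
```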

Case Study – Alpha 21264