
Synonyms
- v.p. x of process A and v.p. y of process B map to the same physical page
- In a virtually indexed cache they can map to synonyms: different cache locations holding copies of the same physical data
- To avoid synonyms, the O.S. or hardware forces the cache-index bits that come from the virtual page number to be the same
Mem. Hier II CSE 471 Aut 02
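A small sketch of where synonyms come from, with invented parameters (4 KB pages, a 32 KB direct-mapped cache with 32-byte lines): the set index uses bits 5-14 of the address, but only bits 0-11 are the page offset, so bits 12-14 of the index come from the virtual page number and two synonyms can land in different sets.

```python
# Hypothetical parameters, for illustration only.
PAGE_OFFSET_BITS = 12   # 4 KB pages
LINE_BITS = 5           # 32-byte lines
INDEX_BITS = 10         # 32 KB / 32 B = 1024 sets

def cache_set(vaddr):
    """Set index of the virtually indexed cache: address bits [5..14]."""
    return (vaddr >> LINE_BITS) & ((1 << INDEX_BITS) - 1)

# Process A's v.p. x and process B's v.p. y, assumed to map to the same
# physical page.  Same page offset, but different virtual page numbers,
# so index bits 12-14 differ and the synonyms fall in different sets.
va_x = 0x3040   # v.p. 0x3, offset 0x040
va_y = 0x7040   # v.p. 0x7, same offset

print(cache_set(va_x) == cache_set(va_y))   # False: two cache locations
```

Forcing bits 12-14 of the two virtual pages to agree (the fix the slide mentions) makes both addresses index the same set.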

Choosing a Page Size
Large page size:
- Smaller page table
- Better coverage in the TLB
- Amortizes the time to bring a page in from disk
- Allows fast access to a larger L1 (virtually indexed, physically tagged)
BUT:
- More fragmentation (unused portions of the page)
- Longer start-up time
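The "smaller page table" point is simple arithmetic. An illustrative example (not from the slides), assuming a 32-bit address space, a flat one-level page table, and 4-byte PTEs:

```python
ADDR_BITS = 32   # assumed 32-bit virtual address space
PTE_BYTES = 4    # assumed 4-byte page-table entries

def flat_table_bytes(page_bytes):
    """Size of a flat page table: one PTE per virtual page."""
    n_pages = 2 ** ADDR_BITS // page_bytes
    return n_pages * PTE_BYTES

print(flat_table_bytes(4 * 1024))    # 4 KB pages  -> 4 MB of PTEs
print(flat_table_bytes(64 * 1024))   # 64 KB pages -> 256 KB of PTEs
```

Growing the page size by 16x shrinks the table by 16x, while each TLB entry now covers 16x more memory.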

Allowing Multiple Page Sizes
- Recent microprocessors support multiple page sizes; better use of the TLB is the main reason
- With only two page sizes (one basic, one large), a single bit in the PTE is sufficient (with don't-care masks)
- With more than two sizes, a field in the PTE indicates the size of the page (a multiple of the basic size)
- A large page must occupy physically contiguous frames
- Heuristics are needed to coalesce/split pages dynamically
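A minimal sketch of the "size field in the PTE" idea. The layout is invented for illustration (a 2-bit field encoding the page size as a power-of-4 multiple of a 4 KB base, loosely in the spirit of MIPS/Alpha-style variable page sizes); it is not a real ISA's encoding.

```python
BASE_PAGE = 4 * 1024   # assumed 4 KB basic page size

def page_size(pte):
    """Decode a hypothetical 2-bit size field in the PTE's low bits:
    0 -> 4 KB, 1 -> 16 KB, 2 -> 64 KB, 3 -> 256 KB."""
    size_field = pte & 0b11
    return BASE_PAGE << (2 * size_field)

print(page_size(0b00))   # 4096
print(page_size(0b10))   # 65536
```

With only two sizes, the field degenerates to the single bit the slide mentions.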

Protection
- Protect user processes from each other
  - Disallow an unauthorized process from accessing (read, write, execute) part of another process's address space
- Disallow the use of some instructions by user processes
  - E.g., disallow process A from changing the access rights of process B
- This leads to kernel mode (operating system) vs. user mode
  - Only kernel mode can use certain privileged instructions (e.g., disabling interrupts, I/O functions)
  - System calls provide the mechanism by which the CPU switches between user and kernel mode

Where to Put Access Rights
- Associate protection bits with the PTE
  - E.g., disallow a user to read/write/execute in kernel space, or to read another process's program (hence the TLB check even for virtually addressed I-caches)
  - E.g., allow one process to share data read-only with another, but not to modify it
- The user/kernel protection model can be extended with rings of protection
  - E.g., the Pentium has 4 levels of protection
- Further extension leads to the concept of capabilities: access rights to objects are passed from one process to another
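The per-PTE rights check can be sketched in a few lines. The bit assignments and the user/kernel split below are assumptions for illustration, not any real ISA's PTE layout:

```python
# Hypothetical protection bits stored alongside a PTE.
READ, WRITE, EXEC, KERNEL_ONLY = 1, 2, 4, 8

def access_ok(pte_rights, requested, in_kernel_mode):
    """Check a requested access against a PTE's protection bits."""
    if (pte_rights & KERNEL_ONLY) and not in_kernel_mode:
        return False   # user code may not touch kernel space
    return (pte_rights & requested) == requested

shared_ro = READ   # a page shared read-only between two processes
print(access_ok(shared_ro, READ, False))    # True:  sharing allowed
print(access_ok(shared_ro, WRITE, False))   # False: modification refused
```

Because the check keys off the PTE, the TLB can perform it in parallel with translation, which is why even virtually addressed I-caches still consult the TLB for rights.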

I/O and Caches (a primer for cache coherence)
[Figure: processor, cache, main memory, and disk. The "green" page is transferred between disk and memory; the "red" line of that page is mapped in the cache.]

I/O Passes Through the Cache
- I/O data passes through the cache on its way from/to memory (the previous figure does not reflect this possibility)
- No coherency problem, but contention for cache access between the processor and the I/O
- Wipes out an entire page's worth of the cache (i.e., replaces useful lines from other pages that conflict with the page being transferred)
- Implemented in Amdahl machines (mainframes, circa 1980) without on-chip caches

I/O and Caches: Use of the O.S.
- I/O interacts directly with main memory; coherence becomes a software problem
- Output (memory -> disk):
  - Write-through cache: no problem, since the correct information is already in memory
  - Write-back cache: purge the cache of all dirty lines in the page via O.S. intervention (the ISA needs a purge instruction)
- Input (WT and WB): use a non-cacheable buffer; after the input completes, flush the cache of those addresses and make the buffer cacheable
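The two O.S.-level steps can be sketched with a toy write-back cache model (the class and method names are illustrative, not from the slides): purge dirty lines of a page to memory before an output DMA, and invalidate any cached lines of the buffer after an input DMA.

```python
PAGE = 4096   # assumed page size

class Cache:
    """Toy write-back cache: addr -> (data, dirty)."""
    def __init__(self):
        self.lines = {}

    def purge_page(self, page, memory):
        """WB output case: write back dirty lines in `page`, then drop them,
        so the DMA engine reads up-to-date data from memory."""
        for addr in [a for a in self.lines if a // PAGE == page]:
            data, dirty = self.lines.pop(addr)
            if dirty:
                memory[addr] = data

    def invalidate_page(self, page):
        """Input case: drop stale lines after the DMA wrote the page in memory."""
        for addr in [a for a in self.lines if a // PAGE == page]:
            del self.lines[addr]

mem = {0x1000: "old"}
c = Cache()
c.lines[0x1000] = ("new", True)   # dirty line in page 1
c.purge_page(1, mem)              # O.S. purge before the output DMA
print(mem[0x1000])                # "new": correct data reaches the disk
```

Without the purge, the DMA would ship the stale "old" value to disk while the up-to-date copy sat dirty in the cache.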

Software Solution: Output
- WT case: the contents of the "red" line are already in memory, so there is no problem
- WB case: if the "red" line is dirty, first write it back to memory. It may also be wise to invalidate all cache entries belonging to the page

I/O Consistency: Hardware Approach
- A subset of the shared-bus multiprocessor cache coherence protocol
- The cache has a duplicate set of tags, allowing concurrent checking of the bus and servicing of requests from the processor
- The cache controller snoops on the bus:
  - On input, if there is a tag match, store the data in both the cache and main memory
  - On output, if there is a tag match: if WT, invalidate the entry; if WB, take the cache entry instead of the memory contents, then invalidate
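The snooping rules above can be condensed into a small sketch (a write-back cache is assumed; the function and dictionary shapes are invented for illustration):

```python
def snoop(cache, memory, op, addr, data=None):
    """Snooping controller for I/O bus transfers.
    cache: addr -> (data, dirty); memory: addr -> data."""
    hit = addr in cache
    if op == "input":                  # disk -> memory transfer on the bus
        memory[addr] = data
        if hit:
            cache[addr] = (data, False)   # update both copies on a match
        return data
    if op == "output":                 # memory -> disk transfer
        if hit:                        # WB: cache supplies the line and
            line, dirty = cache.pop(addr)            # invalidates it
            return line if dirty else memory[addr]
        return memory[addr]

mem = {0x40: "stale"}
cache = {0x40: ("fresh", True)}        # dirty line: memory copy is stale
print(snoop(cache, mem, "output", 0x40))   # "fresh": taken from the cache
```

This mirrors the slide: on output the dirty cache copy wins over stale memory, and on input both copies are kept consistent.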

Hardware Solution: Output
- The cache controller snoops on the bus. When it is time to transfer the "red" line, its contents come from the cache and the line is invalidated

Hardware Solution: Input
- The cache controller snoops on the bus. If there is a tag match, the data is stored both in the cache and in main memory