CSC3050 – Computer Architecture

Prof. Yeh-Ching Chung
School of Science and Engineering
Chinese University of Hong Kong, Shenzhen

Virtual Memory
- Use main memory as a "cache" for secondary (disk) storage
- Managed jointly by CPU hardware and the operating system (OS)
- Programs share main memory
  - Each gets a private virtual address space holding its frequently used code and data
  - Protected from other programs
- CPU and OS translate virtual addresses to physical addresses
  - A VM "block" is called a page
  - A VM translation "miss" is called a page fault

Two Programs Sharing Physical Memory
- A program's address space is divided into pages (all one fixed size) or segments (variable sizes)
[Figure: the virtual address spaces of Program 1 and Program 2 mapped onto shared main memory]

Address Translation
- A virtual address is translated to a physical address by a combination of hardware and software
- So each memory request first requires an address translation from the virtual space to the physical space
- A virtual memory miss (i.e., when the page is not in physical memory) is called a page fault

Virtual Address (VA):   | virtual page number (bits 31-12)  | page offset (bits 11-0) |
                                        | translation (page number only)
                                        v
Physical Address (PA):  | physical page number (bits 29-12) | page offset (bits 11-0) |
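A minimal C sketch of the split shown above, assuming 4 KiB pages (a 12-bit offset) and the 32-bit virtual / 30-bit physical layout on this slide. The toy translate_vpn() is purely illustrative and stands in for the real page-table lookup.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_OFFSET_BITS 12                 /* 4 KiB pages => 12-bit offset */
    #define PAGE_SIZE (1u << PAGE_OFFSET_BITS)

    /* Toy stand-in for the page table: maps VPN i to PPN i + 1. */
    static uint32_t translate_vpn(uint32_t vpn) { return vpn + 1; }

    static uint32_t translate(uint32_t va) {
        uint32_t vpn    = va >> PAGE_OFFSET_BITS;   /* VA bits 31-12 */
        uint32_t offset = va & (PAGE_SIZE - 1);     /* VA bits 11-0  */
        uint32_t ppn    = translate_vpn(vpn);       /* PA bits 29-12 */
        return (ppn << PAGE_OFFSET_BITS) | offset;  /* offset passes through */
    }

    int main(void) {
        uint32_t va = 0x00403ABCu;
        printf("VA 0x%08X -> PA 0x%08X\n", (unsigned)va, (unsigned)translate(va));
        return 0;
    }

Note that only the page number is translated; the page offset is identical in both addresses.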

Page Fault Penalty
- On a page fault, the page must be fetched from disk
  - Takes millions of clock cycles
  - Handled by OS code
- Try to minimize the page fault rate
  - Fully associative placement in memory
  - Smart replacement algorithms

Page Tables
- Stores placement information
  - Array of page table entries (PTEs), indexed by virtual page number
  - A page table register in the CPU points to the page table in physical memory
- If the page is present in memory
  - The PTE stores the physical page number
  - Plus other status bits (referenced, dirty, ...)
- If the page is not present
  - The PTE can refer to a location in swap space on disk
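As a concrete but simplified illustration of the above, here is a C sketch of a PTE and the lookup. The field names and widths are assumptions for this example, not a real hardware format; note that a flat table for a 32-bit VA with 4 KiB pages needs 2^20 entries.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint32_t ppn   : 18;  /* physical page number, if present       */
        uint32_t valid : 1;   /* page is present in physical memory     */
        uint32_t dirty : 1;   /* page written since it was brought in   */
        uint32_t ref   : 1;   /* page accessed recently (use bit)       */
        uint32_t swap  : 11;  /* otherwise: (part of) a swap-space slot */
    } pte_t;

    #define NUM_VPNS (1u << 20)          /* 32-bit VA, 4 KiB pages        */
    static pte_t page_table[NUM_VPNS];   /* indexed by virtual page number */

    /* Returns true and sets *ppn on success; false signals a page fault
     * that the OS must service before the access can be retried. */
    static bool pt_lookup(uint32_t vpn, uint32_t *ppn) {
        pte_t e = page_table[vpn];
        if (!e.valid)
            return false;                /* PTE refers to disk, not memory */
        *ppn = e.ppn;
        return true;
    }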

Address Translation Mechanisms

Replacement and Writes
- To reduce the page fault rate, prefer least-recently used (LRU) replacement
  - A reference bit (aka use bit) in the PTE is set to 1 on any access to the page
  - Periodically cleared to 0 by the OS
  - A page with reference bit = 0 has not been used recently
- Disk writes take millions of cycles
  - Write a block at once, not individual locations
  - Write-through is impractical, so use write-back
  - A dirty bit in the PTE is set when the page is written
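A sketch of this reference-bit approximation of LRU, continuing the hypothetical pte_t/page_table from the previous sketch. Real systems typically use a clock-style sweep over physical frames; the linear scans here are only to show the idea.

    /* The OS periodically clears every reference bit; hardware sets the
     * bit again on each access. At replacement time, a page whose bit is
     * still 0 has not been used since the last sweep. */
    static void clear_reference_bits(void) {
        for (uint32_t vpn = 0; vpn < NUM_VPNS; vpn++)
            page_table[vpn].ref = 0;
    }

    /* Prefer to evict a resident page that was not recently used. */
    static uint32_t choose_victim(void) {
        for (uint32_t vpn = 0; vpn < NUM_VPNS; vpn++)
            if (page_table[vpn].valid && !page_table[vpn].ref)
                return vpn;
        return 0;   /* fallback: everything was recently used */
    }

On eviction, a victim with the dirty bit set must first be written back to swap space; a clean victim can simply be dropped.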

Fast Translation Using a TLB
- Address translation would appear to require extra memory references
  - One to access the PTE
  - Then the actual memory access
- But access to page tables has good locality
  - So use a fast cache of PTEs within the CPU
  - Called a Translation Look-aside Buffer (TLB)
  - Typical: 16-512 PTEs, 0.5-1 cycle for a hit, 10-100 cycles for a miss, 0.01%-1% miss rate
  - Misses can be handled in hardware or in software
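The following C sketch shows a TLB sitting in front of the page table, assuming a tiny 16-entry fully associative design; the sequential tag scan is only for clarity, since real TLBs compare all tags in parallel.

    #define TLB_ENTRIES 16

    typedef struct { uint32_t vpn, ppn; bool valid; } tlb_entry_t;
    static tlb_entry_t tlb[TLB_ENTRIES];

    /* Hit: the translation is available in roughly 0.5-1 cycle.
     * Miss: fall back to the page table (10-100 cycles). */
    static bool tlb_lookup(uint32_t vpn, uint32_t *ppn) {
        for (int i = 0; i < TLB_ENTRIES; i++)
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *ppn = tlb[i].ppn;
                return true;
            }
        return false;
    }

With the typical figures above, e.g. a 1% miss rate and a 20-cycle miss penalty, the average translation cost is about 1 + 0.01 x 20 = 1.2 cycles.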


TLB Misses
- If the page is in memory
  - Load the PTE from memory and retry
  - Can be handled in hardware or in software
- If the page is not in memory (page fault)
  - The OS handles fetching the page and updating the page table
  - Then the faulting instruction is restarted
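Putting the pieces together, here is a sketch of the decision flow on each access, reusing tlb_lookup() and pt_lookup() from the sketches above; os_handle_page_fault() is a stub standing in for the real OS handler.

    /* Stub OS handler: "fetch" the page and mark it present. The real
     * handler also picks a victim frame and updates that page's PTE. */
    static void os_handle_page_fault(uint32_t vpn) {
        page_table[vpn].valid = 1;
        page_table[vpn].ppn   = vpn + 1;   /* illustrative placement */
    }

    /* Trivial round-robin TLB refill. */
    static void tlb_refill(uint32_t vpn, uint32_t ppn) {
        static int next = 0;
        tlb[next] = (tlb_entry_t){ .vpn = vpn, .ppn = ppn, .valid = true };
        next = (next + 1) % TLB_ENTRIES;
    }

    static uint32_t access_translate(uint32_t vpn) {
        uint32_t ppn;
        for (;;) {
            if (tlb_lookup(vpn, &ppn))      /* common case: TLB hit      */
                return ppn;
            if (pt_lookup(vpn, &ppn)) {     /* TLB miss, PTE in memory   */
                tlb_refill(vpn, ppn);       /* load the PTE into the TLB */
                continue;                   /* retry the access          */
            }
            os_handle_page_fault(vpn);      /* page fault: OS loads the
                                               page, then the access is
                                               restarted                 */
        }
    }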

TLB and Cache

TLB Event Combinations
- TLB miss: PTE not in TLB
- Page table miss: page not in memory
- Cache miss: block not in cache

TLB    Page Table   Cache      Possible? Under what circumstances?
Hit    Hit          Hit/Miss   Yes, although the page table is never checked if the TLB hits.
Miss   Hit          Hit        Yes: TLB misses, PA in page table, data in cache.
Miss   Hit          Miss       Yes: TLB misses, PA in page table, data not in cache.
Miss   Miss         Miss       Yes: page fault.
Hit    Miss         Miss/Hit   Impossible: cannot have a TLB translation if the page is not in memory.
Miss   Miss         Hit        Impossible: data cannot be allowed in the cache if the page is not in memory.

The Hardware / Software Boundary
Which parts of address translation are done by hardware, and which by software?
- TLB that caches recent translations (hardware)
  - TLB access time is part of the cache hit time
  - May be allotted an extra pipeline stage
- Page table storage, fault detection, and updating
  - Dirty and reference bits (hardware)
  - Page faults result in interrupts (software)
  - Disk placement (software)

The Memory Hierarchy
- Common principles apply at all levels of the memory hierarchy
  - Based on notions of caching
- At each level in the hierarchy, four questions arise:
  - Block placement
  - Finding a block
  - Replacement on a miss
  - Write policy

Block Placement
- Determined by associativity
  - Direct mapped (1-way associative): one choice for placement
  - n-way set associative: n choices within a set
  - Fully associative: any location
- Higher associativity reduces the miss rate
  - But increases complexity, cost, and access time
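A toy C example of these placement choices, assuming a cache of 8 blocks (so direct mapped = 8 sets of 1, 2-way = 4 sets of 2, fully associative = 1 set of 8):

    #include <stdint.h>
    #include <stdio.h>

    #define NBLOCKS 8   /* illustrative toy cache size */

    /* With n-way associativity there are NBLOCKS / n sets, and block B
     * may be placed in any of the n ways of set B mod (#sets). */
    static uint32_t set_of(uint32_t block, uint32_t n_ways) {
        uint32_t n_sets = NBLOCKS / n_ways;
        return block % n_sets;
    }

    int main(void) {
        uint32_t b = 12;
        printf("direct mapped : set %u, 1 candidate slot\n",  (unsigned)set_of(b, 1));
        printf("2-way         : set %u, 2 candidate slots\n", (unsigned)set_of(b, 2));
        printf("fully assoc.  : set %u, 8 candidate slots\n", (unsigned)set_of(b, NBLOCKS));
        return 0;
    }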

Finding a Block
- Hardware caches
  - Reduce comparisons to reduce cost
- Virtual memory
  - A full table lookup makes full associativity feasible
  - Benefit in reduced miss rate

Replacement
- Choice of entry to replace on a miss
  - Least-recently used (LRU): complex and costly hardware for high associativity
  - Random: close to LRU, easier to implement
- Virtual memory
  - LRU approximation with hardware support

Write Policy
- Write-through
  - Update both upper and lower levels
  - Simplifies replacement, but may require a write buffer
- Write-back
  - Update the upper level only
  - Update the lower level when the block is replaced
  - Need to keep more state (e.g., a dirty bit)
- Virtual memory
  - Only write-back is feasible, given disk write latency
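The contrast between the two policies can be made concrete with a short C sketch; write_lower() is a hypothetical stand-in for the write to the next level (DRAM, or disk in the VM case).

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct { uint32_t tag, data; bool valid, dirty; } block_t;

    /* Hypothetical: push data to the next (lower, slower) level. */
    static void write_lower(uint32_t tag, uint32_t data) { (void)tag; (void)data; }

    /* Write-through: every store updates both levels, so replacement is
     * simple, but a write buffer is usually needed to hide the latency. */
    static void write_through(block_t *b, uint32_t data) {
        b->data = data;
        write_lower(b->tag, data);
    }

    /* Write-back: update only the upper level and remember it is dirty. */
    static void write_back_store(block_t *b, uint32_t data) {
        b->data  = data;
        b->dirty = true;
    }

    /* The lower level sees the data only when the block is replaced. */
    static void evict(block_t *b) {
        if (b->valid && b->dirty)
            write_lower(b->tag, b->data);
        b->valid = false;
    }

For virtual memory, where write_lower() would be a multi-millisecond disk write, only the write-back pattern is viable.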

Sources of Misses
- Compulsory misses (aka cold-start misses)
  - First access to a block
- Capacity misses
  - Due to finite cache size
  - A replaced block is later accessed again
- Conflict misses (aka collision misses)
  - In a non-fully associative cache
  - Due to competition for entries in a set
  - Would not occur in a fully associative cache of the same total size

Cache Design Trade-offs