
Memory Hierarchy

Since 1980, CPU has outpaced DRAM: CPU performance has improved roughly 60% per year (2x in 1.5 years), while DRAM has improved only about 9% per year (2x in 10 years), so the gap grew about 50% per year. [Figure: performance (1/latency) versus year for CPU and DRAM.] Q. How do architects address this gap? A. Put smaller, faster "cache" memories between CPU and DRAM. Create a "memory hierarchy".

1977: DRAM faster than microprocessors. The Apple ][ (1977, Steve Wozniak and Steve Jobs) had a CPU cycle time of 1000 ns and a DRAM access time of 400 ns.

Levels of the Memory Hierarchy (upper levels are smaller and faster; lower levels are larger and cheaper per bit):
- Registers: 100s of bytes, <10s ns access.
- Cache: K bytes, ns-scale access, cents per bit.
- Main memory: M bytes, 200-500 ns access, fractions of a cent per bit.
- Disk: G bytes, 10 ms access (10,000,000 ns), cheaper still per bit.
- Tape: effectively infinite capacity, sec-min access.
Staging/transfer units between adjacent levels: instruction operands (1-8 bytes, managed by the program/compiler), cache blocks (managed by the cache controller), pages (512 bytes-4K bytes, managed by the OS), and files (Mbytes, managed by the user/operator).

Memory Hierarchy: Apple iMac G5 (1.6 GHz)
- Registers: 1K; latency 1 cycle, 0.6 ns
- L1 instruction: 64K; 3 cycles, 1.9 ns
- L1 data: 32K; 3 cycles, 1.9 ns
- L2: 512K; 11 cycles, 6.9 ns
- DRAM: 256M; 88 cycles, 55 ns
- Disk: 80G; ~10^7 cycles, 12 ms
Let programs address a memory space that scales to the disk size, at a speed that is usually as fast as register access. Registers are managed by the compiler; caches by hardware; main memory and disk by the OS, hardware, and application. Goal: the illusion of large, fast, cheap memory.

iMac's PowerPC 970: all caches on-chip. [Die photo: registers (1K), L1 (64K instruction), L1 (32K data), 512K L2.]

The Principle of Locality
Programs access a relatively small portion of the address space at any instant of time. (This is rather like real life: we all have many friends, but at any given time most of us keep in touch with only a small group of them.) Two different types of locality:
- Temporal locality (locality in time): if an item is referenced, it will tend to be referenced again soon (e.g., loops, reuse).
- Spatial locality (locality in space): if an item is referenced, items whose addresses are close by tend to be referenced soon (e.g., straight-line code, array access).
For the last 15 years, hardware has relied on locality for speed. Locality is a property of programs that is exploited in machine design; the sketch below shows both kinds in ordinary code.
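
To make the two kinds of locality concrete, here is a minimal illustrative sketch (not from the slides); any cache-friendly language would show the same pattern:

```python
# Illustrative sketch: both kinds of locality in ordinary code.
data = list(range(1_000_000))

# Spatial locality: a sequential traversal touches neighboring
# addresses, so each block fetched on a miss also serves the next
# several accesses.
total = 0
for x in data:      # addresses visited in increasing order
    total += x

# Temporal locality: `total` (and the loop machinery) is
# re-referenced on every iteration, so it stays in the fastest
# levels of the hierarchy for the whole loop.
```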

Memory Hierarchy: Terminology
Hit: the data appears in some block in the upper level (example: block X).
- Hit rate: the fraction of memory accesses found in the upper level.
- Hit time: time to access the upper level, which consists of RAM access time + time to determine hit/miss.
Miss: the data must be retrieved from a block in the lower level (block Y).
- Miss rate = 1 - (hit rate).
- Miss penalty: time to replace a block in the upper level + time to deliver the block to the processor.
Hit time << miss penalty.
[Figure: processor exchanging block X with upper-level memory, with block Y transferred from lower-level memory on a miss.]

Cache Measures
Hit rate: fraction of accesses found in that level. It is usually so high that we talk about the miss rate instead.
Miss-rate fallacy: just as MIPS is a poor proxy for CPU performance, miss rate is a poor proxy for average memory-access time.
Average memory-access time (AMAT) = hit time + miss rate x miss penalty (ns or clocks).
Miss penalty: time to replace a block from the lower level, including time to deliver it to the CPU.
- Access time: time to reach the lower level = f(latency to lower level).
- Transfer time: time to transfer the block = f(bandwidth between upper and lower levels).
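
The AMAT formula can be applied directly; the numbers in this sketch are hypothetical, chosen only to show how even a small miss rate dominates the average:

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory-access time = hit time + miss rate * miss penalty."""
    return hit_time + miss_rate * miss_penalty

# Hypothetical values: 1-cycle hit, 5% miss rate, 100-cycle miss penalty.
print(amat(1.0, 0.05, 100.0))  # -> 6.0 cycles: misses dominate the average
```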

Performance - Cache Memory System
T_e: effective memory-access time in a cache memory system
T_c: cache access time
T_m: main memory access time
T_e = T_c + (1 - h) x T_m, where h is the hit ratio.
Example: T_c = 0.4 ns, T_m = 1.2 ns, h = 0.85:
T_e = 0.4 + (1 - 0.85) x 1.2 = 0.58 ns
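
A two-line check of the slide's arithmetic (same formula, same values):

```python
# Slide's example: Tc = 0.4 ns, Tm = 1.2 ns, hit ratio h = 0.85.
t_c, t_m, h = 0.4, 1.2, 0.85
t_e = t_c + (1 - h) * t_m
print(round(t_e, 2))  # -> 0.58 ns, matching the slide
```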

Associative Memory
The time required to find an item in memory can be reduced considerably if stored data can be identified by its content rather than by its address. A memory unit accessed by content is called an associative memory or content-addressable memory (CAM). When a word is to be read from an associative memory, the content of the word, or part of it, is specified. CAM is more expensive than RAM because each cell needs both storage capability and logic circuits for matching its content against an external argument.

Questions for Memory Hierarchy
Q1: Where can a block be placed in the upper level? (Block placement)
Q2: How is a block found if it is in the upper level? (Block identification)
Q3: Which block should be replaced on a miss? (Block replacement)
Q4: What happens on a write? (Write strategy)

Direct Mapping
Each memory block has only one place it can be loaded in the cache, so the mapping table is made of RAM instead of CAM. An n-bit memory address consists of two parts: k bits of index field and n - k bits of tag field. The full n-bit address accesses main memory, while the k-bit index accesses the cache.

Direct Mapping (contd.)
When the CPU generates a memory request, the index field is used as the address to access the cache, and the tag field is compared with the tag in the word read from the cache. If the two tags match, there is a hit and the desired data word is in the cache. If there is no match, there is a miss: the required word is read from main memory and stored in the cache together with the new tag, replacing the old value. The disadvantage is that the hit ratio drops if two or more words with the same index but different tags are accessed repeatedly (mitigated by the fact that such words are far apart: 512 locations).

Direct Mapping with Blocks
The index field is now divided into two parts: the block field and the word field. In a 512-word cache we have 64 blocks of 8 words each; the tag stored in the cache is common to all eight words of the same block. Every time a miss occurs, an entire block of eight words is transferred from main memory to cache memory. This takes extra time, but the hit ratio will most likely improve with a larger block size because of the sequential nature of programs. A minimal sketch of this lookup follows.
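
A minimal sketch of the lookup just described, using the slide's 512-word cache of 64 eight-word blocks; the 15-bit address width (6-bit tag + 9-bit index) is inferred from the surrounding slides, and the data store is omitted to keep the hit/miss logic visible:

```python
# Direct-mapped lookup with blocks: 15-bit address split as
# 6-bit tag | 6-bit block | 3-bit word.
TAG_BITS, BLOCK_BITS, WORD_BITS = 6, 6, 3
NUM_BLOCKS = 1 << BLOCK_BITS          # 64 blocks
WORDS_PER_BLOCK = 1 << WORD_BITS      # 8 words per block

# One tag per cache block, shared by all 8 words of that block.
tags = [None] * NUM_BLOCKS

def split(addr):
    word = addr & (WORDS_PER_BLOCK - 1)
    block = (addr >> WORD_BITS) & (NUM_BLOCKS - 1)
    tag = addr >> (WORD_BITS + BLOCK_BITS)
    return tag, block, word

def access(addr):
    """Return True on a hit; on a miss, bring in the whole block's tag."""
    tag, block, _ = split(addr)
    if tags[block] == tag:
        return True
    tags[block] = tag    # miss: an entire 8-word block is transferred
    return False

print(access(0x123))   # False: compulsory miss loads the block
print(access(0x124))   # True: same block, so spatial locality pays off
```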

Associative Mapping
Any block location in the cache can store any block of memory. This is the most flexible and fastest organization, but also very expensive: the mapping table is implemented in associative memory and stores both the address and the content of each memory word. If the address is found, the 12-bit data word is read and sent to the CPU; if not, main memory is accessed and the address-data pair is transferred to the associative cache memory. If the cache is full, an address-data pair is displaced to make room for the needed pair according to a replacement policy (e.g., round-robin, FIFO, LRU, OPT, random).
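
A CAM-style cache is easy to mimic in software with a dictionary keyed by address; this sketch is an illustrative model (not hardware) and picks FIFO from the replacement policies listed above:

```python
from collections import OrderedDict

# Fully associative cache model: any address can occupy any of the
# CACHE_SIZE entries, and lookup is by content (the address itself).
CACHE_SIZE = 4
cache = OrderedDict()          # address -> data, in insertion order

def read(addr, memory):
    if addr in cache:                 # associative match on all entries
        return cache[addr]
    if len(cache) == CACHE_SIZE:
        cache.popitem(last=False)     # FIFO: displace the oldest pair
    cache[addr] = memory[addr]        # bring the address-data pair in
    return cache[addr]
```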

Set-Associative Mapping
Each word of cache can store two or more words of memory under the same index address; the tag-data items stored in one word of cache form a set. With two 6-bit tags and two 12-bit data words per set, the cache word length is 2 x (6 + 12) = 36 bits; with a 9-bit index, the cache can accommodate 1024 words of main memory. The index value of the address selects the set, and the tag field of the CPU address is compared with both tags in the set; the comparison logic performs an associative search of the tags, similar to an associative memory (hence the name set-associative).
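
A sketch of the set-associative lookup just described, with 2 ways and a 9-bit index as in the slide; the replacement choice on a miss is a placeholder, since the slides defer that question to Q3:

```python
# 2-way set-associative lookup: each of the 512 sets holds two tags,
# and the address's tag is compared against both tags in the set.
WAYS, INDEX_BITS = 2, 9
sets = [[None] * WAYS for _ in range(1 << INDEX_BITS)]   # tags per set

def access(addr):
    index = addr & ((1 << INDEX_BITS) - 1)
    tag = addr >> INDEX_BITS
    ways = sets[index]
    if tag in ways:            # associative search within the set
        return True            # hit
    # Miss: fill an empty way, else replace way 0 (placeholder policy;
    # see Q3 for real replacement policies).
    ways[ways.index(None) if None in ways else 0] = tag
    return False
```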

Q3: Which block should be replaced on a miss?
Easy for direct mapped: there is only one candidate. For set-associative or fully associative caches: random, or LRU (least recently used).
Miss rates, LRU vs. random, by cache size and associativity:
- 16 KB: 2-way 5.2% / 5.7%; 4-way 4.7% / 5.3%; 8-way 4.4% / 5.0%
- 64 KB: 2-way 1.9% / 2.0%; 4-way 1.5% / 1.7%; 8-way 1.4% / 1.5%
- 256 KB: 2-way 1.15% / 1.17%; 4-way 1.13% / 1.13%; 8-way 1.12% / 1.12%

Q3 (contd.): After a cache read miss, if there are no empty cache blocks, which block should be removed from the cache?
- A randomly chosen block? Easy to implement, but how well does it work?
- The least recently used (LRU) block? Appealing, but hard to implement for high associativity (LRU approximations are also used in practice).
Miss rate for a 2-way set-associative cache:
- 16 KB: random 5.7%, LRU 5.2%
- 64 KB: random 2.0%, LRU 1.9%
- 256 KB: random 1.17%, LRU 1.15%
A software sketch of LRU bookkeeping follows.
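
As a sketch, an OrderedDict makes exact LRU bookkeeping for one set explicit; the "move to most-recently-used on every hit" step is precisely what gets expensive in high-associativity hardware:

```python
from collections import OrderedDict

class LRUSet:
    """One cache set with exact LRU replacement (software model)."""
    def __init__(self, ways):
        self.ways = ways
        self.blocks = OrderedDict()          # tag -> data, LRU-first

    def access(self, tag):
        if tag in self.blocks:
            self.blocks.move_to_end(tag)     # mark as most recently used
            return True                      # hit
        if len(self.blocks) == self.ways:
            self.blocks.popitem(last=False)  # evict least recently used
        self.blocks[tag] = None
        return False                         # miss
```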

Q4: What happens on a write?
- Write-through: data written to the cache block is also written to lower-level memory. Easy to debug; read misses never produce writes; repeated writes all make it to the lower level.
- Write-back: data is written only to the cache, and the lower level is updated when the block falls out of the cache. Harder to debug; read misses can produce writes (of the evicted dirty block); repeated writes do not all make it to the lower level.
Additional option: let writes to an un-cached address allocate a new cache line ("write-allocate").
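
The two policies can be sketched as follows; the dirty-bit set and the function names are illustrative, not from the slides:

```python
# Write-through: every store also reaches the lower level immediately.
def write_through(cache, memory, addr, value):
    cache[addr] = value
    memory[addr] = value

# Write-back: stores stay in the cache; the block is marked dirty.
def write_back(cache, dirty, addr, value):
    cache[addr] = value
    dirty.add(addr)

# On eviction, a write-back cache updates the lower level only if the
# block was modified while it was cached.
def evict(cache, dirty, memory, addr):
    if addr in dirty:
        memory[addr] = cache[addr]
        dirty.discard(addr)
    del cache[addr]
```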

Write Buffers for Write-Through Caches
[Figure: processor -> cache and write buffer -> lower-level memory; the write buffer holds data awaiting write-through to lower-level memory.]
Q. Why a write buffer? A. So the CPU doesn't stall while writes complete.
Q. Why a buffer, why not just one register? A. Bursts of writes are common.
Q. Are read-after-write (RAW) hazards an issue for the write buffer? A. Yes! Either drain the buffer before the next read, or send the read first after checking the write buffer. A sketch of that check follows.
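
A sketch of the second option (check the write buffer, then send the read); the deque and function names are illustrative:

```python
from collections import deque

write_buffer = deque()            # (addr, value) pairs awaiting drain

def buffered_write(addr, value):
    write_buffer.append((addr, value))   # CPU continues without stalling

def read(addr, memory):
    # RAW check: scan pending writes newest-first so a read after a
    # write sees the latest buffered value, not stale memory.
    for a, v in reversed(write_buffer):
        if a == addr:
            return v
    return memory[addr]
```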

Basic Cache Optimizations
Reducing miss rate:
1. Larger block size (reduces compulsory misses)
2. Larger cache size (reduces capacity misses)
3. Higher associativity (reduces conflict misses)
Reducing miss penalty:
4. Multilevel caches
Reducing hit time:
5. Giving reads priority over writes (e.g., a read completes before earlier writes waiting in the write buffer)

The Limits of Physical Addressing
[Figure: CPU connected directly to memory via address lines A0-A31 and data lines D0-D31; the CPU issues "physical addresses" of memory locations.]
All programs share one address space: the physical address space. There is no way to prevent a program from accessing any machine resource, and machine-language programs must be aware of the machine organization.

Solution: Add a Layer of Indirection
[Figure: CPU issues "virtual addresses"; address-translation hardware, managed by the operating system (OS), maps them to the "physical addresses" seen by memory.]
User programs run in a standardized virtual address space. The hardware supports "modern" OS features: protection, translation, and sharing.

Three Advantages of Virtual Memory
Translation:
- Programs can be given a consistent view of memory, even though physical memory is scrambled.
- Makes multithreading reasonable (now used a lot!).
- Only the most important part of a program (its "working set") must be in physical memory.
- Contiguous structures (like stacks) use only as much physical memory as necessary, yet can still grow later.
Protection:
- Different threads (or processes) are protected from each other.
- Different pages can be given special behavior (read-only, invisible to user programs, etc.).
- Kernel data is protected from user programs.
- Very important for protection against malicious programs.
Sharing:
- The same physical page can be mapped into multiple processes ("shared memory").

Page tables encode virtual address spaces
A virtual address space is divided into blocks of memory called pages; a machine usually supports pages of a few sizes (MIPS R4000). A valid page table entry codes the physical memory "frame" address for the page.
[Figure: virtual address space mapped through a page table onto frames of the physical memory space.]
A page table is indexed by a virtual address, and the OS manages the page table for each ASID (address-space ID).

Details of the Page Table
[Figure: a virtual address = virtual page number + 12-bit offset. The page table base register plus the virtual page number index into the page table, which is itself located in physical memory; each PTE holds a valid bit (V), access rights, and the physical frame number. The physical address = frame number + the unchanged 12-bit offset.]
The page table maps virtual page numbers to physical frames ("PTE" = page table entry). Virtual memory lets us treat main memory as a cache for disk. A translation sketch follows.
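
A sketch of the translation path the figure describes; the page-table contents here are hypothetical, and a real OS would handle the page-fault branch by loading the page from disk:

```python
# Split the virtual address at the 12-bit page offset, index the page
# table with the virtual page number (VPN), check the valid bit, and
# splice the physical frame number onto the unchanged offset.
OFFSET_BITS = 12
PAGE_SIZE = 1 << OFFSET_BITS

# Hypothetical page table: VPN -> (valid, access_rights, frame)
page_table = {0x00001: (True, "rw", 0x00042)}

def translate(vaddr):
    vpn, offset = vaddr >> OFFSET_BITS, vaddr & (PAGE_SIZE - 1)
    valid, rights, frame = page_table.get(vpn, (False, None, None))
    if not valid:
        raise MemoryError("page fault")   # OS would load the page here
    return (frame << OFFSET_BITS) | offset

print(hex(translate(0x1ABC)))   # -> 0x42abc
```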

Summary #1/2: The Cache Design Space
Several interacting dimensions:
- cache size
- block size
- associativity
- replacement policy
- write-through vs. write-back
- write allocation
The optimal choice is a compromise: it depends on access characteristics (the workload) and on technology and cost. Simplicity often wins.

Summary #2/2: Caches
The Principle of Locality: programs access a relatively small portion of the address space at any instant of time.
- Temporal locality: locality in time
- Spatial locality: locality in space
Write policy: write-through vs. write-back.