Caches
Computer Organization II, © 2005-2013 McQuain

Caches 1: Memory Technology
– Static RAM (SRAM): 0.5 ns – 2.5 ns, $2000 – $5000 per GB
– Dynamic RAM (DRAM): 50 ns – 70 ns, $20 – $75 per GB
– Magnetic disk: 5 ms – 20 ms, $0.20 – $2 per GB
– Ideal memory: average access time similar to that of SRAM, capacity and cost/GB similar to those of disk

Caches 2: Principle of Locality
Programs access a small proportion of their address space at any time.
– Spatial locality: items near (in memory) those accessed recently are likely to be accessed soon (e.g., sequential instruction access, array data)
– Temporal locality: items accessed recently are likely to be accessed again soon (e.g., instructions in a loop, induction variables)
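
To make the two notions concrete, here is a minimal C sketch (the array, its size, and the loop structure are illustrative, not from the slides). The row-major scan exploits spatial locality; the reuse of sum and the loop indices on every iteration is temporal locality.

```c
#include <stdio.h>

#define N 1024

static int a[N][N];

int main(void) {
    long sum = 0;

    /* Spatial locality: row-major traversal touches consecutive addresses,
       so each cache block fetched is fully used before moving on. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];

    /* Temporal locality: sum and the loop indices are reused every
       iteration, so they stay in registers or in the cache. */
    printf("%ld\n", sum);
    return 0;
}
```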

Caches 3: Taking Advantage of Locality
Memory hierarchy:
– Store everything on disk
– Copy recently accessed (and nearby) items from disk to a smaller DRAM memory: main memory
– Copy more recently accessed (and nearby) items from DRAM to a smaller SRAM memory: cache memory attached to the CPU

Caches 4: Memory Hierarchy Levels
Block (aka line): the fundamental unit of copying; may be multiple words.
– If accessed data is present in the upper level, it is a hit: the access is satisfied by the upper level (hit ratio = hits/accesses)
– If accessed data is absent, it is a miss: the block is copied from the lower level (time taken: miss penalty; miss ratio = misses/accesses = 1 – hit ratio), then the accessed data is supplied from the upper level

Caches 5: Cache Memory
Cache memory: the level of the memory hierarchy closest to the CPU.
Given accesses X_1, …, X_{n-1}, X_n:
– How do we know if the data is present?
– Where do we look?

Caches 6: Direct Mapped Cache
The location in the cache is determined entirely by the memory address of the cached data.
– Direct mapped: only one choice: (Block address) modulo (#Blocks in cache)
– Since #Blocks is a power of 2, the modulo simply selects the low-order address bits
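
A small C sketch of the mapping rule: because the block count is a power of two, the modulo is just a mask of the low-order bits. The constants follow the deck's later 8-block example; everything else is illustrative.

```c
#include <stdio.h>

/* NUM_BLOCKS is a power of 2, so the modulo reduces to keeping
   the low-order bits of the block address. */
#define NUM_BLOCKS 8u

int main(void) {
    unsigned block_addr = 22;                      /* e.g., word address 22 */
    unsigned index_mod  = block_addr % NUM_BLOCKS;
    unsigned index_mask = block_addr & (NUM_BLOCKS - 1);
    printf("index = %u (modulo) = %u (mask)\n", index_mod, index_mask);
    return 0;
}
```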

Caches 7: Tags and Valid Bits
How do we know which particular block is stored in a cache location?
– Store the block address as well as the data
– Actually, only the high-order bits are needed (the low-order bits are implied by the cache index): these are called the tag
What if there is no data in a location?
– Valid bit: 1 = present, 0 = not present; initially the valid bit is 0
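
A minimal sketch of what a cache line with a tag and a valid bit might look like in C, using the 8-block, 1-word-per-block configuration of the example that follows. The struct layout and the lookup helper are hypothetical, not from the deck.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_BLOCKS 8u

struct cache_line {
    bool     valid;   /* 0 at power-up: nothing cached here yet       */
    uint32_t tag;     /* high-order bits of the cached block address  */
    uint32_t data;    /* the cached word itself                       */
};

static struct cache_line cache[NUM_BLOCKS];

/* Hit iff the indexed line is valid and its tag matches the address;
   on a miss, fill the line (data value is a placeholder here). */
static bool lookup(uint32_t block_addr, uint32_t *out) {
    uint32_t index = block_addr % NUM_BLOCKS;   /* low-order bits  */
    uint32_t tag   = block_addr / NUM_BLOCKS;   /* high-order bits */
    if (cache[index].valid && cache[index].tag == tag) {
        *out = cache[index].data;
        return true;
    }
    cache[index] = (struct cache_line){ true, tag, 0 };
    return false;
}

int main(void) {
    uint32_t word;
    printf("%s\n", lookup(22, &word) ? "hit" : "miss");  /* miss: cold cache */
    printf("%s\n", lookup(22, &word) ? "hit" : "miss");  /* hit: tag matches */
    return 0;
}
```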

Caches 8: Cache Example
8 blocks, 1 word/block, direct mapped. Initial state:

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    N
111    N

Caches 9: Cache Example

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    N
111    N

Word addr  Binary addr  Hit/miss  Cache block
22         10110        Miss      110

Caches 10: Cache Example

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

Word addr  Binary addr  Hit/miss  Cache block
22         10110        Miss      110

Caches 11: Cache Example

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

Word addr  Binary addr  Hit/miss  Cache block
26         11010        Miss      010

Caches 12: Cache Example

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

Word addr  Binary addr  Hit/miss  Cache block
22         10110        Hit       110
26         11010        Hit       010

Caches 13: Cache Example

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  11   Mem[11010]
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N

Word addr  Binary addr  Hit/miss  Cache block
16         10000        Miss      000
3          00011        Miss      011
16         10000        Hit       000

Caches 14: Cache Example

Word addr  Binary addr  Hit/miss  Cache block
18         10010        Miss      010

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  11   Mem[11010]
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N

Caches 15: Cache Example

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  10   Mem[10010]
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N

Word addr  Binary addr  Hit/miss  Cache block
18         10010        Miss      010
(The miss on address 18 replaces the previous contents of block 010, Mem[11010].)
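
The whole running example can be replayed in a few lines of C. This is a sketch, assuming the decimal trace 22, 26, 22, 26, 16, 3, 16, 18 inferred from the binary addresses on the slides; it reproduces the hit/miss outcomes and cache indices shown above.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_BLOCKS 8u   /* direct mapped, 1 word per block */

struct line { bool valid; unsigned tag; };

int main(void) {
    struct line cache[NUM_BLOCKS] = {{0}};
    unsigned trace[] = {22, 26, 22, 26, 16, 3, 16, 18};
    size_t n = sizeof trace / sizeof trace[0];

    for (size_t i = 0; i < n; i++) {
        unsigned addr  = trace[i];
        unsigned index = addr % NUM_BLOCKS;   /* low 3 bits  */
        unsigned tag   = addr / NUM_BLOCKS;   /* high bits   */
        bool hit = cache[index].valid && cache[index].tag == tag;
        if (!hit) {                 /* miss: fetch block and overwrite line */
            cache[index].valid = true;
            cache[index].tag   = tag;
        }
        printf("addr %2u -> index %u: %s\n", addr, index, hit ? "hit" : "miss");
    }
    return 0;
}
```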

Caches 16: Cache Addressing
QTP: how are the low 2 bits of the address used?

Caches 17: Memory Organization
4 GiB of DRAM = 2^32 bytes = 2^20 blocks of 2^12 bytes each, i.e., 2^20 4-KiB "blocks".
Each 4-KiB block holds 2^10 4-byte words.

Caches 18: Address Subdivision
Address = Block# × 4096 + Word# × 4 + Byte#
(Figure: the 4-GiB memory as 2^20 4-KiB blocks numbered 0x00000 – 0xFFFFF; each block as 2^10 words numbered 0x000 – 0x3FF.)

Caches 19: Address Subdivision

Caches 20: Example: Larger Block Size
64 blocks, 16 bytes/block: to what block number does address 1200 map?
– Block address = ⌊1200/16⌋ = 75
– Block number = 75 modulo 64 = 11
– Address fields: Tag (22 bits), Index (6 bits), Offset (4 bits)
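
The same field extraction written out as C shifts and masks; a sketch, with illustrative variable names.

```c
#include <stdio.h>

int main(void) {
    /* 16 bytes/block -> 4 offset bits; 64 blocks -> 6 index bits;
       the remaining 22 bits of a 32-bit address form the tag. */
    unsigned addr   = 1200;
    unsigned offset = addr & 0xF;    /* byte within the block */
    unsigned block  = addr >> 4;     /* block address: 75     */
    unsigned index  = block & 0x3F;  /* 75 mod 64 = 11        */
    unsigned tag    = block >> 6;    /* high-order bits       */
    printf("block=%u index=%u offset=%u tag=%u\n", block, index, offset, tag);
    return 0;
}
```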

Caches 21: Block Size Considerations
Larger blocks should reduce the miss rate, due to spatial locality. But in a fixed-sized cache:
– Larger blocks mean fewer of them fit in the cache at once: more competition, hence an increased miss rate
– Larger blocks also mean more cache pollution
Larger blocks also increase the miss penalty:
– This can override the benefit of the reduced miss rate
– Early restart and critical-word-first can help

Caches 22: Cache Misses
On a cache hit, the CPU proceeds normally. On a cache miss:
– Stall the CPU pipeline
– Fetch the block from the next level of the hierarchy
– Instruction cache miss: restart the instruction fetch
– Data cache miss: complete the data access

Caches 23: Write-Through
On a data-write hit, we could just update the block in the cache, but then the cache and memory would be inconsistent.
Write-through: also update memory. But this makes writes take longer:
– e.g., if base CPI = 1, 10% of instructions are stores, and a write to memory takes 100 cycles, then effective CPI = 1 + 0.1 × 100 = 11
Solution: write buffer
– Holds data waiting to be written to memory; the CPU continues immediately
– The CPU only stalls on a write if the write buffer is already full

Caches 24: Write-Back
Alternative: on a data-write hit, just update the block in the cache, and keep track of whether each block is dirty.
When a dirty block is replaced:
– Write it back to memory
– A write buffer can be used to allow the replacing block to be read first

Caches 25: Write Allocation
What should happen on a write miss?
Alternatives for write-through:
– Allocate on miss: fetch the block
– Write around: don't fetch the block (since programs often write a whole block before reading it, e.g., on initialization)
For write-back:
– Usually fetch the block

Caches 26: Main Memory Supporting Caches
Use DRAMs for main memory:
– Fixed width (e.g., 1 word)
– Connected by a fixed-width clocked bus; the bus clock is typically slower than the CPU clock
Example cache block read:
– 1 bus cycle for the address transfer
– 15 bus cycles per DRAM access
– 1 bus cycle per data transfer
For a 4-word block and 1-word-wide DRAM:
– Miss penalty = 1 + 4 × (15 + 1) = 65 bus cycles
– Bandwidth = 16 bytes / 65 cycles ≈ 0.25 B/cycle

Caches 27: Increasing Memory Bandwidth
4-word-wide memory:
– Miss penalty = 1 + 15 + 1 = 17 bus cycles
– Bandwidth = 16 bytes / 17 cycles ≈ 0.94 B/cycle
4-bank interleaved memory:
– Miss penalty = 1 + 15 + 4 × 1 = 20 bus cycles
– Bandwidth = 16 bytes / 20 cycles = 0.8 B/cycle
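
The arithmetic from these two slides, collected into one runnable C sketch; the timing constants (1 address cycle, 15 cycles per DRAM access, 1 cycle per transfer, 16-byte block) are exactly those of the example.

```c
#include <stdio.h>

int main(void) {
    const double block_bytes = 16.0;

    double narrow      = 1 + 4 * (15 + 1);  /* 1-word-wide DRAM: 65 cycles */
    double wide        = 1 + 15 + 1;        /* 4-word-wide memory: 17      */
    double interleaved = 1 + 15 + 4 * 1;    /* 4 interleaved banks: 20     */

    printf("narrow:      %2.0f cycles, %.2f B/cycle\n", narrow,      block_bytes / narrow);
    printf("wide:        %2.0f cycles, %.2f B/cycle\n", wide,        block_bytes / wide);
    printf("interleaved: %2.0f cycles, %.2f B/cycle\n", interleaved, block_bytes / interleaved);
    return 0;
}
```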

Caches 28: Advanced DRAM Organization
Bits in a DRAM are organized as a rectangular array:
– A DRAM access reads an entire row
– Burst mode: supply successive words from a row with reduced latency
Double data rate (DDR) DRAM: transfer on both rising and falling clock edges.
Quad data rate (QDR) DRAM: separate DDR inputs and outputs.

Caches 29: Measuring Cache Performance
Components of CPU time:
– Program execution cycles (includes cache hit time)
– Memory stall cycles (mainly from cache misses)
With simplifying assumptions:
Memory stall cycles = (Memory accesses / Program) × Miss rate × Miss penalty
                    = (Instructions / Program) × (Misses / Instruction) × Miss penalty

Caches 30: Cache Performance Example
Given:
– I-cache miss rate = 2%
– D-cache miss rate = 4%
– Miss penalty = 100 cycles
– Base CPI (ideal cache) = 2
– Loads & stores are 36% of instructions
Miss cycles per instruction:
– I-cache: 0.02 × 100 = 2
– D-cache: 0.36 × 0.04 × 100 = 1.44
Actual CPI = 2 + 2 + 1.44 = 5.44
– The ideal-cache CPU is 5.44 / 2 = 2.72 times faster
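
The same computation as a runnable C sketch; all constants come from the example above.

```c
#include <stdio.h>

int main(void) {
    double base_cpi     = 2.0;   /* ideal-cache CPI              */
    double icache_miss  = 0.02;
    double dcache_miss  = 0.04;
    double mem_fraction = 0.36;  /* loads/stores per instruction */
    double penalty      = 100.0; /* cycles per miss              */

    double stalls = icache_miss * penalty                   /* 2.00 */
                  + mem_fraction * dcache_miss * penalty;   /* 1.44 */
    double cpi = base_cpi + stalls;                         /* 5.44 */
    printf("CPI = %.2f, ideal CPU is %.2fx faster\n", cpi, cpi / base_cpi);
    return 0;
}
```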

Caches 31: Average Access Time
Hit time is also important for performance.
Average memory access time (AMAT):
– AMAT = Hit time + Miss rate × Miss penalty
Example:
– CPU with a 1 ns clock, hit time = 1 cycle, miss penalty = 20 cycles, I-cache miss rate = 5%
– AMAT = 1 + 0.05 × 20 = 2 ns, i.e., 2 cycles per access
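
And the AMAT formula in the same style, using the slide's constants.

```c
#include <stdio.h>

/* AMAT = hit time + miss rate * miss penalty, in cycles of a 1 ns clock. */
int main(void) {
    double hit_time = 1.0, miss_rate = 0.05, miss_penalty = 20.0;
    double amat = hit_time + miss_rate * miss_penalty;   /* 2 cycles = 2 ns */
    printf("AMAT = %.1f cycles\n", amat);
    return 0;
}
```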

Caches 32: Performance Summary
As CPU performance increases, the miss penalty becomes more significant:
– Decreasing the base CPI means a greater proportion of time is spent on memory stalls
– Increasing the clock rate means memory stalls account for more CPU cycles
Cache behavior cannot be neglected when evaluating system performance.

Caches 33: Associative Caches
Fully associative:
– Allow a given block to go in any cache entry
– Requires all entries to be searched at once: one comparator per entry (expensive)
n-way set associative:
– Each set contains n entries
– The block number determines the set: (Block number) modulo (#Sets in cache)
– Search all entries in a given set at once: n comparators (less expensive)

Caches 34: Associative Cache Example

Caches 35: Spectrum of Associativity
For a cache with 8 entries.

Caches 36: Associativity Example
Compare 4-block caches: direct mapped vs. 2-way set associative vs. fully associative, on the block access sequence 0, 8, 0, 6, 8.

Direct mapped:
Block addr  Cache index  Hit/miss  Cache content after access
0           0            Miss      Mem[0]
8           0            Miss      Mem[8]
0           0            Miss      Mem[0]
6           2            Miss      Mem[0], Mem[6]
8           0            Miss      Mem[8], Mem[6]

Caches 37: Associativity Example

2-way set associative:
Block addr  Cache index  Hit/miss  Set 0 contents after access
0           0            Miss      Mem[0]
8           0            Miss      Mem[0], Mem[8]
0           0            Hit       Mem[0], Mem[8]
6           0            Miss      Mem[0], Mem[6]
8           0            Miss      Mem[8], Mem[6]

Fully associative:
Block addr  Hit/miss  Cache content after access
0           Miss      Mem[0]
8           Miss      Mem[0], Mem[8]
0           Hit       Mem[0], Mem[8]
6           Miss      Mem[0], Mem[8], Mem[6]
8           Hit       Mem[0], Mem[8], Mem[6]
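
All three caches can be compared with one small n-way LRU simulator: 1 way over 4 sets is the direct-mapped cache, 2 ways over 2 sets is the set-associative one, and 4 ways over 1 set is fully associative. A sketch, assuming blocks are identified by their full block address; it reproduces the 5, 4, and 3 misses tallied above.

```c
#include <stdbool.h>
#include <stdio.h>

#define BLOCKS 4

struct way { bool valid; int tag; int age; };

static int count_misses(int ways, const int *trace, int n) {
    struct way cache[BLOCKS] = {{0}};
    int sets = BLOCKS / ways, misses = 0;

    for (int t = 0; t < n; t++) {
        int set = trace[t] % sets;
        struct way *s = &cache[set * ways];  /* the ways of this set */
        int victim = 0;
        bool hit = false;

        for (int w = 0; w < ways; w++) {
            if (s[w].valid && s[w].tag == trace[t]) { hit = true; victim = w; break; }
            if (!s[victim].valid) continue;  /* keep an invalid victim */
            if (!s[w].valid || s[w].age < s[victim].age) victim = w;
        }
        if (!hit) { misses++; s[victim].valid = true; s[victim].tag = trace[t]; }
        s[victim].age = t + 1;               /* mark as most recently used */
    }
    return misses;
}

int main(void) {
    int trace[] = {0, 8, 0, 6, 8};
    printf("direct mapped: %d misses\n", count_misses(1, trace, 5)); /* 5 */
    printf("2-way LRU:     %d misses\n", count_misses(2, trace, 5)); /* 4 */
    printf("fully assoc.:  %d misses\n", count_misses(4, trace, 5)); /* 3 */
    return 0;
}
```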

Caches 38: How Much Associativity
Increased associativity decreases the miss rate, but with diminishing returns.
Simulation of a system with a 64 KB D-cache, 16-word blocks, SPEC2000:
– 1-way: 10.3%
– 2-way: 8.6%
– 4-way: 8.3%
– 8-way: 8.1%

Caches 39: Set Associative Cache Organization

Caches 40: Replacement Policy
Direct mapped: no choice.
Set associative:
– Prefer a non-valid entry, if there is one
– Otherwise, choose among the entries in the set
Least-recently used (LRU):
– Choose the one unused for the longest time: simple for 2-way, manageable for 4-way, too hard beyond that
Random:
– Gives approximately the same performance as LRU for high associativity
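
For 2-way set associativity, LRU state is a single bit per set, which is why the slide calls it simple; a tiny hypothetical sketch of that bookkeeping:

```c
#include <stdio.h>

int main(void) {
    int lru = 0;                 /* the way to evict on the next miss */

    int uses[] = {0, 1, 1, 0};   /* hypothetical sequence of ways that hit */
    for (int i = 0; i < 4; i++) {
        lru = 1 - uses[i];       /* the other way is now least recently used */
        printf("used way %d -> evict way %d on next miss\n", uses[i], lru);
    }
    return 0;
}
```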

Caches 41: Multilevel Caches
Primary cache attached to the CPU: small, but fast.
Level-2 cache services misses from the primary cache: larger, slower, but still faster than main memory.
Main memory services L-2 cache misses.
Some high-end systems include an L-3 cache.

Caches 42: Multilevel Cache Example
Given:
– CPU base CPI = 1, clock rate = 4 GHz
– Miss rate/instruction = 2%
– Main memory access time = 100 ns
With just a primary cache:
– Miss penalty = 100 ns / 0.25 ns = 400 cycles
– Effective CPI = 1 + 0.02 × 400 = 9

Caches 43: Example (cont.)
Now add an L-2 cache:
– Access time = 5 ns
– Global miss rate to main memory = 0.5%
Primary miss with L-2 hit:
– Penalty = 5 ns / 0.25 ns = 20 cycles
Primary miss with L-2 miss:
– Extra penalty = 400 cycles
CPI = 1 + 0.02 × 20 + 0.005 × 400 = 3.4
Performance ratio = 9 / 3.4 = 2.6
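
The two CPI calculations side by side as a C sketch; the constants come from the example.

```c
#include <stdio.h>

int main(void) {
    double base_cpi       = 1.0;
    double l1_miss        = 0.02;   /* misses per instruction        */
    double l2_global_miss = 0.005;  /* reach main memory             */
    double l2_penalty     = 20.0;   /* 5 ns at a 0.25 ns cycle time  */
    double mem_penalty    = 400.0;  /* 100 ns at 0.25 ns             */

    double cpi_l1_only = base_cpi + l1_miss * mem_penalty;           /* 9.0 */
    double cpi_l1_l2   = base_cpi + l1_miss * l2_penalty
                       + l2_global_miss * mem_penalty;               /* 3.4 */
    printf("L1 only: %.1f, with L2: %.1f, ratio %.1f\n",
           cpi_l1_only, cpi_l1_l2, cpi_l1_only / cpi_l1_l2);
    return 0;
}
```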

Caches 44: Multilevel Cache Considerations
Primary cache:
– Focus on minimal hit time
L-2 cache:
– Focus on a low miss rate, to avoid main memory accesses
– Hit time has less overall impact
Results:
– The L-1 cache is usually smaller than a single-level cache would be
– The L-1 block size is smaller than the L-2 block size