The Memory Hierarchy (Lectures #17 - #20) ECE 445 – Computer Organization
The slides included herein were taken from the materials accompanying Computer Organization and Design, 4th Edition, by Patterson and Hennessy, and were used with permission from Morgan Kaufmann Publishers.

Material to be covered... Chapter 5: Sections 1 – 5, 11 – 12

Objective (when designing a memory system): Speed up memory access.

Memory Technology

Technology      Access Time    Cost per GB
Static RAM      0.5 – 2.5 ns   $2K – $5K
Dynamic RAM     50 – 70 ns     $20 – $75
Magnetic Disk   5 – 20 ms      $0.20 – $2
Ideal Memory    Access time of SRAM, cost of Magnetic Disk

Memory Hierarchy

Level          Speed     Size      Cost / bit   Technology
CPU Register   fastest   smallest  highest      Flip-Flop
Cache                                           SRAM
Main Memory                                     DRAM
Mass Storage   slowest   largest   lowest       Magnetic Disk

Principle of Locality
Programs access a small proportion of their address space at any time.
- Temporal locality: items accessed recently are likely to be accessed again soon (e.g., instructions in a loop).
- Spatial locality: items near those accessed recently are likely to be accessed soon (e.g., sequential instruction access, array data).
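As a concrete illustration (my example, not from the slides), the loop below exhibits both kinds of locality: the loop code and the variable sum are reused every iteration (temporal), while a[i] walks consecutive addresses (spatial).

```c
#include <stdio.h>

int main(void) {
    int a[1024];
    for (int i = 0; i < 1024; i++)   /* initialize the array */
        a[i] = i;

    long sum = 0;
    for (int i = 0; i < 1024; i++)   /* loop body re-executed: temporal locality */
        sum += a[i];                 /* sequential accesses:   spatial locality  */

    printf("%ld\n", sum);
    return 0;
}
```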

Taking Advantage of Locality
Memory hierarchy: Hard Disk ↔ Main Memory ↔ Cache
- Store everything on disk.
- Copy recently accessed (and nearby) items from disk to smaller DRAM memory (main memory).
- Copy more recently accessed (and nearby) items from DRAM to smaller SRAM memory (cache memory attached to CPU).

Memory Hierarchy Levels
Block (aka line): the smallest unit of memory that can be copied from one level of the memory hierarchy to another; may be multiple words.
If accessed data is present in the upper (smaller, faster) level:
- Hit: access satisfied by the upper level. Hit ratio = hits / accesses.
If accessed data is absent:
- Miss: block copied from the lower (larger, slower) level. Time taken: miss penalty. Miss ratio = misses / accesses = 1 – hit ratio.
- The accessed data is then supplied from the upper level.
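These ratios feed directly into average access time. A minimal sketch using the standard formula (hit time + miss ratio × miss penalty); the numbers below are illustrative assumptions, not from the slides:

```c
#include <stdio.h>

int main(void) {
    double hit_time = 1.0;        /* cycles for an upper-level hit (assumed)    */
    double hit_ratio = 0.95;      /* hits / accesses (assumed)                  */
    double miss_penalty = 100.0;  /* cycles to copy a block on a miss (assumed) */

    double miss_ratio = 1.0 - hit_ratio;  /* misses/accesses = 1 - hit ratio */
    double amat = hit_time + miss_ratio * miss_penalty;
    printf("average access time = %.1f cycles\n", amat);  /* 1 + 0.05*100 = 6 */
    return 0;
}
```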

Cache Memory (§5.2 The Basics of Caches)
Cache memory: the level of the memory hierarchy closest to the CPU.
Given accesses X1, …, Xn–1, Xn (main memory addresses):
- How do we know if the data is present in the cache?
- At what location do we look in the cache?

Direct Mapped Cache
Location determined by address. Direct mapped: only one choice:
  (block address) modulo (# blocks in cache)
The number of blocks is a power of 2, so the low-order address bits identify the cache location.
Each memory address maps to one, and only one, cache location. Multiple memory addresses can map to the same cache location.
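A minimal sketch of this mapping (the cache size and names are mine, not from the slides): because the block count is a power of two, the modulo reduces to masking the low-order bits.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_BLOCKS 8u   /* assumed power-of-two block count */

/* (block address) modulo (#blocks in cache): with a power-of-two
 * block count this keeps only the low-order log2(NUM_BLOCKS) bits. */
static unsigned cache_index(uint32_t block_address) {
    return block_address & (NUM_BLOCKS - 1u);
}

int main(void) {
    /* 22 (10110) and 6 (00110) share the low-order bits 110,
     * so both map to the same cache location, index 6. */
    printf("%u %u\n", cache_index(22), cache_index(6));
    return 0;
}
```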

Tags and Valid Bits
How do we know which particular memory block is stored in a cache location?
- Store the block address as well as the data. Actually, only the high-order bits are needed; these are called the tag.
What if there is no data in a location?
- Valid bit: 1 = present, 0 = not present. Initially 0.
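In code, a cache entry then carries both bookkeeping fields alongside the data. A hedged sketch (the field names and one-word blocks are my assumptions):

```c
#include <stdbool.h>
#include <stdint.h>

struct cache_line {
    bool     valid;  /* 1 = block present, 0 = not present (initially 0) */
    uint32_t tag;    /* high-order bits of the stored block's address    */
    uint32_t data;   /* one word per block in this sketch                */
};

/* A lookup hits only if the entry is valid AND its tag matches the
 * high-order bits of the requested address. */
static bool is_hit(const struct cache_line *line, uint32_t tag) {
    return line->valid && line->tag == tag;
}

int main(void) {
    struct cache_line line = { .valid = true, .tag = 0x2, .data = 42 };
    return is_hit(&line, 0x2) ? 0 : 1;
}
```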

Cache Example
8 blocks, 1 word/block, direct mapped. Initial state:

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    N
111    N

Cache Example (continued)

Word addr  Binary addr  Hit/miss  Cache block
22         10 110       Miss      110

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

Cache Example (continued)

Word addr  Binary addr  Hit/miss  Cache block
26         11 010       Miss      010

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

Cache Example (continued)

Word addr  Binary addr  Hit/miss  Cache block
22         10 110       Hit       110
26         11 010       Hit       010

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

Cache Example (continued)

Word addr  Binary addr  Hit/miss  Cache block
16         10 000       Miss      000
3          00 011       Miss      011
16         10 000       Hit       000

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  11   Mem[11010]
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N

Cache Example (continued)

Word addr  Binary addr  Hit/miss  Cache block
18         10 010       Miss      010

Address 18 maps to block 010, which held Mem[11010]; the old block is replaced.

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  10   Mem[10010]
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N
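The whole sequence can be replayed with a tiny direct-mapped simulator. A sketch (mine, not from the slides; the word-address trace 22, 26, 22, 26, 16, 3, 16, 18 is read off the tables above):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_BLOCKS 8u   /* 8 blocks, 1 word/block, direct mapped */

struct line { bool valid; uint32_t tag; };

int main(void) {
    struct line cache[NUM_BLOCKS] = {0};  /* all valid bits start at 0 */
    uint32_t trace[] = {22, 26, 22, 26, 16, 3, 16, 18};

    for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++) {
        uint32_t index = trace[i] % NUM_BLOCKS;  /* low-order 3 bits */
        uint32_t tag   = trace[i] / NUM_BLOCKS;  /* high-order bits  */

        bool hit = cache[index].valid && cache[index].tag == tag;
        if (!hit) {                  /* on a miss, copy the block in */
            cache[index].valid = true;
            cache[index].tag = tag;  /* 18 evicts 26 from block 010  */
        }
        printf("addr %2u -> index %u: %s\n", trace[i], index, hit ? "hit" : "miss");
    }
    return 0;
}
```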

Address Subdivision
(Figure: a 32-bit address divided into tag, index, and byte-offset fields, with the tag compared against the indexed cache entry.)

Example: Larger Block Size
64 blocks, 16 bytes/block.
- To what block number does address 1200 map?
- What about address 1212?
(Memory) block address = ⌊1200/16⌋ = 75
(Cache) block number = 75 modulo 64 = 11
Address 1212 gives ⌊1212/16⌋ = 75 as well, so it maps to the same cache block, 11.

Tag      Index   Offset
22 bits  6 bits  4 bits
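The same arithmetic as bit manipulation; a sketch for this example's geometry (32-bit byte addresses are my assumption: 16 bytes/block gives 4 offset bits, 64 blocks give 6 index bits, leaving 22 tag bits):

```c
#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 4   /* 16 bytes/block */
#define INDEX_BITS  6   /* 64 blocks      */

int main(void) {
    for (uint32_t addr = 1200; addr <= 1212; addr += 12) {
        uint32_t block = addr >> OFFSET_BITS;                /* addr/16      */
        uint32_t index = block & ((1u << INDEX_BITS) - 1u);  /* block mod 64 */
        uint32_t tag   = addr >> (OFFSET_BITS + INDEX_BITS);
        printf("addr %u: block %u, index %u, tag %u\n", addr, block, index, tag);
    }
    return 0;   /* both 1200 and 1212 print block 75, index 11 */
}
```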

Block Size Considerations
Larger blocks should reduce the miss rate, due to spatial locality. But in a fixed-size cache:
- Larger blocks → fewer of them, so more competition → increased miss rate.
- Larger blocks → pollution.
Larger blocks also mean a larger miss penalty, which can override the benefit of the reduced miss rate. Early restart and critical-word-first can help.

Cache Misses
On a cache hit, the CPU proceeds normally. On a cache miss:
- Stall the CPU pipeline.
- Fetch the block from the next level of the memory hierarchy.
- Instruction cache miss: restart the instruction fetch.
- Data cache miss: complete the data access.

Write-Through
On a data-write hit, we could just update the block in cache, but then cache and memory would be inconsistent.
Write-through: also update main memory. But this makes writes take longer.
- e.g., if base CPI = 1, 10% of instructions are stores, and a write to memory takes 100 cycles:
  Effective CPI = 1 + 0.1 × 100 = 11
Solution: write buffer.
- Holds data waiting to be written to memory; the CPU continues immediately and stalls on a write only if the write buffer is already full.
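A hedged sketch of that write buffer as a small ring buffer (the depth and names are mine): stores are enqueued and drained to memory in the background; the CPU stalls only when enqueueing fails.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WB_SLOTS 4   /* assumed buffer depth */

struct write_buffer {
    uint32_t addr[WB_SLOTS], data[WB_SLOTS];
    int head, tail, count;
};

/* Returns false when the buffer is full: that is the only case
 * in which the CPU must stall on a store. */
static bool wb_enqueue(struct write_buffer *wb, uint32_t a, uint32_t d) {
    if (wb->count == WB_SLOTS) return false;
    wb->addr[wb->tail] = a;
    wb->data[wb->tail] = d;
    wb->tail = (wb->tail + 1) % WB_SLOTS;
    wb->count++;
    return true;
}

int main(void) {
    struct write_buffer wb = {0};
    for (uint32_t i = 0; i < 6; i++)   /* the 5th and 6th stores would stall */
        printf("store %u: %s\n", i, wb_enqueue(&wb, i, i) ? "buffered" : "stall");
    return 0;
}
```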

Write-Back
Alternative to write-through. On a data-write hit, just update the block in cache.
- Keep track of whether each block is dirty.
When a dirty block is replaced:
- Write it back to memory. (ADDITIONAL INFO REQUIRED – CONSULT TEXT)
- Can use a write buffer to allow the replacing block to be read first.

Write Allocation
What should happen on a write miss? (CLARIFY THIS SLIDE – CONSULT TEXT)
Alternatives for write-through:
- Allocate on miss: fetch the block.
- Write around: don't fetch the block. (Programs often write a whole block before reading it, e.g., during initialization.)
For write-back:
- Usually fetch the block.
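A minimal sketch of the two write-through alternatives (all names and stubs below are mine, standing in for real cache and memory machinery):

```c
#include <stdint.h>
#include <stdio.h>

enum write_miss_policy { ALLOCATE_ON_MISS, WRITE_AROUND };

/* Stubs standing in for the real cache and memory operations. */
static void fetch_block_into_cache(uint32_t a)      { printf("  fetch block of %u\n", a); }
static void write_to_cache(uint32_t a, uint32_t d)  { printf("  cache[%u] = %u\n", a, d); }
static void write_to_memory(uint32_t a, uint32_t d) { printf("  mem[%u] = %u\n", a, d); }

static void handle_write_miss(enum write_miss_policy p, uint32_t a, uint32_t d) {
    if (p == ALLOCATE_ON_MISS) {   /* fetch the block, then write into it */
        fetch_block_into_cache(a);
        write_to_cache(a, d);
    } else {                       /* write around: skip the fetch */
        write_to_memory(a, d);
    }
}

int main(void) {
    printf("allocate on miss:\n");
    handle_write_miss(ALLOCATE_ON_MISS, 1200, 7);
    printf("write around:\n");
    handle_write_miss(WRITE_AROUND, 1212, 9);
    return 0;
}
```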

Example: Intrinsity FastMATH
Embedded MIPS processor:
- 12-stage pipeline.
- Instruction and data access on each cycle.
Split cache: separate I-cache and D-cache.
- Each 16KB: 256 blocks × 16 words/block.
- D-cache: write-through or write-back (configurable).
SPEC2000 miss rates:
- I-cache: 0.4%
- D-cache: 11.4%
- Weighted average: 3.2%

Example: Intrinsity FastMATH
(Figure in the original slides.)

Main Memory Supporting Caches
Use DRAMs for main memory:
- Fixed width (e.g., 1 word).
- Connected by a fixed-width clocked bus; the bus clock is typically slower than the CPU clock.
Example cache block read:
- 1 bus cycle for address transfer
- 15 bus cycles per DRAM access
- 1 bus cycle per data transfer
For a 4-word block and 1-word-wide DRAM:
- Miss penalty = 1 + 4×15 + 4×1 = 65 bus cycles
- Bandwidth = 16 bytes / 65 cycles ≈ 0.25 bytes/cycle

Increasing Memory Bandwidth
4-word-wide memory:
- Miss penalty = 1 + 15 + 1 = 17 bus cycles
- Bandwidth = 16 bytes / 17 cycles ≈ 0.94 bytes/cycle
4-bank interleaved memory:
- Miss penalty = 1 + 15 + 4×1 = 20 bus cycles
- Bandwidth = 16 bytes / 20 cycles = 0.8 bytes/cycle
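All three organizations can be recomputed from the same per-operation costs; a sketch (the cycle counts are the slides', the code is mine):

```c
#include <stdio.h>

int main(void) {
    const int words = 4;   /* words per cache block           */
    const int addr  = 1;   /* bus cycles for address transfer */
    const int dram  = 15;  /* bus cycles per DRAM access      */
    const int xfer  = 1;   /* bus cycles per data transfer    */

    int narrow = addr + words * dram + words * xfer;  /* 1-word-wide: 65 */
    int wide   = addr + dram + xfer;                  /* 4-word-wide: 17 */
    int banked = addr + dram + words * xfer;          /* 4 banks: 20     */

    printf("1-word-wide: %2d cycles, %.2f bytes/cycle\n", narrow, 16.0 / narrow);
    printf("4-word-wide: %2d cycles, %.2f bytes/cycle\n", wide,   16.0 / wide);
    printf("4-bank:      %2d cycles, %.2f bytes/cycle\n", banked, 16.0 / banked);
    return 0;
}
```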

Questions?