1
Computer Organization CS224 Fall 2012 Lessons 37 & 38
2
Memory Technology (§5.1 Introduction)
- Static RAM (SRAM): 0.5ns – 2.5ns, $2000 – $5000 per GB
- Dynamic RAM (DRAM): 50ns – 70ns, $20 – $75 per GB
- Magnetic disk: 5ms – 20ms, $0.20 – $2 per GB
- Ideal memory: the access time of SRAM with the capacity and cost/GB of disk
3
Principle of Locality
Programs access a small proportion of their address space at any time.
- Temporal locality: items accessed recently are likely to be accessed again soon
  (e.g., instructions in a loop, induction variables)
- Spatial locality: items near those accessed recently are likely to be accessed soon
  (e.g., sequential instruction access, array data)
4
Taking Advantage of Locality
- Memory hierarchy
- Store everything on disk
- Copy recently accessed (and nearby) items from disk to smaller DRAM memory
  - Main memory
- Copy more recently accessed (and nearby) items from DRAM to smaller SRAM memory
  - Cache memory attached to the CPU
5
Memory Hierarchy Levels
- Block (aka line): unit of copying; may be multiple words
- If accessed data is present in the upper level
  - Hit: access satisfied by the upper level
  - Hit ratio: hits/accesses
- If accessed data is absent
  - Miss: block copied from the lower level
  - Time taken: miss penalty
  - Miss ratio: misses/accesses = 1 – hit ratio
  - Then the accessed data is supplied from the upper level
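The hit/miss bookkeeping above can be sketched as a small helper (a minimal illustration; the function name is ours, not from the slides):

```python
def ratios(hits, accesses):
    # Hit ratio = hits / accesses; miss ratio is its complement,
    # i.e. misses / accesses = 1 - hit ratio.
    hit_ratio = hits / accesses
    miss_ratio = 1 - hit_ratio
    return hit_ratio, miss_ratio

# e.g. 90 hits out of 100 accesses -> hit ratio 0.9, miss ratio 0.1
print(ratios(90, 100))
```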
6
Cache Memory (§5.2 The Basics of Caches)
- Cache memory: the level of the memory hierarchy closest to the CPU
- Given accesses X_1, …, X_{n-1}, X_n:
  - How do we know if the data is present?
  - Where do we look?
7
Direct Mapped Cache
- Location determined by address
- Direct mapped: only one choice
  - (Block address) modulo (#Blocks in cache)
- If #Blocks is a power of 2, this is just the low-order address bits
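The mapping above can be sketched in a few lines (a hedged illustration; the constant and function names are ours):

```python
NUM_BLOCKS = 8  # power of two, as on the slides

def cache_index(block_address):
    # (Block address) modulo (#Blocks in cache). Because NUM_BLOCKS is a
    # power of two, the modulo equals the low-order bits of the address.
    assert NUM_BLOCKS & (NUM_BLOCKS - 1) == 0, "block count must be a power of 2"
    return block_address % NUM_BLOCKS

# Address 22 (binary 10110) maps to index 6 (binary 110),
# matching the masked low-order bits.
print(cache_index(22), 22 & 0b111)
```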
8
Tags and Valid Bits
- How do we know which particular block is stored in a cache location?
  - Store the block address as well as the data
  - Actually, only the high-order bits are needed: the tag
- What if there is no data in a location?
  - Valid bit: 1 = present, 0 = not present
  - Initially 0
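A lookup using the valid bit and tag can be sketched as follows (a minimal model, not a full simulator; the data layout is our own illustrative choice):

```python
NUM_BLOCKS = 8

# Each cache line holds (valid, tag, data); valid bits start at 0 (False).
lines = [(False, 0, None)] * NUM_BLOCKS

def lookup(block_address):
    index = block_address % NUM_BLOCKS   # low-order bits select the line
    tag = block_address // NUM_BLOCKS    # high-order bits identify the block
    valid, stored_tag, data = lines[index]
    # A hit requires both a set valid bit and a matching tag.
    return data if (valid and stored_tag == tag) else None

print(lookup(22))  # None: all lines start invalid, so this is a miss
```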
9
Cache Example
8 blocks, 1 word/block, direct mapped. Initial state:

Index | V | Tag | Data
000   | N |     |
001   | N |     |
010   | N |     |
011   | N |     |
100   | N |     |
101   | N |     |
110   | N |     |
111   | N |     |
10
Cache Example (continued)

Word addr | Binary addr | Hit/miss | Cache block
22        | 10 110      | Miss     | 110

Index | V | Tag | Data
000   | N |     |
001   | N |     |
010   | N |     |
011   | N |     |
100   | N |     |
101   | N |     |
110   | Y | 10  | Mem[10110]
111   | N |     |
11
Cache Example (continued)

Word addr | Binary addr | Hit/miss | Cache block
26        | 11 010      | Miss     | 010

Index | V | Tag | Data
000   | N |     |
001   | N |     |
010   | Y | 11  | Mem[11010]
011   | N |     |
100   | N |     |
101   | N |     |
110   | Y | 10  | Mem[10110]
111   | N |     |
12
Cache Example (continued)

Word addr | Binary addr | Hit/miss | Cache block
22        | 10 110      | Hit      | 110
26        | 11 010      | Hit      | 010

Index | V | Tag | Data
000   | N |     |
001   | N |     |
010   | Y | 11  | Mem[11010]
011   | N |     |
100   | N |     |
101   | N |     |
110   | Y | 10  | Mem[10110]
111   | N |     |
13
Cache Example (continued)

Word addr | Binary addr | Hit/miss | Cache block
16        | 10 000      | Miss     | 000
3         | 00 011      | Miss     | 011
16        | 10 000      | Hit      | 000

Index | V | Tag | Data
000   | Y | 10  | Mem[10000]
001   | N |     |
010   | Y | 11  | Mem[11010]
011   | Y | 00  | Mem[00011]
100   | N |     |
101   | N |     |
110   | Y | 10  | Mem[10110]
111   | N |     |
14
Cache Example (continued)

Word addr | Binary addr | Hit/miss | Cache block
18        | 10 010      | Miss     | 010

The miss replaces the block at index 010: Mem[11010] (tag 11) is evicted and Mem[10010] (tag 10) takes its place.

Index | V | Tag | Data
000   | Y | 10  | Mem[10000]
001   | N |     |
010   | Y | 10  | Mem[10010]
011   | Y | 00  | Mem[00011]
100   | N |     |
101   | N |     |
110   | Y | 10  | Mem[10110]
111   | N |     |
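The whole example trace above can be reproduced with a tiny direct-mapped simulator (a sketch under the slides' parameters: 8 blocks, 1 word/block; the helper names are ours):

```python
NUM_BLOCKS = 8
cache = [None] * NUM_BLOCKS   # each entry: the tag of the resident block, or None

def access(addr):
    index = addr % NUM_BLOCKS     # low-order bits: which line
    tag = addr // NUM_BLOCKS      # high-order bits: which block
    hit = cache[index] == tag
    cache[index] = tag            # on a miss, fetch the block (replacing any resident one)
    return "Hit" if hit else "Miss"

trace = [22, 26, 22, 26, 16, 3, 16, 18]
results = [access(a) for a in trace]
print(results)
# ['Miss', 'Miss', 'Hit', 'Hit', 'Miss', 'Miss', 'Hit', 'Miss'] — the slide sequence
```

The final access to 18 misses because 18 and 26 share index 010, so 18 evicts Mem[11010], exactly as shown in the last table.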
15
Address Subdivision
16
Example: Larger Block Size
- 64 blocks, 16 bytes/block
- To what block number does address 1200 map?
  - Block address = 1200/16 = 75
  - Block number = 75 modulo 64 = 11

Address fields (32-bit address):

Field  | Bit positions | Width
Tag    | 31–10         | 22 bits
Index  | 9–4           | 6 bits
Offset | 3–0           | 4 bits
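The field split above can be checked directly (a hedged sketch using the slide's parameters; the function name is ours):

```python
BLOCK_BYTES = 16   # 4-bit byte offset
NUM_BLOCKS = 64    # 6-bit index

def split(addr):
    offset = addr % BLOCK_BYTES            # byte within the block
    block_address = addr // BLOCK_BYTES    # block address = addr / block size
    index = block_address % NUM_BLOCKS     # block number within the cache
    tag = block_address // NUM_BLOCKS      # remaining high-order bits
    return tag, index, offset

print(split(1200))
# (1, 11, 0): block address 1200/16 = 75, block number 75 mod 64 = 11
```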
17
Block Size Considerations
- Larger blocks should reduce the miss rate, due to spatial locality
- But in a fixed-sized cache:
  - Larger blocks → fewer of them
    - More competition → increased miss rate
  - Larger blocks → pollution
- Larger miss penalty
  - Can override the benefit of the reduced miss rate
  - Early restart and critical-word-first can help
18
Cache Misses
- On a cache hit, the CPU proceeds normally
- On a cache miss:
  - Stall the CPU pipeline
  - Fetch the block from the next level of the hierarchy
  - Instruction cache miss: restart the instruction fetch
  - Data cache miss: complete the data access