1
For each of these, where could the data be and how would we find it?
TLB hit – cache or physical memory
TLB miss – cache, memory, or disk
Virtual memory hit – cache or physical memory
Virtual memory miss – disk
Cache hit – cache
Cache miss – physical memory or disk
2
Summary: Levels of the Memory Hierarchy

Level         Capacity     Access Time   Cost
Registers     100s bytes   <10s ns
Cache         K bytes      10-100 ns     $.01-.001/bit
Main Memory   M bytes      100 ns-1 us   $.01-.001
Disk          G bytes      ms            10^-3 - 10^-4 cents
Tape          infinite     sec-min       10^-6

Staging transfer unit between levels (who manages it, and how big):
Registers <-> Cache: instructions and operands – program/compiler, 1-8 bytes
Cache <-> Memory: blocks – cache controller, 8-128 bytes
Memory <-> Disk: pages – operating system, 512-4K bytes
Disk <-> Tape: files – user/operator, Mbytes

Upper levels are faster; lower levels are larger.
3
Performance
Simplified model:
execution time = (execution cycles + stall cycles) × cycle time
stall cycles = memory accesses × miss rate × miss penalty
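To make the model concrete, here is a small worked example in C. All of the numbers (cycle counts, miss rate, miss penalty, clock period) are made up for illustration and do not come from the slides.

#include <stdio.h>

int main(void) {
    /* Assumed example values -- not from the slides. */
    double execution_cycles = 1000000.0;   /* cycles spent doing useful work   */
    double memory_accesses  = 400000.0;    /* loads + stores                   */
    double miss_rate        = 0.05;        /* 5% of accesses miss in the cache */
    double miss_penalty     = 100.0;       /* cycles per miss                  */
    double cycle_time_ns    = 1.0;         /* 1 GHz clock                      */

    /* stall cycles = memory accesses x miss rate x miss penalty */
    double stall_cycles = memory_accesses * miss_rate * miss_penalty;

    /* execution time = (execution cycles + stall cycles) x cycle time */
    double exec_time_ns = (execution_cycles + stall_cycles) * cycle_time_ns;

    printf("stall cycles   = %.0f\n", stall_cycles);
    printf("execution time = %.0f ns\n", exec_time_ns);
    return 0;
}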
4
Reducing Cache Miss Penalty
Multilevel caches
Critical word first – request the missed word first and send it to the CPU as soon as it arrives; don't wait for the whole block read to finish
Early restart – fetch the block in normal order, but as soon as the requested word arrives, send it to the CPU; don't wait for the read to finish
Give priority to reads over writes – the word being read may still be sitting in the write buffer, so the write buffer must be checked during the read (see the sketch below)
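A rough sketch of the last point, in C: before a read goes to memory, the pending write buffer is searched, since the freshest copy of the word may still be waiting there. The structures, names, and sizes here are invented for illustration, not taken from any real controller.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WB_ENTRIES 4   /* assumed write-buffer depth */

/* One pending write: address + data (hypothetical word-sized entries). */
struct wb_entry { bool valid; uint32_t addr; uint32_t data; };
static struct wb_entry write_buffer[WB_ENTRIES];

static uint32_t read_from_memory(uint32_t addr) {
    /* Placeholder for the slow path to main memory. */
    return 0xDEADBEEF ^ addr;
}

/* Reads are given priority over buffered writes, but must first check
 * the write buffer so a newer value waiting there is not missed. */
uint32_t read_word(uint32_t addr) {
    for (int i = 0; i < WB_ENTRIES; i++) {
        if (write_buffer[i].valid && write_buffer[i].addr == addr)
            return write_buffer[i].data;   /* hit in the write buffer */
    }
    return read_from_memory(addr);         /* otherwise go to memory  */
}

int main(void) {
    write_buffer[0] = (struct wb_entry){ true, 0x100, 42 };
    printf("%u\n", (unsigned)read_word(0x100));  /* 42, served from the write buffer */
    printf("%u\n", (unsigned)read_word(0x200));  /* served from memory */
    return 0;
}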
5
Reducing Cache Miss Penalty
Merging write buffer – check whether the buffer already holds the same addresses we're trying to write, and merge the writes into one entry
Victim cache – remember what was recently discarded from the cache; on a miss, check the victim cache first and bring the block back in
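The victim-cache idea can be sketched like this (a toy, fully associative victim buffer with invented names and sizes; a real design would also hold the block data and swap blocks between the two caches):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define VC_ENTRIES 4   /* assumed: small, fully associative victim cache */

struct vc_entry { bool valid; uint32_t tag; /* block data omitted */ };
static struct vc_entry victim_cache[VC_ENTRIES];

/* Called on a main-cache miss for block 'tag'. Returns true if the block
 * is found among recently evicted blocks, in which case it can be swapped
 * back into the main cache and the full miss penalty is avoided. */
bool victim_cache_lookup(uint32_t tag) {
    for (int i = 0; i < VC_ENTRIES; i++) {
        if (victim_cache[i].valid && victim_cache[i].tag == tag) {
            victim_cache[i].valid = false;   /* block moves back to the main cache */
            return true;
        }
    }
    return false;                            /* real miss: go to the next level */
}

/* Called when the main cache evicts a block: remember it for a while. */
void victim_cache_insert(uint32_t tag, int slot) {
    victim_cache[slot % VC_ENTRIES] = (struct vc_entry){ true, tag };
}

int main(void) {
    victim_cache_insert(0x1A, 0);                 /* pretend block 0x1A was just evicted */
    printf("%d\n", victim_cache_lookup(0x1A));    /* 1: recovered from the victim cache  */
    printf("%d\n", victim_cache_lookup(0x2B));    /* 0: genuine miss                     */
    return 0;
}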
6
Reducing Cache Miss Rate
Larger block size
Larger caches
Higher associativity
Way prediction – predict the most likely location (way) for the block and check it first
Compiler optimizations – loop interchange, for example (see the sketch below)
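Loop interchange is easy to see in C: because C arrays are stored row-major, making the row index the outer loop turns large-stride column walks into sequential accesses that stay within cache blocks. The array size below is arbitrary.

#include <stdio.h>

#define N 1024
static double a[N][N];   /* C stores this row-major: a[i][0..N-1] are contiguous */

int main(void) {
    long checksum = 0;

    /* Before interchange: the inner loop walks down a column,
     * touching a new cache block (stride of N doubles) on every access. */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            a[i][j] = 1.0;

    /* After interchange: the inner loop walks along a row,
     * so consecutive accesses fall in the same cache block. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = 2.0;

    /* Touch the data so the compiler cannot discard the loops. */
    for (int i = 0; i < N; i++)
        checksum += (long)a[i][i];

    printf("%ld\n", checksum);
    return 0;
}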
7
Real Systems
Real machines combine all of these pieces into very complicated memory systems.