Exploiting Memory Hierarchy
Chapter 7
B. Ramamurthy
11/9/2018
Direct Mapped Cache: the Idea
In main memory, every address whose low-order bits are 001 maps to the purple cache slot, every address whose low-order bits are 101 maps to the blue cache slot, and so on: each memory block has exactly one slot it can occupy.
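The mapping above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical 8-slot direct-mapped cache, so the slot is simply the three low-order bits of the block address:

```python
NUM_SLOTS = 8  # assumed cache size: 8 slots -> 3 index bits

def slot_for(block_address):
    """Return the direct-mapped cache slot for a memory block address."""
    return block_address % NUM_SLOTS  # equivalent to keeping the low 3 bits

# Every address ending in ...001 maps to slot 1, ...101 to slot 5, etc.
assert slot_for(0b00001) == 1
assert slot_for(0b01001) == 1   # different address, same slot (a conflict)
assert slot_for(0b00101) == 5
```

Note that two different addresses with the same low-order bits compete for the same slot; this is the source of conflict misses in a direct-mapped cache.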
Cache Organization
A cache is a content-addressable memory; the main variants are fully associative and set associative.
[Fig. 7.7: cache memory organization (content-addressed) vs. regular memory organization (address in, data out)]
Multi-word Cache Block
In ordinary memory, a word address is just an address. For a cache with multi-word blocks, the address is divided into fields: Tag | Index (block #) | Byte # within block. Each cache entry holds a valid bit, a tag, and a data block; the index selects the block, the stored tag is compared against the address tag, and the byte offset selects the word within the block.
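The field split can be written as shifts and masks. This is a hedged sketch: the geometry (4-byte blocks, 8 cache blocks, so 2 offset bits and 3 index bits) is an assumption chosen to match the 457 → block 2 worked example on the next slide:

```python
BLOCK_BYTES = 4    # assumed bytes per block -> 2 offset bits
NUM_BLOCKS = 8     # assumed blocks in the cache -> 3 index bits
OFFSET_BITS = 2
INDEX_BITS = 3

def split_address(addr):
    """Return (tag, index, byte_offset) for a byte address."""
    byte_offset = addr & (BLOCK_BYTES - 1)            # byte # within block
    index = (addr >> OFFSET_BITS) & (NUM_BLOCKS - 1)  # selects the cache block
    tag = addr >> (OFFSET_BITS + INDEX_BITS)          # compared with stored tag
    return tag, index, byte_offset

assert split_address(457) == (14, 2, 1)  # lands in cache block 2
```

Masking with `BLOCK_BYTES - 1` works because the block size is a power of two; the mask keeps exactly the low offset bits.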
Address → Cache Block #
Floor(Address / #bytes per block) = block # in main memory
(Block # in memory) % (# blocks in cache) = cache block #
Example: Floor(457 / 4) = 114; 114 % 8 = 2
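The two formulas above translate directly into integer division and modulo; the assertions reproduce the slide's worked example:

```python
def memory_block(addr, bytes_per_block):
    """Block # in main memory: floor(address / bytes per block)."""
    return addr // bytes_per_block

def cache_block(addr, bytes_per_block, blocks_in_cache):
    """Cache block #: memory block # mod number of blocks in the cache."""
    return memory_block(addr, bytes_per_block) % blocks_in_cache

assert memory_block(457, 4) == 114          # floor(457 / 4) = 114
assert cache_block(457, 4, 8) == 2          # 114 % 8 = 2
```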
Handling Cache Misses
1. Send the original PC value to memory.
2. Perform a read on main memory.
3. Write the cache entry: put the data from memory in the data portion, write the upper address bits into the tag field, and turn the valid bit on.
4. Restart the missed instruction.
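The fill step can be sketched as follows. This is a minimal illustration, not the hardware path: `cache` is a list of entries indexed by block number, and `memory` is a dict keyed by memory block number (both hypothetical stand-ins):

```python
class CacheEntry:
    """One direct-mapped cache entry: valid bit, tag, and data block."""
    def __init__(self):
        self.valid = False
        self.tag = None
        self.data = None

def handle_miss(cache, memory, index, tag, mem_block):
    entry = cache[index]
    entry.data = memory[mem_block]  # read the block from main memory
    entry.tag = tag                 # write the upper address bits into the tag
    entry.valid = True              # turn the valid bit on
    # the processor then restarts (re-issues) the missed instruction

cache = [CacheEntry() for _ in range(8)]
memory = {114: "block-114-data"}
handle_miss(cache, memory, index=2, tag=14, mem_block=114)
assert cache[2].valid and cache[2].tag == 14
```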
Handling Writes
Write-through: a scheme in which writes always update both the cache and the memory, ensuring that the data is always consistent between the two.
Write-back: a scheme that handles writes by updating only the block in the cache; the modified block is written to main memory only when it is replaced.
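A hedged sketch contrasting the two policies, with plain dicts standing in for the cache and main memory and a `dirty` set standing in for per-block dirty bits (all hypothetical names):

```python
cache, memory = {}, {}
dirty = set()  # addresses modified in cache but not yet in memory

def write_through(addr, value):
    cache[addr] = value
    memory[addr] = value        # memory is updated on every store

def write_back(addr, value):
    cache[addr] = value
    dirty.add(addr)             # memory is updated only on replacement

def evict(addr):
    if addr in dirty:
        memory[addr] = cache[addr]  # write the modified block back now
        dirty.discard(addr)
    del cache[addr]

write_through(5, 1)
assert memory[5] == 1           # visible in memory immediately
write_back(6, 2)
assert 6 not in memory          # memory is stale until eviction
evict(6)
assert memory[6] == 2
```

Write-through keeps memory consistent at the cost of traffic on every store; write-back batches the traffic into one block transfer at replacement time.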
Example (SPEC2000)
CPI is 1.0 with no misses. Each miss incurs 100 extra cycles, and a miss occurs 10% of the time.
Average CPI = 1 + 100 × 0.1 = 1 + 10 = 11 cycles (not good!)
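The arithmetic above, written out so each term is named:

```python
base_cpi = 1.0       # CPI with no misses
miss_penalty = 100   # extra cycles per miss
miss_rate = 0.10     # fraction of accesses that miss

average_cpi = base_cpi + miss_penalty * miss_rate
assert average_cpi == 11.0   # 1 + 100 * 0.1 = 11 cycles
```

The memory-stall term (10 cycles) dwarfs the base CPI (1 cycle), which is why reducing the miss rate or the miss penalty dominates cache design.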
An Example Cache: The Intrinsity FastMath Processor
12-stage pipeline. At peak speed, the processor can request both an instruction and a data word on every clock, so separate instruction and data caches are used. Each cache is 16 KB (4K words) with 16-word blocks.
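The geometry on this slide checks out; a quick calculation (assuming 4-byte words, as in MIPS):

```python
WORD_BYTES = 4                     # assumed word size
cache_bytes = 16 * 1024            # 16 KB per cache

words = cache_bytes // WORD_BYTES  # 4096 words = 4K words
blocks = words // 16               # 16 words per block

assert words == 4 * 1024
assert blocks == 256               # matches Fig. 7.9
```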
[Fig. 7.9: 256 blocks with 16 words per block]