
Slide 1: Lecture 13: Memory Hierarchy—Ways to Reduce Misses

Slide 2: Review: Who Cares About the Memory Hierarchy?
Processor performance ("Moore's Law") has improved roughly 60% per year, while DRAM performance has improved only about 7% per year, so the processor-memory performance gap grows about 50% per year. (Figure: CPU vs. DRAM performance, log scale, 1980-2000.)
Thus far in the course: CPU cost/performance, ISA, pipelined execution.
The CPU-DRAM gap in practice: in 1980 microprocessors had no cache; the first Intel microprocessor with an on-chip cache appeared in 1989; by 1995, two-level caches were on chip.

Slide 3: The Goal: An Illusion of Large, Fast, Cheap Memory
Fact: large memories are slow; fast memories are small. How do we create a memory that is large, cheap, and fast (most of the time)? A hierarchy of levels:
– Use smaller and faster memory technologies close to the processor
– Fast access time in the highest level of the hierarchy
– Cheap, slow memory furthest from the processor
The aim of memory hierarchy design is an access time close to that of the highest level with a size equal to that of the lowest level.

Slide 4: Recap: Memory Hierarchy Pyramid
(Figure: pyramid with the processor (CPU) at the top and Levels 1, 2, 3, ..., n below, connected by a bus datapath. Moving away from the CPU, the memory size at each level increases while cost per MB decreases; moving toward the CPU, access time (memory latency) decreases.)

Slide 5: Memory Hierarchy: Terminology
Hit: the data appears in the upper level (Block X).
– Hit Rate: the fraction of memory accesses found in the upper level.
– Hit Time: time to access the upper level = time to determine hit/miss + memory access time.
Miss: the data must be retrieved from a block in the lower level (Block Y).
– Miss Rate = 1 − Hit Rate.
– Miss Penalty: time to replace a block in the upper level + time to deliver the block to the processor.
Note: Hit Time << Miss Penalty.
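
As a concrete illustration of these definitions, here is a minimal C sketch (the structure, names, and numbers are illustrative, not from the lecture) that derives hit rate, miss rate, and average access time from access counters:

```c
#include <stdio.h>

/* Illustrative counters and timings for one cache level. */
struct cache_stats {
    long accesses;        /* total memory accesses seen by this level */
    long hits;            /* accesses satisfied by this level         */
    double hit_time;      /* ns: determine hit/miss + access memory   */
    double miss_penalty;  /* ns: replace block + deliver to processor */
};

int main(void) {
    /* Hypothetical numbers, chosen only to exercise the formulas. */
    struct cache_stats L1 = { 1000000, 950000, 2.0, 100.0 };

    double hit_rate  = (double)L1.hits / L1.accesses;
    double miss_rate = 1.0 - hit_rate;         /* Miss Rate = 1 - Hit Rate */
    /* Every access pays the hit time; misses also pay the penalty. */
    double avg_time  = L1.hit_time + miss_rate * L1.miss_penalty;

    printf("hit rate %.3f, miss rate %.3f, avg access time %.1f ns\n",
           hit_rate, miss_rate, avg_time);
    return 0;
}
```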

Slide 6: Current Memory Hierarchy
Processor (control + datapath) with registers, backed by L1 cache, L2 cache, main memory, and secondary memory:

Level         Regs     L1 cache  L2 cache  Main memory  Secondary memory
Speed (ns)    0.5      2         6         100          10,000,000
Size (MB)     0.0005   0.05      1-4       100-1000     100,000
Cost ($/MB)   --       $100      $30       $1           $0.05
Technology    Regs     SRAM      SRAM      DRAM         Disk

Slide 7: Memory Hierarchy: Why Does it Work? Locality!
Temporal locality (locality in time): keep the most recently accessed data items closer to the processor.
Spatial locality (locality in space): move blocks consisting of contiguous words up to the higher levels.
(Figure: upper and lower memory levels exchanging blocks X and Y with the processor; the probability of reference is concentrated around recently used addresses in the address space 0 to 2^n − 1.)
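
To see both kinds of locality in code, consider this small C sketch (array sizes are arbitrary): the first loop nest walks memory contiguously and reuses `sum` every iteration, while the second strides across rows and wastes most of each cache block it fetches:

```c
#include <stdio.h>

#define R 1024
#define C 1024
static double a[R][C];

int main(void) {
    double sum = 0.0;   /* reused every iteration: temporal locality */

    /* Good spatial locality: consecutive accesses touch consecutive
       addresses, so each fetched cache block is fully used. */
    for (int i = 0; i < R; i++)
        for (int j = 0; j < C; j++)
            sum += a[i][j];

    /* Poor spatial locality: a stride of C * sizeof(double) bytes means
       each access typically lands in a different cache block. */
    for (int j = 0; j < C; j++)
        for (int i = 0; i < R; i++)
            sum += a[i][j];

    printf("%f\n", sum);
    return 0;
}
```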

Slide 8: Memory Hierarchy Technology
Random access:
– "Random" is good: access time is the same for all locations.
– DRAM (Dynamic Random Access Memory): high density, low power, cheap, slow. Dynamic: needs to be "refreshed" regularly.
– SRAM (Static Random Access Memory): low density, high power, expensive, fast. Static: content lasts "forever" (until power is lost).
"Not-so-random" access technology: access time varies from location to location and from time to time. Examples: disk, CD-ROM.
Sequential access technology: access time linear in location (e.g., tape).
We will concentrate on random access technology: main memory (DRAM) plus caches (SRAM).

Slide 9: Introduction to Caches
A cache:
– is a small, very fast memory (SRAM, expensive);
– contains copies of the most recently accessed memory locations (data and instructions): temporal locality;
– is fully managed by hardware (unlike virtual memory);
– organizes storage in blocks of contiguous memory locations: spatial locality;
– uses the cache block as the unit of transfer to/from main memory (or L2).
General structure: n blocks per cache organized in s sets, b bytes per block, total cache size n×b bytes.
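
The general structure above maps directly onto a few fields in C; this hedged sketch (field names are my own) derives the set count and total size from the block count, associativity, and block size:

```c
#include <assert.h>
#include <stdio.h>

/* A cache of n blocks, b bytes each, organized in s = n/w sets. */
struct cache_geometry {
    unsigned n;   /* total number of blocks    */
    unsigned b;   /* bytes per block           */
    unsigned w;   /* ways, i.e. blocks per set */
};

int main(void) {
    /* Example values: 4096 blocks of 16 bytes, direct mapped (w = 1). */
    struct cache_geometry g = { .n = 4096, .b = 16, .w = 1 };
    assert(g.w != 0 && g.n % g.w == 0);
    unsigned s = g.n / g.w;                          /* number of sets  */
    unsigned long total = (unsigned long)g.n * g.b;  /* total size n*b  */
    printf("%u sets, %lu bytes total\n", s, total);  /* 4096 sets, 64KB */
    return 0;
}
```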

Slide 10: Cache Organization
(1) How do you know whether something is in the cache? (2) If it is in the cache, how do you find it? The answers to (1) and (2) depend on the type or organization of the cache. In a direct-mapped cache, each memory address is associated with exactly one possible block within the cache, so we only need to look in a single location for the data if it exists in the cache.

Slide 11: Simplest Cache: Direct Mapped
(Figure: a 16-block main memory mapped onto a 4-block direct-mapped cache; memory block addresses 0010, 0110, 1010, and 1110 all map to cache index 10.)
The index determines the block in the cache: index = (block address) mod (# cache blocks). If the number of cache blocks is a power of 2, the cache index is just the lower n bits of the memory block address, where n = log2(# blocks); the remaining upper bits of the block address form the tag.

Slide 12: Issues with Direct-Mapped
If the block size is greater than 1, the rightmost bits of the address are really the offset within the indexed block:

  ttttttttttttttttt  iiiiiiiiii  oooo
  tag                index       offset
  to check if we     to select   byte offset
  have the correct   the block   within the
  block                          block
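
Assuming the power-of-two geometry of the next slide (16-bit tag, 12-bit index, 4-bit offset on a 32-bit address), the split can be computed with shifts and masks; an illustrative C sketch, not code from the lecture:

```c
#include <stdint.h>
#include <stdio.h>

/* 64KB direct-mapped cache with 16-byte blocks (see slide 13). */
#define OFFSET_BITS 4
#define INDEX_BITS  12

static void decompose(uint32_t addr) {
    uint32_t offset = addr & ((1u << OFFSET_BITS) - 1);            /* oooo */
    uint32_t index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);
    printf("addr %08x -> tag %04x index %03x offset %x\n",
           addr, tag, index, offset);
}

int main(void) {
    decompose(0x12345678);   /* tag 1234, index 567, offset 8 */
    return 0;
}
```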

Slide 13: 64KB Cache with 4-Word (16-Byte) Blocks
(Figure, showing bit positions: the 32-bit address splits into a 16-bit tag (bits 31..16), a 12-bit index (bits 15..4) selecting one of 4K entries, a 2-bit block offset (bits 3..2), and a 2-bit byte offset (bits 1..0). Each entry holds a valid bit, a 16-bit tag, and 128 bits of data; on a hit, a mux selects one of the four 32-bit words.)

Slide 14: Direct-Mapped Cache Cont'd
The direct-mapped cache is simple to design, and its access time is fast (why? the index selects a single candidate block, so only one tag comparison is needed), making it good for the L1 (on-chip) cache. Problem: conflict misses, and therefore a low hit ratio. Conflict misses are misses caused by accessing different memory locations that map to the same cache index. A direct-mapped cache has no flexibility in where a memory block can be placed, which contributes to conflict misses.

Slide 15: Another Extreme: Fully Associative
Fully associative cache (8-word block):
– Omit the cache index; place an item in any block!
– Compare all cache tags in parallel.
By definition, conflict misses = 0 for a fully associative cache.
(Figure: each entry holds a valid bit, a 27-bit cache tag, and data bytes B0..B31; a 5-bit byte offset selects within the 32-byte block, and every tag is compared simultaneously against the address.)

Slide 16: Fully Associative Cache
All tags in the cache must be searched, since an item can be in any cache block. The tag search must be done by hardware in parallel (any other search is too slow), but the necessary parallel comparator hardware is very expensive. Therefore, fully associative placement is practical only for a very small cache.
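
In software terms, a fully associative lookup must examine every block's tag; this sequential C sketch (a hypothetical structure, not the lecture's) shows the logic that hardware performs with one comparator per block, all at once:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NBLOCKS 8   /* fully associative caches are kept very small */

struct block { bool valid; uint32_t tag; };
static struct block cache[NBLOCKS];

/* Return the matching block index, or -1 on a miss. Software loops;
   hardware compares all NBLOCKS tags simultaneously. */
static int fa_lookup(uint32_t tag) {
    for (int i = 0; i < NBLOCKS; i++)
        if (cache[i].valid && cache[i].tag == tag)
            return i;
    return -1;
}

int main(void) {
    cache[3] = (struct block){ true, 0x1234567 };   /* a 27-bit tag */
    printf("lookup -> block %d\n", fa_lookup(0x1234567));
    return 0;
}
```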

Slide 17: Compromise: N-way Set-Associative Cache
N-way set associative: N cache blocks for each cache index.
– Like having N direct-mapped caches operating in parallel.
– Select the one that gets a hit.
Example: 2-way set-associative cache.
– The cache index selects a "set" of 2 blocks from the cache.
– The 2 tags in the set are compared in parallel.
– Data is selected based on which tag matched the address.
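
A 2-way lookup combines both mechanisms: the index selects a set, then the set's two tags are compared (in parallel, in hardware). A hedged C sketch with illustrative names and sizes:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NSETS 1024
#define NWAYS 2

struct line { bool valid; uint32_t tag; };
static struct line cache[NSETS][NWAYS];

/* Index selects a set; the tags in the set are compared; on a hit,
   the data mux would select the way whose tag matched. */
static int sa_lookup(uint32_t tag, uint32_t index, int *way_out) {
    for (int w = 0; w < NWAYS; w++) {
        if (cache[index][w].valid && cache[index][w].tag == tag) {
            *way_out = w;
            return 1;   /* hit  */
        }
    }
    return 0;           /* miss */
}

int main(void) {
    cache[5][1] = (struct line){ true, 0xABC };
    int way = -1;
    int hit = sa_lookup(0xABC, 5, &way);
    printf("hit=%d way=%d\n", hit, way);
    return 0;
}
```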

Slide 18: Example: 2-way Set-Associative Cache
(Figure: the address splits into tag, index, and offset; the index selects one set of two blocks, each with a valid bit, cache tag, and cache data. The two tags are compared in parallel, and on a hit a mux selects the cache block whose tag matched.)

Slide 19: Set-Associative Cache Cont'd
Direct mapped and fully associative can be seen as variations of the set-associative block placement strategy: direct mapped = 1-way set associative, and fully associative = n-way set associative for a cache with exactly n blocks.

Slide 20: Alpha 21264 Cache Organization (figure)

Slide 21: Block Replacement Policy
An N-way set-associative or fully associative cache has a choice of where to place a block, and hence of which block to replace.
– Of course, if there is an invalid block, use it.
Whenever there is a cache hit, record which cache block was touched. When a cache block must be evicted, choose one that hasn't been touched recently: "Least Recently Used" (LRU).
– Past is prologue: history suggests it is the least likely of the choices to be used soon.
– This is the flip side of temporal locality.
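
One common way to implement LRU is to timestamp each block on every touch and evict the oldest; a minimal sketch under that assumption (real hardware often uses cheaper approximations):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NWAYS 4

struct line { bool valid; uint32_t tag; uint64_t last_used; };

/* Record the cache block that was touched on every hit. */
static void touch(struct line *l, uint64_t now) { l->last_used = now; }

/* Choose a victim: an invalid block if one exists, else the LRU block. */
static int choose_victim(struct line set[NWAYS]) {
    int lru = 0;
    for (int w = 0; w < NWAYS; w++) {
        if (!set[w].valid)
            return w;                               /* free block: use it */
        if (set[w].last_used < set[lru].last_used)
            lru = w;                                /* older candidate    */
    }
    return lru;
}

int main(void) {
    struct line set[NWAYS] = {
        { true, 1, 40 }, { true, 2, 10 }, { true, 3, 30 }, { true, 4, 20 },
    };
    touch(&set[1], 50);                         /* way 1 is now most recent */
    printf("evict way %d\n", choose_victim(set));    /* way 3, time 20      */
    return 0;
}
```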

Slide 22: Review: Four Questions for Memory Hierarchy Designers
Q1: Where can a block be placed in the upper level? (Block placement)
– Fully associative, set associative, direct mapped.
Q2: How is a block found if it is in the upper level? (Block identification)
– Tag/block.
Q3: Which block should be replaced on a miss? (Block replacement)
– Random, LRU.
Q4: What happens on a write? (Write strategy)
– Write back or write through (with a write buffer).

Slide 23: Write Policy: Write-Through vs. Write-Back
Write-through: all writes update the cache and the underlying memory/cache.
– Cached data can always be discarded; the most up-to-date data is in memory.
– Cache control bit: only a valid bit.
Write-back: writes simply update the cache.
– Cached data can't just be discarded; it may have to be written back to memory (flagged write-back).
– Cache control bits: both valid and dirty bits.
Other advantages:
– Write-through: memory (and other processors) always has the latest data; simpler cache management.
– Write-back: needs much lower bus bandwidth because memory is accessed infrequently; better tolerance to long-latency memory?

Slide 24: Write Through: Write Allocate vs. Non-Allocate
Write allocate: allocate a new cache line in the cache.
– Usually means doing a "read miss" to fill in the rest of the cache line!
– Alternative: per-word valid bits.
Write non-allocate (or "write-around"): simply send the write data through to the underlying memory/cache; don't allocate a new cache line!
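
The two write policies and two allocation policies combine into a small decision procedure. This C sketch is a software analogy with hypothetical helpers (`memory_write`, `fill_line`), not hardware from the lecture:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct line { bool valid, dirty; uint32_t tag; };

enum policy { WRITE_THROUGH, WRITE_BACK };
enum alloc  { WRITE_ALLOCATE, WRITE_AROUND };

/* Stubs standing in for the lower level of the hierarchy. */
static void memory_write(uint32_t addr, uint32_t val) {
    printf("memory[%#x] <- %u\n", addr, val);
}
static void fill_line(struct line *l, uint32_t addr) {
    l->valid = true; l->dirty = false; l->tag = addr >> 4;  /* "read miss" fill */
}

static void handle_write(struct line *l, bool hit, uint32_t addr,
                         uint32_t val, enum policy p, enum alloc a) {
    if (!hit) {
        if (a == WRITE_AROUND) { memory_write(addr, val); return; }
        fill_line(l, addr);    /* write allocate: fetch rest of the line */
    }
    /* ... update the cached copy of the word here ... */
    if (p == WRITE_THROUGH)
        memory_write(addr, val);  /* memory always has the latest data */
    else
        l->dirty = true;          /* write back later, on eviction     */
}

int main(void) {
    struct line l = { false, false, 0 };
    handle_write(&l, false, 0x100, 42, WRITE_BACK, WRITE_ALLOCATE);
    printf("dirty = %d\n", l.dirty);
    return 0;
}
```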

Slide 25: Write Buffers
Write buffers (for write-through):
– buffer words to be written to the L2 cache/memory, along with their addresses;
– are 2 to 4 entries deep;
– check all read misses against pending writes for dependencies (associatively);
– allow reads to proceed ahead of writes;
– can coalesce writes to the same address.
Write-back buffers:
– sit between a write-back cache and L2 or main memory;
– algorithm: move the dirty block to the write-back buffer, read the new block, then write the dirty block to L2 or main memory;
– can be associated with a victim cache (later).
(Figure: the write buffer sits between L1 and L2, alongside the path to the CPU.)
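
Two of the write buffer's behaviors — checking read misses against pending writes and coalescing writes to the same address — can be sketched as follows (the entry format is an assumption, not the lecture's design):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WB_ENTRIES 4   /* the slide says 2 to 4 entries deep */

struct wb_entry { bool valid; uint32_t addr, data; };
static struct wb_entry wb[WB_ENTRIES];

/* A read miss checks pending writes (associatively, in hardware). */
static bool wb_match(uint32_t addr, uint32_t *data_out) {
    for (int i = 0; i < WB_ENTRIES; i++)
        if (wb[i].valid && wb[i].addr == addr) {
            *data_out = wb[i].data;
            return true;
        }
    return false;
}

/* A new write coalesces with an existing entry to the same address. */
static bool wb_insert(uint32_t addr, uint32_t data) {
    for (int i = 0; i < WB_ENTRIES; i++)
        if (wb[i].valid && wb[i].addr == addr) { wb[i].data = data; return true; }
    for (int i = 0; i < WB_ENTRIES; i++)
        if (!wb[i].valid) { wb[i] = (struct wb_entry){ true, addr, data }; return true; }
    return false;   /* buffer full: the store must stall */
}

int main(void) {
    uint32_t v = 0;
    wb_insert(0x40, 1);
    wb_insert(0x40, 2);              /* coalesces with the first write */
    bool hit = wb_match(0x40, &v);
    printf("match=%d data=%u\n", hit, v);
    return 0;
}
```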

Slide 26: Write Merge (figure)

Slide 27: Review: Cache Performance
Miss-oriented approach to memory access:
CPU time = IC × (CPI_Execution + Misses/Instruction × Miss Penalty) × Cycle Time
– CPI_Execution includes ALU and memory instructions.
Separating out the memory component entirely:
CPU time = IC × (AluOps/Instruction × CPI_AluOps + MemAccesses/Instruction × AMAT) × Cycle Time
– AMAT = Average Memory Access Time = Hit Time + Miss Rate × Miss Penalty.
– CPI_AluOps does not include memory instructions.

Slide 28: Impact on Performance
Suppose a processor executes at:
– Clock rate = 200 MHz (5 ns per cycle), ideal (no misses) CPI = 1.1;
– 50% arith/logic, 30% ld/st, 20% control.
Suppose that 10% of memory (data) operations get a 50-cycle miss penalty, and 1% of instructions get the same miss penalty.
CPI = ideal CPI + average stalls per instruction
    = 1.1 (cycles/inst) + [0.30 (DataMops/inst) × 0.10 (miss/DataMop) × 50 (cycles/miss)] + [1 (InstMop/inst) × 0.01 (miss/InstMop) × 50 (cycles/miss)]
    = (1.1 + 1.5 + 0.5) cycles/inst = 3.1
58% of the time the processor is stalled waiting for data memory (1.5 data-stall cycles out of the 2.6 cycles of ideal CPI plus data stalls).
Total number of memory accesses = one per instruction + 0.3 for data = 1.3.
Thus, AMAT = (1/1.3)×[1 + 0.01×50] + (0.3/1.3)×[1 + 0.1×50] = 2.54 cycles, instead of the ideal 1 cycle.
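
The slide's arithmetic is easy to check mechanically; this C fragment reproduces the CPI and AMAT numbers (all values come from the slide itself):

```c
#include <stdio.h>

int main(void) {
    double ideal_cpi = 1.1, penalty = 50.0;
    double data_ops  = 0.30;   /* data memory ops per instruction  */
    double data_miss = 0.10;   /* miss rate of data accesses       */
    double inst_miss = 0.01;   /* miss rate of instruction fetches */

    double data_stall = data_ops * data_miss * penalty;   /* 1.5 */
    double inst_stall = 1.0 * inst_miss * penalty;        /* 0.5 */
    double cpi = ideal_cpi + data_stall + inst_stall;     /* 3.1 */

    /* 1.3 memory accesses per instruction: 1 fetch + 0.3 data. */
    double amat = (1.0 / 1.3) * (1 + inst_miss * penalty)
                + (0.3 / 1.3) * (1 + data_miss * penalty);  /* ~2.54 */

    printf("CPI = %.1f, AMAT = %.2f cycles\n", cpi, amat);
    return 0;
}
```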

Slide 29: Impact of a Change in Clock Cycle Time (cc)
Suppose a processor has the following parameters:
– CPI = 2 (without memory stalls);
– 1.5 memory accesses per instruction.

Cache               cc              Hit time  Miss penalty  Miss rate
Direct mapped       1 ns            1 cycle   75 ns         1.4%
2-way associative   1.25 ns (why?)  1 cycle   75 ns         1.0%

Compare AMAT and CPU time for a direct-mapped cache and a 2-way set-associative cache:
– AMAT_d = hit time + miss rate × miss penalty = 1×1 + 0.014×75 = 2.05 ns
– AMAT_2 = 1×1.25 + 0.01×75 = 2 ns < 2.05 ns
– CPU time_d = (CPI×cc + memory stall time)×IC = (2×1 + 1.5×0.014×75)×IC = 3.575×IC
– CPU time_2 = (2×1.25 + 1.5×0.01×75)×IC = 3.625×IC > CPU time_d!
A change in cc affects all instructions, while a reduction in miss rate benefits only memory instructions.

Slide 30: Miss Penalty for an Out-of-Order (OOO) Execution Processor
In OOO processors, memory stall cycles are overlapped with the execution of other instructions, so the miss penalty should not include this overlapped part:
memory stall cycles per instruction = memory misses per instruction × (total miss penalty − overlapped miss penalty)
For the previous example, suppose 30% of the 75 ns miss penalty can be overlapped; what are the AMAT and CPU time? Assume a direct-mapped cache with cc = 1.25 ns to handle out-of-order execution.
AMAT_d = 1×1.25 + 0.014×(75×0.7) = 1.985 ns
With 1.5 memory accesses per instruction, CPU time = (2×1.25 + 1.5×0.014×(75×0.7))×IC = 3.6025×IC < CPU time_2.
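
The comparisons on slides 29 and 30 reduce to the same two formulas; this sketch evaluates all three configurations (numbers from the slides; the 30% overlap applies only to the OOO case):

```c
#include <stdio.h>

/* AMAT (ns) and CPU time per instruction (x IC, in ns) for one config. */
static void eval(const char *name, double cc, double miss_rate,
                 double penalty_ns, double mem_per_inst, double base_cpi) {
    double amat = 1.0 * cc + miss_rate * penalty_ns;   /* 1-cycle hit time */
    double cpu  = base_cpi * cc + mem_per_inst * miss_rate * penalty_ns;
    printf("%-22s AMAT %.3f ns, CPU time %.4f x IC\n", name, amat, cpu);
}

int main(void) {
    eval("direct mapped",      1.00, 0.014, 75.0,       1.5, 2.0); /* 2.05 / 3.575   */
    eval("2-way set assoc",    1.25, 0.010, 75.0,       1.5, 2.0); /* 2.00 / 3.625   */
    /* OOO: 30% of the 75 ns penalty overlaps => effective 52.5 ns. */
    eval("direct mapped, OOO", 1.25, 0.014, 75.0 * 0.7, 1.5, 2.0); /* 1.985 / 3.6025 */
    return 0;
}
```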

Slide 31: Lock-Up Free Cache Using MSHRs (Miss Status Holding Registers) (figure)

Slide 32: Avg. Memory Access Time vs. Miss Rate
Associativity reduces miss rate but increases hit time due to the increase in hardware complexity! Example: for an on-chip cache, assume clock cycle time (CCT) = 1.10× for 2-way, 1.12× for 4-way, and 1.14× for 8-way, relative to the direct-mapped CCT.

Cache size (KB)   1-way   2-way   4-way   8-way
1                 2.33    2.15    2.07    2.01
2                 1.98    1.86    1.76    1.68
4                 1.72    1.67    1.61    1.53
8                 1.46    1.48    1.47    1.43
16                1.29    1.32    1.32    1.32
32                1.20    1.24    1.25    1.27
64                1.14    1.20    1.21    1.23
128               1.10    1.17    1.18    1.20

(In the original slide, red entries marked cases where A.M.A.T. is not improved by more associativity — most configurations of 8 KB and larger.)

Slide 33: Unified vs. Split Caches
Example, unified vs. separate instruction & data (I&D) caches:
– 16KB I + 16KB D: instruction miss rate = 0.64%, data miss rate = 6.47%;
– 32KB unified: aggregate miss rate = 1.99%.
Which is better (ignoring the L2 cache)?
– Assume 33% of instructions are data ops, so 75% of accesses come from instruction fetches (1.0/1.33);
– hit time = 1, miss time = 50;
– note that a data hit incurs 1 extra stall cycle in the unified cache (it has only one port, busy with the instruction fetch).
AMAT_Harvard = 75%×(1 + 0.64%×50) + 25%×(1 + 6.47%×50) = 2.05
AMAT_Unified = 75%×(1 + 1.99%×50) + 25%×(1 + 1 + 1.99%×50) = 2.24
(Figure: a split L1 (I-cache + D-cache) over a unified L2, vs. a unified L1 over a unified L2.)
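
The same mechanical check works for this slide; note the extra cycle the unified cache charges a data access while its single port serves the instruction fetch (all numbers from the slide):

```c
#include <stdio.h>

int main(void) {
    double f_inst = 0.75, f_data = 0.25, penalty = 50.0;

    /* Split (Harvard): both caches give 1-cycle hits. */
    double split = f_inst * (1 + 0.0064 * penalty)
                 + f_data * (1 + 0.0647 * penalty);        /* ~2.05 */

    /* Unified: a data hit stalls 1 extra cycle behind the fetch. */
    double unified = f_inst * (1 + 0.0199 * penalty)
                   + f_data * (1 + 1 + 0.0199 * penalty);  /* ~2.24 */

    printf("AMAT split = %.2f, unified = %.2f cycles\n", split, unified);
    return 0;
}
```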

