Memory Hierarchy: Reducing Miss Penalty, Reducing Hit Time, Main Memory


Memory Hierarchy: Reducing Miss Penalty, Reducing Hit Time, Main Memory
Professor Alvin R. Lebeck
Computer Science 220 / ECE 252
Fall 2006

Admin
Work on projects
Read the NUCA paper

Review: Summary
Average Memory Access Time = Hit Time + (Miss Rate x Miss Penalty)
3 Cs: Compulsory, Capacity, Conflict misses
Program transformations can reduce cache misses
Three ways to improve:
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.
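As a quick sanity check on the formula, a minimal sketch in C; the 1-cycle hit time, 5% miss rate, and 40-cycle miss penalty are made-up numbers, not from the slides:

    /* AMAT (average memory access time), in cycles:
       AMAT = hit_time + miss_rate * miss_penalty */
    #include <stdio.h>

    static double amat(double hit_time, double miss_rate, double miss_penalty) {
        return hit_time + miss_rate * miss_penalty;
    }

    int main(void) {
        /* Hypothetical cache: 1-cycle hit, 5% miss rate, 40-cycle penalty */
        printf("AMAT = %.2f cycles\n", amat(1.0, 0.05, 40.0));  /* 3.00 */
        return 0;
    }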

1. Reducing Miss Penalty: Read Priority over Write on Miss
Write-back caches with write buffers risk RAW conflicts between main memory reads (on cache misses) and buffered writes
Simply waiting for the write buffer to empty might increase the read miss penalty by 50% (old MIPS 1000)
Instead, check the write buffer contents before the read; if there are no conflicts, let the memory access continue
Write back? On a read miss replacing a dirty block:
  Normal: write the dirty block to memory, then do the read
  Instead: copy the dirty block to a write buffer, do the read, then do the write
  The CPU stalls less, since it restarts as soon as the read completes
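A minimal sketch of the write-buffer conflict check described above; the structure, sizes, and names are invented for illustration:

    /* On a read miss, scan the write buffer for an address conflict.
       If no pending write matches, the read may bypass buffered writes. */
    #include <stdbool.h>
    #include <stdint.h>

    #define WB_ENTRIES 4

    typedef struct {
        bool     valid;
        uint32_t addr;   /* block-aligned address of the buffered write */
        uint32_t data;
    } WriteBufferEntry;

    /* Returns true if the read at `addr` conflicts with a buffered write. */
    bool read_conflicts_with_write_buffer(const WriteBufferEntry wb[WB_ENTRIES],
                                          uint32_t addr) {
        for (int i = 0; i < WB_ENTRIES; i++)
            if (wb[i].valid && wb[i].addr == addr)
                return true;   /* RAW hazard: drain or forward before reading */
        return false;          /* safe to send the read to memory first */
    }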

2. Subblock Placement to Reduce Miss Penalty
Don't have to load the full block on a miss
Keep a valid bit per subblock to indicate which subblocks are present
(Originally invented to reduce tag storage)
(figure: a cache block divided into subblocks, each with its own valid bit)

3. Early Restart and Critical Word First
Don't wait for the full block to be loaded before restarting the CPU
Early restart: as soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution
Critical word first: request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while filling the rest of the words in the block. Also called wrapped fetch and requested word first
Generally useful only with large blocks
Spatial locality is a problem: programs tend to want the next sequential word, so it is not clear how much early restart helps
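The wrap-around fill order behind critical word first is easy to see in a few lines; the block size and requested word below are arbitrary, not from the slides:

    /* Wrapped fetch: return the words of a block starting at the
       requested (critical) word, wrapping around to fill the rest. */
    #include <stdio.h>

    #define WORDS_PER_BLOCK 8

    int main(void) {
        int critical = 5;  /* hypothetical: CPU asked for word 5 of the block */
        printf("fill order:");
        for (int i = 0; i < WORDS_PER_BLOCK; i++)
            printf(" %d", (critical + i) % WORDS_PER_BLOCK);
        printf("\n");      /* fill order: 5 6 7 0 1 2 3 4 */
        return 0;
    }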

4. Non-blocking Caches to Reduce Stalls on Misses
A non-blocking (lockup-free) cache allows the data cache to continue to supply cache hits during a miss
"Hit under miss" reduces the effective miss penalty by doing useful work during a miss instead of ignoring the requests of the CPU
"Hit under multiple miss" or "miss under miss" may further lower the effective miss penalty by overlapping multiple misses
Significantly increases the complexity of the cache controller, since there can be multiple outstanding memory accesses
But important for large-window processors (WIB / CFP): memory-level parallelism
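Non-blocking caches are typically built around miss status holding registers (MSHRs), which the slides do not detail; a minimal sketch under that assumption, with invented names and sizes:

    /* Each MSHR tracks one outstanding miss so the cache can keep
       servicing hits (and possibly further misses) while it waits. */
    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_MSHRS 4   /* limits how many misses can be outstanding */

    typedef struct {
        bool     valid;        /* entry in use: a miss is in flight   */
        uint32_t block_addr;   /* which block is being fetched        */
        uint8_t  dest_reg;     /* where to deliver the data on return */
    } MSHR;

    /* On a miss: merge with an in-flight miss to the same block if one
       exists, else allocate a free MSHR, else the cache must stall. */
    int allocate_mshr(MSHR mshrs[NUM_MSHRS], uint32_t block_addr, uint8_t dest) {
        int free_slot = -1;
        for (int i = 0; i < NUM_MSHRS; i++) {
            if (mshrs[i].valid && mshrs[i].block_addr == block_addr)
                return i;                       /* merge with existing miss */
            if (!mshrs[i].valid && free_slot < 0)
                free_slot = i;
        }
        if (free_slot >= 0)
            mshrs[free_slot] = (MSHR){true, block_addr, dest};
        return free_slot;   /* -1: all MSHRs busy, cache blocks after all */
    }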

Value of Hit Under Miss for SPEC
(figure: AMAT of SPEC integer and floating-point programs as the number of outstanding misses allowed grows)
FP programs on average: AMAT = 0.68 -> 0.52 -> 0.34 -> 0.26
Integer programs on average: AMAT = 0.24 -> 0.20 -> 0.19 -> 0.19
8 KB data cache, direct mapped, 32B blocks, 16-cycle miss penalty

5. Miss Penalty Reduction: Second-Level Cache
L2 equations:
  AMAT = Hit Time_L1 + Miss Rate_L1 x Miss Penalty_L1
  Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2
  AMAT = Hit Time_L1 + Miss Rate_L1 x (Hit Time_L2 + Miss Rate_L2 x Miss Penalty_L2)
Definitions:
  Local miss rate: misses in this cache divided by the total number of memory accesses to this cache (Miss Rate_L2)
  Global miss rate: misses in this cache divided by the total number of memory accesses generated by the CPU (Miss Rate_L1 x Miss Rate_L2)
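Plugging hypothetical numbers through the two-level equations; all five parameters below are invented for illustration:

    #include <stdio.h>

    int main(void) {
        /* Hypothetical: 1-cycle L1 hit, 4% L1 miss rate, 10-cycle L2 hit,
           50% local L2 miss rate, 100-cycle L2 miss penalty */
        double hit_l1 = 1.0,  mr_l1 = 0.04;
        double hit_l2 = 10.0, mr_l2 = 0.50, mp_l2 = 100.0;

        double miss_penalty_l1 = hit_l2 + mr_l2 * mp_l2;   /* 60 cycles  */
        double amat = hit_l1 + mr_l1 * miss_penalty_l1;    /* 3.4 cycles */
        double global_mr_l2 = mr_l1 * mr_l2;               /* 0.02 (2%)  */

        printf("AMAT = %.2f cycles, global L2 miss rate = %.2f\n",
               amat, global_mr_l2);
        return 0;
    }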

Comparing Local and Global Miss Rates
(figure: local and global miss rates for a 32 KByte first-level cache as the second-level cache grows)
The global miss rate is close to the single-level-cache miss rate, provided L2 >> L1
Don't use the local miss rate to judge an L2
L2 is not tied to the CPU clock cycle; what matter are cost and AMAT
L2 generally wants fast hit times and fewer misses; since hits in L2 are few, target miss reduction

Reducing Misses: Which Techniques Apply to the L2 Cache?
Reducing miss rate:
1. Reduce misses via larger block size
2. Reduce conflict misses via higher associativity
3. Reduce conflict misses via a victim cache
4. Reduce conflict misses via pseudo-associativity
5. Reduce misses by HW prefetching of instructions and data
6. Reduce misses by SW prefetching of data
7. Reduce capacity/conflict misses by compiler optimizations

L2 Cache Block Size and AMAT
(figure: average memory access time vs. L2 block size, for a 32KB L1 and an 8-byte path to memory)

Summary: Reducing Miss Penalty
Five techniques:
1. Read priority over write on miss
2. Subblock placement
3. Early restart and critical word first on miss
4. Non-blocking caches (hit under miss)
5. Second-level cache
Can be applied recursively to multilevel caches
Danger: the time to DRAM will grow with multiple levels in between

Review: Improving Cache Performance
Average Memory Access Time = Hit Time + (Miss Rate x Miss Penalty)
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.

1. Fast Hit Times via Small and Simple Caches
Why the Alpha 21164 has 8KB instruction and 8KB data caches plus a 96KB second-level cache
  Direct mapped, on chip
Impact of dynamic scheduling?
  The Alpha 21264 has 64KB 2-way L1 data and instruction caches

2. Fast Hit Times via Pipelined Writes
Pipeline the tag check and the cache update as separate stages: the current write's tag check overlaps the previous write's cache update
Only writes occupy the pipeline; it is empty during a miss
(figure: the shaded element is the delayed write buffer, which must be checked on reads; a read either completes the pending write or reads from the buffer)
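A sketch of the delayed-write-buffer idea above, using a toy direct-mapped data array; all names and the one-entry buffer are invented for illustration:

    /* Two-stage write: check the tag now, retire the previous write now.
       The delayed write buffer holds one write between the stages and
       must be snooped by reads. */
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct { bool valid; uint32_t addr, data; } DelayedWrite;

    static DelayedWrite dwb;            /* the delayed write buffer     */
    static uint32_t cache_data[1024];   /* toy direct-mapped data array */

    void pipelined_write(uint32_t addr, uint32_t data, bool tag_hit) {
        /* Stage 2 (this cycle): retire the previous write into the array */
        if (dwb.valid)
            cache_data[dwb.addr % 1024] = dwb.data;
        /* Stage 1 (this cycle): tag check for the new write; on a hit,
           park it in the delayed write buffer until the next write */
        dwb.valid = tag_hit;
        if (tag_hit) { dwb.addr = addr; dwb.data = data; }
    }

    /* Reads must check the buffer first: a match returns the pending data */
    bool read_check_dwb(uint32_t addr, uint32_t *out) {
        if (dwb.valid && dwb.addr == addr) { *out = dwb.data; return true; }
        return false;
    }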

3. Fast Writes on Misses via Small Subblocks
If most writes are one word, the subblock size is one word, and the cache is write-through, then always write the subblock and the tag immediately:
  Tag match and valid bit already set: writing the block was proper, and nothing is lost by setting the valid bit again.
  Tag match and valid bit not set: the tag match means this is the proper block; writing the data into the subblock makes it appropriate to turn the valid bit on.
  Tag mismatch: this is a miss that will modify the data portion of the block. Since this is a write-through cache, however, no harm is done; memory still has an up-to-date copy of the old value. Only the tag of the write address and the valid bits of the other subblocks need to change, because the valid bit for this subblock is being set anyway.
Doesn't work with write-back caches, because of the last case (see the sketch after this list)
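The three cases as code; the 4-subblock line, field names, and simplifications are invented for illustration, not from the slides:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint32_t tag;
        bool     valid[4];   /* one valid bit per one-word subblock */
        uint32_t data[4];
    } CacheLine;

    /* Write-through, 1-word subblocks: always write data + tag immediately. */
    void write_word(CacheLine *line, uint32_t tag, int sub, uint32_t value) {
        if (line->tag != tag) {
            /* Tag mismatch: claim the line for the new tag. Memory already
               has the old data (write-through), so nothing is lost. */
            line->tag = tag;
            for (int i = 0; i < 4; i++)
                line->valid[i] = false;   /* other subblocks now invalid */
        }
        /* Tag now matches (the two match cases fall through to here):
           writing the subblock makes it valid regardless of its old state. */
        line->data[sub]  = value;
        line->valid[sub] = true;
        /* The write also goes to memory (write-through), not shown here. */
    }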

5 & 6: ??
Why are loads slower than registers?
How can we get some of the advantages of registers for caches?
How can we apply the basic ideas of branch prediction to loads?

5. Multiport Cache
Allow more than one memory access per cycle:
  Replicate the cache (2 read ports, 1 write port)
  Dual-port the cache (2 ports, each usable for either read or write)
  Multiple banks (different addresses go to different caches)
NUCA: many banks for the cache; placement should migrate blocks so that frequently touched blocks move closer to the processor core

6. Load Value Prediction
Predict the result of a load
Use the PC to index a value prediction table
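A minimal sketch of a PC-indexed value prediction table; the table size, indexing, and confidence thresholds are all invented:

    #include <stdint.h>

    #define VPT_ENTRIES 1024   /* power of two, so indexing is a mask */

    typedef struct {
        uint64_t last_value;   /* value this load returned last time        */
        uint8_t  confidence;   /* saturating counter: predict only if high  */
    } VPTEntry;

    static VPTEntry vpt[VPT_ENTRIES];

    /* Index by the load's PC; predict its last value if confidence is high.
       Returns 1 if a prediction was made. */
    int predict_load(uint64_t pc, uint64_t *prediction) {
        VPTEntry *e = &vpt[(pc >> 2) & (VPT_ENTRIES - 1)];
        if (e->confidence >= 2) { *prediction = e->last_value; return 1; }
        return 0;  /* no prediction: wait for the cache */
    }

    /* When the load completes, train the table with the actual value. */
    void train_load(uint64_t pc, uint64_t actual) {
        VPTEntry *e = &vpt[(pc >> 2) & (VPT_ENTRIES - 1)];
        if (e->last_value == actual) {
            if (e->confidence < 3) e->confidence++;
        } else {
            e->confidence = 0;
            e->last_value = actual;
        }
    }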

Cache Optimization Summary
(MR = miss rate, MP = miss penalty, HT = hit time; complexity 0 = cheapest)

Technique                          MR  MP  HT  Complexity
Larger Block Size                  +   -       0
Higher Associativity               +       -   1
Victim Caches                      +           2
Pseudo-Associative Caches          +           2
HW Prefetching of Instr/Data       +           2
Compiler-Controlled Prefetching    +           3
Compiler Reduce Misses             +           0
Priority to Read Misses                +       1
Subblock Placement                     +   +   1
Early Restart & Critical Word 1st      +       2
Non-Blocking Caches                    +       3
Second-Level Caches                    +       2
Small & Simple Caches              -       +   0
Avoiding Address Translation               +   2
Pipelining Writes                          +   1
Multiple Ports                             +   2
Load Value Prediction                      +   2

What is the Impact of What You've Learned About Caches?
1960-1985: Speed = f(number of operations)
1997: pipelined execution and fast clock rates, out-of-order completion, superscalar instruction issue
1998: Speed = f(non-cached memory accesses)
What does this mean for compilers? Operating systems? Algorithms? Data structures?

Memory Hierarchy: Main Memory and Enhancing Its Performance

Main Memory Background
Performance of main memory:
  Latency: determines the cache miss penalty
    Access time: time between a request and the word arriving
    Cycle time: minimum time between successive requests
  Bandwidth: matters for I/O and for large-block (L2) miss penalties
Main memory is DRAM: Dynamic Random Access Memory
  Dynamic: must be refreshed periodically (every 8 ms)
  Addresses divided into 2 halves (memory as a 2D matrix):
    RAS, or Row Access Strobe
    CAS, or Column Access Strobe
Cache uses SRAM: Static Random Access Memory
  Static: no refresh (6 transistors/bit vs. 1 transistor/bit)
  Address not divided
Size: DRAM/SRAM roughly 4-8x; cost and cycle time: SRAM/DRAM roughly 8-16x
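For example, a 24-bit address might be sent to the DRAM as two 12-bit halves over the same pins; the split point here is invented for illustration:

    /* Sketch: splitting a DRAM address into row (RAS) and column (CAS)
       halves that are sent sequentially over shared address pins. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t addr = 0x00ABC123;
        uint32_t row = (addr >> 12) & 0xFFF;  /* upper half, sent with RAS */
        uint32_t col = addr & 0xFFF;          /* lower half, sent with CAS */
        printf("row=0x%03X col=0x%03X\n", row, col);
        return 0;
    }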

Main Memory Performance
(figure: three organizations of CPU, cache, bus, and memory; the wide design adds a mux between CPU and cache)
Simple: CPU, cache, bus, and memory are all one word wide
Wide: CPU/mux one word; mux/cache, bus, and memory N words (Alpha: 64 bits and 256 bits)
Interleaved: CPU, cache, and bus one word; memory is N modules (4 modules here); the example is word-interleaved

Main Memory Performance
Timing model: 1 cycle to send the address, 6 for the access, 1 to send each word of data
Cache block is 4 words
  Simple miss penalty = 4 x (1 + 6 + 1) = 32
  Wide miss penalty = 1 + 6 + 1 = 8
  Interleaved miss penalty = 1 + 6 + 4 x 1 = 11
Word-interleaved addresses:
  Bank 0: 0, 4, 8, 12
  Bank 1: 1, 5, 9, 13
  Bank 2: 2, 6, 10, 14
  Bank 3: 3, 7, 11, 15
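The same arithmetic as code, using the slide's parameters:

    #include <stdio.h>

    int main(void) {
        int addr = 1, access = 6, xfer = 1, words = 4;

        int simple      = words * (addr + access + xfer);   /* 32 cycles */
        int wide        = addr + access + xfer;             /*  8 cycles */
        int interleaved = addr + access + words * xfer;     /* 11 cycles */

        printf("simple=%d wide=%d interleaved=%d\n",
               simple, wide, interleaved);
        return 0;
    }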

Independent Memory Banks
Memory banks for independent accesses vs. faster sequential accesses:
  Multiprocessors
  I/O
  Miss under miss, non-blocking caches
Superbank: all memory active on one block transfer
Bank: portion within a superbank that is word-interleaved
Address fields: [superbank number | bank number | bank offset]

Independent Memory Banks
How many banks? Number of banks >= number of clocks it takes to access a word in a bank
  Otherwise, sequential accesses return to the original bank before it has the next word ready
DRAM trend is toward deep, narrow chips => fewer chips => harder to build banks out of whole chips => put the banks inside the chips (internal banks)

Avoiding Bank Conflicts
Lots of banks. Consider:

    int x[256][512];
    for (j = 0; j < 512; j = j + 1)
        for (i = 0; i < 256; i = i + 1)
            x[i][j] = 2 * x[i][j];

Even with 128 banks, since 512 is a multiple of 128, every access of the inner loop falls in the same bank: a conflict (structural hazard)
SW fixes: loop interchange (see the sketch below), or declaring the array so its dimension is not a power of 2
HW fix: a prime number of banks
  bank number = address mod number of banks
  address within bank = address / number of banks
  but that is a modulo and a divide per memory access!
  instead: address within bank = address mod number of words in a bank (with, e.g., 3, 7, or 31 banks)
  bank number? easy if there are 2^N words per bank (see the Chinese Remainder Theorem, next slide)
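The loop-interchange fix mentioned above: with the loops swapped, the inner loop walks consecutive words, so successive references fall in successive banks instead of repeatedly hitting one bank at stride 512:

    int x[256][512];

    /* Interchanged loops: the inner loop now varies the column index,
       so consecutive accesses touch consecutive words (and banks). */
    void scale(void) {
        for (int i = 0; i < 256; i = i + 1)
            for (int j = 0; j < 512; j = j + 1)
                x[i][j] = 2 * x[i][j];
    }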

Fast Bank Number: the Chinese Remainder Theorem
As long as two sets of integers ai and bi follow these rules:
  bi = x mod ai, 0 <= bi < ai, 0 <= x < a0 x a1 x ... x an-1
and ai and aj are co-prime whenever i != j, then the integer x has only one solution (an unambiguous mapping):
  bank number = b0, number of banks = a0 (= 3 in the example)
  address within bank = b1, number of words in a bank = a1 (= 8 in the example)
For an N-word address space (addresses 0 to N-1): a prime number of banks, and a power-of-2 number of words per bank
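A sketch of the modulo-interleaved mapping for the 3-bank, 8-words-per-bank case; it reproduces the modulo-interleaved column of the table on the next slide:

    #include <stdio.h>

    #define BANKS          3   /* prime number of banks       */
    #define WORDS_PER_BANK 8   /* power-of-2 words per bank   */

    int main(void) {
        for (int addr = 0; addr < BANKS * WORDS_PER_BANK; addr++) {
            int bank   = addr % BANKS;            /* which bank            */
            int offset = addr % WORDS_PER_BANK;   /* cheap: just addr & 7, */
                                                  /* no divide needed      */
            printf("addr %2d -> bank %d, offset %d\n", addr, bank, offset);
        }
        return 0;   /* CRT guarantees each (bank, offset) pair is unique */
    }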

Prime Mapping Example

                      Sequentially Interleaved    Modulo Interleaved
Bank number:             0     1     2               0     1     2
Address within bank:
   0                     0     1     2               0    16     8
   1                     3     4     5               9     1    17
   2                     6     7     8              18    10     2
   3                     9    10    11               3    19    11
   4                    12    13    14              12     4    20
   5                    15    16    17              21    13     5
   6                    18    19    20               6    22    14
   7                    21    22    23              15     7    23

Fast Memory Systems: DRAM-Specific Optimizations
Multiple column (CAS) accesses per row access: goes by several names, most commonly page mode
  64 Mbit DRAM: cycle time = 100 ns, page mode = 20 ns
New DRAMs aim at the processor-memory gap; what will they cost, and will they survive?
  Synchronous DRAM: provides a clock signal to the DRAM, so transfers are synchronous with the system clock
  RAMBUS: reinvents the DRAM interface (Intel will use it)
    Each chip is a module, rather than a slice of memory
    Short bus between CPU and chips
    Does its own refresh
    Variable amount of data returned
    1 byte / 2 ns (500 MB/s per chip)
  Cached DRAM (CDRAM): keeps an entire row in SRAM

Main Memory Summary
Big DRAM + small SRAM = cost-effective
  Cray C-90 used all SRAM (how many were sold?)
Wider memory
Interleaved memory: for sequential or independent accesses
Avoiding bank conflicts: SW and HW techniques
DRAM-specific optimizations: page mode and specialty DRAMs, e.g. CDRAM
  Niche memory or main memory? e.g., video RAM for frame buffers: DRAM + a fast serial output
IRAM: do you know what it is?

Review: Reducing Miss Penalty Summary
Five techniques:
1. Read priority over write on miss
2. Subblock placement
3. Early restart and critical word first on miss
4. Non-blocking caches (hit under miss)
5. Second-level cache
Can be applied recursively to multilevel caches
Danger: the time to DRAM will grow with multiple levels in between

Review: Improving Cache Performance
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache:
  Fast hit times via small and simple caches
  Fast hits via avoiding virtual address translation
  Fast hits via pipelined writes
  Fast writes on misses via small subblocks
  Fast hits by using multiple ports
  Fast hits by using value prediction

Review: Cache Optimization Summary
(MR = miss rate, MP = miss penalty, HT = hit time; complexity 0 = cheapest)

Technique                          MR  MP  HT  Complexity
Larger Block Size                  +   -       0
Higher Associativity               +       -   1
Victim Caches                      +           2
Pseudo-Associative Caches          +           2
HW Prefetching of Instr/Data       +           2
Compiler-Controlled Prefetching    +           3
Compiler Reduce Misses             +           0
Priority to Read Misses                +       1
Subblock Placement                     +   +   1
Early Restart & Critical Word 1st      +       2
Non-Blocking Caches                    +       3
Second-Level Caches                    +       2
Small & Simple Caches              -       +   0
Avoiding Address Translation               +   2
Pipelining Writes                          +   1

Next Time
Memory hierarchy and virtual memory