CS 152 Computer Architecture and Engineering
Lecture 7 - Memory Hierarchy-II
Krste Asanovic
Electrical Engineering and Computer Sciences
University of California at Berkeley
February 9, 2011

Last time in Lecture 6

Dynamic RAM (DRAM) is the main form of main-memory storage in use today
– Holds values on small capacitors, which need refreshing (hence "dynamic")
– Slow multi-step access: precharge, read row, read column
Static RAM (SRAM) is faster but more expensive
– Used to build on-chip memory for caches
A cache holds a small set of values in fast memory (SRAM) close to the processor
– Needs a search scheme to find values in the cache, and a replacement policy to make room for newly accessed locations
Caches exploit two forms of predictability in memory reference streams
– Temporal locality: the same location is likely to be accessed again soon
– Spatial locality: neighboring locations are likely to be accessed soon

CPU-Cache Interaction (5-stage pipeline)

[Datapath diagram: a 5-stage pipeline with a primary instruction cache feeding fetch (PC, adder, IR) and a primary data cache in the memory stage; each cache produces a hit? signal, and cache refill data arrives from lower levels of the memory hierarchy. The entire CPU stalls on a data cache miss.]

Improving Cache Performance

Average memory access time (AMAT) = Hit time + Miss rate × Miss penalty

To improve performance:
– reduce the hit time
– reduce the miss rate
– reduce the miss penalty

What is the simplest design strategy? Build the biggest cache that doesn't push the hit time past 1-2 cycles (approximately 8-32KB in modern technology).
[Design issues are more complex with out-of-order superscalar processors.]
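
A quick worked example (numbers assumed for illustration, not from the lecture): with a 1-cycle hit time, a 5% miss rate, and a 20-cycle miss penalty,

\[ \text{AMAT} = 1 + 0.05 \times 20 = 2 \text{ cycles} \]

so even a small miss rate doubles the effective access time.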

Serial-versus-Parallel Cache and Memory Access

Let α be the HIT RATIO, the fraction of references satisfied by the cache, so 1 - α is the MISS RATIO.

Serial search (access the cache first, go to main memory only on a miss):
  Average access time = t_cache + (1 - α) × t_mem

Parallel search (access cache and main memory simultaneously):
  Average access time = α × t_cache + (1 - α) × t_mem

The savings from the parallel organization are usually small, since t_mem >> t_cache and the hit ratio α is high. It also requires high bandwidth on the memory path, and the complexity of handling parallel paths can slow t_cache.
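
Plugging assumed numbers into the two formulas (α = 0.95, t_cache = 1 cycle, t_mem = 100 cycles) makes the comparison concrete:

\[ t_{\text{serial}} = 1 + 0.05 \times 100 = 6 \text{ cycles}, \qquad t_{\text{parallel}} = 0.95 \times 1 + 0.05 \times 100 = 5.95 \text{ cycles} \]

The parallel organization saves under 1% here, which is why the serial organization is usually preferred.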

Causes for Cache Misses

Compulsory: first reference to a block (a.k.a. cold-start misses)
– Misses that would occur even with an infinite cache
Capacity: the cache is too small to hold all the data needed by the program
– Misses that would occur even under a perfect replacement policy
Conflict: collisions caused by the block-placement strategy
– Misses that would not occur with full associativity
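
A minimal sketch of how a conflict miss arises (the cache geometry and addresses are assumptions for illustration): in a direct-mapped cache, two addresses exactly one cache-size apart map to the same set, so alternating accesses evict each other even though most of the cache is unused.

#include <stdio.h>
#include <stdint.h>

#define BLOCK_BITS 6   /* 64-byte blocks (assumed) */
#define SET_BITS   8   /* 256 sets, direct-mapped (assumed) */

/* Set index for a direct-mapped cache: the middle bits of the address. */
static unsigned set_index(uintptr_t addr) {
    return (addr >> BLOCK_BITS) & ((1u << SET_BITS) - 1);
}

int main(void) {
    uintptr_t a = 0x10000;
    uintptr_t b = a + (1u << (BLOCK_BITS + SET_BITS)); /* one cache-size apart */
    /* Same index, different tags: alternating accesses to a and b keep
       evicting each other -- misses a fully associative cache of the
       same capacity would not suffer. */
    printf("index(a)=%u index(b)=%u\n", set_index(a), set_index(b));
    return 0;
}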

Effect of Cache Parameters on Performance

Larger cache size
+ reduces capacity and conflict misses
- hit time will increase
Higher associativity
+ reduces conflict misses
- may increase hit time
Larger block size
+ reduces compulsory and capacity (reload) misses
- increases conflict misses and miss penalty

Write Policy Choices

Cache hit:
– Write-through: write both cache and memory
  » Generally higher traffic, but a simpler pipeline and cache design
– Write-back: write the cache only; memory is written only when the entry is evicted
  » A dirty bit per block further reduces write-back traffic
  » Must handle 0, 1, or 2 accesses to memory for each load/store
Cache miss:
– No-write-allocate: write only to main memory
– Write-allocate (a.k.a. fetch-on-write): fetch the block into the cache
Common combinations:
– Write-through with no-write-allocate
– Write-back with write-allocate
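
A minimal simulator-style sketch of the two common combinations (the types and helper functions are illustrative assumptions, not from the lecture):

/* Sketch: one cache line and two store policies; 64-byte blocks assumed. */
typedef struct { unsigned tag; int valid, dirty; } Line;

void memory_write_block(unsigned block_addr);  /* assumed stubs */
void memory_fetch_block(unsigned block_addr);
void memory_write_word(unsigned addr, unsigned data);

/* Write-back + write-allocate */
void store_wb(Line *line, unsigned addr, unsigned data) {
    unsigned tag = addr >> 6;
    if (!line->valid || line->tag != tag) {      /* write miss */
        if (line->valid && line->dirty)
            memory_write_block(line->tag << 6);  /* write back dirty victim */
        memory_fetch_block(addr & ~63u);         /* allocate: fetch the block */
        line->tag = tag;
        line->valid = 1;
    }
    /* ... update `data` in the cached block here ... */
    line->dirty = 1;  /* memory is written only when this line is evicted */
}

/* Write-through + no-write-allocate */
void store_wt(Line *line, unsigned addr, unsigned data) {
    if (line->valid && line->tag == (addr >> 6)) {
        /* hit: also update the cached copy here */
    }
    memory_write_word(addr, data);  /* always write memory; never allocate */
}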

Write Performance

[Cache-organization diagram: the address is split into tag (t bits), index (k bits), and block offset (b bits); the index selects one of 2^k lines, the stored tag is compared with the address tag to produce HIT, and the data RAM's write enable (WE) depends on the result of that tag check.]

Reducing Write Hit Time

Problem: writes take two cycles in the memory stage: one cycle for the tag check plus one cycle for the data write, if a hit.

Solutions:
– Design a data RAM that can perform the read and write in one cycle, restoring the old value after a tag miss
– Fully associative (CAM-tag) caches: the word line is only enabled on a hit, so the write proceeds in parallel with the tag check
– Pipelined writes: hold the write data for a store in a single buffer ahead of the cache, and write the cache data during the next store's tag check

Pipelining Cache Writes

[Diagram: tag and data arrays with delayed-write address and data registers between the CPU and the data array; the tag comparison (=?) produces the Hit? signal that steers loads and stores.]

Key point: data from a store hit is written into the data portion of the cache during the tag access of the subsequent store.

Write Buffer to Reduce Read Miss Penalty

With a write buffer, the processor is not stalled on writes, and read misses can go ahead of writes to main memory.

Problem: the write buffer may hold the updated value of a location needed by a read miss.
– Simple scheme: on a read miss, wait for the write buffer to drain
– Faster scheme: check the write buffer addresses against the read miss address; if there is no match, allow the read miss to go ahead of the writes, otherwise return the value from the write buffer

[Diagram: CPU and register file with an L1 data cache and a write buffer in front of a unified L2 cache; the buffer holds evicted dirty lines for a write-back cache, or all writes for a write-through cache.]
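
A minimal sketch of the faster scheme (buffer size and helper names are illustrative assumptions):

#include <stdbool.h>

#define WB_ENTRIES 8

typedef struct { unsigned addr; unsigned data; bool valid; } WBEntry;
static WBEntry write_buffer[WB_ENTRIES];

unsigned memory_read(unsigned addr);  /* assumed stub */

/* On a read miss, scan the write buffer before going to memory. */
unsigned read_miss(unsigned addr) {
    for (int i = WB_ENTRIES - 1; i >= 0; i--)  /* newest entry wins */
        if (write_buffer[i].valid && write_buffer[i].addr == addr)
            return write_buffer[i].data;       /* forward from the buffer */
    return memory_read(addr);  /* no match: the read may bypass the writes */
}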

Block-level Optimizations

Tags can be too large, i.e., too much overhead
– Simple solution: larger blocks, but the miss penalty could be large
Sub-block placement (a.k.a. sector cache)
– A valid bit is added to units smaller than the full block, called sub-blocks
– Only read a sub-block on a miss
– If a tag matches, is the word in the cache?

CS152 Administrivia

Multilevel Caches

Problem: a memory cannot be both large and fast.
Solution: increase the size of the cache at each level.

CPU <-> L1$ <-> L2$ <-> DRAM

Local miss rate = misses in this cache / accesses to this cache
Global miss rate = misses in this cache / CPU memory accesses
Misses per instruction = misses in this cache / number of instructions
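
A quick worked example (rates assumed for illustration): if the L1 misses on 5% of CPU accesses and the L2 misses on 20% of the accesses that reach it,

\[ \text{global L2 miss rate} = 0.05 \times 0.20 = 0.01 \]

so the L2's local miss rate is 20%, but its global miss rate is only 1%.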

Presence of L2 Influences L1 Design

Use a smaller L1 if there is also an L2
– Trade an increased L1 miss rate for reduced L1 hit time and reduced L1 miss penalty
– Reduces average access energy
Use a simpler write-through L1 with an on-chip L2
– The write-back L2 cache absorbs write traffic, so it doesn't go off-chip
– At most one L1 miss request per L1 access (no dirty victim write-back), which simplifies pipeline control
– Simplifies coherence issues
– Simplifies error recovery in L1 (can use just parity bits in L1 and reload from L2 when a parity error is detected on an L1 read)

Inclusion Policy

Inclusive multilevel cache:
– The inner cache holds copies of data that is also in the outer cache
– External coherence snoops need only check the outer cache
Exclusive multilevel caches:
– The inner cache may hold data not present in the outer cache
– Lines are swapped between the inner and outer caches on a miss
– Used in the AMD Athlon, with a 64KB primary and a 256KB secondary cache
Why choose one type or the other?

Itanium-2 On-Chip Caches (Intel/HP, 2002)

Level 1: 16KB, 4-way set-associative, 64B lines, quad-port (2 loads + 2 stores), single-cycle latency
Level 2: 256KB, 4-way set-associative, 128B lines, quad-port (4 loads or 4 stores), five-cycle latency
Level 3: 3MB, 12-way set-associative, 128B lines, single 32B port, twelve-cycle latency

Power 7 On-Chip Caches [IBM 2009]

32KB L1 I$/core and 32KB L1 D$/core, 3-cycle latency
256KB unified L2$/core, 8-cycle latency
32MB unified shared L3$ in embedded DRAM, 25-cycle latency to the local slice

Prefetching

Speculate on future instruction and data accesses and fetch them into the cache(s)
– Instruction accesses are easier to predict than data accesses
Varieties of prefetching
– Hardware prefetching
– Software prefetching
– Mixed schemes
What types of misses does prefetching affect?

Issues in Prefetching

Usefulness: prefetches should produce hits
Timeliness: not too late and not too early
Cache and bandwidth pollution

[Diagram: CPU and register file with split L1 instruction and data caches; prefetched data is placed in the unified L2 cache.]

Hardware Instruction Prefetching

Instruction prefetch in the Alpha AXP
– Fetch two blocks on a miss: the requested block (i) and the next consecutive block (i+1)
– The requested block is placed in the cache, and the next block in an instruction stream buffer
– On a cache miss that hits in the stream buffer, move the stream buffer's block into the cache and prefetch the next block (i+2)

[Diagram: on an L1 instruction cache miss, the requested block is returned from the unified L2 cache while the following block is prefetched into the stream buffer.]
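
A minimal sketch of the stream-buffer policy described above, using a single-entry buffer (all names and stubs are illustrative assumptions):

#include <stdbool.h>

typedef struct { unsigned block; bool valid; } StreamBuffer;
static StreamBuffer sb;

bool icache_lookup(unsigned block);  /* assumed stubs */
void icache_fill(unsigned block);
void l2_fetch(unsigned block);

void fetch_block(unsigned i) {
    if (icache_lookup(i)) return;  /* cache hit: nothing to do */
    if (sb.valid && sb.block == i) {
        icache_fill(i);            /* stream-buffer hit: promote the block */
    } else {
        l2_fetch(i);               /* demand miss: fetch block i */
        icache_fill(i);
    }
    l2_fetch(i + 1);               /* prefetch the next consecutive block */
    sb.block = i + 1;
    sb.valid = true;
}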

Hardware Data Prefetching

Prefetch-on-miss
– Prefetch block b+1 upon a miss on block b
One-Block Lookahead (OBL) scheme
– Initiate a prefetch for block b+1 when block b is accessed
– Why is this different from doubling the block size?
– Can be extended to N-block lookahead
Strided prefetch
– If a sequence of accesses to blocks b, b+N, b+2N is observed, then prefetch b+3N, and so on

Example: the IBM Power 5 [2003] supports eight independent streams of strided prefetch per processor, prefetching 12 lines ahead of the current access.
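
A minimal sketch of a single-stream stride detector (real prefetchers such as the Power 5's track many streams in a table; every name here is an illustrative assumption):

typedef struct { unsigned last; int stride; int confirmed; } StridePredictor;

void prefetch_block(unsigned block);  /* assumed stub */

void observe_access(StridePredictor *p, unsigned block) {
    int stride = (int)(block - p->last);
    if (stride != 0 && stride == p->stride)
        p->confirmed++;    /* same stride as the previous access */
    else
        p->confirmed = 0;  /* new or changed stride: start over */
    p->stride = stride;
    p->last = block;
    if (p->confirmed >= 1) /* b, b+N, b+2N observed: prefetch b+3N */
        prefetch_block(block + (unsigned)stride);
}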

Software Prefetching

for (i = 0; i < N; i++) {
    prefetch(&a[i + 1]);
    prefetch(&b[i + 1]);
    SUM = SUM + a[i] * b[i];
}
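
GCC and Clang expose this through the __builtin_prefetch intrinsic; a self-contained sketch of the slide's loop (the function name and types are assumptions):

/* Compilable version of the slide's loop. Prefetch hints never fault,
   so touching &a[N] on the last iteration is harmless. */
double dot_with_prefetch(const double *a, const double *b, int N) {
    double sum = 0.0;
    for (int i = 0; i < N; i++) {
        __builtin_prefetch(&a[i + 1]);  /* hint: bring the next elements in */
        __builtin_prefetch(&b[i + 1]);
        sum += a[i] * b[i];
    }
    return sum;
}

Note that a distance of one element is usually too short to hide a miss; that is exactly the problem the next slide's distance P addresses.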

Software Prefetching Issues

Timing is the biggest issue, not predictability
– If you prefetch very close to when the data is required, you might be too late
– Prefetch too early, and you cause pollution
– Estimate how long it will take for the data to come into L1, so that the prefetch distance P can be set appropriately. Why is this hard to do?

for (i = 0; i < N; i++) {
    prefetch(&a[i + P]);
    prefetch(&b[i + P]);
    SUM = SUM + a[i] * b[i];
}

Must also consider the cost of the prefetch instructions themselves.
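
As a rough rule of thumb (an assumption for illustration, not from the lecture), the prefetch distance should cover the miss latency:

\[ P \approx \left\lceil \frac{\text{miss latency in cycles}}{\text{cycles per loop iteration}} \right\rceil \]

e.g., a 200-cycle miss latency over a 10-cycle loop body gives P ≈ 20. Both quantities vary with the machine, memory contention, and compiler scheduling, which is part of why setting P well is hard.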

Compiler Optimizations

Restructuring code affects the data block access sequence
– Group data accesses together to improve spatial locality
– Reorder data accesses to improve temporal locality
Prevent data from entering the cache
– Useful for variables that will only be accessed once before being replaced
– Needs a mechanism for software to tell the hardware not to cache data ("no-allocate" instruction hints or page-table bits)
Kill data that will never be used again
– Streaming data exploits spatial locality but not temporal locality
– Replace into dead cache locations

Loop Interchange

Before:
for (j = 0; j < N; j++) {
    for (i = 0; i < M; i++) {
        x[i][j] = 2 * x[i][j];
    }
}

After:
for (i = 0; i < M; i++) {
    for (j = 0; j < N; j++) {
        x[i][j] = 2 * x[i][j];
    }
}

What type of locality does this improve?

Loop Fusion

Before:
for (i = 0; i < N; i++)
    a[i] = b[i] * c[i];
for (i = 0; i < N; i++)
    d[i] = a[i] * c[i];

After:
for (i = 0; i < N; i++) {
    a[i] = b[i] * c[i];
    d[i] = a[i] * c[i];
}

What type of locality does this improve?

Matrix Multiply, Naïve Code

for (i = 0; i < N; i++)
    for (j = 0; j < N; j++) {
        r = 0;
        for (k = 0; k < N; k++)
            r = r + y[i][k] * z[k][j];
        x[i][j] = r;
    }

[Access-pattern figure: for each element of x, a full row of y and a full column of z are traversed; the legend distinguishes not-touched, old, and new accesses.]

Matrix Multiply with Cache Tiling

for (jj = 0; jj < N; jj = jj + B)
    for (kk = 0; kk < N; kk = kk + B)
        for (i = 0; i < N; i++)
            for (j = jj; j < min(jj + B, N); j++) {
                r = 0;
                for (k = kk; k < min(kk + B, N); k++)
                    r = r + y[i][k] * z[k][j];
                x[i][j] = x[i][j] + r;
            }

What type of locality does this improve?

[Access-pattern figure: y, z, and x are each touched in B×B tiles rather than in full rows or columns.]

Acknowledgements

These slides contain material developed and copyrighted by:
– Arvind (MIT)
– Krste Asanovic (MIT/UCB)
– Joel Emer (Intel/MIT)
– James Hoe (CMU)
– John Kubiatowicz (UCB)
– David Patterson (UCB)

MIT material derived from an MIT course; UCB material derived from course CS252.