Crash Course on Computer Architecture — Memory Hierarchy and Cache Performance
Areg Melik-Adamyan, PhD

References
CAQA 6th ed. (Hennessy and Patterson, Computer Architecture: A Quantitative Approach): Chapter 2, Appendix B, Appendix D, Appendix L.
Other: Patt, "Requirements, Bottlenecks, and Good Fortune: Agents for Microprocessor Evolution," Proceedings of the IEEE, 2001.

Abstraction: Virtual vs. Physical Memory
The programmer sees virtual memory and can assume memory is "infinite." In reality, physical memory is much smaller than what the programmer assumes.
The system (system software + hardware, cooperatively) maps virtual memory addresses to physical memory and manages the physical memory space automatically, transparently to the programmer.
+ The programmer does not need to know the physical size of memory, nor manage it; a small physical memory can appear huge to the programmer, so life is easier for the programmer.
– The cost is more complex system software and architecture.
A classic example of the programmer/(micro)architect tradeoff.

(Physical) Memory System
You need a larger level of storage to manage a small amount of physical memory automatically: physical memory has a backing store, the disk.
We will first start with the physical memory system. For now, ignore the virtual-to-physical indirection; we will get back to it when the needs of virtual memory start complicating the design of physical memory.

The Ideal World
Instruction supply: zero-latency access, infinite capacity, zero cost, perfect control flow.
Data supply: zero-latency access, infinite capacity, infinite bandwidth, zero cost.
Pipeline (instruction execution): no pipeline stalls, perfect data flow (register/memory dependencies), zero-cycle interconnect (operand communication), enough functional units, zero-latency compute.

Memory Technology
Amdahl: memory capacity should grow linearly with processor speed. Unfortunately, memory capacity and speed have not kept pace with processors.
Some optimizations: multiple accesses to the same row; synchronous DRAM (a clock added to the DRAM interface, burst mode with critical word first); wider interfaces; double data rate (DDR); multiple banks on each DRAM device.

The Problem
Ideal memory's requirements oppose each other.
Bigger is slower: bigger means it takes longer to determine the location.
Faster is more expensive: memory technology — SRAM vs. DRAM vs. disk vs. tape.
Higher bandwidth is more expensive: it needs more banks, more ports, higher frequency, or faster technology.

The Problem
Bigger is slower: SRAM, 512 bytes, sub-nanosecond; SRAM, KByte–MByte, ~nanosecond; DRAM, gigabytes, ~50 nanoseconds; hard disk, terabytes, ~10 milliseconds.
Faster is more expensive (dollars and chip area): SRAM, < $10 per megabyte; DRAM, < $1 per megabyte; hard disk, < $1 per gigabyte. These sample values (circa 2011) scale with time.
Other technologies have their place as well: Flash memory, PC-RAM, MRAM, RRAM (not yet mature).

Motivation for Memory Hierarchy

Memory Tradeoffs

Memory Hierarchy Design
Memory hierarchy design becomes more crucial with recent multi-core processors, because aggregate peak bandwidth grows with the number of cores. An Intel Core i7 can generate two references per core per clock; with four cores and a 3.2 GHz clock, that is 25.6 billion 64-bit data references/second plus 12.8 billion 128-bit instruction references/second, or 409.6 GB/s. DRAM bandwidth is only 8% of this (34.1 GB/s). This requires multi-port, pipelined caches, two levels of cache per core, and a shared third-level cache on chip.
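A quick back-of-the-envelope check of these figures (my own arithmetic, assuming each 64-bit data reference moves 8 bytes, each 128-bit instruction reference moves 16 bytes, and one instruction fetch per core per clock):
    4 cores × 3.2 GHz × 2 data refs/clock  = 25.6 × 10^9 data refs/s
    4 cores × 3.2 GHz × 1 instr ref/clock  = 12.8 × 10^9 instruction refs/s
    25.6 × 10^9 × 8 B + 12.8 × 10^9 × 16 B = 204.8 GB/s + 204.8 GB/s = 409.6 GB/s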

Locality
One's recent past is a very good predictor of one's near future.
Temporal locality: if you just did something, it is very likely that you will do the same thing again soon. Since you are here today, there is a good chance you will be here again and again, regularly.
Spatial locality: if you did something, it is very likely you will do something similar or related (in space). Every time I find you in this room, you are probably sitting close to the same people.

Memory Locality
A "typical" program has a lot of locality in memory references; typical programs are composed of loops.
Temporal: a program tends to reference the same memory location many times, all within a small window of time.
Spatial: a program tends to reference a cluster of memory locations at a time. The most notable examples are (1) instruction memory references and (2) array/data-structure references. A short example of both kinds is sketched below.
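To make this concrete, here is a minimal C sketch (illustrative only, not from the original slides): the accumulator sum is reused every iteration (temporal locality), while the sequential walk over a[] touches neighboring words that share cache blocks (spatial locality).

    #include <stdio.h>

    #define N 1024

    int main(void) {
        int a[N];
        for (int i = 0; i < N; i++)   /* sequential writes: spatial locality */
            a[i] = i;

        long sum = 0;                 /* sum is reused every iteration: temporal locality */
        for (int i = 0; i < N; i++)
            sum += a[i];              /* a[i], a[i+1], ... share cache blocks: spatial locality */

        printf("%ld\n", sum);
        return 0;
    }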

Caching Basics: Exploit Temporal Locality
Idea: store recently accessed data in automatically managed fast memory (called a cache).
Anticipation: the data will be accessed again soon.
Temporal locality principle: recently accessed data will be accessed again in the near future.
This is what Maurice Wilkes had in mind. Wilkes, "Slave Memories and Dynamic Storage Allocation," IEEE Trans. on Electronic Computers, 1965: "The use is discussed of a fast core memory of, say, 32000 words as a slave to a slower core memory of, say, one million words in such a way that in practical cases the effective access time is nearer that of the fast memory than that of the slow memory."

Caching Basics: Exploit Spatial Locality
Idea: store addresses adjacent to the recently accessed one in automatically managed fast memory. Logically divide memory into equal-size blocks and fetch the accessed block into the cache in its entirety.
Anticipation: nearby data will be accessed soon.
Spatial locality principle: data near recently accessed data will be accessed in the near future, e.g., sequential instruction access, array traversal.
This is what the IBM 360/85 implemented: a 16 KByte cache with 64-byte blocks. Liptay, "Structural aspects of the System/360 Model 85 II: the cache," IBM Systems Journal, 1968.

Memory Hierarchy Basics
When a word is not found in the cache, a miss occurs: the word is fetched from a lower level in the hierarchy, requiring a higher-latency reference. The lower level may be another cache or main memory. The other words contained within the block are also fetched, which takes advantage of spatial locality.
The block is placed into the cache in any location within its set, determined by the address: set index = block address MOD number of sets in the cache. A small sketch of this decomposition follows below.
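The following C fragment (my own illustration, with a hypothetical geometry of 64-byte blocks and 256 sets) shows how an address splits into block offset, set index, and tag using exactly the block-address-MOD-number-of-sets rule above:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical cache geometry, chosen only for illustration. */
    #define BLOCK_SIZE 64u
    #define NUM_SETS   256u

    int main(void) {
        uint64_t addr = 0x7ffc1234abcdULL;              /* some byte address             */
        uint64_t block_addr = addr / BLOCK_SIZE;        /* which block the byte lies in  */
        uint64_t offset     = addr % BLOCK_SIZE;        /* byte within the block         */
        uint64_t set_index  = block_addr % NUM_SETS;    /* block address MOD #sets       */
        uint64_t tag        = block_addr / NUM_SETS;    /* identifies the block in a set */

        printf("tag=%llx set=%llu offset=%llu\n",
               (unsigned long long)tag, (unsigned long long)set_index,
               (unsigned long long)offset);
        return 0;
    }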

Memory Hierarchy Basics
n blocks per set => n-way set associative. A direct-mapped cache has one block per set; a fully associative cache has one set.
Writing to the cache: two strategies. Write-through immediately updates the lower levels of the hierarchy; write-back only updates the lower levels when an updated (dirty) block is replaced. Both strategies use a write buffer to make writes asynchronous. A toy sketch of the two policies appears below.
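A minimal, purely illustrative C sketch of the two write policies (a hypothetical single-block "cache", not from the slides): write-through propagates every store to the lower level immediately, while write-back marks the block dirty and writes it out only on eviction.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define BLOCK_WORDS 16

    /* One cache block shadowing a toy "lower-level memory". */
    typedef struct {
        int  data[BLOCK_WORDS];
        bool dirty;
    } Block;

    static int memory[BLOCK_WORDS];            /* lower level of the hierarchy */

    /* Write-through: update the block and the lower level on every store. */
    static void store_write_through(Block *b, int i, int value) {
        b->data[i] = value;
        memory[i]  = value;                     /* immediate update of the lower level */
    }

    /* Write-back: update only the block and mark it dirty; memory is stale for now. */
    static void store_write_back(Block *b, int i, int value) {
        b->data[i] = value;
        b->dirty   = true;
    }

    /* On replacement, a write-back cache flushes the dirty block to memory. */
    static void evict(Block *b) {
        if (b->dirty) {
            memcpy(memory, b->data, sizeof memory);
            b->dirty = false;
        }
    }

    int main(void) {
        Block b = {0};
        store_write_through(&b, 0, 42);   /* memory[0] is updated right away    */
        store_write_back(&b, 1, 7);       /* memory[1] updated only on evict()  */
        evict(&b);
        printf("%d %d\n", memory[0], memory[1]);   /* prints: 42 7 */
        return 0;
    }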

Memory Hierarchy Basics
Miss rate: the fraction of cache accesses that result in a miss.
Causes of misses:
Compulsory: the first reference to a block.
Capacity: blocks discarded because the cache is full and later retrieved.
Conflict: the program makes repeated references to multiple addresses from different blocks that map to the same location in the cache.

Why Does This All Work?

Cache – Main Idea

Cache Lookup

Fully Associative Cache

Direct Map Cache

Direct Map Memory Layout

2-Way Associative Cache

Multi-way Memory Layout

Line Replacement Policy

Cache Performance

2-level Cache Performance

AMAT Formula
Speculative and multithreaded processors may execute other instructions during a miss, which reduces the performance impact of misses.
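The formula itself did not survive the transcript; the standard average memory access time (AMAT) expression from CAQA, and its extension to the two-level case referenced in the "2-level Cache Performance" slide above, is:
    AMAT = Hit time + Miss rate × Miss penalty
    AMAT = Hit time(L1) + Miss rate(L1) × (Hit time(L2) + Miss rate(L2) × Miss penalty(L2))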

Optimization Basics
Six basic cache optimizations:
Larger block size: reduces compulsory misses; increases capacity and conflict misses; increases miss penalty.
Larger total cache capacity: reduces miss rate; increases hit time and power consumption.
Higher associativity: reduces conflict misses.
More cache levels: reduce overall memory access time.
Giving priority to read misses over writes: reduces miss penalty.
Avoiding address translation during cache indexing: reduces hit time.

Memory Technology and Basic Optimizations
Performance metrics: latency is the concern of the cache; bandwidth is the concern of multiprocessors and I/O.
Access time: the time between a read request and when the desired word arrives.
Cycle time: the minimum time between unrelated requests to memory.
SRAM has low latency, so use it for caches. Organize DRAM chips into many banks for high bandwidth and use DRAM for main memory.

Advanced Optimizations
Reduce hit time: small and simple first-level caches; way prediction.
Increase bandwidth: pipelined caches, multibanked caches, non-blocking caches.
Reduce miss penalty: critical word first; merging write buffers.
Reduce miss rate: compiler optimizations.
Reduce miss penalty or miss rate via parallelism: hardware or compiler prefetching.

L1 Size and Associativity
Access time vs. size and associativity.

L1 Size and Associativity
Energy per read vs. size and associativity.

Way Prediction
To improve hit time, predict the way in order to pre-set the mux; a mis-prediction gives a longer hit time. Prediction accuracy is over 90% for two-way and over 80% for four-way set-associative caches, and the I-cache has better accuracy than the D-cache. Way prediction was first used on the MIPS R10000 in the mid-90s. The idea extends to predicting the block as well ("way selection"), which increases the mis-prediction penalty.

Pipelined Caches
Pipeline cache access to improve bandwidth. Examples: Pentium, 1 cycle; Pentium Pro through Pentium III, 2 cycles; Pentium 4 through Core i7, 4 cycles. Pipelining increases the branch mis-prediction penalty but makes it easier to increase associativity.

Multibanked Caches
Organize the cache as independent banks to support simultaneous accesses. The Intel i7 supports 4 banks for L1 and 8 banks for L2. Banks are interleaved according to block address.

Nonblocking Caches
Allow hits before previous misses complete: "hit under miss" and "hit under multiple miss" (the L2 must support this). In general, processors can hide an L1 miss penalty but not an L2 miss penalty.

Critical Word First, Early Restart
Critical word first: request the missed word from memory first and send it to the processor as soon as it arrives.
Early restart: request the words in normal order, but send the missed word to the processor as soon as it arrives.
The effectiveness of these strategies depends on the block size and on the likelihood of another access to the portion of the block that has not yet been fetched.

Merging Write Buffer
When storing to a block that is already pending in the write buffer, update the existing write-buffer entry instead of allocating a new one. This reduces stalls due to a full write buffer. Merging is not applied to I/O addresses.
(Figure: write-buffer contents without and with merging.)

Summary

Compiler Optimizations
Loop interchange: swap nested loops to access memory in sequential order.
Blocking: instead of accessing entire rows or columns, subdivide matrices into blocks.
Loop fusion.
Prefetching.
Examples of these transformations follow on the next slides.

Code Optimization: Merging Arrays
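The code for this slide did not survive the transcript; the following is the standard merging-arrays illustration in the style of the CAQA examples (the array names and SIZE are hypothetical). Two parallel arrays become one array of structures so that fields used together share a cache block.

    #define SIZE 1024   /* hypothetical element count */

    /* Before: two parallel arrays; val[i] and key[i] live far apart in memory. */
    int val[SIZE];
    int key[SIZE];

    /* After: one array of structures; val and key for the same index share a
       cache block, improving spatial locality when both fields are used together. */
    struct merge {
        int val;
        int key;
    };
    struct merge merged_array[SIZE];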

Code Optimization: Loop Fusion
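Again the code itself is missing from the transcript; this is the classic loop-fusion sketch (arrays a, b, c, d and N are hypothetical). Fusing the two loop nests lets a[i][j] and c[i][j] be reused while they are still cache-resident instead of being swept through the cache twice.

    #define N 256
    double a[N][N], b[N][N], c[N][N], d[N][N];

    /* Before fusion: two loop nests; between them, blocks of a[][] and c[][]
       may be evicted and must be re-fetched. */
    void unfused(void) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[i][j] = 1.0 / b[i][j] * c[i][j];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                d[i][j] = a[i][j] + c[i][j];
    }

    /* After fusion: a[i][j] and c[i][j] are reused while still in the cache. */
    void fused(void) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                a[i][j] = 1.0 / b[i][j] * c[i][j];
                d[i][j] = a[i][j] + c[i][j];
            }
    }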

Code Optimization: Loop Interchange
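The interchange example is also missing from the transcript; this sketch follows the classic textbook form (the dimensions are hypothetical). Because C stores x row-major, iterating j in the inner loop walks memory with stride 1 and uses every word of each fetched block.

    #define M 5000   /* hypothetical dimensions */
    #define N 100

    double x[M][N];  /* row-major: x[i][j] and x[i][j+1] are adjacent in memory */

    /* Before: the inner loop walks down a column, a stride of N doubles per access. */
    void column_order(void) {
        for (int j = 0; j < N; j++)
            for (int i = 0; i < M; i++)
                x[i][j] = 2 * x[i][j];
    }

    /* After interchange: the inner loop walks along a row with stride 1, so every
       word of each fetched cache block is used before the block is evicted. */
    void row_order(void) {
        for (int i = 0; i < M; i++)
            for (int j = 0; j < N; j++)
                x[i][j] = 2 * x[i][j];
    }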

Blocking
for (i = 0; i < N; i = i + 1)
    for (j = 0; j < N; j = j + 1) {
        r = 0;
        for (k = 0; k < N; k = k + 1)
            r = r + y[i][k] * z[k][j];
        x[i][j] = r;
    }

Blocking
for (jj = 0; jj < N; jj = jj + B)
    for (kk = 0; kk < N; kk = kk + B)
        for (i = 0; i < N; i = i + 1)
            for (j = jj; j < min(jj + B, N); j = j + 1) {
                r = 0;
                for (k = kk; k < min(kk + B, N); k = k + 1)
                    r = r + y[i][k] * z[k][j];
                x[i][j] = x[i][j] + r;
            }

Hardware Prefetching
Fetch two blocks on a miss: the requested block and the next sequential block.

Prefetching

Compiler Prefetching
Insert prefetch instructions before the data is needed. Prefetches should be non-faulting: a prefetch does not cause exceptions.
Register prefetch loads the data into a register; cache prefetch loads the data into the cache.
Combine prefetching with loop unrolling and software pipelining. A compiler-level sketch follows below.
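As a concrete cache-prefetch illustration (my own example, not from the slides): GCC and Clang expose the __builtin_prefetch intrinsic, which can be inserted a few iterations ahead of the use; the prefetch distance below is a tuning parameter chosen purely for illustration.

    #include <stddef.h>

    #define PREFETCH_DISTANCE 16   /* hypothetical tuning parameter */

    /* Sum an array, prefetching an element that will be needed a few iterations ahead. */
    long sum_with_prefetch(const long *a, size_t n) {
        long sum = 0;
        for (size_t i = 0; i < n; i++) {
            if (i + PREFETCH_DISTANCE < n)
                __builtin_prefetch(&a[i + PREFETCH_DISTANCE], 0, 1); /* read, low temporal locality */
            sum += a[i];
        }
        return sum;
    }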