Memory Hierarchy: Motivation, Definitions, Four Questions about Memory Hierarchy
Soner Onder, Michigan Technological University
Randy Katz & David A. Patterson, University of California, Berkeley

Levels in a memory hierarchy

Basic idea
[Figure: a memory address is presented to the cache; the stored tag of the selected block is compared (=?) against the tag bits of the address, and on a match the data block is supplied from the cache instead of memory.]

Who Cares about Memory Hierarchy?
1980: no cache in microprocessors; 1995: two-level caches, 60% of the transistors on the Alpha 21164 microprocessor

General Principles
Locality
- Temporal locality: a recently referenced item is likely to be referenced again soon
- Spatial locality: items near a recently referenced item are likely to be referenced soon
Locality + "smaller hardware is faster" = memory hierarchy
- Levels: each level is smaller, faster, and more expensive per byte than the level below it
- Inclusive: data found in the top level is also found in the bottom
Definitions
- Upper level: the level closer to the processor
- Block: the minimum unit of data that is either present or not present in the upper level
- Address = block frame address + block offset address
- Hit time: time to access the upper level, including the time to determine hit or miss
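To make the two kinds of locality concrete, here is a minimal C sketch (illustrative only, not from the original slides): the running sum is reused on every iteration (temporal locality) and the array walk touches consecutive addresses (spatial locality).

    #include <stddef.h>

    /* Summing an array: 'sum' is touched on every iteration (temporal
       locality), and a[i] walks consecutive addresses (spatial locality),
       so a cache with multi-word blocks captures most of these accesses. */
    long sum_array(const long *a, size_t n) {
        long sum = 0;                  /* reused each iteration: temporal */
        for (size_t i = 0; i < n; i++)
            sum += a[i];               /* sequential accesses: spatial */
        return sum;
    }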

Cache Measures
- Hit rate: fraction of accesses found in that level; usually so high that we talk about the miss rate instead
- Miss rate fallacy: the miss rate is as misleading a proxy for average memory access time as MIPS is for CPU performance
- Average memory access time = Hit time + Miss rate × Miss penalty (ns or clocks)
- Miss penalty: time to replace a block from the lower level, including the time to deliver it to the CPU
  - Access time: time to reach the lower level = f(lower-level latency)
  - Transfer time: time to transfer the block = f(bandwidth between upper and lower levels, block size)
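The average-memory-access-time formula can be evaluated directly; a minimal sketch with invented numbers (none of these values come from the slides):

    #include <stdio.h>

    /* AMAT = hit time + miss rate * miss penalty (all in clock cycles). */
    double amat(double hit_time, double miss_rate, double miss_penalty) {
        return hit_time + miss_rate * miss_penalty;
    }

    int main(void) {
        /* Hypothetical cache: 1-cycle hit, 5% miss rate, 40-cycle penalty. */
        printf("AMAT = %.2f cycles\n", amat(1.0, 0.05, 40.0));  /* 3.00 */
        return 0;
    }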

Block Size vs. Cache Measures
Increasing block size generally increases the miss penalty.
[Figure: three plots against block size: miss penalty grows with block size, miss rate falls and then rises again, and their product term in Avg. Memory Access Time is minimized at an intermediate block size.]

Implications for the CPU
- The hit check must be fast, since it happens on every memory access; the hit is the common case
- Memory access time becomes unpredictable
  - Tens of clock cycles: wait
  - Thousands of clock cycles: interrupt, switch, and do something else
  - Newer style: multithreaded execution
- How to handle a miss? (tens of cycles => hardware; thousands of cycles => software)

Four Questions for Memory Hierarchy Designers
- Q1: Where can a block be placed in the upper level? (Block placement)
- Q2: How is a block found if it is in the upper level? (Block identification)
- Q3: Which block should be replaced on a miss? (Block replacement)
- Q4: What happens on a write? (Write strategy)

Q1: Where can a block be placed in the upper level?
Example: block 12 placed in an 8-block cache:
- Fully associative: block 12 can go anywhere
- Direct mapped: block 12 can go only into block (12 mod 8) = 4
- 2-way set associative: block 12 can go anywhere in set (12 mod 4) = 0
Set-associative mapping: set = block number modulo the number of sets
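The placement arithmetic for all three organizations reduces to one modulo; a small C sketch (function and variable names are mine, not from the slides):

    #include <stdio.h>

    /* For a cache of num_blocks blocks organized into num_sets sets, a
       block may live only in set (block_number mod num_sets).
       Direct mapped: num_sets == num_blocks (1-way).
       Fully associative: num_sets == 1 (block can go anywhere). */
    unsigned set_index(unsigned block_number, unsigned num_sets) {
        return block_number % num_sets;
    }

    int main(void) {
        unsigned block = 12, blocks = 8;
        printf("direct mapped:   block %u\n", set_index(block, blocks));     /* 4 */
        printf("2-way set assoc: set %u\n",   set_index(block, blocks / 2)); /* 0 */
        printf("fully assoc:     set %u\n",   set_index(block, 1));          /* 0 */
        return 0;
    }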

Q2: How Is a Block Found If It Is in the Upper Level?
- A tag on each block identifies it; there is no need to check the index or block offset bits
- Increasing associativity shrinks the index and expands the tag
  - Fully associative: no index at all
  - Direct mapped: large index
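As an illustration of block identification (the field widths below are one hypothetical choice), an address splits into tag, index, and block offset, and only the tag participates in the comparison:

    #include <stdint.h>
    #include <stdio.h>

    /* Example split for a direct-mapped cache with 256 blocks of 32 bytes:
       5 offset bits (32 = 2^5), 8 index bits (256 = 2^8), the rest is tag. */
    #define OFFSET_BITS 5
    #define INDEX_BITS  8

    int main(void) {
        uint32_t addr   = 0x12345678;
        uint32_t offset = addr & ((1u << OFFSET_BITS) - 1);
        uint32_t index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
        uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);
        printf("tag=0x%x index=%u offset=%u\n", tag, index, offset);
        return 0;
    }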

Q3: Which Block Should Be Replaced on a Miss?
- Easy for direct mapped: there is only one candidate
- Set associative or fully associative:
  - Random (typical for large associativities)
  - LRU (typical for smaller associativities)

Miss rates, LRU vs. Random:

    Size      2-way             4-way             8-way
              LRU     Random    LRU     Random    LRU     Random
    16 KB     5.18%   5.69%     4.67%   5.29%     4.39%   4.96%
    64 KB     1.88%   2.01%     1.54%   1.66%     1.39%   1.53%
    256 KB    1.15%   1.17%     1.13%   1.13%     1.12%   1.12%
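A minimal LRU bookkeeping sketch for one set of a 4-way cache, using age counters (the structure and names are illustrative, not from the slides):

    #include <stdint.h>

    #define WAYS 4

    /* One set of a 4-way set-associative cache with age-counter LRU:
       age 0 is most recently used; the largest age marks the LRU victim. */
    struct set {
        uint32_t tag[WAYS];
        uint8_t  age[WAYS];
        uint8_t  valid[WAYS];
    };

    /* Mark 'way' most recently used; everything younger ages by one. */
    static void touch(struct set *s, int way) {
        for (int w = 0; w < WAYS; w++)
            if (s->age[w] < s->age[way])
                s->age[w]++;
        s->age[way] = 0;
    }

    /* Pick a victim: an invalid way if one exists, else the oldest way. */
    static int victim(const struct set *s) {
        int lru = 0;
        for (int w = 0; w < WAYS; w++) {
            if (!s->valid[w]) return w;
            if (s->age[w] > s->age[lru]) lru = w;
        }
        return lru;
    }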

Q4: What Happens on a Write?
- Write through: the information is written both to the block in the cache and to the block in the lower-level memory
- Write back: the information is written only to the block in the cache; the modified cache block is written to main memory only when it is replaced
  - Requires a dirty bit: is the block clean or dirty?
- Pros and cons of each:
  - WT: read misses cannot result in writes to memory (because of replacements)
  - WB: repeated writes to the same block cause no extra memory traffic
- WT is always combined with write buffers so the CPU does not wait for the lower-level memory
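A compact sketch of the two policies on a store hit (the structures are hypothetical, and write-miss handling is omitted):

    #include <stdbool.h>
    #include <stdint.h>

    struct line { uint32_t tag; uint32_t data[8]; bool dirty; };

    /* Stub standing in for the lower level of the hierarchy. */
    static void memory_write(uint32_t addr, uint32_t v) { (void)addr; (void)v; }

    /* Write through: update the cache AND the lower-level memory. */
    void store_wt(struct line *l, int word, uint32_t addr, uint32_t v) {
        l->data[word] = v;
        memory_write(addr, v);  /* a write buffer would hide this latency */
    }

    /* Write back: update only the cache; mark the line dirty so it is
       written to memory when it is eventually replaced. */
    void store_wb(struct line *l, int word, uint32_t v) {
        l->data[word] = v;
        l->dirty = true;
    }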

Example: Alpha 21064 Data Cache
- Direct mapped, 8 KB, 32-byte blocks
- Index = 8 bits: 256 blocks = 8192 / (32 × 1)
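The index width follows from the geometry; a small sketch of the arithmetic (using the numbers on this slide):

    #include <stdio.h>

    int main(void) {
        unsigned cache_bytes = 8192, block_bytes = 32, assoc = 1;
        unsigned sets = cache_bytes / (block_bytes * assoc);  /* 256 */
        unsigned index_bits = 0;
        while ((1u << index_bits) < sets)
            index_bits++;                                     /* 8 */
        printf("%u sets -> %u index bits\n", sets, index_bits);
        return 0;
    }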

Writes in the Alpha 21064
- Write buffer: 4 entries, 4 words per entry
- No write merging vs. write merging: without merging, each word write occupies its own entry, so the buffer holds only 4 of 16 sequential writes; with merging, writes to the same block share an entry, and 16 sequential writes in a row fit in the buffer
[Figure: write buffer contents for 16 sequential writes, without vs. with merging.]
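A minimal write-merging buffer sketch (all structures and names are illustrative): a new write merges into an existing entry when it targets the same block, and otherwise takes a fresh entry.

    #include <stdbool.h>
    #include <stdint.h>

    #define ENTRIES 4
    #define WORDS   4   /* 4-byte words per entry (one block) */

    struct wbuf_entry {
        uint32_t block_addr;    /* which block this entry buffers */
        uint32_t word[WORDS];
        bool     valid[WORDS];  /* which words hold pending writes */
        bool     used;
    };

    static struct wbuf_entry wbuf[ENTRIES];

    /* Returns true if the write was buffered; false means the buffer is
       full and the CPU would stall until an entry drains to memory. */
    bool buffer_write(uint32_t addr, uint32_t value) {
        uint32_t block = addr / (WORDS * 4);
        int      w     = (addr / 4) % WORDS;
        for (int i = 0; i < ENTRIES; i++)       /* first try to merge */
            if (wbuf[i].used && wbuf[i].block_addr == block) {
                wbuf[i].word[w]  = value;
                wbuf[i].valid[w] = true;
                return true;
            }
        for (int i = 0; i < ENTRIES; i++)       /* else allocate an entry */
            if (!wbuf[i].used) {
                wbuf[i].used       = true;
                wbuf[i].block_addr = block;
                wbuf[i].word[w]    = value;
                wbuf[i].valid[w]   = true;
                return true;
            }
        return false;
    }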

Structural Hazard: Instruction and Data?
Miss rates for separate instruction and data caches vs. a unified cache:

    Size      Instruction Cache   Data Cache   Unified Cache
    1 KB      3.06%               24.61%       13.34%
    2 KB      2.26%               20.57%        9.78%
    4 KB      1.78%               15.94%        7.24%
    8 KB      1.10%               10.19%        4.57%
    16 KB     0.64%                6.47%        2.87%
    32 KB     0.39%                4.82%        1.99%
    64 KB     0.15%                3.77%        1.35%
    128 KB    0.02%                2.88%        0.95%

A fair comparison requires weighting instruction vs. data accesses by their relative frequency.

2-Way Set Associative: Address to Select Word
- Two banks of address tags and data RAM
- A 2:1 mux selects the data from the hitting way
- Address bits select the correct entry in each data RAM
[Figure: two-way set-associative cache datapath.]
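A behavioral C sketch of that datapath (types and names are mine): both tags are compared in parallel, and the hit signals drive the mux that selects the word.

    #include <stdbool.h>
    #include <stdint.h>

    struct way { uint32_t tag; bool valid; uint32_t data[8]; };

    /* The index has already selected one entry per way; both comparisons
       happen "in parallel", and the ?: below plays the role of the 2:1 mux. */
    bool lookup_2way(const struct way *w0, const struct way *w1,
                     uint32_t tag, int word, uint32_t *out) {
        bool hit0 = w0->valid && w0->tag == tag;
        bool hit1 = w1->valid && w1->tag == tag;
        if (!hit0 && !hit1)
            return false;               /* miss in both ways */
        *out = hit0 ? w0->data[word] : w1->data[word];
        return true;
    }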

Cache Performance
CPU time = (CPU execution clock cycles + Memory stall clock cycles) × Clock cycle time
Memory stall clock cycles = (Reads × Read miss rate × Read miss penalty) + (Writes × Write miss rate × Write miss penalty)
Combining reads and writes:
Memory stall clock cycles = Memory accesses × Miss rate × Miss penalty
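Evaluating these formulas numerically; a hedged sketch in which every input value is invented for illustration:

    #include <stdio.h>

    int main(void) {
        double exec_cycles  = 1.0e9;   /* CPU execution clock cycles */
        double accesses     = 4.0e8;   /* loads + stores */
        double miss_rate    = 0.02;
        double miss_penalty = 50.0;    /* cycles */
        double cycle_ns     = 0.5;
        double stall = accesses * miss_rate * miss_penalty;   /* 4e8 cycles */
        double t_ns  = (exec_cycles + stall) * cycle_ns;      /* 7e8 ns */
        printf("stall cycles = %.3g, CPU time = %.3g ns\n", stall, t_ns);
        return 0;
    }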

Cache Performance
CPU time = IC × (CPI_execution + Memory accesses per instruction × Miss rate × Miss penalty) × Clock cycle time
Misses per instruction = Memory accesses per instruction × Miss rate
CPU time = IC × (CPI_execution + Misses per instruction × Miss penalty) × Clock cycle time
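The same calculation in per-instruction form; again, every number here is hypothetical:

    #include <stdio.h>

    int main(void) {
        double ic            = 1.0e9;  /* instruction count */
        double cpi_exec      = 1.2;
        double acc_per_instr = 1.3;    /* ~1 fetch + 0.3 data references */
        double miss_rate     = 0.02;
        double miss_penalty  = 50.0;   /* cycles */
        double cycle_ns      = 0.5;
        double misses_per_instr = acc_per_instr * miss_rate;      /* 0.026 */
        double cpi = cpi_exec + misses_per_instr * miss_penalty;  /* 2.5 */
        printf("CPI = %.2f, CPU time = %.3g ns\n", cpi, ic * cpi * cycle_ns);
        return 0;
    }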

Improving Cache Performance
Average memory access time = Hit time + Miss rate × Miss penalty (ns or clocks)
Improve performance by:
1. Reducing the miss rate,
2. Reducing the miss penalty, or
3. Reducing the time to hit in the cache.

Summary
- The CPU-memory gap is a major performance obstacle, for both hardware and software
- Take advantage of program behavior: locality
- Execution time of a program is still the only reliable performance measure
- The four questions of memory hierarchy design: block placement, identification, replacement, and write strategy