Cache Memory
Adapted from lecture notes of Dr. Patterson and Dr. Kubiatowicz of UC Berkeley

Revisiting Memory Hierarchy
Facts:
– Big is slow
– Fast is small
Increase performance by having a “hierarchy” of memory subsystems. “Temporal Locality” and “Spatial Locality” are the big ideas.
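The two localities show up directly in code. Below is a minimal C sketch (not from the slides): both functions sum the same array, but the row-major walk touches consecutive addresses and exploits spatial locality, while the column-major walk strides across rows and defeats it.

```c
#include <stdio.h>

#define ROWS 1024
#define COLS 1024

static double a[ROWS][COLS];

/* Row-major walk: consecutive addresses, so each fetched cache
 * block is fully used before moving on (spatial locality). */
double sum_row_major(void) {
    double s = 0.0;
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            s += a[i][j];
    return s;
}

/* Column-major walk: a stride of COLS doubles between accesses,
 * so nearly every access touches a different cache block. */
double sum_col_major(void) {
    double s = 0.0;
    for (int j = 0; j < COLS; j++)
        for (int i = 0; i < ROWS; i++)
            s += a[i][j];
    return s;
}

int main(void) {
    printf("%f %f\n", sum_row_major(), sum_col_major());
    return 0;
}
```

Timing the two functions on real hardware typically shows the row-major version running several times faster, purely from cache behavior.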

Revisiting Memory Hierarchy
Terms:
– Cache Miss
– Cache Hit
– Hit Rate
– Miss Rate
– Index, Offset and Tag

Direct Mapped Cache

Direct Mapped Cache [contd…]
– What is the size of the cache? 4K
– If I read the address shown in the figure, what index number is checked? 32
– If the number was found, what are the inputs to the comparator?
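As a rough illustration of the address decomposition this quiz relies on, here is a C sketch assuming (the figure itself is lost) a 4 KB direct-mapped cache with one-word (4-byte) blocks, i.e. 1024 entries, 2 offset bits, a 10-bit index, and a 20-bit tag; the example address is arbitrary, not the one from the original slide.

```c
#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 2    /* log2(4-byte block)                    */
#define INDEX_BITS 10    /* log2(1024 entries) for an assumed 4 KB */

int main(void) {
    uint32_t addr = 0x12345678;   /* arbitrary illustrative address */
    uint32_t offset = addr & ((1u << OFFSET_BITS) - 1);
    uint32_t index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);
    /* The tag is one comparator input; the other is the tag stored
     * at cache[index], qualified by that entry's valid bit. */
    printf("tag=0x%05x index=%u offset=%u\n",
           (unsigned)tag, (unsigned)index, (unsigned)offset);
    return 0;
}
```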

Direct Mapped Cache [contd…]
Taking advantage of spatial locality, we read 4 bytes at a time.

Direct Mapped Cache [contd…]
Advantages:
– Simple
– Fast
Disadvantage:
– Mapping is fixed!!!

Associative Caches
Block 12 placed in an 8-block cache:
– Fully associative, direct mapped, 2-way set associative
– Set-associative mapping = block number modulo number of sets
– Fully associative: block 12 can go anywhere
– Direct mapped: block 12 can go only into block 4 (12 mod 8)
– Set associative: block 12 can go anywhere in set 0 (12 mod 4)
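A tiny C check of the three placement rules for the slide's example (block 12 in an 8-block cache):

```c
#include <stdio.h>

int main(void) {
    int block = 12, frames = 8;
    int sets = 4;                                /* 8 frames / 2 ways */
    printf("direct mapped: frame %d\n", block % frames);     /* 4 */
    printf("2-way set associative: set %d\n", block % sets); /* 0 */
    printf("fully associative: any of the %d frames\n", frames);
    return 0;
}
```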

Set Associative Cache
N-way set associative: N entries for each cache index
– N direct mapped caches operate in parallel
Example: two-way set associative cache
– The Cache Index selects a “set” from the cache
– The two tags in the set are compared to the input in parallel
– Data is selected based on the tag comparison result
(Figure: two ways, each with Valid/Tag/Data arrays; the index selects a set, both tags feed comparators, their results are ORed into Hit, and a mux picks the hitting cache block.)
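A minimal C sketch of this lookup (all structure and function names are illustrative, not from the slides): the index selects a set, the tags in the set are compared (in hardware by parallel comparators; here, by a loop), and the matching way supplies the block.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NWAYS 2
#define NSETS 128

struct line { bool valid; uint32_t tag; uint8_t data[32]; };
static struct line cache[NSETS][NWAYS];

/* Returns true on a hit and sets *way; false on a miss. */
bool lookup(uint32_t index, uint32_t tag, int *way) {
    for (int w = 0; w < NWAYS; w++) {   /* parallel comparators in HW */
        if (cache[index][w].valid && cache[index][w].tag == tag) {
            *way = w;                   /* the mux selects this way   */
            return true;
        }
    }
    return false;
}

int main(void) {
    int way = -1;
    cache[5][1].valid = true;
    cache[5][1].tag = 0xABC;
    if (lookup(5, 0xABC, &way))
        printf("hit in way %d\n", way);
    else
        printf("miss\n");
    return 0;
}
```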

Example: 4-way Set Associative Cache
What is the cache size in this case?

Disadvantages of Set Associative Cache
N-way set associative cache versus direct mapped cache:
– N comparators vs. 1
– Extra MUX delay for the data
– Data comes AFTER the Hit/Miss decision and set selection
In a direct mapped cache, the cache block is available BEFORE Hit/Miss.

Fully Associative Cache
– Forget about the Cache Index
– Compare the Cache Tags of all cache entries in parallel
– Example: with 32-byte blocks, we need N 27-bit comparators
By definition: Conflict Miss = 0 for a fully associative cache.
(Figure: each entry holds a valid bit, a 27-bit tag, and bytes 0–31 of the block; a 5-bit byte select covers the block, and every stored tag is compared against the address tag at once.)

Cache Misses
Compulsory (cold start or process migration; first reference): first access to a block
– “Cold” fact of life: not a whole lot you can do about it
– Note: if you are going to run “billions” of instructions, compulsory misses are insignificant
Capacity:
– Cache cannot contain all blocks accessed by the program
– Solution: increase cache size
Conflict (collision):
– Multiple memory locations mapped to the same cache location
– Solution 1: increase cache size
– Solution 2: increase associativity
Coherence (invalidation): another process (e.g., I/O) updates memory

Design Options at Constant Cost

                   Direct Mapped   N-way Set Associative   Fully Associative
Cache Size         Big             Medium                  Small
Compulsory Miss    Same            Same                    Same
Conflict Miss      High            Medium                  Zero
Capacity Miss      Low             Medium                  High
Coherence Miss     Same            Same                    Same

Four Questions for Cache Design
– Q1: Where can a block be placed in the upper level? (Block placement)
– Q2: How is a block found if it is in the upper level? (Block identification)
– Q3: Which block should be replaced on a miss? (Block replacement)
– Q4: What happens on a write? (Write strategy)

Where can a block be placed in the upper level?
– Direct Mapped
– Set Associative
– Fully Associative

How is a block found if it is in the upper level?
– Direct indexing (using index and block offset), tag compares, or a combination
– Increasing associativity shrinks the index and expands the tag
(Address fields: Tag | Index | Block offset. The index performs the set select; the block offset performs the data select.)
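The index/tag trade-off is easy to tabulate. A small C sketch, assuming a 4 KB cache with 32-byte blocks and 32-bit addresses (illustrative parameters): sets = size / (block x ways), the index needs log2(sets) bits, and the tag takes the remaining address bits.

```c
#include <stdio.h>

/* Integer log2 for powers of two. */
static int log2i(int x) { int b = 0; while (x >>= 1) b++; return b; }

int main(void) {
    int size = 4096, block = 32, addr_bits = 32;
    int offset = log2i(block);                 /* 5 offset bits */
    for (int ways = 1; ways <= 8; ways *= 2) {
        int sets  = size / (block * ways);
        int index = log2i(sets);
        printf("%d-way: %d sets, index=%d bits, tag=%d bits\n",
               ways, sets, index, addr_bits - index - offset);
    }
    return 0;
}
```

Running it shows the slide's point numerically: going from 1-way to 8-way shrinks the index from 7 bits to 4 while the tag grows from 20 bits to 23.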

Which block should be replaced on a miss?
– Easy for direct mapped
– Set associative or fully associative:
  – Random
  – LRU (Least Recently Used)
Measured miss rates (a sketch of LRU follows the table):

Size     2-way LRU  2-way Random  4-way LRU  4-way Random  8-way LRU  8-way Random
16 KB    5.2%       5.7%          4.7%       5.3%          4.4%       5.0%
64 KB    1.9%       2.0%          1.5%       1.7%          1.4%       1.5%
256 KB   1.15%      1.17%         1.13%      1.13%         1.12%      1.12%
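Below is a minimal C sketch of true LRU for one set, assuming a per-way age counter (real hardware usually uses cheaper approximations such as pseudo-LRU):

```c
#include <stdio.h>

#define NWAYS 4

static int age[NWAYS];            /* larger = less recently used */

/* On a hit, reset the touched way's age and age everything else. */
void touch(int way) {
    for (int w = 0; w < NWAYS; w++) age[w]++;
    age[way] = 0;
}

/* On a miss, evict the oldest (least recently used) way. */
int victim(void) {
    int v = 0;
    for (int w = 1; w < NWAYS; w++)
        if (age[w] > age[v]) v = w;
    return v;
}

int main(void) {
    touch(2); touch(0); touch(1);         /* hits on ways 2, 0, 1 */
    printf("victim = %d\n", victim());    /* way 3 was never used */
    return 0;
}
```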

What happens on a write?
– Write through: the information is written to both the block in the cache and the block in the lower-level memory.
– Write back: the information is written only to the block in the cache. The modified cache block is written to main memory only when it is replaced.
  – Is the block clean or dirty?
Pros and cons of each?
– WT: read misses cannot result in writes
– WB: no repeated writes to the same location
WT is always combined with write buffers so that the processor doesn’t wait for the lower-level memory.
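A C sketch contrasting the two write-hit policies (the structures and the mem_write stub are illustrative, not from the slides): write through sends every store to the lower level immediately, while write back only marks the block dirty and defers the memory write to eviction time.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct line { bool valid, dirty; uint32_t tag, data; };

/* Stub standing in for the lower-level memory. */
static void mem_write(uint32_t addr, uint32_t val) {
    printf("memory[%04x] <- %u\n", (unsigned)addr, (unsigned)val);
}

/* Write through: cache and memory are updated together. */
void write_through(struct line *l, uint32_t addr, uint32_t val) {
    l->data = val;
    mem_write(addr, val);
}

/* Write back: only the cache is updated; the dirty bit defers
 * the memory write until the block is evicted. */
void write_back(struct line *l, uint32_t addr, uint32_t val) {
    (void)addr;                   /* used only at eviction time */
    l->data = val;
    l->dirty = true;
}

int main(void) {
    struct line l = {0};
    write_through(&l, 0x100, 1);  /* memory traffic now     */
    write_back(&l, 0x100, 2);     /* no memory traffic yet  */
    printf("dirty = %d\n", l.dirty);
    return 0;
}
```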

Two Types of Cache
– Instruction Cache
– Data Cache
(Figure: a pipeline with the I-Cache accessed at instruction fetch, the D-Cache accessed at the memory stage via the IRex/IRm/IRwb pipeline registers, and a miss in either stalling the pipeline.)

Improving Cache Performance
Reduce Miss Rate:
– Associativity
– Victim Cache
– Compiler Optimizations
Reduce Miss Penalty:
– Faster DRAM
– Write Buffers
Reduce Hit Time

Reduce Miss Rate

Reduce Miss Rate [contd…]
Conflict misses follow the 2:1 cache rule: the miss rate of a 1-way associative (direct-mapped) cache of size X roughly equals the miss rate of a 2-way set-associative cache of size X/2.

3Cs Comparison
(Figure: total miss rate broken down into compulsory, capacity, and conflict components.)

Compiler Optimizations
Compiler optimizations to reduce miss rate (a loop interchange example follows):
– Loop Fusion
– Loop Interchange
– Merge Arrays
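As an illustration, here is the classic loop interchange transformation in C (array dimensions are illustrative): both functions do the same work, but the interchanged version walks the array in storage order, turning a large stride into a unit stride.

```c
#define N 100
#define M 5000

static double x[N][M];

/* Before: the inner loop varies the row index, so consecutive
 * accesses are M doubles apart in memory (poor spatial locality). */
void before(void) {
    for (int j = 0; j < M; j++)
        for (int i = 0; i < N; i++)
            x[i][j] = 2 * x[i][j];
}

/* After interchange: the inner loop varies the column index, so
 * the array is visited in storage order (unit stride). */
void after(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            x[i][j] = 2 * x[i][j];
}

int main(void) { before(); after(); return 0; }
```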

Reducing Miss Penalty
Use faster RAM memories:
– Driven by technology and cost!!!
– E.g., CRAY used only SRAM.

Reduce Hit Time
– Lower associativity
– Add L1 and L2 caches
  – The L1 cache is small => hit time is critical
  – The L2 cache is large => miss penalty is critical

Reducing Miss Penalty [contd…]
A write buffer is needed between the cache and memory:
– Processor: writes data into the cache and the write buffer
– Memory controller: writes the contents of the buffer to memory
The write buffer is just a FIFO: Processor -> Cache / Write Buffer -> DRAM
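A minimal C sketch of such a FIFO write buffer (depth and names are illustrative): the processor side enqueues stores and keeps executing; the memory-controller side drains them to DRAM in order.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DEPTH 4

struct entry { uint32_t addr, data; };
static struct entry buf[DEPTH];
static int head, tail, count;

/* Processor side: enqueue the store and keep executing. */
bool wb_enqueue(uint32_t addr, uint32_t data) {
    if (count == DEPTH) return false;     /* buffer full: CPU stalls */
    buf[tail] = (struct entry){ addr, data };
    tail = (tail + 1) % DEPTH;
    count++;
    return true;
}

/* Memory-controller side: drain the oldest entry to DRAM. */
bool wb_drain(struct entry *e) {
    if (count == 0) return false;
    *e = buf[head];
    head = (head + 1) % DEPTH;
    count--;
    return true;
}

int main(void) {
    wb_enqueue(0x10, 1);
    wb_enqueue(0x14, 2);
    struct entry e;
    while (wb_drain(&e))
        printf("DRAM[%02x] <- %u\n", (unsigned)e.addr, (unsigned)e.data);
    return 0;
}
```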

Designing the Memory System to Support Caches
E.g.: 1 clock cycle to send the address, 15 clock cycles for each DRAM access, 1 clock cycle to send a word of data; 4-word (16-byte) cache blocks.
– Case 1 (main memory one word wide): 1 + 4 x 15 + 4 x 1 = 65 clock cycles; bytes/clock = 16/65 = 0.25
– Case 2 (main memory two words wide): 1 + 2 x 15 + 2 x 1 = 33 clock cycles; bytes/clock = 16/33 = 0.48
– Case 3 (interleaved memory, four banks): 1 + 1 x 15 + 4 x 1 = 20 clock cycles; bytes/clock = 16/20 = 0.80
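The three cases reduce to simple arithmetic; this C snippet just re-derives the slide's numbers from the stated assumptions.

```c
#include <stdio.h>

int main(void) {
    /* Assumptions from the slide: 1 cycle to send the address,
     * 15 cycles per DRAM access, 1 cycle per word returned,
     * 4-word (16-byte) cache blocks. */
    int one_word    = 1 + 4 * 15 + 4 * 1;   /* 65 cycles */
    int two_words   = 1 + 2 * 15 + 2 * 1;   /* 33 cycles */
    int interleaved = 1 + 1 * 15 + 4 * 1;   /* 20 cycles: the four
                                               banks overlap their
                                               15-cycle accesses   */
    printf("cycles: %d %d %d\n", one_word, two_words, interleaved);
    printf("bytes/clock: %.2f %.2f %.2f\n",
           16.0 / one_word, 16.0 / two_words, 16.0 / interleaved);
    return 0;
}
```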

The Possible R/W Cases
Read Hit:
– Good for the CPU!!!
Read Miss:
– Stall the CPU, fetch the block, resume
Write Hit:
– Write Through
– Write Back
Write Miss:
– Write the entire block into memory, read it into the cache

Performance
CPU time = (CPU execution clock cycles + Memory stall clock cycles) x Clock cycle time
Memory stall clock cycles = (Reads x Read miss rate x Read miss penalty) + (Writes x Write miss rate x Write miss penalty)
Memory stall clock cycles = Memory accesses x Miss rate x Miss penalty
A different measure: Average Memory Access Time (AMAT)
AMAT = Hit Time + (Miss Rate x Miss Penalty)
Note: memory hit time is included in the execution cycles.

Performance [contd…]
Suppose a processor executes at:
– Clock Rate = 200 MHz (5 ns per cycle)
– Base CPI = 1.1
– 50% arith/logic, 30% ld/st, 20% control
Suppose that 10% of memory operations get a 50-cycle miss penalty, and 1% of instructions get the same miss penalty.
CPI = Base CPI + average stalls per instruction
    = 1.1 (cycles/ins)
    + [0.30 (DataMops/ins) x 0.10 (miss/DataMop) x 50 (cycles/miss)]
    + [1 (InstMop/ins) x 0.01 (miss/InstMop) x 50 (cycles/miss)]
    = (1.1 + 1.5 + 0.5) cycles/ins = 3.1
So 64.5% of the time the processor is stalled waiting for memory!
AMAT = (1/1.3) x [1 + 0.01 x 50] + (0.3/1.3) x [1 + 0.1 x 50] = 2.54
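This C snippet re-checks the slide's arithmetic: the data-side and instruction-side stall terms added to the base CPI, and AMAT as the access-weighted average over the 1.3 memory accesses per instruction.

```c
#include <stdio.h>

int main(void) {
    double base = 1.1, penalty = 50.0;
    double cpi = base
               + 0.30 * 0.10 * penalty   /* ld/st ops missing 10% of the time */
               + 1.00 * 0.01 * penalty;  /* instruction fetches missing 1%    */
    double amat = (1.0 / 1.3) * (1.0 + 0.01 * penalty)    /* instr accesses */
                + (0.3 / 1.3) * (1.0 + 0.10 * penalty);   /* data accesses  */
    printf("CPI = %.1f, stalled %.1f%% of the time\n",
           cpi, 100.0 * (cpi - base) / cpi);
    printf("AMAT = %.2f cycles\n", amat);
    return 0;
}
```

It prints CPI = 3.1, stalled 64.5% of the time, and AMAT = 2.54 cycles, matching the slide.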

Summary
The Principle of Locality:
– A program is likely to access a relatively small portion of the address space at any instant of time.
  – Temporal Locality: locality in time
  – Spatial Locality: locality in space
Three (+1) major categories of cache misses:
– Compulsory misses: sad facts of life. Example: cold start misses.
– Conflict misses: increase cache size and/or associativity. Nightmare scenario: the ping-pong effect!
– Capacity misses: increase cache size.
– Coherence misses: caused by external processors or I/O devices.
Cache design space:
– total size, block size, associativity
– replacement policy
– write-hit policy (write-through, write-back)
– write-miss policy

Summary [contd…]
Several interacting dimensions:
– cache size
– block size
– associativity
– replacement policy
– write-through vs. write-back
– write allocation
The optimal choice is a compromise:
– depends on access characteristics
  – workload
  – use (I-cache, D-cache, TLB)
– depends on technology / cost
Simplicity often wins.
(Figure: the cache design space sketched along the associativity, cache size, and block size axes, with factor A traded off against factor B from “less” to “more” and “bad” to “good”.)