CS61C : Machine Structures
Lecture 32 – Caches III
Lecturer SOE Dan Garcia, UC Berkeley, Fall 2006 © UCB
www.cs.berkeley.edu/~ddgarcia | inst.eecs.berkeley.edu/~cs61c

100 Gig Ethernet?! Yesterday Infinera demoed a system that took a 100 GigE signal, broke it up into ten 10 GigE signals, and sent it over existing optical networks. Fast!!! (gigaom.com/2006/11/14/100gbe)

What to do on a write hit?
Write-through: update the word in the cache block AND the corresponding word in memory.
Write-back: update the word in the cache block and allow the memory word to be "stale". Add a 'dirty' bit to each block, indicating that memory needs to be updated when the block is replaced; the OS flushes the cache before I/O.
Performance trade-offs?
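To make the two policies concrete, here is a minimal C sketch (not from the slides; the struct layout, block size, and function names are hypothetical) of what a write hit does under each scheme:

```c
#include <stdbool.h>
#include <stdint.h>

#define BLOCK_SIZE 32  /* bytes per block; an assumed size */

/* Hypothetical bookkeeping for one cache block. */
typedef struct {
    bool     valid;
    bool     dirty;              /* only meaningful for write-back */
    uint32_t tag;
    uint8_t  data[BLOCK_SIZE];
} CacheBlock;

/* Write-through: update the cached copy AND the memory copy. */
void write_hit_through(CacheBlock *blk, int offset, uint8_t byte,
                       uint8_t *mem_byte) {
    blk->data[offset] = byte;
    *mem_byte = byte;            /* memory is never stale */
}

/* Write-back: update only the cached copy and set the dirty bit;
   memory is brought up to date when the block is replaced. */
void write_hit_back(CacheBlock *blk, int offset, uint8_t byte) {
    blk->data[offset] = byte;
    blk->dirty = true;           /* memory copy is now stale */
}
```

The trade-off in code form: write-through pays a memory access on every store, while write-back batches updates at eviction time at the cost of one extra dirty bit per block.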

Block Size Tradeoff (1/3)
Benefits of a larger block size:
Spatial locality: if we access a given word, we're likely to access other nearby words soon.
Very applicable with the stored-program concept: if we execute a given instruction, it's likely that we'll execute the next few as well.
Works nicely for sequential array accesses too (see the sketch below).
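A minimal C example of the sequential-access case (array size and the 32-byte block size are assumptions for illustration):

```c
#include <stdio.h>

#define N 1024

int main(void) {
    static int a[N];
    long sum = 0;
    for (int i = 0; i < N; i++)
        a[i] = i;
    /* Stride-1 traversal: assuming 32-byte blocks and 4-byte ints,
       each miss loads 8 consecutive ints, so the steady state is
       roughly one miss followed by seven hits, over and over. */
    for (int i = 0; i < N; i++)
        sum += a[i];
    printf("sum = %ld\n", sum);
    return 0;
}
```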

Block Size Tradeoff (2/3)
Drawbacks of a larger block size:
A larger block size means a larger miss penalty: on a miss, it takes longer to load a new block from the next level.
If the block size is too big relative to the cache size, then there are too few blocks. Result: the miss rate goes up.
In general, minimize Average Memory Access Time: AMAT = Hit Time + Miss Rate × Miss Penalty.

Block Size Tradeoff (3/3)
Hit Time = time to find and retrieve data from the current level cache.
Miss Penalty = average time to retrieve data on a current-level miss (includes the possibility of misses at successive levels of the memory hierarchy).
Hit Rate = % of requests that are found in the current level cache.
Miss Rate = 1 − Hit Rate.

Extreme Example: One Big Block
Cache Size = 4 bytes, Block Size = 4 bytes: only ONE entry (row) in the cache!
If an item is accessed, it's likely to be accessed again soon, but unlikely to be accessed again immediately, so the next access will likely be a miss again.
We continually load data into the cache but discard it (force it out) before we use it again.
Nightmare for the cache designer: the Ping-Pong Effect.
[Diagram: a single cache entry with a Valid bit, a Tag, and data bytes B0–B3.]
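A tiny model of the ping-pong effect (hypothetical, not from the slides): with a one-block cache, alternating between any two blocks misses on every single access.

```c
#include <stdio.h>

int main(void) {
    int cached_block = -1;                 /* -1 = nothing cached yet */
    int trace[] = { 0, 1, 0, 1, 0, 1 };    /* alternate between 2 blocks */
    int misses = 0;
    for (int i = 0; i < 6; i++) {
        if (trace[i] != cached_block) {    /* miss: evict, then reload */
            misses++;
            cached_block = trace[i];
        }
    }
    printf("misses: %d of 6 accesses\n", misses);  /* prints 6: all miss */
    return 0;
}
```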

Block Size Tradeoff Conclusions
[Plots: Miss Penalty grows with Block Size. Miss Rate first falls with Block Size (exploits spatial locality), then rises once there are too few blocks (compromises temporal locality). Average Access Time is therefore minimized at an intermediate Block Size, rising at the large end from the increased Miss Penalty and Miss Rate.]

Types of Cache Misses (1/2)
"Three Cs" Model of Misses
1st C: Compulsory Misses
Occur when a program is first started: the cache does not contain any of that program's data yet, so misses are bound to occur.
Can't be avoided easily, so we won't focus on these in this course.

Types of Cache Misses (2/2)
2nd C: Conflict Misses
A miss that occurs because two distinct memory addresses map to the same cache location.
Two blocks (which happen to map to the same location) can keep overwriting each other: a big problem in direct-mapped caches.
How do we lessen the effect of these?
Dealing with Conflict Misses
Solution 1: make the cache bigger. Fails at some point.
Solution 2: let multiple distinct blocks fit in the same cache index?
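To see why two addresses conflict, here is a small C sketch (the cache geometry is an assumption, not a course spec): any two addresses that are an exact multiple of the cache size apart land on the same index of a direct-mapped cache.

```c
#include <stdio.h>
#include <stdint.h>

#define OFFSET_BITS 5   /* assumed 32-byte blocks         */
#define INDEX_BITS  6   /* assumed 64 blocks (2 KB cache) */

static uint32_t index_of(uint32_t addr) {
    return (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
}

int main(void) {
    uint32_t a = 0x00001040;
    uint32_t b = a + (1u << (OFFSET_BITS + INDEX_BITS));  /* a + 2 KB */
    /* Same index, different tags: in a direct-mapped cache these two
       blocks evict each other on every alternation (ping-pong). */
    printf("index(a) = %u, index(b) = %u\n", index_of(a), index_of(b));
    return 0;
}
```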

Fully Associative Cache (1/3)
Memory address fields:
Tag: same as before.
Offset: same as before.
Index: nonexistent.
What does this mean? There are no "rows": any block can go anywhere in the cache, so we must compare against all tags in the entire cache to see if the data is there.

Fully Associative Cache (2/3)
[Diagram: a fully associative cache (e.g., 32 B blocks). Each entry holds a Valid bit, a 27-bit Cache Tag, and data bytes B0–B31. The incoming address's Cache Tag is compared against every stored tag in parallel (one '=' comparator per entry), and the Byte Offset selects the byte within the matching block.]

Fully Associative Cache (3/3)
Benefit of a fully associative cache: no conflict misses (since data can go anywhere).
Drawback: we need a hardware comparator for every single entry. If we have 64 KB of data in the cache with 4 B entries, we need 16K comparators: infeasible.

Third Type of Cache Miss
Capacity Misses
A miss that occurs because the cache has a limited size; a miss that would not occur if we increased the size of the cache.
A sketchy definition, so just get the general idea.
This is the primary type of miss for fully associative caches.

N-Way Set Associative Cache (1/3)
Memory address fields:
Tag: same as before.
Offset: same as before.
Index: points us to the correct "row" (called a set in this case).
So what's the difference? Each set contains multiple blocks; once we've found the correct set, we must compare with all tags in that set to find our data.

Associative Cache Example
[Diagram: a simple 2-way set associative cache. Memory addresses 0–F map to two cache sets (Cache Index 0 or 1), with two block locations per set.]

N-Way Set Associative Cache (2/3)
Basic idea:
The cache is direct-mapped with respect to sets; each set is fully associative.
Basically N direct-mapped caches working in parallel: each has its own valid bit and data.
Given a memory address:
1. Find the correct set using the Index value.
2. Compare the Tag with all tag values in that set. If a match occurs, hit! Otherwise, a miss.
3. Finally, use the Offset field as usual to find the desired data within the block.
(A sketch of this lookup in C follows.)
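The three steps above as a minimal C sketch; the geometry (2 ways, 64 sets, 32-byte blocks) and all names are assumptions for illustration, not the course's reference code:

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed geometry: address = | tag | index | offset |. */
#define OFFSET_BITS 5
#define INDEX_BITS  6
#define NUM_SETS    (1 << INDEX_BITS)
#define NUM_WAYS    2

typedef struct {
    bool     valid;
    uint32_t tag;
    uint8_t  data[1 << OFFSET_BITS];
} Line;

static Line cache[NUM_SETS][NUM_WAYS];

/* Returns a pointer to the requested byte on a hit, NULL on a miss. */
uint8_t *lookup(uint32_t addr) {
    uint32_t offset = addr & ((1u << OFFSET_BITS) - 1);
    uint32_t index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);

    /* Step 1: the index selects the set. Step 2: compare the tag
       against every way in that set. */
    for (int way = 0; way < NUM_WAYS; way++) {
        Line *line = &cache[index][way];
        if (line->valid && line->tag == tag)
            return &line->data[offset];  /* step 3: offset picks the byte */
    }
    return NULL;  /* miss */
}
```

Hardware does the per-way tag comparisons in parallel with N comparators; the loop here is just the software stand-in for that.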

N-Way Set Associative Cache (3/3)
What's so great about this?
Even a 2-way set associative cache avoids a lot of conflict misses.
The hardware cost isn't that bad: we only need N comparators.
In fact, for a cache with M blocks:
It's direct-mapped if it's 1-way set associative.
It's fully associative if it's M-way set associative.
So those two are just special cases of the more general set associative design.

4-Way Set Associative Cache Circuit
[Diagram: the index field selects one set; the tag field is compared against all four ways of that set in parallel.]

Block Replacement Policy
Direct-mapped cache: the index completely specifies which position a block can go in on a miss.
N-way set associative: the index specifies a set, but the block can occupy any position within the set on a miss.
Fully associative: the block can be written into any position.
Question: if we have the choice, where should we write an incoming block?
If any location has its valid bit off (empty), then usually write the new block into the first such one.
If all possible locations already have a valid block, we must pick a replacement policy: the rule by which we determine which block gets "cached out" on a miss.

Block Replacement Policy: LRU
LRU (Least Recently Used)
Idea: cache out the block which has been accessed (read or write) least recently.
Pro: temporal locality: recent past use implies likely future use; in fact, this is a very effective policy.
Con: with 2-way set associative, it's easy to keep track (one LRU bit); with 4-way or greater, it requires complicated hardware and much time to keep track of this.

Block Replacement Example
We have a 2-way set associative cache with a four-word total capacity and one-word blocks. We perform the following word accesses (ignore bytes for this problem): 0, 2, 0, 1, 4, 0, 2, 3, 5, 4.
How many hits and how many misses will there be with the LRU block replacement policy?

Block Replacement Example: LRU
Addresses 0, 2, 0, 1, 4, 0, ... Each address maps to set = address mod 2; each set has two locations, loc 0 and loc 1:
0: miss, bring into set 0 (loc 0).
2: miss, bring into set 0 (loc 1).
0: hit (set 0, loc 0); block 2 is now the LRU block in set 0.
1: miss, bring into set 1 (loc 0).
4: miss, bring into set 0 (loc 1, replacing 2, the LRU block).
0: hit (set 0, loc 0).
(See the simulator sketch below for the full trace.)
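A minimal LRU simulator in C that reproduces the trace (an illustrative sketch; it assumes set = address mod 2, which matches the accesses shown). Because each set is only 2-way, a single bit per set suffices to track the LRU way.

```c
#include <stdio.h>

#define SETS 2
#define WAYS 2

int main(void) {
    int block[SETS][WAYS];
    int valid[SETS][WAYS] = {{0}};
    int lru[SETS] = {0};    /* which way in each set is least recently used */
    int trace[] = { 0, 2, 0, 1, 4, 0, 2, 3, 5, 4 };
    int hits = 0, misses = 0;

    for (int i = 0; i < 10; i++) {
        int addr = trace[i], set = addr % SETS, way = -1;
        for (int w = 0; w < WAYS; w++)              /* search the set */
            if (valid[set][w] && block[set][w] == addr)
                way = w;
        int hit = (way >= 0);
        if (hit) {
            hits++;
        } else {                                    /* miss: fill the LRU way */
            misses++;
            way = lru[set];
            valid[set][way] = 1;
            block[set][way] = addr;
        }
        lru[set] = 1 - way;   /* the other way is now the LRU one */
        printf("%d: %s (set %d, loc %d)\n", addr, hit ? "hit" : "miss", set, way);
    }
    printf("hits = %d, misses = %d\n", hits, misses);
    return 0;
}
```

For the full access sequence this reports 2 hits and 8 misses, answering the question posed on the previous slide.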

Big Idea
How do we choose between associativity, block size, replacement and write policy?
Design against a performance model. Minimize: Average Memory Access Time = Hit Time + Miss Rate × Miss Penalty, influenced by technology and program behavior.
Create the illusion of a memory that is large, cheap, and fast, on average.
How can we improve the miss penalty?

Improving Miss Penalty
When caches first became popular, the miss penalty was about 10 processor clock cycles.
Today, a 2400 MHz processor (0.4 ns per clock cycle) and 80 ns to go to DRAM means about 200 processor clock cycles!
[Diagram: Proc → $ (L1) → $2 (L2) → DRAM]
Solution: another cache between memory and the processor's cache: a Second Level (L2) Cache.

Analyzing a Multi-level Cache Hierarchy
[Diagram: Proc → $ (L1 hit time, L1 miss rate, L1 miss penalty) → $2 (L2 hit time, L2 miss rate, L2 miss penalty) → DRAM]
Avg Mem Access Time = L1 Hit Time + L1 Miss Rate × L1 Miss Penalty
L1 Miss Penalty = L2 Hit Time + L2 Miss Rate × L2 Miss Penalty
So: Avg Mem Access Time = L1 Hit Time + L1 Miss Rate × (L2 Hit Time + L2 Miss Rate × L2 Miss Penalty)
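The formulas translate directly into a few lines of C (a sketch; the numbers plugged in below come from the bonus-slide examples later in the deck):

```c
#include <stdio.h>

/* AMAT for a two-level hierarchy; times are in processor cycles,
   rates are fractions. */
double amat(double l1_hit, double l1_miss_rate,
            double l2_hit, double l2_miss_rate, double l2_miss_penalty) {
    double l1_miss_penalty = l2_hit + l2_miss_rate * l2_miss_penalty;
    return l1_hit + l1_miss_rate * l1_miss_penalty;
}

int main(void) {
    /* L1 hit = 1 cycle, 5% L1 miss rate, L2 hit = 5 cycles,
       15% local L2 miss rate, 200-cycle DRAM penalty. */
    printf("with L2:    %.2f cycles\n", amat(1, 0.05, 5, 0.15, 200)); /* 2.75 */
    printf("without L2: %.2f cycles\n", 1 + 0.05 * 200);              /* 11.00 */
    return 0;
}
```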

And in Conclusion…
We've discussed memory caching in detail. Caching in general shows up over and over in computer systems: filesystem caches, web page caches, game databases/tablebases, software memoization, and others.
Big idea: if something is expensive but we want to do it repeatedly, do it once and cache the result.
Cache design choices:
write-through vs. write-back
size of cache: speed vs. capacity
direct-mapped vs. associative
for N-way set associative: choice of N
block replacement policy
2nd-level cache? 3rd-level cache?
Use a performance model to pick between choices, depending on programs, technology, budget, ...

BONUS SLIDES

Example
Assume: Hit Time = 1 cycle, Miss Rate = 5%, Miss Penalty = 20 cycles.
Calculate AMAT:
Avg mem access time = 1 + 0.05 × 20 = 1 + 1 = 2 cycles

Ways to reduce miss rate
Larger cache: limited by cost and technology; the first-level cache's hit time must fit within the cycle time (bigger caches are slower).
More places in the cache to put each block of memory: associativity.
Fully associative: any block can go in any line.
N-way set associative: N places for each block (direct-mapped: N = 1).

Typical Scale
L1: size: tens of KB; hit time: complete in one clock cycle; miss rates: 1-5%.
L2: size: hundreds of KB; hit time: a few clock cycles; miss rates: 10-20%.
The L2 miss rate is the fraction of L1 misses that also miss in L2. Why so high?

Example: with L2 cache
Assume: L1 Hit Time = 1 cycle, L1 Miss Rate = 5%, L2 Hit Time = 5 cycles, L2 Miss Rate = 15% (% of L1 misses that miss in L2), L2 Miss Penalty = 200 cycles.
L1 Miss Penalty = 5 + 0.15 × 200 = 35 cycles
Avg mem access time = 1 + 0.05 × 35 = 2.75 cycles

Example: without L2 cache
Assume: L1 Hit Time = 1 cycle, L1 Miss Rate = 5%, L1 Miss Penalty = 200 cycles.
Avg mem access time = 1 + 0.05 × 200 = 11 cycles
4x faster with the L2 cache! (2.75 vs. 11)

An Actual CPU – Early PowerPC
Cache: 32 KByte instruction and 32 KByte data L1 caches. External L2 cache interface with integrated controller and cache tags, supporting up to 1 MByte of external L2 cache. Dual Memory Management Units (MMU) with Translation Lookaside Buffers (TLB).
Pipelining: superscalar (3 instructions/cycle); 6 execution units (2 integer and 1 double-precision IEEE floating point).

An Actual CPU – Pentium M
[Diagram: die photo with the 32 KB I$ and 32 KB D$ labeled.]