CS 61C: Great Ideas in Computer Architecture
The Memory Hierarchy, Fully Associative Caches
Instructor: Justin Hsia
7/10/2013, Summer 2013, Lecture #10

Review of Last Lecture
Floating point (single and double precision) approximates real numbers
– Exponent field uses biased notation
– Special cases: 0, ±∞, NaN, denorm
– MIPS has special instructions and registers
Performance measured in latency or bandwidth
Latency measurement:
– Affected by different components of the computer

Great Idea #3: Principle of Locality / Memory Hierarchy

Agenda
Memory Hierarchy Overview
Administrivia
Fully Associative Caches
Cache Reads and Writes

Storage in a Computer
Processor
– Holds data in register files (~100 B)
– Registers accessed on sub-nanosecond timescale
Memory (“main memory”)
– More capacity than registers (~ GiB)
– Access time: ~ ns
– Hundreds of clock cycles per memory access?!

“Moore’s Law”: Processor-Memory Performance Gap
[Chart: processor performance improves ~55%/year (2X/1.5 yr); DRAM improves ~7%/year (2X/10 yrs); the processor-memory gap grows ~50%/year]
– First Intel CPU with cache on chip
– 1998: Pentium III has two cache levels on chip

Library Analogy
Writing a report on a specific topic
– e.g. history of the computer (your internet is out)
While at the library, take books from the shelves and keep them on your desk
If you need more, go get them and bring them back to the desk
– Don’t return earlier books, since you might still need them
– Limited space on the desk; which books do we keep?
You hope these ~10 books on the desk are enough to write the report
– Only a tiny percentage of the books in UC Berkeley’s libraries!

Principle of Locality (1/3)
Principle of Locality: Programs access only a small portion of the full address space at any instant of time
– Recall: the address space holds both code and data
– Loops and sequential instruction execution mean generally localized code access
– The Stack and Heap try to keep your data together
– Arrays and structs naturally group data you would access together

Principle of Locality (2/3)
Temporal Locality (locality in time)
– Go back to the same book on the desk multiple times
– If a memory location is referenced, then it will tend to be referenced again soon
Spatial Locality (locality in space)
– When you go to the shelves, grab many books on computers, since related books are stored together
– If a memory location is referenced, the locations with nearby addresses will tend to be referenced soon
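Spatial locality can be made concrete with a small sketch (my own illustration, not from the lecture; the 16-byte block size is an assumption): a run of adjacent byte addresses falls into very few blocks, while scattered addresses touch a new block each time.

```python
# Illustrative sketch: count how many distinct blocks a byte-address
# access pattern touches. Sequential accesses (spatial locality) touch
# far fewer blocks than scattered ones.
BLOCK_SIZE = 16  # bytes per block (assumed for illustration)

def blocks_touched(addresses, block_size=BLOCK_SIZE):
    """Return the number of distinct memory blocks the addresses fall in."""
    return len({addr // block_size for addr in addresses})

sequential = list(range(0, 64))           # 64 adjacent byte addresses
scattered  = list(range(0, 64 * 16, 16))  # 64 addresses, one per block

print(blocks_touched(sequential))  # 4 blocks: high spatial locality
print(blocks_touched(scattered))   # 64 blocks: no spatial locality
```

A cache that fetches whole blocks turns the sequential pattern into mostly hits, which is exactly why block size matters later in the lecture.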

Principle of Locality (3/3)
We exploit the principle of locality in hardware via a memory hierarchy where:
– Levels closer to the processor are faster (and more expensive per bit, so smaller)
– Levels farther from the processor are larger (and less expensive per bit, so slower)
Goal: Create the illusion of memory being almost as fast as the fastest memory and almost as large as the biggest memory of the hierarchy

Memory Hierarchy Schematic
[Diagram: Processor at the top, then Level 1, Level 2, Level 3, …, Level n. Higher levels (closer to the processor) are smaller, faster, more expensive; lower levels are bigger, slower, cheaper.]

Cache Concept
Introduce an intermediate hierarchy level: a memory cache, which holds a copy of a subset of main memory
– As a pun, we often use $ (“cash”) to abbreviate cache (e.g. D$ = Data Cache, L1$ = Level 1 Cache)
Modern processors have separate caches for instructions and data, as well as several levels of caches implemented in different sizes
Implemented with the same IC processing technology as the CPU and integrated on-chip: faster but more expensive than main memory

Memory Hierarchy Technologies
Caches use static RAM (SRAM)
+ Fast (typical access times of 0.5 to 2.5 ns)
– Low density (6-transistor cells), higher power, expensive ($2000 to $4000 per GB in 2011)
Static: content will last as long as power is on
Main memory uses dynamic RAM (DRAM)
+ High density (1-transistor cells), lower power, cheaper ($20 to $40 per GB in 2011)
– Slower (typical access times of 50 to 70 ns)
Dynamic: needs to be “refreshed” regularly (~every 8 ms)

Memory Transfer in the Hierarchy
Processor ↔ L1$ ↔ L2$ ↔ Main Memory ↔ Secondary Memory
Inclusive: data in L1$ ⊂ data in L2$ ⊂ data in MM ⊂ data in SM
Block: unit of transfer between memory and cache

Managing the Hierarchy
registers ↔ memory
– By the compiler (or assembly-level programmer)
cache ↔ main memory (“We are here”)
– By the cache controller hardware
main memory ↔ disks (secondary storage)
– By the OS (virtual memory, which is a later topic)
– Virtual-to-physical address mapping assisted by the hardware (TLB)
– By the programmer (files)

Typical Memory Hierarchy
On-chip components: RegFile, Instr Cache, Data Cache, Second Level Cache (SRAM), Control, Datapath; off-chip: Main Memory (DRAM), Secondary Memory (Disk or Flash)
Speed (cycles): ½’s → 1’s → 10’s → 100’s → 1,000,000’s
Size (bytes): 100’s → 10K’s → M’s → G’s → T’s
Cost/bit: highest → lowest

Review So Far
Goal: present the programmer with ≈ as much memory as the largest memory, at ≈ the speed of the fastest memory
Approach: Memory Hierarchy
– Successively higher levels contain the “most used” data from lower levels
– Exploits temporal and spatial locality
– We will start by studying caches

Agenda
Memory Hierarchy Overview
Administrivia
Fully Associative Caches
Cache Reads and Writes

Administrivia
Midterm
– Friday 7/19, 9am-12pm, location TBD
– How to study: studying in groups can help; take old exams for practice; look at lectures, section notes, project, hw, labs, etc.
– Will cover material through caches
Project 1 due Sunday
– Shaun has extra OHs Saturday 4-7pm

Agenda
Memory Hierarchy Overview
Administrivia
Fully Associative Caches
Cache Reads and Writes

Cache Management
What is the overall organization of blocks we impose on our cache?
– Where do we put a block of data from memory?
– How do we know if a block is already in the cache?
– How do we quickly find a block when we need it?
– When do we replace something in the cache?

General Notes on Caches (1/4)
Recall: memory is byte-addressed
We haven’t specified the size of our “blocks,” but they will be a multiple of the word size (32 bits)
– How do we access individual words or bytes within a block? → Offset
The cache is smaller than memory
– Can’t fit all blocks at once, so multiple blocks in memory must map to the same slot in the cache → Index
– Need some way of identifying which memory block is currently in each cache slot → Tag

General Notes on Caches (2/4)
Recall: the cache holds a subset of memory in a place that’s faster to access
– Returns data to you when you request it
– If the cache doesn’t have it, then it fetches it for you
The cache must be able to check/identify its current contents
What does the cache initially hold?
– Garbage! The cache is considered “cold”
– Keep track with a Valid bit

General Notes on Caches (3/4)
Effect of block size (K bytes):
– Spatial locality dictates that our blocks consist of adjacent bytes, which differ in address by 1
– Offset field: the lowest bits of the memory address can be used to index to specific bytes within a block
– Block size needs to be a power of two (in bytes)
– Byte offset = (address) modulo (# of bytes in a block)
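The modulo rule above can be checked with a few lines (a sketch of my own; the block size K = 8 is an assumed example value). Because K is a power of two, the modulo and division reduce to masking and shifting the low bits:

```python
# Sketch of the "(address) modulo (# of bytes in a block)" rule.
K = 8  # assumed block size in bytes for this illustration

addr = 13
offset = addr % K        # byte position within the block -> 5
block_addr = addr // K   # which block the byte lives in  -> 1

# With K a power of two, these are just bit operations:
assert offset == (addr & (K - 1))
assert block_addr == (addr >> 3)  # 3 = log2(K)
print(offset, block_addr)
```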

General Notes on Caches (4/4)
Effect of cache size (C bytes):
– “Cache size” refers to the total stored data
– Determines the number of blocks the cache can hold (C/K blocks)
– Tag field: the leftover upper bits of the memory address determine which portion of memory the block came from (identifier)

Fully Associative Caches
Each memory block can map anywhere in the cache (fully associative)
– Most efficient use of space
– Least efficient to check
To check a fully associative cache:
1) Look at ALL cache slots in parallel
2) If the Valid bit is 0, then ignore
3) If the Valid bit is 1 and the Tag matches, then return that data

Fully Associative Cache Address Breakdown
Memory address fields: Tag (T bits, the upper bits) | Offset (O bits, the lowest bits)
Meaning of the field sizes:
– O bits ↔ 2^O bytes/block = 2^(O-2) words/block
– T bits = A − O, where A = # of address bits (A = 32 here)
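The field-size arithmetic on this slide is simple enough to encode directly (my own sketch; function name is illustrative): O = log2(block size in bytes), and since a fully associative cache has no index field, the tag takes all remaining bits.

```python
from math import log2

# Field widths for a fully associative cache: O offset bits, T tag bits.
def fa_fields(addr_bits, block_bytes):
    O = int(log2(block_bytes))  # offset bits: log2 of block size
    T = addr_bits - O           # tag bits: everything left over
    return T, O

print(fa_fields(32, 4))  # 32-bit addresses, 1-word blocks -> (30, 2)
print(fa_fields(6, 4))   # the 64 B address space used later -> (4, 2)
```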

Caching Terminology (1/2)
When reading memory, 3 things can happen:
– Cache hit: the cache holds a valid copy of the block, so return the desired data
– Cache miss: the cache does not have the desired block, so fetch it from memory and put it in an empty (invalid) slot
– Cache miss with block replacement: the cache does not have the desired block and is full, so discard one block and replace it with the desired data

Block Replacement Policies
Which block do you replace?
– Use a cache block replacement policy
– There are many (most are intuitively named), but we will just cover a few in this class
Of note:
– Random Replacement
– Least Recently Used (LRU): requires some “management bits”
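One way to picture LRU's "management bits" is with a small software sketch (class and method names here are my own, not from the lecture): keep the occupied slots ordered from least- to most-recently used, refresh a slot's position on every use, and evict from the least-recently-used end when full.

```python
from collections import OrderedDict

# Minimal LRU bookkeeping sketch: an OrderedDict keeps tags ordered
# from least- to most-recently used, standing in for the hardware's
# LRU management bits.
class LRUTracker:
    def __init__(self, num_slots):
        self.num_slots = num_slots
        self.order = OrderedDict()  # tag -> None, in LRU -> MRU order

    def touch(self, tag):
        """Record a use of `tag`; return the evicted LRU tag, if any."""
        if tag in self.order:
            self.order.move_to_end(tag)  # now most recently used
            return None
        victim = None
        if len(self.order) == self.num_slots:
            victim, _ = self.order.popitem(last=False)  # evict LRU
        self.order[tag] = None
        return victim

lru = LRUTracker(2)
lru.touch("A"); lru.touch("B"); lru.touch("A")
print(lru.touch("C"))  # "B" is evicted: it is the least recently used
```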

Caching Terminology (2/2)
How effective is your cache?
– Want to maximize cache hits and minimize cache misses
– Hit rate (HR): percentage of memory accesses in a program or set of instructions that result in a cache hit
– Miss rate (MR): like hit rate, but for cache misses; MR = 1 − HR
How fast is your cache?
– Hit time (HT): time to access the cache (including Tag comparison)
– Miss penalty (MP): time to replace a block in the cache from a lower level in the memory hierarchy
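As a tiny numeric check of the HR/MR definitions (my own arithmetic, using the 8-access, 6-miss counts from the first worked example later in the lecture):

```python
# HR and MR from access/miss counts; MR = 1 - HR must always hold.
accesses, misses = 8, 6
hit_rate = (accesses - misses) / accesses
miss_rate = misses / accesses
assert abs(hit_rate + miss_rate - 1.0) < 1e-12
print(hit_rate)  # 0.25, i.e. the 25% hit rate quoted later
```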

Fully Associative Cache Implementation
What’s actually in the cache?
– Each cache slot contains the actual data block (8 × K = 8 × 2^O bits)
– Tag field of the address as identifier (T bits)
– Valid bit (1 bit)
– Any necessary replacement management bits (“LRU bits”: a variable # of bits)
Total bits in cache = # slots × (8×K + T + 1) + ? = (C/K) × (8×2^O + T + 1) + ? bits (where ? is the replacement management bits)
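The bit-count formula above can be encoded and checked against the numbers in the upcoming example (the function is my own sketch; parameter names mirror the slide's symbols):

```python
# Total bits in a fully associative cache: each of the C/K slots stores
# the data block (8*K bits), the tag (T bits), and a valid bit, plus
# replacement-management bits shared by the whole cache.
def fa_cache_bits(C, K, T, mgmt_bits=0):
    slots = C // K
    return slots * (8 * K + T + 1) + mgmt_bits

# Upcoming example: 16 B cache, 4 B blocks, 4 tag bits, 2 LRU bits.
print(fa_cache_bits(C=16, K=4, T=4, mgmt_bits=2))  # 150 bits
```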

FA Cache Examples (1/4)
Cache parameters:
– Fully associative, address space of 64 B, block size of 1 word, cache size of 4 words, LRU (2 bits)
Address breakdown:
– 1 word = 4 bytes, so O = log2(4) = 2
– A = log2(64) = 6 bits, so T = 6 − 2 = 4
– Memory addresses: 4-bit block address (Tag), then 2-bit offset (XX)
Bits in cache = (4/1) × (8×4 + 4 + 1) + 2 = 150 bits
Cache layout: 4 slots, each with a Valid bit (initially 0), a 4-bit Tag (XXXX), and a 4-byte block (0x??); plus 2 LRU bits for the whole cache
– 37 bits per slot, 150 bits to implement with LRU

FA Cache Examples (2/4)
1) Consider the sequence of memory address accesses: 0, 2, 4, 8, 20, 16, 0, 2
Starting with a cold cache:
– 0: miss; load M[0]–M[3] (Tag 0000)
– 2: hit (same block)
– 4: miss; load M[4]–M[7] (Tag 0001)
– 8: miss; load M[8]–M[11] (Tag 0010)
– 20: miss; load M[20]–M[23] (Tag 0101); cache now full
– 16: miss; replace LRU block M[0]–M[3] with M[16]–M[19] (Tag 0100)
– 0: miss; replace LRU block M[4]–M[7] with M[0]–M[3]
– 2: hit
8 requests, 6 misses – HR of 25%

FA Cache Examples (3/4)
2) Same requests, but reordered: 0, 2, 2, 0, 16, 20, 8, 4
Starting with a cold cache:
– 0: miss; load M[0]–M[3]
– 2: hit
– 2: hit
– 0: hit
– 16: miss; load M[16]–M[19]
– 20: miss; load M[20]–M[23]
– 8: miss; load M[8]–M[11]; cache now full
– 4: miss; replace LRU block M[0]–M[3] with M[4]–M[7]
8 requests, 5 misses – ordering matters!

FA Cache Examples (4/4)
3) Original sequence (0, 2, 4, 8, 20, 16, 0, 2), but double the block size (2 words); same cache size, so only 2 slots
Starting with a cold cache:
– 0: miss; load M[0]–M[7] (Tag 100)
– 2: hit
– 4: hit (same block)
– 8: miss; load M[8]–M[15] (Tag 101); cache now full
– 20: miss; replace LRU block M[0]–M[7] with M[16]–M[23] (Tag 110)
– 16: hit
– 0: miss; replace LRU block M[8]–M[15] with M[0]–M[7]
– 2: hit
8 requests, 4 misses – cache parameters matter!
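The three worked examples can be checked with a short fully associative LRU simulator (my own code, not from the lecture; the access sequence 0, 2, 4, 8, 20, 16, 0, 2 and its reordering are reconstructed from the slides' hit/miss annotations):

```python
# Read-only fully associative cache simulator with LRU replacement.
def fa_misses(byte_addrs, num_slots, block_bytes):
    """Return the miss count for a cold fully associative cache."""
    slots = []  # block addresses, ordered least- to most-recently used
    misses = 0
    for addr in byte_addrs:
        block = addr // block_bytes
        if block in slots:
            slots.remove(block)  # hit: refresh its LRU position
        else:
            misses += 1          # miss (with replacement if full)
            if len(slots) == num_slots:
                slots.pop(0)     # evict the least recently used block
        slots.append(block)      # most recently used goes last
    return misses

seq = [0, 2, 4, 8, 20, 16, 0, 2]
print(fa_misses(seq, num_slots=4, block_bytes=4))           # 6 misses
print(fa_misses([0, 2, 2, 0, 16, 20, 8, 4], 4, 4))          # 5 misses
print(fa_misses(seq, num_slots=2, block_bytes=8))           # 4 misses
```

All three counts match the slides: 6 misses for the original order, 5 after reordering, 4 with doubled block size.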

Question: Starting with the same cold cache as the first 3 examples, which of the sequences below will result in the final state of the cache shown here?
Final cache state (Tag: block):
– 10000: M[0]–M[3]
– 10011: M[12]–M[15]
– 10001: M[4]–M[7]
– 10100: M[16]–M[19]
(A)  (B)  (C)  (D)

Get To Know Your Staff
Category: Music

Agenda
Memory Hierarchy Overview
Administrivia
Fully Associative Caches
Cache Reads and Writes

Memory Accesses
The picture so far: the cache is separate from memory
– Possible for them to hold different data?
[Diagram: CPU sends an address to the Cache; a hit returns data, a miss goes to Main Memory]

Cache Reads and Writes
Want to handle reads and writes quickly while maintaining consistency between cache and memory (i.e. both know about all updates)
– Policies for cache hits and misses are independent
Here we assume the use of separate instruction and data caches (I$ and D$)
– Read from both
– Write only to D$ (assume no self-modifying code)

Handling Cache Hits
Read hits (I$ and D$)
– Fastest possible scenario, so want more of these
Write hits (D$)
1) Write-Through Policy: always write data to the cache and to memory (through the cache)
– Forces cache and memory to always be consistent
– Slow! (every memory access is long)
– Include a Write Buffer that updates memory in parallel with the processor (assume it is present in all schemes when writing to memory)

Handling Cache Hits
Read hits (I$ and D$)
– Fastest possible scenario, so want more of these
Write hits (D$)
2) Write-Back Policy: write data only to the cache, then update memory when the block is removed
– Allows cache and memory to be inconsistent
– Multiple writes collected in the cache; a single write to memory per block
– Dirty bit: extra bit per cache row that is set if the block was written to (is “dirty”) and needs to be written back
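The traffic difference between the two write-hit policies can be sketched with a simple counter (an illustration of my own, not lecture code): write-through sends every write hit to memory, while write-back collects them and pays one memory write when the dirty block is eventually evicted.

```python
# Memory writes caused by N write *hits* to one cached block,
# under each write-hit policy.
def memory_writes(policy, writes_to_block):
    if policy == "write-through":
        return writes_to_block  # every hit also writes memory
    elif policy == "write-back":
        # hits only set the dirty bit; one memory write at eviction
        return 1 if writes_to_block > 0 else 0
    raise ValueError(policy)

print(memory_writes("write-through", 10))  # 10 memory writes
print(memory_writes("write-back", 10))     # 1 write (at eviction)
```

This is why write-back is usually faster for write-heavy code, at the cost of letting cache and memory disagree until the block is written back.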

Handling Cache Misses
Miss penalty grows as block size does
Read misses (I$ and D$)
– Stall execution, fetch the block from memory, put it in the cache, send the requested data to the processor, resume
Write misses (D$)
1) Write allocate: fetch the block from memory, put it in the cache, execute a write hit
– Works with either write-through or write-back
– Ensures the cache is up-to-date after a write miss

Handling Cache Misses
Miss penalty grows as block size does
Read misses (I$ and D$)
– Stall execution, fetch the block from memory, put it in the cache, send the requested data to the processor, resume
Write misses (D$)
2) No-write allocate: skip the cache altogether and write directly to memory
– The cache is never up-to-date after a write miss
– Ensures memory is always up-to-date

Updated Cache Picture
Fully associative, write-through
– Same as previously shown
Fully associative, write-back
– Adds a Dirty bit (D) per slot: each slot now holds Valid, Dirty, Tag (XXXX), and the data block (0x??); 2 LRU bits for the cache
Write allocate / no-write allocate
– Affects behavior, not design

Summary (1/2)
The memory hierarchy exploits the principle of locality to deliver lots of memory at fast speeds
Fully Associative Cache: every block in memory maps to any cache slot
– Offset to determine which byte within a block
– Tag to identify if it’s the block you want
Replacement policies: random and LRU
Cache parameters: block size (K), cache size (C)

Summary (2/2)
Cache read and write policies:
– Write-back and write-through for hits
– Write allocate and no-write allocate for misses
– The cache needs a dirty bit if write-back is used