CS 61C: Great Ideas in Computer Architecture (Machine Structures) Caches Part 1 Instructors: John Wawrzynek & Vladimir Stojanovic

New-School Machine Structures (It's a bit more complicated!) Harness parallelism and achieve high performance, from smart phone to warehouse-scale computer:
– Parallel Requests, assigned to a computer, e.g., search "Katz"
– Parallel Threads, assigned to a core, e.g., lookup, ads
– Parallel Instructions, >1 instruction at one time, e.g., 5 pipelined instructions
– Parallel Data, >1 data item at one time, e.g., add of 4 pairs of words (A0+B0 ... A3+B3)
– Hardware descriptions: all gates functioning at one time (how do we know?)
– Programming languages
[Diagram: the software/hardware stack from warehouse-scale computer down through computer, core (instruction unit(s), functional unit(s)), memory (cache), input/output, to logic gates; today's topic is the Memory (Cache) level.]

Components of a Computer [diagram]: the Processor (Control, Datapath, PC) connects to Memory (holding Program and Data) over the Processor-Memory Interface (Enable?, Read/Write, Address, Write Data, Read Data); Input and Output devices attach via I/O-Memory Interfaces.

Problem: Large memories are slow. Library Analogy: finding a book in a large library takes time:
– It takes time to search a large card catalog (mapping title/author to index number)
– Plus the round-trip time to walk to the stacks and retrieve the desired book
Larger libraries make both delays worse. Electronic memories have the same issue, plus the technologies we use to store an individual bit get slower as we increase density (SRAM versus DRAM versus magnetic disk). However, what we want is a large yet fast memory!

Processor-DRAM Gap (latency): in 1980 a microprocessor executed ~one instruction in the same time as a DRAM access; by 2015 a microprocessor executes ~1000 instructions in the same time as one DRAM access. Slow DRAM access could have a disastrous impact on CPU performance!

What to do: Library Analogy. You want to write a report using library books, e.g., the works of J.D. Salinger. Go to Doe Library, look up the relevant books, fetch them from the stacks, and place them on a desk in the library. If you need more, check them out and keep them on the desk, but don't return the earlier books, since you might need them again. You hope this collection of ~10 books on the desk is enough to write the report, despite those 10 being only a tiny fraction of the books in the UC Berkeley libraries.

Big Idea: Memory Hierarchy [diagram: levels of memory from Level 1 (inner, closest to the processor) out through Levels 2, 3, ... to Level n (outer); the size of memory at each level grows with increasing distance from the processor while speed decreases]. As we move to outer levels the latency goes up and the price per bit goes down. Why?

Real Memory Reference Patterns [figure: memory address vs. time, one dot per access]. Donald J. Hatfield, Jeanette Gerald: Program Restructuring for Virtual Memory. IBM Systems Journal 10(3): 168-192 (1971).

Big Idea: Locality. Temporal locality (locality in time): you go back to the same book on the desk multiple times; if a memory location is referenced, then it will tend to be referenced again soon. Spatial locality (locality in space): when you go to the book shelf, you pick up multiple books on J.D. Salinger, since the library stores related books together; if a memory location is referenced, locations with nearby addresses will tend to be referenced soon.

Memory Reference Patterns [same figure, annotated: horizontal runs of dots mark temporal locality, tight vertical bands mark spatial locality]. Donald J. Hatfield, Jeanette Gerald: Program Restructuring for Virtual Memory. IBM Systems Journal 10(3): 168-192 (1971).

Principle of Locality: programs access a small portion of the address space at any instant of time (spatial locality) and repeatedly access that portion (temporal locality). What program structures lead to temporal and spatial locality in instruction accesses? In data accesses? A small C example follows.
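As a minimal sketch (names are our own, not from the slides), an ordinary array-summing loop exhibits every kind of locality the slide asks about:

#include <stdint.h>

/* Summing an array: where each kind of locality shows up. */
int32_t sum(const int32_t *a, int n) {
    int32_t total = 0;            /* "total" is re-referenced every
                                     iteration: temporal locality in data */
    for (int i = 0; i < n; i++) { /* the same loop-body instructions are
                                     fetched over and over: temporal
                                     locality in instruction accesses     */
        total += a[i];            /* a[0], a[1], a[2], ... sit at adjacent
                                     addresses: spatial locality in data  */
    }
    return total;
}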

Memory Reference Patterns [figure: address vs. time across n loop iterations, showing instruction fetches, stack accesses, and data accesses; annotated with subroutine call and return, argument access, vector access, and scalar accesses].

Cache Philosophy: a programmer-invisible hardware mechanism that gives the illusion of the speed of the fastest memory with the size of the largest memory. It works fine even if the programmer has no idea what a cache is. However, performance-oriented programmers today sometimes "reverse engineer" the cache design to lay out data structures that match the cache; we'll do that in Project 3.

Memory Access without Cache. Load word instruction: lw $t0,0($t1); $t1 contains 1022 (decimal), Memory[1022] = 99.
1. Processor issues address 1022 to Memory
2. Memory reads the word at address 1022 (99)
3. Memory sends 99 to the Processor
4. Processor loads 99 into register $t0

Adding Cache to Computer [diagram: same organization as before, but a Cache now sits inside the processor between the Datapath and the Processor-Memory Interface; Memory still holds Program and Data, with Input/Output attached via I/O-Memory Interfaces].

Memory Access with Cache. Load word instruction: lw $t0,0($t1); $t1 contains 1022 (decimal), Memory[1022] = 99. With cache:
1. Processor issues address 1022 to the Cache
2. Cache checks whether it has a copy of the data at address 1022
   2a. If it finds a match (Hit): the cache reads 99 and sends it to the processor
   2b. No match (Miss): the cache sends address 1022 to Memory
       I. Memory reads 99 at address 1022
       II. Memory sends 99 to the Cache
       III. Cache replaces a word with the new 99
       IV. Cache sends 99 to the processor
3. Processor loads 99 into register $t0
A toy model of this flow appears below.
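To make the hit/miss flow concrete, here is a toy software model of a one-word cache (addresses are simple array indices here; all names are ours, not hardware). After memory[1022] = 99, the first load_word(1022) misses and later calls hit:

#include <stdint.h>

static uint32_t memory[1024];       /* toy "DRAM"; e.g. memory[1022] = 99 */
static uint32_t cached_addr = ~0u;  /* address of the one cached word     */
static uint32_t cached_data;        /* its cached copy                    */

uint32_t load_word(uint32_t addr) {
    if (addr == cached_addr)        /* 2a. Hit: cache already has a copy  */
        return cached_data;
    cached_data = memory[addr];     /* 2b. Miss: memory reads the word    */
    cached_addr = addr;             /*     cache replaces its copy        */
    return cached_data;             /*     and sends the data onward      */
}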

Administrivia
– HW2 due 23:59:59
– Project 3-1 due date now 10/21; already released, start early
– Project 3-2 due date now 10/28 (release 10/18)
– Midterm 1: grades posted today; stats on the next slide

[Midterm 1 score distributions for Version A and Version B.]

Cache "Tags". We need a way to tell whether we have a copy of a memory location, so we can decide on a hit or a miss. On a cache miss, put the memory address of the block in the "tag address" of the cache block: 1022 is placed in the tag, next to the data from memory (99). [Table: Tag | Data, with earlier entries filled by previous instructions.]

Anatomy of a 16-Byte Cache with 4-Byte Blocks [diagram: Processor exchanges 32-bit addresses and 32-bit data with the Cache, which does the same with Memory]. Operations:
1. Cache hit
2. Cache miss
3. Refill cache from memory
The cache needs address tags to decide whether a processor address is a cache hit or a cache miss; it compares all 4 tags.

Cache Replacement. Suppose the processor now requests location 511, which contains 11. It doesn't match any cache tag, so we must "evict" one resident block to make room. Which block should be evicted? Replace the "victim" with the new memory block at address 511. [Table: Tag | Data, before and after the replacement.]

Blocks Must be Aligned in Memory. Word blocks are aligned, so the binary address of every word in the cache always ends in 00 (binary). How can we take advantage of this to save hardware and energy? We don't need to compare the last 2 bits of the 32-bit byte address (the comparator can be narrower), and we don't need to store the last 2 bits of the 32-bit byte address in the cache tag (the tag can be narrower).

Anatomy of a 32-Byte Cache with 8-Byte Blocks [diagram: Processor, Cache, and Memory connected by 32-bit address and data paths]. Blocks must be aligned in pairs, otherwise we could get the same word twice in the cache. Therefore:
– Tags only hold even-numbered word addresses
– The last 3 bits of the address are always 000 (binary)
– Tags and comparators can be narrower
We can get a hit for either word in the block.

Hardware Cost of Cache. We need to compare every tag to the processor address, and comparators are expensive. Optimization: use 2 "sets" of data with a total of only 2 comparators. One address bit selects which set, and we compare only the tags from the selected set. This generalizes to more sets. [Diagram: Processor, Cache with Set 0 and Set 1 each holding Tag/Data pairs, Memory.]

Processor Address Fields Used by the Cache Controller (32-bit processor address: | Tag | Set Index | Block Offset |):
– Block Offset: byte address within the block
– Set Index: selects which set
– Tag: remaining portion of the processor address
Size of Index = log2(number of sets)
Size of Tag = address size - size of Index - log2(number of bytes/block)
A small C sketch of this split follows.
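As a sketch of the field extraction (the constants below are example values of our own choosing; any power-of-two S and B work the same way):

#include <stdint.h>

#define B 16u   /* bytes per block (example value, a power of two) */
#define S 64u   /* number of sets  (example value, a power of two) */

uint32_t block_offset(uint32_t addr) { return addr & (B - 1u); }        /* low log2(B) bits  */
uint32_t set_index(uint32_t addr)    { return (addr / B) & (S - 1u); }  /* next log2(S) bits */
uint32_t tag(uint32_t addr)          { return addr / (B * S); }         /* remaining bits    */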

What is the limit to the number of sets? For a given total number of blocks, we can save more comparators by having more than 2 sets. The limit: as many sets as cache blocks, i.e., only one block per set, which needs only one comparator! This is called a "Direct-Mapped" design. Address fields: | Tag | Index | Block Offset |.

Direct-Mapped Cache Example: Mapping a 6-bit Memory Address. In this example the block size is 4 bytes (1 word); memory blocks and cache blocks are always the same size, the unit of transfer between memory and cache. The number of memory blocks is much larger than the number of cache blocks:
– 16 memory blocks = 16 words = 64 bytes => 6 bits to address all bytes
– 4 cache blocks, 4 bytes (1 word) per block
– so 4 memory blocks map to each cache block
Address fields: the byte within the block is the byte offset (bits 1-0); which cache block a memory block maps to, aka the index, is the middle two bits (bits 3-2); which memory block is in a given cache block, aka the tag, is the top two bits (bits 5-4). A worked example follows.
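For instance, take the 6-bit address 101101 (45 decimal): the byte offset is 01, the index is 11 (cache block 3), and the tag is 10. So memory block 1011 (block 11 of 16) resides in cache block 3 with tag 10, alongside the three other memory blocks (0011, 0111, 1111) that compete for that same cache block.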

One More Detail: Valid Bit. When a new program starts, the cache does not hold valid information for that program, so we need an indicator of whether a tag entry is valid. Add a "valid bit" to the cache tag entry: 0 => cache miss, even if by chance address = tag; 1 => cache hit, if processor address = tag. A sketch of one entry's state follows.
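Putting the pieces so far together, one cache entry's state might be modeled like this (a sketch with names of our own; one-word blocks as in the running example):

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool     valid;  /* cleared when a new program starts: entry unusable */
    uint32_t tag;    /* which memory block is resident                    */
    uint32_t data;   /* the cached copy (one-word block)                  */
} cache_line_t;

/* A hit requires BOTH conditions; a tag match alone is not enough. */
static bool is_hit(const cache_line_t *line, uint32_t addr_tag) {
    return line->valid && line->tag == addr_tag;
}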

Caching: A Simple First Example [diagram: 16-word main memory, byte addresses 0000xx through 1111xx, feeding a 4-entry cache with Valid, Tag, and Data columns]. One-word blocks; the two low-order bits (xx) select the byte in the block (32-bit words). Q: Where in the cache is the memory block? Use the next 2 low-order memory address bits, the index, to determine which cache block (i.e., modulo the number of blocks in the cache). Q: Is the memory block in the cache? Compare the cache tag to the high-order 2 memory address bits to tell whether the memory block is in the cache (provided the valid bit is set).

Direct-Mapped Cache Example: one-word blocks, cache size = 1K words (4 KB) [diagram: 32-bit address split into 20-bit Tag, 10-bit Index, and 2-bit byte offset; a 1024-entry array of Valid/Tag/Data; a 20-bit comparator produces Hit; 32-bit Data output]. The valid bit ensures something useful is in the cache for this index; the comparator compares the tag with the upper part of the address to detect a hit; on a hit, data is read from the cache instead of memory. What kind of locality are we taking advantage of? A software model of this lookup follows.
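A minimal software model of this 1K-word direct-mapped lookup (field widths as in the diagram; function and type names are ours):

#include <stdbool.h>
#include <stdint.h>

#define NUM_LINES 1024u               /* 1K one-word blocks = 4 KB cache  */

typedef struct { bool valid; uint32_t tag, data; } line_t;
static line_t cache[NUM_LINES];

/* Returns true on a hit and delivers the word, mirroring the
 * valid-bit check plus 20-bit tag comparison above. */
bool lookup(uint32_t addr, uint32_t *word) {
    uint32_t index = (addr >> 2) & (NUM_LINES - 1u); /* bits 11:2  */
    uint32_t tag   = addr >> 12;                     /* bits 31:12 */
    const line_t *line = &cache[index];
    if (line->valid && line->tag == tag) {
        *word = line->data;           /* hit: read from cache, not memory */
        return true;
    }
    return false;                     /* miss: must fetch from memory     */
}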

Multiword-Block Direct-Mapped Cache: four words/block, cache size = 1K words [diagram: 32-bit address split into 20-bit Tag, 8-bit Index, 2-bit word offset, and 2-bit byte offset; Hit and 32-bit Data outputs]. What kind of locality are we taking advantage of?

Cache Names for Each Organization
– "Fully Associative": a block can go anywhere. This was the first design in the lecture; note there is no Index field, but 1 comparator per block.
– "Direct Mapped": a block goes in exactly one place. Only 1 comparator; number of sets = number of blocks.
– "N-way Set Associative": N places for a block. Number of sets = number of blocks / N; N comparators. Fully associative is the special case N = number of blocks; direct mapped is N = 1.

Range of Set-Associative Caches. For a fixed-size cache and a given block size, each factor-of-2 increase in associativity doubles the number of blocks per set (i.e., the number of "ways") and halves the number of sets; it decreases the size of the index by 1 bit and increases the size of the tag by 1 bit. [Address fields: | Tag | Index | Block offset |; more associativity means more ways.] What if we can also change the block size? A worked example follows.
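For instance, with 8 blocks total and a fixed block size: direct-mapped gives 8 sets (3 index bits); 2-way gives 4 sets (2 index bits); 4-way gives 2 sets (1 index bit); fully associative (8-way) gives 1 set (no index bits). Each doubling of associativity moves exactly one bit from the index into the tag.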

Clickers/Peer Instruction: For a cache with constant total capacity, if we increase the number of ways by a factor of 2, which statement is false?
A: The number of sets could be doubled
B: The tag width could decrease
C: The block size could stay the same
D: The block size could be halved
E: Tag width must increase

Total Cache Capacity = associativity × # of sets × block size; in bytes: blocks/set × sets × bytes/block, i.e., C = N × S × B. Address fields: | Tag | Index | Byte Offset |, with address_size = tag_size + index_size + offset_size = tag_size + log2(S) + log2(B). Clicker question: C remains constant, and S and/or B can change such that C = 2N × (SB)' => (SB)' = SB/2. Old tag_size = address_size - (log2(S) + log2(B)) = address_size - log2(SB); new tag_size = address_size - log2(SB/2) = address_size - (log2(SB) - 1), i.e., the tag grows by exactly 1 bit. A numeric example follows.
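For instance (numbers of our own choosing): C = 32 KiB, N = 4 ways, B = 64 bytes gives S = C/(N × B) = 128 sets. With 32-bit addresses: offset = log2(64) = 6 bits, index = log2(128) = 7 bits, tag = 32 - 7 - 6 = 19 bits. Doubling N to 8 with B fixed halves S to 64, so the index shrinks to 6 bits and the tag grows to 20 bits, as the derivation above predicts.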

Typical Memory Hierarchy [diagram: on-chip components include the RegFile, Instruction Cache, and Data Cache next to the Datapath and Control, backed by a Second-Level Cache (SRAM), a Third-Level Cache (SRAM), Main Memory (DRAM), and Secondary Memory (disk or flash)]. Speed (cycles): ½'s, 1's, 10's, 100's, 1,000,000's. Size (bytes): 100's, 10K's, M's, G's, T's. Cost/bit: highest at the inner levels, lowest at the outer levels. The principle of locality plus the memory hierarchy presents the programmer with approximately as much memory as is available in the cheapest technology at approximately the speed offered by the fastest technology.

Handling Stores with Write-Through. Store instructions write to memory, changing values, so we need to make sure the cache and memory have the same values on writes. There are 2 policies. 1) Write-Through Policy: write the cache and write through the cache to memory, so every write eventually gets to memory. This is too slow by itself, so include a Write Buffer that allows the processor to continue once the data is in the buffer; the buffer updates memory in parallel with the processor.

Write-Through Cache [diagram: Processor, Cache, a Write Buffer of Addr/Data entries, Memory]. Write the value both in the cache and in memory. The write buffer stops the CPU from stalling when memory cannot keep up, and it may have multiple entries to absorb bursts of writes. What if a store misses in the cache? A sketch follows.
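A minimal sketch of the write-through path with a small FIFO write buffer (buffer size, helper names, and the toy memory are our own; real hardware drains the buffer to DRAM in the background rather than in the store itself):

#include <stdint.h>

#define WB_ENTRIES 4                     /* pending writes the CPU can
                                            outrun before it must stall   */
struct wb_entry { uint32_t addr, data; };
static struct wb_entry write_buffer[WB_ENTRIES];
static int wb_count;
static uint32_t memory[1024];            /* toy "DRAM"; addresses are
                                            simple array indices here     */

static void drain_one(void) {            /* memory absorbs oldest write   */
    memory[write_buffer[0].addr] = write_buffer[0].data;
    for (int i = 1; i < wb_count; i++)
        write_buffer[i - 1] = write_buffer[i];
    wb_count--;
}

void store_word(uint32_t addr, uint32_t data) {
    /* 1) update the cached copy here (omitted), then write through:      */
    while (wb_count == WB_ENTRIES)       /* buffer full => CPU must stall  */
        drain_one();
    write_buffer[wb_count++] =           /* 2) buffered write; the CPU     */
        (struct wb_entry){addr, data};   /*    continues immediately       */
}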

Handling Stores with Write-Back. 2) Write-Back Policy: write only to the cache, and write the cache block back to memory when the block is evicted from the cache. Writes are collected in the cache, with only a single write to memory per block. Include a bit recording whether the block was written or not, and write back only if that bit is set: it is called the "dirty" bit (writing makes the block "dirty").

Write-Back Cache [diagram: Processor, Cache with a dirty bit per block, Memory].
– Store/cache hit: write the data in the cache only and set the dirty bit (memory now has a stale value)
– Store/cache miss: read the block from memory, then update it and set the dirty bit (the "write-allocate" policy)
– Load/cache hit: use the value from the cache
– On any miss: write back the evicted block, but only if it is dirty; then update the cache with the new block and clear the dirty bit
A sketch follows.
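A sketch of the write-back store path with write-allocate (one-word blocks and the field widths from the earlier direct-mapped example; the toy memory and all names are ours, and addresses are assumed to fit in it):

#include <stdbool.h>
#include <stdint.h>

#define NUM_LINES 1024u

typedef struct { bool valid, dirty; uint32_t tag, data; } line_t;
static line_t cache[NUM_LINES];
static uint32_t memory[1u << 20];        /* toy word-addressed "DRAM"     */

void store_word(uint32_t addr, uint32_t data) {
    uint32_t index = (addr >> 2) & (NUM_LINES - 1u);
    uint32_t tag   = addr >> 12;
    line_t *l = &cache[index];
    if (!(l->valid && l->tag == tag)) {           /* store miss            */
        if (l->valid && l->dirty)                 /* evictee was modified: */
            memory[(l->tag << 10) | index] = l->data; /* write it back     */
        l->data  = memory[addr >> 2];             /* write-allocate: fetch */
        l->tag   = tag;                           /* the block first       */
        l->valid = true;
    }
    l->data  = data;                              /* write the cache only  */
    l->dirty = true;                              /* memory is now stale   */
}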

Write-Through vs. Write-Back. Write-through: simpler control logic; more predictable timing simplifies the processor control logic; easier to make reliable, since memory always has a copy of the data (big idea: redundancy!). Write-back: more complex control logic; more variable timing (0, 1, or 2 memory accesses per cache access); usually reduces write traffic; harder to make reliable, since sometimes the cache has the only copy of the data.

And In Conclusion, ...
– The principle of locality applies to libraries and computer memory alike
– A hierarchy of memories (speed/size/cost per bit) exploits locality
– A cache is a copy of data from a lower level in the memory hierarchy
– Direct mapped: find a block in the cache using the Tag field, with a Valid bit required for a hit
– Cache design choice: write-through vs. write-back