Caches Samira Khan March 23, 2017

Agenda Review from last lecture: data flow model, memory hierarchy, caches. More caches.

The Dataflow Model (of a Computer) Von Neumann model: an instruction is fetched and executed in control flow order, as specified by the instruction pointer; execution is sequential unless an explicit control flow instruction says otherwise. Dataflow model: an instruction is fetched and executed in data flow order, i.e., when its operands are ready; there is no instruction pointer. Instruction ordering is specified by data flow dependence: each instruction specifies "who" should receive its result, and an instruction can "fire" whenever all of its operands are received. Potentially many instructions can execute at the same time; inherently more parallel.

Data Flow Advantages/Disadvantages Advantages: very good at exploiting irregular parallelism; only real data dependencies constrain processing. Disadvantages: debugging is difficult (no precise state); interrupt/exception handling is difficult (what are the precise state semantics?); too much parallelism? (parallelism control needed); high bookkeeping overhead (tag matching, data storage); memory locality is not exploited.

OOO EXECUTION: RESTRICTED DATAFLOW An out-of-order engine dynamically builds the dataflow graph of a piece of the program. Which piece? The dataflow graph is limited to the instruction window. Instruction window: all decoded but not yet retired instructions. Can we do it for the whole program?

An Example [Figure: an example dataflow graph, with results flowing to an OUT node.]

The Memory Hierarchy

Ideal Memory Zero access time (latency) Infinite capacity Zero cost Infinite bandwidth (to support multiple accesses in parallel)

The Problem Ideal memory's requirements oppose each other. Bigger is slower: bigger → takes longer to determine the location. Faster is more expensive (memory technology: SRAM vs. DRAM vs. disk vs. tape). Higher bandwidth is more expensive: needs more banks, more ports, higher frequency, or faster technology.

Why Memory Hierarchy? We want both fast and large But we cannot achieve both with a single level of memory Idea: Have multiple levels of storage (progressively bigger and slower as the levels are farther from the processor) and ensure most of the data the processor needs is kept in the fast(er) level(s)

The Memory Hierarchy Move what you use here: the fast but small levels near the processor (faster per byte, more expensive per byte). Back up everything here: the big but slow levels (cheaper per byte). With good locality of reference, memory appears as fast as the fast level and as large as the slow level.

Memory Locality A "typical" program has a lot of locality in memory references; typical programs are composed of "loops". Temporal: a program tends to reference the same memory location many times, all within a small window of time. Spatial: a program tends to reference a cluster of memory locations at a time. Most notable examples: instruction memory references and array/data structure references.
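As an illustration not taken from the slides, the short loop below shows both kinds of locality: the running sum is re-referenced on every iteration (temporal locality), and the array elements are referenced consecutively (spatial locality).

```python
# Illustrative only: a typical loop exhibiting both kinds of locality.
def sum_array(a):
    total = 0          # 'total' is touched on every iteration -> temporal locality
    for x in a:        # consecutive elements of 'a' are touched -> spatial locality
        total += x
    return total

print(sum_array(list(range(16))))  # 120
```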

Hierarchical Latency Analysis For a given memory hierarchy level i, it has a technology-intrinsic access time of ti. The perceived access time Ti is longer than ti, except for the outermost level. When looking for a given address, there is a chance (hit rate hi) you "hit" and the access time is ti, and a chance (miss rate mi) you "miss" and the access time is ti + Ti+1, with hi + mi = 1. Thus Ti = hi·ti + mi·(ti + Ti+1) = ti + mi·Ti+1. Note that mi is the miss rate of just the references that missed at level i-1.

Hierarchy Design Considerations Recursive latency equation: Ti = ti + mi·Ti+1. The goal: achieve the desired T1 within the allowed cost; Ti ≈ ti is desirable. Keep mi low: increasing capacity Ci lowers mi, but beware of increasing ti; lower mi by smarter management (replacement: anticipate what you don't need; prefetching: anticipate what you will need). Keep Ti+1 low: faster lower hierarchy levels, but beware of increasing cost; introduce intermediate hierarchy levels as a compromise.

Intel Pentium 4 Example 90nm P4, 3.6 GHz. L1 D-cache: C1 = 16 KB, t1 = 4 cyc (int) / 9 cyc (fp). L2 D-cache: C2 = 1024 KB, t2 = 18 cyc (int) / 18 cyc (fp). Main memory: t3 = ~50 ns, or about 180 cycles. Notice that the best-case latency is not 1, and worst-case access latencies run into 500+ cycles. If m1 = 0.1, m2 = 0.1: T1 = 7.6, T2 = 36. If m1 = 0.01, m2 = 0.01: T1 = 4.2, T2 = 19.8. If m1 = 0.05, m2 = 0.01: T1 = 5.00, T2 = 19.8. If m1 = 0.01, m2 = 0.50: T1 = 5.08, T2 = 108.
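A minimal sketch (assuming the integer-pipeline latencies above and a 180-cycle memory) that plugs the recursive latency equation Ti = ti + mi·Ti+1 into these numbers and reproduces the perceived latencies on the slide:

```python
# Sketch: perceived access time T_i = t_i + m_i * T_{i+1}, computed outermost level first.
def perceived_latencies(t, m, t_mem):
    """t: intrinsic latencies [t1, t2, ...]; m: miss rates [m1, m2, ...];
    t_mem: main-memory latency (the outermost T)."""
    T_next = t_mem
    T = []
    for ti, mi in reversed(list(zip(t, m))):
        T_next = ti + mi * T_next
        T.append(T_next)
    return list(reversed(T))        # [T1, T2, ...]

# 90nm P4, integer accesses: t1 = 4 cycles, t2 = 18 cycles, memory ~180 cycles
print(perceived_latencies([4, 18], [0.10, 0.10], 180))  # ~[7.6, 36]
print(perceived_latencies([4, 18], [0.01, 0.01], 180))  # ~[4.2, 19.8]
print(perceived_latencies([4, 18], [0.01, 0.50], 180))  # ~[5.08, 108]
```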

Caching Basics Block (line): the unit of storage in the cache. Memory is logically divided into cache blocks that map to locations in the cache. When data is referenced: HIT: if it is in the cache, use the cached data instead of accessing memory. MISS: if it is not in the cache, bring the block into the cache, maybe having to kick something else out to do it. Some important cache design decisions: Placement: where and how to place/find a block in the cache? Replacement: what data to remove to make room in the cache? Granularity of management: large, small, uniform blocks? Write policy: what do we do about writes? Instructions/data: do we treat them separately?

Cache Abstraction and Metrics Cache hit rate = (# hits) / (# hits + # misses) = (# hits) / (# accesses). Average memory access time (AMAT) = (hit-rate * hit-latency) + (miss-rate * miss-latency). Aside: can reducing AMAT reduce performance? [Figure: the address is looked up in the tag store (is the address in the cache? + bookkeeping), which produces hit/miss; the data store (stores memory blocks) supplies the data.]
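A one-line sketch of the AMAT formula above, with hypothetical latencies and hit rate (miss-latency here meaning the full time paid on a miss):

```python
# AMAT = hit_rate * hit_latency + miss_rate * miss_latency
def amat(hit_rate, hit_latency, miss_latency):
    return hit_rate * hit_latency + (1 - hit_rate) * miss_latency

# Hypothetical numbers: 95% hit rate, 4-cycle hit, 200-cycle miss
print(amat(0.95, 4, 200))   # ~13.8 cycles on average
```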

A Basic Hardware Cache Design We will start with a basic hardware cache design Then, we will examine a multitude of ideas to make it better

Blocks and Addressing the Cache Memory is logically divided into fixed-size blocks. Each block maps to a location in the cache, determined by the index bits in the address, which are used to index into the tag and data stores. Cache access: 1) index into the tag and data stores with the index bits in the address, 2) check the valid bit in the tag store, 3) compare the tag bits in the address with the stored tag in the tag store. If a block is in the cache (cache hit), the stored tag should be valid and match the tag of the block. [Figure: an 8-bit address split into tag, index, and byte-in-block fields; in the running example, a 2-bit tag, 3-bit index, and 3-bit byte in block.]
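As a sketch of step 1 above, the helper below splits an address into tag, index, and byte-in-block fields; the field widths are parameters, so it matches the 2-bit-tag / 3-bit-index / 3-bit-offset layout of the 64-byte direct-mapped cache on the following slides.

```python
# Split an address into (tag, index, byte_offset) given the field widths.
def split_address(addr, index_bits, offset_bits):
    offset = addr & ((1 << offset_bits) - 1)
    index  = (addr >> offset_bits) & ((1 << index_bits) - 1)
    tag    = addr >> (offset_bits + index_bits)
    return tag, index, offset

# 8-bit address 0b10_001_101 with a 3-bit index and 3-bit byte offset:
print(split_address(0b10001101, index_bits=3, offset_bits=3))  # (2, 1, 5)
```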

Direct-Mapped Cache: Placement and Access Assume byte-addressable memory: 256 bytes, 8-byte blocks → 32 blocks. Assume cache: 64 bytes, 8 blocks. Direct-mapped: a block can go to only one location, so addresses with the same index contend for the same location and cause conflict misses. Address = tag (2 bits) | index (3 bits) | byte in block (3 bits). [Figure: memory blocks 00|000|000-00|000|111, 01|000|000-01|000|111, 10|000|000-10|000|111, and 11|000|000-11|000|111 all carry index 000 and map to the same cache location; the last block shown is 11|111|000-11|111|111. An access indexes the tag store (valid bit + tag) and the data store with the index bits, compares the stored tag (=?), and a MUX selects the byte in block, producing Hit? and Data.]

Direct-Mapped Caches Direct-mapped cache: two blocks in memory that map to the same index in the cache cannot be present in the cache at the same time. One index → one entry. This can lead to a 0% hit rate if more than one block accessed in an interleaved manner maps to the same index. Assume addresses A and B have the same index bits but different tag bits. A, B, A, B, A, B, A, B, … → conflict in the cache index: all accesses are conflict misses.
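A minimal sketch of this pathology, using two hypothetical addresses (0x00 and 0x40) that share an index but differ in tag in the 64-byte cache above: every access misses.

```python
# Toy direct-mapped cache: one (valid, tag) entry per index; data payload omitted.
class DirectMappedCache:
    def __init__(self, num_blocks, block_size):
        self.num_blocks, self.block_size = num_blocks, block_size
        self.tags = [None] * num_blocks     # tag store (None = invalid)

    def access(self, addr):
        block = addr // self.block_size
        index = block % self.num_blocks
        tag = block // self.num_blocks
        hit = self.tags[index] == tag
        self.tags[index] = tag              # fill/refresh the entry (on a miss this evicts the old block)
        return hit

cache = DirectMappedCache(num_blocks=8, block_size=8)   # 64-byte cache
A, B = 0x00, 0x40          # same index (0), different tags
print([cache.access(x) for x in [A, B] * 4])
# [False, False, False, False, False, False, False, False] -> 0% hit rate
```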

Set Associativity Addresses 0 and 8 always conflict in a direct-mapped cache. Instead of having one column of 8 blocks, have 2 columns of 4 blocks each. [Figure: each set holds two (V, tag) entries in the tag store and two blocks in the data store; both tags are compared (=?), logic selects the hitting way, and MUXes pick the block and the byte in block, producing Hit?.] 8-bit address = tag (3 bits) | index (2 bits) | byte in block (3 bits). Key idea: associative memory within the set. + Accommodates conflicts better (fewer conflict misses). -- More complex, slower access, larger tag store.
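A sketch of the same idea with two ways per set (using LRU within the set as one simple choice): the two block addresses that conflicted in the direct-mapped sketch now co-reside, so only the first access to each one misses.

```python
# Toy 2-way set-associative cache with LRU within each set; data payload omitted.
class TwoWayCache:
    def __init__(self, num_blocks, block_size):
        self.num_sets, self.block_size = num_blocks // 2, block_size
        self.sets = [[] for _ in range(self.num_sets)]   # each set: list of tags, MRU last

    def access(self, addr):
        block = addr // self.block_size
        index, tag = block % self.num_sets, block // self.num_sets
        ways = self.sets[index]
        hit = tag in ways
        if hit:
            ways.remove(tag)        # hit: promote to MRU
        elif len(ways) == 2:
            ways.pop(0)             # miss with a full set: evict the LRU way
        ways.append(tag)            # inserted/promoted block becomes MRU
        return hit

cache = TwoWayCache(num_blocks=8, block_size=8)          # 64-byte cache, 4 sets
A, B = 0x00, 0x40                                        # conflicted in the direct-mapped case
print([cache.access(x) for x in [A, B] * 4])
# [False, False, True, True, True, True, True, True]
```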

Higher Associativity 4-way: + likelihood of conflict misses is even lower; -- more tag comparators and a wider data mux; larger tags. 8-bit address = tag (4 bits) | index (1 bit) | byte in block (3 bits). [Figure: the four tags of a set are compared (=?) in parallel; logic selects the hitting way; MUXes pick the block and the byte in block.]

Full Associativity Fully associative cache: a block can be placed in any cache location. 8-bit address = tag (5 bits) | index (0 bits) | byte in block (3 bits). [Figure: all eight tags are compared (=?) in parallel; logic selects the hitting entry; MUXes pick the block and the byte in block.]

Exercise on Cache Indexing We assumed 8-byte blocks. What happens if we have 16-byte blocks? The cache is 128 B, 8 blocks. Direct mapped? 2-way? 4-way? 8-way? 8-bit address = tag (? bits) | index (? bits) | byte in block (4 bits). Direct mapped: 8-bit address = tag (1 bit) | index (3 bits) | byte in block (4 bits).

Tag-Index-Offset m = memory address bits; S = 2^s = number of sets, where s = (set) index bits; B = 2^b = block size, where b = (block) offset bits; t = m − (s + b) = tag bits; C = B * S = cache size (if direct-mapped).
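A sketch that turns these definitions into a small helper, generalized to A ways (so S = C / (B·A), an assumption beyond the direct-mapped formula on the slide), and uses it to work the indexing exercise from the previous slide (128-byte cache, 16-byte blocks, 8-bit addresses):

```python
import math

# m address bits, cache size C, block size B, associativity A (ways):
#   b = log2(B) offset bits, S = C/(B*A) sets, s = log2(S) index bits, t = m - s - b tag bits
def tio_bits(m, C, B, A):
    b = int(math.log2(B))
    s = int(math.log2(C // (B * A)))
    return {"tag": m - s - b, "index": s, "offset": b}

for ways in (1, 2, 4, 8):   # direct mapped, 2-way, 4-way, 8-way
    print(ways, "way:", tio_bits(m=8, C=128, B=16, A=ways))
# 1 way: tag 1, index 3, offset 4 ... 8 way: tag 4, index 0, offset 4
```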

Associativity (and Tradeoffs) Degree of associativity: how many blocks can map to the same index (or set)? Higher associativity: ++ higher hit rate; -- slower cache access time (hit latency and data access latency); -- more expensive hardware (more comparators). Diminishing returns from higher associativity. [Figure: hit rate vs. associativity, with the curve flattening at higher associativity.]

Issues in Set-Associative Caches Think of each block in a set having a “priority” Indicating how important it is to keep the block in the cache Key issue: How do you determine/adjust block priorities? There are three key decisions in a set: Insertion, promotion, eviction (replacement) Insertion: What happens to priorities on a cache fill? Where to insert the incoming block, whether or not to insert the block Promotion: What happens to priorities on a cache hit? Whether and how to change block priority Eviction/replacement: What happens to priorities on a cache miss? Which block to evict and how to adjust priorities

Eviction/Replacement Policy Which block in the set do we replace on a cache miss? Any invalid block first; if all are valid, consult the replacement policy: random, FIFO, least recently used (how to implement?), not most recently used, least frequently used, hybrid replacement policies, the optimal replacement policy?

Least Recently Used Replacement Policy [Figure: one 4-way set (Set 0) whose tag and data stores hold blocks A, B, C, D, with an LRU/MRU rank tracked per way.] ACCESS PATTERN: A C B D. After these accesses the ranks are A = LRU, B = MRU-1, C = MRU-2, D = MRU.

Least Recently Used Replacement Policy ACCESS PATTERN: A C B D E. E misses, so the LRU block (A) is evicted and E is inserted as MRU; the remaining blocks are each demoted one step: D becomes MRU-1, B becomes MRU-2, and C becomes LRU.

Least Recently Used Replacement Policy ACCESS PATTERN: A C B D E B. B hits and is promoted to MRU; E is demoted to MRU-1 and D to MRU-2; C remains LRU.
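A minimal sketch of the bookkeeping in the walkthrough above, keeping one 4-entry set as a recency-ordered list (LRU first, MRU last) and replaying the access pattern A C B D E B. This is one simple way to model true LRU, not necessarily how hardware implements it.

```python
# One 4-way set with true LRU: keep blocks ordered from LRU (front) to MRU (back).
def simulate_lru_set(accesses, ways=4):
    order = []                                  # recency order, LRU ... MRU
    for blk in accesses:
        if blk in order:
            order.remove(blk)                   # hit: promote to MRU
            print(f"{blk}: hit", end=" ")
        elif len(order) == ways:
            victim = order.pop(0)               # miss with a full set: evict the LRU block
            print(f"{blk}: miss, evict {victim}", end=" ")
        else:
            print(f"{blk}: miss", end=" ")
        order.append(blk)                       # inserted/promoted block becomes MRU
        print("->", order)

simulate_lru_set(list("ACBDEB"))
# ...
# E: miss, evict A -> ['C', 'B', 'D', 'E']
# B: hit -> ['C', 'D', 'E', 'B']
```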
