CS252 Graduate Computer Architecture
Lecture 3: Caches and Memory Systems I
January 24, 2001
Prof. John Kubiatowicz

Who Cares About the Memory Hierarchy? The CPU-DRAM performance gap. 1980: no cache in µproc; 1995: 2-level cache on chip (1989: first Intel µproc with a cache on chip).

Generations of Microprocessors. Time of a full cache miss, in instructions executed:
1st Alpha: 340 ns / 5.0 ns =  68 clks x 2, or 136 instructions
2nd Alpha: 266 ns / 3.3 ns =  80 clks x 4, or 320 instructions
3rd Alpha: 180 ns / 1.7 ns = 108 clks x 6, or 648 instructions
1/2X latency x 3X clock rate x 3X Instr/clock => ~5X

Processor-Memory Performance Gap "Tax"
Processor         % Area (~cost)   % Transistors (~power)
Alpha 21164       37%              77%
StrongArm SA110   61%              94%
Pentium Pro       64%              88%
  - 2 dies per package: Proc/I$/D$ + L2$
Caches have no inherent value; they only try to close the performance gap.

What is a cache? Small, fast storage used to improve average access time to slow memory. Exploits spatial and temporal locality. In computer architecture, almost everything is a cache!
- Registers: a cache on variables
- First-level cache: a cache on second-level cache
- Second-level cache: a cache on memory
- Memory: a cache on disk (virtual memory)
- TLB: a cache on page table
- Branch-prediction: a cache on prediction information?
[Figure: the hierarchy from Proc/Regs through L1-Cache, L2-Cache, and Memory down to Disk, Tape, etc.; lower levels are bigger, upper levels are faster.]

Example: 1 KB Direct Mapped Cache with 32 B Blocks. For a 2^N byte cache:
- The uppermost (32 - N) bits are always the Cache Tag
- The lowest M bits are the Byte Select (Block Size = 2^M)
[Figure: a 32-bit address splits into the Cache Tag (bits 31:10, e.g. 0x50, stored as part of the cache "state" alongside a Valid Bit), the Cache Index (bits 9:5, e.g. 0x01, selecting one of 32 lines), and the Byte Select (bits 4:0, e.g. 0x00). Line 0 holds bytes 0-31, line 1 holds bytes 32-63, ..., line 31 holds bytes 992-1023.]
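To make the address breakdown concrete, here is a minimal C sketch (not from the original slides; the address value is chosen so the fields reproduce the slide's example):

```c
#include <stdint.h>
#include <stdio.h>

#define CACHE_SIZE 1024                       /* 2^10-byte cache -> N = 10 */
#define BLOCK_SIZE 32                         /* 2^5-byte blocks -> M = 5  */
#define NUM_LINES  (CACHE_SIZE / BLOCK_SIZE)  /* 32 lines, 5 index bits    */

int main(void) {
    uint32_t addr     = 0x00014020;                      /* hypothetical address */
    uint32_t byte_sel = addr & (BLOCK_SIZE - 1);         /* bits  4:0 */
    uint32_t index    = (addr / BLOCK_SIZE) % NUM_LINES; /* bits  9:5 */
    uint32_t tag      = addr / CACHE_SIZE;               /* bits 31:10 */
    /* Prints tag=0x50 index=0x1 byte=0x0, matching the slide's example fields. */
    printf("tag=0x%x index=0x%x byte=0x%x\n", tag, index, byte_sel);
    return 0;
}
```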

Set Associative Cache. N-way set associative: N entries for each Cache Index
- N direct mapped caches operate in parallel
Example: Two-way set associative cache
- Cache Index selects a "set" from the cache
- The two tags in the set are compared to the input address in parallel
- Data is selected based on the tag comparison result
[Figure: two-way lookup datapath — the Cache Index selects one set; the address tag is compared against both stored tags (each guarded by a Valid bit); the compare results are ORed into Hit and drive the mux (Sel1/Sel0) that picks the matching Cache Block.]
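A software sketch of this two-way lookup (illustrative only; the sizes are assumptions, and the loop stands in for what the hardware does in parallel):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define BLOCK    32   /* bytes per block (assumed) */
#define NUM_SETS 16   /* sets per way (assumed)    */
#define WAYS      2

typedef struct {
    bool     valid;
    uint32_t tag;
    uint8_t  data[BLOCK];
} Line;

static Line cache[NUM_SETS][WAYS];

/* Look up `addr`; return the matching block on a hit, NULL on a miss. */
uint8_t *lookup(uint32_t addr) {
    uint32_t index = (addr / BLOCK) % NUM_SETS;   /* selects the set          */
    uint32_t tag   = addr / (BLOCK * NUM_SETS);   /* compared in both ways    */
    for (int way = 0; way < WAYS; way++) {        /* parallel in hardware     */
        Line *line = &cache[index][way];
        if (line->valid && line->tag == tag)
            return line->data;                    /* the mux selects this way */
    }
    return NULL;                                  /* miss */
}
```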

Disadvantage of Set Associative Cache. N-way Set Associative Cache versus Direct Mapped Cache:
- N comparators vs. 1
- Extra MUX delay for the data
- Data comes AFTER the Hit/Miss decision and set selection
In a direct mapped cache, the Cache Block is available BEFORE Hit/Miss:
- Possible to assume a hit and continue; recover later if it was a miss.
[Figure: the same two-way set-associative datapath as the previous slide.]

Review: Cache Performance. Miss-oriented approach to memory access:
- CPI_Execution includes ALU and Memory instructions
Separating out the Memory component entirely:
- AMAT = Average Memory Access Time
- CPI_ALUOps does not include memory instructions
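In standard form (the Hennessy & Patterson models these bullets refer to), the two views are:

```latex
\text{CPUtime} = \text{IC} \times \Big(\text{CPI}_{\text{Execution}}
    + \tfrac{\text{MemAccess}}{\text{Inst}} \times \text{MissRate} \times \text{MissPenalty}\Big)
    \times \text{CycleTime}

\text{AMAT} = \text{HitTime} + \text{MissRate} \times \text{MissPenalty}

\text{CPUtime} = \text{IC} \times \Big(\tfrac{\text{AluOps}}{\text{Inst}} \times \text{CPI}_{\text{AluOps}}
    + \tfrac{\text{MemAccess}}{\text{Inst}} \times \text{AMAT}\Big) \times \text{CycleTime}
```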

Impact on Performance. Suppose a processor executes at:
- Clock Rate = 200 MHz (5 ns per cycle), ideal (no misses) CPI = 1.1
- 50% arith/logic, 30% ld/st, 20% control
Suppose that 10% of memory operations get a 50 cycle miss penalty, and 1% of instructions get the same miss penalty.
CPI = ideal CPI + average stalls per instruction
    = 1.1 (cycles/ins) + [0.30 (DataMops/ins) x 0.10 (miss/DataMop) x 50 (cycle/miss)] + [1 (InstMop/ins) x 0.01 (miss/InstMop) x 50 (cycle/miss)]
    = (1.1 + 1.5 + 0.5) cycle/ins = 3.1
64.5% of the time the proc is stalled waiting for memory!
AMAT = (1/1.3) x [1 + 0.01 x 50] + (0.3/1.3) x [1 + 0.1 x 50] = 2.54
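A quick C check of this arithmetic (illustrative, not part of the slide):

```c
#include <stdio.h>

int main(void) {
    double ideal_cpi   = 1.1;
    double data_stalls = 0.30 * 0.10 * 50;  /* ld/st frac x miss rate x penalty = 1.5 */
    double inst_stalls = 1.00 * 0.01 * 50;  /* fetch/inst x miss rate x penalty = 0.5 */
    double cpi = ideal_cpi + data_stalls + inst_stalls;                    /* 3.1 */

    /* 1.3 memory accesses per instruction: 1 fetch + 0.3 data ops */
    double amat = (1.0/1.3) * (1 + 0.01*50) + (0.3/1.3) * (1 + 0.10*50);

    printf("CPI = %.1f, stalled %.1f%% of the time, AMAT = %.2f\n",
           cpi, 100 * (data_stalls + inst_stalls) / cpi, amat);  /* 3.1, 64.5, 2.54 */
    return 0;
}
```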

Example: Harvard Architecture. Unified vs. separate I&D (Harvard). Table on page 384:
- 16KB I&D: Inst miss rate = 0.64%, Data miss rate = 6.47%
- 32KB unified: Aggregate miss rate = 1.99%
Which is better (ignoring the L2 cache)?
- Assume 33% data ops => 75% of accesses are instruction fetches (1.0/1.33)
- hit time = 1, miss time = 50
- Note that a data hit incurs 1 extra stall in the unified cache (it has only one port)
AMAT_Harvard = 75% x (1 + 0.64% x 50) + 25% x (1 + 6.47% x 50) = 2.05
AMAT_Unified = 75% x (1 + 1.99% x 50) + 25% x (1 + 1 + 1.99% x 50) = 2.24
[Figure: a processor with split I-Cache-1 and D-Cache-1 backed by Unified Cache-2, vs. a processor with Unified Cache-1 backed by Unified Cache-2.]
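The same comparison as a small C program (illustrative; the hit/miss numbers are the slide's, and the results match after rounding):

```c
#include <stdio.h>

/* Average memory access time: hit time + miss rate x miss penalty. */
static double amat(double hit, double miss_rate, double penalty) {
    return hit + miss_rate * penalty;
}

int main(void) {
    double f_inst = 0.75, f_data = 0.25;            /* fractions of all accesses */
    double harvard = f_inst * amat(1, 0.0064, 50)   /* split I-cache */
                   + f_data * amat(1, 0.0647, 50);  /* split D-cache */
    /* Unified: a data access waits 1 extra cycle behind the fetch (one port). */
    double unified = f_inst * amat(1, 0.0199, 50)
                   + f_data * amat(2, 0.0199, 50);
    printf("AMAT Harvard = %.3f, AMAT Unified = %.3f\n",
           harvard, unified);                       /* 2.049 vs. 2.245 */
    return 0;
}
```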

Review: Four Questions for Memory Hierarchy Designers
Q1: Where can a block be placed in the upper level? (Block placement)
- Fully Associative, Set Associative, Direct Mapped
Q2: How is a block found if it is in the upper level? (Block identification)
- Tag/Block
Q3: Which block should be replaced on a miss? (Block replacement)
- Random, LRU
Q4: What happens on a write? (Write strategy)
- Write Back or Write Through (with Write Buffer)

Review: Improving Cache Performance
1. Reduce the miss rate,
2. Reduce the miss penalty, or
3. Reduce the time to hit in the cache.

Reducing Misses. Classifying misses: the 3 Cs
- Compulsory: The very first access to a block cannot be in the cache, so the block must be brought in. Also called cold start misses or first reference misses. (Misses even in an infinite cache.)
- Capacity: If the cache cannot contain all the blocks needed during execution of a program, capacity misses occur as blocks are discarded and later retrieved. (Misses in a fully associative cache of size X.)
- Conflict: If the block-placement strategy is set associative or direct mapped, conflict misses (in addition to compulsory and capacity misses) occur because a block can be discarded and later retrieved when too many blocks map to its set. Also called collision misses or interference misses. (Misses in an N-way set-associative cache of size X.)
More recently, a 4th "C":
- Coherence: Misses caused by cache coherence (e.g., invalidations from other processors).

3Cs Absolute Miss Rate (SPEC92)
[Figure: absolute miss rate vs. cache size for SPEC92, stacked by miss type; the conflict component shrinks as associativity grows, and compulsory misses are vanishingly small.]

2:1 Cache Rule
Conflict-miss evidence: the miss rate of a 1-way (direct-mapped) cache of size X equals the miss rate of a 2-way set-associative cache of size X/2.

3Cs Relative Miss Rate
[Figure: the same 3Cs data plotted as relative miss rate vs. cache size; conflict misses are the component removed by higher associativity.]
Flaws: data is for a fixed block size. Good: insight => invention.

How Can We Reduce Misses? 3 Cs: Compulsory, Capacity, Conflict. In all cases, assume the total cache size is not changed. What happens if we:
1) Change Block Size: which of the 3 Cs is obviously affected?
2) Change Associativity: which of the 3 Cs is obviously affected?
3) Change Compiler: which of the 3 Cs is obviously affected?

1. Reduce Misses via Larger Block Size
[Figure: miss rate vs. block size for several cache sizes. Larger blocks exploit spatial locality and cut compulsory misses, but past a point they increase conflict misses (fewer blocks fit in the cache) and raise the miss penalty.]

2. Reduce Misses via Higher Associativity
2:1 Cache Rule:
- Miss Rate of a direct-mapped cache of size N ~= Miss Rate of a 2-way cache of size N/2
Beware: execution time is the only final measure!
- Will clock cycle time increase?
- Hill [1988] suggested hit time for 2-way vs. 1-way: external cache +10%, internal +2%

Example: Avg. Memory Access Time vs. Miss Rate
Example: assume CCT = 1.10 for 2-way, 1.12 for 4-way, 1.14 for 8-way vs. the CCT of direct mapped.
[Table: AMAT by cache size (KB) for 1-way, 2-way, 4-way, and 8-way associativity; red entries mean A.M.A.T. is not improved by more associativity.]

3. Reducing Misses via a "Victim Cache"
How to combine the fast hit time of direct mapped yet still avoid conflict misses? Add a small buffer to hold data discarded from the cache.
Jouppi [1990]: a 4-entry victim cache removed 20% to 95% of conflicts for a 4 KB direct mapped data cache. Used in Alpha, HP machines.
[Figure: a small fully associative victim cache — four entries, each a tag-and-comparator plus one cache line of data — sits between the main cache (TAGS/DATA) and the next lower level in the hierarchy.]
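A minimal C sketch of the policy (an illustration of the idea, not Jouppi's hardware): on a main-cache miss, probe the victim buffer; on a victim hit, swap the two lines; on a true miss, the evicted line becomes a victim. Sizes and the full-address tag are simplifications.

```c
#include <stdbool.h>
#include <stdint.h>

#define LINES   128   /* direct-mapped main cache (assumed size)       */
#define VICTIMS 4     /* tiny fully associative buffer, as in Jouppi   */

typedef struct { bool valid; uint32_t tag; } Line;  /* tag = full block addr */

static Line cache[LINES];
static Line victim[VICTIMS];
static int  victim_next;      /* FIFO replacement within the victim cache */

/* Access block address `blk`; returns true on a hit in either structure. */
bool access_block(uint32_t blk) {
    Line *slot = &cache[blk % LINES];
    if (slot->valid && slot->tag == blk)
        return true;                        /* fast direct-mapped hit */
    for (int i = 0; i < VICTIMS; i++) {     /* hardware probes all entries at once */
        if (victim[i].valid && victim[i].tag == blk) {
            Line tmp = *slot;               /* swap: conflicting lines ping-pong   */
            *slot = victim[i];              /* between the slot and the buffer     */
            victim[i] = tmp;
            return true;                    /* slow hit, but no memory access */
        }
    }
    if (slot->valid) {                      /* true miss: evictee becomes a victim */
        victim[victim_next] = *slot;
        victim_next = (victim_next + 1) % VICTIMS;
    }
    slot->valid = true;                     /* fill from next level (not modeled) */
    slot->tag   = blk;
    return false;
}
```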

4. Reducing Misses via "Pseudo-Associativity"
How to combine the fast hit time of Direct Mapped with the lower conflict misses of a 2-way SA cache? Divide the cache: on a miss, check the other half of the cache to see if the block is there; if so, it is a pseudo-hit (slow hit).
Drawback: the CPU pipeline is hard to design if a hit can take 1 or 2 cycles
- Better for caches not tied directly to the processor (L2)
- Used in the MIPS R10000 L2 cache; similar in UltraSPARC
[Diagram: access-time line — Hit Time, then Pseudo Hit Time, then Miss Penalty.]
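One common realization flips the most significant index bit to find the line in the "other half"; a C sketch under that assumption (sizes are illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

#define LINES 128   /* direct-mapped array, conceptually split into two halves */

typedef struct { bool valid; uint32_t tag; } Line;  /* tag = full block addr */
static Line cache[LINES];

/* Probe for block `blk`: 0 = miss, 1 = fast hit, 2 = slow pseudo-hit. */
int probe(uint32_t blk) {
    uint32_t idx = blk % LINES;
    if (cache[idx].valid && cache[idx].tag == blk)
        return 1;                          /* normal 1-cycle hit */
    uint32_t alt = idx ^ (LINES / 2);      /* flip MSB of index: other half */
    if (cache[alt].valid && cache[alt].tag == blk)
        return 2;                          /* pseudo-hit: extra cycle(s) */
    return 0;                              /* miss: go to the next level */
}
```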

5. Reducing Misses by Hardware Prefetching of Instructions & Data
E.g., instruction prefetching:
- The Alpha 21064 fetches 2 blocks on a miss
- The extra block is placed in a "stream buffer"
- On a miss, check the stream buffer
Works with data blocks too:
- Jouppi [1990]: 1 data stream buffer caught 25% of misses from a 4KB cache; 4 streams caught 43%
- Palacharla & Kessler [1994]: for scientific programs, 8 streams caught 50% to 70% of the misses from two 64KB, 4-way set associative caches
Prefetching relies on having extra memory bandwidth that can be used without penalty.
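A rough C model of a single sequential stream buffer (my illustration of the mechanism, not the 21064's actual logic): only the head entry is checked on a cache miss, and a hit there supplies the block and triggers a prefetch of the next sequential one.

```c
#include <stdbool.h>
#include <stdint.h>

#define DEPTH 4   /* stream buffer holds the next few sequential blocks (assumed) */

static uint32_t stream[DEPTH];   /* block addresses prefetched so far */
static int  head;
static bool stream_valid;

static void refill_stream(uint32_t blk) {
    for (int i = 0; i < DEPTH; i++)
        stream[i] = blk + 1 + i;        /* prefetch the sequential successors */
    head = 0;
    stream_valid = true;
}

/* Called on a cache miss for block `blk`; true if the stream buffer has it. */
bool stream_lookup(uint32_t blk) {
    if (stream_valid && stream[head] == blk) {
        head = (head + 1) % DEPTH;                         /* consume the head...   */
        stream[(head + DEPTH - 1) % DEPTH] = blk + DEPTH;  /* ...and prefetch ahead */
        return true;                                       /* no memory stall       */
    }
    refill_stream(blk);                 /* non-sequential miss: restart the stream */
    return false;
}
```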

6. Reducing Misses by Software Prefetching of Data
Data prefetch:
- Load data into a register (HP PA-RISC loads)
- Cache Prefetch: load into the cache (MIPS IV, PowerPC, SPARC v.9)
- Special prefetching instructions cannot cause faults; a form of speculative execution
Prefetching comes in two flavors:
- Binding prefetch: requests load directly into a register.
  » Must be the correct address and register!
- Non-binding prefetch: load into the cache.
  » Can be incorrect. Frees HW/SW to guess!
Issuing prefetch instructions takes time:
- Is the cost of issuing prefetches < the savings in reduced misses?
- Wider superscalar issue reduces the difficulty of finding spare issue bandwidth
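As a concrete non-binding prefetch (using GCC/Clang's __builtin_prefetch intrinsic rather than one of the ISAs named above; the prefetch distance is a tuning assumption):

```c
#include <stddef.h>

#define PREFETCH_AHEAD 8   /* iterations ahead; tune to cover the miss latency */

/* Sum an array, hinting future elements into the cache. __builtin_prefetch
 * is non-binding: a wrong or out-of-range address cannot cause a fault. */
double sum(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + PREFETCH_AHEAD < n)
            __builtin_prefetch(&a[i + PREFETCH_AHEAD], /*rw=*/0, /*locality=*/1);
        s += a[i];
    }
    return s;
}
```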

7. Reducing Misses by Compiler Optimizations
McFarling [1989] reduced cache misses by 75% on an 8KB direct mapped cache with 4 byte blocks, in software.
Instructions:
- Reorder procedures in memory so as to reduce conflict misses
- Profiling to look at conflicts (using tools they developed)
Data (loop interchange and blocking are sketched below):
- Merging Arrays: improve spatial locality by using a single array of compound elements vs. 2 arrays
- Loop Interchange: change the nesting of loops to access data in the order it is stored in memory
- Loop Fusion: combine 2 independent loops that have the same looping and some variables in common
- Blocking: improve temporal locality by accessing "blocks" of data repeatedly vs. going down whole columns or rows
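Two of these transformations sketched in C, in the spirit of the textbook's examples (array sizes and the blocking factor are assumptions):

```c
#define N 512
double x[N][N];

/* Loop interchange: with j outer, the inner loop strides through memory by
 * N doubles (roughly one miss per element). Swapping the loops gives
 * sequential accesses: roughly one miss per cache block instead. */
void scale(void) {
    for (int i = 0; i < N; i++)        /* interchanged: i outer, j inner */
        for (int j = 0; j < N; j++)
            x[i][j] = 2 * x[i][j];
}

/* Blocking: matrix multiply reuses B x B sub-blocks repeatedly, so each
 * block stays resident in the cache while it is being reused. */
#define B 32   /* chosen so the working tiles fit in the cache; assumption */
void matmul(double a[N][N], double b[N][N], double c[N][N]) {
    for (int jj = 0; jj < N; jj += B)
        for (int kk = 0; kk < N; kk += B)
            for (int i = 0; i < N; i++)
                for (int j = jj; j < jj + B; j++) {
                    double r = c[i][j];            /* accumulate across kk blocks */
                    for (int k = kk; k < kk + B; k++)
                        r += a[i][k] * b[k][j];
                    c[i][j] = r;
                }
}
```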