Systems Architecture II


Systems Architecture II (CS 282-001)
Lecture 6: Exploiting Memory Hierarchy: Cache Memory*
Jeremy R. Johnson
Mon July 9, 2001

*This lecture was derived from material in the text (Chap. 7). All figures from Computer Organization and Design: The Hardware/Software Approach, Second Edition, by David Patterson and John Hennessy, are copyrighted material (COPYRIGHT 1998 MORGAN KAUFMANN PUBLISHERS, INC. ALL RIGHTS RESERVED).

Introduction

Objective: to design a memory system that provides the illusion of a very large memory that can be accessed as fast as a very small memory.

Principle of Locality: programs tend to access a relatively small portion of their address space at any instant of time.

Topics:
- memory hierarchy and temporal and spatial locality
- the basics of caches
- cache performance: miss rate and miss penalty
- improving cache performance: associativity and multi-level caches

Memory Hierarchy

[Figure: the levels of the memory hierarchy]

Mapping to Cache

Cache - the level of the memory hierarchy between the CPU and main memory.

Direct-Mapped Cache - each memory block is mapped to exactly one location in the cache:

    (Block address) mod (Number of blocks in cache)

The number of blocks is typically a power of two, so the cache location is obtained from the low-order bits of the block address.

[Figure: memory blocks 0-31 mapping onto the locations of a small direct-mapped cache]
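
As a concrete sketch of the mapping (Python; the 8-block cache size is an assumed example, not from the slide):

    NUM_BLOCKS = 8  # assumed cache size, a power of two

    def cache_index(block_address):
        # (Block address) mod (Number of blocks in cache)
        return block_address % NUM_BLOCKS

    # Because NUM_BLOCKS is a power of two, the mod reduces to keeping
    # the low-order bits of the block address:
    def cache_index_bits(block_address):
        return block_address & (NUM_BLOCKS - 1)

    assert cache_index(29) == cache_index_bits(29) == 5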

Locating Data in the Cache

- Compare the tag (the high-order bits of the address) with the tag stored at the cache index (the mapping) to see whether the element is currently in the cache.
- A valid bit is used to indicate whether the data in the cache is valid.
- A hit occurs when the data is in the cache; otherwise it is a miss.
- The extra time required when a cache miss occurs is called the miss penalty.

Example

A 32-word memory and an 8-word cache.

An Example Cache: DECStation 3100

- 64KB instruction cache and 64KB data cache
- direct-mapped, one-word blocks
- Read miss: send the address to the cache (from the PC or the ALU); if it misses, send the address to memory and write the data into the cache when it returns.
- Write miss: write through (write simultaneously to cache and memory), using a 4-word write buffer.
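
A minimal sketch of the write-through policy described above (illustrative Python, not the DECStation hardware; the tiny 8-block size is an assumption - the real caches hold 16K one-word blocks):

    NUM_BLOCKS = 8
    memory = {}                   # backing store: address -> word
    cache = [None] * NUM_BLOCKS   # each entry: (tag, data); None = invalid

    def read(addr):
        index, tag = addr % NUM_BLOCKS, addr // NUM_BLOCKS
        if cache[index] is not None and cache[index][0] == tag:
            return cache[index][1]              # read hit
        data = memory.get(addr, 0)              # read miss: fetch from memory
        cache[index] = (tag, data)              # ...and fill the cache
        return data

    def write(addr, data):
        index, tag = addr % NUM_BLOCKS, addr // NUM_BLOCKS
        cache[index] = (tag, data)              # update the cache block
        memory[addr] = data                     # write through to memory
        # (the hardware hides this latency behind the 4-word write buffer)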

Taking Advantage of Spatial Locality: Increased Block Length

- Instead of bringing one word into the cache when a miss occurs, bring in a block of words (typically 2-4). Especially for instruction reads, it is likely that the subsequent words will be used.
- Advantage: reduces the miss rate.
- Disadvantage: increases the miss penalty.

Mapping an Address to a Multiword Cache Block

Cache block = (Block address) mod (Number of cache blocks), where Block address = ⌊Byte address / Bytes per block⌋.

The block contains the range of byte addresses:

    l = ⌊Byte address / Bytes per block⌋ × Bytes per block
    r = l + (Bytes per block - 1)
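
The same computation in code (a sketch; the block and cache sizes are assumed examples):

    BYTES_PER_BLOCK = 16    # assumed block size
    NUM_CACHE_BLOCKS = 64   # assumed cache size in blocks

    def block_mapping(byte_address):
        block_address = byte_address // BYTES_PER_BLOCK    # floor division
        cache_block = block_address % NUM_CACHE_BLOCKS
        l = block_address * BYTES_PER_BLOCK                # first byte in the block
        r = l + (BYTES_PER_BLOCK - 1)                      # last byte in the block
        return cache_block, (l, r)

    # Byte address 1200 lies in block 75, which maps to cache block
    # 75 mod 64 = 11 and spans byte addresses 1200..1215.
    assert block_mapping(1200) == (11, (1200, 1215))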

Miss Rate vs. Block Size

[Figure: miss rate as a function of block size, with one curve per cache size]

Designing the Memory System to Support Caches

Increase memory bandwidth to support larger block sizes. Assume 1 cycle to send the address, 15 cycles for each access initiated, and 1 cycle to send a word of data, with a block size of 4 words:

- one-word-wide memory: 1 + 4 × (15 + 1) = 65 cycles
- four-word-wide memory: 1 + 15 + 1 = 17 cycles
- interleaved memory (four one-word-wide banks): 1 + 15 + 4 × 1 = 20 cycles
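
A small sketch reproducing the three miss-penalty calculations (the organization names follow the Patterson & Hennessy text; all parameter values come from the slide):

    ADDR, ACCESS, XFER, WORDS = 1, 15, 1, 4

    # One-word-wide memory: each of the 4 words pays a full access + transfer.
    narrow = ADDR + WORDS * (ACCESS + XFER)        # 65 cycles

    # Four-word-wide memory: one access and one wide transfer fetch the block.
    wide = ADDR + ACCESS + XFER                    # 17 cycles

    # Interleaved memory: the bank accesses overlap; transfers are serialized.
    interleaved = ADDR + ACCESS + WORDS * XFER     # 20 cycles

    assert (narrow, wide, interleaved) == (65, 17, 20)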

Measuring Cache Performance

CPU time = (CPU execution clock cycles + Memory-stall clock cycles) × Clock cycle time

Memory-stall clock cycles = Read-stall cycles + Write-stall cycles

Read-stall cycles = Reads/program × Read miss rate × Read miss penalty

Write-stall cycles = (Writes/program × Write miss rate × Write miss penalty) + Write buffer stalls (assumes a write-through cache)

If write buffer stalls are negligible and the write and read miss penalties are equal (the cost of fetching the block from memory), this simplifies to:

Memory-stall clock cycles = Memory accesses/program × Miss rate × Miss penalty
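
The simplified formula as code (a sketch; the values in the example call are hypothetical placeholders, not from the slides):

    def memory_stall_cycles(accesses, miss_rate, miss_penalty):
        # Memory-stall clock cycles = accesses/program x miss rate x miss penalty
        return accesses * miss_rate * miss_penalty

    def cpu_time(exec_cycles, stall_cycles, cycle_time_ns):
        # CPU time = (execution cycles + memory-stall cycles) x clock cycle time
        return (exec_cycles + stall_cycles) * cycle_time_ns

    # Hypothetical program: 1M memory accesses, 3% miss rate, 40-cycle penalty.
    stalls = memory_stall_cycles(1_000_000, 0.03, 40)   # 1,200,000 stall cycles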

Calculation I

- Assume an instruction miss rate of 2% (gcc) and a data miss rate of 4%.
- Assume CPI = 2 (without stalls) and a miss penalty of 40 cycles.
- Assume 36% of instructions are loads/stores.
- What is the CPI with memory stalls?
- How much faster would a machine with a perfect cache run?
- What happens if the processor is made faster but the memory system stays the same (e.g., the base CPI is reduced to 1)?
- How does Amdahl's law come into play?
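
A sketch working through the calculation (the arithmetic follows the formulas on the previous slide; the answer values are computed here, not quoted from the slides):

    I_MISS, D_MISS, PENALTY, BASE_CPI, LS_FRACTION = 0.02, 0.04, 40, 2.0, 0.36

    # Per instruction: every instruction is fetched; 36% also access data.
    stall_cpi = I_MISS * PENALTY + LS_FRACTION * D_MISS * PENALTY  # 0.8 + 0.576
    cpi = BASE_CPI + stall_cpi                                     # 3.376

    speedup_perfect_cache = cpi / BASE_CPI                         # ~1.69x

    # With a faster processor (base CPI = 1), stalls dominate even more:
    cpi_fast = 1.0 + stall_cpi                                     # 2.376
    # Memory stalls are now ~58% of execution time, illustrating Amdahl's
    # law: speeding up only the CPU yields diminishing overall benefit.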

Calculation II

Suppose the performance of the machine in the previous example is improved by doubling the clock speed (main memory speed remains the same). Hint: since the clock rate is doubled and the memory speed remains the same, the miss penalty becomes twice as large (80 cycles). How much faster will the machine be, assuming the same miss rate as in the previous example?

Conclusion: relative cache penalties increase as the machine becomes faster.
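
Continuing the sketch from Calculation I (again, the numbers follow from the stated assumptions):

    I_MISS, D_MISS, LS_FRACTION, BASE_CPI = 0.02, 0.04, 0.36, 2.0
    stalls = lambda penalty: I_MISS * penalty + LS_FRACTION * D_MISS * penalty

    cpi_old = BASE_CPI + stalls(40)      # 3.376 (from Calculation I)
    cpi_new = BASE_CPI + stalls(80)      # 2 + 1.6 + 1.152 = 4.752

    # Execution-time ratio: the new machine's cycle is half as long.
    speedup = cpi_old / (cpi_new * 0.5)  # ~1.42x, well short of the 2x clock gain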

Reducing Cache Misses with a More Flexible Replacement Strategy

- In a direct-mapped cache, a block can go in exactly one place in the cache.
- In a fully associative cache, a block can go anywhere in the cache.
- A compromise is a set-associative cache, where a block can go into a fixed number of locations in the cache, within the set determined by:

    (Block number) mod (Number of sets in cache)

Example

Three small 4-word caches: direct mapped, two-way set associative, fully associative.

How many misses occur for the sequence of block addresses 0, 8, 0, 6, 8? How does this change with 8-word or 16-word caches?
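
A minimal simulator for counting these misses (a sketch; LRU replacement is assumed, matching the strategy named on the next slide):

    from collections import OrderedDict

    def count_misses(block_addresses, num_blocks, ways):
        num_sets = num_blocks // ways
        sets = [OrderedDict() for _ in range(num_sets)]  # each set kept in LRU order
        misses = 0
        for block in block_addresses:
            s = sets[block % num_sets]
            if block in s:
                s.move_to_end(block)       # hit: mark as most recently used
            else:
                misses += 1
                if len(s) == ways:
                    s.popitem(last=False)  # evict the least recently used block
                s[block] = True
        return misses

    trace = [0, 8, 0, 6, 8]
    # 4-block caches: 5 misses direct mapped, 4 two-way, 3 fully associative.
    print(count_misses(trace, 4, 1), count_misses(trace, 4, 2), count_misses(trace, 4, 4))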

Locating a Block in Cache

- Check the tag of every cache block in the appropriate set.
- The address consists of 3 parts: tag, set index, and block offset.
- Replacement strategy: e.g., Least Recently Used (LRU).
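
A sketch of splitting an address into the three parts (the field widths here are assumptions chosen for illustration):

    OFFSET_BITS = 2   # assumed: 4 bytes per block
    INDEX_BITS = 3    # assumed: 8 sets

    def split_address(addr):
        offset = addr & ((1 << OFFSET_BITS) - 1)
        index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
        tag = addr >> (OFFSET_BITS + INDEX_BITS)
        return tag, index, offset

    # Address 0b1101_011_10 -> tag 0b1101, set index 0b011, offset 0b10
    assert split_address(0b110101110) == (0b1101, 0b011, 0b10)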

Size of Tags vs. Associativity

Increasing associativity requires more comparators, as well as more tag bits per cache block.

Assume a cache with 4K blocks and 32-bit addresses. Find the total number of sets and the total number of tag bits for:
- a direct-mapped cache
- a two-way set-associative cache
- a four-way set-associative cache
- a fully associative cache
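
A sketch of the computation (the four-word block size is an assumption; the slide gives only the block count and the address width):

    import math

    ADDRESS_BITS = 32
    NUM_BLOCKS = 4 * 1024
    OFFSET_BITS = 4                     # assumed four-word (16-byte) blocks

    def sets_and_tag_bits(ways):
        num_sets = NUM_BLOCKS // ways
        index_bits = int(math.log2(num_sets))
        tag_bits = ADDRESS_BITS - OFFSET_BITS - index_bits
        return num_sets, tag_bits * NUM_BLOCKS   # total tag bits over all blocks

    for ways in (1, 2, 4, NUM_BLOCKS):  # direct mapped, 2-way, 4-way, fully associative
        sets, total = sets_and_tag_bits(ways)
        print(f"{ways:>4}-way: {sets:>4} sets, {total // 1024} Kbits of tags")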

Reducing the Miss Penalty Using Multilevel Caches

- To further reduce the gap between fast CPU clock rates and the relatively long time needed to access memory, additional levels of cache are used (level-two and level-three caches).
- The design of the primary cache is driven by the goal of a fast hit time, which implies a relatively small size.
- A secondary cache is provided to reduce the miss rate and the penalty of going all the way to memory.

Example:
- Assume CPI = 1 (with no misses) and a 500 MHz clock.
- 200 ns main memory access time.
- 5% miss rate for the primary cache.
- Secondary cache with 20 ns access time and a miss rate of 2%.
- What is the total CPI with and without the secondary cache? How much of an improvement does the secondary cache provide?
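
A sketch of the two-level calculation (the cycle counts are derived from the stated parameters; the 2% is treated as the fraction of all accesses that must still go to main memory):

    CYCLE_NS = 2.0                       # 500 MHz clock => 2 ns per cycle
    MEM_PENALTY = 200 / CYCLE_NS         # 100 cycles to main memory
    L2_PENALTY = 20 / CYCLE_NS           # 10 cycles to the secondary cache

    cpi_no_l2 = 1 + 0.05 * MEM_PENALTY                        # 1 + 5.0 = 6.0
    cpi_with_l2 = 1 + 0.05 * L2_PENALTY + 0.02 * MEM_PENALTY  # 1 + 0.5 + 2.0 = 3.5

    print(f"improvement: {cpi_no_l2 / cpi_with_l2:.2f}x")     # ~1.71x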