Lecture 14: Memory Hierarchy and Cache Design
Prof. Mike Schulte
Computer Architecture, ECE 201

The Big Picture: Where Are We Now? The five classic components of a computer are the processor (control and datapath), memory, input, and output. Memory is usually implemented as:
–Dynamic Random Access Memory (DRAM) - for main memory
–Static Random Access Memory (SRAM) - for cache

Technology Trends (from 1st lecture)

DRAM
Year    Size      Cycle Time
1980    64 Kb     250 ns
1983    256 Kb    220 ns
1986    1 Mb      190 ns
1989    4 Mb      165 ns
1992    16 Mb     145 ns
1995    64 Mb     120 ns
1998    256 Mb    100 ns
2001    1 Gb      80 ns

         Capacity          Speed (latency)
Logic:   2x in 3 years     2x in 3 years
DRAM:    4x in 3 years     2x in 10 years
Disk:    4x in 3 years     2x in 10 years
         1000:1!           2:1!

Who Cares About Memory? [Figure: Processor-DRAM memory gap (latency). Performance vs. time over the 1980s and 1990s: µProc improves 60%/yr. (2X/1.5yr) - "Moore's Law" - while DRAM improves only 9%/yr. (2X/10 yrs), so the processor-memory performance gap grows 50% / year.]

Today's Situation: Microprocessors rely on caches to bridge the gap. A cache is a high-speed memory between the processor and main memory. Microprocessor-DRAM performance gap
–time of a full cache miss, in instructions executed:
1st Alpha (7000): 340 ns / 5.0 ns = 68 clks x 2 or 136 instructions
2nd Alpha (8400): 266 ns / 3.3 ns = 80 clks x 4 or 320 instructions
3rd Alpha (t.b.d.): 180 ns / 1.7 ns = 108 clks x 6 or 648 instructions
–1/2X latency x 3X clock rate x 3X Instr/clock => ~5X
1980: no cache in µproc; 1995: 2-level cache, 60% of transistors on the Alpha µproc
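As a back-of-the-envelope check on the numbers above, here is a minimal sketch (the helper name miss_cost_instructions is hypothetical, not from the slides):

```c
#include <stdio.h>

/* Cost of a full cache miss, measured in issued instructions:
 * (miss latency / cycle time) clocks, times instructions issued per clock. */
static double miss_cost_instructions(double miss_ns, double cycle_ns,
                                     double instr_per_clock) {
    return (miss_ns / cycle_ns) * instr_per_clock;
}

int main(void) {
    /* 1st Alpha: 340 ns miss, 5.0 ns cycle, 2 instructions/clock */
    printf("%.0f instructions\n", miss_cost_instructions(340.0, 5.0, 2.0)); /* 136 */
    return 0;
}
```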

An Expanded View of the Memory System The processor (control + datapath) sits atop a hierarchy of memories, ordered from the level nearest the processor to the level furthest away:
–Speed: fastest to slowest
–Size: smallest to biggest
–Cost (per byte): highest to lowest

Memory Hierarchy: How Does it Work? Temporal Locality (Locality in Time): => keep most recently accessed data items closer to the processor. Spatial Locality (Locality in Space): => move blocks consisting of contiguous words to the upper levels. [Figure: blocks Blk X and Blk Y being transferred between the upper level memory (to/from the processor) and the lower level memory.]

Memory Hierarchy: Terminology Hit: data appears in some block in the upper level (example: Block X)
–Hit Rate: the fraction of memory accesses found in the upper level
–Hit Time: time to access the upper level, which consists of RAM access time + time to determine hit/miss
Miss: data needs to be retrieved from a block in the lower level (example: Block Y)
–Miss Rate = 1 - (Hit Rate)
–Miss Penalty: time to replace a block in the upper level + time to deliver the block to the processor
Hit Time << Miss Penalty

Memory Hierarchy of a Modern Computer System By taking advantage of the principle of locality:
–Present the user with as much memory as is available in the cheapest technology.
–Provide access at the speed offered by the fastest technology.
Levels, fastest/smallest to slowest/biggest: Registers -> On-Chip Cache -> Second Level Cache (SRAM) -> Main Memory (DRAM) -> Secondary Storage (Disk) -> Tertiary Storage (Tape). Speed runs from 1s of ns (registers) through 10s-100s of ns (caches and DRAM) to 10,000,000s of ns (10s of ms, disk) and 10,000,000,000s of ns (10s of sec, tape); size runs from 100s of bytes (registers) through Ks-Ms (caches and DRAM) to Gs (disk) and Ts (tape).

How is the hierarchy managed?
–Registers <-> memory: by the compiler (programmer?)
–Cache <-> memory: by the hardware
–Memory <-> disks: by the hardware and operating system (virtual memory), and by the programmer (files)

Memory Hierarchy Technology Random Access:
–"Random" is good: access time is the same for all locations
–DRAM: Dynamic Random Access Memory
»High density, low power, cheap, slow
»Dynamic: needs to be "refreshed" regularly
–SRAM: Static Random Access Memory
»Low density, high power, expensive, fast
»Static: content will last "forever" (until power is lost)
"Not-so-random" Access Technology:
–Access time varies from location to location and from time to time
–Examples: disk, CD-ROM
Sequential Access Technology: access time linear in location (e.g., tape)

General Principles of Memory Locality
–Temporal Locality: referenced memory is likely to be referenced again soon (e.g., code within a loop)
–Spatial Locality: memory close to referenced memory is likely to be referenced soon (e.g., data in a sequentially accessed array; see the sketch below)
Definitions
–Upper: memory closer to the processor
–Block: minimum unit that is present or not present
–Block address: location of the block in memory
–Hit: data is found in the desired location
–Hit time: time to access the upper level
–Miss rate: percentage of time an item is not found in the upper level
Locality + smaller HW is faster = memory hierarchy
–Levels: each smaller, faster, and more expensive/byte than the level below
–Inclusive: data found in the upper level is also found in the lower level
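As a concrete illustration (a minimal sketch, not from the slides), the loop below exhibits both kinds of locality: the accumulator and the loop code are reused every iteration (temporal), while the array is walked through consecutive addresses, so each fetched cache block is fully used (spatial):

```c
#include <stddef.h>

/* Sums an array in address order.
 * Temporal locality: `sum`, `i`, and the loop instructions are
 * touched on every iteration, so they stay in the upper levels.
 * Spatial locality: a[i] and a[i+1] are adjacent in memory, so one
 * cache block fetch services several consecutive iterations. */
double sum_array(const double *a, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += a[i];
    return sum;
}
```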

Memory Hierarchy From the upper level (closest to the processor) to the lower level: Processor -> Registers (D flip-flops) -> L1 Cache (SRAM) -> L2 Cache (SRAM) -> Main Memory (DRAM) -> Disks (magnetic; secondary storage)

Differences in Memory Levels [table comparing the levels was not captured in the transcript]

Four Questions for Memory Hierarchy Designers
Q1: Where can a block be placed in the upper level? (Block placement)
Q2: How is a block found if it is in the upper level? (Block identification)
Q3: Which block should be replaced on a miss? (Block replacement)
Q4: What happens on a write? (Write strategy)

Q1: Where can a block be placed? Direct mapped: each block has only one place that it can appear in the cache. Fully associative: each block can be placed anywhere in the cache. Set associative: each block can be placed in a restricted set of places in the cache.
–If there are n blocks in a set, the cache placement is called n-way set associative.
What is the associativity of a direct mapped cache?

Associativity Examples Cache size is 8 blocks. Where does block 12 from memory go?
Fully associative: block 12 can go anywhere.
Direct mapped: block no. = (block address) mod (no. of blocks in cache). Block 12 can go only into block 4 (12 mod 8 = 4) => access the block using the lower 3 bits.
2-way set associative: set no. = (block address) mod (no. of sets in cache). Block 12 can go anywhere in set 0 (12 mod 4 = 0) => access the set using the lower 2 bits.
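A minimal sketch of the placement arithmetic above for the 8-block cache (names are hypothetical):

```c
#include <stdio.h>

/* Where may a memory block go in an 8-block cache? */
#define NUM_BLOCKS 8

int main(void) {
    unsigned block_addr = 12;

    /* Direct mapped: exactly one candidate frame. */
    unsigned frame = block_addr % NUM_BLOCKS;       /* 12 mod 8 = 4 */

    /* 2-way set associative: 8 blocks / 2 ways = 4 sets. */
    unsigned set = block_addr % (NUM_BLOCKS / 2);   /* 12 mod 4 = 0 */

    printf("direct mapped: frame %u\n", frame);
    printf("2-way set associative: set %u (either way)\n", set);
    /* Fully associative: any of the 8 frames. */
    return 0;
}
```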

Q2: How Is a Block Found? The address can be divided into two main parts:
–Block offset: selects the data from the block; offset size = log2(block size)
–Block address: tag + index
»index: selects the set in the cache; index size = log2(#blocks/associativity)
»tag: compared to the tag in the cache to determine a hit; tag size = address size - index size - offset size
Each block has a valid bit that tells if the block is valid - the block is in the cache if the tags match and the valid bit is set. [Address layout: tag | index | block offset]
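The field widths follow directly from the cache geometry. A minimal sketch, assuming power-of-two sizes (function names are hypothetical); the two calls match the Alpha caches on the following slides:

```c
#include <stdio.h>

/* Integer log2 for power-of-two values. */
static unsigned ilog2(unsigned x) {
    unsigned r = 0;
    while (x >>= 1) r++;
    return r;
}

/* Compute offset/index/tag widths from cache geometry. */
static void field_sizes(unsigned addr_bits, unsigned cache_bytes,
                        unsigned block_bytes, unsigned assoc) {
    unsigned blocks = cache_bytes / block_bytes;
    unsigned offset = ilog2(block_bytes);          /* log2(block size)            */
    unsigned index  = ilog2(blocks / assoc);       /* log2(#blocks/associativity) */
    unsigned tag    = addr_bits - index - offset;  /* whatever bits remain        */
    printf("offset=%u index=%u tag=%u\n", offset, index, tag);
}

int main(void) {
    field_sizes(34, 8 * 1024, 32, 1);  /* prints offset=5 index=8 tag=21 */
    field_sizes(34, 8 * 1024, 32, 2);  /* prints offset=5 index=7 tag=22 */
    return 0;
}
```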

Q3: Which Block Should be Replaced on a Miss? Easy for direct mapped - there is only one choice. Set associative or fully associative:
–Random - easier to implement
–Least Recently Used (LRU) - harder to implement
Miss rates for caches with different size, associativity, and replacement algorithm:

            2-way              4-way              8-way
Size        LRU      Random    LRU      Random    LRU      Random
16 KB       5.18%    5.69%     4.67%    5.29%     4.39%    4.96%
64 KB       1.88%    2.01%     1.54%    1.66%     1.39%    1.53%
256 KB      1.15%    1.17%     1.13%    1.13%     1.12%    1.12%

For caches with low miss rates, random is almost as good as LRU.
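For a 2-way set associative cache, LRU needs only one bit per set. A minimal sketch (hypothetical data structure, not from the slides):

```c
#define NUM_SETS 128

/* One LRU bit per set suffices for a 2-way set associative cache:
 * the bit names the least-recently-used way, i.e., the victim. */
static unsigned char lru_way[NUM_SETS];  /* each entry is 0 or 1 */

/* Update on every access that hits (or fills) `way` of `set`. */
void touch(unsigned set, unsigned way) {
    lru_way[set] = 1u - way;  /* the other way is now least recently used */
}

/* On a miss, pick the way to replace. */
unsigned victim(unsigned set) {
    return lru_way[set];
}
```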

Q4: What Happens on a Write? Write through: the information is written to both the block in the cache and to the block in the lower-level memory. Write back: the information is written only to the block in the cache; the modified cache block is written to main memory only when it is replaced.
–Is the block clean or dirty? (add a dirty bit to each block)
Pros and cons of each:
–Write through
»Read misses cannot result in writes to memory
»Easier to implement
»Always combine with write buffers to avoid waiting on memory latency
–Write back
»Less memory traffic
»Performs writes at the speed of the cache

Q4: What Happens on a Write? Since data does not have to be brought into the cache on a write miss, there are two options (sketched below):
–Write allocate
»The block is brought into the cache on a write miss
»Used with write-back caches
»Hope is that subsequent writes to the block hit in the cache
–No-write allocate
»The block is modified in memory, but not brought into the cache
»Used with write-through caches
»Writes have to go to memory anyway, so why bring the block into the cache?
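A minimal sketch of the two policy pairings above (all names hypothetical; the lookup/fill/write functions are trivial stand-ins for hardware actions):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Trivial stand-ins for real hardware actions. */
static bool cache_lookup(uint32_t addr)            { (void)addr; return false; }
static void cache_fill(uint32_t addr)              { printf("fill block of 0x%x\n", addr); }
static void cache_write(uint32_t addr, uint32_t v) { printf("cache write 0x%x=%u\n", addr, v); }
static void mem_write(uint32_t addr, uint32_t v)   { printf("mem write 0x%x=%u\n", addr, v); }

/* Write-back cache with write allocate. */
static void store_write_back(uint32_t addr, uint32_t v) {
    if (!cache_lookup(addr))
        cache_fill(addr);     /* write allocate: bring the block into the cache */
    cache_write(addr, v);     /* block becomes dirty; memory updated on eviction */
}

/* Write-through cache with no-write allocate. */
static void store_write_through(uint32_t addr, uint32_t v) {
    if (cache_lookup(addr))
        cache_write(addr, v); /* keep the cached copy consistent on a hit */
    mem_write(addr, v);       /* every store goes to memory (via a write buffer) */
}

int main(void) {
    store_write_back(0x1000, 42);
    store_write_through(0x2000, 7);
    return 0;
}
```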

Example: Alpha Data Cache The data cache of the Alpha has the following features:
–8 KB of data
–32-byte blocks
–Direct-mapped placement
–Write through (no-write allocate, 4-block write buffer)
–34-bit physical address composed of
»5-bit block offset
»8-bit index
»21-bit tag
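A minimal sketch of how such a 34-bit physical address splits into the three fields (the example address is arbitrary):

```c
#include <stdint.h>
#include <stdio.h>

/* 34-bit physical address: | 21-bit tag | 8-bit index | 5-bit offset | */
#define OFFSET_BITS 5
#define INDEX_BITS  8

int main(void) {
    uint64_t addr = 0x2ABCDE428ULL;  /* arbitrary 34-bit example address */

    uint64_t offset = addr & ((1ULL << OFFSET_BITS) - 1);
    uint64_t index  = (addr >> OFFSET_BITS) & ((1ULL << INDEX_BITS) - 1);
    uint64_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);

    /* The index selects one of 256 blocks; the tag is compared against the
     * stored tag of that block (along with the valid bit). */
    printf("tag=0x%llx index=%llu offset=%llu\n",
           (unsigned long long)tag, (unsigned long long)index,
           (unsigned long long)offset);
    return 0;
}
```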

Example: Alpha Data Cache A cache read has 4 steps:
(1) The address from the processor is divided into the tag, index, and block offset
(2) The index selects the block
(3) The address tag is compared with the tag in the cache, the valid bit is checked, and the data to be loaded is selected
(4) If the valid bit is set, the data is loaded into the processor
If there is a write, the data is also sent to the write buffer.

2-Way Set Associative Cache Features of an 8 KB 2-way set associative cache (34-bit physical address, 32-byte blocks):
–5-bit block offset
–7-bit index (256 blocks / 2 ways = 128 sets)
–22-bit tag
The set associative cache has extra hardware for:
–2 tag comparisons
–a mux to select the data

Calculating Bits in Cache How many total bits are needed for a direct-mapped cache with 64 KBytes of data and one-word blocks, assuming a 32-bit address?
–64 KBytes = 16 K words = 2^14 words = 2^14 blocks
–block size = 4 bytes => offset size = 2 bits
–#sets = #blocks = 2^14 => index size = 14 bits
–tag size = address size - index size - offset size = 32 - 14 - 2 = 16 bits
–bits/block = data bits + tag bits + valid bit = 32 + 16 + 1 = 49
–bits in cache = #blocks x bits/block = 2^14 x 49 bits = 98 KBytes
How many total bits would be needed for a 4-way set associative cache to store the same amount of data?
–block size and #blocks do not change
–#sets = #blocks/4 = (2^14)/4 = 2^12 => index size = 12 bits
–tag size = address size - index size - offset size = 32 - 12 - 2 = 18 bits
–bits/block = data bits + tag bits + valid bit = 32 + 18 + 1 = 51
–bits in cache = #blocks x bits/block = 2^14 x 51 bits = 102 KBytes
Increasing associativity => increases bits in cache

Calculating Bits in Cache How many total bits are needed for a direct-mapped cache with 64 KBytes of data and 8-word blocks, assuming a 32-bit address?
–64 KBytes = 2^14 words = (2^14)/8 = 2^11 blocks
–block size = 32 bytes => offset size = 5 bits
–#sets = #blocks = 2^11 => index size = 11 bits
–tag size = address size - index size - offset size = 32 - 11 - 5 = 16 bits
–bits/block = data bits + tag bits + valid bit = 8x32 + 16 + 1 = 273 bits
–bits in cache = #blocks x bits/block = 2^11 x 273 bits = 68.25 KBytes
Increasing block size => decreases bits in cache (see the check below)
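These calculations generalize. A minimal sketch (function names are hypothetical) that reproduces all three results above:

```c
#include <stdio.h>

static unsigned ilog2u(unsigned x) { unsigned r = 0; while (x >>= 1) r++; return r; }

/* Total storage (data + tag + valid) of a cache, in bits. */
static unsigned long cache_bits(unsigned addr_bits, unsigned data_bytes,
                                unsigned block_bytes, unsigned assoc) {
    unsigned long blocks = data_bytes / block_bytes;
    unsigned offset = ilog2u(block_bytes);
    unsigned index  = ilog2u((unsigned)(blocks / assoc));
    unsigned tag    = addr_bits - index - offset;
    unsigned long bits_per_block = 8UL * block_bytes + tag + 1; /* data+tag+valid */
    return blocks * bits_per_block;
}

int main(void) {
    /* 64 KB, 1-word (4-byte) blocks, direct mapped: 98 KBytes */
    printf("%.2f KBytes\n", cache_bits(32, 64*1024, 4, 1) / 8.0 / 1024.0);
    /* 64 KB, 1-word blocks, 4-way set associative: 102 KBytes */
    printf("%.2f KBytes\n", cache_bits(32, 64*1024, 4, 4) / 8.0 / 1024.0);
    /* 64 KB, 8-word (32-byte) blocks, direct mapped: 68.25 KBytes */
    printf("%.2f KBytes\n", cache_bits(32, 64*1024, 32, 1) / 8.0 / 1024.0);
    return 0;
}
```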

Summary The CPU-memory gap is a major obstacle to achieving high performance. Memory hierarchies
–Take advantage of program locality
–Closer to the processor => smaller, faster, more expensive
–Further from the processor => bigger, slower, less expensive
4 questions for memory hierarchy design
–Block placement, block identification, block replacement, and write strategy
Cache parameters
–Cache size, block size, associativity