Chapter Seven. Large and Fast: Exploiting Memory Hierarchy

Presentation transcript:

1 Chapter Seven

2 Users want large and fast memories!
SRAM access times are 2 - 25 ns at a cost of $100 to $250 per Mbyte.
DRAM access times are 60 - 120 ns at a cost of $5 to $10 per Mbyte.
Disk access times are 10 to 20 million ns at a cost of $.10 to $.20 per Mbyte.
Try and give it to them anyway: build a memory hierarchy.
Exploiting Memory Hierarchy (1997 prices)

3 Locality
A principle that makes having a memory hierarchy a good idea. If an item is referenced:
– temporal locality: it will tend to be referenced again soon
– spatial locality: nearby items will tend to be referenced soon
Why does code have locality?
Our initial focus: two levels (upper, lower)
– block: the minimum unit of data moved to the cache
– hit: the data requested is in the upper level
– miss: the data requested is not in the upper level
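To see why typical code has locality, consider a hypothetical fragment (ours, not from the slides): the running sum and the loop instructions are reused on every iteration (temporal locality), and the array is read at consecutive addresses (spatial locality).

/* Illustrative sketch: the locality in an ordinary loop. */
#include <stddef.h>

long sum_array(const long *a, size_t n) {
    long sum = 0;                /* reused every iteration: temporal locality */
    for (size_t i = 0; i < n; i++)
        sum += a[i];             /* consecutive addresses: spatial locality */
    return sum;                  /* the loop body itself is re-fetched each
                                    iteration: temporal locality in code */
}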


5 Cache
Two issues:
– How do we know if a data item is in the cache?
– If it is, how do we find it?
Our first example:
– block size is one word of data
– "direct mapped"
For each item of data at the lower level, there is exactly one location in the cache where it might be, i.e., lots of items at the lower level share locations in the upper level.

6 Cache memory
The level of the memory hierarchy between main memory and the CPU. Cache is constructed from very fast memory; it would be too expensive to construct main memory from this very fast memory. Cache memory is quite common.
Note: the term cache line is often used instead of block.
The address is broken into tag, index, and byte offset fields. The index is used to select the block in the cache. The tag is compared with the value of the tag field of the cache block indexed. The offset is used to select a particular word in the cache.
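A minimal sketch of that address breakdown, assuming a 32-bit address and the configuration of the worked example on slide 9 (64 lines of 32 bytes; other sizes would change the field widths):

/* Sketch: splitting a 32-bit address into tag / index / byte offset
   for a direct-mapped cache with 64 lines of 32 bytes (assumed sizes). */
#include <stdint.h>

#define OFFSET_BITS 5   /* 2^5 = 32 bytes per line */
#define INDEX_BITS  6   /* 2^6 = 64 lines          */

static inline uint32_t offset_of(uint32_t addr) {
    return addr & ((1u << OFFSET_BITS) - 1);
}
static inline uint32_t index_of(uint32_t addr) {
    return (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
}
static inline uint32_t tag_of(uint32_t addr) {
    return addr >> (OFFSET_BITS + INDEX_BITS);  /* the remaining 21 bits */
}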

7 Direct Mapped Cache

8 Direct Mapped Cache
Mapping: the cache index is the block address modulo the number of blocks in the cache.
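A tiny numeric illustration (the 8-block cache size is an assumption to match the textbook figure): block addresses 1, 9, 17, and 25 are all congruent to 1 modulo 8, so they all compete for cache block 1.

/* Sketch: direct-mapped placement in an assumed 8-block cache. */
#include <stdio.h>

int main(void) {
    const unsigned nblocks = 8;
    const unsigned block_addr[] = {1, 9, 17, 25};
    for (int i = 0; i < 4; i++)
        printf("memory block %2u -> cache block %u\n",
               block_addr[i], block_addr[i] % nblocks);
    return 0;   /* all four land in cache block 1: they collide */
}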

9 Direct Mapped Cache: taking advantage of spatial locality.
Worked example (a): The instruction cache is a direct mapped cache and contains 64 lines. Each line contains 8 instructions. How many address bits are required for the tag?
Answer: 64 lines require 6 bits for the index (2^6 = 64). 4 x 8 = 32 = 2^5 bytes in a line requires 5 bits for the offset. The number of bits for the tag = 32 - 6 - 5 = 21 bits.
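The same arithmetic as a small self-checking sketch (the 32-bit addresses and 4-byte MIPS instructions are the example's own assumptions):

/* Sketch: tag width = address bits - index bits - offset bits. */
#include <assert.h>

static int log2_exact(int n) {   /* n is assumed to be a power of two */
    int bits = 0;
    while ((1 << bits) < n) bits++;
    return bits;
}

int main(void) {
    int index_bits  = log2_exact(64);      /* 64 lines      -> 6 bits */
    int offset_bits = log2_exact(4 * 8);   /* 8 instr x 4 B -> 5 bits */
    int tag_bits    = 32 - index_bits - offset_bits;
    assert(tag_bits == 21);                /* matches the slide's answer */
    return 0;
}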

19 Homework
A certain MIPS computer has a separate cache for instructions and data.
(a) The instruction cache is a direct mapped cache and contains 64 lines. Each line contains 8 instructions. How many address bits are required for the tag?
(b) The data cache is four-way set associative and can hold a maximum of 4096 bytes of data. If four address bits are required for the index, how many bits are used for the tag?
(c) Would it be possible to build the instruction cache such that the most significant bits of the address are used for the index instead of being used for the tag?
(d) Explain either why this cannot be done, or, if it can be done, why it is not done.

20 Hits vs. Misses
Read hits:
– this is what we want!
Read misses:
– stall the CPU, fetch the block from memory, deliver it to the cache, restart
Write hits:
– can replace the data in both cache and memory (write-through)
– or write the data only into the cache and write it back to memory later (write-back)
Write misses:
– read the entire block into the cache, then write the word
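A minimal sketch of the read path through one direct-mapped cache line, with the two write policies noted in comments. The line count, line size, and the memory_read_block helper are assumptions for illustration, not any particular machine's design.

/* Sketch: a direct-mapped cache read with assumed sizes
   (64 lines of 32 bytes) and an assumed memory_read_block helper. */
#include <stdint.h>

#define NLINES     64
#define LINE_BYTES 32

struct line {
    int      valid;
    int      dirty;              /* used only by a write-back cache */
    uint32_t tag;
    uint8_t  data[LINE_BYTES];
};

static struct line cache[NLINES];

/* hypothetical helper: copies one block from main memory into buf */
extern void memory_read_block(uint32_t block_addr, uint8_t *buf);

uint8_t read_byte(uint32_t addr) {
    uint32_t offset = addr % LINE_BYTES;
    uint32_t index  = (addr / LINE_BYTES) % NLINES;
    uint32_t tag    = addr / (LINE_BYTES * NLINES);
    struct line *l  = &cache[index];

    if (!l->valid || l->tag != tag) {              /* read miss: stall,  */
        /* a write-back cache would first write the old line back here   */
        memory_read_block(addr - offset, l->data); /* fetch the block,   */
        l->valid = 1;                              /* deliver to cache,  */
        l->tag   = tag;
        l->dirty = 0;
    }
    return l->data[offset];                        /* ...and restart     */
}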

21 Hardware Issues
Make reading multiple words easier by using banks of memory.
It can get a lot more complicated...
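A back-of-the-envelope sketch of why banks help (all three timing parameters below are assumptions for illustration): with one-word-wide memory a 4-word block pays the full access latency per word, while four interleaved banks overlap their accesses and pay it only once.

/* Sketch: fetching a 4-word block, assumed timings: 1 cycle to send
   the address, 15 cycles per DRAM access, 1 cycle per word transfer. */
#include <stdio.h>

int main(void) {
    int words = 4, addr = 1, access = 15, xfer = 1;

    int one_word_wide = addr + words * (access + xfer); /* 1 + 4*16 = 65 */
    int four_banks    = addr + access + words * xfer;   /* accesses overlap:
                                                           1 + 15 + 4 = 20 */
    printf("one-word-wide memory: %d cycles\n", one_word_wide);
    printf("4 interleaved banks:  %d cycles\n", four_banks);
    return 0;
}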

22 Performance
Increasing the block size tends to decrease the miss rate.
Use split instruction and data caches, because there is more spatial locality in code.

23 Performance
Simplified model:
execution time = (execution cycles + stall cycles) x cycle time
stall cycles = # of instructions x miss ratio x miss penalty (in cycles)
Two ways of improving performance:
– decreasing the miss ratio
– decreasing the miss penalty
What happens if we increase block size?
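Plugging assumed numbers into the model (none of these figures are from the slides; they only show the formula in action):

/* Sketch: the simplified performance model with assumed numbers. */
#include <stdio.h>

int main(void) {
    double instructions = 1e9;    /* assumed program size            */
    double exec_cpi     = 1.0;    /* assumed base CPI                */
    double miss_ratio   = 0.05;   /* assumed: 5% of references miss  */
    double miss_penalty = 100.0;  /* assumed: 100 cycles per miss    */
    double cycle_time   = 2e-9;   /* assumed: 500 MHz clock          */

    double exec_cycles  = instructions * exec_cpi;
    double stall_cycles = instructions * miss_ratio * miss_penalty;
    double seconds = (exec_cycles + stall_cycles) * cycle_time;

    printf("execution time: %.1f s\n", seconds); /* (1e9 + 5e9) x 2 ns = 12 s */
    return 0;
}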

24 Decreasing miss ratio with associativity
Compared to direct mapped, give a series of references that:
– results in a lower miss ratio using a 2-way set associative cache
– results in a higher miss ratio using a 2-way set associative cache
assuming we use the "least recently used" replacement strategy.
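One possible answer, checked by a small simulation over an assumed 4-block cache (the exercise does not fix a size): alternating block addresses 0, 4, 0, 4, ... thrash a direct mapped cache, since both map to block 0, but coexist in one set of the 2-way cache; whereas 0, 2, 4, 0, 2, 4, ... makes three blocks compete for one 2-way set, so LRU evicts the next victim every time and the 2-way cache misses on every reference, while the direct mapped cache at least keeps block 2.

/* Sketch: comparing direct-mapped vs. 2-way LRU on an assumed
   4-block cache (references are already block addresses). */
#include <stdio.h>
#include <string.h>

static int dm_tag[4], dm_valid[4];         /* direct mapped: 4 blocks    */
static int sa_tag[2][2], sa_valid[2][2];   /* 2-way: 2 sets, way 1 = MRU */

static int dm_access(int a) {              /* returns 1 on hit */
    int i = a % 4;
    if (dm_valid[i] && dm_tag[i] == a) return 1;
    dm_valid[i] = 1; dm_tag[i] = a;        /* miss: fill */
    return 0;
}

static int sa_access(int a) {
    int s = a % 2;
    if (sa_valid[s][1] && sa_tag[s][1] == a) return 1;   /* hit on MRU way */
    if (sa_valid[s][0] && sa_tag[s][0] == a) {           /* hit on LRU way */
        int t = sa_tag[s][0];                            /* promote to MRU */
        sa_tag[s][0] = sa_tag[s][1]; sa_valid[s][0] = sa_valid[s][1];
        sa_tag[s][1] = t; sa_valid[s][1] = 1;
        return 1;
    }
    sa_tag[s][0] = sa_tag[s][1]; sa_valid[s][0] = sa_valid[s][1]; /* evict LRU */
    sa_tag[s][1] = a; sa_valid[s][1] = 1;
    return 0;
}

static void run(const char *name, const int *seq, int n) {
    int dm = 0, sa = 0;
    memset(dm_valid, 0, sizeof dm_valid);  /* fresh caches per series */
    memset(sa_valid, 0, sizeof sa_valid);
    for (int i = 0; i < n; i++) {
        dm += !dm_access(seq[i]);
        sa += !sa_access(seq[i]);
    }
    printf("%-12s direct-mapped misses %d/%d, 2-way LRU misses %d/%d\n",
           name, dm, n, sa, n);
}

int main(void) {
    int favors_2way[]   = {0, 4, 0, 4, 0, 4, 0, 4};
    int favors_direct[] = {0, 2, 4, 0, 2, 4, 0, 2, 4};
    run("0,4,0,4,...", favors_2way, 8);    /* 8/8 vs 2/8 misses */
    run("0,2,4,...",   favors_direct, 9);  /* 7/9 vs 9/9 misses */
    return 0;
}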

25 An implementation
Worked example (b): The data cache is four-way set associative and can hold a maximum of 4096 bytes of data. If four address bits are required for the index, how many bits are used for the tag?
Answer: Each of the four ways contains 4096/4 = 1024 = 2^10 bytes of data. 4 index bits implies 2^4 lines in each way. Thus, there are 2^10 / 2^4 = 2^6 bytes in each line, requiring 6 bits for the offset. The number of tag bits = 32 - 4 - 6 = 22.

26 Performance

27 Decreasing miss penalty with multilevel caches
Add a second level cache:
– often the primary cache is on the same chip as the processor
– use SRAMs to add another cache above primary memory (DRAM)
– the miss penalty goes down if the data is in the 2nd level cache
Example:
– CPI of 1.0 on a 500 MHz machine with a 5% miss rate and 200 ns DRAM access
– adding a 2nd level cache with 20 ns access time decreases the miss rate to 2%
Using multilevel caches:
– try to optimize the hit time on the 1st level cache
– try to optimize the miss rate on the 2nd level cache
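Working the example through (the numbers are the slide's; the bookkeeping, where every primary miss pays the 10-cycle second-level access and the 2% that also miss in the second level pay the full 100-cycle DRAM penalty, is our reading of it):

/* Sketch: the slide's two-level cache example worked through. */
#include <stdio.h>

int main(void) {
    double cycle_ns  = 1000.0 / 500.0;    /* 500 MHz -> 2 ns per cycle */
    double dram_pen  = 200.0 / cycle_ns;  /* 200 ns DRAM -> 100 cycles */
    double l2_pen    = 20.0 / cycle_ns;   /* 20 ns L2    -> 10 cycles  */

    double cpi_no_l2 = 1.0 + 0.05 * dram_pen;             /* 1 + 5 = 6.0 */
    double cpi_l2    = 1.0 + 0.05 * l2_pen + 0.02 * dram_pen;
                                          /* 1 + 0.5 + 2.0 = 3.5         */
    printf("CPI without L2: %.1f\n", cpi_no_l2);
    printf("CPI with L2:    %.1f  (%.1fx faster)\n",
           cpi_l2, cpi_no_l2 / cpi_l2);   /* about 1.7x                  */
    return 0;
}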

28 Cache Summary
Processor performance has increased faster than memory performance; cache memory helps resolve this problem.
When a word is not found in the cache, multiple words (called a block or line) are moved from main memory to cache memory.
Each cache line contains a tag to identify the main memory address of the line.
Set associative caches are used to approximate a fully associative cache memory.
Average memory access time = Hit time + Miss rate x Miss penalty
– Modern processors may execute other instructions when a miss occurs, to reduce the effect of the miss.