Caches The principle that states that if data is used, its neighbor will likely be used soon.

Caches The principle that states that if data is used, its neighbor will likely be used soon: Spatial Locality

Caches The time it takes a cache to receive the data from the lower level of memory: Miss Penalty

Caches The principle that states that if data is used, it will likely be used again soon: Temporal Locality
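The two locality principles can be seen in almost any loop. A minimal Python sketch (the function and variable names are illustrative, not from the slides):

```python
# Temporal locality: 'total' is touched on every iteration.
# Spatial locality: consecutive elements of each row sit next to
# each other in memory and are visited in order.

def sum_matrix_rows(matrix):
    """Sum all elements, visiting memory in row-major order."""
    total = 0
    for row in matrix:
        for value in row:
            total += value
    return total

print(sum_matrix_rows([[1, 2], [3, 4]]))  # 10
```

Traversing row-by-row (rather than column-by-column) is exactly the access pattern that a cache with large blocks rewards.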

Caches The time it takes to find out if an item is in the cache and return the data if it is: Access Time

Caches A cache configuration that requires multiple tag checks each access: Set-Associative (associativity of 2 or more)
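Why an associative cache needs multiple tag checks can be sketched in a few lines of Python. This is a toy model under assumed parameters (4 sets, 2 ways, FIFO replacement), not any particular hardware design:

```python
# Toy 2-way set-associative cache: each block maps to one set,
# but may live in either of the set's 2 ways, so a lookup must
# compare the tag against every way in that set.

NUM_SETS = 4
WAYS = 2

# cache[set_index] holds up to WAYS resident tags
cache = [[] for _ in range(NUM_SETS)]

def access(block_number):
    set_index = block_number % NUM_SETS
    tag = block_number // NUM_SETS
    ways = cache[set_index]
    if tag in ways:          # one comparison per way: 2+ tag checks
        return "hit"
    if len(ways) == WAYS:    # set full: evict the oldest block (FIFO)
        ways.pop(0)
    ways.append(tag)
    return "miss"

print(access(0))   # miss
print(access(0))   # hit
print(access(4))   # miss (same set as block 0, different tag)
print(access(0))   # hit  (block 0 survives in the other way)
```

In a direct-mapped cache (1 way) the last access would have been a miss: blocks 0 and 4 would evict each other. The second way absorbs that conflict, at the cost of an extra tag comparison per access.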

Caches A cache attribute that takes advantage of spatial locality: Large Block Size

Caches Cache attributes that decrease access time: Small Cache Size, Low Associativity

Caches Cache attributes that decrease miss penalty: Small Block Size, Multi-Level Caches

Caches Cache attributes that decrease miss rate: Large Cache Size, Large Block Size, High Associativity

Caches log2( CacheSize / (BlockSize * Associativity) ) = # bits in the Index

Caches log2( BlockSize / WordSize ) = # bits in the Block Offset

Caches log2( WordSize ) = # bits in the Byte Offset

Caches #address bits - log2( CacheSize / Associativity ) = # bits in the Tag
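The four formulas above partition an address exactly, since the tag, index, block-offset, and byte-offset bits must sum to the address width. A small Python sketch with a hypothetical configuration (32-bit addresses, 16 KB direct-mapped cache, 16-byte blocks, 4-byte words; the numbers are examples, not from the slides):

```python
from math import log2

def address_breakdown(address_bits, cache_size, block_size, word_size, assoc):
    """Apply the slide formulas; all sizes in bytes, powers of two assumed."""
    index_bits = int(log2(cache_size / (block_size * assoc)))
    block_offset_bits = int(log2(block_size / word_size))
    byte_offset_bits = int(log2(word_size))
    tag_bits = address_bits - int(log2(cache_size / assoc))
    return tag_bits, index_bits, block_offset_bits, byte_offset_bits

# Hypothetical example: 32-bit addresses, 16 KB direct-mapped cache,
# 16-byte blocks, 4-byte words.
print(address_breakdown(32, 16 * 1024, 16, 4, 1))  # (18, 10, 2, 2)
```

Note that 18 + 10 + 2 + 2 = 32: the tag formula works because log2(CacheSize / Associativity) is exactly the index, block-offset, and byte-offset bit counts combined.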