Memory Hierarchies Exercises [7.1-7.3] Describe the general characteristics of a program that would exhibit very little spatial or temporal locality.


Memory Hierarchies Exercises [7.1-7.3] Describe the general characteristics of a program that would exhibit, with regard to data accesses:
–very little spatial or temporal locality
–very little spatial but very high temporal locality
–very high spatial but very little temporal locality

Memory Hierarchies Exercises [ ] Describe the general characteristics of a program that would exhibit, with regard to instruction accesses:
–very little spatial or temporal locality
–very little spatial but very high temporal locality
–very high spatial but very little temporal locality
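As a hint for the data-access questions, the contrast can be sketched in code (this example is not from the slides; the function names and the stride value are illustrative):

```python
# Two toy access patterns over the same list, illustrating the
# kinds of data locality the exercise asks about.

N = 1 << 16
data = list(range(N))

def high_spatial_low_temporal(a):
    # Sequential streaming: every element is touched exactly once,
    # so spatial locality is high but there is no reuse (no temporal
    # locality).
    return sum(a)

def low_spatial_high_temporal(a):
    # A large stride defeats spatial locality, while a[0] is reused
    # on every iteration (high temporal locality on that one word).
    total = 0
    for i in range(0, len(a), 4096):
        total += a[i] + a[0]
    return total
```

A program with little locality of either kind would, by contrast, touch addresses that are neither near each other nor repeated, e.g. a random walk over a large array.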

Memory Hierarchies Exercises [7.7] Here is a series of address references given as word addresses: 1, 4, 8, 5, 20, 17, 19, 56, 9, 11, 4, 43, 5, 6, 9, 17. Assuming a direct-mapped cache of 16 one-word blocks that is initially empty, label each reference in the list as a hit or a miss and show the cache's final contents.

Memory Hierarchies Exercises [7.7] Slot = word address mod 16.

Reference:  1    4    8    5    20   17   19   56   9    11   4    43   5    6    9    17
Hit/miss:   miss miss miss miss miss miss miss miss miss miss miss miss hit  miss hit  hit

Final cache contents (asterisks mark values that were loaded and later overwritten):

Slot   Contents
0      -
1      17  (1*)
2      -
3      19
4      4   (4*, 20*)
5      5
6      6
7      -
8      56  (8*)
9      9
10     -
11     43  (11*)
12-15  -
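The trace for exercise 7.7 can be checked mechanically. A minimal simulator (not part of the slides) for a direct-mapped cache of one-word blocks:

```python
# Simulate a direct-mapped cache of 16 one-word blocks and replay
# the reference string from exercise 7.7.

def simulate(refs, num_blocks=16):
    cache = {}                    # slot -> word address currently stored
    outcomes = []
    for addr in refs:
        slot = addr % num_blocks  # index = word address mod number of blocks
        if cache.get(slot) == addr:
            outcomes.append("hit")
        else:
            outcomes.append("miss")
            cache[slot] = addr    # one-word block: store the word itself
    return outcomes, cache

refs = [1, 4, 8, 5, 20, 17, 19, 56, 9, 11, 4, 43, 5, 6, 9, 17]
outcomes, final = simulate(refs)
# 13 misses and 3 hits: only the repeated 5, 9, and 17 hit.
```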

Memory Hierarchies Exercises [7.8] Here is a series of address references given as word addresses: 1, 4, 8, 5, 20, 17, 19, 56, 9, 11, 4, 43, 5, 6, 9, 17. Assuming a direct-mapped cache with four-word blocks and a total size of 16 words (i.e., four blocks) that is initially empty, label each reference in the list as a hit or a miss and show the cache's final contents.

Memory Hierarchies Exercises [7.8] Block address = word address div 4; slot = block address mod 4.

Reference:  1    4    8    5    20   17   19   56   9    11   4    43   5    6    9    17
Hit/miss:   miss miss miss hit  miss miss hit  miss miss hit  miss miss hit  hit  miss hit

Final cache contents:

Slot  Contents (words in block)
0     block 4: words 16-19
1     block 1: words 4-7
2     block 2: words 8-11
3     -
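The same idea extends to exercise 7.8: with four-word blocks the cache has only four slots, and the index is computed from the block number rather than the word address. A sketch (not from the slides):

```python
# Simulate a direct-mapped cache with 4 slots of 4-word blocks
# (16 words total) and replay the reference string from exercise 7.8.

def simulate(refs, num_slots=4, block_words=4):
    cache = {}                       # slot -> block number currently stored
    outcomes = []
    for addr in refs:
        block = addr // block_words  # which memory block the word is in
        slot = block % num_slots     # direct-mapped index
        if cache.get(slot) == block:
            outcomes.append("hit")
        else:
            outcomes.append("miss")
            cache[slot] = block      # fetch the whole 4-word block
    return outcomes, cache

refs = [1, 4, 8, 5, 20, 17, 19, 56, 9, 11, 4, 43, 5, 6, 9, 17]
outcomes, final = simulate(refs)
```

Note that the larger blocks turn several of 7.7's misses into hits (e.g. 19 hits because fetching 17 brought in words 16-19), but they also introduce new conflict misses on the smaller number of slots.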

Memory Hierarchies Exercises [7.25] Can you make a fully associative cache containing exactly 3K words of data? How about a set-associative or a direct-mapped cache containing exactly 3K words of data?

Memory Hierarchies Exercises [7.25]
Fully associative cache of 3K words? Yes. A fully associative cache has only one set, so no index is used to access the cache; instead, the tag is compared against every cache location (all 3K of them).
Set-associative cache of 3K words? Yes. Implement a three-way set-associative cache: 3K words / 3 ways = 1024 sets, which requires a 10-bit index field to access.
Direct-mapped cache of 3K words? No (at least not efficiently). Accessing a 3K-word direct-mapped cache would require between 11 and 12 index bits: 11 bits can address only 2K of the cache, while 12 bits would map some addresses to nonexistent cache locations.
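The index-bit argument above is just arithmetic, and can be checked directly:

```python
# Check that 3K words cannot be indexed with a whole number of bits,
# but 3K words split three ways can.
import math

words = 3 * 1024               # 3072 words of data

# Direct-mapped: one slot per word, so log2(3072) index bits needed.
index_bits = math.log2(words)  # ~11.58 -- not an integer, so impossible

# Three-way set-associative: the factor of 3 goes into the ways,
# leaving a power-of-two number of sets.
sets = words // 3              # 1024 sets
assert math.log2(sets) == 10.0 # a clean 10-bit index
```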

Arithmetic Integer Representations [ ] Convert these decimal values to Two's Complement binary:
–0
–2
–512
–-1023
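The conversions can be checked programmatically. A sketch assuming a 16-bit width (the slide does not state one); masking with 2^bits - 1 yields the two's complement bit pattern of a negative value:

```python
# Two's complement representation at a fixed bit width, assuming
# 16 bits since the slide leaves the width unspecified.

def to_twos_complement(value, bits=16):
    # Masking maps a negative value v to 2**bits + v, which is
    # exactly its two's complement encoding.
    return format(value & ((1 << bits) - 1), f"0{bits}b")

for v in (0, 2, 512, -1023):
    print(v, to_twos_complement(v))
# 0     -> 0000000000000000
# 2     -> 0000000000000010
# 512   -> 0000001000000000
# -1023 -> 1111110000000001
```

The last case shows the usual recipe: write 1023 (0000001111111111), invert all bits, and add 1.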

Arithmetic Integer Representations [ ] Convert these decimal values to Two’s Complement binary – – – –