Computer Architecture Lecture 26 Fasih ur Rehman.


Last Class
Memories (cont.)
– Speed, Size, and Cost
– Memory Hierarchy
– Memory Systems

Today’s Agenda
Cache Memories
– Mapping Functions

Mapping Functions
Three types of mappings:
– Direct Mapping
– Associative Mapping
– Set-Associative Mapping

Direct Mapping

The memory address is divided into 3 fields:
– Block: determines the position of the block in the cache
– Tag: keeps track of which block is in the cache (since many memory blocks can map to the same cache position)
– Word: selects which word of the block to return for a read operation
Given an address (t, b, w):
– Check whether the block is already in the cache by comparing t with the tag stored at position b
– If not, replace the current block at position b with the new block from memory
Simple, but not very flexible.
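The field decomposition above can be sketched in code. This is an illustrative model, not from the lecture; the sizes assume a 16-bit address and a cache of 128 blocks of 16 words each (so 4 word bits, 7 block bits, and 16 − 7 − 4 = 5 tag bits):

```python
# Sketch: splitting a memory address into (tag, block, word) fields for
# a direct-mapped cache. Field widths are illustrative assumptions:
# 16-bit addresses, 128 cache positions, 16 words per block.

WORD_BITS = 4    # 2^4 = 16 words per block
BLOCK_BITS = 7   # 2^7 = 128 cache positions

def split_address(addr):
    """Return the (tag, block, word) fields of a 16-bit address."""
    word = addr & ((1 << WORD_BITS) - 1)
    block = (addr >> WORD_BITS) & ((1 << BLOCK_BITS) - 1)
    tag = addr >> (WORD_BITS + BLOCK_BITS)
    return tag, block, word

# Address 7A00 (hex) lands in cache position 32 with tag 15.
print(split_address(0x7A00))  # (15, 32, 0)
```

Any two addresses that agree in the block field compete for the same cache position; only the tag distinguishes them.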

Example
– Assume separate instruction and data caches
– The cache has space for 8 blocks; each block contains one 16-bit word
– A(4,10) is an array of words located at addresses 7A00–7A27 (hex), stored in column order
– The program normalizes the array by its average

Example (Direct Mapping)
– The least significant 3 bits of the address determine the block's position in the cache
– Cache hits occur only when i = 9 and i = 8 (2 hits in total)
– For the j loop, only 2 of the 8 cache positions are used
– Tags are not shown, but are needed
– Very inefficient cache utilization

Associative Mapping

– In direct mapping, a block is restricted to reside in one specific position in the cache
– Associative mapping allows a block to reside in any cache location
– In this example, all 128 tag entries must be compared with the address tag in parallel, so the cost is higher
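A fully associative lookup can be sketched as follows. Hardware compares all tags in parallel; this illustrative Python model (names and sizes are assumptions, not from the lecture) does it sequentially, which is why the hardware cost of associative caches is higher:

```python
# Sketch of a fully associative lookup: the address tag is compared
# against every stored tag. Hardware does all 128 comparisons in
# parallel; this loop does them one at a time. Sizes are illustrative.

def associative_lookup(cache_lines, addr_tag):
    """Return the cache line holding addr_tag, or None on a miss."""
    for line, (valid, tag) in enumerate(cache_lines):
        if valid and tag == addr_tag:
            return line   # hit: the block can be in ANY line
    return None           # miss: the replacement policy picks a victim

# 128 lines, each (valid bit, tag); one block installed in line 5.
cache = [(False, 0)] * 128
cache[5] = (True, 0x1F)
print(associative_lookup(cache, 0x1F))  # 5
```

Because a block may sit in any line, a miss never forces a particular victim; that freedom is what makes a replacement policy (such as LRU) necessary.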

Example (Associative Mapping)
– LRU replacement policy
– Get hits for i = 9, 8, …, 2
– If the i loop ran forward instead, there would be no hits!

Set Associative Mapping

– A combination of direct and associative mapping
– Blocks 0, 64, 128, …, 4032 map to cache set 0 and can occupy either of the 2 positions within that set
– A cache with k blocks per set is called a k-way set-associative cache
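The set selection above can be checked with a short sketch. Assuming the figures implied by the slide (128 cache blocks, k = 2, hence 64 sets), the set index is simply the memory block number modulo the number of sets:

```python
# Sketch: set selection in a k-way set-associative cache. With 128
# cache blocks and k = 2 there are 64 sets, so memory blocks
# 0, 64, 128, ..., 4032 all map to set 0, as stated on the slide.
# These sizes are assumptions inferred from the slide's numbers.

NUM_CACHE_BLOCKS = 128
K = 2                               # blocks per set (2-way)
NUM_SETS = NUM_CACHE_BLOCKS // K    # 64 sets

def set_index(mem_block):
    """Set that a given memory block maps to."""
    return mem_block % NUM_SETS

print([b for b in range(0, 200, 64)], "all map to set",
      set_index(0))  # blocks 0, 64, 128 -> set 0
```

Direct mapping is the k = 1 special case (every set has one block), and a fully associative cache is the other extreme (one set containing all 128 blocks).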

Example (Set-Associative Mapping)
– Since all accessed blocks have even addresses, only half the cache is used, i.e., they all map to set 1
– Get hits for i = 9, 8, 7, 6
– Random replacement would give better average performance here

Replacement Algorithms
– Direct-mapped cache: the position of each block is fixed, so no replacement algorithm is needed
– Associative and set-associative caches must decide which block to replace (keep the blocks likely to be used again in the cache)
– One strategy is least recently used (LRU), e.g., for a 4-block cache, use a 2-bit counter per block: set the counter of the accessed block to 0 and increment the others; on a miss, replace the block whose counter is 3
– Another strategy is random replacement (choose a random block); its advantage is that it is easier to implement at high speed
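The 2-bit-counter LRU scheme can be sketched as follows. This is an illustrative Python model of one 4-block fully associative cache, not code from the lecture; it uses the usual refinement of the counter rule, where a hit increments only the counters that were lower than the accessed block's old value, so the counters always stay a permutation of 0–3:

```python
# Sketch: 2-bit-counter LRU for a 4-block (fully associative) cache.
# Invariant: the four counters are always a permutation of 0..3, so the
# block with counter 3 is always the least recently used.

class LRUSet:
    def __init__(self):
        self.tags = [None] * 4     # resident block tags
        self.count = [0, 1, 2, 3]  # 2-bit LRU counters

    def access(self, tag):
        """Reference a block; return True on a hit, False on a miss."""
        if tag in self.tags:                    # hit
            i = self.tags.index(tag)
            old = self.count[i]
            for j in range(4):                  # age blocks used more
                if self.count[j] < old:         # recently than block i
                    self.count[j] += 1
            self.count[i] = 0                   # block i is now newest
            return True
        victim = self.count.index(3)            # miss: evict the LRU
        self.tags[victim] = tag
        for j in range(4):
            if j != victim:
                self.count[j] += 1              # everyone else ages
        self.count[victim] = 0
        return False

lru = LRUSet()
for t in "abcd":
    lru.access(t)      # four misses fill the cache
lru.access("a")        # hit: 'a' becomes most recent, 'b' is now LRU
lru.access("e")        # miss: evicts 'b'
print(lru.tags)
```

Random replacement needs none of this per-block state, which is why it is cheaper to implement at high speed.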

Summary
Cache Memories
– Mapping Functions