Electronic Lecture System (نظام المحاضرات الالكتروني): Cache Memory.

Presentation transcript:

Cache Memory

Cache memory If the active portions of the program and data are placed in a fast, small memory, the average memory access time can be reduced, thus reducing the total execution time of the program. Such a fast, small memory is referred to as a cache memory. The cache is the fastest component in the memory hierarchy and approaches the speed of the CPU.

Cache memory When the CPU needs to access memory, the cache is examined first. If the word is found in the cache, it is read from this fast memory. If the word addressed by the CPU is not found in the cache, main memory is accessed to read the word.

Cache memory When the CPU refers to memory and finds the word in the cache, it is said to produce a hit; otherwise, it is a miss. The performance of cache memory is frequently measured in terms of a quantity called the hit ratio: Hit ratio = hits / (hits + misses)
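As a short sketch (not part of the slides), the hit ratio and a simple average-access-time model can be computed as follows. The reference counts and the latencies (1 ns for the cache, 100 ns for main memory) are illustrative assumptions only:

```python
# Hypothetical run: the CPU makes 1000 memory references,
# 950 of which are found in the cache.
hits = 950
misses = 50

hit_ratio = hits / (hits + misses)   # Hit ratio = hits / (hits + misses)

# Average access time, assuming a miss first probes the cache and
# then reads main memory (illustrative latencies, in nanoseconds).
t_cache, t_main = 1, 100
avg_access = hit_ratio * t_cache + (1 - hit_ratio) * (t_cache + t_main)

print(hit_ratio)     # 0.95
print(avg_access)    # about 6.0 ns
```

The higher the hit ratio, the closer the average access time approaches the cache access time, which is why even a small cache can dominate overall memory performance.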

Cache memory The basic characteristic of cache memory is its fast access time; therefore, very little or no time must be wasted when searching for words in the cache. The transfer of data from main memory to cache memory is referred to as a mapping process. There are three types of mapping: Associative mapping Direct mapping Set-associative mapping

Cache memory To help understand the mapping procedures, the following example is used: a main memory of 32K words of 12 bits each (a 15-bit address) and a cache of 512 words of 12 bits each.

Associative mapping The fastest and most flexible cache organization uses an associative memory. The associative memory stores both the address and the data of the memory word. This permits any location in the cache to store any word from main memory. The 15-bit address value is shown as a five-digit octal number, and its corresponding 12-bit word is shown as a four-digit octal number.

Associative mapping

Associative mapping A CPU address of 15 bits is placed in the argument register, and the associative memory is searched for a matching address. If the address is found, the corresponding 12-bit data word is read and sent to the CPU. If not, main memory is accessed for the word. If the cache is full, an address-data pair must be displaced to make room for a pair that is needed and not presently in the cache.
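The behavior above can be sketched in Python, with a dict standing in for the associative search hardware; the memory contents and the simple first-in victim choice are illustrative assumptions, not part of the slides:

```python
# Associative-mapping sketch: the cache stores (address, data) pairs, so any
# main-memory word may occupy any cache location.  Octal literals mirror the
# slides' five-digit address / four-digit data notation.
CACHE_SIZE = 512                          # cache capacity in words

cache = {}                                # address -> data (associative store)
main_memory = {0o01000: 0o3450, 0o02777: 0o6710}  # hypothetical contents

def read(addr):
    """Return the word at addr, filling the cache on a miss."""
    if addr in cache:                     # associative search over all entries
        return cache[addr]
    data = main_memory[addr]              # miss: fetch from main memory
    if len(cache) >= CACHE_SIZE:          # cache full: displace the oldest pair
        cache.pop(next(iter(cache)))
    cache[addr] = data
    return data

print(oct(read(0o02777)))                 # 0o6710
```

A real associative cache compares the argument register against every stored address in parallel; the dict lookup models that parallel match functionally, not its cost.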

Direct mapping Associative memory is expensive compared to RAM. In the general case, there are 2^k words in cache memory and 2^n words in main memory (in our example, k = 9 and n = 15). The n-bit memory address is divided into two fields: k bits for the index and n - k bits for the tag.
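This address split, and the resulting conflict when two addresses share an index, can be sketched as follows using the slides' sizes (n = 15, k = 9); the memory contents are illustrative:

```python
# Direct-mapping sketch: the low k bits of the address select the cache
# entry (index); the high n-k bits are stored and compared as the tag.
K, N = 9, 15

def split(addr):
    index = addr & ((1 << K) - 1)     # low k bits: which cache word
    tag = addr >> K                   # high n-k bits: stored tag
    return tag, index

cache = [None] * (1 << K)             # each entry: (tag, data) or None

def read(addr, main_memory):
    tag, index = split(addr)
    entry = cache[index]
    if entry is not None and entry[0] == tag:
        return entry[1]               # hit: stored tag matches
    data = main_memory[addr]          # miss: fetch and overwrite the entry
    cache[index] = (tag, data)
    return data

# Octal address 02777 splits into tag 02 and index 777.
tag, index = split(0o02777)
print(oct(tag), oct(index))           # 0o2 0o777
```

Note that addresses 02777 and 12777 both have index 777 but different tags (02 and 12), so in a direct-mapped cache reading one evicts the other, which motivates the next slide's set-associative organization.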

Direct mapping

Direct mapping

Set-associative mapping The disadvantage of direct mapping is that two words with the same index in their addresses but with different tag values cannot reside in cache memory at the same time. Set-associative mapping is an improvement over direct mapping in that each word of cache can store two or more words of memory under the same index address.

Set-associative mapping

Set-associative mapping In the slide, each index address refers to two data words and their associated tags. Each tag requires six bits and each data word has 12 bits, so the word length is 2 × (6 + 12) = 36 bits.
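A two-way set-associative lookup can be sketched as below; the memory contents and the replace-the-oldest policy are illustrative assumptions, not from the slides:

```python
# Two-way set-associative sketch: each of the 2^k index values selects a set
# of up to two (tag, data) entries, so two words sharing an index can coexist.
K = 9
NUM_SETS = 1 << K
WAYS = 2

sets = [[] for _ in range(NUM_SETS)]      # each set: list of (tag, data)

def access(addr, main_memory):
    """Return (data, hit?) for addr, filling the set on a miss."""
    index = addr & (NUM_SETS - 1)
    tag = addr >> K
    for entry_tag, data in sets[index]:
        if entry_tag == tag:              # compare tags within the set
            return data, True
    data = main_memory[addr]              # miss: fetch the word
    if len(sets[index]) >= WAYS:
        sets[index].pop(0)                # replace the oldest entry
    sets[index].append((tag, data))
    return data, False

mem = {0o02777: 0o6710, 0o12777: 0o1234}  # same index 777, tags 02 and 12
access(0o02777, mem)                      # miss: fills one way of set 777
access(0o12777, mem)                      # miss: fills the other way
print(access(0o02777, mem)[1])            # True: both words now coexist
```

Under direct mapping the second access would have evicted the first word; here the two entries per set absorb the conflict, at the cost of one extra tag comparison per access.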

Cache memory example: Many processors use two levels of cache; the example used here is an Intel processor running at 2800 MHz. Level 1 (also called primary): a very small amount of cache (12 KB); the fastest memory; stores recently used data and instructions. Level 2 (also called secondary): 512 KB; faster than main memory, but slower than Level 1; stores what cannot fit into the smaller Level 1 cache.

Block diagram showing L1 and L2 cache memories in a computer system.