CACHE MEMORY.

LOCALITY The PRINCIPLE OF LOCALITY is the tendency of a program to reference data items that are near other recently referenced data items, or that were recently referenced themselves. TEMPORAL LOCALITY: a memory location that is referenced once is likely to be referenced again multiple times in the near future. SPATIAL LOCALITY: if a memory location is referenced once, the program is likely to reference a nearby memory location in the near future.
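As an illustrative sketch (not from the slides), the two loops below compute the same sum over a 2-D array, but the row-major loop visits consecutive addresses (good spatial locality) while the column-major loop strides across rows, touching a new cache line on almost every access:

```c
#include <assert.h>

#define ROWS 64
#define COLS 64

static int grid[ROWS][COLS];

/* Row-major traversal: inner loop walks consecutive addresses,
 * so each fetched cache line is fully used before moving on. */
long sum_row_major(void) {
    long sum = 0;
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            sum += grid[r][c];
    return sum;
}

/* Column-major traversal: inner loop jumps COLS * sizeof(int)
 * bytes each iteration, so spatial locality is poor. */
long sum_col_major(void) {
    long sum = 0;
    for (int c = 0; c < COLS; c++)
        for (int r = 0; r < ROWS; r++)
            sum += grid[r][c];
    return sum;
}
```

Both functions return the same value; on real hardware the row-major version typically runs noticeably faster because of cache-line reuse.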

CACHE MEMORY The principle of locality helped speed up main memory access by introducing small, fast memories known as CACHE MEMORIES, which hold blocks of the most recently referenced instructions and data items. A cache is a small, fast storage device that holds the operands and instructions most likely to be used by the CPU.

The memory hierarchy of early computers had 3 levels: CPU registers, DRAM main memory, and disk storage.

Due to the increasing gap between CPU and main memory speeds, a small SRAM memory called the L1 cache was inserted between them. An L1 cache can be accessed almost as fast as the registers, typically in 1 or 2 clock cycles. As the gap grew even further, an additional cache, the L2 cache, was inserted between the L1 cache and main memory; it is larger than L1 but takes more clock cycles to access (typically around 10).

The L2 cache is attached to the memory bus or to its own cache bus. Some high-performance systems also include an additional L3 cache, which sits between the L2 cache and main memory; it has a different arrangement but follows the same principle. The cache is placed both physically closer and logically closer to the CPU than the main memory.

CACHE LINES / BLOCKS Cache memory is subdivided into cache lines. A cache line (or block) is the smallest unit of memory that can be transferred between the main memory and the cache.

TAG / INDEX Every address field consists of two primary parts: a dynamic part (the tag), which contains the higher-order address bits, and a static part (the index), which contains the lower-order address bits. For a given cache slot, the index is fixed, while the stored tag may change at run time as different memory lines occupy that slot.
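A minimal sketch of how an address splits into tag, index, and byte offset. The parameters are hypothetical, chosen only for illustration: 16-byte lines (4 offset bits) and 1024 slots (10 index bits) in a 32-bit address space:

```c
#include <stdint.h>

/* Hypothetical cache geometry for illustration:
 * 16-byte lines -> 4 offset bits; 1024 slots -> 10 index bits;
 * the remaining 18 high-order bits form the tag. */
#define OFFSET_BITS 4
#define INDEX_BITS  10

uint32_t addr_offset(uint32_t addr) {
    return addr & ((1u << OFFSET_BITS) - 1);           /* low 4 bits */
}

uint32_t addr_index(uint32_t addr) {
    return (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1); /* next 10 bits */
}

uint32_t addr_tag(uint32_t addr) {
    return addr >> (OFFSET_BITS + INDEX_BITS);         /* remaining high bits */
}
```

Recombining the three fields with shifts reproduces the original address, which is a convenient sanity check for any chosen geometry.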

VALID BIT / DIRTY BIT When a program is first loaded into main memory, the cache is cleared; so while a program is executing, a valid bit is needed to indicate whether a slot holds a line that belongs to the program being executed. There is also a dirty bit that keeps track of whether a line has been modified while it is in the cache. A slot that has been modified must be written back to the main memory before the slot is reused for another line.
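The slot bookkeeping above can be sketched as a small struct; this is an assumed minimal layout, not taken from the slides, and the actual write-back is left as a comment:

```c
#include <stdbool.h>
#include <string.h>

#define LINE_SIZE 16

/* One cache slot: a valid bit, a dirty bit, the stored tag,
 * and the cached line of data. */
struct cache_slot {
    bool          valid;
    bool          dirty;
    unsigned      tag;
    unsigned char data[LINE_SIZE];
};

/* Place a new line in the slot.  Returns true if the old contents
 * were valid and dirty, i.e. had to be written back to main memory
 * before being replaced. */
bool fill_slot(struct cache_slot *slot, unsigned new_tag,
               const unsigned char *new_data) {
    bool wrote_back = slot->valid && slot->dirty;
    /* (write-back of slot->data to main memory would happen here) */
    slot->valid = true;
    slot->dirty = false;           /* freshly loaded line is clean */
    slot->tag   = new_tag;
    memcpy(slot->data, new_data, LINE_SIZE);
    return wrote_back;
}
```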

Example: Memory segments and cache segments are exactly the same size, and every memory segment contains N equally sized memory lines. Memory lines and cache lines are also exactly the same size. Therefore, to obtain the address of a memory line, we first determine the number of its memory segment, then the number of the memory line inside that segment, and merge the two numbers. Substituting the segment number with the tag and the line number with the index gives the general idea.

Therefore, a cache line's tag size depends on 3 factors: the size of the cache memory, the associativity of the cache memory, and the cacheable range of operating memory:

Stag = log2(Smemory × A / Scache)

where Stag is the size of the cache tag, in bits; Smemory is the cacheable range of operating memory, in bytes; Scache is the size of the cache memory, in bytes; and A is the associativity of the cache memory, in ways.
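The tag-size relation can be checked numerically. This sketch assumes all sizes are powers of two, as they are in practice:

```c
/* Integer log2 for powers of two (returns floor(log2(x)) in general). */
unsigned log2u(unsigned long long x) {
    unsigned n = 0;
    while (x > 1) { x >>= 1; n++; }
    return n;
}

/* Stag = log2(Smemory * A / Scache), sizes in bytes, A in ways.
 * Equivalent to: address bits - index bits - offset bits, since the
 * line size cancels out of the expression. */
unsigned tag_bits(unsigned long long smemory, unsigned long long scache,
                  unsigned assoc) {
    return log2u(smemory * assoc / scache);
}
```

For example, a 32 KiB direct-mapped cache in a 4 GiB address space needs a 17-bit tag; making the same cache 2-way associative halves the number of sets and grows the tag to 18 bits.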

CACHE HITS / MISSES Cache hit: a request to read from memory that can be satisfied from the cache without using the main memory. Cache miss: a request to read from memory that cannot be satisfied from the cache, for which the main memory has to be consulted.
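A toy direct-mapped lookup ties the hit/miss definitions to the tag and index fields discussed earlier. The geometry (16-byte lines, 8 slots) is an assumption chosen to keep the example small:

```c
#include <stdbool.h>

#define SIM_OFFSET_BITS 4   /* 16-byte lines */
#define SIM_SLOTS       8   /* direct-mapped: one line per slot */

static struct { bool valid; unsigned tag; } sim_cache[SIM_SLOTS];

/* Look up an address.  Returns true on a hit; on a miss, fetches
 * the line into its slot (possibly evicting the previous tag). */
bool access_cache(unsigned addr) {
    unsigned line  = addr >> SIM_OFFSET_BITS;  /* strip byte offset */
    unsigned index = line % SIM_SLOTS;         /* which slot */
    unsigned tag   = line / SIM_SLOTS;         /* which line maps there */
    if (sim_cache[index].valid && sim_cache[index].tag == tag)
        return true;                           /* cache hit */
    sim_cache[index].valid = true;             /* miss: load the line */
    sim_cache[index].tag   = tag;
    return false;
}
```

Two addresses within the same 16-byte line hit after the first fetch, while two lines that share a slot but differ in tag evict each other, producing conflict misses.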