Presentation transcript:

Cache Memory Ross Galijan

Library analogy Imagine a library whose shelves are lined with books. Problem: While one person can walk around the library and read a book or two at a time, referencing multiple books for research is difficult because the books are scattered all over the library.

Library analogy Solution: Add tables to central library locations so people can take multiple books off the shelves and place them on the tables for easier access. Tables have limited space compared to shelves, but books can be laid out for easier reading. If a required book is not on the table, go find it on the shelves and bring it to the table. If the table is full, remove a (presumably not needed) book to make room for another.

Library analogy If a book is not in your hand, check the table. If a book is not on the table, check the shelves. If a book is not on the shelves, check whether it can be ordered from another library (and take forever to arrive). If a book is not in the system, you are in big trouble, because your research cannot conclude.

Relevance of library analogy
Book in hand → L1/L2 cache
Book on table → RAM
Book on shelf → HDD
Book in system → network/optical disc

Why not simply use one giant cache? Cache memory is far smaller and far more expensive per byte.
1 TB HDD: $50 to $100
1 TB RAM: at least $10,000
1 TB L2 cache: (this space intentionally left blank)
Caching balances access time against storage cost.

Principle of Locality Data most recently accessed, or data near recently accessed data, is likely to be accessed again in the near future. The purpose of a cache is to store such data in an area more easily accessed than, say, the hard drive. In a library, one will take related books off a shelf and bring them to a table, but will not return them to the shelves until either research is complete or the table is out of space. The loop sketch below shows both forms of locality in ordinary code.
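As a concrete illustration (a minimal Python sketch of my own, not from the slides), a plain loop over an array exhibits both kinds of locality:

    data = [0] * 1_000_000   # a large array, scanned front to back

    total = 0
    for i in range(len(data)):
        # Spatial locality: data[i + 1] sits next to data[i] in memory, so it
        # typically arrives in the same cache line that was just fetched.
        total += data[i]
        # Temporal locality: total and i are touched on every iteration,
        # so they stay resident in the fastest levels of the hierarchy.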

Two replacement methods (both are sketched in code below) Least recently added: removes items in the order in which they entered the cache. Requires entry order to be stored. Easily implemented, but also easily ineffective. Least recently used: removes the item that was last used the longest time ago. Requires last access time to be stored. Effective, but harder to implement.
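A minimal Python sketch of the two policies (the class names and the OrderedDict representation are my own assumptions, not from the slides); both keep entries in order, and they differ only in whether a hit refreshes an entry's position:

    from collections import OrderedDict

    class LRUCache:
        """Least recently used: a hit moves the entry to the back, so the
        front always holds the least recently used entry."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.entries = OrderedDict()

        def get(self, key):
            if key not in self.entries:
                return None                        # miss
            self.entries.move_to_end(key)          # record the use
            return self.entries[key]

        def put(self, key, value):
            if key in self.entries:
                self.entries.move_to_end(key)
            elif len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)   # evict least recently used
            self.entries[key] = value

    class FIFOCache:
        """Least recently added: eviction follows insertion order; a hit
        never reorders the entries."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.entries = OrderedDict()

        def get(self, key):
            return self.entries.get(key)           # a hit does not change order

        def put(self, key, value):
            if key not in self.entries and len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)   # evict the oldest entry
            self.entries[key] = value

Note that "least recently added" needs only the insertion order, while "least recently used" must also update that order on every access, which is exactly the extra bookkeeping the slide warns about.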

Cache mapping Direct mapping: each location in memory is mapped to a single location in cache. Easily determines whether data is in the cache, but potentially wastes a lot of space. Memory address X maps, in a cache of size Y, to cache location X mod Y (see the sketch below). Associative mapping: any location in memory can be mapped to any location in cache. Fully utilizes cache space, but must search the entire cache to determine a miss, which wastes time.
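To make the X mod Y rule concrete, here is a hedged Python sketch of a direct-mapped lookup (the sizes and names are illustrative assumptions):

    CACHE_SIZE = 8                     # Y: number of cache lines

    cache = [None] * CACHE_SIZE        # each line holds (tag, data) or None

    def direct_mapped_lookup(address):
        line = address % CACHE_SIZE    # the only line this address may occupy
        tag = address // CACHE_SIZE    # distinguishes addresses sharing a line
        entry = cache[line]
        if entry is not None and entry[0] == tag:
            return entry[1]            # hit: a single comparison decides
        return None                    # miss: fetch from memory, then
                                       # cache[line] = (tag, data)

One comparison settles hit or miss, which is the speed advantage; the wasted space comes from two hot addresses that share a line evicting each other while the rest of the cache sits idle.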

Set associative mapping Combines direct mapping and associative mapping. The cache is divided into N blocks (sets) of equal size. Data from memory address X is assigned to the specific block X mod N. Each block then follows associative mapping (see the sketch below). Note that associative mapping is a set associative map with a single block, and direct mapping is a set associative map in which each block has size 1.
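A hedged Python sketch of the combined scheme (sizes and names are again my own assumptions): X mod N selects a block, and only that block is searched associatively:

    NUM_SETS = 4      # N: number of blocks (sets)
    WAYS = 2          # entries per block, searched associatively

    cache = [[] for _ in range(NUM_SETS)]    # each block holds (tag, data) pairs

    def set_associative_lookup(address):
        index = address % NUM_SETS            # direct-mapped step: pick the block
        tag = address // NUM_SETS
        for stored_tag, data in cache[index]: # associative step: search the block
            if stored_tag == tag:
                return data                   # hit
        return None                           # miss

    def set_associative_insert(address, data):
        index = address % NUM_SETS
        block = cache[index]
        if len(block) >= WAYS:
            block.pop(0)                      # evict; FIFO order, for simplicity
        block.append((address // NUM_SETS, data))

Setting WAYS to 1 recovers direct mapping, and setting NUM_SETS to 1 recovers fully associative mapping, which mirrors the slide's closing note.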

A few formulas Hit ratio: cache hits / (cache hits + cache misses), i.e. the fraction of all accesses served by the cache. Average access time (single cache): hit ratio * cache access time + (1 – hit ratio) * memory access time. Average access time (multiple caches): cache 1 hit ratio * cache 1 access time + … + cache X hit ratio * cache X access time + … + (1 – cache 1 hit ratio – … – cache X hit ratio – …) * memory access time.
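Plugging made-up numbers into the multiple-cache formula (the latencies and hit ratios below are illustrative assumptions, not from the slides):

    # L1 serves 90% of accesses at 1 ns, L2 serves 8% at 10 ns,
    # and the remaining 2% fall through to main memory at 100 ns.
    l1_ratio, l1_time = 0.90, 1.0
    l2_ratio, l2_time = 0.08, 10.0
    memory_time = 100.0

    average = (l1_ratio * l1_time
               + l2_ratio * l2_time
               + (1 - l1_ratio - l2_ratio) * memory_time)
    print(average)   # 0.9*1 + 0.08*10 + 0.02*100 = 3.7 (ns)

Even the 2% of accesses that reach main memory contribute more than half of the average, which is why adding cache levels pays off.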