What is cache memory?

Cache Cache is a faster type of memory than that found in main memory. In other words, it takes less time to access something in cache than in main memory. It ‘sits’ between external memory (main memory) and the processor, runs at speeds close to that of the processor, and has two main types.

L1, which stores instructions going to the processor. It is often split into two: an L1 cache for instructions and an L1 cache for data. L2, which is used to buffer data between the L1 cache and main memory; it is sometimes called a unified cache.

Principle of locality The nature of programs and the structure of data often mean that requests to memory are not random, but localised. Programs tend to reuse data and instructions that have been used recently – temporal locality. Instructions and data referenced close together in time are often close together physically in memory – spatial locality.
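As a rough illustration (a sketch not taken from the slides; the array size and stride are arbitrary assumptions), the two loops below walk the same array. The first touches consecutive elements and reuses the running total, so it benefits from spatial and temporal locality; the second jumps through memory with a large stride, so most of its accesses miss the cache.

```c
/* Illustrative sketch only: compares a cache-friendly traversal with a
   cache-hostile one.  Array size and stride are assumed values. */
#include <stdio.h>
#include <stdlib.h>

#define N      (1 << 22)   /* 4M ints, much larger than a typical cache */
#define STRIDE 4096        /* jump far enough to land on a new cache line each time */

int main(void)
{
    int *a = calloc(N, sizeof *a);
    if (!a) return 1;

    long total = 0;

    /* Good spatial locality: consecutive elements share cache lines,
       and 'total' is reused on every iteration (temporal locality). */
    for (long i = 0; i < N; i++)
        total += a[i];

    /* Poor spatial locality: each access is STRIDE elements away from
       the last, so most reads miss the cache and go to main memory. */
    for (long start = 0; start < STRIDE; start++)
        for (long i = start; i < N; i += STRIDE)
            total += a[i];

    printf("%ld\n", total);
    free(a);
    return 0;
}
```

On real hardware the first loop typically runs several times faster than the second, even though both perform the same number of additions.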

When data is transferred between main memory and the processor, a copy of the data is saved to the cache. When the processor needs more data, the addresses are checked to see if the data is already in the cache; if it is, there is no need to transfer the data from main memory, which is slower. This is a cache hit, and the data is taken directly from the cache. If the data is not in the cache, this is known as a cache miss and a memory transfer is performed.
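That address check can be sketched as a tiny direct-mapped cache model (an illustrative simplification assumed here, not a design described in the slides): each memory block maps to exactly one cache line, and a stored tag records which block the line currently holds, so comparing tags tells us whether an access is a hit or a miss.

```c
/* Minimal direct-mapped cache model, for illustration only.
   Real caches store whole lines of data; here we record just tags. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_LINES 8                  /* assumed tiny cache: 8 lines */

struct cache_line {
    bool     valid;                  /* has this line been filled yet?    */
    unsigned tag;                    /* which memory block is stored here */
};

static struct cache_line cache[NUM_LINES];

/* Returns true on a cache hit, false on a miss (and then "loads" the block). */
bool access_block(unsigned block_number)
{
    unsigned index = block_number % NUM_LINES;   /* which line the block maps to       */
    unsigned tag   = block_number / NUM_LINES;   /* identifies the block for that line */

    if (cache[index].valid && cache[index].tag == tag)
        return true;                             /* hit: data is already in the cache  */

    cache[index].valid = true;                   /* miss: fetch from main memory ...   */
    cache[index].tag   = tag;                    /* ... and remember what is stored    */
    return false;
}

int main(void)
{
    unsigned blocks[] = {3, 3, 11, 3};           /* arbitrary example access sequence  */
    for (int i = 0; i < 4; i++)
        printf("block %2u -> %s\n", blocks[i],
               access_block(blocks[i]) ? "hit" : "miss");
    return 0;
}
```

Running it on the access sequence 3, 3, 11, 3 prints miss, hit, miss, miss: the repeated block 3 hits the second time (temporal locality), but block 11 maps to the same line and evicts it, so the final access to block 3 misses again.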

The ability to store more data in a cache makes the computer faster, so bigger caches are an advantage. However, cost is a limiting factor, as this type of memory is more expensive than main memory.

Comparison of speeds for cache and main memory

Processor     CPU (MHz)    L1 (ns)    L2 (ns)    Main memory (ns)
Pentium
Pentium II
Pentium III
Pentium

Data taken from Dick D (2002) The P.C. Support Handbook, page 120.
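To see why these gaps matter, consider a rough worked example with illustrative figures (assumed numbers, not the handbook's values): if a cache access takes 2 ns, a main memory access takes 60 ns, and 95% of accesses are cache hits, the average access time is 0.95 × 2 ns + 0.05 × 60 ns = 1.9 ns + 3.0 ns = 4.9 ns – far closer to the speed of the cache than to that of main memory.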

Harvard Architecture Instruction and data memories occupy different address spaces. That is, there is an address 'zero' in instruction space that refers to an instruction storage location and also an address 'zero' in data space that refers to a distinct data storage location. Instruction and data memories have separate hardware pathways to the central processing unit (CPU).
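The ‘two address zeros’ point can be made concrete with a toy model (purely illustrative; the memory sizes and stored values are made up): the instruction store and the data store are separate arrays, so index 0 in one names a different physical location from index 0 in the other.

```c
/* Toy Harvard-style machine: instructions and data live in separate
   memories with independent address spaces (illustrative sketch only). */
#include <stdio.h>

#define IMEM_SIZE 16
#define DMEM_SIZE 16

static unsigned instruction_memory[IMEM_SIZE];  /* fetched over its own pathway           */
static unsigned data_memory[DMEM_SIZE];         /* read and written over a separate pathway */

int main(void)
{
    /* Address 0 in each space refers to a different physical location. */
    instruction_memory[0] = 0x1A2B;   /* e.g. an encoded instruction */
    data_memory[0]        = 42;       /* an unrelated data value     */

    printf("instruction space, address 0: 0x%04X\n", instruction_memory[0]);
    printf("data space,        address 0: %u\n",     data_memory[0]);
    return 0;
}
```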

Summary There is a hierarchy of memory. The higher a device is in the hierarchy, the more expensive it is and the less it can store, but the quicker it is to access. Cache, a form of RAM, ‘sits’ between external memory (main memory) and the processor; it runs at close to the speed of the processor and has two main types. – L1, which stores instructions going to the processor. – L2, which is used to buffer data from memory.

Harvard architecture – Instruction and data memories occupy different address spaces. – Instruction and data memories have separate hardware pathways to the central processing unit (CPU).

Sources for further reading Chalk et al (2004), Chapter 6. Dick D (2002) The P.C. Support Handbook, page 120.