Cache Memory

Reading From Memory

Writing To Memory

Accessing memory
When accessing memory (reading or writing) we need to "address" the memory location we are reading from or writing to. For a read, recall that in hardware we place the address in the MAR register. The address drives a demultiplexer, which causes every memory location to be ANDed with a zero, with the exception of the location we are interested in, which gets ANDed with a one. The outputs of all of these ANDed copies of the memory locations go through a massive OR gate, and eventually all bits of the location we addressed are copied into the MBR. A comparable amount of hardware is needed to write to memory. Since the larger a memory is, the longer the access time for each memory reference, a smaller memory containing the most commonly referenced pages could reduce the average memory reference time.
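
As a rough illustration, here is a minimal Python sketch of the read path just described; the function and variable names are my own, not from the slides. The address in MAR yields a one-hot select line per location, each word is ANDed with its select line, and the results are ORed into MBR.

```python
def read_word(memory, mar):
    """Simulate the demux/AND/OR read path over a list of words."""
    mbr = 0
    for addr, word in enumerate(memory):
        select = 1 if addr == mar else 0   # demux: one-hot select line
        mbr |= word if select else 0       # AND with select, OR into MBR
    return mbr

memory = [0b1010, 0b0110, 0b1111, 0b0001]
print(bin(read_word(memory, mar=2)))       # -> 0b1111
```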

Processor with Cache Memory

Hierarchy of memory access
With a cache memory added in front of main memory, when the processor running a program goes to access storage, the data will of course be on disk; it may also be in main memory (main memory holds a subset of the disk), and it may be in the cache (the cache holds a subset of main memory). If the referenced page is in the cache, the access time is very short. If not, the next step is to access main memory; if it is not in main memory either, we go out to the disk.
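
A small sketch of this lookup order, under the slide's model that each level holds a subset of the level below it; the names and contents here are hypothetical.

```python
def access(page, cache, main_memory):
    """Return the fastest level at which the page is found."""
    if page in cache:           # fastest: cache hit
        return "cache"
    if page in main_memory:     # slower: main-memory hit
        return "main memory"
    return "disk"               # slowest: every page lives on disk

cache = {1, 2}                  # cache is a subset of main memory
main_memory = {1, 2, 3, 4}      # main memory is a subset of the disk
print(access(3, cache, main_memory))   # -> "main memory"
```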

Writing policy with Caches
Reads never modify storage, so these policies concern writes only.
Write Back Cache: a cache policy where a change made to a page in the cache is not propagated to the corresponding page in main memory right away; the page is marked dirty and written back to main memory only when it is evicted from the cache.
Write Through Cache: a cache policy where a change made to a page in the cache is also made to the corresponding page in main memory at the same time.
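
A minimal sketch contrasting the two policies; the class and method names and the explicit eviction hook are illustrative, not from the slides.

```python
class WriteThroughCache:
    def __init__(self, mm):
        self.lines, self.mm = {}, mm

    def write(self, page, value):
        self.lines[page] = value
        self.mm[page] = value          # main memory updated on every write

class WriteBackCache:
    def __init__(self, mm):
        self.lines, self.dirty, self.mm = {}, set(), mm

    def write(self, page, value):
        self.lines[page] = value
        self.dirty.add(page)           # main memory is now stale

    def evict(self, page):
        if page in self.dirty:         # write back only modified pages
            self.mm[page] = self.lines[page]
            self.dirty.discard(page)
        self.lines.pop(page, None)
```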

Hit Ratio (Miss ratio)
The probability of a reference being in the cache is the hit ratio; the probability of it not being in the cache is the miss ratio. Consider the following example. If access to the cache takes 1 µs and access to main memory takes 10 µs, for what value of p (the hit ratio) is it advantageous to use the cache? Without the cache, the access time is 10 µs per memory reference. With the cache, the average memory reference time is:

(hit ratio)(cache access time) + (miss ratio)(cache access time + MM access time)
= (hit ratio)(1) + (1 - hit ratio)(1 + 10)
= (hit ratio) + 11 - 11(hit ratio)
= 11 - 10(hit ratio)

So 11 - 10(hit ratio) < 10 exactly when the hit ratio > 1/10.
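
The same arithmetic as a quick Python check (times in microseconds, as above):

```python
def avg_access(hit_ratio, cache_time=1, mm_time=10):
    """Average time per reference: hits pay cache_time,
    misses pay cache_time + mm_time."""
    return hit_ratio * cache_time + (1 - hit_ratio) * (cache_time + mm_time)

print(avg_access(0.1))   # 10.0 -> break-even with the no-cache time
print(avg_access(0.5))   # 6.0  -> the cache pays off
```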

Which Cache is better (if any)?
Consider using one of two caches, a 32K cache or a 64K cache, in front of MM. Access to the 32K cache takes 1 µs, with a miss ratio of 1/3. Access to the 64K cache takes 2 µs, with a miss ratio of 1/6. Access to MM takes 10 µs and never misses. Which cache, if any, should be used? The expected time per memory reference is:

E(no cache) = 10 µs
E(32K) = (2/3)(1) + (1/3)(1 + 10) = 4 1/3 µs
E(64K) = (5/6)(2) + (1/6)(2 + 10) = 3 2/3 µs

So here the 64K cache is the best of the three options.
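
Checking the three options in Python (times in microseconds):

```python
E_no_cache = 10
E_32K = (2/3) * 1 + (1/3) * (1 + 10)   # hit in 32K, or 32K time + MM time
E_64K = (5/6) * 2 + (1/6) * (2 + 10)   # hit in 64K, or 64K time + MM time

print(E_32K, E_64K)   # 4.333... and 3.666... -> the 64K cache wins
```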

Multiple caches
Consider two caches used together, then MM and disk. Access to the 32K cache takes 1 µs, with a miss ratio of 1/2. Access to the 64K cache takes 2 µs, with a miss ratio of 1/4. Access to MM takes 10 µs, with a miss ratio of 1/10. The disk holds all pages and takes 100 µs. The miss ratios here are unconditional: 1/4 is the probability that a reference is not in the 64K cache at all, so the probability of missing the 32K cache but hitting the 64K cache is 1/2 - 1/4 = 0.25, and a reference satisfied at a given level pays the access time of every level tried up to and including it. The expected time per memory reference is:

E = (0.5)(1) + (0.25)(1 + 2) + (0.15)(1 + 2 + 10) + (0.10)(1 + 2 + 10 + 100) = 14.5 µs
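
The same calculation written as a general function over any number of levels, under the assumptions just stated (unconditional miss ratios, cumulative access costs):

```python
def expected_time(times, miss_ratios):
    """times[i]: access time of level i;
    miss_ratios[i]: unconditional miss ratio of level i
    (the last level must be 0 -- it never misses)."""
    e, prev_miss, cum_cost = 0.0, 1.0, 0.0
    for t, m in zip(times, miss_ratios):
        cum_cost += t                    # this level's access was paid for
        e += (prev_miss - m) * cum_cost  # satisfied exactly at this level
        prev_miss = m
    return e

# 32K cache, 64K cache, MM, disk (microseconds)
print(expected_time([1, 2, 10, 100], [1/2, 1/4, 1/10, 0]))   # 14.5
```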

Multiple caches with TLB
Consider the same hierarchy of two caches, MM, and disk, now with a TLB in front. TLB access time is 1 µs, with a miss ratio of 4/5. Access to the 32K cache takes 5 µs, with a miss ratio of 1/2. Access to the 64K cache takes 10 µs, with a miss ratio of 1/4. Access to MM takes 30 µs, with a miss ratio of 1/10. The disk holds all pages and takes 100 µs. Following the same pattern (unconditional miss ratios, cumulative costs), the expected time per memory reference is:

E = (0.2)(1) + (0.3)(1 + 5) + (0.25)(1 + 5 + 10) + (0.15)(1 + 5 + 10 + 30) + (0.10)(1 + 5 + 10 + 30 + 100) = 27.5 µs
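
Reusing the expected_time sketch from the previous slide, with the TLB as the first level:

```python
times = [1, 5, 10, 30, 100]        # TLB, 32K, 64K, MM, disk (µs)
miss  = [4/5, 1/2, 1/4, 1/10, 0]   # unconditional miss ratios
print(expected_time(times, miss))  # 27.5
```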