The Memory Hierarchy II CPSC 321 Andreas Klappenecker

Today’s Menu: Caches, Virtual Memory, Translation Lookaside Buffer

Caches: Why? How?

Memory
Users want large and fast memories. SRAM is too expensive for main memory, and DRAM is too slow for many purposes. The compromise: build a memory hierarchy.

Locality
If an item is referenced, then it will be referenced again soon (temporal locality), and nearby data will be referenced soon (spatial locality). Why does code have locality?
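A minimal C sketch of both kinds of locality (the function and array names are mine): the running sum is reused on every iteration (temporal locality), the array is walked through consecutive addresses (spatial locality), and the loop body itself is fetched repeatedly, which is one reason code has locality.

    /* Summing an array exhibits both kinds of locality. */
    long sum_array(const int *a, int n) {
        long sum = 0;            /* reused every iteration: temporal locality */
        for (int i = 0; i < n; i++)
            sum += a[i];         /* consecutive addresses: spatial locality */
        return sum;
    }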

Direct Mapped Cache
Mapping: address modulo the number of blocks in the cache, x -> x mod B.

Direct Mapped Cache
A cache with 1024 = 2^10 words. The index is determined by (address) mod 1024. The tag stored in the cache is compared against the upper portion of the address: if the tag equals the upper 20 bits and the valid bit is set, then we have a cache hit; otherwise it is a cache miss. What kind of locality are we taking advantage of?
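As a sketch of that address breakdown (the helper names are mine; the field widths follow the slide: 20-bit tag, 10-bit index, 2-bit byte offset):

    #include <stdint.h>

    /* 32-bit address for a 1024-word direct-mapped cache with
       one-word (4-byte) blocks:
       [ tag: 20 bits | index: 10 bits | byte offset: 2 bits ] */
    static inline uint32_t cache_index(uint32_t addr) {
        return (addr >> 2) & 0x3FF;  /* word address mod 1024 */
    }

    static inline uint32_t cache_tag(uint32_t addr) {
        return addr >> 12;           /* upper 20 bits */
    }

    /* Hit iff valid[cache_index(addr)] is set and the stored tag
       equals cache_tag(addr). */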

Direct Mapped Cache
Taking advantage of spatial locality with multi-word blocks.

Bits in a Cache
How many total bits are required for a direct-mapped cache with 16 KB of data and 4-word blocks, assuming a 32-bit address?
16 KB = 4K words = 2^12 words
A block size of 4 words => 2^10 blocks
Each block has 4 x 32 = 128 bits of data, plus a tag and a valid bit
tag + valid bit = (32 - 10 - 2 - 2) + 1 = 19 bits
Total cache size = 2^10 x (128 + 19) = 2^10 x 147 bits
Therefore, 147 Kbits (about 18.4 KB) are needed for the cache
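The same arithmetic as a small C helper, a sketch under the assumption that the capacity and block size are powers of two (the names are mine):

    /* Total bits in a direct-mapped cache:
       blocks * (data bits + tag bits + 1 valid bit). */
    static unsigned log2u(unsigned long x) {  /* x must be a power of two */
        unsigned n = 0;
        while (x > 1) { x >>= 1; n++; }
        return n;
    }

    unsigned long cache_bits(unsigned long data_kb,
                             unsigned words_per_block,
                             unsigned addr_bits) {
        unsigned long words  = data_kb * 1024 / 4;      /* 4-byte words */
        unsigned long blocks = words / words_per_block;
        unsigned index  = log2u(blocks);
        unsigned offset = log2u(words_per_block) + 2;   /* word + byte bits */
        unsigned tag    = addr_bits - index - offset;
        return blocks * (32UL * words_per_block + tag + 1);
    }

    /* cache_bits(16, 4, 32) == 1024 * (128 + 18 + 1) == 150528 == 147 Kbits */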

Cache Block Mapping
Direct mapped cache: a block goes in exactly one place in the cache.
Fully associative cache: a block can go anywhere in the cache; finding a block is difficult, so a parallel comparison is used to speed up the search.
Set associative cache: a block can go in a (small) number of places; a compromise between the two extremes above.

Cache Types

Set Associative Caches
Each block maps to a unique set; the block can be placed into any element of that set. The set is given by (block number) modulo (# of sets in the cache). If each set contains n elements, then the cache is called n-way set associative.
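A sketch of the lookup this implies (the sizes and names are mine, chosen for a 4-way cache with 1024 blocks):

    #include <stdbool.h>
    #include <stdint.h>

    #define WAYS 4
    #define SETS 256   /* # sets = # blocks / associativity = 1024 / 4 */

    struct line { bool valid; uint32_t tag; };
    static struct line cache[SETS][WAYS];

    /* Index the set with (block number) mod (# sets), then compare the
       tag against every way of that set (done in parallel in hardware). */
    bool lookup(uint32_t block_number) {
        uint32_t set = block_number % SETS;
        uint32_t tag = block_number / SETS;
        for (int w = 0; w < WAYS; w++)
            if (cache[set][w].valid && cache[set][w].tag == tag)
                return true;   /* hit */
        return false;          /* miss: choose a way to replace */
    }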

Summary: Where Can a Block be Placed?
Name              | Number of sets                      | Blocks per set
Direct mapped     | # blocks in cache                   | 1
Set associative   | (# blocks in cache) / associativity | Associativity (typically 2-8)
Fully associative | 1                                   | # blocks in cache

Summary: How Is a Block Found?
Associativity     | How to locate the block                  | # Comparisons
Direct mapped     | Index                                    | 1
Set associative   | Index the set, search among its elements | Degree of associativity
Fully associative | Search all cache entries                 | Size of the cache
Fully associative | Separate lookup table                    | 0

Virtual Memory

The processor generates virtual addresses, while memory is accessed using physical addresses. Virtual and physical memory are broken into blocks of memory, called pages. A virtual page may be absent from main memory, residing on disk, or it may be mapped to a physical page.

Virtual Memory
Main memory can act as a cache for the secondary storage (disk). [Figure: virtual addresses generated by the processor pass through an address translation step to yield physical addresses.] Advantages: the illusion of having more physical memory, program relocation, and protection.

Pages: virtual memory blocks
Page faults: if the data is not in memory, retrieve it from disk
the miss penalty is huge, so pages should be fairly large (e.g., 4 KB)
reducing page faults is important (LRU replacement is worth the price)
the faults can be handled in software instead of hardware
write-through takes too long, so we use write-back
Example: page size 2^12 = 4 KB; 2^18 physical pages; main memory <= 1 GB; virtual memory <= 4 GB
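A sketch of the translation this example implies, using a flat page table (4 KB pages, so 12 offset bits; the structure and the fault handler are mine, and real hardware performs the lookup itself):

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_BITS 12   /* page size 2^12 = 4 KB */

    struct pte { bool present; uint32_t ppn; };   /* one page-table entry */

    /* Hypothetical OS handler: fetches the page from disk and updates
       the entry (huge miss penalty). */
    extern void handle_page_fault(struct pte *pt, uint32_t vpn);

    uint32_t translate(struct pte *page_table, uint32_t vaddr) {
        uint32_t vpn    = vaddr >> PAGE_BITS;             /* virtual page # */
        uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1);
        if (!page_table[vpn].present)
            handle_page_fault(page_table, vpn);
        return (page_table[vpn].ppn << PAGE_BITS) | offset;
    }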

Page Faults
The penalty for a page fault is incredibly high, so we reduce the number of page faults by optimizing page placement: pages use fully associative placement. A full search of the pages is impractical, so pages are located via a full table that indexes the memory, called the page table; the page table itself resides in main memory.

Page Tables
The page table maps each virtual page either to a page in main memory or to a page stored on disk.

Making Memory Access Fast
Page tables slow us down: each memory access now takes at least twice as long (one access to the page table in memory, then the access to the page itself). What can we do? Memory access is local, so use a cache that keeps track of recently used address translations, called the translation lookaside buffer.

Making Address Translation Fast
A cache for address translations: the translation lookaside buffer (TLB).

Translation Lookaside Buffer
Some typical values for a TLB:
TLB size: 16-512 entries
Block size: 1-2 page table entries (4-8 bytes each)
Hit time: 0.5-1 clock cycle
Miss penalty: 10-100 clock cycles
Miss rate: 0.01%-1%
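A sketch of how a TLB short-circuits the page-table access (a direct-mapped TLB for simplicity; the sizes and names are mine):

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_BITS 12
    #define TLB_SIZE  64   /* within the typical 16-512 entry range */

    struct tlb_entry { bool valid; uint32_t vpn; uint32_t ppn; };
    static struct tlb_entry tlb[TLB_SIZE];

    /* Hypothetical slow path: walk the page table in main memory. */
    extern uint32_t page_table_walk(uint32_t vpn);

    uint32_t translate_with_tlb(uint32_t vaddr) {
        uint32_t vpn = vaddr >> PAGE_BITS;
        uint32_t off = vaddr & ((1u << PAGE_BITS) - 1);
        struct tlb_entry *e = &tlb[vpn % TLB_SIZE];
        if (!(e->valid && e->vpn == vpn)) {   /* TLB miss: 10-100 cycles */
            e->ppn   = page_table_walk(vpn);
            e->vpn   = vpn;
            e->valid = true;
        }
        return (e->ppn << PAGE_BITS) | off;   /* TLB hit: translation without
                                                 touching the page table */
    }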

TLBs and Caches

More Modern Systems
Very complicated memory systems. [Table of example memory systems omitted.]

Some Issues
Processor speeds continue to increase very fast, much faster than either DRAM or disk access times. The design challenge: dealing with this growing disparity.
Trends:
synchronous SRAMs (provide a burst of data)
redesign DRAM chips to provide higher bandwidth or processing
restructure code to increase locality (a sketch follows this list)
use prefetching (make the cache visible to the ISA)
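As an illustration of restructuring code to increase locality (a minimal sketch; the array name is mine): C stores matrices row-major, so interchanging the loops turns a stride-N walk, which touches a new cache block on almost every access, into a unit-stride walk that uses every word of each block before it is evicted.

    #define N 1024
    static double a[N][N];

    /* Poor spatial locality: consecutive accesses are N*8 bytes apart. */
    double sum_by_columns(void) {
        double s = 0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }

    /* Interchanged loops: consecutive accesses are adjacent in memory. */
    double sum_by_rows(void) {
        double s = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }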