CDA 3101 Spring 2016 Introduction to Computer Organization Physical Memory, Virtual Memory and Cache 22, 29 March 2016.

Overview of Memory
– Storage of data and instructions
– Modelled as a linear array of 32-bit words
– MIPS: 32-bit addresses => 2^32 words
– Response time for any word is the same
– Memory can be read/written in 1 cycle
– Types of memory:
   – Physical: what is installed in the computer
   – Virtual: disk storage that looks like memory
   – Cache: fast memory for temporary storage

Memory Instructions
– lw reg, offset(base)
– sw reg, offset(base)
– Load => memory-to-register transfer
– Store => register-to-memory transfer
[Figure: both lw and sw form the effective address from the base register's contents plus the offset]
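The addressing mode above can be sketched in a few lines of Python (an illustrative toy, not MIPS hardware; the function name is my own):

```python
# Hypothetical sketch of MIPS base+offset addressing: lw/sw compute the
# effective address as (base register contents) + (sign-extended offset).
def effective_address(base_reg_value, offset):
    """Effective address used by lw/sw: base register plus offset."""
    return base_reg_value + offset

# lw $t0, 8($sp) with $sp = 0x7FFF0000 reads the word at 0x7FFF0008
addr = effective_address(0x7FFF0000, 8)
```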

Memory Hierarchy
1. Registers
   – Fast storage internal to the processor
   – Speed: 1 CPU clock cycle; persistence: a few cycles
   – Capacity: ~0.1 KB to 2 KB
2. Cache
   – Fast storage internal or external to the processor
   – Speed: a few CPU clock cycles; persistence: tens to hundreds of pipeline cycles
   – Capacity: 0.5 MB to 2 MB
3. Main Memory
   – Main storage, usually external to the processor; < 16 GB
   – Speed: 1-10 CPU clocks; persistence: msec to days
4. Disk Storage
   – Very slow: access time = 1 to 15 msec
   – Used as the backing store for main memory

Physical vs. Virtual Memory
– Physical Memory
   – Installed in the computer: 4-32 GB RAM in a PC
   – Limited by the size and power supply of the machine
   – Potential extent limited by the size of the address space
– Virtual Memory
   – How to put 2^32 words in your PC?
   – Not as RAM: not enough space or power
   – Make the CPU believe it has 2^32 words
   – Have main memory act as a long-term cache
   – Page the main memory contents to/from disk
   – The page table (address system) works with the Memory Management Unit to control page storage/retrieval

Cache Memory
– Principle of Locality
   – lw and sw access a small part of memory in a given time slice (tens of contiguous cycles)
– Cache subsamples memory => temporary storage
– Three mapping schemes, for a given memory block B:
   – Direct mapped: B is contained in one and only one cache block
   – Set associative: one of the n cache blocks in a set may contain B
   – Fully associative: any cache block can contain B
– Different mapping schemes suit different applications and data access patterns
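The three schemes can be viewed as one parameterized scheme. As an illustrative sketch (the function name and parameters are my own, not from the slides), this lists which cache blocks may hold a given memory block B:

```python
def candidate_blocks(block_addr, num_blocks, assoc):
    """Cache block indices that may hold memory block `block_addr` in an
    `assoc`-way set-associative cache with `num_blocks` blocks total.
    assoc = 1 gives direct mapping; assoc = num_blocks is fully associative."""
    num_sets = num_blocks // assoc
    set_index = block_addr % num_sets        # which set B maps to
    return [set_index * assoc + way for way in range(assoc)]

# Memory block 13 in an 8-block cache:
candidate_blocks(13, 8, 1)   # direct mapped: exactly one candidate
candidate_blocks(13, 8, 2)   # 2-way: two candidates in one set
candidate_blocks(13, 8, 8)   # fully associative: any block
```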

Cache Memory - Terms
– Block: group of words
– Set: group of blocks
– Hit
   – When you look for B in the cache, you find it
   – Hit time: the time it takes to find B
   – Hit rate: the fraction of accesses that find B in the cache
– Miss
   – You look for B in the cache and can't find it
   – Miss rate: the fraction of accesses that fail to find B in the cache
   – Causes: poor cache replacement strategy; lack of locality in memory access
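Hit rate can be made concrete with a toy direct-mapped cache simulator (an illustrative sketch under my own naming, not from the slides):

```python
def simulate_direct_mapped(addresses, num_blocks, block_size):
    """Run byte addresses through a toy direct-mapped cache and
    return the hit rate (hits / total accesses)."""
    cache = [None] * num_blocks        # stored tag per block; None = invalid
    hits = 0
    for addr in addresses:
        block = addr // block_size     # which memory block this byte is in
        index = block % num_blocks     # which cache block it maps to
        tag = block // num_blocks      # upper bits distinguishing blocks
        if cache[index] == tag:
            hits += 1                  # hit: the block is already resident
        else:
            cache[index] = tag         # miss: fill the block
    return hits / len(addresses)

# Four sequential words in one 16-byte block: one miss, then three hits.
rate = simulate_direct_mapped([0, 4, 8, 12], num_blocks=8, block_size=16)
```

This also shows why locality matters: the sequential accesses above land in the same block, so only the first one misses.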

Set Associative Mapping
– Generalizes all cache mapping schemes
   – Assume the cache contains N blocks
   – 1-way SA cache: direct mapping
   – M-way SA cache: if M = N, then fully associative
– Advantage
   – Decreases miss rate (more places to find B)
– Disadvantage
   – Increases hit time (more places to look for B)
   – More complicated hardware

How an SA Cache Works
– Cache address components:
   – Index i: selects the set S_i
   – Tag: used to find the block you're looking for, by comparing it with the tags of the n blocks in the selected set S_i
   – Block offset: the address of the desired data within the block
[Figure: the address fields Tag | Index | Offset select a set, then a block within the set, then a word within the block]
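Splitting an address into these fields is plain bit manipulation. A minimal sketch (the field widths passed in are hypothetical, chosen per cache design):

```python
def split_address(addr, offset_bits, index_bits):
    """Split an address into (tag, index, offset) cache fields.
    The low `offset_bits` select a byte in the block, the next
    `index_bits` select the set, and the remaining high bits are the tag."""
    offset = addr & ((1 << offset_bits) - 1)
    index = (addr >> offset_bits) & ((1 << index_bits) - 1)
    tag = addr >> (offset_bits + index_bits)
    return tag, index, offset

# 0b1101_0110_1100 with 4 offset bits and 4 index bits:
# tag = 0b1101, index = 0b0110, offset = 0b1100
fields = split_address(0b110101101100, offset_bits=4, index_bits=4)
```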

Example: 2-way Cache Hardware
[Figure: a 2-way set-associative cache. The address splits into Tag | Index | Offset; each way stores Valid, Tag, and Data fields. Comparators check each stored tag against the address tag, AND gates combine the comparison with the valid bit to produce Hit, and a 2-to-1 mux (driven by the comparison result) selects the data from the matching way.]
– An N-way cache needs N muxes, AND gates, and comparators

Handling a Cache Miss
Example: instruction cache miss (couldn't find the instruction in the cache)
1. Send the original PC value (current PC - 4) to the memory
2. Main memory performs the read; we wait for the access to complete
3. Write the cache entry:
   – Put the data read from memory into the cache Data field
   – Write the upper bits of the address into the cache Tag field
   – Turn the Valid bit ON
4. Restart execution at the IF stage of the pipeline; the instruction will be refetched, and this time it will be in the cache

Cost of a Cache Miss
Example: instruction cache miss (couldn't find the instruction in the cache)
– Observe: bigger blocks take more time to fetch
– But: bigger blocks exploit the spatial locality property
– Which effect wins?
– Solution: make the cache as big as possible; this decreases the effect of block size
[Figure: miss rate vs. block size for a small cache and a big cache; the big cache's curve is lower, so increasing block size hurts it later]

Block Write Strategies
1) Write-through: write the data to both (a) the cache and (b) the block in main memory
   – Advantage: misses are simpler and cheaper because you don't have to write the block back to a lower level
   – Advantage: easier implementation; only a write buffer is needed
2) Write-back: write the data only to the cache block; write to memory only when the block is replaced
   – Advantage: writes are limited only by the cache write rate
   – Advantage: multi-word writes are supported, since only one (efficient) write to main memory is made per block
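The write-back policy can be sketched with a dirty bit (an illustrative toy under my own naming, not a real controller): stores touch only the cache, and memory is updated once, at eviction.

```python
class WriteBackBlock:
    """Toy write-back cache block: a store sets the dirty bit, and main
    memory is updated only when the block is evicted (replaced)."""

    def __init__(self):
        self.data = {}       # offset -> value held in the cache block
        self.dirty = False   # True if the block differs from memory

    def store(self, offset, value):
        self.data[offset] = value
        self.dirty = True    # write goes only to the cache

    def evict(self, memory, base_addr):
        if self.dirty:       # one write-back per block, on replacement
            for off, val in self.data.items():
                memory[base_addr + off] = val
        self.data.clear()
        self.dirty = False
```

A write-through block would instead update `memory` inside `store` on every write, which is why its misses never require a write-back.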

Know these Concepts & Techniques
OVERVIEW: pp [Patterson & Hennessy, 5th Ed.]
– Design of Memory to Support Cache: pp
– Measuring and Improving Cache Performance: pp
– Dependable Memory Hierarchy: pp
– Cache Coherence: pp
– Cache Controllers: pp, p. 470
READINGS FROM OUR COURSE WEB PAGE:
6.1. Overview of Memory Hierarchies
6.2. Basics of Cache and Virtual Memory
6.3. Memory Systems Performance Analysis & Metrics

New Topic: Virtual Memory (VM)
– Observe: the cache is a cache for main memory; *** main memory is a cache for disk storage ***
– Justification:
   – VM allows efficient and safe sharing of memory among multiple programs (multiprogramming support)
   – VM removes the programming headaches of a small amount of physical memory
   – VM simplifies loading the program by supporting relocation
– History: VM was developed first, then cache; cache is based on VM technology, not conversely

Virtual Memory Terms
– Page: a virtual memory block (cache = block, VM = page)
– Page fault: a miss on a memory read (cache = miss, VM = page fault)
– Physical address: where data is stored in physical memory
– Virtual address: produced by the CPU, which sees a big address space; translated by HW + SW to yield a physical address
– Memory mapping (address translation): the process of transforming a virtual address into a physical address
– Translation lookaside buffer (TLB): helps make memory mapping more efficient

Virtual Memory Process
[Figure: the CPU produces an N-bit virtual address, split into a virtual page number and a page offset (low K bits). Translation maps the virtual page number to a physical page number, forming an M-bit physical address (physical page number + the unchanged page offset) that indexes physical memory.]
– M is always less than N; otherwise, why have VM?
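The translation step above can be sketched in Python (an illustrative toy, not real MMU hardware: the page table is a plain dict from virtual page number to physical page number, and a missing entry stands in for a page fault):

```python
PAGE_SIZE = 4096  # 2**12 bytes, so the low 12 bits are the page offset

def translate(virtual_addr, page_table):
    """Translate a virtual address to a physical address via a page table
    mapping virtual page number -> physical page number."""
    vpn = virtual_addr // PAGE_SIZE       # virtual page number (high bits)
    offset = virtual_addr % PAGE_SIZE     # page offset (unchanged)
    if vpn not in page_table:
        raise KeyError("page fault: virtual page %d not resident" % vpn)
    return page_table[vpn] * PAGE_SIZE + offset

# Virtual page 1 mapped to physical page 7:
phys = translate(0x1234, {1: 7})   # offset 0x234 is preserved
```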

Virtual Memory Cost
– Page fault: re-fetch the data from disk (millisecond latency)
– Reducing VM cost:
   – Make pages large enough to amortize the high disk access time
   – Allow fully associative page placement to reduce the page fault rate
   – Handle page faults in software, because you have a lot of time between disk accesses (versus cache, which is very fast)
   – Use clever software algorithms to place pages (we have the time): Least Recently Used rather than random page replacement
   – Use write-back (faster) instead of write-through (too slow)
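Least Recently Used replacement, mentioned above, can be sketched in a few lines (an illustrative toy with my own naming; a real OS only approximates LRU):

```python
from collections import OrderedDict

def lru_page_faults(references, num_frames):
    """Count page faults for a reference string under LRU replacement,
    with `num_frames` physical page frames available."""
    frames = OrderedDict()               # insertion order tracks recency
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)     # mark as most recently used
        else:
            faults += 1                  # page fault: fetch from disk
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults

# Reference string 1,2,3,1,4,5 with 3 frames: the hit on 1 keeps it
# resident, so 2 is evicted first, then 3.
faults = lru_page_faults([1, 2, 3, 1, 4, 5], num_frames=3)
```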

Know these Concepts and Techniques
Patterson and Hennessy, 5th Edition
– Virtual Machines: pp
– Virtual Memory: pp
Things you must know for Exam-2 and the Final Exam:
1. What is a cache, and how does it work (in detail)?
2. What are the performance penalties of caches, and how can they be reduced?
3. How is the translation lookaside buffer useful, and how does it work?
4. The difference between write-through and write-back: which one is used for cache and which for VM, and the advantages of each
5. How to compute cache and virtual memory performance

Conclusions
– Caching improves memory efficiency by:
   – Restricting frequently occurring read/write operations to fast memory
   – Buffering register-to-memory accesses, thereby leveling resource use
   – Keeping the I/O pipeline almost always full (occupied) to maximize hierarchical memory system throughput
– Virtual memory is useful because it maps a big virtual address space to a small physical address space; it works very much like a cache
Next Time: I/O and Buses

Think: Exam-2 + Weekend!!