Computer Architecture: Virtual Memory (Memory/Storage Architecture Lab)

2. What do we want?
A memory with (logically) infinite capacity, built on limited physical memory.

3. Virtual Memory Concept
• Hide all physical aspects of memory from users. Memory is a logically unbounded virtual (logical) address space of 2^n bytes. Only portions of the virtual address space are in physical memory at any one time.

4. Paging
• A process's virtual address space is divided into equal-sized pages.
• A virtual address is a pair (p, o): page number p and offset o within the page.

5. Paging
• Physical memory is divided into equal-sized frames, where page size = frame size.
• A physical memory address is a pair (f, o): frame number f and offset o within the frame.
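
The (p, o) / (f, o) split above is just bit arithmetic. A minimal sketch, assuming hypothetical 32-bit addresses with 4 KiB pages (12 offset bits); the constants are illustrative, not from the slides:

```python
PAGE_SHIFT = 12
PAGE_SIZE = 1 << PAGE_SHIFT      # 4096 bytes per page
OFFSET_MASK = PAGE_SIZE - 1

def split_virtual(va):
    """Return (p, o): virtual page number and byte offset within the page."""
    return va >> PAGE_SHIFT, va & OFFSET_MASK

def make_physical(f, o):
    """Combine frame number f and offset o into a physical address."""
    return (f << PAGE_SHIFT) | o

p, o = split_virtual(0x00402ABC)
# p = 0x402, o = 0xABC; if page 0x402 maps to frame 0x7,
# the physical address is (0x7 << 12) | 0xABC = 0x7ABC
```

Note that the offset o passes through translation unchanged; only the page number is mapped.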

6. Paging

7. Mapping from a Virtual to a Physical Address

8. Paging: Virtual Address Translation

9. Paging: Page Table Structure
• One table per process, part of the process's state.
• Contents:
Flags: valid/invalid (also called resident) bit, dirty bit, reference (also called clock or used) bit.
Page frame number.
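
The fields listed above can be sketched as a packed integer: flag bits plus the frame number. The bit positions here are illustrative, not those of any real architecture:

```python
# Flag bits from the slide, at made-up positions.
VALID = 1 << 0   # page resident in physical memory
DIRTY = 1 << 1   # page modified since it was loaded
REF   = 1 << 2   # page referenced recently (clock/used bit)

def make_pte(frame, flags):
    """Pack a frame number and flag bits into one page-table entry."""
    return (frame << 3) | flags

def pte_frame(pte):
    """Extract the frame number, discarding the three flag bits."""
    return pte >> 3

pte = make_pte(frame=0x1F, flags=VALID | REF)
```

Real PTE layouts differ per architecture, but all carry some variant of these fields.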

10. Paging: Example

11. Demand Paging
• Bring a page into physical memory (i.e., map a page to a frame) only when it is needed.
• Advantages:
Program size is no longer constrained by the physical memory size.
Less memory needed → more processes.
Less I/O needed → faster response.
Advantages inherited from paging:
− Contiguous allocation is no longer needed → no external fragmentation problem.
− Arbitrary relocation is possible.
− Variable-sized I/O is no longer needed.

12. Translation Lookaside Buffer (TLB)
• Problem: each (virtual) memory reference requires two memory references — one for the page-table entry and one for the data itself!
• Solution: translation lookaside buffer, a hardware cache of recent translations.
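
The idea can be sketched with a small dictionary in front of the full page table: a hit avoids the extra page-table reference. All names and the toy mappings are illustrative:

```python
page_table = {0x10: 0x2, 0x11: 0x7}   # virtual page -> frame (full table, in memory)
tlb = {}                              # small cached subset of page_table

def translate(p):
    """Translate virtual page p to a frame, caching the result in the TLB."""
    if p in tlb:                      # TLB hit: no page-table reference needed
        return tlb[p]
    frame = page_table[p]             # TLB miss: extra reference to walk the table
    tlb[p] = frame                    # cache the translation for next time
    return frame
```

A real TLB is a small associative hardware structure, not software, but the hit/miss behavior is the same.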

13. A Big Picture

14. On TLB Misses
• If the page is in memory:
Load the PTE (page table entry) from memory and retry.
Could be handled in hardware − can get complex for more complicated page-table structures.
Or in software − raise a special exception, with an optimized handler.
• If the page is not in memory (page fault):
The OS handles fetching the page and updating the page table.
Then restart the faulting instruction.

15. TLB Miss Handler
• A TLB miss indicates either:
The page is present, but its PTE is not in the TLB, or
The page is not present.
• Must recognize the TLB miss before the destination register is overwritten: raise an exception.
• The handler copies the PTE from memory into the TLB, then restarts the instruction.
If the page is not present, a page fault will then occur.

16. Page Fault Handler
• Use the faulting virtual address to find the PTE.
• Locate the page on disk.
• Choose a page to replace.
If dirty, write it to disk first.
• Read the page into memory and update the page table.
• Make the process runnable again.
Restart from the faulting instruction.
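
The steps above can be sketched over toy dictionaries. The names, the two-frame capacity, and the first-in victim choice are all illustrative placeholders, not the slides' replacement policy:

```python
def handle_page_fault(p, page_table, memory, disk, capacity=2):
    """Bring page p into `memory` from `disk`, evicting a victim if full."""
    if len(memory) >= capacity:                 # choose a page to replace
        victim = next(iter(memory))             # placeholder policy (oldest key)
        if page_table[victim].get("dirty"):     # if dirty, write to disk first
            disk[victim] = memory[victim]
        del memory[victim]
        page_table[victim]["valid"] = False
    memory[p] = disk[p]                         # read the page into memory
    page_table[p] = {"valid": True, "dirty": False}  # update the page table
    # ...then the OS restarts the faulting instruction
```

A real handler also validates the address and may block the process while the disk read is in flight; the sketch keeps only the bookkeeping.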

17. Paging: Protection and Sharing
• Protection
Protection is specified on a per-page basis.
• Sharing
Sharing is done by mapping pages in different processes to the same frames.

18. Virtual Memory Performance
• Example
Memory access time: 100 ns
Disk access time: 25 ms
Effective access time:
− Let p = the probability of a page fault.
− Effective access time = 100(1 − p) + 25,000,000p ns.
− If we want only 10% degradation:
110 ≥ 100 + 25,000,000p
10 ≥ 25,000,000p
p ≤ 0.0000004 (one fault every 2,500,000 references)
• Lesson: the OS had better do a good job of page replacement!
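
The arithmetic above, using the slide's numbers (100 ns memory, 25 ms = 25,000,000 ns disk):

```python
def effective_access_ns(p):
    """Expected access time in ns for page-fault probability p."""
    return 100 * (1 - p) + 25_000_000 * p

# At the 10%-degradation bound p = 4e-7 (one fault per 2,500,000
# references), the effective access time stays just under 110 ns.
bound = effective_access_ns(4e-7)
```

Even a tiny fault rate dominates: at p = 1e-4, the expected time is already about 2,600 ns, a 26x slowdown.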

19. Replacement Algorithm: LRU (Least Recently Used)
• Replace the page that has not been used for the longest time.

20. LRU Algorithm: Implementation
• Maintain a stack of recently used pages, ordered by the recency of their use.
Top: most recently used (MRU) page.
Bottom: least recently used (LRU) page.
• Always replace the bottom (LRU) page.
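
The stack above can be sketched with an `OrderedDict`: a referenced page moves to the MRU end, and the victim is popped from the LRU end. The function name and capacity parameter are illustrative:

```python
from collections import OrderedDict

def lru_reference(stack, page, capacity):
    """Touch `page`; return the evicted page, or None if nothing was evicted."""
    victim = None
    if page in stack:
        stack.move_to_end(page)                    # re-reference: move to MRU end
    else:
        if len(stack) >= capacity:
            victim, _ = stack.popitem(last=False)  # evict from the LRU end
        stack[page] = True
    return victim
```

For example, with capacity 2 and the reference string 1, 2, 1, 3, the reference to 3 evicts page 2, since page 1 was touched more recently.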

21. LRU Approximation: Second-Chance Algorithm
• Also called the clock algorithm.
• A variation is used in UNIX.
• Maintain a circular list of the pages resident in memory.
At each reference, the reference (also called used or clock) bit is simply set by hardware.
At a page fault, the clock hand sweeps over pages looking for one with reference bit = 0, clearing each set bit it passes.
− Replace a page that has not been referenced for one complete revolution of the clock.

22. Second-Chance Algorithm
(Figure: circular list of frames, each entry showing a valid/invalid bit, a reference (used) bit, and a frame number.)
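
The sweep can be sketched over a list of reference bits; the list stands in for the circular structure in the figure, and the function name is illustrative:

```python
def clock_replace(ref_bits, hand):
    """Sweep from `hand`, clearing set reference bits; return
    (victim_index, new_hand) for the first page found with bit 0."""
    n = len(ref_bits)
    while True:
        if ref_bits[hand]:
            ref_bits[hand] = 0          # give this page a second chance
            hand = (hand + 1) % n       # advance the clock hand
        else:
            return hand, (hand + 1) % n # bit already 0: replace this page
```

Starting at index 0 with bits [1, 1, 0, 1], the hand clears entries 0 and 1 and selects entry 2 as the victim.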

23. Page Size
• Small page sizes
+ Less internal fragmentation, better memory utilization.
− Large page table, high page-fault handling overhead.
• Large page sizes
+ Small page table, low page-fault handling overhead.
− More internal fragmentation, worse memory utilization.

24. I/O Interlock
• Problem: DMA
Assume global page replacement.
A process blocked on an I/O operation appears to be an ideal candidate for replacement.
If its pages are replaced, however, the in-flight I/O operation can corrupt the system.
• Solutions
1. Lock pages in physical memory using lock bits, or
2. Perform all I/O into and out of OS space.

25. Segmentation with Paging

26. Segmentation with Paging
• Individual segments are implemented as paged virtual address spaces.
A logical address is now a triple (s, p, o): segment number, page number, and offset.
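
The triple (s, p, o) resolves through two lookups: the segment table selects a per-segment page table, which then yields the frame. A toy sketch, with the tables as nested dictionaries and the 12-bit offset an assumption rather than anything from the slides:

```python
PAGE_SHIFT = 12                          # assumed 4 KiB pages

# segment -> (page -> frame); contents are illustrative
segment_table = {0: {0: 0x5, 1: 0x9}}

def translate_spo(s, p, o):
    """Resolve a (segment, page, offset) triple to a physical address."""
    page_table = segment_table[s]        # first lookup: the segment's page table
    frame = page_table[p]                # second lookup: page -> frame
    return (frame << PAGE_SHIFT) | o     # offset passes through unchanged
```

The extra level is what lets protection and sharing be specified per segment while keeping page-granular memory allocation.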

27. Segmentation with Paging
• Address translation

28. Segmentation with Paging
• Additional benefits
Protection: can be specified on a per-segment basis rather than a per-page basis.
Sharing: can likewise be done per segment.

29. Typical Memory Hierarchy: The Big Picture

30. Typical Memory Hierarchy: The Big Picture

31. Typical Memory Hierarchy: The Big Picture

32. A Common Framework for Memory Hierarchies
• Question 1: Where can a block be placed? One place (direct-mapped), a few places (set-associative), or any place (fully associative).
• Question 2: How is a block found? Indexing (direct-mapped), limited search (set-associative), full search (fully associative).
• Question 3: Which block is replaced on a miss? Typically LRU or random.
• Question 4: How are writes handled? Write-through or write-back.
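
Question 2 made concrete: in a direct-mapped cache, "finding" a block is just indexing by (block address mod number of sets). The geometry here (64 sets, 32-byte blocks) is illustrative:

```python
NUM_SETS = 64
BLOCK_BYTES = 32

def cache_index(addr):
    """Return the single set a direct-mapped cache would check for addr."""
    block = addr // BLOCK_BYTES          # block address
    return block % NUM_SETS              # direct-mapped: exactly one candidate set

# A fully associative cache is the other extreme: one set, search everywhere.
# An n-way set-associative cache searches the n blocks in set cache_index(addr).
```

The same spectrum answers Question 1: placement options shrink from "anywhere" (fully associative) down to "exactly one place" (direct-mapped) as associativity drops.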