Computer Architecture: Virtual Memory
Memory/Storage Architecture Lab
What do we want?
- Memory with infinite capacity.
(Figure: physical vs. logical memory.)
Virtual Memory Concept
- Hide all physical aspects of memory from users.
- Memory is a logically unbounded virtual (logical) address space of 2^n bytes.
- Only portions of the virtual address space are in physical memory at any one time.
Paging
- A process's virtual address space is divided into equal-sized pages.
- A virtual address is a pair (p, o): page number p and offset o within the page.
Paging
- Physical memory is divided into equal-sized frames, with size of page = size of frame.
- A physical memory address is a pair (f, o): frame number f and offset o within the frame.
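Because page size equals frame size, the offset passes through translation unchanged and only the page number is mapped. A minimal C sketch, assuming 32-bit addresses, 4 KiB pages (a 12-bit offset), and a flat page_table array mapping page numbers to frame numbers; all of these parameters are illustrative assumptions, not part of the slides:

    #include <stdint.h>

    #define OFFSET_BITS 12
    #define PAGE_SIZE   (1u << OFFSET_BITS)      /* 4096 bytes */

    extern uint32_t page_table[];                /* page_table[p] = f */

    uint32_t translate_va(uint32_t vaddr)
    {
        uint32_t p = vaddr >> OFFSET_BITS;       /* virtual page number p   */
        uint32_t o = vaddr & (PAGE_SIZE - 1);    /* offset o within page    */
        uint32_t f = page_table[p];              /* look up frame number f  */
        return (f << OFFSET_BITS) | o;           /* physical address (f, o) */
    }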
Paging
(Figure.)
Mapping from a Virtual to a Physical Address
(Figure.)
Paging: Virtual Address Translation
(Figure.)
Paging: Page Table Structure
- One table for each process, part of the process's state.
- Contents:
  − Flags: valid/invalid (also called resident) bit, dirty bit, reference (also called clock or used) bit.
  − Page frame number.
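As a concrete illustration of the fields listed above, one possible C layout for a page table entry is sketched below; the exact bit widths are assumptions, since real formats vary by architecture.

    #include <stdint.h>

    typedef struct {
        uint32_t valid     : 1;   /* resident in physical memory?       */
        uint32_t dirty     : 1;   /* modified since it was brought in?  */
        uint32_t reference : 1;   /* touched recently? (clock/used bit) */
        uint32_t frame     : 20;  /* page frame number                  */
    } pte_t;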
Paging: Example
(Figure.)
Demand Paging
- Bring a page into physical memory (i.e., map a page to a frame) only when it is needed.
- Advantages:
  − Program size is no longer constrained by the physical memory size.
  − Less memory needed → more processes.
  − Less I/O needed → faster response.
- Advantages inherited from paging:
  − Contiguous allocation is no longer needed → no external fragmentation problem.
  − Arbitrary relocation is possible.
  − Variable-sized I/O is no longer needed.
Translation Lookaside Buffer (TLB)
- Problem: each (virtual) memory reference requires two physical memory references, one for the page table entry and one for the data itself.
- Solution: a translation lookaside buffer, a small hardware cache of recently used translations.
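A sketch of how a TLB probe avoids the extra memory reference. A 16-entry fully associative TLB is assumed for illustration; real hardware searches all entries in parallel rather than in a loop.

    #include <stdint.h>
    #include <stdbool.h>

    #define TLB_ENTRIES 16

    typedef struct {
        bool     valid;
        uint32_t vpn;            /* virtual page number   */
        uint32_t pfn;            /* physical frame number */
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_ENTRIES];

    bool tlb_lookup(uint32_t vpn, uint32_t *pfn)
    {
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *pfn = tlb[i].pfn;   /* hit: no page table access needed */
                return true;
            }
        }
        return false;                /* miss: must consult the page table */
    }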
A Big Picture
(Figure.)
On TLB Misses
- If the page is in memory:
  − Load the PTE (page table entry) from memory and retry.
  − Could be handled in hardware, though this can get complex for more complicated page table structures.
  − Or in software: raise a special exception with an optimized handler.
- If the page is not in memory (page fault):
  − The OS handles fetching the page and updating the page table.
  − Then restart the faulting instruction.
TLB Miss Handler
- A TLB miss indicates one of two cases:
  − Page present, but PTE not in TLB.
  − Page not present.
- Must recognize the TLB miss before the destination register is overwritten, then raise an exception.
- The handler copies the PTE from memory into the TLB, then restarts the instruction.
- If the page is not present, a page fault will occur.
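A software handler for these steps might look like the sketch below; the pte_t layout and the helpers tlb_insert() and raise_page_fault() are hypothetical names for illustration, not a real architecture's interface.

    #include <stdint.h>

    typedef struct { uint32_t valid : 1, dirty : 1, reference : 1, frame : 20; } pte_t;

    extern pte_t page_table[];                    /* per-process page table */
    void tlb_insert(uint32_t vpn, uint32_t pfn);  /* hypothetical helper    */
    void raise_page_fault(uint32_t vpn);          /* hypothetical helper    */

    void tlb_miss_handler(uint32_t vpn)
    {
        pte_t pte = page_table[vpn];    /* load the PTE from memory          */
        if (!pte.valid) {
            raise_page_fault(vpn);      /* page not present: take page fault */
            return;
        }
        tlb_insert(vpn, pte.frame);     /* copy the PTE into the TLB         */
        /* the faulting instruction is then restarted */
    }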
Page Fault Handler
- Use the faulting virtual address to find the PTE.
- Locate the page on disk.
- Choose a page to replace; if it is dirty, write it to disk first.
- Read the page into memory and update the page table.
- Make the process runnable again and restart from the faulting instruction.
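The same steps as a C sketch; every type and helper name here is hypothetical, standing in for the corresponding kernel machinery.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct { bool valid, dirty; uint32_t frame; } pte_t;
    typedef struct { pte_t *mapped_pte; } frame_t;

    pte_t   *find_pte(uint32_t vaddr);        /* walk the page table          */
    uint64_t locate_on_disk(pte_t *pte);      /* where the page lives on disk */
    frame_t *choose_victim(void);             /* replacement policy           */
    void     disk_write(frame_t *f);          /* write a frame back to disk   */
    void     disk_read(frame_t *f, uint64_t disk_addr);
    void     map_page(pte_t *pte, frame_t *f);
    void     make_runnable(void);             /* unblock the faulting process */

    void page_fault_handler(uint32_t fault_vaddr)
    {
        pte_t   *pte       = find_pte(fault_vaddr);  /* 1. locate the PTE        */
        uint64_t disk_addr = locate_on_disk(pte);    /* 2. find the page on disk */
        frame_t *victim    = choose_victim();        /* 3. choose a page/frame   */

        if (victim->mapped_pte && victim->mapped_pte->dirty)
            disk_write(victim);                      /* 4. write back if dirty   */

        disk_read(victim, disk_addr);                /* 5. read page into memory */
        map_page(pte, victim);                       /* 6. update the page table */
        make_runnable();                             /* 7. restart instruction   */
    }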
Paging: Protection and Sharing
- Protection: protection is specified on a per-page basis.
- Sharing: sharing is done by mapping pages in different processes to the same frames.
Virtual Memory Performance Example
- Memory access time: 100 ns
- Disk access time: 25 ms
- Effective access time:
  − Let p = the probability of a page fault.
  − Effective access time = 100(1 − p) + 25,000,000p (in ns).
  − If we want only 10% degradation:
    110 ≥ 100 + 25,000,000p
    10 ≥ 25,000,000p
    p ≤ 0.0000004 (one fault every 2,500,000 references)
- Lesson: the OS had better do a good job of page replacement!
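The arithmetic can be checked directly; this small program evaluates the effective access time formula at the bound p = 0.0000004 and prints the 10% degradation target of 110 ns.

    #include <stdio.h>

    int main(void)
    {
        double t_mem  = 100.0;        /* memory access time in ns     */
        double t_disk = 25e6;         /* 25 ms expressed in ns        */
        double p      = 0.0000004;    /* page fault probability bound */
        double eat    = (1.0 - p) * t_mem + p * t_disk;
        printf("effective access time = %.0f ns\n", eat);   /* 110 ns */
        return 0;
    }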
Replacement Algorithm: LRU (Least Recently Used)
- Replace the page that has not been used for the longest time.
LRU Algorithm: Implementation
- Maintain a stack of recently used pages ordered by the recency of their use.
  − Top: most recently used (MRU) page.
  − Bottom: least recently used (LRU) page.
- Always replace the bottom (LRU) page.
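A minimal array-backed sketch of this stack, where index 0 is the MRU end and the last occupied slot is the LRU end; the frame count is an illustrative assumption.

    #include <stdint.h>
    #include <string.h>

    #define NFRAMES 4

    static uint32_t stack[NFRAMES];
    static int      used;                /* number of occupied frames */

    /* Reference page p: move it to the top, evicting the bottom page if
       memory is full. Returns the evicted page, or UINT32_MAX if none. */
    uint32_t lru_reference(uint32_t p)
    {
        uint32_t evicted = UINT32_MAX;
        int i;
        for (i = 0; i < used; i++)
            if (stack[i] == p)
                break;                     /* hit: found at depth i   */
        if (i == used) {                   /* miss                    */
            if (used == NFRAMES) {
                i = NFRAMES - 1;
                evicted = stack[i];        /* evict the bottom (LRU)  */
            } else {
                used++;                    /* take a free frame       */
            }
        }
        memmove(&stack[1], &stack[0], i * sizeof stack[0]); /* push down */
        stack[0] = p;                      /* p becomes the MRU page  */
        return evicted;
    }

Each reference costs a scan and a shift, which is one reason real systems approximate LRU (next slide) rather than implement it exactly.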
LRU Approximation: Second-Chance Algorithm
- Also called the clock algorithm; a variation is used in UNIX.
- Maintain a circular list of pages resident in memory.
- On each reference, the reference (also called used or clock) bit is set by hardware.
- On a page fault, the clock sweeps over pages looking for one with reference bit = 0.
  − Replace a page that has not been referenced for one complete revolution of the clock.
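A sketch of the clock sweep, assuming the hardware-set reference bits are visible in a ref[] array; the array and the frame count are illustrative.

    #include <stdint.h>
    #include <stdbool.h>

    #define NFRAMES 8

    static bool     ref[NFRAMES];   /* set by hardware on each reference */
    static uint32_t hand;           /* current clock hand position       */

    /* Advance the hand until a frame with reference bit 0 is found.
       Frames passed over get their second chance (bit cleared). */
    uint32_t clock_select_victim(void)
    {
        for (;;) {
            if (!ref[hand]) {
                uint32_t victim = hand;        /* unreferenced for a full */
                hand = (hand + 1) % NFRAMES;   /* revolution: replace it  */
                return victim;
            }
            ref[hand] = false;                 /* clear bit: second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }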
Second-Chance Algorithm
(Figure: circular list of page table entries showing the valid/invalid bit, the reference (used) bit, and the frame number.)
Page Size
- Small page sizes:
  + Less internal fragmentation, better memory utilization.
  − Large page tables, high page-fault handling overheads.
- Large page sizes:
  + Small page tables, low page-fault handling overheads.
  − More internal fragmentation, worse memory utilization.
I/O Interlock
- Problem (DMA):
  − Assume global page replacement.
  − A process blocked on an I/O operation appears to be an ideal candidate for replacement.
  − If its pages are replaced, however, the in-flight I/O operation can corrupt the system.
- Solutions:
  1. Lock pages in physical memory using lock bits, or
  2. Perform all I/O into and out of OS space.
Segmentation with Paging
(Figure.)
Segmentation with Paging
- Individual segments are implemented as paged virtual address spaces.
- A logical address is now a triple (s, p, o): segment s, page p, offset o.
Segmentation with Paging: Address Translation
(Figure.)
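In place of the figure, a minimal C sketch of translating the triple (s, p, o) through a segment table and a per-segment page table; the field widths (8-bit s, 12-bit p, 12-bit o) and the table layout are illustrative assumptions.

    #include <stdint.h>

    extern uint32_t *segment_table[];   /* segment_table[s] = page table */

    uint32_t translate_spo(uint32_t vaddr)
    {
        uint32_t s = (vaddr >> 24) & 0xFFu;   /* segment number          */
        uint32_t p = (vaddr >> 12) & 0xFFFu;  /* page within the segment */
        uint32_t o =  vaddr        & 0xFFFu;  /* offset within the page  */
        uint32_t *page_table = segment_table[s];
        uint32_t  f = page_table[p];          /* frame number            */
        return (f << 12) | o;                 /* physical address (f, o) */
    }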
Segmentation with Paging
- Additional benefits:
  − Protection: protection can be specified on a per-segment basis rather than a per-page basis.
  − Sharing: whole segments can be shared between processes.
Typical Memory Hierarchy - The Big Picture
(Figures, spanning three slides.)
A Common Framework for Memory Hierarchies
- Question 1: Where can a block be placed? One place (direct-mapped), a few places (set associative), or any place (fully associative).
- Question 2: How is a block found? Indexing (direct-mapped), limited search (set associative), or full search (fully associative).
- Question 3: Which block is replaced on a miss? Typically LRU or random.
- Question 4: How are writes handled? Write-through or write-back.
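For Question 2, the direct-mapped case reduces to simple bit selection, with no search at all; the sketch below assumes 64-byte blocks and 256 sets, both illustrative parameters.

    #include <stdint.h>

    #define BLOCK_BITS 6     /* 64-byte blocks */
    #define INDEX_BITS 8     /* 256 sets       */

    uint32_t cache_index(uint32_t addr)   /* which set to look in          */
    {
        return (addr >> BLOCK_BITS) & ((1u << INDEX_BITS) - 1);
    }

    uint32_t cache_tag(uint32_t addr)     /* compared to confirm the block */
    {
        return addr >> (BLOCK_BITS + INDEX_BITS);
    }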