Virtual Memory
Memory Hierarchy Summary
- Cache memory: provides the illusion of very high speed
- Virtual memory: provides the illusion of very large size
- Main memory: reasonable cost, but slow and small
Virtual Memory
- Programs use only logical addresses
- Hardware maps logical addresses to physical addresses
- Only part of a process is loaded into memory
  - a process may be larger than main memory
  - additional processes are allowed in main memory, since only part of each process needs to be in physical memory
  - memory is loaded/unloaded as the programs execute
- Real memory: the physical memory occupied by a program (frames)
- Virtual memory: the larger memory space perceived by the program (pages)
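To make the logical-to-physical mapping concrete, here is a minimal sketch of the translation the hardware performs, assuming a single-level page table, 4 KiB pages, and hypothetical type and field names (pte_t, frame, present) that are not tied to any particular system:

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE  4096u   /* assumed page size: 4 KiB */
#define PAGE_SHIFT 12      /* log2(PAGE_SIZE)          */

/* Hypothetical page-table entry: a frame number plus a present (valid) bit. */
typedef struct {
    uint32_t frame;    /* physical frame number, meaningful only if present */
    bool     present;  /* is the page currently in a physical frame?        */
} pte_t;

/* Translate a virtual address using a single-level page table.
 * Returns true and fills *phys on success; false means "page fault".
 * Bounds checking on the page number is omitted for brevity.       */
bool translate(const pte_t *page_table, uint32_t vaddr, uint32_t *phys)
{
    uint32_t page   = vaddr >> PAGE_SHIFT;      /* upper bits: page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* lower bits: offset      */

    if (!page_table[page].present)
        return false;                           /* would trap to the OS    */

    *phys = (page_table[page].frame << PAGE_SHIFT) | offset;
    return true;
}
```

The key point is that only the page number is remapped; the offset within the page passes through unchanged.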
Virtual Memory That is Larger Than Physical Memory
Virtual Memory
- Principle of locality: a program tends to reference the same items; even if the same item is not reused, nearby items will often be referenced
- Resident set: those parts of the program being actively used (the remaining parts stay on disk)
- Thrashing: constantly needing to fetch pages from secondary storage
  - happens if the O.S. throws out a piece of memory that is about to be used
  - can happen if the program scans a long array, continuously referencing pages not used recently (see the sketch below)
  - the O.S. must watch out for this situation!
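As a small illustration of the long-array case, the loop below (the size and names are made up) touches a new page every few thousand elements; if the resident set is much smaller than the array, many of those touches can become page faults:

```c
#include <stddef.h>

/* Sequentially scanning an array much larger than the resident set touches
 * a new page every PAGE_SIZE bytes; with too few frames, each such touch
 * can turn into a page fault (thrashing).                                 */
#define N (64 * 1024 * 1024)   /* 64M ints = 256 MiB, assumed >> resident set */

long scan_sum(const int *a)
{
    long sum = 0;
    for (size_t i = 0; i < N; i++)   /* a new page every 1024 ints with 4 KiB pages */
        sum += a[i];
    return sum;
}
```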
Decisions about virtual memory:
- Fetch policy: when to bring a page in? When needed, or in anticipation of need?
- Placement: where to put it?
- Replacement: what to unload to make room for a new page?
- Resident set management: how many pages to keep in memory? A fixed or variable number of pages? Reassign pages to other processes?
- Cleaning policy: when to write a page to disk?
- Load control: degree of multiprogramming?
Paging and Virtual Memory
- Large logical memory, small real memory
- Demand paging allows:
  - the size of the logical address space to be unconstrained by physical memory
  - higher utilization of the system
- Paging implementation:
  - frame allocation: how many frames per process?
  - page replacement: how do we choose a frame to replace?
Demand Paging
- Bring a page into memory only when it is needed
  - less I/O needed
  - less memory needed
  - faster response
  - more users
- When a page is needed (i.e., referenced):
  - invalid reference => abort
  - not in memory => bring it into memory
Transfer of a Paged Memory to Contiguous Disk Space
Page Table in Demand Paging
Page Fault
- The first reference to a page that is not in memory traps to the OS: a page fault
- The OS looks at another table to decide:
  - invalid reference => abort
  - just not in memory => continue below
- Get an empty frame
- Swap the page into the frame
- Reset tables, set the validation bit = 1
- Restart the instruction; complications include:
  - block move instructions
  - auto increment/decrement addressing
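A sketch of this page-fault path in C, where all types and helper functions (is_valid_reference, find_free_frame, read_page_from_disk, restart_instruction, and so on) are hypothetical stand-ins for the real OS and hardware mechanisms:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical types and helpers standing in for real OS mechanisms. */
typedef struct { uint32_t frame; bool present; } pte_t;
typedef struct { pte_t *page_table; /* ... */ } process_t;

bool is_valid_reference(process_t *p, uint32_t page);  /* bounds/permission check */
int  find_free_frame(void);                            /* may trigger replacement */
void read_page_from_disk(process_t *p, uint32_t page, int frame);
void abort_process(process_t *p);
void restart_instruction(process_t *p);

/* The page-fault path, following the steps on the slide above. */
void handle_page_fault(process_t *proc, uint32_t page)
{
    pte_t *pte = &proc->page_table[page];

    if (!is_valid_reference(proc, page)) {    /* invalid reference            */
        abort_process(proc);                  /* => abort                     */
        return;
    }

    int frame = find_free_frame();            /* 1. get an empty frame        */
    read_page_from_disk(proc, page, frame);   /* 2. swap the page into it     */
    pte->frame   = frame;                     /* 3. reset tables,             */
    pte->present = true;                      /*    validation bit = 1        */
    restart_instruction(proc);                /* 4. restart faulting instr.   */
}
```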
Steps in Handling a Page Fault
What happens if there is no free frame?
- Page replacement: find some page in memory that is not really in use, and swap it out
  - algorithm: which page to replace?
  - performance: want an algorithm that results in the minimum number of page faults
- The same page may be brought into memory several times
Demand Paging
- Paged memory combined with swapping
- Processes reside in main and secondary memory
- Could also be termed lazy swapping: bring pages into memory only when they are accessed
- What about at context-switch time?
  - could swap out the entire process
  - restore page state as remembered
  - anticipate which pages are needed
Page Replacement
- Demand paging allows us to over-allocate memory
  - when there are no free frames, we must implement frame replacement
- Frame replacement:
  - select a frame (the victim)
  - write the victim frame to disk
  - read the new page into the frame
  - update the page tables
  - restart the process
Page Replacement
- If no frames are free, two page transfers are needed, which doubles the page-fault service time
- Reduce the overhead using a dirty bit
  - the dirty bit is set whenever a page is modified
  - if the victim is dirty, write it out; otherwise just throw it out (see the sketch below)
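A sketch of frame replacement with the dirty-bit optimization; the frame-table layout and helper names here are assumptions for illustration, not any particular OS's interface:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-frame state. */
typedef struct {
    uint32_t owner_page;  /* which page currently occupies this frame   */
    bool     dirty;       /* set by hardware when the page is modified  */
} frame_t;

int  pick_victim(frame_t *frames, int nframes);         /* replacement policy */
void write_frame_to_disk(frame_t *frames, int victim);  /* swap out           */
void read_page_into_frame(uint32_t page, int victim);   /* swap in            */
void update_page_tables(uint32_t page, int victim);

void replace_frame(frame_t *frames, int nframes, uint32_t new_page)
{
    int victim = pick_victim(frames, nframes);  /* select a victim frame      */

    if (frames[victim].dirty)                   /* write it back only if it   */
        write_frame_to_disk(frames, victim);    /* was modified (dirty bit)   */

    read_page_into_frame(new_page, victim);     /* bring the new page in      */
    update_page_tables(new_page, victim);       /* fix both page tables       */
    frames[victim].owner_page = new_page;
    frames[victim].dirty = false;
}
```

When the victim is clean, the write-back is skipped, which is exactly how the dirty bit avoids the second page transfer.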
Paging Implementation (continued)
- Must be able to restart a process at any memory reference:
  - instruction fetch
  - operand fetch
  - operand store
- Consider the simple instruction Add C,A,B (C = A + B)
  - all operands on different pages, instruction not in memory
  - 4 possible page faults: slooooow :-(
Performance of Demand Paging
- Page fault rate p, with 0 <= p <= 1.0
  - if p = 0, no page faults
  - if p = 1, every reference is a fault
- Effective access time (EAT):
  EAT = (1 - p) x memory access time
        + p x (page fault overhead + [swap page out] + swap page in + restart overhead)
Demand Paging Example
- Memory access time = 1 microsecond
- 50% of the time the page being replaced has been modified and therefore needs to be swapped out
- Swap page time = 10 msec = 10,000 microseconds
- Average page-fault service time = 10,000 (swap in) + 0.5 x 10,000 (swap out) = 15,000 microseconds
- EAT = (1 - p) x 1 + p x 15,000 ~ 1 + 15,000p (in microseconds)
Performance Example
- Paging time:
  - disk latency: 8 milliseconds
  - disk seek: 15 milliseconds
  - disk transfer time: 1 millisecond
  - total paging time: ~25 milliseconds
- Could be longer due to
  - device queueing time
  - other paging overhead
Paging Performance (continued)
- Effective access time:
  EAT = (1 - p) x ma + p x pft
  where:
  - p is the probability of a page fault
  - ma is the memory access time
  - pft is the page fault time
Paging Performance (continued)
- Effective access time with 100 ns memory access and 25 ms page fault time:
  EAT = (1 - p) x ma + p x pft
      = (1 - p) x 100 + p x 25,000,000
      = 100 + 24,999,900 p   (in ns)
- What is the EAT if p = 0.001 (1 fault in 1000 accesses)?
  100 + 24,999,900 x 0.001 ~ 25,000 ns = 25 microseconds: a 250x slowdown!
- How do we get less than a 10% slowdown?
  100 + 24,999,900 p <= 1.10 x 100 ns = 110 ns
  => fewer than 1 out of 2,500,000 accesses may fault
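A small calculation that reproduces the numbers above (100 ns memory access, 25 ms page-fault time); the variable names are my own:

```c
#include <stdio.h>

/* Effective access time: EAT = (1 - p) * ma + p * pft, using the
 * slide's numbers (ma = 100 ns, pft = 25 ms = 25,000,000 ns).     */
int main(void)
{
    const double ma  = 100.0;   /* memory access time, ns */
    const double pft = 25e6;    /* page-fault time, ns    */

    double p   = 0.001;         /* one fault per 1000 references */
    double eat = (1.0 - p) * ma + p * pft;
    printf("EAT at p=%.3g: %.0f ns (~%.0fx slowdown)\n", p, eat, eat / ma);

    /* Largest p that keeps the slowdown under 10% (EAT <= 110 ns):
       (1 - p)*ma + p*pft <= 1.1*ma  =>  p <= 0.1*ma / (pft - ma)   */
    double p_max = 0.1 * ma / (pft - ma);
    printf("p must be <= %.2e, i.e. about 1 fault per %.0f accesses\n",
           p_max, 1.0 / p_max);
    return 0;
}
```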
Paging Improvements
- Paging needs to be as fast as possible
- Disk access time is faster if we:
  - use larger blocks
  - avoid file-table lookups or other indirect lookups
  - use binary boundaries
- Most systems have a separate swap space
  - copy the entire file image into swap at load time, then demand-page from swap
  - or demand-page initially from the file system and write pages to swap as they are needed
Fetch Policy
- Demand paging means that a process starts slowly
  - produces a flurry of page faults early, then settles down
- Locality means a smaller number of pages per process is needed
  - the desired set of pages should be in memory: the working set
- Prepaging means bringing in pages that are likely to be used in the near future
  - tries to take advantage of disk characteristics: it is generally more efficient to load several consecutive sectors/pages than individual sectors, due to seek and rotational latency
  - hard to correctly guess which pages will be referenced (easier to guess at program startup)
  - may load unnecessary pages
Placement Policies
- Where to put the page
  - trivial in a paging system: a page can be placed in any frame
  - Best-fit, First-fit, or Next-fit can be used with segmentation
  - placement is a concern in distributed systems
Replacement Policies
- Replacement policy: which page to replace when a new page needs to be loaded
- Tends to combine several things:
  - how many page frames are allocated
  - replace only a page of the current process, or a page from any process? (resident set management)
  - from the pages being considered, selecting one page to be replaced
- Frame locking: require a page to stay in memory
  - O.S. kernel and interrupt handlers
  - real-time processes
  - other key data structures
  - implemented by a bit in the frame data structures (see the sketch below)
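One way the "bit in data structures" for frame locking might look; this frame-table layout is only an assumption for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical frame-table entry: the lock bit sits next to the
 * other per-frame state the OS already keeps.                    */
typedef struct {
    uint32_t owner_pid;  /* process owning the resident page            */
    uint32_t page;       /* which of its pages occupies this frame      */
    bool     locked;     /* frame-locked: never selected as a victim    */
    bool     dirty;      /* modified since it was loaded                */
} frame_entry_t;

/* A replacement policy would simply skip locked frames, e.g.:
 *     if (frames[i].locked) continue;                            */
```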