Principles of Virtual Memory
Virtual Memory, Paging, Segmentation
Overview
- Virtual Memory
- Paging
- Segmentation
- Combined Segmentation and Paging
- Bibliography
1. Virtual Memory
1.1 Why Virtual Memory (VM)?
1.2 What is VM?
1.3 The Mapping Process
1.4 Terms & Notions
1.5 The Principle of Locality
1.6 VM: Features
1.7 VM: Advantages
1.8 VM: Disadvantages
1.9 VM: Implementation
1.1 Why Virtual Memory (VM)?
- Shortage of memory: efficient memory management is needed
- A process may be too big for physical memory
- There may be more active processes than physical memory can hold
- Requirements of multiprogramming:
  - An efficient protection scheme
  - A simple way of sharing
- External fragmentation wastes physical memory
(Figure: processes 1-4 and the OS competing for physical memory)
1.2 What is VM?
- A program references virtual addresses (e.g. Mov AX, 0xA0F4)
- A mapping unit (MMU) translates each virtual address into a physical address (e.g. 0xA0F4 -> 0xC0F4) using a mapping table (one per process)
- The mapping is not address-to-address but "piece" to "piece": a piece of virtual memory is mapped to a piece of physical memory
- The virtual address space (VAS) is usually much larger than the physical address space (PAS)
- Note: it does not matter at which physical address a piece of VM is placed, since the corresponding addresses are mapped by the mapping unit
1.3 The Mapping Process
- Usually every process has its own mapping table, i.e. its own virtual address space (assumed from now on)
- Not every piece of VM has to be present in physical memory
- Pieces may be loaded from the HDD as they are referenced
- Rarely used pieces may be discarded or written out to disk (swapping)
(Flowchart: on a memory access, the MMU checks the mapping table. Piece in physical memory? Yes: translate the address and deliver the physical address. No: a memory access fault is raised, the OS brings the piece in from the HDD, adjusts the mapping table, and the access is retried.)
1.4 Terms & Notions
- Virtual memory (VM): not a physical device but an abstract concept; comprises the virtual address spaces of all processes
- Virtual address space (VAS): the set of virtual addresses visible to one process (some systems may use a single VAS for all processes)
- Resident set: the pieces of a process currently in physical memory
- Working set: the set of pieces a process is currently working on
1.5 The Principle of Locality
- Memory references within a process tend to cluster
- The working set should be part of the resident set for a process to operate efficiently (else: frequent memory access faults); honoring the principle of locality achieves this
- Typical process lifetime: an early phase (initialization code, data), a main phase (code 1, code 2, data), and a final phase (finalization code); references repeat within a phase, with only single jumps between phases, so the working set follows the current phase
- The principle of locality is weakened by modern programming techniques:
  - OO leads to references to objects all over the address space
  - Multithreaded apps cause sudden jumps in control flow
1.6 VM: Features: Swapping
- On a lack of memory, the OS finds a rarely used piece and adjusts the mapping table:
  - If the piece was modified, it is written out to disk and its HDD location is saved
  - If not, the piece is simply discarded
- No need to swap out a complete process!
- Danger, thrashing:
  - A piece just swapped out is immediately requested again
  - The system swaps in/out all the time; no real work is done
- Thus the piece for swap-out has to be chosen carefully:
  - Keep track of piece usage (the "age" of a piece)
  - A piece used frequently lately will hopefully be used again in the near future (principle of locality!)
1.6 VM: Features: Protection
- Each process has its own virtual address space: processes are invisible to each other, and a process cannot access another process's memory
- The MMU checks protection bits on each memory access (during address mapping)
- Pieces can be protected from being written to, being executed, or even being read
- The system can distinguish different protection levels (user / kernel mode)
- Write protection can be used to implement copy-on-write (see Sharing)
1.6 VM: Features: Sharing
- Pieces of different processes are mapped to one single piece of physical memory
- Allows sharing of code (saves memory), e.g. libraries; shared code must be reentrant (non-self-modifying)
- Copy-on-write: a piece may be used by several processes until one writes to it; then that process gets its own copy
- Shared memory simplifies interprocess communication (IPC)
1.7 VM: Advantages (1)
VM supports:
- Swapping
  - Rarely used pieces can be discarded or swapped out
  - A piece can be swapped back in to any free piece of physical memory that is large enough; the mapping unit translates the addresses
- Protection
- Sharing
  - Common data or code may be shared to save memory
- A process need not be in memory as a whole
  - No need for complicated overlay techniques (the OS does the job)
  - A process may even be larger than all of physical memory
  - Data / code can be read from disk as needed
1.7 VM: Advantages (2)
- Code can be placed anywhere in physical memory without relocation (addresses are mapped!)
- Increased CPU utilization: more processes can be held in memory (at least in part), so more processes are in the ready state (consider: 80% HDD I/O wait time is not uncommon)
1.8 VM: Disadvantages
- Memory requirements (mapping tables)
- Longer memory access times (mapping table lookup); can be improved using a TLB (translation lookaside buffer)
1.9 VM: Implementation
VM may be implemented using:
- Paging
- Segmentation
- A combination of both
Note: everything said in this first chapter still holds for the following chapters!
2. Paging
2.1 What is Paging?
2.2 Paging: Implementation
2.3 Paging: Features
2.4 Paging: Advantages
2.5 Paging: Disadvantages
2.6 Summary: Conversion of a Virtual Address
2.1 What is Paging?
- Virtual memory is divided into equal-size pages (page 0, page 1, ...)
- Physical memory is divided into equal-size page frames (frame 0, frame 1, ...)
- A page table (one per process, one entry per page, maintained by the OS) maps each valid page to a frame
- VM is usually much larger than physical memory, so usually most pages are not mapped
- A contiguous piece of virtual memory may be mapped all over physical memory
2.2 Paging: Implementation: Typical Page Table Entry
A typical entry holds the page frame # plus a set of bits: valid (v), read (r), write (w), execute (x), modified (m), referenced (re), shared (s), caching disabled (c), super-page (su), process id (pid), guard data (gd), and others.
- Read protect bits: may define in which mode (kernel / user) the page is accessible
- Write protect bits: protect code pages from writing (read-only); may be used for copy-on-write
- Execute bits: define whether the page may be executed; separate bits for user/kernel mode are also possible
- Valid: does the entry point to a valid page in physical memory? Else the OS must decide whether the page is paged out or the access is an error
- Referenced: lets the OS determine how often the page is used; can it be paged out or not?
- Modified (dirty): must the page be written out on swap-out, or can it simply be discarded?
- Shared: (obvious)
- Caching disabled: important for machines with memory-mapped I/O; the actual device must be read, not a cached copy
- Others: super-pages, guarded page tables, PID
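The flag bits above can be made concrete with a small sketch. The 32-bit format and the bit positions below are purely illustrative assumptions; real architectures define their own layouts:

```python
# Hypothetical 32-bit page table entry, for illustration only:
# bits 31..12 hold the page frame number, low bits hold the flags.
PTE_VALID      = 1 << 0  # v: entry points to a frame in physical memory
PTE_READ       = 1 << 1  # r
PTE_WRITE      = 1 << 2  # w
PTE_EXECUTE    = 1 << 3  # x
PTE_MODIFIED   = 1 << 4  # m: dirty, must be written out on eviction
PTE_REFERENCED = 1 << 5  # re: used by the OS for replacement decisions

def decode_pte(pte: int) -> dict:
    """Split a page table entry into frame number and flag values."""
    return {
        "frame": pte >> 12,
        "valid": bool(pte & PTE_VALID),
        "read": bool(pte & PTE_READ),
        "write": bool(pte & PTE_WRITE),
        "execute": bool(pte & PTE_EXECUTE),
        "modified": bool(pte & PTE_MODIFIED),
        "referenced": bool(pte & PTE_REFERENCED),
    }

# Frame 0x8, valid + readable + writable, and already dirty.
entry = (0x8 << 12) | PTE_VALID | PTE_READ | PTE_WRITE | PTE_MODIFIED
```

The OS inspects the modified bit on eviction exactly this way: a clean page can be discarded, a dirty one must be written out first.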
2.2 Paging: Implementation: Single-Level Page Tables
- The virtual address is split into page # and offset (example: page # 0x2, offset 0x14)
- The Page Table Base Register (PTBR) points to the page table; the entry at PTBR + 0x2 * L (L = size of an entry) yields frame # 0x8
- The physical address is frame # 0x8 combined with the unchanged offset 0x14
- One entry per page, one table per process
- Problem: page tables can get very large. E.g. a 32-bit address space with 4 KB pages means 2^20 entries per process, i.e. 4 MB at 4 B per entry; with a 64-bit address space, a flat table would grow to many GB!
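The lookup above amounts to a split-and-index operation. A minimal sketch, with a Python dict standing in for the in-memory table and the slide's example values (4 KB pages, page 0x2 mapped to frame 0x8):

```python
PAGE_SIZE = 4096  # 4 KB pages, as in the slide's example

def translate(vaddr: int, page_table: dict) -> int:
    """Single-level translation: split vaddr into page # and offset,
    look the page # up in the per-process page table, and recombine
    the frame # with the unchanged offset."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:
        raise LookupError("page fault: page 0x%x not mapped" % page)
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

# Page 0x2 maps to frame 0x8, as in the slide.
table = {0x2: 0x8}
paddr = translate(0x2 * PAGE_SIZE + 0x14, table)
```

A real table is a flat array indexed by page #, which is exactly why it must have 2^20 entries for a 32-bit space; the dict here only stands in for that array.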
2.2 Paging: Implementation: Multilevel Page Tables
- The virtual address is split into page #1, page #2, page #3, and offset
- Page #1 indexes the page directory, page #2 the page middle directory, page #3 the page table, which yields the frame #; the offset is unchanged
- A single-level table for the whole address space would be oversized; with multiple levels, each table's size can be restricted to one page
- Not all address ranges will be used (principle of locality!), so some page tables need not be present (entry marked v = 0); this saves memory
- Tables limited to one page can more easily be paged out themselves, but then multiple page faults per translation are possible
- Multilevel page tables are still not a satisfying solution: every translation step must be made no matter how sparsely a table is filled. They save memory but cost time.
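The multi-level walk can be sketched as follows. The field widths (8/6/6 bits plus a 12-bit offset) are illustrative assumptions, and nested dicts stand in for the page directory, page middle directory, and page table:

```python
OFFSET_BITS = 12
FIELD_BITS = (8, 6, 6)  # page #1, page #2, page #3 (illustrative split)

def split(vaddr: int):
    """Split a virtual address into its three page-number fields
    and the offset."""
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    rest = vaddr >> OFFSET_BITS
    indices = []
    for bits in reversed(FIELD_BITS):       # extract page #3 first
        indices.append(rest & ((1 << bits) - 1))
        rest >>= bits
    return list(reversed(indices)), offset  # [page #1, #2, #3], offset

def translate(vaddr: int, page_directory: dict) -> int:
    """Walk page directory -> page middle directory -> page table;
    a missing entry at any level means that subtree is not present."""
    indices, offset = split(vaddr)
    node = page_directory
    for idx in indices:
        if idx not in node:                 # v = 0: table/frame absent
            raise LookupError("page fault")
        node = node[idx]
    return (node << OFFSET_BITS) | offset   # node is now the frame #

# Only the one path 1 -> 2 -> 3 is present; everything else is unmapped.
directory = {1: {2: {3: 0x8}}}
vaddr = (((1 << 6 | 2) << 6 | 3) << OFFSET_BITS) | 0x14
```

Note how the absent subtrees cost no memory at all, while a mapped address still pays for every level of the walk; that is the memory-for-time trade-off the slide describes.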
2.2 Paging: Implementation: Inverted Page Tables
- One entry per frame, one table for all processes
- A hash function is applied to (PID, page #) to index the table; the matching entry yields the frame # (example: page # 0xA, offset 0x14); the offset is unchanged
- Saves memory!
- But the system is not as simple as indexing into a table: the hash function must be evaluated and collisions resolved, which costs time
- Faster context switch (one table for all processes)
- Problems with sharing!
- Problem: additional information about pages not presently in memory must still be kept somehow (e.g. normal page tables on the HDD)
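A minimal sketch of the inverted-table lookup, with a hash over (PID, page #) and collision chains; the table size and the example mapping are made-up illustration values:

```python
# Sketch of an inverted page table: one entry per physical frame,
# one table shared by all processes.
PAGE_SIZE = 4096
NUM_FRAMES = 8

# frames[i] holds the (pid, page #) stored in frame i, or None.
frames = [None] * NUM_FRAMES
# buckets maps a hash value to the list of candidate frames (chains).
buckets = {}

def map_page(pid: int, page: int, frame: int) -> None:
    frames[frame] = (pid, page)
    buckets.setdefault(hash((pid, page)) % NUM_FRAMES, []).append(frame)

def translate(pid: int, vaddr: int) -> int:
    page, offset = divmod(vaddr, PAGE_SIZE)
    for frame in buckets.get(hash((pid, page)) % NUM_FRAMES, []):
        if frames[frame] == (pid, page):   # resolve hash collisions
            return frame * PAGE_SIZE + offset
    raise LookupError("page fault")

map_page(pid=7, page=0xA, frame=0x5)  # process 7, page 0xA -> frame 0x5
```

The table size is bounded by physical memory (one entry per frame), not by the virtual address space; the price is the hash evaluation and chain search on every miss of the TLB.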
2.2 Paging: Implementation: Guarded Page Tables (1)
- If a table on the translation path contains only one valid entry, the whole table can be replaced by a guard stored in the superior entry
- The guard (with its length) is compared against the corresponding bits of the virtual address: on a match, the entry leads directly to the next page table base or frame #; on a mismatch, a page fault is raised
- Example: the page #2 and page #3 fields (0xA5, 0x3B2) match the guard, so the page directory entry yields frame # 0x8 directly; the page middle directory and page table are not needed if the guard is in place
- Especially interesting on systems that, aside from a TLB, provide no additional hardware support for paging (zero-level paging systems)
2.2 Paging: Implementation: Guarded Page Tables (2)
- Guarded page tables are especially interesting if the hardware offers only a TLB (zero-level paging, e.g. MIPS)
- The OS then has total flexibility and may use:
  - Different sizes of pages and page tables (all powers of 2 are OK) and as many levels as desired
  - Guarded page tables
  - Inverted page tables
- Optimization: a guarded table entry will usually not contain the guard and guard length themselves but equivalent information; rather, an extended guard, the length of the address still to be translated, and that length minus the length of the index into the subordinate table may be stored (details: [LIE2])
- Note that the handling of protection has to be modified accordingly
- Some details and optimizations are left out here; see [LIE2]
2.3 Paging: Features: Prepaging
- A process references consecutive pages (or just one)
- The OS loads the following pages into memory as well, expecting that they will also be needed
- Saves time when large contiguous structures are used (e.g. huge arrays)
- Wastes memory and time in case the pages are not needed
- May also waste time for others: another process generating a page fault at the same time has to wait!
2.3 Paging: Features: Demand Paging
- On process startup, only the first page is loaded into physical memory
- Pages are then loaded as they are referenced
- Saves memory
- But: may cause frequent page faults until the process has its working set in physical memory
- The OS may adjust its policy (demand paging / prepaging) depending on:
  - Available free physical memory
  - Process types and history
2.3 Paging: Features: Cheap Memory Allocation
- No search for a large enough piece of PM is necessary
- Any requested amount of memory is divided into pages and can be distributed over the available frames
- The OS keeps a linked list of free frames; when memory is requested, the first frames are simply taken off the list
- Example: a process starting with a 6 KB requirement (4 KB pages) gets two pages, placed into the first two free frames on the list
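The free-frame list can be sketched in a few lines; the frame numbers in the list are arbitrary example values:

```python
from collections import deque

PAGE_SIZE = 4096

# The OS keeps a linked list of free frames; allocation just pops
# from the front -- no search for a contiguous region is needed.
free_frames = deque([4, 1, 6, 3, 7])  # frame numbers, arbitrary order

def allocate(nbytes: int) -> list:
    """Round the request up to whole pages and take the first free
    frames from the list, wherever they happen to be in PM."""
    npages = -(-nbytes // PAGE_SIZE)  # ceiling division
    if npages > len(free_frames):
        raise MemoryError("not enough free frames")
    return [free_frames.popleft() for _ in range(npages)]

# A process starting with a 6 KB requirement gets two frames.
got = allocate(6 * 1024)
```

Allocation is O(1) per frame regardless of how fragmented physical memory is, which is exactly the contrast with segmentation drawn later.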
2.3 Paging: Features: Simplified Swapping
- A process requires memory, say 3 frames
- Paging VM system: swap out the 3 most seldom used pages; done
- Non-paging VM system: swapping out the 3 most seldom used pieces will not work, since they need not form one free piece that is big enough; the swap algorithm must try to create free pieces as big as possible (costly!)
2.4 Paging: Advantages
- Allocating memory is easy and cheap
  - Any free frame is OK; the OS can take the first one out of the list it keeps
- Eliminates external fragmentation
  - Data (page frames) can be scattered all over PM; the pages are mapped appropriately anyway
- Allows demand paging and prepaging
- More efficient swapping
  - No need for considerations about fragmentation: just swap out the page least likely to be used
  - Without paging, demand loading might be very costly (finding a fitting piece of free memory)
  - Equal-size blocks (pages) are well suited for the HDD
2.5 Paging: Disadvantages
- Longer memory access times (page table lookup); can be improved using:
  - A TLB
  - Guarded page tables (time savings depend on how many tables are skipped)
  - Inverted page tables
- Memory requirements (one entry per VM page); can be improved using:
  - Multilevel page tables and variable page sizes (super-pages)
  - A Page Table Length Register (PTLR) to limit the virtual memory size
- Internal fragmentation
  - Yet only about ½ page on average per contiguous address range
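The TLB mentioned above is essentially a small cache in front of the page table walk. A sketch with an LRU replacement policy; the capacity and the policy are illustrative choices, not a statement about any particular hardware:

```python
from collections import OrderedDict

PAGE_SIZE = 4096

class TLB:
    """Tiny LRU translation lookaside buffer in front of the
    page table walk."""
    def __init__(self, capacity: int = 4):
        self.capacity = capacity
        self.entries = OrderedDict()        # page # -> frame #

    def lookup(self, page: int):
        if page in self.entries:
            self.entries.move_to_end(page)  # mark as recently used
            return self.entries[page]
        return None                         # TLB miss

    def insert(self, page: int, frame: int) -> None:
        self.entries[page] = frame
        self.entries.move_to_end(page)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the LRU entry

def translate(vaddr: int, tlb: TLB, page_table: dict) -> int:
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = tlb.lookup(page)
    if frame is None:               # miss: walk the page table
        frame = page_table[page]
        tlb.insert(page, frame)
    return frame * PAGE_SIZE + offset
```

Thanks to the principle of locality, most accesses hit the TLB, so the full table walk is paid only rarely.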
2.6 Summary: Conversion of a Virtual Address
Hardware:
1. Look the virtual address up in the TLB.
2. TLB hit: check the access rights. If they suffice, the physical address is delivered. If a write hits a copy-on-write page, control passes to the OS (protection fault); otherwise an exception is raised to the process.
3. TLB miss: walk the page table. If the page is in memory, update the TLB and retry. If the reference is illegal, raise an exception to the process. Otherwise, control passes to the OS (page fault).
OS:
4. Protection fault on a copy-on-write page: copy the page and update the page table; otherwise raise an exception to the process.
5. Page fault: put the process into the blocking state and issue an HDD read request to bring the page in; if memory is full, swap out a page first. When the HDD I/O completes (interrupt), the page table is updated and the process is put back into the ready state.
The hardware hands control to the OS by interrupting the control flow of the process through faults (exceptions / interrupts).
3. Segmentation
3.1 What is Segmentation?
3.2 Segmentation: Advantages
3.3 Segmentation: Disadvantages
3.1 What is Segmentation?
- A virtual address consists of a segment # and an offset
- Virtual memory is divided into segments of varying size, e.g. seg 1 (code), seg 2 (data), seg 3 (stack)
- The MMU uses a segment table (located via the STBR, its length given by the STLR) with one entry per segment: base, limit, and other bits (as in paging: valid, modified, protection, etc.)
- The MMU checks offset < limit: if not, a memory access fault is raised; otherwise the physical address is segment base + offset
- Segments are placed contiguously in physical memory, which causes external fragmentation
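The base/limit check described above can be sketched directly; the segment table contents are example values:

```python
from collections import namedtuple

# One segment table entry: physical base address and segment length.
Segment = namedtuple("Segment", ["base", "limit"])

def translate(seg: int, offset: int, segment_table: list) -> int:
    """Segmentation: check the offset against the segment limit,
    then add the segment base (contiguous placement in PM)."""
    if seg >= len(segment_table):   # STLR check: segment # in range?
        raise LookupError("memory access fault: bad segment #")
    entry = segment_table[seg]
    if offset >= entry.limit:       # offset < limit ?
        raise LookupError("memory access fault: offset out of range")
    return entry.base + offset

table = [Segment(base=0x8000, limit=0x2000),   # seg 0: code
         Segment(base=0x3000, limit=0x1000)]   # seg 1: data
```

Note that the limit check gives bounds checking for free: an out-of-range offset faults instead of silently hitting a neighboring segment.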
3.2 Segmentation: Advantages
As opposed to paging:
- No internal fragmentation (but: external fragmentation)
- May save memory if segments are very small and should not be combined into one page (e.g. for reasons of protection)
- Segment tables have only one entry per actual segment, as opposed to one per page of VM; since the average segment size >> the average page size, there is less overhead (smaller tables)
- Array boundaries can be checked by placing the array into a fitting segment
- Swapping: code can be placed anywhere without relocating it again; but finding free memory is a problem, and since the average segment size >> the average page size, the fragmentation problem gets worse
3.3 Segmentation: Disadvantages
- External fragmentation
- Costly memory management algorithms
  - Segmentation: finding a free memory area that is big enough requires a search! This is the dynamic storage allocation problem; first-fit or best-fit algorithms and compaction may be used (there is no relocation problem, since addresses are mapped)
  - Paging, in contrast: keep a list of free frames; any frame is OK (take the first!)
- Segments of unequal size are not as well suited for swapping; equal-size blocks are better suited for the HDD
- No linear address space
4. Combined Segmentation and Paging (CoSP)
4.1 What is CoSP?
4.2 CoSP: Advantages
4.3 CoSP: Disadvantages
4.1 What is CoSP?
- A virtual address consists of a segment #, page #1, page #2, and an offset
- The segment # indexes the segment table (entry: base, limit); the base points to the page directory of that segment, whose size is limited by the segment limit
- Page #1 indexes the page directory, page #2 the page table, which yields the frame #; the offset is unchanged
- Not all page tables need be present
- The segment table itself may also be paged (as in Multics); the segment number is then broken into a page number and a segment-table offset
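The combined translation can be sketched as a segment lookup followed by a two-level page walk. The two-level split and all example values below are illustrative assumptions:

```python
PAGE_SIZE = 4096

# Per-segment entry: (limit on page-directory entries, page directory).
# Dicts stand in for the page directory and page tables.
def translate(seg, p1, p2, offset, segment_table):
    limit, directory = segment_table[seg]
    if p1 >= limit:                 # size limited by the segment limit
        raise LookupError("fault: beyond segment limit")
    page_table = directory.get(p1)
    if page_table is None:          # page table not present
        raise LookupError("page fault")
    frame = page_table.get(p2)
    if frame is None:               # page not present
        raise LookupError("page fault")
    return frame * PAGE_SIZE + offset

# Segment 0: directory limited to 2 entries; page (0, 1) -> frame 0x8.
segments = {0: (2, {0: {1: 0x8}})}
```

The limit check on p1 is what keeps each segment's paging structures small: the page tables only cover the segment, not the whole address space.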
4.2 CoSP: Advantages
- Reduces memory usage as opposed to pure paging:
  - The page table size is limited by the segment size
  - The segment table has only one entry per actual segment
- Simplifies handling protection and sharing of larger modules (define them as segments)
- Most advantages of paging still hold:
  - Simple memory allocation
  - No external fragmentation
  - Support for swapping, demand paging, prepaging, etc.
- Especially well suited for prepaging: segment limits tell the memory management system where to stop
4.3 CoSP: Disadvantages
- Internal fragmentation
  - Yet only about ½ page on average per contiguous address range
  - Example: a process requesting a 6 KB address range (4 KB pages) gets two pages; the unused part of the second page is internal fragmentation
- No linear address space
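The 6 KB example follows from simple arithmetic, sketched here with 4 KB pages:

```python
PAGE_SIZE = 4096

def internal_fragmentation(request: int) -> int:
    """Bytes wasted in the last page of a contiguous address range:
    the request is rounded up to whole pages, and the unused tail of
    the final page is lost."""
    remainder = request % PAGE_SIZE
    return 0 if remainder == 0 else PAGE_SIZE - remainder

# A 6 KB request with 4 KB pages occupies 2 pages and wastes 2 KB.
waste = internal_fragmentation(6 * 1024)
```

Since the remainder is uniformly anywhere between 0 and one page, the expected waste per contiguous range is about half a page, which is the ½-page average stated above.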
The End