Virtual Memory
Operating Systems, Fall 2002
Paging and Virtual Memory
- Paging makes virtual memory possible
- The logical-to-physical address mapping is dynamic
- A process can be broken into a number of pages that need not be mapped into a contiguous region of main memory
- => It is not necessary for all of a process's pages to be in main memory during execution
How does this work?
- The CPU can execute a process as long as some portion of its address space is mapped onto physical memory
  - E.g., the next instruction and data addresses are mapped
- Once a reference to an unmapped page is generated (a page fault):
  - Put the process into the blocked state
  - Read the page from disk into memory
  - Resume the process
Benefits
- More processes may be maintained in main memory
  - Better system utilization
- The process size is not restricted by the physical memory size: the process memory is virtual
  - But what is the limit anyway?
Why is this practical?
- Observation: program branching and data access patterns are not random
- Principle of locality: program and data references tend to cluster
- => Only a fraction of the process's virtual address space needs to be resident to allow the process to execute for sufficiently long
Virtual memory implementation
- Efficient run-time address translation
  - Hardware support, control data structures
- Fetch policy
  - Demand paging: a page is brought into memory only when a page fault occurs
  - Pre-paging: pages are brought in in advance
- Page replacement policy
  - Which page to evict when a page fault occurs?
Thrashing
- A condition in which the system spends most of its time moving pages back and forth between memory and disk
- Possible causes:
  - A bad page replacement policy
  - Programs with non-local behavior
Address translation
- A virtual address is divided into a page number and an offset
- The process page table maintains mappings of virtual pages onto physical frames
- Each process has its own page table
- Virtual address layout: | page number | offset |
Forward-mapped page tables (FMPT)
- Page table entry (PTE) structure: | P | M | frame number | other control bits |
  - P: present bit, M: modified bit
- The page table is an array of such entries
- The index is the virtual page number
Address Translation using FMPT
(diagram: the page table pointer register and the page number from the virtual address select a PTE; the frame number it holds is combined with the offset to form the physical address; see the sketch below)
Handling large address spaces
- A one-level FMPT is not suitable for large virtual address spaces
  - 32-bit addresses, 4 KB (2^12) page size: 2^32 / 2^12 = 2^20 entries, ~4 bytes each => a ~4 MB resident page table per process!
  - What about 64-bit architectures?
- Solutions:
  - Multi-level FMPT
  - Inverted page tables (IPT)
Multilevel FMPT
- Use bits of the virtual address to index a hierarchy of page tables
- The leaf is a regular PTE
- Only the root is required to stay resident in main memory
- Other portions of the hierarchy are subject to paging, just like regular process pages
Two-level FMPT
- Virtual address layout: | p1 (10 bits) | p2 (10 bits) | page offset d (12 bits) |
- p1 indexes the outer page table; p2 indexes the selected second-level page table
Two-level FMPT
(diagram: the outer page table selects a second-level page table, which maps the page to a frame; see the sketch below)
Inverted page table (IPT)
- A single table with one entry per physical page
- Each entry contains the virtual address currently mapped to that physical page (plus control bits)
- Different processes may reference the same virtual address values
  - An address space identifier (ASID) uniquely identifies the process address space
Address translation with IPT
- The virtual address is first indexed into the hash anchor table (HAT)
- The HAT provides a pointer to a linked list of potential page table entries
- The list is searched sequentially for a virtual address (and ASID) match
- If no match is found -> page fault
Address translation with IPT
(diagram: the page number and the ASID register are hashed and added to the HAT base register to select a HAT entry; the chain of IPT entries it anchors is searched for a matching page number and ASID, yielding the frame number; see the sketch below)
Translation Lookaside Buffer (TLB)
- With VM, accessing a memory location involves at least two memory accesses
  - Page table access + the actual memory access
- The TLB caches recent virtual-to-physical address mappings
- An ASID, or a TLB flush, is used to enforce protection
TLB internals
- The TLB is an associative, high-speed memory
- Each entry is a (tag, value) pair
- When presented with a tag, it is compared with all stored tags simultaneously
  - If a match is found, the value is returned; otherwise, it is a TLB miss
- Expensive: a typical TLB has only 64-1024 entries
- Do not confuse with the memory cache!
Address translation with TLB
(diagram: the TLB is probed first; only on a miss is the page table consulted)
Bits in the PTE: present (valid)
- Present (valid) bit
  - Indicates whether the page is assigned to a frame or not
  - An invalid page may not be part of any memory segment
  - A reference to an invalid page generates a page fault, which is handled by the operating system
Bits in the PTE: modified, used
- Modified (dirty) bit
  - Indicates whether the page has been modified
  - Unmodified pages need not be written back to the disk when evicted
- Used bit
  - Indicates whether the page has been accessed recently
  - Used by the page replacement algorithm
Bits in the PTE
- Access permissions bit
  - Indicates whether the page is read-only or read-write
- UNIX copy-on-write bit
  - Set when more than one process shares a page
  - If one of the processes writes into the page, a separate private copy must first be made so the other processes sharing the page are unaffected
  - Useful for optimizing fork()
Protection with VM
- Preventing processes from accessing other processes' pages
- Simple with FMPT: load the process's page table base address into a register upon context switch
- With IPT: use the ASID
Segmentation with paging
- Segmentation simplifies protection and sharing and enforces modularity, but is prone to external fragmentation
- Paging is transparent, eliminates external fragmentation, and allows for sophisticated memory management
- Segmentation and paging can be combined
Address translation
(diagram: the segment number and the segment table pointer select a segment table entry pointing to that segment's page table; the page number then selects a PTE, and the frame number is combined with the offset to address main memory; see the sketch below)
Page size considerations
- Small page size:
  - better approximates locality
  - but large page tables and inefficient disk transfers
- Large page size:
  - internal fragmentation
- Most modern architectures support a number of different page sizes
  - Often a configurable system parameter
Next: Page replacement