Page Table Implementation
Page Table Implementations

Key issues:
- Each instruction requires one or more memory accesses: mapping must be done very quickly
- Where do we store the page table? It is now part of the context, with implications for context switching
- Hardware must support auxiliary bits
PDP-11 Example Revisited

- Page table is small (8 entries), so it can be implemented in hardware
- Moderate effect on context-switching time
- Each process needs an 8-entry array in its PCB to store the page table when not running (see the sketch below)
- Protection works fine
- But: what if the address space is 32-bit?
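A minimal C sketch of stashing the 8 hardware entries in the PCB on a context switch; the struct and register layout are hypothetical, not the actual PDP-11 interface:

    /* Hypothetical sketch: saving the 8 hardware page-table entries in the
     * PCB on a context switch. Names and layout are illustrative only. */
    #include <stdint.h>

    #define PDP11_PT_ENTRIES 8

    struct pcb {
        uint16_t saved_pt[PDP11_PT_ENTRIES];  /* page table while not running */
        /* ... registers, scheduling state, etc. ... */
    };

    /* Copy the hardware entries out of the outgoing process and load the
     * incoming process's saved entries: the table is part of the context. */
    void switch_page_table(struct pcb *prev, struct pcb *next,
                           volatile uint16_t *hw_pt)
    {
        for (int i = 0; i < PDP11_PT_ENTRIES; i++) {
            prev->saved_pt[i] = hw_pt[i];
            hw_pt[i] = next->saved_pt[i];
        }
    }

With only 8 entries the copy is trivially cheap, which is why the effect on context-switching time stays moderate.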
Page Table Size Problems

- Assume a 16K page size and a 32-bit address space
- Then each process has 2^32 / 2^14 = 2^18 virtual pages
- Page table size: 2^18 entries * 4 bytes/entry = 1 Mbyte per process (checked below)
- Far too large to store in hardware, slowing down the mapping from virtual to physical addresses
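A quick sanity check of the arithmetic, as a small C program; the sizes are the assumed example values above, not any particular machine:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint64_t page_size   = 1ULL << 14;     /* 16 KB pages      */
        uint64_t vspace      = 1ULL << 32;     /* 32-bit addresses */
        uint64_t entry_bytes = 4;              /* one entry        */

        uint64_t pages = vspace / page_size;   /* 2^18 = 262144    */
        printf("%llu pages, %llu-byte table\n",
               (unsigned long long)pages,
               (unsigned long long)(pages * entry_bytes));  /* 1 MB */
        return 0;
    }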
Solution 1: Multi-Level Page Tables

- Use two- or three-level page tables
- All entries in the topmost level must be present
- Entries in lower levels are present only if needed
- Store the page tables in memory
- Have one CPU register contain the address of the top-most level table (a lookup sketch follows)
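A minimal sketch of such a walk in C, assuming a two-level split with illustrative field widths (10 + 10 + 12 bits); a real MMU does this in hardware:

    /* Two-level page table walk in software; the field widths and entry
     * layout are assumptions for illustration, not a real MMU format. */
    #include <stdint.h>
    #include <stddef.h>

    #define L1_BITS     10
    #define L2_BITS     10
    #define OFFSET_BITS 12   /* 4 KB pages: 10 + 10 + 12 = 32 bits */

    typedef struct {
        uint32_t frame : 20;   /* physical frame number */
        uint32_t valid : 1;
    } pte_t;

    typedef struct {
        pte_t *tables[1 << L1_BITS];   /* level-2 tables, allocated on demand */
    } l1_table_t;

    /* Returns the physical address, or (uint64_t)-1 on a fault. */
    uint64_t translate(l1_table_t *l1, uint32_t vaddr)
    {
        uint32_t i1  = vaddr >> (L2_BITS + OFFSET_BITS);
        uint32_t i2  = (vaddr >> OFFSET_BITS) & ((1 << L2_BITS) - 1);
        uint32_t off = vaddr & ((1 << OFFSET_BITS) - 1);

        pte_t *l2 = l1->tables[i1];
        if (l2 == NULL || !l2[i2].valid)
            return (uint64_t)-1;   /* fault: level-2 table or page absent */
        return ((uint64_t)l2[i2].frame << OFFSET_BITS) | off;
    }

The NULL check on the level-2 pointer is the space saving: unused regions of the address space never get a second-level table allocated.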
Example: SPARC

- 3-level page table
- 32-bit virtual address split as: index 1 (8 bits) | index 2 (6 bits) | index 3 (6 bits) | offset (12 bits)
- The context register indexes a context table (up to 4K entries), which points to the level-1 table of the running process
SPARC: Cont'd

- Only the level-1 table need be present in its entirety: 256 entries * 4 bytes/entry = 1 Kbyte per process
- Context switching is not affected: just save and restore the context register per process
- Second- and third-level tables are present only if necessary (see the address-splitting sketch below)
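A small C sketch of slicing a 32-bit virtual address into the 8/6/6/12 fields described above:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t vaddr = 0x12345678;   /* arbitrary example address */

        uint32_t index1 = (vaddr >> 24) & 0xFF;   /* 8 bits: level-1 index   */
        uint32_t index2 = (vaddr >> 18) & 0x3F;   /* 6 bits: level-2 index   */
        uint32_t index3 = (vaddr >> 12) & 0x3F;   /* 6 bits: level-3 index   */
        uint32_t offset =  vaddr        & 0xFFF;  /* 12 bits: within 4K page */

        printf("i1=%u i2=%u i3=%u off=0x%03x\n", index1, index2, index3, offset);
        return 0;
    }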
Translation Lookaside Buffer

- A small associative memory in the processor, containing recent mapping results
- Typically 8 to 32 entries
- If access is localized, it works very well
- Must be flushed on a context switch
- If the TLB misses, the mapping must be resolved through the page tables in main memory (slow); a lookup sketch follows
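A software sketch of what the TLB does, assuming a fully associative design with 16 entries; real hardware compares all entries in parallel:

    #include <stdint.h>
    #include <stdbool.h>

    #define TLB_ENTRIES 16

    struct tlb_entry {
        uint32_t vpn;     /* virtual page number   */
        uint32_t frame;   /* physical frame number */
        bool     valid;
    };

    static struct tlb_entry tlb[TLB_ENTRIES];

    /* Returns true and fills *frame on a hit; false means a TLB miss. */
    bool tlb_lookup(uint32_t vpn, uint32_t *frame)
    {
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *frame = tlb[i].frame;
                return true;
            }
        }
        return false;   /* miss: walk the page tables in memory (slow) */
    }

    /* Flushing on a context switch simply invalidates every entry. */
    void tlb_flush(void)
    {
        for (int i = 0; i < TLB_ENTRIES; i++)
            tlb[i].valid = false;
    }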
Other Varieties

- 2-level page tables in VAX systems
- 4-level page tables in the 68030/68040
- Organize the cache memory by virtual addresses (instead of physical addresses):
  - Removes the TLB from the critical path
  - Combines cache misses with address translations
  - e.g. MIPS 3000/4000
Solution 2: 0-Level Page Table

- Only a TLB inside the processor; no page table support in the MMU
- On a TLB miss, trap to software and let the OS deal with it (MIPS 3000/4000); a refill sketch follows
- Advantages: simpler hardware; flexibility for the OS
- Disadvantages: the trap to software may be slow
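A sketch of a software-refilled TLB in the spirit of this approach; the trap interface and the OS table format are invented for illustration:

    #include <stdint.h>
    #include <stdio.h>

    #define NPAGES 1024

    static uint32_t os_page_table[NPAGES];   /* 0 = unmapped; the OS is free
                                                to pick any table format     */

    static void tlb_insert(uint32_t vpn, uint32_t frame) {
        printf("TLB refill: vpn %u -> frame %u\n", vpn, frame);
    }

    /* Invoked by the TLB-miss trap: the OS, not the MMU, resolves the
     * mapping, which is where the flexibility comes from. */
    void tlb_miss_handler(uint32_t vpn)
    {
        if (vpn < NPAGES && os_page_table[vpn] != 0)
            tlb_insert(vpn, os_page_table[vpn]);  /* refill, retry access */
        else
            printf("page fault: vpn %u\n", vpn);  /* hand off to the pager */
    }

    int main(void) {
        os_page_table[5] = 42;
        tlb_miss_handler(5);   /* mapped: refill   */
        tlb_miss_handler(7);   /* unmapped: fault  */
        return 0;
    }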
Solution 3: Inverted Page Tables

Rationale:
- Conventional per-process page tables grow with the virtual memory size
- Virtual address spaces are getting larger (e.g. 64 bits)
- Physical memory is projected to remain smaller than virtual memory for the foreseeable future
Inverted Page Table

Main idea:
- One global page table, indexed by frame number
- Each entry contains a virtual address & pid
- Use the TLB to reduce the need to access the page table
- On a TLB miss: search the page table for the <virtual address, pid> pair; the physical address is obtained from the index of the matching entry (the frame number); a lookup sketch follows
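A minimal C sketch of that miss path, with an illustrative linear search (sizes and entry layout assumed):

    #include <stdint.h>
    #include <stdio.h>

    #define NFRAMES 256

    struct ipt_entry {
        uint32_t pid;
        uint32_t vpn;     /* virtual page number */
        int      valid;
    };

    static struct ipt_entry ipt[NFRAMES];   /* one entry per physical frame */

    /* Returns the frame number (the index of the matching entry), or -1. */
    int ipt_lookup(uint32_t pid, uint32_t vpn)
    {
        for (int frame = 0; frame < NFRAMES; frame++)
            if (ipt[frame].valid && ipt[frame].pid == pid && ipt[frame].vpn == vpn)
                return frame;
        return -1;   /* not resident: page fault */
    }

    int main(void) {
        ipt[17] = (struct ipt_entry){ .pid = 3, .vpn = 99, .valid = 1 };
        printf("frame = %d\n", ipt_lookup(3, 99));   /* prints: frame = 17 */
        return 0;
    }

Even at this toy scale the full scan is costly on every miss, which motivates the hashed organization described later.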
Inverted Page Table Structure

- Indexed by frame number
- Each entry contains the pid and virtual address currently using that frame (if any)
- Each entry also contains protection & reference bits:
  - v: valid bit
  - w: write bit
  - r: read bit
  - x: execute bit (rare)
  - f: reference bit
  - m: modified bit
Mapping Virtual to Real Addresses

- A virtual address (n bits) is split into a virtual page number and an offset
- <virtual page number, pid> forms the search key into the inverted page table
- The index of the matching entry gives the frame number (s bits)
- Physical address (p bits) = frame number concatenated with the offset
Properties & Problems

- Table size is independent of the virtual address size; it is a function of physical memory size only
- TLB misses are expensive: we don't know where to look, and may have to search the entire table (very bad)
- Virtual memory operations become more expensive
- Sharing becomes very difficult
A Solution

- Organize the inverted page table as a hash table
- Search key: <pid, vaddr>
- Search in hardware or software
- Examples: IBM System/38, RS/6000, HP PA-RISC systems
- A hashed-lookup sketch follows
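A sketch of the hashed organization, assuming a toy hash function and chained collisions (not any specific system's layout):

    #include <stdint.h>
    #include <stdio.h>

    #define NFRAMES  256
    #define NBUCKETS 128

    struct hipt_entry {
        uint32_t pid;
        uint32_t vpn;
        int      next;    /* next frame in the collision chain, or -1 */
        int      valid;
    };

    static struct hipt_entry hipt[NFRAMES];   /* one entry per frame  */
    static int buckets[NBUCKETS];             /* head frame per bucket */

    static unsigned hash(uint32_t pid, uint32_t vpn) {
        return (pid * 31 + vpn) % NBUCKETS;   /* toy hash function */
    }

    /* Returns the frame number holding <pid, vpn>, or -1 on a miss. */
    int hipt_lookup(uint32_t pid, uint32_t vpn)
    {
        for (int f = buckets[hash(pid, vpn)]; f != -1; f = hipt[f].next)
            if (hipt[f].valid && hipt[f].pid == pid && hipt[f].vpn == vpn)
                return f;
        return -1;
    }

    int main(void) {
        for (int b = 0; b < NBUCKETS; b++) buckets[b] = -1;
        /* Install <pid 3, vpn 99> in frame 17. */
        hipt[17] = (struct hipt_entry){ .pid = 3, .vpn = 99, .next = -1, .valid = 1 };
        buckets[hash(3, 99)] = 17;
        printf("frame = %d\n", hipt_lookup(3, 99));   /* prints: frame = 17 */
        return 0;
    }

With a reasonable hash, a miss touches only one short chain instead of the whole table.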
Sharing & Inverted Page Tables

- Conceivably possible with a general hashing function
- Requires an additional field (the frame number) in each page table entry: frame no., pid, virtual addr, plus the v/w/r/x/f/m bits
- The table size is then no longer limited by physical memory, so no system implements it