Paging
Andrew Whitaker, CSE451
Review: Process (Virtual) Address Space
[Diagram: user space and kernel space within one process address space]
Each process has its own address space
The OS and the hardware translate virtual addresses to physical frames
Multiple Processes
[Diagram: user space and kernel space for proc1 and proc2]
Each process has its own address space
And its own set of page tables
Kernel mappings are the same for all processes
Note: threads share page tables
Linux Physical Memory Layout
Paging Issues
Memory scarcity (virtual memory, stay tuned…)
Making Paging Fast
Reducing the Overhead of Page Tables
Review: Mechanics of address translation
[Diagram: the virtual page # of a virtual address indexes the page table to get a page frame #, which is combined with the offset to form the physical address of a frame in physical memory]
Problem: page tables live in memory
Making Paging Fast
We must avoid a page table lookup for every memory reference
This would double memory access time
Solution: the Translation Lookaside Buffer (TLB)
Fancy name for a cache
The TLB stores a subset of PTEs (page table entries)
TLBs are small and fast (16-48 entries)
Can be accessed “for free”
TLB Details
In practice, most (> 99%) of memory translations are handled by the TLB
Each processor has its own TLB
The TLB is fully associative: any TLB slot can hold any PTE
Who fills the TLB? Two options:
Hardware (x86) walks the page table on a TLB miss
A software routine (MIPS, Alpha) fills the TLB on a miss
Advantage of hardware fill: speed; advantage of software fill: flexibility
The TLB itself needs a replacement policy, usually implemented in hardware (LRU)
Again, we’re taking advantage of the principle of locality here: if the program accessed its address space willy-nilly, then caching would not work, including TLB caching
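To make the "> 99%" figure concrete, here is a minimal sketch of the effective-access-time arithmetic. The latencies (100 ns per memory access, 1 ns per TLB lookup) and the single-level page walk on a miss are assumptions for illustration, not numbers from the lecture.

```c
#include <stdio.h>

/* Effective access time with a TLB: a minimal sketch.
 * The latencies and hit rate below are illustrative assumptions. */
int main(void) {
    double mem_ns = 100.0;   /* assumed cost of one memory access */
    double tlb_ns = 1.0;     /* assumed cost of a TLB lookup */
    double hit_rate = 0.99;  /* fraction of translations served by the TLB */

    /* Hit: TLB lookup + the actual memory access.
     * Miss: TLB lookup + one-level page-table walk (an extra memory access) + access. */
    double eat = hit_rate * (tlb_ns + mem_ns)
               + (1.0 - hit_rate) * (tlb_ns + 2.0 * mem_ns);

    printf("Without a TLB: %.1f ns per reference\n", 2.0 * mem_ns);
    printf("With a TLB:    %.1f ns per reference\n", eat);
    return 0;
}
```

With these assumptions, a 99% hit rate brings the average cost from 200 ns down to about 102 ns per reference.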
What Happens on a Context Switch?
Each process has its own address space
So, each process has its own page table
So, page-table entries are only relevant for a particular process
Thus, the TLB must be flushed on a context switch
This is why context switches are so expensive
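On x86, the flush is a side effect of loading the next process's page-table base into the CR3 register. A minimal, hypothetical sketch of that step (the struct and field names are invented; this is privileged code that only a kernel could actually run):

```c
#include <stdint.h>

/* Hypothetical per-process bookkeeping: the physical address of the
 * top-level page table (the page directory / PML4 on x86). */
struct process {
    uintptr_t page_table_phys;
};

/* Switching address spaces on x86: loading CR3 with the new process's
 * page-table base implicitly flushes the (non-global) TLB entries. */
static void switch_address_space(struct process *next) {
    __asm__ volatile("mov %0, %%cr3"
                     :                            /* no outputs */
                     : "r"(next->page_table_phys)
                     : "memory");
}
```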
Alternative to flushing: Address Space IDs
We can avoid flushing the TLB if entries are tagged with an address space ID (ASID)
When would this work well? When would this not work well?
[TLB entry format, field widths in bits: ASID (4), V (1), R (1), M (1), prot (2), page frame number (20)]
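As a concrete rendering of that entry format, here is a C bit-field sketch with the same field widths. The ordering and padding are assumptions; real hardware fixes the exact layout.

```c
#include <stdint.h>

/* Sketch of the TLB entry format from the slide: 4 + 1 + 1 + 1 + 2 + 20
 * bits = 29 bits of state, packed here into a 32-bit word. */
struct tlb_entry {
    uint32_t asid       : 4;   /* address space ID: which process owns this entry */
    uint32_t valid      : 1;   /* V: is this translation valid? */
    uint32_t referenced : 1;   /* R: has the page been accessed? */
    uint32_t modified   : 1;   /* M: has the page been written (dirty)? */
    uint32_t prot       : 2;   /* protection bits (e.g., read/write/execute) */
    uint32_t pfn        : 20;  /* page frame number */
    uint32_t            : 3;   /* unused padding to fill the 32-bit word */
};
```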
TLBs with Multiprocessors
[Diagram: TLB 1 and TLB 2 each caching page frame # entries from the same page table]
Each TLB stores a subset of page table state
Must keep that state consistent on a multiprocessor
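The usual mechanism for keeping TLBs consistent is a TLB shootdown: the CPU that modifies a page-table entry interrupts the other CPUs so they discard their stale copies. A hypothetical sketch follows; every helper it declares (write_pte, local_tlb_invalidate, send_invalidate_ipi, num_cpus, current_cpu) is a stand-in for a real kernel primitive.

```c
/* Hypothetical TLB shootdown sketch; the functions declared below are
 * stand-ins for real kernel primitives. */
extern void write_pte(unsigned long vaddr, unsigned long new_pte);
extern void local_tlb_invalidate(unsigned long vaddr);
extern void send_invalidate_ipi(int cpu, unsigned long vaddr);
extern int  num_cpus(void);
extern int  current_cpu(void);

void update_pte_and_shootdown(unsigned long vaddr, unsigned long new_pte) {
    write_pte(vaddr, new_pte);           /* 1. update the shared page table */
    local_tlb_invalidate(vaddr);         /* 2. drop this CPU's stale entry  */
    for (int cpu = 0; cpu < num_cpus(); cpu++)
        if (cpu != current_cpu())
            send_invalidate_ipi(cpu, vaddr);  /* 3. other CPUs invalidate too */
    /* A real kernel also waits for acknowledgements before reusing the frame. */
}
```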
Today’s Topics
Page Replacement Strategies
Making Paging Fast
Reducing the Overhead of Page Tables
Page Table Overhead
For a large address space, page table sizes can become enormous
Example: the IA-64 architecture
64-bit address space, 8 KB pages
Num PTEs = 2^64 / 2^13 = 2^51
Assuming 8 bytes per PTE: Num Bytes = 2^54 = 16 petabytes
And this is per-process!
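A tiny sanity check of that arithmetic, working in powers of two to avoid overflow (nothing here is IA-64 specific):

```c
#include <stdio.h>

/* Sanity-check the flat page-table size for a 64-bit address space
 * with 8 KB pages and 8-byte PTEs (the numbers from the slide). */
int main(void) {
    unsigned va_bits = 64;          /* virtual address bits          */
    unsigned page_bits = 13;        /* 8 KB pages => 13 offset bits  */
    unsigned pte_bytes_log2 = 3;    /* 8 bytes per PTE               */

    unsigned pte_count_log2   = va_bits - page_bits;               /* 51 */
    unsigned table_bytes_log2 = pte_count_log2 + pte_bytes_log2;   /* 54 */

    printf("PTEs per process: 2^%u\n", pte_count_log2);
    printf("Page table size:  2^%u bytes (= %u PiB)\n",
           table_bytes_log2, 1u << (table_bytes_log2 - 50));
    return 0;
}
```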
Optimizing for Sparse Address Spaces
Observation: very little of the address space is in use at a given time
Basic idea: only allocate page tables where we need to
And fill in new page tables lazily (on demand)
[Diagram: a mostly empty virtual address space with a few regions in use]
Implementing Sparse Address Spaces
We need a data structure to keep track of the page tables we have allocated
And this structure must be small
Otherwise, we’ve defeated our original goal
Solution: multi-level page tables
Page tables of page tables
“Any problem in CS can be solved with a layer of indirection”
Two-Level Page Tables
[Diagram: the virtual address is split into master page #, secondary page #, and offset; the master page # indexes the master page table to find a secondary page table, the secondary page # indexes that table to find a page frame number, and the page frame # plus offset forms the physical address; unused master entries are empty]
Key point: not all secondary page tables must be allocated
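A minimal software sketch of that lookup, using the slide's terminology. The 10/10/12 split of a 32-bit virtual address and the use of plain arrays are assumptions for illustration.

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of a two-level lookup for a 32-bit virtual address split as
 * 10-bit master index | 10-bit secondary index | 12-bit offset. */
#define OFFSET_BITS   12
#define INDEX_BITS    10
#define TABLE_ENTRIES (1u << INDEX_BITS)

typedef struct {
    uint32_t *secondary[TABLE_ENTRIES];  /* NULL => secondary table never allocated */
} master_table_t;

/* Returns the physical address, or 0 if the mapping doesn't exist. */
uint64_t translate(const master_table_t *master, uint32_t vaddr) {
    uint32_t master_idx    = vaddr >> (OFFSET_BITS + INDEX_BITS);
    uint32_t secondary_idx = (vaddr >> OFFSET_BITS) & (TABLE_ENTRIES - 1);
    uint32_t offset        = vaddr & ((1u << OFFSET_BITS) - 1);

    uint32_t *secondary = master->secondary[master_idx];
    if (secondary == NULL)            /* sparse: this region was never mapped */
        return 0;

    uint32_t pfn = secondary[secondary_idx];
    if (pfn == 0)                     /* treat 0 as "no frame" in this sketch */
        return 0;

    return ((uint64_t)pfn << OFFSET_BITS) | offset;
}
```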
Generalizing
Early architectures used 1-level page tables
VAX and x86 used 2-level page tables
SPARC uses 3-level page tables
Alpha uses 4-level page tables
The key thing is that the outer level must be wired down (pinned in physical memory) in order to break the recursion
Cool Paging Tricks
Basic idea: exploit the layer of indirection between virtual and physical memory
Trick #1: Shared Libraries
Q: How can we avoid 1000 copies of printf?
A: Shared libraries
Linux: /usr/lib/*.so
[Diagram: Firefox and Open Office each map libc, but all mappings point to a single copy in physical memory]
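On Linux you can watch this happen: each process maps the same libc file, and the kernel backs those read-only mappings with the same physical pages. A small sketch that prints a process's libc mappings (assumes Linux's /proc and a C library whose path contains "libc"):

```c
#include <stdio.h>
#include <string.h>

/* Print this process's memory mappings that belong to libc. */
int main(void) {
    FILE *maps = fopen("/proc/self/maps", "r");
    if (maps == NULL) {
        perror("fopen /proc/self/maps");
        return 1;
    }
    char line[512];
    while (fgets(line, sizeof line, maps) != NULL) {
        if (strstr(line, "libc") != NULL)
            fputs(line, stdout);   /* one shared-library mapping per line */
    }
    fclose(maps);
    return 0;
}
```

Run it inside two different programs and both will show mappings of the same libc shared object.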
Shared Memory Segments
[Diagram: virtual address space 1 and virtual address space 2 both mapping a region of the same physical memory]
Trick #2: Copy-on-write
Copy-on-write allows for a fast “copy” by using shared pages
Especially useful for “fork” operations
Implementation: pages are shared “read-only”
The OS intercepts write operations and makes a real copy
[PTE fields: V, R, M, prot, page frame number]
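A hypothetical sketch of the write-fault path that makes the real copy. Every helper declared below is a stand-in for a kernel primitive, and a real implementation would also handle reference counts, out-of-memory, TLB invalidation on other CPUs, and signaling genuine protection violations.

```c
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Stand-ins for kernel primitives; declarations only, for illustration. */
extern void    *frame_virtual_address(uint32_t pfn); /* map a frame so we can copy it */
extern uint32_t allocate_frame(void);                /* get a fresh physical frame    */
extern uint32_t pte_get_pfn(uint32_t pte);
extern int      pte_is_cow(uint32_t pte);
extern void     set_pte(uintptr_t vaddr, uint32_t pfn, int writable);
extern void     tlb_invalidate(uintptr_t vaddr);

/* Called on a protection fault for a write to a read-only page. */
void handle_write_fault(uintptr_t fault_vaddr, uint32_t pte) {
    if (!pte_is_cow(pte))
        return;  /* a genuine protection violation; a real kernel would signal it */

    uint32_t old_pfn = pte_get_pfn(pte);
    uint32_t new_pfn = allocate_frame();

    /* Make the real copy, then point the faulting process at it, writable. */
    memcpy(frame_virtual_address(new_pfn),
           frame_virtual_address(old_pfn), PAGE_SIZE);
    set_pte(fault_vaddr, new_pfn, /*writable=*/1);
    tlb_invalidate(fault_vaddr);
}
```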
Trick #3: Memory-mapped Files
Normally, files are accessed with system calls: open, read, write, close
Memory mapping allows a program to access a file with load/store operations
[Diagram: Foo.txt mapped into a region of the virtual address space]
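A small POSIX example of the idea, mapping the Foo.txt from the diagram and reading it with ordinary loads (minimal error handling):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map Foo.txt into the address space and read it with plain loads,
 * no read() calls; the OS pages the file in lazily. */
int main(void) {
    int fd = open("Foo.txt", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    for (off_t i = 0; i < st.st_size; i++)  /* ordinary memory accesses */
        putchar(data[i]);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}
```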