MAMAS – Computer Architecture
Virtual Memory
Dr. Lihu Rappoport, 12/2004
Memory System Problems
Different programs have different memory requirements
– How to manage program placement?
Different machines have different amounts of memory
– How to run the same program on many different machines?
At any given time each machine runs a different set of programs
– How to fit the program mix into memory? Reclaiming unused memory? Moving code around?
The amount of memory consumed by each program is dynamic (changes over time)
– How to effect changes in memory allocation: add or subtract space?
Program bugs can cause a program to generate reads and writes outside the program address space
– How to protect one program from another?
Virtual Memory
Provides the illusion of very large memory
– the sum of the memory of many jobs can be greater than physical memory
– the address space of each job can be larger than physical memory
Allows the available (fast and expensive) physical memory to be well utilized
Simplifies memory management: code and data movement, and protection
Exploits the memory hierarchy to keep average access time low
Involves at least two storage levels: main memory and secondary storage
Virtual Address: an address used by the programmer
Virtual Address Space: the collection of such addresses
Memory Address: an address in physical memory, also known as a "physical address" or "real address"
Virtual Memory: Basic Idea
Divide memory (virtual and physical) into fixed-size blocks
– Pages in virtual space, frames in physical space
– Page size = frame size
– Page size is a power of 2: page size = 2^k
All pages in the virtual address space are contiguous
Pages can be mapped into physical frames in any order
Some of the pages are in main memory (DRAM), some of the pages are on disk
All programs are written using the virtual memory address space
The hardware does on-the-fly translation between the virtual and physical address spaces
– A Page Table is used to translate between virtual and physical addresses
Virtual Memory
Main memory can act as a cache for the secondary storage (disk)
Advantages:
– illusion of having more physical memory
– program relocation
– protection
Address translation maps virtual addresses either to physical addresses (for pages in memory) or to disk addresses (for pages on disk)
Virtual to Physical Address Translation
Page size: 2^12 bytes = 4 KB
The virtual address is split into a virtual page number (bits 31:12) and a page offset (bits 11:0)
The page table maps the virtual page number to a physical page number (bits 29:12 of the physical address); the page offset is passed through unchanged
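The split described above can be sketched in a few lines of Python; the page table here is a hypothetical dictionary standing in for the hardware structure, and the mapping 0x12345 → 0x00077 is an invented example:

```python
PAGE_SHIFT = 12                      # page size = 2^12 = 4 KB
PAGE_MASK = (1 << PAGE_SHIFT) - 1    # 0xFFF

# Hypothetical page table: virtual page number -> physical page number
page_table = {0x12345: 0x00077}

def translate(vaddr):
    vpn = vaddr >> PAGE_SHIFT        # bits 31:12 - virtual page number
    offset = vaddr & PAGE_MASK       # bits 11:0  - passed through unchanged
    ppn = page_table[vpn]            # look up the mapping
    return (ppn << PAGE_SHIFT) | offset

print(hex(translate(0x12345ABC)))    # -> 0x77abc
```

Note that only the upper bits are translated; the low 12 bits of the virtual and physical addresses are identical.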
The Page Table
The virtual page number (bits 31:12 of the virtual address) indexes the page table, located via the page table base register
Each page table entry holds a physical frame number plus control bits: V (valid bit), D (dirty bit), and AC (access control)
The physical address is the physical frame number (bits 29:12) concatenated with the page offset (bits 11:0)
Address Mapping Algorithm
If V = 1, the page is in main memory at the frame address stored in the table
– fetch the data
Else (page fault): the page must be fetched from disk
– causes a trap, usually accompanied by a context switch: the current process is suspended while the page is fetched from disk
Access control (R = read-only, R/W = read/write, X = execute-only)
– If the kind of access is not compatible with the specified access rights, a protection violation fault occurs
– causes a trap to a hardware or software fault handler
A missing item is fetched from secondary memory only on the occurrence of a fault: demand-load policy
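A minimal sketch of this algorithm, with the valid bit and access-control checks modeled as exceptions (the PTE layout and sample entries are invented for illustration):

```python
class PageFault(Exception): pass
class ProtectionFault(Exception): pass

# Hypothetical PTEs: vpn -> (valid, access rights, frame number)
page_table = {5: (1, "R/W", 42), 6: (0, "R/W", None)}

def access(vpn, kind):
    valid, rights, frame = page_table[vpn]
    if not valid:
        raise PageFault(vpn)          # trap: OS must fetch the page from disk
    if kind == "write" and rights != "R/W":
        raise ProtectionFault(vpn)    # access incompatible with access rights
    return frame                      # V = 1: fetch data from this frame

assert access(5, "write") == 42
```

In real hardware both checks happen in parallel with the translation; the traps transfer control to the OS fault handlers.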
Page Tables
Each page table entry holds a valid bit and an address: a physical page address when valid = 1 (page in physical memory), or a disk address when valid = 0 (page on disk)
Page Replacement Algorithm
Not Recently Used (NRU)
– Associated with each page is a reference flag: ref flag = 1 if the page has been referenced in the recent past
– If replacement is needed, choose any page frame whose reference bit is 0: a page that has not been referenced in the recent past
Clock implementation of NRU:
– Keep a last-replaced pointer (lrp)
– If replacement is to take place, advance lrp to the next page table entry (mod table size) until one with a 0 reference bit is found; this is the target for replacement
– As a side effect, all examined PTEs have their reference bits set to zero
Possible optimization: search for a page that is both not recently referenced AND not dirty
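The clock implementation above can be sketched as a short function over an array of reference bits (the four-entry table is an invented example):

```python
def clock_replace(ref_bits, lrp):
    """Advance lrp (mod table size) until an entry with ref bit 0 is found.
    Side effect: clears the ref bit of every examined entry."""
    n = len(ref_bits)
    while ref_bits[lrp] == 1:
        ref_bits[lrp] = 0            # examined: clear its reference bit
        lrp = (lrp + 1) % n
    return lrp                       # victim frame; next search resumes here

bits = [1, 1, 0, 1]
victim = clock_replace(bits, 0)
print(victim, bits)                  # -> 2 [0, 0, 0, 1]
```

Clearing the bits as the hand sweeps past is what makes the scheme approximate "not referenced recently": a page survives only if it is re-referenced before the hand comes around again.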
Page Faults
Page fault: the data is not in memory; retrieve it from disk
– The CPU must detect the situation
– The CPU cannot remedy the situation (it has no knowledge of the disk)
The CPU must trap to the operating system so that it can remedy the situation
– Pick a page to discard (possibly writing it to disk)
– Load the page in from disk
– Update the page table
– Resume the program so the hardware will retry and succeed!
A page fault incurs a huge miss penalty
– Pages should be fairly large (e.g., 4 KB)
– The faults can be handled in software instead of hardware
– A page fault causes a context switch
– Using write-through is too expensive, so write-back is used
Optimal Page Size
Minimize wasted storage
– A small page minimizes internal fragmentation
– A small page increases the size of the page table
Minimize transfer time
– Large pages (multiple disk sectors) amortize access cost
– Sometimes transfer unnecessary info
– Sometimes prefetch useful data
– Sometimes discard useless data early
General trend toward larger pages because of
– Big, cheap RAM
– Increasing memory/disk performance gap
– Larger address spaces
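The storage tradeoff can be made concrete with a small back-of-the-envelope calculation; the 32-bit address space and 4-byte PTE are assumptions, and the half-page estimate is the usual expected waste in a program's last page:

```python
def overheads(page_size, vaddr_space=2**32, pte_bytes=4):
    # Flat page table size shrinks as pages grow...
    table_bytes = (vaddr_space // page_size) * pte_bytes
    # ...while expected internal fragmentation (half of the last page) grows
    frag_bytes = page_size // 2
    return table_bytes, frag_bytes

for size in (1024, 4096, 65536):
    print(size, overheads(size))
```

Going from 1 KB to 64 KB pages shrinks the flat page table from 16 MB to 256 KB while the expected per-program waste grows only from 0.5 KB to 32 KB, which is one way to see the trend toward larger pages.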
Translation Lookaside Buffer (TLB)
The page table resides in memory, so each translation requires a memory access
A way to speed up translation is to use a special cache of recently used page table entries: the TLB
TLBs typically have 128 to 256 entries
TLBs are usually highly associative
TLB access time is comparable to L1 cache access time
On each access: look up the TLB; on a TLB hit, use the cached translation; on a miss, access the page table
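The hit/miss flow can be sketched with two dictionaries; a real TLB is a small associative hardware structure, and the mappings below are invented:

```python
# Hypothetical tiny TLB and page table, both mapping VPN -> PPN
tlb = {0x10: 0x7A}
page_table = {0x10: 0x7A, 0x20: 0x3B}

def lookup(vpn):
    if vpn in tlb:                   # TLB hit: no extra memory access
        return tlb[vpn], "hit"
    ppn = page_table[vpn]            # TLB miss: access the page table in memory
    tlb[vpn] = ppn                   # cache the translation for next time
    return ppn, "miss"

assert lookup(0x10) == (0x7A, "hit")
assert lookup(0x20) == (0x3B, "miss")
assert lookup(0x20) == (0x3B, "hit")   # now cached
```

Because most accesses hit in the TLB, the extra memory reference for translation is paid only rarely.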
Making Address Translation Fast
The TLB (Translation Lookaside Buffer) is a cache for recent address translations: each entry holds a valid bit, a tag (the virtual page number), and the corresponding physical page number from the page table
Virtual Memory and Cache
Access flow: access the TLB; on a TLB miss, access the page table; with the resulting physical address, access the cache; on a cache miss, access memory
TLB access is serial with cache access
Overlapped TLB & Cache Access
Virtual memory views a physical address as a physical page number and a page offset; the cache views the same address as a tag, a set number (#Set), and a displacement
In this example the #Set is not contained within the page offset
– The #Set is not known until the physical page number is known
– The cache can be accessed only after address translation is done
Overlapped TLB & Cache Access (cont.)
In this example the #Set is contained within the page offset
– The #Set is known immediately
– The cache can be accessed in parallel with address translation
– First the tags from the appropriate set are brought
– Tag compare takes place only after the physical page number is known (after address translation is done)
Overlapped TLB & Cache Access (cont.)
First stage
– The virtual page number goes to the TLB for translation
– The page offset goes to the cache to get all tags from the appropriate set
Second stage
– The physical page number, obtained from the TLB, goes to the cache for tag compare
Limitation: cache size ≤ (page size × associativity)
How can we overcome this limitation?
– Fetch 2 sets and mux after translation
– Increase associativity
– …
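The limitation follows from the fact that the set index plus line-offset bits span cache_size / associativity bytes, and overlap works only if those bits all lie inside the untranslated page offset. A one-line check (4 KB page size assumed):

```python
def can_overlap(cache_bytes, ways, page_bytes=4096):
    # Index + line-offset bits address cache_bytes / ways bytes per way;
    # they fit inside the untranslated page offset iff
    # cache size <= page size * associativity
    return cache_bytes // ways <= page_bytes

assert can_overlap(16 * 1024, 4)        # 16 KB 4-way: index inside offset
assert not can_overlap(32 * 1024, 2)    # 32 KB 2-way: index needs translated bits
assert can_overlap(32 * 1024, 8)        # raising associativity to 8 ways fixes it
```

This is why the listed workarounds either shrink the indexed portion (more ways) or speculate on the unknown index bits (fetch multiple sets).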
Overlapped TLB & Cache Access: The K6 Solution
In x86 the page size is 4 KB, so bits [11:0] are not translated
The K6 L1 caches are 32 KB, 2-way set-associative, with 32-byte lines
The index is comprised of bits [13:6] of the virtual address
Since bits [13:12] are translated, the line may reside in one of 4 sets (the physical translation of these bits may be one of 00, 01, 10 or 11)
The tag is comprised of bits [31:12] of the physical address (not just [31:14])
The tags from all 4 candidate sets (8 tags) are compared with bits [31:12] of the physical address simultaneously
If one of the 2 tags of the indexed set matches bits [31:12] of the physical address, we have a cache hit
Notice: bits [13:12] of the tag may differ from bits [13:12] of the indexed set
If the line is found in one of the 4 sets, but not in the indexed set, the line is invalidated (possibly written back if dirty), and then re-fetched from the L2 cache or from memory and put into the indexed set
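A small sketch of why 4 sets must be probed: with the index in bits [13:6] and only bits [11:6] untranslated, the two unknown translated bits [13:12] generate 4 candidate set indices for any virtual address (the sample address is invented):

```python
def candidate_sets(vaddr):
    # Set index = bits [13:6]; bits [11:6] come from the page offset and are
    # fixed, while bits [13:12] depend on translation -> 4 possibilities
    base = (vaddr >> 6) & 0x3F       # bits [11:6], known before translation
    return [(hi << 6) | base for hi in range(4)]

print(candidate_sets(0x2FC0))        # -> [63, 127, 191, 255]
```

Only after the TLB supplies the physical bits [13:12] is the "correct" indexed set known, which is why a hit in one of the other three sets triggers the invalidate-and-refetch dance described above.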
Virtually-Addressed Cache
The cache uses virtual addresses (tags are virtual)
Address translation is required only on a cache miss
– The TLB is not in the path to a cache hit
Aliasing: two different virtual addresses map to the same physical address
– Two different cache entries may hold data for the same physical address!
– All cache entries with the same physical address must be updated together
The cache must be flushed at a task switch
Virtually-Addressed Cache (cont.)
The cache must be flushed at a task switch
– Solution: include a process ID (PID) in the tag
New problem: memory sharing among processes
– Solution: permit multiple virtual pages to refer to the same memory
Problem: incoherence if the aliases map to different cache entries
– Solution: require sufficiently many common virtual LSBs
– With a direct-mapped cache, this guarantees that they all map to the same cache entry
More on Page Swap-Out
DMA copies the page to the disk controller:
– Reads each byte: executes a snoop-invalidate for each byte in the cache (both L1 and L2)
– If the byte resides in the cache: if it is modified, its line is read from the cache into memory, and the line is invalidated
– Writes the byte to the disk controller
This means that when a page is swapped out of memory:
– All data in the caches belonging to that page is invalidated
– The page on disk is up to date
The TLB is snooped: if there is a TLB hit for the swapped-out page, its TLB entry is invalidated
In the page table:
– The valid bit in the PTE of the swapped-out page is set to 0
– All the rest of the bits in the PTE may be used by the operating system for keeping the location of the page on disk
Large Address Spaces
For a large virtual address space, we may get a large page table. For example:
– For a 2 GB virtual address space and a page size of 4 KB, we get 2^31 / 2^12 = 2^19 entries
– Assuming each entry is 4 bytes wide, the page table alone will occupy 2 MB in main memory
Solution: use two-level page tables
– The master page table resides in main memory
– Secondary page tables reside in the virtual address space; each is 4 KB and holds 1K PTEs of 4 bytes each
Virtual address format: P1 index (10 bits) | P2 index (10 bits) | page offset (12 bits)
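Extracting the three fields of the two-level virtual address format is simple bit manipulation; the sample addresses below are invented:

```python
def split(vaddr):
    p1 = vaddr >> 22                 # 10-bit index into the master page table
    p2 = (vaddr >> 12) & 0x3FF       # 10-bit index into a secondary page table
    offset = vaddr & 0xFFF           # 12-bit page offset
    return p1, p2, offset

assert split(0xFFFFFFFF) == (0x3FF, 0x3FF, 0xFFF)
assert split(0x00403004) == (1, 3, 4)
```

Each 10-bit index selects one of 1K entries, matching the 4 KB / 1K-PTE secondary tables described above.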
VM in VAX: Address Format
Page size: 2^9 bytes = 512 bytes
The virtual address holds a space selector in bits 31:30, the virtual page number in bits 29:9, and the page offset in bits 8:0
– 00: P0 process space (code and data)
– 01: P1 process space (stack)
– 10: S0 system space
– 11: S1
The physical address holds the physical frame number in bits 29:9 and the page offset in bits 8:0
VM in VAX: Virtual Address Spaces
Each process (Process 0, 1, 2, 3, …) has its own address space from 0 to 7FFFFFFF:
– P0: process code & global vars, grows upward
– P1: process stack & local vars, grows downward
The system space starting at 80000000 is shared:
– S0: system space, grows upward, generally static
Page Table Entry (PTE)
Fields: V, PROT, M, Z, OWN, and the physical frame number (bits 20:0)
– V: valid bit, = 1 if the page is mapped to main memory; otherwise page fault: the page is in the disk swap area, and the address indicates the page location on disk
– PROT: 4 protection bits
– M: modified bit
– Z: indicates whether the page was cleaned (zeroed)
– OWN: 3 ownership bits
System Space Address Translation
A system-space virtual address (bits 31:30 = 10) holds the VPN in bits 29:9 and the page offset in bits 8:0
PTE physical address = SBR (the system page table base physical address) + 4 × VPN
Get the PTE; the physical address is the PFN (from the PTE) concatenated with the page offset
System Space Address Translation (cont.)
The system page table resides at physical address SBR; it is indexed by VPN × 4 to obtain the PTE, whose PFN replaces the VPN to form the physical address (the offset is unchanged)
P0 Space Address Translation
A P0-space virtual address (bits 31:30 = 00) holds the VPN in bits 29:9 and the page offset in bits 8:0
PTE S0-space virtual address = P0BR (the P0 page table base virtual address) + 4 × VPN
Get the PTE using the system-space translation algorithm; the physical address is the PFN (from the PTE) concatenated with the page offset
P0 Space Address Translation (cont.)
The P0 page table itself resides in S0 virtual space: P0BR + VPN × 4 yields an S0 virtual address (VPN', Offset'), which is translated through the system page table at SBR to obtain the physical address of the PTE; the PTE's PFN is then combined with the original page offset to form the final physical address
P0 Space Address Translation Using TLB
Access the process TLB with the virtual address (00, VPN, offset)
– On a process TLB hit: get the PTE of the requested page from the process TLB, calculate the physical address, and access memory
– On a process TLB miss: calculate the PTE virtual address (in S0): P0BR + 4 × VPN, and access the system TLB
– On a system TLB hit: get the PTE from the system TLB, then get the PTE of the requested page from the process page table
– On a system TLB miss: access the system page table in memory at SBR + 4 × VPN of the PTE's S0 address to get that PTE, then continue as above
Paging in x86
2-level hierarchical mapping: page directory and page tables
– All pages and page tables are 4K
The linear address is divided into:
– Dir: 10 bits
– Table: 10 bits
– Offset: 12 bits
Dir/Table serves as an index into a page table; Offset serves as a pointer into a data page
A page entry points to a page table or a page
CR3 points to the page directory
Performance issues: TLB!
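The full two-level walk can be sketched as follows; the directory and table contents here are invented stand-ins for the in-memory structures CR3 points at:

```python
# Hypothetical 2-level structures: dir index -> page table -> frame number
page_directory = {1: "pt0"}
page_tables = {"pt0": {3: 0x0007}}

def walk(linear):
    d = linear >> 22                           # DIR: 10 bits
    t = (linear >> 12) & 0x3FF                 # TABLE: 10 bits
    off = linear & 0xFFF                       # OFFSET: 12 bits
    frame = page_tables[page_directory[d]][t]  # two memory references per walk
    return (frame << 12) | off

assert walk(0x00403004) == 0x7004
```

The two dictionary lookups model the two extra memory references a walk costs, which is exactly why the TLB matters for performance.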
x86 Page Translation Mechanism
CR3 points to the current page directory (may be changed per process)
Usually, a page directory entry (covers 4 MB) will point to a page table that covers data of the same type/usage
Different physical pages can be allocated for the same linear address (e.g., 2 copies of the same code)
Sharing can alias pages from different processes to the same physical page (e.g., the OS)
x86 Page Entry Format
20-bit pointer to a 4K-aligned address
12 bits of flags:
– Virtual memory: Present; Accessed, Dirty
– Protection: Writable (R/W), User (U/S) – 2 levels/types only!
– Caching: Page Write-Through (PWT), Page Cache Disabled (PCD)
– 3 bits available for OS usage
Page directory entries additionally have a Page Size flag (0: 4 KB) instead of Dirty; bits reserved by Intel for future use should be zero
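Decoding the flag bits named above is straightforward masking; the bit positions follow the page-table-entry layout on the slide (P in bit 0 up through D in bit 6), and the sample entry value is invented:

```python
def decode(pte):
    # Low 12 bits of a page table entry hold the flags; bits 31:12 the frame
    return {
        "present":  bool(pte & 1),          # P, bit 0
        "writable": bool(pte & (1 << 1)),   # R/W, bit 1
        "user":     bool(pte & (1 << 2)),   # U/S, bit 2
        "pwt":      bool(pte & (1 << 3)),   # write-through, bit 3
        "pcd":      bool(pte & (1 << 4)),   # cache disable, bit 4
        "accessed": bool(pte & (1 << 5)),   # A, bit 5
        "dirty":    bool(pte & (1 << 6)),   # D, bit 6
        "frame":    pte >> 12,              # 20-bit frame address
    }

e = decode(0x00077063)                      # frame 0x77 with P, R/W, A, D set
print(e["present"], e["dirty"], hex(e["frame"]))
```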
x86 Paging – Virtual Memory
A page can be
– Not yet loaded
– Loaded
– On disk
A loaded page can be
– Dirty
– Clean
When a page is not loaded (P bit clear), a page fault occurs
– It may require evicting a loaded page to make room for the new one
– The OS prioritizes eviction by LRU and the dirty/clean/avail bits
– A dirty page must be written to disk; a clean page need not be
– The new page is either loaded from disk or "initialized"
– The CPU sets the page "accessed" flag when the page is accessed, and the "dirty" flag when it is written
Backup
Inverted Page Tables
An inverted page table has one entry per physical frame; a hash of the virtual page number selects a chain of entries, and each entry's virtual page field is compared against the requested virtual page to find the matching frame
Example: the IBM System 38 (AS/400) implements 64-bit addresses; 48 bits are translated; the start of an object contains a 12-bit tag
=> TLBs or virtually-addressed caches are critical
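A minimal sketch of the hash-and-chain lookup, with an invented 8-frame table and a simple modulo hash standing in for a real hash function:

```python
# Hypothetical inverted page table: one entry per physical frame,
# chained through a hash on the virtual page number
NFRAMES = 8
table = [None] * NFRAMES             # frame -> (vpn, next frame in chain)
buckets = {}                         # hash(vpn) -> head frame of chain

def insert(vpn, frame):
    h = vpn % NFRAMES
    table[frame] = (vpn, buckets.get(h))
    buckets[h] = frame

def lookup(vpn):
    f = buckets.get(vpn % NFRAMES)
    while f is not None:             # walk the collision chain
        entry_vpn, nxt = table[f]
        if entry_vpn == vpn:
            return f                 # the entry's index IS the frame number
        f = nxt
    return None                      # not mapped -> page fault

insert(3, 0); insert(11, 1)          # VPNs 3 and 11 collide (both hash to 3)
assert lookup(11) == 1 and lookup(3) == 0 and lookup(19) is None
```

The table size scales with physical memory rather than with the (huge) virtual address space, which is the appeal for 64-bit machines; the cost of the chain walk is why fast TLBs or virtual caches are critical.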
Hardware / Software Boundary
What aspects of the virtual → physical translation are determined in hardware?
– TLB format
– Type of page table
– Page table entry format
– Disk placement
– Paging policy
Why Virtual Memory?
Generality
– ability to run programs larger than the size of physical memory
Storage management
– allocation/deallocation of variable-sized blocks is costly and leads to (external) fragmentation
Protection
– regions of the address space can be read-only, execute-only, …
Flexibility
– portions of a program can be placed anywhere, without relocation
Storage efficiency
– retain only the most important portions of the program in memory
Concurrent I/O
– execute other processes while loading/dumping a page
Expandability
– can leave room in the virtual address space for objects to grow
Performance