Paging Review: Virtual address spaces are broken into pages


Paging Review: Virtual address spaces are broken into pages. Physical memory is divided into page frames, and page size = page frame size. Page tables store translations from VPN to PFN, and hardware support via TLBs makes paging fast by caching popular translations. But… page tables are still too big.

Paging: Linear Tables. One page table for every process in the system. Example: a 32-bit address space with 4KB pages and 4-byte page-table entries. [Figure: Page Table of Process A, one 4-byte entry per 4KB page of the address space.] Page table size = 2^20 entries * 4 bytes = 4 MB per process. With 100+ processes running at a time, that is hundreds of MBs of memory spent on page tables alone.
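
A quick sanity check of that arithmetic, as a minimal C sketch (the parameter values simply mirror the example above):

#include <stdio.h>

int main(void) {
    /* Assumed example parameters: 32-bit virtual addresses, 4 KB pages (12 offset bits), 4-byte PTEs. */
    unsigned long va_bits     = 32;
    unsigned long offset_bits = 12;
    unsigned long pte_size    = 4;

    unsigned long num_entries = 1UL << (va_bits - offset_bits);   /* 2^20 entries */
    unsigned long table_bytes = num_entries * pte_size;           /* 4 MB per process */

    printf("entries = %lu, page table = %lu MB per process\n",
           num_entries, table_bytes >> 20);
    return 0;
}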

Option 1: Bigger pages. Example: a 32-bit address space with 16KB pages and 4-byte page-table entries. [Figure: Page Table of Process A, one 4-byte entry per 16KB page.] Page table size = 2^18 entries * 4 bytes = 1 MB per process. The downside: big pages lead to internal fragmentation, since small allocations still consume whole pages.

Problem. A single page table has an entry for every page of the process's address space. [Figure: A Page Table For a 16KB Address Space (fields: PFN, valid, prot, present, dirty) next to a 16KB Address Space with 1KB Pages containing code, heap, and stack regions mapped into physical memory.]

Problem. Most of the page table is unused, full of invalid entries. [Figure: the same page table for the 16KB address space with 1KB pages; only the code, heap, and stack pages have valid entries (mapping to PFNs such as 10, 15, 3, and 23), and every other entry is invalid.]

Hybrid Approach: Paging and Segments. One page table per segment (code, heap, stack). The base register holds the physical address of the page table for that segment, rather than the base of the segment itself. The bounds register indicates the end of the page table (how many valid pages it has). The page table for a segment grows as the segment is allocated more memory by the OS.

Simple Example of the Hybrid Approach. Each process has three page tables associated with it; the base register for each segment contains the physical address of a linear page table for that segment. 32-bit virtual address space with 4KB pages; the top two bits select the segment: Seg (2 bits) | VPN | Offset. Seg value 00: unused segment, 01: code, 10: heap, 11: stack.

TLB Miss with the Hybrid Approach. Hardware uses the segment bits (SN) to determine which base and bounds pair to use, then uses the VPN to index into that segment's page table and form the physical address of the PTE:
01: SN = (VirtualAddress & SEG_MASK) >> SN_SHIFT
02: VPN = (VirtualAddress & VPN_MASK) >> VPN_SHIFT
03: AddressOfPTE = Base[SN] + (VPN * sizeof(PTE))
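
The bounds check mentioned earlier is not shown above. A minimal, self-contained C sketch of how it might fit in (the layout, the Base/Bounds values, and the fault handling are illustrative assumptions, not a specific hardware design):

#include <stdint.h>
#include <stdio.h>

/* Assumed layout: 32-bit VA, top 2 bits select the segment, 4 KB pages. */
#define SEG_SHIFT 30
#define SEG_MASK  0xC0000000u
#define VPN_SHIFT 12
#define VPN_MASK  0x3FFFF000u
#define PTE_SIZE  4u

/* Illustrative per-segment registers: physical address of each segment's page table
   and the number of valid PTEs in it (the "end" of that page table). */
static uint32_t Base[4]   = { 0, 0x100000, 0x200000, 0x300000 };
static uint32_t Bounds[4] = { 0, 16, 16, 16 };

/* Returns the physical address of the PTE, or 0 after reporting a segmentation fault. */
uint32_t address_of_pte(uint32_t va) {
    uint32_t sn  = (va & SEG_MASK) >> SEG_SHIFT;
    uint32_t vpn = (va & VPN_MASK) >> VPN_SHIFT;
    if (vpn >= Bounds[sn]) {                 /* past the end of this segment's page table */
        printf("SEGMENTATION_FAULT\n");
        return 0;
    }
    return Base[sn] + vpn * PTE_SIZE;
}

int main(void) {
    /* Code segment (01), VPN 3: lands 3 PTEs past the code page table's base. */
    printf("PTE address: 0x%x\n", (unsigned) address_of_pte(0x40003123u));
    return 0;
}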

Problems?

Problems? A large but sparsely used heap can still waste plenty of page-table space. And external fragmentation returns: page tables are now of arbitrary (per-segment) size, so we are back to finding room for variable-sized chunks of memory.

Multi-level Page Tables. Turn the linear page table into something like a tree. Chop up the page table into page-sized units; if an entire page of page-table entries is invalid, don't allocate that page of the page table at all. To track whether a page of the page table is valid (and, if valid, where it is in memory), use a new structure called the page directory.

Multi-level Page Tables: Page directory. [Figure: Linear (Left) And Multi-Level (Right) Page Tables. Left: the PTBR points to a linear page table in PFN 201 with valid/prot/PFN fields, most of its entries invalid. Right: the PTBR points to a page directory in PFN 200; its valid entries point to the allocated pages of the page table (PFN 201 and PFN 203), while pages 1 and 2 of the page table are not allocated at all.]

Multi-level Page Tables: Page directory entries. The page directory contains one page directory entry (PDE) per page of the page table; each PDE has (at minimum) a valid bit and a page frame number (PFN).
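
As a rough illustration, the entries might be laid out like the C bitfields below (a sketch only; the field names and widths are assumptions for a 32-bit entry, not any particular architecture):

#include <stdint.h>
#include <stdio.h>

/* Illustrative page-table entry: permission/status bits plus the physical frame number. */
typedef struct {
    uint32_t valid   : 1;
    uint32_t prot    : 3;   /* e.g. read / write / execute */
    uint32_t present : 1;   /* 1 = in physical memory, 0 = swapped out to disk */
    uint32_t dirty   : 1;
    uint32_t pfn     : 20;
    uint32_t unused  : 6;
} pte_t;

/* Illustrative page-directory entry: just a valid bit and the PFN of one page of the page table. */
typedef struct {
    uint32_t valid  : 1;
    uint32_t pfn    : 20;
    uint32_t unused : 11;
} pde_t;

int main(void) {
    printf("sizeof(pte_t) = %zu, sizeof(pde_t) = %zu\n", sizeof(pte_t), sizeof(pde_t));
    return 0;
}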

Multi-level Page Tables: Advantages & Disadvantages. Advantages: page-table space is allocated in proportion to the amount of address space actually in use; each piece of the page table fits neatly in a page, so the OS can simply grab the next free page when it needs to allocate or grow a page table; no external fragmentation. Disadvantages: the multi-level table is a small example of a time-space trade-off, since a TLB miss now requires multiple memory accesses to walk the levels; and added complexity.
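
To put a rough number on the time side of that trade-off, here is a hedged back-of-the-envelope sketch in C (the hit rate and latency are invented purely for illustration):

#include <stdio.h>

int main(void) {
    double mem_ns   = 100.0;  /* assumed latency of one memory access */
    double hit_rate = 0.99;   /* assumed TLB hit rate */
    int    levels   = 2;      /* page-table levels walked on a TLB miss */

    /* On a hit: one memory access for the data itself.
       On a miss: one access per page-table level, then the data access. */
    double avg = hit_rate * mem_ns + (1.0 - hit_rate) * (levels + 1) * mem_ns;

    printf("average access time: %.1f ns\n", avg);
    return 0;
}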

Multi-level Page Table: Indirection. The multi-level structure adds a level of indirection through the page directory, so page-table pages can be placed wherever we like in physical memory. A linear page table, by contrast, had to live in contiguous physical memory: the PTBR stored the base of the page table and the VPN indexed directly into it.

A Detailed Multi-Level Example. A 16-KB address space with 64-byte pages: code near the bottom, then free space, the heap, more free space, and the stack at the top. Parameters: address space 16 KB; page size 64 bytes; virtual address 14 bits; VPN 8 bits; offset 6 bits; 2^8 = 256 page-table entries. Virtual address layout: bits 13-6 are the VPN, bits 5-0 are the offset.

A Detailed Multi-Level Example: Page Directory Index. The page directory needs one entry per page of the page table: 64-byte page / 4-byte PTE = 16 PTEs per page, and 256 total PTEs / 16 = 16 entries in the page directory, so the top 4 bits of the VPN form the page directory index. If a page-directory entry is invalid, raise an exception. 14-bit virtual address layout: Page Directory Index (top 4 bits of the VPN) | rest of the VPN | Offset (bits 5-0).

A Detailed Multi-Level Example: Page Table Index. If the PDE is valid, use the page-table index (the remaining 4 bits of the VPN) to index into the page of the page table that the PDE points to and fetch the PTE. 14-bit virtual address layout: Page Directory Index (bits 13-10) | Page Table Index (bits 9-6) | Offset (bits 5-0).
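
Putting the two indices together, a minimal C sketch of the lookup for this 14-bit example (the constants follow from the layout above; the page-directory location and the PFN returned by the PDE are made-up stand-ins for a real memory read):

#include <stdint.h>
#include <stdio.h>

/* 14-bit virtual address: [ PD index: 4 bits | PT index: 4 bits | offset: 6 bits ] */
#define OFFSET_BITS 6
#define PT_IDX_BITS 4
#define PAGE_SIZE   64u
#define PTE_SIZE    4u

int main(void) {
    uint32_t va = 0x3F80;                         /* an example 14-bit address */

    uint32_t offset   = va & (PAGE_SIZE - 1);
    uint32_t vpn      = va >> OFFSET_BITS;
    uint32_t pt_index = vpn & ((1u << PT_IDX_BITS) - 1);
    uint32_t pd_index = vpn >> PT_IDX_BITS;

    /* Assume the page directory sits at physical address 0x400 and the PDE we hit
       says the relevant page of the page table lives in frame 101 (both invented). */
    uint32_t pdbr     = 0x400;
    uint32_t pde_addr = pdbr + pd_index * PTE_SIZE;
    uint32_t pde_pfn  = 101;
    uint32_t pte_addr = pde_pfn * PAGE_SIZE + pt_index * PTE_SIZE;

    printf("PD index %u, PT index %u, offset %u\n",
           (unsigned) pd_index, (unsigned) pt_index, (unsigned) offset);
    printf("PDE address 0x%x, PTE address 0x%x\n",
           (unsigned) pde_addr, (unsigned) pte_addr);
    return 0;
}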

More than Two Levels. In some cases, a deeper tree is needed. Example parameters: virtual address 30 bits; page size 512 bytes; VPN 21 bits; offset 9 bits.

More than Two Levels: Page Table Index. With 512-byte pages and 4-byte PTEs, each page of the page table holds 128 PTEs, so the page table index uses 7 bits of the VPN. That leaves 14 bits for the page directory index: the page directory now has 2^14 entries, requiring 64 KB = 128 pages of storage, which defeats the goal of keeping every piece of the table page-sized.
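
A quick check of that arithmetic (a hedged sketch; the constants simply mirror the example parameters above):

#include <stdio.h>

int main(void) {
    unsigned long page_size = 512, pte_size = 4, vpn_bits = 21;

    unsigned long ptes_per_page = page_size / pte_size;              /* 128 */
    unsigned long pt_index_bits = 7;                                 /* log2(128) */
    unsigned long pd_entries    = 1UL << (vpn_bits - pt_index_bits); /* 2^14 */
    unsigned long pd_bytes      = pd_entries * pte_size;             /* 64 KB */
    unsigned long pd_pages      = pd_bytes / page_size;              /* 128 pages */

    printf("%lu PTEs per page, %lu PD entries, page directory = %lu KB = %lu pages\n",
           ptes_per_page, pd_entries, pd_bytes >> 10, pd_pages);
    return 0;
}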

More than Two Levels: Page Directory. Add a third level by splitting the page directory index itself in two (PD Index 0 and PD Index 1); each page-directory level now requires only one page of storage, and if an upper-level PDE is invalid, the lower-level directories and page-table pages below it are never allocated. 30-bit virtual address layout: PD Index 0 | PD Index 1 | Page Table Index | Offset.

Multi-level Page Table Control Flow
01: VPN = (VirtualAddress & VPN_MASK) >> SHIFT
02: (Success, TlbEntry) = TLB_Lookup(VPN)
03: if (Success == True) // TLB Hit
04:     if (CanAccess(TlbEntry.ProtectBits) == True)
05:         Offset = VirtualAddress & OFFSET_MASK
06:         PhysAddr = (TlbEntry.PFN << SHIFT) | Offset
07:         Register = AccessMemory(PhysAddr)
08:     else RaiseException(PROTECTION_FAULT)
09: else // TLB miss: perform the full multi-level lookup (next slides)
(line 1) extract the virtual page number (VPN)
(line 2) check whether the TLB holds the translation for this VPN
(lines 5-7) on a hit, extract the page frame number from the relevant TLB entry, form the desired physical address, and access memory

Multi-level Page Table Control Flow
11: else
12:     PDIndex = (VPN & PD_MASK) >> PD_SHIFT
13:     PDEAddr = PDBR + (PDIndex * sizeof(PDE))
14:     PDE = AccessMemory(PDEAddr)
15:     if (PDE.Valid == False)
16:         RaiseException(SEGMENTATION_FAULT)
17:     else // PDE is valid: now fetch the PTE from the page table
(line 12) extract the page directory index (PDIndex)
(lines 13-14) form the address of the page directory entry (PDE) and fetch it from memory
(lines 15-17) check the PDE's valid flag; if it is set, fetch the PTE from the page table (next slide)

The Translation Process: Remember the TLB
18:     PTIndex = (VPN & PT_MASK) >> PT_SHIFT
19:     PTEAddr = (PDE.PFN << SHIFT) + (PTIndex * sizeof(PTE))
20:     PTE = AccessMemory(PTEAddr)
21:     if (PTE.Valid == False)
22:         RaiseException(SEGMENTATION_FAULT)
23:     else if (CanAccess(PTE.ProtectBits) == False)
24:         RaiseException(PROTECTION_FAULT)
25:     else
26:         TLB_Insert(VPN, PTE.PFN, PTE.ProtectBits)
27:         RetryInstruction()

Inverted Page Tables Keep a single page table that has an entry for each physical page of the system. The entry tells us which process is using this page, and which virtual page of that process maps to this physical page.
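
A minimal sketch of the idea in C (purely illustrative: real inverted page tables use a hash on the (process, virtual page) pair rather than the linear scan shown here):

#include <stdint.h>
#include <stdio.h>

#define NUM_FRAMES 8   /* a tiny machine, for illustration only */

/* One entry per physical frame: which process uses it and which virtual page maps here. */
struct ipt_entry {
    int      used;
    int      pid;
    uint32_t vpn;
};

static struct ipt_entry ipt[NUM_FRAMES];

/* Returns the physical frame holding (pid, vpn), or -1 if that page is not resident. */
int ipt_lookup(int pid, uint32_t vpn) {
    for (int pfn = 0; pfn < NUM_FRAMES; pfn++)
        if (ipt[pfn].used && ipt[pfn].pid == pid && ipt[pfn].vpn == vpn)
            return pfn;
    return -1;
}

int main(void) {
    ipt[3] = (struct ipt_entry){ .used = 1, .pid = 42, .vpn = 7 };
    printf("pid 42, vpn 7 -> frame %d\n", ipt_lookup(42, 7));
    return 0;
}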

One last assumption…

One last assumption… So far, we have assumed that every process's full address space fits into physical memory.

Beyond Physical Memory: Mechanisms. This requires an additional level in the memory hierarchy: the OS needs a place to stash away portions of address space that currently aren't in great demand. In modern systems, this role is usually served by a hard disk drive. Memory hierarchy in a modern system: registers, cache, main memory, mass storage (hard disk, tape, etc.).

Single large address space for a process. Why one large address space? Ease of use: as programmers we don't have to worry about lack of memory, how to lay out code and data in memory, or how to move code/data in and out of memory. Beyond a single process: the addition of swap space allows the OS to support the illusion of a large virtual memory for multiple concurrently running processes.

Physical Memory and Swap Space. Reserve some space on the disk for moving pages back and forth; the OS manages this swap space in page-sized units. [Figure: Physical Memory and Swap Space. Physical memory holds frames PFN 0-3 with pages from Proc 0, Proc 1, and Proc 2; swap space holds blocks 0-7 with further pages of Proc 0, Proc 1, and Proc 2, all of Proc 3's pages, and some free blocks.]

Present Bit. Add some machinery to support swapping pages to and from the disk. When the hardware looks in the PTE, it may find that the page is not present in physical memory. Present bit = 1: the page is present in physical memory. Present bit = 0: the page is not in memory but rather on disk.
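
A minimal sketch of how a PTE might encode this (an illustrative layout, not a real format; as a later slide notes, when present is 0 the bits that normally hold the PFN can instead hold a disk address):

#include <stdint.h>
#include <stdio.h>

/* Illustrative PTE: 'present' decides how the 'number' field is interpreted. */
typedef struct {
    uint32_t valid   : 1;
    uint32_t present : 1;   /* 1: 'number' is a PFN; 0: 'number' is a swap-space block */
    uint32_t prot    : 3;
    uint32_t dirty   : 1;
    uint32_t number  : 26;  /* PFN or disk block, depending on 'present' */
} swap_pte_t;

int main(void) {
    swap_pte_t pte = { .valid = 1, .present = 0, .number = 5 };  /* swapped out to block 5 */
    if (!pte.present)
        printf("page fault: fetch page from swap block %u\n", (unsigned) pte.number);
    return 0;
}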

What If Memory Is Full? The OS must evict pages to make room for the new pages it is about to bring in. The process of picking a page to kick out (replace) is governed by the page-replacement policy.

The Page Fault. Accessing a page that is not in physical memory raises a page fault; the OS needs to swap the page into memory in order to service it.

Page Fault Control Flow. The bits of the PTE normally used for data such as the PFN can instead hold a disk address while the page is swapped out. When the OS receives a page fault, it looks in the PTE and issues the request to disk. [Figure: 1. The process references a virtual address (Load M). 2. The hardware traps to the OS. 3. The OS checks secondary storage to see whether the page exists there. 4. The OS reads the page from disk into a page frame. 5. The OS resets (updates) the page table. 6. The instruction is retried.]

Page Fault Control Flow – Hardware
1: VPN = (VirtualAddress & VPN_MASK) >> SHIFT
2: (Success, TlbEntry) = TLB_Lookup(VPN)
3: if (Success == True) // TLB Hit
4:     if (CanAccess(TlbEntry.ProtectBits) == True)
5:         Offset = VirtualAddress & OFFSET_MASK
6:         PhysAddr = (TlbEntry.PFN << SHIFT) | Offset
7:         Register = AccessMemory(PhysAddr)
8:     else RaiseException(PROTECTION_FAULT)

Page Fault Control Flow – Hardware
9: else // TLB Miss
10:     PTEAddr = PTBR + (VPN * sizeof(PTE))
11:     PTE = AccessMemory(PTEAddr)
12:     if (PTE.Valid == False)
13:         RaiseException(SEGMENTATION_FAULT)
14:     else
15:         if (CanAccess(PTE.ProtectBits) == False)
16:             RaiseException(PROTECTION_FAULT)
17:         else if (PTE.Present == True)
18:             // assuming hardware-managed TLB
19:             TLB_Insert(VPN, PTE.PFN, PTE.ProtectBits)
20:             RetryInstruction()
21:         else if (PTE.Present == False)
22:             RaiseException(PAGE_FAULT)

Page Fault Control Flow – Software
1: PFN = FindFreePhysicalPage()
2: if (PFN == -1)               // no free page found
3:     PFN = EvictPage()        // run replacement algorithm
4: DiskRead(PTE.DiskAddr, PFN)  // sleep (waiting for I/O)
5: PTE.present = True           // update page table with present
6: PTE.PFN = PFN                // bit and translation (PFN)
7: RetryInstruction()           // retry instruction
The OS must find a physical frame for the faulting page to reside in; if there is no free frame, it runs the replacement algorithm.

When Replacements Really Occur. So far, the OS waits until memory is entirely full and only then replaces a page to make room for some other page. That is a little unrealistic; there are many reasons for the OS to keep a small portion of memory free more proactively. A background thread (the swap daemon or page daemon) is responsible for this: if fewer than LW (low watermark) pages are available, it runs and evicts pages until HW (high watermark) pages are available.
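
A hedged sketch of such a background thread's main loop (the watermark values and the free_frames/evict_one_page helpers are invented for illustration):

#include <stdio.h>

#define LOW_WATERMARK  16   /* start working when free frames drop below this */
#define HIGH_WATERMARK 64   /* keep evicting until this many frames are free */

static int free_frames = 10;          /* illustrative current state of the system */

static void evict_one_page(void) {    /* stand-in for the page-replacement policy */
    free_frames++;                    /* pick a victim, write it to swap if dirty, free it */
}

/* One pass of the swap/page daemon: do nothing unless memory is low. */
void swap_daemon_pass(void) {
    if (free_frames >= LOW_WATERMARK)
        return;                       /* plenty of memory free; go back to sleep */
    while (free_frames < HIGH_WATERMARK)
        evict_one_page();
    printf("daemon freed pages, %d frames now free\n", free_frames);
}

int main(void) {
    swap_daemon_pass();
    return 0;
}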

Next time… Which page to replace? Project overview