Page Table Implementation

Readings: Silberschatz et al., Sections 8.4-8.5

Outline
- Translation Lookaside Buffer (TLB)
- Protection and Shared Pages
- Dealing with Large Page Tables

TLB

Implementation of Page Table
- The simplest approach is to implement the page table as a set of dedicated registers.
- In practice it is not feasible to keep the page table in registers. Why?
  - Page tables can be very large.
  - Context switching would be very expensive.

Implementation of Page Table
- Each process has its own page table.
- The page table is kept in main memory.
- The page-table base register (PTBR) points to the page table.
- The page-table length register (PTLR) indicates the size of the page table.
- Changing page tables on a context switch then requires reloading only the PTBR.

Implementation of Page Table
- Problem: accessing a location i now takes two memory accesses:
  - Index into the page table (at the address in the PTBR, offset by the page number of i) to obtain the frame number.
  - Use the frame number to access the desired location.
- Solution: use a special fast-lookup hardware cache called associative memory, or a translation look-aside buffer (TLB).
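
A minimal sketch of this two-access translation, assuming a simulated 32-bit machine with 4 KB pages and a flat page table held in memory; the names (page_table, translate) are illustrative, not any real API:

```c
#include <stdint.h>

#define PAGE_SHIFT 12                         /* 4 KB pages */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

extern uint32_t *page_table;                  /* the table the PTBR points at (simulated) */

uint32_t translate(uint32_t vaddr)
{
    uint32_t page  = vaddr >> PAGE_SHIFT;     /* page number p */
    uint32_t off   = vaddr & PAGE_MASK;       /* offset d */
    uint32_t frame = page_table[page];        /* memory access #1: read the page-table entry */
    return (frame << PAGE_SHIFT) | off;       /* the caller's load/store is access #2 */
}
```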

TLB
- Associative memory supports parallel search over (page #, frame #) pairs.
- Address translation for (p, d):
  - If p is in the associative memory, the frame number comes straight out of the TLB.
  - Otherwise, the frame number is fetched from the page table in memory.

Paging Hardware With TLB

TLB
- The TLB contains only a few of the page-table entries.
- When the CPU generates a logical address, its page number is presented to the TLB.
- If the page number is found, the frame number is immediately available.

TLB
- If the page number is not in the TLB, a TLB miss occurs:
  - The page table is consulted.
  - The page number and frame number are added to the TLB.
  - If the TLB is full, one of the existing entries is replaced.
  - Example replacement policy: least recently used (LRU).
- A high hit rate dramatically reduces the average lookup time.
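
A small software model of the miss-and-replace behaviour described above, assuming a fully associative 16-entry TLB with LRU replacement; real TLBs do this search in parallel hardware, and every name and size here is illustrative:

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_SIZE 16

struct tlb_entry { uint32_t page, frame, last_used; bool valid; };

static struct tlb_entry tlb[TLB_SIZE];
static uint32_t clock_tick;

/* Returns true on a TLB hit and fills in *frame. */
bool tlb_lookup(uint32_t page, uint32_t *frame)
{
    for (int i = 0; i < TLB_SIZE; i++) {
        if (tlb[i].valid && tlb[i].page == page) {
            tlb[i].last_used = ++clock_tick;   /* hit: refresh the LRU timestamp */
            *frame = tlb[i].frame;
            return true;
        }
    }
    return false;                              /* miss: caller walks the page table */
}

/* After a miss, cache the (page, frame) pair, evicting the LRU entry if full. */
void tlb_insert(uint32_t page, uint32_t frame)
{
    int victim = 0;
    for (int i = 0; i < TLB_SIZE; i++) {
        if (!tlb[i].valid) { victim = i; break; }       /* free slot available */
        if (tlb[i].last_used < tlb[victim].last_used)
            victim = i;                                 /* least recently used so far */
    }
    tlb[victim] = (struct tlb_entry){ page, frame, ++clock_tick, true };
}
```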

Effective Access Time
- TLB lookup takes ε time units; one memory cycle takes 1 time unit.
- Hit ratio = α: on a hit, the time to get the data is 1 + ε.
- Miss ratio = 1 − α: on a miss, the time to get the data is 2 + ε.
- Effective Access Time (EAT):
  EAT = (1 + ε)α + (2 + ε)(1 − α) = 2 + ε − α

Effective Access Time
- TLB lookup = 20 nanoseconds.
- Hit ratio (percentage of times that a page number is found in the TLB) = 80%.
- Time to access memory = 100 nanoseconds.
- TLB hit: time to get the data = 20 + 100 = 120 ns.
- TLB miss: time to get the data = 20 + 100 + 100 = 220 ns.
- Effective access time = 0.80 × 120 + 0.20 × 220 = 140 ns.

Effective Access Time
- Now assume the hit ratio is 98% (more typical).
- Effective access time = 0.98 × 120 + 0.02 × 220 = 122 ns.
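
A tiny sketch that evaluates the EAT formula for both examples (20 ns TLB lookup, 100 ns memory access); the function is illustrative only:

```c
#include <stdio.h>

/* eat() applies: EAT = alpha * (tlb + mem) + (1 - alpha) * (tlb + 2*mem) */
static double eat(double tlb_ns, double mem_ns, double hit_ratio)
{
    double hit_time  = tlb_ns + mem_ns;          /* TLB lookup + one memory access */
    double miss_time = tlb_ns + 2.0 * mem_ns;    /* TLB lookup + page table + data */
    return hit_ratio * hit_time + (1.0 - hit_ratio) * miss_time;
}

int main(void)
{
    printf("%.0f ns\n", eat(20, 100, 0.80));     /* 140 ns, the 80%-hit example */
    printf("%.0f ns\n", eat(20, 100, 0.98));     /* 122 ns, the 98%-hit example */
    return 0;
}
```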

Protection and Shared Pages

Protection
- Memory protection is implemented by associating a protection bit with each frame.
- A single protection bit can mark a page as read-write or read-only.
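
A minimal sketch of such a check against a page-table entry; the bit position and PTE layout are hypothetical, not tied to any particular architecture:

```c
#include <stdint.h>
#include <stdbool.h>

#define PTE_READONLY 0x1u   /* hypothetical position of the protection bit */

/* Returns false when the access should raise a protection trap. */
bool access_allowed(uint32_t pte, bool is_write)
{
    if (is_write && (pte & PTE_READONLY))
        return false;        /* write to a read-only page */
    return true;             /* reads, and writes to read-write pages */
}
```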

Shared Pages
- Shared code:
  - One copy of read-only (reentrant) code is shared among processes (e.g., text editors, compilers, window systems).
  - The shared code must appear in the same location in the logical address space of all processes.
- Private code and data:
  - Each process keeps a separate copy of its code and data.
  - The pages for private code and data can appear anywhere in the logical address space.

Shared Pages Example
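
The original slide reproduces the textbook's shared-pages figure; a minimal sketch of the same idea, with made-up frame numbers, assuming two processes sharing three read-only code pages while keeping private data pages:

```c
#include <assert.h>
#include <stdint.h>

#define PAGES 4   /* pages 0-2: shared editor code, page 3: private data */

/* Page tables store only frame numbers here, for clarity. */
uint32_t proc_a_table[PAGES] = { 3, 4, 6, 1 };
uint32_t proc_b_table[PAGES] = { 3, 4, 6, 7 };

int main(void)
{
    for (int p = 0; p < 3; p++)
        assert(proc_a_table[p] == proc_b_table[p]);  /* same code frames, same logical pages */
    assert(proc_a_table[3] != proc_b_table[3]);      /* private data lives in different frames */
    return 0;
}
```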

Dealing with Large Page Tables

Structure of the Page Table
- Systems typically have large logical address spaces, so the page table itself can be excessively large.
- Example:
  - 32-bit logical address space: 2^32 possible addresses.
  - Page size of 4 KB (4096 bytes, or 2^12).
  - Number of pages is 2^20 (20 bits for the page number).
  - The page table may therefore have up to about 1 million entries.
- Take-away message: the page table itself can be large.
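
A back-of-the-envelope sketch of that arithmetic, assuming (purely for illustration) 4-byte page-table entries:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t addr_space = 1ULL << 32;             /* 2^32 bytes of logical address space */
    uint64_t page_size  = 1ULL << 12;             /* 4 KB pages */
    uint64_t entries    = addr_space / page_size; /* 2^20 = 1,048,576 entries */
    uint64_t pte_size   = 4;                      /* assumed size of one page-table entry */

    printf("entries: %llu, table size: %llu MB\n",
           (unsigned long long)entries,
           (unsigned long long)((entries * pte_size) >> 20));   /* 4 MB per process */
    return 0;
}
```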

Structure of the Page Table
- Several approaches:
  - Multi-level (hierarchical) page tables
  - Hashed page tables
  - Inverted page tables

Two-Level Page-Table Scheme
- The page table itself is paged.
- We need to be able to index the outer page table.

Two-Level Paging Example
- A logical address (on a 32-bit machine with a 1 KB page size) is divided into:
  - a page number consisting of 22 bits
  - a page offset consisting of 10 bits
- Since the page table is itself paged, the page number is further divided into:
  - a 12-bit page number (p1)
  - a 10-bit page offset (p2)

Two-Level Paging Example
- Thus, a logical address is laid out as:

    | p1 (12 bits) | p2 (10 bits) | d (10 bits) |

  where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table.
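
A minimal sketch of that split, using the bit widths from the example; the struct and function names are illustrative:

```c
#include <stdint.h>

/* p1 | p2 | d  =  12 | 10 | 10 bits of a 32-bit logical address. */
struct two_level_split { uint32_t p1, p2, d; };

static struct two_level_split split_va(uint32_t va)
{
    struct two_level_split s;
    s.p1 = va >> 20;            /* top 12 bits: index into the outer page table */
    s.p2 = (va >> 10) & 0x3FF;  /* next 10 bits: index into one page of the page table */
    s.d  = va & 0x3FF;          /* low 10 bits: offset within the 1 KB page */
    return s;
}
```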

Address-Translation Scheme
- p1 is used to index the outer page table.
- p2 is used to index the inner page table that the outer entry points to.
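
A sketch of the resulting two-level walk, again for the 32-bit / 1 KB-page example; the outer table and the frame-to-pointer helper are simulated, hypothetical names:

```c
#include <stdint.h>

extern uint32_t outer_table[1 << 12];             /* indexed by p1 (12 bits) */
extern uint32_t *inner_table_at(uint32_t frame);  /* hypothetical: frame number -> table address */

uint32_t translate_two_level(uint32_t va)
{
    uint32_t p1 = va >> 20;                       /* outer index (12 bits) */
    uint32_t p2 = (va >> 10) & 0x3FF;             /* inner index (10 bits) */
    uint32_t d  = va & 0x3FF;                     /* offset within the 1 KB page */

    uint32_t *inner = inner_table_at(outer_table[p1]);  /* memory access #1 */
    uint32_t frame  = inner[p2];                        /* memory access #2 */
    return (frame << 10) | d;                           /* physical address */
}
```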

Multi-level Paging
- What if you have a 64-bit architecture? Is hierarchical paging still a good idea?
  - How many levels of paging are needed?
  - What is the relationship between the number of levels and the number of memory accesses per translation?
  - With 64 bits, even the outer page table is large.
- The 64-bit UltraSPARC would require 7 levels of paging.
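
A rough, purely illustrative estimate of the level count, assuming 4 KB pages and page-table pages holding 512 eight-byte entries (so each level resolves 9 bits); the UltraSPARC's actual page-table layout differs, which is why the slide cites 7 levels:

```c
#include <stdio.h>

int main(void)
{
    int va_bits        = 64;   /* virtual address width */
    int offset_bits    = 12;   /* 4 KB pages */
    int bits_per_level = 9;    /* 512 eight-byte entries per 4 KB table page */

    int page_number_bits = va_bits - offset_bits;                          /* 52 */
    int levels = (page_number_bits + bits_per_level - 1) / bits_per_level; /* round up */
    printf("levels needed: %d\n", levels);                                 /* 6 */
    return 0;
}
```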

Hashed Page Tables
- Common for address spaces larger than 32 bits.
- The page number is hashed into a page table:
  - Each table entry contains a chain of elements that hash to the same location.
- Virtual page numbers in the chain are compared against the incoming page number:
  - If a match is found, the corresponding physical frame is extracted.
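
A minimal sketch of a hashed page-table lookup with chaining, as described above; the bucket count, types, and names are all illustrative:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define HASH_BUCKETS 1024

struct hpt_entry {
    uint64_t vpage;              /* virtual page number */
    uint64_t frame;              /* physical frame number */
    struct hpt_entry *next;      /* chain of entries hashing to the same bucket */
};

extern struct hpt_entry *hash_table[HASH_BUCKETS];

/* Returns true and fills in *frame if vpage is mapped. */
bool hpt_lookup(uint64_t vpage, uint64_t *frame)
{
    size_t bucket = (size_t)(vpage % HASH_BUCKETS);      /* hash the page number */
    for (struct hpt_entry *e = hash_table[bucket]; e != NULL; e = e->next) {
        if (e->vpage == vpage) {                         /* compare along the chain */
            *frame = e->frame;
            return true;
        }
    }
    return false;                                        /* no mapping: page-fault path */
}
```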

Summary
- This section studied implementation issues related to page tables: the TLB, protection and shared pages, and structures for handling large page tables.