
Paging Paging is a memory-management scheme that permits the physical address space of a process to be noncontiguous. Paging avoids the considerable problem of fitting variable-sized memory chunks onto the backing store, from which most of the previous memory-management schemes suffered. When a process is to be executed, its pages are loaded into any available memory frames from the backing store. Physical memory is broken into fixed-sized blocks of equal size, called frames; logical memory is divided into blocks of the same size, called pages. (The process address space is created at the time of process execution.) Size of page = size of frame, and this common size determines the range of the offset field of an address.

The following snapshots show the process address space with pages (i.e., the logical address space), the physical address space with frames, the loading of pages into frames, and the recording of the page-to-frame mapping in a page table.

Mapping pages in the logical address space to frames in the physical address space, and keeping this mapping in the page table.

Paging

Example Page size = 4 bytes (2-bit offset). Process address space = 4 pages (2-bit page number). Physical address space = 8 frames (3-bit frame number). Logical address (page no., offset) = (1, 3) = 0111. Physical address (frame no., offset) = (6, 3) = 11011.

Logical address (page no., offset) = (1, 0) = 0100. Physical address (frame no., offset) = (6, 0) = 11000. (The offset is the same in both addresses.)

Address mapping Using a page size of 4 bytes and a physical memory of 32 bytes (8 frames), we show how the user's view of memory can be mapped into physical memory. Logical address 0 (0000) is (0, 0): page 0, offset 0. Indexing into the page table, we find that page 0 is in frame 5. Thus, logical address 0 maps to physical address 20 (= (5 × 4) + 0). Logical address 3 (0011) (page 0, offset 3) maps to physical address 23 (= (5 × 4) + 3). Logical address 4 (0100) is (1, 0): page 1, offset 0; according to the page table, page 1 is mapped to frame 6. Thus, logical address 4 maps to physical address 24 (= (6 × 4) + 0). Logical address 13 (page 3, offset 1; page 3 is in frame 2) maps to physical address 9 (= (2 × 4) + 1), and so on. Physical address = (frame no. × frame size) + offset.
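
To make the arithmetic concrete, here is a minimal sketch in Python (not part of the original slides): the page table [5, 6, 1, 2] is the one implied by the example above, and the function name translate is our own.

```python
# Minimal sketch of logical-to-physical translation for the example above:
# page size = frame size = 4 bytes, page i maps to frame page_table[i].
PAGE_SIZE = 4
page_table = [5, 6, 1, 2]   # page 0 -> frame 5, page 1 -> frame 6, page 2 -> frame 1, page 3 -> frame 2

def translate(logical_address: int) -> int:
    page_number = logical_address // PAGE_SIZE   # high-order bits of the address
    offset = logical_address % PAGE_SIZE         # low-order bits, unchanged by translation
    frame_number = page_table[page_number]       # page-table lookup
    return frame_number * PAGE_SIZE + offset     # physical = (frame no. * frame size) + offset

# Reproduces the worked values: 0 -> 20, 3 -> 23, 4 -> 24, 13 -> 9.
for la in (0, 3, 4, 13):
    print(la, "->", translate(la))
```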

Protection Memory protection is implemented by associating a protection bit with each frame. A valid–invalid bit is attached to each entry in the page table: 1. “valid” indicates that the associated page is in the process’s logical address space and is thus a legal page. 2. “invalid” indicates that the page is not in the process’s logical address space.
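
A tiny sketch of how a valid–invalid bit could guard a page-table lookup; the table contents and names here are illustrative assumptions, not taken from the slides.

```python
# Each entry: (frame_number, valid_bit). An invalid entry would trap to the OS.
PAGE_SIZE = 4
page_table = [(5, True), (6, True), (1, True), (0, False)]   # page 3 marked invalid (illegal page)

def translate_checked(logical_address: int) -> int:
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame, valid = page_table[page]
    if not valid:
        raise MemoryError(f"invalid page reference: page {page}")  # OS would trap here
    return frame * PAGE_SIZE + offset

print(translate_checked(5))   # page 1, offset 1 -> 6*4 + 1 = 25
# translate_checked(13) would raise, because page 3 is marked invalid
```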

Paging issues When we use a paging scheme, we have no external fragmentation: any free frame can be allocated to a process that needs it. However, we may have some internal fragmentation. For example, if pages are 2,048 bytes, a process of 72,766 bytes would need 35 pages plus 1,086 bytes. It would be allocated 36 frames, resulting in an internal fragmentation of 2,048 − 1,086 = 962 bytes. In the worst case, a process would need n pages plus one byte; it would be allocated n + 1 frames, resulting in an internal fragmentation of almost an entire frame. A further issue with paging is the time required to access a user memory location: with this scheme, two memory accesses are needed to access a byte (one for the page-table entry, one for the byte itself). The standard solution to this problem is to use a special, small, fast lookup hardware cache called the translation look-aside buffer (TLB). The TLB is associative, high-speed memory.
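
The internal-fragmentation arithmetic above, restated as a short sketch (variable names are ours):

```python
import math

PAGE_SIZE = 2048            # bytes per page/frame
process_size = 72_766       # bytes

frames_needed = math.ceil(process_size / PAGE_SIZE)                    # 36 frames
used_in_last_frame = process_size % PAGE_SIZE                          # 1,086 bytes used in the last frame
internal_fragmentation = (PAGE_SIZE - used_in_last_frame) % PAGE_SIZE  # 962 wasted bytes (0 if it fits exactly)

print(frames_needed, internal_fragmentation)   # 36 962
```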

Translation look-aside buffer (TLB)

Effective memory-access time Effective (average) access time: EAT = h × (T_TLB + T_MM) + (1 − h) × (T_TLB + T_MM + T_MM), where h is the TLB hit ratio, T_TLB is the time to search the TLB, and T_MM is the time to access main memory. The first term is the cost of TLB hits (one memory access); the second is the cost of TLB misses (an extra memory access to read the page-table entry).

Example T_MM = 100 ns, T_TLB = 20 ns, TLB hit ratio h = 80%. EAT = ? EAT = (0.80)(20 + 100) + (0.20)(20 + 2 × 100) = 0.8 × 120 + 0.2 × 220 = 96 + 44 = 140 ns.
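
The same calculation as a small helper function; the function name and structure are our own, but the formula is the one on the previous slide. It also reproduces the 230 ns and 398 ns results in the solution slides at the end.

```python
def effective_access_time(hit_ratio: float, t_tlb: float, t_mm: float) -> float:
    """EAT = h*(T_TLB + T_MM) + (1 - h)*(T_TLB + 2*T_MM)."""
    hit_cost = t_tlb + t_mm          # TLB hit: TLB search + one memory access
    miss_cost = t_tlb + 2 * t_mm     # TLB miss: TLB search + page-table access + memory access
    return hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost

print(effective_access_time(0.80, 20, 100))   # 140.0 ns
```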

Page table issues In such an environment, the page table itself can become excessively large. For example, consider a system with a 32-bit logical address space. If the page size in such a system is 4 KB (2^12), then the page table may consist of up to 1 million entries (2^32 / 2^12 = 2^20). Assuming that each entry consists of 4 bytes, each process may need up to 4 MB of physical address space for the page table alone. The computed page table size (4 MB) is much greater than the page size (4 KB, i.e., the frame size in memory), so clearly we would not want to allocate the page table contiguously in main memory. One simple solution to this problem is to divide the page table into smaller pieces. One way is to use a two-level (or, more generally, multilevel) paging algorithm, in which the page table itself is also paged.
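
A quick sketch of the page-table-size arithmetic for this 32-bit example (variable names are ours):

```python
LOGICAL_ADDRESS_BITS = 32
PAGE_SIZE = 2 ** 12     # 4 KB pages
ENTRY_SIZE = 4          # bytes per page-table entry

num_entries = (2 ** LOGICAL_ADDRESS_BITS) // PAGE_SIZE   # 2^20 entries (~1 million)
page_table_size = num_entries * ENTRY_SIZE               # 4 MB

print(num_entries, page_table_size // (1024 * 1024), "MB")   # 1048576 4 MB
```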

Multilevel paging A page table is needed to keep track of the pages of the page table itself; it is called the outer page table or page directory. The number of paging levels depends on the size of the logical address space and the page size. In the previous example: – The 4 MB page table is split into 4M / 4K = 1K = 2^10 pages, so the outer page table has 2^10 entries and is indexed by 10 bits. – The size of the outer page table is 1K × 4 bytes = 4 KB, so the outer page table fits in one page (the page size is 4 KB).

Two-level address layout (32-bit logical address): outer page number p1 = 10 bits, inner page number p2 = 10 bits, offset d = 12 bits. Size of the outer page table = 1K × 4 bytes = 4 KB.
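
A minimal sketch of how the 10/10/12 split could be decoded from a 32-bit logical address; the shifts, masks, and names are our own illustration, not hardware-specific code.

```python
P1_BITS, P2_BITS, OFFSET_BITS = 10, 10, 12   # outer index, inner index, offset

def split_two_level(logical_address: int):
    offset = logical_address & ((1 << OFFSET_BITS) - 1)             # low 12 bits
    p2 = (logical_address >> OFFSET_BITS) & ((1 << P2_BITS) - 1)    # next 10 bits: index into inner page table
    p1 = logical_address >> (OFFSET_BITS + P2_BITS)                 # top 10 bits: index into outer page table
    return p1, p2, offset

# Example with an arbitrary 32-bit address.
print(split_two_level(0x12345678))   # (72, 837, 1656)
```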

Example (VAX architecture) Given a 32-bit logical address, the size of the logical address space is 2^32 bytes. The page size is 512 bytes = 2^9 bytes, so the offset is 9 bits. Each page-table entry is 4 bytes. The logical address space of a process is divided into 4 equal sections (2^2), so 2 bits are required to identify a section. The size of each section is 2^32 bytes / 2^2 = 2^30 bytes. Each section has 2^30 bytes / 2^9 bytes = 2^21 pages, so 21 bits are required to index the page table of a section. Size of the page table per section = 2^21 entries × 4 bytes = 2^23 bytes = 8 MB.

The page-table size (8 MB) is greater than the page size, so multilevel paging is required: the 8 MB page table is paged into 2^23 bytes / 2^9 bytes = 2^14 = 16K pages, so the outer page table has 2^14 entries. The size of the outer page table is 2^14 × 4 bytes = 64 KB, which is still greater than the page size, so the outer page table is paged further: 2^16 bytes / 2^9 bytes = 2^7 = 128 pages, and the resulting second outer page table (2^7 × 4 bytes = 512 bytes) fits in exactly one page. The result is that 3-level paging is required within each section, with the 21-bit page number split as 7 + 7 + 7 bits. Logical address layout: section = 2 bits, page number = 21 bits, offset = 9 bits.
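
The sizing argument can be mechanized. This sketch assumes the slide's parameters (4-byte entries, 512-byte pages) and uses our own function name; it reproduces the 7 + 7 + 7 split of the 21-bit page number within a section, and paging_levels(52, 4096, 4) likewise gives the split used in the 64-bit example on the next slides.

```python
def paging_levels(page_number_bits: int, page_size: int, entry_size: int):
    """Repeatedly page the page table until the outermost table fits in one page.
    Returns the number of index bits consumed at each level, outermost first."""
    entries_per_page = page_size // entry_size          # 512 / 4 = 128 entries -> 7 index bits per level
    bits_per_level = entries_per_page.bit_length() - 1  # exact for powers of two
    levels = []
    remaining = page_number_bits
    while remaining > bits_per_level:
        levels.append(bits_per_level)
        remaining -= bits_per_level
    levels.append(remaining)                            # outermost table now fits in one page
    return list(reversed(levels))

print(paging_levels(21, 512, 4))     # [7, 7, 7] -> three levels per section
print(paging_levels(52, 4096, 4))    # [2, 10, 10, 10, 10, 10] -> the 64-bit example below
```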

Another example Given a 64-bit logical address, the size of the logical address space is 2^64 bytes. The page size is 4 KB = 2^12 bytes, so 12 bits are needed for the offset, leaving 64 − 12 = 52 bits for the page number. The page table therefore consists of 2^52 entries, and with 4-byte entries (given) its size is 2^52 × 4 bytes = 2^54 bytes. The page-table size is greater than the memory page size, so multilevel paging is required, i.e., the page table is divided into pages. The page table is paged into 2^54 bytes / 2^12 bytes = 2^42 pages, so the outer page table has 2^42 entries, indexed by 42 bits. The 52-bit page number is thus divided into a 42-bit outer page number and a 10-bit inner page number. Logical address layout: page number = 52 bits, offset = 12 bits.

The outer page table size is then 2^42 × 4 bytes = 2^44 bytes. This is greater than the page size, so the outer page table is paged in turn into a second outer page table: 2^44 bytes / 2^12 bytes = 2^32 pages. The size of the second outer page table is 2^32 × 4 bytes = 2^34 bytes; this is still greater than the page size, so it is paged into a third outer page table: 2^34 bytes / 2^12 bytes = 2^22 pages. The size of the third outer page table is 2^22 × 4 bytes = 2^24 bytes; again greater than the page size, so it is paged into a fourth outer page table: 2^24 bytes / 2^12 bytes = 2^12 pages. The fourth outer page table (2^12 × 4 bytes = 2^14 bytes) is still larger than a page, so one final split gives 2^14 / 2^12 = 2^2 pages, and the resulting outermost table (2^2 × 4 bytes = 16 bytes) fits in a single page, so the process stops. The 52-bit page number is therefore split as 2 + 10 + 10 + 10 + 10 + 10 bits, which shows how many levels a 64-bit address space would require.
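
Spelled out as a loop for the 64-bit case (a sketch, with our own helper name): starting from the 2^54-byte page table, keep paging the table until the outermost piece fits in a single 4 KB page.

```python
PAGE_SIZE = 2 ** 12   # 4 KB pages
ENTRY_SIZE = 4        # bytes per page-table entry

def log2(x: int) -> int:
    return x.bit_length() - 1     # exact for powers of two

table_entries = 2 ** 52           # innermost page table: one entry per page of the 64-bit space
level = 0
while table_entries * ENTRY_SIZE > PAGE_SIZE:
    table_bytes = table_entries * ENTRY_SIZE
    print(f"level {level}: 2^{log2(table_entries)} entries, table size 2^{log2(table_bytes)} bytes")
    table_entries = table_bytes // PAGE_SIZE   # next (outer) table needs one entry per page of this table
    level += 1
print(f"outermost table: 2^{log2(table_entries)} entries, fits in one {PAGE_SIZE}-byte page")
```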

Shared pages Reentrant (read-only) code pages of a process address space can be shared among processes.

Questions (paging)
1. A logical address space of 16 pages of 1,024 words each is mapped into a physical memory of 32 frames (1 word = 2 bytes). Find:
– the logical address size
– the physical address size
– the number of bits for p, f, and d
2. A system uses a 32-bit physical address, a 24-bit logical address, and a frame size of 1 KB. Find:
– the size of the logical address space
– the size of the physical address space
– the number of pages in the logical address space
– the number of frames in the physical address space
– the size of the page table
3. In a computer system the logical address space is divided into 256 pages with a page size of 2,048 bytes, and the size of the physical address space is 2^32 bytes. Find:
– the size of the logical address space
– the number of frames in the physical address space
– the size of the page table, where an extra valid/invalid bit is also stored with each page-table entry
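
For working through exercises like these, a small helper (our own, purely illustrative) that derives the bit widths from the given sizes; it is shown with the numbers of the first question.

```python
import math

def address_bits(page_size_bytes: int, num_pages: int, num_frames: int):
    d = int(math.log2(page_size_bytes))   # offset bits
    p = int(math.log2(num_pages))         # page-number bits
    f = int(math.log2(num_frames))        # frame-number bits
    return {"logical_bits": p + d, "physical_bits": f + d, "p": p, "f": f, "d": d}

# Question 1: 16 pages of 1,024 words, 1 word = 2 bytes, 32 frames.
print(address_bits(page_size_bytes=1024 * 2, num_pages=16, num_frames=32))
# {'logical_bits': 15, 'physical_bits': 16, 'p': 4, 'f': 5, 'd': 11}
```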

Questions (EAT)
1. Suppose that to access main memory (to search the page table or the pages) the system requires 200 ns. The most recently accessed page numbers and their corresponding frame numbers are stored in a TLB (cache or associative memory), and the time required to search this cache is 10 ns. If 90% of accesses find the page-to-frame mapping in the TLB, what is the EAT?
2. On a paged system, associative registers hold the most active page entries and the page table is stored in main memory. If a TLB search takes 90 ns and a reference through the main-memory page table takes 220 ns, what is the effective access time if 60% of all memory references find their entries in the associative registers?

Solution 1 T_MM = 200 ns, T_TLB = 10 ns, TLB hit ratio h = 90%. EAT = ? EAT = (0.90)(10 + 200) + (0.10)(10 + 2 × 200) = 0.9 × 210 + 0.1 × 410 = 189 + 41 = 230 ns.

Solution 2 T_MM = 220 ns, T_TLB = 90 ns, TLB hit ratio h = 60%. EAT = ? EAT = (0.60)(90 + 220) + (0.40)(90 + 2 × 220) = 0.6 × 310 + 0.4 × 530 = 186 + 212 = 398 ns.