Presentation transcript:

9.5 Allocation of Frames
Each process needs a minimum number of frames, determined by the instruction-set architecture.
Example: on a machine where a memory-reference instruction contains one memory address, at least two frames are needed per instruction: one for the instruction itself and one for the operand. If one level of indirect addressing is allowed, paging requires at least 3 frames per process.
Example: the IBM 370 needs 6 frames to handle the MVC (storage-to-storage move) instruction:
- the instruction is 6 bytes and might span 2 pages
- the source block might straddle 2 pages
- the destination block might straddle 2 pages
The worst case arises with multiple levels of indirection.
Two major allocation schemes: equal allocation and priority allocation.

Equal Allocation
Equal allocation – give every process the same share. For example, with 100 frames and 5 processes, give each process 20 frames.
Proportional allocation – allocate according to the size of the process: a process of size s_i out of a total size S = Σ s_i receives a_i = (s_i / S) × m of the m available frames.
Priority allocation – use a proportional allocation scheme based on priorities rather than size.
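Below is a minimal C sketch of the proportional-allocation formula. The two process sizes and the 62-frame pool mirror the usual textbook example; truncating the quotient rather than rounding is an assumption made for illustration.

    /* Proportional frame allocation: a_i = (s_i / S) * m  (illustrative sketch) */
    #include <stdio.h>

    int main(void) {
        int m = 62;                        /* total free frames (example value) */
        int size[] = {10, 127};            /* process sizes in pages (example values) */
        int n = sizeof size / sizeof size[0];

        int S = 0;                         /* total size of all processes */
        for (int i = 0; i < n; i++)
            S += size[i];

        for (int i = 0; i < n; i++) {
            int a = (int)((long)size[i] * m / S);   /* truncate toward zero */
            printf("process %d (%d pages): %d frames\n", i, size[i], a);
        }
        return 0;
    }

With these numbers the small process receives about 4 frames and the large one about 57, roughly proportional to their sizes.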

Global vs. Local Allocation
Global replacement – a process selects a replacement frame from the set of all frames, so one process can take a frame from another.
- This allows a high-priority process to increase its frame allocation at the expense of a low-priority process.
- A low-priority process cannot control its own page-fault rate.
Local replacement – each process selects only from its own set of allocated frames.
Global replacement generally results in greater system throughput.

9.6 Thrashing
If a process does not have "enough" pages, its page-fault rate is very high. This leads to:
- low CPU utilization
- the operating system concluding that it needs to increase the degree of multiprogramming
- another process being added to the system, which makes the problem worse
Thrashing: a process is busy swapping pages in and out, spending more time paging than executing.


Demand Paging and Thrashing
Why does demand paging work? The locality model:
- a process migrates from one locality (a set of pages that are actively used together) to another, and localities may overlap
- a process page-faults when it changes locality
- if we allocate fewer frames than the size of the current locality, the process will thrash
Why does thrashing occur system-wide? Because the sum of the locality sizes exceeds the total memory size.
We can limit the effects of thrashing by using a local replacement algorithm, but even then, if some processes are thrashing, the average page-fault service time increases for everyone because of the longer queue at the paging device.

Figure: locality in a memory-reference pattern.

Working-Set Model
Δ = the working-set window, a fixed number of recent page references (example: 10,000 references).
WSS_i (working-set size of process P_i) = the number of distinct pages referenced in the most recent Δ references; this varies over time.
- if Δ is too small, it will not encompass an entire locality
- if Δ is too large, it will encompass several localities
- if Δ = ∞, it will encompass the entire program
D = Σ WSS_i = the total demand for frames from all processes.
If D > m (the total number of available frames), thrashing occurs.
Policy: if D > m, suspend one of the processes.
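A minimal C sketch of the admission check implied by the working-set model; the per-process WSS values and the frame count are made-up numbers, and a real kernel would recompute them continuously.

    /* Working-set admission check: suspend a process when D = sum(WSS_i) > m. */
    #include <stdio.h>

    int main(void) {
        int wss[] = {20, 35, 50};   /* WSS_i per process (example values) */
        int n = sizeof wss / sizeof wss[0];
        int m = 90;                 /* total available frames (example value) */

        int D = 0;                  /* total demand for frames */
        for (int i = 0; i < n; i++)
            D += wss[i];

        if (D > m)
            printf("D = %d > m = %d: thrashing likely, suspend a process\n", D, m);
        else
            printf("D = %d <= m = %d: demand fits in memory\n", D, m);
        return 0;
    }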

Keeping Track of the Working Set
Approximate the working set with a fixed-interval timer interrupt plus a reference bit.
Example: Δ = 10,000 references, with a timer interrupt every 5,000 references.
- keep 2 history bits in memory for each page
- on every timer interrupt, copy each page's reference bit into its history bits, then clear the reference bit
- to decide whether a page was used during the last 10,000 to 15,000 references, examine the current reference bit and the two in-memory history bits
- if any of these bits is 1, consider the page to be in the working set
Why is this not completely accurate? Because we cannot tell where, within an interval of 5,000 references, a reference actually occurred.
Improvement: use 10 history bits and interrupt every 1,000 references (at the cost of more frequent interrupts).
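A small C sketch of this approximation, using a per-page reference bit plus a 2-bit in-memory history as described above. The array-based page table and the helper functions are simplifications, not a real kernel interface.

    #include <stdbool.h>
    #include <stdio.h>

    #define NPAGES 8

    static unsigned char ref_bit[NPAGES];   /* hardware-set reference bit (0 or 1) */
    static unsigned char history[NPAGES];   /* low 2 bits hold the last two samples */

    /* Called from the fixed-interval timer interrupt. */
    void timer_tick(void) {
        for (int p = 0; p < NPAGES; p++) {
            history[p] = (unsigned char)(((history[p] << 1) | ref_bit[p]) & 0x3);
            ref_bit[p] = 0;                  /* clear for the next interval */
        }
    }

    /* In the working set if referenced in the current interval or in
       either of the two recorded intervals. */
    bool in_working_set(int p) {
        return ref_bit[p] || history[p];
    }

    int main(void) {
        ref_bit[3] = 1;                      /* page 3 referenced this interval */
        timer_tick();                        /* sample and clear */
        printf("page 3: %d, page 5: %d\n", in_working_set(3), in_working_set(5));
        return 0;
    }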

Page-Fault Frequency Scheme
Establish an "acceptable" page-fault rate with an upper and a lower bound:
- if the actual rate is too low, the process loses a frame
- if the actual rate is too high, the process gains a frame
(A control-loop sketch is given below. Section 9.7 is skipped.)
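A minimal C sketch of the page-fault-frequency control loop; the two bounds and the idea of measuring the rate in faults per second are assumptions made for illustration.

    #include <stdio.h>

    #define PFF_LOW   2.0   /* faults/s: below this, reclaim a frame from the process */
    #define PFF_HIGH 10.0   /* faults/s: above this, grant the process another frame  */

    /* Return the new frame allocation given the measured fault rate. */
    int adjust_frames(double fault_rate, int frames) {
        if (fault_rate > PFF_HIGH)
            return frames + 1;           /* process needs more memory */
        if (fault_rate < PFF_LOW && frames > 1)
            return frames - 1;           /* process can give a frame back */
        return frames;                   /* rate is acceptable */
    }

    int main(void) {
        printf("%d\n", adjust_frames(15.0, 8));  /* prints 9 */
        printf("%d\n", adjust_frames(1.0, 8));   /* prints 7 */
        printf("%d\n", adjust_frames(5.0, 8));   /* prints 8 */
        return 0;
    }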

9.8 Allocating Kernel Memory
Kernel memory is treated differently from user-mode memory and is often allocated from a free-memory pool separate from the list used to satisfy ordinary user-mode processes, because:
- the kernel requests memory for data structures of varying sizes, some of them less than a page in size
- some kernel memory needs to be physically contiguous, e.g., for hardware devices that interact directly with physical memory

Buddy System
Allocates memory from a fixed-size segment consisting of physically contiguous pages, using a power-of-2 allocator:
- requests are satisfied in units sized as powers of 2; each request is rounded up to the next-highest power of 2
- when a smaller allocation is needed than is currently available, the current chunk is split into two buddies of the next-lower power of 2, and splitting continues until an appropriately sized chunk is available (see the sketch below)
Advantage: adjacent buddies can be combined quickly by coalescing to form a larger segment.
Drawback: rounding up is very likely to cause fragmentation within allocated segments.
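A short C sketch of the request path (rounding up to a power of 2, then splitting); the 256 KB segment and 21 KB request follow the common textbook example. A real buddy allocator would also keep per-order free lists and coalesce buddies on free, which is omitted here.

    #include <stdio.h>

    /* Smallest power of two >= n (n > 0). */
    static size_t round_up_pow2(size_t n) {
        size_t p = 1;
        while (p < n)
            p <<= 1;
        return p;
    }

    int main(void) {
        size_t chunk = 256 * 1024;              /* available segment: 256 KB */
        size_t request = 21 * 1024;             /* request: 21 KB */
        size_t need = round_up_pow2(request);   /* rounded up to 32 KB */

        /* Split into buddies until the chunk matches the rounded-up size. */
        while (chunk > need) {
            chunk /= 2;
            printf("split: now holding a %zu KB buddy\n", chunk / 1024);
        }
        printf("allocate a %zu KB chunk for the %zu KB request\n",
               chunk / 1024, request / 1024);
        return 0;
    }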

Figure: buddy system allocator.

Alternate strategy: Slab Allocation
- a slab is one or more physically contiguous pages
- a cache consists of one or more slabs, and there is a single cache for each unique kernel data structure
- each cache is filled with objects, i.e., instantiations of the data structure
- when a cache is created, it is filled with objects marked as free; when structures are stored, objects are marked as used
- if a slab is full of used objects, the next object is allocated from an empty slab; if there are no empty slabs, a new slab is allocated
Benefits: no fragmentation and fast satisfaction of memory requests (a simplified sketch follows).
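A highly simplified C sketch of one slab cache. The structure names and the fixed slab size are illustrative only; this is not the Linux or Solaris slab-allocator API.

    #include <stdio.h>
    #include <string.h>

    #define OBJS_PER_SLAB 4

    struct task_struct { int pid; };            /* stand-in kernel object type */

    struct slab_cache {
        struct task_struct objs[OBJS_PER_SLAB]; /* one slab of preallocated objects */
        int used[OBJS_PER_SLAB];                /* 0 = free, 1 = used */
    };

    /* Hand out a free object; a real allocator would grab a new slab when full. */
    struct task_struct *cache_alloc(struct slab_cache *c) {
        for (int i = 0; i < OBJS_PER_SLAB; i++)
            if (!c->used[i]) { c->used[i] = 1; return &c->objs[i]; }
        return NULL;                             /* slab full of used objects */
    }

    /* Mark the object free again; it stays allocated in the slab for reuse. */
    void cache_free(struct slab_cache *c, struct task_struct *obj) {
        c->used[obj - c->objs] = 0;
    }

    int main(void) {
        struct slab_cache cache;
        memset(&cache, 0, sizeof cache);
        struct task_struct *t = cache_alloc(&cache);
        t->pid = 42;
        cache_free(&cache, t);
        printf("allocated and freed one object with no per-request fragmentation\n");
        return 0;
    }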

Figure: slab allocation.

9.9 Other Issues – Prepaging
Prepaging reduces the large number of page faults that occur at process startup (or resumption): prepage all or some of the pages a process will need before they are referenced.
In a system using the working-set model, we keep with each process a list of the pages in its working set; before suspending the process, its working set is saved and can be prepaged when the process resumes.
But if prepaged pages go unused, the I/O and memory were wasted.
Assume s pages are prepaged and a fraction α of them is actually used. Is the cost of the s × α page faults saved greater or less than the cost of prepaging s × (1 − α) unnecessary pages?
- α near zero: prepaging loses
- α near one: prepaging wins

Other Issues – Page Size
Page-size selection must weigh several factors:
- size of the page table: a large page size is preferred (fewer entries)
- internal fragmentation: favors a small page size
- time to read or write a page: favors a larger page size
- locality: a smaller page size matches program locality more accurately
- number of page faults: a larger page size reduces the number of page faults

Other Issues – TLB Reach
The TLB is expensive and power hungry, so it has a limited number of entries.
TLB reach = the amount of memory accessible through the TLB = (TLB size) × (page size). For example, 64 entries with 4 KB pages cover only 256 KB.
Ideally, the working set of each process fits in the TLB; otherwise the process spends considerable time resolving TLB misses through the page table.
Ways to increase TLB reach:
- increase the page size: this may increase fragmentation, since not all applications require a large page size
- provide multiple page sizes: applications that require larger page sizes can use them without an increase in fragmentation for everyone else
A recent trend is to move toward software-managed TLBs.

Other Issues – Program Structure
How a program is written affects its page-fault behavior. Consider a 128 × 128 array in which each row is stored in one page, and assume the process is allocated fewer than 128 frames:

    int data[128][128];
    int i, j;

    /* Program 1: column-by-column traversal touches a different row
       (a different page) on every assignment. */
    for (j = 0; j < 128; j++)
        for (i = 0; i < 128; i++)
            data[i][j] = 0;
    /* up to 128 x 128 = 16,384 page faults */

    /* Program 2: row-by-row traversal finishes each page before
       moving on to the next. */
    for (i = 0; i < 128; i++)
        for (j = 0; j < 128; j++)
            data[i][j] = 0;
    /* 128 page faults */

Other Issues – Program Structure
Data structures: a stack has good locality; a hash table has bad locality. Besides locality, other factors matter when choosing a data structure: search speed, total number of memory references, and total number of pages touched.
Compiler and loader:
- code pages are always read-only
- the loader can avoid placing routines across page boundaries
- routines that call each other can be packed into the same page
Language: the use of pointers in C and C++ tends to randomize access to memory, potentially diminishing a process's locality; object-oriented programs also tend to have poor locality of reference.

Other Issues – I/O Interlock
Pages must sometimes be locked into memory. Consider I/O: pages being used to copy a file from a device must be protected from being selected for eviction by the page-replacement algorithm.
Solutions:
- never perform I/O to user memory; use system memory instead, at the cost of extra copying between user memory and system memory
- allow pages to be locked into memory with a lock bit
The lock bit can also prevent replacement of a newly brought-in page until it has been used at least once, which is useful when the faulting process has low priority. (A user-level analogue is sketched below.)
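As an aside, POSIX exposes a user-level way to pin pages: mlock() locks a buffer's pages into physical memory so they cannot be evicted while a device or time-critical code uses them. This sketch shows that analogue, not the kernel's internal lock bit, and it typically requires privileges or RLIMIT_MEMLOCK headroom.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 64 * 1024;
        char *buf = malloc(len);
        if (!buf)
            return 1;

        if (mlock(buf, len) != 0) {      /* pin the buffer's pages in RAM */
            perror("mlock");
            free(buf);
            return 1;
        }

        /* ... the buffer can now serve as a stable I/O target ... */

        munlock(buf, len);               /* allow the pages to be paged out again */
        free(buf);
        return 0;
    }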

Figure: reason why frames used for I/O must stay in memory. (Section 9.10 is skipped.)