CS 241 Spring 2007 System Programming 1 Memory Implementation Issues Lecture 33 Klara Nahrstedt

2 CS241 Administrative  Read Stallings Chapter 8.1 and 8.2 about VM  LMP3 starts today  Start Early!!!

3 Contents Brief Discussion of Second Chance Replacement Algorithm Paging basic process implementation Frame allocation for multiple processes Thrashing Working Set Memory-Mapped Files

4 Second Chance Example 12 references, 9 faults
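
To make the mechanism concrete, here is a minimal C sketch of second-chance (clock) replacement run over a small reference string; the trace, the frame count, and all names are illustrative rather than taken from the slide's example.

#include <stdio.h>

#define FRAMES 3
#define REFS   12

int main(void)
{
    int refs[REFS] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};  /* illustrative trace */
    int page[FRAMES], refbit[FRAMES];
    int hand = 0, faults = 0;

    for (int f = 0; f < FRAMES; f++) { page[f] = -1; refbit[f] = 0; }

    for (int i = 0; i < REFS; i++) {
        int hit = 0;
        for (int f = 0; f < FRAMES; f++)
            if (page[f] == refs[i]) { refbit[f] = 1; hit = 1; break; }
        if (hit) continue;

        faults++;
        /* advance the clock hand, giving referenced frames a second chance */
        while (refbit[hand]) { refbit[hand] = 0; hand = (hand + 1) % FRAMES; }
        page[hand] = refs[i];
        refbit[hand] = 1;
        hand = (hand + 1) % FRAMES;
    }
    printf("%d references, %d faults\n", REFS, faults);
    return 0;
}

When every resident page has its reference bit set, the hand sweeps all the way around and the policy degenerates to plain FIFO.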

5 Basic Paging Process Implementation (1) Separate page-out from page-in: keep a pool of free frames. When a page is to be replaced, use a free frame; read the faulting page and restart the faulting process while the page-out is still occurring. Why? The alternative is to evict a page only when a frame is needed to read in the faulted page from disk. Disadvantage of the alternative: a page fault may require two disk accesses, one to write out the evicted page and one to read in the faulted page.

6 Basic Paging Process Implementation (2) Paging out: write dirty pages to disk whenever the paging device is free, and reset the dirty bit. Benefit? It removes the page-out (disk write) from the critical path and allows page-replacement algorithms to replace clean pages. What should we do with paged-out pages? Cache them in primary memory (giving them a second chance), or return them to a free pool but remember which page each frame held; if the system needs to map that page in again, reuse the frame.
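
As a rough sketch of that free-pool idea (return paged-out frames to a pool but remember which virtual page each one last held, so a later fault on that page can reclaim the frame without a disk read), here is some illustrative C; the structures and function names are assumptions, not real kernel code.

#include <stdio.h>

#define NFRAMES 4

struct free_frame {
    int frame;       /* physical frame number                         */
    int last_vpage;  /* virtual page that last lived here, -1 if none */
};

static struct free_frame pool[NFRAMES];
static int pool_size = 0;

/* page-out path: after the frame has been cleaned, push it and remember its contents */
void free_frame_put(int frame, int vpage)
{
    pool[pool_size].frame = frame;
    pool[pool_size].last_vpage = vpage;
    pool_size++;
}

/* fault path: if the faulting page still sits in a pooled frame, reclaim it directly;
 * otherwise hand out any free frame and signal that a disk read is needed */
int free_frame_get(int vpage, int *needs_disk_read)
{
    for (int i = 0; i < pool_size; i++) {
        if (pool[i].last_vpage == vpage) {
            int frame = pool[i].frame;
            pool[i] = pool[--pool_size];   /* remove entry from the pool   */
            *needs_disk_read = 0;          /* old contents are still valid */
            return frame;
        }
    }
    *needs_disk_read = 1;
    return pool[--pool_size].frame;        /* assumes the pool is non-empty */
}

int main(void)
{
    int need_io;
    free_frame_put(7, 42);                 /* frame 7 once held page 42         */
    int f = free_frame_get(42, &need_io);  /* fault on page 42 reclaims frame 7 */
    printf("got frame %d, disk read needed: %d\n", f, need_io);
    return 0;
}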

7 Frame Allocation for Multiple Processes How are page frames allocated to the individual virtual memories of the various jobs running in a multi-programmed environment? Simple solution: allocate a minimum number of frames per process (how many?): one frame for the currently executing instruction, frames for its operands (most instructions require two), plus an extra frame for paging out and one for paging in.

8 Multi-Programming Frame Allocation Solution 2: allocate an equal number of frames per job. But jobs use memory unequally, high-priority jobs get the same number of page frames as low-priority jobs, and the degree of multiprogramming might vary.

9 Multi-Programming Frame Allocation Solution 3: allocate a number of frames per job proportional to job size. How do you determine job size: from run-command parameters or dynamically? Why is multi-programming frame allocation important? If it is not solved appropriately, it results in a severe problem: thrashing.
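
A minimal sketch of proportional allocation; the job sizes and total frame count are made-up values.

#include <stdio.h>

int main(void)
{
    int total_frames = 64;
    int size[] = {10, 127, 20};   /* illustrative job sizes, in pages */
    int njobs = 3, total_size = 0;

    for (int i = 0; i < njobs; i++) total_size += size[i];

    for (int i = 0; i < njobs; i++) {
        /* frames_i = (size_i / total size) * total frames, rounded down */
        int frames = (int)((double)size[i] / total_size * total_frames);
        printf("job %d: %3d pages -> %2d frames\n", i, size[i], frames);
    }
    return 0;
}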

10 Thrashing: exposing the lie of VM Thrashing: as the page frames per VM space decrease, the page fault rate increases. Each time one page is brought in, another page, whose contents will soon be referenced, is thrown out. Processes spend all of their time blocked, waiting for pages to be fetched from disk. I/O devices run at 100% utilization, but the system is not getting much useful work done; memory and CPU are mostly idle. (Figure: real memory divided among processes P1, P2, P3.)

11 Page Fault Rate vs. Size Curve

12 Why Thrashing? Computations have locality. As page frames decrease, the frames available are no longer large enough to contain the locality of the process. The processes start faulting heavily: pages that are read in are used and immediately paged out.

13 Results of Thrashing

14 Why? As the page fault rate goes up, processes get suspended on page-out queues for the disk. The system may try to optimize performance by starting new jobs. Starting new jobs reduces the number of page frames available to each process, which increases the page-fault rate further. System throughput plunges.

15 Solution: Working Set Main idea: figure out how much memory a process needs to keep most of its recent computation in memory with very few page faults. How? The working-set model assumes locality: the principle of locality states that a program clusters its accesses to data and text temporally. A recently accessed page is more likely to be accessed again. Thus, as the number of page frames increases above some threshold, the page fault rate drops dramatically.

16 Working Set (1968, Denning) What we want to know: the collection of pages the process must have in order to avoid thrashing. This requires knowing the future. And our trick is? Working set: the pages referenced by the process in the last Δ seconds of execution are considered to comprise its working set; Δ is the working-set parameter. Uses of working-set sizes: cache partitioning (give each app enough space for its WS), page replacement (preferentially discard non-WS pages), scheduling (a process is not executed unless its WS is in memory).
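
The sketch below computes the working-set size over a small reference trace, using references as a stand-in for seconds of execution; the trace and the window DELTA are illustrative.

#include <stdio.h>

#define DELTA 4    /* working-set window, in references */
#define REFS  12

int main(void)
{
    int refs[REFS] = {1, 2, 1, 3, 4, 2, 4, 4, 1, 3, 2, 3};  /* illustrative trace */

    for (int t = 0; t < REFS; t++) {
        int seen[16] = {0}, ws_size = 0;
        int start = (t - DELTA + 1 > 0) ? t - DELTA + 1 : 0;

        /* the working set at time t is the set of distinct pages in the last DELTA references */
        for (int i = start; i <= t; i++)
            if (!seen[refs[i]]) { seen[refs[i]] = 1; ws_size++; }

        printf("t=%2d  ref=%d  working-set size=%d\n", t, refs[t], ws_size);
    }
    return 0;
}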

17 Working Set Allocate at least this many frames for this process.

18 Calculating Working Set 12 references, 8 faults; window size is Δ.

19 Working Set in Action to Prevent Thrashing Algorithm: if the number of free page frames exceeds the working-set size of some suspended process i, then activate process i and map in its entire working set. If the working-set size of some process k increases and no page frame is free, suspend process k and release all of its pages.

20 Working Sets of Real Programs Typical programs have phases. (Figure: working-set size over time, showing brief transitions between stable phases, and the sum of both.)

21 Working Set Implementation Issues Moving window over the reference string used for the determination; keeping track of the working set.

22 Working Set Implementation Approximate the working-set model using a timer and the reference bit. Set the timer to interrupt after approximately x references (on the order of the window Δ); when it fires, remove pages that have not been referenced and reset the reference bits.
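
A minimal sketch of the timer-plus-reference-bit approximation, with the timer interrupt modeled as an ordinary function call; the page-table layout and all names are assumptions.

#include <stdio.h>

#define NPAGES 8

struct pte {
    int in_ws;   /* currently counted in the approximate working set */
    int refbit;  /* set by "hardware" on each access                 */
};

static struct pte pt[NPAGES];

void access_page(int p)            /* simulated memory reference */
{
    pt[p].refbit = 1;
    pt[p].in_ws = 1;
}

void timer_interrupt(void)         /* fires roughly once per window */
{
    for (int p = 0; p < NPAGES; p++) {
        if (!pt[p].refbit) pt[p].in_ws = 0;  /* not referenced this interval: drop from WS */
        pt[p].refbit = 0;                    /* reset for the next interval                */
    }
}

int main(void)
{
    access_page(1); access_page(3); access_page(5);
    timer_interrupt();             /* end of interval 1 */
    access_page(3);                /* only page 3 is touched in interval 2 */
    timer_interrupt();             /* end of interval 2 */

    for (int p = 0; p < NPAGES; p++)
        if (pt[p].in_ws) printf("page %d remains in the approximate working set\n", p);
    return 0;
}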

23 Page Fault Frequency Working Set Another approximation of the pure working set. Assume that if the working set is correct, there will not be many page faults. If the page-fault rate increases beyond the assumed knee of the curve, increase the number of page frames available to the process. If the page-fault rate decreases below the foot of the curve, decrease the number of page frames available to the process.
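
A hedged sketch of the page-fault-frequency control loop; the thresholds and the observed fault rates are invented for illustration.

#include <stdio.h>

/* grow the allocation when faults are too frequent, shrink it when they are rare */
int adjust_frames(double fault_rate, int frames)
{
    const double upper = 0.10;   /* above the knee of the curve */
    const double lower = 0.01;   /* below the foot of the curve */

    if (fault_rate > upper)               return frames + 1;
    if (fault_rate < lower && frames > 1) return frames - 1;
    return frames;
}

int main(void)
{
    int frames = 8;
    double observed[] = {0.20, 0.15, 0.05, 0.005};  /* faults per reference, illustrative */

    for (int i = 0; i < 4; i++) {
        frames = adjust_frames(observed[i], frames);
        printf("fault rate %.3f -> %d frames\n", observed[i], frames);
    }
    return 0;
}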

24 Page Fault Frequency Working Set

25 Page Size Considerations Small pages require large page tables. Large pages imply that significant amounts of a page may never be referenced. Locality of reference tends to be small (256), implying small pages. I/O transfers have high seek time, implying larger pages (more data per seek). Internal fragmentation is minimized with a small page size. Real systems (can be reconfigured): Windows default 8 KB, Linux default 4 KB.
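
These trade-offs are often summarized by the classic textbook model overhead(p) = s*e/p + p/2 (page-table space versus internal fragmentation), minimized at p = sqrt(2*s*e); the formula is not from the slides, and the values of s and e below are illustrative.

#include <stdio.h>
#include <math.h>   /* link with -lm for sqrt() */

int main(void)
{
    double s = 1 << 20;   /* assumed average segment size: 1 MB */
    double e = 8;         /* assumed bytes per page-table entry */

    for (int p = 512; p <= 16384; p *= 2) {
        double overhead = s * e / p + p / 2.0;   /* table space + half-page waste */
        printf("page size %5d B -> overhead %8.0f B\n", p, overhead);
    }
    printf("optimum near %.0f bytes\n", sqrt(2 * s * e));
    return 0;
}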

26 Memory-Mapped Files (Figure: mmap requests map blocks of a disk file into the virtual memory of the user process.)

27 Memory-Mapped Files Dynamic loading: by mapping executable files and shared libraries into its address space, a program can load and unload executable code sections dynamically. Fast file I/O: when you call file I/O functions such as read() and write(), the data is copied through an intermediary kernel buffer before it is transferred to the physical file or to the process. This intermediary buffering is slow and expensive; memory mapping eliminates it, improving performance significantly.

28 Memory-Mapped Files Streamlined file access: once you map a file to a memory region, you access it via pointers, just as you would access ordinary variables and objects. Memory persistence: memory mapping enables processes to share memory sections that persist independently of the lifetime of any particular process.

29 POSIX
caddr_t mmap(caddr_t map_addr,  /* VM address at which to map the file; use 0 to let the system choose */
             size_t length,     /* length of the file mapping                  */
             int protection,    /* types of access                             */
             int flags,         /* attributes                                  */
             int fd,            /* file descriptor                             */
             off_t offset);     /* offset in the file where the mapping starts */

30 Protection Attributes PROT_READ /* the mapped region may be read */ PROT_WRITE /* the mapped region may be written */ PROT_EXEC /* the mapped region may be executed */

31 Map first 4 KB of file and read int
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(int argc, char *argv[])
{
    int fd;
    void *pregion;

    if ((fd = open(argv[1], O_RDONLY)) < 0) {
        perror("failed on open");
        return -1;
    }

32 Map first 4 KB of file and read int
    /* map the first 4 kilobytes of fd */
    pregion = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
    if (pregion == MAP_FAILED) {
        perror("mmap failed");
        return -1;
    }
    close(fd);  /* close the physical file because we don't need it */

    /* access mapped memory; read the first int in the mapped file */
    int val = *((int *) pregion);
    printf("first int in file: %d\n", val);
    return 0;
}

33 munmap
int munmap(caddr_t addr, int length);
int msync(void *address, size_t length, int flags);
size_t page_size = (size_t) sysconf(_SC_PAGESIZE);
The SIGSEGV signal allows you to catch references to memory that have the wrong protection mode.
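
A minimal usage sketch tying mmap, msync, and munmap together; it assumes a writable file named data.bin that already exists and is at least one page long (the file name and size are illustrative).

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    size_t page_size = (size_t) sysconf(_SC_PAGESIZE);

    int fd = open("data.bin", O_RDWR);        /* hypothetical file */
    if (fd < 0) { perror("open"); return -1; }

    char *p = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); close(fd); return -1; }
    close(fd);                                /* mapping stays valid after close  */

    p[0] = 'x';                               /* modify the file through memory   */
    msync(p, page_size, MS_SYNC);             /* force the dirty page out to disk */
    munmap(p, page_size);                     /* remove the mapping               */
    return 0;
}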

34 Summary Second Chance Replacement Policy Paging basic implementation Multiprogramming frame allocation Thrashing Working set model Working set implementation Page size consideration Memory-Mapped Files