Virtual Memory II: Thrashing, Working Set Algorithm, Dynamic Memory Management

2 Announcements
Prelim this week:
–Thursday March 8th, 7:30-9:00pm, 1½ hour exam
–203 Phillips
–Closed book, no calculators/PDAs/…
–Bring ID
Make-up exam: early Thursday morning, March 8th, 8:30am-10:00am
–Three people signed up so far
Topics: everything up to (and including) today, March 5th
–Lectures 1-18, chapters 1-9 (7th ed)
Review session:
–Tomorrow, Tuesday, March 6th, during the second half of 415 section
Homework 3 and Project 3 Design Doc due today, March 5th
–Make sure to look at the lecture schedule to keep up with due dates!

3 Last Time
We've focused on demand paging
–Each process is allocated a set of pages of memory
–As a process executes, it references pages
–On a miss, a page fault to the O/S occurs
–The O/S pages in the missing page, and pages out some target page if room is needed
The CPU maintains a cache of PTEs: the TLB
The O/S must flush the TLB on a context switch (because the page table changes) and before examining page reference bits

4 Example Page Replacement: LRU, 3 pages of memory
[Table omitted: the slide traces a 19-reference string through LRU, showing for each time t the memory state and the pages brought in and out]
R(t): page referenced at time t
S(t): memory state when finished doing the paging at time t
In(t): page brought in, if any (∅ = none)
Out(t): page sent out, if any (∅ = none)
Hit ratio: 9/19 = 47%. Miss ratio: 10/19 = 53%
While "filling" memory we are likely to get a page fault on almost every reference; usually we don't include these events when computing the hit ratio

5 Goals for today
Review demand paging
What is thrashing?
Solutions to thrashing
–Approach 1: Working Set
–Approach 2: Page fault frequency
Dynamic Memory Management
–Memory allocation and deallocation, and goals
–Memory allocator impossibility result
–Best fit
–Simple scheme: chunking, binning, and free
–Buddy-block scheme
–Other issues

6 Thrashing
Def: an excessive rate of paging that occurs because the processes in the system require more memory than is physically available
–The system keeps throwing out pages that will be referenced soon
–So processes keep accessing memory that is not there
Why does it occur?
–Poor locality: past != future
–There is reuse, but the process does not fit
–Too many processes in the system

7 Approach 1: Working Set
Peter Denning, 1968
–He used this term to denote the memory locality of a program
Def: the pages referenced by a process in the last Δ time units comprise its working set
For our examples, we usually discuss the WS in terms of Δ, a "window" in the page reference string. While this is easier on paper, it makes less sense in practice! In real systems, the window should probably be a period of time, perhaps a second or two

8 Working Sets
The working set size is the number of pages in the working set
–i.e., the number of distinct pages touched in the interval [t-Δ+1 .. t]
The working set size changes with program locality
–During periods of poor locality, the process references more pages
–Within that period of time, it will have a larger working set size
Goal: keep the WS of each process in memory
–E.g., if Σ WSᵢ over all runnable processes i > physical memory, then suspend a process
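The definition above can be sketched directly in code. This is a minimal illustration, not an OS implementation: `working_set_size` is a hypothetical helper that counts the distinct pages in the window [t-Δ+1 .. t] of a reference string.

```c
#include <stddef.h>

/* Number of distinct pages referenced in the window [t-delta+1 .. t].
   refs[] is the page reference string; delta is the WS window.
   (Illustrative helper; a real kernel never sees the full string.) */
static int working_set_size(const int *refs, size_t t, size_t delta) {
    size_t start = (t + 1 >= delta) ? t + 1 - delta : 0;
    int count = 0;
    for (size_t i = start; i <= t; i++) {
        int seen = 0;
        for (size_t j = start; j < i; j++)
            if (refs[j] == refs[i]) { seen = 1; break; }
        if (!seen) count++;   /* first occurrence inside the window */
    }
    return count;
}
```

For the reference string 1, 2, 1, 3, 2 and Δ = 3, the window at t = 4 is {1, 3, 2}, so the working set size is 3; shrinking Δ to 2 shrinks it to 2.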

9 Working Set Approximation
Approximate with an interval timer + a reference bit
Example: Δ = 10,000
–Timer interrupts after every 5000 time units
–Keep 2 bits in memory for each page
–Whenever the timer interrupts, copy the reference bits into the in-memory bits and reset all reference bits to 0
–If one of the bits in memory = 1, the page is in the working set
Why is this not completely accurate?
–Cannot tell where within the 5000-unit interval the reference occurred
Improvement: 10 bits per page and an interrupt every 1000 time units
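The timer-based approximation amounts to shifting the hardware reference bit into a small per-page history on every tick. A minimal sketch, assuming 2 history bits per page and simple arrays standing in for the page table:

```c
#define NPAGES 4

static unsigned char history[NPAGES];  /* 2 low bits of reference history */
static unsigned char refbit[NPAGES];   /* hardware reference bit (0 or 1) */

/* On each timer interrupt: shift the reference bit into the history
   (keeping only 2 bits) and clear the hardware bit. */
static void timer_tick(void) {
    for (int p = 0; p < NPAGES; p++) {
        history[p] = (unsigned char)(((history[p] << 1) | refbit[p]) & 0x3);
        refbit[p] = 0;
    }
}

/* A page is in the (approximate) working set if any tracked bit is set. */
static int in_working_set(int p) { return history[p] != 0 || refbit[p] != 0; }
```

The slide's "improvement" is exactly widening `history` to 10 bits and ticking 5× as often; the structure of the code is unchanged.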

10 Using the Working Set
Used mainly for prepaging
–Pages in the working set are a good approximation of what to bring in
In Windows, processes have a max and min WS size
–At least min pages of the process are in memory
–If > max pages are in memory, a page is replaced on a page fault
–Else, if memory is available, the WS is increased on a page fault
–The max WS can be specified by the application

11 Theoretical aside
Denning defined a policy called WSOpt
–In this, the working set is computed over the next Δ references, not the last: R(t) .. R(t+Δ-1)
He compared WS with WSOpt
–WSOpt has knowledge of the future…
–…yet even though WS is a practical algorithm with no ability to see the future, we can prove that the hit and miss ratios are identical for the two algorithms!

12 Key insight in proof
Basic idea: look at the paging decision made by WS at time t+Δ-1 and compare it with the decision made by WSOpt at time t
Both look at the same references… hence they make the same decision
–Namely, WSOpt tosses out page R(t-1) if it isn't referenced "again" in times t .. t+Δ-1
–WS, running at time t+Δ-1, tosses out page R(t-1) if it wasn't referenced in times t .. t+Δ-1

13 How do WSOpt and WS differ?
WS maintains more pages in memory, because it needs a Δ-long "delay" to make a paging decision
–In effect, it makes the same decisions, but it makes them after a time lag
–Hence these pages hang around a bit longer

14 How do WS and LRU compare?
Suppose we use the same value of Δ
–WS removes pages if they aren't referenced, and hence keeps fewer pages in memory
–When it does page things out, it is using an LRU policy!
–LRU will keep all Δ pages in memory, referenced or not
Thus LRU often has a lower miss rate, but needs more memory than WS

15 Approach 2: Page Fault Frequency
Thrashing viewed as a poor ratio of fetching to work
PFF = page faults / instructions executed
If PFF rises above a threshold, the process needs more memory
–Not enough memory on the system? Swap out.
If PFF sinks below a threshold, memory can be taken away
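The control rule above fits in a few lines. A sketch, with entirely illustrative threshold values (0.01 and 0.001 are made up for the example; real systems tune these):

```c
/* Per-process counters for the page-fault-frequency policy (sketch). */
typedef struct {
    long faults;        /* page faults this interval        */
    long instructions;  /* instructions executed this interval */
    long frames;        /* frames currently allocated       */
} proc_t;

/* Grow the allocation when faulting too often, shrink it when the
   process clearly has headroom. Thresholds are illustrative only. */
static void pff_adjust(proc_t *p) {
    double pff = (double)p->faults / (double)p->instructions;
    if (pff > 0.01)
        p->frames++;        /* above upper threshold: needs more memory */
    else if (pff < 0.001)
        p->frames--;        /* below lower threshold: take memory away  */
}
```

If every runnable process is above the upper threshold and no free frames remain, the only remedy is the one the slide names: swap a process out.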

16 Working Sets and Page Fault Rates
[Figure omitted: page fault rate over time; the rate spikes at transitions between working sets and stays low while the working set is stable]

17 Other Issues: Program Structure
int data[128][128];
Each row is stored in one page
Program 1
–for (j = 0; j < 128; j++) for (i = 0; i < 128; i++) data[i][j] = 0;
–128 x 128 = 16,384 page faults
Program 2
–for (i = 0; i < 128; i++) for (j = 0; j < 128; j++) data[i][j] = 0;
–128 page faults

18 Dynamic Memory Management: Contiguous Memory Allocation, Part II
Notice that the O/S kernel can manage memory in a fairly trivial way:
–All memory allocations are in units of "pages"
–And pages can be anywhere in memory… so a simple free list is the only data structure needed
But for variable-sized objects, we need a heap:
–Used for all dynamic memory allocations: malloc/free in C, new/delete in C++, new/garbage collection in Java
–Also used for managing kernel memory
The heap is a very large array allocated by the OS and managed by the program

19 Allocation and deallocation
What happens when you call:
–int *p = (int *)malloc(2500*sizeof(int)); The allocator slices a chunk off the heap and gives it to the program
–free(p); The deallocator puts the allocated space back on a free list
Simplest implementation:
–Allocation: increment a pointer on every allocation
–Deallocation: no-op
–Problem: lots of fragmentation
–This is essentially a FIFO, or first-fit, heap
[Figure omitted: heap with free memory above the current free position; each allocation advances the pointer]
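The "simplest implementation" above is a bump-pointer allocator. A minimal sketch over a fixed array standing in for the heap (`bump_alloc`/`bump_free` are illustrative names, not a real API):

```c
#include <stddef.h>

static char heap[4096];     /* stand-in for the heap region */
static size_t cursor = 0;   /* current free position        */

/* Allocation: round up for 8-byte alignment and advance the cursor. */
static void *bump_alloc(size_t n) {
    n = (n + 7) & ~(size_t)7;
    if (cursor + n > sizeof heap)
        return NULL;        /* heap exhausted */
    void *p = &heap[cursor];
    cursor += n;
    return p;
}

/* Deallocation: a no-op — this is exactly why the scheme fragments:
   freed space is simply never reused. */
static void bump_free(void *p) { (void)p; }
```

This is the fastest possible allocator, and the worst possible one for long-running programs, which is why the rest of the lecture is about free lists.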

20 Memory allocation goals
Minimize space
–Should not waste space; minimize fragmentation
Minimize time
–As fast as possible; minimize system calls
Maximize locality
–Minimize page faults and cache misses
And many more
Proven: it is impossible to construct an "always good" memory allocator

21 Memory Allocator
What the allocator has to do:
–Maintain a free list, and grant memory to requests
–Ideal: no fragmentation and no wasted time
What the allocator cannot do:
–Control the order of memory requests and frees
–A bad placement cannot be revoked
Main challenge: avoid fragmentation
[Figure omitted: a heap with allocated regions a and b and free gaps between them; where should a malloc(20) go?]

22 Impossibility Results
Optimal memory allocation is NP-complete for general computation
Given any allocation algorithm, there exist streams of allocation and deallocation requests that defeat the allocator and cause extreme fragmentation

23 Best Fit
Allocate the minimum-size free block that can satisfy the request
Data structure:
–List of free blocks
–Each block has a size, and a pointer to the next free block
Algorithm:
–Scan the list for the best fit
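The scan described above is a single pass that remembers the smallest block that still fits. A minimal sketch using the free-list node from the slide (size plus next pointer):

```c
#include <stddef.h>

/* Free-list node: size of the block and a pointer to the next free block. */
struct block {
    size_t size;
    struct block *next;
};

/* Best fit: scan the whole list, keep the smallest block whose size
   still satisfies the request. Returns NULL if nothing is big enough. */
static struct block *best_fit(struct block *head, size_t request) {
    struct block *best = NULL;
    for (struct block *b = head; b != NULL; b = b->next)
        if (b->size >= request && (best == NULL || b->size < best->size))
            best = b;
    return best;
}
```

Note the cost hiding in this loop: unlike first fit, best fit must traverse the entire list on every allocation (unless the list is kept sorted or binned by size).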

24 Best Fit gone wrong
Simple bad case: allocate n, m (m < n) in alternating order, free all the m's, then try to allocate an m+1
Example:
–We have 100 bytes of free memory
–Request sequence: 19, 21, 19, 21, 19
–Free sequence: 19, 19, 19
–Wasted space: 57 bytes! (three 19-byte holes, none of which can hold a 20-byte request)

25 A simple scheme
Each memory chunk has a signature before and after
–The signature is an int
–+ve implies a free chunk
–-ve implies the chunk is currently in use
–The magnitude of the signature is the chunk's size
So the smallest chunk is 3 elements:
–One signature on each side, and one element for holding the data
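The layout above can be written down concretely. A minimal sketch, treating the heap as an array of ints; `put_chunk`, `chunk_free`, and `chunk_size` are illustrative helpers, not part of any real allocator API:

```c
#include <stddef.h>

/* Write one chunk: an identical signature before and after the data.
   Positive = free, negative = in use; |signature| = total size in ints
   (including the two signature slots themselves). */
static void put_chunk(int *heap, size_t start, int size_in_ints, int in_use) {
    int sig = in_use ? -size_in_ints : size_in_ints;
    heap[start] = sig;                      /* signature before */
    heap[start + size_in_ints - 1] = sig;   /* signature after  */
}

static int chunk_free(const int *heap, size_t start) {
    return heap[start] > 0;
}

static int chunk_size(const int *heap, size_t start) {
    return heap[start] > 0 ? heap[start] : -heap[start];
}
```

The trailing signature is what makes free-time coalescing cheap: from any chunk boundary you can read the size and status of both neighbors in O(1).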

26 Which chunk to allocate?
Maintain a list of free chunks
–Binning, doubly linked lists, etc.
Use best fit or any other strategy to choose a chunk
–For example: binning with best-fit
What if the allocated chunk is much bigger than the request?
–Internal fragmentation
–Solution: split chunks
–Don't split unless both resulting chunks are above a minimum size
What if there is no big-enough free chunk?
–sbrk (changes the segment size) or mmap
–Possible page fault

27 What happens on free?
Identify the size of the chunk returned by the user
Change the sign on both signatures (make them +ve)
Combine adjacent free chunks into a bigger chunk
–Worst case: there is a free chunk both before and after
–Recalculate the size of the new free chunk
–Update the signatures
–Don't really need to erase the old signatures
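Here is the free path in code, for the one-sided case (merging with a free right neighbor only). A sketch under stated assumptions: the heap is an array of ints with signatures as above, and a negative sentinel signature terminates the heap so the neighbor check never runs off the end.

```c
#include <stddef.h>

/* Free the in-use chunk starting at 'start': flip its signatures
   positive, then coalesce with a free right neighbor by rewriting the
   outer signatures with the combined size. The stale inner signatures
   need not be erased — they are simply never consulted again. */
static void free_chunk(int *heap, size_t start) {
    int size = -heap[start];            /* in-use chunks are negative */
    heap[start] = size;
    heap[start + size - 1] = size;

    size_t right = start + (size_t)size;
    if (heap[right] > 0) {              /* right neighbor is free: merge */
        int merged = size + heap[right];
        heap[start] = merged;
        heap[start + (size_t)merged - 1] = merged;
    }
}
```

The symmetric left-neighbor merge reads the trailing signature at `heap[start - 1]`; the slide's worst case (free chunks on both sides) just performs both merges.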

28 Example
Initially one chunk; on malloc, split it and make the signs negative
p = malloc(2 * sizeof(int));
[Figure omitted: heap before and after the split]

29 Example
q gets 4 words, although it requested only 3
p = malloc(2 * sizeof(int));
q = malloc(3 * sizeof(int));
[Figure omitted: heap after both allocations]

30 Design features
Which free chunk should service a request?
–Ideally avoid fragmentation… but that requires future knowledge
Split free chunks to satisfy smaller requests
–Avoids internal fragmentation
Coalesce free blocks to form larger chunks
–Avoids external fragmentation

31 Buddy-Block Scheme
Invented by Donald Knuth; very simple
Idea: work with memory regions whose sizes are all powers of 2 times some "smallest" size b
–i.e., sizes of the form b·2^k
Round each request up to the form b·2^k

32 Buddy-Block Scheme
[Figure omitted: a region repeatedly split in half into power-of-2 "buddy" blocks]

33 Buddy-Block Scheme
Keep a free list for each block size (each k)
–When freeing an object, combine it with adjacent free regions if this will result in a double-sized free object
Basic actions on an allocation request:
–If the request is a close fit to a region on the free list, allocate that region
–If the request is less than half the size of a region on the free list, split the next larger size of region in half
–If the request is larger than any region, double the size of the heap (this puts a new larger region on the free list)
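Two small computations do most of the buddy scheme's work: rounding a request up to the b·2^k grid, and locating a block's buddy. A sketch (helper names are illustrative):

```c
#include <stddef.h>

/* Smallest b * 2^k that is >= n (assumes b > 0). */
static size_t round_up_pow2(size_t b, size_t n) {
    size_t sz = b;
    while (sz < n)
        sz *= 2;
    return sz;
}

/* Because all blocks are power-of-2 sized and aligned, a block at
   offset 'off' of size 'sz' has its buddy at off XOR sz — so the
   coalescing check on free is a single XOR plus a free-list lookup. */
static size_t buddy_of(size_t off, size_t sz) {
    return off ^ sz;
}
```

This XOR trick is the reason the scheme is "very simple": splitting, freeing, and merging never need to search for neighbors, they compute them.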

34 How to get more space?
In Unix, the system call sbrk()
Used by malloc if the heap needs to be expanded
Notice that the heap only grows on "one side"

/* add nbytes of valid virtual address space */
void *get_free_space(unsigned nbytes)
{
    void *p = sbrk(nbytes);
    if (p == (void *)-1)   /* sbrk returns (void *)-1 on failure, not NULL */
        error("virtual memory exhausted");
    return p;
}

[Figure omitted: address space layout with code and data at the bottom, the heap growing up, and the stack growing down]

35 Summary
Thrashing: a process is busy swapping pages in and out
–A process will thrash if its working set doesn't fit in memory
–Then we need to swap out a process
Working set:
–The set of pages touched by a process recently
–"Recently" means the last Δ references or time units
Dynamic memory allocation
–Has the same problems as contiguous memory allocation
–First-fit, best-fit, and worst-fit all still suffer from fragmentation
–The buddy scheme and binning with best-fit are simple strategies