Virtual Memory CSE451 Andrew Whitaker.

Problem: Physical Memory Scarcity
Q: Which is bigger: a 64-bit address space, or the number of hydrogen atoms in a star?
A: It's the star.
2^64 bytes in an address space
~10^57 hydrogen atoms in a star
57 * log 10 > 64 * log 2
But the fact that we have to ask is significant!

Solution: Virtual Memory
Physical memory stores a subset of the virtual address space; the rest is stored on disk.
Main memory acts as a page cache.
Implemented transparently by the OS and hardware: the application can't tell which pages are in memory.
(figure: virtual address space mapped onto physical memory)

How Does Virtual Memory Work?
Page table entry layout: V (1 bit), R (1 bit), M (1 bit), prot (2 bits), page frame number (20 bits).
A page table entry contains a valid bit, which says whether the mapping is valid.
If the valid bit is not set, the system raises a page fault, which behaves like an (involuntary) system call.
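The field layout above can be sketched as bit manipulation. The exact bit positions used here are an assumption for illustration (the slide only gives the field widths), not any real architecture's PTE format:

```java
// Sketch of a 25-bit PTE with the fields above: V, R, M (1 bit each),
// prot (2 bits), page frame number (20 bits). Bit positions are assumed.
class PageTableEntry {
    static boolean valid(int pte)      { return (pte >>> 24 & 1) == 1; }
    static boolean referenced(int pte) { return (pte >>> 23 & 1) == 1; }
    static boolean modified(int pte)   { return (pte >>> 22 & 1) == 1; }
    static int prot(int pte)           { return pte >>> 20 & 0x3; }
    static int frameNumber(int pte)    { return pte & 0xFFFFF; }

    // Pack the fields into a single word.
    static int make(boolean v, boolean r, boolean m, int prot, int frame) {
        return (v ? 1 << 24 : 0) | (r ? 1 << 23 : 0) | (m ? 1 << 22 : 0)
             | (prot & 0x3) << 20 | (frame & 0xFFFFF);
    }
}
```

On a fault, the hardware hands the OS a PTE like this; the valid bit is the first thing the handler checks.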

What Happens on a Page Fault?
Hardware raises an exception, and the OS page fault handler runs.
First, make sure the address is valid.
Second, verify the access type: you are not allowed to write to a read-only page!
If the access is legal, allocate a new page frame. (What happens if physical memory is scarce? Stay tuned!)
The OS reads the page in from disk; the process/thread is blocked during this operation.
Once the read completes, the OS updates the PTE and resumes the process/thread.
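The handler steps above can be sketched as a toy simulation. Everything here (the maps standing in for the page table and the disk, the bump-pointer frame allocator) is hypothetical scaffolding for illustration, not a real OS interface:

```java
import java.util.HashMap;
import java.util.Map;

// Toy simulation of the fault-handling steps above.
class FaultHandlerSketch {
    Map<Integer, Integer> pageTable = new HashMap<>(); // page -> frame (valid mappings)
    Map<Integer, int[]> disk = new HashMap<>();        // page -> on-disk contents
    int nextFreeFrame = 0;                             // stand-in frame allocator

    // Returns the frame holding the page, faulting it in if needed.
    int access(int page, boolean isWrite, boolean pageIsReadOnly) {
        Integer frame = pageTable.get(page);
        if (frame != null) return frame;          // valid mapping: no fault

        // Page fault path:
        if (!disk.containsKey(page))              // 1. is the address valid?
            throw new IllegalArgumentException("invalid address");
        if (isWrite && pageIsReadOnly)            // 2. is the access type legal?
            throw new IllegalStateException("write to read-only page");
        frame = nextFreeFrame++;                  // 3. allocate a page frame
        int[] contents = disk.get(page);          // 4. read the page from disk
        // (a real OS would block the thread during the read, then resume it)
        pageTable.put(page, frame);               // 5. update the PTE and resume
        return frame;
    }
}
```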

Process Startup
Two options for new processes:
Eagerly bring in pages, or
Lazily fault in pages, called demand paging.
Most OSes prefer demand paging, because it doesn't require guessing or maintaining history.
A slightly smarter approach is clustering: bring in the faulting page and several subsequent pages.

Dealing With Memory Scarcity
Physical memory is over-allocated, so we can run out!
The OS needs a page replacement strategy.
Before diving into particular strategies, let's look at some theory…

Locality of Reference
Programs tend to reuse data and instructions they have used recently.
These "hot" items are a small fraction of total memory.
Rule of thumb: 90% of execution time is spent in 10% of the code.
This is what makes caching work!

The Working Set Model
The working set represents those pages currently in use by a program.
The working set contents change over time, and so does the size of the working set.
To get reasonable performance, a program must be allocated enough pages for its working set.

A hypothetical web server working set
(figure: requests per second of throughput vs. number of page frames allocated to the process)
The curve is not linear! Performance decays rapidly without enough memory.

Thrashing
Programs with an allocation beneath their working set will thrash.
Each page fault evicts a "hot" page; that page will be needed soon, so it takes a page fault, and the cycle repeats.
Net result: no work gets done; all time is devoted to paging operations.

Over-allocation
Giving a program more than its working set does very little good.
Page eviction strategies take advantage of this.

Generalizing to Multiple Processes
Let W be the sum of the working sets of all active processes, and let M be the amount of physical memory.
If W > M, the system as a whole will thrash; no page replacement policy will work :-(
If W <= M, the system might perform well.
Key issue: is the page replacement policy smart enough to identify working sets?

Belady's Algorithm
Evict the page that won't be used for the longest time in the future.
This page is probably not in the working set (if it is in the working set, we're thrashing).
This is optimal: it minimizes the number of page faults.
Major problem: this requires a crystal ball; we can't know the future memory reference sequence.
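Although no real OS can run Belady's algorithm online, it is easy to simulate offline against a recorded reference string, which is how it serves as a yardstick for other policies. A minimal sketch:

```java
import java.util.ArrayList;
import java.util.List;

// Fault counter for Belady's optimal policy: on eviction, pick the
// resident page whose next use lies farthest in the future (or never).
class Belady {
    static int faults(int[] refs, int numFrames) {
        List<Integer> frames = new ArrayList<>();
        int faults = 0;
        for (int i = 0; i < refs.length; i++) {
            if (frames.contains(refs[i])) continue;   // hit
            faults++;
            if (frames.size() < numFrames) { frames.add(refs[i]); continue; }
            // Evict the page whose next reference is farthest away.
            int victim = 0, farthest = -1;
            for (int j = 0; j < frames.size(); j++) {
                int next = Integer.MAX_VALUE;         // "never used again"
                for (int k = i + 1; k < refs.length; k++)
                    if (refs[k] == frames.get(j)) { next = k; break; }
                if (next > farthest) { farthest = next; victim = j; }
            }
            frames.set(victim, refs[i]);
        }
        return faults;
    }
}
```

The O(n) lookahead per eviction is exactly the "crystal ball" a real OS lacks: the simulator gets to read the whole reference string in advance.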

Temporal Locality
Assume the past is a decent indicator of the future. How good are these algorithms?
LIFO: the newest page is kicked out.
FIFO: the oldest page is kicked out.
Random: a random page is kicked out.
LRU: the least recently used page is kicked out.

Paging Algorithms
LIFO is horrendous. Random is also pretty bad. FIFO is mediocre. LRU is pretty good.
VAX/VMS used a form of FIFO because of hardware limitations.
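One way to compare these policies is to count faults by simulation on the same reference string. A sketch of FIFO and LRU fault counters:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;

// Fault counters for FIFO (evict the oldest-loaded page) and
// LRU (evict the least-recently-used page).
class ReplacementSim {
    static int fifoFaults(int[] refs, int numFrames) {
        Deque<Integer> frames = new ArrayDeque<>();   // front = oldest-loaded
        int faults = 0;
        for (int page : refs) {
            if (frames.contains(page)) continue;      // hit: load order unchanged
            faults++;
            if (frames.size() == numFrames) frames.removeFirst();
            frames.addLast(page);
        }
        return faults;
    }

    static int lruFaults(int[] refs, int numFrames) {
        LinkedHashSet<Integer> frames = new LinkedHashSet<>(); // front = least recent
        int faults = 0;
        for (int page : refs) {
            if (frames.remove(page)) {                // hit: move to most-recent end
                frames.add(page);
                continue;
            }
            faults++;
            if (frames.size() == numFrames)
                frames.remove(frames.iterator().next()); // evict least recently used
            frames.add(page);
        }
        return faults;
    }
}
```

Note that the rankings above are about typical workloads; on a particular reference string, any one policy can come out ahead of another.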

Implementing LRU: Approach #1
One (bad) approach: on each memory reference:
  long timeStamp = System.currentTimeMillis();
  sortedList.insert(pageFrameNumber, timeStamp);
Problem: this is too inefficient. It requires a time stamp plus a data structure manipulation on every memory operation, and it is far too complex for hardware.

Making LRU Efficient
Use hardware support, and trade off accuracy for speed: it suffices to find a "pretty old" page.
The reference (R) bit in the page table entry is set by the hardware when the page is accessed, and can be cleared by the OS.
Note: we don't know the order in which the reference bits were set, so our LRU estimate will definitely be approximate.

Approach #2: LRU Approximation with Reference Bits
For each page, maintain a set of reference bits; let's call it a reference byte.
Periodically, shift the hardware reference bit into the highest-order bit of the reference byte.
Suppose the reference byte was 10101010. If the hardware bit was set, the new reference byte becomes 11010101.
The frame with the lowest value is the LRU page.
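The periodic update described above is a one-line shift. A sketch of the aging step for a single frame:

```java
// Aging update for the 8-bit reference byte described above: shift the
// byte right by one and insert the hardware reference bit at the top.
class Aging {
    static int age(int refByte, boolean hwBitSet) {
        int shifted = (refByte & 0xFF) >>> 1;
        return hwBitSet ? shifted | 0x80 : shifted;
    }
}
```

A recently referenced page accumulates high-order 1s and thus a large value; a page untouched for eight intervals decays to zero.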

Analyzing Reference Bits
Pro: does not impose overhead on every memory reference, and the interval rate can be configured.
Con: scanning all page frames can still be inefficient. E.g., 4 GB of memory with 4 KB pages => 1 million page frames.

Approach #3: LRU Clock
Use only a single bit per page frame; basically, this is a degenerate form of reference bits.
On page eviction, scan through the list of reference bits:
If the value is zero, replace this page.
If the value is one, set the value to zero and keep scanning.
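A minimal sketch of this scan, assuming the reference bits live in a plain array rather than in real page table entries:

```java
// Minimal clock replacement: a circular scan over the frames' reference
// bits; a set bit buys the page one more pass around the clock.
class ClockReplacer {
    int[] pages;          // pages[i] = page resident in frame i
    boolean[] refBit;     // one reference bit per frame
    int hand = 0;         // clock hand position

    ClockReplacer(int[] pages, boolean[] refBit) {
        this.pages = pages;
        this.refBit = refBit;
    }

    // Returns the index of the frame whose page is evicted.
    int evict() {
        while (true) {
            if (!refBit[hand]) {              // unreferenced: evict this one
                int victim = hand;
                hand = (hand + 1) % pages.length;
                return victim;
            }
            refBit[hand] = false;             // referenced: clear bit, move on
            hand = (hand + 1) % pages.length;
        }
    }
}
```

The loop always terminates: even if every bit is set, the hand clears them all on its first lap and evicts on the second.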

Why "Clock"?
Typically implemented with a circular queue.
(figure: page frames arranged in a circle, each with its reference bit)

Analyzing Clock
Pro: very low overhead. It only runs when a page needs to be evicted, and it takes the first page that hasn't been referenced.
Con: it isn't very accurate (one measly bit!), and it degenerates into FIFO if all reference bits are set.
Pro: but the algorithm is self-regulating. If there is a lot of memory pressure, the clock runs more often (and its information is more up-to-date).

When Does LRU Do Badly?
LRU performs poorly when there is little temporal locality, e.g., a sequential scan of pages: 1 2 3 4 5 6 7 8
Example: many database workloads, such as:
  SELECT * FROM Employees WHERE Salary < 25000
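The sequential-scan worst case is easy to demonstrate with an LRU fault counter: cycling through even one more distinct page than there are frames makes every single access a fault, because LRU always evicts exactly the page the scan will need next.

```java
import java.util.LinkedHashSet;

// LRU fault counter, used here to show the sequential-scan worst case.
class ScanDemo {
    static int lruFaults(int[] refs, int numFrames) {
        LinkedHashSet<Integer> frames = new LinkedHashSet<>(); // front = least recent
        int faults = 0;
        for (int page : refs) {
            if (frames.remove(page)) { frames.add(page); continue; } // hit
            faults++;
            if (frames.size() == numFrames)
                frames.remove(frames.iterator().next());             // evict LRU
            frames.add(page);
        }
        return faults;
    }
}
```

Scanning pages 1..5 repeatedly with only 4 frames gives a 100% fault rate; this is why scan-heavy workloads such as the database query above defeat LRU.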