Chapter 4 Memory Management 4.1 Basic memory management 4.2 Swapping 4.3 Virtual memory 4.4 Page replacement algorithms 4.5 Modeling page replacement algorithms 4.6 Design issues for paging systems 4.7 Implementation issues 4.8 Segmentation

Memory Management Ideally programmers want memory that is large, fast, and non-volatile. Memory hierarchy: a small amount of fast, expensive memory (cache); some medium-speed, medium-priced main memory; gigabytes of slow, cheap disk storage. The memory manager handles the memory hierarchy.

Basic Memory Management Monoprogramming without Swapping or Paging Three simple ways of organizing memory - an operating system with one user process

Fixed memory partitions. Separate input queues, one per partition: jobs are sorted by size into the queue of the smallest partition that can hold them, so a small job may wait even while a larger partition sits free. Single input queue: when a partition becomes free, either take the job nearest the head that fits, or search the queue for the largest job that fits. The latter discriminates against small jobs, so a job that has been skipped over k times may not be passed over again.

Multiprogramming improves CPU utilization. Let n be the degree of multiprogramming and p the probability that a process is waiting for I/O (e.g. 80%). With n independent processes, P[CPU idle] = p^n and CPU utilization = P[CPU busy] = 1 - p^n.

How much memory? Assume 32 MB in total, with the OS taking 16 MB and each user program 4 MB. With 80% I/O wait and n = 4 programs, CPU utilization is 1 - 0.8^4 ≈ 60%. An additional 16 MB lets n grow from 4 to 8, raising utilization to 1 - 0.8^8 ≈ 83%, about 38% better. Another 16 MB gives n = 12 and utilization of 1 - 0.8^12 ≈ 93%, only a 12% further gain. Multiprogramming problems: relocation, solved by base registers; protection, solved by limit registers. Timesharing systems keep processes on disk and bring them into memory dynamically: swapping and virtual memory. A remaining problem: memory compaction.
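As a rough check of these figures, a minimal sketch in C, assuming the simple 1 - p^n model above with p = 0.8:

/* Sketch: CPU utilization under the simple multiprogramming model,
 * utilization = 1 - p^n, with p = probability a process is waiting on I/O. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double p = 0.80;                      /* 80% I/O wait, as on the slide */
    int degrees[] = {4, 8, 12};
    for (int i = 0; i < 3; i++) {
        int n = degrees[i];
        printf("n = %2d  utilization = %.0f%%\n", n, 100.0 * (1.0 - pow(p, n)));
    }
    return 0;
}
/* Prints roughly 59%, 83% and 93% -- the 60/83/93 figures quoted above. */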

Swapping: keep programs on disk Memory allocation changes as processes come into memory and leave memory. Shaded regions are unused memory => fragmentation. Memory compaction: moving processes down to fill in holes.

When the size a process may grow to is known in advance, room can be reserved for it: allocating space for a growing data segment (old way); allocating space for a growing stack and data segment (new way), where the memory requirements per process are known in advance.

Two ways to track memory usage. Processes: A(5), B(6), C(4), D(6), E(3). Part of memory with 5 processes and 3 holes; tick marks show allocation units, shaded regions are free. The corresponding bit map. The corresponding information as a linked list: P = process, H = hole, start block, size, and pointer to the next entry. An improvement is to maintain separate lists of processes and holes, the hole list possibly ordered by size for quick search.

A doubly linked list enables easy compaction. Memory allocation algorithms: First fit: the first hole large enough is chosen. Next fit: like first fit, but each search starts where the previous one ended; simulations show it performs slightly worse than first fit, and it is not suited to an ordered list. Best fit: searches the whole list for the smallest hole that is large enough; it tends to leave many tiny, useless holes. In a hole list ordered by size, the first hole that fits is also the best fit. Worst fit takes the largest hole, leaving the largest leftover; simulations show it is not good either.
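A minimal sketch of first fit over a hole list; the struct hole layout with start, size and next fields is illustrative, not taken from the slides:

/* Sketch of first-fit allocation over a singly linked hole list. */
#include <stddef.h>

struct hole {
    size_t start;        /* first allocation unit of the hole */
    size_t size;         /* size in allocation units */
    struct hole *next;
};

/* Returns the start of the allocated region, or (size_t)-1 if no hole fits.
 * The chosen hole is shrunk in place; an exactly fitting hole is left with
 * size 0 (a real allocator would unlink it from the list). */
size_t first_fit(struct hole *list, size_t request) {
    for (struct hole *h = list; h != NULL; h = h->next) {
        if (h->size >= request) {
            size_t start = h->start;
            h->start += request;
            h->size  -= request;
            return start;
        }
    }
    return (size_t)-1;
}

Best fit would instead scan the whole list and remember the smallest hole that is still large enough.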

Virtual Memory: the function of the MMU. A job may be larger than physical memory. Historically the program was split into overlays; to start running, only the first overlay needed to be in memory.

Paging: the MMU maps virtual pages onto physical page frames. Page size: typically 4 to 64 KB. With the mapping in the figure, mov reg, 0 is translated to physical address 8192, and mov reg, 8192 is translated to physical address 24576.
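A minimal sketch of the translation the MMU performs, assuming 4 KB pages and a tiny illustrative page table that reproduces the two mappings quoted above (the other entries are made up):

/* Sketch of the MMU's virtual-to-physical mapping for 4 KB pages. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u

/* page_table[virtual page] = physical frame; values chosen so that
 * virtual 0 -> 8192 and virtual 8192 -> 24576, as on the slide. */
static const uint32_t page_table[] = {2, 1, 6, 0};

uint32_t translate(uint32_t vaddr) {
    uint32_t vpage  = vaddr / PAGE_SIZE;   /* virtual page number */
    uint32_t offset = vaddr % PAGE_SIZE;   /* offset within the page */
    return page_table[vpage] * PAGE_SIZE + offset;
}

int main(void) {
    printf("%u -> %u\n", 0u,    translate(0));     /* 0    -> 8192  */
    printf("%u -> %u\n", 8192u, translate(8192));  /* 8192 -> 24576 */
    return 0;
}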

Page Tables: internal operation of the MMU with sixteen 4 KB pages.

A 32-bit address with a two-level page table: a top-level page table and second-level page tables. With a 10-bit index at each level, each table has 1024 entries. The advantage of the two-level scheme is that not all page tables need to be kept in memory all the time; in the example only four page tables need to be resident.
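A minimal sketch of the two-level lookup, assuming the usual 10/10/12-bit split; the pte_t and top_level_t types and the lookup interface are illustrative:

/* Sketch of a two-level lookup for a 32-bit address split into
 * 10-bit top index | 10-bit second-level index | 12-bit offset. */
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint32_t frame;      /* physical frame number */
    int present;         /* present/absent bit */
} pte_t;

typedef struct {
    pte_t *tables[1024]; /* second-level page tables (NULL = absent) */
} top_level_t;

/* Returns 1 and fills *paddr on success; 0 means a page fault would occur. */
int lookup(const top_level_t *top, uint32_t vaddr, uint32_t *paddr) {
    uint32_t top_idx = (vaddr >> 22) & 0x3FF;   /* bits 31..22 */
    uint32_t snd_idx = (vaddr >> 12) & 0x3FF;   /* bits 21..12 */
    uint32_t offset  =  vaddr        & 0xFFF;   /* bits 11..0  */

    pte_t *second = top->tables[top_idx];
    if (second == NULL || !second[snd_idx].present)
        return 0;                               /* page fault */
    *paddr = (second[snd_idx].frame << 12) | offset;
    return 1;
}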

Page table entry. Page frame number: its width depends on the size of physical memory. Present/absent: whether the entry is valid. Protection: three bits, R/W/X. Referenced: set when the page is read or written. Modified: set when the page is written (the dirty bit). Caching disabled: used for memory-mapped I/O.

Translation Lookaside Buffers (TLBs) speed up the mapping. Page tables are large and kept in memory, so the TLB, a small associative memory, holds entries only for recently used page frames and is searched associatively on the virtual page number. On a TLB miss, the entry is loaded from the page table, and the modified bit of the evicted TLB entry is copied back into the page table. TLB miss handling was originally done entirely in hardware; later machines often handle it in software.

Inverted Page Tables. The trouble with ordinary page tables, even combined with a TLB, is that for large (e.g. 64-bit) address spaces they become far too large. Why not keep track only of the pages that are actually in memory, one entry per page frame? That table has far fewer entries, and the TLB is loaded from it. The index into the inverted table is a hash of the virtual page number; if the virtual page is not found directly, a chain in the hash table must be searched. This approach is used on 64-bit machines such as IBM and HP workstations.
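A minimal sketch of an inverted-table lookup with hash chaining; the table layout, the toy hash, and the omission of the process id from the key are all simplifying assumptions:

/* Sketch of an inverted page table indexed by a hash of the virtual page,
 * with chaining to resolve collisions. */
#include <stdint.h>

#define NFRAMES 4096u

struct ipt_entry {
    uint64_t vpage;      /* virtual page currently held in this frame */
    int      used;       /* frame holds a valid mapping */
    int      next;       /* next frame in the hash chain, or -1 */
};

struct ipt_entry ipt[NFRAMES];
int hash_anchor[NFRAMES];   /* hash bucket -> first frame in chain; assumed initialised to -1 */

/* Returns the frame holding vpage, or -1 if it is not in memory (page fault). */
int ipt_lookup(uint64_t vpage) {
    unsigned h = (unsigned)(vpage % NFRAMES);   /* toy hash function */
    for (int f = hash_anchor[h]; f != -1; f = ipt[f].next)
        if (ipt[f].used && ipt[f].vpage == vpage)
            return f;
    return -1;
}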

Page Replacement Algorithms. Similar problems exist in CPU caches and web servers: active items are kept while inactive ones stay in the background. A page fault forces a choice of which page must be removed to make room for the incoming page. A modified page must first be written back to disk; an unmodified one can simply be overwritten. It is better not to choose a heavily used page, since it will probably need to be brought back in soon.

Optimal Page Replacement Algorithm. Replace the page that will not be needed until the farthest point in the future. This is optimal, but the future is not known, so it is estimated: simulate a run, record the future page references, and use the result as a benchmark; note that such a run is specific to one program and one data set. In principle each page is labeled with the number of instructions until its next reference: the more instructions, the less chance it will be referenced soon and the better a candidate it is for replacement. Counting instructions is impractical, so time is counted instead.

Not Recently Used (NRU) Replacement Algorithm (adequate performance). Each page has a Referenced bit and a Modified bit, set by the hardware when the page is referenced or modified. The Referenced bit is cleared on every clock interrupt (about 20 msec) to distinguish pages not referenced recently. Pages fall into four classes: 0: not referenced, not modified; 1: not referenced, modified; 2: referenced, not modified; 3: referenced, modified. NRU removes a page at random from the lowest-numbered non-empty class (class 0 if it exists, then class 1, and so on), as sketched below. Rationale: it is better to replace a modified page that has not been referenced recently than a clean page that is in heavy use. A class 1 page was modified in the past and then had its Referenced bit cleared by a clock interrupt.
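A minimal sketch of NRU victim selection using class = 2*R + M; the global R and M arrays and the page count are illustrative:

/* Sketch of NRU: pick a random page from the lowest non-empty class. */
#include <stdlib.h>

#define NPAGES 64

int R[NPAGES];   /* referenced bits (cleared periodically by the clock tick) */
int M[NPAGES];   /* modified bits */

/* Returns the page to evict, or -1 if there are no pages at all. */
int nru_victim(void) {
    for (int cls = 0; cls < 4; cls++) {          /* class 0 first */
        int candidates[NPAGES], n = 0;
        for (int p = 0; p < NPAGES; p++)
            if (2 * R[p] + M[p] == cls)
                candidates[n++] = p;
        if (n > 0)
            return candidates[rand() % n];       /* random page in lowest class */
    }
    return -1;
}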

FIFO Page Replacement Algorithm. Based on the age at which the page was brought in. Maintain a linked list of all pages in the order they came into memory: oldest at the head, most recent at the tail. The page at the head of the list, the oldest one, is replaced. Disadvantage: the page that has been in memory the longest may still be heavily used.

Second Chance Page Replacement Algorithm. Inspect the R bit before throwing out the oldest page. Pages are kept sorted in FIFO order (the number above each page is its time of arrival). If the oldest page has R = 0, it is replaced immediately. If R = 1, the bit is cleared, the page is moved to the tail with the current time stamp (= 20), and the next oldest page is inspected. If all pages have been referenced, page A eventually returns to the head with R = 0 and is replaced, so the algorithm degenerates into FIFO.

The Clock Page Replacement Algorithm. All page frames are placed around a clock face, sorted by age (the same information as the linked list), with the hand at the most recent page (the tail). When a page fault occurs, the hand moves one place clockwise to inspect the oldest page: if R = 0 that page is replaced; otherwise R is cleared, its time is set to the current time, and the hand advances to inspect the next page.
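A minimal sketch of the clock scan, assuming illustrative global R bits and a hand index over a fixed array of frames:

/* Sketch of the clock algorithm: advance the hand until a frame with
 * R == 0 is found, clearing R bits along the way (second chance). */
#define NFRAMES 8

int R[NFRAMES];          /* reference bits */
static int hand = 0;     /* points at the next candidate frame */

/* Returns the frame to reuse for the incoming page. */
int clock_evict(void) {
    for (;;) {
        if (R[hand] == 0) {            /* not recently used: evict it */
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        R[hand] = 0;                   /* give the page a second chance */
        hand = (hand + 1) % NFRAMES;
    }
}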

Least Recently Used (LRU). Assume pages used recently will be used again soon, so throw out the page that has been unused for the longest time. One option is to keep a linked list of pages, most recently used at the front, least recently used at the rear, and update this list on every memory reference! Alternatively, keep a reference counter in each frame table entry, choose the page with the lowest counter value, and periodically zero the counters.

Simulating LRU in Software (1): LRU using an n x n bit matrix for n frames. On every reference to frame k, set all bits of row k to 1 and then clear all bits of column k. At a page fault, the frame whose row has the lowest binary value is the LRU candidate. The example shows pages referenced in the sequence 0, 1, 2, 3, 2, 1, 0, 3, 2, 3.
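A minimal sketch of the matrix method for n = 4 frames; the global M array and function names are illustrative:

/* Sketch of the n x n matrix method for exact LRU with n frames:
 * on a reference to frame k, set row k to all 1s, then clear column k.
 * The frame whose row encodes the smallest number is the LRU frame. */
#include <stdint.h>

#define NFRAMES 4

uint8_t M[NFRAMES][NFRAMES];   /* the bit matrix */

void reference(int k) {
    for (int j = 0; j < NFRAMES; j++) M[k][j] = 1;   /* set row k */
    for (int i = 0; i < NFRAMES; i++) M[i][k] = 0;   /* clear column k */
}

int lru_frame(void) {
    int best = 0;
    unsigned best_val = ~0u;
    for (int i = 0; i < NFRAMES; i++) {
        unsigned val = 0;
        for (int j = 0; j < NFRAMES; j++)
            val = (val << 1) | M[i][j];              /* row as a binary number */
        if (val < best_val) { best_val = val; best = i; }
    }
    return best;
}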

Simulating LRU in Software (2): aging. Counting page references can be approximated by adding up the R bits on every clock tick, but then the window into the past is infinite: nothing is ever forgotten. The approach below uses a window of 8 ticks: each counter is shifted right one bit and the R bit is inserted at the left. The figure shows the R-bit counters for 6 pages over 5 clock ticks, (a)-(e). The LRU candidate is the page with the smallest binary counter value.
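A minimal sketch of the aging update and victim choice, assuming an 8-bit counter per page to match the 8-tick window above:

/* Sketch of the aging approximation to LRU: each clock tick, shift the
 * counter right and OR the R bit into the top bit. The page with the
 * smallest counter is the eviction candidate. */
#include <stdint.h>

#define NPAGES 6

uint8_t age[NPAGES];   /* 8-bit counters, one per page */
int     R[NPAGES];     /* reference bits gathered since the last tick */

void clock_tick(void) {
    for (int p = 0; p < NPAGES; p++) {
        age[p] = (uint8_t)((age[p] >> 1) | (R[p] ? 0x80 : 0x00));
        R[p] = 0;      /* reference bits are cleared every tick */
    }
}

int aging_victim(void) {
    int best = 0;
    for (int p = 1; p < NPAGES; p++)
        if (age[p] < age[best])
            best = p;
    return best;
}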

The Working Set Page Replacement Algorithm (1). The working set is the set of pages used by the k most recent memory references; w(k, t) is the size of the working set at time t.

The working set algorithm (VT = current virtual time, τ = working set window).
Every clock tick: if (R == 1) { R = 0; counter = VT; }   /* record the time of last use */
At a page fault, scan the whole frame table; for each frame:
    age = VT - counter;
    if (R == 0 && age > τ) { evict this frame; stop; }        /* not in the working set */
    if (R == 0 && age <= τ) remember the frame with the greatest age;   /* oldest candidate */
If the scan evicted nothing, evict the remembered frame with R == 0; if every frame has R == 1, evict a random frame.
At each page fault the entire frame table has to be scanned to find the proper page to evict.

WSClock Algorithm

Review of Page Replacement Algorithms

Modeling Page Replacement Algorithms. Belady's anomaly: with FIFO, adding frames can hurt; on some reference strings FIFO has fewer page faults with 3 frames than with 4.
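A minimal sketch that reproduces the anomaly with FIFO on the classic reference string 0 1 2 3 0 1 4 0 1 2 3 4 (the string and frame counts are the standard textbook example, not taken from the slide):

/* Sketch: FIFO on this string causes 9 faults with 3 frames but 10 with 4. */
#include <stdio.h>

int fifo_faults(const int *refs, int nrefs, int nframes) {
    int frames[16], count = 0, next = 0, faults = 0;
    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int j = 0; j < count; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (count < nframes) frames[count++] = refs[i];   /* frame still free */
        else { frames[next] = refs[i]; next = (next + 1) % nframes; }  /* replace oldest */
    }
    return faults;
}

int main(void) {
    int refs[] = {0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4};
    int n = sizeof(refs) / sizeof(refs[0]);
    printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));  /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));  /* 10 */
    return 0;
}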

Stack Algorithms (shown for LRU): state of the memory array M after each item in the reference string is processed.

The Distance String: probability density functions for two hypothetical distance strings.

Computation of the page fault rate from the distance string: the C vector (counts of each distance) and the F vector (page faults as a function of the number of frames).
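A minimal sketch of deriving the F vector from the C vector, assuming the standard relation F[m] = C[m+1] + ... + C[n] + C_infinity for stack algorithms; the array sizes and names are illustrative:

/* Sketch: with m frames, every reference whose distance exceeds m faults. */
#define NPAGES 6

/* C[k] = how many references in the distance string had distance k (1-based);
 * c_inf = references to pages not on the stack at all. */
void fault_vector(const int C[NPAGES + 1], int c_inf, int F[NPAGES + 1]) {
    int tail = c_inf;
    for (int m = NPAGES; m >= 1; m--) {
        F[m] = tail;          /* faults with m frames */
        tail += C[m];         /* a distance-m reference also faults with fewer than m frames */
    }
}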

Design Issues for Paging Systems Local versus Global Allocation Policies (1) Original configuration Local page replacement Global page replacement

Local versus Global Allocation Policies (2) Page fault rate as a function of the number of page frames assigned

Load Control. Despite good designs, the system may still thrash, e.g. when the PFF algorithm indicates that some processes need more memory but no process needs less. Solution: reduce the number of processes competing for memory; swap one or more out to disk and divide up the frames they held; reconsider the degree of multiprogramming.

Page Size (1): small page size. Advantages: less internal fragmentation; better fit for various data structures and code sections; less unused program in memory. Disadvantages: programs need many pages, hence larger page tables.

Page Size (2): overhead due to the page table and internal fragmentation. Where s = average process size in bytes, p = page size in bytes, and e = bytes per page table entry, overhead = s*e/p (page table space) + p/2 (internal fragmentation), which is minimized when p = sqrt(2*s*e).
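As a worked check of the formula, a minimal sketch with assumed example values s = 1 MB and e = 8 bytes (not figures from the slide), which gives p = 4 KB:

/* Sketch: numeric check of the optimum page size p = sqrt(2*s*e). */
#include <math.h>
#include <stdio.h>

int main(void) {
    double s = 1024.0 * 1024.0;   /* assumed average process size, bytes */
    double e = 8.0;               /* assumed bytes per page table entry  */
    double p = sqrt(2.0 * s * e); /* overhead s*e/p + p/2 is minimal here */
    printf("optimal page size = %.0f bytes\n", p);  /* prints 4096 */
    return 0;
}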

Separate Instruction and Data Spaces One address space Separate I and D spaces

Two processes sharing same program

Cleaning Policy. A background process, the paging daemon, periodically inspects the state of memory. When too few frames are free, it selects pages to evict using a replacement algorithm. It can use the same circular list (clock) as the regular page replacement algorithm, but with a different pointer.

Implementation Issues: operating system involvement with paging. The OS is involved with paging at four times. Process creation: determine the program size, create the page table. Process execution: reset the MMU for the new process, flush the TLB. Page fault time: determine the virtual address causing the fault, swap the target page out and the needed page in. Process termination: release the page table and the pages.

Page Fault Handling (1). The hardware traps to the kernel; the general registers are saved. The OS determines which virtual page is needed, checks the validity of the address, and picks a page frame. If the selected frame is dirty, it is written to disk.

Page Fault Handling (2). The OS schedules the new page to be brought in from disk. The page tables are updated. The faulting instruction is backed up to the state it had when it began. The faulting process is scheduled, its registers are restored, and the program continues.

Instruction Backup: an instruction causing a page fault.

Locking Pages in Memory. Virtual memory and I/O occasionally interact: a process issues a call to read from a device into a buffer; while it waits for the I/O, another process starts up and has a page fault; the first process's buffer may then be chosen to be paged out. Some pages therefore need to be locked in memory, exempting them from being chosen as target pages.

Backing Store (a) Paging to static swap area (b) Backing up pages dynamically

Separation of Policy and Mechanism Page fault handling with an external pager

Segmentation (1) One-dimensional address space with growing tables One table may bump into another

Segmentation (2): allows each table to grow or shrink independently.

Comparison of paging and segmentation

Implementation of Pure Segmentation (a)-(d) Development of fragmentation (e) Removal of the fragmentation by compaction

MULTICS: Segmentation with Paging. Each segment descriptor points to that segment's page table.

A 34 bit Multics virtual address

Conversion of a MULTICS address into a main memory address

Simplified version of the MULTICS TLB Existence of 2 page sizes makes actual TLB more complicated

Pentium: selector chooses a segment descriptor

Pentium code segment descriptor Data segments differ slightly

Conversion of the (selector, offset) into physical address

Segmentation with Paging: Pentium Mapping of a linear address onto a physical address

Segmentation with Paging: Pentium (5). Protection on the Pentium (privilege levels).