Operating Systems Lecture Notes Memory Management Matthew Dailey Some material © Silberschatz, Galvin, and Gagne, 2002.


Today’s Lecture Why memory management? –We want to give the user the illusion of a large, contiguous address space –But we also want to maximize the degree of multiprogramming and utilization of physical memory Topics: –Memory address binding –Logical vs. physical address spaces –Allocation –Memory paging –Segmentation Readings: Silberschatz et al., chapter 9

Bird’s eye view Symbolic names are bound to logical addresses, and logical addresses are bound to physical addresses. But physical memory is shared among many processes, which is why we need relocation, allocation, paging, and segmentation.

Typical programming steps C source (test1.c, test2.c) -> compiler (gcc -c test1.c; gcc -c test2.c) -> object code (test1.o, test2.o) -> linker, together with static libraries (gcc -o test test1.o test2.o -lmylib -lm) -> executable (test) -> loader, together with dynamic libraries (mylib, the math library, the standard C libraries) -> memory image (./test).

Address Binding for User Programs Programs use symbolic addresses, e.g. “int i;” The compiler usually transforms symbolic addresses into relocatable addresses (e.g. “14 bytes from the beginning of the module”). The linkage editor or the loader transforms relocatable addresses into absolute (logical) addresses. Does the OS allow programs to run anywhere in memory? –YES: The OS must transform the absolute logical addresses into physical addresses. –NO: Logical and physical addresses are the same.

OS-supported dynamic binding Dynamic linking of system libraries (DLLs or shared libraries): –At link time, only a stub for each library routine is linked –The rest of the library sits on disk until needed –When called, the stub checks whether the library has been loaded, and loads the DLL if necessary Reduces memory usage by sharing library code between processes Requires OS support for protection
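The stub idea can be mimicked at user level with the POSIX dlopen interface. The sketch below is only an analogy to what the dynamic linker does; the library name libmylib.so and the function compute are hypothetical (compile with -ldl on Linux).

/* Sketch of on-demand ("lazy") loading of a shared library, in the spirit of
 * the stub described above.  libmylib.so and compute() are hypothetical. */
#include <stdio.h>
#include <stdlib.h>
#include <dlfcn.h>                    /* POSIX dynamic loading API */

static double (*compute_fn)(double);  /* NULL until the library is loaded */

double compute_stub(double x)
{
    if (compute_fn == NULL) {         /* first call: load the library now */
        void *handle = dlopen("libmylib.so", RTLD_LAZY);
        if (handle == NULL) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            exit(EXIT_FAILURE);
        }
        compute_fn = (double (*)(double)) dlsym(handle, "compute");
        if (compute_fn == NULL) {
            fprintf(stderr, "dlsym failed: %s\n", dlerror());
            exit(EXIT_FAILURE);
        }
    }
    return compute_fn(x);             /* later calls go straight through */
}

int main(void)
{
    printf("%f\n", compute_stub(2.0));
    return 0;
}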

Logical vs. Physical Addresses When running a user program, the CPU deals with logical addresses. The memory controller sees a stream of physical addresses. The memory management unit (MMU) does the conversion. (Figure: in the logical view, each of processes 1-3 sees its own address space; in the physical view, the OS and the three processes occupy separate regions of main memory.)

Swapping and address binding

Sometimes the OS needs to swap out a process to make room for others. When it is time to swap the process back in, does it need to go back to the same location in physical memory? –Yes, if compile- or load-time binding [e.g. embedded systems] –No, if dynamic binding is allowed [e.g. modern general-purpose systems]

Relocation CPU generates logical addresses in the range 0 to max. The MMU adds a base (relocation) register (value = r) to each logical address to get the physical address. The physical address space is r + 0 to r + max. E.g., with r = 14000, a logical address of 346 gets mapped to the physical address 14346.
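A minimal C sketch of the relocation-register scheme, using the r = 14000 example above. The limit value and the trap-by-exiting behavior are simplifications for illustration; the limit check anticipates the protection scheme discussed on the next slides.

/* Minimal model of an MMU with relocation and limit registers. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

typedef struct {
    uint32_t base;   /* relocation register: start of process in physical memory */
    uint32_t limit;  /* limit register: size of the logical address space         */
} mmu_regs_t;

/* Translate a logical address; trap (here, just exit) on an out-of-range address. */
uint32_t translate(const mmu_regs_t *mmu, uint32_t logical)
{
    if (logical >= mmu->limit) {
        fprintf(stderr, "addressing error: trap to OS\n");
        exit(EXIT_FAILURE);
    }
    return mmu->base + logical;      /* relocation: add the base register */
}

int main(void)
{
    mmu_regs_t mmu = { .base = 14000, .limit = 16384 };  /* limit is arbitrary here */
    printf("%u\n", translate(&mmu, 346));                /* prints 14346 */
    return 0;
}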

Relocation Register Example

Allocation When processes are created or swapped in, we must allocate space for them. The relocation register scheme only supports contiguous allocation: –OS stays resident in low memory –To protect other processes, use a limit register with the relocation register

Hardware support for contiguous allocation

Contiguous allocation Basic idea: keep a list of available memory holes. When a new process arrives, find a hole large enough for it. (Figure: a sequence of memory maps with the OS in low memory; processes 2, 5, 8, 9, and 10 are allocated and released over time, leaving holes of varying sizes.)

Contiguous Allocation Strategies First fit: allocate from the first hole large enough –Circular version: start the search from the last-allocated position Best fit: allocate from the smallest hole possible Worst fit: allocate from the largest hole possible Simulations show first fit and best fit are usually better than worst fit, but there is no clear winner between them.
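A sketch of first-fit and best-fit selection over a free-hole list. The list management itself (splitting a hole on allocation, coalescing on release) is omitted, and the hole sizes in main are made up for illustration.

/* Hole selection for contiguous allocation: first fit vs. best fit. */
#include <stdio.h>
#include <stddef.h>

typedef struct hole {
    size_t start;        /* start address of the hole  */
    size_t size;         /* size of the hole in bytes  */
    struct hole *next;   /* next hole in the free list */
} hole_t;

/* First fit: return the first hole large enough for the request. */
hole_t *first_fit(hole_t *free_list, size_t request)
{
    for (hole_t *h = free_list; h != NULL; h = h->next)
        if (h->size >= request)
            return h;
    return NULL;         /* no hole large enough */
}

/* Best fit: return the smallest hole that still satisfies the request. */
hole_t *best_fit(hole_t *free_list, size_t request)
{
    hole_t *best = NULL;
    for (hole_t *h = free_list; h != NULL; h = h->next)
        if (h->size >= request && (best == NULL || h->size < best->size))
            best = h;
    return best;
}

int main(void)
{
    hole_t h3 = { 1000, 300, NULL };
    hole_t h2 = {  700, 200, &h3 };
    hole_t h1 = {  100, 500, &h2 };
    printf("first fit: hole at %zu, best fit: hole at %zu\n",
           first_fit(&h1, 150)->start,   /* 100: first hole >= 150 bytes   */
           best_fit(&h1, 150)->start);   /* 700: smallest hole >= 150 bytes */
    return 0;
}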

Dealing with fragmentation in allocation Internal fragmentation: –Usually we round off a process’ usage into blocks. –The space wasted by this round off is internal fragmentation. External fragmentation: –Loading and releasing memory blocks can leave small holes spread over physical memory. –It is possible to have enough memory to satisfy a request even though it is broken up into too-small pieces. –This wasted space is called external fragmentation.

Dealing with external fragmentation Compaction: –Shuffle processes around in physical memory to create larger holes –Only works if processes are relocatable (dynamic address binding) –Time consuming and complex (e.g., pending I/O into a process' buffers must complete before the process can be moved) Better solutions: –Paging: break the logical address space into fixed-size pages and allow pages to be placed arbitrarily –Segmentation: split the logical address space into a small set of variable-sized segments

Dealing with external fragmentation with paging The most common non-contiguous allocation method. –Physical memory is broken into fixed-size frames. –Logical memory is broken into same-size pages. –Logical addresses now consist of a page number and an offset. –Page number p indexes a page table. –Page table entry p contains the frame number f where page p resides in physical memory. –Requires hardware support. Can we still have fragmentation? Internal: yes; external: no.
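A sketch of single-level paged address translation, assuming 32-bit logical addresses and 4 KB pages (12-bit offset). The page-table contents are invented for the demo.

/* Split a logical address into page number and offset, then look up the frame. */
#include <stdio.h>
#include <stdint.h>

#define OFFSET_BITS 12u
#define PAGE_SIZE   (1u << OFFSET_BITS)          /* 4096 bytes */

static uint32_t page_table[1u << 4];             /* toy table: only 16 entries */

uint32_t translate(uint32_t logical)
{
    uint32_t p = logical >> OFFSET_BITS;         /* page number */
    uint32_t d = logical & (PAGE_SIZE - 1);      /* page offset */
    uint32_t f = page_table[p];                  /* frame number from page table */
    return (f << OFFSET_BITS) | d;               /* physical address */
}

int main(void)
{
    page_table[2] = 7;                            /* pretend page 2 lives in frame 7 */
    uint32_t logical = (2u << OFFSET_BITS) + 100; /* page 2, offset 100 */
    printf("0x%x -> 0x%x\n", logical, translate(logical));  /* frame 7, offset 100 */
    return 0;
}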

Paging address translation architecture

Logical and physical memory

Example 4-bit addresses 2 bits for page number 2 bits for offset 4-byte pages

How to select page size? Advantages of big pages: –Page table size is smaller –Frame allocation accounting is simpler Advantages of small pages: –Internal fragmentation is reduced
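A back-of-the-envelope comparison of the trade-off, assuming a 32-bit logical address space, 4-byte page-table entries, and roughly half a page of internal fragmentation per process on average:

/* Page-table size vs. internal fragmentation for two page sizes. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t address_space = 1ull << 32;         /* 4 GB logical address space */
    uint32_t sizes[] = { 1u << 12, 1u << 22 };   /* 4 KB vs. 4 MB pages */

    for (int i = 0; i < 2; i++) {
        uint64_t entries = address_space / sizes[i];
        printf("page size %8u B: %8llu page-table entries (%llu KB table), "
               "~%u B wasted per process\n",
               sizes[i],
               (unsigned long long) entries,
               (unsigned long long) (entries * 4 / 1024),
               sizes[i] / 2);                    /* average internal fragmentation */
    }
    return 0;
}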

Process loading Split the process’ logical address space into pages. Find enough free frames for the process’ pages, swapping out an old process if necessary. Copy the process’ pages into their designated frames. Fill in the page table.

Process loading

Page Table Implementation The page table maps logical memory page numbers to physical memory frame numbers. Additional information: protection bits –Readable, writable, valid/invalid, etc. If very small, the page table can be stored in fast registers Usually, the page table is too big: –32-bit addresses, 4 KB pages: more than a million entries (about 4 MB with 4-byte entries) –Needs to be stored in main memory –Each logical memory access now requires two physical memory accesses –A page table this large, allocated contiguously and mostly unused by a small process, is wasteful

Two-level page tables

To reduce the size of the page table, use a hierarchical structure. The page number field is itself split into an outer page number p1 (an index into the outer page table) and an inner page number p2 (an offset within the page of the inner page table). For 32-bit addresses with 4 KB pages: p1 is 10 bits, p2 is 10 bits, and the page offset d is 12 bits. Now one logical memory access requires 3 physical memory accesses!
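A sketch of the two-level translation with the 10/10/12 split above. The outer table here holds C pointers to inner tables, whereas real hardware stores frame numbers, so treat this only as an illustration of the indexing.

/* Two-level address translation: outer index p1, inner index p2, offset d. */
#include <stdio.h>
#include <stdint.h>

#define D_BITS  12u
#define P_BITS  10u

static uint32_t inner_table[1u << P_BITS];     /* one toy inner page table */
static uint32_t *outer_table[1u << P_BITS];    /* outer page table         */

uint32_t translate(uint32_t logical)
{
    uint32_t p1 = logical >> (P_BITS + D_BITS);               /* top 10 bits     */
    uint32_t p2 = (logical >> D_BITS) & ((1u << P_BITS) - 1); /* next 10 bits    */
    uint32_t d  = logical & ((1u << D_BITS) - 1);             /* low 12 bits     */

    uint32_t *inner = outer_table[p1];     /* memory access 1: outer page table */
    uint32_t frame  = inner[p2];           /* memory access 2: inner page table */
    return (frame << D_BITS) | d;          /* access 3 would fetch the data     */
}

int main(void)
{
    outer_table[1] = inner_table;          /* outer entry 1 points at the inner table */
    inner_table[3] = 42;                   /* page (p1=1, p2=3) lives in frame 42     */
    uint32_t logical = (1u << 22) | (3u << 12) | 0x0AB;
    printf("0x%x -> 0x%x\n", logical, translate(logical));
    return 0;
}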

Page table caching Storing page tables in main memory doubles or triples the effective memory access time. Solution: the Translation Lookaside Buffer (TLB) An associative, high-speed, on-chip memory (cache) Each entry pairs a tag (page number) with a value (frame number) The search for the value corresponding to a tag is done in parallel across all entries


Paging with a TLB

TLB Usage The TLB stores a few page number / frame number pairs When the CPU generates a logical address: –The TLB is searched in parallel for the page number tag –If present (TLB hit), the frame number is used immediately to access memory –If not present (TLB miss), the frame number is retrieved from the page table in memory, the retrieved frame number is used to access memory, and the TLB is updated with the retrieved page / frame pair –If the TLB is full, an entry has to be selected for replacement The TLB must be flushed and repopulated on each context switch.
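A software model of the TLB lookup just described. Real hardware searches all entries in parallel; the round-robin replacement policy and the table sizes are arbitrary choices for this sketch.

/* TLB hit/miss logic in front of a single-level page table. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 4
#define OFFSET_BITS 12u

typedef struct {
    bool     valid;
    uint32_t page;    /* tag: page number    */
    uint32_t frame;   /* value: frame number */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];
static uint32_t page_table[256];      /* toy single-level page table     */
static unsigned next_victim;          /* trivial round-robin replacement */

uint32_t translate(uint32_t logical)
{
    uint32_t p = logical >> OFFSET_BITS;
    uint32_t d = logical & ((1u << OFFSET_BITS) - 1);

    for (int i = 0; i < TLB_ENTRIES; i++)           /* hardware does this in parallel */
        if (tlb[i].valid && tlb[i].page == p)
            return (tlb[i].frame << OFFSET_BITS) | d;   /* TLB hit */

    uint32_t f = page_table[p];                     /* TLB miss: extra memory access  */
    tlb[next_victim] = (tlb_entry_t){ true, p, f }; /* update TLB, replacing a victim */
    next_victim = (next_victim + 1) % TLB_ENTRIES;
    return (f << OFFSET_BITS) | d;
}

int main(void)
{
    page_table[5] = 9;
    printf("0x%x\n", translate((5u << OFFSET_BITS) | 0x10));  /* miss, then cached */
    printf("0x%x\n", translate((5u << OFFSET_BITS) | 0x20));  /* hit */
    return 0;
}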

Effective access times Average memory access time depends on the TLB hit ratio. Suppose: –TLB search takes 20 ns –Memory accesses take 100 ns –Single-level page table –80% hit rate Effective access time (EAT) = 0.8 x (20 ns + 100 ns) + 0.2 x (20 ns + 100 ns + 100 ns) = 96 ns + 44 ns = 140 ns Paging overhead is 40% (140 ns vs. 100 ns)
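The same calculation, done in code with the numbers above:

/* Worked effective-access-time (EAT) calculation for an 80% TLB hit ratio. */
#include <stdio.h>

int main(void)
{
    double tlb = 20.0, mem = 100.0;           /* ns */
    double hit_ratio = 0.80;

    double hit_time  = tlb + mem;             /* TLB hit: one memory access          */
    double miss_time = tlb + mem + mem;       /* miss: page-table access, then data  */
    double eat = hit_ratio * hit_time + (1.0 - hit_ratio) * miss_time;

    printf("EAT = %.0f ns (overhead %.0f%% over a bare %.0f ns access)\n",
           eat, 100.0 * (eat - mem) / mem, mem);   /* 140 ns, 40% overhead */
    return 0;
}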

Segmentation An alternative way of breaking up the logical address space User programs are broken into logical segments –Main program, function1, global heap, etc. Each process has several variable-sized memory segments Hardware provides base and limit registers for each segment Packing problem simpler (smaller chunks) but external fragmentation still occurs –Solution: combine paging and segmentation
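A sketch of segmented address translation with per-segment base and limit registers; the segment table contents below are invented for illustration.

/* A logical address is a (segment number, offset) pair checked against base/limit. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

typedef struct {
    uint32_t base;    /* where the segment starts in physical memory */
    uint32_t limit;   /* length of the segment                        */
} segment_t;

/* Made-up segment table: e.g. main program, a function, the heap. */
static segment_t segment_table[] = {
    { 1400, 1000 },   /* segment 0 */
    { 6300,  400 },   /* segment 1 */
    { 4300, 1100 },   /* segment 2 */
};

uint32_t translate(uint32_t seg, uint32_t offset)
{
    if (offset >= segment_table[seg].limit) {  /* protection check */
        fprintf(stderr, "addressing error: trap to OS\n");
        exit(EXIT_FAILURE);
    }
    return segment_table[seg].base + offset;
}

int main(void)
{
    printf("%u\n", translate(2, 53));   /* 4300 + 53 = 4353 */
    return 0;
}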

Segmented Address Space

Logical and physical address spaces (Figure: user space vs. physical memory space)

Hardware support for segments

Segmentation example

What have we learned? Address binding schemes –Compile time, link time, load time, dynamic Logical vs. physical address spaces Allocation schemes –Contiguous –Paging Translation Lookaside Buffer (TLB) page table cache –Segmented memory architecture