Memory Management

Introduction To improve both the utilization of the CPU and the speed of its response to users, the computer must keep several processes in memory. Many memory management schemes exist (for example, paging and segmentation), reflecting various approaches, and the effectiveness of each depends on the situation. The selection of a memory management scheme for a system depends on many factors, especially on the hardware design of the system, and each algorithm requires its own hardware support.

Memory consists of a large array of words or bytes, each with its own address. The CPU fetches instructions from memory according to the value of the program counter. These instructions may cause additional loading from and storing to specific memory addresses. A typical instruction-execution cycle, for example, first fetches an instruction from memory. The instruction is then decoded and may cause operands to be fetched from memory. After the instruction has been executed on the operands, results may be stored back in memory. Thus, this procedure generates the sequence of memory addresses used by the running program.

Functions of memory management
– Keeping track of the status of each memory location: whether it is allocated or free.
– Determining the allocation policy for memory.
– Allocating memory.
– Deallocating memory.

Requirements of memory management
– Relocation
– Protection
– Sharing
– Logical organization
– Physical organization

Address Binding Usually, a program resides on a disk as a binary executable file. The program must be brought into memory and placed within a process for it to be executed. Depending on the memory management in use, the process may be moved between disk and memory during its execution. The collection of processes on the disk that are waiting to be brought into memory for execution forms the input queue. The normal procedure is to select one of the processes in the input queue and to load that process into memory. As the process is executed, it accesses instructions and data from memory. Eventually, the process terminates, and its memory space is declared available.

In most cases, a user program will go through several steps before being executed, and addresses may be represented in different ways during these steps. Addresses in the source program are generally symbolic. A compiler will typically bind these symbolic addresses to relocatable addresses. The linkage editor or loader will in turn bind these relocatable addresses to absolute addresses. Each binding is a mapping from one address space to another.

Binding of Instructions and Data to Memory Address binding of instructions and data to memory addresses can be done at three different stages:
– Compile time: If it is known at compile time where the process will reside in memory, then absolute code can be generated. For example, if the user process resides starting at location R, then the generated compiler code will start at that location and extend up from there. If the starting location changes, it will be necessary to recompile this code.
– Load time: If it is not known at compile time where the process will reside in memory, then the compiler must generate relocatable code. In this case, final binding is delayed until load time. If the starting address changes, we need only reload the user code to incorporate this changed value.
– Execution time: If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time. This requires hardware support for address maps (e.g., base and limit registers).

Logical Versus Physical Address Space An address generated by the CPU is commonly referred to as a logical address (or virtual address), whereas an address seen by the memory unit (that is, the one loaded into the memory-address register of the memory) is commonly referred to as a physical address. The compile-time and load-time address-binding methods generate identical logical and physical addresses. However, the execution-time address-binding scheme results in differing logical and physical addresses. The set of all logical addresses generated by a program is a logical-address space; the set of all physical addresses corresponding to these logical addresses is a physical-address space. Thus, in the execution-time address-binding scheme, the logical- and physical-address spaces differ.

The run-time mapping from virtual to physical addresses is done by a hardware device called the memory-management unit (MMU).

Memory-Management Unit (MMU) In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory. The user program deals with logical addresses; it never sees the real physical addresses. The memory-mapping hardware converts logical addresses into physical addresses (in the case of execution-time binding).

Dynamic relocation using a relocation register The base register is now called a relocation register. The value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
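This mapping can be sketched in a few lines of Python (the base value 14000 and the function name are illustrative assumptions, not from the slides):

```python
# Minimal sketch of dynamic relocation: the MMU adds the value in the
# relocation (base) register to every logical address the CPU generates.
RELOCATION_REGISTER = 14000  # assumed base address, for illustration only

def mmu_translate(logical_address: int) -> int:
    """Map a CPU-generated logical address to a physical address."""
    return logical_address + RELOCATION_REGISTER

# A logical address of 346 maps to physical address 14346.
print(mmu_translate(346))  # 14346
```

The user program only ever manipulates values in the range starting at 0; the relocation hardware makes the physical placement invisible to it.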

Dynamic loading Ordinarily, the entire program and data of a process must be in physical memory for the process to execute, so the size of a process is limited to the size of physical memory. To obtain better memory-space utilization, we can use dynamic loading. With dynamic loading, a routine is not loaded until it is called. All routines are kept on disk in a relocatable load format. The main program is loaded into memory and is executed. When a routine needs to call another routine, the calling routine first checks to see whether the other routine has been loaded. If not, the relocatable linking loader is called to load the desired routine into memory and to update the program's address tables to reflect this change. Then, control is passed to the newly loaded routine. The advantage of dynamic loading is that an unused routine is never loaded. This method is particularly useful when large amounts of code are needed to handle infrequently occurring cases.

Overlays The idea of overlays is to keep in memory only those instructions and data that are needed at any given time. When other instructions are needed, they are loaded into space occupied previously by instructions that are no longer needed.

Example: consider a two-pass assembler. During pass 1, it constructs a symbol table; then, during pass 2, it generates machine-language code. We may be able to partition such an assembler into pass 1 code, pass 2 code, the symbol table, and common support routines used by both pass 1 and pass 2. Assume that the sizes of these components are as follows:
Pass 1: 70 KB
Pass 2: 80 KB
Symbol table: 20 KB
Common routines: 30 KB

To load everything at once, we would require 200 KB of memory. If only 150 KB is available, we cannot run our process. However, notice that pass 1 and pass 2 do not need to be in memory at the same time. We thus define two overlays:
– Overlay A is the symbol table, common routines, and pass 1, and
– overlay B is the symbol table, common routines, and pass 2.
We add an overlay driver (10 KB) and start with overlay A in memory. When we finish pass 1, we jump to the overlay driver, which reads overlay B into memory, overwriting overlay A, and then transfers control to pass 2. Overlay A needs only 120 KB, whereas overlay B needs 130 KB; with the 10 KB driver resident, at most 140 KB is in use at once. We can now run our assembler in the 150 KB of memory.
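The overlay arithmetic above can be checked directly (all sizes in KB, taken from the example):

```python
# Component sizes (KB) from the two-pass assembler example.
pass1, pass2, symtab, common, driver = 70, 80, 20, 30, 10

overlay_a = symtab + common + pass1   # 120 KB: symbol table + common + pass 1
overlay_b = symtab + common + pass2   # 130 KB: symbol table + common + pass 2

# The driver stays resident alongside whichever overlay is currently loaded,
# so the peak demand is the driver plus the larger overlay.
peak = driver + max(overlay_a, overlay_b)  # 140 KB, which fits in 150 KB
print(overlay_a, overlay_b, peak)
```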

As in dynamic loading, overlays do not require any special support from the operating system. They can be implemented completely by the user with simple file structures, reading from the files into memory and then jumping to that memory and executing the newly read instructions.

Swapping A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. For example, assume a multiprogramming environment with a round-robin CPU-scheduling algorithm. When a quantum expires, the memory manager starts to swap out the process that just finished and to swap in another process to the memory space that has been freed. This variant of swapping is sometimes called roll out, roll in. Normally, a process that is swapped out will be swapped back into the same memory space that it occupied previously. This restriction is dictated by the method of address binding. If binding is done at assembly or load time, then the process cannot be moved to a different location. If execution-time binding is being used, then a process can be swapped into a different memory space, because the physical addresses are computed during execution time. The system maintains a ready queue consisting of all processes whose memory images are on the backing store or in memory and are ready to run.
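The roll-out, roll-in cycle can be sketched as a toy model (the single resident slot and the process names are assumptions; real systems hold many processes in memory):

```python
from collections import deque

ready = deque(["P1", "P2", "P3"])    # ready queue; images start on backing store
backing_store = set(ready)
memory_slot = None                   # toy model: one resident process at a time

def quantum_expires():
    """Roll out the process whose quantum expired, roll in the next ready one."""
    global memory_slot
    if memory_slot is not None:
        backing_store.add(memory_slot)   # swap out to the backing store
        ready.append(memory_slot)        # it goes to the back of the ready queue
    memory_slot = ready.popleft()        # swap in the next process
    backing_store.discard(memory_slot)

quantum_expires()
print(memory_slot)  # P1
quantum_expires()
print(memory_slot)  # P2 resident; P1 rolled out to the backing store
```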

Schematic View of Swapping

Contiguous Memory Allocation Before discussing memory allocation, we must discuss the issue of memory protection: protecting the operating system from user processes, and protecting user processes from one another. We can provide this protection by using a relocation register together with a limit register. The relocation register contains the value of the smallest physical address; the limit register contains the range of logical addresses (for example, a limit value of 74600). With relocation and limit registers, each logical address must be less than the value in the limit register; the MMU maps the logical address dynamically by adding the value in the relocation register. This mapped address is sent to memory.
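A sketch of the combined check-and-map hardware (the limit value 74600 comes from the slide; the base value and the trap's representation as an exception are assumptions for illustration):

```python
LIMIT_REGISTER = 74600        # range of legal logical addresses (from the slide)
RELOCATION_REGISTER = 100000  # assumed base value, for illustration only

def protected_translate(logical_address: int) -> int:
    """MMU mapping with protection: trap if the address is out of range."""
    if not 0 <= logical_address < LIMIT_REGISTER:
        # In hardware this is a trap to the operating system.
        raise MemoryError("addressing error: trap to operating system")
    return logical_address + RELOCATION_REGISTER

print(protected_translate(346))      # 100346
try:
    protected_translate(80000)       # beyond the limit register
except MemoryError as e:
    print(e)
```

Because every user-generated address passes through this check, a process can never touch memory outside its own partition.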

Hardware Support for Relocation and Limit Registers

Memory allocation One of the simplest methods for memory allocation is to divide memory into several fixed-sized partitions. Each partition may contain exactly one process. Thus, the degree of multiprogramming is bound by the number of partitions. In this multiple-partition method, when a partition is free, a process is selected from the input queue and is loaded into the free partition. When the process terminates, the partition becomes available for another process.

Memory management techniques
Single partition
Multiple partitions (MFT and MVT)
– Fixed allocation or partitioning (allocation policies), supporting the system view:
First fit (the first block large enough to hold the process)
Best fit (search all of memory and find the smallest block large enough for the process)
Worst fit (find the largest block to hold the process)
– Variable partitioning (dynamic), supporting the user view (contiguous allocation)
Paging
Segmentation
Virtual memory

Memory Allocation The first-fit, best-fit, and worst-fit strategies are the most common ones used to select a free hole from the set of available holes.
First fit: Allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or where the previous first-fit search ended. We can stop searching as soon as we find a free hole that is large enough.
Best fit: Allocate the smallest hole that is big enough. We must search the entire list, unless the list is kept ordered by size. This strategy produces the smallest leftover hole.
Worst fit: Allocate the largest hole. Again, we must search the entire list, unless it is sorted by size. This strategy produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach.
First fit and best fit are better than worst fit in terms of speed and storage utilization.

Example: fixed partitions. Memory holds the OS followed by three partitions of 300 MB, 200 MB, and 400 MB, containing processes P1 (300 MB), P2 (200 MB), and P3 (400 MB), respectively.

Given memory partitions of 100K, 500K, 200K, 300K, and 600K (in that order), place processes of 212K, 417K, 112K, and 426K (in that order) using each of the first-fit, best-fit, and worst-fit strategies.
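A sketch that works the exercise with all three strategies (it assumes variable partitions, i.e. a chosen hole is split and the leftover remains available; the helper names are illustrative):

```python
def place(holes, requests, choose):
    """Allocate each request into a hole picked by `choose`; split the hole."""
    holes = list(holes)
    placement = []
    for req in requests:
        candidates = [i for i, h in enumerate(holes) if h >= req]
        if not candidates:
            placement.append(None)        # must wait: no hole is large enough
            continue
        i = choose(candidates, holes)
        placement.append(holes[i])        # record the size of the chosen hole
        holes[i] -= req                   # leftover stays as a smaller hole
    return placement

holes = [100, 500, 200, 300, 600]
reqs = [212, 417, 112, 426]

first = place(holes, reqs, lambda c, h: c[0])
best  = place(holes, reqs, lambda c, h: min(c, key=lambda i: h[i]))
worst = place(holes, reqs, lambda c, h: max(c, key=lambda i: h[i]))

print(first)  # [500, 600, 288, None] -> 426K must wait
print(best)   # [300, 500, 200, 600]  -> every process is placed
print(worst)  # [600, 500, 388, None] -> 426K must wait
```

In this instance only best fit places all four processes, which illustrates why no single strategy dominates in general.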

Internal Fragmentation Internal fragmentation is wasted space inside a partition, arising because the block of data loaded is smaller than the partition: the leftover memory within the partition is allocated but not being used. Fixed allocation suffers from internal fragmentation.
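Using the fixed partitions from the earlier example, the wasted space is easy to total up (the process sizes here are assumed purely for illustration):

```python
partitions = [300, 200, 400]   # MB, from the fixed-partition example
processes  = [212, 180, 390]   # MB, assumed process sizes for illustration

# Internal fragmentation: the space inside each partition that its process
# does not use, yet that cannot be allocated to any other process.
internal = [part - proc for part, proc in zip(partitions, processes)]
print(internal, sum(internal))  # [88, 20, 10] 118
```

Here 118 MB of memory is allocated but unusable, even though it could have held another small process.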

Dynamic partition Example: contiguous allocation of processes

External Fragmentation External fragmentation: total memory space exists to satisfy a request, but it is not contiguous. Dynamic partitioning suffers from external fragmentation. External fragmentation can be reduced by compaction:
– Shuffle memory contents to place all free memory together in one large block.
– Compaction is possible only if relocation is dynamic, and it is done at execution time.

Example: after dynamic allocation, some free space is available, but it is not contiguous.
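Compaction can be sketched as sliding every allocated block toward low memory so the free space coalesces into one hole (the block layout and sizes are an assumed example):

```python
# Each allocated block is (name, start, size); holes lie between the blocks.
memory_size = 1000
blocks = [("P1", 0, 300), ("P2", 450, 200), ("P3", 800, 150)]

def compact(blocks):
    """Slide blocks down to low memory, leaving one contiguous hole at the top."""
    compacted, next_free = [], 0
    for name, _, size in blocks:               # relocate each block in order
        compacted.append((name, next_free, size))
        next_free += size
    return compacted, next_free                # next_free = start of the hole

new_blocks, hole_start = compact(blocks)
print(new_blocks)               # [('P1', 0, 300), ('P2', 300, 200), ('P3', 500, 150)]
print(memory_size - hole_start) # one free hole of 350
```

Note that every block's start address changes, which is why compaction is only possible with execution-time (dynamic) relocation.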

Memory management techniques
– Multiprogramming with a fixed number of tasks (MFT): In this technique, memory is divided into several fixed-size partitions. Each partition may contain exactly one process. Thus, the degree of multiprogramming is bound by the number of partitions. In this multiple-partition method, when a partition is free, a process is selected from the input queue and is loaded into the free partition. When the process terminates, the partition becomes available for another process.
– Multiprogramming with a variable number of tasks (MVT): Partitions are created dynamically, so each process is allocated exactly as much contiguous memory as it requires.
Memory utilization techniques
– Dynamic loading
– Dynamic linking
– Overlays
– Swapping