Memory Management
Memory management The main purpose of a computer system is to execute programs. These programs, together with the data they access, must be in main memory (at least partially) during execution. To improve both the utilization of the CPU and the speed of its response to users, the computer must keep several processes in memory. Memory consists of a large array of words or bytes, each with its own address. The CPU fetches instructions from memory according to the value of the program counter. These instructions may cause additional loading from and storing to specific memory addresses.

Logical Versus Physical Address Space An address generated by the CPU is commonly referred to as a logical address, whereas an address seen by the memory unit (that is, the one loaded into the memory-address register of the memory) is commonly referred to as a physical address.

Memory Management Unit (MMU) The run-time mapping from logical (or virtual) to physical addresses is done by a hardware device called the memory-management unit (MMU); that is, the MMU converts logical addresses to physical addresses. A simple MMU scheme is dynamic relocation using a relocation register.

Memory Management Unit (MMU) The base register is now called a relocation register. The value in the relocation register is added to every address generated by a user process at the time it is sent to memory. For example, if the base is at 14000, then an attempt by the user to address location 0 is dynamically relocated to location 14000; similarly, an attempt by the user to address location 346 is mapped to location 14346. User programs never see the real physical addresses: the program can create a pointer to location 346, but user programs deal only with logical addresses.
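
The relocation-register mapping described above can be sketched in Python. This is a minimal illustration, not real MMU hardware: the register values (base 14000, limit 4096) and the function name translate are assumptions chosen for the example.

```python
# A minimal sketch of dynamic relocation with a relocation (base) register.
# A limit register is added so out-of-range logical addresses trap, as the
# MMU would signal an addressing error to the operating system.

RELOCATION_REGISTER = 14000   # start of the process's partition (illustrative)
LIMIT_REGISTER = 4096         # size of the logical address space (illustrative)

def translate(logical_address):
    """Map a logical address to a physical address, as the MMU would."""
    if logical_address < 0 or logical_address >= LIMIT_REGISTER:
        raise MemoryError("trap: addressing error")  # OS handles the trap
    return logical_address + RELOCATION_REGISTER

# Logical address 0 maps to physical 14000; logical 346 maps to 14346.
```

Note that the addition happens on every memory reference, which is why it is done in hardware rather than by the operating system.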

Address Binding Binding is the mapping of addresses from one address space to another. Classically, the binding of instructions and data to memory addresses can be done at any step along the way:

Compile Time If you know at compile time where the process will reside in memory, then absolute code can be generated. For example, if you know that a user process will reside starting at location R, then the generated code will start at that location and extend up from there. If, at some later time, the starting location changes, then it will be necessary to recompile this code. This kind of binding is also known as static binding.

Load Time If it is not known at compile time where the process will reside in memory, then the compiler must generate relocatable code. In this case, final binding is delayed until load time.

Execution Time If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time. This kind of binding is also known as dynamic binding.

Swapping A process must be in memory to be executed. A process, however, can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. For example, assume a multiprogramming environment with a round-robin CPU-scheduling algorithm. When a quantum expires, the memory manager will start to swap out the process that just finished and to swap another process into the memory space that has been freed, as shown in the figure below.

Swapping In the meantime, the CPU scheduler will allocate a time slice to some other process in memory. When each process finishes its quantum, it will be swapped with another process. Ideally, the memory manager can swap processes fast enough that some processes will be in memory, ready to execute, when the CPU scheduler wants to reschedule the CPU. In addition, the quantum must be large enough to allow reasonable amounts of computing to be done between swaps.

Contiguous Memory Allocation The main memory must accommodate both the operating system and the various user processes. We therefore need to allocate parts of the main memory in the most efficient way possible. One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions.

Fixed-Sized Partitions Scheme – It divides the memory into several fixed-sized partitions. – Each partition may contain exactly one process. – The degree of multiprogramming is bound by the number of partitions. – In this multiple-partition method, when a partition is free, a process is selected from the input queue and is loaded into the free partition. When the process terminates, the partition becomes available for another process. – In the fixed-partition scheme, the operating system keeps a table indicating which parts of memory are available and which are occupied. – This method is no longer in use.

Generalization of Fixed Size Partitions Scheme – We have a set of holes of various sizes scattered throughout memory. – When a process arrives and needs memory, the system searches the set for a hole that is large enough for this process. – If the hole is too large, it is split into two parts. – One part is allocated to the arriving process; the other is returned to the set of holes. – When a process terminates, it releases its block of memory, which is then placed back in the set of holes. – If the new hole is adjacent to other holes, these adjacent holes are merged to form one larger hole.

Generalization of Fixed Size Partitions Scheme Following are some common strategies used to select a free hole from the set of available holes: First Fit – Allocate the first hole that is big enough. – Searching can start either at the beginning of the set of holes or where the previous first-fit search ended. We can stop searching as soon as we find a free hole that is large enough.

Generalization of Fixed Size Partitions Scheme Best Fit – Allocate the smallest hole that is big enough. – We must search the entire list, unless the list is ordered by size. – This strategy produces the smallest leftover hole. Worst Fit – Allocate the largest hole. – Again, we must search the entire list, unless it is sorted by size. – This strategy produces the largest leftover hole.
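
The three placement strategies can be sketched as follows. The hole list, the (start, size) representation, and the example request size are illustrative assumptions; a real allocator would also split the chosen hole and coalesce freed blocks.

```python
# A sketch of first fit, best fit, and worst fit over a free-hole list.
# Each hole is a (start, size) pair; sizes and addresses are hypothetical.

def first_fit(holes, size):
    """Allocate the first hole that is big enough."""
    for hole in holes:
        if hole[1] >= size:
            return hole
    return None  # no hole is large enough

def best_fit(holes, size):
    """Allocate the smallest hole that is big enough (smallest leftover)."""
    candidates = [h for h in holes if h[1] >= size]
    return min(candidates, key=lambda h: h[1], default=None)

def worst_fit(holes, size):
    """Allocate the largest hole (largest leftover)."""
    candidates = [h for h in holes if h[1] >= size]
    return max(candidates, key=lambda h: h[1], default=None)

holes = [(0, 100), (200, 500), (900, 200)]
# For a request of 150 units: first fit and worst fit choose (200, 500),
# while best fit chooses (900, 200), leaving the smallest leftover hole.
```

Note that best fit and worst fit scan the whole list, matching the "must search the entire list, unless it is sorted by size" remark above.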

Paging Paging is a memory-management scheme that permits the physical address space of a process to be noncontiguous; this is its main advantage. It is one of the ways in which the operating system can store and retrieve data from secondary storage for use in main memory. In the paging memory-management scheme, the operating system retrieves data from secondary storage in same-size blocks called pages.

Paging Paging also avoids the problem of fitting memory chunks of varying sizes onto the backing store. When code or data residing in main memory must be swapped out, contiguous space might not be found on the backing store; because paging uses noncontiguous allocation, no such problem arises.

Basic Method The basic method for implementing paging involves breaking physical memory into fixed-sized blocks called frames and breaking logical memory into blocks of the same size called pages, as shown in the figure below. When a process is to be executed, its pages are loaded into any available memory frames from the backing store. The backing store is divided into fixed-sized blocks that are of the same size as the memory frames.

Paging model of logical and physical memory

Paging The hardware support for paging is illustrated in the figure below. Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d). The page number is used as an index into a page table. The page table contains the base address of each page in physical memory. This base address is combined with the page offset to define the physical memory address that is sent to the memory unit.

Paging Remember that the page size (like the frame size) is defined by the hardware. The size of a page is typically a power of 2, varying between 512 bytes and 16 MB per page, depending on the computer architecture.
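
Because the page size is a power of 2, the split into page number and offset is just a shift and a mask. The sketch below assumes a 4096-byte page and a small example page table; both values are illustrative.

```python
# Splitting a logical address into (page number, offset) and translating it
# through a page table, for a power-of-two page size. Values are illustrative.

PAGE_SIZE = 4096                          # must be a power of two
OFFSET_BITS = PAGE_SIZE.bit_length() - 1  # 12 offset bits for a 4 KB page

def split(logical_address):
    page_number = logical_address >> OFFSET_BITS   # high-order bits: index into page table
    offset = logical_address & (PAGE_SIZE - 1)     # low-order bits: position within the page
    return page_number, offset

def to_physical(logical_address, page_table):
    p, d = split(logical_address)
    frame = page_table[p]          # frame number looked up in the page table
    return frame * PAGE_SIZE + d   # frame base combined with the page offset

# With page_table = {0: 5, 1: 2}, logical address 4100 is page 1, offset 4,
# so it maps to frame 2: physical address 2 * 4096 + 4 = 8196.
```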

Page Table A page table is the data structure used by a virtual memory system in a computer operating system to store the mapping between virtual addresses and physical addresses. Virtual addresses are those unique to the accessing process; physical addresses are those unique to the RAM.

Frame Table The user program views the memory as one single space, containing only this one program. In fact, the user program is scattered throughout the physical memory, which also holds other programs. Since the operating system manages the physical memory, it must be aware of the allocation details of physical memory, i.e., which frames are allocated, which frames are available, how many total frames there are, and so on. This information is generally kept in a data structure called a frame table. The frame table has one entry for each physical page frame, indicating whether the frame is free or allocated and, if allocated, to which page of which process.
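
A frame table of the kind described above can be sketched as a simple array with one entry per frame. The frame count and the (pid, page) entry format are assumptions for this illustration.

```python
# A sketch of a frame table: one entry per physical frame. A None entry
# means the frame is free; otherwise the entry records (pid, page number)
# of the owner. The number of frames here is hypothetical.

NUM_FRAMES = 8
frame_table = [None] * NUM_FRAMES   # all frames start out free

def allocate_frame(pid, page):
    """Find a free frame, record which page of which process owns it,
    and return the frame number."""
    for frame, entry in enumerate(frame_table):
        if entry is None:
            frame_table[frame] = (pid, page)
            return frame
    raise MemoryError("no free frames")

def free_frame(frame):
    """Mark a frame as available again."""
    frame_table[frame] = None
```

The operating system consults this table when loading a process's pages and updates it when the process terminates.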

Page Fault In computer storage technology, a page is a fixed-length block of memory that is used as a unit of transfer between physical memory and external storage such as a disk. A page fault is an interrupt (or exception) raised by the hardware when a program accesses a page that is mapped in its address space but not loaded in physical memory. The hardware that detects this situation in a processor is the MMU. The exception-handling software that handles the page fault is generally part of the operating system. The operating system tries to handle the page fault by making the required page accessible at a location in physical memory, or it terminates the program if the access is illegal.
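
The two outcomes of a page fault can be sketched as follows. This is a simplification under stated assumptions: the page table is a dictionary, None marks a mapped-but-not-resident page, and load_from_backing_store is a hypothetical stand-in for the disk I/O and frame selection a real operating system would perform.

```python
# A sketch of page-fault handling. A page-table entry is either a frame
# number (page resident) or None (page mapped but not in physical memory).
# Pages absent from the table entirely are illegal accesses.

page_table = {0: 7, 1: None}   # page 1 will fault on first access

def load_from_backing_store(page):
    """Hypothetical stand-in for reading the page from disk into a free
    frame; returns the frame the OS chose."""
    return 3

def access(page):
    if page not in page_table:
        # Illegal access: the OS terminates the program.
        raise MemoryError("illegal access")
    frame = page_table[page]
    if frame is None:
        # Page fault: the OS loads the page from the backing store,
        # then updates the page table so the access can be restarted.
        frame = load_from_backing_store(page)
        page_table[page] = frame
    return frame
```

After the first access to page 1, the page table records its frame, so subsequent accesses proceed without faulting.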