Memory Management (Ref: Stallings) G. Anuradha

What is memory management? The subdivision of the user portion of main memory to accommodate multiple processes. This subdivision is carried out dynamically by the operating system and is known as memory management.
Memory management terms

Memory management requirements

Memory management requirements – Contd…
Relocation
– Users generally don't know where their program will be placed in main memory
– A process may be swapped back in at a different place
– Memory references (user pointers) must therefore be dealt with
– Generally handled by hardware
Protection
– Prevent processes from interfering with the OS or with other processes
– Often integrated with the relocation mechanism

Memory management requirements – Contd…
Sharing
– Allow several processes to share the same data/programs
Logical organization
– Main memory is organized as a linear, one-dimensional address space of words; secondary memory is organized similarly
– Most programs, however, are organized into modules
– Advantages of the modular approach:
  – Modules can be written and compiled independently
  – Different modules can be given different degrees of protection
  – Sharing can be done at the module level
– Segmentation is the technique that most naturally satisfies these requirements
Physical organization
– Responsibility for transferring data between main memory and secondary memory cannot be assigned to the programmer
– Managing memory/disk transfers is a system responsibility

Memory partitioning
– Fixed partitioning
– Dynamic partitioning
– Simple paging
– Simple segmentation
– Virtual memory paging
– Virtual memory segmentation

Fixed partitioning

Difficulties with equal-size fixed partitions
– A program may be too big to fit into a partition; in such cases overlays can be used
– Main memory utilization is extremely inefficient, leading to internal fragmentation: the block of data loaded is smaller than the partition, so the leftover space inside the partition is wasted

Placement algorithm
– With equal-size partitions, a process can be loaded into any available partition
– With unequal-size partitions there are two possible ways to assign processes to partitions:
  – One process queue per partition
  – A single queue for all partitions

Memory assignment for fixed partitioning

Advantages and disadvantages of the two approaches
One process queue per partition
– Advantage: internal fragmentation is minimized, since each process waits for the smallest partition that can hold it
– Disadvantage: not optimal from the system's point of view; some partitions may stand idle while processes queue for others
Single queue
– Advantages: more flexible, simple, and requires minimal OS software and processing overhead
– Disadvantages (of fixed partitioning in general): the number of partitions limits the number of active processes in the system, and small jobs do not use partition space efficiently
Used in the IBM mainframe operating system OS/MFT
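As an illustration of the single-queue policy (not from the slides), the following C sketch assigns an incoming process to the smallest free partition that can hold it; the partition sizes, process sizes and the place_process helper are all illustrative assumptions.

```c
#include <stdio.h>
#include <stdbool.h>

#define NPART 4

/* Illustrative unequal fixed partitions (sizes in KB) and their occupancy. */
static int  part_size[NPART] = {2, 4, 8, 16};
static bool part_used[NPART] = {false, false, false, false};

/* Single-queue policy: give the process the smallest free partition
 * that is large enough; return the partition index, or -1 if none fits. */
int place_process(int proc_size)
{
    int best = -1;
    for (int i = 0; i < NPART; i++) {
        if (!part_used[i] && part_size[i] >= proc_size &&
            (best == -1 || part_size[i] < part_size[best]))
            best = i;
    }
    if (best != -1)
        part_used[best] = true;
    return best;
}

int main(void)
{
    int demo[] = {3, 7, 2, 20};            /* incoming process sizes in KB */
    for (int i = 0; i < 4; i++)
        printf("process of %2d KB -> partition %d\n",
               demo[i], place_process(demo[i]));
    return 0;
}
```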

Dynamic partitioning
– Partitions are created as programs are loaded
– Avoids internal fragmentation, but must deal with external fragmentation

Effect of dynamic partitioning

Dynamic partitioning
– External fragmentation: as time goes on, more and more small fragments accumulate and effective memory utilization declines
– External fragmentation can be overcome using compaction
– Compaction is time-consuming

Placement algorithms
– Best-fit: chooses the block that is closest in size to the request
– First-fit: scans memory from the beginning and chooses the first available block that is large enough
– Next-fit: scans memory from the location of the last placement and chooses the next available block that is large enough

Which of them is best?
– First-fit: simplest, and usually the best and fastest
– Next-fit: performs slightly worse than first-fit; it tends to allocate from the free block at the end of memory, so the largest block is broken up and compaction is needed more often
– Best-fit: usually the worst performer; it leaves behind many fragments too small to be useful, so compaction must be done more frequently
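The three policies can be sketched in C over a simple table of free holes; this is an illustrative model under assumed data structures (a Hole array), not a real allocator, and splitting or coalescing of holes is omitted.

```c
#include <stdio.h>

typedef struct { int start, size; } Hole;   /* one free block of memory */

/* First fit: scan from the beginning, take the first hole big enough. */
int first_fit(const Hole *h, int n, int req)
{
    for (int i = 0; i < n; i++)
        if (h[i].size >= req) return i;
    return -1;
}

/* Best fit: take the hole whose size is closest to the request. */
int best_fit(const Hole *h, int n, int req)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (h[i].size >= req && (best == -1 || h[i].size < h[best].size))
            best = i;
    return best;
}

/* Next fit: like first fit, but start scanning where the last search stopped. */
int next_fit(const Hole *h, int n, int req, int *last)
{
    for (int k = 0; k < n; k++) {
        int i = (*last + k) % n;
        if (h[i].size >= req) { *last = i; return i; }
    }
    return -1;
}

int main(void)
{
    Hole holes[] = {{0, 8}, {20, 12}, {40, 22}, {80, 18}};
    int last = 0;
    printf("first fit for 16K -> hole %d\n", first_fit(holes, 4, 16));
    printf("best  fit for 16K -> hole %d\n", best_fit(holes, 4, 16));
    printf("next  fit for 16K -> hole %d\n", next_fit(holes, 4, 16, &last));
    return 0;
}
```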

Buddy system
– A compromise that overcomes the drawbacks of both the fixed and dynamic partitioning schemes

Algorithm of the buddy system
– The entire space available for allocation is treated as a single block of size 2^U
– If a request of size s with 2^(U-1) < s <= 2^U is made, the entire block is allocated
– Otherwise the block is split into two buddies, each of size 2^(U-1)
– This splitting continues until the smallest block greater than or equal to s is generated and allocated to the request
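A minimal C sketch of the splitting rule, assuming the whole space is 2^U bytes; the buddy_block_size helper is illustrative, and the free lists, release and coalescing of buddies are omitted.

```c
#include <stdio.h>

/* Size of the block the buddy system would allocate for a request of s
 * bytes out of a total space of 2^u bytes: keep halving the block while
 * half of it would still satisfy the request. */
unsigned long buddy_block_size(unsigned long s, unsigned u)
{
    unsigned long block = 1UL << u;          /* start with the whole space  */
    while (block / 2 >= s && block > 1)      /* can we split and still fit? */
        block /= 2;                          /* split into two buddies      */
    return block;                            /* allocate this block         */
}

int main(void)
{
    /* 1 MB total space (U = 20); a 100 KB request gets a 128 KB block. */
    printf("request 100 KB -> block of %lu KB\n",
           buddy_block_size(100 * 1024UL, 20) / 1024);
    return 0;
}
```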

Example of a buddy system

Tree representation of the buddy system
– A modified version of the buddy system is used for UNIX kernel memory allocation

Relocation
– Not a major problem with fixed-size partitions: it is easy to load a process back into the same partition
– Otherwise the system must deal with a process being loaded into a new location
– Memory addresses may change when the process is loaded into a new partition, or if compaction is used
– To solve this problem a distinction is made among several types of addresses

Relocation – types of address
– Logical address: a reference to a memory location independent of the current assignment of data to memory
– Relative address: an example of a logical address, in which the address is expressed as a location relative to some known point
– Physical address: an actual location in main memory
A hardware mechanism is needed to translate relative addresses into physical main-memory addresses at the time the instruction containing the reference is executed.

Hardware support for relocation

Base/bounds relocation
– Base register: holds the beginning physical address; its value is added to all program addresses
– Bounds register: used to detect accesses beyond the end of the allocated memory; provides protection to the system
– Programs are easy to move in memory: just change the base/bounds registers
– Largely replaced by paging
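A C sketch of the check that the base/bounds hardware performs on every memory reference; the register values and the translate helper are illustrative assumptions, with the bounds treated here as one past the last legal physical address.

```c
#include <stdio.h>
#include <stdlib.h>

/* Translate a relative (logical) address using base and bounds registers:
 * the hardware adds the base and traps if the result lies outside the
 * process's allocated region. */
unsigned translate(unsigned rel, unsigned base, unsigned bounds)
{
    unsigned phys = base + rel;          /* relocation: add base register */
    if (phys >= bounds) {                /* protection: bounds check      */
        fprintf(stderr, "addressing error: %u out of bounds\n", phys);
        exit(EXIT_FAILURE);
    }
    return phys;
}

int main(void)
{
    /* Process loaded at 20000 and 4 KB long (illustrative values). */
    printf("relative 100 -> physical %u\n",
           translate(100, 20000, 20000 + 4096));
    return 0;
}
```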

Paging
– Fixed-size partitioning suffers from internal fragmentation; variable-size (dynamic) partitioning suffers from external fragmentation
– Suppose the process is also divided into small, equal-size chunks called pages
– Memory is divided into chunks of the same size called frames
– A page can then be loaded into any page frame
– There is then no external fragmentation, and only a small amount of internal fragmentation, in the last page of each process

Assignment of Process pages to free frames

Page tables
– When a new process D is brought in, it can be loaded even though there is no contiguous block of memory large enough to hold it
– For this, the OS maintains a page table for each process
– The page table gives the frame location of each page of the process
– Within the program, each logical address consists of a page number and an offset within the page

Page table – contd…
– With simple partitioning, a logical address is the location of a word relative to the beginning of the program, and the processor translates it into a physical address
– With paging, the logical-to-physical address translation is still done by processor hardware
– Logical address = {page number, offset}
– Physical address = {frame number, offset}

Data structures when process D is stored in main memory
Paging is similar to fixed-size partitioning. The differences:
1. The partitions (frames) are quite small
2. A program may occupy more than one partition
3. These partitions need not be contiguous

Computation of logical and physical addresses
– The page size is typically a power of 2 to simplify the paging hardware
– Example (16-bit addresses, 1 KB pages): the relative address 1502 is 0000010111011110 in binary
  – Top 6 bits (000001) = page number, so the page number is 1
  – Bottom 10 bits (0111011110) = offset within the page, in this case 478
– Thus a program can consist of a maximum of 2^6 = 64 pages of 1 KB each
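The 16-bit, 1 KB-page example can be reproduced with a couple of shifts and masks; the frame number used for the physical address below is an assumed value for illustration.

```c
#include <stdio.h>

#define OFFSET_BITS 10                               /* 1 KB pages */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

int main(void)
{
    unsigned logical = 1502;                         /* 16-bit relative address */
    unsigned page    = logical >> OFFSET_BITS;       /* top 6 bits  -> page 1   */
    unsigned offset  = logical & OFFSET_MASK;        /* low 10 bits -> 478      */
    printf("logical %u -> page %u, offset %u\n", logical, page, offset);

    /* If page 1 happened to be held in frame 6, the physical address is: */
    unsigned frame = 6;
    printf("physical address = %u\n", (frame << OFFSET_BITS) | offset);
    return 0;
}
```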

Why is the page size a power of 2?
– The logical addressing scheme is transparent to the programmer, the assembler and the linker
– It is easy to implement a hardware function that performs dynamic address translation at run time
Steps in address translation

Logical to physical address translation using paging

Hardware support
– The OS has its own method for storing page tables
– A pointer to the page table is stored with the other register values in the PCB and is reloaded whenever the process is dispatched
– The page table may be stored in special registers if the number of pages is small
– Otherwise the page table is stored in physical memory, and a special register, the page-table base register, points to it (the problem is the time taken by the extra memory access)

Implementation of the page table
– The page table is kept in main memory
– A page-table base register (PTBR) points to the page table
– A page-table length register (PTLR) indicates the size of the page table
– In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction
– The two-memory-access problem can be solved with a special fast-lookup hardware cache called associative memory, or translation look-aside buffer (TLB)
– Some TLBs store an address-space identifier (ASID) in each TLB entry; it uniquely identifies each process and provides address-space protection for that process

Hardware support – contd…
– The translation look-aside buffer (TLB) stores recently used (page #, frame #) pairs
– The input page # is compared against the stored entries; if a match is found, the corresponding frame # is the output and no extra memory access is needed for the page table
– The comparison is carried out in parallel and is fast
– A TLB normally has 64 to 1,024 entries
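A software model (illustrative only) of the lookup order described above: consult the TLB first and fall back to the in-memory page table on a miss, checking the valid bit there. A real TLB does its comparison in parallel in hardware; the tiny tables and fill policy below are assumptions.

```c
#include <stdio.h>

#define TLB_ENTRIES 4
#define NPAGES      8
#define OFFSET_BITS 10

typedef struct { int page, frame, valid; } TlbEntry;

static TlbEntry tlb[TLB_ENTRIES];                 /* recently used (page, frame) pairs */
static struct { int frame, valid; } page_table[NPAGES] = {
    {3, 1}, {6, 1}, {0, 0}, {9, 1}, {0, 0}, {2, 1}, {0, 0}, {0, 0}
};

/* Translate a logical address; returns -1 if the page is invalid (illegal). */
long translate(unsigned logical)
{
    unsigned page   = logical >> OFFSET_BITS;
    unsigned offset = logical & ((1u << OFFSET_BITS) - 1);

    for (int i = 0; i < TLB_ENTRIES; i++)         /* 1. TLB hit: skip the page table */
        if (tlb[i].valid && tlb[i].page == (int)page)
            return ((long)tlb[i].frame << OFFSET_BITS) | offset;

    if (page >= NPAGES || !page_table[page].valid)
        return -1;                                /* 2. miss + invalid bit -> trap   */

    tlb[0] = (TlbEntry){ (int)page, page_table[page].frame, 1 };  /* crude TLB fill */
    return ((long)page_table[page].frame << OFFSET_BITS) | offset;
}

int main(void)
{
    printf("translate 1502 -> %ld\n", translate(1502));  /* page 1 -> frame 6 */
    printf("translate 2100 -> %ld\n", translate(2100));  /* page 2 is invalid */
    return 0;
}
```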

Paging Hardware With TLB

Memory protection
– Memory protection is implemented by associating protection bits with each frame
– A valid/invalid bit is attached to each entry in the page table:
  – "valid" indicates that the associated page is in the process's logical address space and is thus a legal page
  – "invalid" indicates that the page is not in the process's logical address space

Valid (v) or Invalid (i) Bit In A Page Table

Shared pages
– Common code can be shared among processes
– This is possible when the code is re-entrant (pure) code
– Re-entrant code is non-self-modifying code: it never changes during execution
– Two or more processes can therefore execute the same code simultaneously

Shared Pages Example

Structure of the page table
– Hierarchical paging
– Hashed page tables
– Inverted page tables

Hierarchical paging
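As a sketch of hierarchical paging, a 32-bit logical address might be split into a 10-bit outer index, a 10-bit inner index and a 12-bit offset (a common layout for two-level paging with 4 KB pages); the bit widths and the address value below are assumptions for illustration.

```c
#include <stdio.h>

/* Two-level split of a 32-bit address: 10-bit outer page-table index,
 * 10-bit inner page-table index, 12-bit offset within a 4 KB page. */
int main(void)
{
    unsigned addr  = 0x00ABCDEF;
    unsigned outer = addr >> 22;                  /* index into outer page table */
    unsigned inner = (addr >> 12) & 0x3FF;        /* index into inner page table */
    unsigned off   = addr & 0xFFF;                /* offset within the page      */
    printf("outer %u, inner %u, offset 0x%X\n", outer, inner, off);
    return 0;
}
```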

Summary
1. Main memory is divided into equal-sized frames
2. Each process is divided into frame-sized pages
3. When a process is brought in, all of its pages are loaded into available frames and a page table is set up
4. This approach solves the main problems of partitioning

Segmentation
– Segmentation is a memory management scheme that supports the user's view of memory
– The logical address space is a collection of segments
– Each segment has a name and a length
– Each address consists of a segment number and an offset within that segment

User’s View of a Program

Logical view of segmentation (user space mapped to physical memory space)

Segmentation Hardware

Example of segmentation
– A reference to byte 53 of segment 2 maps to 4300 + 53 = 4353
– A reference to byte 852 of segment 3 maps to 3200 + 852 = 4052
– A reference to byte 1222 of segment 0 exceeds the segment's length, so it causes a trap to the operating system

Implementation of segment tables
– Just like page tables, segment tables can be kept in registers and accessed quickly
– When a program contains a large number of segments, the segment table is kept in memory; a segment-table base register and a segment-table length register locate it, and the limit check is performed on every reference
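A C sketch of the limit check and base addition performed through the segment table; the base/limit values reproduce the worked example above and should be treated as illustrative.

```c
#include <stdio.h>

typedef struct { unsigned base, limit; } Segment;

/* Segment-table values used in the worked example (illustrative). */
static Segment seg_table[] = {
    {1400, 1000},   /* segment 0 */
    {6300,  400},   /* segment 1 */
    {4300,  400},   /* segment 2 */
    {3200, 1100},   /* segment 3 */
    {4700, 1000},   /* segment 4 */
};

/* Compare the offset with the segment limit, then add the base;
 * return -1 (trap) if the reference goes beyond the end of the segment. */
long seg_translate(unsigned seg, unsigned offset)
{
    if (offset >= seg_table[seg].limit)
        return -1;
    return seg_table[seg].base + offset;
}

int main(void)
{
    printf("segment 2, byte 53   -> %ld\n", seg_translate(2, 53));    /* 4353 */
    printf("segment 3, byte 852  -> %ld\n", seg_translate(3, 852));   /* 4052 */
    printf("segment 0, byte 1222 -> %ld\n", seg_translate(0, 1222));  /* trap */
    return 0;
}
```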

Protection and sharing
– Protection fits naturally with segmentation because each segment is a semantically defined unit of the program
– Both code and data can be shared under segmentation
– Segments are shared when entries in the segment tables of two different processes point to the same physical location