Memory Management Chapter 7

Memory Management
- The task carried out by the OS, with support from hardware, to accommodate multiple processes in main memory
- Effective memory management is vital in a multiprogramming system
- Memory must be allocated efficiently to pack as many processes into memory as possible; this is essential for minimizing processor idle time due to I/O waits

Memory Management Requirements
Memory management policies must satisfy the following requirements:
- Relocation
- Protection
- Sharing
- Logical Organization
- Physical Organization

Relocation
- The ability to move a process to a different area of memory
- The programmer cannot know where the program will be placed in memory when it is executed
- A process may be relocated in main memory (often due to swapping)
- Swapping enables the OS to maintain a larger pool of ready-to-execute processes
- Memory references in code (for both instructions and data) must be translated to actual physical memory addresses

Protection
- Processes should not be able to reference memory locations in another process without permission
- It is impossible to check absolute addresses at compile time because of relocation and dynamic address calculation
- Address references must therefore be checked at run time by hardware
- Memory protection must be provided by the processor hardware rather than by the OS

Sharing
- Several processes must be able to access a common portion of main memory without compromising protection
- Cooperating processes may need to share access to the same data structure
- It is better to allow each process to access the same copy of a shared program rather than have its own separate copy

Logical Organization
- Main memory is organized as a linear address space, but users write programs in modules
- Different modules may need different degrees of protection (read only, execute only, etc.)
- Modules can be shared among processes
- Modules can be written and compiled independently, with mutual references resolved at run time
- To deal effectively with user programs, the OS and hardware should support such modules

Physical Organization
- Memory is organized in a hierarchy
- Secondary memory is the long-term store for programs and data, while main memory holds the programs and data currently in use
- Moving information between these two levels of memory is a major concern of memory management (the OS)
- It is highly inefficient to leave this responsibility to the application programmer

Simple Memory Management
- In this chapter we study the simpler case where there is no virtual memory:
  - Fixed partitioning
  - Dynamic partitioning
  - Simple paging
  - Simple segmentation
- An executing process must be loaded entirely into main memory (if overlays are not used)
- These techniques are not used in modern operating systems, but they lay the groundwork for a proper discussion of virtual memory (next chapter)

Fixed Partitioning
- Main memory is partitioned into a set of non-overlapping regions called partitions
- Partitions can be of equal or unequal sizes

Fixed Partitioning
- Any process whose size is less than or equal to a partition size can be loaded into that partition
- If all partitions are occupied, the operating system can swap a process out of a partition
- A program may be too large to fit in any partition; the programmer must then design the program with overlays
- When a module that is needed is not present, the user program must load that module into the program's partition, overlaying whatever program or data are there

Fixed Partitioning
- Main memory use is inefficient: any program, no matter how small, occupies an entire partition
- This waste of space internal to a partition is called internal fragmentation
- Unequal-size partitions reduce these problems, but they do not eliminate them
- The number of active processes is also fixed
- Fixed partitioning was used in IBM's early OS/MFT (Multiprogramming with a Fixed number of Tasks)

Placement Algorithm with Fixed Partitions
- Equal-size partitions:
  - A process can be loaded into any available partition
  - Because all partitions are of equal size, it does not matter which partition is used
  - If all partitions are occupied by blocked processes, choose one process to swap out to make room for the new process

Placement Algorithm with Fixed Partitions
- Unequal-size partitions, using multiple queues:
  - Assign each process to the smallest partition within which it will fit
  - A queue is maintained for each partition size
  - This approach tries to minimize internal fragmentation
  - Problem: some queues will be empty if no processes within a size range are present

Placement Algorithm with Fixed Partitions
- Unequal-size partitions, using a single queue:
  - When it is time to load a process into main memory, the smallest available partition that will hold the process is selected
  - This increases the level of multiprogramming at the expense of internal fragmentation
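
A minimal sketch in C of this single-queue placement rule, under assumed bookkeeping (a hypothetical partition struct with a size and an in-use flag); it returns the index of the smallest free partition that can hold the process, or -1 if none fits.

#include <stddef.h>

struct partition {
    size_t size;   /* partition size in bytes                    */
    int    in_use; /* nonzero if a process currently occupies it */
};

/* Single-queue placement for unequal fixed partitions:
 * pick the smallest free partition that can hold the process. */
int place_fixed(const struct partition *parts, int nparts, size_t proc_size)
{
    int best = -1;
    for (int i = 0; i < nparts; i++) {
        if (parts[i].in_use || parts[i].size < proc_size)
            continue;
        if (best == -1 || parts[i].size < parts[best].size)
            best = i;
    }
    return best; /* -1 means no free partition fits; a swap would be needed */
}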

Dynamic Partitioning
- Partitions are of variable length and number
- Each process is allocated exactly as much memory as it requires: the partition size equals the process size
- Eventually holes are formed in main memory; this is called external fragmentation (the holes are external to the partitions)
- Compaction must be used to shift processes so that they are contiguous and all free memory is in one block
- Dynamic partitioning was used in IBM's OS/MVT (Multiprogramming with a Variable number of Tasks)

Dynamic Partitioning: An Example
- A hole of 64K is left after loading 3 processes: not enough room for another process
- Eventually each process is blocked; the OS swaps out process 2 to bring in process 4 (128K)

Dynamic Partitioning: An Example
- Another hole of 96K is created
- Eventually each process is blocked; the OS swaps out process 1 to bring process 2 back in, and another hole of 96K is created...
- Compaction would produce a single hole of 256K
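
A minimal sketch in C of what compaction does to the allocation bookkeeping, under assumed structures (a hypothetical array of allocated regions sorted by start address); the actual copying of process images and the updating of their base registers is reduced to a comment.

#include <stddef.h>

struct region {
    size_t start; /* current starting physical address */
    size_t size;  /* size of the allocated region      */
};

/* Slide all allocated regions toward the start of user memory so that
 * the free space coalesces into a single hole at the end.  Returns the
 * start address of that hole. */
size_t compact(struct region *regions, int nregions, size_t mem_base)
{
    size_t next = mem_base;
    for (int i = 0; i < nregions; i++) {
        /* A real OS would also copy the process image to 'next'
         * and update the process's base register. */
        regions[i].start = next;
        next += regions[i].size;
    }
    return next; /* memory from 'next' upward is now one free block */
}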

Placement Algorithm
- Used to decide which free block to allocate to a process
- Goal: reduce the use of compaction (because it is time consuming)
- Possible algorithms (sketched in code after this list):
  - Best-fit: choose the smallest block that will fit
  - First-fit: choose the first block that is large enough, scanning from the beginning of memory
  - Next-fit: choose the first block that is large enough, scanning from the last placement
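
A minimal sketch of first-fit and best-fit over a free list, under an assumed data structure (a hypothetical singly linked list of holes, each with a start address and a size); each function returns the chosen hole, or NULL if no hole is large enough. Next-fit would be first-fit restarted from the hole chosen on the previous call.

#include <stddef.h>

struct hole {
    size_t start;      /* starting physical address of the free block */
    size_t size;       /* size of the free block in bytes             */
    struct hole *next; /* next hole in the free list                  */
};

/* First-fit: return the first hole large enough, scanning from the list head. */
struct hole *first_fit(struct hole *free_list, size_t request)
{
    for (struct hole *h = free_list; h != NULL; h = h->next)
        if (h->size >= request)
            return h;
    return NULL; /* no hole fits; compaction or swapping would be needed */
}

/* Best-fit: return the smallest hole that is still large enough. */
struct hole *best_fit(struct hole *free_list, size_t request)
{
    struct hole *best = NULL;
    for (struct hole *h = free_list; h != NULL; h = h->next)
        if (h->size >= request && (best == NULL || h->size < best->size))
            best = h;
    return best;
}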

Placement Algorithm: Comments
- Best-fit searches for the smallest block that will fit
  - The fragment left behind is as small as possible
  - Main memory is quickly littered with holes too small to hold any process, so it is usually the worst algorithm
  - Compaction generally needs to be done more often
- First-fit favors allocation near the beginning of memory
  - The simplest, and usually the best, algorithm
  - Tends to create less fragmentation than Next-fit
- Next-fit often leads to allocation from the largest free block, at the end of memory
  - The largest block at the end is broken into small fragments
  - Requires more frequent compaction than First-fit

Replacement Algorithm
- When all processes in main memory are blocked and there is insufficient memory for another process, the OS must choose which process to replace
- That process is swapped out (to a Blocked-Suspend state) and replaced by a new process or a process from the Ready-Suspend queue
- Such algorithms are discussed later, for memory management schemes that use virtual memory

Buddy System, I.
- A reasonable compromise that overcomes the disadvantages of both the fixed and variable partitioning schemes
- A modified form is used in Unix SVR4 for kernel memory allocation
- Memory blocks are available in sizes 2^K, where L <= K <= U and where:
  - 2^L = smallest size of block allocated
  - 2^U = largest size of block allocated (generally, the entire memory available)

Buddy System, II.
- We start with the entire block of size 2^U
- When a request of size S is made:
  - If 2^(U-1) < S <= 2^U, allocate the entire block of size 2^U
  - Otherwise, split the block into two buddies, each of size 2^(U-1)
  - If 2^(U-2) < S <= 2^(U-1), allocate one of the two buddies
  - Otherwise, one of the two buddies is split in half again
  - This process is repeated until the smallest block greater than or equal to S is generated

Buddy System, III.
- The OS maintains lists of holes (free blocks): the i-list is the list of holes of size 2^i
- A hole may be removed from the (i+1)-list by splitting it into two buddies of size 2^i and putting them on the i-list
- If a pair of buddies on the i-list become unallocated, they are removed from that list and coalesced into a single block on the (i+1)-list
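
A minimal sketch in C of the request-rounding and splitting steps, under assumed structures (hypothetical per-order free lists indexed by the exponent i, seeded elsewhere with one block of the maximum order); coalescing on free and malloc error handling are omitted to keep the sketch short.

#include <stddef.h>
#include <stdlib.h>

#define MIN_ORDER 4   /* assumed: smallest block is 2^4 = 16 bytes */
#define MAX_ORDER 20  /* assumed: largest block is 2^20 = 1 MiB    */

struct block {
    size_t addr;        /* start address of the free block   */
    struct block *next; /* next free block of the same order */
};

/* One free list per order i, holding holes of size 2^i. */
static struct block *free_lists[MAX_ORDER + 1];

/* Smallest order i (>= MIN_ORDER) with 2^i >= request. */
static int order_for(size_t request)
{
    int i = MIN_ORDER;
    while (i <= MAX_ORDER && ((size_t)1 << i) < request)
        i++;
    return i;
}

/* Allocate a block for a request of 'request' bytes, splitting larger
 * buddies as needed.  Returns the block's address, or (size_t)-1 on failure. */
size_t buddy_alloc(size_t request)
{
    int want = order_for(request);
    if (want > MAX_ORDER)
        return (size_t)-1;

    /* Find the smallest order j >= want that has a free block. */
    int j = want;
    while (j <= MAX_ORDER && free_lists[j] == NULL)
        j++;
    if (j > MAX_ORDER)
        return (size_t)-1;

    /* Pop a block of order j from its free list. */
    struct block *b = free_lists[j];
    free_lists[j] = b->next;
    size_t addr = b->addr;
    free(b);

    /* Split down to the requested order; at each split the upper
     * half becomes a hole on the next-lower free list. */
    while (j > want) {
        j--;
        struct block *buddy = malloc(sizeof *buddy);
        buddy->addr = addr + ((size_t)1 << j);
        buddy->next = free_lists[j];
        free_lists[j] = buddy;
    }
    return addr;
}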

Example of Buddy System

Relocation
- Because of swapping and compaction, a process may occupy different main memory locations during its lifetime
- Hence, physical memory references made by a process cannot be fixed in advance
- This problem is solved by distinguishing between logical addresses and physical addresses

Address Types
- A physical address (absolute address) is an actual location in main memory
- A logical address is a reference to a memory location independent of the current assignment of program/data to memory
  - Compilers produce code in which all memory references are logical addresses
- A relative address is an example of a logical address in which the address is expressed as a location relative to some known point in the program (e.g., the beginning)

Address Translation
- Relative addresses are the most frequent type of logical address used in program modules (i.e., executable files)
- Such modules are loaded into main memory with all memory references in relative form
- Physical addresses are calculated “on the fly” as the instructions are executed
- For adequate performance, the translation from relative to physical addresses must be done by hardware

Simple example of hardware translation of addresses
- When a process is assigned to the Running state, a base register (in the CPU) is loaded with the starting physical address of the process
- A bounds register is loaded with the process's ending physical address
- When a relative address is encountered, it is added to the content of the base register to obtain the physical address, which is then compared with the content of the bounds register
- This provides hardware protection: each process can only access memory within its own process image
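
A minimal sketch in C of this base/bounds check, purely to illustrate what the hardware does on every reference; the register stand-ins, their example values, and the way a violation is reported are all assumptions.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Assumed stand-ins for the CPU's base and bounds registers, loaded by
 * the dispatcher when the process is moved to the Running state. */
static uint32_t base_reg   = 0x00040000; /* example starting physical address */
static uint32_t bounds_reg = 0x0004FFFF; /* example ending physical address   */

/* Translate a relative address the way the hardware would:
 * add the base register, then compare against the bounds register. */
uint32_t translate(uint32_t relative)
{
    uint32_t physical = base_reg + relative;
    if (physical > bounds_reg) {
        /* Real hardware would raise an interrupt/trap to the OS here. */
        fprintf(stderr, "protection fault: address out of bounds\n");
        exit(EXIT_FAILURE);
    }
    return physical;
}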

Hardware Support for Relocation

Paging
- Main memory is partitioned into relatively small, fixed-size chunks called frames (page frames)
- Each process is divided into chunks of the same size, called pages, which can be assigned to frames
- No external fragmentation
- Minimal internal fragmentation (only in the last page of each process)
- The OS keeps track of which frames are in use and which are free
- If the page size is a power of two, address translation is simplified

Paging Example

Page Table
- The OS maintains a page table for each process, plus a list of free frames
- The page table is indexed by page number; each entry contains the corresponding frame number in main memory

Address Translation
- Restrict the page size to a power of 2
- Logical address = (page #, offset)
- Physical address = (frame #, offset)
- Example with 16-bit addresses (2^16 addressable bytes) and a 1K-byte (2^10 = 1024) page size:
  - 10-bit offset, 6-bit page #
  - Maximum number of pages = 2^6 = 64
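
A minimal sketch in C of this translation using the slide's 16-bit address and 1K-page parameters; the page_table array and its contents are assumptions, and the present/valid check a real entry would need is reduced to a comment.

#include <stdint.h>

#define OFFSET_BITS 10                          /* 1K page: 2^10 = 1024 bytes  */
#define PAGE_SIZE   (1u << OFFSET_BITS)
#define OFFSET_MASK (PAGE_SIZE - 1)
#define NUM_PAGES   (1u << (16 - OFFSET_BITS))  /* 6-bit page number: 64 pages */

/* Assumed per-process page table: page number -> frame number.
 * A real entry would also carry a present/valid bit. */
static uint16_t page_table[NUM_PAGES];

/* Split a 16-bit logical address into (page #, offset), look up the
 * frame number, and recombine them into the physical address. */
uint16_t paging_translate(uint16_t logical)
{
    uint16_t page   = logical >> OFFSET_BITS; /* top 6 bits  */
    uint16_t offset = logical & OFFSET_MASK;  /* low 10 bits */
    uint16_t frame  = page_table[page];
    return (uint16_t)((frame << OFFSET_BITS) | offset);
}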

Address Translation (Figure 7.11)

Logical to Physical Address Translation with Paging

Address Translation
- Advantages of a power-of-2 frame/page size:
  - The logical address (page #, offset) is the same as the relative address
  - It is easy to convert a logical address to a physical address in hardware

Segmentation
- Programs are divided into variable-length segments, usually along boundaries meaningful to the program
- External fragmentation is a problem
- No internal fragmentation
- The programmer/compiler must be aware of the maximum segment size, etc. (paging, in contrast, is invisible to the programmer)

Address Translation
- Logical address = (segment #, offset)
- The segment # indexes the segment table
- Each segment table entry contains the starting physical address (base address) of the segment and the length of the segment
- Physical address = base address + offset
- If offset >= length of segment, the address is invalid
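
A minimal sketch in C of segmentation translation, under assumed structures (a hypothetical segment_table array of base/length pairs and a fixed number of segments); an invalid address is reported through a return code rather than the trap real hardware would raise.

#include <stdint.h>

struct segment {
    uint32_t base;   /* starting physical address of the segment */
    uint32_t length; /* length of the segment in bytes           */
};

/* Assumed per-process segment table, indexed by segment number. */
#define NUM_SEGMENTS 16
static struct segment segment_table[NUM_SEGMENTS];

/* Translate (segment #, offset) into a physical address.
 * Returns 0 on success and -1 if the address is invalid. */
int seg_translate(uint32_t seg, uint32_t offset, uint32_t *physical)
{
    if (seg >= NUM_SEGMENTS || offset >= segment_table[seg].length)
        return -1; /* out of range: real hardware would trap to the OS */
    *physical = segment_table[seg].base + offset;
    return 0;
}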

Address Translation with Segmentation

Comparison of Memory Management Techniques: see Table 7.1, page 306.