Real Memory Management


Operating Systems: Real Memory Management

Real Memory Management
- Background
- Memory Management Requirements
- Fixed/Static Partitioning
- Variable/Dynamic Partitioning
- Simple/Basic Paging
- Simple/Basic Segmentation
- Segmentation with Paging

Background Program must be brought into main memory and placed within a process representing it, in order for it to be run. Memory management is the task carried out by the OS and hardware to accommodate multiple processes in main memory. User programs go through several steps before being able to run. This multi-step processing of the program invokes the appropriate utility and generates the required module at each step (see next slides).

Multi-step Processing of a User Program

Steps for loading a process in memory (figure): source files → assembler or compiler → object modules → linker → executable binary file (load module) → loader → process image in memory.

Object Module
- Public names table: names usable by other object modules.
- External names table: names defined in other object modules, together with the list of instructions having these names as operands.
- Relocation dictionary: the list of instructions whose operands are addresses (since they are relocatable).
- Only code and data will be loaded into physical memory; the rest is used by the linker and then discarded. The stack is allocated only at load time.
- Layout (figure): module identification, public names table, external names table, machine code, data, relocation dictionary, end of module.
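As a rough illustration only, an object module of this shape might be described by a C structure like the one below; the struct and field names are assumptions for this sketch and do not correspond to any real object-file format (real formats such as ELF or COFF differ considerably).

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative, hypothetical layout of an object module. */
struct public_name   { char name[32]; uint32_t offset;   };  /* usable by other modules   */
struct external_name { char name[32]; uint32_t use_site; };  /* defined in other modules  */

struct object_module {
    char                  identification[16];    /* module identification              */
    size_t                n_public, n_external, n_reloc;
    struct public_name   *public_names;          /* public names table                 */
    struct external_name *external_names;        /* instructions using external names  */
    uint8_t              *machine_code;          /* loaded into physical memory        */
    uint8_t              *data;                  /* loaded into physical memory        */
    uint32_t             *relocation_dictionary; /* offsets of address operands        */
};
```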

Object Modules Initially, each object module has its own address space. All addresses are relative to the beginning of the module.

Static Linking
The linker uses the tables in the object modules to link the modules into a single linear addressable space. The new addresses are relative to the beginning of the load module. (Figure: two object modules A and B, each with BRANCH and CALL instructions whose addresses are relative to that module's own start, are combined by the linker into one load module in which all addresses are recomputed relative to the start of the load module.)
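A minimal sketch of the relocation step such a static linker performs, assuming each module carries a relocation dictionary listing the offsets of its 32-bit address operands; the function name and the simplified word format are assumptions of this sketch.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Rebase one module's address operands after the linker has placed the
 * module at offset `load_base` within the load module.  Each relocation
 * entry is the offset, within the module's code, of a 32-bit address
 * operand that was expressed relative to the module's own start. */
static void relocate_module(uint8_t *code,
                            const uint32_t *reloc_dict, size_t n_reloc,
                            uint32_t load_base)
{
    for (size_t i = 0; i < n_reloc; i++) {
        uint32_t operand;
        memcpy(&operand, code + reloc_dict[i], sizeof operand);
        operand += load_base;   /* module-relative -> load-module-relative */
        memcpy(code + reloc_dict[i], &operand, sizeof operand);
    }
}
```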

Dynamic Linking The linking of some external modules is done after the creation of the load module (executable file). Load-time dynamic linking: the load module contains references to external modules which are resolved at load time. Run-time dynamic linking: references to external modules are resolved when a call is made to a procedure defined in the external module; an unused procedure is never loaded, and the process starts faster.

Program vs. Memory sizes What should be done when a program is larger than the amount of memory/partition that exists or can be allocated to it? There are two basic solutions within real memory management: overlays, and dynamic linking (libraries, e.g., DLLs).

Overlays Keep in memory only the overlay (those instructions and data) needed at any given phase/time. Overlays can be used only for programs that fit this model, e.g., multi-pass programs such as compilers. Overlays are designed and implemented by the programmer and require an overlay driver. No special support is needed from the operating system, but the design of the overlay structure is complex.

Overlays for a Two-Pass Assembler (figure): components are the symbol table (20K), common routines (30K), overlay driver (10K), pass 1 (70K), and pass 2 (80K). Without overlays, total memory needed is 200K (20 + 30 + 70 + 80). With overlays, pass 1 and pass 2 never reside in memory together, so total memory needed is 140K (20 + 30 + 10 + 80).

Dynamic Linking Dynamic linking is useful when large amounts of code are needed to handle infrequently occurring cases. Routine is not loaded unless/until it is called. Better memory-space utilization; unused routine is never loaded.

Dynamics of Dynamic Linking Linking is postponed until execution time. A small piece of code, the stub, is used to locate the appropriate memory-resident library routine. The stub replaces itself with the address of the routine and executes the routine. The OS is needed to check whether the routine is within the process's memory address space. Dynamic linking is particularly useful for shared/common libraries; here full OS support is needed.

Advantages of Dynamic Linking Executable files can use another version of the external module without needing to be modified. Each process is linked to the same external module, which saves disk space. The external module needs to be loaded into main memory only once, so processes can share code and save memory. Examples: on Windows, external modules are .DLL files; on Unix, external modules are .so files (shared libraries).
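On Unix systems, run-time dynamic linking against a .so is exposed to programs through the POSIX dlopen/dlsym interface. A minimal sketch; the library name libdemo.so and the symbol compute are hypothetical:

```c
#include <stdio.h>
#include <dlfcn.h>   /* POSIX dynamic linking loader: dlopen, dlsym, dlclose */

int main(void)
{
    /* Resolve the external module only when it is actually needed. */
    void *handle = dlopen("libdemo.so", RTLD_LAZY);      /* hypothetical library */
    if (!handle) { fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }

    int (*compute)(int) = (int (*)(int))dlsym(handle, "compute"); /* hypothetical symbol */
    if (compute)
        printf("compute(21) = %d\n", compute(21));

    dlclose(handle);
    return 0;
}
```

With glibc this is typically built with `cc demo.c -ldl`.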

Memory Management Requirements
If only a few processes can be kept in main memory, then much of the time all processes will be waiting for I/O and the CPU will be idle. Hence, memory needs to be allocated efficiently in order to pack as many processes into memory as possible. Additional support is needed for:
- Relocation
- Protection
- Sharing
- Logical Organization
- Physical Organization

Memory Management Requirements (1) Relocation: The programmer does not know where the program will be placed in memory when it is executed. A process may (often) be relocated in main memory due to swapping/compaction: swapping enables the OS to have a larger pool of ready-to-execute processes, and compaction enables the OS to have a larger contiguous memory area in which to place programs.

Memory Management Requirements (2) Protection: Processes should not be able to reference memory locations in another process without permission. It is impossible to check addresses in programs at compile/load time, since the program could be relocated. Address references must therefore be checked at execution time by hardware.

Memory Management Requirements (3) Sharing: must allow several processes to access a common portion of main memory without compromising protection. It is better to allow each process to access the same copy of a shared module than to give each its own separate copy. Cooperating processes may need to share access to the same data structure.

Memory Management Requirements (4) Logical Organization: Users write programs in modules with different characteristics: instruction modules are execute-only; data modules are either read-only or read/write; some modules are private and others are public. To deal effectively with user programs, the OS and hardware should support a basic form of module that provides the required protection and sharing.

Memory Management Requirements (5) Physical Organization: External memory is the long-term store for programs and data, while main memory holds the programs and data currently in use. Moving information between these two levels of the memory hierarchy is a major concern of memory management; it is highly inefficient to leave this responsibility to the application programmer.

The need for Relocation Because of need for process swapping and memory compaction, a process may occupy different main memory locations during its lifetime. Consequently, physical memory references (addresses) by a process cannot always be fixed. This problem is solved by distinguishing between logical address and physical address.

Address Types A physical (absolute) address is a physical location in main memory. A logical (virtual) address is a reference to a memory location that is independent of the physical organization of memory. Compilers produce code in which all memory references are logical addresses. A relative address is an example of a logical address, in which the address is expressed as a location relative to some known point in the program (typically the beginning).

Relocation Scheme A relative address is the most frequent type of logical address used in program modules (i.e., executable files). Relocatable modules are loaded into main memory with all memory references left in relative form. Physical addresses are calculated "on the fly" as the instructions are executed. For adequate performance, the translation from relative to physical address must be done by hardware.

Memory-Management Unit (MMU) The hardware device that maps logical/virtual addresses to real/physical addresses. In the relocation scheme, the value in the relocation register is added to every logical address generated by a user process at the time it is sent to memory. The process deals only with logical/virtual addresses; it never sees the real/physical addresses.

CPU, MMU and Memory (figure): the CPU and the MMU sit together in the CPU package and are connected by the bus to memory and the disk controller; the CPU issues logical addresses, and the MMU sends the corresponding physical addresses to memory.

Dynamic relocation using a relocation register + MMU (figure): the CPU issues logical address 346; the MMU adds the relocation register value 14000, producing physical address 14346, which is sent to memory.

Hardware Support for Relocation and Limit Registers (figure): the logical address from the CPU is first compared with the limit register; if it is smaller, the relocation register is added to it to produce the physical address sent to memory; otherwise a trap (addressing error) is raised.

Address-binding of Instructions/Data
Address binding of instructions and data to memory addresses can happen at three different stages:
- Compile time: if the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
- Load time: relative code must be generated if the memory location is not known at compile time; loading maps relative code to absolute code by adding the start location.
- Execution time: binding is delayed until run time if the process can be relocated (i.e., relocatable code) during its execution from one place to another; needs hardware support for address maps (e.g., base and limit registers).

Logical vs. Physical Address Space The concept of a logical address space of a program that is bound to a physical address space is central to proper memory management. Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in execution-time address-binding scheme.

Logical and Physical Address Spaces (figure): Process 1 (base register = 200K, length = 300K) has a logical address space of 0K–300K mapped to physical addresses 200K–500K; Process 2 (base register = 500K, length = 350K) has a logical address space of 0K–350K mapped to physical addresses 500K–850K in the computer's physical address space.

Example Hardware for Address Translation (figure): the relative address is added (adder) to the base register to form the absolute address; a comparator checks it against the limit register, and an out-of-range address raises an interrupt to the OS. The process image in memory contains code, data, and stack.

Dynamics of hardware translation of addresses When a process is assigned to the running state, a relocation/base register gets loaded with the starting physical address of the process, and a limit/bounds register gets loaded with the process's ending physical address. When a relative address is encountered, it is added to the contents of the base register to obtain the physical address, which is compared with the contents of the limit/bounds register. This provides hardware protection: each process can only access memory within its process image.
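A minimal C sketch of that check, following this slide's convention that the limit/bounds register holds the process's ending physical address; the register values and names are illustrative (the 14000/346 figures echo the earlier relocation-register slide).

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

static uint32_t base_reg;    /* starting physical address of the running process */
static uint32_t limit_reg;   /* ending physical address of the running process   */

/* Translate a relative (logical) address; returns false on an addressing
 * error, where real hardware would raise a trap/interrupt to the OS. */
static bool translate(uint32_t relative, uint32_t *physical)
{
    uint32_t phys = base_reg + relative;
    if (phys >= limit_reg)        /* outside the process image: protection fault */
        return false;
    *physical = phys;
    return true;
}

int main(void)
{
    base_reg = 14000; limit_reg = 14000 + 350;    /* illustrative values */
    uint32_t p;
    if (translate(346, &p))
        printf("logical 346 -> physical %u\n", p);   /* prints 14346 */
    return 0;
}
```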

Contiguous Allocation An executing process must be loaded entirely in main memory (if overlays are not used). Main memory is usually split into two (memory split) or more (memory division) partitions: Resident operating system, usually held in low memory partition with interrupt vector. User processes then held in high memory partitions. Relocation-register scheme is used to protect user processes from each other, and from changing OS code and data.

Fixed Partitioning Partition main memory into a set of non-overlapping memory regions called partitions. Fixed partitions can be of equal or unequal sizes. Leftover space in a partition after a program has been assigned to it is called internal fragmentation. (Figure: memory with the operating system resident in an 8M region, shown once divided into equal-size 8M partitions and once into unequal-size partitions such as 2M, 4M, 6M, 12M, and 16M.)

Placement Algorithm with Partitions
Equal-size partitions: if there is an available partition, a process can be loaded into it; because all partitions are of equal size, it does not matter which partition is used, though internal fragmentation may occur. If all partitions are occupied by blocked processes, choose one process to swap out to make room for the new process.
Unequal-size partitions (use of multiple queues): assign each process to the smallest partition within which it will fit; this increases the level of multiprogramming at the expense of internal fragmentation.
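A minimal sketch of the unequal-size placement rule, choosing the smallest partition within which the process fits; the partition sizes mirror the earlier figure, and all names are illustrative.

```c
#include <stdio.h>
#include <stddef.h>

/* Unequal-size fixed partitions (illustrative sizes, in KB). */
static const size_t partition_size[] = { 2048, 4096, 6144, 12288, 16384 };
enum { NPART = sizeof partition_size / sizeof partition_size[0] };

/* Return the index of the smallest partition that can hold a process of
 * `need` KB (i.e., the queue it should be placed on), or -1 if none fits. */
static int smallest_fit(size_t need)
{
    int best = -1;
    for (int i = 0; i < NPART; i++)
        if (partition_size[i] >= need &&
            (best < 0 || partition_size[i] < partition_size[best]))
            best = i;
    return best;
}

int main(void)
{
    printf("5 MB process -> partition #%d\n", smallest_fit(5120));  /* the 6 MB partition */
    return 0;
}
```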

Dynamics of Fixed Partitioning Any process whose size is less than or equal to a partition size can be loaded into the partition. If all partitions are occupied, the OS can swap a process out of a partition. A program may be too large to fit in a partition. The programmer must design the program with overlays.

Comments on Fixed Partitioning Main memory use is inefficient: any program, no matter how small, occupies an entire partition. This can cause internal fragmentation. Unequal-size partitions lessen these problems, but they still remain. Equal-size partitions were used in IBM's early OS/MFT (Multiprogramming with a Fixed number of Tasks).

Variable Partitioning When a process arrives, it is allocated memory from a hole large enough to accommodate it. Hole: a block of available memory; holes of various sizes are scattered throughout memory. The operating system maintains information about (a) allocated partitions and (b) free partitions (holes). (Figure: a series of memory snapshots with the OS in low memory; processes 5, 8, and 2 are resident, process 8 is swapped out leaving a hole, then process 9 and later process 10 are loaded into the freed space while processes 5 and 2 stay in place.)
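The slide leaves open which hole to pick when several are large enough; one common policy is first fit, sketched below with illustrative names and a plain array standing in for the OS's free-partition bookkeeping.

```c
#include <stdio.h>
#include <stddef.h>

/* One free hole in memory. */
struct hole { size_t start, size; };

/* First fit: allocate `need` units from the first hole large enough,
 * shrinking that hole; returns the allocated start or (size_t)-1 on failure. */
static size_t alloc_first_fit(struct hole *holes, size_t nholes, size_t need)
{
    for (size_t i = 0; i < nholes; i++) {
        if (holes[i].size >= need) {
            size_t start = holes[i].start;
            holes[i].start += need;      /* remaining part stays a (smaller) hole */
            holes[i].size  -= need;
            return start;
        }
    }
    return (size_t)-1;                   /* no single hole fits: external fragmentation */
}

int main(void)
{
    struct hole holes[] = { {100, 50}, {300, 200}, {700, 64} };
    size_t p = alloc_first_fit(holes, 3, 120);   /* fits in the 200-unit hole */
    printf("allocated at %zu\n", p);             /* prints 300 */
    return 0;
}
```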

Managing allocated and free partitions Example (figure): memory with 5 processes and 3 holes; tick marks show allocation units, and shaded regions are free.

Variable Partitioning: example (1) A hole of 64K is left after loading 3 processes: not enough room for another process. Eventually each process is blocked. The OS swaps out process 2 to bring in process 4.

Variable Partitioning: example (2) Another hole of 96K is created. Eventually each process is blocked. The OS swaps out process 1 to bring process 2 back in, and another hole of 96K is created ...

Internal/External Fragmentation There are really two types of fragmentation: Internal Fragmentation – allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, that is not being used. External Fragmentation – total memory space exists to satisfy a size n request, but that memory is not contiguous.

Reducing External Fragmentation Reduce external fragmentation by doing compaction: shuffle memory contents to place all free memory together in one large block (or possibly a few large ones). Compaction is possible only if relocation is dynamic, and it is done at execution time. I/O problem: a job involved in I/O cannot safely be moved, so either lock the job in memory while it is involved in I/O, or do I/O only into OS buffers.
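A minimal sketch of compaction under dynamic relocation: allocated regions are slid toward low memory and each process's base address is updated, leaving all free memory in one block (the structures and names are illustrative).

```c
#include <string.h>
#include <stddef.h>

struct region { size_t base, size; };   /* one allocated process image */

/* Shuffle memory contents so that allocated regions become contiguous
 * starting at `first_free`; all remaining free memory forms one block. */
static void compact(unsigned char *mem, struct region *r, size_t n, size_t first_free)
{
    size_t next = first_free;
    for (size_t i = 0; i < n; i++) {     /* assumes r[] is sorted by base address */
        if (r[i].base != next)
            memmove(mem + next, mem + r[i].base, r[i].size);
        r[i].base = next;                /* dynamic relocation: update the base register value */
        next += r[i].size;
    }
    /* free memory is now the single block starting at `next` */
}
```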

Comments on Variable Partitioning Partitions are of variable length and number. Each process is allocated exactly as much memory as it requires. Eventually holes form in main memory; this can cause external fragmentation. Compaction must be used to shift processes so that they are contiguous and all free memory is in one block. Variable partitioning was used in IBM's OS/MVT (Multiprogramming with a Variable number of Tasks).