Memory Management: Early Systems


Introduction
Management of main memory is critical: entire system performance depends on two items, how much memory is available and how memory is optimized during job processing. This chapter introduces the memory manager and four types of memory allocation schemes: single-user systems, fixed partitions, dynamic partitions, and relocatable dynamic partitions.

What is memory? In computing, memory refers to the physical devices used to store programs (sequences of instructions) or data, on a temporary or permanent basis, for use in a computer or other digital electronic device. Cache memory is a smaller, faster memory that stores copies of the data from the most frequently used main memory locations. The term primary memory refers to fast storage (e.g., RAM), as distinct from secondary memory: devices that are slower to access but offer higher capacity. Primary memory contents stored on secondary memory are called "virtual memory".

Memory Hierarchy A memory hierarchy in computer storage distinguishes each level by response time. Since response time, complexity, and capacity are related, the levels may also be distinguished by the controlling technology. The concept comes up when discussing performance in computer architecture design, algorithm predictions, and lower-level programming constructs such as locality of reference. There are four major storage levels:
Internal: processor registers and cache.
Main: the system RAM and controller cards.
On-line mass storage: secondary storage.
Off-line bulk storage: tertiary and off-line storage.

Single-User Contiguous Scheme Each program is loaded in its entirety into memory and allocated as much contiguous space as it needs. Once loaded, the program remains in memory until execution is complete. Only one program can be loaded in memory at a time.

Advantage: Simple. Disadvantages: A program larger than the available memory space cannot be executed; program size must be less than memory size. The scheme does not support multiprogramming. Possible solutions: increase the size of main memory, or modify the program to make it smaller.

Fixed Partitions Memory is divided into several fixed partitions, possibly of unequal sizes. Each partition must be protected. Before loading a job, its size must be matched against the partition size to make sure it fits completely. Only one job can be loaded into a partition. The memory manager must keep a table showing each partition's size, address, and current status (free/busy).

Fixed Partitions Advantage: Multiprogramming. Disadvantages: If partitions are too small, big jobs are rejected; if partitions are too big, memory is wasted through internal fragmentation (unused space within a block). Jobs must still be stored entirely and contiguously.
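The partition table and internal fragmentation described above can be sketched in a few lines. This is a minimal illustration, not from the textbook: the table layout and function name are hypothetical.

```python
# Hypothetical sketch of a fixed-partition table. Partition sizes are set
# at system start; each partition holds at most one job.
def load_job(partitions, job_size):
    """Place a job in the first free partition large enough for it.
    Returns (partition index, internal fragmentation) or None if rejected."""
    for i, part in enumerate(partitions):
        if part["status"] == "free" and part["size"] >= job_size:
            part["status"] = "busy"
            # Internal fragmentation: unused space left inside the partition.
            return i, part["size"] - job_size
    return None  # job is rejected (or must wait for a free partition)

partitions = [{"size": 100, "status": "free"},
              {"size": 25, "status": "free"},
              {"size": 50, "status": "free"}]
print(load_job(partitions, 40))   # fits partition 0, wasting 60 units inside it
print(load_job(partitions, 200))  # larger than every partition: None
```

Note how a 40-unit job in a 100-unit partition wastes 60 units that no other job can use until the partition is freed.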

Dynamic Partitions Main memory is partitioned as jobs arrive: each job is given exactly the memory it requests when loaded, in one contiguous partition, allocated first come, first served. Memory waste is comparatively small. Disadvantages: full memory utilization occurs only while the initial jobs are being loaded; subsequent allocations waste memory through external fragmentation (fragments between blocks).

Job allocation policies How does the OS keep track of the free sections of memory? Two policies: first-fit and best-fit.

Best-fit & First-fit allocation First-fit: the first partition that fits the job. Best-fit: the smallest partition that fits the job. In first-fit, the Memory Manager organizes the free/busy lists by memory location; in best-fit, it organizes them by size.

Best-Fit Versus First-Fit Allocation Two methods for free-space allocation. First-fit memory allocation takes the first partition fitting the requirements, which leads to fast allocation of memory space. Best-fit memory allocation takes the smallest partition fitting the requirements, which results in the least wasted space: fragmentation is reduced, but not eliminated. Both fixed and dynamic memory allocation schemes can use either method.
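The two placement policies can be sketched over a free list of (address, size) blocks. This is an illustrative sketch; the function names and the free-list representation are assumptions, not from the chapter.

```python
# First-fit vs best-fit over a free list of (start address, block size) pairs.
def first_fit(free_list, size):
    # Free list examined in memory-address order; take the first block that fits.
    for addr, block in sorted(free_list):
        if block >= size:
            return addr
    return None  # no block is large enough

def best_fit(free_list, size):
    # Examine blocks by size; take the smallest block that still fits.
    fits = [(block, addr) for addr, block in free_list if block >= size]
    return min(fits)[1] if fits else None

free_list = [(0, 30), (50, 15), (80, 100)]
print(first_fit(free_list, 12))  # 0: the first block large enough
print(best_fit(free_list, 12))   # 50: the smallest block large enough
```

For a 12-unit request, first-fit stops at the 30-unit block at address 0 (fast), while best-fit scans all blocks and chooses the 15-unit block at address 50 (least waste).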

Advantages & Disadvantages First-fit: Advantage: fast. Disadvantages: large jobs must wait for the large spaces; memory is wasted.

Advantages & Disadvantages Best-fit: Advantage: best use of memory space. Disadvantage: time is wasted searching for the best memory block.

Relocatable Dynamic Partitions The Memory Manager relocates programs: it gathers together all empty blocks and compacts them, making one block of memory large enough to accommodate some or all of the jobs waiting to get in.

Compaction: reclaiming fragmented sections of memory space. Every program in memory must be relocated so that programs become contiguous. The operating system must distinguish between addresses and data values: every address is adjusted to account for the program's new location in memory, while data values are left alone.

Compaction issues: What lists have to be updated? What goes on behind the scenes when relocation and compaction take place? What keeps track of how far each job has moved from its original storage area?

What lists have to be updated? The free list must show the partition for the new block of free memory. The busy list must show the new locations of all relocated jobs already in process; each such job gets a new address, except those already at the lowest memory locations.

To address the last two questions, special-purpose registers are used for relocation. The bounds register stores the highest location accessible by each program. The relocation register contains the value that must be added to each address referenced in the program, so the program accesses the correct memory addresses after relocation. If the program has not been relocated, a value of zero is stored in its relocation register.
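The register pair described above can be sketched as a small translation step. This is a minimal sketch under assumed numbers: the function name and the example addresses are illustrative, not from the chapter.

```python
# Address translation with relocation and bounds registers, assuming one
# pair of registers per loaded program.
def relocate_address(program_addr, relocation_reg, bounds_reg):
    physical = program_addr + relocation_reg   # add the relocation value
    if physical > bounds_reg:
        raise MemoryError("address beyond the program's highest location")
    return physical

# A program originally loaded at 500 and compacted down to 200 stores
# -300 in its relocation register; an unmoved program stores zero.
print(relocate_address(620, -300, 1000))  # address 620 in the code now maps to 320
print(relocate_address(620, 0, 1000))     # unmoved program: address unchanged
```

The point is that the program's own addresses are never rewritten; the hardware adds the register value on every reference.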

Compacting and relocating optimizes the use of memory and improves throughput, but compaction entails extra overhead, so the crucial factor is its timing: when and how often it should be done. Options for timing of compaction: when a certain percentage of memory is busy; when there are jobs waiting to get in; after a prescribed amount of time has elapsed. Goal: optimize processing time and memory use while keeping overhead as low as possible.

Summary Four memory management techniques: single-user systems, fixed partitions, dynamic partitions, and relocatable dynamic partitions. Common requirements of all four: the entire program loaded into memory, contiguous storage, and memory residency until the job is completed. Each places severe restrictions on job size. These schemes were sufficient for the first three generations of computers.

Memory Management: Virtual Memory (Recent Systems)

Introduction Early schemes suffered from several problems: the entire program had to be stored in memory, fragmentation, and overhead due to relocation. More sophisticated memory schemes, built around the evolution of virtual memory, help to eliminate the need to store programs contiguously and the need for the entire program to reside in memory during execution.

TASK In pairs, research the recent memory allocation schemes; there are four types. Produce a report of at least two pages on your findings and explain your understanding of them to your instructor. Submission Date: 20th February 2014

More recent Memory Allocation Schemes Paged Memory Allocation Demand Paging Memory Allocation Segmented Memory Allocation Segmented/Demand Paged Allocation

Paged Memory Allocation Divides each incoming job into pages of equal size. Works well if page size = memory block (page frame) size = disk section (sector, block) size. Memory manager tasks prior to program execution: determine the number of pages in the program, locate enough empty page frames in main memory, and load all of the program's pages into them. Advantage: the program can be stored non-contiguously. New problem: keeping track of the job's pages.

Three tables are used: the Job Table, the Page Map Table, and the Memory Map Table.

Paged Memory Allocation (Job 1 – Fig 3.2) The program has 350 lines, referred to by the system as line 0 through line 349. At compilation time the job is divided into pages: page 0 contains the first hundred lines, page 1 the second hundred, page 2 the third hundred, and page 3 the last fifty lines.

Paging Requires 3 Tables to Track a Job's Pages
1. Job Table (JT): the size of each active job and the memory location where its PMT is stored. One JT for the whole system.
2. Page Map Table (PMT): each page number and its corresponding page frame address. One PMT for each job.
3. Memory Map Table (MMT): the location of each page frame and its free/busy status. One MMT for the whole system.

Job 1 Is 350 Lines Long & Divided Into 4 Pages (Figure 3.2)

Displacement The displacement (offset) of a line is how far the line is from the beginning of its page: a relative factor used to locate the line within its page frame. For example, lines 0, 100, 200, and 300 are the first lines of pages 0, 1, 2, and 3 respectively, so each has a displacement of zero. To determine the page number and displacement of a line, divide the job space address by the page size: the page number is the integer quotient, and the displacement is the remainder.
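The division above is a single `divmod` in code. The line address 214 is an illustrative value, assuming the chapter's page size of 100 lines.

```python
# Page number and displacement from a job-space address, assuming a
# page size of 100 lines as in the chapter's example.
PAGE_SIZE = 100
page, displacement = divmod(214, PAGE_SIZE)
print(page, displacement)  # line 214 is line 14 of page 2
```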

Steps to determine the exact location of a line in memory: determine the page number and displacement of the line; refer to the job's PMT to determine the page frame containing the required page; obtain the address of the beginning of the page frame by multiplying the page frame number by the page frame size; add the displacement (calculated in the first step) to the starting address of the page frame. This process is address resolution: translating a job space address into a physical address, a relative address into an absolute address.
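The steps above can be sketched end to end. The PMT contents here are hypothetical, and page size and frame size are assumed equal at 100 lines.

```python
# End-to-end address resolution: job-space address -> physical address,
# via a hypothetical Page Map Table mapping page numbers to page frames.
PAGE_SIZE = 100

def resolve_page_address(job_address, pmt):
    page, displacement = divmod(job_address, PAGE_SIZE)  # step 1
    frame = pmt[page]                                    # step 2: PMT lookup
    return frame * PAGE_SIZE + displacement              # steps 3 and 4

pmt = {0: 8, 1: 10, 2: 5, 3: 11}  # page -> page frame (illustrative values)
print(resolve_page_address(214, pmt))  # page 2, displacement 14, frame 5 -> 514
```

Notice that consecutive job addresses can land in widely separated frames: that is exactly the non-contiguous allocation paging buys.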

Advantages: allows job allocation in non-contiguous memory; efficient memory use. Disadvantages: increased overhead from address resolution; internal fragmentation in the last page; the entire job must still be stored in memory. Page size selection is crucial: too small generates very long PMTs; too large causes excessive internal fragmentation.

Demand Paging Introduced the concept of loading only a part of the program into memory for processing: the first widely used scheme that removed the restriction of having the entire job in memory from the beginning to the end of its processing. Pages are brought into memory only as needed. Requires high-speed page access. Exploits common programming techniques: modules are written sequentially, and not all pages are necessarily needed simultaneously.

Demand paging allowed wide availability of the virtual memory concept. It provides the appearance of almost infinite physical memory, and jobs run with less main memory than required under the paged memory allocation scheme. It requires a high-speed direct-access storage device that works directly with the CPU. Swapping (how and when pages are passed in and out of memory) depends on predefined policies.

The Memory Manager requires three tables: the Job Table; the Page Map Table, with three additional fields recording whether the requested page is already in memory, whether the page contents have been modified, and whether the page has been referenced recently (this information determines which pages remain in main memory and which are swapped out); and the Memory Map Table.

Swapping Process Exchanges a resident memory page with a secondary storage page. Involves copying the resident page to disk (if it was modified) and writing the new page into the empty page frame. Requires close interaction between hardware components, software algorithms, and policy schemes.

Hardware instruction processing A page fault is a failure to find a page in memory. The page fault handler, part of the operating system, determines whether there are empty page frames in memory. If yes, the requested page is copied from secondary storage into one; if no, swapping occurs. Deciding which page frame to swap out when all are busy depends directly on the predefined policy for page removal.
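The handler's decision tree can be sketched as follows. This is a simplified model, not an OS implementation: the function name is hypothetical, and the default victim choice (the longest-resident page, i.e. FIFO order) stands in for whatever removal policy is configured.

```python
# Sketch of a page-fault handler with a fixed number of frames and a
# pluggable victim-selection policy (default: longest-resident page).
def handle_fault(page, frames, max_frames, choose_victim=lambda f: f[0]):
    """Make `page` resident; return the evicted page, or None."""
    if page in frames:
        return None                  # no fault: page already in memory
    if len(frames) < max_frames:
        frames.append(page)          # an empty page frame is available
        return None
    victim = choose_victim(frames)   # all frames busy: swap one page out
    frames.remove(victim)
    frames.append(page)
    return victim

frames = []
for p in [0, 1, 2, 0, 3]:
    handle_fault(p, frames, max_frames=3)
print(frames)  # -> [1, 2, 3]: page 0 was evicted to make room for page 3
```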

Thrashing An excessive amount of page swapping between main memory and secondary storage, caused by pages being removed from main memory and then called back shortly thereafter. Produces inefficient operation. It occurs across jobs, when a large number of jobs compete for a relatively small number of free pages, and within a job, in loops that cross page boundaries.

Advantages The job is no longer constrained by the size of physical memory (the concept of virtual memory); memory is utilized more efficiently than in previous schemes; faster response. Disadvantage Increased overhead caused by the tables and page interrupts.

Page Replacement Policies and Concepts The policy used to select the page to remove is crucial to system efficiency. Page replacement policies: under the First-In First-Out (FIFO) policy, the best page to remove is the one in memory longest; under the Least Recently Used (LRU) policy, it is the one least recently accessed. Also covered: the mechanics of paging and the working set concept.

First-In First-Out Removes the page that has been in memory the longest. Efficiency is measured as the ratio of page faults to page requests. The FIFO example is not so good: a fault ratio of 9/11, or 82%. FIFO anomaly: adding more memory frames does not always lead to better performance.

Least Recently Used Removes the least recently accessed page. Increasing main memory causes either a decrease in or the same number of page interrupts (faults); the fault ratio is slightly better than FIFO's: 8/11, or 73%. LRU is a stack algorithm removal policy, so it does not experience the FIFO anomaly.
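The two policies can be compared by counting faults on the same reference string. This is an illustrative simulation with made-up numbers, not the textbook's figure data; the function name is an assumption.

```python
from collections import OrderedDict

# Count page faults for FIFO or LRU on a reference string with `nframes`
# page frames. OrderedDict keeps insertion order: its first entry is the
# oldest page (FIFO) or, with move_to_end on hits, the least recent (LRU).
def count_faults(refs, nframes, lru):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            if lru:
                frames.move_to_end(page)   # LRU: mark page as most recent
            continue
        faults += 1
        if len(frames) == nframes:
            frames.popitem(last=False)     # evict the first (victim) entry
        frames[page] = True
    return faults

refs = [1, 2, 3, 2, 1, 4, 1, 2]
print(count_faults(refs, 3, lru=False))  # FIFO: 6 faults
print(count_faults(refs, 3, lru=True))   # LRU: 4 faults
```

On this string LRU wins because the hits on pages 1 and 2 keep them resident when page 4 forces an eviction, while FIFO blindly evicts its oldest arrivals.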

LRU Implementation: Bit-Shifting Technique Uses an 8-bit reference byte per page. With every CPU clock tick, each page's reference byte is shifted one bit to the right, and the leftmost bit is set to 1 for any page referenced during that tick. When a page fault occurs, the LRU policy selects the page with the smallest value in its reference byte, because that is the least recently used.
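The bit-shifting (aging) technique above can be sketched directly. The tick sequence and function name here are illustrative assumptions.

```python
# 8-bit reference-byte (aging) sketch: each clock tick shifts every byte
# right one bit, then sets the leftmost bit of pages referenced this tick.
def clock_tick(ref_bytes, referenced):
    for page in ref_bytes:
        ref_bytes[page] >>= 1                 # age every page by one tick
        if page in referenced:
            ref_bytes[page] |= 0b10000000     # set the leftmost bit
    return ref_bytes

bytes_ = {0: 0, 1: 0, 2: 0}
clock_tick(bytes_, referenced={0, 2})   # tick 1: pages 0 and 2 used
clock_tick(bytes_, referenced={0})      # tick 2: only page 0 used
victim = min(bytes_, key=bytes_.get)    # smallest byte = least recently used
print(victim)  # page 1 (never referenced) would be swapped out
```

After the two ticks the bytes read 11000000, 00000000, and 01000000: recent references dominate the comparison because they occupy the high-order bits.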

The Mechanics of Paging For page swapping, the Memory Manager requires specific information, which it gets from the Page Map Table: status bits of "0" or "1".

Page map table bit meanings: the status bit indicates whether the page is currently in memory; the referenced bit indicates whether the page was referenced recently (used by LRU to determine which page to swap); the modified bit indicates whether the page contents were altered (used to determine whether the page must be rewritten to secondary storage when swapped out). There are four combinations of the modified and referenced bits (00, 01, 10, 11). See Table 3.5.

Segmented Memory Allocation Segments are divisions of computer memory of variable size that temporarily store memory addresses required for a logical computer process. Virtual memory is divided into variable length regions called segments Virtual address consists of a segment number and a segment offset <segment-number, offset> Each memory segment is associated with a specific length and a set of permissions.

Memory Manager tracks segments using tables: the Job Table lists every job in process (one for the whole system); the Segment Map Table lists details about each segment (one for each job); the Memory Map Table monitors the allocation of main memory (one for the whole system). Instructions within a segment are ordered sequentially, but segments are not necessarily stored contiguously.

Addressing scheme requirement: segment number and displacement. Advantages: internal fragmentation is removed; memory is allocated dynamically. Disadvantages: difficulty managing variable-length segments in secondary storage; external fragmentation.
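The segment-plus-displacement addressing scheme can be sketched with a per-job Segment Map Table. The (base, length, permissions) layout and the numbers below are illustrative assumptions.

```python
# <segment-number, offset> -> physical address, via a hypothetical
# Segment Map Table of (base address, length, permissions) entries.
def segment_to_physical(segment, offset, smt):
    base, length, perms = smt[segment]
    if offset >= length:
        raise MemoryError("offset beyond segment length")  # bounds check
    return base + offset

smt = {0: (4000, 700, "rx"),   # segment -> (base, length, permissions)
       1: (6300, 200, "rw")}
print(segment_to_physical(0, 53, smt))   # -> 4053
print(segment_to_physical(1, 199, smt))  # -> 6499
```

Unlike a page, a segment has a meaningful length, so every access gets a bounds check against it; exceeding the length is an addressing error, not internal fragmentation.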

Paging vs Segmentation Paging: block replacement is easy, because blocks are fixed-length. Segmentation: block replacement is hard, because blocks are variable-length and a contiguous, variable-sized, unused part of main memory must be found.

Paging: invisible to the application programmer; no external fragmentation, but there is internal fragmentation (the unused portion of a page); units of code and data are broken up into separate pages. Segmentation: visible to the application programmer; no internal fragmentation, but there is external fragmentation (unused pieces of main memory); keeps blocks of code or data as single units.

Segmented/Demand Paged Memory Allocation This scheme evolved from the two schemes just discussed: it is a combination of segmentation and demand paging. It subdivides segments into pages of equal size, smaller than most segments and more easily manipulated than whole segments, removing the problems of segmentation: compaction, external fragmentation, and secondary storage handling. To access a location in memory, the system must resolve an address composed of three entries: the segment number, the page number within that segment, and the displacement within that page.

Scheme requires four tables
1. Job Table: lists every job in process (one for the whole system).
2. Segment Map Table: lists details about each segment (one for each job).
3. Page Map Table: lists details about every page (one for each segment).
4. Memory Map Table: monitors the allocation of page frames in main memory (one for the whole system).
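The three-part address resolution through these tables can be sketched with nested lookups. The table contents and page size below are illustrative assumptions.

```python
# Combined segmented/demand-paged translation of a three-part address
# <segment, page, displacement>. Tables here are hypothetical.
PAGE_SIZE = 100

def resolve_segment_page(segment, page, displacement, smt):
    pmt = smt[segment]            # SMT entry points at the segment's own PMT
    frame = pmt[page]             # PMT maps page number -> page frame
    return frame * PAGE_SIZE + displacement

smt = {0: {0: 12, 1: 7},          # segment 0's pages live in frames 12 and 7
       1: {0: 3}}                 # segment 1's single page lives in frame 3
print(resolve_segment_page(0, 1, 25, smt))  # frame 7 -> physical address 725
```

Two table lookups per reference is the overhead the slide's disadvantages list refers to, and it is why associative memory is used to speed the process up.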

Advantages Large virtual memory; segments loaded on demand; the logical benefits of segmentation combined with the physical benefits of paging. Disadvantages Table-handling overhead; memory needed for the page and segment tables.

Virtual Memory Allows program execution even if the program is not stored entirely in memory. Requires cooperation between the memory manager and the processor hardware. Advantages: job size is not restricted to the size of main memory; memory is used more efficiently; allows an unlimited amount of multiprogramming; eliminates external fragmentation and minimizes internal fragmentation; allows the sharing of code and data; facilitates dynamic linking of program segments. Disadvantages: increased processor hardware costs; increased overhead for handling paging interrupts; increased software complexity to prevent thrashing.

The use of virtual memory requires cooperation between the Memory Manager, which tracks each page or segment, and the processor hardware, which issues the interrupt and resolves the virtual address, e.g. when a page is needed that is not already in memory.

Summary Paged memory allocation makes efficient use of memory and allocates jobs in non-contiguous memory locations; its problems are increased overhead and internal fragmentation. The demand paging scheme eliminates the physical memory size constraint; LRU provides slightly better efficiency than FIFO. The segmented memory allocation scheme solves the internal fragmentation problem.

Segmented/demand paged memory solves the problems of compaction, external fragmentation, and secondary storage handling; associative memory is used to speed up the translation process. Virtual memory lets programs execute even when not stored entirely in memory, so a job's size is no longer restricted to main memory size. Cache memory lets the CPU execute instructions faster.