Operating Systems COMP 4850/CISG 5550: Basic Memory Management and Swapping. Dr. James Money


Memory Management
One of the most important resources to manage is memory. Here we typically mean managing RAM, although several other levels of the memory hierarchy are involved as well. The part of the OS that handles this is called the memory manager.

Memory Hierarchy

Basic Memory Management
There are two classes of memory management systems:
–Systems that move processes back and forth between main memory and disk
–Systems that do not do this swapping
We will first consider the second case, which is simpler to implement.

Monoprogramming
The simplest scheme is to run just one program at a time. Memory is shared between the OS and that single program. There are various ways this setup can be arranged.

Monoprogramming
Three models:
–OS at the bottom of memory, with the user program at the top
–User program at the bottom, with the OS in ROM at the top of memory
–OS at the bottom and device drivers in ROM at the top, with the user program in between
In the last model, the ROM contains the BIOS (Basic Input/Output System).

Monoprogramming

Monoprogramming
In this setup, one program runs at a time. Typically a command shell or similar program issues a prompt and gets the name of the program to run. The program is then read from disk into memory and executed. When the program exits, the command shell runs again, replacing the exited process. This scheme is typically used only on embedded systems today.

Multiprogramming with Fixed Partitions
Most modern OSes allow you to run multiple programs simultaneously. This keeps the CPU utilized even when one process is blocked on I/O. The easiest approach is to divide memory into n, possibly unequal, partitions.

Multiprogramming with Fixed Partitions
This partitioning can be done when the OS first starts up. Each partition has an input queue associated with it. When a job arrives, it is put into the input queue of the smallest partition that can hold it. Any partition space the job does not use is lost.
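The "smallest partition that can hold it" rule can be sketched as follows; the partition and job sizes here are illustrative assumptions, not values from the slides.

```python
# Assign an arriving job to the input queue of the smallest fixed
# partition large enough to hold it (sizes are hypothetical).
def assign_queue(job_size, partitions):
    """Return the index of the smallest partition that fits the job, or None."""
    candidates = [(size, i) for i, size in enumerate(partitions) if size >= job_size]
    if not candidates:
        return None  # the job can never fit in any partition
    return min(candidates)[1]

partitions = [100, 200, 400, 700]  # partition sizes in KB, fixed at boot
print(assign_queue(180, partitions))  # goes to the 200KB partition's queue
```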

Multiprogramming with Fixed Partitions
The problem with multiple input queues is that they can waste CPU time and memory: small jobs may wait in the queue for a small partition even though a large partition is free. An alternative approach is to use a single input queue.

Multiprogramming with Fixed Partitions
In a single input queue scheme, when a partition becomes free, the job closest to the front of the queue that fits is loaded into the partition. Another approach is to search the entire queue for the job that best fits the free partition. This discriminates against small jobs, however, and tends to be avoided.
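The two single-queue policies can be sketched like this; the queue contents and the free-partition size are illustrative assumptions.

```python
# Single input queue: two policies for picking the next job when a
# partition of size `free` becomes available.
def first_fit(queue, free):
    """Job closest to the front of the queue that fits the free partition."""
    for i, size in enumerate(queue):
        if size <= free:
            return i
    return None

def best_fit(queue, free):
    """Job leaving the least unused space (discriminates against small jobs)."""
    fits = [(free - size, i) for i, size in enumerate(queue) if size <= free]
    return min(fits)[1] if fits else None

queue = [300, 50, 190, 200]       # waiting job sizes in KB (assumed)
print(first_fit(queue, 200))      # the 50KB job, nearest the front
print(best_fit(queue, 200))       # the 200KB job, the tightest fit
```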

Multiprogramming with Fixed Partitions

One solution for the small jobs is to have at least one small partition. Another option is never to skip over a job in the queue more than k times. This scheme was used by IBM OS/360 mainframes, where it was called MFT (or OS/MFT). Most OSes do not use it today.

Modeling Multiprogramming
Multiprogramming increases CPU utilization. As a crude analogy, if the average process uses the CPU only 10% of the time, then 10 processes should keep the CPU busy all the time. The implicit assumption is that the processes are never all waiting for I/O at the same time.

Modeling Multiprogramming
An improved model is probabilistic. Assume a process spends a fraction p of its time waiting for I/O to complete. With n processes in memory at once, the probability that all n are waiting for I/O simultaneously is p^n.

Modeling Multiprogramming
CPU utilization is therefore the complement:
CPU utilization = 1 - p^n
CPU utilization as a function of n, the degree of multiprogramming, is plotted in the next figure.
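The formula above is a one-liner; this sketch just evaluates it for the 80% I/O-wait case used later in the slides.

```python
def cpu_utilization(p, n):
    """CPU utilization with n processes, each waiting on I/O a fraction p of the time."""
    return 1 - p ** n

# With 80% I/O wait, utilization climbs with the degree of multiprogramming:
for n in (1, 4, 8):
    print(n, round(cpu_utilization(0.8, n), 2))
```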

Modeling Multiprogramming
[Figure: CPU utilization f(n) = 1 - p^n plotted against the degree of multiprogramming n]

Modeling Multiprogramming
Note that if a process spends 80% of its time waiting on I/O, at least 10 processes must be in memory to get the CPU waste below about 10%. An 80% wait may even be a conservative estimate for interactive processes waiting on keyboard input. The model also assumes the n processes are independent of one another.

Modeling Multiprogramming
Even with its simplicity, the model lets us make predictions about performance. Suppose a system has 32MB of memory, the OS takes 16MB, and each program needs 4MB, so four programs fit at once. With an 80% wait time, utilization is 1 - 0.8^4 = 0.5904.

Modeling Multiprogramming
If we add another 16MB, we go to 8-way multiprogramming, and utilization rises to 1 - 0.8^8 ≈ 0.83, a 38% increase. Repeating with yet another 16MB raises utilization to 93%, only a 10% improvement. We can use this kind of calculation to decide how much RAM to buy.
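The RAM-sizing arithmetic from the last two slides can be checked directly; the quoted gains (38%, 10%) follow from the rounded utilization figures.

```python
def cpu_utilization(p, n):
    return 1 - p ** n

p = 0.8                        # 80% I/O wait, as in the slides
u4 = cpu_utilization(p, 4)     # 32MB total: (32 - 16) / 4 = 4 programs
u8 = cpu_utilization(p, 8)     # add 16MB: 8 programs
u12 = cpu_utilization(p, 12)   # add 16MB more: 12 programs
print(round(u4, 4), round(u8, 2), round(u12, 2))
```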

Analysis of Multiprogramming
We can use this model to analyze batch systems. Suppose the average I/O wait is 80% and four jobs are submitted at 10:00, 10:10, 10:15, and 10:20, needing 4, 3, 2, and 2 minutes of CPU time respectively. The first process, running alone, uses 20% of the CPU, so it gets only 12 seconds of CPU time per minute of real time.

Analysis of Multiprogramming
The first process needs 4 CPU minutes, so at 0.2 CPU minutes per real minute it would take 4 / 0.2 = 20 minutes of real time to finish alone. From 10:00 to 10:10, job 1 runs by itself and gets 0.2 × 10 = 2 minutes of CPU work done. At 10:10, job 2 arrives.

Analysis of Multiprogramming
With 2 jobs, utilization increases to 1 - 0.8^2 = 36%, but each process gets only 36% / 2 = 18% of the CPU. Thus each process drops from 0.2 CPU minutes per real minute to 0.18. When job 3 arrives at 10:15, each process gets about 0.16 CPU minutes per minute.
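The whole batch example can be stepped through numerically. This is a small sketch of the slides' model (equal CPU shares, 80% wait); the time step and the helper name are my own choices.

```python
# Minute-by-minute sketch of the batch example: with k jobs in memory,
# total utilization is 1 - 0.8**k, shared equally among the k jobs.
def simulate(arrivals, needs, p=0.8, step=0.01):
    """arrivals: minutes after 10:00; needs: CPU minutes; returns finish times."""
    remaining = dict(enumerate(needs))
    done = {}
    t = 0.0
    while remaining:
        active = [j for j in remaining if arrivals[j] <= t]
        if active:
            share = (1 - p ** len(active)) / len(active)  # CPU-min per real minute
            for j in active:
                remaining[j] -= share * step
                if remaining[j] <= 0:
                    done[j] = round(t, 1)
                    del remaining[j]
        t += step
    return done

finish = simulate([0, 10, 15, 20], [4, 3, 2, 2])
print(finish)  # job 1 finishes first, a little before 10:22
```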

Analysis of Multiprogramming

Relocation and Protection
Two problems need to be solved in multiprogramming:
–Relocation: how a program's addresses are adjusted for wherever it is loaded in memory
–Protection: how to prevent processes from overwriting each other

Relocation and Protection
When a program is linked, it is combined with libraries and user procedures, and the linker must know the address at which the program will reside in memory. Suppose the first instruction is a call to absolute address 100 in the binary program.

Relocation and Protection
If the program is loaded into partition 1 at memory location 200K, that call will most likely jump to location 100, which is inside the OS. What we want is a jump to address 200K + 100. This is called the relocation problem.

Relocation and Protection
One solution is to modify the instructions as the program is read into memory. The binary carries a list of the addresses that must be relocated. This solves the relocation problem, but does not address protection: a program could still craft an address and jump to any location in memory.
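Load-time relocation can be sketched as below. The "binary image" and its encoding are illustrative assumptions; a real loader patches machine-code operands, not Python lists.

```python
# Load-time relocation: the binary carries a table of offsets whose
# words hold absolute addresses; the loader adds the load base to each.
def relocate(image, reloc_table, base):
    """image: list of words; reloc_table: offsets of address words to patch."""
    patched = list(image)
    for off in reloc_table:
        patched[off] += base
    return patched

# A call to absolute address 100, loaded at 200K (opcode 0x0E is made up):
image = [0x0E, 100]                      # word at offset 1 is an address operand
print(relocate(image, [1], 200 * 1024))  # the operand becomes 200K + 100
```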

Relocation and Protection
For OS/360, IBM chose a protection mechanism that divides memory into 2KB blocks and assigns a 4-bit protection key to each block. The program status word (PSW) contains the running program's 4-bit key; if the keys do not match, the program cannot write to the address. Only the OS can modify the PSW value.

Relocation and Protection
An alternative that handles both relocation and protection is to use two special hardware registers, the base and limit registers. When a program is loaded, the base register is set to the start of its partition, and this value is added to every memory reference; the limit register holds the end of the partition. In our example, the base would be 200K, so a reference to address 100 becomes, in effect, 200K + 100.

Relocation and Protection
On every memory reference, the address is also checked against the limit register to make sure it does not reach beyond the partition. The comparison is a fast test and adds little cost.
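The base/limit check is simple enough to sketch in software; the fault type and function names are my own, standing in for what the hardware does on every reference.

```python
# Base/limit translation: every reference is checked against the limit
# and relocated by the base before it reaches the memory bus.
class MMUFault(Exception):
    """Stand-in for the hardware trap on an out-of-partition reference."""

def translate(virtual_addr, base, limit):
    """Return the physical address, or raise on a reference past the limit."""
    if virtual_addr < 0 or virtual_addr >= limit:
        raise MMUFault(f"reference {virtual_addr} outside partition")
    return base + virtual_addr

# The slide's example: base 200K, so address 100 maps to 200K + 100.
print(translate(100, 200 * 1024, 64 * 1024))
```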

Relocation and Protection
The disadvantage is the addition on every reference, which can be slow; this can be mitigated with special addition circuits. The CDC 6600, the first supercomputer, used this scheme. The Intel 8088 (a cheaper 8086) had base registers but no limit registers.

Swapping
With current OSes the scenario differs from fixed partitioning: we want dynamically sized partitions, and sometimes we cannot fit an entire program into memory at once.

Swapping
There are two approaches, depending on the hardware available:
–Swapping: the simplest; brings each process into and out of memory as a whole unit from disk
–Virtual memory: allows portions of a process to be brought into and out of memory from disk
We first look at swapping.

Swapping

Swapping
The major difference from the fixed partitioning scheme is that partition sizes and numbers can now vary dynamically. This improves memory utilization, but at the cost of keeping track of that freedom. The tracking can be done with bitmaps or with linked lists of free and used regions.
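The bitmap variant can be sketched as follows; the allocation-unit count and first-fit search are illustrative choices, not details from the slides.

```python
# Bitmap tracking of memory in fixed-size allocation units:
# bit 0 = unit free, bit 1 = unit in use.
class Bitmap:
    def __init__(self, units):
        self.bits = [0] * units

    def allocate(self, n):
        """First-fit search for a run of n free units; returns start or None."""
        run = 0
        for i, b in enumerate(self.bits):
            run = run + 1 if b == 0 else 0
            if run == n:
                start = i - n + 1
                for j in range(start, i + 1):
                    self.bits[j] = 1
                return start
        return None

    def free(self, start, n):
        for j in range(start, start + n):
            self.bits[j] = 0

m = Bitmap(8)
print(m.allocate(3))  # first process gets units starting at 0
print(m.allocate(4))  # second process follows immediately after
```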

Swapping
Swapping creates the problem of multiple holes in memory. It is possible to move all the processes downward in memory as far as they will go, which is called memory compaction. This is done only as a last resort, since it wastes a lot of CPU time: a 256MB machine that moves 4 bytes every 40 ns takes about 2.7 seconds to compact all of memory.
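The 2.7-second figure follows directly from the slide's numbers:

```python
# Compaction cost from the slide: copy 256MB, moving 4 bytes per 40 ns.
mem_bytes = 256 * 1024 * 1024
moves = mem_bytes // 4          # number of 4-byte moves
seconds = moves * 40e-9         # 40 ns per move
print(round(seconds, 1))        # about 2.7 seconds
```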

Swapping
One point of concern is how much memory to allocate for a process when it is created or swapped into RAM. If processes are of fixed size, there is no problem. Often, however, a process's data segment can grow, for example through dynamic allocation from a heap.

Swapping
If there is a hole adjacent to the process, it can be allowed to grow into the hole. If there is no adjacent hole, we have a problem: we may have to swap out other processes to make room when the current process needs more space for its data segment.

Swapping
If we know the process will grow, we can give it extra memory when it starts or is swapped in. When it is swapped out to disk, however, we save only the memory actually in use, not the unused growth room.

Swapping
If there are two growable segments, such as a data segment and a stack segment, they can grow toward each other from opposite ends of the partition. When one segment would cross into the other, the OS has to allocate a bigger partition.
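The opposite-end growth check amounts to testing whether the gap between the two segments still holds the request; the boundary values below are illustrative assumptions.

```python
# One partition: data segment grows up from data_top, stack segment
# grows down from stack_bottom; growth fails when they would collide.
def try_grow(data_top, stack_bottom, amount, which):
    """Return the new boundary, or None if the segments would cross."""
    if which == "data":
        new_top = data_top + amount
        return new_top if new_top <= stack_bottom else None
    new_bottom = stack_bottom - amount
    return new_bottom if new_bottom >= data_top else None

# Partition with data ending at 64K and stack starting at 96K (assumed):
print(try_grow(64 * 1024, 96 * 1024, 16 * 1024, "data"))   # fits in the gap
print(try_grow(64 * 1024, 96 * 1024, 40 * 1024, "stack"))  # would collide: None
```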

Swapping