W4118 Operating Systems Instructor: Junfeng Yang.

Logistics
 Homework 4 out, due 3:09pm 3/26
 You will add a new scheduling policy to Linux
 This is likely the most difficult programming assignment of this course, so start today

Last lecture: Advanced Scheduling
 Advanced scheduling concepts
   Multilevel queue scheduling
   Multiprocessor scheduling
   Real-time scheduling
 Linux scheduling
   Goals
   Data structures: runqueue, task_struct
   Procedures: schedule(), scheduler_tick()

Today: Memory Management
 Dynamic memory allocation
   Stack and heap
 Intro to memory management

Motivation for Dynamic Memory Allocation
 Static (compile-time) allocation is not possible for all data
   Recursive calls
   Runtime input from the user
   Complicated data structures, …
 Two ways of dynamic allocation
   Stack allocation: restricted, but simple and efficient
   Heap allocation: more general, but less efficient and more difficult to implement

Stack Organization
 Stack: memory is freed in the opposite order from allocation
   Last In First Out (LIFO)
 When is it useful?
   When memory usage follows a LIFO pattern
   Example: function call frames
 Implementation
   A pointer separates allocated space from free space
   Allocate: increment the pointer
   Free: decrement the pointer
 Advantages
   Simple and efficient
   Keeps all free space contiguous
 Disadvantage
   Not suitable for data whose lifetimes do not follow LIFO order

Heap Organization
 Heap: allocate from arbitrary locations
   Memory consists of allocated areas and free areas (holes)
 When is it useful?
   When allocations and frees come in unpredictable order
   Complex data structures: new in C++, malloc in C, kmalloc in the Linux kernel
 Advantage: general; works with arbitrary allocation and free patterns
 Disadvantage: can end up with small chunks of free space

Fragmentation
 Fragmentation: small chunks of free memory, too small to satisfy future allocations
   External: visible to the system
   Internal: visible to the process (e.g. if the allocator rounds requests up to some granularity)
 Goals
   Reduce the number of holes
   Keep holes large
 Stack: no fragmentation; all free space stays together as one big hole

Heap Implementation
 Data structure: linked list of free blocks
   The free list chains the free blocks together
 Allocation
   Choose a block large enough for the request
   Update the free list
 Free
   Add the block back to the list
   Merge adjacent free blocks

Best vs. First vs. Worst
 Best fit
   Search the whole list on each allocation
   Choose the smallest block that can satisfy the request
   Can stop the search early if an exact match is found
 First fit
   Choose the first block that can satisfy the request
 Worst fit
   Choose the largest block (most leftover space)
Which is better?

Examples
 The best algorithm depends on the sequence of requests
 Example: the free list has two blocks, of sizes 20 and 15 bytes
   Allocation requests: 10, then 20
     Best fit satisfies both (10 from the 15-byte block); first fit takes 10 from the 20-byte block and then fails on the 20
   Allocation requests: 8, 12, then 12
     First fit satisfies all three; best fit takes 8 from the 15-byte block and fails on the final 12

Comparison of Allocation Strategies
 Best fit
   Tends to leave some very large holes and many very small holes
   Disadvantage: the very small holes may be useless
 First fit
   Tends to leave “average”-size holes
   Advantage: faster than best fit
 Worst fit
   Simulations show that worst fit is the worst in terms of storage utilization

Today: Memory Management
 Dynamic memory allocation
   Stack
   Heap
   Allocation strategies
 Intro to memory management

Motivation for Memory Management
 Simple uniprogramming, with a single segment per process
   Early batch systems
   Early personal computers
 Disadvantages
   Only one process can run at a time
   The process can destroy the OS
 (figure: physical memory divided between the OS and a single user process)

Multiple Processes in Memory
 (figure: physical memory holding Process A and Process B at the same time)

Multiprogramming Wish-list
 Sharing: multiple processes coexist in main memory
 Transparency
   Processes are not aware that memory is shared
   A process runs regardless of the number and/or locations of other processes
 Protection
   Processes cannot corrupt the OS or other processes
   Privacy: processes cannot read the data of other processes
 Efficiency: should have reasonable performance
   The purpose of sharing is to increase efficiency
   Do not waste CPU or memory resources

Relocation Transparency
 Relocation: a process can run anywhere in memory (its location can’t be predicted in advance)
 How?

Background
 Compiler: compiles source files into object files
 Linker: links object files and system libraries into a program
 Loader: loads the program and dynamically-linked libraries from disk into memory, placing them within a process so it can run

Dynamic Linking
 Linking is postponed until execution time
 Implementation
   A small piece of code, the stub, locates the appropriate memory-resident library routine
   The stub replaces itself with the address of the routine, then executes the routine
 Advantages
   Useful for libraries: updating the library updates all programs that use it (shared libraries)
   Saves space
 Disadvantages
   Difficult to manage dependencies
   Runtime failures

Dynamic Loading
 A routine is not loaded until it is called
 Better memory-space utilization: an unused routine is never loaded
 Useful when large amounts of code are needed to handle infrequently occurring cases


When to Relocate?
 Compile time
   Hardwire the physical location at compile time (absolute code)
   Problem: each program must be written with all the others in mind; not really transparent
 Load time
   Compiled code is relocatable (relocatable code)
   How? All addresses are relative to a start address; change the start address to relocate
   Problem: once loaded, the process can’t change location or move
 Execution time
   The process can move during execution
   This is what’s generally done, but it needs hardware support

Relocation at Execution Time
 Map program-generated addresses to hardware addresses dynamically, at every reference
 MMU: Memory Management Unit, controlled by the OS
 The program sees logical (virtual) addresses; the hardware uses physical (real) addresses
 Address space: each process’s own view of memory
 (figure: the CPU issues logical addresses; the MMU translates them into physical addresses sent to memory)

Address Spaces
 (figure: logical view vs. physical view; each address space AS1, AS2, AS3 looks contiguous to its process, while the OS places them at different locations in physical memory)

Hardware Support
 Two operating modes
   Privileged (protected, kernel) mode: when the OS runs
     Entered on a trap into the OS (system calls, interrupts, exceptions)
     Allows certain privileged instructions to be executed
     Allows the OS to access all of physical memory
   User mode: when user processes run
     Hardware performs translation of logical addresses to physical addresses
     Protects the OS and other processes’ physical memory
 How to implement protection?
   Base register: start physical location of the address space
   Limit register: last valid address the process may access
     The process appears to have private memory of size equal to the limit register

Implementation
 Translation on every memory access
   Compare the logical address to the limit register; if greater, generate an exception
   Otherwise, add the base register to the logical address to form the physical address
 (figure: the CPU emits a logical address; hardware checks it against the limit register, raising an exception if it exceeds the limit, and adds the base register to produce the physical address)

Managing Processes with Base and Limit
 Context switch: the OS saves and restores the base and limit registers as part of the process state
 Protection requirements
   A user process cannot change the base and limit registers
   A user process cannot switch to privileged mode

Pros and Cons of Base and Limit
 Contiguous allocation: each process occupies a contiguous block of memory
 Advantages
   Supports dynamic relocation of address spaces
   Supports protection across multiple address spaces
   Cheap: few registers and little logic
   Fast: the add and the compare can be done in parallel
 Disadvantages
   Each process must be allocated contiguously in real memory
     Fragmentation: may be unable to allocate a new process
     Solution: swapping (next)
   Must allocate memory that may never be used
   No sharing: cannot share limited parts of the address space
     e.g. cannot share code while keeping data private

Swapping
 A process can be swapped temporarily out of memory to a backing store, then brought back into memory for continued execution
 Backing store: a fast disk, large enough to hold copies of all memory images for all users; must provide direct access to these memory images
 Roll out, roll in: a swapping variant used for priority-based scheduling; a lower-priority process is swapped out so a higher-priority process can be loaded and executed
 The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped
 Modified versions of swapping are found on many systems (e.g., UNIX, Linux, and Windows)

Schematic View of Swapping
 (figure: one process is swapped out of main memory to the backing store while another is swapped in)