WEEK 5: LINKING AND LOADING, MEMORY MANAGEMENT, PAGING AND SEGMENTATION. Operating Systems CS3013 / CS502.


Agenda
- Linking and Loading
- Memory Management
- Paging and Segmentation

Objectives
- Understand linking and loading
- Differentiate static and dynamic linking

Executable Files
Every OS expects executable files in a specific format:
- Header info: code locations, data locations
- Code & data
- Symbol table: a list of names of things defined in your program and where they are located within it, plus a list of names of things defined elsewhere that are used by your program, and where they are used

Example
Symbols defined in the program and used elsewhere: main
Symbols defined elsewhere and used in the program: printf

#include <stdio.h>

int main() {
    printf("hello, world\n");
}

Example
Symbols defined in the program and used elsewhere: main
Symbols defined elsewhere and used in the program: printf, errno

#include <stdio.h>
extern int errno;

int main() {
    printf("hello, world\n");
}

Linking and Loading
Linking: combining a set of programs, including library routines, to create a loadable image.
- Resolving symbols defined within the set
- Listing symbols needing to be resolved by the loader
Loading: copying the loadable image into memory, connecting it with any other programs already loaded, and updating addresses as needed.
- (In all systems) the kernel image is special (its own format)

Source
Binding is the act of connecting names to addresses.
Most compilers produce relocatable object code: addresses relative to zero.
The linker combines multiple object files and library modules into a single executable file: addresses also relative to zero.
The loader reads the executable file:
- Allocates memory
- Maps addresses within the file to memory addresses
- Resolves names of dynamic library items
[Diagram: Source (.c, .cc) → Compiler → Object (.o) → Linker, together with other objects (.o) and static libraries (.a) → Executable → Loader, together with dynamic libraries (.dll) → In-memory Image]

Static Linking
[Diagram: Main.c and Printf.c → gcc → Main.o and Printf.o; ar packages Printf.o into a static library; the linker combines Main.o and the static library into a.out; the loader places a.out into memory]

Classic Unix
- Linker lives inside the cc or gcc command
- Loader is part of the exec system call
- Executable image contains all object and library modules needed by the program
- Entire image is loaded at once
- Every image contains its own copy of common library routines, so every loaded program duplicates them in memory

Dynamic Linking
- Routine is not loaded until it is called
- Better memory-space utilization; unused routines are never loaded
- Useful when large amounts of code are needed to handle infrequently occurring cases

Linker-Assisted Dynamic Linking
For a function call to a dynamic function:
- The call is indirect, through a link table
- Each link table entry is initialized with the address of a small stub of code that locates and loads the module
- When the module is loaded, the loader replaces the link table entry with the address of the loaded function
- When it is unloaded, the loader restores the table entry with the stub address
- Works only for function calls, not static data

Linker-Assisted Dynamic Linking
[Diagrams: before the first call, main()'s indirect call to printf goes through a link table entry pointing at a stub, which loads IOLib; after loading, the entry points directly at printf() inside IOLib, alongside read() and scanf()]

Shared Library
Observation: "everyone" links to the standard libraries (libc.a, etc.). These consume space in:
- every executable image
- every process memory at runtime
Would it be possible to share the common libraries?

Shared Library
Libraries designated as "shared" (.so, .dll, etc.):
- Supported by corresponding ".a" libraries containing symbol information
- Linker sets up symbols to be resolved at runtime
- Loader: is the library already in memory? If yes, map it into the new process space; if not, load it and then map it

Run-Time Dynamic Linking
[Diagram: as in static linking, gcc compiles Main.c and Printf.c to Main.o and Printf.o, and ar packages Printf.o into a library; but the library is dynamic, the linker produces a.out without it, and a run-time loader maps the dynamic library into memory when needed]

Dynamic Linking
- Complete linking is postponed until execution time
- A stub is used to locate the appropriate memory-resident library routine
- The stub replaces itself with the address of the routine and executes the routine
- The operating system needs to check that the routine is in the address space of the process
- Dynamic linking is particularly useful for libraries

Dynamic Shared Library
Static shared libraries require address-space pre-allocation.
Dynamic shared libraries bind addresses at runtime:
- Code must be position independent
- At runtime, references are resolved as library_relative_address + library_base_address

Linker
Linker: a key part of the OS, though not in the kernel.
- Combines object files and libraries into a "standard" format that the OS loader can interpret
- Resolves references and does static relocation of addresses
- Creates information for the loader to complete the binding process
- Supports dynamic shared libraries

Loader
An integral part of the OS. Resolves addresses and symbols that could not be resolved at link time.
May be small or large:
- Small: classic Unix
- Large: Linux, Windows XP, etc.
May be invoked explicitly or implicitly:
- Explicitly, by a stub or by the program itself
- Implicitly, as part of exec

Agenda
- Linking and Loading
- Memory Management
- Paging and Segmentation

Objectives
- Understand different memory management strategies
- Differentiate internal and external fragmentation

Simple Memory Management
One process in memory:
- Uses the entire memory available
- Each program needs its own I/O drivers
[Diagram: RAM holding the user program and its I/O drivers]

Simple Memory Management
Small, protected OS; "mono-processing": one user program at a time.
[Diagrams: RAM holding the OS and a user program; a variant with the OS and device drivers in ROM]

Memory Management with Fixed Partitions
Unequal per-partition queues can waste large partitions while jobs wait elsewhere.
[Diagram: OS plus partitions of 200k, 300k, 500k, and 900k, shown with one queue per partition and with a single shared queue]

Physical Memory
[Diagram: physical address space from 0x00000000 to 0xFFFFFFFF holding the OS kernel, Process 1, Process 2, Process 3, and empty space]

Process 2 Terminates
[Diagram: the same physical address space, with Process 2's partition now a hole: OS kernel, Process 1, empty, Process 3, empty]

Problem
What happens when Process 4 comes along and requires space larger than the largest empty partition?
[Diagram: Process 4 fits in neither hole]

Solution
Virtual address: an address used by the program that is translated by the computer into a physical address each time it is used. Also called a logical address.
When the program utters 0x00105C, the machine accesses 0x01605C.

Implementation
Base and Limit registers:
- Base automatically added to all addresses
- Limit checked on all memory references
- Introduced in minicomputers of the early 1970s
- Loaded by the OS at each context switch
[Diagram: the CPU's logical address is compared against the Limit register; if below the limit, the Base register is added to form the physical address; otherwise an error trap is raised]

Physical Memory
[Diagram: Base and Limit registers selecting one process's partition within the physical address space 0x00000000 to 0xFFFFFFFF]

Advantage
- No relocation of program addresses at load time: all addresses relative to zero!
- Built-in protection provided by Limit: no physical protection needed per page or block
- Fast execution: addition and limit check at hardware speeds within each instruction
- Fast context switch: need only change the base and limit registers
- A partition can be suspended and moved at any time: the process is unaware of the change, but this is potentially expensive for large processes due to copy costs!

[Diagram: after moving partitions, the physical address space 0x00000000 to 0xFFFFFFFF holds the OS kernel, Process 1, Process 3, and Process 4 contiguously, with Base and Limit selecting the running process]

Memory Allocation Strategy
- First Fit: the first hole that is big enough
- Best Fit: the smallest hole that is big enough
- Worst Fit: the largest hole

Challenge: Memory Allocation
How should we partition the physical memory for processes?

Fixed-sized Partitions
Fixed partitions divide memory into equal-sized pieces (except for the OS):
- Degree of multiprogramming = number of partitions
- Simple policy to implement: all processes must fit into partition space; find any free partition and load the process
Problem: what is the "right" partition size?
- Process size is limited
- Internal fragmentation: unused memory in a partition that is not available to other processes

Internal Fragmentation
[Diagram: within the space allocated to a process, the program, data, and stack leave "empty" room for growth that no other process can use]

Variable-sized Partitions
Idea: remove the "wasted" memory that is not needed in each partition, eliminating internal fragmentation. Memory is dynamically divided into partitions based on process needs.
Definition:
- Hole: a block of free or available memory
- Holes are scattered throughout physical memory
A new process is allocated memory from a hole large enough to fit it.

External Fragmentation
Total memory space exists to satisfy a request, but it is not contiguous.
[Diagram: holes of 150k and 100k cannot satisfy Process 4's 200k request]

Analysis of External Fragmentation
Assumptions: first-fit allocation strategy; system at equilibrium.
Fifty-percent rule: with N allocated blocks, roughly ½N blocks are lost to fragmentation, so about one third of memory is unusable.

Compaction
[Diagrams: shuffling the processes toward the OS kernel coalesces the scattered holes (50k, 75k, 100k, 125k in the example) into one large free block; (a) and (b) show two possible shuffles]

Solutions?
Minimize external fragmentation:
- Large blocks, but that brings back internal fragmentation
Tradeoff:
- Sacrifice some internal fragmentation for reduced external fragmentation
- Paging!

Agenda
- Linking and Loading
- Memory Management
- Paging and Segmentation

Objectives
- Understand the paging mechanism
- Translate a virtual address using paging
- Analyze different requirements for paging
- Differentiate different paging strategies
- Understand segmentation

Memory Management
Logical address vs. physical address.
Memory Management Unit (MMU): a set of registers and mechanisms to translate virtual addresses to physical addresses.
Processes (and processors) see virtual addresses:
- The virtual address space is the same for all processes, usually 0-based
- Virtual address spaces are protected from other processes
The MMU and devices see physical addresses.
[Diagram: the processor issues logical addresses to the MMU, which issues physical addresses to memory and I/O devices]

Paging
The logical address space is noncontiguous; a process gets memory wherever it is available.
- Divide physical memory into fixed-size blocks called frames (size a power of 2, between 512 and 8192 bytes)
- Divide logical memory into fixed-size blocks called pages
An address generated by the CPU is divided into:
- Page number (p): index into the page table, which contains the base address of each page in physical memory (the frame)
- Page offset (d): offset into the page/frame

Paging
[Diagram: the virtual page number of a logical address indexes the page table to obtain a page frame number, which is combined with the unchanged offset to form the physical address within physical memory's page frames]

Paging Example
Page size 4 bytes; memory size 32 bytes (8 frames).
[Diagram: a 4-page logical memory mapped through a page table into scattered frames of physical memory]

Paging Example
[Diagram: the same 4-page example, showing how each logical address's page number is replaced by its frame number while the offset passes through unchanged]

Paging
Address space of 2^m bytes; page offset of n bits (page size 2^n); page number of m−n bits (2^(m−n) pages). Note: no addresses are lost.
An address is split as: page number p (m−n bits) | page offset d (n bits).

Paging Example
Consider: physical memory = 128 bytes; physical address space = 8 frames.
- How many bits in an address?
- How many bits for the page number?
- How many bits for the page offset?
- Can a logical address space have only 2 pages? How big would the page table be?

Another Paging Example
Consider: 8 bits in an address; 3 bits for the frame/page number.
- How many bytes (words) of physical memory?
- How many frames are there?
- How many bytes is a page?
- How many bits for the page offset?
- If a process's page table is 12 bits, how many logical pages does it have?

Page Table Example (7 bits)
With m−n = 3 page-number bits and n = 4 offset bits:
[Diagram: Process A's pages 0 and 1 and Process B's pages 0 and 1 each map through their own page table into distinct frames of physical memory]

Paging Tradeoffs
Advantage:
- No external fragmentation (no compaction)
- Relocation (now of pages, where before it was of whole processes)
Disadvantage: internal fragmentation.
- Consider 2,048-byte pages and a 72,766-byte process: 35 full pages + 1,086 bytes, so 36 frames are allocated and 2,048 − 1,086 = 962 bytes are wasted
- On average: ½ page of internal fragmentation per process
- Small pages reduce the waste but increase overhead: a page table per process (context-switch cost + space) and lookup cost (especially if the table is paged to disk)

Implementation of Page Table
How would you implement page tables in an operating system?

Implementation of Page Table
- Page table kept in main memory
- Page Table Base Register (PTBR) and a page table length register locate it
- Cost: 2 memory accesses per data access. Solution?
[Diagram: the PTBR points at the page table in physical memory; each logical access first reads the table, then the data]

Associative Registers
[Diagram: the page number of a logical address is first looked up in the associative registers (the TLB); on a hit the frame number f comes straight from the registers, on a miss it comes from the page table, and f plus the offset d addresses physical memory]

Associative Register Performance
Hit ratio: percentage of times that a page number is found in the associative registers.
- Hit time = register time + memory time
- Miss time = register time + 2 × memory time
- Effective access time = hit ratio × hit time + miss ratio × miss time
Example: 80% hit ratio, register time = 20 ns, memory time = 100 ns:
- Effective access time = 0.8 × 120 + 0.2 × 220 = 140 ns

Protection
- Protection bits kept with each frame, stored in the page table
- Can expand to more permissions (e.g., read/write/execute)
[Diagram: a page table whose entries carry valid (v) and invalid (i) bits; an access through an invalid entry traps]

Large Address Spaces
Typical logical address space: 4 GBytes ⇒ 32 address bits (4-byte addresses).
Typical page size: 4 KBytes = 2^12 bytes.
The page table may have 2^32 / 2^12 = 2^20 ≈ 1 million entries. At 3 bytes per entry, that is 3 MB per process! We do not want all of that in RAM.
Solution? Page the page table: multilevel paging.

Multilevel Paging
The logical address is split into p1 (10 bits, indexing the outer page table), p2 (10 bits, indexing an inner page table), and the page offset d.
[Diagram: the outer page table points at inner page tables, which point at the pages of logical memory]

Inverted Page Table
One entry per physical frame, recording which (pid, page number) occupies it. Translation searches the table for the pair (pid, p); if it is found at index i, the physical address is frame i plus offset d.
[Diagram: the logical address (pid, p, d) is matched against the inverted table; the matching entry's index i replaces p]

View of Memory
Paging loses the user's view of memory.

Segmentation
Logical address: a pair (segment number, offset).
Segment table: maps the two-dimensional user-defined address into a one-dimensional physical address.
- base: starting physical location
- limit: length of the segment
Hardware support: Segment Table Base Register, Segment Table Length Register.

Segmentation
[Diagram: the segment number of a virtual address indexes the segment register table; if the offset is within that entry's limit, it is added to the base to form the physical address, otherwise a protection fault is raised]

Segmentation
Protection: with each entry in the segment table, associate:
- a validation bit (0 ⇒ illegal segment)
- read/write/execute privileges
Protection bits are associated with whole segments. Since segments vary in length, memory management becomes a dynamic storage-allocation problem, and we still have external fragmentation of memory.