Memory Management (Ch 4: 4.1-4.2)

Operating Systems: Memory Management (Ch 4: 4.1-4.2)

Overview
- Provide services (done): processes (done), files (briefly here, more in cs4513)
- Manage devices: processor (done), memory (next!), disk (done after files)

Simple Memory Management
One process in memory, using all of it; each program needs its own I/O drivers (until ~1960).
[Diagram: RAM holding the I/O drivers and the user program]

Simple Memory Management
Small, protected OS and drivers (e.g., DOS).
[Diagram: three layouts: OS in RAM below the user program; OS in ROM above it; device drivers in ROM with the OS and user program in RAM]
"Mono-programming": no multiprocessing! Early efforts used "swapping", but it was slooooow.

Multiprocessing w/Fixed Partitions
Simple!
[Diagram (a), (b): Partitions 1-4 (200k, 300k, 500k, 900k) above the OS]
Drawbacks: waste of a large partition, skipping of small jobs, unequal queues.
Hey, processes can be in different memory locations!

Address Binding
- Compile time: source may have absolute binding (.com)
- Link time: dynamic or static libraries
- Load time: relocatable code
- Run time: relocatable memory segments, overlays, paging
[Diagram: source is compiled to an object file, linked into a load module, loaded into RAM as a binary, then run]

Logical vs. Physical Addresses
- Compile-time and load-time binding: logical and physical addresses are the same
- Run-time binding: logical and physical addresses differ
[Diagram: CPU issues logical address 346; the MMU adds the relocation register (14000) to produce physical address 14346 in memory]
- User addresses go from 0 to max; physical addresses go from R+0 to R+max

Relocatable Code Basics
- Allow logical addresses
- Protect other processes
[Diagram: the MMU compares the CPU's logical address against the limit register; if it is smaller, the relocation register is added to form the physical address, otherwise an error is raised]
- Addresses must be contiguous!
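A minimal sketch of this limit-plus-relocation check in C. The relocation value 14000 and logical address 346 come from the earlier slide; the 1000-byte limit and all names are made up for illustration.

```c
/* Sketch of base/limit (relocation-register) translation performed by the
 * MMU on every memory reference. Illustrative only, not a real OS's code. */
#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint32_t limit;  /* size of the process's contiguous region (assumed)  */
    uint32_t reloc;  /* relocation (base) register: where the region starts */
} mmu_regs;

/* Returns 1 and fills *phys on success; 0 models an addressing error/trap. */
int translate(const mmu_regs *r, uint32_t logical, uint32_t *phys)
{
    if (logical >= r->limit)        /* outside this process's space */
        return 0;
    *phys = r->reloc + logical;     /* contiguous region: just add the base */
    return 1;
}

int main(void)
{
    mmu_regs r = { .limit = 1000, .reloc = 14000 };
    uint32_t phys;
    if (translate(&r, 346, &phys))
        printf("logical 346 -> physical %u\n", (unsigned)phys);  /* 14346 */
    return 0;
}
```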

Design Technique: Static vs. Dynamic
- Static solutions: compute ahead of time, for predictable situations
- Dynamic solutions: compute when needed, for unpredictable situations
- Some situations use dynamic because static is too restrictive (malloc)
- Examples: memory allocation, type checking

Variable-Sized Partitions
- Idea: remove the "wasted" memory that is not needed in each partition
- Definition: hole, a block of available memory; holes of various sizes are scattered throughout physical memory
- A new process is allocated memory from a hole large enough to fit it

Variable-Sized Partitions
[Diagram: successive memory snapshots as process 8 finishes, process 9 arrives, process 10 arrives, and process 5 finishes, leaving holes among processes 2, 5, 9, and 10]
The OS keeps track of: allocated partitions, free partitions (holes), and queues!

Memory Request?
What if a process requests additional memory, e.g. malloc(20k)?
[Diagram: process 3, sitting between process 8 and process 2, issues malloc(20k)]

Internal Fragmentation
Leave some "empty" space for each process: room for growth (e.g. A's stack), allocated to A along with A's data and program.
[Diagram: memory allocated to A holds A's program, A's data, and A's stack with room for growth, above the OS]
Internal fragmentation: allocated memory may be slightly larger than requested memory, and the extra space goes unused.

External Fragmentation
External fragmentation: enough total memory space exists to satisfy a request, but it is not contiguous.
[Diagram: 50k and 100k holes around processes 3, 8, and 2; a 125k request from process 9 cannot be satisfied]
"But how much does this matter?"

Analysis of External Fragmentation
Assume the system is at equilibrium and consider a process in the middle of memory: if there are N processes, a neighboring block is a process half the time and a hole half the time, giving about N/2 holes. This is the fifty-percent rule.
The asymmetry is fundamental: adjacent holes are combined, but adjacent processes are not.

Compaction
Shuffle memory contents to place all free memory together in one large block.
Only possible if relocation is dynamic! Same I/O DMA problem.
[Diagram (a), (b): before compaction, holes of 50k, 90k, 60k, and 100k are scattered among processes 2, 3, and 8; after compaction the processes are contiguous and the free space forms one block with room for process 9 (125k)]

Cost of Compaction
[Diagram: before and after compaction of processes 1, 2, 3, and 8 with scattered holes of 50k, 90k, 60k, and 100k]
With 128 MB of RAM at 100 nsec per access, compaction takes about 1.5 seconds! Disk is much slower!

Solution?
Want to minimize external fragmentation.
Tradeoff: large blocks reduce external fragmentation, but cause internal fragmentation!
Sacrifice some internal fragmentation for reduced external fragmentation: Paging.

Where Are We?
Memory Management: fixed partitions (done), linking and loading (done), variable partitions (done)
Paging (next)
Misc

Paging
- Logical address space can be noncontiguous; a process gets memory wherever it is available
- Divide physical memory into fixed-size blocks called frames (size is a power of 2, between 512 and 8192 bytes)
- Divide logical memory into blocks of the same size, called pages

Paging
The address generated by the CPU is divided into:
- Page number (p): index into the page table, which contains the base address (frame) of each page in physical memory
- Page offset (d): offset into the page/frame
[Diagram: CPU emits (p, d); the page table maps p to frame f; (f, d) addresses physical memory]
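A minimal sketch of this split-and-lookup in C. The 4 KB page size and the page-table contents are illustration values, not from the slides.

```c
/* Sketch of paging translation: the logical address is split into a page
 * number (index into the page table) and an offset that is reused
 * unchanged within the frame. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_BITS 12u                    /* assumed 4 KB pages => 12-bit offset */
#define PAGE_SIZE (1u << PAGE_BITS)

uint32_t translate(uint32_t logical, const uint32_t *page_table)
{
    uint32_t p = logical >> PAGE_BITS;        /* page number  */
    uint32_t d = logical & (PAGE_SIZE - 1);   /* page offset  */
    uint32_t f = page_table[p];               /* frame number */
    return (f << PAGE_BITS) | d;              /* physical address */
}

int main(void)
{
    uint32_t page_table[] = { 5, 6, 1, 2 };   /* toy table: pages 0-3 -> frames */
    /* page 1, offset 0xABC -> frame 6, offset 0xABC = 0x6ABC */
    printf("0x%x\n", (unsigned)translate(0x1ABC, page_table));
    return 0;
}
```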

Paging Example
Page size 4 bytes; memory size 32 bytes (8 frames).
[Diagram: logical memory of 4 pages (0-3) mapped through a page table into the 8 frames of physical memory]

Paging Example
[Diagram: the same example in binary; the page-number bits of each logical address are translated through the page table (page to frame), while the offset bits pass through unchanged to physical memory]

Paging Hardware
Logical address space of 2^m bytes, page size of 2^n bytes; physical memory of 2^m bytes.
The logical address splits into a page number p of m-n bits and a page offset d of n bits.
Note: not losing any bytes!
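Writing the slide's "not losing any bytes" claim out as arithmetic:

```latex
2^{m-n}\ \text{pages} \times 2^{n}\ \text{bytes/page} = 2^{m}\ \text{bytes}
```

so the pages exactly cover the 2^m-byte logical address space.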

Paging Example
Consider: physical memory = 128 bytes, physical address space = 8 frames.
- How many bits in an address?
- How many bits for the page number?
- How many bits for the page offset?
- Can a logical address space have only 2 pages? How big would the page table be?
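One possible worked solution (assuming logical and physical addresses have the same width here):

```latex
\begin{aligned}
128 &= 2^{7}\ \text{bytes} &&\Rightarrow 7\ \text{address bits}\\
8 &= 2^{3}\ \text{frames} &&\Rightarrow 3\ \text{bits of page/frame number}\\
128/8 &= 16 = 2^{4}\ \text{bytes per page} &&\Rightarrow 4\ \text{offset bits}
\end{aligned}
```

A logical address space of only 2 pages is allowed; its page table would have just 2 entries.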

Another Paging Example
Consider: 8 bits in an address, 3 bits for the frame/page number.
- How many bytes (words) of physical memory?
- How many frames are there?
- How many bytes is a page?
- How many bits for the page offset?
- If a process' page table is 12 bits, how many logical pages does it have?
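One possible worked solution (assuming each page-table entry is just a 3-bit frame number):

```latex
\begin{aligned}
2^{8} &= 256\ \text{bytes of physical memory}\\
2^{3} &= 8\ \text{frames}\\
8 - 3 &= 5\ \text{offset bits} \Rightarrow 2^{5} = 32\ \text{bytes per page}\\
12 / 3 &= 4\ \text{page-table entries} \Rightarrow 4\ \text{logical pages}
\end{aligned}
```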

Page Table Example
[Diagram: page number p is m-n = 3 bits, page offset d is n = 4 bits (b = 7). Process A's page table maps pages 0 and 1 to frames 1 and 4; Process B's page table maps pages 0 and 1 to frames 3 and 7. Physical memory holds Page 0A in frame 1, Page 0B in frame 3, Page 1A in frame 4, and Page 1B in frame 7]

Paging Tradeoffs
Advantages: no external fragmentation (no compaction); relocation (now of pages, before of whole processes).
Disadvantages:
- Internal fragmentation: consider 2048-byte pages and a 72,766-byte process: 35 pages + 1,086 bytes, so 962 bytes are wasted; on average 1/2 page per process, which argues for small pages!
- Overhead: a page table per process (context switch + space), plus the lookup (especially if the page table is paged to disk)
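Spelling out the internal-fragmentation arithmetic from the slide:

```latex
\lceil 72{,}766 / 2{,}048 \rceil = 36\ \text{pages},\qquad
36 \times 2{,}048 - 72{,}766 = 962\ \text{bytes wasted}
```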

Implementation of Page Table
Page table kept in registers: fast, but expensive! Only good when the number of frames is small.
[Diagram: registers vs. memory vs. disk]

Implementation of Page Table
Page table kept in main memory:
- Page Table Base Register (PTBR) points to the page table
- Page Table Length gives its size
[Diagram: logical memory, page table, and physical memory, with the PTBR locating the page table]
Two memory accesses per data/instruction access. Solution? Associative registers.

Associative Registers
[Diagram: the CPU's logical address (p, d) is checked against the associative registers, which hold (page number, frame) pairs; on a hit, frame f plus offset d gives the physical address; on a miss, the page table in memory is consulted]
Associative lookup costs about 10-20% of memory access time.

Associative Register Performance
Hit ratio: percentage of times that a page number is found in the associative registers.
Effective access time = hit ratio x hit time + miss ratio x miss time
- hit time = reg time + mem time
- miss time = reg time + mem time * 2
Example: 80% hit ratio, reg time = 20 nanosec, mem time = 100 nanosec:
.80 * 120 + .20 * 220 = 140 nanoseconds
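A minimal sketch of this formula in C; the numbers reproduce the slide's example, and the function names are made up.

```c
/* Effective access time with an associative-register (TLB) hit ratio. */
#include <stdio.h>

double effective_access_time(double hit_ratio, double reg_ns, double mem_ns)
{
    double hit_time  = reg_ns + mem_ns;        /* hit: one memory access       */
    double miss_time = reg_ns + 2.0 * mem_ns;  /* miss: page table, then data  */
    return hit_ratio * hit_time + (1.0 - hit_ratio) * miss_time;
}

int main(void)
{
    /* 80% hit ratio, 20 ns register lookup, 100 ns memory access -> 140 ns */
    printf("EAT = %.0f ns\n", effective_access_time(0.80, 20.0, 100.0));
    return 0;
}
```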

Protection
Protection bits kept with each frame and stored in the page table; can be expanded to more permissions.
[Diagram: each page table entry carries a valid (v) / invalid (i) protection bit alongside the frame number]

Large Address Spaces
Typical logical address space: 4 Gbytes => 2^32 addresses (32-bit, 4-byte address).
Typical page size: 4 Kbytes = 2^12 bytes.
The page table may then have 2^32 / 2^12 = 2^20 = 1 million entries; at 3 bytes per entry that is 3 MB per process!
We do not want all of that in RAM. Solution? Page the page table: multilevel paging.

Multilevel Paging
The logical address is split into page number p1 (10 bits), page number p2, and page offset d (12 bits).
[Diagram: the outer page table points to inner page tables, which point to the pages of logical memory]

Multilevel Paging Translation
[Diagram: p1 indexes the outer page table to find an inner page table; p2 indexes that inner table to find the desired page; d is the offset within it]
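A minimal sketch of the two-level walk in C, assuming the 10/10/12 split from the earlier slide; the table structures here are toy data structures, not a real hardware layout.

```c
/* Two-level page-table walk: outer table -> inner table -> frame. */
#include <stdio.h>
#include <stdint.h>

#define P1_BITS 10u   /* outer index  */
#define P2_BITS 10u   /* inner index  */
#define D_BITS  12u   /* page offset  */

typedef struct { uint32_t frame[1u << P2_BITS]; } inner_table;
typedef struct { inner_table *inner[1u << P1_BITS]; } outer_table;

uint32_t translate(const outer_table *outer, uint32_t logical)
{
    uint32_t p1 = logical >> (P2_BITS + D_BITS);               /* outer index */
    uint32_t p2 = (logical >> D_BITS) & ((1u << P2_BITS) - 1); /* inner index */
    uint32_t d  = logical & ((1u << D_BITS) - 1);              /* offset      */
    uint32_t f  = outer->inner[p1]->frame[p2];                 /* frame       */
    return (f << D_BITS) | d;
}

int main(void)
{
    static inner_table it;
    static outer_table ot;
    it.frame[3] = 42;            /* page (p1=0, p2=3) lives in frame 42 */
    ot.inner[0] = &it;
    /* p1=0, p2=3, d=0xABC -> frame 42, offset 0xABC = 0x2aabc */
    printf("0x%x\n", (unsigned)translate(&ot, 0x3ABC));
    return 0;
}
```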

Inverted Page Table
One entry per physical frame: the CPU generates (pid, p, d), the table is searched for the matching (pid, p), and the entry's index i combined with d forms the physical address.
[Diagram: CPU emits (pid, p, d); the inverted table is searched; (i, d) addresses physical memory]
Still need a page table per process for the backing store. Memory accesses are longer! (search + swap)
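A minimal sketch of the inverted-table lookup in C. The linear search keeps the idea visible; real systems typically hash. Table size and contents are illustration values.

```c
/* Inverted page table: one entry per frame, searched by (pid, page). */
#include <stdio.h>
#include <stdint.h>

#define NFRAMES   8
#define PAGE_BITS 12u          /* assumed 4 KB pages */

typedef struct { int pid; uint32_t page; } ipt_entry;

ipt_entry ipt[NFRAMES];        /* index i is frame number i */

/* Returns the physical address, or -1 if (pid, page) is not resident. */
long translate(int pid, uint32_t logical)
{
    uint32_t p = logical >> PAGE_BITS;
    uint32_t d = logical & ((1u << PAGE_BITS) - 1);
    for (uint32_t i = 0; i < NFRAMES; i++)
        if (ipt[i].pid == pid && ipt[i].page == p)
            return ((long)i << PAGE_BITS) | d;   /* frame i + offset */
    return -1;   /* page fault: fetch from the backing store */
}

int main(void)
{
    ipt[5] = (ipt_entry){ .pid = 7, .page = 2 };  /* pid 7, page 2 -> frame 5 */
    printf("%ld\n", translate(7, (2u << PAGE_BITS) | 0x10));  /* 5*4096 + 16 */
    return 0;
}
```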

Memory View
Paging loses the users' view of memory.
Need "logical" memory units that grow and contract, e.g. the stack, a shared library, a subroutine, the main program, the symbol table.
Solution? Segmentation!

Segmentation
Logical address: <segment, offset>
Segment table maps the two-dimensional user-defined address into a one-dimensional physical address:
- base: starting physical location
- limit: length of the segment
Hardware support: Segment Table Base Register, Segment Table Length Register

Segmentation
[Diagram: the CPU issues logical address (s, d); the segment table entry for s gives (limit, base); if d < limit, the physical address is base + d, otherwise an error is raised. Segments such as main and the stack live at different places in physical memory]
("Er, what have we gained?") Paged segments!
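A minimal sketch of the segment-table check in C. The base/limit values below are purely illustrative.

```c
/* Segment translation: <s, d> is checked against the segment's limit,
 * then added to its base. */
#include <stdio.h>
#include <stdint.h>

typedef struct { uint32_t base, limit; } segment;

/* Returns 1 and fills *phys, or 0 for an addressing error (trap). */
int translate(const segment *seg_table, uint32_t s, uint32_t d, uint32_t *phys)
{
    if (d >= seg_table[s].limit)
        return 0;                    /* offset beyond the segment length */
    *phys = seg_table[s].base + d;
    return 1;
}

int main(void)
{
    segment seg_table[] = {
        { .base = 1400, .limit = 1000 },  /* segment 0: e.g. main      */
        { .base = 6300, .limit =  400 },  /* segment 1: e.g. the stack */
    };
    uint32_t phys;
    if (translate(seg_table, 1, 53, &phys))
        printf("<1, 53> -> %u\n", (unsigned)phys);  /* 6353 */
    return 0;
}
```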

Memory Management Outline
- Basic (done): fixed partitions (done), variable partitions (done), paging (done)
- Enhanced (done)
- Specific (next): WinNT, Linux
- Linking and Loading

Memory Management in WinNT
- 32-bit addresses (2^32 = 4 GB address space)
- Upper 2 GB shared by all processes (kernel mode); lower 2 GB private per process
- Page size is 4 KB (2^12, so the offset is 12 bits)
- Multilevel paging (2 levels): 10 bits for the outer page table (page directory), 10 bits for the inner page table, 12 bits for the offset

Memory Management in WinNT
- Each page-table entry has 32 bits: only 20 are needed for address translation, leaving 12 bits "left over" for characteristics
- Access: read-only, read-write
- States: valid, zeroed, free, ...
- An inverted page table points to page-table entries and holds the list of free frames

Memory Management in Linux
- Page size: Alpha AXP uses 8 Kbyte pages, Intel x86 uses 4 Kbyte pages
- Multilevel paging (3 levels) makes the code more portable, even though there is no hardware support for it on x86: the "middle" level is defined to have size 1

Normal Linking and Loading
[Diagram: Printf.c and Main.c are compiled by gcc into Printf.o and Main.o; the linker combines them with a static library (ar) into a.out; the loader places a.out in memory]
X Window code: 500K minimum, 450K of it libraries.

Load-Time Dynamic Linking
[Diagram: Printf.o and Main.o are linked into a.out; at load time the loader pulls in the dynamic library (ar) and places everything in memory]
Saves disk space. But: what if libraries move? Moving code? Library versions? Load time is still the same.

Run-Time Dynamic Linking
[Diagram: a.out is loaded, and the run-time loader brings in the dynamic library only when it is first needed]
Saves disk space. Startup is fast. Might not need all of the library.

Memory Linking Performance Comparisons