Memory Management CS 470 - Spring 2002

Overview
Partitioning, Segmentation, and Paging
External versus Internal Fragmentation
Logical to Physical Address Mapping
Placement Algorithms
– First Fit, Next Fit, and Best Fit
– Buddy System
Intel X86 Memory Mapping Mechanisms
Linking and loading executables

Memory Partitioning
Fixed Partitions (IBM OS/MFT)
– Equal partition sizes
– Variable but fixed partition sizes
– Internal fragmentation
Dynamic Partitions (IBM OS/MVT)
– External fragmentation
– Need for compaction

Segmentation versus Paging
Segmentation - each process is divided into variable-sized, programmer-visible segments; suffers external fragmentation.
Paging - main memory is divided into equal-sized, programmer-invisible pages; only trivial internal fragmentation.
Simple (whole process loaded) versus Virtual (only parts of processes loaded)

Relocation
[Figure: a segment or partition descriptor holds a Base Address and a Size; the logical address supplies an Offset]
Logical to Physical Translation:
Base Addr + Offset → Physical Address
Offset ≥ Size → Address Exception
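
A minimal sketch of this base-and-bound check in C; the struct and function names are illustrative, not from the slides:

#include <stdint.h>
#include <stdbool.h>

/* Illustrative descriptor for a segment or partition: base address + size. */
typedef struct {
    uint32_t base;  /* physical base address */
    uint32_t size;  /* length of the segment/partition in bytes */
} seg_desc_t;

/* Translate an offset within the segment to a physical address.
 * Returns false (address exception) if the offset is out of bounds. */
bool translate(const seg_desc_t *d, uint32_t offset, uint32_t *phys)
{
    if (offset >= d->size)
        return false;          /* offset >= size -> address exception */
    *phys = d->base + offset;  /* base + offset -> physical address */
    return true;
}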

Logical vs. Physical Addresses
Allows a process to be physically scattered throughout memory while logically contiguous - i.e. the programmer sees one contiguous block of memory
Allows physical movement without logical movement
Allows processes to occupy the same logical addresses

Logical to Physical Address Translation
[Figure: the logical address is split into a page/segment number and an offset; the page/segment number indexes the process's page or segment table, whose entry gives the base address (and, for a segment, its length); base address plus offset yields the physical address]
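
The paged case, sketched under common simplifying assumptions (4 KB pages and a single-level table; all names are illustrative):

#include <stdint.h>

#define PAGE_SHIFT 12                 /* assume 4 KB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)

/* Illustrative single-level page table: index = page number, value = frame base. */
static uint32_t page_table[1024];

uint32_t translate_paged(uint32_t logical)
{
    uint32_t page   = logical >> PAGE_SHIFT;      /* upper bits: page number */
    uint32_t offset = logical & (PAGE_SIZE - 1);  /* lower bits: offset      */
    return page_table[page] + offset;             /* frame base + offset     */
}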

Inverted Page Table
[Figure: the page number from the logical address is hashed into a hash table; each entry holds a page number, a base address, and a link for chaining collisions; a matching entry yields the frame base address, which is combined with the offset to form the physical address]
Used in MacOS
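
A rough sketch of the hashed lookup; the entry layout, hash function, and table size are simplifying assumptions for illustration, not the actual MacOS structures:

#include <stdint.h>

#define IPT_SIZE 1024   /* assumed: roughly one entry per physical frame */

/* Illustrative inverted page table: one entry per frame, chained on hash collisions. */
typedef struct {
    uint32_t page_nbr;   /* virtual page number currently mapped by this frame */
    int32_t  link;       /* index of the next entry in the collision chain, or -1 */
    uint8_t  valid;      /* nonzero if this entry maps a page */
} ipt_entry_t;

static ipt_entry_t ipt[IPT_SIZE];

/* Return the frame number holding 'page_nbr', or -1 if the page is not resident.
 * The hash picks the head of a collision chain; the frame number is simply the
 * index of the matching entry, since the table has one entry per frame. */
int32_t ipt_lookup(uint32_t page_nbr)
{
    int32_t i = (int32_t)(page_nbr % IPT_SIZE);
    while (i != -1) {
        if (ipt[i].valid && ipt[i].page_nbr == page_nbr)
            return i;
        i = ipt[i].link;
    }
    return -1;   /* page fault: not in physical memory */
}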

Placement Algorithms
Trivial for allocating blocks of fixed size
Given a list of free blocks/partitions and their lengths:
First fit -- use the first block of sufficient size
Next fit -- use the first block of sufficient size after the one that was last allocated
Best fit -- use the block whose size is smallest amongst those of sufficient size
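
A sketch of first fit and best fit over such a free list (next fit is the same scan as first fit, but resuming from the block last allocated); the node type is illustrative:

#include <stddef.h>

/* Illustrative free-list node: start address and length of a free block. */
typedef struct free_block {
    size_t addr;
    size_t len;
    struct free_block *next;
} free_block;

/* First fit: return the first block large enough for the request. */
free_block *first_fit(free_block *list, size_t request)
{
    for (free_block *b = list; b != NULL; b = b->next)
        if (b->len >= request)
            return b;
    return NULL;
}

/* Best fit: return the smallest block that is still large enough. */
free_block *best_fit(free_block *list, size_t request)
{
    free_block *best = NULL;
    for (free_block *b = list; b != NULL; b = b->next)
        if (b->len >= request && (best == NULL || b->len < best->len))
            best = b;
    return best;
}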

Amount of Fragmentation
First fit is easiest and causes the least fragmentation
Next fit requires remembered state and fragments more, because all blocks have an equal chance of being allocated
Best fit takes the longest and almost guarantees lots of small fragments
External fragmentation can be reduced by allocating only in multiples of a minimal block size

Buddy System
Uses blocks of fixed sizes 2^i for L ≤ i ≤ U to reduce fragmentation
Keeps an i_list of free blocks of size 2^i for each L ≤ i ≤ U
If a request is of size k where 2^(i-1) < k ≤ 2^i, allocate a block of size 2^i
If none of size 2^i is free, divide a block of size 2^(i+1) into 2 equal buddies - repeat recursively
Coalesce buddies recursively when freed
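
A compact sketch of the recursive split, with the free lists reduced to simple arrays purely for illustration (L and U follow the slide's notation, the sizes are assumed):

#include <stddef.h>

#define L 4   /* smallest block: 2^4 = 16 bytes (assumed) */
#define U 20  /* largest block:  2^20 = 1 MB   (assumed) */

/* Illustrative free lists: free_list[i] holds addresses of free 2^i-byte blocks. */
static size_t free_list[U + 1][64];
static int    free_count[U + 1];

/* Allocate a block of size 2^i, splitting a larger block if necessary.
 * Returns the block's address, or (size_t)-1 if no memory is available. */
size_t buddy_alloc(int i)
{
    if (i > U)
        return (size_t)-1;
    if (free_count[i] == 0) {
        /* No free block of size 2^i: split one of size 2^(i+1) into two buddies. */
        size_t big = buddy_alloc(i + 1);
        if (big == (size_t)-1)
            return (size_t)-1;
        free_list[i][free_count[i]++] = big + ((size_t)1 << i);  /* second buddy stays free */
        return big;                                              /* first buddy is allocated */
    }
    return free_list[i][--free_count[i]];
}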

Address of Buddy
Assume the original block of size 2^U is at an address that is an even multiple of 2^U
All blocks of size 2^i are at addresses Addr satisfying (Addr & (2^i - 1)) == 0
The address of the buddy is obtained by inverting the i-th bit
The buddy is at address: (Addr & ~2^i) | ((~Addr) & 2^i)
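
Since inverting one bit is an exclusive-or, the expression above reduces to a single XOR; a minimal check in C:

#include <stdint.h>
#include <assert.h>

/* Address of the buddy of a 2^i-byte block: invert bit i of its address. */
uint32_t buddy_of(uint32_t addr, unsigned i)
{
    return addr ^ (1u << i);   /* equivalent to (addr & ~2^i) | ((~addr) & 2^i) */
}

int main(void)
{
    /* The buddy of the 16-byte block at 0x20 is at 0x30, and vice versa. */
    assert(buddy_of(0x20, 4) == 0x30);
    assert(buddy_of(0x30, 4) == 0x20);
    return 0;
}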

Buddy Memory Allocation
[Figure: example allocation state showing the free lists for i = 7, 6, and 5]

Buddy System Blocks
[Figure: block layouts. An allocated block of size N holds a 4-byte header (Log2N, tag 0) followed by N - 4 bytes of data space; the allocator's return value points to the data space. A free block holds a header (Log2N) plus forward and back links for its free list, leaving N - 12 bytes unused.]
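
Possible C declarations matching that layout, assuming 32-bit pointers so the free-block header is 4 + 4 + 4 = 12 bytes; the field names and tag encoding are guesses for illustration:

#include <stdint.h>

/* Allocated block: 4-byte header, then N - 4 bytes of data.
 * The allocator returns a pointer to 'data'. */
typedef struct {
    uint8_t  log2n;   /* block size is 2^log2n bytes */
    uint8_t  tag;     /* 0 = allocated (assumed encoding) */
    uint16_t pad;
    uint8_t  data[];  /* N - 4 bytes of user data */
} alloc_block_t;

/* Free block: 12-byte header with doubly linked free-list pointers
 * (4-byte pointers assumed), leaving N - 12 bytes unused until the
 * block is reallocated or split. */
typedef struct free_block free_block_t;
struct free_block {
    uint8_t      log2n;
    uint8_t      tag;    /* 1 = free (assumed encoding) */
    uint16_t     pad;
    free_block_t *fwd;   /* forward link in the free list */
    free_block_t *back;  /* back link in the free list */
};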

Intel X86 Memory Mapping
Supports both segmentation and paging
16-bit segment selector
– 13-bit segment number
– 1-bit table indicator: 0 = Global descriptor table, 1 = Local descriptor table
– 2-bit requested privilege level
32-bit segment offset
64-terabyte logical address space for each process
Registers (GDTR, LDTR) point to the descriptor tables and give their lengths
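
Decoding those selector fields in C (the example value is arbitrary):

#include <stdint.h>
#include <stdio.h>

/* Fields of an x86 segment selector: bits 15..3 = segment (descriptor) number,
 * bit 2 = table indicator (0 = GDT, 1 = LDT), bits 1..0 = requested privilege level. */
typedef struct {
    uint16_t index;   /* 13-bit descriptor index */
    uint8_t  ti;      /* 0 = global, 1 = local */
    uint8_t  rpl;     /* requested privilege level, 0..3 */
} selector_t;

selector_t decode_selector(uint16_t sel)
{
    selector_t s;
    s.index = sel >> 3;
    s.ti    = (sel >> 2) & 1;
    s.rpl   = sel & 3;
    return s;
}

int main(void)
{
    selector_t s = decode_selector(0x001B);  /* index 3, GDT, RPL 3 */
    printf("index=%u ti=%u rpl=%u\n", s.index, s.ti, s.rpl);
    return 0;
}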

Logical to Linear Mapping
[Figure: the logical address consists of a segment selector (segment number, RPL) and a segment offset; the selector indexes a descriptor table to fetch the segment descriptor, whose base address is added to the offset to form the 32-bit linear address (Dir | Page | Offset)]

Linear to Physical Mapping
[Figure: the 32-bit linear address is split into Dir, Page, and Offset fields; CR3 points to the page directory, the Dir field selects a directory entry that points to a page table, the Page field selects a page table entry holding the frame's physical address, and the Offset completes the physical address]
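
A simplified walk of that two-level structure; the page directory is modeled as an array of pointers and the entry encoding is reduced to a single present bit, so this is a toy model rather than the exact hardware format:

#include <stdint.h>
#include <stddef.h>

#define PRESENT 0x1u   /* valid/present bit in a page table entry */

/* Simplified two-level walk for 32-bit x86 paging (no PAE, no 4 MB pages).
 * 'page_dir' holds pointers to 1024-entry page tables; each page table entry
 * holds a 4 KB-aligned frame address with a present bit in bit 0. */
uint32_t walk(uint32_t *page_dir[1024], uint32_t linear, int *fault)
{
    uint32_t dir    = linear >> 22;            /* bits 31-22: directory index  */
    uint32_t page   = (linear >> 12) & 0x3FF;  /* bits 21-12: page table index */
    uint32_t offset = linear & 0xFFF;          /* bits 11-0:  offset in page   */

    const uint32_t *table = page_dir[dir];
    if (table == NULL) { *fault = 1; return 0; }     /* directory entry missing */

    uint32_t pte = table[page];
    if (!(pte & PRESENT)) { *fault = 1; return 0; }  /* page not present */

    *fault = 0;
    return (pte & 0xFFFFF000u) | offset;             /* frame base + offset */
}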

Page/Directory Table Entry
[Figure: bit layout - the page frame address occupies the upper bits, followed by the flag bits listed below]
V - Valid
R/W - Read / Write
U/S - User / Supervisor
W/T - Write Through
C/D - Cache Disabled
A - Accessed
D - Dirty
L - Large Page
GL - Global
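
The same flags expressed as C bit masks; the bit positions follow the standard 32-bit x86 entry format, and the example entry is made up:

#include <stdint.h>

/* Flag bits of a 32-bit x86 page directory / page table entry. */
#define PTE_VALID      (1u << 0)   /* V   - valid / present             */
#define PTE_RW         (1u << 1)   /* R/W - writable if set             */
#define PTE_US         (1u << 2)   /* U/S - user-accessible if set      */
#define PTE_WT         (1u << 3)   /* W/T - write-through caching       */
#define PTE_CD         (1u << 4)   /* C/D - cache disabled              */
#define PTE_ACCESSED   (1u << 5)   /* A   - set by hardware on access   */
#define PTE_DIRTY      (1u << 6)   /* D   - set by hardware on write    */
#define PTE_LARGE      (1u << 7)   /* L   - 4 MB page (directory only)  */
#define PTE_GLOBAL     (1u << 8)   /* GL  - not flushed on CR3 reload   */
#define PTE_FRAME_MASK 0xFFFFF000u /* upper 20 bits: page frame address */

/* Example: a valid, writable, user-mode mapping of frame 0x12345000. */
static const uint32_t example_pte = 0x12345000u | PTE_VALID | PTE_RW | PTE_US;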

Translation Lookaside Buffers
Caches page table entries
Separate TLB for each cache
For the data cache, the TLB depends on page size
– 4-way associative with 16 sets for 4 KB pages
– 4-way associative with 2 sets for 4 MB pages
For the code cache, the TLB is 4-way associative with 8 sets
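
As a concrete example, the 4 KB-page data TLB above holds 4 × 16 = 64 entries; a small sketch of the set-index computation, assuming the set is selected by the low-order bits of the virtual page number:

#include <stdint.h>

#define PAGE_SHIFT 12   /* 4 KB pages */
#define TLB_WAYS   4
#define TLB_SETS   16   /* data TLB for 4 KB pages: 4 x 16 = 64 entries */

/* Which TLB set a virtual address maps to, under the indexing assumption above. */
static unsigned tlb_set(uint32_t vaddr)
{
    return (vaddr >> PAGE_SHIFT) & (TLB_SETS - 1);
}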

Linking and Loading
[Figure: object modules (Module 1-3) and libraries (Library 1-2) are combined by the Linker into a load module; the Loader then places the load module into memory as the loaded program]

When to resolve addresses
Load Module Types
– Absolute load modules
– Relocatable load modules
– Dynamic run-time load modules
Address Resolution Times
– Programming time
– Compile or assembly time
– Load module creation time
– Load time
– Run time

Dynamic Linking
A run-time reference to a routine in an external module causes loading of that module and resolution of the reference
Advantages
– Libraries can be upgraded without relinking all the applications
– Automatic sharing of libraries
Complicates the design and testing of applications
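
On a POSIX system, explicit run-time linking can be sketched with dlopen/dlsym; the choice of libm and cos is just a convenient example (compile with -ldl where required):

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Load an external module at run time. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (handle == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Resolve the reference to the routine only when it is needed. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine != NULL)
        printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}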