Memory Management CS Spring 2002
Overview
Partitioning, Segmentation, and Paging
External versus Internal Fragmentation
Logical to Physical Address Mapping
Placement Algorithms
–First Fit, Next Fit, and Best Fit
–Buddy System
Intel X86 Memory Mapping Mechanisms
Linking and Loading Executables
Memory Partitioning
Fixed Partitions (IBM OS/MFT)
–Equal partition sizes
–Variable but fixed partition sizes
–Internal fragmentation
Dynamic Partitions (IBM OS/MVT)
–External fragmentation
–Need for compaction
Segmentation versus Paging
Segmentation: each process is divided into variable-sized, programmer-visible segments. Suffers external fragmentation.
Paging: main memory is divided into equal-sized, programmer-invisible pages. Internal fragmentation is trivial (only in a process's last page).
Simple (whole process loaded) versus virtual (only parts of each process loaded).
Relocation
[Figure: logical-to-physical translation. The segment or partition descriptor holds a base address and a size; the physical address is base address + offset, and an address exception is raised if the offset is not less than the size.]
Logical vs. Physical Addresses
Allows processes to be physically scattered throughout memory while remaining logically contiguous, i.e. the programmer sees one contiguous block of memory.
Allows physical movement without logical movement.
Allows processes to occupy the same logical addresses.
Logical to Physical Address Translation
[Figure: the logical address is split into a page/segment number and an offset; the number indexes the process's page or segment table, whose entry supplies the base address (and, for a segment, its length) used to form the physical address.]
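A minimal sketch of the segment-translation step above, assuming a hypothetical segment_desc table indexed by segment number; the hardware performs the equivalent bounds check and addition on every memory reference.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical segment descriptor: base address and length. */
struct segment_desc {
    uint32_t base;    /* physical base address of the segment */
    uint32_t length;  /* size of the segment in bytes         */
};

/* Translate (segment number, offset) to a physical address.
 * Returns 0 and sets *phys on success; -1 on an address exception. */
int translate(const struct segment_desc *table, size_t nsegs,
              uint32_t seg, uint32_t offset, uint32_t *phys)
{
    if (seg >= nsegs)
        return -1;                     /* no such segment                         */
    if (offset >= table[seg].length)
        return -1;                     /* offset beyond segment: address exception */
    *phys = table[seg].base + offset;  /* base address + offset                   */
    return 0;
}
```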
Inverted Page Table
[Figure: the page number from the logical address is hashed into an inverted page table; each entry holds a page number, a base (frame) address, and a link field for chaining on hash collisions. The matching entry's base address is combined with the offset to form the physical address.]
Used in MacOS.
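A sketch of an inverted-page-table lookup under the scheme above; the hash function, table size, and entry layout are illustrative assumptions, not the MacOS format.

```c
#include <stdint.h>

#define IPT_SIZE   1024        /* assumed number of table entries */
#define PAGE_SHIFT 12          /* assumed 4KB pages               */

struct ipt_entry {
    uint32_t page;             /* virtual page number stored in this entry      */
    uint32_t frame;            /* physical frame number (base >> PAGE_SHIFT)    */
    int      next;             /* link to next entry on hash collision, -1 if none */
    int      valid;
};

static unsigned hash_page(uint32_t page)
{
    return page % IPT_SIZE;    /* toy hash function */
}

/* Follow the hash chain until an entry matching the page number is found. */
int ipt_lookup(const struct ipt_entry *ipt, uint32_t vaddr, uint32_t *paddr)
{
    uint32_t page   = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & ((1u << PAGE_SHIFT) - 1);
    int i = (int)hash_page(page);

    while (i != -1 && ipt[i].valid) {
        if (ipt[i].page == page) {
            *paddr = (ipt[i].frame << PAGE_SHIFT) | offset;
            return 0;
        }
        i = ipt[i].next;       /* follow the collision link */
    }
    return -1;                 /* not resident: page fault  */
}
```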
Placement Algorithms
Trivial for allocating blocks of fixed size.
Given a list of free blocks/partitions and their lengths:
–First fit: use the first block of sufficient size.
–Next fit: use the first block of sufficient size after the one that was last allocated.
–Best fit: use the block whose size is smallest among those of sufficient size.
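A sketch of the three searches over a singly linked free list; the free_block structure and the policy of returning the whole block (no splitting) are simplifying assumptions.

```c
#include <stddef.h>

struct free_block {
    size_t size;              /* length of this free block in bytes */
    struct free_block *next;
};

/* First fit: return the first block large enough for the request. */
struct free_block *first_fit(struct free_block *head, size_t request)
{
    for (struct free_block *b = head; b != NULL; b = b->next)
        if (b->size >= request)
            return b;
    return NULL;
}

/* Best fit: return the smallest block that is still large enough. */
struct free_block *best_fit(struct free_block *head, size_t request)
{
    struct free_block *best = NULL;
    for (struct free_block *b = head; b != NULL; b = b->next)
        if (b->size >= request && (best == NULL || b->size < best->size))
            best = b;
    return best;
}

/* Next fit: like first fit, but start where the previous search stopped.
 * The rover pointer is the "remembered state" mentioned on the slide.   */
struct free_block *next_fit(struct free_block *head, struct free_block **rover,
                            size_t request)
{
    struct free_block *start = (*rover != NULL) ? *rover : head;
    struct free_block *b = start;
    if (b == NULL)
        return NULL;
    do {
        if (b->size >= request) {
            *rover = b->next;                       /* resume here next time */
            return b;
        }
        b = (b->next != NULL) ? b->next : head;     /* wrap around the list  */
    } while (b != start);
    return NULL;
}
```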
Amount of Fragmentation
First fit is the easiest and causes the least fragmentation.
Next fit requires remembered state and fragments more, because all blocks get an equal chance of being allocated from.
Best fit takes the longest and almost guarantees many small fragments.
External fragmentation can be reduced by allocating only in multiples of a minimum block size.
Buddy System
Uses blocks of fixed sizes 2^i, for L <= i <= U, to reduce fragmentation.
Maintains an i_List of free blocks of size 2^i for each L <= i <= U.
If a request is of size k where 2^(i-1) < k <= 2^i, allocate a block of size 2^i.
If none of size 2^i is free, divide a block of size 2^(i+1) into two equal buddies; repeat recursively.
Coalesce buddies recursively when they are freed.
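A sketch of rounding a request of size k up to the order i with 2^(i-1) < k <= 2^i; the minimum order L and maximum order U are assumed values. Allocation would then pop a block from the i-list, splitting a block from the (i+1)-list recursively if the i-list is empty.

```c
#include <stddef.h>

#define L 4    /* assumed minimum order: smallest block is 2^4 = 16 bytes */
#define U 20   /* assumed maximum order: whole pool is 2^20 = 1 MB        */

/* Return the smallest order i (L <= i <= U) with 2^i >= k, or -1 if k is
 * too large.  This is the i satisfying 2^(i-1) < k <= 2^i on the slide. */
int order_for_request(size_t k)
{
    int i = L;
    size_t size = (size_t)1 << L;
    while (size < k) {
        if (i == U)
            return -1;     /* request exceeds the largest block */
        i++;
        size <<= 1;
    }
    return i;
}
```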
Address of Buddy
Assume the original block of size 2^U is at an address that is an even multiple of 2^U.
All blocks of size 2^i are then at an address Addr satisfying (Addr & (2^i - 1)) == 0.
The address of the buddy is obtained by inverting the i-th bit. The buddy is at address:
(Addr & ~2^i) | ((~Addr) & 2^i), which is simply Addr ^ 2^i.
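The buddy-address formula above is just an XOR of the i-th bit; a minimal sketch:

```c
#include <stdint.h>

/* Address of the buddy of the block of size 2^i at address addr.
 * Inverting bit i is equivalent to the slide's
 * (Addr & ~2^i) | ((~Addr) & 2^i).                               */
uintptr_t buddy_of(uintptr_t addr, int i)
{
    return addr ^ ((uintptr_t)1 << i);
}
```

On free, the allocator checks whether buddy_of(addr, i) is also free at order i and, if so, coalesces the pair into one block of size 2^(i+1), repeating the check at the next order.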
Buddy Memory Allocation
[Figure: per-order free lists (orders 7, 6, 5, ...), each heading a list of the free blocks of size 2^i for that order.]
Buddy System Blocks
[Figure: block layouts. An allocated block of size N begins with a header holding log2(N), followed by the data space of N - 4 bytes; the allocator's return value points just past the header. A free block holds log2(N), a forward link, and a back link on the free list, leaving N - 12 bytes unused.]
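A sketch of the block layouts in the figure, with the exact header width and field names as assumptions: a 4-byte header holding log2(N) precedes the data, and a free block reuses the start of its body for the doubly linked free-list pointers (giving the 12-byte overhead on the 32-bit layout the slide assumes).

```c
#include <stdint.h>

/* Header present in every block; the allocator returns the address just
 * past this header, so an allocated block offers N - 4 bytes of data.   */
struct block_header {
    uint32_t log2_size;               /* log2(N) for a block of N bytes  */
};

/* A free block keeps its header and threads itself onto the free list
 * for its order, leaving N - 12 bytes unused.                           */
struct buddy_free_block {
    struct block_header hdr;
    struct buddy_free_block *forward; /* next free block of this size     */
    struct buddy_free_block *back;    /* previous free block of this size */
};
```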
Intel X86 Memory Mapping
Supports both segmentation and paging.
16-bit segment selector:
–13-bit segment number
–1-bit descriptor table indicator: 0 = Global, 1 = Local
–2-bit requested privilege level (RPL)
32-bit segment offset.
64-terabyte logical address space for each process.
Registers (GDTR, LDTR) point to the descriptor tables and give their lengths.
Logical to Linear Mapping
[Figure: the segment number (with the RPL) in the selector indexes a segment descriptor in the descriptor table; the descriptor's base added to the 32-bit segment offset yields the linear address, which is divided into directory, page, and offset fields.]
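A sketch of pulling the three selector fields apart; the bit layout (RPL in bits 1:0, table indicator in bit 2, index in bits 15:3) is the standard x86 encoding.

```c
#include <stdint.h>

struct selector_fields {
    unsigned index;   /* 13-bit segment number (descriptor index)            */
    unsigned ti;      /* table indicator: 0 = GDT (global), 1 = LDT (local)  */
    unsigned rpl;     /* 2-bit requested privilege level                     */
};

struct selector_fields decode_selector(uint16_t sel)
{
    struct selector_fields f;
    f.rpl   = sel & 0x3;         /* bits 1:0  */
    f.ti    = (sel >> 2) & 0x1;  /* bit  2    */
    f.index = sel >> 3;          /* bits 15:3 */
    return f;
}
```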
Linear to Physical Mapping
[Figure: CR3 points to the page directory; the linear address's directory field selects a directory entry, which points to a page table; the page field selects a page table entry, whose frame address is combined with the offset to form the 32-bit physical address.]
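A sketch of splitting a 32-bit linear address into its 10-bit directory, 10-bit page, and 12-bit offset fields for 4KB pages; the two table lookups themselves are done by the MMU in hardware.

```c
#include <stdint.h>

/* Field widths for classic 32-bit x86 paging with 4KB pages. */
#define OFFSET_BITS 12
#define PAGE_BITS   10

void split_linear(uint32_t linear,
                  uint32_t *dir, uint32_t *page, uint32_t *offset)
{
    *offset = linear & ((1u << OFFSET_BITS) - 1);                /* bits 11:0  */
    *page   = (linear >> OFFSET_BITS) & ((1u << PAGE_BITS) - 1); /* bits 21:12 */
    *dir    = linear >> (OFFSET_BITS + PAGE_BITS);               /* bits 31:22 */
}
```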
Page/Directory Table Entry
[Figure: entry layout with the page frame address in the upper bits and flag bits in the low bits.]
–V: Valid
–R/W: Read / Write
–U/S: User / Supervisor
–W/T: Write-through
–C/D: Cache Disabled
–A: Accessed
–D: Dirty
–L: Large page
–GL: Global
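The flags above occupy the low bits of each 32-bit entry; a sketch using the standard x86 bit positions, with the frame address in bits 31:12.

```c
#include <stdint.h>

#define PTE_VALID    (1u << 0)   /* V: present/valid                       */
#define PTE_RW       (1u << 1)   /* R/W: writable if set                   */
#define PTE_US       (1u << 2)   /* U/S: user-accessible if set            */
#define PTE_WT       (1u << 3)   /* W/T: write-through caching             */
#define PTE_CD       (1u << 4)   /* C/D: cache disabled                    */
#define PTE_ACCESSED (1u << 5)   /* A: set by hardware on any access       */
#define PTE_DIRTY    (1u << 6)   /* D: set by hardware on a write          */
#define PTE_LARGE    (1u << 7)   /* L: 4MB page (page directory entries)   */
#define PTE_GLOBAL   (1u << 8)   /* GL: not flushed when CR3 is reloaded   */

/* Extract the page frame address from a valid 4KB-page entry. */
static inline uint32_t pte_frame_addr(uint32_t pte)
{
    return pte & 0xFFFFF000u;    /* bits 31:12 */
}
```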
Translation Lookaside Buffers
Caches page table entries.
Separate TLBs for the data cache and the code cache.
For the data cache, the TLB organization depends on page size:
–4-way associative with 16 sets for 4KB pages
–4-way associative with 2 sets for 4MB pages
For the code cache, the TLB is 4-way associative with 8 sets.
Linking and Loading
[Figure: the linker combines Module 1, Module 2, Module 3, Library 1, and Library 2 into a single load module; the loader then places the load module into memory as the loaded program.]
When to Resolve Addresses
Load module types:
–Absolute load modules
–Relocatable load modules
–Dynamic run-time load modules
Address resolution times:
–Programming time
–Compile or assembly time
–Load module creation time
–Load time
–Run time
Dynamic Linking
A run-time reference to a routine in an external module causes that module to be loaded and the reference to be resolved.
Advantages:
–Libraries can be upgraded without relinking all the applications that use them
–Automatic sharing of libraries
Complicates the design and testing of applications.
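A small example of run-time resolution using the POSIX dlopen/dlsym interface; the library name libm.so.6 and the cos symbol are just illustrative choices.

```c
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* Load the external module at run time rather than at link time. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (handle == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Resolve the reference to the routine only when it is needed. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine == NULL) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);
    return 0;
}
```

On systems where the dynamic-loading functions live in a separate library, link with -ldl.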