1
Computer System Chapter 10. Virtual Memory
Lynn Choi Korea University
2
A System with Physical Memory Only
Examples: most Cray machines, early PCs, nearly all embedded systems, etc. Addresses generated by the CPU correspond directly to bytes in physical memory. [Figure: CPU issuing physical addresses 0 .. N-1 directly to memory]
3
A System with Virtual Memory
Examples: workstations, servers, modern PCs, etc. Address translation: hardware converts virtual addresses to physical addresses via an OS-managed lookup table (the page table). [Figure: CPU issues virtual addresses; the page table maps them to physical memory or to disk]
4
Page Faults (like “Cache Misses”)
What if an object is on disk rather than in memory? The page table entry indicates the virtual address is not in memory, and the OS exception handler is invoked to move the data from disk into memory. The current process suspends while others can resume, and the OS has full control over placement, etc. [Figure: page table and memory state before the fault and after handling it]
5
Servicing a Page Fault
(1) Initiate block read: the processor signals the I/O controller to read a block of length P starting at disk address X and store it starting at memory address Y.
(2) Read occurs via direct memory access (DMA), under control of the I/O controller.
(3) Read done: the I/O controller signals completion with an interrupt, and the OS resumes the suspended process.
[Figure: processor with registers and cache, memory-I/O bus, memory, I/O controller, and disk]
6
Memory Management
Multiple processes can reside in physical memory. How do we resolve address conflicts? What if two processes access something at the same address? [Figure: Linux/x86 process memory image — kernel virtual memory (invisible to user code), stack at %esp, memory-mapped region for shared libraries, runtime heap (via malloc) up to the "brk" pointer, uninitialized data (.bss), initialized data (.data), program text (.text), forbidden low region]
7
Solution: Separate Virt. Addr. Spaces
Virtual and physical address spaces are divided into equal-sized blocks called "pages" (both virtual and physical). Each process has its own virtual address space, and the operating system controls how virtual pages are assigned to physical memory. [Figure: two processes' virtual pages (VP 1, VP 2, ...) mapped via address translation to physical pages (e.g., PP 2, PP 7, PP 10) of DRAM; a read-only library page can be shared]
8
Protection
Each page table entry contains access-rights information, and the hardware enforces this protection (trapping into the OS if a violation occurs). [Figure: per-process page tables with Read?/Write? bits per entry — e.g., a virtual page of process i maps to PP 9 with read permission but no write permission]
9
Address Translation Symbols
Virtual address components — VPO: virtual page offset; VPN: virtual page number; TLBI: TLB index; TLBT: TLB tag.
Physical address components — PPO: physical page offset; PPN: physical page number; CO: byte offset within cache block; CI: cache index; CT: cache tag.
10
Simple Memory System Example
Addressing: 14-bit virtual addresses, 12-bit physical addresses, page size = 64 bytes. The virtual address splits into an 8-bit VPN (bits 13-6) and a 6-bit VPO (bits 5-0); the physical address splits into a 6-bit PPN (bits 11-6) and a 6-bit PPO (bits 5-0).
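The field splits above can be sketched as a few bit manipulations. This is a minimal illustration (helper names are mine, not from the slides), assuming the 14-bit VA / 64-byte-page layout just described:

```c
#include <assert.h>

/* Simple memory system: 14-bit VA, 12-bit PA, 64-byte pages (6 offset bits). */
enum { VPO_BITS = 6 };

static unsigned vpn(unsigned va) { return va >> VPO_BITS; }             /* VA bits 13-6 */
static unsigned vpo(unsigned va) { return va & ((1u << VPO_BITS) - 1); } /* VA bits 5-0 */

/* The page offset is carried through unchanged; the PPN replaces the VPN. */
static unsigned make_pa(unsigned ppn, unsigned va) {
    return (ppn << VPO_BITS) | vpo(va);
}
```

For example, VA 0x03D4 has VPN 0x0F and VPO 0x14; if that VPN maps to PPN 0x0D, the PA is 0x354.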
11
Simple Memory System Page Table
Only the first 16 entries are shown.
VPN  PPN  Valid     VPN  PPN  Valid
00   28   1         08   13   1
01   –    0         09   17   1
02   33   1         0A   09   1
03   02   1         0B   –    0
04   –    0         0C   –    0
05   16   1         0D   2D   1
06   –    0         0E   11   1
07   –    0         0F   0D   1
12
Simple Memory System TLB
16 entries, 4-way set associative (4 sets). The TLBI is the low 2 bits of the VPN; the TLBT is the high 6 bits.
Set   (Tag, PPN, Valid) entries
0:    (03, –, 0)  (09, 0D, 1)  (00, –, 0)  (07, 02, 1)
1:    (03, 2D, 1) (02, –, 0)   (04, –, 0)  (0A, –, 0)
2:    (02, –, 0)  (08, –, 0)   (06, –, 0)  (03, –, 0)
3:    (07, –, 0)  (03, 0D, 1)  (0A, 34, 1) (02, –, 0)
13
Simple Memory System Cache
16 lines, 4-byte line size, direct mapped. CO is PA bits 1-0, CI is bits 5-2, CT is bits 11-6.
Idx  Tag  Valid  B0 B1 B2 B3
0    19   1      99 11 23 11
1    15   0      –
2    1B   1      00 02 04 08
3    36   0      –
4    32   1      43 6D 8F 09
5    0D   1      36 72 F0 1D
6    31   0      –
7    16   1      11 C2 DF 03
8    24   1      3A 00 51 89
9    2D   0      –
A    2D   1      93 15 DA 3B
B    0B   0      –
C    12   0      –
D    16   1      04 96 34 15
E    13   1      83 77 1B D3
F    14   0      –
14
Address Translation Example #1
Virtual address 0x03D4: VPN ___  TLBI ___  TLBT ___  TLB hit? __  Page fault? __  PPN ___
Physical address: CO ___  CI ___  CT ___  Cache hit? __  Byte ___
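The remaining fields in these worked examples follow from the geometry of the TLB (16 entries, 4-way, so 4 sets and a 2-bit index) and the cache (16 direct-mapped lines of 4 bytes, so a 2-bit offset and 4-bit index). A sketch of the extractors (helper names are mine):

```c
#include <assert.h>

/* 14-bit VA, 64 B pages; 4-set TLB; 16-line direct-mapped cache, 4 B lines. */
static unsigned tlbi(unsigned va) { return (va >> 6) & 0x3; }  /* low 2 VPN bits */
static unsigned tlbt(unsigned va) { return va >> 8; }          /* high 6 VPN bits */

static unsigned co(unsigned pa) { return pa & 0x3; }         /* byte in line */
static unsigned ci(unsigned pa) { return (pa >> 2) & 0xF; }  /* cache index  */
static unsigned ct(unsigned pa) { return pa >> 6; }          /* cache tag    */
```

For VA 0x03D4 this gives TLBI 0x3 and TLBT 0x03; its PA 0x354 gives CO 0x0, CI 0x5, CT 0x0D.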
15
Address Translation Example #2
Virtual address 0x0B8F: VPN ___  TLBI ___  TLBT ___  TLB hit? __  Page fault? __  PPN ___
Physical address: CO ___  CI ___  CT ___  Cache hit? __  Byte ___
16
Address Translation Example #3
Virtual address 0x0040: VPN ___  TLBI ___  TLBT ___  TLB hit? __  Page fault? __  PPN ___
Physical address: CO ___  CI ___  CT ___  Cache hit? __  Byte ___
17
Program Start Scenario
Before starting the process:
- Load the page directory into physical memory
- Load the PDBR (page directory base register) with the base address of the page directory
- Load the PC with the start address of the code
When the first reference to code occurs:
- iTLB miss (translation failed for the instruction address); the exception handler looks up PTE1
- dTLB miss (translation failed for PTE1's address); the exception handler looks up PTE2: walk the page directory, find PTE2, and add PTE2 to the dTLB
- dTLB hit, but page miss (PTE1 not in memory): load the page containing PTE1, look up the page table, find PTE1, and add PTE1 to the iTLB
- iTLB hit, but page miss (code page not present in memory): load the instruction page
- Cache miss, but memory returns the instruction
18
P6 Memory System
32-bit address space, 4 KB page size. L1, L2, and the TLBs are all 4-way set associative. Instruction TLB: 32 entries, 8 sets. Data TLB: 64 entries, 16 sets. L1 i-cache and d-cache: 16 KB each, 32 B line size, 128 sets. L2 cache: unified, 128 KB - 2 MB. [Figure: processor package with instruction fetch unit, inst/data TLBs, L1 i-cache and d-cache, bus interface unit, cache bus to L2, and external system bus (e.g., PCI) to DRAM]
19
Overview of P6 Address Translation
[Figure: the 32-bit VA splits into a 20-bit VPN (10-bit VPN1 + 10-bit VPN2) and a 12-bit VPO; the TLB (16 sets, 4 entries/set) is accessed with a 4-bit TLBI and 16-bit TLBT; on a TLB miss, the PDBR-rooted page tables supply the PDE and PTE; the resulting PA splits into a 20-bit CT, 7-bit CI, and 5-bit CO for the L1 cache (128 sets, 4 lines/set), with L1 misses going to L2 and DRAM]
20
P6 2-level Page Table Structure
Page directory: 1024 4-byte page directory entries (PDEs), each pointing to a page table. There is one page directory per process; it must be in memory while its process is running and is always pointed to by the PDBR. Page tables: 1024 4-byte page table entries (PTEs), each pointing to a page. Page tables can be paged in and out; there are up to 1024 page tables, each holding 1024 PTEs.
21
P6 Page Directory Entry (PDE)
When P=1, bits 31-12 hold the page table physical base address, bits 11-9 are Avail, and the low bits hold G, PS, A, CD, WT, U/S, R/W, and P:
- Page table physical base address: 20 most significant bits of the physical page table address (forces page tables to be 4 KB aligned)
- Avail: bits available for system programmers
- G: global page (don't evict from the TLB on a task switch)
- PS: page size, 4 KB (0) or 4 MB (1)
- A: accessed (set by the MMU on reads and writes, cleared by software)
- CD: cache disabled (1) or enabled (0)
- WT: write-through or write-back cache policy for this page table
- U/S: user or supervisor mode access
- R/W: read-only or read/write access
- P: page table is present in memory (1) or not (0)
When P=0, bits 31-1 are available to the OS (e.g., the page table's location in secondary storage).
22
P6 Page Table Entry (PTE)
When P=1, bits 31-12 hold the page physical base address, bits 11-9 are Avail, and the low bits hold G, D, A, CD, WT, U/S, R/W, and P:
- Page base address: 20 most significant bits of the physical page address (forces pages to be 4 KB aligned)
- Avail: available for system programmers
- G: global page (don't evict from the TLB on a task switch)
- D: dirty (set by the MMU on writes)
- A: accessed (set by the MMU on reads and writes)
- CD: cache disabled or enabled
- WT: write-through or write-back cache policy for this page
- U/S: user/supervisor
- R/W: read/write
- P: page is present in physical memory (1) or not (0)
When P=0, bits 31-1 are available to the OS (e.g., the page's location in secondary storage).
23
How P6 Page Tables Map Virtual Addresses to Physical Ones
[Figure: the 10-bit VPN1 is a word offset into the page directory (located via the PDBR); the selected PDE gives the physical base address of a page table (if P=1). The 10-bit VPN2 is a word offset into that page table; the selected PTE gives the physical base address of the page (if P=1). The 12-bit VPO becomes the 12-bit PPO, appended to the 20-bit PPN to form the physical address]
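The two-level walk can be sketched in software. This is a minimal model, not the hardware itself: plain arrays stand in for the in-memory page directory and page tables, and all names are illustrative.

```c
#include <assert.h>
#include <stdint.h>

#define PRESENT 0x1u
typedef uint32_t pte_t;

/* Walk a P6-style two-level table: 10-bit VPN1 indexes the directory,
   10-bit VPN2 indexes the page table, and the 12-bit VPO carries through.
   Returns 0 on success, -1 on a fault (PDE or PTE not present). */
static int translate(const pte_t *dir, pte_t *const *tables,
                     uint32_t va, uint32_t *pa)
{
    uint32_t vpn1 = (va >> 22) & 0x3FF;
    uint32_t vpn2 = (va >> 12) & 0x3FF;
    uint32_t vpo  = va & 0xFFF;

    pte_t pde = dir[vpn1];
    if (!(pde & PRESENT)) return -1;        /* page-table fault */
    pte_t pte = tables[vpn1][vpn2];
    if (!(pte & PRESENT)) return -1;        /* page fault */
    *pa = (pte & 0xFFFFF000u) | vpo;        /* PPN || PPO */
    return 0;
}
```

A real MMU would follow physical base addresses out of the PDE rather than a host pointer array, but the index arithmetic is the same.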
24
Representation of Virtual Address Space
Simplified example: a 16-page virtual address space. Two flags describe each entry — P: is the entry in physical memory? M: has this part of the VA space been mapped? A P=1, M=1 entry holds a memory address (page in memory); a P=0, M=1 entry holds a disk address (page on disk); a P=0, M=0 entry is unmapped. [Figure: page directory pointing to page tables PT 0, PT 2, and PT 3, whose entries mark pages 0-15 as in memory, on disk, or unmapped]
25
P6 TLB Translation
[Figure: same datapath as the address-translation overview, with the TLB stage highlighted — TLBT/TLBI lookup in the 16-set, 4-entry/set TLB; on a miss, the page tables supply the PDE and PTE]
26
P6 TLB
TLB entry (not all fields are documented, so this is speculative):
- V: valid (1) or invalid (0) TLB entry
- PD: is this entry a PDE (1) or a PTE (0)?
- Tag (16 bits): disambiguates entries cached in the same set
- PDE/PTE (32 bits): the cached page directory or page table entry
Structure of the data TLB: 16 sets, 4 entries per set.
27
Translating with the P6 Page Tables (case 1/1)
Case 1/1: page table and page both present. MMU action: build the physical address and fetch the data word. OS action: none. [Figure: PDBR → page directory (PDE, p=1) → page table (PTE, p=1) → data page in memory]
28
Translating with the P6 Page Tables (case 1/0)
Case 1/0: page table present but page missing. MMU action: page fault exception. The handler receives the following arguments: the VA that caused the fault; whether the fault was caused by a non-present page or a page-level protection violation; read vs. write; user vs. supervisor. [Figure: PDE (p=1) and PTE (p=0) in memory; the data page is on disk]
29
Translating with the P6 Page Tables (case 1/0)
OS action: check that the virtual address is legal; read the PTE through the PDE; find a free physical page (swapping out the current occupant if necessary); read the virtual page from disk and copy it into the physical page; restart the faulting instruction by returning from the exception handler. [Figure: after handling, PDE p=1 and PTE p=1 point to the data page in memory]
30
Translating with the P6 Page Tables (case 0/1)
Case 0/1: page table missing but page present. This introduces a consistency issue: potentially every page-out would require updating the on-disk page table. Linux disallows this case — if a page table is swapped out, its data pages are swapped out too. [Figure: PDE p=0 with the page table on disk, yet its PTE (p=1) points at a data page in memory]
31
Translating with the P6 Page Tables (case 0/0)
Case 0/0: page table and page both missing. MMU action: page fault exception. [Figure: PDE p=0; both the page table and the data page are on disk]
32
Translating with the P6 Page Tables (case 0/0)
OS action: swap in the page table and restart the faulting instruction by returning from the handler. Execution then proceeds exactly as in case 1/0. [Figure: PDE now p=1, PTE p=0, data page still on disk]
33
P6 L1 Cache Access
[Figure: same datapath as the overview; after translation, the 20-bit CT, 7-bit CI, and 5-bit CO access the L1 cache (128 sets, 4 lines/set), with L1 misses going to L2 and DRAM]
34
Speeding Up L1 Access
Observation: the bits that determine the cache index (CI) are identical in the virtual and physical address, so the cache set can be indexed while address translation is still taking place; the tag check then uses the CT from the physical address. This is a "virtually indexed, physically tagged" cache, carefully sized to make the overlap possible. [Figure: CI taken from the unchanged low bits of the VPO while the VPN is translated to the PPN]
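The "carefully sized" condition above is just an inequality: the line-offset bits plus the set-index bits must fit inside the page-offset bits. A tiny check (function name is mine):

```c
#include <assert.h>

/* A cache can be virtually indexed / physically tagged when the whole
   index (line offset + set index) lies within the page offset, i.e. the
   index bits are untouched by translation. */
static int index_within_offset(int line_bits, int set_bits, int page_bits)
{
    return line_bits + set_bits <= page_bits;
}
```

For the P6 L1 (32 B lines → 5 bits, 128 sets → 7 bits, 4 KB pages → 12 bits), 5 + 7 = 12, so the condition holds exactly; doubling the number of sets without raising associativity would break it.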
35
Linux Organizes VM as Collection of “Areas”
An area is a contiguous chunk of allocated virtual memory whose pages are related; examples: code segment, data segment, heap, shared library segment, etc. Every existing virtual page is contained in some area; a virtual page that is not part of any area does not exist and cannot be referenced. Thus the virtual address space can have gaps, and the kernel does not keep track of virtual pages that do not exist.
task_struct: the kernel maintains a distinct task structure for each process, containing all the information the kernel needs to run it — PID, pointer to the user stack, name of the executable object file, program counter, etc.
mm_struct: an entry in the task structure that characterizes the current state of virtual memory; pgd holds the base of the page directory table, and mmap points to a list of vm_area_structs.
36
Linux Organizes VM as Collection of “Areas”
[Figure: task_struct → mm_struct (pgd, mmap) → linked list of vm_area_structs; each vm_area_struct records vm_start, vm_end, vm_prot (read/write permissions for this area), vm_flags (shared with other processes or private to this process), and vm_next, one per area (shared libraries, data, text) of the process virtual memory]
37
Linux Page Fault Handling
1. Is the VA legal, i.e., is it in an area defined by some vm_area_struct? If not, signal a segmentation violation (e.g., case (1)).
2. Is the operation legal, i.e., can the process read/write this area? If not, signal a protection fault (e.g., case (2)).
3. If both checks pass, handle the page fault normally (e.g., case (3)).
[Figure: three example accesses against the vm_area_struct list — (1) a read outside any area, (2) a write to the read-only text area, (3) a legitimate read of a data page]
38
Memory Mapping
Linux (like other UNIX systems) initializes the contents of a virtual memory area by associating it with an object on disk: it creates a new vm_area_struct and page tables for the area. An area can be mapped to (i.e., get its initial values from) one of two types of objects:
- A regular file on disk (e.g., an executable object file): the file is divided into page-sized pieces, and the initial contents of each virtual page come from the corresponding piece. If the area is larger than the file section, the remainder is padded with zeros.
- An anonymous file (e.g., for .bss): an area can be mapped to an anonymous file created by the kernel; the initial contents of its pages are all zeros. These are also called demand-zero pages.
Key point: no virtual page is copied into physical memory until it is referenced. This is known as "demand paging" and is crucial for time and space efficiency.
39
User-Level Memory Mapping
void *mmap(void *start, size_t len, int prot, int flags, int fd, off_t offset)
Map len bytes starting at offset offset of the file specified by file descriptor fd, preferably at address start (usually NULL for don't care).
- prot: PROT_EXEC, PROT_READ, PROT_WRITE
- flags: MAP_PRIVATE, MAP_SHARED, MAP_ANON. MAP_PRIVATE indicates a private copy-on-write object; MAP_SHARED indicates a shared object; MAP_ANON with fd = -1 (no backing file) indicates an anonymous file (demand-zero pages).
Returns a pointer to the mapped area (MAP_FAILED on error).
int munmap(void *start, size_t len)
Delete the area starting at virtual address start with length len.
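A small usage sketch of the anonymous-mapping case described above (the function name and the single-page size are my choices for illustration): map one demand-zero page, verify it reads as zeros, write to it, and unmap it.

```c
#define _DEFAULT_SOURCE
#include <string.h>
#include <sys/mman.h>

/* Map one anonymous demand-zero page, use it, and unmap it.
   Returns 0 on success, -1 on any failure. */
static int demo_anon_map(void)
{
    size_t len = 4096;
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANON, -1, 0);
    if (p == MAP_FAILED)
        return -1;

    /* Demand-zero: the page must read as all zeros before we touch it. */
    int zeroed = (p[0] == 0 && p[len - 1] == 0);

    strcpy(p, "hello");                    /* first write faults the page in */
    int ok = zeroed && strcmp(p, "hello") == 0;

    munmap(p, len);
    return ok ? 0 : -1;
}
```

On Linux, MAP_ANON is a synonym for MAP_ANONYMOUS; a portable program would also avoid hard-coding the 4096-byte page size and query sysconf(_SC_PAGESIZE).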
40
Shared Objects
Why shared objects? Many processes need to share identical read-only text areas: for example, every tcsh process has the same text area, as does any process using standard library functions such as printf. It would be extremely wasteful for each process to keep duplicate copies in physical memory. An object can be mapped as either a shared object or a private object.
- Shared object: any write to the area is visible to every other process that has mapped the same shared object, and the changes are also reflected in the original object on disk. A virtual memory area into which a shared object is mapped is called a shared area.
- Private object: writes to the area are not visible to other processes, and changes are not reflected back to the object on disk. Private objects are mapped using copy-on-write: only one copy of the private object is stored in physical memory, the page table entries for the private area are flagged as read-only, and any write to a page in the area triggers a protection fault. The handler creates a new copy of the page in physical memory and restores write permission to it; after the handler returns, the process proceeds normally.
41
Shared Object
[Figure: a shared object on disk mapped into physical memory once, and from there into the virtual memory of process 1 and of any other process that maps it]
42
Private Object
[Figure: a private copy-on-write object in physical memory mapped into the virtual memories of process 1 and process 2; when one process writes to a private copy-on-write page, the fault handler copies that page and redirects the writer's mapping to the new copy]
43
Exec() Revisited
To run a new program p in the current process using exec():
- Free the vm_area_structs and page tables for the old areas.
- Create new vm_area_structs and page tables for the new areas: stack, bss, data, text, shared libraries. Text and data are backed by the ELF executable object file; bss and stack are initialized to zero (demand-zero).
- Set the PC to the entry point in .text. Linux swaps in code and data pages as needed.
[Figure: the resulting process image — process-specific data structures (page tables, task and mm structs) and kernel code/data/stack in kernel VM, user stack at %esp, memory-mapped region for shared libraries (libc.so .text/.data), runtime heap (via malloc) up to brk, then .bss (demand-zero), .data, and .text mapped from p; low region forbidden]
44
Fork() Revisited
To create a new process using fork(): make copies of the old process's mm_struct, vm_area_structs, and page tables. At this point the two processes share all of their pages. How do we get separate address spaces without copying every virtual page from one space to another? The "copy-on-write" technique:
- Make the pages of writeable areas read-only, and flag the vm_area_structs for these areas as private "copy-on-write".
- A write by either process to one of these pages then causes a page fault.
- The fault handler recognizes copy-on-write, makes a copy of the page, and restores write permissions.
Net result: copies are deferred until absolutely necessary, i.e., until one of the processes tries to modify a shared page.
45
Dynamic Memory Allocation
Heap: an area of demand-zero memory that begins immediately after the bss area.
Allocator: maintains the heap as a collection of variously sized blocks; each block is a contiguous chunk of virtual memory that is either allocated or free.
- An explicit allocator requires the application both to allocate and to free space, e.g., malloc and free in C.
- An implicit allocator requires the application to allocate but not to free space; the allocator must detect when an allocated block is no longer being used. Implicit allocators are also known as garbage collectors, and the process of automatically freeing unused blocks is known as garbage collection — e.g., in Java, ML, or Lisp.
46
Heap
[Figure: process memory image — kernel virtual memory (invisible to user code), user stack at %esp, memory-mapped region for shared libraries, run-time heap (via malloc) with the "brk" pointer marking its top, uninitialized data (.bss), initialized data (.data), program text (.text)]
47
Malloc Package
#include <stdlib.h>
void *malloc(size_t size)
- If successful: returns a pointer to a memory block of at least size bytes, (typically) aligned to an 8-byte boundary so that any kind of data object can be contained in the block. If size == 0, returns NULL.
- If unsuccessful (e.g., the request exceeds available virtual memory): returns NULL and sets errno.
- Two other variations: calloc (initializes the allocated memory to zero) and realloc.
- Internally, allocators obtain heap memory with mmap/munmap or sbrk.
void *realloc(void *p, size_t size)
- Changes the size of the block pointed to by p and returns a pointer to the new block. The contents of the new block are unchanged up to the minimum of the old and new sizes.
void free(void *p)
- Returns the block pointed to by p to the pool of available memory; p must come from a previous call to malloc, calloc, or realloc.
48
Malloc Example
void foo(int n, int m) {
    int i, *p;

    /* allocate a block of n ints */
    if ((p = (int *) malloc(n * sizeof(int))) == NULL) {
        perror("malloc");
        exit(0);
    }
    for (i = 0; i < n; i++)
        p[i] = i;

    /* grow the block to hold n+m ints */
    if ((p = (int *) realloc(p, (n + m) * sizeof(int))) == NULL) {
        perror("realloc");
        exit(0);
    }
    for (i = n; i < n + m; i++)
        p[i] = i;

    /* print the new array */
    for (i = 0; i < n + m; i++)
        printf("%d\n", p[i]);

    free(p); /* return p to the available memory pool */
}
49
Allocation Examples
[Figure: heap snapshots after p1 = malloc(4), p2 = malloc(5), p3 = malloc(6), free(p2), and p4 = malloc(2) — p4 is placed in part of the hole left by freeing p2]
50
Requirements (Explicit Allocators)
Applications:
- Can issue an arbitrary sequence of allocation and free requests
- Free requests must correspond to an allocated block
Allocators:
- Can't control the number or size of allocated blocks
- Must respond immediately to all allocation requests, i.e., can't reorder or buffer requests
- Must allocate blocks from free memory, i.e., can only place allocated blocks in free memory
- Must align blocks so they satisfy all alignment requirements (8-byte alignment for GNU malloc (libc malloc) on Linux boxes)
- Can only manipulate and modify free memory
- Can't move allocated blocks once they are allocated, i.e., compaction is not allowed
51
Goals of Allocators
Maximize throughput: the number of completed requests per unit time. Example: 5,000 malloc calls and 5,000 free calls completed in 10 seconds gives a throughput of 1,000 operations/second.
Maximize memory utilization: requires minimizing "fragmentation" — unused holes in the heap.
There is a tradeoff between throughput and memory utilization, so these two goals must be balanced. Good locality properties also matter: "similar" objects should be allocated close together in space.
52
Internal Fragmentation
Poor memory utilization is caused by fragmentation, which comes in two forms: internal and external. For a given block, internal fragmentation is the difference between the block size and the payload size. It is caused by the overhead of maintaining heap data structures and by padding for alignment purposes. Any memory allocation policy using fixed-size blocks, such as paging, can also suffer from internal fragmentation. [Figure: a block consisting of internal fragmentation, the payload, and more internal fragmentation]
53
External Fragmentation
Occurs when there is enough aggregate free heap memory, but no single free block is large enough. Example: after p1 = malloc(4), p2 = malloc(5), p3 = malloc(6), and free(p2), the request p4 = malloc(6) fails — oops! External fragmentation depends on the pattern of future requests and is thus difficult to measure.
54
Implementation Issues
Free block organization: how do we know the size of a free block, and how do we keep track of the free blocks?
Placement: how do we choose an appropriate free block in which to place a newly allocated block?
Splitting: what do we do with the extra space after placement?
Coalescing: what do we do with small blocks that have been freed?
55
How do we know the size of a block?
Standard method: keep the length of a block in the word preceding the block. This word is often called the header field or header, and requires an extra word for every allocated block. Format of a simple heap block: the header holds the block size in its upper bits and an allocated flag in the low bit (a = 1: allocated, a = 0: free), followed by the payload (allocated blocks only) and optional padding. The block size includes the header, payload, and any padding; malloc returns a pointer to the beginning of the payload.
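The header encoding above fits in a handful of bit operations. A minimal sketch, assuming 8-byte-aligned sizes so the low 3 bits are free for flags (helper names are mine):

```c
#include <assert.h>
#include <stdint.h>

/* Header word: block size (a multiple of 8) ORed with the allocated bit. */
static uint32_t pack(uint32_t size, int alloc) { return size | (alloc & 1); }

static uint32_t get_size(uint32_t hdr)  { return hdr & ~0x7u; } /* clear flag bits */
static int      get_alloc(uint32_t hdr) { return hdr & 0x1; }
```

Because sizes are multiples of 8, masking off the low 3 bits recovers the size exactly, and the low bit is free to carry the allocated flag.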
56
Example
[Figure: p0 = malloc(4) — the header word before the data records block size 5 with the allocated bit set; free(p0) clears the allocated bit]
57
Keeping Track of Free Blocks
Method 1: implicit list using lengths — links all blocks.
Method 2: explicit list among the free blocks, using pointers within the free blocks.
Method 3: segregated free lists — different free lists for different size classes.
[Figure: a heap of blocks with sizes 5, 4, 6, 2 linked under each scheme]
58
Placement Policy
First fit: search the list from the beginning and choose the first free block that fits. Can take time linear in the total number of blocks (allocated and free). (+) Tends to retain large free blocks at the end. (-) Leaves small free blocks at the beginning.
Next fit: like first fit, but start the search where the previous search finished. (+) Runs faster than first fit. (-) Worse memory utilization than first fit.
Best fit: search the list and choose the free block with the closest size that fits. (+) Keeps fragments small — better memory utilization than the other two. (-) Typically runs slower, since it requires an exhaustive search of the heap.
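The contrast between first fit and best fit can be shown on a toy free list. This sketch abstracts the heap as an array of free-block sizes rather than real headers (names and the array representation are mine):

```c
#include <assert.h>

/* sz[i] is the size of free block i; returns the index chosen, or -1. */
static int first_fit(const int *sz, int n, int need)
{
    for (int i = 0; i < n; i++)
        if (sz[i] >= need) return i;     /* stop at the first fit */
    return -1;
}

static int best_fit(const int *sz, int n, int need)
{
    int best = -1;
    for (int i = 0; i < n; i++)          /* exhaustive search */
        if (sz[i] >= need && (best < 0 || sz[i] < sz[best]))
            best = i;
    return best;
}
```

With free blocks {8, 16, 4, 12} and a request for 10, first fit returns the 16-block (index 1) while best fit returns the 12-block (index 3), leaving the smaller remainder.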
59
Splitting
Allocating in a free block may require splitting: since the allocated space might be smaller than the free space, we may want to split the free block. [Figure: addblock(p, 2) in a heap 4|4|6|2 splits the free block of size 6 into an allocated block of size 4 and a free block of size 2, giving 4|4|4|2|2]
60
Coalescing
When the allocator frees a block, adjacent free blocks may exist. Such adjacent free blocks cause false fragmentation: there is enough free space, but it is chopped up into small, unusable pieces. We therefore need to coalesce with the next and/or previous block if they are free. Coalescing with the next block is straightforward [Figure: free(p) in 4|4|4|2|2 merges the freed block with the following free block of size 2, giving 4|4|6|2] — but how do we coalesce with the previous block?
61
Bidirectional Coalescing
Boundary tags [Knuth73]: replicate the size/allocated word (called the footer) at the bottom of each block. This allows us to traverse the "list" backwards, at the cost of extra space. It is an important and general technique: it allows constant-time coalescing. Format of allocated and free blocks: a one-word header (size, a — a = 1: allocated, a = 0: free; size is the total block size), the payload and padding (application data, allocated blocks only), and a one-word boundary-tag footer repeating (size, a).
62
Constant Time Coalescing
Four cases for the block being freed, depending on its neighbors: Case 1 — previous and next both allocated; Case 2 — previous allocated, next free; Case 3 — previous free, next allocated; Case 4 — previous and next both free.
63
Constant Time Coalescing (Case 1)
[Figure: neighbors stay allocated; the freed block's header and footer change from (n, 1) to (n, 0), and the surrounding (m1, 1) and (m2, 1) blocks are untouched]
64
Constant Time Coalescing (Case 2)
[Figure: the freed block (n) merges with the free next block (m2); the previous block (m1, 1) is untouched, the merged block gets header (n+m2, 0) at the freed block's start and footer (n+m2, 0) at the old next block's end]
65
Constant Time Coalescing (Case 3)
[Figure: the freed block (n) merges with the free previous block (m1); the next block (m2, 1) is untouched, the merged block gets header (n+m1, 0) at the previous block's start and footer (n+m1, 0) at the freed block's end]
66
Constant Time Coalescing (Case 4)
[Figure: the freed block (n) merges with both free neighbors; the merged block gets header (n+m1+m2, 0) at the previous block's start and footer (n+m1+m2, 0) at the next block's end]
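The four cases reduce to one computation once boundary tags make both neighbors' allocated bits and sizes visible in constant time. A sketch of just the size arithmetic, abstracted away from real headers (the function and parameter names are mine):

```c
#include <assert.h>

/* Size of the free block that results from freeing a block of size n,
   given each neighbor's allocated bit and size (visible via boundary tags). */
static int coalesced_size(int n, int prev_alloc, int m1,
                          int next_alloc, int m2)
{
    int size = n;
    if (!prev_alloc) size += m1;   /* cases 3 and 4: absorb previous block */
    if (!next_alloc) size += m2;   /* cases 2 and 4: absorb next block */
    return size;
}
```

A full implementation would also rewrite the header at the merged block's start and the footer at its end with the new (size, 0) word, as the case figures show.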
67
Implicit Lists: Summary
Implementation is very simple. Allocation takes linear time in the worst case; free takes constant time in the worst case, even with coalescing. Memory usage depends on the placement policy (first fit, next fit, or best fit). Implicit lists are not used in practice for malloc/free because of the linear-time allocation; they are used in special-purpose applications where the total number of blocks is known beforehand to be small. However, the concepts of splitting and boundary-tag coalescing are general to all allocators.
68
Keeping Track of Free Blocks
Method 1: implicit list using lengths — links all blocks.
Method 2: explicit list among the free blocks, using pointers within the free blocks.
Method 3: segregated free lists — different free lists for different size classes.
[Figure: a heap of blocks with sizes 5, 4, 6, 2 linked under each scheme]
69
Explicit Free Lists
Use the data space of free blocks to hold pointers, typically in a doubly linked list. Boundary tags are still needed for coalescing. [Figure: free blocks A, B, C threaded through a heap 4|4|4|6|4|4 by forward and back links]
70
Format of Doubly-Linked Heap Blocks
Allocated block: header (block size, a/f), payload, optional padding, footer (block size, a/f).
Free block: header (block size, a/f), pred (predecessor pointer), succ (successor pointer), the old payload, optional padding, footer (block size, a/f).
71
Freeing With Explicit Free Lists
Insertion policy: where in the free list do you put a newly freed block?
- LIFO (last-in-first-out) policy: insert the freed block at the beginning of the free list. (+) Simple, and freeing a block takes constant time; if boundary tags are used, coalescing can also be performed in constant time.
- Address-ordered policy: insert freed blocks so the free list is always in address order, i.e., addr(pred) < addr(curr) < addr(succ). (-) Freeing a block requires a linear-time search. (+) Studies suggest address-ordered first fit enjoys better memory utilization than LIFO-ordered first fit.
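The constant-time LIFO insertion above amounts to a push onto a doubly linked list. A minimal sketch (the struct and function names are mine, and real allocators embed these pointers in the freed block's old payload):

```c
#include <assert.h>
#include <stddef.h>

struct fblock {
    struct fblock *pred, *succ;   /* free-list neighbors */
};

/* LIFO policy: push the freed block onto the head of the free list. */
static void lifo_insert(struct fblock **head, struct fblock *b)
{
    b->pred = NULL;
    b->succ = *head;
    if (*head)
        (*head)->pred = b;
    *head = b;
}
```

Every step is O(1), which is exactly why the LIFO policy makes freeing constant time.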
72
Explicit List Summary
Comparison to the implicit list:
- Allocation time is linear in the number of free blocks instead of the total number of blocks — much faster when most of memory is full.
- Allocate and free are slightly more complicated, since blocks must be spliced in and out of the list.
- Extra space is needed for the links (2 extra words per block), which raises the minimum block size and can increase internal fragmentation.
The main use of linked lists is in conjunction with segregated free lists: keep multiple linked lists of different size classes, or possibly for different types of objects.
73
Keeping Track of Free Blocks
Method 1: implicit list using lengths — links all blocks.
Method 2: explicit list among the free blocks, using pointers within the free blocks.
Method 3: segregated free lists — different free lists for different size classes; can reduce allocation time compared to a single linked-list organization.
[Figure: a heap of blocks with sizes 5, 4, 6, 2 linked under each scheme]
74
Segregated Storage
Partition the set of all free blocks into equivalence classes called size classes. The allocator maintains an array of free lists, one per size class, ordered by increasing size — e.g., {1-2}, {3}, {4}, {5-8}, {9-16}, ... There is often a separate size class for every small size (2, 3, 4, ...), while classes with larger sizes typically cover one power of 2 each. Variations of segregated storage differ in how they define size classes, when they perform coalescing, when they request additional heap memory from the OS, whether they allow splitting, and so on. Examples: simple segregated storage and segregated fits.
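The power-of-2 portion of such a scheme maps a request size to a free-list index in a few lines. A sketch of that variant only (the function name and exact class boundaries are my choices; real allocators, as noted above, add singleton classes for small sizes):

```c
#include <assert.h>

/* Power-of-2 size classes: class c covers sizes in (2^c, 2^(c+1)],
   so class 0 = {1,2}, class 1 = {3,4}, class 2 = {5..8}, class 3 = {9..16}, ... */
static int size_class(unsigned n)
{
    int c = 0;
    while ((1u << (c + 1)) < n)
        c++;
    return c;
}
```

The allocator indexes its array of free lists with this value and, under segregated fits, falls through to larger classes when the chosen list is empty.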
75
Simple Segregated Storage
Separate heap and free list for each size class; the free list for a size class contains same-sized blocks of the class's largest element size. For example, the free list for size class {17-32} consists entirely of blocks of size 32.
To allocate a block of size n: if the free list for n's class is not empty, allocate the first block in its entirety; if the free list is empty, get a new page from the OS, create a new free list from all the blocks in the page, and then allocate the first block on the list.
To free a block: simply insert it at the front of the appropriate free list.
(+) Both allocating and freeing are fast constant-time operations. (+) Little per-block memory overhead: no splitting and no coalescing. (-) Susceptible to internal fragmentation (since free blocks are never split) and external fragmentation (since free blocks are never coalesced).
76
Segregated Fits
Array of free lists, one per size class; the free list for each size class contains potentially different-sized blocks.
To allocate a block of size n: do a first-fit search of the appropriate free list. If an appropriate block is found, optionally split it and place the fragment on the appropriate list. If no block is found, try the next larger class, repeating until a block is found. If no free list yields a fit, request additional heap memory from the OS, allocate the block out of the new memory, and place the remainder in the largest size class.
To free a block: coalesce and place the result on the appropriate list.
(+) Fast, since searches are limited to part of the heap rather than the entire heap — although coalescing can increase search times. (+) Good memory utilization: a simple first-fit search approximates a best-fit search of the entire heap. A popular choice for production-quality allocators such as GNU malloc.
77
Garbage Collection
Garbage collector: a dynamic storage allocator that automatically frees allocated blocks that are no longer used. This gives implicit memory management: the application never has to free.
void foo() {
    int *p = malloc(128);
    return; /* p block is now garbage */
}
Common in functional languages, scripting languages, and modern object-oriented languages: Lisp, ML, Java, Perl, Mathematica, ... Variants (conservative garbage collectors) exist for C and C++, but they cannot collect all garbage.
78
Garbage Collection
How does the memory manager know when memory can be freed? In general we cannot know what will be used in the future, since that depends on conditionals — but we can tell that certain blocks cannot be used if there are no pointers to them. This requires assumptions about pointers: the memory manager must be able to distinguish pointers from non-pointers. Garbage collectors view memory as a reachability graph and periodically reclaim the unreachable nodes.
Classical GC algorithms:
- Mark-and-sweep collection (McCarthy, 1960): does not move blocks (unless you also "compact")
- Reference counting (Collins, 1960): does not move blocks (not discussed)
- Copying collection (Minsky, 1963): moves blocks (not discussed)
79
Memory as a Graph Reachability graph: we view memory as a directed graph
Each block is a node in the graph; each pointer is an edge in the graph
Locations not in the heap that contain pointers into the heap are called root nodes, e.g. registers, locations on the stack, and global variables
[Figure: root nodes pointing into heap nodes; reachable vs. not-reachable (garbage) nodes]
A node (block) is reachable if there is a path from some root to that node. Non-reachable nodes are garbage (never needed by the application)
80
Mark and Sweep Garbage Collectors
A Mark&Sweep garbage collector consists of a mark phase followed by a sweep phase
Uses an extra mark bit in the header of each block
When out of space:
Mark: start at the roots and set the mark bit on all reachable memory blocks
Sweep: scan all blocks and free the blocks that are not marked
[Figure: heap before mark, after mark (mark bits set on blocks reachable from the root), and after sweep (unmarked blocks freed)]
81
Mark and Sweep (cont.) Mark using a depth-first traversal of the memory graph
ptr mark(ptr p) {
  if (!is_ptr(p)) return;          // do nothing if not a pointer
  if (markBitSet(p)) return;       // check if already marked
  setMarkBit(p);                   // set the mark bit
  for (i = 0; i < length(p); i++)  // mark all children
    mark(p[i]);
  return;
}
Helper functions:
is_ptr(p): if p points into an allocated block, return a pointer to the beginning of that block; return NULL otherwise
markBitSet(p) / setMarkBit(p) / clearMarkBit(p): test / set / clear the mark bit of p’s block
allocateBitSet(p): return true if p’s block is allocated
length(p): return the length of p’s block
Sweep using lengths to find the next block
ptr sweep(ptr p, ptr end) {
  while (p < end) {
    if (markBitSet(p))
      clearMarkBit(p);
    else if (allocateBitSet(p))
      free(p);
    p += length(p);
  }
}
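The two phases can be exercised end to end on a toy heap. This is a simulation, not a real collector: the fixed-size `heap` array, the `node_t` layout, and `collect_demo` are all invented for the example; "freeing" just clears an `allocated` flag.

```c
#include <assert.h>

#define MAX_CHILDREN 2
#define HEAP_SIZE 8

/* Toy heap: block i may point at up to two other blocks (-1 = no edge). */
typedef struct {
    int allocated;
    int marked;
    int child[MAX_CHILDREN];
} node_t;

node_t heap[HEAP_SIZE];

/* Mark phase: depth-first traversal, mirroring the slide's mark(). */
void mark(int i) {
    if (i < 0 || !heap[i].allocated || heap[i].marked)
        return;                        /* not a block, or already marked */
    heap[i].marked = 1;                /* set the mark bit */
    for (int c = 0; c < MAX_CHILDREN; c++)
        mark(heap[i].child[c]);        /* mark all children */
}

/* Sweep phase: free unmarked allocated blocks and clear surviving
   mark bits; returns the number of blocks freed. */
int sweep(void) {
    int freed = 0;
    for (int i = 0; i < HEAP_SIZE; i++) {
        if (heap[i].marked)
            heap[i].marked = 0;
        else if (heap[i].allocated) {
            heap[i].allocated = 0;     /* "free" the block */
            freed++;
        }
    }
    return freed;
}

/* Build root -> 0 -> 1, plus an unreachable block 2, then collect. */
int collect_demo(void) {
    heap[0] = (node_t){1, 0, {1, -1}};
    heap[1] = (node_t){1, 0, {-1, -1}};
    heap[2] = (node_t){1, 0, {-1, -1}};  /* garbage: no path from a root */
    mark(0);                             /* block 0 is the only root */
    return sweep();                      /* frees just block 2 */
}
```

Block 2 is allocated but unreachable, so the sweep reclaims exactly one block while blocks 0 and 1 survive with their mark bits cleared.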
82
Common Memory-Related Bugs in C
Dereferencing bad pointers
Reading uninitialized memory
Stack buffer overflow
Assuming pointers and the objects they point to are the same size
Making off-by-one errors
Referencing a pointer instead of the object it points to
Misunderstanding pointer arithmetic
Referencing nonexistent variables
Freeing blocks multiple times
Referencing freed blocks
Memory leaks
83
Dereferencing Bad Pointers
There are large holes in the virtual address space of a process that are not mapped to any meaningful data. If we attempt to dereference a pointer into one of these holes, the process causes a segmentation exception.
The classic scanf bug: reading an integer from stdin into a variable
scanf("%d", val);   /* wrong: passes the value of val, not its address */
In the best case, the program terminates immediately with an exception. In the worst case, the contents of val correspond to some valid read/write area, and we overwrite memory, usually with disastrous consequences much later.
The correct form is scanf("%d", &val);
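The same address-of rule applies to the whole scanf family. A small sketch using sscanf (so it can be checked without stdin); the helper name `parse_or` is invented for the example.

```c
#include <assert.h>
#include <stdio.h>

/* "%d" needs the ADDRESS of the target int; passing the int itself
   makes the library scribble through whatever value it holds. */
int parse_or(const char *s, int fallback) {
    int v;
    if (sscanf(s, "%d", &v) == 1)   /* &v, not v */
        return v;
    return fallback;
}
```

Modern compilers warn about the wrong form when format-string checking is enabled (e.g. gcc's -Wformat), which catches this bug at compile time.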
84
Reading Uninitialized Memory
Assuming that heap data is initialized to zero
While .bss sections are always initialized to zero by the loader, this is not true for heap memory. Use calloc instead of malloc.
/* return y = Ax; buggy: y is never zeroed */
int *matvec(int **A, int *x) {
  int *y = malloc(N*sizeof(int));  /* should be calloc(N, sizeof(int)) */
  int i, j;
  for (i=0; i<N; i++)
    for (j=0; j<N; j++)
      y[i] += A[i][j]*x[j];       /* += accumulates onto garbage */
  return y;
}
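A corrected version of the slide's matvec, as a sketch: the matrix size is passed in explicitly rather than using the slide's global N, and `matvec_demo` is an invented test harness.

```c
#include <assert.h>
#include <stdlib.h>

/* y = A*x for an n x n matrix.  calloc zero-fills y, so the += in the
   inner loop accumulates from 0 instead of from whatever garbage the
   heap block happened to contain. */
int *matvec(int n, int **A, int *x) {
    int *y = calloc(n, sizeof(int));  /* zeroed, unlike malloc */
    if (!y)
        return NULL;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            y[i] += A[i][j] * x[j];
    return y;
}

/* Identity matrix times (3, 4); encodes the result as y[0]*10 + y[1]. */
int matvec_demo(void) {
    int r0[2] = {1, 0}, r1[2] = {0, 1};
    int *A[2] = {r0, r1};
    int x[2] = {3, 4};
    int *y = matvec(2, A, x);
    int s = y[0] * 10 + y[1];
    free(y);
    return s;
}
```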
85
Stack Overflow Buffer overflow
A program can run into a buffer overflow bug if it writes to a target buffer on the stack without checking the size of the input string.
The gets function copies an arbitrary-length input string into the buffer. To fix this, use fgets, which limits the size of the input string.
Basis for classic buffer overflow attacks: the 1988 Internet worm, modern attacks on Web servers, the AOL/Microsoft IM war
void bufoverflow() {
  char buf[64];
  gets(buf);   /* unbounded write into a 64-byte stack buffer */
  return;
}
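A bounded replacement might look like this. The helper names (`read_line`, `read_line_demo`) are invented, and the demo routes an oversized line through a temporary file so the bound can be checked without interactive input.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* fgets writes at most len-1 characters plus a NUL terminator, so a
   fixed stack buffer cannot overflow no matter how long the line is. */
int read_line(FILE *in, char *buf, size_t len) {
    if (!fgets(buf, (int)len, in))
        return 0;
    buf[strcspn(buf, "\n")] = '\0';   /* strip the newline, if present */
    return 1;
}

/* Feed an oversized line through a temporary file into an 8-byte buffer. */
int read_line_demo(void) {
    char buf[8];
    FILE *f = tmpfile();
    if (!f)
        return -1;
    fputs("this line is much longer than the buffer\n", f);
    rewind(f);
    read_line(f, buf, sizeof buf);
    fclose(f);
    return (int)strlen(buf);          /* truncated to at most 7 chars */
}
```

gets was removed outright in the C11 standard precisely because it cannot be used safely.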
86
Pointers and the Objects are Different in Size
Allocating the (possibly) wrong sized object
Create an array of N pointers, each of which points to an array of M ints.
int **p;
p = malloc(N*sizeof(int));        /* wrong: should be N*sizeof(int *) */
for (i=0; i<N; i++) {
  p[i] = malloc(M*sizeof(int));
}
On a machine where a pointer is larger than an int (the Alpha, or any modern 64-bit system), the for loop writes past the end of the p array. Use sizeof(int *) in the first malloc.
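A corrected sketch; `make_matrix` and `matrix_demo` are invented names. The `sizeof *p` idiom sizes the allocation from the pointer's own type, so it stays correct even if p's type changes later.

```c
#include <assert.h>
#include <stdlib.h>

/* The row-pointer array holds int* values, so it must be sized with
   sizeof(int *) -- or, more robustly, sizeof *p -- never sizeof(int). */
int **make_matrix(int n, int m) {
    int **p = malloc(n * sizeof *p);  /* sizeof *p == sizeof(int *) */
    if (!p)
        return NULL;
    for (int i = 0; i < n; i++)
        p[i] = calloc(m, sizeof(int));
    return p;
}

int matrix_demo(void) {
    int **p = make_matrix(3, 4);
    p[2][3] = 7;                      /* last element: in bounds */
    int v = p[2][3];
    for (int i = 0; i < 3; i++)
        free(p[i]);
    free(p);
    return v;
}
```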
87
Off-by-One Errors Off-by-one error
Tries to initialize N+1 row pointers instead of N: the valid indices are 0..N-1, so the loop condition should be i < N, not i <= N
int **p;
p = malloc(N*sizeof(int *));
for (i=0; i<=N; i++) {            /* writes one element past the end */
  p[i] = malloc(M*sizeof(int));
}
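The corrected bound in isolation, with invented helper names (`fill_and_sum`, `fill_demo`):

```c
#include <assert.h>

/* An n-element array has valid indices 0..n-1, so the loop bound must
   be i < n; using i <= n writes one element past the end. */
int fill_and_sum(int *a, int n) {
    int s = 0;
    for (int i = 0; i < n; i++) {     /* not i <= n */
        a[i] = i;
        s += a[i];
    }
    return s;
}

int fill_demo(void) {
    int a[5];
    return fill_and_sum(a, 5);        /* 0+1+2+3+4 */
}
```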
88
Pointer vs Object Referencing a pointer instead of the object it points to
int *BinheapDelete(int **binheap, int *size) {
  int *packet;
  packet = binheap[0];
  binheap[0] = binheap[*size - 1];
  *size--;                        /* wrong: decrements the pointer */
  Heapify(binheap, *size, 0);
  return(packet);
}
*size-- parses as *(size--): it decrements the pointer and then dereferences its old value. The correct form is (*size)--.
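The fix in isolation; `shrink` and `shrink_demo` are invented names for the example.

```c
#include <assert.h>

/* *size-- applies the -- to the pointer, not the int.  Parenthesizing
   as (*size)-- decrements the pointed-to object instead. */
int shrink(int *size) {
    (*size)--;                        /* parenthesize before -- */
    return *size;
}

int shrink_demo(void) {
    int n = 5;
    shrink(&n);
    return n;                         /* the int moved, not a pointer */
}
```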
89
Pointer Arithmetic Misunderstanding pointer arithmetic
Pointer arithmetic is scaled by the size of the pointed-to type, so p += sizeof(int) adds 4 to p and incorrectly scans only every fourth integer in the array. The correct form is p++
int *search(int *p, int val) {
  while (*p && *p != val)
    p += sizeof(int);               /* wrong: skips 3 of every 4 ints */
  return p;
}
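The corrected search as a sketch; `find_val` and `find_demo` are invented names, and the array is assumed to be zero-terminated as in the slide.

```c
#include <assert.h>

/* p + 1 already advances to the NEXT int (the compiler scales by
   sizeof(int)), so p++ visits every element of the array. */
const int *find_val(const int *p, int val) {
    while (*p && *p != val)
        p++;                          /* not p += sizeof(int) */
    return p;
}

int find_demo(void) {
    int a[] = {10, 20, 30, 0};        /* zero-terminated, per the slide */
    return *find_val(a, 30);
}
```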
90
Referencing Nonexistent Variables
Forgetting that local variables disappear when a function returns
int *foo() {
  int val;
  return &val;   /* returns a pointer into foo's dead stack frame */
}
Later, if the program writes through the returned pointer, it might modify an entry in another function's stack frame
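One standard fix is to give the object heap lifetime instead of stack lifetime; `make_int` and `make_int_demo` are invented names for this sketch.

```c
#include <assert.h>
#include <stdlib.h>

/* &val in the slide's foo() dies with foo's stack frame; a heap block
   survives until it is explicitly freed, so the pointer stays valid. */
int *make_int(int start) {
    int *p = malloc(sizeof *p);       /* outlives this call */
    if (p)
        *p = start;
    return p;
}

int make_int_demo(void) {
    int *p = make_int(9);
    int v = *p;                       /* safe: p is a live heap block */
    free(p);
    return v;
}
```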
91
Referencing Freed Blocks
Evil! References data in heap blocks that have already been freed
x = malloc(N*sizeof(int));
<manipulate x>
free(x);
...
y = malloc(M*sizeof(int));
for (i=0; i<M; i++)
  y[i] = x[i]++;   /* x was freed: undefined behavior */
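The safe ordering is to finish every read of the block before freeing it. A sketch with invented names (`dup_then_free`, `dup_demo`):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Copy the data BEFORE freeing x: any read of x after free(x) is
   undefined behavior, even when the bytes often look intact. */
int *dup_then_free(int *x, int n) {
    int *y = malloc(n * sizeof(int));
    if (y)
        memcpy(y, x, n * sizeof(int));  /* x is still valid here */
    free(x);                            /* no reads of x after this */
    return y;
}

int dup_demo(void) {
    int *x = malloc(3 * sizeof(int));
    x[0] = 1; x[1] = 2; x[2] = 3;
    int *y = dup_then_free(x, 3);
    int v = y[1];
    free(y);
    return v;
}
```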
92
Failing to Free Blocks (Memory Leaks)
Slow, long-term killer! Memory leaks are particularly serious for programs such as daemons and servers, which by definition never terminate.
foo() {
  int *x = malloc(N*sizeof(int));
  ...
  return;   /* x is never freed */
}
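A leak-free version of the pattern, as a sketch with an invented name (`sum_of_squares`): every path out of the function releases the block it allocated.

```c
#include <assert.h>
#include <stdlib.h>

/* Every malloc needs a matching free on every exit path; in a daemon
   that never exits, each missed free is memory lost forever. */
int sum_of_squares(int n) {
    int *x = malloc(n * sizeof(int));
    if (!x)
        return -1;
    int s = 0;
    for (int i = 0; i < n; i++) {
        x[i] = i * i;
        s += x[i];
    }
    free(x);                          /* the fix for the slide's foo() */
    return s;
}
```

Tools such as valgrind's memcheck report blocks that are still allocated at exit, which is the usual way leaks like the slide's foo() are found in practice.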
93
Homework 7 Read Chapter 8 from the Computer Systems textbook
Exercises 9.11, 9.13, 9.15, 9.17, 9.19