
1 Virtual memory

2 Virtual Memory
Address spaces
VM as a tool for caching
VM as a tool for memory management
VM as a tool for memory protection
Address translation

3 A System Using Physical Addressing
[Figure: the CPU places a physical address (PA, e.g. 4) directly on the memory bus and receives the data word stored at that location in main memory.]
Used in “simple” systems like embedded microcontrollers in devices like cars, elevators, and digital picture frames.

4 A System Using Virtual Addressing
[Figure: the CPU emits a virtual address (VA, e.g. 4100); the MMU on the CPU chip translates it to a physical address (PA, e.g. 4), which is used to access main memory.]
Used in all modern servers, desktops, and laptops.
One of the great ideas in computer science.

5 Address Spaces
Linear address space: ordered set of contiguous non-negative integer addresses: {0, 1, 2, 3, ...}
Virtual address space: set of N = 2^n virtual addresses {0, 1, 2, 3, ..., N-1}
Physical address space: set of M = 2^m physical addresses {0, 1, 2, 3, ..., M-1}
Clean distinction between data (bytes) and their attributes (addresses)
Each object can now have multiple addresses
Every byte in main memory: one physical address, one (or more) virtual addresses

6 Why Virtual Memory (VM)?
Uses main memory efficiently
  Use DRAM as a cache for the parts of a virtual address space
Simplifies memory management
  Each process gets the same uniform linear address space
Isolates address spaces
  One process can’t interfere with another’s memory
  User program cannot access privileged kernel information

7 Virtual Memory
Address spaces
VM as a tool for caching
VM as a tool for memory management
VM as a tool for memory protection
Address translation

8 VM as a Tool for Caching
Virtual memory is an array of N contiguous bytes stored on disk.
The contents of the array on disk are cached in physical memory (the DRAM cache); these cache blocks are called pages (size is P = 2^p bytes).
[Figure: virtual pages (VPs) stored on disk, physical pages (PPs) cached in DRAM; each virtual page is unallocated, cached, or uncached, and each physical page is empty or holds a cached VP.]

9 DRAM Cache Organization
DRAM cache organization is driven by the enormous miss penalty
  DRAM is about 10x slower than SRAM
  Disk is about 10,000x slower than DRAM
Consequences
  Large page (block) size: typically 4-8 KB, sometimes 4 MB
  Fully associative: any VP can be placed in any PP; requires a “large” mapping function, different from CPU caches
  Highly sophisticated, expensive replacement algorithms: too complicated and open-ended to be implemented in hardware
  Write-back rather than write-through

10 Page Tables
A page table is an array of page table entries (PTEs) that maps virtual pages to physical pages.
Per-process kernel data structure in DRAM.
[Figure: a memory-resident page table in DRAM; each PTE holds a valid bit plus a physical page number or disk address. Valid entries (here for VP 1, VP 2, VP 7, VP 4) point to physical pages PP 0-PP 3; invalid entries are null or point to uncached pages on disk.]

11 Page Hit
Page hit: reference to a VM word that is in physical memory (DRAM cache hit).
[Figure: the virtual address selects a PTE whose valid bit is 1, so the reference is satisfied directly from DRAM.]

12 Page Fault
Page fault: reference to a VM word that is not in physical memory (DRAM cache miss).
[Figure: the virtual address selects a PTE whose valid bit is 0; the page resides only on disk.]

13 Handling Page Fault Page miss causes page fault (an exception)
[Figure: the faulting virtual address selects an invalid PTE, triggering the exception.]

14 Handling Page Fault Page miss causes page fault (an exception)
Page fault handler selects a victim to be evicted (here VP 4)
[Figure: same page table as before; VP 4 currently occupies PP 3 and is chosen as the victim.]

15 Handling Page Fault Page miss causes page fault (an exception)
Page fault handler selects a victim to be evicted (here VP 4)
[Figure: the victim VP 4 has been paged out and the requested VP 3 paged into PP 3; both PTEs are updated.]

16 Handling Page Fault Page miss causes page fault (an exception)
Page fault handler selects a victim to be evicted (here VP 4)
Offending instruction is restarted: page hit!
[Figure: with VP 3 now resident in PP 3, the re-executed reference hits.]

17 Locality to the Rescue Again!
Virtual memory works because of locality.
At any point in time, programs tend to access a set of active virtual pages called the working set; programs with better temporal locality have smaller working sets.
If (working set size) < (main memory size): good performance for one process after compulsory misses.
If SUM(working set sizes) > (main memory size): thrashing, a performance meltdown where pages are swapped (copied) in and out continuously.

18 Virtual Memory
Address spaces
VM as a tool for caching
VM as a tool for memory management
VM as a tool for memory protection
Address translation

19 VM as a Tool for Memory Management
Key idea: each process has its own virtual address space
  It can view memory as a simple linear array
  Mapping function scatters addresses through physical memory
  Well-chosen mappings simplify memory allocation and management
[Figure: address translation maps the virtual address spaces of process 1 and process 2 onto a shared physical address space; for example, a VP from each process maps onto PP 6, which holds read-only library code.]

20 VM as a Tool for Memory Management
Memory allocation
  Each virtual page can be mapped to any physical page
  A virtual page can be stored in different physical pages at different times
Sharing code and data among processes
  Map virtual pages to the same physical page (here: PP 6)
[Figure: as on the previous slide, the page tables of both processes point to the shared PP 6.]

21 Simplifying Linking and Loading
Linking
  Each program has a similar virtual address space
  Code, stack, and shared libraries always start at the same addresses
Loading
  execve() allocates virtual pages for the .text and .data sections and creates PTEs marked as invalid
  The .text and .data sections are copied, page by page, on demand by the virtual memory system
[Figure: process address space, top to bottom: kernel virtual memory (invisible to user code); user stack (created at runtime, %esp points to its top); memory-mapped region for shared libraries; run-time heap (created by malloc, bounded above by brk); read/write segment (.data, .bss) and read-only segment (.init, .text, .rodata), both loaded from the executable file; unused region at the bottom.]

22 Virtual Memory
Address spaces
VM as a tool for caching
VM as a tool for memory management
VM as a tool for memory protection
Address translation

23 VM as a Tool for Memory Protection
Extend PTEs with permission bits.
Page fault handler checks these before remapping; if violated, the process is sent SIGSEGV (segmentation fault).

Process i:  SUP  READ  WRITE  Address
  VP 0:     No   Yes   No     PP 6
  VP 1:     No   Yes   Yes    PP 4
  VP 2:     Yes  Yes   Yes    PP 2

Process j:  SUP  READ  WRITE  Address
  VP 0:     No   Yes   No     PP 9
  VP 1:     Yes  Yes   Yes    PP 6
  VP 2:     No   Yes   Yes    PP 11

24 Virtual Memory
Address spaces
VM as a tool for caching
VM as a tool for memory management
VM as a tool for memory protection
Address translation

25 VM Address Translation
Virtual Address Space: V = {0, 1, ..., N-1}
Physical Address Space: P = {0, 1, ..., M-1}
Address Translation: MAP: V → P ∪ {∅}
For virtual address a:
  MAP(a) = a’ if data at virtual address a is at physical address a’ in P
  MAP(a) = ∅ if data at virtual address a is not in physical memory (either invalid or stored on disk)

26 Summary of Address Translation Symbols
Basic parameters
  N = 2^n: number of addresses in virtual address space
  M = 2^m: number of addresses in physical address space
  P = 2^p: page size (bytes)
Components of the virtual address (VA)
  TLBI: TLB index
  TLBT: TLB tag
  VPO: virtual page offset
  VPN: virtual page number
Components of the physical address (PA)
  PPO: physical page offset (same as VPO)
  PPN: physical page number
  CO: byte offset within cache line
  CI: cache index
  CT: cache tag

27 Address Translation With a Page Table
[Figure: the virtual address splits into the VPN (bits n-1..p) and the VPO (bits p-1..0). The page table base register (PTBR) locates the current process’s page table; the VPN indexes a PTE containing a valid bit and a PPN. Valid bit = 0 means the page is not in memory (page fault); otherwise the physical address is the PPN (bits m-1..p) concatenated with the PPO = VPO (bits p-1..0).]
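To make the split concrete, here is a minimal C sketch of the computation a page-table walk performs; pte_t, the flat table layout, and handle_page_fault are hypothetical simplifications, not code from the slides.

#include <stdint.h>

#define PAGE_SHIFT 12                       /* assumption: p = 12, P = 4 KB */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

typedef struct {
    unsigned valid;                         /* valid bit */
    uint64_t ppn;                           /* physical page number (or disk addr) */
} pte_t;

void handle_page_fault(uint64_t va);        /* assumed fault handler */

uint64_t translate(const pte_t *page_table, uint64_t va) {
    uint64_t vpn = va >> PAGE_SHIFT;        /* VPN = high-order bits */
    uint64_t vpo = va & (PAGE_SIZE - 1);    /* VPO = low-order p bits */
    pte_t pte = page_table[vpn];            /* table located via the PTBR */
    if (!pte.valid)
        handle_page_fault(va);              /* page not in physical memory */
    return (pte.ppn << PAGE_SHIFT) | vpo;   /* PA = PPN concatenated with PPO (= VPO) */
}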

28 Address Translation: Page Hit
1) Processor sends virtual address to MMU
2-3) MMU fetches PTE from page table in memory
4) MMU sends physical address to cache/memory
5) Cache/memory sends data word to processor

29 Address Translation: Page Fault
1) Processor sends virtual address to MMU
2-3) MMU fetches PTE from page table in memory
4) Valid bit is zero, so MMU triggers page fault exception
5) Handler identifies victim (and, if dirty, pages it out to disk)
6) Handler pages in new page and updates PTE in memory
7) Handler returns to original process, restarting faulting instruction

30 Integrating VM and Cache
[Figure: the MMU sits between the CPU and the L1 cache. PTE fetches (by PTE address, PTEA) and data fetches (by PA) both go to the L1 cache first, falling through to memory on a PTEA miss or PA miss.]
VA: virtual address, PA: physical address, PTE: page table entry, PTEA: PTE address

31 Speeding up Translation with a TLB
Page table entries (PTEs) are cached in L1 like any other memory word
  PTEs may be evicted by other data references
Solution: Translation Lookaside Buffer (TLB)
  Small hardware cache in MMU
  Maps virtual page numbers to physical page numbers
  Contains complete page table entries for a small number of pages

32 TLB Hit
A TLB hit eliminates a memory access.
[Figure: 1) CPU sends VA to MMU; 2) MMU sends VPN to TLB; 3) TLB returns the PTE; 4) MMU sends PA to cache/memory; 5) data word returns to the CPU.]

33 TLB Miss
A TLB miss incurs an additional memory access (the PTE).
[Figure: 1) CPU sends VA to MMU; 2) MMU sends VPN to TLB, which misses; 3) MMU sends the PTEA to cache/memory; 4) the PTE returns to the MMU; 5) MMU sends PA to cache/memory; 6) data word returns to the CPU.]
Fortunately, TLB misses are rare. Why?

34 Multi-Level Page Tables
Suppose: 4 KB (2^12) page size, 48-bit address space, 8-byte (2^3) PTEs
Problem: would need a 512 GB page table!
  2^48 addresses / 2^12 bytes per page × 2^3 bytes per PTE = 2^39 bytes
Common solution: multi-level page tables
Example: 2-level page table
  Level 1 table: each PTE points to a page table (always memory resident)
  Level 2 table: each PTE points to a page (paged in and out like any other data)
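As a sketch of the two-level walk, assuming the classic 32-bit split (10-bit level-1 index, 10-bit level-2 index, 12-bit offset) and a valid flag in the PTE’s low bit; none of these encoding details come from the slide itself.

#include <stdint.h>

#define FAULT 0xFFFFFFFFu                 /* hypothetical "unmapped" sentinel */

uint32_t translate2(const uint32_t *l1_table, uint32_t va) {
    uint32_t idx1 = va >> 22;             /* bits 31..22: level-1 index */
    uint32_t idx2 = (va >> 12) & 0x3FF;   /* bits 21..12: level-2 index */
    uint32_t vpo  = va & 0xFFF;           /* bits 11..0: page offset */

    uint32_t pde = l1_table[idx1];        /* level-1 PTE -> level-2 table */
    if (!(pde & 1))                       /* low bit as valid flag (assumed) */
        return FAULT;
    const uint32_t *l2_table = (const uint32_t *)(uintptr_t)(pde & ~0xFFFu);

    uint32_t pte = l2_table[idx2];        /* level-2 PTE -> physical page */
    if (!(pte & 1))
        return FAULT;                     /* unmapped: a page fault in real hardware */
    return (pte & ~0xFFFu) | vpo;         /* PPN bits concatenated with offset */
}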

35 A Two-Level Page Table Hierarchy
[Figure: 32-bit addresses, 4 KB pages, 4-byte PTEs. Level-1 PTE 0 and PTE 1 point to level-2 tables mapping VP 0-2047: 2K allocated VM pages for code and data. Level-1 PTEs 2-7 are null: a gap of 6K unallocated VM pages. Level-1 PTE 8 points to a level-2 table with 1023 unallocated pages and 1 allocated VM page for the stack (VP 9215). The remaining (1K - 9) level-1 PTEs are null.]

36 Summary
Programmer’s view of virtual memory
  Each process has its own private linear address space
  Cannot be corrupted by other processes
System view of virtual memory
  Uses memory efficiently by caching virtual memory pages; efficient only because of locality
  Simplifies memory management and programming
  Simplifies protection by providing a convenient interpositioning point to check permissions

37 Simple Memory System Example
Addressing
  14-bit virtual addresses
  12-bit physical addresses
  Page size = 64 bytes
[Figure: VA bits 13-6 form the VPN and bits 5-0 the VPO; PA bits 11-6 form the PPN and bits 5-0 the PPO.]
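The field widths can be checked with a few shifts and masks; this little program (mine, not the slides’) decomposes the first worked example that appears below.

#include <stdio.h>

int main(void) {
    unsigned va = 0x03D4;            /* Example #1 below */
    unsigned vpo  = va & 0x3F;       /* low 6 bits: 64-byte pages */
    unsigned vpn  = va >> 6;         /* remaining 8 bits */
    unsigned tlbi = vpn & 0x3;       /* low 2 bits of VPN: 4 TLB sets */
    unsigned tlbt = vpn >> 2;        /* high 6 bits of VPN */
    printf("VPN 0x%02X VPO 0x%02X TLBI 0x%X TLBT 0x%02X\n",
           vpn, vpo, tlbi, tlbt);    /* prints 0x0F 0x14 0x3 0x03 */

    unsigned pa = (0x0D << 6) | vpo; /* PPN 0x0D comes from the TLB hit */
    unsigned co = pa & 0x3;          /* 4-byte cache blocks */
    unsigned ci = (pa >> 2) & 0xF;   /* 16 cache sets */
    unsigned ct = pa >> 6;           /* cache tag = PPN */
    printf("PA 0x%03X CO 0x%X CI 0x%X CT 0x%02X\n",
           pa, co, ci, ct);          /* prints 0x354 0x0 0x5 0x0D */
    return 0;
}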

38 Simple Memory System Page Table
Only the first 16 entries (out of 256) are shown:

VPN  PPN  Valid     VPN  PPN  Valid
00   28   1         08   13   1
01   -    0         09   17   1
02   33   1         0A   09   1
03   02   1         0B   -    0
04   -    0         0C   -    0
05   16   1         0D   2D   1
06   -    0         0E   11   1
07   -    0         0F   0D   1

39 Simple Memory System TLB
16 entries, 4-way set associative.
[Figure: VA bits 13-8 form the TLB tag (TLBT), bits 7-6 the TLB index (TLBI), and bits 5-0 the VPO.]

Set  Tag  PPN  Valid   Tag  PPN  Valid   Tag  PPN  Valid   Tag  PPN  Valid
0    03   -    0       09   0D   1       00   -    0       07   02   1
1    03   2D   1       02   -    0       04   -    0       0A   -    0
2    02   -    0       08   -    0       06   -    0       03   -    0
3    07   -    0       03   0D   1       0A   34   1       02   -    0

40 Simple Memory System Cache
16 lines, 4-byte block size, physically addressed, direct mapped.
[Figure: PA bits 11-6 form the cache tag (CT), bits 5-2 the cache index (CI), and bits 1-0 the cache offset (CO).]

Idx  Tag  Valid  B0 B1 B2 B3     Idx  Tag  Valid  B0 B1 B2 B3
0    19   1      99 11 23 11     8    24   1      3A 00 51 89
1    15   0      -  -  -  -      9    2D   0      -  -  -  -
2    1B   1      00 02 04 08     A    2D   1      93 15 DA 3B
3    36   0      -  -  -  -      B    0B   0      -  -  -  -
4    32   1      43 6D 8F 09     C    12   0      -  -  -  -
5    0D   1      36 72 F0 1D     D    16   1      04 96 34 15
6    31   0      -  -  -  -      E    13   1      83 77 1B D3
7    16   1      11 C2 DF 03     F    14   0      -  -  -  -

41 Address Translation Example #1
Virtual Address: 0x03D4 = 0b 00 1111 0101 0100
  VPN: 0x0F   TLBI: 0x3   TLBT: 0x03   TLB Hit? Y   Page Fault? N   PPN: 0x0D
Physical Address: 0x354 = 0b 0011 0101 0100
  CO: 0x0   CI: 0x5   CT: 0x0D   Cache Hit? Y   Byte: 0x36

42 Address Translation Example #2
Virtual Address: 0x0B8F = 0b 00 1011 1000 1111
  VPN: 0x2E   TLBI: 0x2   TLBT: 0x0B   TLB Hit? N   Page Fault? Y   PPN: TBD
Physical Address: unknown until the page fault is handled
  CO: -   CI: -   CT: -   Cache Hit? -   Byte: -

43 Address Translation Example #3
Virtual Address: 0x0020 = 0b 00 0000 0010 0000
  VPN: 0x00   TLBI: 0x0   TLBT: 0x00   TLB Hit? N   Page Fault? N   PPN: 0x28
Physical Address: 0xA20 = 0b 1010 0010 0000
  CO: 0x0   CI: 0x8   CT: 0x28   Cache Hit? N   Byte: returned from memory (Mem)

44 Dynamic Memory Allocation

45 Dynamic Memory Allocation
Programmers use dynamic memory allocators (such as malloc) to acquire VM at run time, for data structures whose size is known only at runtime.
Dynamic memory allocators manage an area of process virtual memory known as the heap.
[Figure: the application calls into the dynamic memory allocator, which manages the heap. In the process image, the heap (grown via malloc up to the brk pointer) lies above the uninitialized data (.bss), initialized data (.data), and program text (.text), below the user stack.]

46 Dynamic Memory Allocation
Allocator maintains the heap as a collection of variable-sized blocks, which are either allocated or free.
Types of allocators
  Explicit allocator: application allocates and frees space (e.g., malloc and free in C)
  Implicit allocator: application allocates, but does not free space (e.g., garbage collection in Java, ML, and Lisp)
Will discuss simple explicit memory allocation today.

47 The malloc Package
#include <stdlib.h>

void *malloc(size_t size)
  Successful: returns a pointer to a memory block of at least size bytes, (typically) aligned to an 8-byte boundary; if size == 0, returns NULL
  Unsuccessful: returns NULL (0) and sets errno
void free(void *p)
  Returns the block pointed at by p to the pool of available memory
  p must come from a previous call to malloc or realloc
Other functions
  calloc: version of malloc that initializes the allocated block to zero
  realloc: changes the size of a previously allocated block
  sbrk: used internally by allocators to grow or shrink the heap

48 malloc Example

#include <stdio.h>
#include <stdlib.h>

void foo(int n) {
    int i, *p;

    /* Allocate a block of n ints */
    p = (int *) malloc(n * sizeof(int));
    if (p == NULL) {
        perror("malloc");
        exit(0);
    }

    /* Initialize allocated block */
    for (i = 0; i < n; i++)
        p[i] = i;

    /* Return p to the heap */
    free(p);
}

49 Allocation Example
p1 = malloc(4)
p2 = malloc(5)
p3 = malloc(6)
free(p2)
p4 = malloc(2)

50 Constraints
Applications
  Can issue arbitrary sequence of malloc and free requests
  free requests must be to a malloc’d block
Allocators
  Can’t control number or size of allocated blocks
  Must respond immediately to malloc requests (can’t reorder or buffer requests)
  Must allocate blocks from free memory (can only place allocated blocks in free memory)
  Must align blocks so they satisfy all alignment requirements (8-byte alignment for GNU malloc (libc malloc) on Linux boxes)
  Can manipulate and modify only free memory
  Can’t move the allocated blocks once they are malloc’d (compaction is not allowed)

51 Performance Goal: Throughput
Given some sequence of malloc and free requests: R_0, R_1, ..., R_k, ..., R_{n-1}
Goals: maximize throughput and peak memory utilization (these goals are often conflicting)
Throughput: number of completed requests per unit time
Example: 5,000 malloc calls and 5,000 free calls in 10 seconds gives a throughput of 1,000 operations/second

52 Performance Goal: Peak Memory Utilization
Given some sequence of malloc and free requests: R_0, R_1, ..., R_k, ..., R_{n-1}
Def: aggregate payload P_k
  malloc(p) results in a block with a payload of p bytes
  After request R_k has completed, the aggregate payload P_k is the sum of currently allocated payloads
Def: current heap size H_k
  Assume H_k is monotonically nondecreasing, i.e., the heap only grows when the allocator uses sbrk
Def: peak memory utilization after k requests
  U_k = ( max_{i<k} P_i ) / H_k
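A toy bookkeeping sketch of these definitions (names and hooks are mine, not the slides’); a test harness could update these counters around each request.

#include <stddef.h>

static size_t P_k   = 0;   /* aggregate payload after request k */
static size_t P_max = 0;   /* max over i<k of P_i */
static size_t H_k   = 0;   /* current heap size (nondecreasing) */

void on_malloc(size_t payload, size_t heap_size_now) {
    P_k += payload;
    if (P_k > P_max) P_max = P_k;
    if (heap_size_now > H_k) H_k = heap_size_now;  /* heap only grows */
}

void on_free(size_t payload) {
    P_k -= payload;
}

double peak_utilization(void) {                    /* U_k = P_max / H_k */
    return H_k ? (double)P_max / (double)H_k : 0.0;
}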

53 Fragmentation
Poor memory utilization is caused by fragmentation:
  internal fragmentation
  external fragmentation

54 Internal Fragmentation
For a given block, internal fragmentation occurs if the payload is smaller than the block size.
Caused by
  Overhead of maintaining heap data structures
  Padding for alignment purposes
  Explicit policy decisions (e.g., returning a big block to satisfy a small request)
Depends only on the pattern of previous requests; thus, easy to measure.
[Figure: a block consists of internal fragmentation, the payload, and further internal fragmentation.]

55 External Fragmentation
Occurs when there is enough aggregate heap memory, but no single free block is large enough.
Depends on the pattern of future requests; thus, difficult to measure.
Example:
p1 = malloc(4)
p2 = malloc(5)
p3 = malloc(6)
free(p2)
p4 = malloc(6)   Oops! (what would happen now?)

56 Implementation Issues
How do we know how much memory to free given just a pointer?
How do we keep track of the free blocks?
What do we do with the extra space when allocating a structure that is smaller than the free block it is placed in?
How do we pick a block to use for allocation when many might fit?
How do we reinsert a freed block?

57 Knowing How Much to Free
Standard method
  Keep the length of a block in the word preceding the block; this word is often called the header field or header
  Requires an extra word for every allocated block
[Figure: p0 = malloc(4); the word before the payload holds the block size (here 5, including the header); free(p0) reads the size from this header.]

58 Keeping Track of Free Blocks
Method 1: implicit list using length to link all blocks
Method 2: explicit list among the free blocks using pointers
Method 3: segregated free list (different free lists for different size classes)
Method 4: blocks sorted by size (can use a balanced tree, e.g. a red-black tree, with pointers within each free block, and the length used as a key)
[Figure: an implicit list links every block (sizes 5, 4, 6, 2); an explicit list links only the free blocks.]

59 Method 1: Implicit List
For each block we need both size and allocation status; storing this information in two words would be wasteful.
Standard trick
  If blocks are aligned, some low-order address bits are always 0
  Instead of storing an always-0 bit, use it as an allocated/free flag
  When reading the size word, must mask out this bit
Format of allocated and free blocks
  Size word (1 word): block size with low bit a (a = 1: allocated block, a = 0: free block)
  Payload: application data (allocated blocks only)
  Optional padding
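One conventional encoding of the size/allocated word, in the style of the CS:APP reference allocator (a sketch, not code from these slides):

#define PACK(size, alloc)  ((size) | (alloc))        /* size word with allocated bit */
#define GET(p)             (*(unsigned int *)(p))    /* read word at address p */
#define PUT(p, val)        (*(unsigned int *)(p) = (val))
#define GET_SIZE(p)        (GET(p) & ~0x7)           /* mask out the low-order flag bits */
#define GET_ALLOC(p)       (GET(p) & 0x1)            /* low bit: allocated? */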

60 Detailed Implicit Free List Example
[Figure: a heap laid out as an implicit free list, headers labeled size-in-bytes/allocated-bit: an unused word at the start of the heap, then blocks 8/0, 16/1, 32/0, 16/1, and a 0/1 terminator. Blocks are double-word aligned; allocated blocks shaded, free blocks unshaded.]

61 Implicit List: Finding a Free Block
First fit: search list from beginning, choose the first free block that fits
  Can take linear time in the total number of blocks (allocated and free)
  In practice it can cause “splinters” at the beginning of the list
Next fit: like first fit, but search the list starting where the previous search finished
  Should often be faster than first fit: avoids re-scanning unhelpful blocks
  Some research suggests that fragmentation is worse
Best fit: search the list, choose the best free block: fits, with fewest bytes left over
  Keeps fragments small (usually helps fragmentation)
  Will typically run slower than first fit

First-fit search:

p = start;
while ((p < end) &&          // not passed end
       ((*p & 1) ||          // already allocated
        (*p <= len)))        // too small
    p = p + (*p & -2);       // goto next block (word addressed)

62 Implicit List: Allocating in Free Block
Allocating in a free block: splitting
Since the allocated space might be smaller than the free space, we might want to split the block.
Example: with blocks of sizes 4, 4, 6, 2 and p pointing at the size-6 free block, addblock(p, 4) splits it into an allocated block of size 4 and a free block of size 2.

void addblock(ptr p, int len) {
    int newsize = ((len + 1) >> 1) << 1;   // round up to even
    int oldsize = *p & -2;                 // mask out low bit
    *p = newsize | 1;                      // set new length
    if (newsize < oldsize)
        *(p+newsize) = oldsize - newsize;  // set length in remaining
}                                          // part of block

63 Implicit List: Freeing a Block
Simplest implementation: need only clear the “allocated” flag

void free_block(ptr p) { *p = *p & -2; }

But this can lead to “false fragmentation”:
[Figure: free(p) leaves two adjacent free blocks of sizes 4 and 2; a subsequent malloc(5) fails. Oops! There is enough free space, but the allocator won’t be able to find it.]

64 Implicit List: Coalescing
Join (coalesce) with next/previous blocks, if they are free.
Coalescing with the next block:

void free_block(ptr p) {
    *p = *p & -2;        // clear allocated flag
    next = p + *p;       // find next block
    if ((*next & 1) == 0)
        *p = *p + *next; // add to this block if
}                        // not allocated

[Figure: freeing a size-4 block whose successor is a free size-2 block produces a single free block of size 6; the successor’s header is logically gone.]
But how do we coalesce with the previous block?

65 Implicit List: Bidirectional Coalescing
Boundary tags [Knuth73]
  Replicate the size/allocated word at the “bottom” (end) of free blocks
  Allows us to traverse the “list” backwards, but requires extra space
  Important and general technique!
Format of allocated and free blocks
  Header: size | a (a = 1: allocated block, a = 0: free block; size = total block size)
  Payload and padding (application data, allocated blocks only)
  Boundary tag (footer): size | a, a replica of the header
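With the footer in place, both neighbors can be reached in constant time; these navigation macros follow the same CS:APP style (bp points to the payload; again a sketch, building on the PACK/GET macros above):

#define WSIZE 4                                  /* header/footer size in bytes */
#define DSIZE 8                                  /* double-word size */
#define HDRP(bp)      ((char *)(bp) - WSIZE)     /* address of block's header */
#define FTRP(bp)      ((char *)(bp) + GET_SIZE(HDRP(bp)) - DSIZE)  /* footer */
#define NEXT_BLKP(bp) ((char *)(bp) + GET_SIZE(HDRP(bp)))
#define PREV_BLKP(bp) ((char *)(bp) - GET_SIZE(((char *)(bp) - DSIZE)))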

66 Constant Time Coalescing
[Figure: the four coalescing cases for a block being freed: Case 1, both neighbors allocated; Case 2, previous allocated, next free; Case 3, previous free, next allocated; Case 4, both neighbors free.]

67 Constant Time Coalescing (Case 1)
[Figure: neither neighbor is free; the freed block’s header and footer change from n|1 to n|0, and the neighboring blocks are unchanged.]

68 Constant Time Coalescing (Case 2)
[Figure: the previous block (m1) stays allocated; the freed block (n) merges with the free next block (m2) into a single free block with header and footer (n+m2)|0.]

69 Constant Time Coalescing (Case 3)
[Figure: the free previous block (m1) merges with the freed block (n) into a single free block with header and footer (n+m1)|0; the next block (m2) stays allocated.]

70 Constant Time Coalescing (Case 4)
[Figure: both neighbors are free; all three blocks merge into a single free block with header and footer (n+m1+m2)|0.]
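Pulling the four cases together, a coalesce routine in the CS:APP style, built from the macros sketched earlier (each case runs in constant time):

static void *coalesce(void *bp) {
    size_t prev_alloc = GET_ALLOC(FTRP(PREV_BLKP(bp)));
    size_t next_alloc = GET_ALLOC(HDRP(NEXT_BLKP(bp)));
    size_t size = GET_SIZE(HDRP(bp));

    if (prev_alloc && next_alloc) {            /* Case 1: no merge */
        return bp;
    } else if (prev_alloc && !next_alloc) {    /* Case 2: merge with next */
        size += GET_SIZE(HDRP(NEXT_BLKP(bp)));
        PUT(HDRP(bp), PACK(size, 0));
        PUT(FTRP(bp), PACK(size, 0));
    } else if (!prev_alloc && next_alloc) {    /* Case 3: merge with prev */
        size += GET_SIZE(HDRP(PREV_BLKP(bp)));
        PUT(FTRP(bp), PACK(size, 0));
        PUT(HDRP(PREV_BLKP(bp)), PACK(size, 0));
        bp = PREV_BLKP(bp);
    } else {                                   /* Case 4: merge with both */
        size += GET_SIZE(HDRP(PREV_BLKP(bp))) + GET_SIZE(FTRP(NEXT_BLKP(bp)));
        PUT(HDRP(PREV_BLKP(bp)), PACK(size, 0));
        PUT(FTRP(NEXT_BLKP(bp)), PACK(size, 0));
        bp = PREV_BLKP(bp);
    }
    return bp;
}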

71 Disadvantages of Boundary Tags
Internal fragmentation: the replicated footer costs space in every block.
Can it be optimized?
  Which blocks actually need the footer tag?
  What does that mean?

72 Summary of Key Allocator Policies
Placement policy
  First-fit, next-fit, best-fit, etc.
  Trades off lower throughput for less fragmentation
  Interesting observation: segregated free lists (next lecture) approximate a best-fit placement policy without having to search the entire free list
Splitting policy
  When do we go ahead and split free blocks?
  How much internal fragmentation are we willing to tolerate?
Coalescing policy
  Immediate coalescing: coalesce each time free is called
  Deferred coalescing: try to improve performance of free by deferring coalescing until needed. Examples:
    Coalesce as you scan the free list for malloc
    Coalesce when the amount of external fragmentation reaches some threshold

73 Implicit Lists: Summary
Implementation: very simple
Allocate cost: linear time worst case
Free cost: constant time worst case, even with coalescing
Memory usage: will depend on placement policy (first-fit, next-fit, or best-fit)
Not used in practice for malloc/free because of linear-time allocation; used in many special-purpose applications
However, the concepts of splitting and boundary-tag coalescing are general to all allocators

74 Dynamic memory allocation
Explicit free lists
Segregated free lists
Garbage collection

75 Keeping Track of Free Blocks
Method 1: implicit free list using length to link all blocks
Method 2: explicit free list among the free blocks using pointers
Method 3: segregated free list (different free lists for different size classes)
Method 4: blocks sorted by size (can use a balanced tree, e.g. a red-black tree, with pointers within each free block, and the length used as a key)
[Figure: an implicit list links every block (sizes 5, 4, 6, 2); an explicit list links only the free blocks.]

76 Explicit Free Lists
Maintain list(s) of free blocks, not all blocks
  The “next” free block could be anywhere, so we need to store forward/back pointers, not just sizes
  Still need boundary tags for coalescing
  Luckily we track only free blocks, so we can use the payload area
Block formats
  Allocated (as before): header (size | a), payload and padding, footer (size | a)
  Free: header (size | a), next pointer, prev pointer, unused space, footer (size | a)
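One way to picture the free-block layout as a C struct (field names are illustrative, not from the slides; a real allocator would place the header a word before the payload):

#include <stddef.h>

typedef struct free_block {
    size_t header;               /* size | allocated bit, as before */
    struct free_block *next;     /* successor on the free list */
    struct free_block *prev;     /* predecessor on the free list */
    /* ... unused remainder of the payload, then the footer tag */
} free_block_t;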

77 Explicit Free Lists
Logically, the free list is a doubly linked list: A <-> B <-> C.
Physically, blocks can be in any order, so the forward (next) and back (prev) links can point anywhere in the heap.
[Figure: a heap of blocks with free blocks A, B, C linked out of address order by their next and prev pointers.]

78 Allocating From Explicit Free Lists
[Conceptual graphic: before allocation, the free list holds one large free block; after = malloc(...) with splitting, the front part becomes the allocated block and the remainder stays on the free list.]

79 Freeing With Explicit Free Lists
Insertion policy: where in the free list do you put a newly freed block?
LIFO (last-in-first-out) policy
  Insert the freed block at the beginning of the free list
  Pro: simple and constant time
  Con: studies suggest fragmentation is worse than address-ordered
Address-ordered policy
  Insert freed blocks so that free-list blocks are always in address order: addr(prev) < addr(curr) < addr(next)
  Con: requires search
  Pro: studies suggest fragmentation is lower than LIFO
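A minimal sketch of the LIFO insert (free_list is a hypothetical root pointer; it reuses the free_block_t layout sketched earlier):

static free_block_t *free_list = NULL;   /* root of the explicit free list */

static void insert_lifo(free_block_t *bp) {
    bp->next = free_list;                /* old head becomes our successor */
    bp->prev = NULL;
    if (free_list)
        free_list->prev = bp;
    free_list = bp;                      /* constant time, no search */
}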

80 Freeing With a LIFO Policy (Case 1)
[Conceptual graphic: the freed block has allocated neighbors.]
Insert the freed block at the root of the list.

81 Freeing With a LIFO Policy (Case 2)
[Conceptual graphic: the freed block’s predecessor is free.]
Splice out the predecessor block, coalesce both memory blocks, and insert the new block at the root of the list.

82 Freeing With a LIFO Policy (Case 3)
[Conceptual graphic: the freed block’s successor is free.]
Splice out the successor block, coalesce both memory blocks, and insert the new block at the root of the list.

83 Freeing With a LIFO Policy (Case 4)
[Conceptual graphic: both neighbors are free.]
Splice out the predecessor and successor blocks, coalesce all 3 memory blocks, and insert the new block at the root of the list.

84 Explicit List Summary
Comparison to implicit list:
  Allocate is linear time in the number of free blocks instead of all blocks; much faster when most of the memory is full
  Slightly more complicated allocate and free, since blocks need to be spliced in and out of the list
  Some extra space for the links (2 extra words needed for each block). Does this increase internal fragmentation?
Most common use of linked lists is in conjunction with segregated free lists
  Keep multiple linked lists of different size classes, or possibly for different types of objects

85 Keeping Track of Free Blocks
Method 1: implicit list using length to link all blocks
Method 2: explicit list among the free blocks using pointers
Method 3: segregated free list (different free lists for different size classes)
Method 4: blocks sorted by size (can use a balanced tree, e.g. a red-black tree, with pointers within each free block, and the length used as a key)
[Figure: an implicit list links every block (sizes 5, 4, 6, 2); an explicit list links only the free blocks.]

86 Dynamic memory allocators
Explicit free lists
Segregated free lists
Garbage collection

87 Segregated List (Seglist) Allocators
Each size class of blocks has its own free list.
Often have separate classes for each small size; for larger sizes, one class for each power-of-two size.
[Figure: an array of free lists for size classes 1-2, 3, 4, 5-8, ..., 9-inf.]
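Mapping a request size to its class is a small computation; a sketch assuming power-of-two classes where class i covers sizes up to 2^i (the exact class boundaries are an assumption, not the slide’s):

#include <stddef.h>

static int size_class(size_t n) {
    int i = 0;
    size_t cap = 1;
    while (cap < n) {       /* find the smallest power of two >= n */
        cap <<= 1;
        i++;
    }
    return i;               /* index into the array of free lists */
}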

88 Seglist Allocator
Given an array of free lists, each one for some size class.
To allocate a block of size n:
  Search the appropriate free list for a block of size m > n
  If an appropriate block is found: split the block and place the fragment on the appropriate list (optional)
  If no block is found, try the next larger class; repeat until a block is found
If no block is found:
  Request additional heap memory from the OS (using sbrk())
  Allocate a block of n bytes from this new memory
  Place the remainder as a single free block in the largest size class

89 Seglist Allocator (cont.)
To free a block: coalesce and place on the appropriate list (optional)
Advantages of seglist allocators
  Higher throughput: log time for power-of-two size classes
  Better memory utilization: first-fit search of a segregated free list approximates a best-fit search of the entire heap
  Extreme case: giving each block its own size class is equivalent to best-fit

90 More Info on Allocators
D. Knuth, “The Art of Computer Programming”, 2nd edition, Addison Wesley, 1973
  The classic reference on dynamic storage allocation
Wilson et al., “Dynamic Storage Allocation: A Survey and Critical Review”, Proc. Int’l Workshop on Memory Management, Kinross, Scotland, Sept. 1995
  Comprehensive survey

91 Dynamic memory allocators
Explicit free lists
Segregated free lists
Garbage collection

92 Implicit Memory Management: Garbage Collection
Garbage collection: automatic reclamation of heap-allocated storage; the application never has to free.
Common in functional languages, scripting languages, and modern object-oriented languages: Lisp, ML, Java, Python, Mathematica.
Variants (“conservative” garbage collectors) exist for C and C++; however, they cannot necessarily collect all garbage.

void foo() {
    int *p = malloc(128);
    return; /* p block is now garbage */
}

93 Garbage Collection
How does the memory manager know when memory can be freed?
  In general we cannot know what is going to be used in the future, since it depends on conditionals
  But we can tell that certain blocks cannot be used if there are no pointers to them
Must make certain assumptions about pointers
  Memory manager can distinguish pointers from non-pointers
  All pointers point to the start of a block
  Cannot hide pointers (e.g., by coercing them to an int, and then back again)

94 Classical GC Algorithms
Mark-and-sweep collection (McCarthy, 1960): does not move blocks (unless you also “compact”)
Reference counting (Collins, 1960): does not move blocks (not discussed)
Copying collection (Minsky, 1963): moves blocks (not discussed)
Generational collectors (Lieberman and Hewitt, 1983): collection based on lifetimes
  Most allocations become garbage very soon, so focus reclamation work on zones of memory recently allocated
For more information: Jones and Lins, “Garbage Collection: Algorithms for Automatic Dynamic Memory Management”, John Wiley & Sons, 1996.

95 Memory as a Graph
We view memory as a directed graph
  Each block is a node in the graph
  Each pointer is an edge in the graph
  Locations not in the heap that contain pointers into the heap are called root nodes (e.g. registers, locations on the stack, global variables)
[Figure: root nodes outside the heap point into a graph of heap nodes; nodes reachable from the roots are live, the rest are not reachable (garbage).]
A node (block) is reachable if there is a path from some root to that node.
Non-reachable nodes are garbage (they cannot be needed by the application).

96 Mark and Sweep Collecting
Can build on top of the malloc/free package
  Allocate using malloc until you “run out of space”
When out of space:
  Use an extra mark bit in the head of each block
  Mark: start at roots and set the mark bit on each reachable block
  Sweep: scan all blocks and free the blocks that are not marked
[Figure: before mark, the root points into a chain of blocks (arrows here denote memory refs, not free list ptrs); after mark, reachable blocks have their mark bit set; after sweep, the unmarked blocks are freed.]
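In pseudocode form, following the style of the classic mark and sweep routines (is_ptr, the mark-bit helpers, and the block iterator are assumed interfaces declared but not defined here, so this is a sketch rather than runnable collector code):

typedef void *ptr;

/* assumed helper interface (declarations only) */
int  is_ptr(ptr p);            /* does p point into the heap? */
int  blockMarked(ptr p);       /* is the mark bit set? */
int  blockAllocated(ptr p);    /* is the allocated bit set? */
void markBlock(ptr p);
void unmarkBlock(ptr p);
int  length(ptr p);            /* number of words in the block */
ptr  nextBlock(ptr p);
void free_heap_block(ptr p);

void mark(ptr p) {
    if (!is_ptr(p)) return;              /* not a heap pointer */
    if (blockMarked(p)) return;          /* already visited */
    markBlock(p);                        /* reachable: set mark bit */
    for (int i = 0; i < length(p); i++)  /* treat each word of the block */
        mark(((ptr *)p)[i]);             /* as a potential child pointer */
}

void sweep(ptr b, ptr end) {
    while (b < end) {
        if (blockMarked(b))
            unmarkBlock(b);              /* keep; reset for next collection */
        else if (blockAllocated(b))
            free_heap_block(b);          /* allocated but unreachable */
        b = nextBlock(b);
    }
}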

