Chapter 12 Memory Management


Objectives Discuss the following topics: Memory Management The Sequential-Fit Methods The Nonsequential-Fit Methods Garbage Collection Case Study: An In-Place Garbage Collector

Memory Management The heap is the region of main memory from which portions of memory are dynamically allocated at a program's request. The memory manager is responsible for: maintaining the free memory blocks, assigning specific memory blocks to user programs, and reclaiming unneeded blocks to return them to the memory pool.

Memory Management (continued) The memory manager is also responsible for: scheduling access to shared data, moving code and data between main and secondary memory, and protecting the memory of one process from another. External fragmentation is the presence of wasted space between allocated segments of memory.

Memory Management (continued) Internal fragmentation amounts to the presence of unused memory inside the segments

The Sequential-Fit Methods In the sequential-fit methods, all available memory blocks are linked, and the list is searched to find a block at least as large as the requested size. The first-fit algorithm allocates the first block of memory large enough to meet the request. The best-fit algorithm allocates the block that is closest in size to the request.

The Sequential-Fit Methods (continued) The worst-fit method finds the largest block on the list, so that the remaining part is large enough to be used in later requests. The next-fit method allocates the next available block that is sufficiently large, resuming the search from where the previous search ended. The way the blocks are organized on the list determines how fast the search for an available block succeeds or fails.

The Sequential-Fit Methods (continued) Figure 12-1 Memory allocation using sequential-fit methods

The Nonsequential-Fit Methods An adaptive exact-fit technique dynamically creates and adjusts storage block lists that fit the requests exactly. In adaptive exact-fit, a size-list is maintained of block lists, one list per block size returned to the memory pool during the last T allocations. The exact-fit method disposes of an entire block list if no request for a block from that list arrives within the last T allocations.

The Nonsequential-Fit Methods (continued)
allocate(reqSize)
    t++;
    if a block list bl with reqSize blocks is on sizeList
        lastref(bl) = t;
        b = head of blocks(bl);
        if b was the only block accessible from bl
            detach bl from sizeList;
    else b = search-memory-for-a-block-of(reqSize);
    dispose of all block lists bl on sizeList for which t - lastref(bl) >= T;
    return b;

The Nonsequential-Fit Methods (continued) Figure 12-2 An example configuration of a size-list and heap created by the adaptive exact-fit method

Buddy Systems Buddy systems are nonsequential memory management methods that allocate memory in blocks that are split into two buddies and merged whenever possible. In a buddy system, two buddies are never both free: when a block and its buddy are both released, they are immediately coalesced into one larger block. Thus a free block either has a buddy currently used by the program or has none.

Buddy Systems (continued) In the binary buddy system each block of memory (except the entire memory) is coupled with a buddy of the same size that participates with the block in reserving and returning chunks of memory

Buddy Systems (continued) Figure 12-3 Block structure in the binary buddy system

Buddy Systems (continued) Figure 12-4 Reserving three blocks of memory using the binary buddy system

Buddy Systems (continued) Figure 12-4 Reserving three blocks of memory using the binary buddy system (continued)

Buddy Systems (continued) Figure 12-4 Reserving three blocks of memory using the binary buddy system (continued)

Buddy Systems (continued) Figure 12-5 (a) Returning a block to the pool of blocks, (b) resulting in coalescing one block with its buddy

Buddy Systems (continued) Figure 12-5 (c) Returning another block leads to two coalescings (continued)

Buddy Systems (continued)
avail[i] = -1 for i = 0, . . . , m-1;
avail[m] = first address in memory;
reserveFib(reqSize)
    availSize = the position of the first Fibonacci number greater than reqSize
        for which avail[availSize] > -1;

Buddy Systems (continued) Figure 12-6 (a) Splitting a block of size Fib(k) into two buddies using the buddy-bit and the memory-bit

Buddy Systems (continued) Figure 12-6 (b) Coalescing two buddies utilizing information stored in buddy- and memory-bits

Buddy Systems (continued) A weighted buddy system decreases the amount of internal fragmentation by allowing more block sizes than the binary system does. A dual buddy system takes a middle course between the binary system and the weighted system.

Garbage Collection A garbage collector is automatically invoked to collect unused memory cells when the program is idle or when memory resources are exhausted References to all linked structures currently utilized by the program are stored in a root set, which contains all root pointers

Garbage Collection (continued) There are two phases of garbage collection: The marking phase — to identify all currently used cells The reclamation phase — when all unmarked cells are returned to the memory pool; this phase can also include heap compaction

Mark-and-Sweep Memory cells currently in use are marked by traversing each linked structure; then the memory is swept to glean the unmarked (garbage) cells and return them to the memory pool.
marking(node)
    if node is not marked
        mark node;
        if node is not an atom
            marking(head(node));
            marking(tail(node));

Mark-and-Sweep (continued) Figure 12-7 An example of execution of the Schorr and Waite algorithm for marking used memory cells

Mark-and-Sweep (continued) Figure 12-7 An example of execution of the Schorr and Waite algorithm for marking used memory cells (continued)

Mark-and-Sweep (continued) Figure 12-7 An example of execution of the Schorr and Waite algorithm for marking used memory cells (continued)

Mark-and-Sweep (continued) Figure 12-7 An example of execution of the Schorr and Waite algorithm for marking used memory cells (continued)

Mark-and-Sweep (continued) Figure 12-7 An example of execution of the Schorr and Waite algorithm for marking used memory cells (continued)

Mark-and-Sweep (continued) Figure 12-7 An example of execution of the Schorr and Waite algorithm for marking used memory cells (continued)

Space Reclamation
sweep()
    for each location from the last to the first
        if mark(location) is 0
            insert location in front of availList;
        else set mark(location) to 0;

Compaction Figure 12-8 An example of heap compaction

Copying Methods The stop-and-copy algorithm divides the heap into two semispaces, only one of which is used at a time for allocating memory. Lists can be copied using a breadth-first traversal, which combines two tasks: copying lists and updating references. This algorithm requires no marking phase and no stack.

Copying Methods (continued) Figure 12-9 (a) A situation in the heap before copying the contents of cells in use from semispace1 to semispace2

Copying Methods (continued) Figure 12-9 (b) the situation right after copying; all used cells are packed contiguously (continued)

Incremental Garbage Collection Incremental garbage collectors, whose execution is interleaved with the execution of the program, are desirable for a fast response to a program. After the collector partially processes some lists, the program can change, or mutate, those lists; for this reason, the program is called the mutator. The Baker algorithm uses two semispaces, called fromspace and tospace, which are both active to ensure proper cooperation between the mutator and the collector.

Incremental Garbage Collection (continued) Figure 12-10 A situation in memory (a) before and (b) after allocating a cell with head and tail references referring to cells P and Q in tospace according to the Baker algorithm

Incremental Garbage Collection (continued) Figure 12-10 A situation in memory (a) before and (b) after allocating a cell with head and tail references referring to cells P and Q in tospace according to the Baker algorithm (continued)

Incremental Garbage Collection (continued) The mutator is preceded by a read barrier, which prevents the mutator from acquiring references to cells in fromspace. The generational garbage collection technique divides all allocated cells into at least two generations and focuses its attention on the youngest generation, which generates most of the garbage.

Incremental Garbage Collection (continued) Figure 12-11 Changes performed by the Baker algorithm when addresses P and Q refer to cells in fromspace, P to an already copied cell, Q to a cell still in fromspace

Incremental Garbage Collection (continued) Figure 12-11 Changes performed by the Baker algorithm when addresses P and Q refer to cells in fromspace, P to an already copied cell, Q to a cell still in fromspace (continued)

Incremental Garbage Collection (continued) Figure 12-12 A situation in three regions (a) before and (b) after copying reachable cells from region ri to region r’i in the Lieberman-Hewitt technique of generational garbage collection

Incremental Garbage Collection (continued) Figure 12-12 A situation in three regions (a) before and (b) after copying reachable cells from region ri to region r’i in the Lieberman- Hewitt technique of generational garbage collection (continued)

Noncopying Methods
createRootPtr(p,q,r) // Lisp's cons
    if collector is in the marking phase
        mark up to k1 cells;
    else if collector is in the sweeping phase
        sweep up to k2 cells;
    else if the number of cells on availList is low
        push all root pointers onto collector's stack st;
    p = first cell on availList;
    head(p) = q;
    tail(p) = r;
    mark p if it is in the unswept portion of heap;

Noncopying Methods (continued) Figure 12-13 An inconsistency that results if, in Yuasa’s noncopying incremental garbage collector, a stack is not used to record cells possibly unprocessed during the marking phase

Noncopying Methods (continued) Figure 12-13 An inconsistency that results if, in Yuasa’s noncopying incremental garbage collector, a stack is not used to record cells possibly unprocessed during the marking phase (continued)

Noncopying Methods (continued) Figure 12-14 Memory changes during the sweeping phase using Yuasa’s method

Noncopying Methods (continued) Figure 12-14 Memory changes during the sweeping phase using Yuasa’s method (continued)

Case Study: An In-Place Garbage Collector
roots: 1 5 3
(0: -1 2 false false 0 0)
(1: 5 4 false false 1 4)
(2: 0 -1 false false 2 2)
(3: 4 -1 true false 130)
(4: 1 3 true false 129)
(5: -1 1 false false 5 1)
freeCells: (0 0 0) (2 2 2)
nonFreeCells: (5 5 1) (1 1 4) (4 129) (3 130)

Case Study: An In-Place Garbage Collector (continued) Figure 12-15 An example of a situation on the heap

Case Study: An In-Place Garbage Collector (continued) Figure 12-15 An example of a situation on the heap (continued)

Case Study: An In-Place Garbage Collector (continued) Figure 12-16 Implementation of an in-place garbage collector

Case Study: An In-Place Garbage Collector (continued) Figure 12-16 Implementation of an in-place garbage collector (continued)

Case Study: An In-Place Garbage Collector (continued) Figure 12-16 Implementation of an in-place garbage collector (continued)

Case Study: An In-Place Garbage Collector (continued) Figure 12-16 Implementation of an in-place garbage collector (continued)

Case Study: An In-Place Garbage Collector (continued) Figure 12-16 Implementation of an in-place garbage collector (continued)

Case Study: An In-Place Garbage Collector (continued) Figure 12-16 Implementation of an in-place garbage collector (continued)

Case Study: An In-Place Garbage Collector (continued) Figure 12-16 Implementation of an in-place garbage collector (continued)

Case Study: An In-Place Garbage Collector (continued) Figure 12-16 Implementation of an in-place garbage collector (continued)

Case Study: An In-Place Garbage Collector (continued) Figure 12-16 Implementation of an in-place garbage collector (continued)

Case Study: An In-Place Garbage Collector (continued) Figure 12-16 Implementation of an in-place garbage collector (continued)

Case Study: An In-Place Garbage Collector (continued) Figure 12-16 Implementation of an in-place garbage collector (continued)

Case Study: An In-Place Garbage Collector (continued) Figure 12-16 Implementation of an in-place garbage collector (continued)

Case Study: An In-Place Garbage Collector (continued) Figure 12-16 Implementation of an in-place garbage collector (continued)

Case Study: An In-Place Garbage Collector (continued) Figure 12-16 Implementation of an in-place garbage collector (continued)

Case Study: An In-Place Garbage Collector (continued) Figure 12-16 Implementation of an in-place garbage collector (continued)

Case Study: An In-Place Garbage Collector (continued) Figure 12-16 Implementation of an in-place garbage collector (continued)

Summary The heap is the region of main memory from which portions of memory are dynamically allocated upon request of a program External fragmentation amounts to the presence of wasted space between allocated segments of memory Internal fragmentation amounts to the presence of unused memory inside the segments

Summary (continued) In the sequential-fit methods, all available memory blocks are linked, and the list is searched to find a block at least as large as the requested size. An adaptive exact-fit technique dynamically creates and adjusts storage block lists that fit the requests exactly. Buddy systems are nonsequential memory management methods that allocate memory in blocks that are split into two buddies and merged whenever possible.

Summary (continued) A garbage collector is automatically invoked to collect unused memory cells when the program is idle or when memory resources are exhausted. The stop-and-copy algorithm divides the heap into two semispaces, only one of which is used at a time for allocating memory. Incremental garbage collectors, whose execution is interleaved with the execution of the program, are desirable for a fast response to a program.