On the limits of partial compaction Anna Bendersky & Erez Petrank Technion.

Fragmentation When a program allocates and de-allocates objects, holes appear in the heap. These holes are called “fragmentation”, and as a result:
– Large objects cannot be allocated (even after GC),
– The heap gets larger and locality deteriorates,
– Garbage collection work becomes harder,
– Allocation gets complicated.
The amount of fragmentation is hard to define or measure because it depends on future allocations.
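The effect can be seen in a few lines of code. Below is a minimal sketch (my own toy bitmap heap, not from the talk): freeing every other 1-byte object leaves half the heap free, yet no 2-byte object can be placed.

```python
# Toy heap of 8 byte-cells: True = occupied, False = free.
HEAP_SIZE = 8
heap = [True] * HEAP_SIZE          # allocate 8 one-byte objects
for i in range(0, HEAP_SIZE, 2):   # free every other one
    heap[i] = False

free_bytes = heap.count(False)

def can_allocate(heap, size):
    """First-fit scan for `size` contiguous free bytes."""
    run = 0
    for used in heap:
        run = 0 if used else run + 1
        if run == size:
            return True
    return False

print(free_bytes)                  # 4 bytes are free in total...
print(can_allocate(heap, 2))       # ...but no 2-byte hole exists: False
```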

How Bad can Fragmentation Be? Consider a game:
– The program tries to consume as much space as possible, but:
– The program does not keep more than M bytes of live space at any point in time.
– The allocator tries to satisfy the program’s demands within a given space.
– How much space may the allocator need to satisfy the requests in the worst case?
[Robson 1971, 1974]: There exists a program that will make any allocator use ½·M·log(n) space, where n is the size of the largest object.
[Robson 1971, 1974]: There is an allocator that can do with ½·M·log(n) space.
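For concreteness (the sample values M=48 and n=16 are my own, not from the talk), Robson's bound can be evaluated directly:

```python
import math

def robson_bound(M, n):
    """Robson's tight bound: (1/2) * M * log2(n) bytes may be required."""
    return 0.5 * M * math.log2(n)

# 48 bytes of live space, largest object 16 bytes:
print(robson_bound(48, 16))   # 0.5 * 48 * 4 = 96.0
```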

Compaction Kills Fragmentation Compaction moves all objects to the beginning of the heap (and updates the references). A memory manager that applies compaction after each deletion never needs more than M bytes. But compaction is costly! A common solution: partial compaction.
– “Once in a while”, compact “some of the objects”.
– (Typically, during GC, find sparse pages and evacuate their objects.)
Theoretical bounds have never been studied. Our focus: the effectiveness of partial compaction.

Setting a Limit on Compaction How do we measure the amount of (partial) compaction? Compaction ratio 1/c: after the program allocates B bytes, it is allowed to move B/c bytes. Now we can ask: how much fragmentation still exists when the partial compaction is limited by a budget 1/c?

Theorem 1 There exists a program such that, for any allocator, the heap space required is at least: 1/10 · M · min{ c, log(n)/log(c) } Recall: 1/c is the compaction ratio, M is the overall space alive at any point in time, and n is the size of the largest object.

Theorem 1 Digest If a lot of compaction is allowed (a small c), then the program needs space M·c/10. If little compaction is allowed (a large c), then the program needs space M·log(n) / ( 10·log(c) ). Recall: 1/c is the compaction ratio, M is the overall space alive at any point in time, and n is the size of the largest object.
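The digest can be checked numerically; the function name and the sample parameters below are my own:

```python
import math

def theorem1_lower_bound(M, n, c):
    """Lower bound of Theorem 1: (1/10) * M * min(c, log2(n)/log2(c))."""
    return M * min(c, math.log2(n) / math.log2(c)) / 10

# Small c (much compaction allowed): the min is c, so the bound ~ M*c/10.
print(theorem1_lower_bound(100, 2**20, 4))     # min(4, 20/2) = 4  -> 40.0
# Large c (little compaction): the min is log(n)/log(c).
print(theorem1_lower_bound(100, 2**20, 1024))  # min(1024, 20/10) = 2 -> 20.0
```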

What Can a Memory Manager Achieve? Recall Theorem 1: there exists a program such that, for any allocator, the heap space required is at least: 1/10 · M · min{ c, log(n)/log(c) } Theorem 2 (Matching upper bound): There exists an allocator that can do with M · min{ c+1, ½·log(n) } Parameters: 1/c is the compaction ratio, M is the overall space alive at any point in time, n is the size of the largest object.

Upper Bound Proof Idea There exists a memory manager that can serve any allocation and de-allocation sequence in space M · min( c+1, ½·log(n) ). Idea when c is small (a lot of compaction is allowed): allocate using first-fit until M·(c+1) space is used, and then compact the entire heap. Idea when c is large (little compaction is allowed): use Robson’s allocator without compacting at all.
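The statement (not the allocator itself) can be sketched as a small calculator that also picks which of the two strategies from the proof idea applies; names and sample values are my own:

```python
import math

def theorem2_upper_bound(M, n, c):
    """Theorem 2: some allocator needs at most M * min(c + 1, 0.5*log2(n))."""
    return M * min(c + 1, 0.5 * math.log2(n))

def strategy(n, c):
    """Which allocator the proof idea uses for these parameters."""
    if c + 1 <= 0.5 * math.log2(n):
        return "first-fit + full compaction"   # c is small: compaction is cheap
    return "Robson's allocator, no compaction"  # c is large: compaction too costly

print(theorem2_upper_bound(48, 2**16, 4))  # min(5, 8) = 5 -> 240
print(strategy(2**16, 4))
```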

Proving the Lower Bound Provide a program that behaves “terribly”: show that it consumes a large space overhead against any allocator. Let’s start with Robson’s setting, where the allocator cannot move objects at all.
– The bad program is provably bad for any allocator.
– (Even if the allocator is designed specifically to handle this program only…)

Robson’s “Bad” Program (Simplified) Allocate objects in phases; phase i allocates objects of size 2^i. Remove objects selectively so that future allocations cannot reuse the freed space.
For (i=0; i<=log(n); ++i)
– Request allocations of objects of size 2^i (as many as possible).
– Delete as many objects as possible so that an object of size 2^(i+1) cannot be placed in the freed spaces.

Bad Program Against First Fit Assume max live space M=48.
Phase 0: allocate 48 1-byte objects. Then delete every other object, so that 2-byte objects cannot be placed: 24 bytes are free, but only in 1-byte holes.
Phase 1: allocate 12 2-byte objects (the heap must grow). Then delete so that 4-byte objects cannot be placed.
Phase 2: allocate 6 4-byte objects.
[Heap diagrams omitted: after each deletion, half of M is free, but no hole fits the next object size.]
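The walk-through above can be simulated. The code below is my own toy first-fit model, not the paper's construction, but it reproduces the first two phases: the heap grows from 48 to 72 bytes while live space never exceeds M=48.

```python
M = 48
heap = []            # heap[i] = id of the object occupying byte i, or None

def first_fit(size):
    """Leftmost run of `size` free bytes; extend the heap if none exists."""
    run, start = 0, 0
    for i, cell in enumerate(heap):
        if cell is None:
            if run == 0:
                start = i
            run += 1
            if run == size:
                return start
        else:
            run = 0
    heap.extend([None] * size)
    return len(heap) - size

next_id = 0
def alloc(size):
    global next_id
    pos = first_fit(size)
    for i in range(pos, pos + size):
        heap[i] = next_id
    next_id += 1

# Phase 0: 48 one-byte objects, then free every other one.
for _ in range(M):
    alloc(1)
for i in range(0, M, 2):
    heap[i] = None       # only 1-byte holes remain: no 2-byte object fits

# Phase 1: 12 two-byte objects (live space returns to 48 = M).
for _ in range(12):
    alloc(2)

live = sum(1 for cell in heap if cell is not None)
print(len(heap), live)   # the heap has grown to 72 bytes for 48 live bytes
```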

First Fit Example -- Observations In each phase (after the first), we allocate ½M bytes, and space reuse is not possible. We have log(n) phases; thus, ½·M·log(n) space must be used. To be accurate (because we are being videotaped):
– The proof for a general allocator is more complex.
– This bad program only obtains (4/13)·M·log(n) for a general allocator; the actual bad program is more complex.

Is This Program Bad Also When Partial Compaction is Allowed? Observation:
– Small objects are surrounded by large gaps.
– The memory manager could move a few of them and make room for future allocations.
Idea: a bad program in the presence of partial compaction monitors the density of objects in all areas.
Guideline for the adversarial program: remove as much space as possible, but maintain a minimal “density”!

Simplifications for the Presentation Objects are aligned. The compaction budget c is a power of 2. Aligned allocation: an object of size 2^i must be placed at a location k·2^i, for some integer k.

The Adversarial Program
For (i=0; i<=log(n); ++i)
– Request allocations of as many objects as possible of size 2^i, not exceeding M live bytes.
– Partition the memory into consecutive aligned areas of size 2^(i+1).
– Delete as many objects as possible, so that each area remains at least 2/c full.
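The deletion step can be sketched as a helper (a hypothetical name of my own) that computes how many bytes an area can give up while its density stays at least 2/c:

```python
import math

def deletions_per_area(area_size, live_bytes, c):
    """Bytes the adversarial program may free from one aligned area while
    keeping the area's density at least 2/c."""
    min_live = math.ceil(2 * area_size / c)   # density floor for this area
    return max(0, live_bytes - min_live)

# An 8-byte area, fully live, with budget c = 4: density must stay >= 2/4,
# i.e. at least 4 live bytes, so up to 4 bytes may be freed.
print(deletions_per_area(8, 8, 4))   # 4
```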

An Execution Example Assume compaction budget c=4.
i=0: allocate M=48 objects of size 1. Compaction quota: 48/4 = 12.
i=1: the memory manager does not compact; the deletion step leaves 1-byte holes.
i=1: allocate 11 objects of size 2. Compaction quota: 12 + 22/4 = 17.5.
The memory manager compacts, moving 8 bytes. Compaction quota: 17.5 − 8 = 9.5.
i=1: deletion step; the lowest density kept is 2/c = 1/2.
i=2: allocate 5 objects of size 4.
[Heap diagrams omitted.]
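The quota bookkeeping in the example is plain B/c accounting (the quota grows by 1/c per allocated byte and shrinks by one per moved byte); the 8 bytes moved in the compaction step are inferred from the quota values shown:

```python
c = 4
quota = 48 / c        # i=0: 48 bytes allocated        -> quota 12.0
quota += 22 / c       # i=1: 11 two-byte objects (22B) -> quota 17.5
quota -= 8            # memory manager moves 8 bytes   -> quota 9.5
print(quota)          # 9.5
```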

Bad Program Intuition The goal is to minimize reuse of space. If we left an area with density only 1/c, then the memory manager could:
– move the allocated space out (spending budget area-size/c),
– and then allocate in the freed space (reclaiming a full area for only area-size/c of budget).
Therefore, we maintain density 2/c in each area.

Proof Skeleton The following holds for any memory manager.
Fact: space used ≥ space allocated − space reused.
Lemma 1: space reused < compaction · c/2.
By definition: compaction < ( space allocated ) / c.
Conclusion: space used ≥ space allocated − compaction · c/2 > space allocated · ( 1 − ½ ).
Lemma 2: either the allocation is sparse (and a lot of space is thus used), or a lot of space is allocated during the run. And the lower bound follows.
The full proof works with non-aligned objects as well...
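Written out, the chain of inequalities in the skeleton is:

```latex
\begin{aligned}
\text{used} &\ge \text{allocated} - \text{reused}\\
\text{reused} &< \text{compaction}\cdot\frac{c}{2}
  \;<\; \frac{\text{allocated}}{c}\cdot\frac{c}{2}
  \;=\; \frac{\text{allocated}}{2}\\
\Rightarrow\quad \text{used} &> \text{allocated}\cdot\Bigl(1-\tfrac{1}{2}\Bigr)
  \;=\; \frac{\text{allocated}}{2}
\end{aligned}
```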

The Worst-Case for Specific Systems Real memory managers use specific allocation techniques, and the space overhead for a specific allocator may be larger.
– Until now, we considered a program that is bad for all allocators.
A malicious program can take advantage of knowing the allocation technique. One popular allocation method is the segregated free list.

Block-Oriented Segregated Free List Segregated free list: keep an array of free lists for different object sizes (say, powers of 2). Each entry has an associated range of sizes and points to a free list of chunks with sizes in that range. Block-oriented: partition the heap into blocks (typically of page size); each block holds only objects of the same size.

Segregated Free List Each object is allocated in a block matching its size. The first free chunk in the list is typically used. If there is no free chunk, a new block is allocated and split into chunks.
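A minimal sketch of such an allocator (my own simplification, not the paper's model: chunk addresses are abstract ids, and the block size is a toy value rather than a page):

```python
BLOCK = 16   # block size in bytes (real systems use a page, e.g. 4096)

class SegregatedFreeList:
    def __init__(self):
        self.free = {}        # size class -> list of free chunk ids
        self.blocks = 0       # how many blocks were carved so far
        self.next_chunk = 0   # next fresh chunk id

    def size_class(self, size):
        """Round a request up to the next power of two."""
        c = 1
        while c < size:
            c *= 2
        return c

    def alloc(self, size):
        c = self.size_class(size)
        if not self.free.get(c):
            # No free chunk: carve a new block into BLOCK // c chunks.
            self.blocks += 1
            self.free[c] = [self.next_chunk + i for i in range(BLOCK // c)]
            self.next_chunk += BLOCK // c
        return self.free[c].pop(0)   # first free chunk in the list

sfl = SegregatedFreeList()
a = sfl.alloc(3)     # rounds up to class 4; carves one block into 4 chunks
b = sfl.alloc(4)     # same class: reuses the same block
print(sfl.blocks)    # 1
```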

What’s the Worst Space Consumption? No specific allocators were investigated in previous work (except for obvious observations). Even with no compaction: what’s the worst space usage for a segregated free list allocator?

Lower Bounds Differ by the Possible Sizes For all possible sizes:
– With no compaction: space used ≥ 4/5 · M · √n
– With partial compaction: space used ≥ ⅙ · M · min(√n, c/2)
For exponentially increasing sizes (1, 2, 4, 8, …, n):
– With no compaction: space used ≥ M · log(n)
– With partial compaction: space used ≥ ¼ · M · min(log(n), c/2)
(Read as: there exists a bad program that will make the segregated free list allocator use at least this much space.)

The Adversarial Program for SFL When no compaction is allowed:
For (i=0; i<k; ++i) // k is the number of different sizes
– Allocate as many objects as possible of size s_i.
– Delete as many objects as possible while leaving a single object in each block.
In the presence of partial compaction:
For (i=0; i<k; ++i)
– Allocate as many objects as possible of size s_i.
– Delete as many objects as possible while leaving at least 2b/c bytes in each block. // b is the block size
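The no-compaction variant can be sketched as pure accounting (the block size b, M, and the size list below are my own toy values): every block retains one object, so no block is ever reused, and the heap footprint far exceeds the live space.

```python
b, M = 16, 64                 # block size, max live space
sizes = [1, 2, 4, 8]          # exponentially increasing object sizes
pinned_blocks, live = 0, 0
for s in sizes:
    new_blocks = (M - live) // b   # fill whole fresh blocks with size-s objects
    live += new_blocks * s         # keep a single size-s object per block
    pinned_blocks += new_blocks    # a pinned block can never be reused

print(pinned_blocks * b, live)     # 192 bytes of heap for only 38 live bytes
```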

Related Work Theoretical works:
– Robson [1971, 1974]
– Luby-Naor-Orda [1994, 1996]
Various memory managers employ partial compaction, for example:
– Ben-Yitzhak et al. [2003]
– The Metronome by Bacon et al. [2003]
– The Pauseless collector by Click et al. [2005]
– Concurrent real-time garbage collectors by Pizlo et al. [2007, 2008]

Conclusion Partial compaction is used to ameliorate the pauses imposed by full compaction. We studied the efficacy of partial compaction in reducing fragmentation, with compaction budgeted as a fraction of the allocated space. We have shown a lower bound on fragmentation for any given compaction ratio, and a specific lower bound for segregated free lists.