Dynamic Storage Allocation
Bradley Herrup
CS 297 Security and Programming Languages

Storing Information
- A program's information needs to be stored somewhere; there are two main areas of storage.
- The Stack (program storage): an organized, structured storage mechanism with a relatively standardized layout.
- The Heap (data storage): space allocated on demand for data and maintained by the operating system. The heap is a virtual construct, so it can exist in many different parts of memory at once (cache and RAM).
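As a minimal illustration of the two storage areas, here is a short C sketch (not from the slides; the buffer sizes are arbitrary):

    #include <stdlib.h>

    void example(void) {
        int on_stack[16];                             /* automatic storage, laid out on the stack  */
        int *on_heap = malloc(16 * sizeof *on_heap);  /* storage requested from the heap allocator */
        if (on_heap == NULL)
            return;                                   /* the allocator may refuse the request      */
        /* ... use both buffers ... */
        free(on_heap);                                /* heap storage must be released explicitly  */
    }                                                 /* on_stack is reclaimed automatically here  */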

A Little History
- Early work was interested in mapping logical systems onto physical memory, and in developing an area for data that separates the OS from other programs.
- Later work looked at refining heap storage algorithms; the ability to trace the heap with programs let researchers build heuristics and statistics for examining heap behavior.
- The heap structure was then redefined to incorporate more higher-level data structures and objects.
- Eventually the focus moved away from fine-tuning the heap and the stack toward other methodologies, namely increasing hardware density so that if the heap runs out, the OS can simply request that more memory be allocated from RAM.

New vs. Old Views
- Old view: worry only about present heap management; the problem was considered either solved or unsolvable.
- New view: look at the effect of the heap over long periods of time. Computers stay on significantly longer than they did 20 years ago and run much larger programs.
- Determine the scenarios in which the heap behaves in particular ways, so the allocator can handle the heap well in each of those scenarios.
- Don't solve one big problem; solve a bunch of little ones.

The Allocator
- The component responsible for maintaining the heap.
- Keeps track of used memory and free memory, and passes available blocks of memory to the running program, where the actual information is stored.
- What the allocator can NOT do: compact memory, reallocate memory without the program's direct consent, or move data around to free up needed space.
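A minimal sketch of the bookkeeping an allocator might keep, assuming an intrusive singly linked free list (the names are illustrative, not from the paper):

    #include <stddef.h>

    /* One node per free block; the node lives inside the free block itself. */
    struct free_block {
        size_t size;                  /* usable bytes in this block       */
        struct free_block *next;      /* next free block on the free list */
    };

    static struct free_block *free_list;   /* head of the list of free blocks */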

Fragmentation
- The allocator places and deletes data wherever enough free space exists in the heap, and this causes fragmentation.
- Internal fragmentation: the available block is larger than the data assigned to it, so the leftover space inside the block is wasted.
- External fragmentation: free memory exists but cannot be used, because the object is larger than any one particular free block.
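A tiny, hypothetical illustration of internal fragmentation under a power-of-two rounding policy (the 16-byte minimum block and the rounding rule are assumptions, not something the slides specify):

    #include <stdio.h>
    #include <stddef.h>

    /* Round a request up to the next power of two, a simple size-class policy. */
    static size_t round_up_pow2(size_t n) {
        size_t p = 16;                 /* assumed minimum block size */
        while (p < n)
            p <<= 1;
        return p;
    }

    int main(void) {
        size_t request = 100;
        size_t block   = round_up_pow2(request);
        /* Prints: request 100 -> block 128, internal fragmentation 28 bytes */
        printf("request %zu -> block %zu, internal fragmentation %zu bytes\n",
               request, block, block - request);
        return 0;
    }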

Faults and Fixes
- The majority of programs used to test heap allocators are simulations that do not emulate real life, which results in false positives.
- They do not test long-running processes (e.g. the many OS processes), which can accumulate a lot of fragmentation over a long period of time.
- The simulated allocators create and destroy objects at random, but in real-life programs the creation and destruction of data is not necessarily random.
- Most real programs either create lots of data structures at the beginning and then manipulate those structures, or continuously create new structures and destroy them, but in groups.

Methods of Testing the Heap
- Tracing the heap: running benchmarks and simulations to produce probabilities and statistics used to fine-tune heap allocation.
- "A single death is a tragedy. A million deaths is a statistic" (attributed to Joseph Stalin). The quote is relevant because it shows that a statistic does not tell us what exactly is causing the problem.
- These statistics come from short-term programs and cannot account for long-term heap stability.

Ramps, Peaks, and Plateaus
- Examination of real-life programs led to three distinct families of heap usage.
- Example of ramp heap allocation: the Grobner software.
- Example of peak heap allocation: the GCC compiler.
- Example of plateau heap allocation: a Perl script.
- These families enable developers to create general allocators that can handle different types of programs in different ways.
- It is more important to handle manipulation of the heap well at the peaks than in the troughs.

Strategies and Policies
- Optimize allocation to minimize wasted cycles; the authors believe that over a billion cycles are sacrificed and squandered every hour due to inefficiency in allocation.
- Placement choice is key to determining the optimal location for data in the heap.
- Both memory overhead and time overhead have to be considered.
- The guiding rule: "Put blocks where they won't cause fragmentation later."

Splitting and Coalescing
- The two main ways of manipulating space in the heap.
- Splitting: if a free block is too big, split it so that the necessary data fits and the remainder stays free.
- Coalescing: if a memory block becomes empty and either of its neighbors is also empty, join the blocks together to provide a larger free block.
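A minimal sketch of splitting and coalescing on the intrusive free list shown earlier. It assumes the physical neighbor has already been located (e.g. via boundary tags), that the two blocks are adjacent both in memory and on the free list, and it ignores alignment, so it illustrates the idea rather than a production allocator:

    #include <stddef.h>

    #define MIN_PAYLOAD 16            /* assumed smallest block worth keeping */

    struct free_block {
        size_t size;                  /* usable bytes in this block */
        struct free_block *next;
    };

    /* Splitting: carve 'need' bytes off the front of 'blk' and return the
     * remainder as a new free block (the caller reinserts it into the free
     * list). Returns NULL if the block is not worth splitting. */
    static struct free_block *split_block(struct free_block *blk, size_t need) {
        if (blk->size < need + sizeof(struct free_block) + MIN_PAYLOAD)
            return NULL;
        struct free_block *rest =
            (struct free_block *)((char *)(blk + 1) + need);
        rest->size = blk->size - need - sizeof(struct free_block);
        rest->next = NULL;
        blk->size  = need;
        return rest;
    }

    /* Coalescing: merge 'blk' with the free block that sits immediately after
     * it in memory, absorbing that block's header as usable space. */
    static void coalesce_with_next(struct free_block *blk,
                                   struct free_block *next_in_memory) {
        blk->size += sizeof(struct free_block) + next_in_memory->size;
        blk->next  = next_in_memory->next;
    }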

Profiling of Allocators
Criteria for evaluating an allocator:
- Minimize time overhead.
- Minimize holes and fragments.
- Exploitation of common allocation patterns.
- Use of splitting and coalescing.
- Fits: when a block of a given size is freed, are blocks of that same size reused preferentially?
- Splitting thresholds.

Taxonomy of Allocation Algorithms
Allocators are classified by the mechanism they use for keeping track of free areas:
- Sequential fits
- Buddy systems
- Indexed fits
- Bitmapped fits

Low-Level Tricks
- Header fields and alignment: used to store block-related information, typically the size of the block, its relationship with neighboring blocks, and an in-use bit. Headers typically add about 10% overhead to memory usage.
- Boundary tags (a.k.a. footers): hold an in-use bit and information about the neighboring block; roughly 10% overhead as well.
- Lookup tables: instead of indexing blocks by address, index pages of blocks that are all the same size.
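A hedged sketch of a header/footer (boundary-tag) layout. Packing the in-use bit into the low bit of the size is a common trick, but the exact layout and field names here are assumptions:

    #include <stddef.h>
    #include <stdbool.h>

    /* The header sits just before the payload; a footer at the end of the block
     * mirrors it so the next block in memory can inspect its left neighbor. */
    struct boundary_tag {
        size_t size_and_flags;        /* size is a multiple of 8, so bit 0 holds "in use" */
    };

    static size_t tag_size(const struct boundary_tag *t)   { return t->size_and_flags & ~(size_t)7; }
    static bool   tag_in_use(const struct boundary_tag *t) { return t->size_and_flags & 1; }

    static void tag_set(struct boundary_tag *t, size_t size, bool in_use) {
        t->size_and_flags = size | (in_use ? 1u : 0u);
    }

    /* Given a block's header, find the footer of the block immediately before it
     * (assuming the [header | payload | footer] layout above). */
    static struct boundary_tag *left_footer(struct boundary_tag *hdr) {
        return hdr - 1;
    }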

Sequential Fits
- Best fit: find the smallest free block large enough to satisfy the request. An exhaustive O(Mn) search; if several blocks are equally good, which one should be used? Gives good memory usage in the end.
- First fit: find the first block large enough to satisfy the allocation; the open question is what order to search memory in.
- Next fit: an optimization of first fit that uses a roving pointer to keep track of the last block used, cutting down execution time.
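A minimal sketch of first fit and best fit over a singly linked free list (the same illustrative struct free_block as before; unlinking the chosen block is left out for brevity):

    #include <stddef.h>

    struct free_block {
        size_t size;
        struct free_block *next;
    };

    /* First fit: return the first block that is big enough. */
    static struct free_block *first_fit(struct free_block *list, size_t need) {
        for (struct free_block *b = list; b != NULL; b = b->next)
            if (b->size >= need)
                return b;
        return NULL;
    }

    /* Best fit: return the smallest block that is still big enough. */
    static struct free_block *best_fit(struct free_block *list, size_t need) {
        struct free_block *best = NULL;
        for (struct free_block *b = list; b != NULL; b = b->next)
            if (b->size >= need && (best == NULL || b->size < best->size))
                best = b;
        return best;
    }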

Segregated Free Lists
- Simple segregated storage: no splitting of free blocks is done; the heap is broken into power-of-two block sizes; blocks are indexed by size rather than by address; no header overhead, since all blocks in a class are the same size.
- Segregated fits: an array of free lists. If a block of the right size does not exist, take a block from another list and split it, or coalesce empty blocks, to fit the request.
- Three types of segregated fits: exact lists, strict size classes with rounding, and size classes with range lists.
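A small sketch of a power-of-two size-class table for segregated fits; the class boundaries, list count, and fallback behavior are assumptions made for illustration:

    #include <stddef.h>

    #define NUM_CLASSES 8              /* assumed classes: 16, 32, 64, ..., 2048 bytes */

    struct free_block {
        size_t size;
        struct free_block *next;
    };

    static struct free_block *size_class[NUM_CLASSES];   /* one free list per class */

    /* Map a request size to the index of the smallest class that fits it. */
    static int class_index(size_t need) {
        size_t cap = 16;
        int idx = 0;
        while (cap < need && idx < NUM_CLASSES - 1) {
            cap <<= 1;
            idx++;
        }
        return idx;
    }

    /* Allocate by popping the head of the matching list; a real segregated-fits
     * allocator would instead split a block taken from a larger class. */
    static struct free_block *seg_alloc(size_t need) {
        for (int i = class_index(need); i < NUM_CLASSES; i++) {
            if (size_class[i] != NULL) {
                struct free_block *b = size_class[i];
                size_class[i] = b->next;
                return b;
            }
        }
        return NULL;                   /* would grow the heap here */
    }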

Buddy System
- A variant of segregated lists.
- Split the entire heap into two, split each half into two, and so on (the pieces do not need to be powers of two).
- If two blocks need to coalesce to make room, the binary pair ("buddies") must both be empty.
- Operations take O(log n) time.
- Can also be done using a self-balancing binary tree or Fibonacci-sequence block sizes.
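For the binary-buddy variant specifically, a block's buddy can be found by flipping one bit of its offset from the start of the heap. This sketch assumes power-of-two block sizes aligned to their size, which the slide notes is not required of buddy systems in general:

    #include <stddef.h>

    /* Offset of the buddy of the block at 'offset' within the heap,
     * where 'size' is the block's power-of-two size and 'offset' is
     * measured from the start of the heap and aligned to 'size'. */
    static size_t buddy_offset(size_t offset, size_t size) {
        return offset ^ size;          /* flip the bit that distinguishes the pair */
    }

    /* Example: buddy_offset(0x3000, 0x1000) == 0x2000.
     * The two blocks may be coalesced only if the buddy is also free and
     * has not itself been split into smaller blocks. */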

Indexed Fits
- A more abstract mechanism that is shaped largely by a policy, the policy being a rule set such as "all blocks aligned to 8 bytes."
- One example indexes free blocks with a Cartesian tree sorted by both block size and address.
- Gives O(log n) search time.
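The Cartesian tree from the survey is more involved, but the core idea of an indexed fit, searching a size-ordered index instead of walking a list, can be sketched with a plain binary search tree keyed on block size (a deliberate simplification, not the actual data structure):

    #include <stddef.h>

    struct tree_block {
        size_t size;
        struct tree_block *left, *right;   /* smaller sizes left, larger sizes right */
    };

    /* Best-fit search in O(height): the smallest block with size >= need. */
    static struct tree_block *indexed_fit(struct tree_block *root, size_t need) {
        struct tree_block *best = NULL;
        while (root != NULL) {
            if (root->size >= need) {
                best = root;               /* candidate; keep looking for a tighter fit */
                root = root->left;
            } else {
                root = root->right;
            }
        }
        return best;
    }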

Bitmapped Fits
- A bitmap records which parts of the heap are in use: a simple vector of 1-bit flags, one bit per word of the heap area.
- Not used conventionally, even though the overhead is only about 3% versus 20%.
- Search times are linear, but heuristics can bring them down to approximately O(log n).
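A minimal sketch of a heap bitmap with one bit per word, plus a naive linear scan for a run of free words (the 32-bit word choice, heap size, and function names are illustrative):

    #include <stddef.h>
    #include <stdint.h>

    #define HEAP_WORDS 4096
    static uint32_t heap_bitmap[HEAP_WORDS / 32];   /* one bit per heap word */

    static void mark_used(size_t w) { heap_bitmap[w / 32] |=  (UINT32_C(1) << (w % 32)); }
    static void mark_free(size_t w) { heap_bitmap[w / 32] &= ~(UINT32_C(1) << (w % 32)); }
    static int  is_used(size_t w)   { return (heap_bitmap[w / 32] >> (w % 32)) & 1u; }

    /* Linear scan for 'run' consecutive free words; returns the start index or -1. */
    static long find_free_run(size_t run) {
        size_t count = 0;
        for (size_t w = 0; w < HEAP_WORDS; w++) {
            count = is_used(w) ? 0 : count + 1;
            if (count == run)
                return (long)(w - run + 1);
        }
        return -1;
    }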

Heap Allocators for Multiprocessor Architectures
Things to consider:
- Is there a shared cache?
- You need either a global heap that is accessible to all of the processors, or an OS/allocator design that can handle contention, false sharing, and space.

Applying and Using This for Garbage Collection
- The efficiency of the garbage collector is directly related to the efficiency of the allocator algorithm.
- If some of the low-level tricks are used, the garbage collector can even more effectively decipher what is in use and what is not.
- The collector can also help maintain the heap by coalescing neighboring free blocks to free up space.

Conclusions
- Heap organization is still an area worth researching.
- Because there is no defined organization to the heap, it is not nearly as easy to exploit the heap as it is the stack.
- By optimizing the heap's efficiency, it is possible to speed up the average computer without having to increase the size of the hardware.

Works Cited
- Berger, Emery. Scalable Memory Management. University of Massachusetts, Amherst, Dept. of Computer Science.
- Wilson, Paul, Mark Johnstone, Michael Neely, and David Boles. Dynamic Storage Allocation: A Survey and Critical Review. University of Texas at Austin, Department of Computer Science, Austin, TX.
- Wilson, Paul. Uniprocessor Garbage Collection Techniques. ACM.