Data Structures and Algorithms


Data Structures and Algorithms Memory Management Dr. Ken Cosh

Review Hashing: hash functions (truncation, folding, modular arithmetic) and collisions (avoidance and resolution).

This week: Memory Management, Memory Allocation, Garbage Collection.

The Heap Not a heap, but the heap. Not the tree-like data structure, but the area of the computer's memory that is dynamically allocated to programs. In C++ we allocate parts of the heap using the 'new' operator, and reclaim them using the 'delete' operator. C++ allows close control over how much memory is used by your program. In some programming languages (FORTRAN, COBOL, BASIC) the compiler decides how much memory to allocate. Others (LISP, Smalltalk, Eiffel, Java) have automatic storage reclamation.

External Fragmentation External Fragmentation occurs when sections of the memory have been allocated, and then some deallocated, leaving gaps between used memory. The heap may end up being many small pieces of available memory sandwiched between pieces of used memory. A request may come for a certain amount of memory, but perhaps no block of memory is big enough, even though there is plenty of actual space in memory.

Internal Fragmentation Internal Fragmentation occurs when the memory allocated to certain processes or data is too large for its contents. Here space is wasted because, although allocated, it is not being used.

Sequential Fit Methods When memory is requested a decision needs to be made about which block of memory is allocated to the request. In order to discuss which method is best, we need to investigate how memory might be managed. Consider a linked list, containing links to each block of available memory. When memory is allocated or returned, the list is rearranged, either by deletion or insertion.

Sequential Fit Methods First Fit Algorithm: the allocated memory is the first block found in the linked list that is large enough. Best Fit Algorithm: the block closest in size to the requested size is allocated. Worst Fit Algorithm: the largest block on the list is allocated. Next Fit Algorithm: like First Fit, but the search resumes from where the previous search finished.

Comparing Sequential Fit Methods First Fit is most efficient, comparable to the Next Fit. However there can be more external fragmentation. The Best Fit algorithm actually leaves very small blocks of practically unusable memory. Worst Fit tries to avoid this fragmentation, by delaying the creation of small blocks. Methods can be combined by considering the order in which the linked list is sorted – if the linked list is sorted largest to smallest, First Fit becomes the same as Worst Fit.

Non-Sequential Fit Methods In reality with large memory, sequential fit methods are inefficient. Therefore non-sequential fit methods are used where memory is divided into sections of a certain size. An example is a buddy system.

Buddy Systems In buddy systems memory can be divided into sections, with each location being a buddy of another location. Whenever possible the buddies are combined to create a larger memory location. If smaller memory needs to be allocated the buddies are divided, and then reunited (if possible) when the memory is returned.

Binary Buddy Systems In binary buddy systems the memory is repeatedly divided into two equally sized blocks. Suppose we have 8 memory locations: {000, 001, 010, 011, 100, 101, 110, 111}. Each of these memory locations is of size 1. Blocks of size 2 can start only at {000, 010, 100, 110}; blocks of size 4 at {000, 100}; a block of size 8 at {000}. In reality the memory starts combined and is only broken down when requested.

Buddy System in 1024K memory (diagram: the 1024K heap shown split into blocks A-64K, B-128K, C-64K and D-128K, alongside free 128K, 256K and 512K regions).

Sequence of Requests. Program A requests memory 34K..64K in size Program B requests memory 66K..128K in size Program C requests memory 35K..64K in size Program D requests memory 67K..128K in size Program C releases its memory Program A releases its memory Program B releases its memory Program D releases its memory

If memory is to be allocated:
1. Look for a memory slot of a suitable size.
2. If it is found, it is allocated to the program.
3. If not, the system tries to make a suitable memory slot:
   a. Split a free memory slot larger than the requested size in half.
   b. If the lower limit is reached, allocate that amount of memory.
   c. Otherwise go back to step 1 (look for a memory slot of a suitable size).
   d. Repeat until a suitable memory slot is found.

If memory is to be freed:
1. Free the block of memory.
2. Look at the neighbouring (buddy) block: is it free too?
3. If it is, combine the two, go back to step 2, and repeat until either the upper limit is reached (all memory is freed) or a non-free buddy block is encountered.

Buddy Systems Unfortunately with buddy systems there can be significant internal fragmentation. In the case 'Program A requests 34K memory', a 64K block was assigned. The sequence of block sizes allowed is 1, 2, 4, 8, 16, …, 2^m. An improvement can be gained by varying the block size sequence: 1, 2, 3, 5, 8, 13, …, otherwise known as the Fibonacci sequence. When using this sequence further complications occur, for instance when finding the buddy of a returned block.

Fragmentation It is worth noticing that internal and external fragmentation are roughly inversely proportional: as internal fragmentation is avoided through precise memory allocation, free blocks of many odd sizes accumulate, increasing external fragmentation.

Garbage Collection Another key function of memory management is garbage collection. Garbage collection is the return of areas of memory once their use is no longer required. Garbage collection in some languages is automated, while in others it is manual, such as through the delete keyword.

Garbage Collection Garbage collection involves two key phases: first, determine which data objects in the program will not be accessed in the future; second, reclaim the storage used by those objects.

Mark and Sweep The Mark and Sweep method of garbage collection performs the two tasks in distinct phases. First, every reachable memory location is marked. Second, the memory is swept and the unmarked cells are reclaimed to the memory pool.

Marking A simple marking algorithm follows the pre-order tree traversal method:

marking(node)
    if node is not marked
        mark node;
        if node is not an atom
            marking(head(node));
            marking(tail(node));

This algorithm is then called for every root memory item. The problem with this algorithm? Excessive use of the runtime stack through recursion, especially given the potential size of the data to traverse.

Alternative Marking The obvious alternative to the recursive algorithm is an iterative version. The iterative version, however, just shifts the excessive use to an explicit stack, which means using memory in order to reclaim space from memory. A better approach doesn't require extra memory: each link is followed, and the path back is remembered by temporarily inverting the links between nodes.

Schorr and Waite

SWmark(curr)
    prev = null;
    while (1)
        mark curr;
        if head(curr) is marked or an atom
            if head(curr) is an unmarked atom
                mark head(curr);
            while tail(curr) is marked or an atom
                if tail(curr) is an unmarked atom
                    mark tail(curr);
                while prev is not null and tag(prev) is 1
                    tag(prev) = 0;
                    invertLink(curr, prev, tail(prev));
                if prev is not null
                    invertLink(curr, prev, head(prev));
                else finished;
            tag(curr) = 1;
            invertLink(prev, curr, tail(curr));
        else invertLink(prev, curr, head(curr));

Sweep Having marked all used (linked) memory locations, the next step is to sweep through the memory. Sweep() checks every item in the memory, any which haven’t been marked are then returned to available memory. Sadly, this can often leave the memory with used locations sparsely scattered throughout. A further phase is required – compaction.

Compaction Compaction involves copying data to one section of the computer's memory. As our data is likely to involve linked data structures, we need to maintain the pointers to the nodes even when their locations change. (Diagram: blocks A, B and C scattered through memory before compaction, and the same blocks copied into one contiguous region afterwards.)

Compaction

compact()
    lo = bottom of heap;
    hi = top of heap;
    while (lo < hi)
        while *lo is marked
            lo++;
        while *hi is not marked
            hi--;
        unmark cell *hi;
        *lo = *hi;
        tail(*hi--) = lo++;    // forwarding address
    lo = bottom of heap;
    while (lo <= hi)
        if *lo is not an atom and head(*lo) > hi
            head(*lo) = tail(head(*lo));
        if *lo is not an atom and tail(*lo) > hi
            tail(*lo) = tail(tail(*lo));
        lo++;

Incremental Garbage Collection The Mark and Sweep method of garbage collection is called automatically when the available memory resources are unsatisfactory. When it is called, the program is likely to pause while the algorithm runs. In real-time systems this is unacceptable, so another approach can be considered: incremental garbage collection.

Incremental Garbage Collection In incremental garbage collection the collection phase is interleaved with the program. Here the program is called a mutator, as it can change the data the garbage collector is tidying. One approach, similar to mark and sweep, is to intermittently copy n items from a 'fromspace' to a 'tospace', two semispaces in the computer's memory. The next time collection runs, the roles of the two spaces are switched. Consider: what are the pros and cons of incremental collection versus mark and sweep?