Data & Memory Management CS153: Compilers (Greg Morrisett)

Records in C:

struct Point { int x; int y; };
struct Rect { struct Point ll,lr,ul,ur; };

struct Rect mkSquare(struct Point ll, int elen) {
  struct Rect res;
  res.lr = res.ul = res.ur = res.ll = ll;
  res.lr.x += elen;
  res.ur.x += elen; res.ur.y += elen;
  res.ul.y += elen;
  return res;
}

Representation:

struct Point { int x; int y; };
Two contiguous words: [ x | y ]. Use the base address. Alternatively, dedicate two registers?

struct Rect { struct Point ll,lr,ul,ur; };
Eight contiguous words: [ ll.x | ll.y | lr.x | lr.y | ul.x | ul.y | ur.x | ur.y ].

Member Access

i = rect.ul.y

Assuming $t holds the address of rect, calculate the offset of the access path relative to the base:
– .ul is at offset sizeof(struct Point) + sizeof(struct Point) = 16; .y adds sizeof(int) = 4.
– So: lw $t2, 20($t)

Copy-in/Copy-out

When we do an assignment as in:

struct Rect mkSquare(struct Point ll, int elen) {
  struct Rect res;
  res.lr = ll;
  ...

then we copy all of the elements out of the source and put them in the target. Same as doing word-level operations:

struct Rect mkSquare(struct Point ll, int elen) {
  struct Rect res;
  res.lr.x = ll.x;
  res.lr.y = ll.y;
  ...

For really large copies, we use something like memcpy.

Procedure Calls:

Similarly, when we call a procedure, we copy arguments in and copy results out.
– The caller sets aside extra space in its frame to store results that are bigger than two words.
Sometimes this is termed "call-by-value".
– This is bad terminology; copy-in/copy-out is more accurate.
Problem: expensive for large records…

Arrays

void foo() {
  char buf[27];
  buf[0]  = 'a';   /* *(buf)    = 'a'; */
  buf[1]  = 'b';   /* *(buf+1)  = 'b'; */
  buf[25] = 'z';   /* *(buf+25) = 'z'; */
  buf[26] = 0;     /* *(buf+26) = 0;   */
}

Space is allocated on the stack for buf. buf[i] is really just base of array + i * elt_size.

Multi-Dimensional Arrays

In C, int M[4][3] yields an array with 4 rows and 3 columns, laid out in row-major order:
M[0][0], M[0][1], M[0][2], M[1][0], M[1][1], …
M[i][j] compiles to? In Fortran, arrays are laid out in column-major order. In ML, there are no multi-dimensional arrays -- (int array) array. Why is knowing this important?

Strings

A string constant "foo" is represented as global data:

_string42: .asciiz "foo"

It's usually placed in the text segment so it's read-only.
– This allows all copies of the same string to be shared.
Typical mistake:
char *p = "foo";
p[0] = 'b';

Pass-by-Reference:

void mkSquare(struct Point *ll, int elen, struct Rect *res) {
  res->lr = res->ul = res->ur = res->ll = *ll;
  res->lr.x += elen;
  res->ur.x += elen; res->ur.y += elen;
  res->ul.y += elen;
}

void foo() {
  struct Point origin = {0,0};
  struct Rect unit_sq;
  mkSquare(&origin, 1, &unit_sq);
}

The caller passes in the address of the point and the address of the result (one word each).

Picture: foo's frame holds origin (origin.x, origin.y) and the eight words of unit_sq (ll through ur); below it, mkSquare's frame holds ll, elen, and res, where ll points at origin and res points at unit_sq.

What's wrong with this?

struct Rect *mkSquare(struct Point *ll, int elen) {
  struct Rect res;
  res.lr = res.ul = res.ur = res.ll = *ll;
  res.lr.x += elen;
  res.ur.x += elen; res.ur.y += elen;
  res.ul.y += elen;
  return &res;
}

Picture: mkSquare's frame holds ll, elen, and the eight words of the local res; &res points into mkSquare's own frame.

Picture: mkSquare has returned and its frame has been popped, but &res still points into the now-dead frame.

Picture: the frame of the next called procedure overwrites the memory that &res still points to.

Stack vs. Heap Allocation

We can only allocate an object on the stack when it is no longer used after the procedure returns.
– NB: it's possible to exploit bugs like this in C code to hijack the return address; then an attacker can gain control of the program…
For other objects, we must use the heap (i.e., malloc).
– And of course, we must remember to free the object when it is no longer used! Also a big source of bugs in C/C++ code.
– Java, ML, C#, etc. use a garbage collector instead.

Program Fixed:

struct Rect *mkSquare(struct Point *ll, int elen) {
  struct Rect *res = malloc(sizeof(struct Rect));
  res->lr = res->ul = res->ur = res->ll = *ll;
  res->lr.x += elen;
  res->ur.x += elen; res->ur.y += elen;
  res->ul.y += elen;
  return res;
}

How do malloc/free work?

Upon malloc(n):
– Find an unused space of at least size n.
– (Need to mark that space as in use.)
– Return the address of that space.
Upon free(p):
– Mark the space pointed to by p as free.
– (Need to keep track of how big the object is.)

One Option: Free List

Keep a linked list of contiguous chunks of free memory.
– Each element of the list has two words of meta-data: one word points to the next element in the free list, the other says how big the chunk is.

[ size | next | … free space … ]

Malloc and Free

To malloc, run down the list until you find a spot that's big enough to satisfy the request.
– Take the left-overs and put them back in the free list.
– First-fit vs. best-fit?
To free, put the object back in the list.
– Perhaps keep chunks sorted so that adjacent chunks can be coalesced.

Exponential Scaling:

Keep an array of free lists:
– Each list has chunks of the same size.
– FreeList[i] holds chunks of size 2^i.
– Round requests up to the nearest power of two.
– When FreeList[i] is empty, take a block from FreeList[i+1] and divide it in half, putting both chunks in FreeList[i].
– Alternatively, run through FreeList[i-1] and merge contiguous blocks.

Modern Languages

Represent all records (tuples, objects, etc.) using pointers.
– Makes it possible to support polymorphism.
– e.g., ML doesn't care whether we pass an integer, two-tuple, or record to the identity function: they are all represented with one word.
– Price paid: lots of loads/stores…
By default, allocate records on the heap.
– Programmer doesn't have to worry about lifetimes.
– Compiler may determine that it's safe to allocate a record on the stack instead.
– Uses a garbage collector to safely reclaim data.
– Because pointers are abstract, the implementation has the freedom to rearrange the data in the heap to support compaction.

Garbage Collection:

Starting from the stack, registers, & globals (the roots), determine which objects in the heap are reachable by following pointers. Reclaim any object that isn't reachable.
Requires being able to distinguish pointer values from other values (e.g., ints).
– OCaml uses the low bit: 1 means it's a scalar, 0 means it's a pointer.
– Java puts the tag bits in the object meta-data.
– The Boehm collector uses heuristics: e.g., the value doesn't point into an allocated object.

Mark/Sweep Traversal:

Reserve a mark-bit for each object.
Starting from the roots, mark each accessible object and stick it into a queue or stack.
– queue: breadth-first traversal; stack: depth-first traversal.
Loop until the queue/stack is empty:
– remove a marked object (say x).
– if x points to an (unmarked) object y, then mark y and put it in the queue.
Then run through all objects:
– If an object hasn't been marked, put it on the free list.
– If it has been marked, clear the mark bit.

Copying Collection:

Split the data segment into two pieces. Allocate in the 1st piece until it fills up. Then copy the reachable data into the 2nd piece, compressing out the holes corresponding to garbage objects.

Reality:

Techniques such as generational or incremental collection can greatly reduce latency.
– Pause times of a few milliseconds.
Large objects (e.g., arrays) can be copied in a "virtual" fashion without doing a physical copy.
Some systems use a mix of copying collection and mark/sweep, with support for compaction.
A real challenge is scaling this to server-scale systems with terabytes of memory…
Interactions with the OS matter a lot: it's cheaper to do GC than it is to start paging…

Conservative Collectors:

Work without help from the compiler.
– e.g., legacy C/C++ code.
– e.g., your compiler :-)
Cannot accurately determine which values are pointers.
– But can rule out some values (e.g., if they don't point into the data segment).
– So they must conservatively treat anything that looks like a pointer as such.
– Two bad things result: space leaks (false pointers keep dead objects alive), and objects can't be moved (no compaction).

The Boehm Collector

Based on mark/sweep.
– Performs the sweep lazily.
Organizes free lists as we saw earlier.
– Different lists for different-sized objects.
– Relatively fast (single-threaded) allocation.
Most of the cleverness is in finding roots:
– global variables, the stack, registers, etc.
And in determining which values aren't pointers:
– blacklisting, etc.