
Chapter 8 Runtime Support

How are program structures implemented in computer memory? The evolution of programming language design has led to increasingly sophisticated methods of runtime storage organization. We will cover three methods: 1) static allocation, 2) stack allocation, and 3) heap allocation.

Static Allocation Originally, all data were global. Correspondingly, all memory allocation was static: during compilation, each data item was simply placed at a fixed memory address for the entire execution of the program. This is called static allocation. Examples are all assembly languages, Cobol, and Fortran.

Static Allocation (Cont.) Static allocation can be quite wasteful of memory space. To reduce storage needs, Fortran's EQUIVALENCE statement overlays variables by forcing two variables to share the same memory locations; in C and C++, a union does the same, as illustrated below. Overlaying hurts program readability, since an assignment to one variable changes the value of another.
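A minimal C illustration of such an overlay (the union type name is made up for this example):

    #include <stdio.h>

    union Overlay {          /* i and f share the same memory locations      */
        int   i;
        float f;
    };

    int main(void) {
        union Overlay u;
        u.i = 42;
        u.f = 3.14f;         /* overwrites the storage that held u.i         */
        printf("%d\n", u.i); /* no longer 42: assigning one member changed   */
                             /* the value read through the other             */
        return 0;
    }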

Static Allocation (Cont.) In more modern languages, static allocation is used for global variables and literals (constants) that are fixed in size and accessible throughout program execution. It is also used for static and extern variables in C/C++ and for static fields in C# and Java classes.

Stack Allocation Recursive languages require dynamic memory allocation. Each time a recursive method is called, a new copy of its local variables (a frame) is pushed onto a runtime stack; the number of allocations is unknown at compile-time. A frame (or activation record) contains space for all of the local variables of the method. When the method returns, its frame is popped and the space is reclaimed. Thus, only the methods that are actually executing are allocated memory space in the runtime stack. This is called stack allocation.

int fact(int n) {
    if (n > 1)
        return n * fact(n - 1);
    else
        return 1;
}

See the stack allocation in the figure that follows.

Because the stack may contain more than just frames (e.g., registers saved across calls), a dynamic link is used to point to the preceding frame (Fig. 12.4).
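As a hedged sketch (not the layout of any particular compiler), a frame can be pictured as a record whose first fields hold the dynamic link and the saved return address, followed by the locals:

    /* Illustrative only: real compilers use registers and fixed stack
       offsets rather than an explicit C struct. */
    struct Frame {
        struct Frame *dynamic_link;   /* points to the preceding frame          */
        void         *return_addr;    /* where execution resumes in the caller  */
        /* ... saved registers, parameters, local variables ... */
    };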

class k {
    int a;
    int sum() { int b = 42; return a + b; }
}

When obj.sum() is called, the local variable "b" resides in a frame on the runtime stack, while the object's member "a" is accessed through an object pointer called "this" (see Fig. 12.5).
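As a rough C sketch of what the compiler arranges (the names k_sum and this_ptr are illustrative), the object pointer is passed to the method as a hidden first argument:

    struct k { int a; };

    /* obj.sum() is compiled roughly as k_sum(&obj) */
    int k_sum(struct k *this_ptr) {
        int b = 42;               /* b lives in the frame on the runtime stack */
        return this_ptr->a + b;   /* a is reached through the object pointer   */
    }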

int p(int a) {
    int q(int b) {
        if (b < 0)
            return q(-b);
        else
            return a + b;
    }
    return q(-10);
}

Methods can nest as above in languages such as Pascal and Ada (and in GNU C as an extension); standard C and Java do not allow nested methods. A static link points to the frame of the method that statically encloses the current method (Fig. 12.6).
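Under this scheme each frame carries one extra field. Extending the earlier frame sketch (again only an illustration):

    struct NestedFrame {
        struct NestedFrame *dynamic_link;   /* caller's frame                           */
        struct NestedFrame *static_link;    /* frame of the statically enclosing method */
        /* ... locals ... */
    };
    /* q reaches p's local a by following one static link:
       frame_of_q->static_link is the frame of p. */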

An alternative to using static links to access frames of enclosing methods is the use of a display. Here, we maintain a set of registers which comprise the display. (see Fig. 12.7)
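A minimal sketch of a display (the names and depth limit are illustrative):

    /* Sketch: display[d] holds a pointer to the most recent frame at static
       nesting depth d, so a variable of an enclosing method is reached in one
       step instead of following a chain of static links. */
    #define MAX_NESTING_DEPTH 8

    struct Frame;                               /* frame layout as sketched earlier */
    struct Frame *display[MAX_NESTING_DEPTH];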

void p(int a) {
    int b;
    if (a > 0) {
        float c, d;   /* body of block 1 */
    } else {
        int e[10];    /* body of block 2 */
    }
}

Because the then and else parts of the if statement above are mutually exclusive, the variables of block 1 and block 2 can overlay each other. Allocating them all in the procedure's single frame is called procedure-level frame allocation, as contrasted with block-level frame allocation (Fig. 12.8).

Heap Allocation Lisp, and later C, C++, C#, and Java, use a scheme for dynamically allocating data called heap allocation. Heap allocation allows memory blocks to be allocated (by a call to “new” or “malloc”) and freed (by a call to “free”) at any time and in any order during program execution. Thus, each program execution can “customize” its memory allocation needs.
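For example, in C (a trivial sketch), blocks can be obtained and released in any order:

    #include <stdlib.h>

    int main(void) {
        int *a = malloc(100 * sizeof(int));   /* allocated at any time...  */
        int *b = malloc(10 * sizeof(int));
        free(a);                              /* ...and freed in any order */
        free(b);
        return 0;
    }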

Memory Management When a program is started, most operating systems allocate three memory segments for it: 1) a code segment (read-only), 2) a stack segment (data), manipulated by machine instructions, and 3) a heap segment (data), manipulated by the programmer.

Memory Management Allocating memory in the heap: 1) keep a pointer to the first free location in the heap, 2) allocate the required block from there, and 3) bump the pointer to the next free location (see the sketch below). The problem with this scheme is that sooner or later we run out of heap space, so we need a way to release blocks.
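A minimal sketch of this pointer-bumping allocator (the names are illustrative; it ignores alignment and, as noted, never reuses released space):

    #include <stddef.h>

    #define HEAP_SIZE (1 << 20)

    static char  heap[HEAP_SIZE];
    static char *next_free = heap;            /* 1. pointer to the first free location */

    void *bump_alloc(size_t size) {
        if (next_free + size > heap + HEAP_SIZE)
            return NULL;                      /* out of heap space                     */
        void *block = next_free;              /* 2. allocate the required block here   */
        next_free += size;                    /* 3. bump the pointer past it           */
        return block;
    }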

Memory Management Two data de-allocation schemes: 1) explicit de-allocation, requested by the programmer in the source code, and 2) implicit de-allocation, performed automatically by the run-time system (garbage collection).

Basic memory allocation A memory allocation program 1) finds a block of unused memory of the requested size, 2) marks it as used, and 3) returns a pointer to the block. If no such block is available, the result varies: 1) a null pointer may be returned, 2) an error routine may be called, or 3) the program may be aborted.

Block vs. Chunk Blocks are handled by the programmer, and chunks are handled by the memory allocation program. A chunk contains a block plus some administrative information, which includes the length of the chunk and is usually located just before the block.

Block vs. Chunk (Cont.)

We need a few bits in each chunk for administrative purposes: one of them is a free bit, which indicates whether the chunk is free, and the free chunks are linked together to form the free list (a sketch follows).
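A chunk header might be sketched like this (the field names are illustrative; real allocators pack this information more tightly):

    #include <stddef.h>

    struct Chunk {
        size_t        length;     /* length of the chunk, stored just before the block */
        unsigned      free : 1;   /* free bit: is this chunk available?                */
        struct Chunk *next_free;  /* links the free chunks into the free list          */
        /* the block handed to the programmer starts after this header */
    };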

Basic memory allocation The basic memory allocation interface consists of “malloc()” (to allocate memory) and “free()” (to de-allocate it). De-allocation is indicated explicitly by the programmer. Explicit de-allocation is a problem for the programmer and for the compiler as well.

Basic memory allocation The programmer uses “free()” to de-allocate, but it is hard to predict the lifetime of data. Many programmers free memory too early by mistake and, later on, dereference a ‘dangling pointer’ to the freed (and possibly re-allocated) data.
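A minimal C illustration of the mistake described here; the dereference after free() is the bug, and it is undefined behaviour:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int *p = malloc(sizeof *p);
        *p = 7;
        free(p);               /* freed too early by mistake                 */
        printf("%d\n", *p);    /* dereferences a dangling pointer: undefined */
                               /* behaviour; it may appear to work, which is */
                               /* exactly what makes the error hard to find  */
        return 0;
    }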

Basic memory allocation This error is hard to find because: the dangling pointer may be dereferenced long after the “free”; after the dereference, the program can proceed for some time without the error being detected; and in an interactive system it may be impossible to reproduce the exact sequence of allocations and de-allocations needed for debugging. Therefore, implicit de-allocation (automatic garbage collection), described next, is often preferred.

Implicit de-allocation Implicit de-allocation, called garbage collection, is the automatic reclamation of memory that is no longer in use by the application program.

Garbage collection algorithm Garbage collection automatically reclaims the set of memory chunks that will no longer be used by the program (the garbage): 1) chunks to which there are no pointers, and 2) chunks that are not reachable from the non-heap-allocated program data.

Garbage collection algorithm Criterion 1) above (‘no pointers’) leads to a technique called reference counting, which directly identifies garbage chunks. It is simple and reasonably efficient, but it requires all pointer actions to be monitored during program execution and may not recover all garbage chunks. Criterion 2) (‘not reachable’) underlies the tracing collectors (mark-sweep and copying) described later.

Reference Counting Reference counting records in each chunk the number of pointers that point to it; when the number drops to zero, the chunk is declared “garbage”.
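A minimal sketch of the bookkeeping (illustrative names; free() stands in for whatever reclaim operation the allocator actually uses):

    #include <stdlib.h>

    struct RCChunk {
        int ref_count;            /* number of pointers that point to this chunk */
        /* ... user data ... */
    };

    void rc_retain(struct RCChunk *c)  { c->ref_count++; }   /* a new pointer refers to c */

    void rc_release(struct RCChunk *c) {
        if (--c->ref_count == 0)  /* count dropped to zero: the chunk is garbage */
            free(c);
    }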

Reference Counting Chunks with reference counts in a heap

Circularity in Reference Counting In the figure that follows, the two objects point to each other, forming a circular structure. If the global pointer p is set to null, the first object’s reference count drops from 2 to 1. Now both objects have non-zero counts (and thus will NOT be de-allocated), yet neither is accessible through any external pointer.

Mark-Sweep Garbage Collection Marking phase: 1) Starting with global pointers and pointers in stack frames, we mark the reachable heap objects (perhaps by setting a bit in each object’s header). 2) We then follow the pointers in the marked objects until all live objects are marked.

Mark-Sweep Garbage Collection (Cont.) Sweep phase: after the marking phase, any object not marked is regarded as garbage that may be freed. We then “sweep” through the heap, collecting all unmarked objects and returning them to the free space list for later reuse. Compaction phase: a compaction phase can be added, as shown in the figure that follows.
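A hedged sketch of the first two phases (the object layout, the fixed number of pointer fields, and the free-list handling are all assumptions made for illustration):

    #include <stddef.h>

    struct Obj {
        int         marked;        /* mark bit in the object's header             */
        struct Obj *children[2];   /* assumed: at most two pointer fields         */
        struct Obj *next;          /* links every allocated object, for the sweep */
        struct Obj *free_next;     /* links reclaimed objects on the free list    */
    };

    static struct Obj *free_list = NULL;

    void mark(struct Obj *o) {                    /* marking phase               */
        if (o == NULL || o->marked)
            return;
        o->marked = 1;
        mark(o->children[0]);                     /* follow pointers in marked   */
        mark(o->children[1]);                     /* objects                     */
    }

    void sweep(struct Obj *all_objects) {         /* sweep phase                 */
        for (struct Obj *o = all_objects; o != NULL; o = o->next) {
            if (!o->marked) {                     /* unmarked: garbage           */
                o->free_next = free_list;         /* return it to the free list  */
                free_list = o;
            } else {
                o->marked = 0;                    /* clear for the next cycle    */
            }
        }
    }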

The mark-sweep scheme has the drawback that all heap objects must be swept, which can be costly if most objects are dead. A copying collector, on the other hand, examines ONLY the live objects.

Copying Collectors We divide the heap into two halves: 1) the from-space and 2) the to-space. We allocate objects in the from-space; when it is exhausted, we collect the live objects by copying them into the to-space.
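A hedged sketch of the copying step, written as a recursive copy rather than Cheney's breadth-first scan; the object layout and field count are assumptions for illustration:

    #include <stddef.h>
    #include <string.h>

    struct CObj {
        size_t       size;          /* total bytes occupied by this object  */
        struct CObj *forward;       /* set once the object has been copied  */
        struct CObj *fields[2];     /* assumed: at most two pointer fields  */
    };

    static char *to_free;           /* bump pointer into the to-space       */

    /* Copy one live object (and everything reachable from it) into the
       to-space and return its new address; already-copied objects are
       recognised by their forwarding pointer. */
    struct CObj *copy(struct CObj *o) {
        if (o == NULL)
            return NULL;
        if (o->forward == NULL) {
            struct CObj *new_o = (struct CObj *)to_free;
            memcpy(new_o, o, o->size);
            to_free += o->size;
            o->forward = new_o;                 /* leave a forwarding pointer */
            for (int i = 0; i < 2; i++)
                new_o->fields[i] = copy(o->fields[i]);
        }
        return o->forward;
    }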
