
Compiler Run-time Organization Lecture 7

2 What we have covered so far… We have covered the front-end phases –Lexical analysis –Parsing –Semantic analysis Next are the back-end phases –Optimization –Code generation Let's take a look at code generation...

3 Run-time environments Before discussing code generation, we need to understand what we are trying to generate There are a number of standard techniques for structuring executable code that are widely used

4 Outline Management of run-time resources Correspondence between static (compile-time) and dynamic (run-time) structures Storage organization

5 Run-time Resources Execution of a program is initially under the control of the operating system When a program is invoked: –The OS allocates space for the program –The code is loaded into part of the space –The OS jumps to the entry point (i.e., “main”)

6 Memory Layout [Figure: memory from the low address at the top to the high address at the bottom, divided into Code and Other Space]

7 Notes By tradition, pictures of machine organization have: –Low address at the top –High address at the bottom –Lines delimiting areas for different kinds of data These pictures are simplifications –E.g., not all memory need be contiguous

8 What is Other Space? Holds all data for the program Other Space = Data Space Compiler is responsible for: –Generating code –Orchestrating use of the data area

9 Code Generation Goals Two goals: –Correctness –Speed Most complications in code generation come from trying to be fast as well as correct

10 Assumptions about Execution 1.Execution is sequential; control moves from one point in a program to another in a well-defined order 2.When a procedure is called, control eventually returns to the point immediately after the call Do these assumptions always hold?

11 Activations An invocation of procedure P is an activation of P The lifetime of an activation of P is –All the steps to execute P –Including all the steps in procedures P calls

12 Lifetimes of Variables The lifetime of a variable x is the portion of execution in which x is defined Note that –Lifetime is a dynamic (run-time) concept –Scope is a static concept

13 Activation Trees Assumption (2) requires that when P calls Q, then Q returns before P does Lifetimes of procedure activations are properly nested Activation lifetimes can be depicted as a tree

14 Example class Main { int g() { return 1; } int f() { return g(); } void main() { g(); f(); } } [Activation tree: Main is the root, with children g and f; f has a child g]

15 Notes The activation tree depends on run-time behavior The activation tree may be different for every program input Since activations are properly nested, a stack can track currently active procedures
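Since activations are properly nested, the currently active procedures can be tracked with an explicit stack, as the slide says. The following Python sketch is illustrative (not part of the slides; enter/leave are hypothetical instrumentation) and traces the Main/f/g example:

```python
trace = []   # snapshot of the active-procedure stack at each call
stack = []   # currently active procedures

def enter(name):
    # on procedure entry: push the new activation and record a snapshot
    stack.append(name)
    trace.append(list(stack))

def leave():
    # on procedure exit: the activation on top is the one that ends
    stack.pop()

def g():
    enter("g"); leave(); return 1

def f():
    enter("f"); r = g(); leave(); return r

def main():
    enter("main"); g(); f(); leave()

main()
print(trace)   # [['main'], ['main', 'g'], ['main', 'f'], ['main', 'f', 'g']]
```

The snapshots show exactly the nesting of the activation tree: g finishes before f starts, and the inner g finishes before f does.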

16 Example class Main { int g() { return 1; } int f() { return g(); } void main() { g(); f(); } } [Figure: the activation tree from the previous slide, next to the stack at the point where f's call to g is active: Main, f, g]

17 Revised Memory Layout [Figure: memory from the low address to the high address, divided into Code and Stack]

18 Activation Records The information needed to manage one procedure activation is called an activation record (AR) or frame If procedure F calls G, then G’s activation record contains a mix of info about F and G.

19 What is in G’s AR when F calls G? F is “suspended” until G completes, at which point F resumes. G’s AR contains information needed to resume execution of F. G’s AR may also contain: –G’s return value (needed by F) –Actual parameters to G (supplied by F) –Space for G’s local variables

20 The Contents of a Typical AR for G –Space for G's return value –Actual parameters –Pointer to the previous activation record (the control link; points to AR of caller of G) –Machine status prior to calling G (contents of registers & program counter) –Local variables –Other temporary values

21 Example class Main { int g() { return 1; } int f(int x) { if (x == 0) { return g(); } else { return f(x - 1); (**) } } void main() { f(3); (*) } } AR for f: result argument control link return address

22 Stack After Two Calls to f [Figure: the stack holds Main's AR; then an AR for f with result (empty), argument 3, return address (*); then, on top, an AR for f with result (empty), argument 2, return address (**)]

23 Notes Main has no argument or local variables and its result is never used; its AR is uninteresting (*) and (**) are return addresses of the invocations of f –The return address is where execution resumes after a procedure call finishes This is only one of many possible AR designs –Would also work for C, Pascal, FORTRAN, etc.

24 The Main Point The compiler must determine, at compile-time, the layout of activation records and generate code that correctly accesses locations in the activation record Thus, the AR layout and the code generator must be designed together!

25 Example The picture shows the state after the call to the 2nd invocation of f returns [Figure: the stack holds Main's AR; an AR for f with result (empty), argument 3, return address (*); and an AR for f whose result slot now holds 1, with argument 2 and return address (**)]

26 Discussion The advantage of placing the return value 1st in a frame is that the caller can find it at a fixed offset from its own frame There is nothing magic about this organization –Can rearrange order of frame elements –Can divide caller/callee responsibilities differently –An organization is better if it improves execution speed or simplifies code generation

27 Discussion (Cont.) Real compilers hold as much of the frame as possible in registers –Especially the method result and arguments

28 Globals All references to a global variable point to the same object –Can’t store a global in an activation record Globals are assigned a fixed address once –Variables with fixed address are “statically allocated” Depending on the language, there may be other statically allocated values

29 Memory Layout with Static Data Low Address High Address Memory Code Stack Static Data

30 Heap Storage A value that outlives the procedure that creates it cannot be kept in the AR Example: foo() { return new Class; } –The new Class object must survive deallocation of foo's AR Languages with dynamically allocated data use a heap to store dynamic data

31 Notes The code area contains object code –For most languages, fixed size and read only The static area contains data (not code) with fixed addresses (e.g., global data) –Fixed size, may be readable or writable The stack contains an AR for each currently active procedure –Each AR usually fixed size, contains locals Heap contains all other data –In C, heap is managed by malloc and free

32 Notes (Cont.) Both the heap and the stack grow Must take care that they don't grow into each other Solution: start heap and stack at opposite ends of memory and let them grow towards each other

33 Memory Layout with Heap [Figure: memory from the low address to the high address, divided into Code, Static Data, Heap, and Stack; heap and stack start at opposite ends and grow toward each other]

34 Data Layout Low-level details of machine architecture are important in laying out data for correct code and maximum performance Chief among these concerns is alignment

35 Alignment Most modern machines are (still) 32 bit –8 bits in a byte –4 bytes in a word –Machines are either byte or word addressable Data is word aligned if it begins at a word boundary Most machines have some alignment restrictions –Or performance penalties for poor alignment

36 Alignment (Cont.) Example: A string “Hello” Takes 5 characters (without a terminating \0) To word align next datum, add 3 “padding” characters to the string The padding is not part of the string, it’s just unused memory
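The padding rule is just rounding a size up to the next multiple of the word size. A minimal Python sketch of the calculation (illustrative; 4-byte words assumed, as on the slide):

```python
def padded_size(nbytes, word=4):
    # round nbytes up to the next multiple of the word size
    return (nbytes + word - 1) // word * word

print(padded_size(len("Hello")))   # 8: 5 data bytes + 3 padding bytes
```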

37 Code Generation Overview Stack machines The MIPS assembly language A simple source language Stack-machine implementation of the simple language

38 Stack Machines A simple evaluation model No variables or registers A stack of values for intermediate results Each instruction: –Takes its operands from the top of the stack –Removes those operands from the stack –Computes the required operation on them –Pushes the result on the stack

39 Example of Stack Machine Operation [Figure: the addition operation on a stack machine — add pops the two operands on top of the stack (here summing to 12) and pushes the result; the 9 beneath them is untouched]

40 Example of a Stack Machine Program Consider two instructions –push i - place the integer i on top of the stack –add - pop two elements, add them and put the result back on the stack A program to compute 7 + 5: push 7 push 5 add
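A machine with just these two instructions can be simulated in a few lines. The Python sketch below is illustrative (the tuple encoding of instructions is an assumption, not from the slides):

```python
def run(program):
    # program: list of ("push", i) and ("add",) instructions
    stack = []
    for instr in program:
        if instr[0] == "push":
            stack.append(instr[1])       # place i on top of the stack
        elif instr[0] == "add":
            a = stack.pop()              # take both operands from the top...
            b = stack.pop()
            stack.append(a + b)          # ...and push the result back
    return stack

print(run([("push", 7), ("push", 5), ("add",)]))   # [12]
```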

41 Why Use a Stack Machine ? Each operation takes operands from the same place and puts results in the same place This means a uniform compilation scheme And therefore a simpler compiler

42 Why Use a Stack Machine ? Location of the operands is implicit –Always on the top of the stack No need to specify operands explicitly No need to specify the location of the result Instruction "add" as opposed to "add r1, r2" ⇒ Smaller encoding of instructions ⇒ More compact programs This is one reason why Java bytecodes use a stack evaluation model

43 Optimizing the Stack Machine The add instruction does 3 memory operations –Two reads and one write to the stack –The top of the stack is frequently accessed Idea: keep the top of the stack in a register (called the accumulator) –Register accesses are faster The "add" instruction is now acc ← acc + top_of_stack –Only one memory operation!
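The accumulator variant can be sketched the same way; note that add now touches the stack memory only once (a read of the top). This Python sketch is illustrative, with a hypothetical instruction encoding:

```python
def run_acc(program):
    # ("li", i): acc <- i;  ("push",): push acc;
    # ("add",): acc <- acc + top_of_stack;  ("pop",): discard top
    acc = 0
    stack = []
    for instr in program:
        if instr[0] == "li":
            acc = instr[1]
        elif instr[0] == "push":
            stack.append(acc)
        elif instr[0] == "add":
            acc = acc + stack[-1]   # single memory read; result stays in acc
        elif instr[0] == "pop":
            stack.pop()
    return acc

# 7 + 5 with the accumulator scheme of the next slides
print(run_acc([("li", 7), ("push",), ("li", 5), ("add",), ("pop",)]))   # 12
```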

44 Stack Machine with Accumulator Invariants The result of computing an expression is always in the accumulator For an operation op(e1,…,en) push the accumulator on the stack after computing each of e1,…,en-1 –After the operation pop n-1 values After computing an expression the stack is as before

45 Stack Machine with Accumulator. Example Compute 7 + 5 using an accumulator:
acc ← 7 (acc = 7)
push acc (stack: 7)
acc ← 5 (acc = 5, stack: 7)
acc ← acc + top_of_stack (acc = 12, stack: 7)
pop (acc = 12, stack restored)

46 A Bigger Example: 3 + (7 + 5)
Code                       Acc   Stack
acc ← 3                    3     (empty)
push acc                   3     3
acc ← 7                    7     3
push acc                   7     7, 3
acc ← 5                    5     7, 3
acc ← acc + top_of_stack   12    7, 3
pop                        12    3
acc ← acc + top_of_stack   15    3
pop                        15    (empty)

47 Notes It is very important that the stack is preserved across the evaluation of a sub-expression –Stack before the evaluation of 7 + 5 is 3 –Stack after the evaluation of 7 + 5 is 3 –The first operand is on top of the stack

48 From Stack Machines to MIPS The compiler generates code for a stack machine with accumulator We want to run the resulting code on the MIPS processor (or simulator) We simulate stack machine instructions using MIPS instructions and registers

49 Simulating a Stack Machine… The accumulator is kept in MIPS register $a0 The stack is kept in memory The stack grows towards lower addresses –Standard convention on the MIPS architecture The address of the next location on the stack is kept in MIPS register $sp –The top of the stack is at address $sp + 4

50 MIPS Assembly MIPS architecture –Prototypical Reduced Instruction Set Computer (RISC) architecture –Arithmetic operations use registers for operands and results –Must use load and store instructions to use operands and results in memory –32 general purpose registers (32 bits each) We will use $sp, $a0 and $t1 (a temporary register)

51 A Sample of MIPS Instructions
–lw reg1 offset(reg2): load 32-bit word from address reg2 + offset into reg1
–add reg1 reg2 reg3: reg1 ← reg2 + reg3
–sw reg1 offset(reg2): store 32-bit word in reg1 at address reg2 + offset
–addiu reg1 reg2 imm: reg1 ← reg2 + imm ("u" means overflow is not checked)
–li reg imm: reg ← imm

52 MIPS Assembly. Example. The stack-machine code for 7 + 5 in MIPS:
acc ← 7                    li $a0 7
push acc                   sw $a0 0($sp)
                           addiu $sp $sp -4
acc ← 5                    li $a0 5
acc ← acc + top_of_stack   lw $t1 4($sp)
                           add $a0 $a0 $t1
pop                        addiu $sp $sp 4
We now generalize this to a simple language…
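To check that this sequence really computes 7 + 5 and preserves $sp, here is a toy Python interpreter for just the five MIPS instructions used (an illustrative sketch; the starting $sp value 1000 is an arbitrary assumption):

```python
def run_mips(program):
    regs = {"$sp": 1000, "$a0": 0, "$t1": 0}
    mem = {}   # word-granular memory, keyed by byte address
    for instr in program:
        op, *args = instr.split()
        if op == "li":
            regs[args[0]] = int(args[1])
        elif op == "addiu":
            regs[args[0]] = regs[args[1]] + int(args[2])
        elif op == "add":
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "sw":
            off, base = args[1].rstrip(")").split("(")
            mem[regs[base] + int(off)] = regs[args[0]]
        elif op == "lw":
            off, base = args[1].rstrip(")").split("(")
            regs[args[0]] = mem[regs[base] + int(off)]
    return regs

final = run_mips([
    "li $a0 7",
    "sw $a0 0($sp)",
    "addiu $sp $sp -4",
    "li $a0 5",
    "lw $t1 4($sp)",
    "add $a0 $a0 $t1",
    "addiu $sp $sp 4",
])
print(final["$a0"], final["$sp"])   # 12 1000 -- result in $a0, $sp restored
```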

53 A Small Language A language with integers and integer operations
P → D; P | D
D → def id(ARGS) = E;
ARGS → id, ARGS | id
E → int | id | if E1 = E2 then E3 else E4 | E1 + E2 | E1 - E2 | id(E1,…,En)

54 A Small Language (Cont.) The first function definition f is the “main” routine Running the program on input i means computing f(i) Program for computing the Fibonacci numbers: def fib(x) = if x = 1 then 0 else if x = 2 then 1 else fib(x - 1) + fib(x – 2)
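As a sanity check on the program's behavior, the same function transcribed into Python (a direct transliteration of the slide's definition, not part of the lecture):

```python
def fib(x):
    # transcription of: if x = 1 then 0 else if x = 2 then 1
    #                   else fib(x - 1) + fib(x - 2)
    if x == 1:
        return 0
    elif x == 2:
        return 1
    else:
        return fib(x - 1) + fib(x - 2)

print([fib(i) for i in range(1, 8)])   # [0, 1, 1, 2, 3, 5, 8]
```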

55 Code Generation Strategy For each expression e we generate MIPS code that: –Computes the value of e in $a0 –Preserves $sp and the contents of the stack We define a code generation function cgen(e) whose result is the code generated for e

56 Code Generation for Constants The code to evaluate a constant simply copies it into the accumulator: cgen(i) = li $a0 i Note that this also preserves the stack, as required

57 Code Generation for Add
cgen(e1 + e2) =
  cgen(e1)
  sw $a0 0($sp)
  addiu $sp $sp -4
  cgen(e2)
  lw $t1 4($sp)
  add $a0 $t1 $a0
  addiu $sp $sp 4
Possible optimization: put the result of e1 directly in register $t1?
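The template can be rendered as a small recursive emitter. The Python sketch below is illustrative (representing expressions as int literals or ("+", e1, e2) tuples is an assumption, not the lecture's notation):

```python
def cgen(e):
    # returns the list of MIPS instructions that leave e's value in $a0
    # while preserving $sp and the stack contents
    if isinstance(e, int):
        return [f"li $a0 {e}"]
    op, e1, e2 = e
    assert op == "+"
    return (cgen(e1)
            + ["sw $a0 0($sp)", "addiu $sp $sp -4"]   # push e1's value
            + cgen(e2)
            + ["lw $t1 4($sp)",                       # reload e1's value
               "add $a0 $t1 $a0",
               "addiu $sp $sp 4"])                    # pop

for line in cgen(("+", 3, ("+", 7, 5))):
    print(line)
```

Because the recursion only ever appends complete push/compute/pop templates, the generated code for any subexpression preserves the stack, exactly as the strategy requires.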

58 Code Generation for Add. Wrong! Optimization: put the result of e1 directly in $t1?
cgen(e1 + e2) =
  cgen(e1)
  move $t1 $a0
  cgen(e2)
  add $a0 $t1 $a0
Try to generate code for 3 + (7 + 5): the inner addition overwrites $t1 before the outer add uses it.

59 Code Generation Notes The code for + is a template with "holes" for code for evaluating e1 and e2 Stack machine code generation is recursive Code for e1 + e2 consists of code for e1 and e2 glued together Code generation can be written as a recursive-descent of the AST –At least for expressions

60 Code Generation for Sub and Constants New instruction: sub reg1 reg2 reg3 –Implements reg1 ← reg2 - reg3
cgen(e1 - e2) =
  cgen(e1)
  sw $a0 0($sp)
  addiu $sp $sp -4
  cgen(e2)
  lw $t1 4($sp)
  sub $a0 $t1 $a0
  addiu $sp $sp 4

61 Code Generation for Conditional We need flow control instructions New instruction: beq reg1 reg2 label –Branch to label if reg1 = reg2 New instruction: b label –Unconditional jump to label

62 Code Generation for If (Cont.)
cgen(if e1 = e2 then e3 else e4) =
  cgen(e1)
  sw $a0 0($sp)
  addiu $sp $sp -4
  cgen(e2)
  lw $t1 4($sp)
  addiu $sp $sp 4
  beq $a0 $t1 true_branch
false_branch:
  cgen(e4)
  b end_if
true_branch:
  cgen(e3)
end_if:
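One practical detail the template leaves implicit: the labels must be fresh for each conditional, or nested ifs would collide. An illustrative Python sketch (the helper names and the list-of-lines representation are assumptions):

```python
label_count = 0

def new_label(base):
    # generate a fresh label so nested conditionals don't reuse names
    global label_count
    label_count += 1
    return f"{base}_{label_count}"

def cgen_if(code1, code2, code3, code4):
    # code1..code4: already-generated instruction lists for e1..e4
    t = new_label("true_branch")
    f = new_label("false_branch")
    end = new_label("end_if")
    return (code1
            + ["sw $a0 0($sp)", "addiu $sp $sp -4"]
            + code2
            + ["lw $t1 4($sp)", "addiu $sp $sp 4",
               f"beq $a0 $t1 {t}", f"{f}:"]
            + code4
            + [f"b {end}", f"{t}:"]
            + code3
            + [f"{end}:"])

for line in cgen_if(["li $a0 1"], ["li $a0 2"], ["li $a0 10"], ["li $a0 20"]):
    print(line)
```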

63 The Activation Record Code for function calls and function definitions depends on the layout of the activation record A very simple AR suffices for this language: –The result is always in the accumulator No need to store the result in the AR –The activation record holds actual parameters For f(x1,…,xn) push xn,…,x1 on the stack These are the only variables in this language

64 The Activation Record (Cont.) The stack discipline guarantees that on function exit $sp is the same as it was on function entry –No need for a control link We need the return address It’s handy to have a pointer to the current activation –This pointer lives in register $fp (frame pointer)

65 The Activation Record Summary: For this language, an AR with the caller's frame pointer, the actual parameters, and the return address suffices Picture: Consider a call to f(x,y). The AR will be: [Figure: AR of f, pushed in order — old fp, then y, then x; FP marks the base of the AR and SP its top]

66 Code Generation for Function Call The calling sequence is the instructions (of both caller and callee) to set up a function invocation New instruction: jal label –Jump to label, save address of next instruction in $ra –On other architectures the return address is stored on the stack by the “call” instruction

67 Code Generation for Function Call (Cont.)
cgen(f(e1,…,en)) =
  sw $fp 0($sp)
  addiu $sp $sp -4
  cgen(en)
  sw $a0 0($sp)
  addiu $sp $sp -4
  …
  cgen(e1)
  sw $a0 0($sp)
  addiu $sp $sp -4
  jal f_entry
The caller saves its value of the frame pointer Then it saves the actual parameters in reverse order The jal instruction saves the return address in register $ra The AR so far is 4*n+4 bytes long

68 Code Generation for Function Definition New instruction: jr reg –Jump to address in register reg
cgen(def f(x1,…,xn) = e) =
  move $fp $sp
  sw $ra 0($sp)
  addiu $sp $sp -4
  cgen(e)
  lw $ra 4($sp)
  addiu $sp $sp z     (z = 4*n + 8)
  lw $fp 0($sp)
  jr $ra
Note: The frame pointer points to the top, not the bottom, of the frame The callee pops the return address, the actual arguments and the saved value of the frame pointer
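The two byte counts in the calling sequence can be written down directly. A small sketch following the slides' 4-byte-word layout (the function names here are ours, not the lecture's):

```python
def caller_pushed_bytes(n):
    # caller pushes the saved frame pointer plus n actual parameters
    return 4 * n + 4

def callee_pop_bytes(n):
    # z in the template: return address + n arguments + saved frame pointer
    return 4 * n + 8

# for a call f(x, y): caller pushes 12 bytes, callee pops z = 16
print(caller_pushed_bytes(2), callee_pop_bytes(2))   # 12 16
```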

69 Calling Sequence. Example for f(x,y). [Figure: four snapshots of the stack. Before call: SP and FP as in the caller. On entry: the caller has pushed old fp, y, x and the callee has set FP. Before exit: the return address has also been saved, below the arguments. After call: SP and FP are back at their before-call positions]

70 Code Generation for Variables Variable references are the last construct The “variables” of a function are just its parameters –They are all in the AR –Pushed by the caller Problem: Because the stack grows when intermediate results are saved, the variables are not at a fixed offset from $sp

71 Code Generation for Variables (Cont.) Solution: use a frame pointer –Always points to the return address on the stack –Since it does not move it can be used to find the variables Let xi be the ith (i = 1,…,n) formal parameter of the function for which code is being generated cgen(xi) = lw $a0 z($fp) (z = 4*i)
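The offset computation is trivial but worth pinning down as a sketch (matching z = 4*i from the slide):

```python
def arg_offset(i):
    # offset of the i-th formal parameter (1-based) from $fp;
    # $fp points at the return address, arguments sit above it
    return 4 * i

# for def f(x, y): x is at fp + 4, y is at fp + 8
print(arg_offset(1), arg_offset(2))   # 4 8
```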

72 Code Generation for Variables (Cont.) Example: For a function def f(x,y) = e the activation and frame pointer are set up as follows: [Figure: from the top of the stack — SP, then the return address (where FP points), then x, then y, then the old fp] x is at fp + 4 y is at fp + 8

73 Summary The activation record must be designed together with the code generator Code generation can be done by recursive traversal of the AST Production compilers do different things –Emphasis is on keeping values (esp. current stack frame) in registers –Intermediate results are laid out in the AR, not pushed and popped from the stack

74 End of Lecture Next Lecture: Chapter 5 –Names –Bindings –Type Checking –Scopes