COMPILER DESIGN UNIT-I

A typical compilation process
Source program with macros -> Preprocessor -> Source program -> Compiler -> Target assembly program -> Assembler -> Relocatable machine code -> Linker -> Absolute machine code
(Try g++ with the -v, -E, and -S flags on linprog to see the individual stages.)

What is a compiler? A program that reads a program written in one language (the source language) and translates it into an equivalent program in another language (the target language). It has two jobs: understand the source program (and make sure it is correct), and rewrite the program in the target language. Traditionally the source language is a high-level language and the target language is a low-level one (machine code). Diagram: source program -> compiler -> target program, with error messages as a side output.

Compilation Phases and Passes
Compilation of a program proceeds through a fixed series of phases. Each phase uses an (intermediate) form of the program produced by an earlier phase, and subsequent phases operate on lower-level code representations. Each phase may consist of a number of passes over the program representation. Pascal, FORTRAN, and C were designed for one-pass compilation, which explains the need for function prototypes, as the sketch below illustrates.
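
To see why one-pass compilation forces forward declarations, consider this minimal C sketch (a hypothetical illustration, not from the slides):

int square(int x);            /* prototype: all a one-pass compiler needs */

int twice_square(int x) {
    return 2 * square(x);     /* the call is type-checked against the
                                 prototype, even though the definition
                                 has not been seen yet */
}

int square(int x) {           /* definition may appear later in the file */
    return x * x;
}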

Compiler Front- and Back-end
Front end (analysis):
Source program (character stream) -> Scanner (lexical analysis) -> tokens -> Parser (syntax analysis) -> parse tree -> Semantic Analysis and Intermediate Code Generation -> abstract syntax tree or other intermediate form
Back end (synthesis):
abstract syntax tree or other intermediate form -> Machine-Independent Code Improvement -> modified intermediate form -> Target Code Generation -> assembly or object code -> Machine-Specific Code Improvement -> modified assembly or object code

Scanner: Lexical Analysis
Lexical analysis breaks up a program into tokens: it groups characters into inseparable units (tokens), changing a stream of characters into a stream of tokens.

program gcd (input, output);
var i, j : integer;
begin
  read (i, j);
  while i <> j do
    if i > j then i := i - j else j := j - i;
  writeln (i)
end.

becomes the token stream:

program  gcd  (  input  ,  output  )  ;  var  i  ,  j  :  integer  ;  begin  read  (  i  ,  j  )  ;  while  i  <>  j  do  if  i  >  j  then  i  :=  i  -  j  else  j  :=  j  -  i  ;  writeln  (  i  )  end  .

Scanner: Lexical Analysis
What kinds of errors can be reported by the lexical analyzer? Consider: A = b + @3; (the character @ cannot begin any token).

Parser: Syntax Analysis
Checks whether the token stream meets the grammatical specification of the language and generates the syntax tree. A syntax error is produced by the compiler when the program does not meet that specification. For a grammatically correct program, this phase generates an internal representation that is easy to manipulate in later phases, typically a syntax tree (also called a parse tree). The grammar of a programming language is typically described by a context-free grammar, which also defines the structure of the parse tree.

Context-Free Grammars
A context-free grammar defines the syntax of a programming language. The syntax defines the syntactic categories for language constructs: statements, expressions, declarations. Categories are subdivided into more detailed categories; a statement, for example, is a for-statement, an if-statement, or an assignment:

<statement> ::= <for-statement> | <if-statement> | <assignment>
<for-statement> ::= for ( <expression> ; <expression> ; <expression> ) <statement>
<assignment> ::= <identifier> := <expression>

Example: Micro Pascal
<Program> ::= program <id> ( <id> <More_ids> ) ; <Block> .
<Block> ::= <Variables> begin <Stmt> <More_Stmts> end
<More_ids> ::= , <id> <More_ids> | Ɛ
<Variables> ::= var <id> <More_ids> : <Type> ; <More_Variables> | Ɛ
<More_Variables> ::= <id> <More_ids> : <Type> ; <More_Variables> | Ɛ
<Stmt> ::= <id> := <Exp>
         | if <Exp> then <Stmt> else <Stmt>
         | while <Exp> do <Stmt>
         | begin <Stmt> <More_Stmts> end
<Exp> ::= <num> | <id> | <Exp> + <Exp> | <Exp> - <Exp>

Parsing examples
Pos = init + / rate * 60  =>  id1 = id2 + / id3 * const  =>  syntax error (exp ::= exp + exp cannot be reduced).
Pos = init + rate * 60  =>  id1 = id2 + id3 * const  =>  parse tree:

        :=
       /  \
    id1    +
          / \
       id2   *
            / \
         id3   60

Semantic Analysis
Semantic analysis is applied by a compiler to discover the meaning of a program by analyzing its parse tree or abstract syntax tree. A program without grammatical errors is not always a correct program. Consider pos = init + rate * 60: what if pos is a class while init and rate are integers? This kind of error cannot be found by the parser. Semantic analysis finds this type of error and ensures that the program has a meaning.

Semantic Analysis
Static semantic checks (done by the compiler) are performed at compile time:
- Type checking
- Every variable is declared before it is used
- Identifiers are used in appropriate contexts
- Subroutine call arguments are checked
- Labels are checked
Dynamic semantic checks are performed at run time; the compiler produces code that performs these checks:
- Array subscript values are within bounds
- Arithmetic errors, e.g. division by zero
- Pointers are not dereferenced unless they point to a valid object
- A variable is not used before it has been initialized
When a check fails at run time, an exception is raised.

Semantic Analysis and Strong Typing
A language is strongly typed if (type) errors are always detected, either at compile time or at run time; examples of such errors are listed on the previous slide. Strongly typed languages include Ada, Java, ML, and Haskell; Fortran, Pascal, C/C++, and Lisp are not strongly typed. Strong typing makes a language safer and easier to use, but potentially slower because of dynamic semantic checks.

Code Generation and Intermediate Code Forms
A typical intermediate form of code produced by the semantic analyzer is an abstract syntax tree (AST). The AST is annotated with useful information, such as pointers to the symbol-table entries of identifiers. (The slide shows an example AST for the gcd program in Pascal.)

Code Generation and Intermediate Code Forms
Other intermediate code forms: intermediate code is something that is both close to the final machine code and easy to manipulate (for optimization). One example is three-address code: dst = op1 op op2. The three-address code for the assignment statement id1 = id2 + id3 * 60 is:

temp1 = 60
temp2 = id3 * temp1
temp3 = id2 + temp2
id1 = temp3

Machine-independent intermediate code improvement then reduces this to:

temp1 = id3 * 60.0
id1 = id2 + temp1
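
Inside a compiler, each three-address instruction might be stored as a quadruple; the struct below is a minimal sketch with invented field names, not a definitive representation:

#include <stdio.h>

/* A quadruple: operator, destination, and up to two operands. */
typedef struct {
    char op;            /* '+', '*', or '=' for a plain copy */
    const char *dst;    /* destination, e.g. "temp2" */
    const char *op1;    /* first operand */
    const char *op2;    /* second operand, NULL if unused */
} Quad;

int main(void) {
    /* the sequence from the slide above */
    Quad code[] = {
        { '=', "temp1", "60",    NULL    },
        { '*', "temp2", "id3",   "temp1" },
        { '+', "temp3", "id2",   "temp2" },
        { '=', "id1",   "temp3", NULL    },
    };
    for (int i = 0; i < 4; i++) {
        if (code[i].op == '=')
            printf("%s = %s\n", code[i].dst, code[i].op1);
        else
            printf("%s = %s %c %s\n", code[i].dst, code[i].op1,
                   code[i].op, code[i].op2);
    }
    return 0;
}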

Target Code Generation and Optimization
From the machine-independent form, assembly or object code is generated by the compiler:

MOVF id3, R2
MULF #60.0, R2
MOVF id2, R1
ADDF R2, R1
MOVF R1, id1

This machine-specific code is then optimized to exploit specific hardware features.

The role of the lexical analyzer
Source program -> Lexical Analyzer -> token -> Parser -> to semantic analysis. The parser requests tokens by calling getNextToken, and both components consult the symbol table.

Why separate lexical analysis and parsing?
- Simplicity of design
- Improved compiler efficiency
- Enhanced compiler portability

Tokens, Patterns and Lexemes
A token is a pair of a token name and an optional token value. A pattern is a description of the form that the lexemes of a token may take. A lexeme is a sequence of characters in the source program that matches the pattern for a token. A token might be represented as in the sketch below.
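
A minimal C sketch of the <token-name, attribute-value> pair (the enum values and union layout are assumptions for illustration):

typedef enum { TOK_ID, TOK_NUMBER, TOK_IF, TOK_RELOP } TokenName;

typedef struct {
    TokenName name;           /* the token name, e.g. TOK_ID */
    union {
        int    sym_index;     /* symbol-table entry for TOK_ID */
        double num_value;     /* numeric value for TOK_NUMBER */
        int    relop_kind;    /* which comparison for TOK_RELOP */
    } attr;                   /* the optional attribute value */
} Token;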

Example

Token       Informal description                 Sample lexemes
if          the characters i, f                  if
else        the characters e, l, s, e            else
comparison  < or > or <= or >= or == or !=       <=, !=
id          letter followed by letters/digits    pi, score, D2
number      any numeric constant                 3.14159, 0, 6.02e23
literal     anything but " surrounded by "       "core dumped"

For example, in printf("total = %d\n", score); the lexemes printf and score match the pattern for id, and "total = %d\n" matches the pattern for literal.

Attributes for tokens
E = M * C ** 2 is tokenized as:
<id, pointer to symbol-table entry for E>
<assign-op>
<id, pointer to symbol-table entry for M>
<mult-op>
<id, pointer to symbol-table entry for C>
<exp-op>
<number, integer value 2>

Lexical errors
Some errors are beyond the power of the lexical analyzer to recognize: fi (a == f(x)) … (is fi a misspelled keyword or an undeclared function name?). However, it may be able to recognize errors like d = 2r. Such errors are recognized when no pattern for tokens matches a character sequence.

Error recovery
- Panic mode: successive characters are ignored until we reach a well-formed token
- Delete one character from the remaining input
- Insert a missing character into the remaining input
- Replace a character by another character
- Transpose two adjacent characters
A sketch of panic-mode skipping follows.
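
A minimal sketch of the panic-mode idea in C (the set of characters that may begin a well-formed token is an assumption for illustration):

#include <ctype.h>

/* Skip characters until one that could begin a well-formed token. */
const char *panic_skip(const char *p) {
    while (*p != '\0' && !isalnum((unsigned char)*p) &&
           *p != '_' && *p != ';' && *p != '(' && *p != ')')
        p++;                  /* discard characters that match nothing */
    return p;                 /* resume normal scanning here */
}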

Input buffering
Sometimes the lexical analyzer needs to look ahead some symbols to decide which token to return. In C, we need to look beyond -, = or < to decide what token to return; in Fortran, consider DO 5 I = 1.25. We need to introduce a two-buffer scheme to handle large lookaheads safely:

E = M * C * * 2 eof

Sentinels
With sentinels, each buffer half ends in an eof marker:

E = M eof * C * * 2 eof ... eof

switch (*forward++) {
case eof:
    if (forward is at end of first buffer) {
        reload second buffer;
        forward = beginning of second buffer;
    } else if (forward is at end of second buffer) {
        reload first buffer;
        forward = beginning of first buffer;
    } else {
        /* eof within a buffer marks the end of input */
        terminate lexical analysis;
    }
    break;
/* cases for the other characters */
}

Specification of tokens
In the theory of compilation, regular expressions are used to formalize the specification of tokens. Regular expressions are a means of specifying regular languages. Example: letter_ (letter_ | digit)*. Each regular expression is a pattern specifying the form of strings.

Regular expressions
- Ɛ is a regular expression, L(Ɛ) = {Ɛ}
- If a is a symbol in ∑ then a is a regular expression, L(a) = {a}
- (r) | (s) is a regular expression denoting the language L(r) ∪ L(s)
- (r)(s) is a regular expression denoting the language L(r)L(s)
- (r)* is a regular expression denoting (L(r))*
- (r) is a regular expression denoting L(r)

Regular definitions
d1 -> r1
d2 -> r2
…
dn -> rn

Example:
letter_ -> A | B | … | Z | a | b | … | z | _
digit -> 0 | 1 | … | 9
id -> letter_ (letter_ | digit)*

Extensions
- One or more instances: (r)+
- Zero or one instance: r?
- Character classes: [abc]

Example:
letter_ -> [A-Za-z_]
digit -> [0-9]
id -> letter_ (letter_ | digit)*
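
The id pattern can be tried out directly with POSIX regular expressions; this is a quick experiment, not part of a real scanner:

#include <regex.h>
#include <stdio.h>

int main(void) {
    regex_t re;
    /* the id pattern: letter_ (letter_ | digit)* */
    regcomp(&re, "^[A-Za-z_][A-Za-z0-9_]*$", REG_EXTENDED);
    const char *samples[] = { "pi", "D2", "_tmp", "2bad" };
    for (int i = 0; i < 4; i++)
        printf("%-4s : %s\n", samples[i],
               regexec(&re, samples[i], 0, NULL, 0) == 0 ? "id" : "not an id");
    regfree(&re);
    return 0;
}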

Recognition of tokens
The starting point is the language grammar, to understand the tokens:
stmt -> if expr then stmt
      | if expr then stmt else stmt
      | Ɛ
expr -> term relop term
      | term
term -> id
      | number

Recognition of tokens (cont.)
The next step is to formalize the patterns:
digit -> [0-9]
digits -> digit+
number -> digits (. digits)? (E [+-]? digits)?
letter -> [A-Za-z_]
id -> letter (letter | digit)*
if -> if
then -> then
else -> else
relop -> < | > | <= | >= | = | <>
We also need to handle whitespace:
ws -> (blank | tab | newline)+

Transition diagrams Transition diagram for relop

Transition diagrams (cont.) Transition diagram for reserved words and identifiers
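
A common way to implement this diagram is to scan letter(letter|digit)* and then consult a keyword table; the sketch below makes that concrete (token codes and the keyword list are illustrative assumptions):

#include <ctype.h>
#include <string.h>

enum { TOK_NONE, TOK_IF, TOK_THEN, TOK_ELSE, TOK_ID };

/* Scan an identifier starting at *pp; advance *pp past it and return
   the token code, distinguishing reserved words from plain ids. */
int scan_word(const char **pp, char *lexeme, size_t cap) {
    const char *p = *pp;
    size_t n = 0;
    if (!isalpha((unsigned char)*p) && *p != '_')
        return TOK_NONE;                     /* not a word at all */
    while ((isalnum((unsigned char)*p) || *p == '_') && n + 1 < cap)
        lexeme[n++] = *p++;
    lexeme[n] = '\0';
    *pp = p;                                 /* the "retract" is implicit */
    if (strcmp(lexeme, "if") == 0)   return TOK_IF;
    if (strcmp(lexeme, "then") == 0) return TOK_THEN;
    if (strcmp(lexeme, "else") == 0) return TOK_ELSE;
    return TOK_ID;          /* ordinary identifier: install in symtab */
}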

Transition diagrams (cont.) Transition diagram for unsigned numbers

Transition diagrams (cont.) Transition diagram for whitespace

Architecture of a transition-diagram-based lexical analyzer

TOKEN getRelop() {
    TOKEN retToken = new(RELOP);
    while (1) { /* repeat character processing until a return or failure occurs */
        switch (state) {
        case 0:
            c = nextchar();
            if (c == '<') state = 1;
            else if (c == '=') state = 5;
            else if (c == '>') state = 6;
            else fail();    /* lexeme is not a relop */
            break;
        case 1: …
        …
        case 8:
            retract();
            retToken.attribute = GT;
            return retToken;
        }
    }
}

Lexical Analyzer Generator - Lex
Lex source program lex.l -> Lex compiler -> lex.yy.c
lex.yy.c -> C compiler -> a.out
Input stream -> a.out -> sequence of tokens

Structure of Lex programs

declarations
%%
translation rules    (each of the form: Pattern { Action })
%%
auxiliary functions

Example

%{
/* definitions of manifest constants
   LT, LE, EQ, NE, GT, GE, IF, THEN, ELSE, ID, NUMBER, RELOP */
%}

/* regular definitions */
delim   [ \t\n]
ws      {delim}+
letter  [A-Za-z]
digit   [0-9]
id      {letter}({letter}|{digit})*
number  {digit}+(\.{digit}+)?(E[+-]?{digit}+)?

%%

{ws}     {/* no action and no return */}
if       {return(IF);}
then     {return(THEN);}
else     {return(ELSE);}
{id}     {yylval = (int) installID(); return(ID);}
{number} {yylval = (int) installNum(); return(NUMBER);}
…

%%

int installID() {
    /* function to install the lexeme, whose first character is pointed
       to by yytext and whose length is yyleng, into the symbol table,
       and return a pointer thereto */
}

int installNum() {
    /* similar to installID, but puts numerical constants into a
       separate table */
}
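
A driver for the generated scanner might look like the sketch below; it assumes the usual lex conventions that yylex() returns 0 at end of input and that yytext holds the current lexeme:

#include <stdio.h>

extern int yylex(void);     /* generated by lex in lex.yy.c */
extern char *yytext;        /* current lexeme */

int main(void) {
    int tok;
    while ((tok = yylex()) != 0)          /* 0 signals end of input */
        printf("token %d, lexeme \"%s\"\n", tok, yytext);
    return 0;
}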

Finite Automata
Regular expressions = specification; finite automata = implementation. A finite automaton consists of:
- An input alphabet ∑
- A set of states S
- A start state n
- A set of accepting states F ⊆ S
- A set of transitions: state ->input state

Finite Automata
A transition s1 ->a s2 is read: in state s1 on input "a" go to state s2. At the end of input, if in an accepting state => accept, otherwise => reject. If no transition is possible => reject.

Finite Automata State Graphs
Notation: an arrow from outside marks the start state, a double circle marks an accepting state, and an edge labeled a is a transition.

A Simple Example
A finite automaton that accepts only "1". A finite automaton accepts a string if we can follow transitions labeled with the characters in the string from the start state to some accepting state.

Another Simple Example
A finite automaton accepting any number of 1's followed by a single 0. Alphabet: {0, 1}. Check that "1110" is accepted but "110…" (anything continuing past the 0) is not. A direct C encoding is sketched below.
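
A direct C encoding of this automaton, as a sketch (the function name is invented):

#include <stdio.h>

/* Accepts any number of 1's followed by a single 0, i.e. 1*0. */
int accepts(const char *s) {
    while (*s == '1')
        s++;                              /* loop on 1's in the start state */
    return s[0] == '0' && s[1] == '\0';   /* exactly one 0, then end */
}

int main(void) {
    printf("1110 -> %s\n", accepts("1110") ? "accept" : "reject");
    printf("1100 -> %s\n", accepts("1100") ? "accept" : "reject");
    return 0;
}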

And Another Example
Alphabet: {0, 1}. What language does the automaton in the figure recognize?

And Another Example
The alphabet is still {0, 1}. The operation of the automaton is not completely defined by the input: on input "11" the automaton could be in either state.

Epsilon Moves
Another kind of transition: Ɛ-moves. With A ->Ɛ B, the machine can move from state A to state B without reading input.

Deterministic and Nondeterministic Automata
Deterministic Finite Automata (DFA):
- One transition per input per state
- No Ɛ-moves
Nondeterministic Finite Automata (NFA):
- Can have multiple transitions for one input in a given state
- Can have Ɛ-moves
Finite automata have finite memory: they need only encode the current state.

Execution of Finite Automata
A DFA can take only one path through the state graph, completely determined by the input. NFAs can choose whether to make Ɛ-moves, and which of multiple transitions to take for a single input.

Acceptance of NFAs
An NFA can get into multiple states on the same input (the slide traces an example input through the NFA). Rule: an NFA accepts if it can get into a final state.

NFA vs. DFA (1)
NFAs and DFAs recognize the same set of languages (the regular languages). DFAs are easier to implement: there are no choices to consider.

NFA vs. DFA (2)
For a given language, the NFA can be simpler than the DFA; the DFA can be exponentially larger than the NFA.

Regular Expressions to Finite Automata
High-level sketch: Lexical Specification -> Regular expressions -> NFA -> DFA -> Table-driven implementation of DFA.

Regular Expressions to NFA (1)
For each kind of regular expression, define an NFA. Notation: "NFA for rexp A" is drawn as a box labeled A with one start state and one accepting state.
- For Ɛ: a single Ɛ-edge from the start state to the accepting state.
- For input a: a single edge labeled a from the start state to the accepting state.

Regular Expressions to NFA (2)
- For AB: connect the accepting state of A's NFA to the start state of B's NFA with an Ɛ-edge.
- For A | B: a new start state with Ɛ-edges into both A's and B's NFAs, and Ɛ-edges from their accepting states into a new accepting state.

Regular Expressions to NFA (3)
- For A*: a new start state with an Ɛ-edge into A's NFA and an Ɛ-edge bypassing it to the new accepting state; A's accepting state gets Ɛ-edges both back to A's start (to repeat) and to the new accepting state (to leave). The sketch after this slide puts these constructions into code.
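
The three constructions can be written as code; the sketch below follows the spirit of Thompson's construction, with invented names (State, Frag), and is only an illustration:

#include <stdlib.h>

enum { EPS = -1 };              /* label for an epsilon-edge */

typedef struct State {
    int c;                      /* symbol on the outgoing edges, or EPS */
    struct State *out1, *out2;  /* at most two successor states */
} State;

/* A fragment: an NFA with one start and one accepting state. */
typedef struct { State *start, *accept; } Frag;

static State *mkstate(int c, State *o1, State *o2) {
    State *s = malloc(sizeof *s);
    s->c = c; s->out1 = o1; s->out2 = o2;
    return s;
}

/* For input a: start --a--> accept */
Frag sym(int a) {
    State *acc = mkstate(EPS, NULL, NULL);
    return (Frag){ mkstate(a, acc, NULL), acc };
}

/* For AB: epsilon-edge from A's accepting state into B */
Frag cat(Frag a, Frag b) {
    a.accept->out1 = b.start;
    return (Frag){ a.start, b.accept };
}

/* For A|B: a new start branches into A and B; both feed a new accept */
Frag alt(Frag a, Frag b) {
    State *acc = mkstate(EPS, NULL, NULL);
    a.accept->out1 = acc;
    b.accept->out1 = acc;
    return (Frag){ mkstate(EPS, a.start, b.start), acc };
}

/* For A*: epsilon loop back into A, plus an epsilon bypass */
Frag star(Frag a) {
    State *acc = mkstate(EPS, NULL, NULL);
    a.accept->out1 = a.start;   /* repeat A */
    a.accept->out2 = acc;       /* or leave */
    return (Frag){ mkstate(EPS, a.start, acc), acc };
}

For example, the NFA of the next slide, (1 | 0)*1, would be built as cat(star(alt(sym('1'), sym('0'))), sym('1')).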

Example of RegExp -> NFA conversion
Consider the regular expression (1 | 0)*1. The NFA (states A through J in the figure) is built by nesting the constructions above: an alternation of 1 and 0, wrapped in a star, concatenated with a final 1-edge into the accepting state J.

Next
Where we are in the pipeline: Lexical Specification -> Regular expressions -> NFA -> DFA -> Table-driven implementation of DFA. The next step is NFA to DFA.

NFA to DFA. The Trick
Simulate the NFA. Each state of the resulting DFA is a non-empty subset of the states of the NFA. The start state is the set of NFA states reachable through Ɛ-moves from the NFA start state. Add a transition S ->a S' to the DFA iff S' is the set of NFA states reachable from the states in S after seeing the input a, considering Ɛ-moves as well. A sketch of the two key operations follows.
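
The Ɛ-closure and move operations can be sketched in C for a small NFA whose state sets fit in a machine word used as a bitset; the eps and delta tables are assumed to be filled in elsewhere:

#define NSTATES 10
unsigned eps[NSTATES];        /* eps[i]: bitset of eps-successors of state i */
unsigned delta[NSTATES][2];   /* delta[i][a]: bitset of successors of i on a */

/* eps-closure(S): add eps-successors until nothing changes. */
unsigned eps_closure(unsigned S) {
    unsigned prev;
    do {
        prev = S;
        for (int i = 0; i < NSTATES; i++)
            if (S & (1u << i))
                S |= eps[i];
    } while (S != prev);
    return S;
}

/* One DFA transition: move(S, a) followed by eps-closure. */
unsigned dfa_step(unsigned S, int a) {
    unsigned M = 0;
    for (int i = 0; i < NSTATES; i++)
        if (S & (1u << i))
            M |= delta[i][a];
    return eps_closure(M);
}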

NFA -> DFA Example
Applying the subset construction to the NFA for (1 | 0)*1 yields a DFA with three states: the start state {A,B,C,D,H,I}, the state {F,G,A,B,C,D,H,I} reached on 0, and the accepting state {E,J,G,A,B,C,D,H,I} reached on 1.

NFA to DFA. Remark
An NFA may be in many states at any time. How many different states? If there are N states, the NFA must be in some subset of those N states. How many non-empty subsets are there? 2^N - 1: finitely many, but exponentially many.

Implementation
A DFA can be implemented by a 2D table T: one dimension is states, the other is input symbols. For every transition Si ->a Sk, define T[i,a] = k. DFA "execution": if in state Si on input a, read T[i,a] = k and skip to state Sk. Very efficient.

Table Implementation of a DFA
For the DFA obtained above (start state S, states T and U, with U accepting):

        0   1
   S    T   U
   T    T   U
   U    T   U
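
The same table, executed in C (the state and input encodings are illustrative):

#include <stdio.h>

enum { S, T, U };                      /* U is the accepting state */
static const int trans[3][2] = {
    /*       '0' '1' */
    /* S */ { T,  U },
    /* T */ { T,  U },
    /* U */ { T,  U },
};

/* Run the DFA over a string of '0'/'1' characters. */
int run_dfa(const char *input) {
    int state = S;
    for (; *input; input++)
        state = trans[state][*input - '0'];   /* T[i,a] = k */
    return state == U;
}

int main(void) {
    printf("0101 -> %s\n", run_dfa("0101") ? "accept" : "reject");
    printf("0110 -> %s\n", run_dfa("0110") ? "accept" : "reject");
    return 0;
}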

Implementation (Cont.)
NFA -> DFA conversion is at the heart of tools such as flex or jflex. But DFAs can be huge, so in practice flex-like tools trade off speed for space in the choice of NFA and DFA representations.