Lexical and Syntax Analysis


Chapter 4: Lexical and Syntax Analysis

Outline
Introduction
Lexical Analysis
The Parsing Problem
Recursive-Descent Parsing
Bottom-Up Parsing

Introduction
Language implementation systems must analyze source code, regardless of the specific implementation approach.
Nearly all syntax analysis is based on a formal description of the syntax of the source language (BNF).

Introduction
The syntax analysis portion of a language processor nearly always consists of two parts:
A low-level part called a lexical analyzer (mathematically, a finite automaton based on a regular grammar)
A high-level part called a syntax analyzer, or parser (mathematically, a push-down automaton based on a context-free grammar, or BNF)
[Diagram: source program → lexical analysis → lexemes and tokens → syntax analysis → parse tree]

Introduction
Reasons to use BNF to describe syntax:
Provides a clear and concise syntax description
The parser can be based directly on the BNF
Parsers based on BNF are easy to maintain

Introduction
Reasons to separate lexical and syntax analysis:
Simplicity - less complex approaches can be used for lexical analysis; separating the two simplifies the parser
Efficiency - separation allows optimization of the lexical analyzer
Portability - parts of the lexical analyzer may not be portable, but the parser always is portable

Lexical Analysis
A lexical analyzer is a pattern matcher for character strings.
A lexical analyzer is a "front-end" for the parser.
It identifies substrings of the source program that belong together - lexemes.
Lexemes match a character pattern, which is associated with a lexical category called a token.
sum is a lexeme; its token may be IDENT.

Lexical Analysis
The process of recognizing lexemes relies on regular expressions.

Regular Expressions
A regular expression is one of the following:
A single character: a, b, c
The empty string, denoted by ε
Two regular expressions concatenated: ab
Two regular expressions separated by | (i.e., or): ab | bc
A regular expression followed by the Kleene star (concatenation of zero or more strings): (a|ab)*

Regular Expressions
Numeric literals in Pascal, such as the following, can be generated by a regular expression:
39    39.75    39.75e-14
The regular languages are a proper subset of the context-free languages.
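
Using only the operators defined above, one expression that generates such literals is the following sketch (digit abbreviates 0|1|...|9; real Pascal's rules differ in minor details):

  digit digit* ( . digit digit* | ε ) ( e ( + | - | ε ) digit digit* | ε )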

Regular Expressions
Scanning divides the program into "tokens", which are the smallest meaningful units.
This saves time, since character-by-character processing is slow; we can tune the scanner better if its job is simple.
It also saves complexity (lots of it) for later stages: you can design a parser to take characters instead of tokens as input, but it isn't pretty.
Scanning is recognition of a regular language, e.g., via a DFA.

Scanning
The scanner/lexical analyzer is responsible for:
tokenizing the source
removing comments (often)
dealing with pragmas (i.e., significant comments)
saving the text of identifiers, numbers, strings
saving source locations (file, line, column) for error messages

Scanning
Suppose we are building an ad-hoc (hand-written) scanner for Pascal:
We read the characters one at a time with look-ahead.
If it is one of the one-character tokens { ( ) [ ] < > , ; = + - etc. } we announce that token.
If it is a ., we look at the next character:
If that is another ., we announce .. (the subrange operator).
Otherwise, we announce . and reuse the look-ahead.

Scanning
If it is a <, we look at the next character:
if that is a =, we announce <=
otherwise, we announce < and reuse the look-ahead, etc.
If it is a letter, we keep reading letters and digits (and maybe underscores) until we can't anymore; then we check to see if it is a reserved word.

Scanning
If it is a digit, we keep reading until we find a non-digit:
if that is not a ., we announce an integer
otherwise, we keep looking for a real number
if the character after the . is not a digit, we announce an integer and reuse the . and the look-ahead
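
These fragments translate almost directly into C. A minimal sketch, assuming a one-character push-back buffer; all names here (get, unget, scan_one, the token printing) are invented for illustration, not from the slides:

  #include <ctype.h>
  #include <stdio.h>

  static int lookahead = -2;                  /* one character of look-ahead */

  static int get(void) {
      if (lookahead != -2) { int c = lookahead; lookahead = -2; return c; }
      return getchar();
  }
  static void unget(int c) { lookahead = c; } /* reuse the look-ahead later */

  void scan_one(void) {
      int c = get();
      if (c == '<') {                         /* '<' or '<=' ?              */
          int d = get();
          if (d == '=') printf("LE\n");
          else { printf("LT\n"); unget(d); }  /* announce '<', reuse d      */
      } else if (isalpha(c)) {                /* identifier / reserved word */
          char buf[64]; int n = 0;
          while (isalnum(c) || c == '_') {
              if (n < 63) buf[n++] = (char)c;
              c = get();
          }
          buf[n] = '\0';
          unget(c);                           /* we read one char too far   */
          printf("ID(%s)\n", buf);            /* reserved-word check goes here */
      }
  }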

Scanning Pictorial representation of a Pascal scanner as a finite automaton


Scanning
This is a deterministic finite automaton (DFA).
Lex, scangen, etc. build these things automatically from a set of regular expressions.
Specifically, they construct a machine that accepts the language: identifier | int const | real const | comment | symbol | ...

Scanning
We run the machine over and over to get one token after another.
Nearly universal rule: always take the longest possible token from the input:
thus foobar is foobar and never f or foo or foob
more to the point, 3.14159 is a real const and never 3, ., and 14159
Regular expressions "generate" a regular language; DFAs "recognize" it.

Lexical Analysis
The lexical analyzer is usually a function that is called by the parser when it needs the next token.
Three approaches to building a lexical analyzer:
1. Write a formal description of the tokens and use a software tool that constructs table-driven lexical analyzers given such a description
2. Design a state diagram that describes the tokens and write a program that implements the state diagram
3. Design a state diagram that describes the tokens and hand-construct a table-driven implementation of the state diagram
We only discuss approaches 2-3.

Lexical Analysis
State diagram design: a naive state diagram would have a transition from every state on every character in the source language - such a diagram would be very large!
(If the input alphabet were just the binary digits {0,1}, such a diagram would be fine, but not for a full character set!)

Lexical Analysis
In many cases, transitions can be combined to simplify the state diagram:
When recognizing an identifier, all uppercase and lowercase letters are equivalent - use a character class that includes all letters: [a-zA-Z]
When recognizing an integer literal, all digits are equivalent - use a digit class: [0-9]
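
Character classes are what keep a table-driven scanner's table small: the DFA is indexed by (state, class) rather than (state, raw character). A self-contained sketch with invented states and classes:

  #include <ctype.h>
  #include <stdio.h>

  enum cls   { C_LETTER, C_DIGIT, C_OTHER };
  enum state { S_START, S_IN_ID, S_IN_INT, S_DONE };

  static enum cls classify(int c) {
      if (isalpha(c)) return C_LETTER;
      if (isdigit(c)) return C_DIGIT;
      return C_OTHER;
  }

  /* next_state[state][class]: three columns instead of one per character */
  static const enum state next_state[3][3] = {
      /* S_START  */ { S_IN_ID, S_IN_INT, S_DONE },
      /* S_IN_ID  */ { S_IN_ID, S_IN_ID,  S_DONE },
      /* S_IN_INT */ { S_DONE,  S_IN_INT, S_DONE },  /* letter after digits ends the token */
  };

  int main(void) {
      const char *s = "abc123 42";
      enum state st = S_START;
      for (int i = 0; s[i]; i++) {
          st = next_state[st][classify((unsigned char)s[i])];
          if (st == S_DONE) st = S_START;   /* a real scanner emits the token here */
      }
      printf("done\n");
      return 0;
  }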

Lexical Analysis
Reserved words and identifiers can be recognized together (rather than having a part of the diagram for each reserved word):
Use a table lookup (as lex does) to determine whether a possible identifier is in fact a reserved word.
A hash table is normally used.

Lexical Analysis
Convenient utility subprograms:
getChar - gets the next character of input, puts it in nextChar, determines its class and puts the class in charClass
addChar - puts the character from nextChar into the place the lexeme is being accumulated, lexeme
lookup - determines whether the string in lexeme is a reserved word (returns a code)
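
One plausible C rendering of these utilities, consistent with the lex() code on the following slides; the globals, buffer sizes, and token codes are assumptions:

  #include <ctype.h>
  #include <stdio.h>
  #include <string.h>

  enum { LETTER, DIGIT, UNKNOWN, EOF_CLASS };      /* character classes */
  enum { IDENT = 10, INT_LIT = 11, FOR_CODE = 20 /* ... */ };

  int  charClass;
  char nextChar;
  char lexeme[100];
  int  lexLen = 0;

  void getChar(void) {               /* read one character and classify it */
      int c = getchar();
      if (c == EOF) { charClass = EOF_CLASS; return; }
      nextChar = (char)c;
      if (isalpha(c))      charClass = LETTER;
      else if (isdigit(c)) charClass = DIGIT;
      else                 charClass = UNKNOWN;
  }

  void addChar(void) {               /* append nextChar to lexeme */
      if (lexLen < 99) { lexeme[lexLen++] = nextChar; lexeme[lexLen] = '\0'; }
  }

  int lookup(const char *s) {        /* reserved word, or plain identifier? */
      if (strcmp(s, "for") == 0) return FOR_CODE;  /* hash table in practice */
      return IDENT;
  }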

State Diagram

Lexical Analysis
Implementation (assume initialization):

  ...
  int lex() {
    getChar();
    switch (charClass) {
      case LETTER:                  /* identifiers and reserved words */
        addChar();
        getChar();
        while (charClass == LETTER || charClass == DIGIT) {
          addChar();
          getChar();
        }
        return lookup(lexeme);
        break;
  ...

Lexical Analysis

  ...
      case DIGIT:                   /* integer literals */
        addChar();
        getChar();
        while (charClass == DIGIT) {
          addChar();
          getChar();
        }
        return INT_LIT;
        break;
    } /* End of switch */
  } /* End of function lex */
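
A hypothetical driver, showing how a caller pulls one token at a time (it assumes the elided cases of lex() return a negative code at end of input):

  int main(void) {
      int token;
      do {
          lexLen = 0;                          /* reset the lexeme buffer */
          token = lex();
          if (token >= 0)
              printf("lexeme: %-10s token: %d\n", lexeme, token);
      } while (token >= 0);
      return 0;
  }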

Lexical Analysis
Pictorial representation of a Pascal scanner as a finite automaton - what about adding those functions to it?

The Parsing Problem
Goals of the parser, given an input program:
Find all syntax errors; for each, produce an appropriate diagnostic message, and recover quickly
Produce the parse tree, or at least a trace of the parse tree, for the program

The Parsing Problem
Two categories of parsers:
Top down - produce the parse tree, beginning at the root; the order is that of a leftmost derivation
Bottom up - produce the parse tree, beginning at the leaves; the order is that of the reverse of a rightmost derivation
Parsers look only one token ahead in the input.

The Parsing Problem
Top-down Parsers
Given a sentential form xAα, the parser must choose the correct A-rule to get the next sentential form in the leftmost derivation, using only the first token produced by A.
For example, given A → b | c, the parser must choose between rewriting xAα as xbα or as xcα.
The most common top-down parsing algorithms:
Recursive descent - a coded implementation
LL parsers - a table-driven implementation
LL stands for 'Left-to-right, Leftmost derivation'.

The Parsing Problem
Bottom-up parsers
Given a right sentential form α, determine what substring of α is the right-hand side of the rule A → β in the grammar that must be reduced (to A) to produce the previous sentential form in the rightmost derivation.
The most common bottom-up parsing algorithms are in the LR family.
LR stands for 'Left-to-right, Rightmost derivation'.

The Parsing Problem
You commonly see LL or LR (or whatever) written with a number in parentheses after it, e.g., LL(1) or LR(1).
This number indicates how many tokens of look-ahead are required in order to parse.
Almost all real compilers use one token of look-ahead.

The Parsing Problem
The Complexity of Parsing
Parsers that work for any unambiguous grammar are complex and inefficient: O(n^3), where n is the length of the input.
Two well-known parsing algorithms handle any unambiguous grammar:
Earley's algorithm
Cocke-Younger-Kasami (CYK) algorithm
O(n^3) time is clearly unacceptable for a parser in a compiler - too slow.

The Parsing Problem The Complexity of Parsing Compilers use parsers that only work for a subset of all unambiguous grammars, but do it in linear time ( O(n), where n is the length of the input )

Recursive-Descent Parsing
Recursive Descent Process:
There is a subprogram for each nonterminal in the grammar, which can parse sentences that can be generated by that nonterminal.
EBNF is ideally suited for being the basis for a recursive-descent parser, because EBNF minimizes the number of nonterminals.

Recursive-Descent Parsing
A grammar for simple expressions:
<expr> → <term> {(+ | -) <term>}
<term> → <factor> {(* | /) <factor>}
<factor> → id | ( <expr> )

Recursive-Descent Parsing
Assume we have a lexical analyzer named lex, which puts the next token code in nextToken.
The coding process when there is only one RHS:
For each terminal symbol in the RHS, compare it with the next input token; if they match, continue, else there is an error
For each nonterminal symbol in the RHS, call its associated parsing subprogram

Recursive-Descent Parsing

  /* Function expr
     Parses strings in the language generated by the rule:
     <expr> → <term> {(+ | -) <term>}
     Assumes nextToken already holds the next token */
  void expr() {
    /* Parse the first term */
    term();
    ...

  /* As long as the next token is + or -, call lex to get the
     next token, and parse the next term */
    while (nextToken == PLUS_CODE || nextToken == MINUS_CODE) {
      lex();          /* get the next token */
      term();
    }
  }

This particular routine does not detect errors.
Convention: every parsing routine leaves the next token in nextToken.
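
The slides do not show term(); a sketch following exactly the same pattern (MULT_CODE and DIV_CODE are assumed token names in the style of PLUS_CODE):

  /* <term> → <factor> {(* | /) <factor>} */
  void factor();                    /* defined on the next slides */
  void term() {
      /* Parse the first factor */
      factor();
      /* As long as the next token is * or /, get the next token
         and parse the next factor */
      while (nextToken == MULT_CODE || nextToken == DIV_CODE) {
          lex();
          factor();
      }
  }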

Recursive-Descent Parsing
A nonterminal that has more than one RHS requires an initial process to determine which RHS it is to parse:
The correct RHS is chosen on the basis of the next token of input (the lookahead), nextToken.
The next token is compared with the first token that can be generated by each RHS until a match is found.
If no match is found, it is a syntax error.

Recursive-Descent Parsing

  /* Function factor
     Parses strings in the language generated by the rule:
     <factor> -> id | (<expr>)
     Assumes nextToken already holds the next token */
  void factor() {
    /* Determine which RHS */
    if (nextToken == ID_CODE)
      /* For the RHS id, just match it and get the next token */
      lex();

  /* If the RHS is (<expr>) - call lex to pass over the left
     parenthesis, call expr, and check for the right parenthesis */
    else if (nextToken == LEFT_PAREN_CODE) {
      lex();                 /* match '(' and get the next token */
      expr();                /* on return, nextToken again holds the next token */
      if (nextToken == RIGHT_PAREN_CODE)
        lex();               /* match ')' and get the next token */
      else
        error();             /* expected a right parenthesis */
    } /* End of else if (nextToken == ... */
    else error();            /* Neither RHS matches */
  }
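
A hypothetical top level for the whole parser (main and EOF_CODE are assumptions): prime nextToken, parse one expression, then insist that all input was consumed:

  int main() {
      lex();                       /* put the first token in nextToken */
      expr();                      /* parse a single expression */
      if (nextToken != EOF_CODE)   /* anything left over is an error */
          error();
      return 0;
  }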

LL Parser-Table driven
Example of an LL(1) grammar:
program → stmt_list $$$
stmt_list → stmt stmt_list | ε
stmt → id := expr | read id | write expr
expr → term term_tail
term_tail → add_op term term_tail | ε

LL Parser-Table driven
LL(1) grammar (continued):
term → factor fact_tail
fact_tail → mult_op factor fact_tail | ε
factor → ( expr ) | id | number
add_op → + | -
mult_op → * | /

LL Parser-Table driven
Example (average program):
read A
read B
sum := A + B
write sum
write sum / 2
We start at the top and predict needed productions on the basis of the current left-most non-terminal in the tree and the current input token.

LL Parser-Table driven Parse tree for the average program

LL Parser-Table driven
Table-driven LL parsing: you have a big loop in which you repeatedly look up an action in a two-dimensional table based on the current leftmost non-terminal and the current input token. The actions are:
(1) match a terminal
(2) predict a production
(3) announce a syntax error

LL Parser-Table driven
LL(1) parse table for the calculator language (each table entry is a CFG rule number).

LL Parser-Table driven
To keep track of the left-most non-terminal, push the as-yet-unseen portions of productions onto a stack.
The key thing to keep in mind is that the stack contains all the stuff you expect to see between now and the end of the program - what you predict you will see.
A sketch of the first few steps appears below.
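
For the average program, the first few steps would look like this (stack top at the left; a sketch worked out from the grammar above):

  Stack (top at left)        Input remaining       Action
  program                    read A read B ...     predict program → stmt_list $$$
  stmt_list $$$              read A read B ...     predict stmt_list → stmt stmt_list
  stmt stmt_list $$$         read A read B ...     predict stmt → read id
  read id stmt_list $$$      read A read B ...     match read
  id stmt_list $$$           A read B ...          match id
  stmt_list $$$              read B ...            predict stmt_list → stmt stmt_list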

The LL Grammar Class
The Left Recursion Problem:
If a grammar has left recursion, either direct or indirect, it cannot be the basis for a top-down parser.
A grammar can be modified to remove left recursion. For example, the left-recursive rule
A → A b | c
can be replaced by
A → c A'
A' → b A' | ε
[Figure: parse trees for the same sentence under the original and the rewritten grammar]

The LL Grammar Class
Example:
id_list → id | id_list , id
is equivalent to
id_list → id id_list_tail
id_list_tail → , id id_list_tail | ε
We can get rid of all left recursion mechanically in any grammar.

The LL Grammar Class
The other characteristic of grammars that disallows top-down parsing is the lack of pairwise disjointness:
The inability to determine the correct RHS on the basis of one token of lookahead
Def: FIRST(α) = {a | α =>* aβ} (if α =>* ε, then ε is in FIRST(α)), where =>* denotes zero or more derivation steps.

The LL Grammar Class
Pairwise Disjointness Test: for each nonterminal A in the grammar that has more than one RHS, and for each pair of rules A → αi and A → αj, it must be true that FIRST(αi) ∩ FIRST(αj) = ∅.
Examples:
A → a | bB | cAb passes the test: FIRST(a) = {a}, FIRST(bB) = {b}, and FIRST(cAb) = {c} are pairwise disjoint.
A → a | aB fails: both RHSs begin with a.
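
Pairwise disjointness is exactly what lets a recursive-descent routine pick an RHS by switching on one token. A hedged sketch for A → a | bB | cAb, in the style of the earlier routines (A_CODE, B_CODE, C_CODE, and B() are assumed names):

  void A() {
      switch (nextToken) {
      case A_CODE:                /* A → a */
          lex();
          break;
      case B_CODE:                /* A → bB */
          lex();
          B();
          break;
      case C_CODE:                /* A → cAb */
          lex();
          A();
          if (nextToken == B_CODE) lex();
          else error();
          break;
      default:
          error();                /* no RHS can begin with this token */
      }
  }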

The LL Grammar Class
Left factoring can resolve the problem. Replace
<variable> → identifier | identifier [<expression>]
with
<variable> → identifier <new>
<new> → ε | [<expression>]
or
<variable> → identifier [[<expression>]]
(the outer brackets are metasymbols of EBNF)

The LL Grammar Class
Other problems trying to make a grammar LL(1):
The "dangling else" problem prevents grammars from being LL(1) (or in fact LL(k) for any k).
The following natural grammar fragment is ambiguous (Pascal):
stmt → if cond then_clause else_clause | other_stuff
then_clause → then stmt
else_clause → else stmt | ε

The LL Grammar Class
The ambiguity can be removed with the grammar below, but the result can be parsed bottom-up, not top-down:
stmt → balanced_stmt | unbalanced_stmt
balanced_stmt → if cond then balanced_stmt else balanced_stmt | other_stuff
unbalanced_stmt → if cond then stmt | if cond then balanced_stmt else unbalanced_stmt

The LL Grammar Class
The usual approach is to use the ambiguous grammar together with a disambiguating rule that says else goes with the closest then - or, more generally, that the first of two possible productions is the one to predict (or reduce).

The LL Grammar Class
Better yet, languages (since Pascal) generally employ explicit end-markers, which eliminate this problem. In Modula-2, for example, one says:
if A = B then
  if C = D then E := F end
else G := H end
Ada says 'end if'; other languages say 'fi'.

The LL Grammar Class
One problem with end markers is that they tend to bunch up. In Pascal you say:
if A = B then …
else if A = C then …
else if A = D then …
else if A = E then …
else ...;
With end markers this becomes:
if A = B then …
else if A = C then …
else if A = D then …
else if A = E then …
else ...;
end; end; end; end;

Predictive Parsing
Model: the parsing table M is a two-dimensional array M[A,a], where A is a nonterminal and a is a terminal.
[Diagram: predictive parser model - input buffer, stack, parsing table M, output]

Predictive Parsing
Action of the parser (X is the top-of-stack symbol, a is the current look-ahead symbol):
If X = a = $, the parser halts and announces successful completion.
If X = a ≠ $, the parser pops X off the stack and advances the input pointer to the next input symbol.
If X is a nonterminal, look up entry M[X,a] in the table; the entry is either an X-production of the grammar or error.
If M[X,a] = {X → UVW}, the parser replaces X on top of the stack by WVU (with U on top). Assume that we output the rule.
If M[X,a] = error, the parser prints an error message and calls an error-recovery routine.
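
This loop is short enough to show in full. A self-contained sketch specialized to the toy grammar S → aS | b, so that the table M fits in a few lines; every name and encoding here is invented for illustration:

  #include <stdio.h>
  #include <stdlib.h>

  enum { END, A, B, NT_S };       /* $, terminals a and b, nonterminal S */

  static int stack[256], top = 0;
  static void push(int s) { stack[top++] = s; }

  /* M[X,a]: the predicted production's RHS, stored reversed and -1
     terminated; NULL plays the role of an error entry */
  static const int rhs_aS[] = { NT_S, A, -1 };   /* S → a S */
  static const int rhs_b[]  = { B, -1 };         /* S → b   */
  static const int *M(int X, int a) {
      if (X == NT_S && a == A) return rhs_aS;
      if (X == NT_S && a == B) return rhs_b;
      return NULL;
  }

  static void parse(const int *input) {
      int a = *input++;
      push(END);                  /* $ marks the bottom of the stack */
      push(NT_S);                 /* start symbol */
      for (;;) {
          int X = stack[--top];   /* pop the top-of-stack symbol */
          if (X == END && a == END) { puts("accepted"); return; }  /* X = a = $ */
          if (X < NT_S) {         /* X is a terminal (or $) */
              if (X == a) a = *input++;   /* match: advance the input */
              else break;
          } else {                /* X is a nonterminal: consult M[X,a] */
              const int *r = M(X, a);
              if (!r) break;
              for (int i = 0; r[i] >= 0; i++) push(r[i]);  /* reversed RHS */
          }
      }
      puts("syntax error");
      exit(1);
  }

  int main(void) {
      const int input[] = { A, A, B, END };   /* the string a a b $ */
      parse(input);
      return 0;
  }

Note that each RHS is pushed reversed, so its first symbol ends up on top of the stack, as the algorithm above requires.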

Predictive Parsing
Example: consider the grammar
E → E+T | T
T → T*F | F
F → (E) | id
After eliminating left recursion:
E → TE'
E' → +TE' | ε
T → FT'
T' → *FT' | ε
F → (E) | id

Predictive Parsing
Parsing Table M
[Figure: LL(1) parsing table; visible entries include M[E,id] = E → TE', M[T,id] = T → FT', M[F,id] = F → id, and M[F,(] = F → (E)]

[Figure: top-down parse of the input id + id*id $ - the parse tree rooted at E, with a match action on each terminal id, +, id, *, id]

Predictive Parsing
Consider the input: id + id*id
Reading the table, the parser predicts, in order:
E → TE', T → FT', F → id, T' → ε, E' → +TE', T → FT', F → id, T' → *FT', F → id, T' → ε, E' → ε

Bottom-up Parsing The parsing problem is finding the correct RHS in a right-sentential form to reduce to get the previous right-sentential form in the derivation

Bottom-up Parsing Example E E+T | T T T*F | F F (E) | id =>E+T*F =>E+T*id =>E+F*id =>E+id*id =>T+id*id =>F+id*id =>id+id*id Rightmost derivation

Bottom-up Parsing Example E E+T | T T T*F | F F (E) | id Consider at step E + T*id. There are three possibilities. E+T from E E+T T from E T id from F  id But the right one is …. F  id How do we know?… E => E+T =>E+T*F =>E+T*id =>E+F*id =>E+id*id =>T+id*id =>F+id*id =>id+id*id Bottom-up parser starts with the source and work up!!

Bottom-up Parsing
Intuition about handles:
Def: β is the handle of the right sentential form γ = αβw if and only if S =>*rm αAw =>rm αβw
Def: β is a phrase of the right sentential form γ if and only if S =>* γ = α1Aα2 =>+ α1βα2
Def: β is a simple phrase of the right sentential form γ if and only if S =>* γ = α1Aα2 => α1βα2
(=>* denotes zero or more derivation steps, =>+ one or more, and => exactly one, reading from the root toward the leaves)
The task of a bottom-up parser is to find the unique handle of each right sentential form.

Bottom-up Parsing
[Figure: parse tree for the right sentential form E + T * id; its three internal nodes are E (the root), T, and F]
The three internal nodes correspond to three phrases; each internal node is the root of a subtree.
The root E generates the entire string E+T*id, T generates the leaves T*id, and F generates id.
So the phrases of the sentence E+T*id are E+T*id, T*id, and id.
Note that phrases are not necessarily RHSs in the grammar.

Bottom-up Parsing
Simple phrases are a subset of the phrases. Here the only simple phrase is id.

Bottom-up Parsing
Intuition about handles:
The handle of a right sentential form is its leftmost simple phrase.
Given a parse tree, it is now easy to find the handle; reducing the handle to its LHS rewrites the sentence into the previous sentential form.
The handle can then be pruned from the parse tree and the process repeated.
Parsing can be thought of as handle pruning.

Bottom-up Parsing
Shift-Reduce Algorithms:
Reduce is the action of replacing the handle on the top of the parse stack with its corresponding LHS.
Shift is the action of moving the next token to the top of the parse stack.

Bottom-up Parsing
Advantages of LR parsers:
They will work for nearly all grammars that describe programming languages.
They work on a larger class of grammars than other bottom-up algorithms, but are as efficient as any other bottom-up parser.
They can detect syntax errors as soon as it is possible to do so.
The LR class of grammars is a superset of the class parsable by LL parsers.

Bottom-up Parsing
LR parsers must be constructed with a tool.
Knuth's insight: a bottom-up parser could use the entire history of the parse, up to the current point, to make parsing decisions.
There are only a finite and relatively small number of different parse situations that could have occurred, so the history can be stored in a parser state, on the parse stack.

Bottom-up Parsing
An LR configuration stores the state of an LR parser:
(S0 X1 S1 X2 S2 … Xm Sm, ai ai+1 … an $)

Bottom-up Parsing
LR parsers are table driven, where the table has two components, an ACTION table and a GOTO table:
The ACTION table specifies the action of the parser, given the parser state and the next token; rows are state names, columns are terminals.
The GOTO table specifies which state to put on top of the parse stack after a reduction action is done; rows are state names, columns are nonterminals.

Structure of An LR Parser

Bottom-up Parsing
Initial configuration: (S0, a1 … an $)
Parser actions:
If ACTION[Sm, ai] = Shift S, the next configuration is:
(S0 X1 S1 X2 S2 … Xm Sm ai S, ai+1 … an $)
If ACTION[Sm, ai] = Reduce A → β and S = GOTO[Sm-r, A], where r is the length of β, the next configuration is:
(S0 X1 S1 X2 S2 … Xm-r Sm-r A S, ai ai+1 … an $)

Bottom-up Parsing
Parser actions (continued):
If ACTION[Sm, ai] = Accept, the parse is complete and no errors were found.
If ACTION[Sm, ai] = Error, the parser calls an error-handling routine.
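
Putting the four actions together: a self-contained sketch of the driver loop, with a hand-built SLR(1) table for the toy grammar S → (S) | x. The encoding is invented for illustration; real tables come from a parser generator (see the yacc note at the end). For brevity the stack holds only states, which is sufficient; the symbols Xi in the configurations above are implied by the states:

  #include <stdio.h>

  enum tok  { LP, RP, X, END };             /* ( ) x $ */
  enum kind { ERR, SH, RE, ACC };
  struct act { enum kind k; int arg; };     /* SH: next state; RE: production # */

  /* productions: 1: S → ( S )  (RHS length 3),  2: S → x  (length 1) */
  static const int prod_len[] = { 0, 3, 1 };

  static const struct act ACTION[6][4] = {  /* ACTION[state][token] */
      /*          (        )        x        $      */
      /* 0 */ { {SH,2}, {ERR,0}, {SH,3}, {ERR,0} },
      /* 1 */ { {ERR,0},{ERR,0}, {ERR,0},{ACC,0} },
      /* 2 */ { {SH,2}, {ERR,0}, {SH,3}, {ERR,0} },
      /* 3 */ { {ERR,0},{RE,2},  {ERR,0},{RE,2}  },
      /* 4 */ { {ERR,0},{SH,5},  {ERR,0},{ERR,0} },
      /* 5 */ { {ERR,0},{RE,1},  {ERR,0},{RE,1}  },
  };
  static const int GOTO_S[6] = { 1, -1, 4, -1, -1, -1 };  /* GOTO[state] on S */

  int main(void) {
      const enum tok input[] = { LP, LP, X, RP, RP, END }; /* ( ( x ) ) $ */
      int st[64], top = 0, i = 0;
      st[0] = 0;                            /* initial state S0 */
      for (;;) {
          struct act a = ACTION[st[top]][input[i]];
          switch (a.k) {
          case SH:                          /* shift: push state, advance input */
              st[++top] = a.arg; i++;
              break;
          case RE:                          /* reduce: pop |RHS| states ...   */
              top -= prod_len[a.arg];
              st[top + 1] = GOTO_S[st[top]];/* ... then GOTO on the LHS S     */
              top++;
              break;
          case ACC:
              puts("accepted");
              return 0;
          default:
              puts("syntax error");
              return 1;
          }
      }
  }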

LR Parsing
Consider the grammar:
E → E+T
E → T
T → T*F
T → F
F → (E)
F → id

LR Parsing Table

E E+T E  T T T*F T  F F (E) F  id 5 id 6 + 3 2 1 5 E T id F Parse: id+id*id $

LR Parsing E E+T E  T T T*F T  F F (E) F  id

Bottom-up Parsing A parser table can also be generated from a given grammar with a tool, e.g., yacc