
Lexical and Syntax Analysis. Presented by Prof. Dr. Mostafa Abdel Aziem Mostafa.




1 Lexical and Syntax Analysis. Presented by Prof. Dr. Mostafa Abdel Aziem Mostafa

2 1-2 Chapter 4 Topics Introduction Lexical Analysis The Parsing Problem Recursive-Descent Parsing Bottom-Up Parsing

3 1-3 Why study lexical and syntax analyzers? Shows application of grammars discussed in chapter 3 Lexical and syntax analysis not just used in compiler design:  program formatters  programs that compute complexity  programs that analyze and react to configuration files Good to have some background, since compiler design not required

4 1-4 Introduction Language implementation systems must analyze source code, regardless of the specific implementation approach (compiled, interpreted, hybrid) Nearly all syntax analysis is based on a formal description of the syntax of the source language (context- free grammars or BNF) The parser can be based directly on the BNF Parsers based on BNF are easy to maintain (modular)

5 1-5 Syntax Analysis The syntax analysis portion of a language processor nearly always consists of two parts:  A low-level part called a lexical analyzer (mathematically, a finite automaton based on a regular grammar). Also called a scanner.  A high-level part called a syntax analyzer, or parser (mathematically, a push-down automaton based on a context-free grammar, or BNF)

6 1-6 Reasons to Separate Lexical and Syntax Analysis Simplicity - less complex approaches can be used for lexical analysis; separating them simplifies the parser Efficiency - separation allows optimization of the lexical analyzer - lex is fast! Portability - parts of the lexical analyzer may not be portable, but the parser always is portable. Why? Because the lexical analyzer reads the input program file and often includes platform-dependent buffering of that input.

7 1-7 Lexical Analysis A lexical analyzer is a pattern matcher for character strings A lexical analyzer is a "front-end" for the parser. It views the source as a single string of characters and identifies substrings of the source program that belong together - lexemes  Lexemes match a character pattern, which is associated with a lexical category called a token, normally represented as an enum/int - sum is a lexeme; its token may be IDENT

8 1-8 Lexical Analysis (review) result = oldsum - value / 100;

Lexeme    Token
result    IDENT
=         ASSIGN_OP
oldsum    IDENT
-         SUBTRACT_OP
value     IDENT
/         DIVISION_OP
100       INT_LIT
;         SEMICOLON

9 1-9 Lexical Analysis (continued) The lexical analyzer is usually a function that is called by the parser when it needs the next token (rather than processing the entire file at one time) Skips comments and whitespace May insert lexemes for user-defined names into the symbol table Detects syntax errors in tokens, such as ill-formed floating point literals Example: result = oldsum - value / 100 ; Scanner output: result/ident, =/assign_op, oldsum/ident, -/subtract_op, value/ident, ...

10 1-10 Lexical Analysis (continued) Three approaches to building a lexical analyzer: Write a formal description of the tokens based on regular expressions and use a software tool that constructs a lexical analyzer given such a description (e.g., lex, flex) Design a state diagram that describes the tokens and write a program that implements the state diagram Design a state diagram that describes the tokens and hand-construct a table-driven implementation of the state diagram
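The first approach above can be sketched with ordinary regular expressions. The following is a minimal sketch in Python, with the re module standing in for a generator tool like lex or flex; the token names and patterns are illustrative assumptions, not output of a real tool.

```python
import re

# Hedged sketch of approach 1: tokens described as regular expressions,
# fed to a generic matcher. Token names (IDENT, INT_LIT, ...) follow the
# slides' conventions; the patterns themselves are assumptions.
TOKEN_SPEC = [
    ("IDENT",     r"[A-Za-z][A-Za-z0-9]*"),
    ("INT_LIT",   r"[0-9]+"),
    ("ASSIGN_OP", r"="),
    ("ADD_OP",    r"\+"),
    ("SUB_OP",    r"-"),
    ("MULT_OP",   r"\*"),
    ("DIV_OP",    r"/"),
    ("LP",        r"\("),
    ("RP",        r"\)"),
    ("SKIP",      r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(source):
    """Return (token, lexeme) pairs, skipping whitespace."""
    tokens = []
    for m in MASTER.finditer(source):
        if m.lastgroup != "SKIP":
            tokens.append((m.lastgroup, m.group()))
    return tokens
```

Note that the alternation order matters: IDENT is tried before INT_LIT, so a digit-leading string falls through to the integer pattern.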

11 1-11 Lexical Analysis (continued) Six steps in building a lexical analyzer from automata: Define tokens, lexemes, and patterns. Define regular expressions for patterns that have a regular expression. Define regular definitions. Draw transition diagrams. Draw non-deterministic finite automata (NFAs). Transform NFAs to DFAs.

12 1-12 State Diagram to recognize names, reserved words and integer literals – regular languages A naïve state diagram would have a transition from every state on every character in the source language - such a diagram would be very large! Use character class LETTER for all 52 letters, allows single transition from Start Instead of using state diagram for reserved words, use lookup table Use DIGIT to simplify transition for numbers Note that after 1 st character, LETTER or DIGIT is allowed in names

13 1-13 Lexical Analysis (cont.) Implementation (assume initialization): trace (sum+47)/total

int lex() {
  getChar();  /* puts char in nextChar, type in charClass; skips white space */
  switch (charClass) {
    case LETTER:  /* notice correspondence to state transition */
      addChar();  /* appends nextChar onto lexeme */
      getChar();
      while (charClass == LETTER || charClass == DIGIT) {
        addChar();
        getChar();
      }
      nextToken = IDENT;
      break;
...

14 1-14 Lexical Analysis (cont.)

    case DIGIT:
      addChar();
      getChar();
      while (charClass == DIGIT) {
        addChar();
        getChar();
      }
      nextToken = INT_LIT;
      break;

so this corresponds to the integer-literal path in the state diagram. How would 123abc be interpreted?

15 1-15 Lexical Analysis (cont.)

    case UNKNOWN:
      lookup(nextChar);  /* single-character token: look up its code */
      getChar();
      break;
  } /* end of switch */
  printf("Next token is: %d, next lexeme is %s\n", nextToken, lexeme);
  return nextToken;
} /* end of function lex */

so this corresponds to the single-character operator path. How would (sum+47)/total be interpreted?
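The lex() routine above can be transcribed into a runnable sketch. The following Python version is an assumption-laden rendering: getChar/addChar become an index and string concatenation, the character classes mirror the slides' LETTER/DIGIT/UNKNOWN, and the operator lookup table is invented for illustration.

```python
# Hedged Python transcription of the slides' lex() state machine.
# The driver loop and the operator table are assumptions added to make it
# runnable; the LETTER/DIGIT/UNKNOWN classification follows the slides.
LETTER, DIGIT, UNKNOWN = "LETTER", "DIGIT", "UNKNOWN"

def char_class(ch):
    if ch.isalpha():
        return LETTER
    if ch.isdigit():
        return DIGIT
    return UNKNOWN

def lex_all(source):
    """Run the state machine over the whole input, returning (token, lexeme)
    pairs: LETTER starts an IDENT, DIGIT starts an INT_LIT, anything else
    is looked up as a one-character operator."""
    ops = {"+": "ADD_OP", "-": "SUB_OP", "*": "MULT_OP", "/": "DIV_OP",
           "(": "LEFT_PAREN", ")": "RIGHT_PAREN"}
    i, tokens = 0, []
    while i < len(source):
        if source[i].isspace():          # skip white space
            i += 1
            continue
        lexeme = ""
        if char_class(source[i]) == LETTER:   # names: LETTER (LETTER|DIGIT)*
            while i < len(source) and char_class(source[i]) in (LETTER, DIGIT):
                lexeme += source[i]
                i += 1
            tokens.append(("IDENT", lexeme))
        elif char_class(source[i]) == DIGIT:  # integer literals: DIGIT+
            while i < len(source) and char_class(source[i]) == DIGIT:
                lexeme += source[i]
                i += 1
            tokens.append(("INT_LIT", lexeme))
        else:                                 # case UNKNOWN: single-char lookup
            tokens.append((ops.get(source[i], "ERROR"), source[i]))
            i += 1
    return tokens
```

Running it on 123abc answers the slide's question: the machine emits INT_LIT 123 and then IDENT abc, because the DIGIT loop stops at the first non-digit.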

16 1-16 The Parsing Problem (syntax analysis) Goals of the parser, given an input program:  Find all syntax errors; for each, produce an appropriate diagnostic message, and recover.  Produce the parse tree, or at least a trace of the parse tree, for the program. The parse tree is used as the basis for translation.  Parse tree contains all the information needed by a language processor

17 1-17 The Parsing Problem (cont.) Two categories of parsers  Top down - produce the parse tree, beginning at the root  Order is that of a leftmost derivation  Traces or builds the parse tree in preorder  Bottom up - produce the parse tree, beginning at the leaves  Order is that of the reverse of a rightmost derivation Parsers look only one token ahead in the input

18 1-18 The Complexity of Parsing Parsers that work for any unambiguous grammar are complex and inefficient ( O(n^3), where n is the length of the input ). Such algorithms must frequently back up and reparse, which requires more maintenance of the parse tree, and they are too slow to be used in practice. Compilers use parsers that only work for a subset of all unambiguous grammars, but do it in linear time ( O(n), where n is the length of the input )

19 1-19 2- Parser (syntax analyzer) Goals of the parser, given an input program:  Find all syntax errors; for each, produce an appropriate diagnostic message, and recover.  Produce the parse tree, or at least a trace of the parse tree, for the program. The parse tree is used as the basis for translation.  Parse tree contains all the information needed by a language processor What does that mean? What is the parse tree?

20 Parser Why?

21 Parse Tree

22

23 Derivation cont….

24 Parsers categories Two categories of parsers  1-Top down - produce the parse tree, beginning at the root  In term of Derivation: Order is that of a leftmost derivation  In term of Tree: Traces or builds the parse tree in preorder  2-Bottom up - produce the parse tree, beginning at the leaves  Order is that of the reverse of a rightmost derivation

25 1-Top-Down Parsing Like the grammar in slide 21? What is exponential time? We will see later in this section.

26 1.1- Top-Down Parsing, Backtracking Parsers Parsing xAα. Assume the three A-rules are: A -> bB | cBb | a. The next sentential form could be xbBα, xcBbα, or xaα. The parser must choose the correct rule for nonterminal A to get the next sentential form in the leftmost derivation, using only the first token produced by A. Compare the next token of input with the first symbol that can be generated by each rule. Not always easy - it could be a nonterminal. May need to backtrack. What does that mean? See the next example. How?

27 Backtrack Example Parsing ba. Assume the rules are (start symbol is A): A -> bE | Ba   B -> b   E -> e

28 1-28 1.2- Predictive Parsers Types of implementation of top-down parsers. The most common top-down parsing algorithms:  1.2.1. Recursive descent - a coded implementation (simply using recursion)  1.2.2. LL parsers - a table-driven implementation; L = left-to-right scan, L = leftmost derivation (LL); uses a stack and a parse table. ANTLR is an LL(*) recursive-descent parser generator; LL(*) means it is not restricted to a finite k tokens of lookahead

29 1-29 1.2.1-Recursive-Descent Parser Hand-coded solution - general approach Write a subprogram for each nonterminal (e.g., <expr>) Subprograms may be recursive (e.g., for nested structures). A recursive-descent parser is so named because it consists of a collection of subprograms, many of which are recursive. Produces a parse tree in top-down order EBNF is ideally suited to be the basis for a recursive-descent parser, because EBNF minimizes the number of nonterminals (i.e., fewer subprograms)

30 Remember from 3.3.2 Example 3.5 The BNF had the problem that options and repetitions could not be directly expressed. BNF syntax can only represent a rule in one line, whereas in EBNF a terminating character, the semicolon, marks the end of a rule. Furthermore, EBNF includes mechanisms for enhancements, defining the number of repetitions, excluding alternatives, comments, etc. Note: In EBNF, additional metacharacters { } for a series of zero or more ( ) for a list, must pick one [ ] for an optional list; pick none or one Translates to while loop in recursive descent

31 1-31 Recursive-Descent Parsing (cont.) A grammar for simple expressions:
<expr> -> <term> {(+ | -) <term>}
<term> -> <factor> {(* | /) <factor>}
<factor> -> id | ( <expr> )
A grammar for selection and declaration:
<if_stmt> -> if ( <logic_expr> ) <stmt> [else <stmt>]
<ident_list> -> ident {, ident}
The grammar does not enforce associativity rules; the compiler must

32 1-32 Recursive-Descent Parsing (cont.) Assume we have a lexical analyzer function named lex, which puts the next token code in nextToken Simple case: only one RHS for a non-terminal  For each terminal symbol in the RHS (e.g., if), compare it with the next input token; if they match, continue, else there is an error  For each nonterminal symbol in the RHS (e.g., <expr>), call its associated parsing subprogram

33 1-33 Recursive-Descent Parsing (cont.) More complex: Nonterminal has more than one RHS. Requires an initial process to determine which RHS it is to parse.  The correct RHS is chosen on the basis of the next token of input (the lookahead)  The next token is compared with the first token that can be generated by each RHS until a match is found  If no match is found, it is a syntax error

34 1-34 Recursive-Descent Parsing (cont.) This particular routine does not detect errors. Convention: every parsing routine leaves the next token in nextToken.

void expr() {  /* <expr> -> <term> {(+ | -) <term>} */
  term();
  while (nextToken == ADD_OP || nextToken == SUB_OP) {
    lex();
    term();
  }
}

void term() {  /* <term> -> <factor> {(* | /) <factor>} */
  printf("Enter <term>\n");
  factor();
  while (nextToken == MULT_OP || nextToken == DIV_OP) {
    lex();
    factor();
  }
  printf("Exit <term>\n");
}

void factor() {  /* <factor> -> id | int | ( <expr> ) */
  if (nextToken == ID_CODE || nextToken == INT_CODE)
    lex();
  else if (nextToken == LP_CODE) {
    lex();
    expr();
    if (nextToken == RP_CODE)
      lex();
    else
      error();
  }
  else
    error();
}

35 1-35 Recursive-Descent Parsing (cont.) Trace the code for a + b
Call lex // sets nextToken to a, ID_CODE
Enter <term>
Enter <factor> // a is a factor
Call lex // sets nextToken to +, PLUS_CODE
Exit <factor>
Exit <term> // only 1 factor, no * or /
+ matches PLUS_CODE // while loop
Call lex // sets nextToken to b, ID_CODE
Enter <term>
Enter <factor> // b is a factor
Call lex // sets nextToken to end-of-input
Exit <factor>
Exit <term>
Exit <expr> // end while

36 Recursive Descent Quick Exercise Do Recursive Descent exercises : Trace the expression (sum+47)/total
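A hedged, runnable rendering of the expr/term/factor scheme may help with the exercise. Tokens are supplied as a pre-lexed list; the Parser class, ParseError, and the parses() helper are illustrative names, not from the slides.

```python
# Hedged Python version of the slides' expr/term/factor routines for:
#   <expr>   -> <term> {(+ | -) <term>}
#   <term>   -> <factor> {(* | /) <factor>}
#   <factor> -> id | int | ( <expr> )
class ParseError(Exception):
    pass

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens + ["$"]     # "$" marks end of input
        self.pos = 0
        self.nextToken = self.tokens[0]

    def lex(self):                       # advance to the next token
        self.pos += 1
        self.nextToken = self.tokens[self.pos]

    def expr(self):                      # <term> {(+|-) <term>}
        self.term()
        while self.nextToken in ("+", "-"):
            self.lex()
            self.term()

    def term(self):                      # <factor> {(*|/) <factor>}
        self.factor()
        while self.nextToken in ("*", "/"):
            self.lex()
            self.factor()

    def factor(self):                    # id | int | ( <expr> )
        if self.nextToken.isalnum():     # ids and integer literals
            self.lex()
        elif self.nextToken == "(":
            self.lex()
            self.expr()
            if self.nextToken != ")":
                raise ParseError("missing )")
            self.lex()
        else:
            raise ParseError("unexpected " + self.nextToken)

def parses(tokens):
    """True iff the token list is a complete <expr>."""
    p = Parser(tokens)
    try:
        p.expr()
        return p.nextToken == "$"
    except ParseError:
        return False
```

For the exercise input, parses(["(", "sum", "+", "47", ")", "/", "total"]) succeeds, following the same call sequence the trace on the previous slides describes.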

37 Recursive-Descent Parsing / LL Grammars: grammar restrictions. One must consider the limitations of the approach in terms of grammar restrictions:
1. The problem of left recursion.
2. Whether the parser can always choose the correct RHS.  Compare the next token of input with the first symbol that can be generated by each rule - not always easy, as it could be a nonterminal.
3. Left factoring of common prefixes.

38 In terms of context-free grammar, a non-terminal r is left- recursive if the left-most symbol in any of r’s ‘alternatives’ either immediately (direct left-recursive) or through some other non-terminal definitions (indirect/hidden left- recursive) rewrites to r again. This is a problem for all top-down parsing algorithms, not just recursive descent 1- The problem of Left Recursion

39 1-39 Recursive-Descent Parsing / LL Grammars and Left Recursion Important to consider limitations of recursive-descent in terms of grammar restrictions The LL (Left-to-right scan, Leftmost derivation) Grammar Class  The Left Recursion Problem  If a grammar has left recursion, either direct or indirect, it cannot be the basis for a top-down parser.  It causes a catastrophic problem for LL parsers.  A grammar can be modified to remove left recursion  A -> A + B // direct left recursion; the parser continues to try to parse A forever

40 Left Recursion

41 The problem of Left Recursion (cont)

42 1-42 Left Recursion Example
E -> E + T | T
T -> T * F | F
F -> (E) | id
Input: x + y + z, nextToken = x
E // terminal symbol not matched
E + T // terminal symbol not matched
E + T + T // terminal symbol not matched
E + T + T + T // terminal symbol not matched
A person, of course, would choose T rather than E + T, but the function for <expr> is deterministic and would always choose E + T

43 1-43 Eliminating Immediate Left Recursion To remove direct left recursion, for each nonterminal A:
1. Group the A-rules as: A -> Aα1 | ... | Aαm | β1 | ... | βn, where none of the βs begin with A
2. Replace the original A-rules with:
A -> β1A' | β2A' | ... | βnA' (the recursive rules move to A')
A' -> α1A' | ... | αmA' | ε, where ε is the empty string (the ε alternative is the erasure rule)
NOTE: The effect of the original A-rules is to add αs to the end of the string; the revised rules add αs to the end of the string, then the erasure rule gets rid of the A'
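The two-step rewriting above can be expressed directly in code. This is a sketch under the assumption that rules are represented as lists of RHS symbol lists; the A' naming and the "ε" marker are representation choices, not part of the algorithm itself.

```python
# Hedged implementation of the grouping/rewriting scheme for direct left
# recursion. Input: a nonterminal and its list of alternatives, each an
# RHS symbol list. Output: the rewritten rules for A and A'.
def remove_immediate_left_recursion(nt, rhss):
    """Split A-rules into A -> A a1 | ... | A am | b1 | ... | bn and
    return the equivalent rules with left recursion removed."""
    alphas = [rhs[1:] for rhs in rhss if rhs and rhs[0] == nt]   # the αs
    betas = [rhs for rhs in rhss if not rhs or rhs[0] != nt]     # the βs
    if not alphas:
        return {nt: rhss}              # no direct left recursion: unchanged
    new_nt = nt + "'"
    return {
        nt:     [beta + [new_nt] for beta in betas],
        new_nt: [alpha + [new_nt] for alpha in alphas] + [["ε"]],
    }
```

Applied to E -> E + T | T, it produces E -> T E' and E' -> + T E' | ε, the same rewrite used on the later slides.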

44 Eliminating Immediate Left Recursion

45 Eliminating Indirect Left Recursion

46 Left Recursion Removal continue

47

48

49

50 Convince yourself (quick exercise)
Grammar with left recursion: A -> Ax | β, where β = z
Grammar without left recursion: A -> βA'   A' -> xA' | ε, where β = z
Which grammar (or both) can produce: zxx zxxx
We could just do x for this simple grammar, but see the next slide

51 Slightly more complex example
Grammar with left recursion: A -> Ax | Ay | Az | β, where β = c
Grammar without left recursion: A -> βA'   A' -> xA' | yA' | zA' | ε, where β = c
Grammar without left recursion (no ε rule): A -> βA'   A' -> xA' | yA' | zA' | x | y | z, where β = c. Can we parse c?

52 1-52 Recursive-Descent Parsing / LL Grammars (cont)
Grammar without left recursion:
1. E -> T E'
2. E' -> + T E' | ε
3. T -> F T'
4. T' -> * F T' | ε
5. F -> (E) | id
Grammar with left recursion:
E -> E + T | T
T -> T * F | F
F -> (E) | id
The grammars generate the same language, but the first is not left recursive. Example: A + B * C
With the left-recursive grammar:
E => E + T => T + T => F + T => A + T => A + T * F => A + F * F => A + B * F => A + B * C
With the non-left-recursive grammar:
E => T E'              // rule 1
  => F T' E'           // rule 3
  => A T' E'           // rule 5
  => A E'              // rule 4, erasure
  => A + T E'          // rule 2
  => A + F T' E'       // rule 3
  => A + B T' E'       // rule 5
  => A + B * F T' E'   // rule 4
  => A + B * C T' E'   // rule 5
  => A + B * C E'      // erasure rule
  => A + B * C         // erasure rule

53 Convince Yourself Try A * C + B with both grammars (hint for the 2nd grammar: try T + T)
1. E -> T E'
2. E' -> + T E' | ε
3. T -> F T'
4. T' -> * F T' | ε
5. F -> (E) | id
1. E -> E + T | T
2. T -> T * F | F
3. F -> (E) | id

54 2-choose the correct RHS Whether the parser can always choose the correct RHS on the basis of the next token of input, using only the first token generated by the leftmost nonterminal in the current sentential form.(how to achieve?)

55 1-55 Recursive-Descent Parsing / LL Grammars & pairwise disjointness Need to be able to choose the RHS on the basis of the next token, using only the first token generated by the leftmost nonterminal The test to determine whether this can be done is the pairwise disjointness test It requires computing a set named FIRST from the RHSs of each nonterminal symbol

56 1-56 Recursive-Descent Parsing / LL Grammars & pairwise disjointness Pairwise Disjointness Test:  For each nonterminal A in the grammar that has more than one RHS, and for each pair of rules A -> αi and A -> αj, it must be true that FIRST(αi) ∩ FIRST(αj) = ∅
Examples: A -> aB | bAb | Bb   B -> cB | d
 FIRST sets of the A-rules are {a}, {b} and {c, d} These are disjoint
 FIRST sets of the B-rules are {c} and {d} Also disjoint
 NOTE that we apply the test to the rules of every nonterminal, not just the first one

57 1-57 A -> aB | Bab   B -> aB | g
 FIRST sets of the A-rules are {a} and {a, g}, which are NOT disjoint
 If the next input is an a, the parser can't decide which RHS to use
 Becomes more complex as more RHSs begin with nonterminals
Example: do a top-down parse of ag using just a one-token lookahead.  The token is a. Do I apply aB or Bab?  Remember the recursive-descent coding:  if (nextToken == 'a') // do what??

58 Pairwise Disjoint cautions
Simple pairwise disjoint failure:
A -> aB | aC // FIRST sets {a}, {a} - not disjoint
B -> x // FIRST(B) = {x}
C -> y // FIRST(C) = {y}
Is this pairwise disjoint?
A -> aB | gC
B -> x
C -> x
Is this pairwise disjoint?
A -> Aa | aB
B -> x
Is this a problem?

59 More examples Is this pairwise disjoint? A => aB | c B => c | Ab
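The FIRST computation and the disjointness test can be sketched for the ε-free grammars used in these examples. The representation (a dict mapping nonterminals to lists of RHS symbol lists) and the recursion guard are assumptions added for illustration.

```python
# Hedged sketch of the pairwise disjointness test for ε-free grammars.
# A symbol is a nonterminal iff it is a key of the grammar dict.
from itertools import combinations

def first_of_rhs(rhs, grammar, seen=None):
    """FIRST of a single RHS: the terminals it can begin with."""
    seen = seen or set()
    head = rhs[0]
    if head not in grammar:            # terminal: FIRST is just itself
        return {head}
    if head in seen:                   # guard against left recursion
        return set()
    out = set()
    for alt in grammar[head]:          # union FIRST over the alternatives
        out |= first_of_rhs(alt, grammar, seen | {head})
    return out

def pairwise_disjoint(nt, grammar):
    """True iff the FIRST sets of nt's alternatives are pairwise disjoint."""
    firsts = [first_of_rhs(rhs, grammar) for rhs in grammar[nt]]
    return all(a.isdisjoint(b) for a, b in combinations(firsts, 2))
```

On the slides' examples it agrees with the hand computation: A -> aB | bAb | Bb with B -> cB | d passes, while A -> aB | Bab with B -> aB | g fails because both alternatives can begin with a.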

60 3- the problem of left factoring

61 1-61 Left factoring can sometimes resolve the problem Replace
<variable> -> identifier | identifier [ <expression> ]
with
<variable> -> identifier <new>
<new> -> ε | [ <expression> ]
or
<variable> -> identifier [ [ <expression> ] ]
(the outer brackets are metasymbols of EBNF indicating that what is inside is optional)

62 1.2.2-LL Parsers (table driven implementation)

63 LL Parsers (Cont.) The LL parsing actions are:
- Match: match the top of the parser stack with the next input token
- Predict: predict a production and apply it in a derivation step
- Accept: accept and terminate the parsing of a sequence of tokens
- Error: report an error message when matching or prediction fails

64 LL Parsers (Cont.)
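The match/predict/accept/error loop can be sketched as a stack-driven LL(1) parser for the non-left-recursive expression grammar seen earlier (E -> T E', E' -> + T E' | ε, etc.). The parse table below is the standard LL(1) table for that grammar; empty lists encode ε-productions, and the dict encoding is an implementation choice.

```python
# Hedged sketch of a table-driven LL(1) parser. TABLE[(nonterminal, token)]
# gives the predicted RHS; [] means predict ε.
TABLE = {
    ("E", "id"): ["T", "E'"], ("E", "("): ["T", "E'"],
    ("E'", "+"): ["+", "T", "E'"],
    ("E'", ")"): [], ("E'", "$"): [],
    ("T", "id"): ["F", "T'"], ("T", "("): ["F", "T'"],
    ("T'", "+"): [], ("T'", "*"): ["*", "F", "T'"],
    ("T'", ")"): [], ("T'", "$"): [],
    ("F", "id"): ["id"], ("F", "("): ["(", "E", ")"],
}
NONTERMINALS = {"E", "E'", "T", "T'", "F"}

def ll_parse(tokens):
    """Match / predict / accept / error loop over a parse stack."""
    stack = ["$", "E"]
    tokens = tokens + ["$"]
    i = 0
    while stack:
        top = stack.pop()
        if top in NONTERMINALS:        # predict: consult the table
            rhs = TABLE.get((top, tokens[i]))
            if rhs is None:
                return False           # error: no table entry
            stack.extend(reversed(rhs))
        elif top == tokens[i]:         # match (matching "$" is accept)
            i += 1
        else:
            return False               # error: terminal mismatch
    return i == len(tokens)
```

Tracing ll_parse(["id", "+", "id", "*", "id"]) reproduces the predict/match alternation a hand trace of this grammar gives.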

65 1-65 2-Bottom-Up Parsing - the Idea Example grammar and derivation:
S -> aAc
A -> aA | b
S => aAc => aaAc => aabc
Starting with the sentence aabc, we must find the handle. String aabc contains the RHS b; reducing yields aaAc. String aaAc contains the handle aA; reducing yields aAc. String aAc contains the handle aAc; reducing yields S.

66 1-66 The Parsing Problem: Bottom-Up Bottom-up parsers  Given a right sentential form, α, determine what substring of α is the RHS of the rule in the grammar that must be reduced to produce the previous sentential form in the right derivation (reduce the sequence of tokens to the start symbol)  A given sentential form may include more than one RHS from the language grammar  The correct RHS is called the handle.  The most common bottom-up parsing algorithms are in the LR family (Left-to-right scan, generates Rightmost derivation)

67 Quick Exercise Given the grammar: Goal -> aABe A -> Abc | b B -> d and the input string abbcde: Do a bottom-up parse to determine whether the string is a sentence in the language.

68 1-68 Bottom-up Parsing Bottom-up parser starts with last sentential form (input sentence – no non-terminals) and produces sequence of forms until all that remains is the start symbol The parsing problem is finding the correct RHS (handle) in a right-sentential form to reduce to get the previous right-sentential form in the derivation Grammars can be left recursive

69 1-69 Bottom-up Parsing Simple Example Compare the derivation and the parse using the same grammar:
E -> E + T | T
T -> T * F | F
F -> (E) | id
Sample (rightmost) derivation:
E => E + T => E + T * F => E + T * id => E + F * id => E + id * id => T + id * id => F + id * id => id + id * id
Sample bottom-up parse:
id + id * id => F + id * id => T + id * id => E + id * id => E + F * id => E + T * id => E + T * F => E + T => E
Note that the grammar is left recursive.

70 1-70 Bottom-up Parsing - Handles  Formalized: β is the handle of the right sentential form γ = αβw if and only if S =>*rm αAw =>rm αβw, where =>rm is a rightmost derivation step leading from S (the start symbol) and A is the nonterminal the handle reduces to. If a grammar is unambiguous, then there will be a unique handle If the grammar is ambiguous, there may be more than one possible handle

71 Back to our simple example Example grammar and derivation: S -> aAc  A -> aA | b  S => aAc => aaAc => aabc Bottom-up parse: aabc => aaAc => aAc => S Remember there's an incorrect choice: aabc => aaAc => aS So how can we choose the right handle? Bottom-up parsing is based on the rightmost derivation; the second option is not a rightmost derivation. The insight is to identify a phrase (next slide)

72 Are you ready to find the Handle? A phrase is a subsequence of a sentential form that is eventually "reduced" to a nonterminal A simple phrase can be reduced in one step The handle is the left-most simple phrase Example: S => aAc => aaAc => aabc (parse tree: S over a A c; that A over a A; the inner A over b) In this sentential form, what are the: phrases, simple phrases, handle So if we have the parse tree, it's easy to identify the handle. But how do we do it when we're reading input from left to right? May need to delay the decision and read input until we find the handle.

73 Phrases - Quick Exercise Example grammar and derivation: S -> aAC  A -> aA | b  C -> c  S => aAC => aAc => aaAc => aabc (parse tree: S over a A C; A over a A; the inner A over b; C over c) In this sentential form, what are the: phrases, simple phrases, handle

74 Convince yourself Given the following grammar and the right sentential form, draw a parse tree and show the phrases, as well as the handle.  S -> aAb | bBA   A -> ab | aAB   B -> aB | b  Sentential form: aaAbb

75 1-75 2.1- LR Parsers For bottom-up parsing, need to decide:  when to reduce  what production (rule) to apply Shift-Reduce Algorithms  Reduce is the action of replacing the handle on the top of the parse stack with its corresponding LHS  Shift is the action of moving the next token to the top of the parse stack  Error: the parser discovers that a syntax error has occurred  Accept: the parser announces a successful completion of parsing Every parser is a pushdown automaton (PDA) - it can recognize a context-free grammar

76 1-76 Bottom-up Parsing (cont.) Knuth's insight: A bottom-up parser could use the entire history of the parse, up to the current point, to make parsing decisions  There is a finite, relatively small number of different possible parse situations  Store the history on the parse stack  No need to look to the left and right of the substring, can just look to the left LR parsers must be constructed with a tool

77 1-77 Bottom-up Parsing (cont.) An LR configuration stores the state of an LR parser:
(S0 X1 S1 X2 S2 ... Xm Sm, ai ai+1 ... an $)
where the S's are parser states (from a table - you'll see), the X's are grammar symbols (from our CFG), and the a's are input symbols (from the string). A reduction step is triggered when we see the symbols corresponding to a rule's RHS (e.g., the b of A -> b) on the top of the stack.

78 1-78 Structure of An LR Parser

79 1-79 Bottom-up Parsing (cont.) LR parsers are table driven, where the table has two components, an ACTION table and a GOTO table  The ACTION table specifies the action of the parser, given the parser state and the next token  Rows are state names; columns are terminals  The GOTO table specifies which state to put on top of the parse stack after a reduction action is done  Rows are state names; columns are nonterminals

80 1-80 Bottom-up Parsing (cont.) Initial configuration: (S0, a1 ... an $)
Parser actions, given the configuration (stack, input):
(S0 X1 S1 X2 S2 ... Xm Sm, ai ai+1 ... an $)
 If ACTION[Sm, ai] = Shift S, the next configuration is:
(S0 X1 S1 X2 S2 ... Xm Sm ai S, ai+1 ... an $)
 If ACTION[Sm, ai] = Reduce A -> β and S = GOTO[Sm-r, A], where r = the length of β, the next configuration is (β was popped off the stack; S is the new stack top):
(S0 X1 S1 X2 S2 ... Xm-r Sm-r A S, ai ai+1 ... an $)

81 1-81 Bottom-up Parsing (cont.) Parser actions (continued):  If ACTION[S m, a i ] = Accept, the parse is complete and no errors were found.  If ACTION[S m, a i ] = Error, the parser calls an error- handling routine.

82 1-82 LR Parsing Table (ACTION and GOTO table for the grammar below; entries such as S4 denote shift actions - the table itself was shown as an image)
1. E -> E + T
2. E -> T
3. T -> T * F
4. T -> F
5. F -> (E)
6. F -> id

83 1-83 Bottom-up Parsing/LR example Parse id + id * id

Stack                     Input           Action
0                         id + id * id $  S5
0 id 5                    + id * id $     R6 (use GOTO[0,F])
0 F 3                     + id * id $     R4 (use GOTO[0,T])
0 T 2                     + id * id $     R2 (use GOTO[0,E])
0 E 1                     + id * id $     S6
0 E 1 + 6                 id * id $       S5
0 E 1 + 6 id 5            * id $          R6 (use GOTO[6,F])
0 E 1 + 6 F 3             * id $          R4 (use GOTO[6,T])
0 E 1 + 6 T 9             * id $          S7
0 E 1 + 6 T 9 * 7         id $            S5
0 E 1 + 6 T 9 * 7 id 5    $               R6 (use GOTO[7,F])
0 E 1 + 6 T 9 * 7 F 10    $               R3 (use GOTO[6,T])
0 E 1 + 6 T 9             $               R1 (use GOTO[0,E])
0 E 1                     $               accept
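The trace above can be reproduced by a small table-driven LR driver. The ACTION/GOTO entries below are the standard SLR table for this expression grammar (the same one the trace uses); encoding shift/reduce entries as tuples is an implementation choice, not part of the algorithm.

```python
# Hedged table-driven SLR driver for the expression grammar.
# ("s", n) = shift to state n, ("r", k) = reduce by rule k, "acc" = accept.
# RULES[k] gives (LHS, length of RHS) for rule k; index 0 is unused padding.
RULES = [("E'", 1), ("E", 3), ("E", 1), ("T", 3),
         ("T", 1), ("F", 3), ("F", 1)]   # 1:E->E+T ... 6:F->id
ACTION = {
    (0, "id"): ("s", 5), (0, "("): ("s", 4),
    (1, "+"): ("s", 6), (1, "$"): "acc",
    (2, "+"): ("r", 2), (2, "*"): ("s", 7), (2, ")"): ("r", 2), (2, "$"): ("r", 2),
    (3, "+"): ("r", 4), (3, "*"): ("r", 4), (3, ")"): ("r", 4), (3, "$"): ("r", 4),
    (4, "id"): ("s", 5), (4, "("): ("s", 4),
    (5, "+"): ("r", 6), (5, "*"): ("r", 6), (5, ")"): ("r", 6), (5, "$"): ("r", 6),
    (6, "id"): ("s", 5), (6, "("): ("s", 4),
    (7, "id"): ("s", 5), (7, "("): ("s", 4),
    (8, "+"): ("s", 6), (8, ")"): ("s", 11),
    (9, "+"): ("r", 1), (9, "*"): ("s", 7), (9, ")"): ("r", 1), (9, "$"): ("r", 1),
    (10, "+"): ("r", 3), (10, "*"): ("r", 3), (10, ")"): ("r", 3), (10, "$"): ("r", 3),
    (11, "+"): ("r", 5), (11, "*"): ("r", 5), (11, ")"): ("r", 5), (11, "$"): ("r", 5),
}
GOTO = {(0, "E"): 1, (0, "T"): 2, (0, "F"): 3, (4, "E"): 8, (4, "T"): 2,
        (4, "F"): 3, (6, "T"): 9, (6, "F"): 3, (7, "F"): 10}

def lr_parse(tokens):
    """Shift-reduce loop over a state stack; True iff input is accepted."""
    stack, i = [0], 0
    tokens = tokens + ["$"]
    while True:
        act = ACTION.get((stack[-1], tokens[i]))
        if act is None:
            return False               # error: empty table entry
        if act == "acc":
            return True                # accept
        kind, n = act
        if kind == "s":                # shift: push the new state
            stack.append(n)
            i += 1
        else:                          # reduce by rule n: pop |RHS| states,
            lhs, length = RULES[n]     # then push GOTO[exposed state, LHS]
            del stack[len(stack) - length:]
            stack.append(GOTO[(stack[-1], lhs)])
```

Stepping lr_parse(["id", "+", "id", "*", "id"]) through a debugger reproduces exactly the stack/action sequence in the trace above.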

84 1-84 Bottom-up Parsing/ LR parsers LR parsers use a relatively small program and a parsing table Advantages of LR parsers:  They will work for nearly all grammars that describe programming languages.  They work on a larger class of grammars than other bottom- up algorithms, but are as efficient as any other bottom-up parser.  They can detect syntax errors as soon as it is possible in left- to-right scan  The LR class of grammars is a superset of the class parsable by LL parsers.

