
Chapter 4 Syntax Analysis


1 Chapter 4 Syntax Analysis

2 An Overview of Parsing
Why are grammars, as formal descriptions of languages, important?
- They are precise, easy-to-understand representations.
- Compiler-writing tools can take a grammar and generate a compiler.
- They allow the language to evolve (new statements, changes to statements, etc.); languages are not static, but are constantly upgraded to add new features or fix "old" ones.
How do grammars relate to the parsing process?

3 Parsing During Compilation
[Diagram: the source program feeds the lexical analyzer (driven by regular expressions); the parser calls "get next token" and receives a token; the parser hands a parse tree to the rest of the front end, which emits an intermediate representation; all phases share the symbol table and report errors.]
The parser uses a grammar to check the structure of tokens, produces a parse tree, and handles syntactic errors and recovery: it must recognize correct syntax and report errors. The "rest of the front end" (augmenting info on tokens in the source, type checking, semantic analysis) is also technically part of parsing.

4 Parsing Responsibilities
Syntax Error Identification / Handling. Recall the typical error types:
- Lexical: misspellings
- Syntactic: omission, wrong order of tokens
- Semantic: incompatible types
- Logical: infinite loop / recursive call
The majority of error processing occurs during syntax analysis. NOTE: not all errors are identifiable!

5 Key Issues – Error Processing
- Detecting errors
- Finding the position at which they occur
- Presenting them clearly and accurately
- Recovering (passing over) to continue and find later errors
- Not impacting the compilation of "correct" programs

6 What are some Typical Errors ?
#include<stdio.h>
int f1(int v) {
    int i, j = 0;
    for (i = 1; i < 5; i++) { j = v + f2(i) }
    return j;
}
int f2(int u) {
    int j;
    j = u + f1(u*u);
int main()
    for (i = 1; i < 10; i++) {
        j = j + i*i
        printf("%d\n", i);
    }
    printf("%d\n", f1(j));
    return 0;
}
As reported by MS VC++: 'f2' undefined; syntax error: missing ';' before '}'; syntax error: missing ';' before identifier 'printf'. (The missing semicolons and the missing closing brace of f2 are deliberate.) Which are "easy" to recover from? Which are "hard"?

7 Error Recovery Strategies
Panic Mode: discard tokens until a "synchronizing" token is found (end, ";", "}", etc.); which tokens to use is the designer's decision.
- Problems: skips input; may miss a declaration, causing more errors; misses errors in the skipped material.
- Advantages: simple; well suited when there is about one error per statement.
Phrase Level: local correction on the input, e.g. for "," where ";" was expected, delete the "," and insert a ";". Also a designer decision. Not suited to all situations; used in conjunction with panic mode to allow less input to be skipped.

8 Error Recovery Strategies – (2)
Error Productions:
- Augment the grammar used for parser construction / generation with rules for common errors; example: add a rule for := appearing in a C assignment statement.
- Report the error but continue the compile: self-correction plus diagnostic messages.
Global Correction:
- Adding / deleting / replacing symbols may require many changes!
- Algorithms are available to minimize the number of changes, but they are costly: that is the key issue.

9 Motivating Grammars
Regular expressions: the basis of lexical analysis; they represent the regular languages.
Context-free grammars: the basis of parsing; they represent language constructs.
[Diagram: Reg. Lang. ⊂ CFLs]

10 Context Free Grammars : Concepts & Terminology
Definition: A Context Free Grammar (CFG) is described by (T, NT, S, PR), where:
- T: terminals / tokens of the language
- NT: non-terminals, denoting sets of strings generated by the grammar & in the language
- S: the start symbol, S ∈ NT, which defines all strings of the language
- PR: production rules indicating how T and NT are combined to generate valid strings of the language; PR: NT → (T | NT)*
Like a regular expression / DFA / NFA, a context-free grammar is a mathematical model.

11 Context Free Grammars : A First Look
assign_stmt  id := expr ; expr  expr operator term expr  term term  id term  real term  integer operator  + operator  - Derivation: A sequence of grammar rule applications and substitutions that transform a starting non-term into a sequence of terminals / tokens. Simply stated: Grammars / production rules allow us to “rewrite” and “identify” correct syntax.

12 Derivation
Let's derive id := id + real - integer ; using the productions:
assign_stmt
⇒ id := expr ;                              (assign_stmt → id := expr ;)
⇒ id := expr operator term ;                (expr → expr operator term)
⇒ id := expr operator term operator term ;  (expr → expr operator term)
⇒ id := term operator term operator term ;  (expr → term)
⇒ id := id operator term operator term ;    (term → id)
⇒ id := id + term operator term ;           (operator → +)
⇒ id := id + real operator term ;           (term → real)
⇒ id := id + real - term ;                  (operator → -)
⇒ id := id + real - integer ;               (term → integer)
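The substitution process above is mechanical enough to sketch in a few lines of Python. This is an illustration, not part of the slides; `apply_rule` and the list encoding of sentential forms are my own names.

```python
# Replay the derivation on this slide: each step replaces the leftmost
# occurrence of a non-terminal with a production's right-hand side.

def apply_rule(sentential, lhs, rhs):
    """Replace the first (leftmost) occurrence of non-terminal lhs with rhs."""
    i = sentential.index(lhs)
    return sentential[:i] + rhs + sentential[i + 1:]

form = ["assign_stmt"]
steps = [
    ("assign_stmt", ["id", ":=", "expr", ";"]),
    ("expr", ["expr", "operator", "term"]),
    ("expr", ["expr", "operator", "term"]),
    ("expr", ["term"]),
    ("term", ["id"]),
    ("operator", ["+"]),
    ("term", ["real"]),
    ("operator", ["-"]),
    ("term", ["integer"]),
]
for lhs, rhs in steps:
    form = apply_rule(form, lhs, rhs)

print(" ".join(form))  # id := id + real - integer ;
```

Each intermediate value of `form` is one of the sentential forms listed on the slide.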

13 Example Grammar
expr → expr op expr
expr → ( expr )
expr → - expr
expr → id
op → +
op → -
op → *
op → /
op → ↑
9 production rules. To simplify / standardize notation, we offer a synopsis of terminology.

14 Example Grammar - Terminology
Terminals: a, b, c, +, -, punc, 0, 1, …, 9
Non-terminals: A, B, C, S
Either T or NT: X, Y, Z
Strings of terminals: u, v, …, z in T*
Strings of T / NT: α, β, γ in (T ∪ NT)*
Alternatives of production rules: A → α1; A → α2; …; A → αk  ≡  A → α1 | α2 | … | αk
The NT on the LHS of the first production rule is designated as the start symbol!
E → E A E | ( E ) | -E | id
A → + | - | * | / | ↑

15 Grammar Concepts
A step in a derivation is an action that replaces an NT with the RHS of a production rule.
EXAMPLE: E ⇒ -E (the ⇒ means "derives in one step"), using the production rule E → -E.
EXAMPLE: E ⇒ E A E ⇒ E * E ⇒ E * ( E )
DEFINITIONS: ⇒ derives in one step; ⇒+ derives in one or more steps; ⇒* derives in zero or more steps.
EXAMPLES: αAβ ⇒ αγβ if A → γ is a production rule. If α1 ⇒ α2 ⇒ … ⇒ αn, then α1 ⇒* αn. α ⇒* α for all α. If α ⇒* β and β ⇒* γ, then α ⇒* γ.

16 How does this relate to Languages?
Let G be a CFG with start symbol S. Then S ⇒+ W (where W has no non-terminals) represents the language generated by G, denoted L(G). So W ∈ L(G) ⇔ S ⇒+ W; W is a sentence of G. When S ⇒* α (and α may contain NTs), α is called a sentential form of G.
EXAMPLE: id * id is a sentence. Here's the derivation: E ⇒ E A E ⇒ E * E ⇒ id * E ⇒ id * id, so E ⇒* id * id. The intermediate strings E A E, E * E, id * E are sentential forms.

17 Other Derivation Concepts
Leftmost: replace the leftmost non-terminal symbol: E ⇒lm E A E ⇒lm id A E ⇒lm id * E ⇒lm id * id
Rightmost: replace the rightmost non-terminal symbol: E ⇒rm E A E ⇒rm E A id ⇒rm E * id ⇒rm id * id
Important notes: if A ⇒*lm α, what's true about α? If A ⇒*rm α, what's true about α?
Derivations: the actions taken to parse input can be represented pictorially in a parse tree.

18 Examples of LM / RM Derivations
E  E A E | ( E ) | -E | id A  + | - | * | / |  A leftmost derivation of : id + id * id A rightmost derivation of : id + id * id

19 Derivations & Parse Tree
E  E A E E A *  E * E E A id *  id * E E A id *  id * id

20 Parse Trees and Derivations
Consider the expression grammar: E → E+E | E*E | (E) | -E | id
A leftmost derivation of id + id * id begins: E ⇒ E + E ⇒ id + E ⇒ id + E * E
[Slide shows the corresponding parse tree at each step.]

21 Parse Tree & Derivations - continued
⇒ id + id * E ⇒ id + id * id
[Slide completes the parse tree for this derivation.]

22 Alternative Parse Tree & Derivation
E  E * E  E + E * E  id + E * E  id + id * E  id + id * id E + * id WHAT’S THE ISSUE HERE ? Two distinct leftmost derivations!

23 Resolving Grammar Problems/Difficulties
Regular Expressions: the basis of lexical analysis. Regular expressions generate/represent the regular languages, the smallest, most well-defined class of languages.
Context-Free Grammars: the basis of parsing. CFGs represent the context-free languages; CFLs contain more powerful languages.
[Diagram: Reg. Lang. ⊂ CFLs]

24 Resolving Problems/Difficulties – (2)
Since Reg. Lang. ⊂ Context-Free Lang., it is possible to go from a regular expression to a CFG via an NFA. Recall the NFA for (a | b)*abb:
[Diagram: states 0 through 3; state 0 loops on a and b, with 0 →a→ 1 →b→ 2 →b→ 3, and state 3 accepting.]

25 Resolving Problems/Difficulties – (3)
Construct a CFG as follows:
- Each state i has a non-terminal Ai: A0, A1, A2, A3.
- If state i goes to state j on a, add Ai → aAj; if i goes to j on b, add Ai → bAj.
- If i is an accepting state, add Ai → ε: here A3 → ε.
- If i is the starting state, Ai is the start symbol: here A0.
Result: T = {a, b}, NT = {A0, A1, A2, A3}, S = A0,
PR = { A0 → aA0 | aA1 | bA0 ; A1 → bA2 ; A2 → bA3 ; A3 → ε }
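The state-to-rule construction above can be sketched in Python. This is an illustration, not from the slides; `fa_to_cfg` and the edge-list encoding are my own, and `""` stands in for ε.

```python
# Build a CFG from a finite automaton: each transition i --x--> j yields
# Ai -> x Aj, and each accepting state i yields Ai -> ε (empty string here).

def fa_to_cfg(edges, accepting):
    """edges: list of (state, symbol, state) transitions (NFA or DFA)."""
    rules = {}
    for i, a, j in edges:
        rules.setdefault(f"A{i}", []).append(f"{a}A{j}")
    for i in accepting:
        rules.setdefault(f"A{i}", []).append("")  # Ai -> ε
    return rules

# NFA for (a|b)*abb as on the slide: states 0..3, start 0, accepting {3}.
edges = [(0, "a", 0), (0, "a", 1), (0, "b", 0),
         (1, "b", 2), (2, "b", 3)]
rules = fa_to_cfg(edges, {3})
print(rules)  # A0 -> aA0 | aA1 | bA0 ; A1 -> bA2 ; A2 -> bA3 ; A3 -> ε
```

The output reproduces exactly the production set PR listed on the slide.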

26 How Does This CFG Derive Strings ?
[NFA for (a | b)*abb] vs. the grammar A0 → aA0, A0 → aA1, A0 → bA0, A1 → bA2, A2 → bA3, A3 → ε.
How is abaabb derived in each?

27 Regular Expressions vs. CFGs
Regular expressions for lexical syntax: CFGs would be overkill, since lexical rules are quite simple and straightforward; REs are concise and easy to understand; and a more efficient lexical analyzer can be constructed from them. Using REs for lexical analysis and CFGs for parsing promotes modularity, low coupling & high cohesion.
CFGs: match tokens such as "(" ")", begin / end, if-then-else, whiles, proc/func calls, … They are intended for the structural associations between tokens: are the tokens in the correct order?

28 Resolving Grammar Difficulties : Motivation
Humans write / develop grammars, and different parsing approaches (Top-Down vs. Bottom-Up) have different needs.
Grammar problems: ambiguity, ε-moves, cycles, left recursion, left factoring.
For some we remove the "errors"; for others we redesign the grammar to fit the parsing method.

29 Resolving Problems: Ambiguous Grammars
Consider the following grammar segment:
stmt → if expr then stmt
     | if expr then stmt else stmt
     | other (any other statement)
What's the problem here? An else must match a previous then, and the structure indicates the parse sub-tree for the expression. Let's consider a simple parse tree:
[Parse tree: stmt with children if, expr (E1), then, stmt (S1), else, stmt (S2).]

30 Example : What Happens with this string?
if E1 then if E2 then S1 else S2
How is this parsed?
if E1 then ( if E2 then S1 else S2 )   vs.   if E1 then ( if E2 then S1 ) else S2
What's the issue here?

31 Parse Trees for Example
Form 1: the else attaches to the inner if: if E1 then ( if E2 then S1 else S2 )
Form 2: the else attaches to the outer if: if E1 then ( if E2 then S1 ) else S2
What's the issue here? One string, two parse trees: the grammar is ambiguous.

32 Removing Ambiguity
Take the original grammar:
stmt → if expr then stmt
     | if expr then stmt else stmt
     | other (any other statement)
Rule: match each else with the closest previous unmatched then.
Revise to remove the ambiguity:
stmt → matched_stmt | unmatched_stmt
matched_stmt → if expr then matched_stmt else matched_stmt | other
unmatched_stmt → if expr then stmt | if expr then matched_stmt else unmatched_stmt

33 Resolving Difficulties : Left Recursion
A left-recursive grammar has rules that support the derivation A ⇒+ Aα, for some α.
Top-down parsing can't reconcile this type of grammar, since it could consistently make a choice that wouldn't allow termination: A ⇒ Aα ⇒ Aαα ⇒ Aααα … etc.
Transform the left-recursive grammar A → Aα | β into:
A → βA'
A' → αA' | ε

34 Why is Left Recursion a Problem ?
Consider: E → E + T | T, T → T * F | F, F → ( E ) | id
Derive id + id + id:
E ⇒ E + T ⇒ E + T + T ⇒ T + T + T ⇒ …
What does E → E + T | T generate? How does this build strings? What does each string have to start with? How can the left recursion be removed?

35 Resolving Difficulties : Left Recursion (2)
Informal discussion: take all productions for A and order them as
A → Aα1 | Aα2 | … | Aαm | β1 | β2 | … | βn
where no βi begins with A. Now apply the concept of the previous slide:
A → β1A' | β2A' | … | βnA'
A' → α1A' | α2A' | … | αmA' | ε
For our example, E → E + T | T, T → T * F | F, F → ( E ) | id becomes:
E → TE'    E' → + TE' | ε
T → FT'    T' → * FT' | ε
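The transformation above, for the case of immediate left recursion, can be sketched in Python. This is an illustration, not from the slides; the function name and the list-of-symbol-lists grammar encoding are my own, with `[]` standing for ε.

```python
# A -> Aα1 | ... | Aαm | β1 | ... | βn  becomes
# A -> β1 A' | ... | βn A' ;  A' -> α1 A' | ... | αm A' | ε

def remove_immediate_left_recursion(nt, rhss):
    """rhss: list of right-hand sides, each a list of symbols."""
    alphas = [r[1:] for r in rhss if r and r[0] == nt]      # A -> A alpha
    betas  = [r     for r in rhss if not r or r[0] != nt]   # A -> beta
    if not alphas:
        return {nt: rhss}          # nothing to do
    new = nt + "'"
    return {
        nt:  [b + [new] for b in betas],
        new: [a + [new] for a in alphas] + [[]],  # [] plays the role of ε
    }

# E -> E + T | T  becomes  E -> T E' ;  E' -> + T E' | ε
print(remove_immediate_left_recursion("E", [["E", "+", "T"], ["T"]]))
```

Applying it to T → T * F | F likewise yields T → F T' and T' → * F T' | ε, matching the slide.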

36 Resolving Difficulties : Left Recursion (3)
Problem: if the left recursion is two or more levels deep, this isn't enough:
S → Aa | b, A → Ac | Sd | ε gives S ⇒ Aa ⇒ Sda
Algorithm:
Input: grammar G with ordered non-terminals A1, …, An
Output: an equivalent grammar with no left recursion
Arrange the non-terminals in some order: A1 = start NT, A2, …, An
for i := 1 to n do begin
  for j := 1 to i - 1 do begin
    replace each production of the form Ai → Ajγ by the productions
    Ai → δ1γ | δ2γ | … | δkγ, where Aj → δ1 | δ2 | … | δk are all the current Aj productions;
  end
  eliminate the immediate left recursion among the Ai productions
end

37 Using the Algorithm
Apply the algorithm to: A1 → A2a | b | ε, A2 → A2c | A1d
i = 1: for A1 there is no immediate left recursion.
i = 2: for j = 1 to 1 do: take the productions A2 → A1γ and replace them with A2 → δ1γ | δ2γ | … | δkγ, where A1 → δ1 | δ2 | … | δk are the A1 productions.
In our case A2 → A1d becomes A2 → A2ad | bd | d.
What's left: A1 → A2a | b | ε, A2 → A2c | A2ad | bd | d. Are we done?

38 Using the Algorithm (2)
No! We must still remove the A2 left recursion!
A1 → A2a | b | ε, A2 → A2c | A2ad | bd | d
Recall: A → Aα1 | Aα2 | … | Aαm | β1 | β2 | … | βn becomes
A → β1A' | β2A' | … | βnA' ;  A' → α1A' | α2A' | … | αmA' | ε
Apply this to the above case. What do you get?
A1 → A2a | b | ε
A2 → bdA2' | dA2'
A2' → cA2' | adA2' | ε

39 Removing Difficulties : -Moves
Transformation: in order to remove A → ε, find all rules of the form B → uAv and add the rule B → uv to the grammar G. Why does this work?
A grammar is ε-free if:
- it has no ε-productions, or
- there is exactly one ε-production, S → ε, and the start symbol S does not appear on the right side of any production.
Examples:
E → TE'    E' → + TE' | ε    T → FT'    T' → * FT' | ε    F → ( E ) | id
A1 → A2a | b
A2 → bdA2' | A2'
A2' → cA2' | bdA2' | ε

40 Removing Difficulties : Cycles
How would cycles be removed? Make sure every production adds some terminal(s) (except a single ε-production for the start NT). E.g.
S → SS | ( S ) | ε has a cycle: S ⇒ SS ⇒ S
Transform to: S → S ( S ) | ( S ) | ε

41 Removing Difficulties : Left Factoring
Problem: it is uncertain which of the 2 rules to choose:
stmt → if expr then stmt else stmt
     | if expr then stmt
When do you know which one is valid? What's the general form of stmt?
A → αβ1 | αβ2, with α = if expr then stmt, β1 = else stmt, β2 = ε
Transform to:
A → αA'
A' → β1 | β2
EXAMPLE: stmt → if expr then stmt rest, rest → else stmt | ε
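The A → αβ1 | αβ2 transformation above can be sketched in Python for the simple case where the shared prefix α is the longest common prefix of all the alternatives, which covers the stmt example. The names and encoding are illustrative, with `[]` standing for ε.

```python
# Left factoring: A -> αβ1 | αβ2  becomes  A -> αA' ;  A' -> β1 | β2.

def common_prefix(seqs):
    """Longest common prefix of a list of symbol lists."""
    prefix = []
    for symbols in zip(*seqs):
        if len(set(symbols)) != 1:
            break
        prefix.append(symbols[0])
    return prefix

def left_factor(nt, rhss):
    alpha = common_prefix(rhss)
    if not alpha:
        return {nt: rhss}          # nothing shared: leave unchanged
    new = nt + "'"
    return {nt:  [alpha + [new]],
            new: [r[len(alpha):] for r in rhss]}  # empty tail = ε

stmt = [["if", "expr", "then", "stmt", "else", "stmt"],
        ["if", "expr", "then", "stmt"]]
print(left_factor("stmt", stmt))
```

The result matches the slide's example: stmt → if expr then stmt rest, rest → else stmt | ε (with rest spelled stmt' here).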

42 Top-Down Parsing
Can be viewed as an attempt to find a leftmost derivation for an input string. Why? By always replacing the leftmost non-terminal symbol via a production rule, we are guaranteed to develop a parse tree in a left-to-right fashion consistent with scanning the input.
A ⇒ aBc ⇒ adDc ⇒ adec (scan a, scan d, scan e, scan c - accept!)
Topics: recursive-descent parsing concepts (may involve backtracking); predictive parsing (recursive / brute-force technique, and non-recursive / table-driven); error recovery; implementation.

43 Recursive Descent Parsing Concepts
A general category of top-down parsing: choose a production rule based on the input symbol; this may require backtracking to correct a wrong choice.
Example: S → cAd, A → ab | a, input: cad
Expand S to c A d and match c. Try A → ab: a matches, but b fails against d. Backtrack and try A → a: a matches, then d matches. Accept.
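The backtracking behavior traced above can be sketched in a few lines of Python. This is an illustration, not from the slides; `parse_A` enumerates every way A could match so that the caller can backtrack when the rest of the input fails.

```python
# Backtracking recursive descent for S -> c A d, A -> a b | a.

def parse_A(s, i):
    """Yield every index where A could end, longest alternative first."""
    if s[i:i + 2] == "ab":   # A -> a b
        yield i + 2
    if s[i:i + 1] == "a":    # A -> a
        yield i + 1

def parse_S(s):
    if not s.startswith("c"):
        return False
    # Try each way of matching A; backtrack when the remainder fails.
    return any(s[j:] == "d" for j in parse_A(s, 1))

print(parse_S("cad"), parse_S("cabd"), parse_S("cd"))  # True True False
```

On "cad" the first alternative A → ab fails, so the generator falls through to A → a: exactly the backtracking step shown in the slide's trace.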

44 Predictive Parsing : Recursive
To eliminate backtracking, what must we ensure about the grammar? No left recursion, and left factoring has been applied. Frequently, when the grammar satisfies these conditions, the current input symbol in conjunction with the current non-terminal uniquely determines the production that needs to be applied.
Utilize transition diagrams. For each non-terminal of the grammar:
- create an initial and a final state;
- if A → X1X2…Xn is a production, add a path with edges X1, X2, …, Xn.
Once the transition diagrams have been developed, a straightforward technique turns the diagrams into procedures, with possible recursion.

45 Transition Diagrams
Unlike their lexical equivalents, each edge represents a token or a non-terminal. A transition implies: if the edge is a token, match the input; else call the procedure for that non-terminal. Recall the earlier grammar and its associated transition diagrams:
E → TE'    E' → + TE' | ε    T → FT'    T' → * FT' | ε    F → ( E ) | id
[Diagrams (state numbering reconstructed): E: 1 -T→ -E'→ 2; E': 3 -+→ 4 -T→ 5 -E'→ 6, plus an ε edge from 3 to 6; T: 7 -F→ 8 -T'→ 9; T': 10 -*→ 11 -F→ 12 -T'→ 13, plus an ε edge from 10 to 13; F: 14 -(→ 15 -E→ 16 -)→ 17, and 14 -id→ 17.]
How are transition diagrams used? Are ε-moves a problem? Can we simplify transition diagrams? Why is simplification critical?

46 How are Transition Diagrams Used ?
main()  { TD_E(); }
TD_E()  { TD_T(); TD_E'(); }
TD_E'() { token = get_token();
          if token = '+' then { TD_T(); TD_E'(); } }
TD_T()  { TD_F(); TD_T'(); }
TD_T'() { token = get_token();
          if token = '*' then { TD_F(); TD_T'(); } }
TD_F()  { token = get_token();
          if token = '(' then { TD_E(); match(')'); }
          else if token.value <> id then { error + EXIT }
          ... }
What happened to the ε-moves? They are the implicit "else unget() and terminate" branches. NOTE: not all error conditions have been represented.
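For comparison, here is a runnable Python version of these procedures, assuming pre-split tokens and using an exception in place of "error + EXIT". The class and method names are my own; the structure mirrors the TD_* procedures one-for-one.

```python
# Recursive-descent predictive parser for:
# E -> T E' ; E' -> + T E' | ε ; T -> F T' ; T' -> * F T' | ε ; F -> ( E ) | id

class Parser:
    def __init__(self, tokens):
        self.toks = tokens + ["$"]   # "$" terminates the input
        self.i = 0

    def peek(self):
        return self.toks[self.i]

    def match(self, t):
        if self.peek() != t:
            raise SyntaxError(f"expected {t!r}, got {self.peek()!r}")
        self.i += 1

    def E(self):
        self.T(); self.Eprime()

    def Eprime(self):
        if self.peek() == "+":       # E' -> + T E'
            self.match("+"); self.T(); self.Eprime()
        # otherwise E' -> ε: consume nothing (the "unget" branch)

    def T(self):
        self.F(); self.Tprime()

    def Tprime(self):
        if self.peek() == "*":       # T' -> * F T'
            self.match("*"); self.F(); self.Tprime()

    def F(self):
        if self.peek() == "(":       # F -> ( E )
            self.match("("); self.E(); self.match(")")
        else:                        # F -> id
            self.match("id")

def accepts(tokens):
    p = Parser(tokens)
    try:
        p.E()
        return p.peek() == "$"       # all input consumed?
    except SyntaxError:
        return False

print(accepts(["id", "+", "id", "*", "id"]))  # True
```

Note how the ε alternatives of E' and T' become the silent fall-through branches of the if statements.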

47 How can Transition Diagrams be Simplified ?
[Diagram: E': 3 -+→ 4 -T→ 5 -E'→ 6, plus an ε edge from 3 to 6.]

48 How can Transition Diagrams be Simplified ? (2)
[Step: since the E' edge leaving state 5 just re-enters E' itself, the recursive edge can be replaced by a jump from 5 back to state 3.]

49 How can Transition Diagrams be Simplified ? (3)
[Step: with the jump in place, states 5 and 3 can be merged, giving a loop: 3 -+→ 4 -T→ back to 3, with the ε edge 3 → 6 as the exit.]

50 How can Transition Diagrams be Simplified ? (4)
[Step: the simplified E' diagram is now substituted into E: 1 -T→ -E'→ 2.]

51 How can Transition Diagrams be Simplified ? (5)
[Result: the simplified diagram for E matches a T, then loops: on '+' it matches another T, and exits to the accepting state (states 3, 4, 6) when no '+' remains.]

52 Additional Transition Diagram Simplifications
Similar steps apply to T and T'. Simplified transition diagrams:
[T: state 7, match F, then loop: on '*' match another F, exit to 13; T': 10 -*→ 11 -F→ back, with ε exit to 13; F: 14 -(→ 15 -E→ 16 -)→ 17, and 14 -id→ 17.]

53 Motivating Table-Driven Parsing
1. Scan the input left to right. 2. Find a leftmost derivation. $ is the input terminator.
Grammar: E → TE', E' → +TE' | ε, T → id
Input: id + id $
Derivation: E ⇒ …
Processing stack: (developed on the next slides)

54 Non-Recursive / Table Driven
[Model: the input buffer (string + terminator) feeds a predictive parsing program, which consults a stack of NT and T symbols of the CFG ($ is the empty-stack symbol) and the parsing table M[A,a], which tells the parser what action to take based on stack / input; the program emits output.]
General parser behavior (X: top of stack, a: current input):
1. When X = a = $: halt, accept, success.
2. When X = a ≠ $: pop X off the stack, advance the input, go to 1.
3. When X is a non-terminal, examine M[X,a]: if it is an error, call the recovery routine; if M[X,a] = {X → UVW}, pop X and push W, V, U. Do not expend any input.

55 Algorithm for Non-Recursive Parsing
Set ip to point to the first symbol of w$;   /* ip is the input pointer */
repeat
  let X be the top stack symbol and a the symbol pointed to by ip;
  if X is a terminal or $ then
    if X = a then pop X from the stack and advance ip
    else error()
  else  /* X is a non-terminal */
    if M[X,a] = X → Y1Y2…Yk then begin
      pop X from the stack;
      push Yk, Yk-1, …, Y1 onto the stack, with Y1 on top;
      output the production X → Y1Y2…Yk
      /* the parser may also execute other code based on the production used */
    end
    else error()
until X = $  /* stack is empty */
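The loop above can be transcribed directly into Python. This sketch uses the parsing table for the expression grammar that the following slides construct; the encoding choices (tuple-keyed dict, `[]` as an ε right-hand side) are my own.

```python
# Non-recursive, table-driven predictive parser.

TABLE = {
    ("E", "id"): ["T", "E'"], ("E", "("): ["T", "E'"],
    ("E'", "+"): ["+", "T", "E'"], ("E'", ")"): [], ("E'", "$"): [],
    ("T", "id"): ["F", "T'"], ("T", "("): ["F", "T'"],
    ("T'", "+"): [], ("T'", "*"): ["*", "F", "T'"],
    ("T'", ")"): [], ("T'", "$"): [],
    ("F", "id"): ["id"], ("F", "("): ["(", "E", ")"],
}
NONTERMS = {"E", "E'", "T", "T'", "F"}

def parse(tokens):
    """Return the list of productions used, or None on a syntax error."""
    stack = ["$", "E"]
    toks = tokens + ["$"]
    i = 0
    output = []
    while stack:
        X = stack.pop()
        a = toks[i]
        if X == "$":
            return output if a == "$" else None
        if X not in NONTERMS:            # terminal: must match the input
            if X != a:
                return None
            i += 1                        # expend input
        else:                             # non-terminal: consult M[X, a]
            rhs = TABLE.get((X, a))
            if rhs is None:
                return None               # error entry
            output.append((X, rhs))
            stack.extend(reversed(rhs))   # push RHS with leftmost on top
    return None

result = parse(["id", "+", "id", "*", "id"])
print(result is not None)  # True
```

The `output` list reproduces the OUTPUT column of the trace slides, production by production.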

56 Example
E → TE'    E' → + TE' | ε    T → FT'    T' → * FT' | ε    F → ( E ) | id
Our well-worn example! Table M:
NT  | id      | +         | *         | (       | )      | $
E   | E→TE'   |           |           | E→TE'   |        |
E'  |         | E'→+TE'   |           |         | E'→ε   | E'→ε
T   | T→FT'   |           |           | T→FT'   |        |
T'  |         | T'→ε      | T'→*FT'   |         | T'→ε   | T'→ε
F   | F→id    |           |           | F→(E)   |        |

57 Trace of Example STACK INPUT OUTPUT

58 Trace of Example
STACK      INPUT           OUTPUT
$E         id + id * id$
$E'T       id + id * id$   E → TE'
$E'T'F     id + id * id$   T → FT'
$E'T'id    id + id * id$   F → id
$E'T'      + id * id$      (expend input)
$E'        + id * id$      T' → ε
$E'T+      + id * id$      E' → +TE'
$E'T       id * id$        (expend input)
$E'T'F     id * id$        T → FT'
$E'T'id    id * id$        F → id
$E'T'      * id$           (expend input)
$E'T'F*    * id$           T' → *FT'
$E'T'F     id$             (expend input)
$E'T'id    id$             F → id
$E'T'      $               (expend input)
$E'        $               T' → ε
$          $               E' → ε

59 Leftmost Derivation for the Example
The leftmost derivation for the example is as follows:
E ⇒ TE' ⇒ FT'E' ⇒ id T'E' ⇒ id E' ⇒ id + TE' ⇒ id + FT'E' ⇒ id + id T'E' ⇒ id + id * FT'E' ⇒ id + id * id T'E' ⇒ id + id * id E' ⇒ id + id * id

60 What’s the Missing Puzzle Piece ?
Constructing the parsing table M!
1st: calculate First & Follow for the grammar.
2nd: apply the construction algorithm for the parsing table (we'll see this shortly).
Basic tools:
First: let α be a string of grammar symbols. First(α) is the set that includes every terminal that appears leftmost in α or in any string originating from α. NOTE: if α ⇒* ε, then ε is in First(α).
Follow: let A be a non-terminal. Follow(A) is the set of terminals a that can appear immediately to the right of A in some sentential form (S ⇒* αAaβ, for some α and β).

61 Computing First(X) : All Grammar Symbols
1. If X is a terminal, First(X) = {X}.
2. If X → ε is a production rule, add ε to First(X).
3. If X is a non-terminal and X → Y1Y2…Yk is a production rule:
   place First(Y1) in First(X);
   if Y1 ⇒* ε, place First(Y2) in First(X);
   if Y2 ⇒* ε, place First(Y3) in First(X);
   …
   if Yk-1 ⇒* ε, place First(Yk) in First(X).
   NOTE: as soon as some Yi does not derive ε, stop.
Repeat the above steps until no more elements are added to any First set.
Checking "Yj ⇒* ε ?" essentially amounts to checking whether ε belongs to First(Yj).
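The "repeat until nothing changes" wording above is a fixed-point computation, which can be sketched directly in Python. The encoding is my own: a dict of productions per non-terminal, with `""` standing for ε.

```python
# Fixed-point computation of First for all non-terminals.

def compute_first(grammar):
    """grammar: dict NT -> list of right-hand sides (lists of symbols)."""
    first = {nt: set() for nt in grammar}

    def first_of_seq(seq):
        out = set()
        for X in seq:
            fx = first[X] if X in grammar else {X}   # terminal: First = {X}
            out |= fx - {""}
            if "" not in fx:                          # X cannot derive ε: stop
                return out
        return out | {""}            # every symbol derives ε (or seq is empty)

    changed = True
    while changed:                   # iterate until no First set grows
        changed = False
        for nt, rhss in grammar.items():
            for rhs in rhss:
                new = first_of_seq(rhs)
                if not new <= first[nt]:
                    first[nt] |= new
                    changed = True
    return first

G = {
    "E":  [["T", "E'"]],
    "E'": [["+", "T", "E'"], []],
    "T":  [["F", "T'"]],
    "T'": [["*", "F", "T'"], []],
    "F":  [["(", "E", ")"], ["id"]],
}
first = compute_first(G)
print(sorted(first["E"]), sorted(first["E'"]))  # ['(', 'id'] ['', '+']
```

The result matches the sets computed on the upcoming Example 2 slide: First(E) = First(T) = First(F) = { (, id }, First(E') = { +, ε }, First(T') = { *, ε }.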

62 Computing First(X) : All Grammar Symbols - continued
Informally, suppose we want to compute First(X1 X2 … Xn):
First(X1 X2 … Xn) = First(X1)
  "+" First(X2) if ε is in First(X1)
  "+" First(X3) if ε is in First(X2)
  …
  "+" First(Xn) if ε is in First(Xn-1)
Note 1: only add ε to First(X1 X2 … Xn) if ε is in First(Xi) for all i.
Note 2: for First(X1), if X1 → Z1 Z2 … Zm, then we need to compute First(Z1 Z2 … Zm)!

63 Example 1
Given the production rules:
S → i E t SS' | a
S' → eS | ε
E → b

64 Example 1
Given the production rules:
S → i E t SS' | a
S' → eS | ε
E → b
Verify that First(S) = { i, a }, First(S') = { e, ε }, First(E) = { b }

65 Example 2
Computing First for:
E → TE'    E' → + TE' | ε
T → FT'    T' → * FT' | ε
F → ( E ) | id

66 Example 2
First(TE') = First(T); it does not include First(E'), since T does not derive ε.
First(T) = First(FT') = First(F); it does not include First(T'), since F does not derive ε.
First(F) = First( (E) ) "+" First(id) = { ( , id }
Overall:
First(E) = First(T) = First(F) = { ( , id }
First(E') = { + , ε }
First(T') = { * , ε }

67 Computing Follow(A) : All Non-Terminals
1. Place $ in Follow(S), where S is the start symbol and $ signals the end of the input.
2. If there is a production A → αBβ, then everything in First(β) except ε is in Follow(B).
3. If A → αB is a production, or A → αBβ with β ⇒* ε (First(β) contains ε), then everything in Follow(A) is in Follow(B). (Whatever follows A must be able to follow B, since nothing follows B in the production rule itself.)
We'll calculate Follow for two grammars.

68 The Algorithm for Follow – pseudocode
Initialize Follow(X) to the empty set for all non-terminals X. Place $ in Follow(S), where S is the start NT. Repeat the following step until no modifications are made to any Follow set:
For any production X → X1 X2 … Xm:
  for j = 1 to m, if Xj is a non-terminal then
    Follow(Xj) = Follow(Xj) ∪ (First(Xj+1 … Xm) - {ε});
    if First(Xj+1 … Xm) contains ε, or Xj+1 … Xm = ε (i.e. j = m), then
      Follow(Xj) = Follow(Xj) ∪ Follow(X);
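The pseudocode above, together with the First computation it relies on, can be written as a runnable Python sketch. The encoding is my own: `""` stands for ε and `"$"` for the end of input.

```python
# Fixed-point computation of Follow, built on a First computation.

def first_sets(grammar):
    first = {nt: set() for nt in grammar}
    def first_of_seq(symbols):
        out = set()
        for X in symbols:
            fx = first[X] if X in grammar else {X}
            out |= fx - {""}
            if "" not in fx:
                return out
        return out | {""}                  # whole sequence can derive ε
    changed = True
    while changed:
        changed = False
        for nt, rhss in grammar.items():
            for rhs in rhss:
                new = first_of_seq(rhs)
                if not new <= first[nt]:
                    first[nt] |= new
                    changed = True
    return first, first_of_seq

def follow_sets(grammar, start):
    _, first_of_seq = first_sets(grammar)
    follow = {nt: set() for nt in grammar}
    follow[start].add("$")                 # step 1 of the algorithm
    changed = True
    while changed:
        changed = False
        for A, rhss in grammar.items():
            for rhs in rhss:
                for j, X in enumerate(rhs):
                    if X not in grammar:   # terminals have no Follow set
                        continue
                    tail = first_of_seq(rhs[j + 1:])
                    add = tail - {""}
                    if "" in tail:         # tail is ε or derives ε
                        add |= follow[A]
                    if not add <= follow[X]:
                        follow[X] |= add
                        changed = True
    return follow

G = {
    "E":  [["T", "E'"]],
    "E'": [["+", "T", "E'"], []],
    "T":  [["F", "T'"]],
    "T'": [["*", "F", "T'"], []],
    "F":  [["(", "E", ")"], ["id"]],
}
follow = follow_sets(G, "E")
print(sorted(follow["F"]))  # ['$', ')', '*', '+']
```

The output agrees with the Follow column tabulated on the Example 2 slide below.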

69 Computing Follow : 1st Example
Recall: S → i E t SS' | a, S' → eS | ε, E → b
First(S) = { i, a }, First(S') = { e, ε }, First(E) = { b }

70 Computing Follow : 1st Example
Recall: S → i E t SS' | a, S' → eS | ε, E → b, with First(S) = { i, a }, First(S') = { e, ε }, First(E) = { b }
Follow(S): contains $, since S is the start symbol. Since S → i E t SS', put First(S') - {ε} = { e } in Follow(S). Since S' → eS ends in S, put Follow(S') in Follow(S).
So: Follow(S) = { e, $ }.
Follow(S') = Follow(S). HOW? S' is the last symbol of S → i E t SS', so everything in Follow(S) goes into Follow(S').
Follow(E) = { t }, since t appears directly after E in S → i E t SS'.

71 Example 2
Compute Follow for:
E → TE'    E' → + TE' | ε
T → FT'    T' → * FT' | ε
F → ( E ) | id

72 Example 2
For E → TE', E' → + TE' | ε, T → FT', T' → * FT' | ε, F → ( E ) | id:
      First        Follow
E     ( id         ) $
E'    + ε          ) $
T     ( id         + ) $
T'    * ε          + ) $
F     ( id         + * ) $

73 Motivation Behind First & Follow
First is used to help find the appropriate production to apply, given the top-of-the-stack non-terminal and the current input symbol.
Example: if A → α and a is in First(α), then when a = input, replace A with α on the stack (a is one of the first symbols of α, so when A is on the stack and a is the input, pop A and push α).
Follow is used when First gives no suggestion: when α ⇒* ε or α = ε, what follows A dictates the next choice to be made.
Example: if A → α with α ⇒* ε, and b is in Follow(A), then when b is the input we expand A with α, which will eventually derive ε, after which b can follow.

74 Constructing Parsing Table
Algorithm:
1. Repeat steps 2 & 3 for each rule A → α.
2. For each terminal a in First(α): add A → α to M[A, a].
3. a) If ε is in First(α): add A → α to M[A, b] for every terminal b in Follow(A).
   b) If ε is in First(α) and $ is in Follow(A): add A → α to M[A, $].
4. All undefined entries are errors.
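Steps 2 & 3 above can be sketched in Python, taking the First and Follow sets of the expression grammar (computed on the earlier slides) as given data. The encoding is illustrative: `""` is ε, `"$"` is the end marker, and `[]` is an ε right-hand side.

```python
# Build the LL(1) parsing table M from First and Follow.

FIRST = {"E": {"(", "id"}, "E'": {"+", ""}, "T": {"(", "id"},
         "T'": {"*", ""}, "F": {"(", "id"}}
FOLLOW = {"E": {")", "$"}, "E'": {")", "$"}, "T": {"+", ")", "$"},
          "T'": {"+", ")", "$"}, "F": {"*", "+", ")", "$"}}

GRAMMAR = [                       # (A, alpha) pairs
    ("E", ["T", "E'"]),
    ("E'", ["+", "T", "E'"]), ("E'", []),
    ("T", ["F", "T'"]),
    ("T'", ["*", "F", "T'"]), ("T'", []),
    ("F", ["(", "E", ")"]), ("F", ["id"]),
]

def first_of(alpha):
    """First of a symbol string, from the per-symbol FIRST sets."""
    out = set()
    for X in alpha:
        fx = FIRST.get(X, {X})    # a terminal X has First(X) = {X}
        out |= fx - {""}
        if "" not in fx:
            return out
    return out | {""}             # alpha can derive ε

def build_table():
    M = {}
    for A, alpha in GRAMMAR:
        fa = first_of(alpha)
        for a in fa - {""}:       # step 2: terminal a in First(alpha)
            M[(A, a)] = alpha
        if "" in fa:              # step 3: ε in First(alpha)
            for b in FOLLOW[A]:   # covers both 3a and 3b ($ is in FOLLOW)
                M[(A, b)] = alpha
    return M

M = build_table()
print(M[("E", "id")], M[("E'", "$")])
```

The resulting table reproduces the M shown on the earlier example slide; any (A, a) pair absent from `M` is an error entry.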

75 Constructing Parsing Table – Example 1
S  i E t SS’ | a S’  eS |  E  b First(S) = { i, a } First(S’) = { e,  } First(E) = { b } Follow(S) = { e, $ } Follow(S’) = { e, $ } Follow(E) = { t }

76 Constructing Parsing Table – Example 1
S  i E t SS’ | a S’  eS |  E  b First(S) = { i, a } First(S’) = { e,  } First(E) = { b } Follow(S) = { e, $ } Follow(S’) = { e, $ } Follow(E) = { t } S  i E t SS’ S  a E  b First(i E t SS’)={i} First(a) = {a} First(b) = {b} S’  eS S   First(eS) = {e} First() = {} Follow(S’) = { e, $ } Non-terminal INPUT SYMBOL a b e i t $ S S a S iEtSS’ S’ S’  S’ eS S’  E E b

77 Constructing Parsing Table – Example 2
E  TE’ E’  + TE’ |  T  FT’ T’  * FT’ |  F  ( E ) | id First(E,F,T) = { (, id } First(E’) = { +,  } First(T’) = { *,  } Follow(E,E’) = { ), $} Follow(F) = { *, +, ), $ } Follow(T,T’) = { +, ) , $}

78 Constructing Parsing Table – Example 2
E  TE’ E’  + TE’ |  T  FT’ T’  * FT’ |  F  ( E ) | id First(E,F,T) = { (, id } First(E’) = { +,  } First(T’) = { *,  } Follow(E,E’) = { ), $} Follow(F) = { *, +, ), $ } Follow(T,T’) = { +, ) , $} Expression Example: E  TE’ : First(TE’) = First(T) = { (, id } M[E, ( ] : E  TE’ M[E, id ] : E  TE’ (by rule 2) E’  +TE’ : First(+TE’) = + : M[E’, +] : E’  +TE’ (by rule 3) E’   :  in First( ) T’   :  in First( ) by rule 2 M[E’, )] : E’   (3.1) M[T’, +] : T’   (3.1) M[E’, $] : E’   (3.2) M[T’, )] : T’   (3.1) (Due to Follow(E’) M[T’, $] : T’   (3.2)

79 LL(1) Grammars
L: scan the input from Left to right.
L: construct a Leftmost derivation.
1: use "1" input symbol of lookahead, in conjunction with the stack, to decide on the parsing action.
LL(1) grammars are those with no multiply-defined entries in the parsing table.
Properties of LL(1) grammars:
- The grammar can't be ambiguous or left recursive.
- The grammar is LL(1) when, for A → α | β: α and β do not derive strings starting with the same terminal a, and either α or β can derive ε, but not both.
Note: it may not be possible to manipulate a grammar into an LL(1) grammar.

80 Predictive Parsing Program
Error Recovery: when do errors occur? Recall the predictive parser function:
[Model: input a + b $, a stack holding X, Y, Z over $, the predictive parsing program, the parsing table M[A,a], and the output.]
Errors occur:
- if X is a terminal and it doesn't match the input;
- if M[X, input] is empty: no allowable actions.
Consider two recovery techniques: A. panic mode; B. phrase-level recovery.

81 Panic-Mode Recovery
Assume a non-terminal is on top of the stack. Idea: skip symbols on the input until a token in a selected set of synchronizing tokens is found. The choice of synchronizing set is important. Some ideas:
- Define the synchronizing set of A to be Follow(A): skip input until a token in Follow(A) appears, then pop A from the stack and resume parsing.
- Add the symbols of First(A) to the synchronizing set: skip input, and once we find a token in First(A), resume parsing from A.
- Productions that lead to ε might be used, if available.
- If a terminal appears on top of the stack and does not match the input, pop it and continue parsing, issuing an error message saying that the terminal was inserted.

82 Panic Mode Recovery, II
General approach: modify the empty cells of the parsing table: if M[A,a] is empty and a belongs to Follow(A), set M[A,a] = "synch".
Error-recovery strategy, with A = top of stack and a = current input:
- If A is an NT and M[A,a] is empty, skip a from the input.
- If A is an NT and M[A,a] = synch, pop A.
- If A is a terminal and A ≠ a, pop the terminal (essentially inserting it).
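The three recovery actions above can be layered on the table-driven loop. A rough Python sketch follows; the synch entries are placed according to the Follow sets of the expression grammar, and all names and the error-message wording are mine.

```python
# Table-driven parsing with panic-mode recovery ("synch" cells).

SYNCH = "synch"
TABLE = {
    ("E", "id"): ["T", "E'"], ("E", "("): ["T", "E'"],
    ("E", ")"): SYNCH, ("E", "$"): SYNCH,
    ("E'", "+"): ["+", "T", "E'"], ("E'", ")"): [], ("E'", "$"): [],
    ("T", "id"): ["F", "T'"], ("T", "("): ["F", "T'"],
    ("T", "+"): SYNCH, ("T", ")"): SYNCH, ("T", "$"): SYNCH,
    ("T'", "+"): [], ("T'", "*"): ["*", "F", "T'"],
    ("T'", ")"): [], ("T'", "$"): [],
    ("F", "id"): ["id"], ("F", "("): ["(", "E", ")"],
    ("F", "+"): SYNCH, ("F", "*"): SYNCH,
    ("F", ")"): SYNCH, ("F", "$"): SYNCH,
}
NONTERMS = {"E", "E'", "T", "T'", "F"}

def parse_with_recovery(tokens):
    """Parse to the end of the input, collecting recovery messages."""
    stack, toks, i, errors = ["$", "E"], tokens + ["$"], 0, []
    while True:
        X, a = stack[-1], toks[i]
        if X == "$" and a == "$":
            return errors
        if X not in NONTERMS:            # terminal on top of stack
            if X == a:
                i += 1
            else:
                errors.append(f"inserted missing {X!r}")
            stack.pop()
        else:
            entry = TABLE.get((X, a))
            if entry == SYNCH:           # give up on this construct
                errors.append(f"popped {X}")
                stack.pop()
            elif entry is None:
                if a == "$":             # never skip past end of input
                    errors.append(f"popped {X}")
                    stack.pop()
                else:                    # empty cell: skip the input symbol
                    errors.append(f"skipped {a!r}")
                    i += 1
            else:
                stack.pop()
                stack.extend(reversed(entry))

print(parse_with_recovery(["+", "id", "*", "+", "id"]))
```

On the erroneous input + id * + id this reports one skipped '+' and one popped F, mirroring the trace on the "Revised Parsing Table / Example(2)" slide.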

83 Revised Parsing Table / Example
NT  | id     | +        | *        | (       | )      | $
E   | E→TE'  |          |          | E→TE'   | synch  | synch
E'  |        | E'→+TE'  |          |         | E'→ε   | E'→ε
T   | T→FT'  | synch    |          | T→FT'   | synch  | synch
T'  |        | T'→ε     | T'→*FT'  |         | T'→ε   | T'→ε
F   | F→id   | synch    | synch    | F→(E)   | synch  | synch
Blank cell: skip the input symbol. "synch" action (placed from the Follow sets): pop the top-of-stack NT.

84 Revised Parsing Table / Example(2)
STACK     INPUT          REMARK
$E        + id * + id$   error: skip +
$E        id * + id$
$E'T      id * + id$
$E'T'F    id * + id$
$E'T'id   id * + id$
$E'T'     * + id$
$E'T'F*   * + id$
$E'T'F    + id$          error: M[F,+] = synch; F has been popped
$E'T'     + id$
$E'       + id$
$E'T+     + id$
$E'T      id$
$E'T'F    id$
$E'T'id   id$
$E'T'     $
$E'       $
$         $

85 Writing Error Messages
Keep input counter(s). Recall: every non-terminal symbolizes an abstract language construct. Examples of error messages for our usual grammar:
E means expression. Top-of-stack is E, input is +: "error at location i: expressions cannot start with '+'", or "error at location i: invalid expression". Similarly for E with *.
E' means expression ending. Top-of-stack is E', input is * or id: "error: expression starting at j is badly formed at location i".

86 Writing Error-Messages, II
T means summation term. Top-of-stack is T, input is *: "error at location i: invalid term".
T' means term ending. Top-of-stack is T', input is (: "error: term starting at k is badly formed at location i".
F means summation/multiplication term. Messages for synch errors:
Top-of-stack is F, input is +: "error at location i: expected summation/multiplication term missing".
Top-of-stack is E, input is ): "error at location i: expected expression missing".

87 Writing Error Messages, III
When the top of the stack is a terminal that does not match:
E.g. top-of-stack is id and the input is +: "error at location i: identifier expected".
Top-of-stack is ) and the input is a terminal other than ): every time you match a '(', push its location onto a "left parenthesis" stack (this can also be done with the symbol stack). When the mismatch is discovered, look at the left-parenthesis stack to recover the location of the parenthesis: "error at location i: left parenthesis at location m has no closing right parenthesis".
E.g. consider ( id * + (id id) $

88 Phrase-Level Recovery
Fill in the blank entries of the parsing table with error-handling routines. These routines modify the stack and/or the input stream and issue an error message. Problems:
- Modifying the stack has to be done with care, so as not to create the possibility of derivations that aren't in the language.
- Infinite loops must be avoided.
Can be used in conjunction with panic mode for more complete error handling.

89 How Would You Implement TD Parser
Stack: easy to handle. Input stream: the responsibility of the lexical analyzer. Key issue: how is the parsing table implemented?
One approach: assign unique IDs to all rules, and also to the blanks (which handle errors) and the synch actions.
[The parsing table of the earlier slides, with every rule entry, synch cell, and blank replaced by a numeric ID; see the next slide.]

90 Revised Parsing Table:
NT  | id | +  | *  | (  | )  | $
E   | 1  | 18 | 19 | 1  | 9  | 10
E'  | 20 | 2  | 21 | 22 | 3  | 3
T   | 4  | 11 | 23 | 4  | 12 | 13
T'  | 24 | 6  | 5  | 25 | 6  | 6
F   | 8  | 14 | 15 | 7  | 16 | 17
1: E→TE'  2: E'→+TE'  3: E'→ε  4: T→FT'  5: T'→*FT'  6: T'→ε  7: F→(E)  8: F→id
9 – 17: synch actions.  18 – 25: error handlers.

91 How is Parser Constructed ?
One large CASE statement:
state = M[ top(s), current_token ]
switch (state) {
  case 1: proc_E_TE'(); break;
  …
  case 8: proc_F_id(); break;
  case 9: proc_sync_9(); break;
  …
  case 17: proc_sync_17(); break;   /* some sync actions may be the same: combine them */
  case 18:
  …
  case 25: proc_error(); break;     /* procs to handle errors; some handlers may be similar */
}

92 Final Comments – Top-Down Parsing
So far, we've examined grammars and language theory and their relationship to parsing. Key concepts: rewriting a grammar into an acceptable form. We examined top-down parsing, both brute force (transition diagrams & recursion) and elegant (table driven), and we've identified its shortcomings: not all grammars can be made LL(1)!
Bottom-Up Parsing - future topic.

93 The End

