1
Parsing — Part II (Top-down parsing, left-recursion removal) Copyright 2003, Keith D. Cooper, Ken Kennedy & Linda Torczon, all rights reserved. Students enrolled in Comp 412 at Rice University have explicit permission to make copies of these materials for their personal use.
2
Parsing Techniques
Top-down parsers (LL(1), recursive descent)
- Start at the root of the parse tree and grow toward the leaves
- Pick a production & try to match the input
- Bad “pick” may need to backtrack
- Some grammars are backtrack-free (predictive parsing)
Bottom-up parsers (LR(1), operator precedence)
- Start at the leaves and grow toward the root
- As input is consumed, encode possibilities in an internal state
- Start in a state valid for legal first tokens
- Bottom-up parsers handle a large class of grammars
3
Top-down Parsing
A top-down parser starts with the root of the parse tree. The root node is labeled with the goal symbol of the grammar.
Top-down parsing algorithm:
Construct the root node of the parse tree, then repeat until the fringe of the parse tree matches the input string:
1. At a node labeled A, select a production with A on its lhs and, for each symbol on its rhs, construct the appropriate child
2. When a terminal symbol is added to the fringe and it doesn’t match the input, backtrack
3. Find the next node to be expanded (a node labeled with an NT)
The key is picking the right production in step 1. That choice should be guided by the input string.
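The loop above can be sketched directly as a small backtracking recognizer. The sketch below is an illustration, not something from the slides: the grammar representation (a dict from each non-terminal to a list of right-hand-side tuples), the <eof> marker, and all the names are assumptions of mine. The <eof> marker makes “consume all of the input” part of the goal, so a partial match cannot hide a complete one.

def parse(fringe, pos, grammar, tokens):
    """Try to match the sentential form `fringe` against tokens[pos:].
    Returns the input position after a successful match, or None (backtrack)."""
    if not fringe:
        return pos
    head, rest = fringe[0], fringe[1:]
    if head in grammar:                                # non-terminal: try each production
        for rhs in grammar[head]:
            result = parse(list(rhs) + rest, pos, grammar, tokens)
            if result is not None:
                return result                          # this "pick" worked
        return None                                    # every alternative failed: backtrack
    if pos < len(tokens) and tokens[pos] == head:      # terminal: must match the input
        return parse(rest, pos + 1, grammar, tokens)
    return None

# A small right-recursive grammar for illustration; a left-recursive grammar
# would make the recursion loop forever, which is exactly the problem the
# later slides address.
DEMO_GRAMMAR = {
    "Goal": [("Expr", "<eof>")],
    "Expr": [("Term", "+", "Expr"), ("Term", "-", "Expr"), ("Term",)],
    "Term": [("Fact", "*", "Term"), ("Fact", "/", "Term"), ("Fact",)],
    "Fact": [("num",), ("id",)],
}
tokens = ["id", "-", "num", "*", "id", "<eof>"]        # x - 2 * y
assert parse(["Goal"], 0, DEMO_GRAMMAR, tokens) == len(tokens)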
4
Remember the expression grammar? (The version with precedence derived last lecture.) And the input x – 2 * y.
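The grammar itself was a figure and did not survive extraction. It is presumably the classic expression grammar with precedence from EaC; the reconstruction below is mine, not verbatim from the slide:

Goal   → Expr
Expr   → Expr + Term
       | Expr – Term
       | Term
Term   → Term * Factor
       | Term / Factor
       | Factor
Factor → number
       | id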
5
Example
Let’s try x – 2 * y, using a leftmost derivation and choosing productions in an order that exposes problems. The parser expands Goal to Expr, picks the “+” production for Expr, and expands the leftmost non-terminal down through Term to Fact., which will match x.
6
Example
Let’s try x – 2 * y: This worked well, except that “–” doesn’t match “+”. The parser must backtrack to the node where it chose the “+” production.
7
Example
Continuing with x – 2 * y: The parser backs up and picks the “–” production for Expr instead, again expanding the leftmost non-terminal down through Term to Fact.
8
Example
Continuing with x – 2 * y: This time, “–” and “–” matched, so we can advance past “–” to look at “2”. Now we need to expand Term, the last NT on the fringe.
9
Example
Trying to match the “2” in x – 2 * y: The parser expands Term to Fact.
10
Example
Trying to match the “2” in x – 2 * y: Where are we? “2” matches “2”. We have more input, but no NTs left to expand. The expansion terminated too soon, so we need to backtrack.
11
Example
Trying again with “2” in x – 2 * y: The parser backtracks and expands Term with the “*” production instead, so the fringe becomes 2 * y. This time, we matched & consumed all the input. Success!
12
Other choices for expansion are possible. This one doesn’t terminate (obviously): the parser can keep expanding Expr without ever matching a terminal, producing another possible parse that consumes no input! Wrong choice of expansion leads to non-termination, and non-termination is a bad property for a parser to have. The parser must make the right choice.
13
Left Recursion
Top-down parsers cannot handle left-recursive grammars.
Formally, a grammar is left recursive if ∃ A ∈ NT such that there exists a derivation A ⇒+ Aα, for some string α ∈ (NT ∪ T)+.
Our expression grammar is left recursive.
- This can lead to non-termination in a top-down parser
- For a top-down parser, any recursion must be right recursion
- We would like to convert the left recursion to right recursion
Non-termination is a bad property in any part of a compiler.
14
Eliminating Left Recursion
To remove left recursion, we can transform the grammar.
Consider a grammar fragment of the form
  Fee → Fee α
      | β
where neither α nor β starts with Fee.
We can rewrite this as
  Fee → β Fie
  Fie → α Fie
      | ε
where Fie is a new non-terminal.
This accepts the same language, but uses only right recursion.
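As a concrete illustration (not from the slides), here is a minimal Python sketch of this rewrite. It assumes a grammar stored as a dict from each non-terminal to a list of right-hand sides (tuples of symbols); the primed name for Fie and the empty tuple for ε are conventions of mine.

EPSILON = ()   # an empty right-hand side stands for an ε-production

def remove_immediate_left_recursion(grammar, a):
    """Rewrite A → A α | β as A → β A'; A' → α A' | ε."""
    recursive = [rhs[1:] for rhs in grammar[a] if rhs and rhs[0] == a]     # the α parts
    others    = [rhs for rhs in grammar[a] if not rhs or rhs[0] != a]      # the β parts
    if not recursive:
        return                                        # nothing to do
    a_prime = a + "'"                                 # the new non-terminal ("Fie")
    grammar[a] = [beta + (a_prime,) for beta in others]
    grammar[a_prime] = [alpha + (a_prime,) for alpha in recursive] + [EPSILON]

# For example:
g = {"Expr": [("Expr", "+", "Term"), ("Expr", "-", "Term"), ("Term",)]}
remove_immediate_left_recursion(g, "Expr")
# g["Expr"]  == [("Term", "Expr'")]
# g["Expr'"] == [("+", "Term", "Expr'"), ("-", "Term", "Expr'"), ()]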
15
Eliminating Left Recursion
The expression grammar contains two cases of left recursion: the productions for Expr and the productions for Term. Applying the transformation yields the fragments shown below. These fragments use only right recursion. They retain the original left associativity.
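The transformed fragments were shown as a figure and did not survive extraction. Applying the Fee/Fie rewrite above to the Expr and Term productions gives the standard result (my reconstruction):

Expr  → Term Expr'
Expr' → + Term Expr'
      | – Term Expr'
      | ε
Term  → Factor Term'
Term' → * Factor Term'
      | / Factor Term'
      | ε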
16
Eliminating Left Recursion
Substituting them back into the grammar yields the grammar below. This grammar is correct, if somewhat non-intuitive. It is left associative, as was the original. A top-down parser will terminate using it. A top-down parser may need to backtrack with it.
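The substituted grammar was also a figure; the standard right-recursive form is (my reconstruction):

Goal   → Expr
Expr   → Term Expr'
Expr'  → + Term Expr'
       | – Term Expr'
       | ε
Term   → Factor Term'
Term'  → * Factor Term'
       | / Factor Term'
       | ε
Factor → number
       | id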
17
Eliminating Left Recursion
The transformation eliminates immediate left recursion. What about more general, indirect left recursion?
The general algorithm:
  arrange the NTs into some order A1, A2, …, An
  for i ← 1 to n
    for s ← 1 to i – 1
      replace each production Ai → As γ with Ai → δ1 γ | δ2 γ | … | δk γ,
        where As → δ1 | δ2 | … | δk are all the current productions for As
    eliminate any immediate left recursion on Ai using the direct transformation
This assumes that the initial grammar has no cycles (Ai ⇒+ Ai) and no ε-productions.
Must start with A1 to ensure that a production A1 → A1 β is handled by the direct transformation.
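A sketch of the general algorithm in the same style as the earlier code (again an illustration, not the slides’ own code). It reuses remove_immediate_left_recursion from the sketch above and, like the slide, assumes a grammar with no cycles and no ε-productions.

def remove_left_recursion(grammar, order):
    """`order` lists the non-terminals A1 … An. Indirect left recursion is
    turned into immediate left recursion by substitution, then removed with
    the direct (Fee/Fie) transformation."""
    for i, a_i in enumerate(order):
        for a_s in order[:i]:
            new_rhs = []
            for rhs in grammar[a_i]:
                if rhs and rhs[0] == a_s:
                    # replace Ai → As γ with one production per alternative of As
                    new_rhs.extend(delta + rhs[1:] for delta in grammar[a_s])
                else:
                    new_rhs.append(rhs)
            grammar[a_i] = new_rhs
        remove_immediate_left_recursion(grammar, a_i)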
18
Lookahead in predictive parsing
The lookahead token (the next token in the input) is used to determine which rule should be used next. Consider the following example:
  term → num term'
  term' → ‘+’ num term' | ‘-’ num term' | ε
  num → ‘0’ | ‘1’ | … | ‘9’
Input: 7+3-2
The parse begins by expanding term → num term'.
19
Lookahead in predictive parsing
The lookahead token (the next token in the input) is used to determine which rule should be used next. Consider the following example:
  term → num term'
  term' → ‘+’ num term' | ‘-’ num term' | ε
  num → ‘0’ | ‘1’ | … | ‘9’
Input: 7+3-2
num matches ‘7’; the lookahead ‘+’ selects term' → ‘+’ num term'.
20
Lookahead in predictive parsing
The lookahead token (the next token in the input) is used to determine which rule should be used next. Consider the following example:
  term → num term'
  term' → ‘+’ num term' | ‘-’ num term' | ε
  num → ‘0’ | ‘1’ | ‘2’ | ‘3’ | … | ‘9’
Input: 7+3-2
The lookahead ‘3’ selects num → ‘3’.
21
Lookahead in predictive parsing
The lookahead token (the next token in the input) is used to determine which rule should be used next. Consider the following example:
  term → num term'
  term' → ‘+’ num term' | ‘-’ num term' | ε
  num → ‘0’ | ‘1’ | … | ‘9’
Input: 7+3-2
num matches ‘3’; the lookahead ‘-’ selects term' → ‘-’ num term'.
22
Lookahead in predictive parsing
The lookahead token (the next token in the input) is used to determine which rule should be used next. Consider the following example:
  term → num term'
  term' → ‘+’ num term' | ‘-’ num term' | ε
  num → ‘0’ | ‘1’ | ‘2’ | … | ‘9’
Input: 7+3-2
The lookahead ‘2’ selects num → ‘2’.
23
Lookahead in predictive parsing
The lookahead token (the next token in the input) is used to determine which rule should be used next. Consider the following example:
  term → num term'
  term' → ‘+’ num term' | ‘-’ num term' | ε
  num → ‘0’ | ‘1’ | ‘2’ | … | ‘9’
Input: 7+3-2
num matches ‘2’; the input is now exhausted, so term' → ε and the parse is complete.
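A runnable sketch of a predictive parser for this little grammar, using exactly one token of lookahead. The class and method names are mine; the slides only show the derivation.

class TermParser:
    """term → num term'    term' → '+' num term' | '-' num term' | ε    num → '0'…'9'"""
    def __init__(self, text):
        self.tokens = list(text)                  # each character of "7+3-2" is a token
        self.pos = 0

    def lookahead(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def term(self):                               # term → num term'
        return self.num() and self.term_prime()

    def term_prime(self):                         # the lookahead picks the production
        if self.lookahead() in ("+", "-"):        # term' → '+' num term' | '-' num term'
            self.pos += 1
            return self.num() and self.term_prime()
        return True                               # term' → ε consumes no input

    def num(self):                                # num → '0' | '1' | … | '9'
        tok = self.lookahead()
        if tok is not None and tok.isdigit():
            self.pos += 1
            return True
        return False

p = TermParser("7+3-2")
ok = p.term() and p.pos == len(p.tokens)          # True: the whole input is consumed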
24
Predictive Parsing
Basic idea: Given A → α | β, the parser should be able to choose between α & β.
FIRST sets: For some rhs α ∈ G, define FIRST(α) as the set of tokens that appear as the first symbol in some string that derives from α. That is, x ∈ FIRST(α) iff α ⇒* xγ, for some γ.
The LL(1) Property: If A → α and A → β both appear in the grammar, we would like
  FIRST(α) ∩ FIRST(β) = ∅
This would allow the parser to make a correct choice with a lookahead of exactly one symbol! This is almost correct; see the next slide.
25
Predictive Parsing
What about ε-productions? They complicate the definition of LL(1). If A → α and A → β and ε ∈ FIRST(α), then we need to ensure that FIRST(β) is disjoint from FOLLOW(A), too.
Define FIRST+(α) as
  FIRST(α) ∪ FOLLOW(A), if ε ∈ FIRST(α)
  FIRST(α), otherwise
Then, a grammar is LL(1) iff A → α and A → β implies FIRST+(α) ∩ FIRST+(β) = ∅.
FOLLOW(A) is the set of all words in the grammar that can legally appear immediately after an A.
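A small sketch of how the FIRST+ definition and the LL(1) test might be coded (my own names and representation, not from the slides). It assumes FIRST and FOLLOW are already available as dicts from non-terminal to set, computed for instance as on the next slides, with the string "ε" marking the empty string.

EPS = "ε"

def first_plus(rhs, a, first, follow):
    """FIRST+ of one right-hand side of A, per the definition above."""
    result, nullable = set(), True
    for b in rhs:
        fb = first[b] if b in first else {b}      # for a terminal t, FIRST(t) = {t}
        result |= fb - {EPS}
        if EPS not in fb:
            nullable = False
            break
    if nullable:                                  # ε ∈ FIRST(rhs): add FOLLOW(A)
        result |= {EPS} | follow[a]
    return result

def is_ll1(grammar, first, follow):
    """LL(1) iff the FIRST+ sets of the alternatives for each lhs are pairwise disjoint."""
    for a, productions in grammar.items():
        sets = [first_plus(rhs, a, first, follow) for rhs in productions]
        for i in range(len(sets)):
            for j in range(i + 1, len(sets)):
                if sets[i] & sets[j]:
                    return False
    return True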
26
Determining First Sets
If t is a terminal symbol, t ∈ T:
  FIRST(t) = { t }
If A is a non-terminal symbol, A ∈ NT:
  For each production A → β, where β is β1 β2 … βk:
    FIRST(A) = FIRST(A) ∪ (FIRST(β1) − {ε})
    i ← 1
    while ε ∈ FIRST(βi) and i ≤ k−1:
      FIRST(A) = FIRST(A) ∪ (FIRST(βi+1) − {ε})
      i ← i + 1
    if i == k and ε ∈ FIRST(βk):
      FIRST(A) = FIRST(A) ∪ {ε}
27
First Sets (Example)
  S → AC
  C → c | ε
  A → aBCd | BQ | ε
  B → bB | d
  Q → q
What is FIRST(C)? FIRST(C) = FIRST(c) ∪ FIRST(ε) = {c, ε}
What is FIRST(B)? FIRST(B) = FIRST(b) ∪ FIRST(d) = {b, d}
What is FIRST(A)? FIRST(A) = FIRST(aBCd) ∪ FIRST(BQ) ∪ FIRST(ε) = {a, b, d, ε}
What is FIRST(Q)? FIRST(Q) = {q}
What is FIRST(S)? FIRST(S) = FIRST(AC) = (FIRST(A) − {ε}) ∪ FIRST(C) = {a, b, c, d, ε}
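A sketch of the FIRST computation from the previous slide, wrapped in a fixed-point loop, using the same dict-of-tuples grammar representation as earlier and the string "ε" for the empty string (my conventions). The usage comment checks it against the example above.

EPS = "ε"

def first_sets(grammar):
    """FIRST for every non-terminal; a terminal t has FIRST(t) = {t}."""
    first = {a: set() for a in grammar}

    def first_of(symbol):
        return first[symbol] if symbol in grammar else {symbol}

    changed = True
    while changed:                                 # iterate to a fixed point
        changed = False
        for a, productions in grammar.items():
            for rhs in productions:                # rhs is a tuple β1 β2 … βk
                before = len(first[a])
                all_nullable = True
                for b in rhs:
                    first[a] |= first_of(b) - {EPS}
                    if EPS not in first_of(b):
                        all_nullable = False
                        break
                if all_nullable:                   # covers the empty rhs (ε-production)
                    first[a].add(EPS)
                changed |= len(first[a]) != before
    return first

G = {
    "S": [("A", "C")],
    "C": [("c",), ()],
    "A": [("a", "B", "C", "d"), ("B", "Q"), ()],
    "B": [("b", "B"), ("d",)],
    "Q": [("q",)],
}
# first_sets(G)["A"] == {"a", "b", "d", "ε"} and first_sets(G)["S"] == {"a", "b", "c", "d", "ε"}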
28
Determining Follow Sets
For each A ∈ NT, FOLLOW(A) = ∅
FOLLOW(S) = {$}        ($ is eof; S is the start symbol)
While (FOLLOW sets are still changing):
  For each production A → αBβ with ε ∉ FIRST(β):
    FOLLOW(B) = FOLLOW(B) ∪ FIRST(β)
  For each production A → αBβ with ε ∈ FIRST(β):
    FOLLOW(B) = FOLLOW(B) ∪ (FIRST(β) − {ε}) ∪ FOLLOW(A)
(When β is empty, FIRST(β) = {ε}, so the second rule adds FOLLOW(A) to FOLLOW(B) whenever B ends a right-hand side.)
29
Follow Sets (Example)
  S → AC
  C → c | ε
  A → aBCd | BQ | ε
  B → bB | d
  Q → q
What is FOLLOW(S)? FOLLOW(S) = {$}
What is FOLLOW(C)? FOLLOW(C) = {d} ∪ FOLLOW(S) = {d, $}
What is FOLLOW(B)? FOLLOW(B) = FIRST(Cd) ∪ FIRST(Q) ∪ FOLLOW(B) = {c, d, q}
What is FOLLOW(A)? FOLLOW(A) = (FIRST(C) − {ε}) ∪ FOLLOW(S) = {c, $}
What is FOLLOW(Q)? FOLLOW(Q) = FOLLOW(A) = {c, $}
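A matching sketch of the FOLLOW computation, continuing the code from the FIRST example (it reuses EPS, G, and first_sets from that sketch; again, the names and representation are mine).

def follow_sets(grammar, start, first):
    """FOLLOW for every non-terminal; "$" marks end of input."""
    follow = {a: set() for a in grammar}
    follow[start].add("$")

    def first_of_string(symbols):
        out = set()
        for b in symbols:
            fb = first[b] if b in grammar else {b}
            out |= fb - {EPS}
            if EPS not in fb:
                return out
        out.add(EPS)                               # the whole suffix can derive ε
        return out

    changed = True
    while changed:
        changed = False
        for a, productions in grammar.items():
            for rhs in productions:
                for i, b in enumerate(rhs):
                    if b not in grammar:
                        continue                   # FOLLOW is defined only for non-terminals
                    before = len(follow[b])
                    tail = first_of_string(rhs[i + 1:])
                    follow[b] |= tail - {EPS}
                    if EPS in tail:                # everything after B can vanish
                        follow[b] |= follow[a]
                    changed |= len(follow[b]) != before
    return follow

# follow_sets(G, "S", first_sets(G))["B"] == {"c", "d", "q"}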
30
Predictive Parsing
Given a grammar that has the LL(1) property, we can write a simple routine to recognize each lhs. The code is both simple & fast.
Consider A → β1 | β2 | β3, with the FIRST+ sets of β1, β2, and β3 pairwise disjoint:
  /* find an A */
  if (current_word ∈ FIRST(β1))
    find a β1 and return true
  else if (current_word ∈ FIRST(β2))
    find a β2 and return true
  else if (current_word ∈ FIRST(β3))
    find a β3 and return true
  else
    report an error and return false
Of course, there is more detail to “find a βi” (§ 3.3.4 in EaC).
Grammars with the LL(1) property are called predictive grammars because the parser can “predict” the correct expansion at each point in the parse. Parsers that capitalize on the LL(1) property are called predictive parsers. One kind of predictive parser is the recursive descent parser.
31
Recursive Descent Parsing
Recall the expression grammar, after transformation. This produces a parser with six mutually recursive routines:
  Goal, Expr, EPrime, Term, TPrime, Factor
Each recognizes one NT or T.
The term descent refers to the direction in which the parse tree is built.
32
Recursive Descent Parsing (Procedural)
A couple of routines from the expression parser:
  Goal( )
    token ← next_token( );
    if (Expr( ) = true & token = EOF)
      then next compilation step;
      else report syntax error;        /* looking for EOF, found token */
           return false;
  Expr( )
    if (Term( ) = false)
      then return false;
      else return EPrime( );
  Factor( )
    if (token = Number)
      then token ← next_token( );
           return true;
    else if (token = Identifier)
      then token ← next_token( );
           return true;
    else report syntax error;          /* looking for Number or Identifier, found token instead */
         return false;
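For comparison, a runnable Python rendering of the same routines, completed with EPrime, Term, and TPrime. The token interface (a list of (kind, text) pairs and the kind names "number", "id", and "eof") is modeled by me and is not part of the slides.

EOF = "eof"

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens + [(EOF, "")]
        self.pos = 0

    def kind(self):
        return self.tokens[self.pos][0]

    def advance(self):
        self.pos += 1

    def goal(self):                                # Goal → Expr
        return self.expr() and self.kind() == EOF

    def expr(self):                                # Expr → Term EPrime
        return self.term() and self.eprime()

    def eprime(self):                              # EPrime → + Term EPrime | - Term EPrime | ε
        if self.kind() in ("+", "-"):
            self.advance()
            return self.term() and self.eprime()
        return True

    def term(self):                                # Term → Factor TPrime
        return self.factor() and self.tprime()

    def tprime(self):                              # TPrime → * Factor TPrime | / Factor TPrime | ε
        if self.kind() in ("*", "/"):
            self.advance()
            return self.factor() and self.tprime()
        return True

    def factor(self):                              # Factor → number | id
        if self.kind() in ("number", "id"):
            self.advance()
            return True
        return False

# Parser([("id", "x"), ("-", "-"), ("number", "2"), ("*", "*"), ("id", "y")]).goal() returns True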
33
Left Factoring
What if my grammar does not have the LL(1) property? Sometimes, we can transform the grammar.
The Algorithm
  ∀ A ∈ NT, find the longest prefix α that occurs in two or more right-hand sides of A
  if α ≠ ε then
    replace all of the A productions
      A → α β1 | α β2 | … | α βn | γ
    with
      A → α Z | γ
      Z → β1 | β2 | … | βn
    where Z is a new element of NT
  Repeat until no common prefixes remain
34
Left Factoring
A graphical explanation for the same idea: A → α β1 | α β2 | α β3 becomes A → α Z with Z → β1 | β2 | β3. The three branches under A that shared the prefix α now hang off the single new node Z.
35
Left Factoring (An example)
Consider the following fragment of the expression grammar:
  Factor → Identifier
         | Identifier [ ExprList ]
         | Identifier ( ExprList )
FIRST(rhs1) = { Identifier }, FIRST(rhs2) = { Identifier }, FIRST(rhs3) = { Identifier }
After left factoring, it becomes (with Arguments as the new non-terminal):
  Factor → Identifier Arguments
  Arguments → [ ExprList ]
            | ( ExprList )
            | ε
FIRST(rhs1) = { Identifier }, FIRST(rhs2) = { [ }, FIRST(rhs3) = { ( }, FIRST+(rhs4) = FOLLOW(Factor)
This form has the same syntax, and it has the LL(1) property.
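A sketch of the left-factoring rewrite in the same style as the earlier code (an illustration with my own naming scheme for the new non-terminal). It factors the productions of a single non-terminal; a full implementation would repeat this for every non-terminal, including the newly introduced ones.

def common_prefix(x, y):
    n = 0
    while n < len(x) and n < len(y) and x[n] == y[n]:
        n += 1
    return x[:n]

def left_factor(grammar, a):
    """Repeatedly factor out the longest prefix shared by two or more
    right-hand sides of A, introducing a new non-terminal each time."""
    round_no = 0
    while True:
        alts = grammar[a]
        best = ()
        for i in range(len(alts)):                 # longest prefix shared by two or more rhs
            for j in range(i + 1, len(alts)):
                p = common_prefix(alts[i], alts[j])
                if len(p) > len(best):
                    best = p
        if not best:
            return                                 # no common prefixes remain
        round_no += 1
        z = a + "_Z" + str(round_no)               # the new non-terminal Z
        shares = [rhs for rhs in alts if rhs[:len(best)] == best]
        others = [rhs for rhs in alts if rhs[:len(best)] != best]
        grammar[a] = others + [best + (z,)]
        grammar[z] = [rhs[len(best):] for rhs in shares]   # () stands for ε

g = {"Factor": [("Identifier",),
                ("Identifier", "[", "ExprList", "]"),
                ("Identifier", "(", "ExprList", ")")]}
left_factor(g, "Factor")
# g["Factor"]    == [("Identifier", "Factor_Z1")]
# g["Factor_Z1"] == [(), ("[", "ExprList", "]"), ("(", "ExprList", ")")]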
36
Left Factoring
Graphically: before factoring, all three alternatives under Factor begin with Identifier, so the next word gives no basis for choice; after factoring, the word after Identifier determines the correct choice.
37
Left Factoring (Generality)
Question: By eliminating left recursion and left factoring, can we transform an arbitrary CFG to a form where it meets the LL(1) condition (and can be parsed predictively with a single token of lookahead)?
Answer: Given a CFG that doesn’t meet the LL(1) condition, it is undecidable whether or not an equivalent LL(1) grammar exists.
Example: {aⁿ 0 bⁿ | n ≥ 1} ∪ {aⁿ 1 b²ⁿ | n ≥ 1} has no LL(1) grammar.
38
Language that Cannot Be LL(1)
Example: {aⁿ 0 bⁿ | n ≥ 1} ∪ {aⁿ 1 b²ⁿ | n ≥ 1} has no LL(1) grammar:
  G → aAb
    | aBbb
  A → aAb | 0
  B → aBbb | 1
Problem: the parser needs an unbounded number of a characters before it can determine whether it is in the A group or the B group.
39
Recursive Descent (Summary)
1. Build FIRST (and FOLLOW) sets
2. Massage the grammar to have the LL(1) condition
   a. Remove left recursion
   b. Left factor it
3. Define a procedure for each non-terminal
   a. Implement a case for each right-hand side
   b. Call procedures as needed for non-terminals
4. Add extra code, as needed
   a. Perform context-sensitive checking
   b. Build an IR to record the code
Can we automate this process?