1
Parsing IV: Bottom-up Parsing
2
Parsing Techniques
Top-down parsers (LL(1), recursive descent)
- Start at the root of the parse tree and grow toward leaves
- Pick a production & try to match the input
- Bad “pick” may need to backtrack
- Some grammars are backtrack-free (predictive parsing)
Bottom-up parsers (LR(1), operator precedence)
- Start at the leaves and grow toward root
- As input is consumed, encode possibilities in an internal state
- Start in a state valid for legal first tokens
Bottom-up parsers handle a large class of grammars
3
Bottom-up Parsing (definitions)
The point of parsing is to construct a derivation. A derivation consists of a series of rewrite steps
  S = γ0 → γ1 → γ2 → … → γn–1 → γn = sentence
- Each γi is a sentential form
- If γ contains only terminal symbols, γ is a sentence in L(G)
- If γ contains ≥ 1 non-terminal, γ is a sentential form
To get γi from γi–1, expand some NT A ∈ γi–1 by using A → β
- Replace the occurrence of A in γi–1 with β to get γi
- In a leftmost derivation, it would be the first NT A ∈ γi–1
A left-sentential form occurs in a leftmost derivation; a right-sentential form occurs in a rightmost derivation
4
Bottom-up Parsing
A bottom-up parser builds a derivation by working from the input sentence back toward the start symbol S
  S = γ0 → γ1 → γ2 → … → γn–1 → γn = sentence
To reduce γi to γi–1, match some rhs β against γi, then replace β with its corresponding lhs, A (assuming the production A → β)
In terms of the parse tree, this is working from leaves to root
- Nodes with no parent in a partial tree form its upper fringe
- Since each replacement of β with A shrinks the upper fringe, we call it a reduction
The parse tree need not be built, it can be simulated
  |parse tree nodes| = |words| + |reductions|
5
Finding Reductions
Consider the simple grammar and the input string abbcde
- The trick is scanning the input and finding the next reduction
- The mechanism for doing this must be efficient
6
Finding Reductions (Handles)
The parser must find a substring β of the tree’s frontier that
- matches the right-hand side of some production A → β, and
- occurs as one step in the rightmost derivation (A → β is in the RRD)
Informally, we call this substring β a handle
Formally, a handle of a right-sentential form γ is a pair ⟨A → β, k⟩ where A → β ∈ P and k is the position in γ of β’s rightmost symbol.
If ⟨A → β, k⟩ is a handle, then replacing β at k with A produces the right-sentential form from which γ is derived in the rightmost derivation.
Because γ is a right-sentential form, the substring to the right of a handle contains only terminal symbols
⇒ the parser doesn’t need to scan past the handle (very far)
7
Finding Reductions (Handles)
Critical Insight (Theorem?)
If G is unambiguous, then every right-sentential form has a unique handle.
If we can find those handles, we can build a derivation!
Sketch of Proof:
1. G is unambiguous ⇒ the rightmost derivation is unique
2. ⇒ a unique production A → β applied to derive γi from γi–1
3. ⇒ a unique position k at which A → β is applied
4. ⇒ a unique handle ⟨A → β, k⟩
This all follows from the definitions
8
Example (a very busy slide)
The expression grammar and the handles for the rightmost derivation of x – 2 * y.
This is the inverse of Figure 3.9 in EaC
9
Handle-pruning, Bottom-up Parsers
The process of discovering a handle & reducing it to the appropriate left-hand side is called handle pruning
Handle pruning forms the basis for a bottom-up parsing method
To construct a rightmost derivation
  S = γ0 → γ1 → γ2 → … → γn–1 → γn = w
apply the following simple algorithm:
  for i ← n to 1 by –1
    Find the handle ⟨Ai → βi, ki⟩ in γi
    Replace βi with Ai to generate γi–1
This takes 2n steps
10
Handle-pruning, Bottom-up Parsers
One implementation technique is the shift-reduce parser

  push INVALID
  token ← next_token()
  repeat until (top of stack = Goal and token = EOF)
    if the top of the stack is a handle A → β
      then  // reduce β to A
        pop |β| symbols off the stack
        push A onto the stack
    else if (token ≠ EOF)
      then  // shift
        push token
        token ← next_token()
    else  // need to shift, but out of input
      report an error

(Figure 3.7 in EaC)
How do errors show up?
- failure to find a handle
- hitting EOF & needing to shift (final else clause)
Either generates an error
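As a concrete sketch of this skeleton, the loop above can be hardwired for the right-recursive expression grammar used in these slides (Goal → Expr; Expr → Term – Expr | Term; Term → Factor * Term | Factor; Factor → ident). The find_handle oracle below encodes by hand the shift/reduce decisions that a table-driven parser would read out of its tables; its rules and names like INVALID are illustrative assumptions, not something from the slides.

```python
# Shift-reduce parsing with a hand-written handle oracle for the
# right-recursive expression grammar. A real parser replaces
# find_handle with a DFA over the stack (developed on later slides).
EOF, INVALID = "<EOF>", "<INVALID>"

def find_handle(stack, token):
    """Return (lhs, |rhs|) if the frontier on the stack ends in a handle."""
    if stack[-1] == "ident":
        return ("Factor", 1)                      # Factor -> ident
    if stack[-1] == "Factor" and token != "*":
        return ("Term", 1)                        # Term -> Factor
    if stack[-3:] == ["Factor", "*", "Term"]:
        return ("Term", 3)                        # Term -> Factor * Term
    if stack[-3:] == ["Term", "-", "Expr"]:
        return ("Expr", 3)                        # Expr -> Term - Expr
    if stack[-1] == "Term" and token == EOF:
        return ("Expr", 1)                        # Expr -> Term
    if stack[-2:] == [INVALID, "Expr"]:
        return ("Goal", 1)                        # Goal -> Expr
    return None

def parse(words):
    toks = iter(list(words) + [EOF])
    stack, token = [INVALID], next(toks)
    while not (stack[-1] == "Goal" and token == EOF):
        handle = find_handle(stack, token)
        if handle:                                # reduce
            lhs, n = handle
            del stack[len(stack) - n:]
            stack.append(lhs)
        elif token != EOF:                        # shift
            stack.append(token)
            token = next(toks)
        else:
            return False                          # out of input, no handle
    return True
```

Note how the Factor rule consults the lookahead: with * coming, the parser shifts; otherwise it reduces. This is the same decision discussed on the later “Back to Finding Handles” slide.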
11
Back to x - 2 * y 1. Shift until the top of the stack is the right end of a handle 2. Find the left end of the handle & reduce
16
Back to x – 2 * y 1. Shift until the top of the stack is the right end of a handle 2. Find the left end of the handle & reduce 5 shifts + 9 reduces + 1 accept
17
Example
Parse tree for x – 2 * y, with interior nodes Goal, Expr, Term, and Fact (figure)
18
Shift-reduce Parsing
Shift-reduce parsers are easily built and easily understood
A shift-reduce parser has just four actions
- Shift — next word is shifted onto the stack
- Reduce — right end of handle is at top of stack; locate left end of handle within the stack; pop handle off stack & push appropriate lhs
- Accept — stop parsing & report success
- Error — call an error reporting/recovery routine
Accept & Error are simple
Shift is just a push and a call to the scanner
Reduce takes |rhs| pops & 1 push
If handle-finding requires state, put it in the stack ⇒ 2x work
Handle finding is key
- handle is on stack
- finite set of handles
⇒ use a DFA!
19
An Important Lesson about Handles
To be a handle, a substring of a sentential form γ must have two properties:
- It must match the right-hand side β of some rule A → β
- There must be some rightmost derivation from the goal symbol that produces the sentential form γ with A → β as the last production applied
Simply looking for right-hand sides that match strings is not good enough
Critical Question: How can we know when we have found a handle without generating lots of different derivations?
Answer: we use lookahead in the grammar along with tables produced as the result of analyzing the grammar.
LR(1) parsers build a DFA that runs over the stack & finds them
20
An Important Lesson about Handles
To be a handle, a substring of a sentential form γ must have two properties:
- It must match the right-hand side β of some rule A → β
- There must be some rightmost derivation from the goal symbol that produces the sentential form γ with A → β as the last production applied
We have seen that simply looking for right-hand sides that match strings is not good enough
Critical Question: How can we know when we have found a handle without generating lots of different derivations?
Answer: we use lookahead in the grammar along with tables produced as the result of analyzing the grammar.
- There are a number of different ways to do this.
- We will look at two: operator precedence and LR parsing
21
LR(1) Parsers
LR(1) parsers are table-driven, shift-reduce parsers that use a limited right context (1 token) for handle recognition
LR(1) parsers recognize languages that have an LR(1) grammar
Informal definition: A grammar is LR(1) if, given a rightmost derivation
  S = γ0 → γ1 → γ2 → … → γn–1 → γn = sentence
we can
1. isolate the handle of each right-sentential form γi, and
2. determine the production by which to reduce,
by scanning γi from left to right, going at most 1 symbol beyond the right end of the handle of γi
22
LR(1) Parsers
A table-driven LR(1) parser looks like [figure: source code → Scanner → Table-driven Parser → IR, with ACTION & GOTO tables produced from the grammar by a parser generator]
Tables can be built by hand
However, this is a perfect task to automate
23
LR(1) Skeleton Parser

  stack.push(INVALID);
  stack.push(s0);
  not_found = true;
  token = scanner.next_token();
  while (not_found) {
    s = stack.top();
    if ( ACTION[s,token] == “reduce A → β” ) then {
      stack.popnum(2*|β|);   // pop 2*|β| symbols
      s = stack.top();
      stack.push(A);
      stack.push(GOTO[s,A]);
    }
    else if ( ACTION[s,token] == “shift si” ) then {
      stack.push(token);
      stack.push(si);
      token ← scanner.next_token();
    }
    else if ( ACTION[s,token] == “accept” && token == EOF ) then
      not_found = false;
    else
      report a syntax error and recover;
  }
  report success;

The skeleton parser
- uses ACTION & GOTO tables
- does |words| shifts
- does |derivation| reductions
- does 1 accept
- detects errors by failure of the 3 other cases
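Filling in the skeleton with concrete tables makes it runnable. This Python sketch hardwires ACTION and GOTO for the left-recursive SheepNoise grammar (Goal → SheepNoise; SheepNoise → SheepNoise baa | baa); the state numbers follow the canonical collection built for SheepNoise later in the deck, and the tuple encoding of table entries is an assumption of this sketch.

```python
# Table-driven LR(1) parsing with hand-built tables for SheepNoise.
# States s0..s3 match the SheepNoise canonical collection in the slides.
EOF = "<EOF>"

ACTION = {
    (0, "baa"): ("shift", 2),
    (1, "baa"): ("shift", 3),
    (1, EOF):   ("accept",),
    (2, "baa"): ("reduce", "SheepNoise", 1),   # SheepNoise -> baa
    (2, EOF):   ("reduce", "SheepNoise", 1),
    (3, "baa"): ("reduce", "SheepNoise", 2),   # SheepNoise -> SheepNoise baa
    (3, EOF):   ("reduce", "SheepNoise", 2),
}
GOTO = {(0, "SheepNoise"): 1}

def parse(words):
    toks = iter(list(words) + [EOF])
    stack = [0]          # alternating symbols and states; a state is on top
    token = next(toks)
    while True:
        act = ACTION.get((stack[-1], token))
        if act is None:
            return False                           # error: no table entry
        if act[0] == "shift":
            stack += [token, act[1]]
            token = next(toks)
        elif act[0] == "reduce":
            _, lhs, rhs_len = act
            del stack[len(stack) - 2 * rhs_len:]   # pop 2*|rhs| entries
            s = stack[-1]                          # exposed state
            stack += [lhs, GOTO[(s, lhs)]]
        else:
            return True                            # accept
```

As in the skeleton, each reduce pops 2·|rhs| stack entries (symbol/state pairs) and then consults GOTO from the exposed state.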
24
LR(1) Parsers (parse tables)
To make a parser for L(G), we need a set of tables. (The slide shows the grammar and the corresponding ACTION & GOTO tables.)
25
Example Parse 1 The string “baa”
29
Example Parse 2 The string “baa baa”
33
LR(1) Parsers
How does this LR(1) stuff work?
- Unambiguous grammar ⇒ unique rightmost derivation
- Keep upper fringe on a stack
  - All active handles include top of stack (TOS)
  - Shift inputs until TOS is right end of a handle
- Language of handles is regular (finite)
  - Build a handle-recognizing DFA
  - ACTION & GOTO tables encode the DFA
- To match subterm, invoke subterm DFA & leave old DFA’s state on stack
- Final state in DFA ⇒ a reduce action
  - New state is GOTO[state at TOS (after pop), lhs]
  - For SN, this takes the DFA to s1
[figure: control DFA for SN, states S0–S3 on baa; reduce action]
34
Building LR(1) Parsers
How do we generate the ACTION and GOTO tables?
- Use the grammar to build a model of the DFA
- Use the model to build ACTION & GOTO tables
- If construction succeeds, the grammar is LR(1)
The Big Picture
- Model the state of the parser
- Use two functions goto(s, X) and closure(s)
  - goto() is analogous to move() in the subset construction
  - closure() adds information to round out a state
- Build up the states and transition functions of the DFA
- Use this information to fill in the ACTION and GOTO tables
(X is a terminal or non-terminal)
35
LR(k) items
The LR(1) table construction algorithm uses LR(1) items to represent valid configurations of an LR(1) parser
An LR(k) item is a pair [P, δ], where
- P is a production A → β with a • at some position in the rhs
- δ is a lookahead string of length ≤ k (words or EOF)
The • in an item indicates the position of the top of the stack
- [A → •βγ, a] means that the input seen so far is consistent with the use of A → βγ immediately after the symbol on top of the stack
- [A → β•γ, a] means that the input seen so far is consistent with the use of A → βγ at this point in the parse, and that the parser has already recognized β
- [A → βγ•, a] means that the parser has seen βγ, and that a lookahead symbol of a is consistent with reducing to A
36
LR(1) Items
The production A → β, where β = B1B2B3, with lookahead a, can give rise to 4 items:
  [A → •B1B2B3, a], [A → B1•B2B3, a], [A → B1B2•B3, a], and [A → B1B2B3•, a]
The set of LR(1) items for a grammar is finite
What’s the point of all these lookahead symbols?
- Carry them along to choose the correct reduction, if there is a choice
- Lookaheads are bookkeeping, unless the item has • at the right end
  - No direct use in [A → β•γ, a]
  - In [A → β•, a], a lookahead of a implies a reduction by A → β
  - For { [A → β•, a], [B → γ•δ, b] }, a ⇒ reduce to A; FIRST(δ) ⇒ shift
Limited right context is enough to pick the actions
37
LR(1) Table Construction
High-level overview
1. Build the canonical collection of sets of LR(1) items, I
   a. Begin in an appropriate state, s0
      - [S’ → •S, EOF], along with any equivalent items
      - Derive equivalent items as closure(s0)
   b. Repeatedly compute, for each sk and each X, goto(sk, X)
      - If the set is not already in the collection, add it
      - Record all the transitions created by goto()
   This eventually reaches a fixed point
2. Fill in the table from the collection of sets of LR(1) items
The canonical collection completely encodes the transition diagram for the handle-finding DFA
38
Back to Finding Handles
Revisiting an issue from last class
Parser in a state where the stack (the fringe) was Expr – Term, with lookahead of *
How did it choose to expand Term rather than reduce to Expr?
- Lookahead symbol is the key
- With lookahead of + or –, parser should reduce to Expr
- With lookahead of * or /, parser should shift
- Parser uses lookahead to decide
All this context from the grammar is encoded in the handle-recognizing mechanism
39
Back to x – 2 * y
1. Shift until TOS is the right end of a handle
2. Find the left end of the handle & reduce
[figure annotations: “shift here”, “reduce here”]
40
Computing FIRST Sets
Define FIRST as
- If α ⇒* aβ, with a ∈ T and β ∈ (T ∪ NT)*, then a ∈ FIRST(α)
- If α ⇒* ε, then ε ∈ FIRST(α)
- Note: if α = Xβ and ε ∉ FIRST(X), then FIRST(α) = FIRST(X)
To compute FIRST
- Use a fixed-point method
- FIRST(A) ∈ 2^(T ∪ {ε})
- The loop is monotonic, so the algorithm halts
Note: the final (augmentation) step of the FIRST computation shown here uses FOLLOW, so the FOLLOW sets must be available before that step runs
41
Computing FOLLOW Sets

  FOLLOW(S) ← { EOF }
  for each A ∈ NT, FOLLOW(A) ← Ø
  while (FOLLOW sets are still changing)
    for each p ∈ P, of the form A → β1β2…βk
      FOLLOW(βk) ← FOLLOW(βk) ∪ FOLLOW(A)
      TRAILER ← FOLLOW(A)
      for i ← k down to 2
        if ε ∈ FIRST(βi) then
          FOLLOW(βi–1) ← FOLLOW(βi–1) ∪ { FIRST(βi) – {ε} } ∪ TRAILER
        else
          FOLLOW(βi–1) ← FOLLOW(βi–1) ∪ FIRST(βi)
          TRAILER ← Ø

For SheepNoise:
FOLLOW(Goal) = { EOF }
FOLLOW(SN) = { baa, EOF }
42
Computing FIRST Sets

  for each x ∈ T, FIRST(x) ← { x }
  for each A ∈ NT, FIRST(A) ← Ø
  while (FIRST sets are still changing)
    for each p ∈ P, of the form A → β
      if β is ε then
        FIRST(A) ← FIRST(A) ∪ { ε }
      else if β is B1B2…Bk then begin
        FIRST(A) ← FIRST(A) ∪ ( FIRST(B1) – { ε } )
        for i ← 1 to k–1 by 1 while ε ∈ FIRST(Bi)
          FIRST(A) ← FIRST(A) ∪ ( FIRST(Bi+1) – { ε } )
        if i = k–1 and ε ∈ FIRST(Bk) then
          FIRST(A) ← FIRST(A) ∪ { ε }
      end
  for each A ∈ NT
    if ε ∈ FIRST(A) then FIRST(A) ← FIRST(A) ∪ FOLLOW(A)

For SheepNoise:
FIRST(Goal) = { baa }
FIRST(SN) = { baa }
FIRST(baa) = { baa }
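Both fixed-point algorithms translate directly into Python. The sketch below computes the basic FIRST sets (without the final FOLLOW augmentation) and then FOLLOW, for the simplified right-recursive expression grammar that appears later in the deck; EPS and EOF are stand-in marker values.

```python
# Fixed-point FIRST and FOLLOW computation for the right-recursive
# expression grammar. EPS marks the empty string, EOF the end of input.
EPS, EOF = "<eps>", "<EOF>"

grammar = {  # NT -> list of right-hand sides
    "Goal":   [["Expr"]],
    "Expr":   [["Term", "-", "Expr"], ["Term"]],
    "Term":   [["Factor", "*", "Term"], ["Factor"]],
    "Factor": [["ident"]],
}
NT = set(grammar)

def first_sets():
    # terminals start as {themselves}; non-terminals start empty
    first = {x: {x} for p in grammar.values() for rhs in p
             for x in rhs if x not in NT}
    first.update({A: set() for A in NT})
    changed = True
    while changed:                       # iterate to a fixed point
        changed = False
        for A, prods in grammar.items():
            for rhs in prods:
                acc = set()
                for X in rhs:            # FIRST of the sequence rhs
                    acc |= first[X] - {EPS}
                    if EPS not in first[X]:
                        break
                else:
                    acc.add(EPS)         # every symbol can vanish
                if not acc <= first[A]:
                    first[A] |= acc
                    changed = True
    return first

def follow_sets(first, start="Goal"):
    follow = {A: set() for A in NT}
    follow[start].add(EOF)
    changed = True
    while changed:
        changed = False
        for A, prods in grammar.items():
            for rhs in prods:
                trailer = set(follow[A])       # what can follow the suffix
                for X in reversed(rhs):
                    if X in NT:
                        if not trailer <= follow[X]:
                            follow[X] |= trailer
                            changed = True
                        if EPS in first[X]:
                            trailer |= first[X] - {EPS}
                        else:
                            trailer = set(first[X] - {EPS})
                    else:
                        trailer = {X}
    return follow
```

For this grammar, the fixed point gives FIRST(Expr) = { ident } and FOLLOW(Factor) = { *, –, EOF }, matching what the hand computation produces.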
43
Computing Closures
Closure(s) adds all the items implied by items already in s
- Any item [A → β•Bδ, a] implies [B → •τ, x] for each production with B on the lhs, and each x ∈ FIRST(δa)
- Since βB is valid, any way to derive βB is valid, too
The algorithm

  Closure( s )
    while ( s is still changing )
      ∀ items [A → β•Bδ, a] ∈ s
        ∀ productions B → τ ∈ P
          ∀ b ∈ FIRST(δa)   // δ might be ε
            if [B → •τ, b] ∉ s
              then add [B → •τ, b] to s

- Classic fixed-point method
- Halts because s ⊆ ITEMS
- Worklist version is faster
Closure “fills out” a state
44
Example From SheepNoise
Initial step builds the item [Goal → •SheepNoise, EOF] and takes its closure()
Closure( [Goal → •SheepNoise, EOF] )
So, S0 is { [Goal → •SheepNoise, EOF], [SheepNoise → •SheepNoise baa, EOF], [SheepNoise → •baa, EOF], [SheepNoise → •SheepNoise baa, baa], [SheepNoise → •baa, baa] }
Remember, this is the left-recursive SheepNoise; EaC shows the right-recursive version.
45
Computing Gotos
Goto(s, x) computes the state that the parser would reach if it recognized an x while in state s
- Goto( { [A → β•Xδ, a] }, X ) produces [A → βX•δ, a] (obviously)
- It also includes closure( [A → βX•δ, a] ) to fill out the state
The algorithm

  Goto( s, X )
    new ← Ø
    ∀ items [A → β•Xδ, a] ∈ s
      new ← new ∪ { [A → βX•δ, a] }
    return closure(new)

- Not a fixed-point method!
- Straightforward computation
- Uses closure()
Goto() moves forward
46
Example from SheepNoise
S0 is { [Goal → •SheepNoise, EOF], [SheepNoise → •SheepNoise baa, EOF], [SheepNoise → •baa, EOF], [SheepNoise → •SheepNoise baa, baa], [SheepNoise → •baa, baa] }
Goto( S0, baa )
- Loop produces { [SheepNoise → baa•, EOF], [SheepNoise → baa•, baa] }
- Closure adds nothing since • is at the end of the rhs in each item
In the construction, this produces s2: { [SheepNoise → baa•, {EOF, baa}] }
New, but obvious, notation for the two distinct items [SheepNoise → baa•, EOF] & [SheepNoise → baa•, baa]
47
Example from SheepNoise
S0: { [Goal → •SheepNoise, EOF], [SheepNoise → •SheepNoise baa, EOF], [SheepNoise → •baa, EOF], [SheepNoise → •SheepNoise baa, baa], [SheepNoise → •baa, baa] }
S1 = Goto(S0, SheepNoise) = { [Goal → SheepNoise•, EOF], [SheepNoise → SheepNoise•baa, EOF], [SheepNoise → SheepNoise•baa, baa] }
S2 = Goto(S0, baa) = { [SheepNoise → baa•, EOF], [SheepNoise → baa•, baa] }
S3 = Goto(S1, baa) = { [SheepNoise → SheepNoise baa•, EOF], [SheepNoise → SheepNoise baa•, baa] }
48
Building the Canonical Collection
Start from s0 = closure( [S’ → •S, EOF] )
Repeatedly construct new states, until all are found
The algorithm

  s0 ← closure( [S’ → •S, EOF] )
  S ← { s0 }
  k ← 1
  while ( S is still changing )
    ∀ sj ∈ S and ∀ x ∈ ( T ∪ NT )
      sk ← goto(sj, x)
      record sj → sk on x
      if sk ∉ S then
        S ← S ∪ { sk }
        k ← k + 1

- Fixed-point computation
- Loop adds to S
- S ⊆ 2^ITEMS, so S is finite
- Worklist version is faster
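Closure, goto, and the collection-building loop fit together in a short Python sketch. Items are encoded here as (lhs, rhs, dot, lookahead) tuples and states as frozensets of items; the grammar is the left-recursive SheepNoise, so the construction should discover exactly the four states S0–S3 listed above. The encoding is an illustrative choice, not from the slides.

```python
# Canonical LR(1) collection for the left-recursive SheepNoise grammar.
EOF = "<EOF>"
grammar = {
    "Goal":       [("SheepNoise",)],
    "SheepNoise": [("SheepNoise", "baa"), ("baa",)],
}
NT = set(grammar)

# FIRST sets via the usual fixed point (no epsilon productions here)
FIRST = {A: set() for A in NT}
changed = True
while changed:
    changed = False
    for A, prods in grammar.items():
        for rhs in prods:
            new = FIRST[rhs[0]] if rhs[0] in NT else {rhs[0]}
            if not new <= FIRST[A]:
                FIRST[A] |= new
                changed = True

def first_seq(seq, a):
    """FIRST(seq . a) for an epsilon-free grammar."""
    if not seq:
        return {a}
    return FIRST[seq[0]] if seq[0] in NT else {seq[0]}

def closure(items):
    items, work = set(items), list(items)
    while work:
        lhs, rhs, dot, la = work.pop()
        if dot < len(rhs) and rhs[dot] in NT:
            for b in first_seq(rhs[dot + 1:], la):
                for prod in grammar[rhs[dot]]:
                    item = (rhs[dot], prod, 0, b)
                    if item not in items:
                        items.add(item)
                        work.append(item)
    return frozenset(items)

def goto(state, X):
    moved = {(l, r, d + 1, a) for (l, r, d, a) in state
             if d < len(r) and r[d] == X}
    return closure(moved)

def canonical_collection():
    s0 = closure({("Goal", ("SheepNoise",), 0, EOF)})
    states, work, transitions = {s0}, [s0], {}
    while work:
        s = work.pop()
        for X in {r[d] for (_, r, d, _) in s if d < len(r)}:
            t = goto(s, X)
            transitions[(s, X)] = t
            if t not in states:
                states.add(t)
                work.append(t)
    return states, transitions
```

Running it, s0 closes to the five items shown on the SheepNoise slides, and the fixed point stops at four states.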
52
Example from SheepNoise
Starts with S0
S0: { [Goal → •SheepNoise, EOF], [SheepNoise → •SheepNoise baa, EOF], [SheepNoise → •baa, EOF], [SheepNoise → •SheepNoise baa, baa], [SheepNoise → •baa, baa] }
Iteration 1 computes
S1 = Goto(S0, SheepNoise) = { [Goal → SheepNoise•, EOF], [SheepNoise → SheepNoise•baa, EOF], [SheepNoise → SheepNoise•baa, baa] }
S2 = Goto(S0, baa) = { [SheepNoise → baa•, EOF], [SheepNoise → baa•, baa] }
Iteration 2 computes
S3 = Goto(S1, baa) = { [SheepNoise → SheepNoise baa•, EOF], [SheepNoise → SheepNoise baa•, baa] }
Nothing more to compute, since • is at the end of every item in S3.
53
Example (grammar & sets)
Simplified, right-recursive expression grammar
  Goal → Expr
  Expr → Term – Expr
  Expr → Term
  Term → Factor * Term
  Term → Factor
  Factor → ident
54
Example (building the collection)
Initialization Step
s0 ← closure( { [Goal → •Expr, EOF] } )
  { [Goal → •Expr, EOF], [Expr → •Term – Expr, EOF], [Expr → •Term, EOF],
    [Term → •Factor * Term, EOF], [Term → •Factor * Term, –], [Term → •Factor, EOF],
    [Term → •Factor, –], [Factor → •ident, EOF], [Factor → •ident, –], [Factor → •ident, *] }
S ← { s0 }
55
Example (building the collection)
Iteration 1
  s1 ← goto(s0, Expr)
  s2 ← goto(s0, Term)
  s3 ← goto(s0, Factor)
  s4 ← goto(s0, ident)
Iteration 2
  s5 ← goto(s2, –)
  s6 ← goto(s3, *)
Iteration 3
  s7 ← goto(s5, Expr)
  s8 ← goto(s6, Term)
56
Example (Summary)
S0: { [Goal → •Expr, EOF], [Expr → •Term – Expr, EOF], [Expr → •Term, EOF], [Term → •Factor * Term, EOF], [Term → •Factor * Term, –], [Term → •Factor, EOF], [Term → •Factor, –], [Factor → •ident, EOF], [Factor → •ident, –], [Factor → •ident, *] }
S1: { [Goal → Expr•, EOF] }
S2: { [Expr → Term• – Expr, EOF], [Expr → Term•, EOF] }
S3: { [Term → Factor• * Term, EOF], [Term → Factor• * Term, –], [Term → Factor•, EOF], [Term → Factor•, –] }
S4: { [Factor → ident•, EOF], [Factor → ident•, –], [Factor → ident•, *] }
S5: { [Expr → Term – •Expr, EOF], [Expr → •Term – Expr, EOF], [Expr → •Term, EOF], [Term → •Factor * Term, –], [Term → •Factor, –], [Term → •Factor * Term, EOF], [Term → •Factor, EOF], [Factor → •ident, *], [Factor → •ident, –], [Factor → •ident, EOF] }
57
Example (Summary)
S6: { [Term → Factor * •Term, EOF], [Term → Factor * •Term, –], [Term → •Factor * Term, EOF], [Term → •Factor * Term, –], [Term → •Factor, EOF], [Term → •Factor, –], [Factor → •ident, EOF], [Factor → •ident, –], [Factor → •ident, *] }
S7: { [Expr → Term – Expr•, EOF] }
S8: { [Term → Factor * Term•, EOF], [Term → Factor * Term•, –] }
58
Example (Summary) The Goto Relationship (from the construction)
59
Filling in the ACTION and GOTO Tables
The algorithm

  ∀ set sx ∈ S
    ∀ item i ∈ sx
      if i is [A → β•aγ, b] and goto(sx, a) = sk, a ∈ T then
        ACTION[x, a] ← “shift k”
      else if i is [S’ → S•, EOF] then
        ACTION[x, EOF] ← “accept”
      else if i is [A → β•, a] then
        ACTION[x, a] ← “reduce A → β”
    ∀ n ∈ NT
      if goto(sx, n) = sk then
        GOTO[x, n] ← k

- Many items generate no table entry
- Closure() instantiates FIRST(X) directly for [A → β•Xδ, a]
(x is the state number)
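The table-filling pass can be appended to the collection construction. This self-contained sketch rebuilds the SheepNoise collection and then fills ACTION and GOTO as described above; FIRST is precomputed by hand for this tiny grammar, all names are illustrative, and only s0 is guaranteed to be numbered 0, since the other state numbers depend on iteration order.

```python
# Canonical collection + ACTION/GOTO filling for SheepNoise.
EOF = "<EOF>"
grammar = {
    "Goal":       [("SheepNoise",)],
    "SheepNoise": [("SheepNoise", "baa"), ("baa",)],
}
NT = set(grammar)
FIRST = {"baa": {"baa"}, EOF: {EOF},
         "Goal": {"baa"}, "SheepNoise": {"baa"}}   # precomputed by hand

def closure(items):
    items, work = set(items), list(items)
    while work:
        A, rhs, dot, a = work.pop()
        if dot < len(rhs) and rhs[dot] in NT:
            las = FIRST[rhs[dot + 1]] if dot + 1 < len(rhs) else {a}
            for b in las:
                for prod in grammar[rhs[dot]]:
                    item = (rhs[dot], prod, 0, b)
                    if item not in items:
                        items.add(item)
                        work.append(item)
    return frozenset(items)

def goto(s, X):
    return closure({(A, r, d + 1, a) for (A, r, d, a) in s
                    if d < len(r) and r[d] == X})

# build the canonical collection, numbering states in discovery order
states = [closure({("Goal", ("SheepNoise",), 0, EOF)})]
work = list(states)
while work:
    s = work.pop()
    for X in {r[d] for (_, r, d, _) in s if d < len(r)}:
        t = goto(s, X)
        if t not in states:
            states.append(t)
            work.append(t)

# fill ACTION and GOTO from the items, as in the algorithm on this slide
ACTION, GOTO = {}, {}
for x, s in enumerate(states):
    for (A, rhs, dot, a) in s:
        if dot < len(rhs) and rhs[dot] not in NT:     # dot before a terminal
            ACTION[(x, rhs[dot])] = ("shift", states.index(goto(s, rhs[dot])))
        elif dot == len(rhs) and A == "Goal":         # [Goal -> S., EOF]
            ACTION[(x, a)] = ("accept",)
        elif dot == len(rhs):                         # dot at the right end
            ACTION[(x, a)] = ("reduce", A, rhs)
    for n in NT:
        t = goto(s, n)
        if t in states:
            GOTO[(x, n)] = states.index(t)
```

A conflict check would simply report an error whenever the fill loop tries to overwrite an ACTION entry with a different value; this grammar produces none.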
60
Example (Filling in the tables) The algorithm produces the following table
61
What can go wrong?
What if set s contains [A → β•aγ, b] and [B → β•, a]?
- First item generates “shift”, second generates “reduce”
- Both define ACTION[s,a] — cannot do both actions
- This is a fundamental ambiguity, called a shift/reduce conflict
- Modify the grammar to eliminate it (if-then-else)
- Shifting will often resolve it correctly
What if set s contains [A → γ•, a] and [B → γ•, a]?
- Each generates “reduce”, but with a different production
- Both define ACTION[s,a] — cannot do both reductions
- This is a fundamental ambiguity, called a reduce/reduce conflict
- Modify the grammar to eliminate it (PL/I’s overloading of (...))
In either case, the grammar is not LR(1)
62
Shrinking the Tables
Three options:
- Combine terminals such as number & identifier, + & -, * & /
  - Directly removes a column, may remove a row
  - For the expression grammar, 198 (vs. 384) table entries
- Combine rows or columns
  - Implement identical rows once & remap states
  - Requires extra indirection on each lookup
  - Use separate mapping for ACTION & for GOTO
- Use another construction algorithm
  - Both LALR(1) and SLR(1) produce smaller tables
  - Implementations are readily available
63
LR(k) versus LL(k) (Top-down Recursive Descent)
Finding Reductions
- LR(k) ⇒ Each reduction in the parse is detectable with the complete left context, the reducible phrase itself, and the k terminal symbols to its right
- LL(k) ⇒ Parser must select the reduction based on the complete left context and the next k terminals
Thus, LR(k) examines more context
“… in practice, programming languages do not actually seem to fall in the gap between LL(1) languages and deterministic languages”
J.J. Horning, “LR Grammars and Analysers”, in Compiler Construction, An Advanced Course, Springer-Verlag, 1976
64
Summary
Top-down recursive descent
- Advantages: Fast; Good locality; Simplicity; Good error detection
- Disadvantages: Hand-coded; High maintenance; Right associativity
LR(1)
- Advantages: Fast; Deterministic languages; Automatable; Left associativity
- Disadvantages: Large working sets; Poor error messages; Large table sizes