Pushdown Automata (PDA) Intro


Pushdown Automata (PDA) Intro. MA/CSSE 474 Theory of Computation. Exam next Friday!

Your Questions? Previous class days' material, reading assignments, HW10 problems, anything else.

Recap: Normal Forms for Grammars
Chomsky Normal Form, in which all rules are of one of the following two forms:
● X → a, where a ∈ Σ, or
● X → BC, where B and C are elements of V - Σ.
Advantages:
● Parsers can use binary trees.
● Exact length of derivations is known. What is that exact length for a string of length n? 2n - 1: n - 1 applications of rules X → BC to produce n nonterminals, plus n applications of rules X → a.

Recap: Normal Forms for Grammars
Greibach Normal Form, in which all rules are of the following form:
● X → aβ, where a ∈ Σ and β ∈ (V - Σ)*.
Advantages:
● Every derivation of a string s contains exactly |s| rule applications.
● Greibach normal form grammars can easily be converted to pushdown automata with no ε-transitions. This is useful because such PDAs are guaranteed to halt.
Length of a derivation? Exactly |s|.

Theorems: Normal Forms Exist
Theorem: Given a CFG G, there exists an equivalent Chomsky normal form grammar GC such that: L(GC) = L(G) - {ε}.
Proof: The proof is by construction.
Theorem: Given a CFG G, there exists an equivalent Greibach normal form grammar GG such that: L(GG) = L(G) - {ε}.
Proof: The proof is also by construction.
Details of the Chomsky conversion are complex but straightforward; I leave them for you to read in Chapter 11 and/or in the next 16 slides. Details of the Greibach conversion are more complex but still straightforward; I leave them for you to read in Appendix D if you wish (not required).

Hidden: Converting to a Normal Form
1. Apply some transformation to G to get rid of undesirable property 1. Show that the language generated by G is unchanged.
2. Apply another transformation to G to get rid of undesirable property 2. Show that the language generated by G is unchanged and that undesirable property 1 has not been reintroduced.
3. Continue until the grammar is in the desired form.

Hidden: Rule Substitution
X → aYc
Y → b
Y → ZZ
We can replace the X rule with the rules:
X → abc
X → aZZc
X ⇒ aYc ⇒ aZZc

Hidden: Rule Substitution
Theorem: Let G contain the rules X → αYβ and Y → γ1 | γ2 | … | γn. Replace X → αYβ by: X → αγ1β, X → αγ2β, …, X → αγnβ. The new grammar G' will be equivalent to G.

Hidden: Rule Substitution
Replace X → αYβ by: X → αγ1β, X → αγ2β, …, X → αγnβ.
Proof:
● Every string in L(G) is also in L(G'): If X → αYβ is not used, then use the same derivation. If it is used, then one derivation is:
S ⇒ … ⇒ X ⇒ αYβ ⇒ αγkβ ⇒ … ⇒ w
Use this one instead:
S ⇒ … ⇒ X ⇒ αγkβ ⇒ … ⇒ w
● Every string in L(G') is also in L(G): Every new rule can be simulated by old rules.

Hidden: Convert to Chomsky Normal Form
1. Remove all ε-rules, using the algorithm removeEps.
2. Remove all unit productions (rules of the form A → B).
3. Remove all rules whose right-hand sides have length greater than 1 and include a terminal (e.g., A → aB or A → BaC).
4. Remove all rules whose right-hand sides have length greater than 2 (e.g., A → BCDE).

Hidden: Recap: Removing ε-Productions
Remove all ε-productions:
(1) If there is a rule P → αQβ and Q is nullable, then add the rule P → αβ.
(2) Delete all rules Q → ε.
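The two steps above can be prototyped directly. This is a sketch under an encoding assumption of my own (a grammar is a dict mapping each nonterminal, an uppercase letter, to a set of right-hand-side strings, with '' standing for ε); it uses the equivalent "drop every subset of nullable occurrences" formulation rather than applying rule (1) one occurrence at a time:

```python
from itertools import product

def remove_eps(rules):
    """Remove epsilon-productions from a CFG.
    rules: dict nonterminal -> set of RHS strings; '' is epsilon."""
    # Step 1: find all nullable nonterminals (those that can derive epsilon).
    nullable = {A for A, rs in rules.items() if '' in rs}
    changed = True
    while changed:
        changed = False
        for A, rs in rules.items():
            if A not in nullable and any(r and all(s in nullable for s in r)
                                         for r in rs):
                nullable.add(A)
                changed = True
    # Step 2: for each rule, add every variant obtained by dropping some
    # nullable symbols, then delete all Q -> epsilon rules.
    new = {}
    for A, rs in rules.items():
        out = set()
        for r in rs:
            choices = [[s, ''] if s in nullable else [s] for s in r]
            for combo in product(*choices):
                cand = ''.join(combo)
                if cand:                 # drops the Q -> epsilon rules
                    out.add(cand)
        new[A] = out
    return new
```

On the example grammar on the next slide this yields C → BD | B | D, with all ε-rules gone.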

Hidden: Removing -Productions Example: S  aA A B | CDC B   B  a C  BD D  b D  

Hidden: Unit Productions
A unit production is a rule whose right-hand side consists of a single nonterminal symbol.
Example:
S → XY
X → A
A → B | a
B → b
Y → T
T → Y | c

Hidden: Removing Unit Productions
removeUnits(G) =
1. Let G' = G.
2. Until no unit productions remain in G' do:
  2.1 Choose some unit production X → Y.
  2.2 Remove it from G'.
  2.3 Consider only rules that still remain. For every rule Y → γ, where γ ∈ V*, do: Add to G' the rule X → γ unless it is a rule that has already been removed once.
3. Return G'.
After removing epsilon productions and unit productions, all rules whose right-hand sides have length 1 are in Chomsky normal form.
Example:
S → XY
X → A
A → B | a
B → b
Y → T
T → Y | c
removeUnits returns:
S → XY
A → a | b
B → b
T → c
X → a | b
Y → c
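The removeUnits loop translates almost line for line. A sketch, assuming a grammar encoded as a dict mapping each nonterminal (an uppercase letter) to a set of RHS strings; note that the "unless already removed" guard only matters for unit right-hand sides, which is all the removed set ever contains:

```python
def remove_units(rules):
    """removeUnits: repeatedly remove a unit production X -> Y and re-add
    Y's right-hand sides under X, unless that rule was removed once before."""
    rules = {A: set(rs) for A, rs in rules.items()}   # work on a copy
    removed = set()                                   # unit rules removed so far
    while True:
        # Step 2.1: find some remaining unit production X -> Y.
        unit = next(((A, r) for A, rs in rules.items()
                     for r in rs if len(r) == 1 and r in rules), None)
        if unit is None:
            return rules                              # step 3
        A, Y = unit
        rules[A].discard(Y)                           # step 2.2
        removed.add((A, Y))
        for rhs in set(rules[Y]):                     # step 2.3
            if (A, rhs) not in removed:
                rules[A].add(rhs)
```

On the example above it terminates even though Y and T form a unit cycle, because each unit rule is removed at most once.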

Hidden: Mixed Rules
removeMixed(G) =
1. Let G' = G.
2. Create a new nonterminal Ta for each terminal a in Σ.
3. Modify each rule whose right-hand side has length greater than 1 and that contains a terminal symbol by substituting Ta for each occurrence of the terminal a.
4. Add to G', for each Ta, the rule Ta → a.
5. Return G'.
Example:
A → a
A → aB
A → BaC
A → BbC
becomes:
A → a
A → TaB
A → BTaC
A → BTbC
Ta → a
Tb → b

Hidden: Long Rules
removeLong(G) =
1. Let G' = G.
2. For each rule r of the form A → N1N2N3N4…Nn, n > 2, create new nonterminals M2, M3, …, Mn-1.
3. Replace r with the rule A → N1M2.
4. Add the rules: M2 → N2M3, M3 → N3M4, …, Mn-1 → Nn-1Nn.
5. Return G'.
Example: A → BCDEF becomes:
A → BM2, M2 → CM3, M3 → DM4, M4 → EF.
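removeLong is the simplest of the passes. Here is a sketch that represents each right-hand side as a tuple of symbol names, so that multi-character names like M2 stay unambiguous; the fresh M names simply keep incrementing across rules:

```python
def remove_long(rules):
    """Replace each rule A -> N1 N2 ... Nn (n > 2) with the chain
    A -> N1 M2, M2 -> N2 M3, ..., M(n-1) -> N(n-1) Nn.
    rules: dict nonterminal -> set of RHS tuples of symbol names."""
    new = {A: set() for A in rules}
    counter = 1                              # next fresh name is M2
    for A, rs in rules.items():
        for r in rs:
            lhs, rest = A, list(r)
            while len(rest) > 2:
                counter += 1
                m = 'M%d' % counter
                new.setdefault(lhs, set()).add((rest[0], m))
                new[m] = set()
                lhs, rest = m, rest[1:]
            new.setdefault(lhs, set()).add(tuple(rest))
    return new
```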

Hidden: An Example
S → aACa
A → B | a
B → C | c
C → cC | ε
removeEps returns:
S → aACa | aAa | aCa | aa
A → B | a
B → C | c
C → cC | c

Hidden: An Example
S → aACa | aAa | aCa | aa
A → B | a
B → C | c
C → cC | c
Next we apply removeUnits:
Remove A → B. Add A → C | c.
Remove B → C. Add B → cC (B → c, already there).
Remove A → C. Add A → cC (A → c, already there).
So removeUnits returns:
S → aACa | aAa | aCa | aa
A → a | c | cC
B → c | cC
C → cC | c

Hidden: An Example
S → aACa | aAa | aCa | aa
A → a | c | cC
B → c | cC
C → cC | c
Next we apply removeMixed, which returns:
S → TaACTa | TaATa | TaCTa | TaTa
A → a | c | TcC
B → c | TcC
C → TcC | c
Ta → a
Tc → c

Hidden: An Example
S → TaACTa | TaATa | TaCTa | TaTa
A → a | c | TcC
B → c | TcC
C → TcC | c
Ta → a
Tc → c
Finally, we apply removeLong, which returns:
S → TaS1 | TaS3 | TaS4 | TaTa
S1 → AS2
S2 → CTa
S3 → ATa
S4 → CTa
A → a | c | TcC
B → c | TcC
C → TcC | c
Ta → a
Tc → c
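A quick sanity check on the whole pipeline: enumerate the short strings generated by the original grammar and by the final Chomsky-normal-form grammar and compare. The enumerator below is a generic leftmost-derivation search of my own (not from the slides); right-hand sides are tuples so that names like Ta and S1 stay single symbols, and the now-unreachable B rules are dropped:

```python
def lang(rules, start, maxlen):
    """All terminal strings of length <= maxlen derivable from start.
    rules: dict NT -> set of RHS tuples; symbols not in rules are terminals."""
    # Nullable symbols may yield 0 terminals; everything else yields >= 1.
    nullable, changed = set(), True
    while changed:
        changed = False
        for a, rs in rules.items():
            if a not in nullable and any(all(s in nullable for s in r) for r in rs):
                nullable.add(a)
                changed = True
    out, seen, todo = set(), set(), [(start,)]
    while todo:
        form = todo.pop()
        if form in seen:
            continue
        seen.add(form)
        if sum(1 for s in form if s not in nullable) > maxlen:
            continue                       # cannot shrink back under the bound
        i = next((j for j, s in enumerate(form) if s in rules), None)
        if i is None:
            out.add(''.join(form))         # all terminals: a generated string
            continue
        for rhs in rules[form[i]]:         # expand the leftmost nonterminal
            todo.append(form[:i] + rhs + form[i + 1:])
    return out

original = {'S': {('a', 'A', 'C', 'a')},
            'A': {('B',), ('a',)},
            'B': {('C',), ('c',)},
            'C': {('c', 'C'), ()}}
cnf = {'S': {('Ta', 'S1'), ('Ta', 'S3'), ('Ta', 'S4'), ('Ta', 'Ta')},
       'S1': {('A', 'S2')}, 'S2': {('C', 'Ta')},
       'S3': {('A', 'Ta')}, 'S4': {('C', 'Ta')},
       'A': {('a',), ('c',), ('Tc', 'C')},
       'C': {('Tc', 'C'), ('c',)},
       'Ta': {('a',)}, 'Tc': {('c',)}}
```

Comparing lang(original, 'S', 6) with lang(cnf, 'S', 6) gives equal sets, as the theorem promises (neither grammar generates ε here, so no {ε} adjustment is needed).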

The Price of Normal Forms
E → E + E
E → (E)
E → id
Converting to Chomsky normal form:
E → EE′
E′ → PE
E → LE″
E″ → ER
L → (
R → )
P → +
E → id
Conversion doesn't change weak generative capacity, but it may change strong generative capacity. Other prices: lots of productions, less understandable. Advantage: a canonical form that we can use in proofs. Similar for Greibach normal form.

Pushdown Automata

Recognizing Context-Free Languages
Two notions of recognition:
(1) Say yes or no, just like with FSMs.
(2) Say yes or no, and if yes, describe the structure (e.g., of a + b * c).

Definition of a Pushdown Automaton
M = (K, Σ, Γ, Δ, s, A), where:
K is a finite set of states,
Σ is the input alphabet,
Γ is the stack alphabet,
s ∈ K is the initial state,
A ⊆ K is the set of accepting states, and
Δ is the transition relation. It is a finite subset of
(K × (Σ ∪ {ε}) × Γ*) × (K × Γ*)
i.e., each transition takes a (state, input symbol or ε, string of symbols to pop from the top of the stack) triple to a (state, string of symbols to push on top of the stack) pair.
Σ and Γ are not necessarily disjoint.

Definition of a Pushdown Automaton
A configuration of M is an element of K × Σ* × Γ*. The initial configuration of M is (s, w, ε), where w is the input string.

Manipulating the Stack
A stack containing c on top of a on top of b will be written as cab. If c1c2…cn is pushed onto the stack cab, the result is c1c2…cncab, with c1 as the new top.

Yields
Let c be any element of Σ ∪ {ε}, let γ1, γ2 and β be any elements of Γ*, and let w be any element of Σ*. Then:
(q1, cw, γ1β) |-M (q2, w, γ2β) iff ((q1, c, γ1), (q2, γ2)) ∈ Δ.
Let |-M* be the reflexive, transitive closure of |-M. C1 yields configuration C2 iff C1 |-M* C2.

Computations
A computation by M is a finite sequence of configurations C0, C1, …, Cn for some n ≥ 0 such that:
● C0 is an initial configuration,
● Cn is of the form (q, ε, γ), for some state q ∈ K and some string γ in Γ*, and
● C0 |-M C1 |-M C2 |-M … |-M Cn.

Nondeterminism
If M is in some configuration (q1, s, γ), it is possible that:
● Δ contains exactly one transition that matches.
● Δ contains more than one transition that matches.
● Δ contains no transition that matches.

Accepting
A computation C of M is an accepting computation iff:
● C = (s, w, ε) |-M* (q, ε, ε), and
● q ∈ A.
M accepts a string w iff at least one of its computations accepts. Other paths may:
● Read all the input and halt in a nonaccepting state,
● Read all the input and halt in an accepting state with the stack not empty,
● Loop forever and never finish reading the input, or
● Reach a dead end where no more input can be read.
The language accepted by M, denoted L(M), is the set of all strings accepted by M.

Rejecting
A computation C of M is a rejecting computation iff:
● C = (s, w, ε) |-M* (q, w′, α),
● C is not an accepting computation, and
● M has no moves that it can make from (q, w′, α).
M rejects a string w iff all of its computations reject. Note that it is possible that, on input w, M neither accepts nor rejects.

A PDA for Bal
M = (K, Σ, Γ, Δ, s, A), where:
K = {s}			the states
Σ = {(, )}		the input alphabet
Γ = {(}			the stack alphabet
A = {s}
Δ contains:
((s, (, ε), (s, ( ))
((s, ), ( ), (s, ε))
Important: the ε in the pop position of the first transition does not mean that the stack is empty; it means that nothing is popped.
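The definitions above are concrete enough to execute. Below is a small nondeterministic PDA simulator of my own (not from the slides): a configuration is (state, input position, stack), the stack is a string with its top at index 0, and acceptance follows the slides' definition (input consumed, accepting state, empty stack). The Bal machine is then exactly the two transitions shown:

```python
from collections import deque

def accepts(delta, start, accept, w, slack=4):
    """Breadth-first search over PDA configurations.
    delta: iterable of ((state, input char or '', pop string), (state, push string)),
    with '' playing the role of epsilon. The stack-depth bound len(w) + slack
    keeps epsilon-push loops from running forever; it is generous enough for
    the machines in these slides."""
    seen, todo = set(), deque([(start, 0, '')])
    while todo:
        q, i, stack = todo.popleft()
        if (q, i, stack) in seen or len(stack) > len(w) + slack:
            continue
        seen.add((q, i, stack))
        if i == len(w) and stack == '' and q in accept:
            return True          # an accepting configuration was reached
        for (q1, c, pop), (q2, push) in delta:
            if q1 == q and stack.startswith(pop) and \
               (c == '' or (i < len(w) and w[i] == c)):
                todo.append((q2, i + (c != ''), push + stack[len(pop):]))
    return False

# The Bal PDA: push each '(' and pop one '(' for each ')'.
bal = [(('s', '(', ''), ('s', '(')),
       (('s', ')', '('), ('s', ''))]
```

For example, accepts(bal, 's', {'s'}, '(())()') is True, while '(()' and ')(' are rejected.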

A PDA for A^nB^n = {a^n b^n : n ≥ 0}

A PDA for {wcw^R : w ∈ {a, b}*}
M = (K, Σ, Γ, Δ, s, A), where:
K = {s, f}		the states
Σ = {a, b, c}		the input alphabet
Γ = {a, b}		the stack alphabet
A = {f}			the accepting states
Δ contains:
((s, a, ε), (s, a))
((s, b, ε), (s, b))
((s, c, ε), (f, ε))
((f, a, a), (f, ε))
((f, b, b), (f, ε))
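These five transitions can be run as data. The sketch below bundles them with a minimal nondeterministic PDA simulator of my own (stack as a string, top at index 0, '' for ε; acceptance = input consumed, accepting state, empty stack):

```python
from collections import deque

def accepts(delta, start, accept, w, slack=4):
    """Minimal nondeterministic PDA simulator (my helper, not from the slides).
    Accepts iff some computation consumes w and halts in an accepting state
    with an empty stack."""
    seen, todo = set(), deque([(start, 0, '')])
    while todo:
        q, i, stack = todo.popleft()
        if (q, i, stack) in seen or len(stack) > len(w) + slack:
            continue
        seen.add((q, i, stack))
        if i == len(w) and stack == '' and q in accept:
            return True
        for (q1, c, pop), (q2, push) in delta:
            if q1 == q and stack.startswith(pop) and \
               (c == '' or (i < len(w) and w[i] == c)):
                todo.append((q2, i + (c != ''), push + stack[len(pop):]))
    return False

# The five transitions from the slide: record w, switch states on c,
# then match the reversal against the stack.
wcwr = [(('s', 'a', ''), ('s', 'a')),
        (('s', 'b', ''), ('s', 'b')),
        (('s', 'c', ''), ('f', '')),
        (('f', 'a', 'a'), ('f', '')),
        (('f', 'b', 'b'), ('f', ''))]
```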

A PDA for {a^n b^2n : n ≥ 0}

A PDA for PalEven = {ww^R : w ∈ {a, b}*}
A grammar:
S → ε
S → aSa
S → bSb
A PDA for it: this one is nondeterministic.

A PDA for {w ∈ {a, b}* : #a(w) = #b(w)}
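The slide's diagram is not reproduced in this transcript, but one standard single-state machine for this language pushes the input letter when nothing cancels and pops when the opposite letter is on top; acceptance on an empty stack does the counting. This is a hedged reconstruction (not necessarily the slide's exact machine), run on a minimal simulator of my own:

```python
from collections import deque

def accepts(delta, start, accept, w, slack=4):
    """Minimal nondeterministic PDA simulator; stack top at index 0,
    '' stands for epsilon. Accepts iff some computation consumes w and
    halts in an accepting state with an empty stack."""
    seen, todo = set(), deque([(start, 0, '')])
    while todo:
        q, i, stack = todo.popleft()
        if (q, i, stack) in seen or len(stack) > len(w) + slack:
            continue
        seen.add((q, i, stack))
        if i == len(w) and stack == '' and q in accept:
            return True
        for (q1, c, pop), (q2, push) in delta:
            if q1 == q and stack.startswith(pop) and \
               (c == '' or (i < len(w) and w[i] == c)):
                todo.append((q2, i + (c != ''), push + stack[len(pop):]))
    return False

# One state s, accepting; push or cancel nondeterministically.
eq = [(('s', 'a', ''), ('s', 'a')),   # push an a
      (('s', 'a', 'b'), ('s', '')),   # or cancel a pending b
      (('s', 'b', ''), ('s', 'b')),   # push a b
      (('s', 'b', 'a'), ('s', ''))]   # or cancel a pending a
```

The push and cancel moves overlap, so the machine is nondeterministic; requiring an empty stack at the end is what makes it correct.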

More on Nondeterminism: Accepting Mismatches
L = {a^m b^n : m ≠ n; m, n > 0}
Start with the case where n = m (transition labels are read/pop/push): state 1 loops on a/ε/a, pushing one a per a read; the transition from 1 to 2 and the loop at 2 are b/a/ε, popping one a per b read. Then:
● If stack and input are both empty, halt and reject (that was m = n).
● If input is empty but stack is not (m > n): accept.
● If stack is empty but input is not (m < n): accept.

More on Nondeterminism: Accepting Mismatches
L = {a^m b^n : m ≠ n; m, n > 0}
● If input is empty but stack is not (m > n), accept: add an ε/a/ε transition from state 2 to a new accepting state 3, and an ε/a/ε loop at 3, to pop the leftover a's.

More on Nondeterminism: Accepting Mismatches
L = {a^m b^n : m ≠ n; m, n > 0}
● If stack is empty but input is not (m < n), accept: add a b/ε/ε transition from state 2 to a new accepting state 4, and a b/ε/ε loop at 4, to consume the leftover b's.

Putting It Together
L = {a^m b^n : m ≠ n; m, n > 0}
● Jumping to the input-clearing state 4: need to detect the bottom of the stack.
● Jumping to the stack-clearing state 3: need to detect the end of the input.
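The combined four-state machine sketched over the last few slides can be written out and executed. The transition set below is my reconstruction from the diagram labels (read/pop/push), not necessarily the slides' exact machine; a nondeterministic simulator simply tries both jumps, so no bottom-of-stack marker is needed yet. Accepting states are 3 and 4:

```python
from collections import deque

def accepts(delta, start, accept, w, slack=4):
    """Minimal nondeterministic PDA simulator; stack top at index 0,
    '' stands for epsilon. Accepts iff some computation consumes w and
    halts in an accepting state with an empty stack."""
    seen, todo = set(), deque([(start, 0, '')])
    while todo:
        q, i, stack = todo.popleft()
        if (q, i, stack) in seen or len(stack) > len(w) + slack:
            continue
        seen.add((q, i, stack))
        if i == len(w) and stack == '' and q in accept:
            return True
        for (q1, c, pop), (q2, push) in delta:
            if q1 == q and stack.startswith(pop) and \
               (c == '' or (i < len(w) and w[i] == c)):
                todo.append((q2, i + (c != ''), push + stack[len(pop):]))
    return False

mism = [((1, 'a', ''), (1, 'a')),    # a/eps/a: count the a's
        ((1, 'b', 'a'), (2, '')),    # b/a/eps: first b
        ((2, 'b', 'a'), (2, '')),    # b/a/eps: match b's against a's
        ((2, '', 'a'), (3, '')),     # eps/a/eps: leftover a's (m > n)
        ((3, '', 'a'), (3, '')),
        ((2, 'b', ''), (4, '')),     # b/eps/eps: leftover b's (m < n)
        ((4, 'b', ''), (4, ''))]
```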

The Power of Nondeterminism
Consider A^nB^nC^n = {a^n b^n c^n : n ≥ 0}. PDA for it?

The Power of Nondeterminism
Consider A^nB^nC^n = {a^n b^n c^n : n ≥ 0}. Now consider L = ¬A^nB^nC^n. L is the union of two languages:
1. {w ∈ {a, b, c}* : the letters are out of order}, and
2. {a^i b^j c^k : i, j, k ≥ 0 and (i ≠ j or j ≠ k)} (in other words, unequal numbers of a's, b's, and c's).

A PDA for L = AnBnCn

Are the Context-Free Languages Closed Under Complement?
¬A^nB^nC^n is context-free. If the CF languages were closed under complement, then ¬¬A^nB^nC^n = A^nB^nC^n would also be context-free. But we will prove that it is not.

L = {a^n b^m c^p : n, m, p ≥ 0 and (n ≠ m or m ≠ p)}
S → NC		/* n ≠ m, then arbitrary c's
S → QP		/* arbitrary a's, then p ≠ m
N → A		/* more a's than b's
N → B		/* more b's than a's
A → a
A → aA
A → aAb
B → b
B → Bb
B → aBb
C → ε | cC	/* add any number of c's
P → B′		/* more b's than c's
P → C′		/* more c's than b's
B′ → b
B′ → bB′
B′ → bB′c
C′ → c | C′c
C′ → bC′c
Q → ε | aQ	/* prefix with any number of a's
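This grammar can be checked mechanically by enumerating the strings it generates up to some length and comparing them against a direct definition of the language. A sketch, with B′ and C′ renamed X and Y (my encoding choice) so every symbol is one character; the search bound uses the fact that every symbol other than C and Q yields at least one terminal:

```python
rules = {'S': ['NC', 'QP'],
         'N': ['A', 'B'],
         'A': ['a', 'aA', 'aAb'],
         'B': ['b', 'Bb', 'aBb'],
         'C': ['', 'cC'],
         'P': ['X', 'Y'],            # X stands for B', Y for C'
         'X': ['b', 'bX', 'bXc'],
         'Y': ['c', 'Yc', 'bYc'],
         'Q': ['', 'aQ']}

def generated(maxlen):
    """Terminal strings of length <= maxlen, by leftmost-derivation search."""
    out, seen, todo = set(), set(), ['S']
    while todo:
        form = todo.pop()
        if form in seen:
            continue
        seen.add(form)
        # every symbol except C and Q yields at least one terminal
        if sum(1 for s in form if s not in 'CQ') > maxlen:
            continue
        i = next((j for j, s in enumerate(form) if s in rules), None)
        if i is None:
            out.add(form)
            continue
        for rhs in rules[form[i]]:
            todo.append(form[:i] + rhs + form[i + 1:])
    return out

expected = {'a' * n + 'b' * m + 'c' * p
            for n in range(6) for m in range(6) for p in range(6)
            if n + m + p <= 5 and (n != m or m != p)}
```

Up to length 5 the generated set and the expected set coincide; in particular abc (n = m = p = 1) is not generated.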

Reducing Nondeterminism ● Jumping to the input clearing state 4: Need to detect bottom of stack, so push # onto the stack before we start. ● Jumping to the stack clearing state 3: Need to detect end of input. Add to L a termination character (e.g., $)

Reducing Nondeterminism ● Jumping to the input clearing state 4:

Reducing Nondeterminism ● Jumping to the stack clearing state 3:

More on PDAs
A PDA for {ww^R : w ∈ {a, b}*}: What about a PDA to accept {ww : w ∈ {a, b}*}?

PDAs and Context-Free Grammars
Theorem: The class of languages accepted by PDAs is exactly the class of context-free languages.
Recall: context-free languages are the languages that can be defined with context-free grammars.
Restated: a language can be described with a context-free grammar iff it can be accepted by some PDA.

Going One Way
Lemma: Each context-free language is accepted by some PDA.
Proof (by construction): The idea: let the stack do the work. Two approaches: top-down and bottom-up.

Top Down
The idea: Let the stack keep track of expectations.
Example: Arithmetic expressions
E → E + T
E → T
T → T * F
T → F
F → (E)
F → id
(1) ((q, ε, E), (q, E+T))	(7) ((q, id, id), (q, ε))
(2) ((q, ε, E), (q, T))		(8) ((q, (, ( ), (q, ε))
(3) ((q, ε, T), (q, T*F))	(9) ((q, ), ) ), (q, ε))
(4) ((q, ε, T), (q, F))		(10) ((q, +, +), (q, ε))
(5) ((q, ε, F), (q, (E) ))	(11) ((q, *, *), (q, ε))
(6) ((q, ε, F), (q, id))
Show the state of the stack after each transition.

A Top-Down Parser
The outline of M is: M = ({p, q}, Σ, V, Δ, p, {q}), where Δ contains:
● The start-up transition ((p, ε, ε), (q, S)).
● For each rule X → s1s2…sn in R, the transition ((q, ε, X), (q, s1s2…sn)).
● For each character c ∈ Σ, the transition ((q, c, c), (q, ε)).
A top-down parser is sometimes called a predictive parser.

Example of the Construction
L = {a^n b* a^n}
		0: ((p, ε, ε), (q, S))
(1) S → ε	1: ((q, ε, S), (q, ε))
(2) S → B	2: ((q, ε, S), (q, B))
(3) S → aSa	3: ((q, ε, S), (q, aSa))
(4) B → ε	4: ((q, ε, B), (q, ε))
(5) B → bB	5: ((q, ε, B), (q, bB))
		6: ((q, a, a), (q, ε))
input = a a b b a a	7: ((q, b, b), (q, ε))

trans	state	unread input	stack
	p	aabbaa		ε
0	q	aabbaa		S
3	q	aabbaa		aSa
6	q	abbaa		Sa
3	q	abbaa		aSaa
6	q	bbaa		Saa
2	q	bbaa		Baa
5	q	bbaa		bBaa
7	q	baa		Baa
5	q	baa		bBaa
7	q	aa		Baa
4	q	aa		aa
6	q	a		a
6	q	ε		ε
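The construction and the trace above can be executed end to end. The builder below produces exactly the three kinds of transitions from the outline; the simulator is a generic BFS helper of mine (acceptance = input consumed, state q, empty stack):

```python
from collections import deque

def accepts(delta, start, accept, w, slack=4):
    """Minimal nondeterministic PDA simulator; stack top at index 0,
    '' stands for epsilon."""
    seen, todo = set(), deque([(start, 0, '')])
    while todo:
        q, i, stack = todo.popleft()
        if (q, i, stack) in seen or len(stack) > len(w) + slack:
            continue
        seen.add((q, i, stack))
        if i == len(w) and stack == '' and q in accept:
            return True
        for (q1, c, pop), (q2, push) in delta:
            if q1 == q and stack.startswith(pop) and \
               (c == '' or (i < len(w) and w[i] == c)):
                todo.append((q2, i + (c != ''), push + stack[len(pop):]))
    return False

def top_down_pda(rules, terminals, start_symbol='S'):
    """Build the top-down (predictive) PDA for a CFG.
    rules: dict NT -> list of RHS strings ('' = epsilon)."""
    delta = [(('p', '', ''), ('q', start_symbol))]       # start-up transition
    for x, rhss in rules.items():
        for rhs in rhss:
            delta.append((('q', '', x), ('q', rhs)))     # expand a nonterminal
    for c in terminals:
        delta.append((('q', c, c), ('q', '')))           # match a terminal
    return delta

# L = {a^n b* a^n}, as in the worked example.
g = {'S': ['', 'B', 'aSa'], 'B': ['', 'bB']}
m = top_down_pda(g, 'ab')
```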

Another Example
L = {a^n b^m c^p d^q : m + n = p + q}
(1) S → aSd
(2) S → T
(3) S → U
(4) T → aTc
(5) T → V
(6) U → bUd
(7) U → V
(8) V → bVc
(9) V → ε
input = a a b c d d

Another Example
L = {a^n b^m c^p d^q : m + n = p + q}
		0: ((p, ε, ε), (q, S))
(1) S → aSd	1: ((q, ε, S), (q, aSd))
(2) S → T	2: ((q, ε, S), (q, T))
(3) S → U	3: ((q, ε, S), (q, U))
(4) T → aTc	4: ((q, ε, T), (q, aTc))
(5) T → V	5: ((q, ε, T), (q, V))
(6) U → bUd	6: ((q, ε, U), (q, bUd))
(7) U → V	7: ((q, ε, U), (q, V))
(8) V → bVc	8: ((q, ε, V), (q, bVc))
(9) V → ε	9: ((q, ε, V), (q, ε))
		10: ((q, a, a), (q, ε))
		11: ((q, b, b), (q, ε))
		12: ((q, c, c), (q, ε))
		13: ((q, d, d), (q, ε))
input = a a b c d d
Trace through the first few steps, and do the rest for practice.

The Other Way to Build a PDA - Directly
L = {a^n b^m c^p d^q : m + n = p + q}
(1) S → aSd	(6) U → bUd
(2) S → T	(7) U → V
(3) S → U	(8) V → bVc
(4) T → aTc	(9) V → ε
(5) T → V
input = a a b c d d

The Other Way to Build a PDA - Directly
L = {a^n b^m c^p d^q : m + n = p + q}
The idea: push one a for each a and each b read; pop one a for each c and each d. With transition labels read/pop/push, the machine has states 1, 2, 3, 4: state 1 loops on a/ε/a; b/ε/a goes from 1 to 2 and loops at 2; c/a/ε goes from 2 to 3 and loops at 3; d/a/ε goes from 3 to 4 and loops at 4; and the edges 1 to 3 (c/a/ε), 1 to 4 and 2 to 4 (d/a/ε) allow empty blocks of b's and c's to be skipped.

Notice Nondeterminism
Machines constructed with the algorithm are often nondeterministic, even when they needn't be. This happens even with trivial languages.
Example: A^nB^n = {a^n b^n : n ≥ 0}
A grammar for A^nB^n:
[1] S → aSb
[2] S → ε
A PDA M for A^nB^n:
(0) ((p, ε, ε), (q, S))
(1) ((q, ε, S), (q, aSb))
(2) ((q, ε, S), (q, ε))
(3) ((q, a, a), (q, ε))
(4) ((q, b, b), (q, ε))
But transitions 1 and 2 make M nondeterministic. A directly constructed machine for A^nB^n:

Bottom-Up
The idea: Let the stack keep track of what has been found.
(1) E → E + T
(2) E → T
(3) T → T * F
(4) T → F
(5) F → (E)
(6) F → id
Reduce Transitions:
(1) ((p, ε, T + E), (p, E))
(2) ((p, ε, T), (p, E))
(3) ((p, ε, F * T), (p, T))
(4) ((p, ε, F), (p, T))
(5) ((p, ε, )E( ), (p, F))
(6) ((p, ε, id), (p, F))
Shift Transitions:
(7) ((p, id, ε), (p, id))
(8) ((p, (, ε), (p, ( ))
(9) ((p, ), ε), (p, ) ))
(10) ((p, +, ε), (p, +))
(11) ((p, *, ε), (p, *))
A bottom-up parser is sometimes called a shift-reduce parser. Show how it works on id + id * id (transition 0 below is the finish-up transition ((p, ε, E), (q, ε))):

state	stack	remaining input	transition to use
p	ε	id + id * id	7
p	id	+ id * id	6
p	F	+ id * id	4
p	T	+ id * id	2
p	E	+ id * id	10
p	+E	id * id		7
p	id+E	* id		6
p	F+E	* id		4
p	T+E	* id		11
p	*T+E	id		7
p	id*T+E	ε		6
p	F*T+E	ε		3
p	T+E	ε		1
p	E	ε		0
q	ε	ε

A Bottom-Up Parser
The outline of M is: M = ({p, q}, Σ, V, Δ, p, {q}), where Δ contains:
● The shift transitions: ((p, c, ε), (p, c)), for each c ∈ Σ.
● The reduce transitions: ((p, ε, (s1s2…sn)^R), (p, X)), for each rule X → s1s2…sn in G.
● The finish-up transition: ((p, ε, S), (q, ε)).
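This outline turns into code the same way as the top-down one. Tokens here are single characters, so id is abbreviated to i (an encoding choice of mine, not the slides'); reduce transitions pop the reversed right-hand side, exactly as in the outline, and the simulator is a generic BFS helper of mine:

```python
from collections import deque

def accepts(delta, start, accept, w, slack=4):
    """Minimal nondeterministic PDA simulator; stack top at index 0,
    '' stands for epsilon."""
    seen, todo = set(), deque([(start, 0, '')])
    while todo:
        q, i, stack = todo.popleft()
        if (q, i, stack) in seen or len(stack) > len(w) + slack:
            continue
        seen.add((q, i, stack))
        if i == len(w) and stack == '' and q in accept:
            return True
        for (q1, c, pop), (q2, push) in delta:
            if q1 == q and stack.startswith(pop) and \
               (c == '' or (i < len(w) and w[i] == c)):
                todo.append((q2, i + (c != ''), push + stack[len(pop):]))
    return False

def bottom_up_pda(rules, terminals, start_symbol='E'):
    """Build the bottom-up (shift-reduce) PDA for a CFG.
    rules: list of (lhs, rhs-string) pairs."""
    delta = [(('p', c, ''), ('p', c)) for c in terminals]        # shift
    for lhs, rhs in rules:
        delta.append((('p', '', rhs[::-1]), ('p', lhs)))         # reduce
    delta.append((('p', '', start_symbol), ('q', '')))           # finish up
    return delta

# The expression grammar, with i standing in for id.
g = [('E', 'E+T'), ('E', 'T'), ('T', 'T*F'),
     ('T', 'F'), ('F', '(E)'), ('F', 'i')]
m = bottom_up_pda(g, 'i+*()')
```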