Chapter 7 Pushdown Automata

Context-Free Languages A context-free grammar is a simple, recursive way of specifying the rules by which the strings of a language can be generated. All regular languages, and some non-regular languages, can be generated by context-free grammars.

Context-Free Languages Regular languages are represented by regular expressions. Context-free languages are represented by context-free grammars.

Context-Free Languages Regular languages are accepted by deterministic finite automata (DFAs). Context-free languages are accepted by pushdown automata: nondeterministic finite-state automata with a stack as auxiliary memory. Note that deterministic pushdown automata accept some, but not all, of the context-free languages.

Definition A context-free grammar (CFG) is a 4-tuple G = (V, T, S, P), where V and T are disjoint sets, S ∈ V, and P is a finite set of rules of the form A → x, where A ∈ V and x ∈ (V ∪ T)*. V = nonterminals (variables); T = terminals; S = start symbol; P = productions (grammar rules).

Example Let G be the CFG having productions S → aSa | bSb | c. Then G generates the language L = {xcx^R | x ∈ {a, b}*}. This is a language of odd-length palindromes: palindromes with a single isolated character in the middle.

Memory What kind of memory do we need to recognize strings in L using a single left-to-right pass? Example: aaabbcbbaaa. We need to remember what we saw before the c. We can push the first part of the string onto a stack and, when the c is encountered, start popping characters off the stack and matching them against the remaining characters of the input. If everything matches, the string is an odd palindrome.
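As a concrete illustration (ours, not part of the original slides; the function name is made up), here is a minimal Python sketch of this stack-based check for strings of the form xcx^R with x over {a, b}:

    def is_odd_palindrome(s: str) -> bool:
        """Check whether s has the form x c x^R with x over {a, b},
        using one left-to-right pass and an explicit stack."""
        stack = []
        i = 0
        # Push everything up to the center marker 'c'.
        while i < len(s) and s[i] != 'c':
            stack.append(s[i])
            i += 1
        if i == len(s):           # no center marker found
            return False
        i += 1                    # skip the 'c'
        # Match the rest of the input against the stack, top first.
        while i < len(s):
            if not stack or stack.pop() != s[i]:
                return False
            i += 1
        return not stack          # everything pushed must have been matched

    print(is_odd_palindrome("aaabbcbbaaa"))  # True
    print(is_odd_palindrome("aabcba"))       # False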

Counting We can use a stack to count out equal numbers of a's and b's on different sides of a center marker. Example: L = {a^n c b^n}, e.g. aaaacbbbb. Push the a's onto the stack until you see the c; after that, pop an a and match it with a b whenever you see a b. If the whole string is processed successfully and no a's remain on the stack, the string belongs to L.

Definition 7.1: Pushdown Automaton A nondeterministic pushdown automaton (NPDA) is a 7-tuple M = (Q, Σ, Γ, δ, q0, z, F), where:
Q is a finite set of states
Σ is the input alphabet (a finite set)
Γ is the stack alphabet (a finite set)
δ: Q × (Σ ∪ {λ}) × Γ → finite subsets of Q × Γ* is the transition function
q0 ∈ Q is the start state
z ∈ Γ is the initial stack symbol
F ⊆ Q is the set of accepting states

Production rules So we can fully specify an NPDA like this: Q = {q0, q1, q2, q3}, Σ = {a, b}, Γ = {0, 1, #}, q0 is the start state, z = # (the empty-stack marker), F = {q3}, and δ is the transition function given on the next slide.

Production rules
δ(q0, a, #) = {(q1, 1#), (q3, λ)}
δ(q0, λ, #) = {(q3, λ)}
δ(q1, a, 1) = {(q1, 11)}
δ(q1, b, 1) = {(q2, λ)}
δ(q2, b, 1) = {(q2, λ)}
δ(q2, λ, #) = {(q3, λ)}
This PDA is nondeterministic. Why?
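As a reading aid (ours, not from the slides), the same transition function can be written as a Python dictionary that maps (state, input symbol, stack top) to a set of (next state, string to push) pairs, with the empty string standing for λ:

    # Transition function of the PDA above.
    # Key: (state, input symbol, stack top); "" marks a lambda-move.
    # Value: set of (next state, string pushed in place of the popped top).
    delta = {
        ("q0", "a", "#"): {("q1", "1#"), ("q3", "")},
        ("q0", "",  "#"): {("q3", "")},
        ("q1", "a", "1"): {("q1", "11")},
        ("q1", "b", "1"): {("q2", "")},
        ("q2", "b", "1"): {("q2", "")},
        ("q2", "",  "#"): {("q3", "")},
    }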

Production rules Note that in an FSA, each rule says that when the machine is in a given state and sees a specific character, it moves to a specified state. In a PDA, we also need to know what is on top of the stack before we can decide which state to move to, and after moving to the new state we must also decide what to do with the stack.

Working with a stack:
You can only access the top element of the stack. To access it, you have to pop it off the stack. Once the top element has been popped, if you want to keep it, you must push it back onto the stack immediately.
Characters from the input string are read one at a time; you cannot back up.
The current configuration of the machine includes: the current state, the remaining unread input, and the entire contents of the stack.

L = {a^n b^n : n ≥ 0} ∪ {a} In the previous example we had two key transitions: δ(q1, a, 1) = {(q1, 11)}, which pushes a 1 when an a is read, and δ(q1, b, 1) = {(q2, λ)}, which pops a 1 when a b is read. We also have the rule δ(q0, a, #) = {(q1, 1#), (q3, λ)}, whose second alternative lets us move directly to the accepting state q3 if we initially see an a (accepting the string a).

Instantaneous description Given the transition function δ: Q × (Σ ∪ {λ}) × Γ → finite subsets of Q × Γ*, a configuration, or instantaneous description, of M is a snapshot of the PDA's current status. It is a triple (q, w, u), where q ∈ Q is the current state of the control unit, w ∈ Σ* is the remaining unread part of the input string, and u ∈ Γ* is the current stack contents, with the leftmost symbol indicating the top of the stack.

Instantaneous description To indicate that applying one transition rule moves the PDA from one configuration to another, we write (q1, aw, bx) |- (q2, w, yx). To indicate that we have moved from one configuration to another by applying several rules, we write (q1, aw, bx) |-* (q2, w, yx), or (q1, aw, bx) |-*M (q2, w, yx) to indicate a specific PDA M.

Definition 7.2: Acceptance If M = (Q, Σ, Γ, δ, q0, z, F) is a pushdown automaton and w ∈ Σ*, the string w is accepted by M if (q0, w, z) |-*M (qf, λ, u) for some u ∈ Γ* and some qf ∈ F. This means that we start in the start state, with only the initial stack symbol on the stack, and after processing the string w we end up in an accepting state with no more characters left to read. We don't care what is left on the stack. This is called acceptance by final state.
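Acceptance by final state can be checked mechanically. The following Python sketch (ours; the function name and the step cap are assumptions, not from the slides) explores the configurations of an NPDA written in the dictionary format introduced above:

    from collections import deque

    def accepts(delta, start, start_stack, finals, w, max_steps=10_000):
        """Decide acceptance by final state for an NPDA given as a transition
        dictionary.  A configuration is (state, unread input, stack); the stack
        is a string whose leftmost character is the top.  A bounded breadth-first
        search: it gives up after max_steps expansions, which is plenty for the
        small examples in this chapter."""
        queue = deque([(start, w, start_stack)])
        seen = set()
        for _ in range(max_steps):
            if not queue:
                return False
            state, rest, stack = queue.popleft()
            if (state, rest, stack) in seen:
                continue
            seen.add((state, rest, stack))
            if not rest and state in finals:
                return True                   # input consumed, accepting state
            if not stack:
                continue                      # no move is possible on an empty stack
            top, below = stack[0], stack[1:]
            # lambda-moves: read nothing from the input
            for nxt, push in delta.get((state, "", top), ()):
                queue.append((nxt, rest, push + below))
            # ordinary moves: consume one input symbol
            if rest:
                for nxt, push in delta.get((state, rest[0], top), ()):
                    queue.append((nxt, rest[1:], push + below))
        return False

    # With the dictionary delta from the previous sketch:
    print(accepts(delta, "q0", "#", {"q3"}, "aabb"))  # True
    print(accepts(delta, "q0", "#", {"q3"}, "aab"))   # False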

2 types of acceptance An alternative type of acceptance is acceptance by empty stack. This means that we start in the start state with only the initial stack symbol on the stack, and after processing the string w there are no more characters left to read and nothing (except the empty-stack marker) left on the stack.

2 types of acceptance The two types of acceptance are equivalent: if we can build a PDA that accepts a language L by final state, we can also build a PDA that accepts L by empty stack, and vice versa.

Definition 7.2: Acceptance A language L ⊆ Σ* is said to be accepted by M if L is precisely the set of strings accepted by M. In this case, we write L = L(M).

Determinism/non-determinism: A deterministic PDA has at most one transition for any combination of state, input symbol, and stack symbol. A nondeterministic PDA (NPDA) may have no transition, or several transitions, defined for a particular combination. In an NPDA there may be several paths to follow while processing a given input string; some of the paths may end in an accepting state, others in a non-accepting state. An NPDA can "guess" which path to follow through the machine in order to accept a string.

Example: a^n b c^n L = {a^n b c^n | n > 0}. State diagram (start state q0, accepting state q2):
q0 → q0 on a, # / a# and on a, a / aa   (push an a for each a read)
q0 → q1 on b, a / a   (the b changes state and leaves the stack unchanged)
q1 → q1 on c, a / λ   (pop an a for each c read)
q1 → q2 on λ, # / #   (accept when only the marker remains)

Production rules for a^n b c^n (rule number: state, input, top of stack, move)
Rule 1: q0, a, #, (q0, a#)
Rule 2: q0, a, a, (q0, aa)
Rule 3: q0, b, a, (q1, a)
Rule 4: q1, c, a, (q1, λ)
Rule 5: q1, λ, #, (q2, #)
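For concreteness (ours, not the slides'), here are the five rules above in the dictionary format used earlier, checked with the accepts helper:

    delta_anbcn = {
        ("q0", "a", "#"): {("q0", "a#")},
        ("q0", "a", "a"): {("q0", "aa")},
        ("q0", "b", "a"): {("q1", "a")},
        ("q1", "c", "a"): {("q1", "")},
        ("q1", "",  "#"): {("q2", "#")},
    }

    print(accepts(delta_anbcn, "q0", "#", {"q2"}, "aabcc"))  # True
    print(accepts(delta_anbcn, "q0", "#", {"q2"}, "aabc"))   # False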

Example: aabcc The original slides step through the state diagram, showing the stack contents after each move. The corresponding configuration trace is:
(q0, aabcc, #) |- (q0, abcc, a#)   (rule 1)
|- (q0, bcc, aa#)   (rule 2)
|- (q1, cc, aa#)   (rule 3)
|- (q1, c, a#)   (rule 4)
|- (q1, λ, #)   (rule 4)
|- (q2, λ, #)   (rule 5, accept)

Example: Odd palindrome L = {xcx^R | x ∈ {a, b}*}. State diagram (start state q0, accepting state q2):
q0 → q0 on a, # / a#; b, # / b#; a, a / aa; b, a / ba; a, b / ab; b, b / bb   (push each symbol of the first half)
q0 → q1 on c, # / #; c, a / a; c, b / b   (the center marker changes state and leaves the stack unchanged)
q1 → q1 on a, a / λ; b, b / λ   (pop a matching symbol)
q1 → q2 on λ, # / #   (accept when only the marker remains)

Production rules for Odd palindromes (rule number: state, input, top of stack, move)
Rule 1: q0, a, #, (q0, a#)
Rule 2: q0, b, #, (q0, b#)
Rule 3: q0, a, a, (q0, aa)
Rule 4: q0, b, a, (q0, ba)
Rule 5: q0, a, b, (q0, ab)
Rule 6: q0, b, b, (q0, bb)
Rule 7: q0, c, #, (q1, #)
Rule 8: q0, c, a, (q1, a)
Rule 9: q0, c, b, (q1, b)
Rule 10: q1, a, a, (q1, λ)
Rule 11: q1, b, b, (q1, λ)
Rule 12: q1, λ, #, (q2, #)

Processing abcba (rule applied: resulting state, unread input, stack)
Initially: q0, abcba, #
Rule 1: q0, bcba, a#
Rule 4: q0, cba, ba#
Rule 9: q1, ba, ba#
Rule 11: q1, a, a#
Rule 10: q1, λ, #
Rule 12: q2, λ, #   (accept)

Processing ab (rule applied: resulting state, unread input, stack)
Initially: q0, ab, #
Rule 1: q0, b, a#
Rule 4: q0, λ, ba#   (crash)

Processing acaa (rule applied: resulting state, unread input, stack)
Initially: q0, acaa, #
Rule 1: q0, caa, a#
Rule 8: q1, aa, a#
Rule 10: q1, a, #
Rule 12: q2, a, #   (crash)

Crashing: What is happening in the last example? We process the first three letters of acaa and are in state q1. We have an a left to process in the input string, and the empty-stack marker # is on top of the stack. Rule 12 says that if we are in state q1 with # on the stack, we can make a free move (a λ-move) to q2, pushing # back onto the stack, so this move is legal. So far, the automaton is saying that it would accept aca. But we are now in state q2 and still have the last a of the input left to process, and there are no rules with q2 as the current state. On the next move, when we try to process that a, the automaton crashes, rejecting acaa.

Example: Even palindromes Consider the following context-free language: L = {ww^R | w ∈ {a, b}*}. This is the language of all even-length palindromes over {a, b}.

Production rules for Even palindromes (rule number: state, input, top of stack, move)
Rule 1: q0, a, #, (q0, a#)
Rule 2: q0, b, #, (q0, b#)
Rule 3: q0, a, a, (q0, aa)
Rule 4: q0, b, a, (q0, ba)
Rule 5: q0, a, b, (q0, ab)
Rule 6: q0, b, b, (q0, bb)
Rule 7: q0, λ, #, (q1, #)
Rule 8: q0, λ, a, (q1, a)
Rule 9: q0, λ, b, (q1, b)
Rule 10: q1, a, a, (q1, λ)
Rule 11: q1, b, b, (q1, λ)
Rule 12: q1, λ, #, (q2, #)

Example: Even palindromes This PDA is nondeterministic; note rules 7, 8, and 9. Here the PDA is "guessing" where the middle of the string occurs. As long as some correct guess leads to acceptance, and no sequence of guesses accepts a string that is not actually in the language, this is fine.

Example: Even palindromes (q0, baab, #) |- (q0, aab, b#) |- (q0, ab, ab#) |- (q1, ab, ab#) |- (q1, b, b#) |- (q1, λ, #) |- (q2, λ, #)   (accept)
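The same run can be reproduced (our sketch) by feeding the even-palindrome rules to the accepts helper from earlier; the breadth-first search simply tries every possible guess for the middle of the string:

    delta_wwr = {
        ("q0", "a", "#"): {("q0", "a#")}, ("q0", "b", "#"): {("q0", "b#")},
        ("q0", "a", "a"): {("q0", "aa")}, ("q0", "b", "a"): {("q0", "ba")},
        ("q0", "a", "b"): {("q0", "ab")}, ("q0", "b", "b"): {("q0", "bb")},
        ("q0", "",  "#"): {("q1", "#")},  ("q0", "",  "a"): {("q1", "a")},
        ("q0", "",  "b"): {("q1", "b")},
        ("q1", "a", "a"): {("q1", "")},   ("q1", "b", "b"): {("q1", "")},
        ("q1", "",  "#"): {("q2", "#")},
    }

    print(accepts(delta_wwr, "q0", "#", {"q2"}, "baab"))  # True
    print(accepts(delta_wwr, "q0", "#", {"q2"}, "baa"))   # False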

Example: All palindromes Consider the following context-free language: L = pal = {x ∈ {a, b}* | x = x^R}. This is the language of all palindromes, both odd and even length, over {a, b}.

Production rules for All palindromes (rule number: state, input, top of stack, moves)
Rule 1: q0, a, #, (q0, a#), (q1, #)
Rule 2: q0, b, #, (q0, b#), (q1, #)
Rule 3: q0, a, a, (q0, aa), (q1, a)
Rule 4: q0, b, a, (q0, ba), (q1, a)
Rule 5: q0, a, b, (q0, ab), (q1, b)
Rule 6: q0, b, b, (q0, bb), (q1, b)
Rule 7: q0, λ, #, (q1, #)
Rule 8: q0, λ, a, (q1, a)
Rule 9: q0, λ, b, (q1, b)
Rule 10: q1, a, a, (q1, λ)
Rule 11: q1, b, b, (q1, λ)
Rule 12: q1, λ, #, (q2, #)

Production rules for All palindromes At each point before we start processing the second half of the string, there are three possibilities: (1) the next input character is still in the first half of the string and needs to be pushed onto the stack to save it; (2) the next input character is the middle symbol of an odd-length string and should be read and thrown away (we don't need to save it to match it with anything); (3) the next input character is the first character of the second half of an even-length string.

Production rules for All palindromes Why is this PDA nondeterministic? Look at the first six rules of this NPDA: in each of them there are two moves that may be chosen.

Production rules for All palindromes Each move in a PDA has three preconditions: the current state, the next character to be processed from the input string, and the top character on the stack. In rule 1, the current state is q0, the next input character is a, and the top of the stack is the empty-stack marker. But there are two possible moves for this one set of preconditions: (1) move back to state q0 and push a# onto the stack, or (2) move to state q1 and push # onto the stack. Whenever multiple moves are possible from a given set of preconditions, we have nondeterminism.

Definition 7.3: Let M = (Q, Σ, Γ, δ, q0, z, F) be a pushdown automaton. M is deterministic if there is no configuration for which M has a choice of more than one move. In other words, M is deterministic if it satisfies both of the following:
1. For any q ∈ Q, a ∈ Σ ∪ {λ}, and X ∈ Γ, the set δ(q, a, X) has at most one element.
2. For any q ∈ Q and X ∈ Γ, if δ(q, λ, X) ≠ ∅, then δ(q, a, X) = ∅ for every a ∈ Σ.
A language L is a deterministic context-free language if there is a deterministic PDA (DPDA) accepting L.
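The two conditions translate directly into code. The following sketch (ours) checks a transition dictionary in the format used by the earlier examples, where the empty string stands for λ:

    def is_deterministic(delta):
        """Check Definition 7.3 for a transition dictionary that maps
        (state, symbol, stack top) to a set of moves."""
        for (state, sym, top), moves in delta.items():
            # Condition 1: at most one move per (state, symbol, stack top).
            if len(moves) > 1:
                return False
            # Condition 2: a lambda-move rules out any symbol-reading move
            # from the same state with the same stack top.
            if sym == "" and moves:
                if any(s == state and t == top and a != "" and delta[(s, a, t)]
                       for (s, a, t) in delta):
                    return False
        return True

    print(is_deterministic(delta_anbcn))  # True
    print(is_deterministic(delta_wwr))    # False (the lambda-moves of rules 7-9
                                          # clash with the pushing moves)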

Definition 7.3: If M is deterministic, then multiple moves from a single input/stack configuration are not allowed. That is, given stack top Y and input symbol X, there cannot be two different moves with the same stack top and the same input symbol from the same state. There may be λ-moves, BUT if a λ-move is defined for stack top X in some state, there cannot be any other move with stack top X from that same state.

Non-determinism Some PDAs that are initially described in a nondeterministic way can also be described as deterministic PDAs. However, some CFLs are inherently nondeterministic; for example, L = pal = {x ∈ {a, b}* | x = x^R} cannot be accepted by any DPDA.

Example: L = {w ∈ {a, b}* | n_a(w) > n_b(w)} This is the set of all strings over the alphabet {a, b} in which the number of a's is greater than the number of b's. This language can be accepted by either an NPDA or a DPDA.

Example (NPDA): L = {w ∈ {a, b}* | n_a(w) > n_b(w)} (rule number: state, input, top of stack, move)
Rule 1: q0, a, #, (q0, a#)
Rule 2: q0, b, #, (q0, b#)
Rule 3: q0, a, a, (q0, aa)
Rule 4: q0, b, b, (q0, bb)
Rule 5: q0, a, b, (q0, λ)
Rule 6: q0, b, a, (q0, λ)
Rule 7: q0, λ, a, (q1, a)

Example (NPDA): What is happening in this PDA? We start, as usual, in state q0. If the stack is empty (only the marker # on it), we read the first character and push it onto the stack. Thereafter, if the input character matches the character on top of the stack, we push the input character onto the stack as well. If the input character differs from the stack character, we throw both away. When we run out of characters in the input string, if the stack still has an a on top we make a free move to q1 and halt; q1 is the accepting state.

Example (NPDA): Why is it nondeterministic? Rules 6 and 7 both have the precondition: current state q0 and stack top a. But we have two possible moves from here, one of them when the input is a b (rule 6), and one of them any time we want (a λ-move, rule 7), including when the input is a b. So we have two different moves allowed under the same preconditions, which means this PDA is nondeterministic.

Example (DPDA): L = {w ∈ {a, b}* | n_a(w) > n_b(w)} (rule number: state, input, top of stack, move)
Rule 1: q0, a, #, (q1, #)
Rule 2: q0, b, #, (q0, b#)
Rule 3: q0, a, b, (q0, λ)
Rule 4: q0, b, b, (q0, bb)
Rule 5: q1, a, #, (q1, a#)
Rule 6: q1, b, #, (q0, #)
Rule 7: q1, a, a, (q1, aa)
Rule 8: q1, b, a, (q1, λ)

Example (DPDA): What is happening in this PDA? Being in state q1 means we have seen more a's than b's so far; being in state q0 means we have not. We start in state q0. If we are in state q0 and read a b, we push it onto the stack. If we are in state q1 and read an a, we push it onto the stack. Otherwise we do not push a's or b's onto the stack. Whenever we read an a from the input and pop a b from the stack, or vice versa, we throw the pair away and stay in the same state. When we run out of input characters, we halt; q1 is the accepting state.

7.2: PDAs and CFLs: Theorem 7.1: For any context-free language L, there exists an NPDA M such that L = L(M).

7.2: PDAs and CFLs: Proof: If L is a context-free language (without λ), there exists a context-free grammar G that generates it. We can always convert a context-free grammar into Greibach Normal Form, and we can always construct an NPDA that simulates leftmost derivations in the GNF grammar. QED

Greibach Normal Form: Greibach Normal Form (GNF) requires a context-free grammar to have only productions of the form A → ax, where a ∈ T and x ∈ V*. That is: Nonterminal → one terminal followed by a string of zero or more nonterminals. Convert the following context-free grammar to GNF: S → abSb | aa

Greibach Normal Form: S → abSb | aa Let's fix S → aa first: get rid of the terminal at the end by changing this rule to S → aA and creating a new rule A → a. Now let's fix S → abSb: get rid of bSb by replacing the original rule with S → aX and creating a new rule X → bSb. Unfortunately, this new rule itself needs fixing: replace it with X → bSB and create a new rule B → b.

Greibach Normal Form: So, starting with the set of production rules S → abSb | aa, we now have:
S → aA | aX
X → bSB
A → a
B → b
(Other solutions are possible.)

7.2: CFG to PDA To convert a context-free grammar to an equivalent pushdown automaton:
1. Convert the grammar to Greibach Normal Form (GNF).
2. Write a transition rule for the PDA that pushes S (the start symbol of the grammar) onto the stack.
3. For each production rule in the grammar, write an equivalent transition rule.
4. Write a transition rule that takes the automaton to the accepting state when the input is exhausted and the stack is empty (only the empty-stack marker remains).
5. If the empty string belongs to the language described by the grammar, write a transition rule that takes the automaton to the accepting state directly from the start state.

7.2: CFG to PDA How do you write the transition rules? It's really simple: 1. Every rule in the GNF grammar has the form: one variable → one terminal followed by zero or more variables. Example: A → bB

7.2: CFG to PDA 2. The left side of each transition rule is the precondition, a triple specifying what must be true before the move can be made: the current state, the character just read from the input string, and the symbol just popped off the top of the stack. So write a transition rule whose precondition is the current state, the terminal of the grammar rule, and the left-hand variable of the grammar rule. Our grammar rule: A → bB. The left side of the transition rule: δ(q1, b, A). (What about the B? See the next slide.)

7.2: CFG to PDA 3. The right side of the transition rule is the post-condition: the state to move to and the symbol(s) to push onto the stack. So the post-condition will be the state to move to and the variable (or variables) on the right-hand side of the grammar rule. Example: δ(q1, b, A) = {(q1, B)}. If there are no variables on the right-hand side of the grammar rule, push nothing onto the stack; in the transition rule, put a λ where the pushed symbols would go. Example: A → a (no variable here) is written in transition-rule form as δ(q1, a, A) = {(q1, λ)}.

7.2: CFG to PDA How do you know which state to move to? It's really simple: 1. We always start with this special transition rule: δ(q0, λ, #) = {(q1, S#)}. This rule says: (a) begin in state q0; (b) pop the top of the stack; if it is # (the empty-stack symbol), then (c) take a free move to q1 without reading anything from the input, push # back onto the stack, and push S (the start symbol of the grammar) on top of it.

7.2: CFG to PDA 2. We always end with this special transition rule: δ(q1, λ, #) = {(q2, #)}. This rule says: (a) begin in state q1; (b) pop the top of the stack; if it is # (the empty-stack symbol), then (c) take a free move to q2 without reading anything from the input, and push # back onto the stack. To be in state q1 we must previously have pushed something onto the stack; if we now pop the stack and find the empty-stack symbol, we have finished processing the string, so we can move on to state q2.

7.2: CFG to PDA 3. Every other transition rule leaves us in state q1.

7.2: CFG to PDA Here is a grammar in GNF: G = (V, T, S, P), where V = {S, A, B, C}, T = {a, b, c}, S = S, and P consists of:
S → aA
A → aABC | bB | a
B → b
C → c
Let's convert this grammar to a PDA.

7.2: CFG to PDA Grammar rule and corresponding PDA transition rule:
(none):   δ(q0, λ, #) = {(q1, S#)}
S → aA:   δ(q1, a, S) = {(q1, A)}
A → aABC:   δ(q1, a, A) = {(q1, ABC)}
A → bB:   δ(q1, b, A) = {(q1, B)}
A → a:   δ(q1, a, A) = {(q1, λ)}
B → b:   δ(q1, b, B) = {(q1, λ)}
C → c:   δ(q1, c, C) = {(q1, λ)}
(none):   δ(q1, λ, #) = {(q2, #)}
So the equivalent PDA can be defined as M = ({q0, q1, q2}, T, V ∪ {#}, δ, q0, #, {q2}), where δ is the set of transition rules given above.
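The recipe is mechanical enough to write down (our sketch, in the dictionary format used by the accepts helper from earlier); it assumes every right-hand side is one terminal followed by single-letter variables:

    def gnf_to_pda(productions, start="S", marker="#"):
        """Build the NPDA transition dictionary for a grammar in Greibach
        Normal Form, following the conversion steps described above.
        'productions' maps each variable to a list of right-hand sides."""
        delta = {("q0", "", marker): {("q1", start + marker)},
                 ("q1", "", marker): {("q2", marker)}}
        for var, bodies in productions.items():
            for body in bodies:
                terminal, variables = body[0], body[1:]
                key = ("q1", terminal, var)
                delta.setdefault(key, set()).add(("q1", variables))
        return delta

    gnf = {"S": ["aA"], "A": ["aABC", "bB", "a"], "B": ["b"], "C": ["c"]}
    delta_gnf = gnf_to_pda(gnf)
    print(accepts(delta_gnf, "q0", "#", {"q2"}, "aaabc"))  # True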

7.2: CFG to PDA Is this PDA deterministic? Let's group the transition rules so that all rules with the same precondition are described in a single rule:
1. δ(q0, λ, #) = {(q1, S#)}
2. δ(q1, a, S) = {(q1, A)}
3. δ(q1, a, A) = {(q1, ABC), (q1, λ)}
4. δ(q1, b, A) = {(q1, B)}
5. δ(q1, b, B) = {(q1, λ)}
6. δ(q1, c, C) = {(q1, λ)}
7. δ(q1, λ, #) = {(q2, #)}
Here we see that rule 3 has a single precondition but two different possible post-conditions. Thus this PDA is nondeterministic.

7.2: CFG to PDA Let's follow the steps that the PDA goes through to process the string aaabc, starting with the initial configuration:
(q0, aaabc, #) |- (q1, aaabc, S#)   rule 1
|- (q1, aabc, A#)   rule 2
|- (q1, abc, ABC#)   rule 3, first alternative
|- (q1, bc, BC#)   rule 3, second alternative
|- (q1, c, C#)   rule 5
|- (q1, λ, #)   rule 6
|- (q2, λ, #)   rule 7
Notice that this corresponds to the following leftmost derivation in the grammar: S ⇒ aA ⇒ aaABC ⇒ aaaBC ⇒ aaabC ⇒ aaabc

7.2: CFG to PDA In fact, this is exactly what our set of PDA transition rules does: it carries out a leftmost derivation of any string in the language described by the CFG. After each step, the as-yet unprocessed variables of the sentential form sit on the stack, as can be seen by reading down the stack column in the trace above. This corresponds precisely to the left-to-right sequence of unprocessed variables in each step of the leftmost derivation given above.

7.2: Alternative Approach to Constructing a PDA from a CFG Let G = (V, Σ, S, P) be a context-free grammar. Then there is a pushdown automaton M such that L(M) = L(G). Can we generate an NPDA from a CFG without converting to GNF first? Yes.

7.2: Alternative Approach to Constructing a PDA from a CFG In this approach, the plan is to let the transition rules of the PDA directly reflect the stack manipulation implied by the grammar rules. With this method you do not need to convert to GNF first, but the technique is harder to understand. The beginning and ending transition rules are the same as in the GNF method.

7.2: Alternative Approach to Constructing a PDA from a CFG So we will always need the following two transition rules in our PDA: δ(q0, λ, #) = {(q1, S#)} and δ(q1, λ, #) = {(q2, #)}.

7.2: Alternative Approach to Constructing a PDA from a CFG The other transition rules are derived from the grammar rules: If you pop the top of the stack and it is a variable, don't read anything from the input string; push the right-hand side of a grammar rule for that variable onto the stack. If you pop the top of the stack and it is a terminal, read the next character of the input string, which must match the popped terminal; push nothing onto the stack.

7.2: Constructing a PDA from a CFG Given G = (V, Σ, S, P), construct M = (Q, Σ, Γ, δ, q0, #, F) with:
Q = {q0, q1, q2}
Γ = V ∪ Σ ∪ {#}, where # ∉ V ∪ Σ
F = {q2}
δ(q0, λ, #) = {(q1, S#)}
For each A ∈ V: δ(q1, λ, A) = {(q1, α) | A → α is a production in P}
For each a ∈ Σ: δ(q1, a, a) = {(q1, λ)}
δ(q1, λ, #) = {(q2, #)}

7.2: Constructing a PDA from a CFG Language: L = {x ∈ {a, b}* | n_a(x) > n_b(x)}. Context-free grammar: S → a | aS | bSS | SSb | SbS

7.2: Constructing a PDA from a CFG S → a | aS | bSS | SSb | SbS Let M = (Q, Σ, Γ, δ, q0, #, F) be a pushdown automaton as previously described. The transition rules are (rule number: state, input, top of stack, moves):
Rule 1: q0, λ, #, (q1, S#)
Rule 2: q1, λ, S, (q1, a), (q1, aS), (q1, bSS), (q1, SSb), (q1, SbS)
Rule 3: q1, a, a, (q1, λ)
Rule 4: q1, b, b, (q1, λ)
Rule 5: q1, λ, #, (q2, #)
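The same construction for an arbitrary CFG can be sketched in code (ours, not the slides'). Because the λ-moves that expand variables never consume input, the step cap inside the accepts helper is what eventually stops the search on strings that are not in the language:

    def cfg_to_pda(productions, start="S", marker="#"):
        """Build the NPDA transition dictionary for an arbitrary CFG,
        following the construction above (no GNF needed).  Variables and
        terminals are single characters; "" stands for a lambda body."""
        variables = set(productions)
        terminals = {ch for bodies in productions.values()
                     for body in bodies for ch in body if ch not in variables}
        delta = {("q0", "", marker): {("q1", start + marker)},
                 ("q1", "", marker): {("q2", marker)}}
        for var, bodies in productions.items():
            delta[("q1", "", var)] = {("q1", body) for body in bodies}
        for t in terminals:
            delta[("q1", t, t)] = {("q1", "")}
        return delta

    grammar = {"S": ["a", "aS", "bSS", "SSb", "SbS"]}
    delta_g = cfg_to_pda(grammar)
    print(accepts(delta_g, "q0", "#", {"q2"}, "aa"))  # True
    print(accepts(delta_g, "q0", "#", {"q2"}, "b"))   # False (the step cap ends
                                                      # the otherwise endless expansion)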

7.2: CFG to PDA Let's follow the steps that the PDA goes through to process the string baaba, starting with the initial configuration:
(q0, baaba, #) |- (q1, baaba, S#)   rule 1
|- (q1, baaba, bSS#)   rule 2, 3rd alternative
|- (q1, aaba, SS#)   rule 4
|- (q1, aaba, aS#)   rule 2, 1st alternative
|- (q1, aba, S#)   rule 3
|- (q1, aba, SbS#)   rule 2, 5th alternative
|- (q1, aba, abS#)   rule 2, 1st alternative
|- (q1, ba, bS#)   rule 3
|- (q1, a, S#)   rule 4
|- (q1, a, a#)   rule 2, 1st alternative
|- (q1, λ, #)   rule 3
|- (q2, λ, #)   rule 5

7.2: PDA to CFG Theorem 7.2: If L = L(M) for some NPDA M, then L is a context-free language. Proof: Convert the NPDA into a particular form (if needed). From the NPDA, generate a corresponding context-free grammar G such that the language generated by G is L(M). Since any language generated by a CFG is a context-free language, L must be a CFL.

7.2: PDA to CFG It is possible to convert any PDA into a CFG. In order to do this, we need to convert our PDA into a form in which: 1. there is just one final state, which is entered if and only if the stack is empty, and 2. each transition rule either increases or decreases the stack contents by one symbol. This means that all transition rules must be of the form (a) δ(qi, a, A) = (qj, λ) or (b) δ(qi, a, A) = (qj, BC).

7.2: PDA to CFG For transition rules that delete a variable from the stack, the grammar has production rules of the form (qi A qj) → a. For transition rules that add a variable to the stack, the grammar has production rules of the form (qi A qk) → a (qj B ql)(ql C qk). The start variable of the grammar corresponds to (q0 # qf).

7.2: PDA to CFG We will not go into the details of this process: it is tedious, and the grammar rules derived are often complicated and don't look much like the rules we are used to seeing. Just remember that it can be done.

7.4: Parsing Starting with a CFG G and a string x in L(G), we would like to parse x, that is, find a derivation of x. There are two basic approaches to parsing: top-down parsing and bottom-up parsing.

7.4: Parsing Remember that Chomsky Normal Form (CNF) requires every production to be of one of these two types: A → BC or A → a. If G is in Chomsky Normal Form, we can bound the length of a derivation. Every rule in a CNF grammar replaces a variable with either two variables or a single terminal, and we always start from the single variable S. Deriving a string of n characters therefore takes exactly 2n - 1 rule applications: n - 1 applications of A → BC rules to build up n variables, and n applications of A → a rules to turn them into terminals.

Parsing: Example: S → AA, A → AA | a. Starting with the S symbol, to derive the string aaa we need 5 rule applications: S ⇒ AA ⇒ AAA ⇒ aAA ⇒ aaA ⇒ aaa. If we want to automate this process, a nondeterministic PDA may have to follow many alternatives; a deterministic PDA (if one exists for the grammar) is more efficient.

LL(k) grammars A grammar is an LL(k) grammar if, while trying to generate a specific string, we can always determine the unique correct production rule to apply from the current character of the input string plus a "look-ahead" of the next k - 1 characters. A simple example is the following: S → aSb | ab

LL(k) grammars S → aSb | ab Assume that we want to generate the string ab. We look at the first character, an a, plus a look-ahead of one more character (a b), for a total of two characters. We MUST use the second rule to produce this string.

LL(k) grammars S → aSb | ab Now assume that we want to generate the string aabb. We look at the current character, the first symbol (an a), plus one more (another a), and we immediately know that we must use the first rule. We still have more letters to produce, so we make the second character our current character and look ahead one more character (the first b); now we see ab, so we know we must use the second rule.

LL(k) grammars S → aSb | ab This is an LL(2) grammar. All LL(k) grammars are deterministic context-free grammars, but not all deterministic context-free grammars are LL(k) grammars. LL(k) grammars are often used to define programming languages. If you take a compilers course, you will study this in more depth.
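As an illustration of the look-ahead idea (our sketch, not from the slides), here is a predictive parser for S → aSb | ab that picks the production from the current character plus one character of look-ahead:

    def parse_S(s: str, i: int = 0) -> int:
        """Parse one S at position i for the LL(2) grammar S -> aSb | ab.
        The production is chosen from s[i] and the look-ahead symbol s[i+1].
        Returns the index just past the parsed S, or raises ValueError."""
        if s[i:i + 1] != "a":
            raise ValueError(f"expected 'a' at position {i}")
        if s[i + 1:i + 2] == "a":        # look-ahead a: use S -> aSb
            j = parse_S(s, i + 1)
            if s[j:j + 1] != "b":
                raise ValueError(f"expected 'b' at position {j}")
            return j + 1
        if s[i + 1:i + 2] == "b":        # look-ahead b: use S -> ab
            return i + 2
        raise ValueError(f"expected 'a' or 'b' at position {i + 1}")

    def accepts_anbn(s: str) -> bool:
        """True iff s is a^n b^n for some n >= 1."""
        try:
            return parse_S(s) == len(s)
        except ValueError:
            return False

    print(accepts_anbn("aabb"))  # True
    print(accepts_anbn("aab"))   # False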

Top Down Parsing S → T$, T → [T]T | λ This is the language of balanced strings of square brackets; the $ is a special end-marker added to the end of each string. This CFG is nondeterministic since there are two rules for T, and the grammar is not in CNF. We can convert this to a DCFG by using look-ahead.

Top Down Parsing Here is the derivation of []$:
(q0, []$, #) |- (q1, []$, S#)      S
|- (q1, []$, T$#)      ⇒ T$
|- (q[, ]$, [T]T$#)      ⇒ [T]T$
|- (q1, ]$, T]T$#)
|- (q], $, ]T$#)      ⇒ []T$
|- (q1, $, T$#)
|- (q$, λ, $#)      ⇒ []$
|- (q1, λ, #)
|- (q2, λ, #)

Top Down Parsing Top-down parsing involves finding a variable (the left-hand side of a production rule) on the stack and replacing it with that production's right-hand side. In a way, the PDA is saving information so that it can backtrack if, during parsing, it discovers that it has made the wrong choice of how to process the string.

Left recursion Example: T → T[T] Here the first symbol on the right-hand side is the same as the variable on the left-hand side. With left recursion, a top-down parser can keep expanding T without ever consuming an input symbol, so it never crashes and never gets the chance to backtrack; it simply loops forever. There is an easy method for eliminating left recursion.

Recursive Descent LL(k) grammars are parsed by a left-to-right scan that produces a leftmost derivation using a look-ahead of k characters. Recursive descent means that the parser contains a collection of mutually recursive procedures, one for each variable in the grammar. LL(1) grammars can be parsed by recursive descent. Recursive descent parsing is deterministic.
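For example (our sketch, not from the slides), a recursive descent recognizer for the bracket grammar S → T$, T → [T]T | λ from the earlier slide has one procedure per variable, and each procedure decides what to do by looking at the next unread character:

    def parse_brackets(s: str) -> bool:
        """Recursive descent recognizer for S -> T$, T -> [T]T | lambda.
        The end marker $ is appended to the input here."""
        s = s + "$"
        pos = 0

        def T() -> None:
            nonlocal pos
            # T -> [T]T if the next symbol is '[', otherwise T -> lambda.
            if pos < len(s) and s[pos] == "[":
                pos += 1          # match '['
                T()               # parse the inner T
                if pos >= len(s) or s[pos] != "]":
                    raise ValueError(f"expected ']' at position {pos}")
                pos += 1          # match ']'
                T()               # parse the trailing T

        try:
            T()                   # S -> T$
            return pos == len(s) - 1 and s[pos] == "$"
        except ValueError:
            return False

    print(parse_brackets("[[]][]"))  # True
    print(parse_brackets("[[]"))     # False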

Bottom Up Parsers Input symbols are read and pushed ("shifted") onto the stack until the top of the stack matches the right-hand side of a production rule; then that right-hand side is popped off the stack ("reduced") and replaced by the variable on the left-hand side of the rule. Bottom-up parsers trace out a rightmost derivation in reverse. Bottom-up parsers can be deterministic under some conditions.
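As a toy illustration (ours), here is a greedy shift-reduce recognizer for an equivalent bracket grammar, S → [S] | SS | []. A real bottom-up (LR) parser consults a parse table to decide when to shift and when to reduce; for this small grammar, reducing whenever the top of the stack matches a right-hand side happens to be safe:

    def shift_reduce_brackets(s: str) -> bool:
        """Toy shift-reduce recognizer for S -> [S] | SS | [].
        Input symbols are shifted onto a stack; whenever the top of the
        stack matches a right-hand side it is popped and replaced by S."""
        stack = []
        for ch in s:
            stack.append(ch)                 # shift
            reduced = True
            while reduced:                   # reduce as long as possible
                reduced = False
                top2 = "".join(stack[-2:])
                top3 = "".join(stack[-3:])
                if top2 == "[]" or top2 == "SS":
                    del stack[-2:]
                    stack.append("S")
                    reduced = True
                elif top3 == "[S]":
                    del stack[-3:]
                    stack.append("S")
                    reduced = True
        return stack == ["S"]

    print(shift_reduce_brackets("[[]][]"))  # True
    print(shift_reduce_brackets("]["))      # False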