Grammars (August 31, 2005)

CFGs: a summary CFGs appear to be just about what we need to account for a lot of basic syntactic structure in English, but there are problems. Some of these problems can be dealt with adequately, although not elegantly, by staying within the CFG framework. Others have simpler, more elegant solutions that take us out of the CFG framework (beyond its formal power), as in syntactic theories such as HPSG, LFG, CCG, Minimalism, etc.

Parsing Parsing with a CFG is the task of assigning a correct parse tree (or derivation) to a string, given some grammar. Here "correct" means consistent with the input and the grammar; it does not mean the "right" tree in some global sense of correctness. The leaves of the parse tree must cover all and only the input, and the tree must correspond to a valid derivation according to the grammar. Parsing can be viewed as search: the search space is the space of parse trees generated by the grammar, and the search is guided by the structure of that space and by the input. First we will look at basic (bad) parsing methods; after seeing what is wrong with them, we will look at better ones.

A Simple English Grammar
S → NP VP            Det → that | this | a | the
S → Aux NP VP        Noun → book | flight | meal | money
S → VP               Verb → book | include | prefer
NP → Det NOM         Aux → does
NP → ProperNoun      Prep → from | to | on
NOM → Noun           ProperNoun → Houston | TWA
NOM → Noun NOM
NOM → NOM PP
VP → Verb
VP → Verb NP
PP → Prep NOM

Basic Top-Down Parsing A top-down parser searches for a parse tree by trying to build from the root node S (the start symbol) down to the leaves. First we create the root node, then we create its children; we choose one of the children and then create its children, and so on. We can search the space of parse trees breadth-first (level by level) or depth-first (expanding one child fully before its siblings). A minimal recognizer of this kind is sketched below.
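As an illustration, here is a minimal depth-first, top-down recognizer sketch in Python over a reduced version of the grammar above; the dictionary layout and function names are my own illustrative choices, not from the slides. The left-recursive rule NOM → NOM PP is deliberately omitted, since it would send this kind of parser into an infinite loop (see the left-recursion slides below).

GRAMMAR = {
    "S": [["NP", "VP"], ["Aux", "NP", "VP"], ["VP"]],
    "NP": [["Det", "NOM"], ["ProperNoun"]],
    "NOM": [["Noun"], ["Noun", "NOM"]],
    "VP": [["Verb"], ["Verb", "NP"]],
}
LEXICON = {
    "Det": {"that", "this", "a", "the"},
    "Noun": {"book", "flight", "meal", "money"},
    "Verb": {"book", "include", "prefer"},
    "Aux": {"does"},
    "ProperNoun": {"Houston", "TWA"},
}

def expand(symbols, words):
    # Try to derive a prefix of `words` from `symbols`, top-down and depth-first.
    # Returns the set of input positions reachable after consuming such a prefix.
    positions = {0}
    for sym in symbols:
        next_positions = set()
        for pos in positions:
            if sym in LEXICON:                      # pre-terminal: match one word
                if pos < len(words) and words[pos] in LEXICON[sym]:
                    next_positions.add(pos + 1)
            else:                                   # non-terminal: try every rule
                for rhs in GRAMMAR[sym]:
                    for end in expand(rhs, words[pos:]):   # re-derives subtrees repeatedly
                        next_positions.add(pos + end)
        positions = next_positions
    return positions

def recognize(sentence):
    words = sentence.split()
    return len(words) in expand(["S"], words)       # some expansion of S covers all words

print(recognize("book that flight"))   # True
print(recognize("the flight"))         # False (no VP)

Note that this sketch redoes the same sub-derivations over and over, which is exactly the wasted work the later slides on repeated parsing and dynamic programming address.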

Top Down Space

Basic Bottom-Up Parsing In bottom-up parsing, the parser starts with the words of the input and tries to build parse trees from the words up. The parser succeeds if it builds a parse tree rooted in the start symbol that covers all of the input.

Bottom-Up Space

Top-Down or Bottom-Up? Each of the top-down and bottom-up parsing strategies has its own advantages and disadvantages. The top-down strategy never wastes time exploring trees that cannot result in the start symbol, since it starts from there; the bottom-up strategy may waste time on exactly those trees. On the other hand, the top-down strategy spends time on trees that are not consistent with the input, whereas the bottom-up strategy never suggests trees that are not at least locally grounded in the actual input. Neither of these two basic strategies is good enough on its own for parsing natural languages.

Problems with the Basic Top-Down Parser Even a top-down parser with bottom-up filtering has three problems that make it an insufficient solution to the general-purpose parsing problem: left-recursion, ambiguity, and inefficient reparsing of subtrees. First we will talk about these three problems; then we will present the Earley algorithm, which avoids them.

Left-Recursion When left-recursive grammars are used, top-down depth-first left-to-right parsers can dive into an infinite path. A grammar is left-recursive if it contains at least one non-terminal A such that A ⇒* Aα. Such structures are common in natural language grammars, e.g. NP → NP PP. We can convert a left-recursive grammar into an equivalent grammar which is not left-recursive: A → Aβ | α becomes A → αA′ with A′ → βA′ | ε. Unfortunately, the resulting grammar may no longer be the most grammatically natural way to represent syntactic structures. A concrete instance is shown below.
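For instance (an illustrative application of the transformation above, not from the slides), the left-recursive NP rules
NP → NP PP
NP → Det NOM
become
NP → Det NOM NP′
NP′ → PP NP′
NP′ → ε
so that a top-down parser always consumes a Det before recursing, at the cost of flattening the PP-attachment structure into a right-branching chain.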

Left-Recursion What happens in the following situation: S → NP VP, S → Aux NP VP, NP → NP PP, NP → Det Nominal, … with a sentence starting with "Did the flight…"? A depth-first top-down parser that expands NP → NP PP first will keep rewriting NP as NP PP forever, without ever consuming any input.

Ambiguity "One morning I shot an elephant in my pajamas. How he got into my pajamas I don't know." (Groucho Marx)

Ambiguity A top-down parser is not efficient at handling ambiguity. Local ambiguity leads to hypotheses that are locally reasonable but eventually lead nowhere, and hence to backtracking. Global ambiguity potentially leads to multiple parses for the same input (if we force the parser to find them). A parser without disambiguation tools must simply return all possible parses, and many of them will be unreasonable; most disambiguation tools require statistical and semantic knowledge. Most applications do not want all possible parses, they want a single correct parse. The underlying problem is that an exponential number of parses is possible for certain inputs.

Ambiguity - Example Suppose we add the following rules to our grammar: VP → VP PP, NP → NP PP, and PP → Prep NP. Then the input "Show me the meal on flight 286 from Ankara to Istanbul" will have a lot of parses (14 parses?), some of them really strange. The number of NP parses grows quickly with the number of PPs:
Number of PPs    Number of NP parses
2                2
3                5
4                14
5                132
6                469

Lots of ambiguity Church and Patil (1982): the number of parses for such sentences grows at the rate of the number of parenthesizations of arithmetic expressions, which grow as the Catalan numbers:
PPs    Parses
1      2
2      5
3      14
4      132
5      469
6      1430
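For reference (a standard fact, not spelled out on the slide), the nth Catalan number is
\[ C_n = \frac{1}{n+1}\binom{2n}{n}, \qquad 1,\ 1,\ 2,\ 5,\ 14,\ 42,\ 132,\ 429,\ 1430,\ \ldots \]
It counts, among other things, the number of ways to fully parenthesize n + 1 items, which is why the number of attachment ambiguities blows up combinatorially.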

Avoiding Repeated Work Parsing is hard and slow, and it is wasteful to redo the same work over and over. Consider an attempt to top-down parse the following as an NP: "A flight from Indianapolis to Houston on TWA", with the grammar rules NP → Det NOM, NP → NP PP, NP → ProperNoun.

(Figures omitted: successive attempts to parse "a flight from Indianapolis to Houston on TWA" as an NP, each one rebuilding the same subtree for "flight".)

Repeated Parsing of Subtrees The parser often builds valid trees for portions of the input, discards them during backtracking, and then finds that it has to rebuild them. It creates small parse trees that fail because they do not cover all the input, backtracks to cover more input, and recreates the same subtrees again and again; the same work is repeated unnecessarily.

Dynamic Programming We want a parsing algorithm, based on dynamic programming, that fills a table with solutions to subproblems and that: does not do repeated work; does top-down search with bottom-up filtering; solves the left-recursion problem; and solves an exponential problem in O(N³) time. The answer is the Earley algorithm.

Earley Algorithm The algorithm fills a table (the chart) in a single pass over the input. The table has N+1 entries, where N is the number of words. Table entries represent completed constituents and their locations, in-progress constituents, and predicted constituents. Each possible subtree is represented only once, and it can be shared by all the parses that need it.

States A state in a table entry contains three kinds of information: a subtree corresponding to a single grammar rule; information about the progress made in completing this subtree; and the position of the subtree with respect to the input. We use a dot in the state's grammar rule to indicate the progress made in recognizing it; the resulting structure is called a dotted rule. A state's position is represented by two numbers indicating where the state starts and where its dot lies.

Earley States The table entries are called states and are represented with dotted rules:
S → · VP             a VP is predicted
NP → Det · Nominal   an NP is in progress
VP → V NP ·          a VP has been found

Earley States/Locations We also need to know where these things are in the input:
S → · VP [0,0]             a VP is predicted at the start of the sentence
NP → Det · Nominal [1,2]   an NP is in progress; the Det goes from 1 to 2
VP → V NP · [0,3]          a VP has been found starting at 0 and ending at 3

States - Dotted Rules Three example states (for "Book that flight"):
S → · VP, [0,0]
NP → Det · NOM, [1,2]
VP → Verb NP ·, [0,3]
The first state represents a top-down prediction for S. The first 0 indicates that the constituent predicted by this state should begin at position 0 (the beginning of the input); the second 0 indicates that the dot lies at position 0. The second state represents an in-progress constituent: it starts at position 1 and the dot lies at position 2. The third state represents a completed constituent: a VP has been successfully parsed, covering the input from position 0 to position 3.

Graphically

Earley Algorithm March through the chart left to right. At each step, apply one of three operators: Predictor (create new states representing top-down expectations), Scanner (match word predictions, i.e. rules with a part of speech after the dot, against the input words), Completer (when a state is complete, see what rules were looking for that completed constituent).

Predictor Given a state with a non-terminal to the right of the dot that is not a part-of-speech category, create a new state for each expansion of that non-terminal. Place these new states into the same chart entry as the generating state, beginning and ending where the generating state ends. So the predictor looking at S → · VP [0,0] results in VP → · Verb [0,0] and VP → · Verb NP [0,0].

Scanner Given a state with a non-terminal to the right of the dot that is a part-of-speech category: if the next word in the input matches this part of speech, create a new state with the dot moved over the non-terminal, and add it to the chart entry following the current one. So the scanner looking at VP → · Verb NP [0,0], if the next word, "book", can be a verb, adds the new state VP → Verb · NP [0,1]. Note that the Earley algorithm uses top-down information to disambiguate parts of speech: only parts of speech predicted by some state can get added to the chart.

Completer Applied to a state when its dot has reached the right end of the rule: the parser has discovered a category over some span of the input. Find and advance all previous states that were looking for this category: copy the state, move the dot, and insert the copy in the current chart entry. Given NP → Det Nominal · [1,3] and VP → Verb · NP [0,1], add VP → Verb NP · [0,3].

Earley: how do we know we are done? Find an S state in the final chart entry that spans from 0 to N and is complete, i.e. S → α · [0,N]. If such a state exists, you're done.

Earley So sweep through the chart entries from 0 to N: new predicted states are created by starting top-down from S; new incomplete states are created by advancing existing states as new constituents are discovered; new complete states are created in the same way.

Earley More specifically:
1. Predict all the states you can up front.
2. Read a word.
3. Extend states based on matches.
4. Add new predictions.
5. Go to step 2 until the input is exhausted.
6. Look at chart[N] to see if you have a winner.

Example Book that flight. We should find an S from 0 to 3 that is a completed state.

Example (chart figures omitted; the chart entries for "Book that flight" are listed state by state later in these slides)

What is it? What kind of parser did we just describe? (A trick question.) An Earley parser, yes; but it is not a parser, it is a recognizer. The presence of an S state with the right attributes in the right place indicates a successful recognition, but there is no parse tree, so it is not yet a parser. That's how we solve (not) an exponential problem in polynomial time.

Converting Earley from Recognizer to Parser With the addition of a few pointers we have a parser: augment the Completer to point to where we came from.

Augmenting the chart with structural information

Retrieving Parse Trees from the Chart All the possible parses for an input are in the table; we just need to read off the backpointers from every complete S in the last column. Find all the states S → X · [0,N] and follow the structural traces left by the Completer. Of course, this won't be polynomial time, since there could be an exponential number of trees, but we can at least represent the ambiguity efficiently.

Earley and Left Recursion Earley solves the left-recursion problem without having to alter the grammar or artificially limit the search: never place a state into the chart that is already there, and copy states before advancing them.

Earley and Left Recursion: 1 With the rules S → NP VP and NP → NP PP, the predictor, given the first rule S → · NP VP [0,0], predicts NP → · NP PP [0,0] and stops there, since predicting the same state again would be redundant.

Earley and Left Recursion: 2 When a state gets advanced, make a copy and leave the original alone. Say we have NP → · NP PP [0,0] and we find an NP from 0 to 2: we create NP → NP · PP [0,2], but we leave the original state as is.

Dynamic Programming Approaches Earley: top-down, no filtering, no restriction on grammar form. CYK: bottom-up, no filtering, grammars restricted to Chomsky Normal Form (CNF). The details are not important; the approaches differ in being bottom-up vs. top-down, with or without filters, and with or without restrictions on grammar form.

How to do parse disambiguation Probabilistic methods: augment the grammar with probabilities, then modify the parser to keep only the most probable parses, and at the end return the most probable parse.

Probabilistic CFGs The probabilistic model: assigning probabilities to parse trees, and getting the probabilities for the model. Parsing with probabilities: a slight modification to the dynamic programming approach; the task is to find the maximum-probability tree for an input.

Probability Model Attach probabilities to grammar rules; the expansions for a given non-terminal sum to 1:
VP → Verb         .55
VP → Verb NP      .40
VP → Verb NP NP   .05
Read each of these as P(specific rule | LHS).

Probability Model (1) A derivation (tree) consists of the set of grammar rules that are in the tree. The probability of a tree is just the product of the probabilities of the rules in the derivation.

Probability Model (1.1) The probability of a word sequence (sentence) is the probability of its tree in the unambiguous case, and the sum of the probabilities of its trees in the ambiguous case. A sketch of the formulas follows.
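In symbols (the standard PCFG definitions implied here, written out for clarity): if a tree T for sentence S is built with rule instances r1, …, rn, then
\[ P(T,S) = \prod_{i=1}^{n} P(r_i), \qquad P(S) = \sum_{T \in \mathrm{parses}(S)} P(T,S). \]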

Getting the Probabilities From an annotated database (a treebank): for example, to get the probability for a particular VP rule, just count all the times the rule is used and divide by the number of VPs overall, as sketched below.
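As a formula (the maximum-likelihood estimate the slide describes):
\[ P(\alpha \rightarrow \beta \mid \alpha) = \frac{\mathrm{Count}(\alpha \rightarrow \beta)}{\sum_{\gamma} \mathrm{Count}(\alpha \rightarrow \gamma)} = \frac{\mathrm{Count}(\alpha \rightarrow \beta)}{\mathrm{Count}(\alpha)} \]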

Assumptions We're assuming that there is a grammar to parse with, that a large robust dictionary with parts of speech exists, and that we have the ability to parse (i.e. a parser). Given all that, we can parse probabilistically.

Typical Approach A bottom-up (CYK) dynamic programming approach: assign probabilities to constituents as they are completed and placed in the table, and use the maximum probability for each constituent going up. A minimal sketch is given below.
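As an illustration, here is a minimal probabilistic CKY sketch in Python, keeping only the best (maximum log-probability) analysis per non-terminal per cell; the CNF grammar, probabilities, and names are my own illustrative assumptions, not from the slides.

import math
from collections import defaultdict

BINARY = [                       # (A, B, C, prob) for CNF rules A -> B C
    ("S", "NP", "VP", 1.0),
    ("NP", "Det", "Noun", 1.0),
    ("VP", "Verb", "NP", 0.7),
]
LEXICAL = [                      # (A, word, prob) for rules A -> word
    ("Det", "the", 0.5), ("Det", "a", 0.5),
    ("Noun", "flight", 0.4), ("Noun", "meal", 0.4),
    ("Verb", "includes", 1.0),
]

def pcky(words):
    n = len(words)
    # table[i][j][A] = best log-probability of an A spanning words[i:j]
    table = [[defaultdict(lambda: float("-inf")) for _ in range(n + 1)]
             for _ in range(n + 1)]
    for i, w in enumerate(words):                    # lexical (diagonal) cells
        for a, word, p in LEXICAL:
            if word == w:
                table[i][i + 1][a] = max(table[i][i + 1][a], math.log(p))
    for span in range(2, n + 1):                     # widen spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):                # try every split point
                for a, b, c, p in BINARY:
                    score = math.log(p) + table[i][k][b] + table[k][j][c]
                    if score > table[i][j][a]:       # keep only the max
                        table[i][j][a] = score
    return table[0][n]["S"]                          # best log-prob of a full S

print(math.exp(pcky("the flight includes a meal".split())))   # about 0.028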

Parsing with the Earley Algorithm New predicted states are based on existing table entries (predicted or in-progress) that predict a certain constituent at that spot. New in-progress states are created by updating older states to reflect the fact that previously expected constituents have been completed. New complete states are created when the dot in an in-progress state moves to the end.

More Specifically 1. Predict all the states. 2. Read an input word; see which predictions you can match; extend the matched states and add new predictions; move to the next chart entry and repeat step 2. 3. At the end, see if chart[N] contains a complete S.

A Simple English Grammar (Ex.)
S → NP VP            Det → that | this | a | the
S → Aux NP VP        Noun → flight | meal | money
S → VP               Verb → book | include | prefer
NP → Det NOM         Aux → does
NP → ProperNoun      ProperNoun → Houston | TWA
NOM → Noun
NOM → Noun NOM
VP → Verb
VP → Verb NP

Example: Chart[0]
γ → · S            [0,0]   dummy start state
S → · NP VP        [0,0]   Predictor
NP → · Det NOM     [0,0]   Predictor
NP → · ProperNoun  [0,0]   Predictor
S → · Aux NP VP    [0,0]   Predictor
S → · VP           [0,0]   Predictor
VP → · Verb        [0,0]   Predictor
VP → · Verb NP     [0,0]   Predictor

Example: Chart[1]
Verb → book ·      [0,1]   Scanner
VP → Verb ·        [0,1]   Completer
S → VP ·           [0,1]   Completer
VP → Verb · NP     [0,1]   Completer
NP → · Det NOM     [1,1]   Predictor
NP → · ProperNoun  [1,1]   Predictor

Example: Chart[2]
Det → that ·       [1,2]   Scanner
NP → Det · NOM     [1,2]   Completer
NOM → · Noun       [2,2]   Predictor
NOM → · Noun NOM   [2,2]   Predictor

Example: Chart[3]
Noun → flight ·    [2,3]   Scanner
NOM → Noun ·       [2,3]   Completer
NOM → Noun · NOM   [2,3]   Completer
NP → Det NOM ·     [1,3]   Completer
VP → Verb NP ·     [0,3]   Completer
S → VP ·           [0,3]   Completer
NOM → · Noun       [3,3]   Predictor
NOM → · Noun NOM   [3,3]   Predictor

Earley Algorithm The Earley algorithm has three main functions that do all the work. Predictor: adds predictions to the chart; it is activated when the dot (in a state) is in front of a non-terminal that is not a part of speech. Completer: moves the dot to the right when new constituents are found; it is activated when the dot is at the end of a state. Scanner: reads the input words and enters states representing those words into the chart; it is activated when the dot (in a state) is in front of a non-terminal that is a part of speech. The Earley algorithm uses these functions to maintain the chart.

Predictor
procedure PREDICTOR((A → α · B β, [i,j]))
  for each (B → γ) in GRAMMAR-RULES-FOR(B, grammar) do
    ENQUEUE((B → · γ, [j,j]), chart[j])
  end

Completer
procedure COMPLETER((B → γ ·, [j,k]))
  for each (A → α · B β, [i,j]) in chart[j] do
    ENQUEUE((A → α B · β, [i,k]), chart[k])
  end

Scanner
procedure SCANNER((A → α · B β, [i,j]))
  if B ∈ PARTS-OF-SPEECH(word[j]) then
    ENQUEUE((B → word[j] ·, [j,j+1]), chart[j+1])
  end

Enqueue
procedure ENQUEUE(state, chart-entry)
  if state is not already in chart-entry then
    Add state to the end of chart-entry
  end

Earley Code
function EARLEY-PARSE(words, grammar) returns chart
  ENQUEUE((γ → · S, [0,0]), chart[0])
  for i from 0 to LENGTH(words) do
    for each state in chart[i] do
      if INCOMPLETE?(state) and NEXT-CAT(state) is not a part of speech then
        PREDICTOR(state)
      elseif INCOMPLETE?(state) and NEXT-CAT(state) is a part of speech then
        SCANNER(state)
      else
        COMPLETER(state)
    end
  end
  return chart
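Putting the pieces together, here is a minimal runnable Earley recognizer sketch in Python that follows the pseudocode above; the grammar and lexicon dictionaries, the GAMMA dummy start symbol, and the helper names are my own illustrative choices, not from the slides.

from collections import namedtuple

State = namedtuple("State", "lhs rhs dot start end")

GRAMMAR = {
    "S": [("NP", "VP"), ("Aux", "NP", "VP"), ("VP",)],
    "NP": [("Det", "NOM"), ("ProperNoun",)],
    "NOM": [("Noun",), ("Noun", "NOM")],
    "VP": [("Verb",), ("Verb", "NP")],
}
LEXICON = {"book": {"Verb", "Noun"}, "that": {"Det"}, "flight": {"Noun"},
           "does": {"Aux"}, "Houston": {"ProperNoun"}, "TWA": {"ProperNoun"}}
POS = {"Det", "Noun", "Verb", "Aux", "ProperNoun"}

def next_cat(state):
    return state.rhs[state.dot] if state.dot < len(state.rhs) else None

def earley_recognize(words):
    chart = [[] for _ in range(len(words) + 1)]
    def enqueue(state, i):
        if state not in chart[i]:        # never add a state already in the chart
            chart[i].append(state)
    enqueue(State("GAMMA", ("S",), 0, 0, 0), 0)          # dummy start state
    for i in range(len(words) + 1):
        j = 0
        while j < len(chart[i]):                          # chart[i] may grow as we go
            st = chart[i][j]; j += 1
            cat = next_cat(st)
            if cat is not None and cat not in POS:        # PREDICTOR
                for rhs in GRAMMAR[cat]:
                    enqueue(State(cat, rhs, 0, i, i), i)
            elif cat is not None:                         # SCANNER (cat is a POS)
                if i < len(words) and cat in LEXICON.get(words[i], set()):
                    enqueue(State(cat, (words[i],), 1, i, i + 1), i + 1)
            else:                                         # COMPLETER (dot at the end)
                for old in list(chart[st.start]):
                    if next_cat(old) == st.lhs:
                        enqueue(State(old.lhs, old.rhs, old.dot + 1,
                                      old.start, i), i)
    # done if a complete dummy state spans the whole input
    return any(s.lhs == "GAMMA" and s.dot == 1 for s in chart[len(words)])

print(earley_recognize("book that flight".split()))       # True
print(earley_recognize("book flight that".split()))       # False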

Retrieving Parse Trees from a Chart To retrieve parse trees from a chart, the representation of each state must be augmented with an additional field that stores information about the completed states that generated its constituents. To collect parse trees, we update COMPLETER so that it adds a pointer to the older state onto the list of previous states of the new state. The parse tree can then be created by following these lists of previous states, starting from the completed S state.

Chart[0] - with Parse Tree Info
S0  γ → · S            [0,0]  []  dummy start state
S1  S → · NP VP        [0,0]  []  Predictor
S2  NP → · Det NOM     [0,0]  []  Predictor
S3  NP → · ProperNoun  [0,0]  []  Predictor
S4  S → · Aux NP VP    [0,0]  []  Predictor
S5  S → · VP           [0,0]  []  Predictor
S6  VP → · Verb        [0,0]  []  Predictor
S7  VP → · Verb NP     [0,0]  []  Predictor

Chart[1] - with Parse Tree Info
S8   Verb → book ·      [0,1]  []    Scanner
S9   VP → Verb ·        [0,1]  [S8]  Completer
S10  S → VP ·           [0,1]  [S9]  Completer
S11  VP → Verb · NP     [0,1]  [S8]  Completer
S12  NP → · Det NOM     [1,1]  []    Predictor
S13  NP → · ProperNoun  [1,1]  []    Predictor

Chart[2] - with Parse Tree Info
S14  Det → that ·       [1,2]  []     Scanner
S15  NP → Det · NOM     [1,2]  [S14]  Completer
S16  NOM → · Noun       [2,2]  []     Predictor
S17  NOM → · Noun NOM   [2,2]  []     Predictor

Chart[3] - with Parse Tree Info
S18  Noun → flight ·    [2,3]  []         Scanner
S19  NOM → Noun ·       [2,3]  [S18]      Completer
S20  NOM → Noun · NOM   [2,3]  [S18]      Completer
S21  NP → Det NOM ·     [1,3]  [S14,S19]  Completer
S22  VP → Verb NP ·     [0,3]  [S8,S21]   Completer
S23  S → VP ·           [0,3]  [S22]      Completer
S24  NOM → · Noun       [3,3]  []         Predictor
S25  NOM → · Noun NOM   [3,3]  []         Predictor

Global Ambiguity Grammar rules: S → Verb, S → Noun.
Chart[0]
S0  γ → · S     [0,0]  []  dummy start state
S1  S → · Verb  [0,0]  []  Predictor
S2  S → · Noun  [0,0]  []  Predictor
Chart[1]
S3  Verb → book ·  [0,1]  []    Scanner
S4  Noun → book ·  [0,1]  []    Scanner
S5  S → Verb ·     [0,1]  [S3]  Completer
S6  S → Noun ·     [0,1]  [S4]  Completer

Statistical Parse Disambiguation Problem: how do we disambiguate among a set of parses of a given sentence? We want to pick the parse tree that corresponds to the correct meaning. Possible solutions: pass the problem on to semantic processing; use principle-based disambiguation methods; or use a probabilistic model to assign likelihoods to the alternative parse trees and select the best one (or at least rank them). Associating probabilities with the grammar rules gives us such a model.

Probabilistic CFGs Associate a probability with each grammar rule; the probability reflects the relative likelihood of using the rule to generate the LHS constituent. Assume that for a constituent C we have k grammar rules of the form C → αi. We are interested in calculating P(C → αi | C), the probability of using rule i to derive C. Such probabilities can be estimated from a corpus of parse trees (a treebank): P(C → αi | C) = Count(C → αi) / Count(C).

Probabilistic CFGs (cont.) Attach probabilities to grammar rules; the expansions for a given non-terminal sum to 1:
VP → Verb         .55
VP → Verb NP      .40
VP → Verb NP NP   .05

Assigning Probabilities to Parse Trees Assume that the probability of a constituent is independent of the context in which it appears in the parse tree. The probability of a constituent C′ that was constructed from A1′, …, An′ using the rule C → A1 … An is: P(C′) = P(C → A1 … An | C) · P(A1′) · … · P(An′). At the leaves of the tree, we use the POS probabilities P(C | wi).

Assigning Probabilities to Parse Trees (cont.) A derivation (tree) consists of the set of grammar rules that are in the tree; the probability of a derivation (tree) is just the product of the probabilities of the rules in the derivation.

Assigning Probabilities to Parse Trees (Ex. Grammar)
S → NP VP       0.6
S → VP          0.4
NP → Noun       1.0
VP → Verb       0.3
VP → Verb NP    0.7
Noun → book     0.2
Verb → book     0.1

Parse Trees for an Input: book book
First parse: [S [NP [Noun book]] [VP [Verb book]]]
P([Noun book]) = P(Noun → book) = 0.2
P([Verb book]) = P(Verb → book) = 0.1
P([NP [Noun book]]) = P(NP → Noun) · P([Noun book]) = 1.0 · 0.2 = 0.2
P([VP [Verb book]]) = P(VP → Verb) · P([Verb book]) = 0.3 · 0.1 = 0.03
P([S [NP [Noun book]] [VP [Verb book]]]) = P(S → NP VP) · 0.2 · 0.03 = 0.6 · 0.2 · 0.03 = 0.0036
Second parse: [S [VP [Verb book] [NP [Noun book]]]]
P([VP [Verb book] [NP [Noun book]]]) = P(VP → Verb NP) · 0.1 · 0.2 = 0.7 · 0.1 · 0.2 = 0.014
P([S [VP [Verb book] [NP [Noun book]]]]) = P(S → VP) · 0.014 = 0.4 · 0.014 = 0.0056
The second tree has the higher probability, so a probabilistic parser would prefer it.

Problems with Probabilistic CFG Models The main problem with the probabilistic CFG model is that it does not take contextual effects into account. Example: pronouns are much more likely to appear in the subject position of a sentence than in object position, but in a PCFG the rule NP → Pronoun has only one probability. One simple possible extension is to make probabilities dependent on the first word of the constituent: instead of P(C → αi | C), use P(C → αi | C, w), where w is the first word in C. Example: the rule VP → V NP PP is used 93% of the time with the verb put, but only 10% of the time with like. This requires estimating a much larger set of probabilities, but can significantly improve disambiguation performance.

Probabilistic Lexicalized CFGs A solution to some of the problems with probabilistic CFGs is to use probabilistic lexicalized CFGs: use the probabilities of particular words (heads) in the computation of the probabilities in the derivation.

Example

How to find the probabilities? We used to have VP → V NP PP with P(r | VP): that is the count of this rule divided by the number of VPs in a treebank. Now we have VP(dumped) → V(dumped) NP(sacks) PP(in), with P(r | VP ∧ dumped is the verb ∧ sacks is the head of the NP ∧ in is the head of the PP), which is not likely to have significant counts in any treebank.

Subcategorization When stuck, exploit independence and collect the statistics you can. We'll focus on capturing two things: verb subcategorization (particular verbs have affinities for particular VPs) and objects' affinities for their predicates (mostly their mothers and grandmothers; some objects fit better with some predicates than others). Condition particular VP rules on their head, so for r: VP → V NP PP, P(r | VP) becomes P(r | VP ∧ dumped). What's the count? How many times this rule was used with dump, divided by the number of VPs that dump appears in in total; see the formula below.
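In symbols (my rendering of the estimate just described):
\[ P(\text{VP} \rightarrow \text{V NP PP} \mid \text{VP}, \text{dumped}) = \frac{\mathrm{Count}(\text{VP(dumped)} \rightarrow \text{V NP PP})}{\mathrm{Count}(\text{VP(dumped)})} \]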

Atomic Subcat Symbols
VP → Verb
  <VP HEAD> = <Verb HEAD>
  <VP HEAD SUBCAT> = INTRANS
VP → Verb NP
  <VP HEAD SUBCAT> = TRANS
VP → Verb NP NP
  <VP HEAD SUBCAT> = DITRANS
Verb → slept
  <Verb HEAD SUBCAT> = INTRANS
Verb → served
  <Verb HEAD SUBCAT> = TRANS
Verb → gave
  <Verb HEAD SUBCAT> = DITRANS

Encoding Subcat Lists as Features
Verb → gave
  <Verb HEAD SUBCAT FIRST CAT> = NP
  <Verb HEAD SUBCAT SECOND CAT> = NP
  <Verb HEAD SUBCAT THIRD> = END
VP → Verb NP NP
  <VP HEAD> = <Verb HEAD>
  <VP HEAD SUBCAT FIRST CAT> = <NP CAT>
  <VP HEAD SUBCAT SECOND CAT> = <NP CAT>
  <VP HEAD SUBCAT THIRD> = END
Here we are only encoding lists using positional features.

Minimal Rule Approach In fact, we do not need symbols like SECOND and THIRD; they are just used to encode lists. We can use lists directly (as in LISP):
  <SUBCAT FIRST CAT> = NP
  <SUBCAT REST FIRST CAT> = NP
  <SUBCAT REST REST> = END

Subcategorization Frames for Lexical Entries We can use two different notations to represent subcategorization frames for lexical entries (verbs), e.g. for want (which takes either an NP object or an infinitival VP complement):
Verb → want
  <Verb HEAD SUBCAT FIRST CAT> = NP
  <Verb HEAD SUBCAT FIRST CAT> = VP
  <Verb HEAD SUBCAT FIRST FORM> = INFINITIVE

Implementing Unification The representation we have used so far cannot support the destructive merger aspect of the unification algorithm. For this reason, we add additional features (additional edges in the DAGs) to our feature structures. Each feature structure consists of two fields: a content field, which can be Null or contain an ordinary feature structure, and a pointer field, which can be Null or contain a pointer to another feature structure. If the pointer field of a DAG is Null, the content field of the DAG contains the actual feature structure to be processed; if the pointer field is not Null, the destination of that pointer represents the actual feature structure to be processed.

Extended Feature Structures

Extended DAG (figure omitted: a DAG whose content field C holds Num = SG and Per = 3, and whose pointer field P is Null)

Unification of Extended DAGs (figure omitted: two extended DAGs to be unified, one whose content holds Num = SG and one whose content holds Per = 3, each with a Null pointer field)

Unification of Extended DAGs (cont.) (figure omitted: the result of the unification, a DAG whose content holds both Num = SG and Per = 3)

Unification Algorithm
function UNIFY(f1, f2) returns fstructure or failure
  f1real ← real contents of f1   /* dereference f1 */
  f2real ← real contents of f2   /* dereference f2 */
  if f1real is Null then { f1.pointer ← f2; return f2; }
  else if f2real is Null then { f2.pointer ← f1; return f1; }
  else if f1real and f2real are identical then { f1.pointer ← f2; return f2; }
  else if f1real and f2real are complex feature structures then {
    f2.pointer ← f1;
    for each feature in f2real do {
      otherfeature ← find or create a feature corresponding to feature in f1real;
      if UNIFY(feature.value, otherfeature.value) returns failure then return failure;
    }
    return f1;
  }
  else return failure
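For concreteness, here is a minimal Python sketch of this destructive unification procedure, representing each node as a dict with 'content' and 'pointer' fields; the representation details and helper names are my own illustrative choices, not from the slides.

def fs(content=None):
    # build a feature-structure node with an empty pointer field
    return {"content": content, "pointer": None}

def deref(f):
    # follow the pointer chain to the "real" feature structure
    while f["pointer"] is not None:
        f = f["pointer"]
    return f

def unify(f1, f2):
    f1real, f2real = deref(f1), deref(f2)
    if f1real["content"] is None:                 # f1 is unbound: forward it to f2
        f1real["pointer"] = f2real
        return f2real
    if f2real["content"] is None:                 # f2 is unbound: forward it to f1
        f2real["pointer"] = f1real
        return f1real
    if f1real["content"] == f2real["content"] and not isinstance(f1real["content"], dict):
        f1real["pointer"] = f2real                # identical atomic values
        return f2real
    if isinstance(f1real["content"], dict) and isinstance(f2real["content"], dict):
        f2real["pointer"] = f1real                # merge the complex structures into f1
        for feat, value in f2real["content"].items():
            other = f1real["content"].setdefault(feat, fs())
            if unify(value, other) is None:       # recursive unification failed
                return None
        return f1real
    return None                                   # atomic clash: failure

# Example: unify [Num SG] with [Per 3], giving [Num SG, Per 3]
a = fs({"Num": fs("SG")})
b = fs({"Per": fs("3")})
unify(a, b)
print({feat: deref(node)["content"] for feat, node in deref(a)["content"].items()})
# prints {'Num': 'SG', 'Per': '3'}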

Example - Unification of Complex Structures

Example - Unification of Complex Structures (cont.) (figure omitted: the resulting extended DAG, with Num = SG and Per = 3 under the subject's agreement features)

Parsing with Unification Constraints Let us assume that we have augmented our grammar with sets of unification constraints. What changes do we need to make to a parser to use them? Building feature structures and associating them with sub-trees; unifying feature structures when sub-trees are created; and blocking ill-formed constituents.

Earley Parsing with Unification Constraints What do we have to do to integrate unification constraints with the Earley parser? Build feature structures (represented as DAGs) and associate them with states in the chart; unify feature structures as states are advanced in the chart; and block ill-formed states from entering the chart. The main change is in the COMPLETER function of the Earley parser: this routine will invoke the unifier to unify two feature structures.

Building Feature Structures The rule
NP → Det NOMINAL
  <Det HEAD AGREEMENT> = <NOMINAL HEAD AGREEMENT>
  <NP HEAD> = <NOMINAL HEAD>
corresponds to a DAG encoding these path equations (figure omitted).

Augmenting States with DAGs Each state will have an additional field containing the DAG that represents the feature structure corresponding to the state. When a rule is first used by PREDICTOR to create a state, the DAG associated with the state is simply the DAG retrieved from the rule. For example, S → · NP VP, [0,0], [], Dag1, where Dag1 is the feature structure corresponding to S → NP VP; and NP → · Det NOMINAL, [0,0], [], Dag2, where Dag2 is the feature structure corresponding to NP → Det NOMINAL.

What does COMPLETER do? When COMPLETER advances the dot in a state, it should unify the feature structure of the newly completed state with the appropriate part of the feature structure of the state being advanced. If this unification succeeds, the new state gets the result of the unification as its DAG and is entered into the chart; if it fails, nothing is entered into the chart.

A Completion Example Parsing the phrase "that flight", after that has been processed:
NP → Det · NOMINAL, [0,1], [SDet], Dag1
and a newly completed state
NOMINAL → Noun ·, [1,2], [SNoun], Dag2.
To advance the NP state, the parser unifies the feature structure found under the NOMINAL feature of Dag2 with the feature structure found under the NOMINAL feature of Dag1.

Earley Parse
function EARLEY-PARSE(words, grammar) returns chart
  ENQUEUE((γ → · S, [0,0], dag), chart[0])
  for i from 0 to LENGTH(words) do
    for each state in chart[i] do
      if INCOMPLETE?(state) and NEXT-CAT(state) is not a part of speech then
        PREDICTOR(state)
      elseif INCOMPLETE?(state) and NEXT-CAT(state) is a part of speech then
        SCANNER(state)
      else
        COMPLETER(state)
    end
  end
  return chart

Predictor and Scanner
procedure PREDICTOR((A → α · B β, [i,j], dagA))
  for each (B → γ) in GRAMMAR-RULES-FOR(B, grammar) do
    ENQUEUE((B → · γ, [j,j], dagB), chart[j])
  end
procedure SCANNER((A → α · B β, [i,j], dagA))
  if B ∈ PARTS-OF-SPEECH(word[j]) then
    ENQUEUE((B → word[j] ·, [j,j+1], dagB), chart[j+1])
  end

Completer and UnifyStates
procedure COMPLETER((B → γ ·, [j,k], dagB))
  for each (A → α · B β, [i,j], dagA) in chart[j] do
    if newdag ← UNIFY-STATES(dagB, dagA, B) does not fail then
      ENQUEUE((A → α B · β, [i,k], newdag), chart[k])
  end
procedure UNIFY-STATES(dag1, dag2, cat)
  dag1cp ← CopyDag(dag1)
  dag2cp ← CopyDag(dag2)
  UNIFY(FollowPath(cat, dag1cp), FollowPath(cat, dag2cp))

Enqueue
procedure ENQUEUE(state, chart-entry)
  if state is not subsumed by a state in chart-entry then
    Add state to the end of chart-entry
  end