GRAMMARS David Kauchak CS457 – Spring 2011 some slides adapted from Ray Mooney

Admin  Assignment 2  Assignment 3  Technically due Sunday Oct. 16 at midnight  Work in pairs  Any programming language  Given example output

Constituency  Parts of speech can be thought of as the lowest level of syntactic information  Groups words together into categories likes to eat candy. What can/can’t go here?

Constituency  ____ likes to eat candy.  pronouns: He, She  determiner + nouns: The man, The boy, The cat  nouns: Dave, Professor Kauchak, Dr. Seuss  determiner + nouns + more: The man that I saw, The boy with the blue pants, The cat in the hat

Constituency  Words in languages tend to form into functional groups (parts of speech)  Groups of words (aka phrases) can also be grouped into functional groups  often some relation to parts of speech  though, more complex interactions  These phrase groups are called constituents

Common constituents  [He]NP [likes to eat candy]VP.  [The man in the hat]NP [ran to the park]VP.

Common constituents  [The man [in the hat]PP]NP [ran to the park]VP.  the prepositional phrase sits inside the subject noun phrase

Common constituents  [The man [in [the hat]NP]PP]NP [ran to [the park]NP]VP.  noun phrases also sit inside the prepositional phrases

Syntactic structure  Hierarchical: syntactic trees The man in the hat ran to the park. DT NN IN DT NN VBD IN DT NN NP PP NP PP VP S parts of speech terminals (words) non-terminals

Syntactic structure The man in the hat ran to the park. DT NN IN DT NN VBD IN DT NN NP PP NP PP VP S (S (NP (NP (DT the) (NN man)) (PP (IN in) (NP (DT the) (NN hat)))) (VP (VBD ran) (PP (TO to (NP (DT the) (NN park))))))

Syntactic structure (S (NP (NP (DT the) (NN man)) (PP (IN in) (NP (DT the) (NN hat)))) (VP (VBD ran) (PP (TO to) (NP (DT the) (NN park)))))) (S (NP (NP (DT the) (NN man)) (PP (IN in) (NP (DT the) (NN hat)))) (VP (VBD ran) (PP (TO to (NP (DT the) (NN park))))))
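This bracketed notation is just nested parentheses, so it can be read mechanically. Here is a minimal Python sketch; the function name read_tree and the (label, children) tuple representation are our own illustration, not something from the slides:

```python
def read_tree(s):
    """Read a Penn-style bracketed parse such as (S (NP ...) (VP ...))
    into nested (label, children) tuples; bare tokens are words."""
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()
    pos = 0

    def parse():
        nonlocal pos
        pos += 1                         # skip '('
        label = tokens[pos]
        pos += 1
        children = []
        while tokens[pos] != ")":
            if tokens[pos] == "(":
                children.append(parse())      # nested constituent
            else:
                children.append(tokens[pos])  # a word (terminal)
                pos += 1
        pos += 1                         # skip ')'
        return (label, children)

    return parse()

tree = read_tree("(S (NP (NP (DT the) (NN man)) (PP (IN in) (NP (DT the) (NN hat)))) "
                 "(VP (VBD ran) (PP (TO to) (NP (DT the) (NN park)))))")
print(tree[0])   # S -- the root; tree[1] holds the NP and VP subtrees
```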

Syntactic structure  A number of related problems:  Given a sentence, can we determine the syntactic structure?  Can we determine if a sentence is grammatical?  Can we determine how likely a sentence is to be grammatical? to be an English sentence?  Can we generate candidate, grammatical sentences?

Grammars  What is a grammar (3rd grade again…)?

Grammars  Grammar is a set of structural rules that govern the composition of sentences, phrases and words  Lots of different kinds of grammars:  regular  context-free  context-sensitive  recursively enumerable  transformation grammars

States  What is the capital of this state? Jefferson City (Missouri)

Context free grammar  How many people have heard of them?  Look like:  S → NP VP  left-hand side (single symbol), right-hand side (one or more symbols)

Formally…  G = (NT, T, P, S)  NT: finite set of nonterminal symbols  T: finite set of terminal symbols, NT and T are disjoint  P: finite set of productions of the form A → α, where A ∈ NT and α ∈ (T ∪ NT)*  S ∈ NT: start symbol

CFG: Example  Many possible CFGs for English, here is an example (fragment):  S  NP VP  VP  V NP  NP  DetP N | AdjP NP  AdjP  Adj | Adv AdjP  N  boy | girl  V  sees | likes  Adj  big | small  Adv  very  DetP  a | the

Grammar questions  Can we determine if a sentence is grammatical?  Given a sentence, can we determine the syntactic structure?  Can we determine how likely a sentence is to be grammatical? to be an English sentence?  Can we generate candidate, grammatical sentences? Which of these can we answer with a CFG? How?

Grammar questions  Can we determine if a sentence is grammatical?  Is it accepted/recognized by the grammar  Applying rules right to left, do we get the start symbol?  Given a sentence, can we determine the syntactic structure?  Keep track of the rules applied…  Can we determine how likely a sentence is to be grammatical? to be an English sentence?  Not yet… no notion of “likelihood” (probability)  Can we generate candidate, grammatical sentences?  Start from the start symbol, randomly pick rules that apply (i.e. left hand side matches)

Derivations in a CFG  S → NP VP  VP → V NP  NP → DetP N | AdjP NP  AdjP → Adj | Adv AdjP  N → boy | girl  V → sees | likes  Adj → big | small  Adv → very  DetP → a | the  Expanding one step at a time:  S ⇒ NP VP ⇒ DetP N VP ⇒ the boy VP ⇒ the boy likes NP ⇒ the boy likes a girl

Derivations in a CFG; Order of Derivation Irrelevant  From NP VP we can expand the subject first (DetP N VP ⇒ the boy VP ⇒ …) or the verb phrase first (DetP N likes NP ⇒ …); either order of derivation ends at the boy likes a girl

Derivations of CFGs  String rewriting system: we derive a string  But derivation history represented by phrase-structure tree  [tree for “the boy likes a girl”: S splits into NP (DetP the, N boy) and VP (V likes, NP (DetP a, N girl))]

Parsing  Parsing is the field of NLP interested in automatically determining the syntactic structure of a sentence  parsing can be thought of as determining what sentences are “valid” English sentences  As a by product, we often can get the structure

Parsing  Given a CFG and a sentence, determine the possible parse tree(s) S -> NP VP NP -> N NP -> PRP NP -> N PP VP -> V NP VP -> V NP PP PP -> IN N PRP -> I V -> eat N -> sushi N -> tuna IN -> with I eat sushi with tuna What parse trees are possible for this sentence? What if the grammar is much larger?

Parsing  S -> NP VP  NP -> PRP  NP -> N PP  NP -> N  VP -> V NP  VP -> V NP PP  PP -> IN N  PRP -> I  V -> eat  N -> sushi  N -> tuna  IN -> with  [two parse trees for “I/PRP eat/V sushi/N with/IN tuna/N”: in one, the PP “with tuna” attaches inside the object NP (NP -> N PP); in the other, it attaches to the VP (VP -> V NP PP)]  What is the difference between these parses?

Parsing ambiguity  S -> NP VP  NP -> PRP  NP -> N PP  VP -> V NP  VP -> V NP PP  PP -> IN N  PRP -> I  V -> eat  N -> sushi  N -> tuna  IN -> with  [the same two parse trees for “I eat sushi with tuna”, differing only in where the PP attaches]  How can we decide between these?

A Simple PCFG  S → NP VP 1.0  VP → V NP 0.7  VP → VP PP 0.3  PP → P NP 1.0  P → with 1.0  V → saw 1.0  NP → NP PP 0.4  NP → astronomers 0.1  NP → ears 0.18  NP → saw 0.04  NP → stars 0.18  NP → telescope 0.1  Probabilities!

Just like n-gram language modeling, PCFGs break the sentence generation process into smaller steps/probabilities. The probability of a parse is the product of the probabilities of the PCFG rules used.

= 1.0 * 0.1 * 0.7 * 1.0 * 0.4 * 0.18 * 1.0 * 1.0 * 0.18 = 0.0009072  = 1.0 * 0.1 * 0.3 * 0.7 * 1.0 * 0.18 * 1.0 * 1.0 * 0.18 = 0.0006804
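The arithmetic above is easy to check; a tiny sketch, reading the rule probabilities off the two parse trees (this PCFG is the standard “astronomers saw stars with ears” example, where the two parses differ in where the PP attaches):

```python
from math import prod

parse1 = [1.0, 0.1, 0.7, 1.0, 0.4, 0.18, 1.0, 1.0, 0.18]  # PP attached to the NP
parse2 = [1.0, 0.1, 0.3, 0.7, 1.0, 0.18, 1.0, 1.0, 0.18]  # PP attached to the VP

print(prod(parse1))   # ~0.0009072
print(prod(parse2))   # ~0.0006804 -- the NP attachment wins under this PCFG
```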

Parsing problems  Pick a model  e.g. CFG, PCFG, …  Train (or learn) a model  What CFG/PCFG rules should I use?  Parameters (e.g. PCFG probabilities)?  What kind of data do we have?  Parsing  Determine the parse tree(s) given a sentence

PCFG: Training  If we have example parsed sentences, how can we learn a PCFG?  [diagram: a Tree Bank of parses of sentences such as “John put the dog in the pen” feeds Supervised PCFG Training, which outputs an English grammar:  S → NP VP  S → VP  NP → Det A N  NP → NP PP  NP → PropN  A → ε  A → Adj A  PP → Prep NP  VP → V NP  VP → VP PP]

Extracting the rules  [parse tree for “I/PRP eat/V sushi/N with/IN tuna/N”, with the PP attached inside the object NP]  What CFG rules occur in this tree?  S -> NP VP  NP -> PRP  PRP -> I  VP -> V NP  V -> eat  NP -> N PP  N -> sushi  PP -> IN N  IN -> with  N -> tuna
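The extraction is a straightforward tree walk. A sketch over the same kind of nested-tuple tree as before (our own representation, hypothetical):

```python
# The tree from the slide, with the PP attached inside the object NP.
tree = ("S",
        [("NP", [("PRP", ["I"])]),
         ("VP", [("V", ["eat"]),
                 ("NP", [("N", ["sushi"]),
                         ("PP", [("IN", ["with"]), ("N", ["tuna"])])])])])

def rules(t):
    """Yield one rule per internal node: label -> labels/words of children."""
    label, children = t
    rhs = [c[0] if isinstance(c, tuple) else c for c in children]
    yield f"{label} -> {' '.join(rhs)}"
    for child in children:
        if isinstance(child, tuple):
            yield from rules(child)

for r in rules(tree):
    print(r)   # S -> NP VP, NP -> PRP, PRP -> I, VP -> V NP, V -> eat, ...
```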

Estimating PCFG Probabilities  We can extract the rules from the trees S  NP VP 1.0 VP  V NP 0.7 VP  VP PP 0.3 PP  P NP 1.0 P  with 1.0 V  saw 1.0 How do we go from the extracted CFG rules to PCFG rules? S -> NP VP NP -> PRP PRP -> I VP -> V NP V -> eat NP -> N PP N -> sushi …

Estimating PCFG Probabilities  Extract the rules from the trees  Calculate the probabilities using MLE

Estimating PCFG Probabilities  Occurrences:  S -> NP VP 10  S -> V NP 3  S -> VP PP 2  NP -> N 7  NP -> N PP 3  NP -> DT N 6  P(S -> V NP) = ?  P(S -> V NP) = P(S -> V NP | S) = count(S -> V NP) / count(S) = 3 / 15 = 0.2
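The same computation over the whole occurrence table, as a short sketch (names illustrative):

```python
from collections import Counter, defaultdict

counts = Counter({               # rule occurrence counts from the slide
    ("S", ("NP", "VP")): 10,
    ("S", ("V", "NP")):   3,
    ("S", ("VP", "PP")):  2,
    ("NP", ("N",)):       7,
    ("NP", ("N", "PP")):  3,
    ("NP", ("DT", "N")):  6,
})

lhs_totals = defaultdict(int)    # count(A) = total expansions of A
for (lhs, _), n in counts.items():
    lhs_totals[lhs] += n

# MLE: P(A -> beta | A) = count(A -> beta) / count(A)
probs = {rule: n / lhs_totals[rule[0]] for rule, n in counts.items()}
print(probs[("S", ("V", "NP"))])   # 3 / 15 = 0.2
```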

Grammar Equivalence  Weak equivalence: grammars generate same set of strings  Grammar 1: NP → DetP N and DetP → a | the  Grammar 2: NP → a N | NP → the N  Strong equivalence: grammars have same set of derivation trees  With CFGs, possible only with useless rules  Grammar 2: NP → a N | NP → the N  Grammar 3: NP → a N | NP → the N, DetP → many

Normal Forms  There are weakly equivalent normal forms (Chomsky Normal Form, Greibach Normal Form)  A CFG is in Chomsky Normal Form (CNF) if all productions are of one of two forms:  A → BC with A, B, C nonterminals  A → a, with A a nonterminal and a a terminal  Every CFG has a weakly equivalent CFG in CNF

CNF Grammar  Before (VP -> VB NP PP is ternary):  S -> VP  VP -> VB NP  VP -> VB NP PP  NP -> DT NN  NP -> NN  NP -> NP PP  PP -> IN NP  DT -> the  IN -> with  VB -> film  VB -> trust  NN -> man  NN -> film  NN -> trust  After binarizing with a new nonterminal VP2:  S -> VP  VP -> VB NP  VP -> VP2 PP  VP2 -> VB NP  NP -> DT NN  NP -> NN  NP -> NP PP  PP -> IN NP  DT -> the  IN -> with  VB -> film  VB -> trust  NN -> man  NN -> film  NN -> trust
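The VP2 trick generalizes: any rule with more than two right-hand-side symbols can be split mechanically by introducing fresh nonterminals. A simplified sketch (it only binarizes; it does not remove the unary rules, which the CKY slides below keep anyway):

```python
def binarize(rules):
    """Left-binarize n-ary rules, mirroring the slide:
    VP -> VB NP PP  becomes  VP2 -> VB NP  and  VP -> VP2 PP."""
    out, counter = [], {}
    for lhs, rhs in rules:
        rhs = list(rhs)
        while len(rhs) > 2:
            counter[lhs] = counter.get(lhs, 1) + 1
            new = f"{lhs}{counter[lhs]}"            # VP2, VP3, ...
            out.append((new, (rhs[0], rhs[1])))     # new symbol covers first two
            rhs = [new] + rhs[2:]
        out.append((lhs, tuple(rhs)))
    return out

print(binarize([("VP", ("VB", "NP", "PP"))]))
# [('VP2', ('VB', 'NP')), ('VP', ('VP2', 'PP'))]
```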

Probabilistic Grammar Conversion  Original Grammar:  S → NP VP  S → Aux NP VP  S → VP  NP → Pronoun  NP → Proper-Noun  NP → Det Nominal  Nominal → Noun  Nominal → Nominal Noun  Nominal → Nominal PP  VP → Verb  VP → Verb NP  VP → VP PP  PP → Prep NP  Chomsky Normal Form:  S → NP VP  S → X1 VP  X1 → Aux NP  S → book | include | prefer  S → Verb NP  S → VP PP  NP → I | he | she | me  NP → Houston | NWA  NP → Det Nominal  Nominal → book | flight | meal | money  Nominal → Nominal Noun  Nominal → Nominal PP  VP → book | include | prefer  VP → Verb NP  VP → VP PP  PP → Prep NP

Grammar questions  Can we determine if a sentence is grammatical?  Given a sentence, can we determine the syntactic structure?  Can we determine how likely a sentence is to be grammatical? to be an English sentence?  Can we generate candidate, grammatical sentences?

Parsing  Parsing is the field of NLP interested in automatically determining the syntactic structure of a sentence  parsing can also be thought of as determining what sentences are “valid” English sentences

Parsing  We have a grammar, determine the possible parse tree(s)  Let’s start with parsing with a CFG (no probabilities) S -> NP VP NP -> PRP NP -> N PP VP -> V NP VP -> V NP PP PP -> IN N PRP -> I V -> eat N -> sushi N -> tuna IN -> with I eat sushi with tuna approaches? algorithms?

Parsing  Top-down parsing  ends up doing a lot of repeated work  doesn’t take into account the words in the sentence until the end!  Bottom-up parsing  constrain based on the words  avoids repeated work (dynamic programming)  CKY parser

Parsing  Top-down parsing  start at the top (usually S) and apply rules  matching left-hand sides and replacing with right-hand sides  Bottom-up parsing  start at the bottom (i.e. words) and build the parse tree up from there  matching right-hand sides and replacing with left-hand sides

Parsing Example  [parse tree for “book that flight”: S → VP; VP → Verb NP; Verb → book; NP → Det Nominal; Det → that; Nominal → Noun; Noun → flight]

Top Down Parsing  [animation over “book that flight”, one expansion per slide:  try S → NP VP with NP → Pronoun — X, “book” is not a pronoun;  NP → ProperNoun — X;  NP → Det Nominal — X, “book” is not a determiner;  try S → Aux NP VP — X, “book” is not an auxiliary;  try S → VP with VP → Verb: Verb → book matches, but X at “that”, the input is not covered;  VP → Verb NP: Verb → book, then NP → Pronoun — X at “that”; NP → ProperNoun — X;  NP → Det Nominal: Det → that, Nominal → Noun, Noun → flight — success]

Bottom Up Parsing  [animation over “book that flight”, building structure up from the words:  “book” as Noun → Nominal; try Nominal → Nominal Noun — X; try Nominal → Nominal PP — X;  “that” as Det and “flight” as Noun → Nominal give NP → Det Nominal;  an S over the Noun reading of “book” — X; Nominal → Nominal PP again — X;  re-try “book” as Verb: VP → Verb, then S → VP — X, it doesn’t cover the whole input; VP → VP PP — X, “that flight” is not a PP;  finally VP → Verb NP covers “book that flight”, and S → VP succeeds]

Parsing  Pros/Cons?  Top-down: Only examines parses that could be valid parses (i.e. with an S on top) Doesn’t take into account the actual words!  Bottom-up: Only examines structures that have the actual words as the leaves Examines sub-parses that may not result in a valid parse!

Why is parsing hard?  Actual grammars are large  Lots of ambiguity!  Most sentences have many parses  Some sentences have a lot of parses  Even for sentences that are not ambiguous, there is often ambiguity for subtrees (i.e. multiple ways to parse a phrase)

Why is parsing hard? I saw the man on the hill with the telescope What are some interpretations?

Structural Ambiguity Can Give Exponential Parses  [diagram: me, see, a man, the telescope, the hill combine in multiple ways]  “I was on the hill that has a telescope when I saw a man.”  “I saw a man who was on the hill that has a telescope on it.”  “I was on the hill when I used the telescope to see a man.”  “I saw a man who was on a hill and who had a telescope.”  “Using a telescope, I saw a man who was on a hill.”  …  I saw the man on the hill with the telescope
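The blow-up is quantifiable: the number of distinct binary-branching bracketings of n+1 words is the nth Catalan number. This formula is a standard observation, not from the slide; a few lines make the growth vivid:

```python
from math import comb

def catalan(n):
    """Number of distinct binary bracketings of a sequence of n+1 items."""
    return comb(2 * n, n) // (n + 1)

for n in (2, 5, 10, 20):
    print(n + 1, "words:", catalan(n), "bracketings")
# 3 words: 2   6 words: 42   11 words: 16796   21 words: 6564120420
```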

Dynamic Programming Parsing  To avoid extensive repeated work you must cache intermediate results, specifically found constituents  Caching (memoizing) is critical to obtaining a polynomial time parsing (recognition) algorithm for CFGs  Dynamic programming algorithms based on both top-down and bottom-up search can achieve O(n 3 ) recognition time where n is the length of the input string.

Dynamic Programming Parsing Methods  CKY (Cocke-Kasami-Younger) algorithm: based on bottom-up parsing; requires first normalizing the grammar  Earley parser: based on top-down parsing; does not require normalizing the grammar, but is more complex  Both fall under the general category of chart parsers, which retain completed constituents in a chart

CKY parser: the chart  [each of the following slides shows the 5×5 upper-triangular chart for “Film the man with trust”; Cell[i,j] contains all constituents covering words i through j]  what does this cell represent?

CKY parser: the chart  all constituents spanning 1-3, or “the man with”

CKY parser: the chart  how could we figure this out?

CKY parser: the chart  Key: rules are binary and only have two constituents on the right hand side  VP -> VB NP  NP -> DT NN

CKY parser: the chart  See if we can make a new constituent combining any for “the” with any for “man with”

CKY parser: the chart  See if we can make a new constituent combining any for “the man” with any for “with”

CKY parser: the chart  ?

CKY parser: the chart  See if we can make a new constituent combining any for “Film” with any for “the man with trust”

CKY parser: the chart  See if we can make a new constituent combining any for “Film the” with any for “man with trust”

CKY parser: the chart  See if we can make a new constituent combining any for “Film the man” with any for “with trust”

CKY parser: the chart  See if we can make a new constituent combining any for “Film the man with” with any for “trust”

CKY parser: the chart  What if our rules weren’t binary?

CKY parser: the chart  See if we can make a new constituent combining any for “Film” with any for “the man” with any for “with trust”

CKY parser: the chart  What order should we fill the entries in the chart?

CKY parser: the chart  What order should we traverse the entries in the chart?

CKY parser: the chart  From bottom to top, left to right

CKY parser: the chart  Top-left along the diagonals moving to the right

CKY parser: unary rules  Often, we will leave unary rules rather than converting to CNF  Do these complicate the algorithm?  Must check whenever we add a constituent to see if any unary rules apply  S -> VP  VP -> VB NP  VP -> VP2 PP  VP2 -> VB NP  NP -> DT NN  NP -> NN  NP -> NP PP  PP -> IN NP  DT -> the  IN -> with  VB -> film  VB -> trust  NN -> man  NN -> film  NN -> trust
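Putting the chart, the fill order, and the unary check together: below is a compact CKY recognizer for this grammar. It is a sketch with our own encoding (rule tables keyed by right-hand sides, not the course's reference code); the unary closure runs whenever a cell gains constituents, as the slide prescribes.

```python
from collections import defaultdict

# The grammar above, keyed for CKY: binary rules by their RHS pair,
# unary rules by their single child symbol.
binary = {("VB", "NP"): {"VP", "VP2"}, ("VP2", "PP"): {"VP"},
          ("DT", "NN"): {"NP"}, ("NP", "PP"): {"NP"}, ("IN", "NP"): {"PP"}}
unary = {"VP": {"S"}, "NN": {"NP"}}
lexicon = {"the": {"DT"}, "with": {"IN"}, "man": {"NN"},
           "film": {"VB", "NN"}, "trust": {"VB", "NN"}}

def close(cell):
    """Apply unary rules until the cell stops growing."""
    grew = True
    while grew:
        grew = False
        for sym in list(cell):
            for parent in unary.get(sym, ()):
                if parent not in cell:
                    cell.add(parent)
                    grew = True
    return cell

def cky(words):
    n = len(words)
    chart = defaultdict(set)        # chart[i, j]: constituents over words i..j-1
    for i, w in enumerate(words):   # length-1 spans come from the lexicon
        chart[i, i + 1] = close(set(lexicon[w]))
    for length in range(2, n + 1):  # bottom to top: short spans before long ones
        for i in range(n - length + 1):
            j = i + length
            for k in range(i + 1, j):               # every split point
                for b in chart[i, k]:
                    for c in chart[k, j]:
                        chart[i, j] |= binary.get((b, c), set())
            close(chart[i, j])
    return "S" in chart[0, n]

print(cky("film the man with trust".split()))   # True
```

With the sets replaced by best-probability tables, the same loop structure gives a probabilistic CKY parser.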

CKY parser: the chart  [the empty chart for “Film the man with trust”, to be filled using this grammar:]  S -> VP  VP -> VB NP  VP -> VP2 PP  VP2 -> VB NP  NP -> DT NN  NP -> NN  NP -> NP PP  PP -> IN NP  DT -> the  IN -> with  VB -> film  VB -> man  VB -> trust  NN -> man  NN -> film  NN -> trust