Probabilistic and Lexicalized Parsing

Probabilistic and Lexicalized Parsing (COMS 4705, Fall 2004)

Probabilistic CFGs
- The probabilistic model: assigning probabilities to parse trees (to disambiguate, as an LM for ASR, for faster parsing)
- Getting the probabilities for the model
- Parsing with probabilities: a slight modification to the dynamic programming approach; the task is to find the max-probability tree for an input string

Probability Model
- Attach probabilities to grammar rules; the expansions for a given non-terminal sum to 1:
    VP -> V         .55
    VP -> V NP      .40
    VP -> V NP NP   .05
- Read this as P(specific rule | LHS): "What's the probability that VP will expand to V, given that we have a VP?"
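
As an illustration of this constraint, here is a minimal sketch (hypothetical Python, not from the slides) that stores the VP expansions above with their probabilities and checks that they sum to 1:

    # Toy PCFG fragment: expansions grouped by left-hand side, with P(rule | LHS).
    # The numbers are the VP expansions from the slide above.
    pcfg = {
        "VP": [
            (("V",),            0.55),
            (("V", "NP"),       0.40),
            (("V", "NP", "NP"), 0.05),
        ],
    }

    # Because each probability is conditioned on the LHS, the expansions of
    # every non-terminal must sum to 1.
    for lhs, expansions in pcfg.items():
        total = sum(p for _, p in expansions)
        assert abs(total - 1.0) < 1e-9, f"{lhs} expansions sum to {total}"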

Probability of a Derivation
- A derivation (tree) consists of the set of grammar rules that appear in the parse tree
- The probability of a tree is just the product of the probabilities of the rules in the derivation
- Note the independence assumption: why don't we use conditional probabilities?
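
A minimal sketch of that product, assuming a tree encoded as nested (label, children...) tuples and a rule_probs table mapping (LHS, RHS) pairs to probabilities (both representations are illustrative, not from the slides):

    from math import prod

    def tree_prob(tree, rule_probs):
        """P(tree) = product of the probabilities of the rules it uses.
        Each rule contributes the same probability wherever it appears."""
        label, *children = tree
        if len(children) == 1 and isinstance(children[0], str):
            # Lexical rule, e.g. V -> called
            return rule_probs[(label, (children[0],))]
        rhs = tuple(child[0] for child in children)
        return rule_probs[(label, rhs)] * prod(tree_prob(c, rule_probs) for c in children)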

Probability of a Sentence
- In the unambiguous case, the probability of a word sequence (sentence) is the probability of its single tree
- In the ambiguous case, it is the sum of the probabilities of all its possible trees
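
Continuing the same sketch, the ambiguous case is just a sum over the candidate trees:

    def sentence_prob(parses, rule_probs):
        """Sum of tree probabilities; with one parse this reduces to that tree's probability."""
        return sum(tree_prob(t, rule_probs) for t in parses)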

Getting the Probabilities
- From an annotated database, e.g. the Penn Treebank
- To get the probability for a particular VP rule, count all the times the rule is used and divide by the number of VPs overall
- What if you have no treebank (e.g. for a 'new' language)?
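
A sketch of that relative-frequency estimate, assuming the rules have already been read off a treebank as an iterable of (LHS, RHS) pairs (the extraction step itself is not shown):

    from collections import Counter

    def estimate_rule_probs(treebank_rules):
        """P(LHS -> RHS) = count(LHS -> RHS) / count(LHS) over the treebank."""
        rule_counts = Counter(treebank_rules)
        lhs_counts = Counter()
        for (lhs, _), c in rule_counts.items():
            lhs_counts[lhs] += c
        return {(lhs, rhs): c / lhs_counts[lhs]
                for (lhs, rhs), c in rule_counts.items()}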

Assumptions
- We have a grammar to parse with
- We have a large, robust dictionary with parts of speech
- We have a parser
- Given all that, we can parse probabilistically

Typical Approach
- Bottom-up (CYK) dynamic programming
- Assign probabilities to constituents as they are completed and placed in the table
- Use the max probability for each constituent going up the tree

How do we fill in the DP table?
- Say we're completing a final piece of a parse: finding an S via the rule S -> NP VP, with the NP spanning positions 0 to i and the VP spanning i to j
- The probability of that S is P(S -> NP VP) * P(NP) * P(VP)
- The P(NP) and P(VP) factors are already known, since we're parsing bottom-up; we don't need to recalculate the probabilities of constituents lower in the tree

Using the Maxima
- P(NP) is known
- But what if there are multiple NP analyses for the span of text in question (0 to i)? Take the max (why?)
- This does not mean that other kinds of constituents for the same span are ignored (they might still be in the solution)

CYK Parsing: "John called Mary from Denver"
    S  -> NP VP
    VP -> V NP
    VP -> VP PP
    NP -> NP PP
    PP -> P NP
    NP -> John | Mary | Denver
    V  -> called
    P  -> from
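
Putting this grammar and the DP table together, here is a minimal probabilistic CKY sketch over the rules above; the rule probabilities are invented for illustration, since the slide gives none:

    # Grammar from the slide, in CNF, with made-up probabilities that sum to 1
    # per left-hand side.
    lexical = {                       # A -> w
        "John": [("NP", 0.3)], "Mary": [("NP", 0.3)], "Denver": [("NP", 0.2)],
        "called": [("V", 1.0)], "from": [("P", 1.0)],
    }
    binary = [                        # A -> B C, with P(rule | A)
        ("S",  "NP", "VP", 1.0),
        ("VP", "V",  "NP", 0.7),
        ("VP", "VP", "PP", 0.3),
        ("NP", "NP", "PP", 0.2),      # the remaining NP mass is lexical
        ("PP", "P",  "NP", 1.0),
    ]

    def pcky(words):
        n = len(words)
        # table[i][j] maps a non-terminal to its best probability over words[i:j]
        table = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
        for i, w in enumerate(words):                    # base case: A -> w
            for A, p in lexical[w]:
                if p > table[i][i + 1].get(A, 0.0):
                    table[i][i + 1][A] = p
        for span in range(2, n + 1):                     # recursive case: A -> B C
            for i in range(n - span + 1):
                j = i + span
                for k in range(i + 1, j):                # split point
                    for A, B, C, p in binary:
                        cand = p * table[i][k].get(B, 0.0) * table[k][j].get(C, 0.0)
                        if cand > table[i][j].get(A, 0.0):
                            table[i][j][A] = cand
        return table[0][n]

    print(pcky("John called Mary from Denver".split()))  # best probability for S

Keeping only the best probability per non-terminal and span is exactly the "use the max" step from the slides above.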

Example: parse 1 of "John called Mary from Denver", with the PP attached to the VP
    [S [NP John] [VP [VP [V called] [NP Mary]] [PP [P from] [NP Denver]]]]

Example: parse 2, with the PP attached to the NP
    [S [NP John] [VP [V called] [NP [NP Mary] [PP [P from] [NP Denver]]]]]

Example: filling the CYK chart for "John called Mary from Denver"

Base Case: A -> w
- The chart diagonal is filled with a preterminal for each word: NP (John), V (called), NP (Mary), P (from), NP (Denver); cells with no constituent are marked X

Recursive Cases: A -> B C
- Larger spans are built from pairs of adjacent smaller ones: VP (called Mary), PP (from Denver), S (John called Mary), NP (Mary from Denver)
- The span "called Mary from Denver" gets two analyses, VP1 (VP -> VP PP over [called Mary][from Denver]) and VP2 (VP -> V NP over [called][Mary from Denver]); the cell keeps the higher-probability one
- Finally, S is built over the whole sentence from NP (John) and the best VP

Problems with PCFGs
- The probability model we're using is based only on the rules in the derivation
- It doesn't use the words in any real way, e.g. PP attachment often depends on the verb, its object, and the preposition ("I ate pickles with a fork." vs. "I ate pickles with relish.")
- It doesn't take into account where in the derivation a rule is used, e.g. pronouns are more often subjects than objects ("She hates Mary." vs. "Mary hates her.")

Solution
- Add lexical dependencies to the scheme
- Fold the predilections of particular words into the probabilities in the derivation
- I.e., condition the rule probabilities on the actual words

Heads
- Make use of the notion of the head of a phrase, e.g.:
    the head of an NP is its noun
    the head of a VP is its verb
    the head of a PP is its preposition
- Phrasal heads can 'take the place of' whole phrases, in some sense
- They define the most important characteristics of the phrase
- Phrases are generally identified by their heads
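
As a rough sketch of how a parser picks out these heads (the table below is a toy version; real head-finding tables, e.g. Collins-style rules, are much more detailed):

    # Preferred head children per phrase type (simplified).
    HEAD_CHILDREN = {"NP": ("NN", "NP"), "VP": ("V", "VP"), "PP": ("P",)}

    def head_word(tree):
        """Head word of a (label, children...) tree; leaves are plain strings."""
        label, *children = tree
        if len(children) == 1 and isinstance(children[0], str):
            return children[0]                      # preterminal: the word itself
        for wanted in HEAD_CHILDREN.get(label, ()):
            for child in children:
                if child[0] == wanted:
                    return head_word(child)
        return head_word(children[0])               # fallback: leftmost child

    # e.g. head_word(("VP", ("V", "called"), ("NP", "Mary"))) == "called"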

Example (correct parse)
(figure: attribute-grammar-style parse tree with head-word annotations)

Example (wrong)

How?
- We started with rule probabilities:
    VP -> V NP PP    P(rule | VP)
    e.g., the count of this rule divided by the number of VPs in a treebank
- Now we want lexicalized probabilities:
    VP(dumped) -> V(dumped) NP(sacks) PP(in)
    P(r | VP ^ dumped is the verb ^ sacks is the head of the NP ^ in is the head of the PP)
- Not likely to have significant counts in any treebank
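
A small sketch of why those counts are sparse: the event being counted is the whole rule plus every head word, so each count key is a very specific tuple (the representation below is illustrative):

    from collections import Counter

    # Count fully lexicalized rule events: (rule, verb head, NP head, PP head).
    lexicalized_counts = Counter()
    lexicalized_counts[("VP -> V NP PP", "dumped", "sacks", "in")] += 1

    # Estimating P(r | VP, dumped, sacks, in) would divide this count by the
    # number of VPs headed by "dumped" with these daughter heads; tuples this
    # specific almost never recur, even in a large treebank.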

Declare Independence
- So, exploit independence assumptions and collect the statistics you can
- Focus on capturing two things:
    1. Verb subcategorization: particular verbs have affinities for particular VPs
    2. Objects have affinities for their predicates (mostly their mothers and grandmothers); some objects fit better with some predicates than others

Verb Subcategorization
- Condition particular VP rules on their head, so for r: VP -> V NP PP
    P(r | VP) becomes P(r | VP ^ dumped)
- What's the count? The number of times this rule was used with dumped, divided by the total number of VPs that dumped appears in
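
A hedged sketch of that count, assuming the treebank has been reduced to a list of (rule, head_verb) pairs, one per VP (the extraction step is not shown):

    def subcat_prob(rule, verb, vp_events):
        """P(rule | VP, verb): times the rule is used with this head verb,
        divided by the total number of VPs headed by this verb."""
        with_rule = sum(1 for r, v in vp_events if r == rule and v == verb)
        total = sum(1 for _, v in vp_events if v == verb)
        return with_rule / total if total else 0.0

    # e.g. subcat_prob("VP -> V NP PP", "dumped", vp_events)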

Preferences
- Subcategorization captures the affinity between VP heads (verbs) and the VP rules they go with
- What about the affinity between VP heads and the heads of the other daughters of the VP?

Example (correct parse)

Example (wrong)

Preferences
- The issue here is the attachment of the PP, so the affinities we care about are the ones between dumped and into vs. sacks and into
- Count the times dumped is the head of a constituent that has a PP daughter with into as its head, and normalize; alternatively, P(into | PP, dumped is mother's head)
- Versus the situation where sacks heads a constituent with into as the head of a PP daughter; or, P(into | PP, sacks is mother's head)
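
The same kind of count supports the attachment decision; a sketch assuming pp_events lists (preposition, mother_head) pairs for every PP in a treebank (names are illustrative):

    def attachment_prob(prep, mother_head, pp_events):
        """P(prep | PP, mother_head): how often a PP headed by prep is a
        daughter of a constituent headed by mother_head."""
        with_prep = sum(1 for p, m in pp_events if p == prep and m == mother_head)
        total = sum(1 for _, m in pp_events if m == mother_head)
        return with_prep / total if total else 0.0

    # Attach the PP where the estimate is higher, e.g.
    # attachment_prob("into", "dumped", pp_events) vs. attachment_prob("into", "sacks", pp_events)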

Another Example
- Consider the VPs:
    ate spaghetti with gusto
    ate spaghetti with marinara
- The affinity of gusto for eat is much larger than its affinity for spaghetti
- On the other hand, the affinity of marinara for spaghetti is much higher than its affinity for ate

But Not a Head Probability Relationship
- Note the relationship here is more distant and doesn't involve a head word, since gusto and marinara aren't the heads of the PPs (Hindle & Rooth '91)
    [VP(ate) [V ate] [NP(spaghetti) [NP spaghetti] [PP(with) with marinara]]]
    [VP(ate) [VP(ate) [V ate] [NP spaghetti]] [PP(with) with gusto]]

Next Time
- Midterm: covers everything assigned and all lectures up through today
- Format: short answers (2-3 sentences), exercises, longer answers (1 paragraph)
- Closed book; calculators allowed but no laptops