Probabilistic Parsing
Reading: Chapter 14, Jurafsky & Martin
This slide set was adapted from J. Martin, U. Colorado. Instructor: Paul Tarau, based on Rada Mihalcea's original slides.

Probabilistic CFGs
The probabilistic model
– Assigning probabilities to parse trees
Getting the probabilities for the model
Parsing with probabilities
– A slight modification to the dynamic programming approach
– The task is to find the maximum-probability tree for an input

Probability Model
Attach probabilities to grammar rules
The probabilities of the expansions for a given non-terminal sum to 1:
VP -> Verb        .55
VP -> Verb NP     .40
VP -> Verb NP NP  .05
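
As a minimal sketch (Python, using the toy probabilities above), a PCFG can be stored as weighted rules and checked for normalization:

```python
from collections import defaultdict

# A toy PCFG fragment using the probabilities from this slide.
# Each rule is (lhs, rhs, probability).
pcfg = [
    ("VP", ("Verb",),            0.55),
    ("VP", ("Verb", "NP"),       0.40),
    ("VP", ("Verb", "NP", "NP"), 0.05),
]

# Sanity check: the expansions of each non-terminal must sum to 1.
totals = defaultdict(float)
for lhs, rhs, p in pcfg:
    totals[lhs] += p
for lhs, total in totals.items():
    assert abs(total - 1.0) < 1e-9, f"{lhs} expansions sum to {total}"
print("grammar is properly normalized")
```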

Probability Model (cont'd)
A derivation (tree) consists of the set of grammar rules that are in the tree
The probability of a derivation (tree) is just the product of the probabilities of the rules in the derivation
The probability of a word sequence (sentence):
– In the unambiguous case, it's the probability of its single tree
– In the ambiguous case, it's the sum of the probabilities of its trees
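
A small illustration of the product rule (the grammar probabilities and the derivation are hypothetical):

```python
import math

# Hypothetical rule probabilities, indexed by (lhs, rhs).
rule_prob = {
    ("S",  ("NP", "VP")):   1.00,
    ("NP", ("Pronoun",)):   0.30,
    ("VP", ("Verb", "NP")): 0.40,
}

def tree_probability(rules_used):
    """Probability of a derivation: the product of the probabilities of
    the rules used in it (summed in log space to avoid underflow)."""
    return math.exp(sum(math.log(rule_prob[r]) for r in rules_used))

# Rules read off one (hypothetical) tree; a rule may appear more than once.
derivation = [("S", ("NP", "VP")), ("NP", ("Pronoun",)),
              ("VP", ("Verb", "NP")), ("NP", ("Pronoun",))]
print(round(tree_probability(derivation), 6))  # 1.0 * 0.3 * 0.4 * 0.3 = 0.036
# For an ambiguous sentence, sum tree_probability over all of its parses.
```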

Getting the Probabilities
From an annotated database (a treebank)
– Treebanks are annotated manually
– English: Penn Treebank (about 1.6 million words)
– Chinese: Chinese Treebank (about 500K words)
Or learned from a raw corpus

Learning from a Treebank
The easiest and most accurate way to learn probabilities:
– Get a large collection of parsed sentences
– Collect counts for each non-terminal rule expansion in the collection
– Normalize
– Done (see the sketch below)
What if you don't have a treebank (and can't get one)?
– Take a large collection of text and parse it using a grammar
– For syntactically ambiguous sentences, collect all the possible parses
– Prorate the rule statistics gathered for rules in the ambiguous case by their probability
– Proceed as you would with a treebank
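
A sketch of the treebank case, assuming the rule occurrences have already been read off the parse trees (input format and counts are made up for illustration):

```python
from collections import Counter

def estimate_pcfg(rule_occurrences):
    """Relative-frequency estimation: P(lhs -> rhs) =
    count(lhs -> rhs) / count(lhs).
    rule_occurrences: one (lhs, rhs) pair per rule use in the treebank."""
    rule_counts = Counter(rule_occurrences)
    lhs_counts = Counter(lhs for lhs, _ in rule_occurrences)
    return {(lhs, rhs): c / lhs_counts[lhs]
            for (lhs, rhs), c in rule_counts.items()}

# Hypothetical rule occurrences collected from parsed sentences:
occurrences = ([("VP", ("Verb",))] * 55
               + [("VP", ("Verb", "NP"))] * 40
               + [("VP", ("Verb", "NP", "NP"))] * 5)
probs = estimate_pcfg(occurrences)
print(probs[("VP", ("Verb",))])  # 0.55
```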

Assumptions
We're assuming that there is a grammar to parse with
We're assuming the existence of a large, robust dictionary with parts of speech
We're assuming the ability to parse (i.e., a parser)
Given all that… we can parse probabilistically
Alternatively: we can build the dictionary with parts of speech and the grammar from the treebank corpus

Typical Approach
Bottom-up dynamic programming approach
Assign probabilities to constituents as they are completed and placed in the table
Use the max probability for each constituent going up

What does that last bullet mean?
Say we're talking about the final part of a parse: S -> NP VP, where the NP spans positions 0 to i and the VP spans positions i to j
The probability of the S is P(S -> NP VP) * P(NP) * P(VP)
The last two factors, P(NP) and P(VP), are already known, since we are parsing bottom-up

Max
P(NP) is known. What if there are multiple NPs for the span of text in question (0 to i)?
– Take the max
– This does not mean that other kinds of constituents for the same span are ignored (they might still end up in the solution)
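
Putting the last three slides together, a minimal probabilistic CKY sketch for a grammar in Chomsky Normal Form (grammar, probabilities, and sentence are assumptions for illustration):

```python
from collections import defaultdict

# Illustrative CNF grammar; probabilities are made up for the sketch.
lexical = {("NP", "she"): 0.4, ("NP", "fish"): 0.6, ("Verb", "eats"): 1.0}
binary = {("S", "NP", "VP"): 1.0, ("VP", "Verb", "NP"): 1.0}

def cky(words):
    n = len(words)
    # table[(i, j)] maps a non-terminal to the best probability of any
    # subtree covering words[i:j].
    table = defaultdict(dict)
    for i, w in enumerate(words):                      # fill the diagonal
        for (a, word), p in lexical.items():
            if word == w:
                table[(i, i + 1)][a] = p
    for span in range(2, n + 1):                       # then longer spans
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):                  # split point
                for (a, b, c), p in binary.items():
                    if b in table[(i, k)] and c in table[(k, j)]:
                        prob = p * table[(i, k)][b] * table[(k, j)][c]
                        # keep only the max-probability analysis per constituent
                        if prob > table[(i, j)].get(a, 0.0):
                            table[(i, j)][a] = prob
    return table[(0, n)].get("S", 0.0)

print(round(cky("she eats fish".split()), 6))  # 0.24
```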

Problems
The probability model we're using is just based on the rules in the derivation…
– Doesn't use the words in any real way
– Doesn't take into account where in the derivation a rule is used

Solution
Add lexical dependencies to the scheme…
– Infiltrate the influence of particular words into the probabilities in the derivation
– I.e., condition on actual words
– All the words? No, only the right ones.

Heads
To do that we're going to make use of the notion of the head of a phrase
– The head of an NP is its noun
– The head of a VP is its verb
– The head of a PP is its preposition
A way of achieving some generalization
– Replace phrases with their heads
Finding basic phrases (NP, VP, PP) and their heads:
– Shallow parsing + heuristics
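
A toy head-finding sketch along these lines (the per-category target tags and the rightmost-word fallback are simplifying assumptions, not a standard head-percolation table):

```python
# Assumed head targets per phrase type (simplified).
HEAD_POS = {"NP": "Noun", "VP": "Verb", "PP": "Preposition"}

def find_head(label, children):
    """children: list of (pos_tag, word) pairs for a flat phrase;
    return the head word, falling back to the rightmost word."""
    target = HEAD_POS.get(label)
    for pos, word in children:
        if pos == target:
            return word
    return children[-1][1]

print(find_head("NP", [("Det", "the"), ("Noun", "sacks")]))              # sacks
print(find_head("PP", [("Preposition", "in"), ("Det", "a"), ("Noun", "bin")]))  # in
```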

Example (right)

Example (wrong)

How?
We used to have: VP -> V NP PP with P(r|VP)
– That's the count of this rule divided by the number of VPs in a treebank
Now we have: VP(dumped) -> V(dumped) NP(sacks) PP(in)
– P(r | VP ^ dumped is the verb ^ sacks is the head of the NP ^ in is the head of the PP)
– Not likely to have significant counts in any treebank

Declare Independence
When stuck, exploit independence and collect the statistics you can…
We'll focus on capturing two things:
– Verb subcategorization: particular verbs have affinities for particular VPs
– Objects' affinities for their predicates: some objects fit better with some predicates than others

Subcategorization
Condition particular VP rules on their head… so for r: VP -> V NP PP
P(r|VP) becomes P(r | VP ^ dumped)
What's the count? The number of times this rule was used with dump, divided by the total number of VPs that dump appears in
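
A sketch of that estimate, assuming we can enumerate (expansion, head-verb) pairs for every VP in a treebank (the input format and counts are hypothetical):

```python
from collections import Counter

def estimate_subcat(vp_occurrences):
    """vp_occurrences: one (rhs, head_verb) pair per VP in a treebank.
    Returns P(VP -> rhs | head_verb) = count(rhs, head) / count(head)."""
    rule_head = Counter(vp_occurrences)
    head_counts = Counter(h for _, h in vp_occurrences)
    return {(rhs, h): c / head_counts[h] for (rhs, h), c in rule_head.items()}

# Hypothetical counts: "dumped" heads 10 VPs, 6 of them V NP PP.
data = ([(("V", "NP", "PP"), "dumped")] * 6
        + [(("V", "NP"), "dumped")] * 4)
probs = estimate_subcat(data)
print(probs[(("V", "NP", "PP"), "dumped")])  # 0.6
```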

Preferences
Consider the VPs:
– Ate spaghetti with gusto
– Ate spaghetti with marinara
The affinity of gusto for ate is much larger than its affinity for spaghetti
On the other hand, the affinity of marinara for spaghetti is much higher than its affinity for ate
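
A toy illustration of using such affinities to decide PP attachment (the affinity scores are invented for the example):

```python
# Hypothetical bilexical affinity scores between a head and a PP noun.
affinity = {
    ("ate", "gusto"): 0.9, ("spaghetti", "gusto"): 0.1,
    ("ate", "marinara"): 0.2, ("spaghetti", "marinara"): 0.8,
}

def attach_pp(verb, obj_noun, pp_noun):
    """Attach the PP to whichever head it has the higher affinity for."""
    if affinity[(verb, pp_noun)] >= affinity[(obj_noun, pp_noun)]:
        return "VP attachment"  # the PP modifies the verb
    return "NP attachment"      # the PP modifies the object noun

print(attach_pp("ate", "spaghetti", "gusto"))     # VP attachment
print(attach_pp("ate", "spaghetti", "marinara"))  # NP attachment
```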

Dependency grammars
Model binary dependencies between words
For instance:
– I eat
– eat apples
Find the set of dependencies that best fits the input words (i.e., with no contradictions)
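
A minimal sketch of representing a dependency analysis as head-dependent arcs (the format is an assumption for illustration):

```python
# A dependency analysis as a set of (head, dependent) arcs
# for "I eat apples".
deps = {("eat", "I"), ("eat", "apples")}

# A consistent analysis gives every word except the root exactly one head:
words = ["I", "eat", "apples"]
dependents = [d for _, d in deps]
assert all(dependents.count(w) == 1 for w in words if w != "eat")
print(deps)
```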

Evaluating parsers
Precision
– # of correct dependencies from the parse / total # of dependencies in the parse
Recall
– # of correct dependencies from the parse / total # of dependencies in the treebank (gold standard)
Cross-brackets
– # of crossing brackets between the parse and the treebank
– E.g., (A (B C)) and ((A B) C) have one crossing bracket
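
A sketch of computing precision and recall from sets of parser-produced vs. gold-standard items (the tuples below are hypothetical):

```python
def precision_recall(predicted, gold):
    """predicted, gold: sets of dependencies (or labeled constituents),
    e.g. (label, start, end) tuples. Returns (precision, recall)."""
    correct = len(predicted & gold)
    return correct / len(predicted), correct / len(gold)

# Hypothetical parser output vs. treebank gold standard:
pred = {("NP", 0, 2), ("VP", 2, 5), ("PP", 3, 5), ("S", 0, 5)}
gold = {("NP", 0, 2), ("VP", 2, 5), ("NP", 3, 5), ("S", 0, 5)}
p, r = precision_recall(pred, gold)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.75
```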