NLP

Introduction to NLP

Time flies like an arrow
– Many parses
– Some (clearly) more likely than others
– Need for a probabilistic ranking method

Just like a (deterministic) CFG, a PCFG is a 4-tuple (N, Σ, R, S)
– N: non-terminal symbols
– Σ: terminal symbols (disjoint from N)
– R: rules of the form A → β [p], where β is a string from (Σ ∪ N)* and p is the probability P(β | A)
– S: start symbol (from N)
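The 4-tuple can be written down directly in code. The sketch below is a hypothetical Python encoding of the grammar used in this lecture, with rule probabilities taken from a later slide; only the non-lexical rules are shown.

```python
# A PCFG as a 4-tuple (N, Sigma, R, S) -- a hypothetical Python encoding.
N = {"S", "NP", "VP", "PP", "DT", "N", "PRP", "V"}         # non-terminals
Sigma = {"a", "the", "child", "cake", "fork",
         "with", "to", "saw", "ate"}                       # terminals
start = "S"                                                # start symbol

# R: rules A -> beta [p], where beta is a string over (Sigma ∪ N)*
# and p = P(beta | A).
R = [
    ("S",  ("NP", "VP"),  1.0),
    ("NP", ("DT", "N"),   0.8),
    ("NP", ("NP", "PP"),  0.2),
    ("PP", ("PRP", "NP"), 1.0),
    ("VP", ("V", "NP"),   0.7),
    ("VP", ("VP", "PP"),  0.3),
]

assert N.isdisjoint(Sigma)  # terminals must be disjoint from non-terminals
print(len(R))  # 6
```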

S -> NP VP
NP -> DT N | NP PP
PP -> PRP NP
VP -> V NP | VP PP
DT -> 'a' | 'the'
N -> 'child' | 'cake' | 'fork'
PRP -> 'with' | 'to'
V -> 'saw' | 'ate'

S -> NP VP
NP -> DT N
NP -> NP PP
PP -> PRP NP
VP -> V NP
VP -> VP PP
DT -> 'a'
DT -> 'the'
N -> 'child'
N -> 'cake'
N -> 'fork'
PRP -> 'with'
PRP -> 'to'
V -> 'saw'
V -> 'ate'

S -> NP VP [p0=1]
NP -> DT N [p1]
NP -> NP PP [p2]
PP -> PRP NP [p3=1]
VP -> V NP [p4]
VP -> VP PP [p5]
DT -> 'a' [p6]
DT -> 'the' [p7]
N -> 'child' [p8]
N -> 'cake' [p9]
N -> 'fork' [p10]
PRP -> 'with' [p11]
PRP -> 'to' [p12]
V -> 'saw' [p13]
V -> 'ate' [p14]

The probability of a parse tree t, given the n productions r_1 … r_n used to build it:
– p(t) = ∏_{i=1..n} p(r_i)
The most likely parse is determined as follows:
– t* = argmax_{t ∈ T(s)} p(t)
The probability of a sentence is the sum of the probabilities of all of its parses:
– p(s) = Σ_{t ∈ T(s)} p(t)
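These three quantities can be sketched in a few lines of Python, assuming (as a simplification for illustration) that each parse is represented just by the list of probabilities of the productions used in it:

```python
import math

def tree_prob(rule_probs):
    """p(t): product of the probabilities of the n productions in t."""
    return math.prod(rule_probs)

def best_parse(parses):
    """Most likely parse: argmax of p(t) over all t in T(s)."""
    return max(parses, key=tree_prob)

def sentence_prob(parses):
    """p(s): sum of p(t) over all parses t of s."""
    return sum(tree_prob(t) for t in parses)

# Two hypothetical parses, each given as its list of rule probabilities:
t1 = [1.0, 0.8, 0.3]
t2 = [1.0, 0.2, 0.7]
print(tree_prob(t1))            # ≈ 0.24
print(sentence_prob([t1, t2]))  # ≈ 0.38
print(best_parse([t1, t2]))     # t1
```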

[The following slides repeat the grammar above together with two alternative parse trees, t1 and t2, built from it; the tree diagrams do not survive in the transcript.]

Given a grammar G and a sentence s, let T(s) be all parse trees that correspond to s
– Task 1: find which tree t among T(s) maximizes the probability p(t)
– Task 2: find the probability of the sentence p(s) as the sum of all possible tree probabilities p(t)

Probabilistic Earley algorithm
– Top-down parser with a dynamic programming table
Probabilistic Cocke-Kasami-Younger (CKY) algorithm
– Bottom-up parser with a dynamic programming table

Probabilities can be learned from a training corpus (treebank)
Intuitive meaning
– Parse #1 is twice as probable as parse #2
Possible to do reranking
Possible to combine with other stages
– E.g., speech recognition, translation

Use the parsed training set for getting the counts
– P_ML(α → β) = Count(α → β) / Count(α)
Example:
– P_ML(S → NP VP) = Count(S → NP VP) / Count(S)
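A sketch of this maximum-likelihood estimate, assuming hypothetical production counts extracted from a parsed corpus:

```python
from collections import Counter

# Hypothetical treebank counts of productions: (lhs, rhs) -> count.
production_counts = Counter({
    ("S",  ("NP", "VP")): 100,
    ("NP", ("DT", "N")):   80,
    ("NP", ("NP", "PP")):  20,
})

# Count(alpha): total count of each left-hand side.
lhs_counts = Counter()
for (lhs, _), c in production_counts.items():
    lhs_counts[lhs] += c

# P_ML(alpha -> beta) = Count(alpha -> beta) / Count(alpha)
p_ml = {rule: c / lhs_counts[rule[0]] for rule, c in production_counts.items()}

print(p_ml[("S", ("NP", "VP"))])   # 1.0
print(p_ml[("NP", ("DT", "N"))])   # 0.8
```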

Example from Jurafsky and Martin

S -> NP VP [p0=1]
NP -> DT N [p1=.8]
NP -> NP PP [p2=.2]
PP -> PRP NP [p3=1]
VP -> V NP [p4=.7]
VP -> VP PP [p5=.3]
DT -> 'a' [p6=.25]
DT -> 'the' [p7=.75]
N -> 'child' [p8=.5]
N -> 'cake' [p9=.3]
N -> 'fork' [p10=.2]
PRP -> 'with' [p11=.1]
PRP -> 'to' [p12=.9]
V -> 'saw' [p13=.4]
V -> 'ate' [p14=.6]
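One quick sanity check on such a grammar: for each left-hand side, the probabilities of its rules must sum to 1. A small sketch over the values above:

```python
from math import isclose

# Rule probabilities from the grammar above, grouped by left-hand side.
probs = {
    "S":   [1.0],
    "NP":  [0.8, 0.2],
    "PP":  [1.0],
    "VP":  [0.7, 0.3],
    "DT":  [0.25, 0.75],
    "N":   [0.5, 0.3, 0.2],
    "PRP": [0.1, 0.9],
    "V":   [0.4, 0.6],
}

# In a proper PCFG, each left-hand side defines a probability distribution.
for lhs, ps in probs.items():
    assert isclose(sum(ps), 1.0), lhs
print("all rule distributions sum to 1")
```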

the child ate the cake with the fork

Filling in the first chart cells for "the child ate the cake with the fork":
– DT .75 (the)
– N .5 (child)
– NP .8 (rule NP → DT N)
– NP .8 * .5 * .75 = .3 (probability of the constituent "the child")

Now, on your own, compute the probability of the entire sentence using Probabilistic CKY. Don’t forget that there may be multiple parses, so you will need to add the corresponding probabilities.
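To verify a hand computation, here is a minimal probabilistic CKY sketch (a hypothetical implementation, not code from the lecture) for the grammar above, which is already in Chomsky normal form. Passing sum as the combination function gives the total sentence probability; passing max gives the probability of the best parse.

```python
from collections import defaultdict

# Binary rules of the grammar: (lhs, B, C, prob) for lhs -> B C.
binary = [
    ("S", "NP", "VP", 1.0), ("NP", "DT", "N", 0.8), ("NP", "NP", "PP", 0.2),
    ("PP", "PRP", "NP", 1.0), ("VP", "V", "NP", 0.7), ("VP", "VP", "PP", 0.3),
]
# Lexical rules: word -> list of (tag, prob).
lexical = {
    "a": [("DT", 0.25)], "the": [("DT", 0.75)],
    "child": [("N", 0.5)], "cake": [("N", 0.3)], "fork": [("N", 0.2)],
    "with": [("PRP", 0.1)], "to": [("PRP", 0.9)],
    "saw": [("V", 0.4)], "ate": [("V", 0.6)],
}

def cky(words, combine):
    """Fill a CKY chart; 'combine' is addition (sentence probability,
    summing over parses) or max (probability of the best parse)."""
    n = len(words)
    chart = [[defaultdict(float) for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):                 # lexical cells
        for tag, p in lexical[w]:
            chart[i][i + 1][tag] = p
    for span in range(2, n + 1):                  # larger spans, bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):             # split point
                for lhs, b, c, p in binary:
                    if chart[i][k][b] > 0 and chart[k][j][c] > 0:
                        score = p * chart[i][k][b] * chart[k][j][c]
                        chart[i][j][lhs] = combine(chart[i][j][lhs], score)
    return chart[0][n]["S"]

words = "the child ate the cake with the fork".split()
print(cky(words, lambda a, b: a + b))  # ≈ 1.3608e-04 (sum over both parses)
print(cky(words, max))                 # ≈ 8.1648e-05 (the VP-attachment parse)
```

The sentence has exactly two parses under this grammar, differing in whether the PP "with the fork" attaches to the VP or to the NP "the cake"; the chart sums (or maximizes) over both automatically.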

NLP