New Results in Parsing
Eugene Charniak
Brown Laboratory for Linguistic Information Processing (BLLIP)


Joint Work with Mark Johnson and David McClosky

New Parsing Results
I. Very Old Parsing Results ( )
II. More Recent Old Parsing Results ( )
III. Learning from Unlabeled Data
IV. Beyond the Wall Street Journal
V. Future Work

Parsing
A parser maps a sentence to its syntactic structure, e.g. “Alice ate yellow squash.” to the tree
(S (NP (N Alice)) (VP (V ate) (NP (Adj yellow) (N squash))))

The Importance of Parsing
“In the hotel fake property was sold to tourists.”
What does “fake” modify? What does “In the hotel” modify?

Ambiguity
“Salesmen sold the dog biscuits” has two parses:
(S (NP (N Salesmen)) (VP (V sold) (NP (Det the) (N dog) (N biscuits))))
(S (NP (N Salesmen)) (VP (V sold) (NP (Det the) (N dog)) (NP (N biscuits))))

Probabilistic Context-free Grammars (PCFGs)
S → NP VP        1.0
VP → V NP        0.5
VP → V NP NP     0.5
NP → Det N       0.5
NP → Det N N     0.5
N → salespeople  0.3
N → dog          0.4
N → biscuits     0.3
V → sold         1.0
[parse tree for “Salesmen sold the dog biscuits”]
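As a quick illustration (not part of the original slides), here is a minimal Python sketch of the toy grammar above, with a parse's probability computed as the product of its rule probabilities. The unary rules NP → N and Det → the are assumptions added so the example sentence is covered; they are not in the slide's rule list.

# Toy PCFG from the slide; probability of a parse = product of rule probabilities.
RULE_PROBS = {
    ("S",  ("NP", "VP")):      1.0,
    ("VP", ("V", "NP")):       0.5,
    ("VP", ("V", "NP", "NP")): 0.5,
    ("NP", ("Det", "N")):      0.5,
    ("NP", ("Det", "N", "N")): 0.5,
    ("NP", ("N",)):            1.0,   # assumed, for the bare-noun subject
    ("Det", ("the",)):         1.0,   # assumed
    ("N",  ("salespeople",)):  0.3,
    ("N",  ("dog",)):          0.4,
    ("N",  ("biscuits",)):     0.3,
    ("V",  ("sold",)):         1.0,
}

def tree_prob(tree):
    """tree = (label, children), where each child is a subtree or a word string."""
    label, children = tree
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    prob = RULE_PROBS[(label, rhs)]
    for child in children:
        if not isinstance(child, str):
            prob *= tree_prob(child)
    return prob

# One parse of "salespeople sold the dog biscuits" (single three-word NP object):
parse = ("S", [("NP", [("N", ["salespeople"])]),
               ("VP", [("V", ["sold"]),
                       ("NP", [("Det", ["the"]), ("N", ["dog"]), ("N", ["biscuits"])])])])
print(tree_prob(parse))  # 1.0 * 1.0 * 0.3 * 0.5 * 1.0 * 0.5 * 1.0 * 0.4 * 0.3 = 0.009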

The Basic Paradigm
[diagram: the training portion of the tree-bank feeds a learner, which produces a parser; the testing portion feeds a tester that evaluates the parser]

The Penn Wall Street Journal Tree-bank
About one million words. Average sentence length is 23 words and punctuation.
Example sentence: In an Oct. 19 review of “The Misanthrope” at Chicago’s Goodman Theatre (“Revitalized Classics Take the Stage in Windy City,” Leisure & Arts), the role of Celimene, played by Kim Cattrall, was mistakenly attributed to Christina Haag.

“Learning” a PCFG from a Tree-Bank
(S (NP (N Salespeople))
   (VP (V sold)
       (NP (Det the) (N dog) (N biscuits)))
   (. .))
Rules read off this tree include:
S → NP VP .
VP → V NP
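A sketch of what this “learning” amounts to (using my own tree encoding, not the Tree-bank file format): read the rules off every tree and set each rule's probability to its relative frequency, count(LHS → RHS) / count(LHS).

from collections import Counter

def rules(tree):
    """Yield (lhs, rhs) for every rule used in tree = (label, children)."""
    label, children = tree
    yield (label, tuple(c if isinstance(c, str) else c[0] for c in children))
    for child in children:
        if not isinstance(child, str):
            yield from rules(child)

def estimate_pcfg(treebank):
    """treebank: iterable of trees; returns {(lhs, rhs): relative frequency}."""
    rule_counts, lhs_counts = Counter(), Counter()
    for tree in treebank:
        for lhs, rhs in rules(tree):
            rule_counts[(lhs, rhs)] += 1
            lhs_counts[lhs] += 1
    return {rule: count / lhs_counts[rule[0]] for rule, count in rule_counts.items()}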

Producing a Single “Best” Parse
The parser finds the most probable parse tree given the sentence s. For a PCFG we have the following, where c varies over the constituents in the tree:
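(The slide's own equation is not preserved above; the following is a reconstruction of the standard PCFG statement it refers to, in my notation.)

\[
\hat{\pi}(s) \;=\; \arg\max_{\pi}\, p(\pi \mid s) \;=\; \arg\max_{\pi}\, p(\pi, s) \;=\; \arg\max_{\pi} \prod_{c \in \pi} p\bigl(\mathrm{rule}(c) \mid \mathrm{label}(c)\bigr)
\]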

Evaluating Parsing Accuracy
Few sentences are assigned a completely correct parse by any currently existing parser. Rather, evaluation is done in terms of the percentage of correct constituents.
A constituent is a triple [label, start, finish]; the whole triple must appear in the true parse for the constituent to be marked correct.

Evaluating Constituent Accuracy
Let C be the number of correct constituents produced by the parser over the test set, M the total number of constituents produced, and N the total in the correct (gold) version.
Precision = C/M
Recall = C/N
It is possible to artificially inflate either one by itself, so I will typically give their harmonic mean (the “f-measure”).
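A small sketch of this metric, assuming constituents are represented as (label, start, finish) triples (my encoding, not a specific evaluation tool):

from collections import Counter

def constituent_prf(guessed, gold):
    """guessed, gold: lists of (label, start, finish) triples over the test set."""
    correct = sum((Counter(guessed) & Counter(gold)).values())  # C: matched triples
    precision = correct / len(guessed)                          # C / M
    recall = correct / len(gold)                                # C / N
    f_measure = 2 * precision * recall / (precision + recall)   # harmonic mean
    return precision, recall, f_measure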

Parsing Results
Method                  Prec/Rec
PCFG                    75%
PCFG + simple tuning    78%

Lexicalized Parsing
To do better, it is necessary to condition probabilities on the actual words of the sentence. This makes the probabilities much tighter:
p(VP → V NP NP) =
p(VP → V NP NP | said) =
p(VP → V NP NP | gave) =

Lexicalized Probabilities for Heads
p(prices | n-plural) = .013
p(prices | n-plural, NP) = .013
p(prices | n-plural, NP, S) = .025
p(prices | n-plural, NP, S, v-past) = .052
p(prices | n-plural, NP, S, v-past, fell) = .146

Statistical Parsing
The last seven years have seen rapid improvement in statistical parsers due to lexicalization. Contributors to this literature include Bod, Collins, Johnson, Magerman, Ratnaparkhi, and myself.

Statistical Parser Improvement
[chart: labeled precision & recall (%) by year]

New Parsing Results
I. Very Old Parsing Results ( )
II. More Recent Old Parsing Results ( )
III. Learning from Unlabeled Data
IV. Beyond the Wall Street Journal
V. Future Work

Coarse-to-fine n-best parsing and MaxEnt discriminative re-ranking
Sentence → n-best parser → N parses → MaxEnt discriminative re-ranker → best parse

50-best parsing results (oracle)
[chart: oracle f-measure for 1-best, 2-best, 10-best, 25-best, and 50-best lists; the 1-best value is the base rate for the parser]

MaxEnt Re-ranking
N-best parses, M re-ranking features, and M weights, one for each feature.
The value of a parse is the weighted sum of its features; the re-ranker returns the highest-valued parse.
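A minimal sketch of how such a re-ranker scores candidates, assuming each parse is reduced to a sparse map from feature id to count (the feature extractor here is a hypothetical stand-in, not the talk's 13 schemas):

def parse_value(features, weights):
    """features: {feature_id: count}; weights: {feature_id: learned weight}."""
    return sum(count * weights.get(f, 0.0) for f, count in features.items())

def rerank(nbest_parses, extract_features, weights):
    """Return the highest-valued parse from an n-best list."""
    return max(nbest_parses, key=lambda p: parse_value(extract_features(p), weights))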

Feature Schemas
There are 13 feature schemas. For example: the number of constituents of length k that are {at the end of the sentence, before a punctuation mark, neither}.
There are 1049 different instantiations of this schema, e.g., “the parse has 2-4 constituents of length 5-8 at the end of the sentence.”

Features, Continued
Only schema instantiations with five or more occurrences are retained, giving 1,141,697 features. For any given parse, almost all of them have value zero, so there are functions that map a parse to just its non-zero features, which keeps the computation efficient.

Cross-validation for Feature-Weight Estimation
[diagram, after Collins (2000): 34,000 sentences for training, a 2,000-sentence test set, and a 2,000-sentence dev set; the generative parser provides parsing statistics and smoothing parameters, and produces 50-best parses for the 2,000-sentence test set]

Discriminative Parsing Results
“Standard” setup: trained on sections 2-22, section 24 for development, tested on section 23 (sentences up to a length cutoff).
[table: f-measure and % error reduction for the new parser, re-ranked Collins n-best parses, and the Charniak parser]

New Parsing Results
I. Very Old Parsing Results ( )
II. More Recent Old Parsing Results ( )
III. Learning from Unlabeled Data
IV. Beyond the Wall Street Journal
V. Future Work

Self-Training
Previously we trained on the human-parsed Penn Tree-bank. It would be beneficial if we could use more, unparsed, data to “learn” to parse better. Self-training is using the output of a system as extra training data for that same system.

The Basic Paradigm
[diagram: as before, the training portion of the tree-bank feeds a learner that produces a parser, now followed by a re-ranker; the testing portion feeds the tester]

[diagram: the self-training loop — unlabeled text is run through the parser and re-ranker, and the parser-produced parses are added to the tree-bank training portion that feeds the learner; the testing portion still feeds the tester]
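Schematically, the loop the diagram describes looks like this; train and parse_and_rerank are hypothetical stand-ins for the real learner and the parser/re-ranker pair:

def self_train(treebank_trees, unlabeled_sentences, train, parse_and_rerank):
    model = train(treebank_trees)                       # 1. train on the human-parsed Tree-bank
    auto_parses = [parse_and_rerank(model, sentence)    # 2. parse and re-rank unlabeled text
                   for sentence in unlabeled_sentences]
    return train(treebank_trees + auto_parses)          # 3. retrain on both together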

Self-Training
The trouble is, it is “known” not to work.
“Nor does self-training … offer a way out. Charniak (1997) showed a small improvement from training on 40M words of its own output, probably because the increased size of the training set gave somewhat better counts for the head dependency model. However, training on data that merely confirms the parser’s existing view cannot help in any other way and one might expect such improvements to be transient and eventually to degrade in the face of otherwise noisy data.” (Steedman et al., 2002)

Experimental Setup
Parsed and re-ranked 400 million words of “North American News” text.
Added 20 million words from the LA Times to the training data.
Parsed Section 23 of the Penn Tree-bank as before.

Self-Training Results
Charniak 2000 parser          89.5%
With re-ranking               91.3%
With extra 40 million words   [f-measure and % error reduction not preserved]

What is Going On?
Without the re-ranker, self-training does not seem to work. With the re-ranker, it does. We have no idea why.

New Parsing Results
I. Very Old Parsing Results ( )
II. More Recent Old Parsing Results ( )
III. Learning from Unlabeled Data
IV. Beyond the Wall Street Journal
V. Future Work

How Does the Parser Work on Corpora Other than the WSJ?
Very badly: 83.9% on the Brown Corpus.
The Brown Corpus is a “balanced” corpus of 1,000,000 words of text from 1961: newspapers, novels, religion, etc.
Are our parsers over-bred racehorses, finely tuned to the Wall Street Journal?

Results on the Brown Corpus
Basic parser on WSJ              89.5%
Basic parser on Brown            83.9%
Re-ranking parser on Brown       85.8%
Re-ranking + 40 million words    87.7%
83.9% to 87.7% is a 24% error reduction.

New Parsing Results
I. Very Old Parsing Results ( )
II. More Recent Old Parsing Results ( )
III. Learning from Unlabeled Data
IV. Beyond the Wall Street Journal
V. Future Work

What Next?
More experiments on self-training and new domains. E.g., would self-training on a biology textbook help parse medical text?
We have some really way-out ideas on repeatedly parsing the same text using different grammars. A comparatively tame version of this has been written up for NAACL.

Conclusion
We have improved parsing a lot in the last year. We have lots of wacky ideas for the future.

Why Do We Need to Use Tree-bank Non-terminals?
Because our output will be marked incorrect otherwise. So we only need them at output time; before then we can use whatever non-terminals we want. Why might we want other ones?

We Are Already Using Different Non-terminals
When we condition the choice of a word on another word, we are in effect creating new non-terminals:
P(prices | NNS, subject of “fell”) = 0.25
P(NP-NNS^S-fell → prices) = 0.25
NP-NNS^S-fell is just a funny non-terminal.
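To make the point concrete, here is a toy helper that builds such a symbol name; the exact naming convention is only an illustration of the slide's NP-NNS^S-fell example:

def lexicalized_label(category, head_tag, parent_category, parent_head):
    """Pack the conditioning context into an ordinary non-terminal name."""
    return f"{category}-{head_tag}^{parent_category}-{parent_head}"

print(lexicalized_label("NP", "NNS", "S", "fell"))  # NP-NNS^S-fell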

Clustering Non-terminals
So let us create lots of really specific non-terminals and cluster them, e.g.:
NonTrm1005 = a noun phrase headed by “prices” under a sentence headed by “fell”
ClusterNonTerm223 = {NonTrm1005, NonTerm2200, …}
[diagram: ClusterNonTerm223, ClusterNonTerm111, …]
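As a purely illustrative stand-in for the clustering step (the talk leaves the actual algorithm open), one could greedily group fine-grained non-terminals whose rule-expansion distributions are similar:

import math

def cosine(p, q):
    """Cosine similarity of two sparse distributions {rhs: probability}."""
    dot = sum(p[k] * q.get(k, 0.0) for k in p)
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def greedy_cluster(expansion_dists, threshold=0.8):
    """expansion_dists: {nonterminal: {rhs: prob}}; returns a list of clusters."""
    clusters = []
    for nt, dist in expansion_dists.items():
        for cluster in clusters:
            if cosine(dist, expansion_dists[cluster[0]]) >= threshold:
                cluster.append(nt)   # join the first sufficiently similar cluster
                break
        else:
            clusters.append([nt])    # otherwise start a new cluster
    return clusters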

Problem: Optimal Clustering is NP-Hard
Solution 1: Don't worry; many clustering algorithms do quite well anyway.
Solution 2: Do many different clusterings.