Reranking Parse Trees with an SRL System. Charles Sutton and Andrew McCallum, University of Massachusetts. June 30, 2005.

Motivation for Joint Processing
Example frame: VERB: barked; ARG0: the dog; ARG1: the man; AM-LOC: TV
– Miller (2000): a single probabilistic model for parsing and relations. Why not for SRL?
– Rather than augmenting the grammar, we use a reranking approach:
  – Uncertainty
  – Long-range features

Reranking by weighted combination
– Basic MaxEnt SRL model
– Rerank the n-best parse trees from Bikel's implementation of the Collins parser
– Gildea & Jurafsky tried this with α = 0.5
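
A minimal sketch, under my own assumptions, of what the weighted combination could look like in code: each candidate tree carries a parser log-probability and an SRL-model log-probability, and a single mixing weight alpha (0.5 reproduces the Gildea & Jurafsky setting above). The function and variable names are hypothetical, not from the talk.

    def combined_score(parse_log_prob, srl_log_prob, alpha=0.5):
        # Weighted combination of the parser score and the SRL-model score
        # for one candidate tree. alpha = 1.0 ranks by the parser alone;
        # alpha = 0.0 ranks by the SRL model alone.
        return alpha * parse_log_prob + (1.0 - alpha) * srl_log_prob

    def rerank_by_combination(nbest, alpha=0.5):
        # nbest: list of (tree, parse_log_prob, srl_log_prob) tuples obtained by
        # running the base SRL model on each of the parser's n-best trees.
        # Returns the tree with the highest combined score.
        return max(nbest, key=lambda c: combined_score(c[1], c[2], alpha))[0]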

Weighted combination
– Choosing the tree by SRL score alone has lower recall

Training a global reranker
– MaxEnt reranking of the parse-tree list, using features from the base SRL frame
– Inspired by Toutanova et al. (2005), but different
– Features include:
  – Standard local SRL features
  – Does argument Ax occur in the frame?
  – Linear sequence of frame arguments (e.g., A0_V_A1)
  – Conjunctions of heads from argument pairs
  – Parse tree score

Parse Trees                     SRL F1
Gold best                       63.9
Trained combination             63.6
Simple combination (α = 0.5)    56.9
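
For concreteness, a hypothetical sketch of how a few of the listed features could be encoded and scored by a log-linear reranker. The feature strings, data layout, and helper names are my own illustration, not the authors' implementation; the conjunction-of-heads and standard local SRL features are omitted.

    def frame_features(arg_labels, parse_score):
        # Toy feature extractor for one (parse tree, SRL frame) candidate.
        # arg_labels: predicted argument labels in sentence order, e.g. ["A0", "V", "A1"];
        # parse_score: the parser's log-probability for the candidate tree.
        feats = {}
        for label in arg_labels:
            feats["has_arg=" + label] = 1.0             # does argument Ax occur in the frame?
        feats["arg_seq=" + "_".join(arg_labels)] = 1.0  # linear argument sequence, e.g. A0_V_A1
        feats["parse_score"] = parse_score              # parser score as a real-valued feature
        return feats

    def rerank_maxent(candidates, weights):
        # candidates: list of (tree, arg_labels, parse_score); weights: dict mapping
        # feature name to learned weight. Picks the candidate with the highest
        # log-linear score under the trained reranker.
        def score(c):
            return sum(weights.get(f, 0.0) * v
                       for f, v in frame_features(c[1], c[2]).items())
        return max(candidates, key=score)[0]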

Ceiling performance
Parse Trees                     SRL F1
Gold best                       63.9
Reranked by gold parse F1
Reranked by gold frame F1

Best SRL performance of the parse-tree reranking system. Parse F1: 95.0

Discussion / Future Work
– Reranking parse trees has strong limitations
– Marginalizing over parse trees
– Should SRL / parsing be joint at all?
  – If not, how different is it from MUC relation extraction? (Miller et al., 2000)
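
One way to read the "marginalizing over parse trees" item: average the SRL model's argument probabilities over the n-best trees, weighted by normalized parse probability, instead of conditioning on a single reranked tree. The sketch below is only an illustration of that idea under assumed data structures, not something described in the talk.

    import math
    from collections import defaultdict

    def marginal_arg_scores(nbest):
        # nbest: list of (parse_log_prob, arg_log_probs), where arg_log_probs maps
        # (span, label) -> log P(label | span, tree) under the base SRL model.
        # Average each argument's probability over the n-best trees, weighted by
        # the normalized parse probability of each tree.
        log_z = math.log(sum(math.exp(lp) for lp, _ in nbest))
        scores = defaultdict(float)
        for parse_lp, arg_lps in nbest:
            tree_weight = math.exp(parse_lp - log_z)
            for key, arg_lp in arg_lps.items():
                scores[key] += tree_weight * math.exp(arg_lp)
        return dict(scores)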