Supervised models for coreference resolution Altaf Rahman and Vincent Ng Human Language Technology Research Institute University of Texas at Dallas 1.


What is Coreference Resolution? – Identify all noun phrases (mentions) that refer to the same real-world entity. Example: Barack Obama nominated Hillary Rodham Clinton as his secretary of state on Monday. He chose her because she had foreign-affairs experience as a former First Lady.

Plan for the talk – Existing learning-based coreference models (overview and implementation details) – Our cluster-ranking model – Evaluation – Summary

Existing learning-based coreference models – Mention-Pair (MP) model – Entity-Mention (EM) model – Mention-Ranking (MR) model

Mention-Pair (MP) Model (Soon et al.; Ng and Cardie 2002) Classifies whether two mentions are coreferent or not. Weaknesses – Insufficient information to make an informed coreference decision. – Each candidate antecedent is considered independently of the others. Barack Obama ……… Hillary Rodham Clinton ……. his ……….. secretary of state ………… the President …….. He ….……… her

Entity-Mention (EM) Model (Pasula et al.; Luo et al.; Yang et al.) Classifies whether a mention and a preceding, possibly partially formed cluster are coreferent or not. Strengths – Improved expressiveness – Allows the computation of cluster-level features. Weakness – Each candidate cluster is considered independently of the others. Barack Obama ……………… Hillary Rodham Clinton ……. his ……….. secretary of state …………………….. He ….……… her

Mention-Ranking (MR) Model (Denis & Baldridge 2007, 2008) Imposes a ranking on a set of candidate antecedents. Strength – Considers all the candidate antecedents simultaneously. Weakness – Insufficient information to make an informed coreference decision. Barack Obama ……………… Hillary Rodham Clinton ……. his ……….. secretary of state …………………….. He ….……… her

Goal Propose a cluster-ranking (CR) model that – ranks all the preceding clusters for a mention – combines the strengths of the EM and MR models: it improves expressiveness by using cluster-level features, and it considers all the candidate clusters simultaneously.

Plan for the talk – Existing learning-based coreference models (overview and implementation details) – Our cluster-ranking model – Evaluation – Summary

Mention-Pair (MP) Model Classifies whether two mentions are coreferent or not. Training – Each instance is created between m_j and m_k, with 39 features: features describing m_j, a candidate antecedent (Pronoun? Subject? Nested?); features describing m_k, the mention to be resolved (Number? Gender? Pronoun2? Semantic class? Animacy?); and features describing the relation between m_j and m_k (Head match? String match? Gender match? Span? Appositive? Alias? Distance?)
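As an illustration, a few features from each of the three groups can be computed from toy mention records (the dict layout and the feature subset are assumptions made for this sketch; the actual system uses 39 features):

```python
def pair_features(ante, ana):
    """Toy versions of the three feature groups: antecedent-only,
    anaphor-only, and relational features over the pair."""
    return {
        # group 1: describing the candidate antecedent m_j
        "ante_pronoun": ante["is_pronoun"],
        # group 2: describing the mention to be resolved m_k
        "ana_pronoun": ana["is_pronoun"],
        "ana_gender": ana["gender"],
        # group 3: describing the relation between m_j and m_k
        "string_match": ante["text"].lower() == ana["text"].lower(),
        "gender_match": ante["gender"] == ana["gender"],
    }
```

For the pair (Barack Obama, he), for example, the gender-match feature fires while the string-match feature does not.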

Mention-Pair (MP) Model Training instance creation (Soon et al.) – create a positive instance for each anaphoric mention m_j and its closest preceding antecedent mention m_i; a negative instance for m_j and each intervening mention m_{i+1}, m_{i+2}, …, m_{j-1}; and no instances for non-anaphors. Testing (Soon et al.) – For each m_j, select as its antecedent the closest preceding mention that is classified as coreferent with m_j; if no such mention exists, m_j is considered non-anaphoric.
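The instance-creation heuristic above can be sketched in Python (function and variable names are illustrative, not from the paper's implementation):

```python
# Sketch of Soon et al.'s training-instance creation. Mentions are
# listed in document order; `antecedent` maps each anaphoric mention
# to its closest preceding coreferent mention.
def make_training_pairs(mentions, antecedent):
    """Return (candidate, mention, label) instances."""
    pairs = []
    for j, m_j in enumerate(mentions):
        m_i = antecedent.get(m_j)
        if m_i is None:
            continue                   # non-anaphor: no instances at all
        i = mentions.index(m_i)
        pairs.append((m_i, m_j, 1))    # closest antecedent -> positive
        for m_k in mentions[i + 1:j]:  # intervening mentions -> negative
            pairs.append((m_k, m_j, 0))
    return pairs
```

For the running example, the anaphor "He" would pair positively with "Barack Obama" and negatively with every mention in between.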

Entity-Mention (EM) Model Classifies whether a mention and a preceding cluster are coreferent or not. Training – Each instance is between a mention and a preceding, partially formed cluster – Cluster-level features

Entity-Mention (EM) Model Training instance creation: – Positive instance: for each anaphoric mention m_k and the preceding cluster c_j to which it belongs. – Negative instance: for each anaphoric mention m_k and each partial cluster whose last mention appears between m_k and its closest antecedent in c_j. – No instances for non-anaphors.

Entity-Mention (EM) Model, continued ……... m_1 ……… m_2 …….. m_3 …… m_4 ……… m_5 ….….…. m_6 For mention m_6 – Positive instance: features between m_6 and the cluster {m_2, m_4} – Negative instance: features between m_6 and the cluster {m_1, m_5} – No negative instance is created between m_6 and {m_3}
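The positive/negative cluster selection in this example can be sketched as follows (names are illustrative; integer positions stand for m_1 … m_6):

```python
def em_instances(position, own_cluster, other_clusters):
    """EM training instances for one anaphoric mention.
    `position` is the mention's index in the document; clusters are
    sets of positions of preceding mentions; `own_cluster` is the
    partial cluster the mention belongs to."""
    instances = [(own_cluster, 1)]             # positive: its own cluster
    closest = max(own_cluster)                 # closest antecedent's position
    for cluster in other_clusters:
        if closest < max(cluster) < position:  # last mention intervenes
            instances.append((cluster, 0))     # negative instance
    return instances
```

For m_6 with own cluster {m_2, m_4}, this yields a positive instance for {m_2, m_4} and a negative one for {m_1, m_5}, but none for {m_3}, matching the slide.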

Entity-Mention (EM) Model Testing – As in the MP model, except that we now resolve the mention to the closest preceding cluster that is classified as coreferent.

Entity-Mention (EM) Model, continued For each relational feature used by the MP model we create a set of cluster-level features. Example: Gender = Compatible? Incompatible? Not applicable? Hillary Clinton ….. whose ………..… she ……….… him ……. (female) (neutral) (female) (male) For the mention "him" – #C = 0, Normalized_C = 0/3 = 0.00; #I = 2, Normalized_I = 2/3 = 0.66; #NA = 1, Normalized_NA = 1/3 = 0.33 – Bucketing: 0.0 = NONE, <0.5 = MOST-FALSE, >0.5 = MOST-TRUE, 1.0 = ALL – GenderC = NONE, GenderI = MOST-TRUE, GenderNA = MOST-FALSE
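The bucketing of normalized counts can be made concrete with a short sketch (the gender-vote logic here is simplified; in the real system the votes come from the pairwise Gender feature):

```python
def bucket(fraction):
    """Map a normalized count onto the coarse values from the slide."""
    if fraction == 0.0:
        return "NONE"
    if fraction == 1.0:
        return "ALL"
    return "MOST-TRUE" if fraction > 0.5 else "MOST-FALSE"

def cluster_gender_features(mention_gender, cluster_genders):
    """Compatible / Incompatible / Not-applicable votes over the
    cluster's mentions, normalized by cluster size, then bucketed."""
    def vote(g):
        if g == "neutral":
            return "NA"                 # e.g. "whose": gender not applicable
        return "C" if g == mention_gender else "I"
    votes = [vote(g) for g in cluster_genders]
    return {lab: bucket(votes.count(lab) / len(votes))
            for lab in ("C", "I", "NA")}
```

For "him" (male) against the cluster {Hillary Clinton, whose, she}, this reproduces GenderC=NONE, GenderI=MOST-TRUE, GenderNA=MOST-FALSE.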

Mention-Ranking (MR) Model Ranks a set of candidate antecedents for each mention. Training – Each instance represents two mentions (m_j, m_k) – Same 39 features as in the mention-pair (MP) model – Uses the SVM ranker learning algorithm (Joachims 2002)

Mention-Ranking (MR) Model, continued Training instance creation – According to Soon et al.'s method: the rank value is 2 if the instance is positive, and 1 otherwise. Testing – First check the anaphoricity of m_j using a separate anaphoricity classifier. If m_j is non-anaphoric, create a new cluster; otherwise, resolve m_j to the highest-ranked m_k among ALL the candidate antecedents.
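Test-time resolution in this pipelined architecture can be sketched as follows (the anaphoricity classifier and the ranker's scoring function are pluggable stand-ins, not the paper's actual models):

```python
def resolve_mention(mention, candidates, is_anaphoric, score):
    """Pipeline MR resolution: consult a separate anaphoricity
    classifier first; if the mention is anaphoric, pick the
    highest-scoring candidate among ALL preceding mentions."""
    if not candidates or not is_anaphoric(mention):
        return None                 # start a new cluster
    return max(candidates, key=lambda c: score(c, mention))
```

Contrast this with the MP model's test procedure, which scans candidates independently and stops at the closest one classified positive.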

Plan for the talk – Existing learning-based coreference models (overview and implementation details) – Our cluster-ranking model – Evaluation – Summary

Cluster-Ranking (CR) Model Combines the strengths of the EM and MR models; ranks all the preceding clusters for each mention. Training – Each instance comprises features between a mention m_k and a preceding cluster c_j – Instances are created as in the EM model – Rank values are assigned as in the MR model

Cluster-Ranking (CR) Model Since no instances are created from non-anaphors, we need to rely on a separate classifier to determine whether a mention is anaphoric. Problem – Errors in anaphoricity determination can propagate to coreference resolution. Hypothesis – A model that jointly determines anaphoricity and coreference can overcome this problem with the pipeline approach.

Joint anaphoricity determination and coreference resolution Idea – Create additional training instances from non-anaphors: if m_k is non-anaphoric, assign rank value 1 to each instance formed between m_k and each preceding cluster, and rank value 2 to the instance formed between m_k and a (hypothetical) null cluster, using only features describing m_k. The same idea can be applied to create a joint version of the MR model.
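A sketch of the joint instance creation (the null-cluster marker and rank encoding follow the slide; names are illustrative):

```python
NULL_CLUSTER = frozenset()   # hypothetical marker: "m_k is non-anaphoric"

def joint_rank_instances(is_anaphoric, own_cluster, preceding_clusters):
    """Rank-valued instances for the joint CR model. For anaphors the
    correct cluster gets rank 2 and the rest rank 1, as in the
    pipelined model. For non-anaphors every real cluster gets rank 1
    and the null cluster rank 2, so ranking the null cluster highest
    at test time IS the anaphoricity decision."""
    if is_anaphoric:
        return [(c, 2 if c == own_cluster else 1)
                for c in preceding_clusters]
    return [(c, 1) for c in preceding_clusters] + [(NULL_CLUSTER, 2)]
```

At test time no separate anaphoricity classifier is needed: a mention whose top-ranked cluster is the null cluster simply starts a new cluster.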

Plan for the talk – Existing learning-based coreference models (overview and implementation details) – Our cluster-ranking model – Evaluation – Summary

Evaluation Experimental setup – ACE 2005 corpus: 599 documents from six sources (BC, BN, CTS, NW, UN, WL); 80% for training and 20% for testing – True mentions and system mentions (the latter extracted by a learned mention extractor trained on the training documents) – Scoring programs: MUC (Vilain et al. 1995), CEAF (Luo 2005), B³ (Bagga & Baldwin 1998), reporting recall, precision, and F-measure

System Mention Results Baseline systems – MP model – EM model – MR model. All models are trained using SVM-light, with all learning parameters set to their default values.

System Mention Results (MP baseline) The MP model achieves a CEAF F-score of 53.4 and a B³ F-score of 54.1. [Table: recall, precision, and F-score under CEAF and B³; individual cell values were lost in transcription]

System Mention Results (EM baseline) The change in F-score is insignificant despite the improved expressiveness of the EM model; similar trends have been reported by Luo et al. [Table: CEAF and B³ scores for the MP and EM models; cell values lost in transcription]

System Mention Results (MR baseline) Two architectures for using anaphoricity information – pipeline – joint. Both show significant improvements over the MP baseline, and the joint architecture outperforms the pipeline architecture. [Table: CEAF and B³ scores for the MP, EM, and MR (pipeline/joint) models; cell values lost in transcription]

System Mention Results (CR model) The cluster-ranking model outperforms the mention-ranking model, owing to simultaneous gains in recall and precision. [Table: CEAF and B³ scores for all models, including CR (pipeline/joint); cell values lost in transcription]

System Mention Results (CR model) For the CR model too, the joint architecture outperforms the pipeline architecture, again owing to simultaneous gains in recall and precision.

Performance of the Anaphoricity Classifier The joint models outperform their pipelined counterparts. [Table: anaphoricity accuracy for the MR and CR models, pipelined and joint; only the cluster-ranking (joint) figure, 66.53, survives in the transcript]

Plan for the talk – Existing learning-based coreference models (overview and implementation details) – Our cluster-ranking model – Evaluation – Summary

Summary Proposed a cluster-ranking approach that – combines the strengths of the EM and MR models – jointly learns coreference resolution and anaphoricity determination – significantly outperforms three commonly used learning-based coreference models

Software The tool, CherryPicker, will be publicly available for download (in early September). Thank You