CS 4705 Lecture 21 Algorithms for Reference Resolution

Issues
Which features should we make use of? How should we order them, i.e., which override which?
What should appear in our discourse model, i.e., what types of information do we need to keep track of?
How do we evaluate?

Lappin & Leass ‘94 Weights candidate antecedents by recency and syntactic preference (86% accuracy) Two major functions to perform: –Update the discourse model when an NP that evokes a new entity is found in the text, computing the salience of this entity for future anaphora resolution –Find most likely referent for current anaphor by considering possible antecedents and their salience values Partial example for 3P, non-reflexives

Salience Factors and Weights (Trained)
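For reference, the weights below are the ones reported in Lappin & Leass (1994); they reproduce the salience totals in the worked example that follows, though the trained values used on the original slide may differ.

```python
# Salience factor weights as reported in Lappin & Leass (1994).
SALIENCE_WEIGHTS = {
    "sentence_recency": 100,   # mentioned in the current sentence
    "subject": 80,             # grammatical subject
    "existential": 70,         # predicate nominal in an existential construction
    "accusative": 50,          # direct object
    "indobj_oblique": 40,      # indirect object or oblique complement
    "non_adverbial": 50,       # NP not contained in an adverbial PP
    "head_noun": 80,           # NP not embedded in another NP
}
```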

Implicit ordering of arguments: subj > existential pred. nominal > obj > indobj/oblique > dem. advPP
On the sofa, the cat was eating bonbons.
sofa: 100+80+40 = 220   cat: 100+80+50+80 = 310   bonbons: 100+50+50+80 = 280
Update:
–Weights accumulate over time
–Cut in half after each sentence is processed
–Salience values for subsequent referents accumulate for the equivalence class of coreferential items (exceptions, e.g. multiple references in the same sentence)
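A minimal sketch of this update step, using the SALIENCE_WEIGHTS dict sketched above; the function and entity names are illustrative, not Lappin & Leass's own implementation:

```python
def end_of_sentence_decay(salience):
    """Halve every entity's salience once a sentence has been processed."""
    for entity in salience:
        salience[entity] /= 2.0

def add_mention(salience, entity, factors, weights):
    """Add the weights of the factors a new mention satisfies to the salience
    of its coreference equivalence class."""
    salience[entity] = salience.get(entity, 0.0) + sum(weights[f] for f in factors)

# E.g. "cat", the subject of the first sentence, satisfies sentence recency,
# subject, non-adverbial, and head-noun emphasis:
#   add_mention(sal, "cat",
#               ["sentence_recency", "subject", "non_adverbial", "head_noun"],
#               SALIENCE_WEIGHTS)
# gives 100 + 80 + 50 + 80 = 310, as in the example.
```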

The bonbons were clearly very tasty.
sofa: 220/2 = 110   cat: 310/2 = 155   bonbons: 280/2 + (100+80+50+80) = 450
–Additional salience weights for grammatical role parallelism (+35) and cataphora (-175) are calculated when a pronoun is to be resolved
–Additional constraints on gender/number agreement and syntax
They were a gift from an unknown admirer.
sofa: 110/2 = 55   cat: 155/2 = 77.5   bonbons: 450/2 = 225 (+35)

Reference Resolution
Collect potential referents (up to four sentences back): {sofa, cat, bonbons}
Remove those that don’t agree in number/gender with the pronoun: {bonbons}
Remove those that don’t pass intra-sentential syntactic coreference constraints, e.g. The cat washed it. (it ≠ cat)
Add the applicable values for role parallelism (+35) or cataphora (-175) to the current salience value of each potential antecedent
Select the referent with the highest salience; if there is a tie, select the referent closest to the pronoun in the string
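A sketch of this procedure, assuming the agreement, syntactic, parallelism, and cataphora checks are available as black-box predicates; the helper names are illustrative rather than Lappin & Leass's own code:

```python
def resolve_pronoun(pronoun, candidates, salience, position,
                    agrees, passes_syntax, is_parallel, is_cataphoric):
    """candidates: entities mentioned up to four sentences back.
    salience: dict entity -> current salience value.
    position: dict entity -> string position of the entity's latest mention.
    The remaining arguments are assumed black-box predicates."""
    # Filter by number/gender agreement and intra-sentential syntax.
    viable = [c for c in candidates
              if agrees(pronoun, c) and passes_syntax(pronoun, c)]
    if not viable:
        return None

    # Per-anaphor adjustments: +35 for role parallelism, -175 for cataphora.
    def score(c):
        s = salience[c]
        if is_parallel(pronoun, c):
            s += 35
        if is_cataphoric(pronoun, c):
            s -= 175
        return s

    # Highest adjusted salience wins; ties go to the closest preceding mention.
    return max(viable, key=lambda c: (score(c), position[c]))
```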

Solution
They were a gift from an unknown admirer. (they = bonbons)
bonbons: 225 + 35 = 260
Why? It is the only potential antecedent that agrees in number/gender, and it also has the highest salience (260).
Or… It ate them all. (it = ?)
sofa: 110/2 = 55   cat: 155/2 = 77.5   bonbons: 450/2 = 225

Centering (Grosz et al. ‘95; Brennan et al. ‘87)
Adds the concept of a discourse center
–Un: an utterance
–Backward-looking center Cb(Un): the current focus after Un is interpreted
–Forward-looking centers Cf(Un): ordered list of potential foci referred to in Un
–Cb(Un+1) is the highest-ranked member of Cf(Un) that is realized in Un+1
–Cp(Un): the preferred center, i.e. the highest-ranked member of Cf(Un)
–Cf is ordered subj > existential pred. nominal > obj > indobj/oblique > dem. advPP (Brennan et al.)
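A hypothetical helper that orders the forward-looking centers by this grammatical-role hierarchy; the role labels are illustrative, not tied to a particular parser:

```python
# Rank positions in the Brennan et al. Cf ordering (lower = more preferred).
ROLE_RANK = {"subj": 0, "exist_prednom": 1, "obj": 2, "indobj_oblique": 3, "dem_advpp": 4}

def rank_cf(mentions):
    """mentions: list of (entity, grammatical_role) pairs from utterance Un.
    Returns entities from most to least preferred; Cp(Un) is the first element."""
    ordered = sorted(mentions, key=lambda m: ROLE_RANK.get(m[1], len(ROLE_RANK)))
    return [entity for entity, _ in ordered]
```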

Transitions from Un to Un+1
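The standard Brennan–Friedman–Pollard transition classification can be written down directly as a small helper (a sketch; Cb and Cp are passed in explicitly):

```python
def classify_transition(cb_prev, cb_cur, cp_cur):
    """cb_prev = Cb(Un) (None if undefined); cb_cur = Cb(Un+1); cp_cur = Cp(Un+1)."""
    same_cb = cb_prev is None or cb_cur == cb_prev
    if cb_cur == cp_cur:
        return "Continue" if same_cb else "Smooth-Shift"
    return "Retain" if same_cb else "Rough-Shift"
```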

Rules
If any element of Cf(Un) is realized as a pronoun in Un+1, then Cb(Un+1) must be realized as a pronoun as well
Preference: Continue > Retain > Smooth-Shift > Rough-Shift
Algorithm
–Generate Cb-Cf combinations for all possible reference assignments
–Filter by constraints (syntactic coreference, selectional restrictions, …)
–Rank by preference among transition orderings
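A compact sketch of this generate-filter-rank loop, reusing the classify_transition helper sketched above; the constraint check is an assumed black-box predicate:

```python
PREFERENCE = ["Continue", "Retain", "Smooth-Shift", "Rough-Shift"]

def best_reading(readings, cb_prev, satisfies_constraints):
    """readings: candidate reference assignments as (cb_cur, cp_cur, assignment)
    tuples; cb_prev: Cb(Un), or None if undefined; satisfies_constraints:
    black-box filter (agreement, syntax, selectional restrictions, Rule 1, ...)."""
    viable = [r for r in readings if satisfies_constraints(r)]
    # Prefer the reading whose transition ranks earliest in PREFERENCE.
    # classify_transition is the helper from the previous sketch.
    return min(viable,
               key=lambda r: PREFERENCE.index(classify_transition(cb_prev, r[0], r[1])),
               default=None)
```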

Example
U1: George saw a mouse on the floor. U2: He showed it to Harry. U3: He ate it.
Grammatical role hierarchy
–Cf(U1): {George, mouse, floor}
–Cp(U1): George
–Cb(U1): undefined
Assume it = mouse
–Cf(U2): {George, mouse, Harry}
–Cp(U2): George
–Cb(U2): George
–Continue (Cp(U2) = Cb(U2); Cb(U1) undefined)

Assume it = floor
–Cf(U2): {George, floor, Harry}
–Cp(U2): George
–Cb(U2): George
–Continue (Cp(U2) = Cb(U2); Cb(U1) undefined)
What did George show Harry?
–How to break a tie? Ordering of the previous Cf list?
U3: He ate it. Assume he = George, it = mouse
–Cf(U3): {George, mouse}
–Cp(U3): George
–Cb(U3): George
–Continue (Cp(U3) = Cb(U3); Cb(U3) = Cb(U2))

Assume he = Harry, it = mouse
–Cf(U3): {Harry, mouse}
–Cp(U3): Harry
–Cb(U3): mouse (the highest-ranked member of Cf(U2) realized in U3)
–Rough-Shift (Cp(U3) ≠ Cb(U3); Cb(U3) ≠ Cb(U2))
So, who ate the mouse? Since Continue is preferred over Rough-Shift, centering resolves he = George.

Centering Theory vs. Lappin & Leass
Centering sometimes prefers an antecedent that Lappin and Leass (or Hobbs) would consider to have low salience
–It always prefers a single pronominalization strategy: prescriptive, and it assumes the discourse is coherent
–Its constraints are too simple: grammatical role, recency, repeated mention
–It assumes correct syntactic information is available as input

Hobbs ‘78: Syntax-Based Reference Resolution
Search for the antecedent in the parse tree of the current sentence, then in prior sentences in order of recency
–For the current sentence, search left-to-right, breadth-first, for NP nodes to the left of a path p from the pronoun up to the first NP or S node (X) above it
–Propose as the pronoun’s antecedent any NP you find, as long as it has an NP or S node between itself and X
–If X is the highest node in the sentence, search prior sentences, left-to-right and breadth-first, for candidate NPs
–Otherwise, continue searching the current tree by going to the next S or NP above X before going to prior sentences
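A much-simplified sketch of this search order, assuming nltk-style parse trees; it reproduces only the left-of-path, breadth-first, most-recent-sentence-first traversal, and omits the intervening-NP/S condition, agreement checks, and Hobbs' exact step numbering:

```python
from collections import deque
from nltk import Tree

def np_nodes_bfs(tree):
    """Yield NP subtrees in breadth-first, left-to-right order."""
    queue = deque([tree])
    while queue:
        node = queue.popleft()
        if isinstance(node, Tree):
            if node.label() == "NP":
                yield node
            queue.extend(node)            # children, left to right

def hobbs_candidates(sentences, pronoun_pos):
    """sentences: parse Trees, most recent last.
    pronoun_pos: tree position (tuple of child indices) of the pronoun in sentences[-1].
    Returns candidate antecedent NPs in roughly the order the search visits them."""
    current = sentences[-1]
    candidates = []
    # Walk up from the pronoun; at each dominating NP or S node X, collect NPs
    # lying to the left of the path, breadth-first.
    for cut in range(len(pronoun_pos) - 1, -1, -1):
        x = current[pronoun_pos[:cut]]            # ancestor at this depth
        if not (isinstance(x, Tree) and x.label() in ("NP", "S")):
            continue
        on_path = pronoun_pos[cut]                # index of the child on the path
        for child in x[:on_path]:                 # children to the left of the path
            candidates.extend(np_nodes_bfs(child))
    # Then prior sentences, most recent first, searched breadth-first left-to-right.
    for prev in reversed(sentences[:-1]):
        candidates.extend(np_nodes_bfs(prev))
    return candidates
```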

Evaluation
Centering is only now being specified precisely enough to be tested automatically on real data
–Specifying the Parameters of Centering Theory: A Corpus-Based Evaluation using Text from Application-Oriented Domains (Poesio et al., ACL 2000)
Walker ‘89: manual comparison of Centering vs. Hobbs ‘78
–Only 281 examples from 3 genres
–Assumed correct features were given as input to each
–Centering 77.6% vs. Hobbs 81.8%
–Compare Lappin and Leass’s 86% accuracy on a test set from computer training manuals