Question Classification (cont’d)
Ling573 NLP Systems & Applications
April 12, 2011




Upcoming Events
Two seminars, Friday 3:30:
Linguistics seminar: Janet Pierrehumbert (Northwestern), Example-Based Learning and the Dynamics of the Lexicon
AI Seminar: Patrick Pantel (MSR), Associating Web Queries with Strongly-Typed Entities

Roadmap
Question classification variations:
SVM classifiers
Sequence classifiers
Sense information improvements
Question series

Question Classification with Support Vector Machines
Hacioglu & Ward 2003
Same taxonomy, training, and test data as Li & Roth
Approach:
Shallow processing
Simpler features
Strong discriminative classifiers

Features & Processing
Contrast with Li & Roth: POS and chunk information; NE tagging; other sense information
Preprocessing: letters only, lower-cased, stopwords removed, stemmed
Terms:
Most informative 2,000 word n-grams
IdentiFinder NE tags (7 or 9 tags)
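A minimal sketch of this style of shallow preprocessing and n-gram selection, assuming scikit-learn and NLTK are available. The stemming, stopword removal, and 2,000-feature cutoff follow the slide; the chi-squared criterion stands in for "most informative", and all names here are illustrative rather than taken from the paper.

```python
# Sketch: shallow preprocessing plus selection of the 2,000 most informative
# word n-grams. Library choices and the chi-squared criterion are assumptions.
import re
from nltk.stem import PorterStemmer
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

stemmer = PorterStemmer()
stops = set(stopwords.words("english"))

def preprocess(question: str) -> str:
    """Keep letters only, lower-case, drop stopwords, stem."""
    tokens = re.findall(r"[A-Za-z]+", question.lower())
    return " ".join(stemmer.stem(t) for t in tokens if t not in stops)

# Word unigrams and bigrams over the cleaned questions.
vectorizer = CountVectorizer(preprocessor=preprocess, ngram_range=(1, 2))

# Keep the 2,000 n-grams most associated with the labels.
selector = SelectKBest(chi2, k=2000)

def build_features(questions, labels):
    X = vectorizer.fit_transform(questions)
    return selector.fit_transform(X, labels)
```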

Classification & Results
Employs support vector machines for classification
Best results: bigrams, 7 NE classes
Better than Li & Roth with POS + chunk features, though without semantic features
Fewer NE categories work better: more categories, more errors
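To make the classification step concrete, here is a hedged sketch of training a linear SVM over the features from the previous sketch. LinearSVC's default one-vs-rest scheme is an assumption standing in for whatever multiclass setup the paper actually used.

```python
# Sketch: linear SVM question classifier over the selected n-gram features.
# Reuses vectorizer, selector, and build_features from the sketch above.
from sklearn.svm import LinearSVC

def train_question_classifier(questions, labels):
    X = build_features(questions, labels)
    clf = LinearSVC(C=1.0)          # C chosen arbitrarily for illustration
    clf.fit(X, labels)
    return clf

def classify(clf, question: str) -> str:
    x = selector.transform(vectorizer.transform([question]))
    return clf.predict(x)[0]
```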

Enhanced Answer Type Inference Using Sequential Models
Krishnan, Das, and Chakrabarti 2005
Improves question classification with CRF extraction of ‘informer spans’
Intuition: humans identify the answer type from a few tokens, with little syntax
Who wrote Hamlet?
How many dogs pull a sled at the Iditarod?
How much does a rhino weigh?
The informer is a single contiguous span of tokens, e.g. in:
How much does a rhino weigh?
Who is the CEO of IBM?

Informer Spans as Features
Sensitive to question structure: What is Bill Clinton’s wife’s profession?
Idea: augment the question classifier’s word n-grams with informer-span information
Informer span features:
Informer-span n-grams
Hypernyms of informer n-grams: generalize over words or compounds
WSD? No
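A hedged sketch of the informer-span hypernym features using NLTK's WordNet interface. Because no WSD is performed, this version simply collects hypernyms from every noun sense of each informer token; the depth cutoff and the function name are my assumptions, not the paper's.

```python
# Sketch: hypernym features for an informer span, with no word sense
# disambiguation (all noun senses contribute). Depth cutoff is illustrative.
from nltk.corpus import wordnet as wn

def informer_hypernym_features(informer_tokens, max_depth=3):
    """Return hypernym synset names for every noun sense of each token."""
    feats = set()
    for tok in informer_tokens:
        for synset in wn.synsets(tok, pos=wn.NOUN):   # all senses: no WSD
            frontier, depth = [synset], 0
            while frontier and depth < max_depth:
                frontier = [h for s in frontier for h in s.hypernyms()]
                feats.update(h.name() for h in frontier)
                depth += 1
    return feats

# e.g. informer_hypernym_features(["rhino"]) includes ancestors such as
# ungulate and mammal synsets, letting the classifier generalize over the word.
```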

Effect of Informer Spans
Classifier: linear SVM, multiclass
Notable improvement from informer-span hypernyms
Better than hypernyms of all words: filters out sources of noise
Biggest improvements for ‘what’ and ‘which’ questions

Perfect vs CRF Informer Spans

Recognizing Informer Spans
Idea: informer spans are contiguous and syntactically governed
Use a sequential learner with syntactic information
Tag spans with B(egin), I(nside), O(utside)
Employ syntax to capture long-range factors
Matrix of features derived from the parse tree
Cell x[i,l]: i is the token position, l is the depth in the parse tree (only 2 levels used)
Values:
Tag: POS or constituent label at that position
Num: number of preceding chunks with the same tag

Parser Output: example parse tree (figure in original slides)

Parse Tabulation: encoding and feature table (figure in original slides)
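Since the encoding table itself is a figure, here is a rough sketch of the two-level feature matrix described above, built from a token/POS sequence and a flat list of level-2 constituents rather than a full parse. The data structures, function name, and the example chunk labels are illustrative assumptions, not the paper's actual encoding.

```python
# Sketch: build the x[i, l] feature matrix (tag plus count of preceding
# items with the same tag) from a POS-tagged question and a flat list of
# level-2 constituents. Labels and structures are illustrative.
from collections import Counter

def feature_matrix(pos_tags, chunks):
    """
    pos_tags: list of (token, POS) pairs, one per question token.
    chunks:   list of (start, end, label) level-2 constituents, left to right.
    Returns x, where x[i] = {1: (tag, num), 2: (tag, num)}.
    """
    matrix = []
    seen_pos = Counter()
    for i, (_tok, pos) in enumerate(pos_tags):
        cell = {1: (pos, seen_pos[pos])}        # level 1: the token's POS
        seen_pos[pos] += 1
        for start, end, label in chunks:        # level 2: covering constituent
            if start <= i < end:
                num = sum(1 for s, e, l in chunks if l == label and e <= start)
                cell[2] = (label, num)
                break
        matrix.append(cell)
    return matrix

# Illustrative call for "How much does a rhino weigh ?"; the chunk labels
# are guesses, not a parser's actual output.
x = feature_matrix(
    [("How", "WRB"), ("much", "JJ"), ("does", "VBZ"), ("a", "DT"),
     ("rhino", "NN"), ("weigh", "VB"), ("?", ".")],
    [(0, 2, "WHADJP"), (2, 3, "VP"), (3, 5, "NP"), (5, 6, "VP"), (6, 7, "O")],
)
```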

CRF Indicator Features
Cell features:
IsTag, IsNum: e.g. y_4 = 1 and x[4,2].tag = NP
Also IsPrevTag, IsNextTag
Edge features:
IsEdge(u,v): y_{i-1} = u and y_i = v
IsBegin, IsEnd
All features improve performance
Question accuracy: oracle informer spans 88%; CRF informer spans 86.2%
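The following sketch shows how such indicator features could be expressed with sklearn-crfsuite, which takes one feature dict per token. The feature names mirror the slide, but this rendering (and the use of that particular library) is my assumption, not the authors' code.

```python
# Sketch: cell-level indicator features for a BIO informer-span CRF,
# using sklearn-crfsuite's dict-per-token representation.
import sklearn_crfsuite

def token_features(matrix, i):
    """Indicator features for token i, given the matrix from the sketch above."""
    level2 = matrix[i].get(2, ("NONE", 0))
    feats = {
        "IsTag.1=" + matrix[i][1][0]: 1.0,     # POS at level 1
        "IsTag.2=" + level2[0]: 1.0,           # constituent label at level 2
        "IsNum.2=%d" % level2[1]: 1.0,
    }
    if i > 0:
        feats["IsPrevTag.2=" + matrix[i - 1].get(2, ("NONE", 0))[0]] = 1.0
    if i + 1 < len(matrix):
        feats["IsNextTag.2=" + matrix[i + 1].get(2, ("NONE", 0))[0]] = 1.0
    return feats

def question_features(matrix):
    return [token_features(matrix, i) for i in range(len(matrix))]

# Edge features (IsEdge, IsBegin, IsEnd) correspond to label transitions,
# which crfsuite models between adjacent B/I/O tags.
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
# crf.fit([question_features(m) for m in matrices], bio_label_sequences)
```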

Question Classification Using Headwords and Their Hypernyms
Huang, Thint, and Qin 2008
Questions:
Why didn’t WordNet/hypernym features help in Li & Roth?
Best results in Li & Roth: ~200,000 features, ~700 active
Can we do as well with fewer features?
Approach: refine the features:
Restrict use of WordNet to headwords
Employ WSD techniques
SVM and MaxEnt classifiers

Head Word Features
Head words: chunks and spans can be noisy
E.g. Bought a share in which baseball team?
Type: HUM:group (not ENTY:sport); the head word is more specific
Employ rules over parse trees to extract head words
Issue: vague heads
E.g. What is the proper name for a female walrus? Head = ‘name’?
Apply fix patterns to extract the sub-head (e.g. walrus)
Also, simple regular expressions for other feature types
E.g. ‘what is’ as a cue to the definition type
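A hedged sketch of rule-based head extraction over an NLTK constituency tree, with one fix pattern for vague heads such as "name". The tiny head-rule table, the vague-head list, and the regular expression are illustrative stand-ins for the fuller rule set the paper uses.

```python
# Sketch: extract a question head word from a constituency parse, with a
# fix pattern for vague heads like "name". Rules here are illustrative only.
import re
from nltk import Tree

# Minimal head rules: for each phrase label, which child labels to prefer.
HEAD_RULES = {
    "NP": ["NN", "NNS", "NNP", "NP"],
    "PP": ["NP", "IN"],
    "SBARQ": ["SQ", "S", "NP"],
    "SQ": ["NP", "VP"],
    "VP": ["NP", "VB", "VBD", "VBZ", "VP"],
}

VAGUE_HEADS = {"name", "kind", "type", "part"}

def head_word(tree: Tree) -> str:
    if isinstance(tree[0], str):              # preterminal: return the word
        return tree[0]
    for label in HEAD_RULES.get(tree.label(), []):
        for child in tree:
            if isinstance(child, Tree) and child.label() == label:
                return head_word(child)
    return head_word(tree[-1])                # fall back to the rightmost child

def refined_head(question: str, tree: Tree) -> str:
    head = head_word(tree)
    if head.lower() in VAGUE_HEADS:
        # Fix pattern: "... name for a female walrus?" -> "walrus"
        m = re.search(r"\b(?:for|of)\s+(?:a|an|the)?\s*\w*\s*(\w+)\s*\?*$", question)
        if m:
            return m.group(1)
    return head
```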

WordNet Features
Hypernyms:
Enable generalization: dog -> … -> animal
Can also generate noise: dog -> … -> person
Adding low-noise hypernyms:
Which senses? Restrict to the matching WordNet POS
Which word senses? Use the Lesk algorithm: overlap between the question and the WordNet gloss
How deep? Based on a validation set: 6 levels
Question-type similarity: compute similarity between the headword and the question types; use the type as a feature
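A hedged sketch of the headword hypernym features using NLTK: simplified Lesk picks one sense from the question context, and hypernyms are then collected up to depth 6 as the slide suggests. This is a plausible rendering under those assumptions, not the authors' implementation.

```python
# Sketch: low-noise hypernym features for the question headword.
# Lesk chooses a sense from the question; hypernyms are added up to depth 6.
from nltk.corpus import wordnet as wn
from nltk.wsd import lesk

def headword_hypernyms(question_tokens, headword, wn_pos=wn.NOUN, depth=6):
    sense = lesk(question_tokens, headword, pos=wn_pos)   # gloss overlap
    if sense is None:
        return set()
    feats, frontier = set(), [sense]
    for _ in range(depth):
        frontier = [h for s in frontier for h in s.hypernyms()]
        feats.update(h.name() for h in frontier)
    return feats

# e.g. headword_hypernyms("What is the fastest dog ?".split(), "dog")
# returns the hypernym synsets, six levels up, of whichever sense Lesk picks.
```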

Other Features
Question wh-word: what, which, who, where, when, how, why, and rest
N-grams: unigrams, bigrams, trigrams
Word shape (case features): all upper, all lower, mixed, all digit, other
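These last features are simple enough to sketch directly; the eight-way wh-word bucketing and the five word-shape classes follow the slide, while the function names are mine.

```python
# Sketch: wh-word and word-shape features (n-grams come from the vectorizer).
WH_WORDS = {"what", "which", "who", "where", "when", "how", "why"}

def wh_feature(question: str) -> str:
    """First wh-word in the question, or 'rest' if there is none."""
    for tok in question.lower().split():
        if tok in WH_WORDS:
            return tok
    return "rest"

def word_shape(token: str) -> str:
    if token.isupper():
        return "all_upper"
    if token.islower():
        return "all_lower"
    if token.isdigit():
        return "all_digit"
    if token.isalpha():
        return "mixed"
    return "other"

# e.g. wh_feature("Bought a share in which baseball team?") == "which"
#      word_shape("IBM") == "all_upper"; word_shape("Hamlet") == "mixed"
```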

Results Per feature-type results:

Results: Incremental Additive improvement:

Error Analysis
Inherent ambiguity: What is mad cow disease? ENTY:disease or DESC:def?
Inconsistent labeling:
What is the population of Kansas? NUM:other
What is the population of Arcadia, FL? NUM:count
Parser errors

Question Classification: Summary
Issue: integrating rich features / deeper processing
Errors in processing introduce noise
Noise in added features increases error
Large numbers of features can be problematic for training
Alternative solutions:
Use more accurate shallow processing and a better classifier
Restrict the addition of features to:
Informer spans
Headwords
Filter the features to be added

Question Series
TREC 2003-…
Target: PERS, ORG, …
Assessors create a series of questions about the target
Intended to model interactive Q/A, but often stilted
Introduces pronouns and anaphora

Handling Question Series
Given a target and a series, how do we handle reference?
Shallowest approach: concatenation: add the ‘target’ to the question
Shallow approach: replacement: replace all pronouns with the target
Least shallow approach: heuristic reference resolution
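As a concrete illustration, here is a hedged sketch of the two shallow strategies; the pronoun list and the replacement rule are simplifications of what TREC systems actually did.

```python
# Sketch: shallow strategies for resolving references in a question series.
import re

PRONOUNS = r"\b(he|she|it|they|him|her|them|his|hers|its|their|theirs)\b"

def concatenate(target: str, question: str) -> str:
    """Shallowest: just add the target to the question."""
    return f"{target} {question}"

def replace_pronouns(target: str, question: str) -> str:
    """Shallow: replace all pronouns with the target."""
    return re.sub(PRONOUNS, target, question, flags=re.IGNORECASE)

# replace_pronouns("Nirvana", "What is their biggest hit?")
#   -> "What is Nirvana biggest hit?"
# Note it misses definite descriptions such as "the band" in
# "When was the band formed?", the failure case discussed below.
```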

Question Series Results
No clear winning strategy
All questions are largely about the target, so there is no big win for anaphora resolution
If the search uses bag-of-words features, this works fine
The ‘replacement’ strategy can be problematic
E.g. Target = Nirvana:
What is their biggest hit?
When was the band formed?
Replacement wouldn’t catch ‘the band’
Most teams concatenate