1
A Data Driven Approach to Query Expansion in Question Answering
Leon Derczynski, Robert Gaizauskas, Mark Greenwood and Jun Wang
Natural Language Processing Group, Department of Computer Science, University of Sheffield, UK
2
Summary
- Introduce a system for QA
- Find that its IR component limits system performance
- Explore alternative IR components
- Identify which questions cause IR to stumble
- Using answer lists, find extension words that make these questions easier
- Show how knowledge of these words can rapidly accelerate the development of query expansion methods
- Show why one simple relevance feedback technique cannot improve IR for QA
3
How we do QA
- The question answering system follows a linear procedure to get from question to answers:
  1. Pre-processing
  2. Text retrieval
  3. Answer extraction
- Performance at each stage affects later results
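A minimal sketch of this kind of three-stage pipeline; the function bodies are placeholders for illustration and are not the actual AnswerFinder components.

```python
# Illustrative linear QA pipeline: pre-processing -> text retrieval -> answer extraction.

def preprocess(question: str) -> dict:
    """Turn the raw question into a query representation (placeholder)."""
    return {"query": question}

def retrieve(query: dict, n: int = 20) -> list[str]:
    """Return the top-n paragraphs for the query; an IR engine such as
    Lucene, Indri or Terrier would sit behind this call."""
    return []

def extract_answers(question: str, paragraphs: list[str]) -> list[str]:
    """Pull candidate answers out of the retrieved paragraphs (placeholder)."""
    return []

def answer(question: str) -> list[str]:
    q = preprocess(question)
    paragraphs = retrieve(q)  # if this stage returns nothing, extraction cannot recover
    return extract_answers(question, paragraphs)
```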
4
Measuring QA Performance
- Overall metrics: coverage and redundancy
- TREC provides answers:
  - Regular expressions for matching text
  - IDs of documents deemed helpful
- Ways of assessing correctness:
  - Lenient: the document text contains an answer
  - Strict: additionally, the document ID is listed by TREC
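A sketch of how the two metrics can be computed from TREC-style answer keys, assuming per-question answer regexes and supporting document IDs; coverage is the fraction of questions with at least one answer-bearing document retrieved, redundancy the mean number of answer-bearing documents per question.

```python
import re

def is_answer_bearing(doc_id, doc_text, patterns, supporting_ids, strict=True):
    """Lenient: the text matches an answer regex.
    Strict: additionally, the document ID is listed by TREC as supporting."""
    lenient_hit = any(re.search(p, doc_text) for p in patterns)
    if not strict:
        return lenient_hit
    return lenient_hit and doc_id in supporting_ids

def coverage_and_redundancy(runs, strict=True):
    """runs: one entry per question, (retrieved_docs, patterns, supporting_ids),
    where retrieved_docs is a list of (doc_id, doc_text) pairs."""
    hits_per_q = [
        sum(is_answer_bearing(doc_id, text, patterns, supp, strict)
            for doc_id, text in docs)
        for docs, patterns, supp in runs
    ]
    coverage = sum(1 for h in hits_per_q if h > 0) / len(hits_per_q)
    redundancy = sum(hits_per_q) / len(hits_per_q)
    return coverage, redundancy
```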
5
Assessing IR Performance
- Low initial system performance
- Analysed each component in the system
- Question pre-processing: correct
- Coverage and redundancy checked in the IR part
6
IR component issues
- Only 65% of questions generate any text to be passed on for answer extraction
- IR failings cap the performance of the entire system
- Need to balance the amount of information retrieved for AE
- Retrieving more text boosts coverage, but also introduces excess noise
7
Initial performance
- Lucene statistics, using strict matching at paragraph level:

  Question year   Coverage   Redundancy
  2004            63.6%      1.62
  2005            56.6%      1.15
  2006            56.8%      1.18
8
Potential performance inhibitors
- IR engine:
  - Is Lucene causing problems?
  - Profile some alternative engines
- Difficult questions:
  - Identify which questions cause problems
  - Examine these: what are the common factors? How can they be made approachable?
9
Information Retrieval Engines
- AnswerFinder uses a modular framework, including an IR plugin for Lucene
- Indri and Terrier are two public-domain IR engines, both adapted to perform TREC tasks
  - Indri: based on the Lemur toolkit and INQUERY engine
  - Terrier: developed in Glasgow for dealing with terabyte corpora
- Plugins are created for Indri and Terrier, which are then used as replacement IR components
- Automated testing of overall QA performance is done using multiple IR engines
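A sketch of what such a pluggable IR interface might look like; the class and method names are illustrative and not the actual AnswerFinder framework API.

```python
from abc import ABC, abstractmethod

class IRPlugin(ABC):
    """Common interface so IR engines can be swapped without touching
    the rest of the QA pipeline (names are illustrative)."""

    @abstractmethod
    def index(self, documents: dict[str, str]) -> None:
        """Index a collection of doc_id -> text."""

    @abstractmethod
    def search(self, query: str, n: int = 20) -> list[tuple[str, str]]:
        """Return the top-n (doc_id, paragraph_text) pairs for a query."""

class LucenePlugin(IRPlugin):
    def index(self, documents):
        ...  # delegate to a Lucene index

    def search(self, query, n=20):
        ...  # delegate to a Lucene searcher

def evaluate_engines(engines: dict[str, IRPlugin], questions, judge):
    """Run the same question set through each engine and score it,
    e.g. with the coverage/redundancy function sketched earlier."""
    return {name: judge(engine, questions) for name, engine in engines.items()}
```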
10
IR Engine performance

  Engine    Coverage   Redundancy
  Indri     55.2%      1.15
  Lucene    56.8%      1.18
  Terrier   49.3%      1.00

  (n=20; strict retrieval; TREC 2006 question set; paragraph-level texts)

- Performance does not seem to vary significantly between engines
- Tweaking a non-QA-specific IR engine is probably not a great avenue for performance increases
11
Identification of difficult questions
- Coverage of 56.8% indicates that for over 40% of questions, no answer-bearing documents are retrieved
- Some questions are difficult for all engines
- How to define a "difficult" question?
  - Calculate average redundancy (over multiple engines) for each question in a set
  - Questions with average redundancy below a certain threshold are deemed difficult
  - A threshold of zero is usually enough to find a sizeable dataset
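A sketch of that selection step, assuming per-engine redundancy scores have already been computed for each question; with the threshold at zero this keeps exactly the questions for which no engine retrieves any answer-bearing document.

```python
def difficult_questions(redundancy_by_engine: dict[str, dict[str, float]],
                        threshold: float = 0.0) -> set[str]:
    """redundancy_by_engine: engine name -> {question_id: redundancy}.
    A question is 'difficult' if its redundancy, averaged over all engines,
    does not exceed the threshold (zero by default)."""
    per_engine_scores = list(redundancy_by_engine.values())
    question_ids = set(per_engine_scores[0])
    difficult = set()
    for qid in question_ids:
        avg = sum(scores[qid] for scores in per_engine_scores) / len(per_engine_scores)
        if avg <= threshold:
            difficult.add(qid)
    return difficult
```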
12
Examining the answer data
- TREC answer data provides hints as to which documents an IR engine ideal for QA should retrieve:
  - Helpful document lists
  - Regular expressions of answers
- Some questions are marked by TREC as having no answer; these are excluded from the difficult question set
13
Making questions accessible
- Given the answer-bearing documents and answer text, it's easy to extract words from answer-bearing paragraphs
- For example, where the answer is "baby monitor":
  "The inventor of the baby monitor found this device almost accidentally"
- These surrounding words may improve coverage when used as query extensions
- How can we find out which extension words are most helpful?
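A sketch of that extraction step; the tokeniser and the tiny stop list are placeholders rather than the paper's actual resources.

```python
import re

STOP_WORDS = {"the", "a", "an", "of", "in", "is", "this", "to", "and"}  # a real list is much larger

def extension_candidates(paragraph: str, answer: str, question: str) -> set[str]:
    """Words from an answer-bearing paragraph that could serve as query
    extensions: everything except answer words, stop words and question words."""
    tokenize = lambda s: re.findall(r"[a-z]+", s.lower())
    answer_words = set(tokenize(answer))
    question_words = set(tokenize(question))
    return {w for w in tokenize(paragraph)
            if w not in answer_words
            and w not in question_words
            and w not in STOP_WORDS}

# For the "baby monitor" paragraph above this keeps surrounding words
# such as "inventor", "device" and "accidentally".
```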
14
Rebuilding the question set
- Only use answerable difficult questions
- For each question:
  - Add the original question to the question set as a control
  - Find target paragraphs in "correct" texts
  - Build a list of all words in those paragraphs, except answers, stop words, and question words
  - For each word: create a sub-question which consists of the original question, extended by that word
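Continuing the sketch above, the rebuilt question set might be generated like this; the record fields are illustrative, not the paper's data format.

```python
def rebuild_question_set(difficult_qs):
    """difficult_qs: iterable of dicts with keys 'id', 'text', 'answer' and
    'answer_paragraphs' (target paragraphs found in TREC-listed correct texts).
    Returns (question_id, extension_word_or_None, query_text) triples."""
    expanded = []
    for q in difficult_qs:
        # the unextended question acts as a control
        expanded.append((q["id"], None, q["text"]))
        candidates = set()
        for paragraph in q["answer_paragraphs"]:
            candidates |= extension_candidates(paragraph, q["answer"], q["text"])
        for word in sorted(candidates):
            # one sub-question per candidate extension word
            expanded.append((q["id"], word, f'{q["text"]} {word}'))
    return expanded
```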
15
Rebuilding the question set
- Example:
  - Single factoid question: Q + E
    "How tall is the Eiffel tower?" + height
  - Question in a series: Q + T + E
    "Where did he play in college?" + Warren Moon + NFL
16
Do data-driven extensions help?
- Base performance is at or below the difficult-question threshold (typically zero)
- Any extension that brings performance above zero is deemed a "helpful word"
- From the set of difficult questions, 75% were made approachable by using a data-driven extension
- If we can add these terms to questions accurately, the cap on answer extraction performance is raised
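A sketch of how helpful words could be identified from the rebuilt question set; the `search` and `redundancy` callables stand in for the IR run and the TREC-based scoring described earlier, and their signatures are assumptions.

```python
def find_helpful_words(expanded_question_set, search, redundancy, threshold=0.0):
    """expanded_question_set: (question_id, extension, query) triples built above.
    search(query)          -> documents retrieved for the expanded query
    redundancy(qid, docs)  -> number of answer-bearing documents among them
    An extension is 'helpful' if it lifts redundancy above the difficulty threshold."""
    helpful = {}
    for qid, extension, query in expanded_question_set:
        if extension is None:
            continue  # control run; a difficult question scores at or below the threshold
        docs = search(query)
        if redundancy(qid, docs) > threshold:
            helpful.setdefault(qid, set()).add(extension)
    return helpful
```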
17
Do data-driven extensions help?
- Question: Where did he play in college?
- Target: Warren Moon
- Base redundancy is zero
- Extensions:

  Extension   Redundancy
  Football    1
  NFL         2.5

- Adding some generic related words improves performance
18
Do data-driven extensions help?
- Question: Who was the nominal leader after the overthrow?
- Target: Pakistani government overthrown in 1999
- Base redundancy is zero
- Extensions:

  Extension   Redundancy
  Islamabad   2.5
  Pakistan    4
  Kashmir     4

- Location-based words can raise redundancy
19
Do data-driven extensions help?
- Question: Who have commanded the division?
- Target: 82nd Airborne Division
- Base redundancy is zero
- Question expects a list of answers
- Extensions:

  Extension   Redundancy
  Col         2
  Gen         3
  officer     1
  decimated   1

- The proper names for ranks help; this can be hinted at by "Who"
- Events related to the target may suggest words
- Possibly not a victorious unit!
20
Observations on helpful words
- Inclusion of pertainyms has a positive effect on performance, agreeing with more general observations in Greenwood (2004)
- Army ranks stood out strongly
  - Suggests the use of an always-include list
- Some related words help, though there is often no deterministic relationship between them and the questions
21
Measuring automated expansion
- Known helpful words are also the target set of words that any expansion method should aim for
- Once the target expansions are known, measuring automated expansion becomes easier
- No need to perform IR for every candidate expanded query (some runs over AQUAINT took up to 14 hours on a 4-core 2.3 GHz system)
- Rapid evaluation permits faster development of expansion techniques
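A sketch of how a candidate expansion method could be scored directly against the known helpful words, with no retrieval run at all; the function name and data layout are illustrative.

```python
def score_expansion_method(proposed: dict[str, set[str]],
                           helpful: dict[str, set[str]]) -> float:
    """proposed: question_id -> expansion terms suggested by some method
    helpful:  question_id -> known helpful words for that question
    Returns the fraction of proposed terms that are known to be helpful."""
    suggested = sum(len(terms) for terms in proposed.values())
    hits = sum(len(terms & helpful.get(qid, set()))
               for qid, terms in proposed.items())
    return hits / suggested if suggested else 0.0
```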
22
Relevance feedback in QA
- Simple RF works by using features of an initial retrieval to alter a query
- We picked the highest-frequency words in the initially retrieved texts (IRT) and used them to expand the query
- The size of the IRT set is denoted r
- Previous work (Monz 2003) looked at relevance feedback using a small range of values for r
- Different sizes of initial retrieval are used here, between r=5 and r=50
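A sketch of this kind of term-frequency relevance feedback; the stop list, r, and the number of feedback terms are free parameters here, not the exact configuration used in the experiments.

```python
from collections import Counter
import re

def tf_relevance_feedback(query: str, search, r: int = 5, k: int = 5,
                          stop_words: set[str] = frozenset()) -> str:
    """Retrieve the top-r texts for the original query, take the k most
    frequent non-stop-word terms from them, and append them to the query."""
    tokenize = lambda s: re.findall(r"[a-z]+", s.lower())
    initially_retrieved = search(query, n=r)   # list of paragraph texts
    query_words = set(tokenize(query))
    counts = Counter(w for text in initially_retrieved
                       for w in tokenize(text)
                       if w not in stop_words and w not in query_words)
    feedback_terms = [w for w, _ in counts.most_common(k)]
    return " ".join([query] + feedback_terms)
```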
23
Rapidly evaluating RF
- Three metrics show how a query expansion technique performs:
  - Percentage of all helpful words found in IRT: shows the intersection between words in initially retrieved texts and the helpful words
  - Percentage of texts containing helpful words: if this is low, the IR system does not retrieve many documents containing helpful words, given the initial query
  - Percentage of expansion terms that are helpful: a key statistic; the higher this is, the better performance is likely to be
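A sketch of those three metrics for a single question, assuming the IRT, the known helpful words and the RF-chosen terms are already available; names and the tokeniser argument are illustrative.

```python
def rf_diagnostics(irt_texts: list[str], helpful_words: set[str],
                   rf_terms: set[str], tokenize) -> dict[str, float]:
    """irt_texts: initially retrieved texts for one question
    helpful_words: known helpful words for that question
    rf_terms: expansion terms the RF method actually chose"""
    irt_vocab = set().union(*(set(tokenize(t)) for t in irt_texts)) if irt_texts else set()
    texts_with_helpful = sum(1 for t in irt_texts
                             if set(tokenize(t)) & helpful_words)
    return {
        "helpful_words_found_in_irt":
            len(irt_vocab & helpful_words) / len(helpful_words) if helpful_words else 0.0,
        "irt_containing_helpful_words":
            texts_with_helpful / len(irt_texts) if irt_texts else 0.0,
        "rf_terms_that_are_helpful":
            len(rf_terms & helpful_words) / len(rf_terms) if rf_terms else 0.0,
    }
```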
24
Relevance feedback predictions
- RF selects some words to be added to a query, based on an initial search
- Less than 35% of the documents used in relevance feedback actually contain helpful words
- Picking helpful words out of initial retrievals is not easy when there is so much noise
- Given the small probability of adding helpful words, relevance feedback is unlikely to make difficult questions accessible
- Adding noise to the query will drown out otherwise helpful documents for non-difficult questions

                                  2004    2005    2006
  Helpful words found in IRT      4.2%    18.6%   8.9%
  IRT containing helpful words    10.0%   33.3%   34.3%
  RF words that are "helpful"     1.25%   1.67%   5.71%
25
Relevance feedback results
- Only 1.25% - 5.71% of the words that relevance feedback chose were actually helpful; the rest only add noise
- Performance using TF-based relevance feedback is consistently lower than the baseline
- The hypothesis of poor performance is supported

  Coverage at n docs   r=5     r=50    Baseline
  n=10                 34.7%   28.4%   43.4%
  n=20                 44.4%   39.8%   55.3%
26
Conclusions
- IR engine performance for QA does not vary wildly
- Identifying helpful words provides a tool for assessing query expansion methods
- TF-based relevance feedback cannot be generally effective in IR for QA
- Linguistic relationships exist that can help in query expansion
27
Any questions?