Slide 1: Question Answering Passage Retrieval Using Dependency Parsing
Hang Cui, Renxu Sun, Keya Li, Min-Yen Kan, Tat-Seng Chua
Department of Computer Science, National University of Singapore
August 17, 2005
Slide 2: Passage Retrieval in Question Answering
QA system pipeline: Document Retrieval → Passage Retrieval → Answer Extraction
Passage retrieval serves two purposes:
– Narrows down the search scope
– Can answer questions with more context
Existing methods are lexical density based: they score passages by the distance between matched question words.
Slide 3: Density-Based Passage Retrieval Method
However, density-based methods can err. Consider the question: "What percent of the nation's cheese does Wisconsin produce?"
– Incorrect: "… the number of consumers who mention California when asked about cheese has risen by 14 percent, while the number specifying Wisconsin has dropped 16 percent."
– Incorrect: "The wry 'It's the Cheese' ads, which attribute California's allure to its cheese, and indulge in an occasional dig at the Wisconsin stuff … sales of cheese in California grew three times as fast as sales in the nation as a whole, 3.7 percent compared to 1.2 percent …"
– Incorrect: "Awareness of the Real California Cheese logo, which appears on about 95 percent of California cheeses, has also made strides."
– Correct: "In Wisconsin, where farmers produce roughly 28 percent of the nation's cheese, the outrage is palpable."
All four passages densely match the question words; what differs is the relationships between the matched words.
Slide 4: Our Solution
Examine the relationships between words:
– Dependency relations
Exact matching of relations, as used for answer extraction, has low recall because the same relations are often phrased differently.
Instead, we use fuzzy matching of dependency relations:
– Statistical similarity of relations
Slide 5: Measuring Sentence Similarity
How do we compute Sim(Sent1, Sent2)?
– Lexical matching identifies the matched words between the two sentences
– The relation-based similarity measures how similar the relations between matched words are, built up from the similarity of individual relations
The two signals are combined into the overall sentence similarity (a hedged formula follows).
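The slide shows the combination only schematically (lexical matching plus relation similarity). A hedged reading, with an interpolation weight α that is our assumption rather than something stated on the slide:

```latex
\mathrm{Sim}(S_1, S_2) = \alpha \,\mathrm{Sim}_{lex}(S_1, S_2) + (1 - \alpha)\,\mathrm{Sim}_{rel}(S_1, S_2)
```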
Slide 6: Outline
– Extracting and Pairing Relation Paths
– Measuring Path Match Scores
– Learning Relation Mapping Scores
– Evaluations
– Conclusions
Slide 8: What Dependency Parsing Is Like
We use Minipar (Lin, 1998) for dependency parsing. It produces a dependency tree:
– Nodes: words/chunks in the sentence
– Edges (ignoring the direction): labeled by relation types
Example question: "What percent of the nation's cheese does Wisconsin produce?" (a representation sketch follows)
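The parse tree itself does not survive as text, so as an illustration here is a hand-coded fragment of a plausible parse for the example question, represented the way the slides use it: as undirected, relation-labeled edges. The specific relation labels are assumptions; actual Minipar output may differ.

```python
from collections import defaultdict

# Hand-coded, plausible dependency edges for the example question.
# Each triple is (head, dependent, relation label).
edges = [
    ("produce", "Wisconsin", "subj"),  # Wisconsin is the subject of produce
    ("produce", "percent", "obj"),     # "what percent" is the object
    ("percent", "what", "det"),
    ("percent", "cheese", "of"),       # percent of ... cheese
    ("cheese", "nation", "gen"),       # the nation's cheese
]

# Adjacency map over undirected edges, keeping the relation label.
tree = defaultdict(list)
for head, dep, rel in edges:
    tree[head].append((dep, rel))
    tree[dep].append((head, rel))
```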
Slide 9: Extracting Relation Paths
Relation path: the vector of relations along the tree path between two nodes (e.g., the path linking "Wisconsin", "percent", and "cheese" through "produce" in the example tree).
Two constraints on relation paths:
1. Path length: fewer than 7 relations
2. Ignore paths between two words that fall within the same chunk, e.g. "New York"
A sketch of the extraction follows.
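Continuing the sketch above, relation-path extraction reduces to a breadth-first search over the undirected tree; the length constraint from the slide caps a path at six relations. This is our illustration, not code from the paper.

```python
from collections import deque

def relation_path(tree, src, dst, max_len=6):
    # BFS from src to dst, collecting relation labels along the way.
    queue = deque([(src, [])])
    seen = {src}
    while queue:
        node, path = queue.popleft()
        if node == dst:
            return path
        if len(path) >= max_len:
            continue  # enforce the "fewer than 7 relations" constraint
        for neighbor, rel in tree[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, path + [rel]))
    return None  # no path within the length bound

print(relation_path(tree, "Wisconsin", "cheese"))
# -> ['subj', 'obj', 'of'] for the hand-coded parse above
```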
Slide 10: Paired Paths from Question and Answer
Question: "What percent of the nation's cheese does Wisconsin produce?"
Answer sentence: "In Wisconsin, where farmers produce roughly 28 percent of the nation's cheese, the outrage is palpable."
Relation paths from the question are paired with relation paths from the sentence, and the path similarities are summed:
Sim_Rel(Q, Sent) = Σ_{i,j} Sim(P_i(Q), P_j(Sent))
A sketch of the pairing follows.
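A minimal sketch of this summation, under the assumption (following slide 5's matched-words framing) that paths are paired when they connect the same pair of matched words; the data layout and the `path_match_score` argument are ours for illustration.

```python
def sim_rel(q_paths, s_paths, path_match_score):
    """Relation similarity between a question and a candidate sentence.

    q_paths, s_paths: dicts mapping a (word, word) pair of matched words
    to the relation path between those words in the respective parse.
    path_match_score: scores one question path against one sentence path.
    """
    score = 0.0
    for word_pair, q_path in q_paths.items():
        s_path = s_paths.get(word_pair)
        if s_path is not None:  # both parses connect this word pair
            score += path_match_score(q_path, s_path)
    return score
```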
Slide 12: Measuring Path Match Degree
We employ a variation of IBM Translation Model 1:
– The path match degree (similarity) is modeled as a translation probability: MatchScore(P_Q, P_S) → Prob(P_S | P_Q)
– Relations play the role of words
Why IBM Model 1?
– No "word order": a path is a bag of undirected relations
– No need to estimate the "target sentence length": relation paths are fully determined by the parse tree
Slide 13: Calculating the Translation Probability (Similarity) of Paths
Given two relation paths, one from the question and one from a candidate sentence:
– Consider only the most probable alignment (i.e., find the most probable mapped relations)
– Take the logarithm and ignore the constants (for a given question path, its length is constant across all candidate sentences)
The MatchScores of the paired paths are then combined to give the sentence's relevance to the question. A reconstruction of the derivation follows.
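The slide's formula does not survive as text. A hedged reconstruction from the stated steps, using the standard IBM Model 1 form with translation table t (the notation is ours):

```latex
\begin{aligned}
\mathrm{Prob}(P_S \mid P_Q)
  &= \frac{\epsilon}{(|P_Q|+1)^{|P_S|}}
     \prod_{j=1}^{|P_S|} \sum_{i=0}^{|P_Q|}
     t\bigl(\mathrm{Rel}_j(S) \mid \mathrm{Rel}_i(Q)\bigr) \\
  &\approx \frac{\epsilon}{(|P_Q|+1)^{|P_S|}}
     \prod_{j=1}^{|P_S|} \max_{i}\,
     t\bigl(\mathrm{Rel}_j(S) \mid \mathrm{Rel}_i(Q)\bigr)
     \quad \text{(most probable alignment)}
\end{aligned}
```

Taking logarithms and dropping the terms that are constant for a fixed question path leaves, up to those constants:

```latex
\mathrm{MatchScore}(P_Q, P_S) \;=\; \sum_{j=1}^{|P_S|} \log \max_{i}\, t\bigl(\mathrm{Rel}_j(S) \mid \mathrm{Rel}_i(Q)\bigr)
```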
Slide 15: Training and Testing
Training: Q–A pairs → paired relation paths → relation mapping model (relation mapping scores)
Testing: Sim(Q, Sent) = ? rests on Prob(P_Sent | P_Q) = ?, which rests on P(Rel(Sent) | Rel(Q)) = ?
– That is, sentence similarity builds on the similarity between relation vectors, which builds on the similarity between individual relations
Two ways to learn the relation mapping scores:
1. Mutual information (MI) based
2. Expectation Maximization (EM) based
Slide 16: Approach 1: MI-Based
– Measures bipartite co-occurrences of relations in the training path pairs
– Accounts for path length (long paths are penalized)
– Uses frequencies to approximate mutual information
A sketch follows.
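One plausible reading of these bullets as code, a sketch only: every cross-pairing of relations in a training path pair counts as a bipartite co-occurrence, damped by the product of the path lengths, and the mapping score is a pointwise-MI-style ratio of frequencies. The exact weighting and smoothing used in the paper may differ.

```python
import math
from collections import Counter

def mi_scores(training_pairs):
    """training_pairs: list of (question_path, answer_path), each a list
    of relation labels. Returns a mapping score per relation pair."""
    cooc, q_freq, s_freq = Counter(), Counter(), Counter()
    for q_path, s_path in training_pairs:
        weight = 1.0 / (len(q_path) * len(s_path))  # penalize long paths
        for rq in q_path:
            q_freq[rq] += 1
            for rs in s_path:
                cooc[(rq, rs)] += weight
        for rs in s_path:
            s_freq[rs] += 1
    # Frequencies approximate the mutual information of each pair.
    return {pair: math.log(c / (q_freq[pair[0]] * s_freq[pair[1]]))
            for pair, c in cooc.items()}
```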
Slide 17: Approach 2: EM-Based
Employ the training method of IBM Model 1:
– Relation mapping scores play the role of word translation probabilities
– Training is carried out with GIZA
– The precision of the relation translation probabilities is boosted iteratively
Initialization: assign 1 to identical relation pairs and a small constant otherwise. A self-contained sketch of the EM loop follows.
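GIZA is an external toolkit, so as a self-contained illustration here is a minimal IBM Model 1 EM loop over relation-path pairs, initialized as the slide describes. This is our sketch of the technique, not the GIZA implementation.

```python
from collections import defaultdict

def train_model1(path_pairs, iterations=10, small=0.01):
    # t[(rs, rq)] ~ P(answer relation rs | question relation rq)
    t = defaultdict(float)
    for q_path, s_path in path_pairs:
        for rq in q_path:
            for rs in s_path:
                t[(rs, rq)] = 1.0 if rs == rq else small  # initialization
    for _ in range(iterations):
        count = defaultdict(float)  # expected co-occurrence counts
        total = defaultdict(float)  # normalizer per question relation
        for q_path, s_path in path_pairs:
            for rs in s_path:
                # E-step: spread rs fractionally over its possible sources.
                norm = sum(t[(rs, rq)] for rq in q_path)
                for rq in q_path:
                    frac = t[(rs, rq)] / norm
                    count[(rs, rq)] += frac
                    total[rq] += frac
        # M-step: re-normalize the expected counts into probabilities.
        for (rs, rq) in count:
            t[(rs, rq)] = count[(rs, rq)] / total[rq]
    return t

t = train_model1([(["subj", "obj"], ["subj", "mod", "obj"]),
                  (["subj", "of"], ["subj", "obj", "of"])])
```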
Slide 18: Outline
– Extracting and Pairing Relation Paths
– Measuring Path Match Scores
– Learning Relation Mapping Scores
– Evaluations
  – Can relation matching help?
  – Can fuzzy matching perform better than exact matching?
  – Do long questions benefit more?
  – Can relation matching work on top of query expansion?
– Conclusions
Slide 19: Evaluation Setup
Training data
– 3k corresponding path pairs drawn from 10k QA pairs (TREC-8 and TREC-9)
Test data
– 324 factoid questions from the TREC-12 QA task
– Passage retrieval runs over the top 200 relevant documents provided by TREC
Slide 20: Comparison Systems
MITRE (baseline)
– Stemmed word overlap
– The baseline in previous work on passage retrieval evaluation
SiteQ
– A top-performing density-based method, using a 3-sentence window
NUS
– Similar to SiteQ, but uses single sentences as passages
Strict Matching of Relations
– Simulates the strict matching used in previous work on answer selection, counting the number of exactly matched paths
Relation matching is applied on top of MITRE and NUS.
Slide 21: Evaluation Metrics
Mean reciprocal rank (MRR)
– The mean, over questions, of the reciprocal rank at which the correct answer first appears in the returned list
– Computed over the top 20 returned passages
Percentage of questions with incorrect answers (no correct answer in the top 20)
Precision at the top one passage
A sketch of the three metrics follows.
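A minimal sketch of the three metrics; the data representation (a ranked list of passages per question plus a `correct` predicate) is our assumption for illustration.

```python
def evaluate(rankings, correct, cutoff=20):
    """rankings: one ranked list of passages per question.
    correct(p): whether passage p contains the answer."""
    rr, wrong, top1 = [], 0, 0
    for passages in rankings:
        top = passages[:cutoff]
        hit_ranks = [i for i, p in enumerate(top, start=1) if correct(p)]
        if hit_ranks:
            rr.append(1.0 / hit_ranks[0])  # reciprocal rank of first hit
        else:
            rr.append(0.0)
            wrong += 1                     # counted as "incorrect"
        top1 += 1 if top and correct(top[0]) else 0
    n = len(rankings)
    return {"MRR": sum(rr) / n,
            "% incorrect": 100.0 * wrong / n,
            "P@1": top1 / n}
```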
Slide 22: Performance Evaluation
All improvements are statistically significant (p < 0.001).

System               MRR     % MRR impr. over MITRE / SiteQ / NUS   % Incorrect   P@1
MITRE                0.2000  N/A / N/A / N/A                        45.68%        0.1235
SiteQ                0.2765  +38.26 / N/A / N/A                     37.65%        0.1975
NUS                  0.2677  +33.88 / N/A / N/A                     33.02%        0.1759
Rel_Strict (MITRE)   0.2990  +49.50 / +8.14 / +11.69                41.96%        0.2253
Rel_Strict (NUS)     0.3625  +81.25 / +31.10 / +35.41               32.41%        0.2716
Rel_MI (MITRE)       0.4161  +108.09 / +50.50 / +55.43              29.63%        0.3364
Rel_EM (MITRE)       0.4218  +110.94 / +52.57 / +57.56              29.32%        0.3457
Rel_MI (NUS)         0.4756  +137.85 / +72.03 / +77.66              24.69%        0.3889
Rel_EM (NUS)         0.4761  +138.08 / +72.19 / +77.83              24.07%

Fuzzy matching outperforms strict matching significantly.
MI and EM do not make much difference given our training data:
– EM needs more training data
– MI is more susceptible to noise, so it may not scale well
Slide 23: Performance Variation with Question Length
Long questions, which yield more paired paths, tend to improve more.
– Question length is approximated by the number of non-trivial question terms
Slide 24: Error Analysis
Mismatch of question terms
– e.g., "In which city is the River Seine?"
– Remedy: introduce question analysis
Paraphrasing between the question and the answer sentence
– e.g., "write the book" → "be the author of the book"
– Most current techniques fail to handle this
– Remedy: find paraphrases via dependency parsing (Lin and Pantel)
Slide 25: Performance on Top of Query Expansion
On top of query expansion, fuzzy relation matching brings a further ~50% improvement.
However:
– Query expansion does not help much on top of a fuzzy relation matching system
– Expansion terms do not help in pairing relation paths

System             MRR (% impr. over baseline)   % impr. over NUS+QE   % Incorrect   P@1
NUS (baseline)     0.2677                        N/A                   33.02%        0.1759
NUS + QE           0.3293 (+23.00%)              N/A                   28.40%        0.2315
Rel_MI (NUS+QE)    0.4924 (+83.94%)              +49.54%
Rel_EM (NUS+QE)    0.4935 (+84.35%)              +49.86%               22.22%        0.4074
For reference, Rel_EM (NUS) without QE: MRR 0.4761
Slide 27: Conclusions
We proposed a novel fuzzy relation matching method for factoid QA passage retrieval.
– It brings a dramatic 70%+ improvement over state-of-the-art systems
– It brings a further 50% improvement on top of query expansion
Future QA systems should bring in relations between words for better performance, and query expansion should be integrated with relation matching seamlessly.
Slide 28: Q & A
Thanks!