Semantic Retrieval for Question Answering
Student Research Symposium, Language Technologies Institute
Matthew W. Bilotti
mbilotti@cs.cmu.edu
September 23, 2005
Outline
What is Question Answering?
What is the cause of wrong answers?
What is Semantic Retrieval, and can it help?
What have other teams tried?
How is JAVELIN using Semantic Retrieval?
How can we evaluate the impact of Semantic Retrieval on Question Answering systems?
Where can we go from here?
What is Question Answering?
A process that finds succinct answers to questions phrased in natural language.
Q: “Where is Carnegie Mellon?”  A: “Pittsburgh, Pennsylvania, USA”
Q: “Who is Jared Cohon?”  A: “... is the current President of Carnegie Mellon University.”
Q: “When was Herbert Simon born?”  A: “15 June 1916”
[Diagram: Question Answering system, Input Question to Output Answers. Example answers via Google, http://www.google.com]
Classic “Pipelined” QA Architecture
A sequence of discrete modules cascaded such that the output of the previous module is the input to the next module.
[Pipeline diagram: Input Question → Question Analysis → Document Retrieval → Answer Extraction → Post-Processing → Output Answers]
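As a concrete illustration, the cascade can be sketched as simple function composition. This is a minimal sketch only; the module bodies below are hypothetical placeholders, not JAVELIN's actual components.

```python
# Minimal sketch of a pipelined QA system: each module's output feeds the
# next module. Module bodies are illustrative placeholders only.

def question_analysis(question):
    # Produce keywords, alternations, and an expected answer type.
    return {"keywords": ["Andy", "Warhol", "born"],
            "alternations": {"Andy": ["Andrew"]},
            "answer_type": "Location"}

def document_retrieval(analysis):
    # Formulate an IR query from the analysis and fetch candidate documents.
    return ["Andy Warhol was born in Pittsburgh, Pennsylvania."]

def answer_extraction(analysis, documents):
    # Pull candidate answers of the expected type out of the documents.
    return ["Pittsburgh, Pennsylvania"]

def post_processing(candidates):
    # Merge, rank, and select the final answer.
    return candidates[0]

def answer(question):
    analysis = question_analysis(question)
    documents = document_retrieval(analysis)
    candidates = answer_extraction(analysis, documents)
    return post_processing(candidates)

print(answer("Where was Andy Warhol born?"))  # Pittsburgh, Pennsylvania
```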
Classic “Pipelined” QA Architecture
[Pipeline diagram]
Example input question: “Where was Andy Warhol born?”
Classic “Pipelined” QA Architecture: Question Analysis
“Where was Andy Warhol born?”
Discover keywords in the question, generate alternations, and determine the answer type.
Keywords: Andy (Andrew), Warhol, born
Answer type: Location (City)
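A rough sketch of this step, assuming a toy stopword list, a hand-written alternation table, and simple wh-word rules; a real system would use richer lexical resources and classifiers.

```python
# Toy question analysis: keywords, alternations, and expected answer type.
STOPWORDS = {"where", "was", "is", "who", "when", "the", "does"}
ALTERNATIONS = {"andy": ["Andrew"]}          # hand-written alternation table
ANSWER_TYPE_RULES = [("where", "Location"),  # wh-word -> expected answer type
                     ("who", "Person"),
                     ("when", "Date")]

def analyze(question):
    tokens = question.rstrip("?").split()
    keywords = [t for t in tokens if t.lower() not in STOPWORDS]
    alternations = {k: ALTERNATIONS.get(k.lower(), []) for k in keywords}
    answer_type = next((atype for wh, atype in ANSWER_TYPE_RULES
                        if question.lower().startswith(wh)), "Unknown")
    return keywords, alternations, answer_type

print(analyze("Where was Andy Warhol born?"))
# (['Andy', 'Warhol', 'born'], {'Andy': ['Andrew'], 'Warhol': [], 'born': []}, 'Location')
```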
Classic “Pipelined” QA Architecture: Document Retrieval
Formulate IR queries using the keywords, and retrieve answer-bearing documents.
Query: (Andy OR Andrew) AND Warhol AND born
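The query above can be assembled mechanically from the keywords and alternations; a sketch follows (the query syntax is illustrative, not tied to any particular IR engine).

```python
# Build a Boolean query string: OR together each keyword with its
# alternations, then AND the groups together. Syntax is illustrative only.
def boolean_query(keywords, alternations):
    groups = []
    for kw in keywords:
        alts = [kw] + alternations.get(kw, [])
        group = " OR ".join(alts)
        groups.append(f"( {group} )" if len(alts) > 1 else group)
    return " AND ".join(groups)

print(boolean_query(["Andy", "Warhol", "born"], {"Andy": ["Andrew"]}))
# ( Andy OR Andrew ) AND Warhol AND born
```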
Classic “Pipelined” QA Architecture: Answer Extraction
Extract answers of the expected type from retrieved documents.
“Andy Warhol was born on August 6, 1928 in Pittsburgh and died February 22, 1987 in New York.”
“Andy Warhol was born to Slovak immigrants as Andrew Warhola on August 6, 1928, on 73 Orr Street in Soho, Pittsburgh, Pennsylvania.”
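A sketch of type-constrained extraction, with a tiny hand-built gazetteer standing in for a real named-entity recognizer.

```python
# Toy answer extraction: keep only candidates whose type matches the
# expected answer type. A gazetteer stands in for a real NE recognizer.
GAZETTEER = {"Pittsburgh": "Location", "New York": "Location",
             "Pennsylvania": "Location", "Andy Warhol": "Person"}

def extract(documents, answer_type):
    candidates = []
    for doc in documents:
        for entity, etype in GAZETTEER.items():
            if etype == answer_type and entity in doc:
                candidates.append(entity)
    return candidates

docs = ["Andy Warhol was born on August 6, 1928 in Pittsburgh and died "
        "February 22, 1987 in New York."]
print(extract(docs, "Location"))   # ['Pittsburgh', 'New York']
```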
Classic “Pipelined” QA Architecture: Post-Processing
Answer cleanup and merging, consistency or constraint checking, answer selection and presentation.
Candidate answers: Pittsburgh; 73 Orr Street in Soho, Pittsburgh, Pennsylvania; New York
After merging and ranking: 1. “Pittsburgh, Pennsylvania”  2. “New York”
Select appropriate granularity: “Pittsburgh, Pennsylvania”
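One way to picture the merge/rank/select step is the toy heuristic below: raw candidates are merged onto canonical city/state forms by shared city name, ranked by how many mentions support each form, and the top form is reported. The canonical forms and rules here are invented for illustration.

```python
# Toy post-processing: merge raw candidates onto canonical city/state forms
# that share the same city name, rank forms by how many candidates support
# them, and report the top form. Rules and forms are purely illustrative.
from collections import defaultdict

def merge_and_rank(candidates, canonical_forms):
    support = defaultdict(int)
    for cand in candidates:
        for canon in canonical_forms:
            if canon.split(",")[0] in cand:   # candidate mentions this city?
                support[canon] += 1
    return sorted(support, key=support.get, reverse=True)

raw = ["Pittsburgh", "73 Orr Street in Soho, Pittsburgh, Pennsylvania", "New York"]
ranked = merge_and_rank(raw, ["Pittsburgh, Pennsylvania", "New York"])
print(ranked)   # ['Pittsburgh, Pennsylvania', 'New York']
```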
What is the cause of wrong answers?
A pipelined QA system is only as good as its weakest module.
Poor retrieval and/or query formulation can result in low ranks for answer-bearing documents, or in no answer-bearing documents being retrieved at all.
[Pipeline diagram: Document Retrieval marked as the failure point]
What is Semantic Retrieval, and can it help?
Semantic Retrieval is a broad term for document retrieval techniques that make use of semantic information and language understanding.
Hypothesis: Use of Semantic Retrieval can improve performance by retrieving more relevant documents and ranking them more highly.
What have other teams tried?
LCC/SMU approach
–Use an existing IR system as a black box; rich query expansion
CL Research approach
–Process top documents retrieved from an IR engine, extracting semantic relation triples; index and retrieve using an RDBMS
IBM (Prager) Predictive Annotation
–Store answer types (QA-Tokens) in the IR system’s index, and retrieve on them
LCC/SMU Approach
Syntactic relationships (controlled synonymy), morphological and derivational expansions for Boolean keywords
Statistical passage extraction finds windows around keywords
Semantic constraint check for filtering (unification)
NE recognition and pattern matching as a third pass for answer extraction
Ad hoc relevance scoring: term proximity, occurrence of the answer in an apposition, etc.
[Diagram: Keywords and Alternations → Boolean query → IR → Documents → Passage Extraction → Passages → Constraint Checking (using Extended WordNet) → Named Entity Extraction → Answer Candidates]
Moldovan, et al. Performance Issues and Error Analysis in an Open-Domain QA System. ACM TOIS, vol. 21, no. 2, 2003.
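The passage extraction step can be pictured with the window-and-coverage sketch below; LCC's actual relevance scoring combines many more ad hoc features.

```python
# Toy passage extraction: slide a fixed-size window over the document and
# score each window by how many distinct query keywords it covers.
# This is only a stand-in for LCC's richer ad hoc scoring.
def extract_passages(document, keywords, window=10):
    tokens = document.split()
    scored = []
    for start in range(max(1, len(tokens) - window + 1)):
        window_tokens = tokens[start:start + window]
        normalized = {t.lower().strip(".,") for t in window_tokens}
        score = sum(1 for k in keywords if k.lower() in normalized)
        if score:
            scored.append((score, " ".join(window_tokens)))
    scored.sort(key=lambda pair: -pair[0])
    return scored

doc = ("Andy Warhol was born to Slovak immigrants as Andrew Warhola "
       "on August 6, 1928, in Pittsburgh, Pennsylvania.")
print(extract_passages(doc, ["Andrew", "Warhol", "born"])[0])
```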
Litkowski/CL Research Approach
Relation triples: discourse entity (NP) + semantic role or relation + governing word; essentially similar to our predicates
Unranked XPath querying against an RDBMS
[Diagram: 10-20 top PRISE documents → sentences → semantic relationship triples, with entity mention canonicalization → RDBMS → XML/XPath querying. Example: “The quick brown fox jumped over the lazy dog” yields triples relating “quick brown fox” and “lazy dog” to the governing word “jumped”]
Litkowski, K.C. Question Answering Using XML-Tagged Documents. TREC 2003.
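A sketch of the triple-store-and-XPath idea over XML-tagged sentences; the element and attribute names here are invented for illustration and are not Litkowski's actual schema.

```python
# Sketch of XPath-style querying over semantic relation triples stored as
# XML. Element and attribute names are hypothetical, not CL Research's schema.
import xml.etree.ElementTree as ET

xml_doc = """
<sentences>
  <sentence id="s1">
    <triple entity="the quick brown fox" relation="subject" governor="jumped"/>
    <triple entity="the lazy dog" relation="over" governor="jumped"/>
  </sentence>
</sentences>
"""

root = ET.fromstring(xml_doc)
# Find every discourse entity governed by the verb "jumped".
for triple in root.findall(".//triple[@governor='jumped']"):
    print(triple.get("relation"), "->", triple.get("entity"))
```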
Predictive Annotation
Textract identifies candidate answers at indexing time
QA-Tokens are indexed as text items along with the actual document tokens
Passage retrieval, with a simple bag-of-words combo-match (heuristic) ranking formula
[Diagram: Docs → Textract (IE/NLP), guided by an answer type taxonomy → QA-Tokens → IR corpus. Example: “Gasoline cost $0.78 per gallon in 1999.” is indexed as “Gasoline cost $0.78 MONEY$ per gallon VOLUME$ in 1999 YEAR$.”]
Prager, et al. Question-Answering by Predictive Annotation. SIGIR 2000.
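A sketch of how QA-Tokens might be spliced into the token stream at indexing time, with a few regex annotators standing in for Textract.

```python
# Toy predictive annotation: insert QA-Tokens (answer-type markers) next to
# the surface tokens they describe, so the IR engine can index both.
# These regexes stand in for Textract's IE/NLP annotators.
import re

ANNOTATORS = [
    (re.compile(r"\$\d+(?:\.\d+)?"), "MONEY$"),
    (re.compile(r"\b(?:19|20)\d{2}\b"), "YEAR$"),
    (re.compile(r"\bgallons?\b", re.IGNORECASE), "VOLUME$"),
]

def annotate(text):
    tokens = []
    for token in text.split():
        tokens.append(token)
        for pattern, qa_token in ANNOTATORS:
            if pattern.search(token):
                tokens.append(qa_token)   # index QA-Token alongside the word
    return " ".join(tokens)

print(annotate("Gasoline cost $0.78 per gallon in 1999."))
# Gasoline cost $0.78 MONEY$ per gallon VOLUME$ in 1999. YEAR$
```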
How is JAVELIN using Semantic Retrieval?
Annotate the corpus with semantic content (e.g. predicates), and index this content
At runtime, perform similar analysis on input questions to get predicate templates
Maximal recall of documents that contain matching predicate instances
Constraint checking at the answer extraction stage to filter out false positives and rank the best matches
Nyberg, et al. Extending the JAVELIN QA System with Domain Semantics. AAAI 2005.
Annotating and Indexing the Corpus
[Diagram: Text → Annotation Framework → predicate-argument structure, e.g. loves(ARG0: John, ARG1: Mary) → IR Indexer and RDBMS; the actual index content includes the predicate and its ARG0/ARG1 fillers]
Nyberg, et al. Extending the JAVELIN QA System with Domain Semantics. AAAI 2005.
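A sketch of the indexing side, storing extracted predicate-argument structures in a relational table so they can be matched later; sqlite3 and this schema are stand-ins for illustration, not JAVELIN's actual back end.

```python
# Sketch of indexing predicate-argument structures in an RDBMS.
# sqlite3 and this schema are illustrative stand-ins only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE predicates
                (doc_id TEXT, predicate TEXT, arg0 TEXT, arg1 TEXT)""")

# Output of the annotation framework for a few sentences.
rows = [("d1", "loves", "John", "Mary"),
        ("d2", "loves", "Frank", "Alice"),
        ("d2", "dislikes", "John", "Bob")]
conn.executemany("INSERT INTO predicates VALUES (?, ?, ?, ?)", rows)
conn.commit()

for row in conn.execute("SELECT * FROM predicates WHERE predicate = 'loves'"):
    print(row)
```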
Retrieval on Predicate-Argument Structure
Input question: “Who does John love?”
[Pipeline diagram]
Nyberg, et al. Extending the JAVELIN QA System with Domain Semantics. AAAI 2005.
Retrieval on Predicate-Argument Structure
Question analysis produces a predicate-argument template: loves(ARG0: John, ARG1: ?x)
[Pipeline diagram: Question Analysis highlighted]
Nyberg, et al. Extending the JAVELIN QA System with Domain Semantics. AAAI 2005.
Retrieval on Predicate-Argument Structure
What the IR engine sees: the template loves(ARG0: John, ARG1: ?x)
Some retrieved documents:
“Frank loves Alice. John dislikes Bob.”
“John loves Mary.”
[Pipeline diagram: Document Retrieval highlighted]
Nyberg, et al. Extending the JAVELIN QA System with Domain Semantics. AAAI 2005.
Retrieval on Predicate-Argument Structure
Constraint checking against the predicate instances stored in the RDBMS filters the retrieved documents:
loves(ARG0: Frank, ARG1: Alice): no match
dislikes(ARG0: John, ARG1: Bob): no match
loves(ARG0: John, ARG1: Mary): matching predicate instance, yielding the answer candidate “Mary”
[Pipeline diagram: Answer Extraction highlighted]
Nyberg, et al. Extending the JAVELIN QA System with Domain Semantics. AAAI 2005.
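Continuing the indexing sketch above, constraint checking can be pictured as a constrained lookup in which the unbound argument (?x) is returned as the answer candidate; the schema and data are again illustrative only.

```python
# Sketch of constraint checking against stored predicate instances: the
# template loves(ARG0: John, ARG1: ?x) binds ?x from matching rows only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE predicates (doc_id TEXT, predicate TEXT, arg0 TEXT, arg1 TEXT)")
conn.executemany("INSERT INTO predicates VALUES (?, ?, ?, ?)",
                 [("d1", "loves", "John", "Mary"),      # matches the template
                  ("d2", "loves", "Frank", "Alice"),    # wrong ARG0: filtered
                  ("d2", "dislikes", "John", "Bob")])   # wrong predicate: filtered

template = {"predicate": "loves", "arg0": "John"}       # ARG1 is the unbound ?x
answers = conn.execute(
    "SELECT arg1 FROM predicates WHERE predicate = ? AND arg0 = ?",
    (template["predicate"], template["arg0"])).fetchall()
print([a[0] for a in answers])   # ['Mary']
```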
How can we evaluate the impact of Semantic Retrieval on QA systems?
Performance can be indirectly evaluated by measuring the performance of the end-to-end QA system while varying the document retrieval strategy employed, in one of two ways:
–NIST-style comparative evaluation
–Absolute evaluation against new test sets
Direct analysis of document retrieval performance
–Requires an assumption such as, “maximal recall of relevant documents translates to best end-to-end system performance”
NIST-style Comparative Evaluation
Answer keys developed by pooling
–All answers gathered by all systems are checked by a human to develop the answer key
–Voorhees showed that the comparative orderings between systems are stable regardless of the exhaustiveness of judgments
–Answer keys from TREC evaluations are not suitable for post-hoc evaluation (nor were they intended to be), since they may penalize a new strategy for discovering good answers not in the original pool
Manual scoring
–Judging system output involves semantics (Voorhees 2003)
–Abstracts away from differences in vocabulary or syntax, and robustly handles paraphrase
This is the same methodology used in the Definition QA evaluation in TREC 2003 and 2004.
Absolute Evaluation
Requires building new test collections
–Not dependent on pooled results from systems, so suitable for post-hoc experimentation
–Human effort is required; a methodology is described in (Katz and Lin 2005), (Bilotti, Katz and Lin 2004) and (Bilotti 2004)
Automatic scoring methods based on n-grams, or on fuzzy unification over predicate-argument structure (Lin and Demner-Fushman 2005), (Van Durme et al. 2003), can be applied
Can evaluate at the level of documents or passages retrieved, predicates matched, or answers extracted, depending on the level of detail in the test set.
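As a rough illustration of n-gram-based automatic scoring, the sketch below computes unigram recall of answer-key nuggets against a system response; this is only in the spirit of the cited work, not the published metric.

```python
# Toy automatic scoring: fraction of each answer-key nugget's terms that
# appear in the system response (unigram recall). A rough sketch only, not
# the actual metric of Lin and Demner-Fushman (2005).
def nugget_recall(response, nuggets):
    response_terms = set(response.lower().split())
    scores = []
    for nugget in nuggets:
        terms = nugget.lower().split()
        matched = sum(1 for t in terms if t in response_terms)
        scores.append(matched / len(terms))
    return scores

nuggets = ["oil smuggled out of Iraq", "Russian tanker carried Iraqi oil"]
response = "Tests show the Russian tanker was loaded with Iraqi oil"
print(nugget_recall(response, nuggets))   # [0.2, 0.8]
```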
Preliminary Results: TREC 2005 Relationship QA Track
25 scenario-type questions; the first time such questions have occurred officially in the TREC QA track
Semi-automatic runs were allowed: JAVELIN submitted a second run using manual question analysis
Results (in MRR of relevant nuggets):
–Run 1: 0.1356
–Run 2: 0.5303
Example on the next slide!
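For reference, MRR is the reciprocal rank of the first relevant item for each question, averaged over questions; a minimal sketch:

```python
# Mean Reciprocal Rank: 1/rank of the first relevant item for each question
# (0 if none is relevant), averaged over all questions.
def mean_reciprocal_rank(runs):
    total = 0.0
    for relevance in runs:                    # one relevance list per question
        rr = 0.0
        for rank, is_relevant in enumerate(relevance, start=1):
            if is_relevant:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(runs)

# Two toy questions: first relevant hit at rank 2, then at rank 1.
print(mean_reciprocal_rank([[False, True, False], [True, False]]))  # 0.75
```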
Example: Question Analysis
“The analyst is interested in Iraqi oil smuggling. Specifically, is Iraq smuggling oil to other countries, and if so, which countries? In addition, who is behind the Iraqi oil smuggling?”
Predicate-argument templates extracted from the scenario:
interested(ARG0: The analyst, ARG1: Iraqi oil smuggling)
smuggling(ARG0: Iraq, ARG1: oil, ARG2: other countries)
smuggling(ARG0: Iraq, ARG1: oil, ARG2: which countries)
is behind(ARG0: Who, ARG1: the Iraqi oil smuggling)
Example: Results
“The analyst is interested in Iraqi oil smuggling. Specifically, is Iraq smuggling oil to other countries, and if so, which countries? In addition, who is behind the Iraqi oil smuggling?”
1. “The amount of oil smuggled out of Iraq has doubled since August last year, when oil prices began to increase,” Gradeck said in a telephone interview Wednesday from Bahrain.
2. U.S.: Russian Tanker Had Iraqi Oil. By ROBERT BURNS, AP Military Writer. WASHINGTON (AP) – Tests of oil samples taken from a Russian tanker suspected of violating the U.N. embargo on Iraq show that it was loaded with petroleum products derived from both Iranian and Iraqi crude, two senior defense officials said.
5. With no American or allied effort to impede the traffic, between 50,000 and 60,000 barrels of Iraqi oil and fuel products a day are now being smuggled along the Turkish route, Clinton administration officials estimate.
(7 of 15 relevant)
Where do we go from here?
What to index and how to represent it
–Moving to Indri (http://www.lemurproject.org) allows exact representation of our predicate structure in the index
Building a Scenario QA test collection
Query formulation and relaxation
–Learning or planning strategies
Ranking retrieved predicate instances
–Aggregating information across documents
Inference and evidence combination
Extracting answers from predicate-argument structure
References
Bilotti. Query Expansion Techniques for Question Answering. Master's Thesis, MIT, 2004.
Bilotti, et al. What Works Better for Question Answering: Stemming or Morphological Query Expansion? IR4QA Workshop at SIGIR 2004.
Lin and Demner-Fushman. Automatically Evaluating Answers to Definition Questions. HLT/EMNLP 2005.
Litkowski, K.C. Question Answering Using XML-Tagged Documents. TREC 2003.
Metzler and Croft. Combining the Language Model and Inference Network Approaches to Retrieval. Information Processing and Management Special Issue on Bayesian Networks and Information Retrieval, 40(5), 735-750, 2004.
Metzler, et al. Indri at TREC 2004: Terabyte Track. TREC 2004.
Moldovan, et al. Performance Issues and Error Analysis in an Open-Domain Question Answering System. ACM TOIS, vol. 21, no. 2, 2003.
Nyberg, et al. Extending the JAVELIN QA System with Domain Semantics. Proceedings of the 20th National Conference on Artificial Intelligence (AAAI 2005).
Pradhan, S., et al. Shallow Semantic Parsing Using Support Vector Machines. HLT/NAACL 2004.
Prager, et al. Question-Answering by Predictive Annotation. SIGIR 2000.
Van Durme, B., et al. Towards Light Semantic Processing for Question Answering. HLT/NAACL 2003.
Voorhees, E. Overview of the TREC 2003 Question Answering Track. TREC 2003.