JAVELIN Project Briefing
AQUAINT 6-month Meeting, 10/08/04

JAVELIN II: Scenarios and Variable Precision Reasoning for Advanced QA from Multilingual, Distributed Sources

Eric Nyberg, Teruko Mitamura, Jamie Callan, Jaime Carbonell, Bob Frederking
Language Technologies Institute, Carnegie Mellon University

JAVELIN II Research Areas

1. Scenario Dialog
   User: “I’m focusing on the new Iraqi minister Al Tikriti. What can you tell me about his family and associates?”
2. Scenario Representation (entity/relation graph)
3. Distributed, Multilingual Retrieval
4. Multi-Strategy Information Gathering (NL parsing, statistical extraction, pattern matching over relevant documents)
5. Variable-Precision Knowledge Representation & Reasoning (scenario reasoning, search guidance, belief revision, answer justification), feeding an emerging fact base
6. Answer Visualization and Scenario Refinement
   System: <displays instantiated scenario>
   User: “Can you find more about his brother-in-law’s business associates?”

Recent Highlights

Multi-Strategy Information Gathering
  Participation in the Relationship Pilot
  Training extractors with MinorThird
Variable-Precision KR and Reasoning
  Text Processor module (1st version complete)
  Fact Base (1st prototype complete)
Distributed, Multilingual QA
  Keyword translation for CLQA (English to Chinese)

Relationship Pilot

50 sample scenarios, e.g.:
  "The analyst is interested in knowing if a particular country is a member of an international organization. Is Chechnya a member of the United Nations?"
The Phase I JAVELIN system was used, with manual tweaking
Output of the Question Analyzer module was manually corrected:
  Decompose into subquestions (17 of 50 scenarios)
  Gather key terms from the background text

NIST Evaluation Methodology

Two categories of information "nuggets":
  vital: must be present
  okay: relevant but not necessary
Each answer item could match more than one nugget
Recall determined by vital nuggets
Precision based on answer length
F-scores computed with recall weighted three times as heavily as precision (β = 3)

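To make the scoring concrete, here is a minimal sketch of a TREC-style nugget F computation under these rules. The 100-character-per-nugget length allowance is an assumption borrowed from the TREC definitional-QA methodology; only the β = 3 recall weighting is stated on the slide.

    # Sketch of TREC-style nugget F-scoring with recall weighted 3x precision.
    # The 100-character-per-nugget length allowance is an assumption from the
    # TREC definitional-QA methodology, not stated on this slide.

    def nugget_f_score(vital_matched, vital_total, nuggets_matched,
                       answer_length, beta=3.0, allowance_per_nugget=100):
        """Compute the nugget F-score for one scenario."""
        recall = vital_matched / vital_total if vital_total else 0.0

        # Precision is a length penalty: answers longer than the allowance
        # (100 characters per matched nugget) are penalized proportionally.
        allowance = allowance_per_nugget * nuggets_matched
        if answer_length <= allowance:
            precision = 1.0
        else:
            precision = 1.0 - (answer_length - allowance) / answer_length

        if precision + recall == 0:
            return 0.0
        b2 = beta * beta
        return (1 + b2) * precision * recall / (b2 * precision + recall)

    # Example: 2 of 3 vital nuggets matched, 4 nuggets matched overall,
    # 600-character answer -> recall 0.667, precision 0.667, F ~ 0.667.
    print(nugget_f_score(2, 3, 4, 600))
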
JAVELIN Performance Statistics

Average F-score computed by NIST:
Average F-score with recall based on both vital and okay nuggets: 0.322
Total scenarios with F=1: 0
Total scenarios with all vital information correct: 9
  1/1 – 18, 19, 36, 38
  2/2 – 4, 16, 34, 37
  3/3 – 33
Total scenarios with F=0: 19
Total scenarios without any (vital or okay) correct answers: 10
  no answer found – 3, 5
  bad answers – 6, 8, 10, 11, 13, 27, 29, 30

JAVELIN Performance Statistics

Average recall (vital):
Average precision: 0.261
Matches per answer item:
  no nuggets matched
  1 nugget matched: 57
  2 nuggets matched: 10
  3 nuggets matched: 6
  4 nuggets matched: 1
Not done (but potentially useful): determine which decomposed questions we provided relevant information for

General Observations

Nugget quality and assessment vary considerably (e.g., questions #3, #8)
Nuggets overlap, repeat given information, and sometimes represent cues rather than answers; scoring gives no credit for relevant information that was not in the assessors' original set
Difficult to assess retrieval performance: no document IDs are provided in the nugget file
Difficult to reproduce the precision scores: relevant text spans appear to have been manually determined and are not noted in the annotated file

MinorThird

A standardized testbed for building and evaluating machine learning algorithms that work on text
Includes a pattern language (Mixup) for building taggers (compiles to FSTs)
Can we use MinorThird as a factory to build new information extractors for the QA task?

Initial Training Experiments

Can MinorThird train new taggers for specific tags and corpora, based on bootstrap information from existing tagger(s)?
Setup:
  Use IdentiFinder to annotate 101 messages (focus: ORGANIZATION)
  Manually fix incorrect tags
  Training set: 81; test set: 20
Experiments (see the sketch below):
  Vary training set size: 40, 61, 81 messages
  Vary the history size and window size parameters of the MinorThird Learner class

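A hypothetical sketch of this parameter sweep: train_and_eval stands in for a call into MinorThird's Java Learner API and is not a real MinorThird entry point, and the history/window sweep values are assumptions (the slide does not list them).

    # Hypothetical sketch of the parameter sweep. train_and_eval() stands in
    # for a call into MinorThird's Java Learner API (e.g., via a subprocess);
    # it is not a real MinorThird entry point.
    from itertools import product

    TRAIN_SIZES = [40, 61, 81]     # messages used for training (from the slide)
    HISTORY_SIZES = [1, 2, 3]      # assumed sweep values, not from the slide
    WINDOW_SIZES = [1, 3, 5]       # assumed sweep values, not from the slide

    def train_and_eval(train_size, history, window):
        """Placeholder: train an ORGANIZATION tagger on `train_size` messages
        with the given Learner parameters and return span F1 on the
        20-message held-out test set."""
        return 0.0  # replace with a real call into MinorThird

    results = {
        (size, hist, win): train_and_eval(size, hist, win)
        for size, hist, win in product(TRAIN_SIZES, HISTORY_SIZES, WINDOW_SIZES)
    }
    best = max(results, key=results.get)
    print("best configuration (train size, history, window):", best)
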
Varying Size of Training Set
[results chart]

The Text Processor (TP)

A server capable of processing text annotation requests (batch or run-time)
Receives a text stream as input and assigns multiple levels of tags or features
Applications can specify which processors to run on a text, and in what order
Provides a single API for a variety of processors:
  Brill Tagger
  BBN IdentiFinder
  MXTerminator
  Link Parser
  RASP
  WordNet
  CLAWS
  FrameNet

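A hypothetical sketch of what a TP client call might look like, based on the description above; the TextProcessorClient class, its methods, and the processor-name strings are illustrative assumptions, not the actual JAVELIN API.

    # Hypothetical client for the Text Processor server; the class, its
    # methods, and the processor names are illustrative assumptions.

    class TextProcessorClient:
        def __init__(self, host, port):
            self.host, self.port = host, port

        def annotate(self, text, processors):
            """Send `text` to the TP server, running the named processors
            in the given order, and return standoff annotations."""
            return []  # placeholder: would issue the request over the wire

    tp = TextProcessorClient("localhost", 9000)
    annotations = tp.annotate(
        "Al Tikriti met with officials in Baghdad.",
        processors=["MXTerminator",   # sentence segmentation
                    "BrillTagger",    # part-of-speech tags
                    "Identifinder",   # named entities
                    "LinkParser"],    # syntactic parse
    )
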
TP Object Model
[class diagram]

Fact Base

Relational data model containing:
  Documents and metadata
  Standoff annotations for:
    Linguistic analysis (segmentation, POS, parsing, predicate extraction)
    Semantic interpretation (frame filling -> facts/events/etc.)
    Reasoning (reference resolution, inference)

Fact Base [2]

Text Processor API: Segmenter, Taggers, Parsers, Framers

1. Relevant documents or passages are processed by the TP modules
2. Results are stored as features on text spans
3. Extracted frames are stored as possible facts, events, etc.

All derived information is directly linked to its input source(s) at each level.
Persistent storage in an RDBMS supports:
  training/learning on any combination of features
  reuse of results across sessions, analysts, etc., when appropriate
  relational querying for association chains (cf. G. Bhalotia et al., "Keyword Searching and Browsing in Databases Using BANKS", ICDE 2002, San Jose, CA)

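A minimal SQLite sketch of this standoff-annotation design, assuming illustrative table and column names (the actual JAVELIN schema is not shown on the slide):

    # Minimal sketch of the standoff-annotation data model; table and
    # column names are illustrative assumptions, not the JAVELIN schema.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE document (
        doc_id    INTEGER PRIMARY KEY,
        source    TEXT,            -- corpus / retrieval metadata
        content   TEXT
    );

    -- Standoff annotations: features attached to character spans, so every
    -- derived item stays linked to its input source.
    CREATE TABLE annotation (
        ann_id     INTEGER PRIMARY KEY,
        doc_id     INTEGER REFERENCES document(doc_id),
        span_start INTEGER,        -- character offsets into document.content
        span_end   INTEGER,
        layer      TEXT,           -- 'segment', 'pos', 'parse', 'frame', ...
        label      TEXT,
        value      TEXT
    );

    -- Frames promoted to candidate facts/events, linked back to evidence.
    CREATE TABLE fact (
        fact_id    INTEGER PRIMARY KEY,
        ann_id     INTEGER REFERENCES annotation(ann_id),
        predicate  TEXT,
        confidence REAL
    );
    """)

Because every fact points back through an annotation to a document span, association chains can be followed with ordinary relational joins, in the spirit of the BANKS-style keyword querying cited above.
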
CLQA: The Keyword Translation Problem

Given keywords extracted from the question, how do we correctly translate them into the languages of the information sources?

A keyword translator maps keywords in language A to keywords in languages B, C, ...

Tools for Query/Keyword Translation

Machine Readable Dictionaries (MRD)
  Pros: easily obtained for high-density languages; domain-specific dictionaries provide good coverage in-domain
  Cons: publicly available general dictionaries usually have low coverage; cannot translate sentences
MT Systems
  Pros: usually provide more coverage than publicly available MRDs; translate whole sentences
  Cons: translation quality varies; low language-pair coverage compared to MRDs
Parallel Corpora
  Pros: good for domain-specific translation
  Cons: poor for open-domain translation

Research Questions

Can we improve keyword translation correctness by building a keyword selection model that selects one translation from the candidates produced by multiple MT systems?
Can we improve keyword translation correctness by using the question sentence?

The Translation Selection Problem

Given a set of translation candidates and the question sentence, how do we select the candidate that is most likely a correct translation of the keyword?

Each MT system translates both the source keyword and the full source question; the selection model scores each resulting (target keyword, target question) pair, and the highest-scoring target keyword is chosen.

Keyword Selection Model

A set of scoring metrics:
  Each translation candidate is assigned an initial base score of 0
  Each scoring metric adds to or subtracts from the candidate's running score
  After all candidates pass through the model, the candidate with the highest score is selected as the most likely correct translation

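A minimal sketch of this additive model; the metric functions passed in would implement metrics I-V from the next slide:

    # Minimal sketch of the additive keyword selection model. Each metric
    # inspects a candidate (and its translated question) and returns a
    # score delta; the highest-scoring candidate wins.

    def select_translation(candidates, metrics):
        """candidates: list of (target_keyword, target_question) pairs,
        one per MT system. metrics: list of scoring functions."""
        best, best_score = None, float("-inf")
        for keyword, question in candidates:
            score = 0  # initial base score
            for metric in metrics:
                score += metric(keyword, question, candidates)
            if score > best_score:
                best, best_score = keyword, score
        return best
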
The Experiment

Language pair: English to Chinese
Uses three free web-based MT systems
Training data: 50 input questions (125 keywords) from TREC-8, TREC-9, and TREC-10
Testing data: 50 input questions (147 keywords) from TREC-8, TREC-9, and TREC-10
Evaluation: translation correctness

Scoring Metrics

In this experiment we constructed several selection models, each using a combination of the following five scoring metrics:
  I.   Baseline
  II.  Segmented Word-Matching and Partial Word-Matching
  III. Full Sentence Word-Matching without Fall Back to Partial Word-Matching
  IV.  Full Sentence Word-Matching with Fall Back to Partial Word-Matching
  V.   Penalty for Partially Translated or Un-Translated Keywords

Scoring Metrics Summary

  Abbr.  Description
  B      Baseline
  S      Segmented Word-Matching and Partial Word-Matching
  F¹     Full Sentence Word-Matching without Fall Back to Partial Word-Matching
  F²     Full Sentence Word-Matching with Fall Back to Partial Word-Matching
  P      Penalty for Partially Translated or Un-Translated Keywords

Scoring legend (color-coded per metric on the original slide): full match, partial match, supported by more than one MT system, not fully translated.

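As an illustration, here is how metrics IV and V might be realized; the score increments are assumptions, since the actual weights are color-coded on the original slide and not recoverable from this transcript:

    # Illustrative sketch of metric IV (F²): reward a candidate keyword that
    # appears verbatim in the MT system's translation of the full question,
    # falling back to partial matching. The +2/+1/-2 weights are assumptions.

    def full_sentence_match_with_fallback(keyword, question, candidates):
        if keyword in question:                     # full match
            return 2
        if any(ch in question for ch in keyword):   # partial match, checked
            return 1                                # per character (reasonable
        return 0                                    # for Chinese output)

    # Sketch of metric V (P): penalize keywords the MT system left partly or
    # wholly untranslated, detected here as residual ASCII letters.
    def untranslated_penalty(keyword, question, candidates):
        return -2 if any(c.isascii() and c.isalpha() for c in keyword) else 0
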
Results

Keyword translation accuracy of different models on the test set (columns: models built from two MT systems vs. all three):

  Model     S¹S²      S¹S²S³
  B         78.23%    78.91%
  B+S       61.90%    64.63%
  B+F¹      80.27%    80.95%
  B+F²      75.51%    78.91%
  B+P       78.23%    78.91%
  B+F¹+P    82.99%    85.71%
  B+F²+P    78.23%    83.67%

Improvement of different models over the base model:

  Model     S¹S²      S¹S²S³
  B         0%        0%
  B+S       -20.87%   -18.10%
  B+F¹      2.61%     2.59%
  B+F²      -3.48%    0.00%
  B+P       0.00%     0.00%
  B+F¹+P    6.08%     8.62%
  B+F²+P    0.00%     6.95%

[Lin, F. and T. Mitamura, "Keyword Translation from English to Chinese for Multilingual QA", Proceedings of AMTA 2004, Georgetown.]

Results [2]

Best single MT system performance: 78.23%
Best multiple-MT model performance (B+F¹+P, three systems): 85.71%
Best possible result if the correct keyword is selected every time one is produced: 92.52%

Observations

Models that include scoring metrics requiring segmentation did poorly
Using more MT systems improves translation correctness
Using the translated question improves keyword translation accuracy
There is still room for improvement (85.71% achieved vs. a 92.52% upper bound)

More to Do...

Use statistical/machine learning techniques (see the sketch below):
  Treat the result of each scoring metric as a feature in a classification problem (SVM, MaxEnt)
  Train weights for each scoring metric (EM)
Use additional/improved scoring metrics:
  Validate translations using search engines
  Use better segmentation tools
Compare with other evaluation methods:
  retrieval performance
  end-to-end (QA) system performance

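A hedged sketch of the first idea: each scoring metric's output becomes one feature, and an SVM classifies candidates as correct or incorrect translations. scikit-learn and the toy data are used purely for illustration; they are not part of the JAVELIN system.

    # Sketch of the proposed ML reformulation: one feature per scoring
    # metric, with an SVM deciding whether a candidate is correct.
    # scikit-learn and the toy data are illustrative only.
    from sklearn.svm import SVC
    import numpy as np

    # Toy feature matrix: columns = outputs of metrics B, F¹, P for each
    # candidate; labels = whether that candidate was a correct translation.
    X = np.array([[1, 2, 0], [0, 1, -2], [1, 0, 0], [0, 0, -2]])
    y = np.array([1, 0, 1, 0])

    clf = SVC(kernel="linear")
    clf.fit(X, y)

    # Rank a question's candidates by the SVM decision score, pick the top.
    scores = clf.decision_function(np.array([[1, 2, 0], [0, 1, 0]]))
    print("best candidate index:", int(np.argmax(scores)))
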
Questions?