Question Answering. CS 290N, 2010. Instructor: Tao Yang (some of these slides were adapted from presentations by Giuseppe Attardi, Rada Mihalcea, Chris Manning, and Nicholas Kushmerick)

Slide 1 Question Answering Earlier IR systems focus on queries with short keywords –Most search engine queries are short queries. QA systems focus on natural-language question answering. Outline –What is QA –Examples of QA systems/algorithms.

Slide 2 People want to ask questions… Examples from the Ask.com query log:
–how much should i weigh
–what does my name mean
–how to get pregnant
–where can i find pictures of hairstyles
–who is the richest man in the world
–what is the meaning of life
–why is the sky blue
–what is the difference between white eggs and brown eggs
–can you drink milk after the expiration date
–what is true love
–what is the jonas brothers address
Questions make up around 10-20% of query logs.

Slide 3 Why QA? QA engines attempt to let you ask your question the way you'd normally ask it. Questions are more specific than short keyword queries –Orange chicken –what is orange chicken –how to make orange chicken QA also helps inexperienced search users.

Slide 4 General Search Engine Include question words etc. in the stop-list and apply standard IR. Sometimes it works. Sometimes it requires users to do more investigation. –Question: Who was the prime minister of Australia during the Great Depression? Answer: James Scullin (Labor), 1929–31. Ask.com gives an explicit answer; Google's top 1-2 results are also good. –what is phone number for united airlines: Ask.com gives a direct answer; Google gives no direct answer in the top 10.

Slide 5 Difficult questions Question: How much money did IBM spend on advertising in 2006? No engine can answer

Slide 6 What is involved in QA? Natural Language Processing –Question type analysis and answer patterns –Semantic Processing –Syntactic Processing and Parsing Knowledge Base to store candidate answers Candidate answer search and answer processing

Slide 7 Question Types
–Class 1: A: single datum or list of items; C: who, when, where, how (old, much, large)
–Class 2: A: multi-sentence; C: extract from multiple sentences
–Class 3: A: across several texts; C: comparative/contrastive
–Class 4: A: an analysis of retrieved information; C: synthesized coherently from several retrieved fragments
–Class 5: A: result of reasoning; C: word/domain knowledge and common-sense reasoning

Slide 8 Question subtypes Class 1.A About subjects, objects, manner, time or location Class 1.B About properties or attributes Class 1.C Taxonomic nature

Slide 9 Example of Answer Processing (QA system and the kind of output it returns)
–AnswerBus: Sentences
–AskJeeves (ask.com): Documents/direct answers
–IONAUT: Passages
–LCC: Sentences
–Mulder: Extracted answers
–QuASM: Document blocks
–START: Mixture
–Webclopedia: Sentences

Slide 10 AskJeeves (now Ask.com) The earlier AskJeeves was probably the most well-known QA site –It largely does pattern matching to match your question to its own knowledge base of questions –Has its own knowledge base and uses partners to answer questions –Catalogues previous questions –Answer processing engine: question template response –If that works, you get template-driven answers to that known question –If that fails, it falls back to regular web search Ask.com: more advanced QA systems developed in the last 3 years –Search answers from a large web database –Deep integration with structured answers

Slide 11 Question Answering at TREC The question answering competition at TREC consists of answering a set of 500 fact-based questions, e.g., "When was Mozart born?". For the first three years systems were allowed to return 5 ranked answer snippets (50/250 bytes) for each question. –IR think –Mean Reciprocal Rank (MRR) scoring: 1, 0.5, 0.33, 0.25, 0.2, 0 for a correct answer at rank 1, 2, 3, 4, 5, 6+ –Mainly Named Entity answers (person, place, date, …) From 2002 the systems were only allowed to return a single exact answer, and the notion of confidence was introduced.
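As a rough illustration of the MRR scoring just described (not the official TREC evaluation code), here is a minimal Python sketch; the function name mean_reciprocal_rank and the is_correct callback are assumptions for illustration.

```python
def mean_reciprocal_rank(ranked_answers, is_correct):
    """MRR as used in early TREC QA evaluation.

    ranked_answers: one ranked answer list per question.
    is_correct: function (question_index, answer) -> bool.
    A question scores 1/rank of the first correct answer among the
    top 5 returned snippets, and 0 if none of them is correct."""
    total = 0.0
    for qi, answers in enumerate(ranked_answers):
        for rank, answer in enumerate(answers[:5], start=1):
            if is_correct(qi, answer):
                total += 1.0 / rank
                break
    return total / len(ranked_answers)

# Example: two questions, correct answer found at ranks 1 and 3.
ranked = [["1756", "1791"], ["Vienna", "Paris", "Salzburg"]]
gold = ["1756", "Salzburg"]
print(mean_reciprocal_rank(ranked, lambda qi, a: a == gold[qi]))  # (1 + 1/3) / 2
```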

Slide 12 The TREC Document Collection The current collection uses news articles from the following sources: AP newswire, New York Times newswire, and Xinhua News Agency newswire. In total there are 1,033,461 documents in the collection (about 3 GB of text). Clearly this is too much text to process entirely using advanced NLP techniques, so the systems usually consist of an initial information retrieval phase followed by more advanced processing. Many supplement this text with use of the web and other knowledge bases.

Slide 13 Sample TREC questions 1. Who is the author of the book, "The Iron Lady: A Biography of Margaret Thatcher"? 2. What was the monetary value of the Nobel Peace Prize in 1989? 3. What does the Peugeot company manufacture? 4. How much did Mercury spend on advertising in 1993? 5. What is the name of the managing director of Apricot Computer? 6. Why did David Koresh ask the FBI for a word processor? 7. What debts did Qintex group leave? 8. What is the name of the rare neurological disease with symptoms such as: involuntary movements (tics), swearing, and incoherent vocalizations (grunts, shouts, etc.)?

Slide 14 Top Performing Systems Currently the best performing systems at TREC can answer approximately 70% of the questions Approaches and successes have varied a fair deal –Knowledge-rich approaches, using a vast array of NLP techniques got the best results in 2000, 2001 –AskMSR system stressed how much could be achieved by very simple methods with enough text (and now various copycats) –Middle ground is to use large collection of surface matching patterns (ISI)

Slide 15 AskMSR Web Question Answering: Is More Always Better? –Dumais, Banko, Brill, Lin, Ng (Microsoft, MIT, Berkeley) Q: “Where is the Louvre located?” Want “Paris” or “France” or “75058 Paris Cedex 01” or a map Don’t just want URLs

Slide 16 AskMSR: Shallow approach In what year did Abraham Lincoln die? Ignore hard documents and find easy ones

Slide 17 AskMSR: Details

Slide 18 Step 1: Rewrite queries Intuition: The user’s question is often syntactically quite close to sentences that contain the answer –Where is the Louvre Museum located? –The Louvre Museum is located in Paris –Who created the character of Scrooge? –Charles Dickens created the character of Scrooge.

Slide 19 Query rewriting Classify question into seven categories –Who is/was/are/were…? –When is/did/will/are/were …? –Where is/are/were …? a. Category-specific transformation rules, e.g. "For Where questions, move 'is' to all possible locations": "Where is the Louvre Museum located" → "is the Louvre Museum located" → "the is Louvre Museum located" → "the Louvre is Museum located" → "the Louvre Museum is located" → "the Louvre Museum located is" b. Expected answer "Datatype" (e.g., Date, Person, Location, …): When was the French Revolution? → DATE Hand-crafted classification/rewrite/datatype rules (Could they be automatically learned?) Nonsense rewrites, but who cares? It's only a few more queries to Google.
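A minimal sketch of rewrite rule (a) for "Where is … ?" questions; rewrite_where_question is a hypothetical helper that only moves "is" through the remaining tokens, whereas the real AskMSR system uses hand-crafted rules per question category.

```python
def rewrite_where_question(question):
    """Generate declarative rewrites for a 'Where is ... ?' question by
    moving 'is' to every possible position, as described on the slide.
    Nonsense rewrites are kept; they simply return few or no hits."""
    tokens = question.rstrip("?").split()
    if len(tokens) < 3 or tokens[0].lower() != "where" or tokens[1].lower() != "is":
        return [question]          # not a 'Where is' question; leave unchanged
    rest = tokens[2:]              # e.g. ['the', 'Louvre', 'Museum', 'located']
    return [" ".join(rest[:i] + ["is"] + rest[i:]) for i in range(len(rest) + 1)]

print(rewrite_where_question("Where is the Louvre Museum located?"))
# ['is the Louvre Museum located', 'the is Louvre Museum located', ...,
#  'the Louvre Museum is located', 'the Louvre Museum located is']
```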

Slide 20 Query Rewriting - weights One wrinkle: some query rewrites are more reliable than others. For "Where is the Louvre Museum located?": –+"the Louvre Museum is located" gets weight 5 (if we get a match, it's probably right) –+Louvre +Museum +located gets weight 1 (lots of non-answers could come back too)

Slide 21 Step 2: Query search engine Send all rewrites to a Web search engine Retrieve top N answers (100?) For speed, rely just on search engine’s “snippets”, not the full text of the actual document

Slide 22 Step 3: Mining N-Grams Unigram, bigram, trigram, … N-gram: list of N adjacent terms in a sequence. E.g., "Web Question Answering: Is More Always Better" –Unigrams: Web, Question, Answering, Is, More, Always, Better –Bigrams: Web Question, Question Answering, Answering Is, Is More, More Always, Always Better –Trigrams: Web Question Answering, Question Answering Is, Answering Is More, Is More Always, More Always Better

Slide 23 Mining N-Grams Simple: Enumerate all N-grams (N=1,2,3 say) in all retrieved snippets Use hash table and other fancy footwork to make this efficient Weight of an n-gram: occurrence count, each weighted by “reliability” (weight) of rewrite that fetched the document Example: “Who created the character of Scrooge?” –Dickens –Christmas Carol - 78 –Charles Dickens - 75 –Disney - 72 –Carl Banks - 54 –A Christmas - 41 –Christmas Carol - 45 –Uncle - 31
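A minimal sketch of the n-gram mining step, assuming the retrieved snippets are available as (text, rewrite_weight) pairs; mine_ngrams is a hypothetical name, and a real system would use the hash-table tricks mentioned above for efficiency.

```python
from collections import Counter

def mine_ngrams(snippets, max_n=3):
    """Count unigrams, bigrams and trigrams over retrieved snippets,
    weighting each occurrence by the reliability weight of the query
    rewrite that fetched the snippet."""
    scores = Counter()
    for text, weight in snippets:
        tokens = text.split()
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                scores[" ".join(tokens[i:i + n])] += weight
    return scores

snippets = [("Charles Dickens created the character of Scrooge", 5),
            ("Scrooge appears in A Christmas Carol by Charles Dickens", 1)]
print(mine_ngrams(snippets).most_common(3))
```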

Slide 24 Step 4: Filtering N-Grams Each question type is associated with one or more "data-type filters" (regular expressions), e.g. When… → Date, Where… → Location, Who… → Person. Boost the score of n-grams that do match the expected regexp; lower the score of n-grams that don't.
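A toy sketch of the data-type filtering step; the regular expressions below are illustrative stand-ins (a real system would use a named-entity tagger), and filter_ngrams consumes the score dictionary produced by the mining sketch above.

```python
import re

# Hypothetical data-type filters; purely illustrative.
DATATYPE_FILTERS = {
    "DATE":     re.compile(r"\b(1[0-9]{3}|20[0-9]{2})\b"),    # a four-digit year
    "PERSON":   re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"),   # two capitalized words
    "LOCATION": re.compile(r"\b[A-Z][a-z]+(, [A-Z][a-z]+)?\b"),
}

def filter_ngrams(scores, answer_type):
    """Boost n-grams that match the expected data type; damp the rest."""
    pattern = DATATYPE_FILTERS[answer_type]
    return {ng: (s * 2.0 if pattern.search(ng) else s * 0.5)
            for ng, s in scores.items()}
```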

Slide 25 Step 5: Tiling the Answers Overlapping n-grams are tiled into longer answers, e.g. "Dickens", "Charles Dickens", and "Mr Charles" tile into "Mr Charles Dickens". Start from the highest-scoring n-gram, merge scores, discard the old n-grams, and repeat until no more overlap remains.
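A greedy sketch of answer tiling under the description above; overlap_merge and tile_answers are hypothetical helpers, and tie-breaking and score-merging details are simplified.

```python
def overlap_merge(a, b):
    """If the end of a overlaps the start of b (word-wise), return the
    tiled string, else None."""
    aw, bw = a.split(), b.split()
    for k in range(min(len(aw), len(bw)), 0, -1):
        if aw[-k:] == bw[:k]:
            return " ".join(aw + bw[k:])
    return None

def tile_answers(scored):
    """Repeatedly merge the highest-scoring n-gram with any overlapping
    n-gram, summing scores and discarding the parts, until no overlaps remain."""
    items = dict(scored)
    changed = True
    while changed:
        changed = False
        best = max(items, key=items.get)
        for other in list(items):
            if other == best:
                continue
            merged = overlap_merge(best, other) or overlap_merge(other, best)
            if merged:
                items[merged] = items.pop(best) + items.pop(other)
                changed = True
                break
    return sorted(items.items(), key=lambda kv: -kv[1])

print(tile_answers({"Dickens": 45, "Charles Dickens": 75, "Mr Charles": 30}))
# [('Mr Charles Dickens', 150)]
```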

Slide 26 Results Standard TREC contest test-bed: ~1M documents; 900 questions. The technique doesn't do too well on TREC alone (though it would have placed in the top 9 of ~30 participants!) –MRR = (i.e., the right answer is ranked about #4-#5 on average) Using the Web as a whole, not just TREC's 1M documents: MRR = 0.42 (i.e., on average, the right answer is ranked about #2-#3) –Why? Because it relies on the enormity of the Web!

Slide 27 ISI: Surface patterns approach ISI's approach: use of characteristic phrases –"When was <NAME> born?" –Typical answers: "Mozart was born in 1756." "Gandhi (1869-1948)..." –Suggests phrases like "<NAME> was born in <BIRTHDATE>" and "<NAME> (<BIRTHDATE> -" –Used as regular expressions, these can help locate the correct answer

Slide 28 Use Pattern Learning Example: –"The great composer Mozart (1756-1791) achieved fame at a young age" –"Mozart (1756-1791) was a genius" –"The whole world would always be indebted to the great music of Mozart (1756-1791)" –The longest matching substring for all 3 sentences is "Mozart (1756-1791)" –A suffix tree would extract "Mozart (1756-1791)" as an output, with a score of 3 Reminiscent of IE pattern learning

Slide 29 Algorithm 1 for Pattern Learning Select an example for a given question type –E.g., for BIRTHYEAR questions we select "Mozart 1756" ("Mozart" as the question term and "1756" as the answer term). Submit the question and the answer term as queries to a search engine. Download the top 1000 web documents. Apply a sentence breaker to the documents. Retain only those sentences that contain both the question and the answer term. Tokenize and pass each retained sentence through a suffix tree constructor. This finds all substrings, of all lengths, along with their counts. For example, consider the sentences on the next slide.

Slide 30 Example Given 3 sentences: –"The great composer Mozart (1756-1791) achieved fame at a young age" –"Mozart (1756-1791) was a genius" –"The whole world would always be indebted to the great music of Mozart (1756-1791)" The longest matching substring for all 3 sentences is "Mozart (1756-1791)", which the suffix tree would extract as one of the outputs along with the score of 3. Pass each phrase in the suffix tree through a filter to retain only those phrases that contain both the question and the answer term. For the example, we extract only those phrases from the suffix tree that contain the words "Mozart" and "1756". Replace the question term by the tag "<NAME>" and the answer term by the tag "<ANSWER>".
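A small sketch of Algorithm 1's extraction step, assuming the tag names <NAME> and <ANSWER>; instead of a suffix tree it simply enumerates token spans, which is only practical for a handful of sentences.

```python
from collections import Counter

def learn_patterns(sentences, q_term, a_term):
    """Count all token spans (a real system uses a suffix tree for this),
    keep the spans containing both the question and the answer term, and
    turn them into patterns by substituting the <NAME>/<ANSWER> tags."""
    counts = Counter()
    for s in sentences:
        tokens = s.split()
        seen = set()
        for i in range(len(tokens)):
            for j in range(i + 1, len(tokens) + 1):
                span = " ".join(tokens[i:j])
                if span not in seen:          # count each span once per sentence
                    seen.add(span)
                    counts[span] += 1
    patterns = {}
    for span, c in counts.items():
        if q_term in span and a_term in span:
            pattern = span.replace(q_term, "<NAME>").replace(a_term, "<ANSWER>")
            patterns[pattern] = max(patterns.get(pattern, 0), c)
    return sorted(patterns.items(), key=lambda kv: -kv[1])

sentences = [
    "The great composer Mozart ( 1756 - 1791 ) achieved fame at a young age",
    "Mozart ( 1756 - 1791 ) was a genius",
    "The whole world would always be indebted to the great music of Mozart ( 1756 - 1791 )",
]
print(learn_patterns(sentences, "Mozart", "1756")[:3])
# top patterns include '<NAME> ( <ANSWER>' and longer variants, each with count 3
```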

Slide 31 Pattern Learning (cont.) Repeat with different examples of the same question type –"Gandhi 1869", "Newton 1642", etc. Some patterns learned for BIRTHDATE –a. born in <ANSWER>, <NAME> –b. <NAME> was born on <ANSWER>, –c. <NAME> (<ANSWER> - –d. <NAME> (<ANSWER> - )

Slide 32 Algorithm 2: Calculating Precision of Derived Patterns Query the search engine using only the question term (in the example, only "Mozart"). Download the top 1000 web documents. Segment these documents into sentences. Retain only those sentences that contain the question term. For each pattern obtained from Algorithm 1, check the presence of the pattern in each retained sentence for two cases: –Presence of the pattern with the <ANSWER> tag matched by any word. –Presence of the pattern with the <ANSWER> tag matched by the correct answer term.

Slide 33 Algorithm 2: Example For the pattern "<NAME> was born in <ANSWER>", check the presence of the following strings in the answer sentences –Mozart was born in <any word> –Mozart was born in 1756 Calculate the precision of each pattern by the formula P = Ca / Co where –Ca = total number of pattern matches with the answer term present –Co = total number of pattern matches with the answer term replaced by any word Retain only the patterns matching a sufficient number of examples (> 5).
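A minimal sketch of the precision formula P = Ca / Co for a single pattern, assuming the pattern contains exactly one <ANSWER> placeholder and that <NAME> has already been substituted by the question term.

```python
import re

def pattern_precision(pattern, sentences, answer_term):
    """P = Ca / Co for one learned pattern (sketch of Algorithm 2).

    Co counts sentences where the pattern matches with <ANSWER> standing
    for any word; Ca counts those where <ANSWER> is the correct answer."""
    prefix, suffix = pattern.split("<ANSWER>")   # assumes exactly one placeholder
    any_word = re.escape(prefix) + r"\S+" + re.escape(suffix)
    exact    = re.escape(prefix) + re.escape(answer_term) + re.escape(suffix)
    co = sum(1 for s in sentences if re.search(any_word, s))
    ca = sum(1 for s in sentences if re.search(exact, s))
    return ca / co if co else 0.0

sentences = ["Mozart was born in 1756 in Salzburg",
             "Mozart was born in a musical family"]
print(pattern_precision("Mozart was born in <ANSWER>", sentences, "1756"))  # 0.5
```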

Slide 34 Experiments 6 different Q types –from Webclopedia QA Typology (Hovy et al., 2002a) BIRTHDATE LOCATION INVENTOR DISCOVERER DEFINITION WHY-FAMOUS

Slide 35 Experiments: pattern precision BIRTHDATE patterns:
–1.0 <NAME> (<ANSWER> - )
–0.85 <NAME> was born on <ANSWER>,
–0.6 <NAME> was born in <ANSWER>
–0.59 <NAME> was born <ANSWER>
–0.53 <ANSWER> <NAME> was born
–0.36 <NAME> (<ANSWER> -
INVENTOR patterns:
–1.0 <ANSWER> invents <NAME>
–1.0 the <NAME> was invented by <ANSWER>
–1.0 <ANSWER> invented the <NAME> in

Slide 36 Experiments (cont.) DISCOVERER patterns:
–1.0 when <ANSWER> discovered <NAME>
–1.0 <ANSWER>'s discovery of <NAME>
–0.9 <NAME> was discovered by <ANSWER> in
DEFINITION patterns:
–1.0 <NAME> and related <ANSWER>
–1.0 form of <ANSWER>, <NAME>
–0.94 as <NAME>, <ANSWER> and

Slide 37 Experiments (cont.) WHY-FAMOUS patterns:
–1.0 <ANSWER> <NAME> called
–1.0 laureate <ANSWER> <NAME>
–0.71 <NAME> is the <ANSWER> of
LOCATION patterns:
–1.0 <ANSWER>'s <NAME>
–1.0 regional: <ANSWER>: <NAME>
–0.92 near <NAME> in <ANSWER>
Depending on question type, the patterns achieve high MRR (0.6-0.9), with higher results from use of the Web than from the TREC QA collection.

Slide 38 Shortcomings & Extensions Need for POS and/or semantic types –"Where are the Rocky Mountains?" –"Denver's new airport, topped with white fiberglass cones in imitation of the Rocky Mountains in the background, continues to lie empty" –The surface pattern matches "in the background"; an NE tagger and/or ontology could enable the system to determine that "background" is not a location

Slide 39 Shortcomings... (cont.) Long-distance dependencies –"Where is London?" –"London, which has one of the most busiest airports in the world, lies on the banks of the river Thames" –would require a pattern like: <NAME>, (<any_word>)*, lies on <ANSWER> –The abundance and variety of Web data helps the system find an instance of its patterns without losing answers to long-distance dependencies

Slide 40 Shortcomings... (cont.) The system currently uses only one anchor word –Doesn't work for question types requiring multiple words from the question to appear in the answer –"In which county does the city of Long Beach lie?" –"Long Beach is situated in Los Angeles County" –required pattern: <NAME> is situated in <ANSWER>

Slide 41 References
–Michele Banko, Eric Brill, Susan Dumais, Jimmy Lin. AskMSR: Question Answering Using the Worldwide Web. In Proceedings of the 2002 AAAI Symposium on Mining Answers from Text and Knowledge Bases, March 2002.
–Susan Dumais, Michele Banko, Eric Brill, Jimmy Lin, Andrew Ng. Web Question Answering: Is More Always Better? SIGIR 2002. http://research.microsoft.com/~sdumais/SIGIR2002-QA-Submit-Conf.pdf
–D. Ravichandran and E. H. Hovy. Learning Surface Text Patterns for a Question Answering System. ACL 2002, July 2002.

Slide 42 Falcon: Architecture (block diagram) Three stages: –Question Processing: Collins parser + NE extraction, question taxonomy, question expansion (WordNet); produces the question semantic/logical form and the expected answer type –Paragraph Processing: paragraph index and paragraph filtering; returns answer paragraphs –Answer Processing: Collins parser + NE extraction, coreference resolution, abduction filter; produces the answer semantic/logical form and the final answer

Slide 43 Question parse "Who was the first Russian astronaut to walk in space" (parse tree with POS tags and NP/PP/VP/S constituents)

Slide 44 Question semantic form Concepts linked in the semantic form: astronaut, walk, space, Russian, first. Answer type: PERSON. Question logical form: first(x) ∧ astronaut(x) ∧ Russian(x) ∧ space(z) ∧ walk(y, z, x) ∧ PERSON(x)

Slide 45 Expected Answer Type Question: What is the size of Argentina? –"size" maps through WordNet to the concept "dimension", so the expected answer type is QUANTITY

Slide 46 Questions about definitions Special patterns: –What {is|are} …? –What is the definition of …? –Who {is|was|are|were} …? Answer patterns: –…{is|are} –…, {a|an|the} –… -

Slide 47 Question Taxonomy A taxonomy rooted at Question, with classes such as: Reason, Number, Manner, Location, Organization, Product, Language, Mammal, Currency, Nationality, Game, Reptile, Country, City, Province, Continent, Speed, Degree, Dimension, Rate, Duration, Percentage, Count

Slide 48 Question expansion Morphological variants –invented → inventor Lexical variants –killer → assassin –far → distance Semantic variants –like → prefer
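As an illustration of lexical and semantic expansion (not Falcon's actual expansion module), here is a rough sketch using NLTK's WordNet interface; it assumes the nltk package and its WordNet data are installed, and expand_keyword is a hypothetical helper.

```python
# Requires: pip install nltk, then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def expand_keyword(word):
    """Collect synonyms from the word's synsets and morphological variants
    via derivationally related forms (e.g. a verb's agent noun)."""
    variants = set()
    for synset in wn.synsets(word):
        for lemma in synset.lemmas():
            variants.add(lemma.name().replace("_", " "))          # lexical variants
            for related in lemma.derivationally_related_forms():  # morphological variants
                variants.add(related.name().replace("_", " "))
    variants.discard(word)
    return sorted(variants)

print(expand_keyword("invent"))
```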

Slide 49 Indexing for Q/A Alternatives: –IR techniques –Parse texts and derive conceptual indexes Falcon uses paragraph indexing: –Vector-Space plus proximity –Returns weights used for abduction

Slide 50 Abduction to justify answers Backchaining proofs from questions Axioms: –Logical form of answer –World knowledge (WordNet) –Coreference resolution in answer text Effectiveness: –14% improvement –Filters out 121 erroneous answers (of 692) –Requires 60% of question processing time

Slide 51 TREC 13 QA Several subtasks: –Factoid questions –Definition questions –List questions –Context questions LCC still best performance, but different architecture

Slide 52 LCC Block Architecture (block diagram) –Question Processing: question parse, semantic transformation, recognition of expected answer type, keyword extraction; captures the semantics of the question and selects keywords for passage retrieval –Passage Retrieval: document retrieval with the keywords; extracts and ranks passages using surface-text techniques –Answer Processing: answer extraction with a theorem prover over an axiomatic knowledge base (using WordNet and NER), answer justification, answer reranking; extracts and ranks answers using NL techniques

Slide 53 Question Processing Two main tasks –Determining the type of the answer –Extract keywords from the question and formulate a query

Slide 54 Answer Types Factoid questions… –Who, where, when, how many… –The answers fall into a limited and somewhat predictable set of categories Who questions are going to be answered by… Where questions… –Generally, systems select answer types from a set of Named Entities, augmented with other types that are relatively easy to extract

Slide 55 Answer Types Of course, it isn’t that easy… –Who questions can have organizations as answers Who sells the most hybrid cars? –Which questions can have people as answers Which president went to war with Mexico?

Slide 56 Answer Type Taxonomy Contains ~9000 concepts reflecting expected answer types Merges named entities with the WordNet hierarchy

Slide 57 Answer Type Detection Most systems use a combination of hand-crafted rules and supervised machine learning to determine the right answer type for a question. Not worthwhile to do something complex here if it can’t also be done in candidate answer passages.

Slide 58 Keyword Selection Answer Type indicates what the question is looking for: –It can be mapped to a NE type and used for search in enhanced index Lexical terms (keywords) from the question, possibly expanded with lexical/semantic variations provide the required context.

Slide 59 Keyword Extraction Questions are approximated by sets of unrelated keywords. Examples (from the TREC QA track):
–Q002: What was the monetary value of the Nobel Peace Prize in 1989? → monetary, value, Nobel, Peace, Prize
–Q003: What does the Peugeot company manufacture? → Peugeot, company, manufacture
–Q004: How much did Mercury spend on advertising in 1993? → Mercury, spend, advertising, 1993
–Q005: What is the name of the managing director of Apricot Computer? → name, managing, director, Apricot, Computer

Slide 60 Keyword Selection Algorithm
1. Select all non-stopwords in quotations
2. Select all NNP words in recognized named entities
3. Select all complex nominals with their adjectival modifiers
4. Select all other complex nominals
5. Select all nouns with adjectival modifiers
6. Select all other nouns
7. Select all verbs
8. Select the answer type word
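A much-simplified sketch of these heuristics using NLTK POS tags instead of a full parser; it assumes nltk and its tokenizer/tagger models are installed, collapses the complex-nominal heuristics (3-6) into plain noun selection, and omits the answer-type word.

```python
# Requires: pip install nltk, then nltk.download('punkt') and
# nltk.download('averaged_perceptron_tagger')
import re
import nltk

STOPWORDS = {"the", "a", "an", "of", "in", "on", "what", "who", "how", "did", "is", "was"}

def select_keywords(question):
    keywords = []
    for quoted in re.findall(r'"([^"]+)"', question):              # heuristic 1
        keywords += [w for w in quoted.split() if w.lower() not in STOPWORDS]
    tagged = nltk.pos_tag(nltk.word_tokenize(question))
    keywords += [w for w, t in tagged if t == "NNP"]               # heuristic 2 (approx.)
    keywords += [w for w, t in tagged if t.startswith("NN") and t != "NNP"
                 and w.lower() not in STOPWORDS]                   # heuristics 3-6 collapsed
    keywords += [w for w, t in tagged if t.startswith("VB")
                 and w.lower() not in STOPWORDS]                   # heuristic 7
    seen, ordered = set(), []
    for w in keywords:                                             # deduplicate, keep order
        if w not in seen:
            seen.add(w)
            ordered.append(w)
    return ordered

print(select_keywords("How much did Mercury spend on advertising in 1993?"))
```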

Slide 61 Passage Retrieval Extracts and ranks passages using surface-text techniques (the Passage Retrieval block of the LCC architecture on Slide 52, taking keywords from Question Processing and feeding passages to Answer Processing)

Slide 62 Passage Extraction Loop Passage Extraction Component –Extracts passages that contain all selected keywords –Passage size is dynamic –Start position is dynamic Passage quality and keyword adjustment –In the first iteration use the first 6 keyword selection heuristics –If the number of passages is lower than a threshold → the query is too strict → drop a keyword –If the number of passages is higher than a threshold → the query is too relaxed → add a keyword
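A sketch of the keyword-adjustment loop, assuming a hypothetical search(keywords) function that returns the passages containing all given keywords; the thresholds and the choice of which keyword to drop are simplified.

```python
def retrieve_passages(keywords, search, min_passages=20, max_passages=500):
    """Feedback loop: drop a keyword when the query is too strict (too few
    passages), restore one when it is too relaxed (too many passages)."""
    active = list(keywords)          # keywords currently in the query
    dropped = []                     # keywords removed because the query was too strict
    passages = search(active)
    for _ in range(len(keywords)):   # bound the number of adjustments
        if len(passages) < min_passages and len(active) > 1:
            dropped.append(active.pop())        # too strict: drop the last keyword
        elif len(passages) > max_passages and dropped:
            active.append(dropped.pop())        # too relaxed: restore a keyword
        else:
            break
        passages = search(active)
    return passages
```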

Slide 63 Passage Scoring Passages are scored based on keyword windows –For example, if a question has the keyword set {k1, k2, k3, k4}, and in a passage k1 and k2 are matched twice, k3 is matched once, and k4 is not matched, four windows are built over the matched sequence k1 k2 k3 k2 k1, one for each combination of the repeated k1 and k2 matches

Slide 64 Passage Scoring Passage ordering is performed using a sort that involves three scores: –The number of words from the question that are recognized in the same sequence in the window –The number of words that separate the most distant keywords in the window –The number of unmatched keywords in the window
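The three scores are described only qualitatively above; the sketch below is one possible realization, with score_window as a hypothetical helper operating on a tokenized keyword window.

```python
def score_window(question_keywords, window_tokens):
    """Return the three window scores: (1) longest run of keywords appearing
    in the same order as in the question, (2) distance in words between the
    most distant matched keywords, (3) number of unmatched keywords."""
    positions = {kw: window_tokens.index(kw)
                 for kw in question_keywords if kw in window_tokens}

    ordered = [positions[k] for k in question_keywords if k in positions]
    same_sequence = 1 if ordered else 0
    run = 1
    for prev, cur in zip(ordered, ordered[1:]):
        run = run + 1 if cur > prev else 1
        same_sequence = max(same_sequence, run)

    span = max(positions.values()) - min(positions.values()) if positions else 0
    unmatched = len(question_keywords) - len(positions)
    return same_sequence, span, unmatched

print(score_window(["k1", "k2", "k3", "k4"], "k1 x k2 y k3 k2 k1".split()))
# (3, 4, 1)
```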

Slide 65 Answer Extraction Extracts and ranks answers using NL techniques (the Answer Processing block of the LCC architecture on Slide 52)

Slide 66 Ranking Candidate Answers Q066: Name the first private citizen to fly in space. –Answer type: Person –Text passage: "Among them was Christa McAuliffe, the first private citizen to fly in space. Karen Allen, best known for her starring role in "Raiders of the Lost Ark", plays McAuliffe. Brian Kerwin is featured as shuttle pilot Mike Smith..."

Slide 67 Ranking Candidate Answers Q066: Name the first private citizen to fly in space. –Answer type: Person –Text passage: "Among them was Christa McAuliffe, the first private citizen to fly in space. Karen Allen, best known for her starring role in "Raiders of the Lost Ark", plays McAuliffe. Brian Kerwin is featured as shuttle pilot Mike Smith..." –Best candidate answer: Christa McAuliffe

Slide 68 Features for Answer Ranking –Number of question terms matched in the answer passage –Number of question terms matched in the same phrase as the candidate answer –Number of question terms matched in the same sentence as the candidate answer –Flag set to 1 if the candidate answer is followed by a punctuation sign –Number of question terms matched, separated from the candidate answer by at most three words and one comma –Number of terms occurring in the same order in the answer passage as in the question –Average distance from the candidate answer to the question term matches
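A sketch computing a few of these features for one candidate answer, assuming token-level inputs and ignoring sentence and phrase boundaries; answer_features is a hypothetical helper, not the LCC implementation.

```python
def answer_features(question_terms, passage_tokens, answer_span):
    """Compute passage-level ranking features for a candidate answer.
    answer_span is the (start, end) token range of the candidate inside
    passage_tokens; phrase/sentence-level features are omitted for brevity."""
    start, end = answer_span
    qset = {t.lower() for t in question_terms}
    matched_positions = [i for i, tok in enumerate(passage_tokens)
                         if tok.lower() in qset]

    n_matched = len(matched_positions)
    followed_by_punct = int(end < len(passage_tokens)
                            and passage_tokens[end] in {",", ".", ";", ":"})
    near_matches = sum(1 for i in matched_positions
                       if 0 < i - end <= 3 or 0 < start - i <= 3)   # within 3 words
    avg_distance = (sum(abs(i - start) for i in matched_positions) / n_matched
                    if n_matched else float("inf"))
    return {"matched_terms": n_matched,
            "followed_by_punctuation": followed_by_punct,
            "terms_within_3_words": near_matches,
            "avg_distance_to_matches": avg_distance}

passage = ("Among them was Christa McAuliffe , the first private citizen "
           "to fly in space .").split()
print(answer_features(["first", "private", "citizen", "fly", "space"],
                      passage, (3, 5)))   # candidate answer: 'Christa McAuliffe'
```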