Talk Schedule Question Answering from Email
Bryan Klimt
July 28, 2005
Project Goals
–To build a practical, working question answering system for personal email
–To learn about the technologies that go into QA (IR, IE, NLP, MT)
–To discover which techniques work best, and when
System Overview
Dataset
–18 months of email (Sept 2003 to Feb 2005)
–4,799 messages total
–196 are talk announcements, hand-labelled and annotated
–478 questions and answers
A new email arrives… Is it a talk announcement? If so, we should index it.
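In outline, that flow looks like the minimal Python sketch below. The keyword test and the function names are invented stand-ins; the real components (a logistic-regression classifier, an annotator, and a template filler) are described on the following slides.

# Self-contained sketch of the flow on this slide; the keyword test is a
# placeholder for the real classifier described next.

def is_talk_announcement(email: str) -> bool:
    """Stub classifier; the real system uses logistic regression."""
    return "talk" in email.lower() or "seminar" in email.lower()

def process_new_email(email: str, index: list) -> None:
    """Index the email only if it announces a talk."""
    if is_talk_announcement(email):
        index.append(email)  # the real system also annotates and fills a template

index: list = []
process_new_email("Seminar: Multilingual QA, Thursday 5:30pm, NSH 4513", index)
print(index)  # the announcement was indexed; other mail would be ignored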
Email Classifier
[Diagram: email data feeds logistic regression classifiers, whose outputs are combined ("Logistic Regression Combo") into a single decision]
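As a rough illustration of this kind of classifier (a single logistic regression over TF-IDF features, not the combo architecture in the diagram), here is a scikit-learn sketch; the training emails are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples: 1 = talk announcement, 0 = other email.
emails = [
    "Distinguished lecture: Prof. Smith speaks Thursday at 4pm in NSH 1305",
    "Seminar on machine translation this week, donuts provided",
    "Lunch plans for Friday?",
    "Your expense report has been approved",
]
is_talk = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(emails, is_talk)
print(clf.predict(["Talk announcement: speaker bio and abstract attached"]))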
Classification Performance
–precision = 0.81
–recall = 0.66 (previous work achieved better performance)
–Top features: abstract, bio, speaker, copeta, multicast, esm, donut, talk, seminar, cmtv, broadcast, speech, distinguish, ph, lectur, ieee, approach, translat, professor, award
Annotator
Use Information Extraction techniques to identify certain types of data in the emails:
–speaker names and affiliations
–dates and times
–locations
–lecture series and titles
Annotator
Rule-based Annotator
Combine regular expressions and dictionary lookups:
defSpanType date =: ...[re('^\d\d?$') ai(dayEnd)? ai(month)]...;
matches "23rd September"
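A rough Python analogue of this rule (not the original pattern language; the lists below stand in for the rule's dayEnd and month dictionaries):

import re

# Stand-ins for the rule's dictionaries.
DAY_END = ["st", "nd", "rd", "th"]
MONTHS = ["January", "February", "March", "April", "May", "June", "July",
          "August", "September", "October", "November", "December"]

# Mirror the rule: a 1-2 digit number, an optional ordinal suffix,
# then a month name, e.g. "23rd September".
DATE_PATTERN = re.compile(
    r"\b(\d{1,2})(%s)?\s+(%s)\b" % ("|".join(DAY_END), "|".join(MONTHS)),
    re.IGNORECASE,
)

text = "The talk has been moved to 23rd September."
for m in DATE_PATTERN.finditer(text):
    print(m.group(0))  # prints: 23rd September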
Conditional Random Fields
–Probabilistic framework for labelling sequential data
–Known to outperform HMMs (by relaxing their independence assumptions) and MEMMs (by avoiding the "label bias" problem)
–Allow for multiple output features at each node in the sequence
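A minimal sequence-labelling sketch, using the sklearn-crfsuite package as an assumed stand-in (the original 2005 system would have used a different CRF implementation). It illustrates the last point above: multiple features per position, with one predicted label per token.

import sklearn_crfsuite  # pip install sklearn-crfsuite

def token_features(tokens, i):
    """Several output features per position, as CRFs allow."""
    return {
        "lower": tokens[i].lower(),
        "is_digit": tokens[i].isdigit(),
        "is_title": tokens[i].istitle(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
    }

# Tiny invented training set: token sequences with per-token labels.
train_tokens = [["Speaker", ":", "Frank", "Lin"],
                ["Talk", "on", "23rd", "September"]]
train_labels = [["O", "O", "NAME", "NAME"],
                ["O", "O", "DATE", "DATE"]]

X = [[token_features(seq, i) for i in range(len(seq))] for seq in train_tokens]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, train_labels)

test = ["Speaker", ":", "Jane", "Doe"]
print(crf.predict([[token_features(test, i) for i in range(len(test))]]))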
Rule-based vs. CRFs
–Both results are much higher than in the previous study
–For dates, times, and locations, rules are easy to write and perform extremely well
–For names, titles, affiliations, and series, rules are very difficult to write, and CRFs are preferable
Template Filler
–Creates a database record for each talk announced in the email
–This database is used by the NLP answer extractor
Filled Template
Seminar {
  title = "Keyword Translation from English to Chinese for Multilingual QA"
  name = Frank Lin
  time = 5:30pm
  date = Thursday, Sept. 23
  location = 4513 Newell Simon Hall
  affiliation =
  series =
}
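One way such a record could land in a database, sketched with SQLite; the schema is assumed, with table and column names chosen to match the SQL shown on the NL Answer Extractor slide:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE seminar_templates
                (title TEXT, name TEXT, time TEXT, date TEXT,
                 location TEXT, affiliation TEXT, series TEXT)""")
# Insert the filled template from this slide as one row.
conn.execute(
    "INSERT INTO seminar_templates VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("Keyword Translation from English to Chinese for Multilingual QA",
     "Frank Lin", "5:30pm", "Thursday, Sept. 23",
     "4513 Newell Simon Hall", "", ""),
)
print(conn.execute("SELECT location FROM seminar_templates").fetchone())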
Search Time
–Now the email is indexed
–The user can ask questions
IR Answer Extractor
–Performs a traditional IR (TF-IDF) search using the question as a query
–Determines the answer type from simple heuristics ("Where" -> LOCATION)
Example: "Where is Frank Lin's talk?"
0.5055  3451.txt  search[468:473]: "frank"  search[2025:2030]: "frank"  search[474:477]: "lin"
0.1249  2547.txt  search[580:583]: "lin"
0.0642  2535.txt  search[2283:2286]: "lin"
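A rough sketch of both steps with scikit-learn (an assumption; the slide only says the search is TF-IDF based, and the mini-index below is invented):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented mini-index of email texts.
emails = [
    "Frank Lin speaks on keyword translation, 5:30pm, 4513 Newell Simon Hall",
    "Reading group meets Friday in the lounge",
]
question = "Where is Frank Lin's talk?"

# Simple heuristic: the question word fixes the expected answer type.
ANSWER_TYPE = {"where": "LOCATION", "when": "DATE", "who": "NAME"}
qtype = ANSWER_TYPE.get(question.split()[0].lower(), "UNKNOWN")

# TF-IDF search with the question as the query.
vec = TfidfVectorizer()
doc_vectors = vec.fit_transform(emails)
scores = cosine_similarity(vec.transform([question]), doc_vectors)[0]

best = scores.argmax()
print(f"answer type {qtype}; best match ({scores[best]:.2f}): {emails[best]}")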
IR Answer Extractor
NL Question Analyzer
Uses the Tomita parser to fully parse questions and translate them into a structured query language:
"Where is Frank Lin's talk?"
((FIELD LOCATION) (FILTER (NAME "FRANK LIN")))
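The sketch below shows only the question-to-query mapping; a single regex stands in for the full Tomita (GLR) parse, and the tuple encoding of the query is invented:

import re

def analyze(question: str):
    """Map one question shape to the structured query form on this slide."""
    m = re.match(r"Where is (.+?)'s talk\?", question, re.IGNORECASE)
    if m:
        return ("FIELD", "LOCATION"), ("FILTER", ("NAME", m.group(1).upper()))
    return None

print(analyze("Where is Frank Lin's talk?"))
# -> (('FIELD', 'LOCATION'), ('FILTER', ('NAME', 'FRANK LIN')))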
NL Answer Extractor
Simply executes the structured query produced by the Question Analyzer:
((FIELD LOCATION) (FILTER (NAME "FRANK LIN")))
select LOCATION from seminar_templates where NAME = 'FRANK LIN';
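Executing the query is then mechanical. A sketch against the SQLite table from the template-filler example, reusing the invented tuple encoding from the previous sketch, with a parameterized query in place of the literal SQL:

import sqlite3

def execute_structured_query(conn, query):
    """Translate ((FIELD f) (FILTER (g v))) into SQL and run it."""
    (_, field), (_, (filter_field, value)) = query
    sql = f"SELECT {field} FROM seminar_templates WHERE {filter_field} = ?"
    return conn.execute(sql, (value,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE seminar_templates (name TEXT, location TEXT)")
conn.execute("INSERT INTO seminar_templates VALUES (?, ?)",
             ("FRANK LIN", "4513 Newell Simon Hall"))

query = (("FIELD", "LOCATION"), ("FILTER", ("NAME", "FRANK LIN")))
print(execute_structured_query(conn, query))  # [('4513 Newell Simon Hall',)]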
Results
–NL Answer Extractor -> 0.870
–IR Answer Extractor -> 0.755
Results
Both answer extractors have similar (good) performance.
IR-based extractor:
–easy to implement (1-2 days)
–better on questions with titles and names
–very bad on yes/no questions
NLP-based extractor:
–more difficult to implement (4-5 days)
–better on questions with dates and times
Examples
"Where is the lecture on dolphin language?"
–NLP Answer Extractor: fails to find any talk
–IR Answer Extractor: finds the correct talk
–Actual title: "Natural History and Communication of Spotted Dolphin, Stenella Frontalis, in the Bahamas"
"Who is speaking on September 10?"
–NLP Extractor: finds the correct record(s)
–IR Extractor: extracts the wrong answer
–A talk at "10 am, November 10" ranks higher than one on "Sept 10th"
Future Work
–Add an annotation "feedback loop" for the classifier
–Add a planner module to decide which answer extractor to apply to each individual question
–Tune parameters for the classifier and the TF-IDF search engine
–Integrate into a mail client!
Conclusions
–Overall performance is good enough for the system to be helpful to end users
–Both rule-based and automatic annotators should be used, but for different types of annotations
–Both IR-based and NLP-based answer extractors should be used, but for different types of questions
DEMO