
Discovery of Inference Rules for Question Answering


1 Discovery of Inference Rules for Question Answering
Dekang Lin and Patrick Pantel, Natural Language Engineering 7(4), 2001; as (mis-)interpreted by Peter Clark

2 Goal
Observation: “mismatch” between expressions in questions and text
- e.g. “X writes Y” vs. “X is the author of Y”
Need “inference rules” (aka “paraphrases”, “variants”) to answer questions:
- “X writes Y” → “X is the author of Y”
- “X manufactures Y” → “X’s Y factory”
Question: Can we learn these inference rules from text?
DIRT (Discovery of Inference Rules from Text)

3 The limits of word search…
A. Who is the author of the ‘Star Spangled Banner’?
- …Francis Scott Key wrote the “Star Spangled Banner” in 1814.
- …comedian-actress Roseanne Barr sang her famous shrieking rendition of the “Star Spangled Banner” before a San Diego Padres-Cincinnati Reds game.
B. What does Peugeot manufacture?
- Chrétien visited Peugeot’s newly renovated car factory in the afternoon.

4 Approach
- Parse sentences in a giant (1GB) corpus
- Extract instantiated “paths” from the parse trees, e.g.:
  - X buys something from Y
  - X manufactures Y
  - X’s Y factory
- For each path, collect the sets of X’s and Y’s
- For a given path (pattern), find other paths where the X’s and Y’s are similar

5 Approach
Parse sentences in a giant (1GB) corpus, then:
- Extract “paths” from the parse tree, e.g. X buys something from Y; X manufactures Y; X’s Y factory
- Collect statistics on what the X’s and Y’s are
- Compare the X-Y sets: for a given path (pattern), find other paths where the X’s and Y’s are similar

6 Results (examples)

7 Method: 1. Parse Corpus
- 1GB newspaper (Reuters?) corpus
- Use MiniPar: a chart parser with self-trained statistical ranking of the parse (“dependency”) trees

8 Method: 2. Extract “paths”
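The slide’s figure is not reproduced in this transcript. As a toy sketch of the idea (the hand-written parse edges and the `extract_path` helper are illustrative assumptions, not DIRT’s actual implementation, which walks MiniPar dependency trees):

```python
# Toy sketch of step 2: turning a dependency parse into a DIRT-style path.
# The parse edges below are hand-written; DIRT obtains them from MiniPar.

def extract_path(edges, x, y):
    """Return the dependency path between fillers x and y, with the fillers
    replaced by slot variables X and Y. Simplified: assumes both slots hang
    off a single head word, as in "Peugeot manufactures cars"."""
    head_x = next(h for h, rel, d in edges if d == x)
    head_y = next(h for h, rel, d in edges if d == y)
    if head_x == head_y:  # both slots attach to the same predicate
        rel_x = next(rel for h, rel, d in edges if d == x)
        rel_y = next(rel for h, rel, d in edges if d == y)
        return f"X <-{rel_x}- {head_x} -{rel_y}-> Y"
    return None  # longer paths omitted in this sketch

# (head, relation, dependent) edges for "Peugeot manufactures cars"
edges = [("manufactures", "subj", "Peugeot"), ("manufactures", "obj", "cars")]
print(extract_path(edges, "Peugeot", "cars"))
```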

9 Method: 3. Collect the X’s and Y’s
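This collection step can be sketched in a few lines, assuming paths have already been extracted from the parses (the triples below are made-up examples, not the paper’s data):

```python
# Sketch of step 3: for each path, tally which words fill each slot.
from collections import defaultdict

# (path, slot, filler) triples as they would come out of the parsed corpus
triples = [
    ("X manufactures Y", "X", "Peugeot"),
    ("X manufactures Y", "Y", "cars"),
    ("X's Y factory", "X", "Peugeot"),
    ("X's Y factory", "Y", "car"),
    ("X writes Y", "X", "Key"),
    ("X writes Y", "Y", "anthem"),
]

# fillers[path][(slot, word)] = frequency of word in that slot of that path
fillers = defaultdict(lambda: defaultdict(int))
for path, slot, word in triples:
    fillers[path][(slot, word)] += 1

print(dict(fillers["X manufactures Y"]))
```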

12 Method: 4. Compare the X-Y sets
1. Characterizing a single X-Y set:
- Count the frequencies of the words filling X (and of those filling Y)
- Weight each word by its ‘saliency’ (mutual information between the slot and the filler word)

13 Method: 4. Compare the X-Y sets
2. Comparing two X-Y sets:
- Two paths have high similarity if they share a large number of common features (slot fillers)
- Mathematically:

14 Example: Learned inference rules

15 Example: DIRT rules vs. hand-crafted inference rules (by ISI)

16 Results

17 Observations
- Little overlap between the manual and the automatic rules
- DIRT performance varies a lot:
  - Much better with verb roots than with noun roots
  - If there are fewer than 2 modifiers, no paths are found
- For some TREC examples, no “correct” rules were found, e.g. “X leaves Y” → “X flees Y”
- Where the X’s and Y’s are similar, the agent and patient can come out the wrong way round, e.g. “X asks Y” vs. “Y asks X”

18 The Big Question
Can we acquire the vast amount of common-sense knowledge we need from text? Lin and Pantel suggest: “yes” (at least in a semi-automated way)

