Discovery of Inference Rules for Question Answering


Discovery of Inference Rules for Question Answering Dekang Lin and Patrick Pantel Natural Language Engineering 7(4):343-360, 2001 as (mis-)interpreted by Peter Clark

Goal
Observation: there is a "mismatch" between the expressions used in questions and those used in text, e.g. "X writes Y" vs. "X is the author of Y".
We need "inference rules" (aka "paraphrases" or "variants") to answer questions:
"X writes Y" ↔ "X is the author of Y"
"X manufactures Y" ↔ "X's Y factory"
Question: can we learn these inference rules from text?
DIRT (Discovering Inference Rules from Text)

The limits of word search…
A. Who is the author of the "Star Spangled Banner"?
…Francis Scott Key wrote the "Star Spangled Banner" in 1814.
…comedian-actress Roseanne Barr sang her famous shrieking rendition of the "Star Spangled Banner" before a San Diego Padres-Cincinnati Reds game.
B. What does Peugeot manufacture?
Chrétien visited Peugeot's newly renovated car factory in the afternoon.

Approach
Parse the sentences in a giant (1 GB) corpus.
Extract instantiated "paths" from the parse trees, e.g.:
"X buys something from Y"
"X manufactures Y"
"X's Y factory"
For each path, collect statistics on the sets of X's and Y's that fill its slots.
For a given path (pattern), find other paths whose X's and Y's are similar.
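The approach above can be sketched on toy data: for each path, collect the sets of fillers for its X and Y slots, then look for paths whose filler sets overlap. The triples, paths, and fillers below are made-up illustrations, not output from the actual DIRT system.

```python
from collections import defaultdict

# Toy triples as they might come out of a parser: (path, X-filler, Y-filler).
# All paths and fillers here are invented for illustration.
triples = [
    ("X writes Y", "Key", "anthem"),
    ("X writes Y", "Barr", "memoir"),
    ("X is author of Y", "Key", "anthem"),
    ("X is author of Y", "Twain", "novel"),
    ("X manufactures Y", "Peugeot", "car"),
]

# For each path, collect the sets of X's and Y's.
slots = defaultdict(lambda: {"X": set(), "Y": set()})
for path, x, y in triples:
    slots[path]["X"].add(x)
    slots[path]["Y"].add(y)

# Paths whose filler sets overlap are candidate paraphrases.
shared_x = slots["X writes Y"]["X"] & slots["X is author of Y"]["X"]
print(shared_x)  # {'Key'}
```

On real data the overlap would be scored statistically rather than by raw set intersection, as the later slides describe.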

Results (examples)

Method: 1. Parse Corpus
1 GB newspaper (Reuters?) corpus.
Parsed with MiniPar: a chart parser with self-trained statistical ranking of the ("dependency") parse trees.

Method: 2. Extract “paths”
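The figures for this step did not survive transcription, so here is a toy sketch of the idea: given a minimal dependency parse, build a slot-annotated path between the subject and the object. The triple format and path notation are hypothetical, loosely in the spirit of MiniPar output rather than its real format.

```python
# A toy dependency parse as (head, relation, dependent) triples --
# a hypothetical format, not real MiniPar output.
deps = [
    ("manufactures", "subj", "Peugeot"),
    ("manufactures", "obj", "cars"),
]

def extract_path(deps):
    """Replace the subject and object fillers with slot variables X and Y,
    yielding a reusable path plus the concrete fillers."""
    verb = deps[0][0]
    x = next(d for h, r, d in deps if r == "subj")
    y = next(d for h, r, d in deps if r == "obj")
    path = f"X <-subj- {verb} -obj-> Y"
    return path, x, y

path, x, y = extract_path(deps)
print(path, x, y)
```

A real system would walk arbitrary dependency chains between any two nouns, not just direct subject/object arcs.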

Method: 3. Collect the X’s and Y’s

Method: 4. Compare the X-Y sets  ?…

Method: 4. Compare the X-Y sets  ? …and

Method: 4. Compare the X-Y sets
1. Characterizing a single X-Y set:
Count the frequencies of the words filling the X slot (and the Y slot).
Weight each word by its 'saliency' (the mutual information between the slot and the word).
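The saliency weighting can be sketched as pointwise mutual information between a (path, slot) pair and a filler word, following the shape of the formula in Lin & Pantel's paper. The counts below are invented for illustration.

```python
import math

# Toy counts: how often word w fills slot s of path p. Invented numbers.
counts = {
    ("X writes Y", "X", "Key"): 4,
    ("X writes Y", "X", "Barr"): 1,
    ("X manufactures Y", "X", "Peugeot"): 3,
    ("X manufactures Y", "X", "Barr"): 1,
}

def mi(path, slot, word):
    """Pointwise mutual information between (path, slot) and a filler word,
    in the shape of Lin & Pantel's formula:
    mi = log( |p,s,w| * |*,s,*| / (|p,s,*| * |*,s,w|) )."""
    psw = counts.get((path, slot, word), 0)
    if psw == 0:
        return 0.0
    ps_ = sum(c for (p, s, w), c in counts.items() if p == path and s == slot)
    _sw = sum(c for (p, s, w), c in counts.items() if s == slot and w == word)
    _s_ = sum(c for (p, s, w), c in counts.items() if s == slot)
    return math.log(psw * _s_ / (ps_ * _sw))

print(round(mi("X writes Y", "X", "Key"), 3))  # 0.588
```

A high MI means the word fills that slot of that path more often than chance, so frequent-but-uninformative fillers get downweighted.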

Method: 4. Compare the X-Y sets
2. Comparing two X-Y sets:
Two paths have high similarity if they share a large number of common features.
Mathematically (per Lin & Pantel): sim(p1, p2, s) = Σ over shared fillers w of [mi(p1,s,w) + mi(p2,s,w)], divided by [Σ over all fillers of mi(p1,s,w) + Σ over all fillers of mi(p2,s,w)]; the overall path similarity is the geometric mean of the X-slot and Y-slot similarities.
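The comparison can be sketched as follows, assuming precomputed MI weights for each path's slot fillers (the weights here are made up): slot similarity is the total weight of shared fillers over the total weight of all fillers, and path similarity is the geometric mean of the two slot similarities.

```python
import math

# Hypothetical precomputed saliency (MI) weights for each path's slot
# fillers; the words and numbers are invented for illustration.
mi_writes = {"X": {"Key": 0.9, "Twain": 0.7}, "Y": {"anthem": 0.8, "novel": 0.6}}
mi_author = {"X": {"Key": 0.8, "Dickens": 0.5}, "Y": {"anthem": 0.7, "essay": 0.4}}

def slot_sim(a, b):
    """Slot similarity in the shape of Lin & Pantel's formula:
    weights of shared fillers over weights of all fillers."""
    common = set(a) & set(b)
    num = sum(a[w] + b[w] for w in common)
    den = sum(a.values()) + sum(b.values())
    return num / den

def path_sim(p1, p2):
    """Path similarity: geometric mean of X-slot and Y-slot similarities."""
    return math.sqrt(slot_sim(p1["X"], p2["X"]) * slot_sim(p1["Y"], p2["Y"]))

score = path_sim(mi_writes, mi_author)
print(round(score, 3))
```

Requiring both slots to agree (via the geometric mean) keeps one coincidentally shared slot from inflating the score.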

Example: Learned Inference rules

Example: comparison with hand-crafted inference rules (from ISI)

Results

Observations
Little overlap between the manual and the automatically learned rules.
DIRT performance varies a lot:
much better with verb roots than with noun roots;
if there are fewer than 2 modifiers, no paths are found.
For some TREC examples, no "correct" rules were found (e.g. the needed rule "X leaves Y" ↔ "X flees Y").
Where the X's and Y's are similar, DIRT can get agent and patient the wrong way round, e.g. "X asks Y" vs. "Y asks X".

The Big Question
Can we acquire the vast amount of common-sense knowledge we need from text?
Lin and Pantel suggest: "yes" (at least in a semi-automated way).