Toward Dependency Path based Entailment Rodney Nielsen, Wayne Ward, and James Martin.

Presentation transcript:

Toward Dependency Path based Entailment
Rodney Nielsen, Wayne Ward, and James Martin

Why Entailment?
- Intelligent Tutoring Systems: student interaction analysis
- Are all aspects of the student's answer entailed by the text and the gold-standard answer?
- Are all aspects of the desired answer entailed by the student's response?

Dependency Path-based Entailment
- DIRT (Lin and Pantel, 2001): an unsupervised method to discover inference rules
  - "X is author of Y ≈ X wrote Y"
  - "X solved Y ≈ X found a solution to Y"
- Based on Harris' Distributional Hypothesis: words occurring in the same contexts tend to be similar
- If two dependency paths tend to link the same sets of words, they hypothesize that their meanings are similar
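
A minimal sketch of the DIRT intuition, not the authors' implementation: two dependency paths score as similar when they tend to link the same X and Y filler words. The example paths, the filler pairs, and the Jaccard-style overlap below are illustrative assumptions (DIRT itself scores slot fillers with pointwise mutual information).

```python
def jaccard(a, b):
    """Overlap of two filler sets; DIRT uses pointwise mutual information,
    so this simpler measure is only a stand-in for illustration."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical slot-filler observations: path -> list of (X, Y) word pairs
# harvested from a parsed corpus.
slot_fillers = {
    "X wrote Y":        [("Orwell", "1984"), ("Austen", "Emma"), ("Twain", "essays")],
    "X is author of Y": [("Orwell", "1984"), ("Austen", "Emma"), ("King", "novels")],
}

def path_similarity(p1, p2, fillers):
    """Score two paths by how often they link the same X and Y words."""
    x1 = [x for x, _ in fillers[p1]]
    x2 = [x for x, _ in fillers[p2]]
    y1 = [y for _, y in fillers[p1]]
    y2 = [y for _, y in fillers[p2]]
    return (jaccard(x1, x2) + jaccard(y1, y2)) / 2

print(path_similarity("X wrote Y", "X is author of Y", slot_fillers))
```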

ML Classification Approach
- Features derived from corpus statistics:
  - Unigram co-occurrence
  - Surface-form bigram co-occurrence
  - Dependency-derived bigram co-occurrence
- Mixture of experts: about 18 ML classifiers from the Weka toolkit
- Classify by majority vote or average probability
- [Slide diagram: Bag of Words – Dependency Path Based Entailment – Graph Matching]
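
A hedged sketch of the mixture-of-experts combination described above, using scikit-learn's VotingClassifier as a stand-in for the roughly 18 Weka classifiers; the choice of base classifiers and the placeholder data are assumptions.

```python
# Sketch of the mixture-of-experts idea with scikit-learn in place of Weka
# (the slide's ~18 Weka classifiers are replaced by three illustrative ones).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

experts = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(n_estimators=100)),
    ("nb", GaussianNB()),
]

# voting="hard" = majority vote; voting="soft" = average of class probabilities,
# the two combination rules named on the slide.
hard_vote = VotingClassifier(experts, voting="hard")
soft_vote = VotingClassifier(experts, voting="soft")

# X: corpus-statistics features per (text, hypothesis) pair; y: entailment labels.
X = np.random.rand(40, 5)          # placeholder feature matrix
y = np.random.randint(0, 2, 40)    # placeholder 0/1 entailment labels
print(soft_vote.fit(X, y).predict(X[:3]))
```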

Corpora
- 7.4M articles, 2.5B words, 347 words/doc
- Gigaword (Graff, 2003) – 77% of documents
- Reuters Corpus (Lewis et al., 2004)
- TIPSTER
- Lucene IR engine, two indices:
  - Word surface form
  - Porter stem filter
- Stop words = {a, an, the}
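
A minimal sketch of the two token streams implied by this slide (surface forms vs. Porter stems, with the three-word stop list), using NLTK's PorterStemmer in place of Lucene's analyzers; the tokenizer and function names are assumptions.

```python
# Surface-form terms and Porter-stemmed terms, both with the stop list
# {a, an, the}. NLTK stands in for Lucene's analysis chain here.
import re
from nltk.stem import PorterStemmer

STOP_WORDS = {"a", "an", "the"}
stemmer = PorterStemmer()

def tokenize(text):
    return [w for w in re.findall(r"[a-z0-9]+", text.lower()) if w not in STOP_WORDS]

def surface_index_terms(text):
    return tokenize(text)

def stemmed_index_terms(text):
    return [stemmer.stem(w) for w in tokenize(text)]

doc = "Newspapers choke on rising paper costs and falling revenue."
print(surface_index_terms(doc))
print(stemmed_index_terms(doc))
```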

Word Alignment Features
- Unigram word alignment

Core Features
Core repeated features:
- Product of probabilities
- Average of probabilities
- Geometric mean of probabilities
- Worst non-zero probability
- Entailing n-grams for the lowest non-zero probability
- Largest entailing n-gram count with a zero probability
- Smallest entailing n-gram count with a non-zero probability
- Count of n-grams in h that do not co-occur with any n-grams from t
- Count of n-grams in h that do co-occur with n-grams in t
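
A sketch of how several of these aggregate features could be computed from the per-n-gram entailment probabilities; the handling of zero probabilities (e.g., in the geometric mean) is an assumption, and the input probabilities are made up.

```python
# Aggregations over per-n-gram entailment probabilities p(h n-gram | t),
# illustrating a subset of the "core repeated features" listed above.
import math

def core_features(probs):
    nonzero = [p for p in probs if p > 0]
    return {
        "product":        math.prod(probs),
        "average":        sum(probs) / len(probs),
        # geometric mean over the non-zero probabilities only (assumption)
        "geometric_mean": math.exp(sum(math.log(p) for p in nonzero) / len(nonzero)) if nonzero else 0.0,
        "worst_nonzero":  min(nonzero) if nonzero else 0.0,
        "num_zero":       sum(1 for p in probs if p == 0),   # h n-grams never co-occurring with t
        "num_nonzero":    len(nonzero),                      # h n-grams co-occurring with t
    }

print(core_features([0.17, 0.42, 0.0, 0.9]))
```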

Word Alignment Features
- Bigram word alignment
- Example:
  - Text t: Newspapers choke on rising paper costs and falling revenue.
  - Hypothesis h: The cost of paper is rising.
  - MLE(cost, t) = n("cost of", "costs of") / n("costs of") = 6086 / 35800 = 0.17
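
A small sketch of the MLE computation in the example above, assuming the counts n(·) are document co-occurrence counts retrieved from the corpus indices; the function name and that interpretation of the counts are assumptions.

```python
# MLE(h_bigram, t) = n(h_bigram, t_bigram) / n(t_bigram): of the documents
# containing the text-side bigram, the fraction that also contain the
# hypothesis bigram. The counts are the slide's example numbers; in the full
# system they would come from the Lucene indices.

def bigram_mle(n_cooccur, n_text_bigram):
    """n_cooccur: docs containing both bigrams; n_text_bigram: docs containing the text bigram."""
    return n_cooccur / n_text_bigram if n_text_bigram else 0.0

# Hypothesis bigram "cost of" aligned against text-side bigram "costs of"
# (example from the slide).
print(bigram_mle(6086, 35800))   # ≈ 0.17
```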

Dependency Features
- Dependency bigram features
- [Slide figure: dependency graphs for hypothesis h "The cost of paper is rising" and text t "Newspapers choke on rising paper costs and falling revenue"]
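
A sketch of dependency-bigram extraction, where each head-modifier edge of the parse becomes a feature unit; spaCy (with its small English model installed) stands in here for whatever parser the original system used.

```python
# Each (head word, dependent word, relation) edge of the parse is treated as a
# "dependency-derived bigram" for the co-occurrence features.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the model has been downloaded

def dependency_bigrams(sentence):
    doc = nlp(sentence)
    return [(tok.head.text.lower(), tok.text.lower(), tok.dep_)
            for tok in doc if tok.dep_ != "ROOT"]

for bigram in dependency_bigrams("The cost of paper is rising."):
    print(bigram)
```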

Dependency Features
- Descendent relation statistics
- [Slide figure, repeated over several animation steps: dependency graphs for hypothesis h and text t]

Verb Dependency Features
- Combined verb descendent relation features
- Worst verb descendent relation features
- [Slide figure: dependency graphs for hypothesis h and text t]

Subject Dependency Features
- Combined and worst subject descendent relations
- Combined and worst subject-to-verb paths
- [Slide figure: dependency graphs for hypothesis h and text t]
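
A sketch of extracting subject-to-verb dependency paths of the kind listed above; spaCy again stands in for the original parser, and the path representation (relation labels joined from the subject up to its governing verb) is an illustrative assumption.

```python
# Walk from each nominal subject up to its governing verb, collecting the
# dependency labels along the way.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the model has been downloaded

def subject_to_verb_paths(sentence):
    doc = nlp(sentence)
    paths = []
    for tok in doc:
        if tok.dep_ in ("nsubj", "nsubjpass"):
            hops = [tok.dep_]
            node = tok.head
            while node.pos_ != "VERB" and node.head is not node:
                hops.append(node.dep_)
                node = node.head
            paths.append((tok.text, node.text, "<-".join(hops)))
    return paths

print(subject_to_verb_paths("The cost of paper is rising."))
```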

Other Dependency Features
Repeat these same features for:
- Object
- pcomp-n
- Other descendent relations

Results
- RTE2 accuracy and average precision by task (IE, IR, QA, SUM, Overall) [values not preserved in the transcript]
- RTE2 accuracy (SUM, NonSUM, Overall) on the test set and training-set cross-validation [values not preserved in the transcript]
- RTE1 accuracy (CD, NonCD, Overall), with the best RTE1 submission in parentheses:
  - Test set: CD 83.3 (83.3), NonCD 56.8 (52.8), Overall 61.8 (58.6)
  - Training-set cross-validation [values not preserved in the transcript]

Feature Analysis
- All feature sets are contributing, according to cross-validation on the training set
- Most significant feature set: unigram stem-based word alignment
- Most significant core repeated feature: average probability

Conclusions
- While our current dependency path features are only a step in the direction of our proposed inference system, they provided a significant improvement over the best results from the first PASCAL Recognizing Textual Entailment challenge (RTE1).
- Our system (after fixing a couple of bugs) ranked 6th in accuracy and 4th in average precision out of 23 entrants at this year's RTE2 challenge.
- We believe our proposed system will provide an effective foundation for the detailed assessment of students' responses to an intelligent tutor.

Questions
- Mixture of experts classifier using corpus co-occurrence statistics
- Moving in the direction of DIRT
- Domain of interest: student response analysis in intelligent tutoring systems
- [Slide recap: the RTE2 and RTE1 results tables, the Bag of Words / Dependency Path Based Entailment / Graph Matching diagram, and the dependency graphs for hypothesis h and text t from earlier slides]