Slide 1: Novelty and Redundancy Detection in Adaptive Filtering
Yi Zhang, Jamie Callan, Thomas Minka
Carnegie Mellon University
{yiz, callan, minka}@cs.cmu.edu

Slide 2: Outline
Introduction: task definition and related work
Building a filtering system
– Filtering system structure
– Redundancy measures
Experimental methodology
– Creating testing datasets
– Evaluation measures
Experimental results
Conclusion and future work

Slide 3: Task Definition
What users want in adaptive filtering: relevant & novel information, as soon as the document arrives
Current filtering systems are relevance-oriented
– Optimization: deliver as much relevant information as possible
– Evaluation: recall/precision on relevant documents; the system gets credit for relevant but redundant information

Slide 4: Relation to First Story Detection in TDT
No prior work on novelty detection in adaptive filtering
Current research on FSD in TDT:
– Goal: identify the first story of an event
– Current performance: far from solved
FSD in TDT != novelty detection while filtering
– Different assumptions about what counts as redundant
– Unsupervised learning vs. supervised learning
– Novelty detection in filtering works within a user-specified domain, and user information is available

Slide 5: Outline
Introduction: task definition and related work
Building a filtering system
– Filtering system structure
– Redundancy measures
Experimental methodology
– Creating testing datasets
– Evaluation measures
Experimental results
Conclusion and future work

Slide 6: Relevancy vs. Novelty
User wants: relevant and novel information
Contradiction?
– Relevant: deliver documents similar to previously delivered relevant documents
– Novel: deliver documents not similar to previously delivered relevant documents
Solution: a two-stage system
– Use different similarity measures to model relevancy and novelty

Slide 7: Two-Stage Filtering System
[Diagram] Stream of documents -> Relevance Filtering -> Redundancy Filtering -> delivered if novel, withheld if redundant
(A minimal pipeline sketch follows this slide.)

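A minimal Python sketch of the two-stage flow above. The scoring functions (is_relevant, redundancy_score) and the threshold value are illustrative assumptions, not the system's actual models.

    # Two-stage filter: relevance first, then redundancy. The scoring functions
    # here are simple placeholders, not the measures described in the paper.
    def is_relevant(doc, profile):
        # Stage 1 placeholder: keep documents sharing any profile keyword.
        return bool(set(doc.lower().split()) & profile["keywords"])

    def redundancy_score(doc, delivered_docs):
        # Stage 2 placeholder: highest word overlap with any previously delivered document.
        words = set(doc.lower().split())
        return max((len(words & set(d.lower().split())) / max(len(words), 1)
                    for d in delivered_docs), default=0.0)

    def filter_stream(stream, profile, redundancy_threshold=0.8):
        delivered = []
        for doc in stream:
            if not is_relevant(doc, profile):
                continue                      # dropped by the relevance filter
            if redundancy_score(doc, delivered) < redundancy_threshold:
                delivered.append(doc)         # novel: deliver to the user
            # else: relevant but redundant, withheld
        return delivered

    docs = ["stock market falls sharply", "stock market falls sharply today",
            "new vaccine approved"]
    print(filter_stream(docs, {"keywords": {"stock", "market"}}))
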
Slide 8: Two Problems for Novelty Detection
Input:
– The sequence of documents the user has read
– User feedback
Redundancy measure (our current focus):
– Measures the redundancy of the current document with respect to previous documents
– Profile-specific, any-time updating of the redundancy/novelty measure
Thresholding:
– Only a document with a redundancy score below the threshold is considered novel

Slide 9: Redundancy Measures
Use the similarity/distance/difference between two documents to measure redundancy
Three types of document representation:
– Set difference
– Geometric distance (cosine similarity)
– Distributional similarity (language models)

Slide 10: Set Difference
Main idea:
– Boolean bag-of-words representation
– Use smoothing to add frequent words to the document representation
Algorithm (a sketch follows this slide):
– w_j ∈ Set(d) iff Count(w_j, d) > k, where Count(w_j, d) = λ1 * tf(w_j, d) + λ3 * rdf(w_j) + λ2 * df(w_j)
– Use the number of new words in d_t to measure novelty: R(d_t | d_i) = -|Set(d_t) \ Set(d_i)|

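A hedged sketch of the set-difference measure. The weights and the threshold k are placeholders, since the slide's exact coefficient values did not survive the transcript; df and rdf are assumed to be document-frequency statistics (over the corpus and over delivered relevant documents) supplied from outside.

    # Set-difference redundancy: fewer new words in d_t relative to d_i => more redundant.
    from collections import Counter

    def doc_set(doc, df, rdf, k=0.5, lam1=1.0, lam2=0.5, lam3=0.5):
        """Smoothed boolean representation: w is in Set(d) iff Count(w, d) > k."""
        tf = Counter(doc)
        vocab = set(doc) | set(df)                 # smoothing can add frequent unseen words
        return {w for w in vocab
                if lam1 * tf[w] + lam2 * df.get(w, 0) + lam3 * rdf.get(w, 0) > k}

    def set_diff_redundancy(d_t, d_i, df, rdf):
        """R(d_t | d_i) = -(number of words in Set(d_t) but not in Set(d_i))."""
        return -len(doc_set(d_t, df, rdf) - doc_set(d_i, df, rdf))

    df = {"the": 0.9, "stock": 0.3}                # illustrative corpus document frequencies
    rdf = {"stock": 0.6}                           # illustrative delivered-relevant frequencies
    print(set_diff_redundancy("stock market falls".split(),
                              "stock market rises".split(), df, rdf))   # -1
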
Slide 11: Geometric Distance
Main idea:
– Basic vector space approach
Algorithm (a sketch follows this slide):
– Represent a document as a vector whose weight in each dimension is the tf*idf score of the corresponding word
– Use cosine similarity to measure redundancy: R(d_t | d_i) = Cosine(d_t, d_i)

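A small sketch of the cosine measure, assuming idf values estimated from some background corpus.

    # Cosine redundancy over tf*idf vectors: higher similarity means more redundant.
    import math
    from collections import Counter

    def tfidf(doc, idf):
        tf = Counter(doc)
        return {w: tf[w] * idf.get(w, 1.0) for w in tf}

    def cosine(u, v):
        dot = sum(u[w] * v.get(w, 0.0) for w in u)
        norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
        return dot / norm if norm else 0.0

    def cosine_redundancy(d_t, d_i, idf):
        """R(d_t | d_i) = Cosine(d_t, d_i)."""
        return cosine(tfidf(d_t, idf), tfidf(d_i, idf))

    idf = {"stock": 2.0, "market": 1.5, "falls": 1.2}      # illustrative idf values
    print(cosine_redundancy("stock market falls".split(),
                            "stock market falls again".split(), idf))
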
Slide 12: Distributional Similarity (1)
Main idea:
– Unigram language models
Algorithm (a sketch follows this slide):
– Represent a document d as a word distribution θ_d
– Measure the redundancy/novelty between two documents with the Kullback-Leibler (KL) divergence of the corresponding distributions: R(d_t | d_i) = -KL(θ_dt, θ_di)

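A minimal sketch of the negative-KL redundancy score, assuming both distributions have already been smoothed over a shared vocabulary (the next slide covers smoothing) so the divergence stays finite.

    # Redundancy as negative KL divergence between two unigram language models.
    import math

    def neg_kl_redundancy(theta_dt, theta_di):
        """R(d_t | d_i) = -KL(theta_dt || theta_di); values closer to 0 mean more redundant."""
        return -sum(p * math.log(p / theta_di[w]) for w, p in theta_dt.items() if p > 0)

    theta_dt = {"stock": 0.5, "market": 0.3, "falls": 0.2}   # toy smoothed distributions
    theta_di = {"stock": 0.4, "market": 0.4, "falls": 0.2}
    print(neg_kl_redundancy(theta_dt, theta_di))
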
Slide 13: Distributional Similarity (2): Smoothing
Why smoothing:
– The maximum-likelihood estimate of θ_d makes KL(θ_dt, θ_di) infinite because of unseen words
– It makes the language-model estimate more accurate
Smoothing algorithms for θ_d (a Dirichlet-prior sketch follows this slide):
– Bayesian smoothing using Dirichlet priors (Zhai & Lafferty, SIGIR 2001)
– Smoothing using shrinkage (McCallum, ICML 1998)
– A mixture-model-based smoothing

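A sketch of Dirichlet-prior smoothing in the style of Zhai & Lafferty: the document model blends the observed counts with a background (collection) model. The prior mass mu and the background model are assumed inputs.

    # Dirichlet-prior smoothing for a document language model theta_d.
    from collections import Counter

    def dirichlet_smoothed_lm(doc, p_bg, mu=1000.0):
        # If p_bg sums to 1 over the vocabulary, the result is a proper distribution.
        tf, n = Counter(doc), len(doc)
        return {w: (tf[w] + mu * p_bg[w]) / (n + mu) for w in p_bg}

    p_bg = {"stock": 0.3, "market": 0.3, "falls": 0.2, "the": 0.2}   # toy background model
    print(dirichlet_smoothed_lm("stock market falls".split(), p_bg))
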
Slide 14: A Mixture Model: Relevancy vs. Novelty
[Diagram] A document is modeled as a mixture of three unigram language models:
– M_T: the topic model θ_T
– M_E: the general English model θ_E
– M_I: the new-information model θ_d_core
Relevancy detection: focus on learning θ_T
Redundancy detection: focus on learning θ_d_core
(An EM sketch follows this slide.)

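A hedged EM sketch for this mixture: the topic model and the general-English model are held fixed while EM estimates the document's new-information model and the mixing weights. This is an illustrative reconstruction, not necessarily the paper's exact estimation procedure.

    # EM for a three-component word mixture: theta_T (topic) and theta_E (general
    # English) are fixed; theta_core (new information) and the weights lam are learned.
    from collections import Counter

    def fit_core_model(doc, theta_T, theta_E, iters=20):
        counts = Counter(doc)
        vocab = list(counts)
        lam = {"T": 1 / 3, "E": 1 / 3, "core": 1 / 3}        # mixing weights
        theta_core = {w: 1 / len(vocab) for w in vocab}      # uniform initialization
        for _ in range(iters):
            # E-step: posterior responsibility of each component for each word type
            resp = {}
            for w in vocab:
                p = {"T": lam["T"] * theta_T.get(w, 1e-9),
                     "E": lam["E"] * theta_E.get(w, 1e-9),
                     "core": lam["core"] * theta_core[w]}
                z = sum(p.values())
                resp[w] = {c: v / z for c, v in p.items()}
            # M-step: re-estimate the mixing weights and the new-information model
            total = sum(counts.values())
            expected = {c: sum(counts[w] * resp[w][c] for w in vocab) for c in lam}
            lam = {c: v / total for c, v in expected.items()}
            theta_core = {w: counts[w] * resp[w]["core"] / max(expected["core"], 1e-12)
                          for w in vocab}
        return theta_core, lam
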
Slide 15: Outline
Introduction: task definition and related work
Building a filtering system
– Filtering system structure
– Redundancy measures
Experimental methodology
– Creating testing datasets
– Evaluation measures
Experimental results
Conclusion and future work

Slide 16: A New Evaluation Dataset: APWSJ
Combined the 1988-1990 AP and WSJ collections to get a corpus likely to contain redundant documents
Hired undergraduates to read all relevant documents, chronologically sorted, and asked them to judge:
– whether a document is redundant
– if yes, which set of earlier documents makes it redundant
Two degrees of redundancy: absolutely redundant vs. somewhat redundant
Adjudicated by two assessors

Slide 17: Another Evaluation Dataset: TREC Interactive Data
Combined the TREC-6, TREC-7, and TREC-8 interactive datasets (20 TREC topics)
Each topic contains several aspects
NIST assessors identified the aspects covered by each document
Assume d_t is redundant if all aspects related to d_t have already been covered by documents the user has seen (a sketch of this rule follows this slide)
– A strong assumption about what is novel/redundant
– Can still provide useful information

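A small sketch of the aspect-coverage redundancy rule above, assuming each document's aspect judgments are given as a set of identifiers.

    # A document is treated as redundant iff every aspect it covers was already
    # covered by documents the user has seen.
    def is_redundant(doc_aspects, seen_aspects):
        return bool(doc_aspects) and doc_aspects <= seen_aspects

    seen = set()
    stream = [{"id": "d1", "aspects": {"a1", "a2"}},
              {"id": "d2", "aspects": {"a2"}},           # redundant: a2 already covered
              {"id": "d3", "aspects": {"a2", "a3"}}]     # novel: a3 is new
    for doc in stream:
        print(doc["id"], "redundant" if is_redundant(doc["aspects"], seen) else "novel")
        seen |= doc["aspects"]
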
Slide 18: Evaluation Methodology (1)
Four components of an adaptive filtering system:
– relevancy measure
– relevance threshold
– redundancy measure
– redundancy threshold
Goal: focus on the redundancy measures and avoid the influence of the other parts of the filtering system
– Assume a perfect relevancy detection stage to remove the influence of that stage
– Use 11-point average recall/precision graphs to remove the influence of the thresholding module (a sketch follows this slide)

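A minimal sketch of 11-point interpolated average precision, computed here in the standard IR way under the assumption that documents are ranked by redundancy score (most redundant first) and labels are gold binary redundancy judgments; the paper's exact evaluation details may differ.

    # 11-point interpolated average precision over a ranked list of documents.
    def eleven_point_avg_precision(labels):
        total_pos = sum(labels)
        if total_pos == 0:
            return 0.0
        hits, pr = 0, []                               # (recall, precision) after each rank
        for i, y in enumerate(labels, 1):
            hits += y
            pr.append((hits / total_pos, hits / i))
        levels = [i / 10 for i in range(11)]
        interp = [max([p for r, p in pr if r >= level], default=0.0) for level in levels]
        return sum(interp) / len(levels)

    # Ranked by some redundancy measure; 1 = truly redundant, 0 = not.
    print(eleven_point_avg_precision([1, 1, 0, 1, 0, 0]))
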
Slide 19: Evaluation Methodology (2)
Contingency table of redundancy judgments vs. delivery decisions:

                   Redundant    Non-redundant
    Delivered      R+           N+
    Not delivered  R-           N-

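A tiny sketch that tallies the four cells of this table from gold redundancy labels and delivery decisions; the summary measures derived from these counts (e.g. redundancy precision/recall) follow the accompanying paper rather than this slide.

    # Tally R+, N+, R-, N- from gold labels (True = redundant) and decisions (True = delivered).
    def contingency(gold_redundant, delivered):
        cells = {"R+": 0, "N+": 0, "R-": 0, "N-": 0}
        for redundant, sent in zip(gold_redundant, delivered):
            cells[("R" if redundant else "N") + ("+" if sent else "-")] += 1
        return cells

    print(contingency([True, False, True, False], [True, True, False, False]))
    # {'R+': 1, 'N+': 1, 'R-': 1, 'N-': 1}
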
Slide 20: Outline
Introduction: task definition and related work
Building a filtering system
– Filtering system structure
– Redundancy measures
Experimental methodology
– Creating testing datasets
– Evaluation measures
Experimental results
Conclusion and future work

Slide 21: Comparing Different Redundancy Measures on Two Datasets
The cosine measure is consistently good
The mixture language model works much better than the other LM approaches

Slide 22: Mistakes After Thresholding

    Measure                Absolutely or somewhat redundant    Absolutely redundant only
    Set Distance           43.5%                               28%
    Cosine Distance        28.1%                               18.7%
    Shrinkage (LM)         44.3%                               21%
    Dirichlet Prior (LM)   42.4%                               21%
    Mixture Model (LM)     27.4%                               16.7%

A simple thresholding algorithm makes the system complete
Learning the user's preference is important
Similar results for the interactive track data appear in the paper

Slide 23: Outline
Introduction: task definition and related work
Building a filtering system
– Filtering system structure
– Redundancy measures
Experimental methodology
– Creating testing datasets
– Evaluation measures
Experimental results
Conclusion and future work

Slide 24: Conclusion: Our Contributions
Novelty/redundancy detection in an adaptive filtering system
– A two-stage approach
Reasonably good at identifying redundant documents
– Cosine similarity
– Mixture language model
Factors affecting accuracy
– Accuracy at finding relevant documents
– Redundancy measure
– Redundancy threshold

Slide 25: Future Work
Cosine similarity is far from optimal (symmetric vs. asymmetric)
Feature engineering: time, source, author, named entities, ...
Better novelty measures
– Document-document distance vs. document-cluster distance (?)
– User-dependent: what is novel/redundant for this user?
Learning user redundancy preferences
– Thresholding: the sparse-training-data problem

Slide 26: Appendix: Thresholding Algorithm
Initialize Rthreshold so that only near-duplicates count as redundant
For each delivered document d_t:
    If the user marked d_t redundant and R(d_t) > max R(d_i) over all previously delivered relevant documents d_i:
        Rthreshold = R(d_t)
    Else:
        Rthreshold = Rthreshold - (Rthreshold - R(d_t)) / 10
(A Python transcription follows this slide.)

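A direct Python transcription of the pseudocode above; the initial threshold value, the literal reading of the Else branch, and the feedback interface are assumptions made for illustration.

    # One threshold-update step after delivering d_t with redundancy score r_dt.
    def update_threshold(r_threshold, r_dt, user_says_redundant, delivered_scores):
        if user_says_redundant and r_dt > max(delivered_scores, default=float("-inf")):
            return r_dt                                    # jump the threshold to this score
        return r_threshold - (r_threshold - r_dt) / 10     # otherwise move 10% toward r_dt

    r_threshold = 0.95                # start high: only near-duplicates count as redundant
    delivered_scores = [0.2, 0.4]     # redundancy scores of previously delivered relevant docs
    r_threshold = update_threshold(r_threshold, 0.6, True, delivered_scores)
    print(r_threshold)                # 0.6: user feedback lowered the threshold
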