Exploration & Exploitation in Adaptive Filtering Based on Bayesian Active Learning

Yi Zhang, Jamie Callan (Carnegie Mellon University)
Wei Xu (NEC Labs America)
A Typical Adaptive Filtering System

[System diagram: a document stream enters the filtering system, a binary classifier paired with a utility function; delivered documents go to the user, whose feedback, together with the accumulated documents, drives profile learning; the user profile is initialized from the first request.]
Commonly Used Evaluation

                 Relevant    Non-Relevant
Delivered          R+            N+
Not Delivered      R-            N-

If we assume user satisfaction is mostly influenced by what she/he has seen, then a simplified utility depends only on the delivered documents. For example:

    Utility = 2*R+ - N+

(Used in the TREC9, TREC10, and TREC11 Adaptive Filtering Tracks; trec.nist.gov)
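With R+ and N+ counting delivered relevant and delivered non-relevant documents, this linear utility can be computed directly. A minimal sketch (the function name and the default weights for the 2*R+ - N+ setting are illustrative):

```python
def linear_utility(r_plus, n_plus, credit=2.0, penalty=-1.0):
    """Simplified utility over delivered documents only:
    Utility = credit * R+ + penalty * N+, e.g. 2*R+ - N+."""
    return credit * r_plus + penalty * n_plus
```

For example, delivering 10 relevant and 5 non-relevant documents gives 2*10 - 5 = 15.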
Common Approach in Adaptive Filtering

Set the dissemination threshold where the immediate utility gain of delivering a document is zero. For example, to optimize Utility = 2*R+ - N+, the system delivers iff P(Rel) >= 1/3 (about 0.33), because

    U_immediate = 2*P(Rel) - P(Nrel) >= 0

and P(Nrel) = 1 - P(Rel).
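The threshold rule can be checked numerically. A small sketch, assuming binary relevance so that P(Nrel) = 1 - P(Rel):

```python
def immediate_utility(p_rel):
    # U_immediate = 2*P(Rel) - P(Nrel), with P(Nrel) = 1 - P(Rel)
    return 2.0 * p_rel - (1.0 - p_rel)

def deliver(p_rel):
    # Deliver iff the immediate utility gain is non-negative,
    # which is equivalent to P(Rel) >= 1/3 (about 0.33).
    return immediate_utility(p_rel) >= 0.0
```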
Problem with Current Adaptive Filtering Research

Why deliver a document to the user?
1. It satisfies the information need immediately.
2. It elicits user feedback, so the system can improve its model of the user's information need and thus satisfy the need better in the future.

Current research in adaptive filtering underestimates the utility gain of delivering a document by ignoring the second effect.
–Related work: active learning, Bayesian experimental design
Solution: Explicitly Model the Future Utility of Delivering a Document

–N_future: the number of discounted documents in the future
–Exploitation: estimate the immediate utility of delivering a new document, based on the model learned so far
–Exploration: estimate the future utility of delivering a new document, by considering how much the learned model improves if we can get user feedback about the document
Exploitation: Estimate U_immediate Using Bayesian Inference

Let P(θ | D_{t-1}) be the posterior distribution of the model parameters given the training set D_{t-1}. Using Bayesian inference, we have:

    U_immediate = sum_y A_y * P(y | x, D_{t-1})
                = sum_y A_y * ∫ P(y | x, θ) P(θ | D_{t-1}) dθ

–A_y is the credit/penalty defined by the utility function that models user satisfaction
–y = R if relevant, y = N if non-relevant
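A Monte Carlo sketch of this estimate, assuming a one-dimensional logistic model θ = (w, b) over a document score and a list of posterior samples (both are illustrative assumptions, not the paper's exact parameterization):

```python
import math

def p_relevant(theta, score):
    # Logistic model: P(y = R | x, theta) with theta = (w, b).
    w, b = theta
    return 1.0 / (1.0 + math.exp(-(w * score + b)))

def expected_immediate_utility(posterior_samples, score, a_r=2.0, a_n=-1.0):
    """U_immediate = sum_y A_y * P(y | x, D_{t-1}), where the posterior
    predictive P(y | x, D_{t-1}) is approximated by averaging P(y | x, theta)
    over samples theta ~ P(theta | D_{t-1})."""
    p_rel = sum(p_relevant(t, score) for t in posterior_samples) / len(posterior_samples)
    return a_r * p_rel + a_n * (1.0 - p_rel)
```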
Exploration: Utility Divergence to Measure Loss (1)

If we deliver according to an estimated model θ_est while the true model is θ*, we incur some loss over the document space, the utility divergence (UD):

    UD(θ* || θ_est) = U(θ*, θ*) - U(θ_est, θ*)

where U(θ_est, θ*) is the utility achieved over the document space when delivery decisions follow θ_est but relevance actually follows θ*.
Exploration: Utility Divergence to Measure Loss (2)

We do not know the true model θ*. However, based on our belief about its distribution, P(θ | D), we can estimate the expected loss of using the estimator θ_est:

    Expected Loss(θ_est) = E_{θ ~ P(θ|D)} [ UD(θ || θ_est) ]

Thus we can measure the quality of the training data D as the expected loss incurred if we use the estimator θ_est(D).
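A sketch of estimating this expected loss with posterior samples, reusing the A_R = 2, A_N = -1 utility and the P(Rel) >= 1/3 delivery rule from earlier slides; the logistic parameterization and the sampled document scores are illustrative assumptions:

```python
import math

def p_relevant(theta, score):
    # Logistic model: P(relevant | score), theta = (w, b).
    w, b = theta
    return 1.0 / (1.0 + math.exp(-(w * score + b)))

def achieved_utility(p_true, delivered):
    # Utility contribution of one decision when the true P(Rel) is p_true.
    return (2.0 * p_true - (1.0 - p_true)) if delivered else 0.0

def utility_divergence(theta_true, theta_est, scores):
    """UD(theta_true || theta_est): utility lost over a sample of the
    document space when decisions follow theta_est instead of theta_true."""
    loss = 0.0
    for s in scores:
        p_true = p_relevant(theta_true, s)
        best = achieved_utility(p_true, p_true >= 1.0 / 3.0)
        actual = achieved_utility(p_true, p_relevant(theta_est, s) >= 1.0 / 3.0)
        loss += best - actual
    return loss

def expected_loss(theta_est, posterior_samples, scores):
    # E_{theta ~ P(theta|D)} [ UD(theta || theta_est) ]
    return sum(utility_divergence(t, theta_est, scores)
               for t in posterior_samples) / len(posterior_samples)
```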
The Whole Process

Step 1: For each incoming document x, estimate the total utility gain of delivery, U_immediate + U_future; deliver iff it is non-negative.
Step 2: If x is delivered, get the user's relevance feedback y, add (x, y) to the training data, and update the posterior over the model parameters.
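The two steps can be sketched as a filtering loop; the stub model interface and callback names below are an illustrative skeleton, not the paper's implementation:

```python
def filtering_loop(doc_stream, score, u_immediate, u_future, update_model, model):
    """For each document: Step 1, deliver iff the estimated immediate plus
    future utility is non-negative; Step 2, if delivered, collect the user's
    feedback y and retrain the model on the accumulated training data."""
    training_data = []
    for doc, get_feedback in doc_stream:
        s = score(model, doc)
        if u_immediate(model, s) + u_future(model, s, training_data) >= 0.0:
            y = get_feedback()                  # user labels the delivered doc
            training_data.append((s, y))
            model = update_model(model, training_data)
    return model, training_data
```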
Adaptive Filtering: Logistic Regression to Find the Dissemination Threshold

–x: a score* indicating how well each document matches the profile
–Metropolis-Hastings algorithm to sample θ for the integration

*The scoring function is learned adaptively using the Rocchio algorithm.
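A sketch of the sampling step, assuming a Gaussian prior over θ = (w, b) and a random-walk proposal; the step size, prior variance, and one-dimensional score feature are illustrative choices:

```python
import math
import random

def log_posterior(theta, data, prior_var=10.0):
    # log P(theta | D) up to a constant: Gaussian prior + logistic likelihood.
    w, b = theta
    lp = -(w * w + b * b) / (2.0 * prior_var)
    for score, y in data:                     # y = 1 relevant, y = 0 non-relevant
        p = 1.0 / (1.0 + math.exp(-(w * score + b)))
        p = min(max(p, 1e-12), 1.0 - 1e-12)   # guard against log(0)
        lp += math.log(p) if y else math.log(1.0 - p)
    return lp

def metropolis_hastings(data, n_samples=500, step=0.3, seed=0):
    """Random-walk Metropolis-Hastings over theta = (w, b)."""
    rng = random.Random(seed)
    theta, samples = (0.0, 0.0), []
    for _ in range(n_samples):
        cand = (theta[0] + rng.gauss(0.0, step), theta[1] + rng.gauss(0.0, step))
        # Accept with probability min(1, P(cand | D) / P(theta | D)).
        delta = log_posterior(cand, data) - log_posterior(theta, data)
        if delta >= 0.0 or rng.random() < math.exp(delta):
            theta = cand
        samples.append(theta)
    return samples
```

The resulting samples can be plugged into the Monte Carlo utility estimates in place of the intractable integral over θ.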
Experimental Data Sets and Evaluation Measures

                     TREC9 (OHSUMED)    TREC10 (Reuters)
Relevant / total           …                  …
Relevant %               0.016%              1.2%

Initialization: 2 relevant documents + topic description
TREC-10 Filtering Data: Reuters Dataset

[Results table: T9U, T11SU, Precision, Recall, and Docs/Profile for the Bayesian Active, Bayesian Immediate, Norm. Exp., ML, and N-E runs; the numeric values did not survive extraction.]

Active learning is very effective on the TREC10 dataset.
TREC-9 Filtering Data: OHSUMED Dataset

[Results table: T9U, T11SU, Precision, Recall, and Docs/Profile for the Bayesian Active, Bayesian Immediate, Norm. Exp., ML, and N-E runs; the numeric values did not survive extraction.]

On average, only 51 out of … are relevant documents. Active learning didn't improve utility on the TREC9 dataset, but it didn't hurt either. (The algorithm is robust.)
Related Work

–Active learning
  - Uncertainty about the label of a document
    - Request the label of the most uncertain document
    - Minimize the uncertainty about future labels
  - Uncertainty about the model parameters (KL divergence, variance)
–Bayesian experimental design
  - Improvement of the utility of the model
–Information retrieval
  - Mutual information between document and label
Contribution and Future Work

Our contribution
–Derivation of utility divergence to measure model quality
–Combining immediate and future utility gain in the adaptive filtering task
–An empirically robust algorithm

Future work
–High-dimensional spaces
  - Computational issues: variational algorithms, Gaussian approximations, Gibbs sampling, …
  - Number of training examples needed
–Other active learning applications: online marketing, interactive retrieval, …
The End Thanks