Beliefs and Biases in Web Search

Based on: Beliefs & Biases in Web Search, Ryen White, Microsoft Research (SIGIR 2013).

Beliefs and Biases in Web Search. Presented by Hua Zhang.

Questions:
- Do people's beliefs in different outcomes change as a result of searching?
- Are search engine results biased in favor of particular outcomes?
- What is the impact of bias on search outcomes, specifically on answer accuracy?

Outline:
- Brief introduction
- Related work on bias and result ranking
- Retrospective survey
- Log analysis of search behavior
- Discussion and conclusion

What is bias? Bias is an inclination or outlook to present or hold a partial perspective, often accompanied by a refusal to consider possible alternative points of view. Cognitive biases may lead people to form beliefs based on false premises and to behave in a seemingly irrational manner. In information retrieval, bias can be observed when searchers seek, or are presented with, information that deviates significantly from the truth.

Why study bias?
- It causes searchers to be interested in particular results for reasons beyond relevance.
- It affects aggregated behavioral signals (e.g., result click-throughs) that search engines use for ranking.
- Search engines should both satisfy users' preferences and return correct answers.
- This matters especially in specific domains such as medicine.

Why study bias? (continued)
- Models of the search process ignore situations where seemingly irrational behavior is observed.
- Bias plays a central role in human judgment and decision making.
- Behavioral data can be used to improve result relevance.

Beliefs during searching: distribution of (a) prior and (b) posterior recalled beliefs about the answer (ranging from yes to no).

Findings:
- Searchers leaning yes or no before searching either retain that belief or become more certain of it.
- They are unlikely to change their opinion to the opposite perspective after searching.
- Searchers who are unsure are twice as likely to shift toward a positive answer as toward a negative one.
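The belief shifts above can be read off a table of (prior, posterior) transitions. A minimal sketch, using invented survey responses purely to illustrate the tabulation (the data are not from the study):

```python
from collections import Counter

# Hypothetical recalled (prior, posterior) belief pairs from a survey;
# beliefs take values in {"yes", "unsure", "no"}. Illustrative data only.
responses = [
    ("yes", "yes"), ("yes", "yes"), ("no", "no"),
    ("unsure", "yes"), ("unsure", "yes"), ("unsure", "no"),
]

# Count prior -> posterior belief transitions.
transitions = Counter(responses)

# For initially unsure searchers, compare shifts toward yes vs. no.
to_yes = transitions[("unsure", "yes")]
to_no = transitions[("unsure", "no")]
print(f"unsure -> yes: {to_yes}, unsure -> no: {to_no}")
```

With the invented data above, unsure searchers move to yes twice as often as to no, mirroring the reported finding.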

Designing the search questions:
- Extract yes-no questions from a random sample of the query logs.
- Select the questions with medical intent.
- Have physicians label the questions with the correct answer.
- Base rate: 674 questions, of which 55% have a yes answer and 45% a no answer.
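The base rate in the last step is just the fraction of yes labels among the physician-labeled questions. A small sketch, assuming a 371/303 split (an assumption consistent with the reported 674 questions and rounded 55%/45% rates):

```python
# Reconstruct the reported base rate: 674 physician-labeled yes-no medical
# questions. The exact 371/303 split is an assumption chosen to be
# consistent with the rounded 55% yes / 45% no figures.
labels = ["yes"] * 371 + ["no"] * 303

n = len(labels)
yes_rate = labels.count("yes") / n
print(f"{n} questions: {yes_rate:.0%} yes, {1 - yes_rate:.0%} no")
```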

Crowdsourced judgments, part 1: caption judgments. Workers review the captions and provide one rating:
- Yes only
- No only
- Both
- Neither
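Per-caption ratings from multiple workers have to be aggregated into a single label. A minimal sketch using simple majority vote, with hypothetical ratings (the study's actual aggregation rule is not specified here):

```python
from collections import Counter

# Hypothetical ratings for one caption from five crowd workers, using the
# four answer labels from the study. Aggregate by simple majority vote.
ratings = ["yes only", "yes only", "both", "yes only", "neither"]

label, votes = Counter(ratings).most_common(1)[0]
print(f"aggregated label: {label} ({votes}/{len(ratings)} votes)")
```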

Presence and distribution: Table 4 shows a bias toward yes. Table 5 shows that results and captions containing yes are much more likely to appear than those containing no.

Crowdsourced judgments, part 2: landing page judgments. Workers judge the full text of the results returned by the search engine. What's next?
- Analyze the SERPs generated by the search engine.
- Analyze the top-ranked results returned by the engine.

Distribution of highest-ranked answers: topmost captions and results with a yes answer are ranked above those with no and other answer labels.

Relative ordering of yes and no:
- Captions with yes are ranked well above those with no, for both captions and results.
- Captions exhibit this bias slightly more than the results themselves.
- Yes was ranked above no more often than the 55% that would be expected from the base rate.
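The comparison behind the last bullet can be sketched as follows: take the pages that contain both a yes and a no caption, measure how often yes outranks no, and compare that fraction with the 55% expected if ordering simply followed the answer base rate. The counts below are hypothetical, not the study's actual numbers:

```python
# Illustrative counts only; not the figures reported in the study.
yes_above_no = 163        # pages where the top yes caption outranks the top no caption
total_pages = 200         # pages containing both a yes and a no caption
base_rate = 0.55          # expected fraction under the answer base rate

observed = yes_above_no / total_pages
excess = observed - base_rate
print(f"observed: {observed:.1%}, expected: {base_rate:.0%}, excess: {excess:+.1%}")
```

A positive excess, as in this sketch, would indicate a ranking skew toward yes beyond what the base rate alone explains.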

Summary:
- Search engines are more likely to present captions/results answering a yes-no question positively (yes) in the top results.
- They are more likely to rank results with positive answers at the top of the list, and above those with no.
- Distributions were skewed more positive than expected from base rates.

Questions?