Future Direction #3: Collaborative Filtering

Future Direction #3: Collaborative Filtering

Motivating Observations:
1) Relevance feedback is useful, but expensive
   - Humans don't have time to give positive/negative judgments on a long list of returned web pages to improve search
   - The effort is used once, then wasted; we want to pool efforts among individuals and reuse them

Collaborative Filtering

Motivating Observations (continued):
2) Relevance ≠ Quality
   Example queries: bootleg CDs, medical school admissions, REM, NAFTA, simulated annealing, Alzheimer's
   - Many web pages can be "about" a topic (a specialized unit)
   - But there are great differences in quality of presentation, detail, professionalism, substance, etc.

Possible Solution: build a supervised learner for quality, NOT topic matter
   - Train on examples of each and learn the distinguishing properties

Supervised Learner for "Quality" of a Page

P(Quality | Features), independent of topic similarity

Salient features may include:
   - Number of links
   - Page size
   - How often the page is cited
   - Variety of content
   - "Top 5th of the Web" assessments, etc.
   - Usage counter (hit count)
   - Complexity of graphics ∝ quality??
   - Prior quality rating of the server
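
A minimal sketch of such a topic-independent quality learner (Python, assuming scikit-learn is available); the six features mirror the list above, and every feature value and label is invented for illustration:

```python
# A sketch of P(Quality | Features): a classifier trained on hand-labeled
# high/low quality pages, using only topic-independent features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one page: [n_outlinks, page_size_kb, n_citations,
#                        content_variety, hit_count, graphics_complexity]
# All values and labels below are illustrative, not from real data.
X = np.array([
    [40, 120, 85, 0.9, 5000, 0.3],   # a well-cited, substantive page
    [ 3,  15,  0, 0.2,   12, 0.8],   # a thin, rarely visited page
    [25,  80, 40, 0.7, 2100, 0.5],
    [ 5,  10,  1, 0.1,   30, 0.9],
])
y = np.array([1, 0, 1, 0])           # 1 = high quality, 0 = low quality

model = LogisticRegression().fit(X, y)

# Estimated P(Quality | Features) for a new page; note that no topic
# features appear anywhere in the model.
new_page = np.array([[18, 60, 22, 0.6, 900, 0.4]])
print(model.predict_proba(new_page)[0, 1])
```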

Collaborative Filtering

Problem: Different humans have different profiles of relevance/quality.

Query: Alzheimer's disease
[Figure: the same pool of documents, with different pages being appropriate for a care giver, relevant (high quality) for a 6th grader, or relevant for a medical researcher; each box in the figure is a document or web page]

One Solution: Pool Collective Wisdom

Compute a weighted average of ranking(page_j, query_i) across multiple users (taking into account relevance, quality, and other intangibles).

However: humans have a better idea than machines of what other humans will find interesting.
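
A minimal sketch of this pooling step in NumPy; the per-user scores and the trust weights are invented for illustration:

```python
# Pool collective wisdom: a weighted average of ranking(page_j, query_i)
# across users, for one fixed query.
import numpy as np

# rankings[u, j] = user u's relevance/quality score for page j
rankings = np.array([
    [5.0, 2.0, 4.0],
    [4.0, 1.0, 5.0],
    [3.0, 2.0, 4.0],
])
user_weights = np.array([0.5, 0.3, 0.2])   # e.g., per-user trust weights

# Pooled score per page: sum over users of w_u * ranking(u, page_j)
pooled = user_weights @ rankings
print(pooled)   # one aggregate score per page
```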

Collaborative Filtering

Idea: instead of trying to model (often intangible) quality judgments, keep a record of previous human relevance and quality judgments.

Query: Alzheimer's
[Figure: a table of user rankings of web pages for the query; rows are users, columns are web pages, with ranking entries such as 3, 1, 5, 7, 4, 2, 6]
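
One way to store such a judgment table is a user-by-page matrix with NaN marking pages a user never judged; a minimal NumPy sketch with invented entries:

```python
# A user-by-page judgment table for one query ("Alzheimer's").
# Rows are users, columns are returned web pages; entries are recorded
# relevance/quality rankings, NaN where a user gave no judgment.
# All values here are illustrative.
import numpy as np

ratings = np.array([
    [3.0,    1.0,    5.0, 7.0],
    [4.0,    np.nan, 2.0, 6.0],
    [np.nan, 2.0,    np.nan, 5.0],
])

# ratings[u, j] is user u's recorded judgment of page j for this query.
print(ratings[0, 2])   # user 0 ranked page 2 as a 5
```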

Solution 1: Identify individuals with similar tastes
(high Pearson correlation coefficient between their ranking judgments and mine)

Instead of:  P(relevant to me | page_i content)

Compute:     P(relevant to me | relevant to you)    ← my similarity to you
           * P(relevant to you | page_i content)    ← your judgments
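
A minimal self-contained sketch of Solution 1: predict my judgment of a page as a Pearson-similarity-weighted average of other users' recorded judgments. The ratings matrix is invented:

```python
# User-user collaborative filtering with Pearson similarity: a sketch.
import numpy as np

def pearson(a, b):
    """Pearson correlation over the pages both users have judged."""
    mask = ~np.isnan(a) & ~np.isnan(b)
    if mask.sum() < 2:
        return 0.0
    r = np.corrcoef(a[mask], b[mask])[0, 1]
    return 0.0 if np.isnan(r) else float(r)

def predict(ratings, me, page):
    """Estimate my judgment of `page` from similar users' judgments."""
    num = den = 0.0
    for u in range(ratings.shape[0]):
        if u == me or np.isnan(ratings[u, page]):
            continue
        sim = pearson(ratings[me], ratings[u])
        if sim > 0:                       # only count like-minded users
            num += sim * ratings[u, page]
            den += sim
    return num / den if den else np.nan

ratings = np.array([
    [5.0, np.nan, 3.0,    1.0],
    [4.0, 2.0,    3.0, np.nan],
    [1.0, 5.0,    np.nan, 4.0],
])
print(predict(ratings, me=0, page=1))   # my predicted judgment of page 1
```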

Solution 2: Model group profiles for relevance judgments
(e.g., junior high school students vs. medical researchers)

Compute:     P(relevant to me | relevant to group_g)   ← my similarity to the group
           * P(relevant to group_g | page_i content)   ← the group's collective (average) relevance judgments, learned by supervised learning
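
A minimal sketch of Solution 2: each group profile holds the group's average judgment of each page, and my prediction weights each group by my Pearson similarity to it. The group names and all numbers are invented:

```python
# Group-profile collaborative filtering: a sketch.
import numpy as np

# Average judgment each group gave each page (learned group profiles).
group_profiles = np.array([
    [5.0, 1.0, 4.0, 2.0],   # group 0: e.g., junior high school readers
    [1.0, 5.0, 2.0, 5.0],   # group 1: e.g., medical researchers
])
my_ratings = np.array([4.0, 2.0, 5.0, 1.0])   # my own past judgments

def pearson(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# My similarity to each group, then a similarity-weighted prediction:
# P(relevant to me | page) ~ sum_g sim(me, g) * group_rating(g, page)
sims = np.clip([pearson(my_ratings, g) for g in group_profiles], 0, None)
pred = sims @ group_profiles / sims.sum()
print(pred)   # predicted judgments for each page
```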