Exercising these ideas

Similar presentations
Introduction to Information Retrieval
Information Retrieval (IR) on the Internet. Contents  Definition of IR  Performance Indicators of IR systems  Basics of an IR system  Some IR Techniques.
Retrieval Evaluation J. H. Wang Mar. 18, Outline Chap. 3, Retrieval Evaluation –Retrieval Performance Evaluation –Reference Collections.
1 Evaluation Rong Jin. 2 Evaluation  Evaluation is key to building effective and efficient search engines usually carried out in controlled experiments.
Introduction to Information Retrieval (Part 2) By Evren Ermis.
Precision and Recall.
Evaluating Search Engine
Evaluation.  Allan, Ballesteros, Croft, and/or Turtle Types of Evaluation Might evaluate several aspects Evaluation generally comparative –System A vs.
Modern Information Retrieval
Searching the Web II. The Web Why is it important: –“Free” ubiquitous information resource –Broad coverage of topics and perspectives –Becoming dominant.
INFO 624 Week 3 Retrieval System Evaluation
Retrieval Evaluation. Brief Review Evaluation of implementations in computer science often is in terms of time and space complexity. With large document.
Reference Collections: Task Characteristics. TREC Collection Text REtrieval Conference (TREC) –sponsored by NIST and DARPA (1992-?) Comparing approaches.
Retrieval Evaluation: Precision and Recall. Introduction Evaluation of implementations in computer science often is in terms of time and space complexity.
Evaluating the Performance of IR Sytems
Retrieval Evaluation. Introduction Evaluation of implementations in computer science often is in terms of time and space complexity. With large document.
Evaluation CSC4170 Web Intelligence and Social Computing Tutorial 5 Tutor: Tom Chao Zhou
Web Search – Summer Term 2006 II. Information Retrieval (Basics Cont.) (c) Wolfgang Hürst, Albert-Ludwigs-University.
WXGB6106 INFORMATION RETRIEVAL Week 3 RETRIEVAL EVALUATION.
ISP 433/633 Week 6 IR Evaluation. Why Evaluate? Determine if the system is desirable Make comparative assessments.
LIS618 lecture 11 i/r performance evaluation Thomas Krichel
Evaluation David Kauchak cs458 Fall 2012 adapted from:
Evaluation David Kauchak cs160 Fall 2009 adapted from:
Search Engines and Information Retrieval Chapter 1.
Information Retrieval and Web Search IR Evaluation and IR Standard Text Collections.
IR Evaluation Evaluate what? –user satisfaction on specific task –speed –presentation (interface) issue –etc. My focus today: –comparative performance.
Xiaoying Gao Computer Science Victoria University of Wellington Intelligent Agents COMP 423.
Improving Web Spam Classification using Rank-time Features September 25, 2008 TaeSeob,Yun KAIST DATABASE & MULTIMEDIA LAB.
UOS 1 Ontology Based Personalized Search Zhang Tao The University of Seoul.
Data Mining Chapter 1 Introduction -- Basic Data Mining Tasks -- Related Concepts -- Data Mining Techniques.
Clustering Top-Ranking Sentences for Information Access Anastasios Tombros, Joemon Jose, Ian Ruthven University of Glasgow & University of Strathclyde.
Search engines are the key to finding specific information on the vast expanse of the World Wide Web. Without sophisticated search engines, it would be.
Parallel and Distributed Searching. Lecture Objectives Review Boolean Searching Indicate how Searches may be carried out in parallel Overview Distributed.
WIRED Week 3 Syllabus Update (next week) Readings Overview - Quick Review of Last Week’s IR Models (if time) - Evaluating IR Systems - Understanding Queries.
Chapter 8 Evaluating Search Engine. Evaluation n Evaluation is key to building effective and efficient search engines  Measurement usually carried out.
Basic Implementation and Evaluations Aj. Khuanlux MitsophonsiriCS.426 INFORMATION RETRIEVAL.
1 CS 430: Information Discovery Sample Midterm Examination Notes on the Solutions.
C.Watterscs64031 Evaluation Measures. C.Watterscs64032 Evaluation? Effectiveness? For whom? For what? Efficiency? Time? Computational Cost? Cost of missed.
Performance Measurement. 2 Testing Environment.
Performance Measures. Why to Conduct Performance Evaluation? 2 n Evaluation is the key to building effective & efficient IR (information retrieval) systems.
Information Retrieval Transfer Cycle Dania Bilal IS 530 Fall 2007.
Supporting Knowledge Discovery: Next Generation of Search Engines Qiaozhu Mei 04/21/2005.
What Does the User Really Want ? Relevance, Precision and Recall.
Intelligent Database Systems Lab Presenter: CHANG, SHIH-JIE Authors: Longzhuang Li, Yi Shang, Wei Zhang 2002.ACM. Improvement of HITS-based Algorithms.
1 What Makes a Query Difficult? David Carmel, Elad YomTov, Adam Darlow, Dan Pelleg IBM Haifa Research Labs SIGIR 2006.
Chapter. 3: Retrieval Evaluation 1/2/2016Dr. Almetwally Mostafa 1.
Evaluation. The major goal of IR is to search document relevant to a user query. The evaluation of the performance of IR systems relies on the notion.
Information Retrieval Quality of a Search Engine.
Relevant Document Distribution Estimation Method for Resource Selection Luo Si and Jamie Callan School of Computer Science Carnegie Mellon University
INFORMATION RETRIEVAL MEASUREMENT OF RELEVANCE EFFECTIVENESS 1Adrienn Skrop.
Information Retrieval in Practice
Evaluation of Information Retrieval Systems
Information Retrieval on the World Wide Web
Evaluation.
Modern Information Retrieval
IR Theory: Evaluation Methods
Query Caching in Agent-based Distributed Information Retrieval
Data Mining Chapter 6 Search Engines
Finding the right book - Amazon vs Kyobo 한동우
Cost incurred per search Users’ efforts involved in
Evaluation of Information Retrieval Systems
Dr. Sampath Jayarathna Cal Poly Pomona
Project 3 Image Retrieval
Retrieval Evaluation - Measures
Retrieval Evaluation - Measures
Retrieval Performance Evaluation - Measures
Precision and Recall Reminder:
Precision and Recall.
Information Retrieval
Presentation transcript:

Exercising these ideas
- You have a description of each item in a small collection (30 web sites).
- Assume we are looking for information about boxers, a type of dog.
- List the items that are relevant to this information need. (If it is impossible to tell, as with #9, mark it as not relevant.)
- Rq = { }
- Assume that an unspecified search system has returned the following ranked result: (1, 4, 6, 8, 10, 12, 15, 16, 17, 18, 20, 23, 26, 28, 30)
- Show the ranking for the query as done on page 76.
- Plot the precision vs. recall curve (a computational sketch follows below).
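The precision vs. recall plot can be checked with a short script. The sketch below assumes you have already filled in Rq by hand; the relevant set used here is only a placeholder, not the answer to the exercise. It computes precision and recall after each retrieved document and then interpolates the curve at the 11 standard recall levels.

```python
# Minimal sketch: precision/recall after each rank, plus the interpolated
# precision-recall curve. The relevant set below is a placeholder -- replace
# it with the Rq you identified for the boxer (dog) query.

def precision_recall_by_rank(ranking, relevant):
    """Yield (rank, recall, precision) after each retrieved document."""
    hits = 0
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
        yield rank, hits / len(relevant), hits / rank

def interpolated_precision(points, recall_levels):
    """Interpolated precision: P(r) = max precision over points with recall >= r."""
    return [max((p for _, rec, p in points if rec >= r), default=0.0)
            for r in recall_levels]

ranking = [1, 4, 6, 8, 10, 12, 15, 16, 17, 18, 20, 23, 26, 28, 30]
relevant = {1, 4, 6, 10, 15, 23}        # placeholder Rq, not the exercise answer

points = list(precision_recall_by_rank(ranking, relevant))
levels = [i / 10 for i in range(11)]    # 0.0, 0.1, ..., 1.0
for r, p in zip(levels, interpolated_precision(points, levels)):
    print(f"recall {r:.1f}: precision {p:.2f}")
```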

Exercise, continued
- We do a second query against the same collection. This time we are interested in Boxer rescue organizations.
- Rq = { }
- Ranked result = (4, 6, 7, 10, 11, 15, 16, 17, 18, 19, 22, 23, 25, 28, 29)
- Calculate the average precision at recall level 3 for these two queries (see the sketch below).
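A sketch of the averaging step, assuming "recall level 3" refers to the 30% standard recall level and again using placeholder relevant sets rather than the exercise's answer: compute each query's interpolated precision at that level and take the mean.

```python
# Sketch: average precision at one recall level over several queries,
# P_avg(r) = (1/Nq) * sum_i P_i(r). The relevant sets are placeholders and
# "recall level 3" is read here as r = 0.3.

def precision_at_recall(ranking, relevant, r):
    """Interpolated precision at recall level r (max precision at recall >= r)."""
    hits, best = 0, 0.0
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            if hits / len(relevant) >= r:
                best = max(best, hits / rank)
    return best

queries = [
    ([1, 4, 6, 8, 10, 12, 15, 16, 17, 18, 20, 23, 26, 28, 30], {1, 4, 6, 10}),   # query 1
    ([4, 6, 7, 10, 11, 15, 16, 17, 18, 19, 22, 23, 25, 28, 29], {4, 7, 11, 19}), # query 2
]
avg = sum(precision_at_recall(rk, rel, 0.3) for rk, rel in queries) / len(queries)
print(f"average precision at recall 0.3: {avg:.2f}")
```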

More exercises
- We redo the first query using a new search algorithm. The ranked result is now: (4, 5, 6, 7, 8, 10, 11, 15, 16, 17, 18, 20, 23, 28, 29)
- Produce the average recall vs. precision figures for these two algorithms. How would you describe the relative performance? (A comparison sketch follows after this list.)
- We redo the second query using the new search algorithm. The ranked result is: (7, 10, 11, 15, 17, 18, 19, 21, 23, 24, 26, 28, 29, 30)
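One way to set up the comparison, again with placeholder relevant sets standing in for the Rq sets you identified: build each run's 11-point interpolated precision curve, average per algorithm over the two queries, and see which curve sits higher.

```python
# Sketch: averaged 11-point interpolated precision curve per algorithm.
# Rankings are taken from the slides; the relevant sets are placeholders.

def interpolated_curve(ranking, relevant, levels):
    """Interpolated precision of one run at the given recall levels."""
    hits, points = 0, []
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            points.append((hits / len(relevant), hits / rank))
    return [max((p for rec, p in points if rec >= r), default=0.0) for r in levels]

def averaged_curve(runs, levels):
    """Average the interpolated curves of several (ranking, relevant) runs."""
    curves = [interpolated_curve(rk, rel, levels) for rk, rel in runs]
    return [sum(col) / len(col) for col in zip(*curves)]

levels = [i / 10 for i in range(11)]
rel1, rel2 = {1, 4, 6, 10}, {4, 7, 11, 19}   # placeholder relevant sets
algo1 = averaged_curve([
    ([1, 4, 6, 8, 10, 12, 15, 16, 17, 18, 20, 23, 26, 28, 30], rel1),
    ([4, 6, 7, 10, 11, 15, 16, 17, 18, 19, 22, 23, 25, 28, 29], rel2),
], levels)
algo2 = averaged_curve([
    ([4, 5, 6, 7, 8, 10, 11, 15, 16, 17, 18, 20, 23, 28, 29], rel1),
    ([7, 10, 11, 15, 17, 18, 19, 21, 23, 24, 26, 28, 29, 30], rel2),
], levels)
for r, p1, p2 in zip(levels, algo1, algo2):
    print(f"recall {r:.1f}: algorithm 1 {p1:.2f}   algorithm 2 {p2:.2f}")
```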

Precision Histogram

- For each of the four searches, what is the R-precision?
- Use a precision histogram to compare the two algorithms over the two queries shown.
- Calculate the harmonic mean at document 5 in the ranking, using the first query and repeating for each algorithm.
- Calculate the E measure for algorithm 1, using query 1 and a moderate preference for recall over precision. (A sketch of these measures follows below.)
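These single-value summaries can also be scripted. The sketch below uses a placeholder relevant set, and it assumes the E measure in the form E(j) = 1 - (1 + b^2) / (b^2/r(j) + 1/P(j)), where b > 1 is taken to weight recall more heavily; check that reading against the textbook's definition of b before relying on it.

```python
# Sketch: R-precision, one precision-histogram bar, the harmonic mean F(j),
# and the E measure. The relevant set is a placeholder, not the answer key.

def r_precision(ranking, relevant):
    """Precision after the first R documents, where R = |relevant|."""
    cutoff = ranking[:len(relevant)]
    return sum(1 for d in cutoff if d in relevant) / len(relevant)

def precision_recall_at(ranking, relevant, j):
    """Precision and recall after the top-j documents."""
    hits = sum(1 for d in ranking[:j] if d in relevant)
    return hits / j, hits / len(relevant)

def harmonic_mean(ranking, relevant, j):
    """F(j) = 2 / (1/r(j) + 1/P(j))."""
    p, r = precision_recall_at(ranking, relevant, j)
    return 0.0 if p == 0 or r == 0 else 2 / (1 / r + 1 / p)

def e_measure(ranking, relevant, j, b):
    """E(j) = 1 - (1 + b^2) / (b^2/r(j) + 1/P(j)); b > 1 assumed to favor recall."""
    p, r = precision_recall_at(ranking, relevant, j)
    if p == 0 or r == 0:
        return 1.0
    return 1 - (1 + b ** 2) / (b ** 2 / r + 1 / p)

relevant = {1, 4, 6, 10}                                   # placeholder Rq for query 1
run_a = [1, 4, 6, 8, 10, 12, 15, 16, 17, 18, 20, 23, 26, 28, 30]  # algorithm 1
run_b = [4, 5, 6, 7, 8, 10, 11, 15, 16, 17, 18, 20, 23, 28, 29]   # algorithm 2

print("R-precision, algorithm 1:", r_precision(run_a, relevant))
# One bar of the precision histogram: RP_A(i) - RP_B(i) for this query.
print("histogram bar (A - B):", r_precision(run_a, relevant) - r_precision(run_b, relevant))
print("F at document 5:", harmonic_mean(run_a, relevant, 5))
print("E at document 5 (b = 2, recall-leaning):", e_measure(run_a, relevant, 5, b=2))
```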

Coverage and Novelty
- Within the collection, R is the set of relevant documents and A is the answer set returned by the system.
- U: relevant documents already known to the user.
- Rk: relevant documents known to the user that were retrieved.
- Ru: relevant documents unknown to the user that were retrieved.
- Coverage = |Rk| / |U|
- Novelty = |Ru| / (|Ru| + |Rk|)

- Assume that the even-numbered elements in the collection are known to the user.
- Calculate the coverage ratio for algorithm 1 on search 1.
- Calculate the novelty ratio for the same algorithm and search. (A sketch follows below.)
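A sketch of how the two ratios could be computed, using the definitions from the previous slide. The relevant set is again a placeholder; only the "even-numbered documents are known to the user" assumption is taken from the exercise.

```python
# Sketch: coverage = |Rk| / |U| and novelty = |Ru| / (|Ru| + |Rk|), following
# the definitions on the previous slide. The relevant set is a placeholder.

def coverage_and_novelty(answer_set, relevant, known_to_user):
    """known_to_user is U: the relevant documents the user already knows about."""
    retrieved_relevant = set(answer_set) & set(relevant)
    rk = retrieved_relevant & set(known_to_user)   # known to the user and retrieved
    ru = retrieved_relevant - set(known_to_user)   # retrieved and new to the user
    coverage = len(rk) / len(known_to_user) if known_to_user else 0.0
    novelty = len(ru) / len(retrieved_relevant) if retrieved_relevant else 0.0
    return coverage, novelty

answer = [1, 4, 6, 8, 10, 12, 15, 16, 17, 18, 20, 23, 26, 28, 30]  # algorithm 1, search 1
relevant = {1, 4, 6, 10, 15, 23}                                   # placeholder Rq
known = {doc for doc in relevant if doc % 2 == 0}                  # even-numbered = known to user
cov, nov = coverage_and_novelty(answer, relevant, known)
print(f"coverage: {cov:.2f}   novelty: {nov:.2f}")
```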