2006 3D Shape Retrieval Contest Remco Veltkamp, Utrecht University.

Similar presentations
Pseudo-Relevance Feedback For Multimedia Retrieval By Rong Yan, Alexander G. and Rong Jin Mwangi S. Kariuki

Introduction to Information Retrieval
Super Awesome Presentation Dandre Allison Devin Adair.
Evaluation. Rong Jin. Evaluation is key to building effective and efficient search engines; measurement is usually carried out in controlled experiments.
Search Engines Information Retrieval in Practice All slides ©Addison Wesley, 2008.
Mesh modeling and processing M. Ramanathan STTP CAD 2011Mesh modeling and processing.
Optimizing Estimated Loss Reduction for Active Sampling in Rank Learning Presented by Pinar Donmez joint work with Jaime G. Carbonell Language Technologies.
Computer Vision Group, University of BonnVision Laboratory, Stanford University Abstract This paper empirically compares nine image dissimilarity measures.
Evaluating Search Engine
Evaluating Evaluation Measure Stability Authors: Chris Buckley, Ellen M. Voorhees Presenters: Burcu Dal, Esra Akbaş.
Gimme’ The Context: Context- driven Automatic Semantic Annotation with CPANKOW Philipp Cimiano et al.
Retrieval Evaluation. Brief Review Evaluation of implementations in computer science often is in terms of time and space complexity. With large document.
Retrieval Evaluation: Precision and Recall. Introduction Evaluation of implementations in computer science often is in terms of time and space complexity.
Selecting Distinctive 3D Shape Descriptors for Similarity Retrieval Philip Shilane and Thomas Funkhouser.
The Princeton Shape Benchmark Philip Shilane, Patrick Min, Michael Kazhdan, and Thomas Funkhouser.
The Second International Chinese Word Segmentation Bakeoff Coordinated by Thomas Emerson.
Presented by Zeehasham Rasheed
Retrieval Evaluation. Introduction Evaluation of implementations in computer science often is in terms of time and space complexity. With large document.
Using Relevance Feedback in Multimedia Databases
Online Learning for Web Query Generation: Finding Documents Matching a Minority Concept on the Web Rayid Ghani Accenture Technology Labs, USA Rosie Jones.
The Relevance Model  A distribution over terms, given information need I, (Lavrenko and Croft 2001). For term r, P(I) can be dropped w/o affecting the.
Evaluation of Image Retrieval Results Relevant: images which meet user’s information need Irrelevant: images which don’t meet user’s information need Query:
Dependency Network Based Real-time Query Expansion Jiaqi Zou, Xiaojie Wang Center for Intelligence Science and Technology, BUPT.
Improving Web Search Ranking by Incorporating User Behavior Information Eugene Agichtein Eric Brill Susan Dumais Microsoft Research.
Enhancing Biomedical Text Rankers by Term Proximity Information. Rey-Long Liu, Department of Medical Informatics, Tzu Chi University. 2012/06/13.
Evaluation INST 734 Module 5 Doug Oard. Agenda Evaluation fundamentals Test collections: evaluating sets  Test collections: evaluating rankings Interleaving.
Shape Analysis and Retrieval
Ranking Clusters for Web Search. Gianluca Demartini, Paul-Alexandru Chirita, Ingo Brunkhorst, Wolfgang Nejdl. L3S Info Lunch, Hannover.
Chapter 8: Evaluating Search Engines. Evaluation is key to building effective and efficient search engines; measurement usually carried out.
Diversifying Search Results Rakesh Agrawal, Sreenivas Gollapudi, Alan Halverson, Samuel Ieong Search Labs, Microsoft Research WSDM, February 10, 2009 TexPoint.
Effective Automatic Image Annotation Via A Coherent Language Model and Active Learning Rong Jin, Joyce Y. Chai Michigan State University Luo Si Carnegie.
3D Model Retrieval After Shape Distributions. Sheun-Huei Guan, CML, NTU. 2004/03/03.
Learning to Rank From Pairwise Approach to Listwise Approach.
Enhancing Web Search by Promoting Multiple Search Engine Use Ryen W. W., Matthew R. Mikhail B. (Microsoft Research) Allison P. H (Rice University) SIGIR.
Performance Measures. Why conduct performance evaluation? Evaluation is the key to building effective & efficient IR (information retrieval) systems.
Advantages of Query Biased Summaries in Information Retrieval by A. Tombros and M. Sanderson Presenters: Omer Erdil Albayrak Bilge Koroglu.
Mining Dependency Relations for Query Expansion in Passage Retrieval Renxu Sun, Chai-Huat Ong, Tat-Seng Chua National University of Singapore SIGIR2006.
Post-Ranking query suggestion by diversifying search Chao Wang.
Automated Controller Synthesis in QFT Designs … IIT Bombay P.S.V. Nataraj and Sachin Tharewal 1 An Interval Analysis Algorithm for Automated Controller.
Proximity-based Ranking of Biomedical Texts Rey-Long Liu * and Yi-Chih Huang * Dept. of Medical Informatics Tzu Chi University Taiwan.
Final Project Mei-Chen Yeh May 15, General In-class presentation – June 12 and June 19, 2012 – 15 minutes, in English 30% of the overall grade In-class.
Ranking Categories for Faceted Search. Gianluca Demartini. L3S Research Seminars, Hannover, 09 June 2006.
Divided Pretreatment to Targets and Intentions for Query Recommendation. Reporter: Yangyang Kang.
RICHES TM Connections Tool Connie Harper Amy Larner Giroux, PhD.
Feature Selection. Poonam Buch. The Problem: the success of machine learning algorithms is usually dependent on the quality of data they operate on.
Survey on Long Queries in Keyword Search : Phrase-based IR Sungchan Park
Predicting User Interests from Contextual Information R. W. White, P. Bailey, L. Chen Microsoft (SIGIR 2009) Presenter : Jae-won Lee.
Learning to Rank: From Pairwise Approach to Listwise Approach Authors: Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li Presenter: Davidson Date:
Introduction to Information Retrieval Introduction to Information Retrieval Lecture 10 Evaluation.
Relevant Document Distribution Estimation Method for Resource Selection Luo Si and Jamie Callan School of Computer Science Carnegie Mellon University
Team Review. Norms: Equity of Voice, Active Listening, Respect for All Perspectives, Safety and Confidentiality, Respectful Use of Technology.
SHREC’16 Track: 3D Sketch-Based 3D Shape Retrieval
About My Fitness Test Results UNIT 1 By _______________________ P5– Interpret their test results and personal level of fitness. M2 – Explain their test.
Sampath Jayarathna Cal Poly Pomona
Walid Magdy Gareth Jones
Evaluation of IR Systems
An Empirical Study of Learning to Rank for Entity Search
Wei Wei, PhD, Zhanglong Ji, PhD, Lucila Ohno-Machado, MD, PhD
Lecture 10 Evaluation.
SHREC’18 Track: 2D Scene Sketch-Based 3D Scene Retrieval
Accounting for the relative importance of objects in image retrieval
Master's Degree Entrance Exam 93: Management
Lecture 6 Evaluation.
A Suite to Compile and Analyze an LSP Corpus
Feature Selection for Ranking
Cumulated Gain-Based Evaluation of IR Techniques
Micheal T. Adenibuyan, Oluwatoyin A. Enikuomehin and Benjamin S
SHREC’19 Track: Extended 2D Scene Image-Based 3D Scene Retrieval
Introduction to information retrieval
SHREC’19 Track: Extended 2D Scene Sketch-Based 3D Scene Retrieval
Presentation transcript:

2006 3D Shape Retrieval Contest Remco Veltkamp, Utrecht University

Participants
1. Chaouch et al., INRIA, France
2. Daras et al., Thessaloniki, Greece
3. Jayanti et al., Purdue University, Indiana
4. Laga et al., NAIST, Japan
5. Makadia et al., University of Pennsylvania
6. Papadakis et al., Athens, Greece
7. Shilane et al., Princeton University, New Jersey
8. Zaharia et al., INT, France

Test collection
- Princeton Shape Benchmark, training set + test set
- 1814 models, renamed, reordered, flattened
- PSB classification: 197 classes
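In the PSB ground truth, models are grouped into classes, and a query model is judged against the other members of its class. As a minimal sketch (with invented model ids and a hypothetical model-to-class mapping, not the benchmark's actual file format), the per-query relevant sets can be derived like this:

```python
from collections import defaultdict

def relevant_sets(model_class):
    """model_class: dict mapping model id -> PSB class label (hypothetical layout)."""
    by_class = defaultdict(set)
    for model, cls in model_class.items():
        by_class[cls].add(model)
    # The relevant set of a query model is its class, minus the query itself.
    return {m: by_class[c] - {m} for m, c in model_class.items()}

# Toy example with invented ids and class names:
classes = {"m001": "chair", "m002": "chair", "m003": "airplane"}
print(relevant_sets(classes)["m001"])   # {'m002'}
```

The graded judgments on the next slide (highly vs. marginally relevant) refine this binary, class-based view.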

Queries

Performance evaluation  Highly relevant: score 2 Marginally relevant: score 1  Precision, Recall  First, Second Tier  (Normalized) (Discounted) Cumulated Gain  Average Dynamic Recall

Performance evaluation: cumulated gain over the rank range [1, 100] (chart).
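The cumulated-gain family of measures follows Järvelin and Kekäläinen's definitions. With the graded scores from the previous slide (2 for highly relevant, 1 for marginally relevant, 0 otherwise) as the gain G[i] at rank i, one common formulation is:

```latex
% Common formulation with log base b = 2; G[i] is the graded gain at rank i.
\[
\mathrm{CG}[i] =
  \begin{cases}
    G[1] & i = 1\\
    \mathrm{CG}[i-1] + G[i] & i > 1
  \end{cases}
\qquad
\mathrm{DCG}[i] =
  \begin{cases}
    G[1] & i = 1\\
    \mathrm{DCG}[i-1] + \dfrac{G[i]}{\log_2 i} & i > 1
  \end{cases}
\]
\[
\mathrm{nDCG}[i] = \frac{\mathrm{DCG}[i]}{\mathrm{IDCG}[i]},
\qquad \text{where } \mathrm{IDCG}[i] \text{ is the DCG of the ideal ordering.}
\]
```

The normalized variant divides by the ideal ranking's gain, so a perfect ranking scores 1 at every rank.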

Performance evaluation

Mean average dynamic recall
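Average Dynamic Recall rewards rankings that return highly relevant models before marginally relevant ones. The sketch below (with invented model ids) follows one published formulation, due to Typke et al., as I understand it: with h highly relevant and c relevant items in total, it averages over the first c ranks the fraction of what should already have been retrieved that actually was; the mean over all queries gives the mean average dynamic recall reported here.

```python
def average_dynamic_recall(ranking, highly, marginal):
    """ADR sketch: highly/marginal are sets of model ids, ranking is best-first."""
    relevant = highly | marginal
    h, c = len(highly), len(relevant)
    total = 0.0
    for i in range(1, c + 1):
        top_i = ranking[:i]
        # Within the first h ranks only highly relevant items should appear,
        # so only they are counted there; afterwards any relevant item counts.
        counted = highly if i <= h else relevant
        total += sum(1 for m in top_i if m in counted) / i
    return total / c

# Toy example with invented ids:
print(average_dynamic_recall(["m002", "m017", "m003"],
                             highly={"m002"}, marginal={"m003"}))  # 0.75
```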

Conclusions
- Excellent opportunity to compare algorithms and analyze strengths and weaknesses
- More tracks wanted (polygon soup/watertight, whole/partial, 3D face, molecules?)
- More test collections and ground truths wanted

Resources: proceedings, all performance numbers, test collection, and queries available via:

What's next: SHREC2007, SMI2007

Thanks … to all participants!