Vantage Objects
Dr. Rolf Lakaemper
Dept. of Computer and Information Sciences, Temple University

The Application: ISS Database
Task: create an image database with shape-based retrieval.
Problem: response time. Comparing two shapes takes 23 ms on a 1 GHz Pentium; ISS contains 15,000 images, so a full linear scan takes about 6 minutes per query.
Clustering (to prune the search) is not possible, since the shape distance is not a metric.

Vantage Objects
Solution: keep the full search over the entire database, but use a simpler comparison.
Vantage Objects (Vleugels / Veltkamp, 1999) reduce each comparison to a distance between n-dimensional vectors (n typically < 100).
Paper: Vleugels / Veltkamp, "Efficient Image Retrieval through Vantage Objects" (1999).

Vantage Objects
The Idea: compare the query shape q to a predefined subset S of the shapes in the database D.
The result is an n-dimensional Vantage Vector v(q) = (v1, …, vn), n = |S|, where vi is the distance of q to the i-th vantage object si.
[Diagram: q is compared against s1, …, sn, yielding the vector entries v1, …, vn.]

Vantage Objects
- Each shape can be represented by a single Vantage Vector.
- Computing the Vantage Vector calls the ASR comparison only n times.
- ISS uses 54 Vantage Objects, reducing the comparison time (needed to create the Vantage Vector) to < 1.5 s.
- How to compare the query object to the database?

Vantage Objects
- Create the Vantage Vector vi for every shape di in the database D.
- Create the Vantage Vector vq for the query shape q.
- Compute the Euclidean distance between vq and each vi.
- The best response is the one with minimum distance.
Note: computing the Vantage Vectors for the database objects is an offline process!
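
To make the offline/online split concrete, here is a minimal Python sketch of the scheme described above. The function asr_distance(a, b) is a hypothetical stand-in for the ASR shape comparison, which these slides treat as a black box; the helper names are illustrative, not from the original paper.

```python
import math

def vantage_vector(shape, vantage_objects, asr_distance):
    """Map a shape to its n-dimensional Vantage Vector: the ASR
    distances to the n vantage objects (n = 54 in ISS)."""
    return [asr_distance(shape, s) for s in vantage_objects]

def euclidean(u, v):
    """Euclidean distance between two vantage vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def build_index(database, vantage_objects, asr_distance):
    """Offline step: precompute the Vantage Vector of every shape in D."""
    return [vantage_vector(d, vantage_objects, asr_distance) for d in database]

def vantage_query(query_shape, database, index, vantage_objects, asr_distance):
    """Online step: n ASR calls for the query's Vantage Vector,
    then a cheap Euclidean scan over the precomputed index."""
    vq = vantage_vector(query_shape, vantage_objects, asr_distance)
    best = min(range(len(database)), key=lambda i: euclidean(vq, index[i]))
    return database[best]
```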

How to define the set S of Vantage Objects?

k=1..i-1 e(di , sk) maximal. (e = eucl. dist.) Vantage Objects Algorithm 1 (Vleugels / Veltkamp 2000): Predefine the number n of Vantage Objects S0 = { } Iteratively add shapes di  D\Si-1 to Si-1 such that Si = Si-1  di and k=1..i-1 e(di , sk) maximal. (e = eucl. dist.) Stop if i = n.

Vantage Objects
Result: did not work for ISS.

Vantage Objects
Algorithm 2 (Latecki / Henning / Lakaemper):
Def.: A(s1, s2) is the ASR distance of shapes s1, s2; q is the query shape.
'Vantage Query': determine the result r by minimizing e(vq, vi), where vi is the Vantage Vector of si.
'ASR Query': determine the result r by minimizing A(q, di).
A Vantage Query has a certain loss of retrieval quality compared to an ASR Query. Define a loss function l to model the extent of this loss of retrieval performance.

Vantage Objects
Given a database D and a set V of Vantage Vectors, the loss of retrieval performance for a single query by shape q is given by
lV,D(q) = A(q, r),
where r denotes the resulting shape of the vantage query to D using q.
Property: lV,D(q) is minimal if r is also the result of the ASR Query.

Vantage Objects
Now define the retrieval error function L(S) of a set S = {s1, …, sn} ⊆ D of Vantage Objects of the database D:
L(S) = (1/n) Σi=1..n lS,D\{si}(si)
Task: find a subset S ⊆ D such that L(S) is minimal.
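
A direct transcription of l and L, reusing build_index, vantage_query, and the hypothetical asr_distance from the earlier sketch. It is written for clarity, not speed: a real implementation would cache the vantage vectors instead of rebuilding the index per query.

```python
def retrieval_loss(q, S, database, asr_distance):
    """l_{S,D}(q) = A(q, r), where r is the result of the vantage query
    for q; minimal exactly when r matches the ASR query result."""
    index = build_index(database, S, asr_distance)
    r = vantage_query(q, database, index, S, asr_distance)
    return asr_distance(q, r)

def retrieval_error(S, database, asr_distance):
    """L(S): average leave-one-out loss over the vantage objects,
    querying each s_i against D \\ {s_i}."""
    total = 0.0
    for s in S:
        rest = [d for d in database if d is not s]
        total += retrieval_loss(s, S, rest, asr_distance)
    return total / len(S)
```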

Vantage Objects
Algorithm:
S0 = { }
Iteratively determine sj ∈ D \ Sj-1 such that Sj = Sj-1 ∪ {sj} and L(Sj) is minimal.
Stop if the improvement is low.
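
The same greedy pattern as Algorithm 1, now minimizing L instead of maximizing spread. The stopping threshold min_improvement is an assumption, since the slide only says "stop if improvement is low"; this naive version also re-evaluates L from scratch for every candidate, so it sketches the logic rather than an efficient implementation.

```python
def select_vantage_objects_v2(database, asr_distance, min_improvement=1e-3):
    """Algorithm 2: greedily add the shape that minimizes the retrieval
    error L(S); stop when the error improvement falls below a threshold."""
    selected, candidates = [], list(database)
    prev_error = float("inf")
    while candidates:
        best, best_error = None, float("inf")
        for c in candidates:
            err = retrieval_error(selected + [c], database, asr_distance)
            if err < best_error:
                best, best_error = c, err
        if selected and prev_error - best_error < min_improvement:
            break   # improvement too small: stop growing S
        selected.append(best)
        candidates.remove(best)
        prev_error = best_error
    return selected
```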

Vantage Objects
Result: worked fine for ISS, though handpicked objects still performed better.
[Plot: retrieval error L(S) vs. number of Vantage Objects, for Algorithm 2 and for handpicked objects.]

Vantage Objects
…some of the Vantage Objects used in ISS:
[Images of example vantage shapes.]

Vantage Objects helped in times of need, but discussion is required!