Efficiently searching for similar images (Kristen Grauman)
Cristina Patricia Cáceres Jáuregui, Universidad Católica San Pablo
Motivation
Fast image search is a useful component for a number of vision problems, but images come with plenty of nuisance parameters (lighting, pose, background clutter, etc.).
Nuisance parameters
Outline
Scalable image search:
• Fast correspondence-based search with local features
• Fast similarity search for learned metrics
Local image features
How to handle sets of features?
Want to compare, index, cluster, etc. local representations, but:
• Each instance is an unordered set of vectors
• The number of vectors varies per instance
Comparing sets of local features
Previous strategies:
• Match features individually, then vote on small sets to verify: an explicit search for one-to-one correspondences
• Bag-of-words: compare frequencies of prototype features
Pyramid match kernel
Approximates the optimal partial matching between two feature sets:
• Optimal matching: O(m³)
• Pyramid match: O(mL)
where m = # features and L = # levels in the pyramid
Pyramid match: main idea
Feature space partitions serve to “match” the local descriptors within successively wider regions of the descriptor space.
Pyramid match: main idea
Histogram intersection counts the number of possible matches at a given partitioning.
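As a toy illustration of the weighting scheme (a minimal sketch with names of my own choosing; the real kernel bins multi-dimensional descriptors hierarchically, here features are scalars in [0, feature_range)):

```python
import numpy as np

def pyramid_match(x, y, num_levels=4, feature_range=16.0):
    """Toy pyramid match score for two 1-D feature sets."""
    def intersection(level):
        bin_width = 2 ** level  # bins double in width at each level
        edges = np.arange(0.0, feature_range + bin_width, bin_width)
        hx, _ = np.histogram(x, bins=edges)
        hy, _ = np.histogram(y, bins=edges)
        # matches available when features need only agree to within bin_width
        return np.minimum(hx, hy).sum()

    score, prev = 0.0, 0.0
    for level in range(num_levels):
        cur = intersection(level)
        score += (cur - prev) / (2 ** level)  # discount coarser new matches
        prev = cur
    return score
```

Each level costs time linear in the number of features, which is where the O(mL) complexity above comes from.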
Image search with matching-sensitive hash functions
• Main idea:
  – Map point sets to a vector space in such a way that a dot product reflects partial match similarity (the normalized PMK value).
  – Exploit random hyperplane properties to construct matching-sensitive hash functions.
  – Perform approximate similarity search on the hashed examples.
Locality Sensitive Hashing (LSH)
Guarantee “approximate” nearest neighbors in sub-linear time, given appropriate hash functions.
(Diagram: N database points Xi are hashed by randomized LSH functions h_{r1…rk} into short binary keys such as 110101; a query Q is hashed the same way, so only the << N points whose keys collide with Q's need to be examined.)
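A rough sketch of the lookup side (class and method names are mine, not from the talk): index each database vector under its k-bit key, then compare a query only against its own bucket.

```python
from collections import defaultdict

class LSHTable:
    """One hash table over k-bit keys; real systems use several tables."""
    def __init__(self, hash_fn):
        self.hash_fn = hash_fn  # maps a vector to a hashable k-bit key
        self.buckets = defaultdict(list)

    def insert(self, idx, vec):
        self.buckets[self.hash_fn(vec)].append(idx)

    def query(self, vec):
        # Only items whose keys collide with the query's become candidates,
        # which keeps the search sub-linear in the database size N.
        return self.buckets[self.hash_fn(vec)]
```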
LSH functions for dot products
The probability that a random hyperplane separates two unit vectors depends on the angle between them: for a random direction r, Pr[sign(rᵀx) ≠ sign(rᵀy)] = θ(x, y) / π. Corresponding hash function: h_r(x) = sign(rᵀx).
A) High dot product (small angle): unlikely to split
B) Lower dot product (larger angle): likely to split
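A sketch of such a hash family in NumPy (the function name and seeding are my own); each of k random Gaussian directions contributes one sign bit, and the resulting key can serve as the hash_fn in the table sketch above:

```python
import numpy as np

def make_hyperplane_hash(k, dim, seed=0):
    # k random hyperplanes; bit i of the key is 1 iff r_i . x >= 0.
    R = np.random.default_rng(seed).standard_normal((k, dim))
    def h(x):
        # For unit vectors, two inputs disagree on a given bit
        # with probability theta(x, y) / pi.
        return tuple((R @ x >= 0).astype(int).tolist())
    return h
```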
Metric learning
There are various ways to judge appearance/shape similarity… but often we know more about (some) data than just their appearance.
Metric learning
Exploit partially labeled data and/or (dis)similarity constraints to construct a more useful distance function. This can dramatically boost performance on clustering, indexing, and classification tasks. Various techniques already exist.
Fast similarity search for learned metrics
• Goal:
  – Maintain query-time guarantees while performing approximate search with a learned metric
• Main idea:
  – Learn a Mahalanobis distance parameterization
  – Use it to shape the distribution from which random hash functions are selected
• LSH functions that preserve the learned metric
• Approximate NN search with existing methods
Fast Image Search for Learned Metrics
Learn a Mahalanobis metric for LSH. (Diagram: pairs with similarity constraints hash to the same key, h(x) = h(y); pairs with dissimilarity constraints hash to different keys, h(x) ≠ h(z).) It should be unlikely that a hash function will split examples like those having similarity constraints… …but likely that it splits those having dissimilarity constraints.
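A sketch of one way to realize this parameterization (the function name is mine; it assumes a learned positive-definite Mahalanobis matrix A is already available): since A = GᵀG implies d_A(x, y) = ||Gx - Gy||², hashing Gx with ordinary hyperplane LSH makes collisions respect the learned metric.

```python
import numpy as np

def make_learned_metric_hash(A, k, seed=0):
    # Factor the learned Mahalanobis matrix A = G^T G; A must be
    # positive definite for the Cholesky factorization to exist.
    L = np.linalg.cholesky(A)  # A = L @ L.T, so take G = L.T
    R = np.random.default_rng(seed).standard_normal((k, A.shape[0]))
    def h(x):
        # Ordinary hyperplane LSH applied in the transformed space Gx,
        # so collisions track the learned metric rather than Euclidean.
        return tuple((R @ (L.T @ x) >= 0).astype(int).tolist())
    return h
```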
Summary
• Local image features are useful; it is important to handle them efficiently
• Introduced scalable methods that allow fast similarity search with:
  – Local feature matching
  – Learned Mahalanobis metrics
• Key idea: design hash functions that encode the matching process, or the constraints provided