Slide 1: Approximate Nearest Subspace Search with Applications to Pattern Recognition. Ronen Basri, Tal Hassner, Lihi Zelnik-Manor. Presented by Andrew Guillory and Ian Simon.
Slides 2-4: The Problem. Given n linear subspaces S_i and a query point q, find the subspace S_i that minimizes dist(S_i, q).
Slides 5-6: Why? Object appearance variation = subspace: this enables fast queries on an object database. Other reasons?
Slides 7-9: Approach. Solve by reduction to nearest neighbor: point-to-point distances in a higher-dimensional space. (Not an actual reduction.)
Slides 10-12: Point-Subspace Distance. Use the squared distance: the squared point-subspace distance can be represented as a dot product.
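A minimal numeric sketch of this identity (my own NumPy illustration, not the authors' code): with A an orthonormal basis of a k-dimensional subspace S in R^d and Z an orthonormal basis of its orthogonal complement, dist^2(q, S) = ||Z^T q||^2 = vec(Z Z^T) . vec(q q^T), a dot product of two flattened matrices.

```python
# Sketch: squared point-subspace distance as a dot product (illustration only).
import numpy as np

rng = np.random.default_rng(0)
d, k = 10, 3

# Random orthonormal frame: the first k columns span S, the rest span
# its orthogonal complement.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
A, Z = Q[:, :k], Q[:, k:]

q = rng.standard_normal(d)

# Direct: squared norm of q minus its projection onto S.
dist2_direct = np.sum((q - A @ (A.T @ q)) ** 2)

# As a dot product of two lifted vectors: vec(Z Z^T) . vec(q q^T).
dist2_dot = (Z @ Z.T).ravel() @ np.outer(q, q).ravel()

assert np.isclose(dist2_direct, dist2_dot)
```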
Slides 13-22: The Reduction. Let Z be the d-by-(d-k) matrix whose orthonormal columns span the orthogonal complement of a k-dimensional subspace S, so that Z^T Z = I. Remember that dist^2(q, S) = ||Z^T q||^2 = q^T (Z Z^T) q. Map each database subspace to u = vec(Z Z^T) and the query point to v = -vec(q q^T), so that u . v = -dist^2(q, S) and

||u - v||^2 = ||u||^2 + ||v||^2 + 2 dist^2(q, S) = (d - k) + ||q||^4 + 2 dist^2(q, S).

Here (d - k) is the same for every database subspace (because Z^T Z = I and all subspaces have dimension k), and ||q||^4 is fixed once the query is given, so nearest neighbor search on the lifted points finds the nearest subspace. Can we decrease the additive constant?
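To see the reduction end to end, here is a hedged NumPy sketch (the helper random_complement_basis and the sign convention on the query are my assumptions, consistent with the formulas above) checking that brute-force nearest neighbor over the lifted points returns the nearest subspace.

```python
# Sketch of the reduction: nearest lifted point == nearest subspace.
import numpy as np

rng = np.random.default_rng(1)
d, k, n = 8, 2, 50

def random_complement_basis(d, k, rng):
    """Orthonormal basis Z (d x (d-k)) of the complement of a random k-dim subspace."""
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return Q[:, k:]

Zs = [random_complement_basis(d, k, rng) for _ in range(n)]
q = rng.standard_normal(d)

# True nearest subspace by direct point-subspace distance.
dist2 = np.array([np.sum((Z.T @ q) ** 2) for Z in Zs])
best_direct = int(np.argmin(dist2))

# Lifted database and query (the sign flip makes the dot product -dist^2).
U = np.stack([(Z @ Z.T).ravel() for Z in Zs])   # n x d^2
v = -np.outer(q, q).ravel()

# ||u - v||^2 = (d - k) + ||q||^4 + 2 dist^2(q, S): two additive constants
# plus twice the squared point-subspace distance, so the argmin is preserved.
lifted_dist2 = np.sum((U - v) ** 2, axis=1)
best_lifted = int(np.argmin(lifted_dist2))

assert best_direct == best_lifted
assert np.allclose(lifted_dist2, (d - k) + np.sum(q**2) ** 2 + 2 * dist2)
```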
Slides 23-24: Observation 1. All lifted data points lie on a hyperplane: the diagonal entries of Z Z^T sum to d - k, so u . vec(I) = d - k for every subspace. Let u <- u - ((d - k)/d) vec(I). Now the hyperplane contains the origin.
Slides 25-27: Observation 2. After the hyperplane projection, all data points lie on a hypersphere: ||u||^2 = k(d - k)/d for every subspace. Let the (projected) query be rescaled to that same radius. Now the query point lies on the hypersphere too; since all data points share one norm, rescaling the query does not change which subspace is nearest, and the additive constant shrinks.
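Both observations are easy to check numerically. In the sketch below, the centering constant ((d - k)/d) vec(I) and the radius k(d - k)/d are my reconstruction from the formulas above, not quoted from the paper.

```python
# Sketch: lifted subspaces lie on a hyperplane, then on a hypersphere.
import numpy as np

rng = np.random.default_rng(2)
d, k = 8, 2

for _ in range(5):
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    Z = Q[:, k:]
    u = (Z @ Z.T).ravel()
    n = np.eye(d).ravel()                      # hyperplane normal vec(I)

    # Observation 1: u . vec(I) = trace(Z Z^T) = d - k for every subspace.
    assert np.isclose(u @ n, d - k)

    # Center so the hyperplane passes through the origin.
    u_centered = u - (d - k) / d * n

    # Observation 2: after centering, every point has the same squared norm.
    assert np.isclose(u_centered @ u_centered, k * (d - k) / d)
```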
Slides 28-29: Reduction Geometry. What is happening?
Slide 30: Finally. The additive constant depends only on the dimensions of the points and the subspaces. This applies to linear subspaces, all of the same dimension.
Slides 31-32: Extensions. Subspaces of different dimensions (e.g., lines and planes): not all lifted data points then have the same norm, but adding an extra dimension fixes this (see the sketch below). Affine subspaces: again, not all data points have the same norm.
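One standard way to read "add an extra dimension" (a norm-padding trick; not necessarily the paper's exact construction): when the subspace dimensions k_i differ, ||u_i||^2 = d - k_i is no longer constant, but appending sqrt(R^2 - ||u_i||^2) to each database point and 0 to the query gives every padded database point norm R, so the lifted distance again depends on the data only through the dot product.

```python
# Sketch: equalizing lifted norms with one extra coordinate.
import numpy as np

rng = np.random.default_rng(3)
d, n = 8, 40
ks = rng.integers(1, 4, size=n)                # mixed subspace dimensions

def complement_basis(d, k, rng):
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return Q[:, k:]

Zs = [complement_basis(d, int(k), rng) for k in ks]
q = rng.standard_normal(d)

dist2 = np.array([np.sum((Z.T @ q) ** 2) for Z in Zs])

U = np.stack([(Z @ Z.T).ravel() for Z in Zs])
norms2 = np.sum(U**2, axis=1)                  # = d - k_i, no longer constant
R2 = norms2.max()

# Pad: database points get sqrt(R^2 - ||u||^2), the query gets 0.  Now
# ||u' - v'||^2 = R^2 + ||v||^2 - 2 u.v, constant except for the dot product.
U_pad = np.hstack([U, np.sqrt(R2 - norms2)[:, None]])
v_pad = np.append(-np.outer(q, q).ravel(), 0.0)

lifted = np.sum((U_pad - v_pad) ** 2, axis=1)
assert int(np.argmin(lifted)) == int(np.argmin(dist2))
```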
Slide 33: Approximate Nearest Neighbor Search. Find a point x_i with d(x_i, q) <= (1 + ε) min_j d(x_j, q). Tree-based approaches: KD-trees, metric/ball trees, cover trees. Locality-sensitive hashing. This paper uses multiple KD-trees with different random projections.
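As a concrete stand-in for the search stage, here is a minimal sketch using SciPy's cKDTree, whose eps parameter gives exactly this (1 + ε) guarantee (a single tree only; the paper's implementation uses multiple KD-trees over random projections).

```python
# Sketch: (1 + eps)-approximate nearest neighbor with a single KD-tree.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
data = rng.standard_normal((10_000, 16))
query = rng.standard_normal(16)

tree = cKDTree(data)

# eps > 0 lets the search prune aggressively: the returned neighbor is
# guaranteed to be within (1 + eps) of the true nearest distance.
dist_approx, idx = tree.query(query, k=1, eps=0.5)

dist_true = np.min(np.linalg.norm(data - query, axis=1))
assert dist_approx <= (1 + 0.5) * dist_true
```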
Slide 34: KD-Trees. Decompose space into axis-aligned rectangles. (Image from Dan Pelleg.)
Slide 35: Random Projections. Multiply the data by a random matrix X with X(i, j) drawn from N(0, 1). Several different justifications: Johnson-Lindenstrauss (data set small compared to its dimensionality); compressed sensing (data sparse in some linear basis); RP-trees (data with small doubling dimension).
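A minimal sketch of the projection step (the 1/sqrt(m) scaling is my JL-style normalization so distances are preserved in expectation; the slide only specifies N(0, 1) entries).

```python
# Sketch: Gaussian random projection approximately preserving distances.
import numpy as np

rng = np.random.default_rng(5)
n, d, m = 200, 1000, 64                        # project d dims down to m << d

data = rng.standard_normal((n, d))
X = rng.standard_normal((d, m)) / np.sqrt(m)   # entries ~ N(0, 1/m)
projected = data @ X

# Compare one pairwise distance before and after projection.
i, j = 0, 1
orig = np.linalg.norm(data[i] - data[j])
proj = np.linalg.norm(projected[i] - projected[j])
print(f"original: {orig:.2f}  projected: {proj:.2f}  ratio: {proj / orig:.2f}")
```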
Slide 36: Results. Two goals: show their method is fast, and show nearest subspace search is useful. Four experiments: synthetic experiments, image approximation, Yale Faces, Yale patches.
Slide 37: Image Reconstruction.
Slide 38: Yale Faces.
Slide 39: Questions / Issues. Should random projections be applied before or after the reduction? Why does the effective distance error go down with the ambient dimensionality? The reduction tends to make query points far from the points in the database; are there better approximate nearest neighbor algorithms in this case?