2 Motion illusion, rotating snakes

3 Local features: main components
Detection: Find a set of distinctive key points. Description: Extract feature descriptor around each interest point as vector. Matching: Compute distance between feature vectors to find correspondence. K. Grauman, B. Leibe

4 Review: Harris corner detector
Approximate distinctiveness E(u, v) by the local auto-correlation. Approximate the local auto-correlation by the second moment matrix M. Distinctiveness (or cornerness) relates to the eigenvalues of M. Instead of computing eigenvalues directly, we can use the determinant and trace of M. [Figure: second moment ellipse with axis lengths (λmax)^(-1/2) and (λmin)^(-1/2)] James Hays

5 Trace / determinant and eigenvalues
Given an n×n matrix A with eigenvalues λ1, …, λn: det(A) = λ1·λ2·…·λn and trace(A) = λ1 + λ2 + … + λn. For the 2×2 second moment matrix M with eigenvalues λ1, λ2: R = λ1λ2 − α(λ1 + λ2)² = det(M) − α·trace(M)²
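As an illustrative sketch (not from the slides), the cornerness R can be computed directly from the entries of M, with no explicit eigendecomposition. The Gaussian window, the α = 0.04 default, and the function name below are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(Ix, Iy, alpha=0.04, sigma=1.0):
    # Ix, Iy: image gradients (e.g., from a Sobel filter).
    # Gaussian-weighted sums give the entries of the second moment matrix M.
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det_M = Sxx * Syy - Sxy ** 2      # = lambda1 * lambda2
    trace_M = Sxx + Syy               # = lambda1 + lambda2
    return det_M - alpha * trace_M ** 2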

6 How invariant are Harris corners?

7 Affine intensity change
I → aI + b. Only derivatives are used => invariance to intensity shift I → I + b. Intensity scaling I → aI scales the response R, so with a fixed threshold the detector is only partially invariant to affine intensity change. [Figure: corner response R vs. image coordinate x, with threshold lines] James Hays
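A tiny numerical check of the claim above (a sketch; the array size and values are arbitrary): a shift I → I + b leaves derivatives unchanged, while a scaling I → aI scales them by a, so a fixed threshold on R is only partially invariant.

import numpy as np

I = np.random.rand(64, 64)
gx = np.gradient(I, axis=1)

# Intensity shift: derivatives (and hence M and R) are unchanged.
assert np.allclose(np.gradient(I + 10.0, axis=1), gx)

# Intensity scaling: derivatives scale by a, M by a^2, R by roughly a^4,
# so a fixed threshold on R picks different corners.
a = 2.0
assert np.allclose(np.gradient(a * I, axis=1), a * gx)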

8 Corner location is covariant w.r.t. translation
Image translation Derivatives and window function are shift-invariant. Corner location is covariant w.r.t. translation James Hays

9 Corner location is covariant w.r.t. rotation
Image rotation Second moment ellipse rotates but its shape (i.e., eigenvalues) remains the same. Corner location is covariant w.r.t. rotation James Hays

10 Corner location is not covariant to scaling!
When the image is scaled up, each local window sees only a gentle curve, so all points along the corner will be classified as edges. Corner location is not covariant to scaling! James Hays

11 What is the ‘scale’ of a feature point?

12 Automatic Scale Selection
How do we find the patch sizes at which the response of f is equal across images? What is a good f? K. Grauman, B. Leibe

13 Automatic Scale Selection
Function responses for increasing scale (scale signature) Response of some function f K. Grauman, B. Leibe


19 What Is A Useful Signature Function f ?
1st derivative of Gaussian; Laplacian of Gaussian (2nd derivative). Earl F. Glynn

20 What Is A Useful Signature Function f ?
A “blob” detector is a common choice: the Laplacian (2nd derivative) of Gaussian (LoG). [Figure: function response across scale space vs. image blob size] K. Grauman, B. Leibe

21 Find local maxima in position-scale space
Find maxima across scales s, s2, s3, s4, … to get a list of (x, y, s). Note: the blob detector is distinct from the Harris corner detector. K. Grauman, B. Leibe

22 Alternative approach Approximate LoG with Difference-of-Gaussian (DoG). Ruye Wang

Alternative approach: approximate LoG with Difference-of-Gaussian (DoG). 1. Blur image with a σ Gaussian kernel. 2. Blur image with a kσ Gaussian kernel. 3. Subtract one from the other. Small k gives a closer approximation to LoG, but usually we want to build a scale space quickly out of this; k = 1.6 gives an appropriate scale space. K. Grauman, B. Leibe
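A minimal sketch of steps 1–3 with scipy (the helper name and parameters are assumptions; the sign follows Lowe's G(kσ) − G(σ) convention, which does not change the detected points since both maxima and minima of the DoG are kept).

import numpy as np
from scipy.ndimage import gaussian_filter

def dog_scale_space(image, sigma0=1.6, k=2 ** 0.5, num_levels=5):
    # Blur with sigma, k*sigma, k^2*sigma, ... and subtract adjacent levels.
    image = image.astype(np.float64)
    sigmas = [sigma0 * k ** i for i in range(num_levels + 1)]
    blurred = [gaussian_filter(image, s) for s in sigmas]
    dogs = [blurred[i + 1] - blurred[i] for i in range(num_levels)]
    return np.stack(dogs), sigmas[:-1]   # (num_levels, H, W) stack + scales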

24 Find local maxima in position-scale space of DoG
Build the DoG stack at scales s, ks, 2ks, … by blurring the input image and subtracting adjacent levels, then find the maxima to get a list of (x, y, s). Note: the blob detector is distinct from the Harris corner detector. K. Grauman, B. Leibe
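A rough sketch of the maxima search over the (scale, y, x) stack from the previous sketch; the 3×3×3 neighbourhood and the contrast threshold are assumptions, and SIFT's low-contrast and edge rejection steps are omitted.

import numpy as np
from scipy.ndimage import maximum_filter

def scale_space_maxima(dog_stack, threshold=0.03):
    # A point survives if it is the largest value in its 3x3x3 neighbourhood
    # and its response is above a contrast threshold.
    local_max = maximum_filter(dog_stack, size=3) == dog_stack
    s, y, x = np.nonzero(local_max & (dog_stack > threshold))
    return list(zip(x, y, s))   # list of (x, y, scale index)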

25 Results: Difference-of-Gaussian
Larger circles = larger scale Descriptors with maximal scale response K. Grauman, B. Leibe

26 Maximally Stable Extremal Regions [Matas ‘02]
Based on Watershed segmentation algorithm Select regions that stay stable over a large parameter range K. Grauman, B. Leibe

27 Example Results: MSER K. Grauman, B. Leibe

28 Review: Interest points
Keypoint detection: repeatable and distinctive. Corners, blobs, stable regions: Harris, DoG, MSER.

29 Review: Choosing an interest point detector
Why choose? Collect more points with more detectors, for more possible matches. What do you want it for? Precise localization in x-y: Harris. Good localization in scale: Difference-of-Gaussian. Flexible region shape: MSER. The best choice is often application dependent: Harris-/Hessian-Laplace/DoG work well for many natural categories; MSER works well for buildings and printed things. There have been extensive evaluations/comparisons [Mikolajczyk et al., IJCV’05, PAMI’05]. All detectors/descriptors shown here work well.

30 Comparison of Keypoint Detectors
Tuytelaars & Mikolajczyk 2008

31 Local Image Descriptors
Read Szeliski 4.1 Acknowledgment: Many slides from James Hays, Derek Hoiem and Grauman & Leibe 2008 AAAI Tutorial

32 Local features: main components
Detection: Find a set of distinctive key points. Description: Extract feature descriptor around each interest point as vector. Matching: Compute distance between feature vectors to find correspondence. K. Grauman, B. Leibe

33 Image Representations: Histograms
Global histogram to represent the distribution of features (color, texture, depth, …), or local histogram per detected point. [Example image: Space Shuttle Cargo Bay] Images from Dave Kauchak

34 For what things do we compute histograms?
Color: model local appearance, e.g., in L*a*b* or HSV color space. James Hays

35 For what things do we compute histograms?
Texture: local histograms of oriented gradients. SIFT: Scale Invariant Feature Transform, extremely popular (40k+ citations). SIFT – Lowe, IJCV 2004. James Hays

36 SIFT Find Difference of Gaussian scale-space extrema Post-processing
Position interpolation Discard low-contrast points Eliminate points along edges

37 SIFT Find Difference of Gaussian scale-space extrema Post-processing
Position interpolation Discard low-contrast points Eliminate points along edges Orientation estimation

38 SIFT Orientation Normalization
Compute the orientation histogram (over 0 to 2π), select the dominant orientation θ, then normalize: rotate to a fixed orientation. T. Tuytelaars, B. Leibe [Lowe, SIFT, 1999]
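A minimal sketch of the first two steps (names are assumptions; full SIFT also Gaussian-weights the votes and keeps secondary peaks above 80% of the maximum).

import numpy as np

def dominant_orientation(gx, gy, num_bins=36):
    # gx, gy: gradients inside the keypoint's local window.
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx) % (2 * np.pi)        # angles in [0, 2*pi)
    hist, edges = np.histogram(orientation, bins=num_bins,
                               range=(0, 2 * np.pi), weights=magnitude)
    peak = np.argmax(hist)
    return 0.5 * (edges[peak] + edges[peak + 1])          # dominant orientation theta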

39 SIFT Find Difference of Gaussian scale-space extrema Post-processing
Position interpolation Discard low-contrast points Eliminate points along edges Orientation estimation Descriptor extraction Motivation: We want some sensitivity to spatial layout, but not too much, so blocks of histograms give us that.

40 SIFT Descriptor Extraction
Given a keypoint with scale and orientation: pick the scale-space image which most closely matches the estimated scale; then either resample the image patch to match the orientation, OR subtract the detector orientation from the descriptor's gradient orientations, to give invariance to general image rotation.

41 SIFT Orientation Normalization
Compute the orientation histogram (over 0 to 2π), select the dominant orientation θ, then normalize: rotate to a fixed orientation. T. Tuytelaars, B. Leibe [Lowe, SIFT, 1999]

42 SIFT Descriptor Extraction
Given a keypoint with scale and orientation: compute gradient magnitude and orientation, then build an 8-bin ‘histogram’, adding the magnitude amounts. Utkarsh Sinha

43 SIFT Descriptor Extraction
Within each 4x4 window: compute gradient magnitude and orientation, build an 8-bin ‘histogram’ by adding the magnitude amounts, and weight the magnitude added to the ‘histogram’ by a Gaussian. Utkarsh Sinha
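A simplified sketch of the histogram grid over a 16×16 patch (names are assumptions; the Gaussian weighting and the trilinear interpolation used in full SIFT are left out).

import numpy as np

def descriptor_histograms(gmag, gori, num_cells=4, num_bins=8):
    # gmag, gori: 16x16 gradient magnitude and orientation (radians),
    # already rotated to the keypoint orientation.
    hists = np.zeros((num_cells, num_cells, num_bins))
    cell = gmag.shape[0] // num_cells                      # 4 pixels per cell
    bins = np.minimum(((gori % (2 * np.pi)) / (2 * np.pi) * num_bins).astype(int),
                      num_bins - 1)
    for r in range(num_cells):
        for c in range(num_cells):
            sl = (slice(r * cell, (r + 1) * cell), slice(c * cell, (c + 1) * cell))
            np.add.at(hists[r, c], bins[sl].ravel(), gmag[sl].ravel())
    return hists.ravel()                                   # 4 * 4 * 8 = 128 values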

44 SIFT Descriptor Extraction
Extract 8 x 16 = 128 values into a 128-dim vector. Illumination invariance: we work in gradient space, so the descriptor is robust to I → I + b. Normalize the vector to unit length: robust to I → αI brightness changes. Clamp all vector values > 0.2 to 0.2: robust to “non-linear illumination effects” (image value saturation / specular highlights). Renormalize. “A change in image contrast in which each pixel value is multiplied by a constant will multiply gradients by the same constant, so this contrast change will be canceled by vector normalization.” “However, non-linear illumination changes can also occur due to camera saturation or due to illumination changes that affect 3D surfaces with differing orientations by different amounts. These effects can cause a large change in relative magnitudes for some gradients, but are less likely to affect the gradient orientations. Therefore, we reduce the influence of large gradient magnitudes by thresholding the values in the unit feature vector to each be no larger than 0.2, and then renormalizing to unit length. This means that matching the magnitudes for large gradients is no longer as important, and that the distribution of orientations has greater emphasis. The value of 0.2 was determined experimentally using images containing differing illuminations for the same 3D objects.” Quotes from the Lowe paper.
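A small sketch of the normalize / clamp / renormalize procedure quoted above (the function name is an assumption).

import numpy as np

def normalize_sift_vector(v, clamp=0.2):
    # Unit-normalize (cancels contrast changes), clamp entries above 0.2
    # (reduces the influence of saturation / specular highlights), renormalize.
    v = np.asarray(v, dtype=np.float64)
    n = np.linalg.norm(v)
    if n > 0:
        v = v / n
    v = np.minimum(v, clamp)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v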

45 Specular highlights move between image pairs!

47 SIFT-like descriptor in Project 2
SIFT is hand designed based on intuition. You implement your own SIFT-like descriptor. Ignore scale/orientation to start. Parameters: stick with the defaults plus minor tweaks. Feel free to look at papers / resources for inspiration. 2017 Spring TA Martin Zhu recommends this tutorial: Lowe’s original paper:

48 Efficient Implementation
Filter using oriented kernels based on directions of histogram bins. Called ‘steerable filters’

49 Local Descriptors: SURF
Fast approximation of the SIFT idea. Efficient computation via 2D box filters & integral images: ~6 times faster than SIFT, with equivalent quality for object identification. GPU implementation available: feature extraction at ~200 Hz (detector + descriptor, 640×480 image). [Bay, ECCV’06], [Cornelis, CVGPU’08] K. Grauman, B. Leibe

50 Local Descriptors: Shape Context
Count the number of points inside each bin, e.g.: Count = 4 ... Count = 10 Log-polar binning: More precision for nearby points, more flexibility for farther points. Belongie & Malik, ICCV 2001 K. Grauman, B. Leibe
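A rough sketch of the log-polar binning (the radii, bin counts, and function name are assumptions; the real shape context also normalizes radii by the mean pairwise distance).

import numpy as np

def shape_context(points, center, num_r=5, num_theta=12, r_min=0.125, r_max=2.0):
    # Count points per log-polar bin around `center`: log-spaced radial edges
    # give more precision for nearby points, coarser bins for distant ones.
    d = points - center
    r = np.hypot(d[:, 0], d[:, 1])
    theta = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
    r_edges = np.logspace(np.log10(r_min), np.log10(r_max), num_r + 1)
    t_edges = np.linspace(0, 2 * np.pi, num_theta + 1)
    hist, _, _ = np.histogram2d(r, theta, bins=[r_edges, t_edges])
    return hist                       # (num_r, num_theta) counts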

51 Shape Context Descriptor

52 Self-similarity Descriptor
Matching Local Self-Similarities across Images and Videos, Shechtman and Irani, 2007 James Hays

53 Self-similarity Descriptor
Matching Local Self-Similarities across Images and Videos, Shechtman and Irani, 2007 James Hays

54 Self-similarity Descriptor
Matching Local Self-Similarities across Images and Videos, Shechtman and Irani, 2007 James Hays

55 Learning Local Image Descriptors Winder and Brown, 2007

56 Review: Local Descriptors
Most features can be thought of as templates, histograms (counts), or combinations. The ideal descriptor should be robust and distinctive, compact and efficient. Most available descriptors focus on edge/gradient information: they capture texture information; color is rarely used. K. Grauman, B. Leibe

57 Available at a web site near you…
Many local feature detectors have executables available online: K. Grauman, B. Leibe

58 Feature Matching Read Szeliski 4.1
Many slides from James Hays, Derek Hoiem, and Grauman&Leibe 2008 AAAI Tutorial

59 Local features: main components
Detection: Find a set of distinctive key points. Description: Extract feature descriptor around each interest point as vector. Matching: Compute distance between feature vectors to find correspondence. K. Grauman, B. Leibe

60 How do we decide which features match?
Distance: 0.34, 0.30, 0.40 Distance: 0.61, 1.22

61 Think-Pair-Share Design a feature point matching scheme.
Think-Pair-Share: Design a feature point matching scheme. Two images, I1 and I2; two sets X1 and X2 of feature points; each feature point x1 has a descriptor. Consider: distance, bijective/injective/surjective mappings, noise, confidence, computational complexity, generality… Bijective: each element of one set is paired with exactly one element of the other set. Injective: every element of the codomain Y is matched by at most one element of the domain X. Surjective: every element y in Y has at least one corresponding element x in X.

62 Euclidean distance vs. Cosine Similarity
Wikipedia
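For concreteness, a minimal sketch of the two measures. For unit-length descriptors (e.g., normalized SIFT vectors) they give the same nearest-neighbour ranking, since ||a − b||² = 2 − 2·cos(a, b).

import numpy as np

def euclidean_distance(a, b):
    return np.linalg.norm(a - b)

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))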

63 Feature Matching
Criterion 1: Compute distance in feature space, e.g., Euclidean distance between 128-dim SIFT descriptors; match each point to the lowest distance (nearest neighbor). Problems: Does everything have a match?

64 Feature Matching
Criterion 2: Compute distance in feature space, e.g., Euclidean distance between 128-dim SIFT descriptors; match each point to the lowest distance (nearest neighbor); ignore anything higher than a threshold (no match!). Problems: The threshold is hard to pick, and non-distinctive features could have lots of close matches, only one of which is correct.

65 Nearest Neighbor Distance Ratio
Compare the distance to the closest (NN1) and second-closest (NN2) feature vector neighbor. If NN1 ≈ NN2, the ratio NN1/NN2 will be ≈ 1: the matches are too close to call. As NN1 << NN2, the ratio NN1/NN2 tends to 0. Sorting by this ratio puts matches in order of confidence. Threshold the ratio; but how do we choose the threshold?
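A minimal sketch of the ratio test (names are assumptions; the 0.8 threshold is Lowe's suggested value, and the brute-force loop is for clarity, not speed).

import numpy as np

def ratio_test_matches(desc1, desc2, ratio_threshold=0.8):
    # desc1: (N1, D), desc2: (N2, D). Returns (i, j, ratio), most confident first.
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        nn1, nn2 = np.partition(dists, 1)[:2]       # two smallest distances
        ratio = nn1 / nn2 if nn2 > 0 else 1.0
        if ratio < ratio_threshold:
            matches.append((i, int(np.argmin(dists)), ratio))
    return sorted(matches, key=lambda m: m[2])      # sort by confidence (low ratio)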

66 Nearest Neighbor Distance Ratio
Lowe computed probability distribution functions of this ratio from 40,000 keypoints with hand-labeled ground truth. The ratio threshold depends on your application’s trade-off between the number of false positives and true positives! “Figure 11: The probability that a match is correct can be determined by taking the ratio of distance from the closest neighbor to the distance of the second closest. Using a database of 40,000 keypoints, the solid line shows the PDF of this ratio for correct matches, while the dotted line is for matches that were incorrect.” Lowe IJCV 2004

67 Efficient computation of matching cost
Naïve looping is expensive. Instead, operate on matrices of descriptors: e.g., for row-vector descriptors, features_image1 * features_image2^T produces the matrix of dot products for all pairs of features.
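A small sketch of this vectorized computation (names assumed): expanding ||a − b||² = ||a||² + ||b||² − 2a·b turns the all-pairs distance into one matrix multiplication.

import numpy as np

def pairwise_sq_distances(F1, F2):
    # F1: (N1, D) row-vector descriptors, F2: (N2, D).
    sq1 = np.sum(F1 ** 2, axis=1)[:, None]    # (N1, 1)
    sq2 = np.sum(F2 ** 2, axis=1)[None, :]    # (1, N2)
    d2 = sq1 + sq2 - 2.0 * (F1 @ F2.T)        # (N1, N2) all-pairs squared distances
    return np.maximum(d2, 0.0)                # clip tiny negatives from round-off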

68 How good is SIFT?

69 SIFT Repeatability Lowe IJCV 2004
‘Nearest descriptor in database’ = if you just picked the nearest descriptor, rather than using an NNDR. Lowe IJCV 2004

70 SIFT Repeatability Lowe IJCV 2004

71 SIFT Repeatability Lowe IJCV 2004

72 SIFT Repeatability Lowe IJCV 2004

73 Review: Interest points
Keypoint detection: repeatable and distinctive. Corners, blobs, stable regions: Harris, DoG. Descriptors: robust and selective. Spatial histograms of orientation: SIFT.

