1
Image Features. Computer Vision 6.869, Bill Freeman. Image and shape descriptors: Harris corner detectors and SIFT features. With slides from Darya Frolova, Denis Simakov, and David Lowe. Szeliski 2.1.5
2
Picture a magical feature finder: take a photo; it finds distinctive features and matches their appearance in another image of the same thing (under different lighting, viewpoint, etc.), or even matches the appearance of a feature on a similar thing. Each feature found should be rich enough to be distinctive. ((Note: read over the randomness material for iris features.)) What could you do with such a magical feature? Make panoramas (easy), make 3D (harder), make matches over time (harder), make matches within an object category (harder).
3
3 object pov 1
4
44
5
5
6
6
7
7 http://www.mooseyscountrygarden.com/cat-dog-pictures/sleeping-dog.jpg
8
8 http://tripatlas.com/list.html?id=50
9
9
10
Uses for feature point detectors and descriptors in computer vision and graphics. –Image alignment and building panoramas –3D reconstruction –Motion tracking –Object recognition –Indexing and database retrieval –Robot navigation –… other
11
How do we build a panorama? We need to match (align) images
12
Matching with Features Detect feature points in both images
13
Matching with Features Detect feature points in both images Find corresponding pairs
14
Matching with Features Detect feature points in both images Find corresponding pairs Use these matching pairs to align images - the required mapping is called a homography.
15
Matching with Features. Problem 1: detect the same point independently in both images. If a point is not detected in both images, there is no chance to match. We need a repeatable detector. Counter-example:
16
Matching with Features Problem 2: –For each point correctly recognize the corresponding one ? We need a reliable and distinctive descriptor
17
Building a Panorama M. Brown and D. G. Lowe. Recognising Panoramas. ICCV 2003
18
Note: x, y in Harris Taylor; descriptor needs work; some pics are missing around eigenvalues in the real image.
19
DESCRIPTOR PART NEEDS WORK: more illustration. Say that the simplest descriptor is pixel values. Show examples of feature neighborhoods in real pano images.
20
Last time: We assumed we knew feature correspondences between images and aligned the images into a virtual wider-angle view. The mapping is a homography; given 4 pairs of points, it is determined by a linear system. Today: automatic correspondence. p′ ≅ H p
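To make the "given 4 pairs of points, linear system" step concrete, here is a minimal NumPy sketch (not from the slides; the point pairs are made up) that estimates H with the direct linear transform and then maps a point via p′ ≅ H p:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate H such that dst ~ H @ src (homogeneous) from >= 4 point pairs (DLT)."""
    A = []
    for (x, y), (xp, yp) in zip(src, dst):
        # Each correspondence contributes two rows of the linear system A h = 0.
        A.append([-x, -y, -1, 0, 0, 0, xp * x, xp * y, xp])
        A.append([0, 0, 0, -x, -y, -1, yp * x, yp * y, yp])
    A = np.asarray(A)
    # h is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalize so H[2, 2] = 1

# Hypothetical corresponding corners of a quad seen in two images.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
dst = np.array([[10, 12], [118, 8], [122, 110], [6, 114]], dtype=float)
H = homography_from_points(src, dst)

# Map a source point: p' = H p (defined only up to scale).
p = np.array([0.5, 0.5, 1.0])
pp = H @ p
print(pp[:2] / pp[2])
```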
21
What correspondences would you pick? Goal: click on the same scene point in both images.
22
Discussion: In this lecture, we'll seek to derive detectors and descriptors that work perfectly. In reality, they will never be 100% perfect. Next time, we'll see how we can use the large number of feature points we extract to reject outliers.
23
Preview: descriptor, detector, location. Note: here the viewpoint is different, not a panorama (they are showing off). Detector: detect the same scene points independently in both images. Descriptor: encode the local neighboring window. Note how the scale and rotation of the window are the same in both images (but computed independently). Correspondence: find the most similar descriptor in the other image.
24
Outline: Feature point detection –Harris corner detector –finding a characteristic scale. Local image description –SIFT features
25
Important: We run the detector and compute descriptors independently in the various images. It is only at the end, given a bag of feature points and their descriptors, that we compute correspondences. But for the derivation, we will look at how the detector and descriptor behave for corresponding scene points, to check for consistency.
26
Selecting Good Features What is a “good feature”? –Satisfies brightness constancy –Has sufficient texture variation –Does not have too much texture variation –Corresponds to a “real” surface patch –Does not deform too much over time
27
Goals: Detector (location of feature point) –precisely localized in space. Descriptor (used to match between images)
28
More motivation… Feature points are used also for: –Image alignment (homography, fundamental matrix) –3D reconstruction –Motion tracking –Object recognition –Indexing and database retrieval –Robot navigation –… other
29
Contents Harris Corner Detector –Algorithm –Analysis Detectors –Rotation invariant –Scale invariant Descriptors –Rotation invariant –Scale invariant
30
Harris corner detector C.Harris, M.Stephens. “A Combined Corner and Edge Detector”. 1988 Darya Frolova, Denis Simakov The Weizmann Institute of Science http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt
31
The Basic Idea: We should be able to easily localize the point by looking through a small window. Shifting the window in any direction should give a large change in pixel intensities within the window; this makes the location precisely defined. Darya Frolova, Denis Simakov, The Weizmann Institute of Science http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt
32
Corner Detector: Basic Idea “flat” region: no change in all directions “edge”: no change along the edge direction “corner”: significant change in all directions Darya Frolova, Denis Simakov The Weizmann Institute of Science http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt
33
The Moravec Detector: Compute the SSD (sum of squared differences) between the window and its 8 neighbor windows (shifted by 1 pixel). Corner strength: minimum difference with the neighbors. Problem: not rotation invariant, because the diagonal shifts take a bigger step and only a small number of directions (8) is checked.
34
Contents Harris Corner Detector –Algorithm –Analysis Detectors –Rotation invariant –Scale invariant Descriptors –Rotation invariant –Scale invariant
35
Harris Detector: Mathematics. Window-averaged squared change of intensity induced by shifting the image data by [u, v]:
E(u,v) = \sum_{x,y} w(x,y)\,[I(x+u,\,y+v) - I(x,y)]^2
where I is the intensity, I(x+u, y+v) the shifted intensity, and the window function w(x,y) is either 1 in the window and 0 outside, or a Gaussian.
Darya Frolova, Denis Simakov, The Weizmann Institute of Science http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt
36
Taylor series approximation to shifted image gives quadratic form for error as function of image shifts.
37
Harris Detector: Mathematics. Expanding I(x,y) in a Taylor series, we have, for small shifts [u, v], a quadratic approximation to the error surface between a patch and itself shifted by [u, v]:
E(u,v) \approx \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}
where M is a 2×2 matrix computed from image derivatives:
M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}
M is often called the structure tensor.
Darya Frolova, Denis Simakov, The Weizmann Institute of Science http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt
38
Harris Detector: Mathematics. Intensity change in the shifting window: eigenvalue analysis. λ1, λ2 are the eigenvalues of M. The iso-contour of the squared error, E(u,v) = const, is an ellipse; its axes point along the directions of fastest and slowest change, with lengths proportional to (λ_max)^(-1/2) and (λ_min)^(-1/2).
Darya Frolova, Denis Simakov, The Weizmann Institute of Science http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt
39
Harris Detector: Mathematics. Start with Moravec: window-averaged change of intensity for the shift [u, v]:
E(u,v) = \sum_{x,y} w(x,y)\,[I(x+u,\,y+v) - I(x,y)]^2
where the window function w(x,y) is either 1 in the window and 0 outside, or a Gaussian.
Note: x, y is a little confusing because there is no window-center notion.
40
Go through Taylor series expansion on board Need to work on how to get to the matrix version
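A worked version of that expansion (the standard derivation, filled in here since the slide only gestures at it):

```latex
% Error for a window shifted by (u, v):
E(u,v) = \sum_{x,y} w(x,y)\,\bigl[I(x+u,\,y+v) - I(x,y)\bigr]^2
% First-order Taylor expansion of the shifted image:
I(x+u,\,y+v) \;\approx\; I(x,y) + I_x(x,y)\,u + I_y(x,y)\,v
% Substituting gives a quadratic (bilinear) form in the shift:
E(u,v) \;\approx\; \sum_{x,y} w(x,y)\,\bigl[I_x u + I_y v\bigr]^2
       \;=\; \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix},
\qquad
M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}
```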
41
Harris Detector: Mathematics. Expanding E(u,v) in a second-order Taylor series, we have, for small shifts [u, v], a bilinear approximation:
E(u,v) \approx \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}
where M is a 2×2 matrix computed from the image's first derivatives:
M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}
M is often called the structure tensor.
42
Selecting Good Features: λ1 and λ2 are large. (Figure: image patch and its error surface; vertical scale ≈ 12×10^5.) Darya Frolova, Denis Simakov, The Weizmann Institute of Science http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt
43
Selecting Good Features: large λ1, small λ2. (Figure: image patch and its error surface; vertical scale ≈ 9×10^5.) Darya Frolova, Denis Simakov, The Weizmann Institute of Science http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt
44
Selecting Good Features: small λ1, small λ2 (contrast auto-scaled). (Figure: image patch and its error surface; vertical scale ≈ 200, exaggerated relative to the previous plots.) Darya Frolova, Denis Simakov, The Weizmann Institute of Science http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt
45
Harris Detector: Mathematics. Classification of image points using the eigenvalues of M (λ1–λ2 plane):
"Corner": λ1 and λ2 are large, λ1 ~ λ2; E increases in all directions.
"Flat" region: λ1 and λ2 are small; E is almost constant in all directions.
"Edge": λ1 >> λ2 or λ2 >> λ1.
http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov, The Weizmann Institute of Science
46
Harris Detector: Mathematics. Measure of corner response:
R = det M - k (trace M)^2 = λ1 λ2 - k (λ1 + λ2)^2
(k is an empirical constant, k = 0.04-0.06.) (Shi-Tomasi variation: use min(λ1, λ2) instead of R.)
http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov, The Weizmann Institute of Science
47
Harris Detector: Mathematics. R depends only on the eigenvalues of M (λ1–λ2 plane):
"Corner": R is large (R > 0).
"Edge": R is negative with large magnitude (R < 0).
"Flat" region: |R| is small.
http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov, The Weizmann Institute of Science
48
Harris Detector The Algorithm: –Find points with large corner response function R (R > threshold) –Take the points of local maxima of R
49
Harris corner detector algorithm: Compute image gradients Ix, Iy for all pixels. For each pixel: compute the structure tensor M by looping over neighbors x, y; then compute R. Find points with large corner response function R (R > threshold). Take the points of locally maximum R as the detected feature points (i.e., pixels where R is bigger than for all 4 or 8 neighbors). http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov, The Weizmann Institute of Science
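A minimal NumPy sketch of this algorithm (illustrative only: the window sizes, k, and the threshold below are assumed values, and non-maximum suppression uses a 3×3 neighborhood as the slide describes):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def harris_corners(img, sigma=1.0, k=0.05, thresh_ratio=0.01):
    """Return (row, col) coordinates of Harris corners in a float grayscale image."""
    # 1. Image gradients Ix, Iy (derivative-of-Gaussian, as in ImpHarris).
    Ix = gaussian_filter(img, sigma, order=(0, 1))
    Iy = gaussian_filter(img, sigma, order=(1, 0))

    # 2. Structure tensor entries, averaged over a Gaussian window w(x, y).
    Sxx = gaussian_filter(Ix * Ix, 2 * sigma)
    Syy = gaussian_filter(Iy * Iy, 2 * sigma)
    Sxy = gaussian_filter(Ix * Iy, 2 * sigma)

    # 3. Corner response R = det(M) - k * trace(M)^2 at every pixel.
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    R = det - k * trace * trace

    # 4. Threshold R and keep only local maxima (3x3 neighborhood).
    is_peak = (R == maximum_filter(R, size=3))
    strong = R > thresh_ratio * R.max()
    return np.argwhere(is_peak & strong)

# Usage (hypothetical image):
# img = plt.imread("building.png").mean(axis=2)
# corners = harris_corners(img)
```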
50
Algorithm: Compute image gradients Ix, Iy for all pixels. For each pixel: compute M by looping over neighbors x, y; then compute R. Find points with large corner response function R (R > threshold). Take the points of local maxima of R. (Note: last two steps are messy to explain.)
51
Harris Detector The Algorithm: –Find points with large corner response function R (R > threshold) –Take the points of local maxima of R
52
Harris Detector: Workflow http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov The Weizmann Institute of Science
53
Harris Detector: Workflow Compute corner response R http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov The Weizmann Institute of Science
54
Harris Detector: Workflow Find points with large corner response: R>threshold http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov The Weizmann Institute of Science
55
Harris Detector: Workflow Take only the points of local maxima of R http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov The Weizmann Institute of Science
56
Harris Detector: Workflow Take only the points of local maxima of R MAKE POINTS MORE VISIBLE
57
Harris Detector: Workflow http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov The Weizmann Institute of Science
58
Harris Detector: Summary. The average intensity change in direction [u, v] can be expressed as a bilinear form:
E(u,v) \approx \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}
Describe a point in terms of the eigenvalues of M, via the measure of corner response:
R = λ1 λ2 - k (λ1 + λ2)^2
A good (corner) point should have a large intensity change in all directions, i.e. R should be large and positive.
http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov, The Weizmann Institute of Science
59
Comments on the Harris detector: We haven't analyzed how consistent in position the local-max-of-R feature is. We'll show some results on that later. If we reverse the image contrast, the local-max-of-R feature gives the exact same feature positions: do we want that? How might you address the problem below? (Figure: Camera A view and Camera B view; a bad feature, indistinguishable locally from a good feature, next to a good feature.)
60
Analysis of Harris corner detector invariance properties: Geometry –rotation –scale. Photometry –intensity change
61
Contents Harris Corner Detector –Algorithm –Analysis Detectors –Rotation invariant –Scale invariant Descriptors –Rotation invariant –Scale invariant
62
Possible changes between two images from the same viewpoint: Geometry –rotation about the optical axis (2D rotation) –perspective distortion (wide angle) –scale (different zoom, or local scale due to perspective distortion) –radial distortion (lens aberration). Photometry (pixel values) –lighting changes, clouds move –exposure not the same –vignetting
63
Models of Image Change Geometry –Rotation –Similarity (rotation + uniform scale) –Affine (scale dependent on direction) valid for: orthographic camera, locally planar object Photometry –Affine intensity change ( I → a I + b)
64
We want to: detect the same interest points regardless of image changes
65
Harris Detector: Some Properties Rotation invariance?
66
Harris Detector: Some Properties Rotation invariance Ellipse rotates but its shape (i.e. eigenvalues) remains the same Corner response R is invariant to image rotation Eigen analysis allows us to work in the canonical frame of the linear form.
67
Rotation Invariant Detection Harris Corner Detector C.Schmid et.al. “Evaluation of Interest Point Detectors”. IJCV 2000 This gives us rotation invariant detection, but we’ll need to do more to ensure a rotation invariant descriptor... ImpHarris: derivatives are computed more precisely by replacing the [-2 -1 0 1 2] mask with derivatives of a Gaussian (sigma = 1).
68
Evaluation plots are from this paper
69
Harris Detector: Some Properties Invariance to image intensity change?
70
Harris Detector: Some Properties. Partial invariance to additive and multiplicative intensity changes. Only derivatives are used => invariance to intensity shift I → I + b. Intensity scaling I → a I is fine, except for the threshold that's used to specify when R is large enough. (Plots: R vs. x (image coordinate), with the threshold marked.) http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov, The Weizmann Institute of Science
71
Harris Detector: Some Properties Invariant to image scale? image zoomed image
72
Harris Detector: Some Properties. Not invariant to image scale! In the zoomed image each small window sees only an edge, so all points will be classified as edges; at the original, coarser scale the same structure is detected as a corner. http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov, The Weizmann Institute of Science
73
Harris Detector: Some Properties Quality of Harris detector for different scale changes Repeatability rate: # correspondences # possible correspondences C.Schmid et.al. “Evaluation of Interest Point Detectors”. IJCV 2000 http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov The Weizmann Institute of Science
74
Harris Detector: Some Properties Quality of Harris detector for different scale changes Repeatability rate: # correspondences # possible correspondences C.Schmid et.al. “Evaluation of Interest Point Detectors”. IJCV 2000 http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov The Weizmann Institute of Science
75
Note: So far we have not talked about how to compute correspondences. We just want to make sure that the same points are detected in both images, so that the later correspondence step has a chance to work.
76
Contents Harris Corner Detector –Description –Analysis Detectors –Rotation invariant –Scale invariant –Affine invariant Descriptors –Rotation invariant –Scale invariant –Affine invariant
77
Contents Harris Corner Detector –Algorithm –Analysis Detectors –Rotation invariant –Scale invariant Descriptors –Rotation invariant –Scale invariant
78
78 Goals for handling scale/rotation Ensure invariance –Same scene points must be detected in all images Prepare terrain for descriptor –Descriptors are based on a local neighborhood –Find canonical scale and angle for neighborhood (independently in all images)
79
Contents Harris Corner Detector –Algorithm –Analysis Detectors –Rotation invariant –Scale invariant Descriptors –Rotation invariant –Scale invariant
80
Scale Invariant Detection Consider regions (e.g. circles) of different sizes around a point Regions of corresponding sizes will look the same in both images The features look the same to these two operators. http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov The Weizmann Institute of Science
81
Scale Invariant Detection The problem: how do we choose corresponding circles independently in each image? Do objects in the image have a characteristic scale that we can identify? http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov The Weizmann Institute of Science
82
Detection over scale Requires a method to repeatably select points in location and scale: The only reasonable scale-space kernel is a Gaussian (Koenderink, 1984; Lindeberg, 1994) An efficient choice is to detect peaks in the difference of Gaussian pyramid (Burt & Adelson, 1983; Crowley & Parker, 1984 – but examining more scales) Difference-of-Gaussian with constant ratio of scales is a close approximation to Lindeberg’s scale-normalized Laplacian (can be shown from the heat diffusion equation)
83
Scale Invariant Detection. Functions for determining scale. Kernels:
Laplacian (2nd derivative of Gaussian): L = σ^2 |G_xx(x, y, σ) + G_yy(x, y, σ)|
Difference of Gaussians: DoG = G(x, y, kσ) - G(x, y, σ)
where G(x, y, σ) = (1 / 2πσ^2) e^{-(x^2 + y^2)/(2σ^2)} is the Gaussian.
Note: both kernels are invariant to scale and rotation.
http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov, The Weizmann Institute of Science
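A small 1-D sketch of how a DoG response can pick a characteristic scale (illustrative Python; the box bump and the scale sweep are made-up values, echoing the bump example a few slides later):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def dog_response(signal, sigma, k=1.6):
    """Difference-of-Gaussian response of a 1-D signal at scale sigma."""
    return gaussian_filter1d(signal, k * sigma) - gaussian_filter1d(signal, sigma)

# A 1-D box "bump" of width 9 centred in a flat signal.
x = np.zeros(200)
x[96:105] = 1.0

# Sweep scales and record the DoG magnitude at the bump centre.
sigmas = np.linspace(0.5, 10.0, 60)
responses = [abs(dog_response(x, s)[100]) for s in sigmas]

# The characteristic scale is where the response across scales peaks.
print("characteristic sigma ~", sigmas[int(np.argmax(responses))])
```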
84
Scale Invariant Detectors Harris-Laplacian 1 Find local maximum of: –Harris corner detector in space (image coordinates) –Laplacian in scale 1 K.Mikolajczyk, C.Schmid. “Indexing Based on Scale Invariant Interest Points”. ICCV 2001 scale x y ← Harris → ← Laplacian → http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov The Weizmann Institute of Science
85
85 Laplacian of Gaussian for selection of characteristic scale http://www.robots.ox.ac.uk/~vgg/research/affine/det_eval_files/mikolajczyk_ijcv2004.pdf
86
Scale Invariant Detectors Harris-Laplacian 1 Find local maximum of: –Harris corner detector in space (image coordinates) –Laplacian in scale 1 K.Mikolajczyk, C.Schmid. “Indexing Based on Scale Invariant Interest Points”. ICCV 2001 2 D.Lowe. “Distinctive Image Features from Scale-Invariant Keypoints”. Accepted to IJCV 2004 scale x y ← Harris → ← Laplacian → http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov The Weizmann Institute of Science SIFT (Lowe) 2 Find local maximum of: –Difference of Gaussians in space and scale scale x y ← DoG →
87
Scale-space example: three 1-D bumps of different widths, blurred with Gaussians of increasing width and displayed as an image (scale space).
88
Scale Invariant Detection. Solution: design a function on the region (circle) which is "scale invariant": the same response for corresponding, scaled regions, even if they are at different scales. Example: average intensity. For corresponding regions (even of different sizes) it will be the same. For a point in one image, we can consider this function as a function of region size (circle radius). (Figure: f vs. region size for Image 1 and Image 2, scale = 1/2.) http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov, The Weizmann Institute of Science
89
Scale Invariant Detection. Solution: design a function on the region (circle) which is "scale invariant" (the same for corresponding regions, even if they are at different scales). Example: average intensity. For corresponding regions (even of different sizes) it will be the same. For a point in one image, we can consider this function as a function of region size (circle radius). (Figure: f vs. region size for Image 1 and Image 2, scale = 1/2.) NEEDS WORK
90
Scale Invariant Detection. Common approach: take a local maximum (with respect to radius) of this function. Observation: the region size at which the maximum is achieved should be invariant to image scale and only sensitive to the underlying image data. (Figure: f vs. region size for Image 1 and Image 2, scale = 1/2, with maxima at sizes s1 and s2.) Important: this scale-invariant region size is found in each image independently! http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov, The Weizmann Institute of Science
91
Scale Invariant Detection. A "good" function for scale detection has one stable, sharp peak. (Plots of f vs. region size: two "bad" examples, one "good".) For usual images, a good function is one that responds to contrast (sharp local intensity change). http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov, The Weizmann Institute of Science
92
92
93
Gaussian and difference-of-Gaussian filters
94
Scale space: the bumps, filtered by difference-of-Gaussian filters.
95
The bumps, filtered by difference-of-Gaussian filters (panels a, b, c: σ = 1.7, 3.0, 5.2); cross-sections along the red lines are plotted on the next slide.
96
Scales of peak responses are proportional to bump width (the characteristic scale of each bump): [1.7, 3, 5.2] ./ [5, 9, 15] = [0.3400, 0.3333, 0.3467]. (Panels a, b, c: bump widths 5, 9, 15; the difference-of-Gaussian filters giving the peak response have σ = 1.7, 3, 5.2.)
97
Scales of peak responses are proportional to bump width (the characteristic scale of each bump): [1.7, 3, 5.2] ./ [5, 9, 15] = [0.3400, 0.3333, 0.3467]. Note that each maximum-response filter has the same relationship to the bump it favors (the zero crossings of the filter are about at the bump edges). So the scale-space analysis correctly picks out the "characteristic scale" for each of the bumps. More generally, this happens for the features of the images we analyze.
98
Scale Invariant Detectors K.Mikolajczyk, C.Schmid. “Indexing Based on Scale Invariant Interest Points”. ICCV 2001 Experimental evaluation of detectors w.r.t. scale change Repeatability rate: # correspondences # possible correspondences http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov The Weizmann Institute of Science
99
Scale Invariant Detection. Functions for determining scale. Kernels: Laplacian (2nd derivative of Gaussian): L = σ^2 |G_xx(x, y, σ) + G_yy(x, y, σ)|; Difference of Gaussians: DoG = G(x, y, kσ) - G(x, y, σ), where G(x, y, σ) is the Gaussian. Note: both kernels are invariant to scale and rotation. NEEDS WORK
100
Scale Invariant Detection Compare to human vision: eye’s response Shimon Ullman, Introduction to Computer and Human Vision Course, Fall 2003 http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov The Weizmann Institute of Science
101
Blob detection: Marr 1982; Voorhees and Poggio 1987; Blostein and Ahuja 1989; … (plot of trace and det responses as functions of scale, from Lindeberg 1998)
102
Scale Invariant Detection: Summary. Given: two images of the same scene with a large scale difference between them. Goal: find the same interest points independently in each image. Solution: search for maxima of suitable functions in scale and in space (over the image). Methods: 1. Harris-Laplacian [Mikolajczyk, Schmid]: maximize the Laplacian over scale, Harris' measure of corner response over the image. 2. SIFT [Lowe]: maximize the Difference of Gaussians over scale and space.
103
Scale space processed one octave at a time http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov The Weizmann Institute of Science
104
Repeatability vs number of scales sampled per octave David G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, 60, 2 (2004), pp. 91-110
105
Some details of keypoint localization over scale and space. Detect maxima and minima of the difference-of-Gaussian in scale space. Fit a quadratic to surrounding values for sub-pixel and sub-scale interpolation (Brown & Lowe, 2002). Taylor expansion around the sample point, with x = (x, y, σ)^T:
D(x) \approx D + \left(\frac{\partial D}{\partial x}\right)^T x + \frac{1}{2} x^T \frac{\partial^2 D}{\partial x^2} x
Offset of the extremum (use finite differences for the derivatives):
\hat{x} = -\left(\frac{\partial^2 D}{\partial x^2}\right)^{-1} \frac{\partial D}{\partial x}
http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov, The Weizmann Institute of Science
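A sketch of the finite-difference version of this refinement (assumed layout: D is a DoG stack indexed as D[row, col, scale]; real SIFT additionally discards points whose offset exceeds 0.5 in any dimension):

```python
import numpy as np

def refine_extremum(D, i, j, s):
    """Sub-pixel/sub-scale offset of a DoG extremum at integer location (i, j, s)."""
    # Gradient dD/dx at the sample point (central differences).
    g = 0.5 * np.array([
        D[i + 1, j, s] - D[i - 1, j, s],
        D[i, j + 1, s] - D[i, j - 1, s],
        D[i, j, s + 1] - D[i, j, s - 1],
    ])
    # Hessian d2D/dx2 (central differences).
    H = np.empty((3, 3))
    H[0, 0] = D[i + 1, j, s] - 2 * D[i, j, s] + D[i - 1, j, s]
    H[1, 1] = D[i, j + 1, s] - 2 * D[i, j, s] + D[i, j - 1, s]
    H[2, 2] = D[i, j, s + 1] - 2 * D[i, j, s] + D[i, j, s - 1]
    H[0, 1] = H[1, 0] = 0.25 * (D[i + 1, j + 1, s] - D[i + 1, j - 1, s]
                                - D[i - 1, j + 1, s] + D[i - 1, j - 1, s])
    H[0, 2] = H[2, 0] = 0.25 * (D[i + 1, j, s + 1] - D[i + 1, j, s - 1]
                                - D[i - 1, j, s + 1] + D[i - 1, j, s - 1])
    H[1, 2] = H[2, 1] = 0.25 * (D[i, j + 1, s + 1] - D[i, j + 1, s - 1]
                                - D[i, j - 1, s + 1] + D[i, j - 1, s - 1])
    # Offset of the extremum: x_hat = -H^{-1} g.
    return -np.linalg.solve(H, g)
```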
106
Scale Invariant Detectors K.Mikolajczyk, C.Schmid. “Indexing Based on Scale Invariant Interest Points”. ICCV 2001 Experimental evaluation of detectors w.r.t. scale change Repeatability rate: # correspondences # possible correspondences
107
Scale and Rotation Invariant Detection: Summary Given: two images of the same scene with a large scale difference and/or rotation between them Goal: find the same interest points independently in each image Solution: search for maxima of suitable functions in scale and in space (over the image). Also, find characteristic orientation. Methods: 1.Harris-Laplacian [Mikolajczyk, Schmid]: maximize Laplacian over scale, Harris’ measure of corner response over the image 2.SIFT [Lowe]: maximize Difference of Gaussians over scale and space
108
Example of keypoint detection http://www.robots.ox.ac.uk/~vgg/research/affine/det_eval_files/mikolajczyk_ijcv2004.pdf
109
Recap We have scale & rotation invariant distinctive feature We have a characteristic scale We have a characteristic orientation Now we need to find a descriptor of this point –For matching with other images
110
Recall: Matching with Features. Problem 1: detect the same point independently in both images; otherwise there is no chance to match. We need a repeatable detector: DONE. Good http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov, The Weizmann Institute of Science
111
Recall: Matching with Features Problem 2: –For each point correctly recognize the corresponding one ? We need a reliable and distinctive descriptor http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt Darya Frolova, Denis Simakov The Weizmann Institute of Science
112
Contents Harris Corner Detector –Algorithm –Analysis Detectors –Rotation invariant –Scale invariant Descriptors –Rotation invariant –Scale invariant
113
Point Descriptors We know how to detect points Next question: How to match them? ? Point descriptor should be: 1.Invariant 2.Distinctive
114
Contents Harris Corner Detector –Overview –Analysis Detectors –Rotation invariant –Scale invariant –Affine invariant Descriptors –Rotation invariant –Scale invariant –Affine invariant
115
Contents Harris Corner Detector –Overview –Analysis Detectors –Rotation invariant –Scale invariant –Affine invariant Descriptors –Rotation invariant –Scale invariant –Affine invariant
116
Descriptors Invariant to Rotation (1) Harris corner response measure: depends only on the eigenvalues of the matrix M C.Harris, M.Stephens. “A Combined Corner and Edge Detector”. 1988
117
Descriptors Invariant to Rotation (2). Image moments in polar coordinates. J.Matas et.al. "Rotational Invariants for Wide-baseline Stereo". Research Report of CMP, 2003. Rotation in polar coordinates is a translation of the angle: θ → θ + θ0. This transformation changes only the phase of the moments, not their magnitude. A rotation-invariant descriptor consists of the magnitudes of the moments; matching is done by comparing the vectors [|m_kl|]_{k,l}.
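A rough sketch of the idea, not the exact formulation of the cited report: complex moments computed in polar coordinates about the patch centre only change phase under rotation, so their magnitudes can be compared directly.

```python
import numpy as np

def moment_magnitudes(patch, k_max=2, l_max=2):
    """Magnitudes |m_kl| of complex moments of a square patch in polar coordinates.

    Under a rotation theta -> theta + theta0, m_kl is multiplied by exp(i*l*theta0),
    so |m_kl| is (approximately, up to resampling) unchanged."""
    n = patch.shape[0]
    ys, xs = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    r = np.hypot(xs, ys)
    theta = np.arctan2(ys, xs)
    mask = r <= (n - 1) / 2.0          # restrict to the inscribed disc
    feats = []
    for k in range(k_max + 1):
        for l in range(l_max + 1):
            m_kl = np.sum(patch[mask] * (r[mask] ** k) * np.exp(1j * l * theta[mask]))
            feats.append(abs(m_kl))
    return np.array(feats)

# Matching would then compare these vectors, e.g. by Euclidean distance.
```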
118
Descriptors Invariant to Rotation (3) Find local orientation Dominant direction of gradient Compute image derivatives relative to this orientation 1 K.Mikolajczyk, C.Schmid. “Indexing Based on Scale Invariant Interest Points”. ICCV 2001 2 D.Lowe. “Distinctive Image Features from Scale-Invariant Keypoints”. Accepted to IJCV 2004
119
Contents Harris Corner Detector –Overview –Analysis Detectors –Rotation invariant –Scale invariant –Affine invariant Descriptors –Rotation invariant –Scale invariant –Affine invariant
120
Descriptors Invariant to Scale. Use the scale determined by the detector to compute the descriptor in a normalized frame. For example: moments integrated over an adapted window; derivatives adapted to scale: s·I_x
121
Contents Harris Corner Detector –Overview –Analysis Detectors –Rotation invariant –Scale invariant –Affine invariant Descriptors –Rotation invariant –Scale invariant –Affine invariant
122
Feature detector and descriptor summary Stable (repeatable) feature points can be detected regardless of image changes –Scale: search for correct scale as maximum of appropriate function –Affine: approximate regions with ellipses (this operation is affine invariant) Invariant and distinctive descriptors can be computed –Invariant moments –Normalizing with respect to scale and affine transformation
123
Outline: Feature point detection –Harris corner detector –finding a characteristic scale. Local image description –SIFT features
124
Recall: Matching with Features. Problem 1: detect the same point independently in both images; otherwise there is no chance to match. We need a repeatable detector. Good
125
Recall: Matching with Features Problem 2: –For each point correctly recognize the corresponding one ? We need a reliable and distinctive descriptor
126
Assume we've found a Rotation & Scale Invariant Point Harris-Laplacian 1 Find local maximum of: –Harris corner detector in space (image coordinates) –Laplacian in scale 1 K.Mikolajczyk, C.Schmid. “Indexing Based on Scale Invariant Interest Points”. ICCV 2001 2 D.Lowe. “Distinctive Image Features from Scale-Invariant Keypoints”. Accepted to IJCV 2004 scale x y ← Harris → ← Laplacian → SIFT (Lowe) 2 Find local maximum of: –Difference of Gaussians in space and scale scale x y ← DoG →
127
Freeman et al., 1998. http://people.csail.mit.edu/billf/papers/cga1.pdf
128
Freeman et al., 1998. http://people.csail.mit.edu/billf/papers/cga1.pdf
129
CVPR 2003 Tutorial Recognition and Matching Based on Local Invariant Features David Lowe Computer Science Department University of British Columbia http://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf
130
Advantages of invariant local features Locality: features are local, so robust to occlusion and clutter (no prior segmentation) Distinctiveness: individual features can be matched to a large database of objects Quantity: many features can be generated for even small objects Efficiency: close to real-time performance Extensibility: can easily be extended to wide range of differing feature types, with each adding robustness
131
SIFT vector formation. Computed on a rotated and scaled version of the window, according to the computed orientation and scale (resample the window). Based on gradients weighted by a Gaussian whose width is half the window (for smooth falloff).
132
SIFT vector formation 4x4 array of gradient orientation histograms –not really histogram, weighted by magnitude 8 orientations x 4x4 array = 128 dimensions Motivation: some sensitivity to spatial layout, but not too much. showing only 2x2 here but is 4x4
133
SIFT vector formation Thresholded image gradients are sampled over 16x16 array of locations in scale space Create array of orientation histograms 8 orientations x 4x4 histogram array = 128 dimensions showing only 2x2 here but is 4x4
134
Ensure smoothness Gaussian weight Trilinear interpolation –a given gradient contributes to 8 bins: 4 in space times 2 in orientation
135
Reduce effect of illumination: the 128-dim vector is normalized to unit length. Threshold the entries to avoid excessive influence of large gradients: after normalization, clamp entries greater than 0.2, then renormalize.
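Putting the last few slides together, a compact sketch of a SIFT-like descriptor (illustrative: it skips the Gaussian weighting and trilinear interpolation, and assumes the 16×16 patch has already been rotated and scaled to the keypoint's canonical frame):

```python
import numpy as np

def sift_like_descriptor(patch):
    """128-D SIFT-style descriptor from a 16x16 patch (simplified sketch)."""
    assert patch.shape == (16, 16)
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)

    desc = np.zeros((4, 4, 8))
    for i in range(16):
        for j in range(16):
            o = int(ang[i, j] / (2 * np.pi) * 8) % 8       # 8 orientation bins
            desc[i // 4, j // 4, o] += mag[i, j]           # 4x4 spatial cells
    v = desc.ravel()                                       # 4*4*8 = 128 dimensions

    # Illumination handling: normalize, clamp entries above 0.2, renormalize.
    v /= (np.linalg.norm(v) + 1e-12)
    v = np.minimum(v, 0.2)
    v /= (np.linalg.norm(v) + 1e-12)
    return v
```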
136
Tuning and evaluating the SIFT descriptors: Database images were subjected to rotation, scaling, affine stretch, brightness and contrast changes, and added noise. Feature point detectors and descriptors were compared before and after the distortions, and evaluated for: sensitivity to the number of histogram orientations and subregions; stability to noise; stability to affine change; feature distinctiveness.
137
Sensitivity to number of histogram orientations and subregions, n.
138
Feature stability to noise Match features after random change in image scale & orientation, with differing levels of image noise Find nearest neighbor in database of 30,000 features
139
Feature stability to noise Match features after random change in image scale & orientation, with differing levels of image noise Find nearest neighbor in database of 30,000 features
140
Feature stability to affine change Match features after random change in image scale & orientation, with 2% image noise, and affine distortion Find nearest neighbor in database of 30,000 features
141
Affine Invariant Descriptors Find affine normalized frame J.Matas et.al. “Rotational Invariants for Wide-baseline Stereo”. Research Report of CMP, 2003 A A1A1 A2A2 rotation Compute rotational invariant descriptor in this normalized frame
142
Distinctiveness of features Vary size of database of features, with 30 degree affine change, 2% image noise Measure % correct for single nearest neighbor match
143
Application of invariant local features to object (instance) recognition. Image content is transformed into local feature coordinates that are invariant to translation, rotation, scale, and other imaging parameters SIFT Features
144
Ratio of distances reliable for matching
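A sketch of that ratio-of-distances test (the 0.8 threshold follows Lowe's paper; descriptor arrays are assumed to be NumPy matrices with one descriptor per row):

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.8):
    """Match each descriptor in desc1 to desc2, keeping a match only when the
    nearest neighbour is clearly closer than the second nearest (Lowe's ratio test).
    desc1, desc2: arrays of shape (N, 128) and (M, 128)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]          # two closest candidates
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, j1))
    return matches
```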
147
SIFT features impact. A good SIFT features tutorial: http://www.cs.toronto.edu/~jepson/csc2503/tutSIFT04.pdf by Estrada, Jepson, and Fleet. The original SIFT paper: http://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf SIFT paper citation: D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision 60(2), 91–110, 2004. Cited by 6537.
148
Feature stability to noise Match features after random change in image scale & orientation, with differing levels of image noise Find nearest neighbor in database of 30,000 features
149
Feature stability to affine change Match features after random change in image scale & orientation, with 2% image noise, and affine distortion Find nearest neighbor in database of 30,000 features
150
Distinctiveness of features Vary size of database of features, with 30 degree affine change, 2% image noise Measure % correct for single nearest neighbor match
153
Recap Stable (repeatable) feature points can be detected regardless of image changes –Scale: search for correct scale as maximum of appropriate function Invariant and distinctive descriptors can be computed
154
Now we have Well-localized feature points Distinctive descriptor Now we need to –match pairs of feature points in different images –Robustly compute homographies (in the presence of errors/outliers)
155
Note: I think I can delete all the slides below.
156
CVPR 2003 Tutorial Recognition and Matching Based on Local Invariant Features David Lowe Computer Science Department University of British Columbia http://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf
157
Assume we've found a Rotation & Scale Invariant Point Harris-Laplacian 1 Find local maximum of: –Harris corner detector in space (image coordinates) –Laplacian in scale 1 K.Mikolajczyk, C.Schmid. “Indexing Based on Scale Invariant Interest Points”. ICCV 2001 2 D.Lowe. “Distinctive Image Features from Scale-Invariant Keypoints”. Accepted to IJCV 2004 scale x y ← Harris → ← Laplacian → SIFT (Lowe) 2 Find local maximum of: –Difference of Gaussians in space and scale scale x y ← DoG →
158
Invariant Local Features Image content is transformed into local feature coordinates that are invariant to translation, rotation, scale, and other imaging parameters SIFT Features
159
Freeman et al., 1998. http://people.csail.mit.edu/billf/papers/cga1.pdf
160
Advantages of invariant local features Locality: features are local, so robust to occlusion and clutter (no prior segmentation) Distinctiveness: individual features can be matched to a large database of objects Quantity: many features can be generated for even small objects Efficiency: close to real-time performance Extensibility: can easily be extended to wide range of differing feature types, with each adding robustness
161
SIFT vector formation Computed on rotated and scaled version of window according to computed orientation & scale –resample a 16x16 version of the window Based on gradients weighted by a Gaussian of variance half the window (for smooth falloff)
162
SIFT vector formation 4x4 array of gradient orientation histograms –not really histogram, weighted by magnitude 8 orientations x 4x4 array = 128 dimensions Motivation: some sensitivity to spatial layout, but not too much. showing only 2x2 here but is 4x4
163
SIFT vector formation Thresholded image gradients are sampled over 16x16 array of locations in scale space Create array of orientation histograms 8 orientations x 4x4 histogram array = 128 dimensions showing only 2x2 here but is 4x4
164
Ensure smoothness Gaussian weight Trilinear interpolation –a given gradient contributes to 8 bins: 4 in space times 2 in orientation
165
Reduce effect of illumination 128-dim vector normalized to 1 Threshold gradient magnitudes to avoid excessive influence of high gradients –after normalization, clamp gradients >0.2 –renormalize
166
Select canonical orientation. Create a histogram of local gradient directions computed at the selected scale. Assign the canonical orientation at the peak of the smoothed histogram. Each key specifies stable 2D coordinates (x, y, scale, orientation). An alternative option: use the maximum-eigenvector direction from the Harris matrix.
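A sketch of the orientation assignment (illustrative: a 36-bin histogram and a simple box smoothing; real SIFT also interpolates the peak and keeps secondary peaks above 80% of the maximum):

```python
import numpy as np

def canonical_orientation(patch, n_bins=36):
    """Dominant gradient orientation (radians) of a patch around a keypoint."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)

    # Magnitude-weighted histogram of gradient directions.
    hist, edges = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
    hist = np.convolve(hist, np.ones(3) / 3.0, mode="same")   # simple smoothing
    peak = np.argmax(hist)
    return 0.5 * (edges[peak] + edges[peak + 1])              # centre of peak bin
```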
167
Affine Invariant Descriptors Find affine normalized frame J.Matas et.al. “Rotational Invariants for Wide-baseline Stereo”. Research Report of CMP, 2003 A A1A1 A2A2 rotation Compute rotational invariant descriptor in this normalized frame
168
Sensitivity to number of histogram orientations
169
Feature stability to noise Match features after random change in image scale & orientation, with differing levels of image noise Find nearest neighbor in database of 30,000 features
170
Feature stability to noise Match features after random change in image scale & orientation, with differing levels of image noise Find nearest neighbor in database of 30,000 features
171
Feature stability to affine change Match features after random change in image scale & orientation, with 2% image noise, and affine distortion Find nearest neighbor in database of 30,000 features
172
Distinctiveness of features Vary size of database of features, with 30 degree affine change, 2% image noise Measure % correct for single nearest neighbor match
173
SIFT – Scale Invariant Feature Transform (1). Empirically found (2) to show very good performance, invariant to image rotation, scale, intensity change, and to moderate affine transformations. 1 D.Lowe. "Distinctive Image Features from Scale-Invariant Keypoints". IJCV 2004. 2 K.Mikolajczyk, C.Schmid. "A Performance Evaluation of Local Descriptors". CVPR 2003. (Example shown: scale = 2.5, rotation = 45°.)
174
Ratio of distances reliable for matching
177
A good SIFT features tutorial http://www.cs.toronto.edu/~jepson/csc2503/tutSIFT04.pdf By Estrada, Jepson, and Fleet. The original paper: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.2.8899
178
Feature stability to noise Match features after random change in image scale & orientation, with differing levels of image noise Find nearest neighbor in database of 30,000 features
179
Feature stability to affine change Match features after random change in image scale & orientation, with 2% image noise, and affine distortion Find nearest neighbor in database of 30,000 features
180
Distinctiveness of features Vary size of database of features, with 30 degree affine change, 2% image noise Measure % correct for single nearest neighbor match
183
Recap Stable (repeatable) feature points can be detected regardless of image changes –Scale: search for correct scale as maximum of appropriate function Invariant and distinctive descriptors can be computed
184
Now we have Well-localized feature points Distinctive descriptor Next lecture (Tuesday) we need to –match pairs of feature points in different images –Robustly compute homographies (in the presence of errors/outliers)
185
Feature matching. Exhaustive search: for each feature in one image, look at all the other features in the other image(s); usually not so bad. Hashing: compute a short descriptor from each feature vector, or hash longer descriptors (randomly). Nearest neighbor techniques: k-d trees and their variants (Best Bin First).
186
Nearest neighbor techniques k-D tree and Best Bin First (BBF) Indexing Without Invariants in 3D Object Recognition, Beis and Lowe, PAMI’99
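A sketch of nearest-neighbour matching with a k-d tree (SciPy's cKDTree is used here as a stand-in; Lowe's Best-Bin-First variant additionally caps the number of leaves examined):

```python
import numpy as np
from scipy.spatial import cKDTree

def kdtree_matches(desc1, desc2, ratio=0.8):
    """Match descriptors with a k-d tree plus the distance-ratio test.
    desc1, desc2: (N, 128) and (M, 128) arrays of SIFT-like descriptors."""
    tree = cKDTree(desc2)
    # Query the two nearest neighbours of every descriptor in desc1 at once.
    dists, idx = tree.query(desc1, k=2)
    keep = dists[:, 0] < ratio * dists[:, 1]
    return [(i, idx[i, 0]) for i in np.nonzero(keep)[0]]
```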