1
Feature matching and tracking
Class 5
Read Section 4.1 of the course notes: http://www.cs.unc.edu/~marc/tutorial/node49.html
Read Shi and Tomasi's paper on good features to track: http://www.unc.edu/courses/2004fall/comp/290/089/papers/shi-tomasi-good-features-cvpr1994.pdf
Read Lowe's paper on SIFT features: http://www.unc.edu/courses/2004fall/comp/290/089/papers/Lowe_ijcv03.pdf
2
Feature matching vs. tracking
Matching: extract features independently in each image, then match them by comparing descriptors.
Tracking: extract features in the first image, then try to find the same features again in the next view.
What is a good feature?
Image-to-image correspondences are key to passive triangulation-based 3D reconstruction.
3
Comparing image regions I(x,y) and I′(x,y): compare intensities pixel by pixel.
Dissimilarity measure, the sum of squared differences (lower is more similar):
SSD = Σ_{x,y} ( I(x,y) − I′(x,y) )²
4
Comparing image regions I(x,y) and I′(x,y): compare intensities pixel by pixel.
Similarity measure, the zero-mean normalized cross-correlation (in [−1, 1], higher is more similar):
ZNCC = Σ (I − Ī)(I′ − Ī′) / sqrt( Σ (I − Ī)² · Σ (I′ − Ī′)² )
where Ī and Ī′ are the mean intensities of the two regions.
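Both measures can be sketched in a few lines of numpy (the function names are mine; the patches are assumed to be same-sized grayscale arrays):

```python
import numpy as np

def ssd(patch_a, patch_b):
    """Sum of squared differences: a dissimilarity measure, lower means more similar."""
    d = patch_a.astype(float) - patch_b.astype(float)
    return np.sum(d * d)

def zncc(patch_a, patch_b):
    """Zero-mean normalized cross-correlation: in [-1, 1], higher means more similar."""
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
    return np.sum(a * b) / denom
```

SSD is cheap but sensitive to gain and bias changes; ZNCC discards the mean and the scale of the intensities, which is why it holds up better across viewpoint and illumination changes.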
5
Feature points
Required properties:
Well-defined (i.e. all neighboring points should look different)
Stable across views (i.e. the same 3D point should be extracted as a feature for neighboring viewpoints)
6
Feature point extraction
Find points that differ as much as possible from all neighboring points: homogeneous regions and edges fail this test, corners pass it.
7
Feature point extraction
Approximate the SSD for a small displacement Δ:
image difference per pixel: I(x + Δ) − I(x) ≈ ∇I(x)ᵀ Δ
squared difference per pixel: (∇I(x)ᵀ Δ)²
SSD over a window W: SSD(Δ) ≈ Δᵀ M Δ, with M = Σ_{x∈W} ∇I(x) ∇I(x)ᵀ
8
Feature point extraction
Find points for which min_{‖Δ‖=1} Δᵀ M Δ is maximal, i.e. maximize the smallest eigenvalue of M.
Homogeneous region: both eigenvalues small. Edge: one eigenvalue large, one small. Corner: both eigenvalues large.
9
Harris corner detector
Use a small local window and maximize the "cornerness" det(M) − k (trace M)², with k ≈ 0.04.
Only use local maxima; obtain subpixel accuracy through second-order surface fitting.
Select the strongest features over the whole image and over each tile (e.g. 1000/image, 2/tile).
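A minimal numpy sketch of the detector (the helper names are mine; a plain 3x3 box window stands in for Gaussian weighting, and the subpixel fitting and per-tile selection mentioned above are omitted):

```python
import numpy as np

def box_sum3(a):
    """Sum each pixel's 3x3 neighborhood (zero-padded at the borders)."""
    p = np.pad(a, 1)
    return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
               for i in range(3) for j in range(3))

def harris_response(img, k=0.04):
    """Harris cornerness R = det(M) - k*(trace M)^2 per pixel, where M
    accumulates outer products of image gradients over a local window."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)          # np.gradient returns d/drow, d/dcol
    Sxx = box_sum3(Ix * Ix)            # entries of the structure tensor M
    Syy = box_sum3(Iy * Iy)
    Sxy = box_sum3(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace
```

On an L-shaped step image, R is zero in flat regions, negative along the straight edges, and positive at the corner, which is exactly the eigenvalue behavior from the previous slide.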
10
Simple matching
For each corner in image 1, find the corner in image 2 that is most similar (using SSD or NCC), and vice versa.
Only compare geometrically compatible points.
Keep mutual best matches.
What transformations does this work for?
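The mutual-best-match rule can be sketched as follows, using SSD between descriptors (function name mine; descriptors are assumed to be n x d arrays, one row per feature):

```python
import numpy as np

def mutual_best_matches(desc1, desc2):
    """Keep the pair (i, j) only if j is i's best match in image 2 AND
    i is j's best match in image 1. Lower SSD means more similar."""
    # Pairwise SSD via the expansion |a - b|^2 = |a|^2 + |b|^2 - 2 a.b
    d = ((desc1 ** 2).sum(1)[:, None] + (desc2 ** 2).sum(1)[None, :]
         - 2 * desc1 @ desc2.T)
    best12 = d.argmin(axis=1)   # best feature in image 2 for each feature of image 1
    best21 = d.argmin(axis=0)   # best feature in image 1 for each feature of image 2
    return [(i, int(j)) for i, j in enumerate(best12) if best21[j] == i]
```

Geometric compatibility checks (e.g. restricting the search to a neighborhood of the corner's position) would be applied before the argmin in a real matcher.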
11
Feature matching: example
Pairwise NCC scores between five corners extracted in image 1 (rows) and image 2 (columns):

        1      2      3      4      5
 1   0.96  -0.40  -0.16  -0.39   0.19
 2  -0.05   0.75  -0.47   0.51   0.72
 3  -0.18  -0.39   0.73   0.15  -0.75
 4  -0.27   0.49   0.16   0.79   0.21
 5   0.08   0.50  -0.45   0.28   0.99

The mutual best matches lie on the diagonal.
What transformations does this work for? What level of transformation do we need?
12
Wide baseline matching
Requirement: cope with larger variations between images.
Geometric transformations: translation, rotation, scaling; foreshortening.
Photometric changes: non-diffuse reflections; illumination.
13
Invariant detectors Scale invariant Affine invariant Rotation invariant
14
Normalization (Or how to use affine invariant detectors for matching)
15
Wide-baseline matching example (Tuytelaars and Van Gool BMVC 2000)
16
Lowe’s SIFT features Recover features with position, orientation and scale (Lowe, ICCV99)
17
Position
Look for strong responses of the DoG (Difference-of-Gaussians) filter.
Only consider local maxima.
18
Scale
Look for strong responses of the DoG (Difference-of-Gaussians) filter over scale space.
Only consider local maxima in both position and scale.
Fit a quadratic around each maximum for sub-pixel (and sub-scale) localization.
19
Orientation
Create a histogram of local gradient directions computed at the selected scale.
Assign the canonical orientation at the peak of the smoothed histogram.
Each key specifies stable coordinates (x, y, scale, orientation).
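The orientation step can be sketched as follows (Lowe uses a 36-bin histogram; the Gaussian weighting and histogram smoothing from the paper are omitted here, and the function name is mine):

```python
import numpy as np

def dominant_orientation(patch, n_bins=36):
    """Magnitude-weighted histogram of gradient directions in a patch;
    the peak bin gives the canonical orientation of the keypoint."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                      # direction in (-pi, pi]
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    peak = hist.argmax()
    return (peak + 0.5) * 2 * np.pi / n_bins - np.pi  # bin center, in radians
```

A horizontal intensity ramp yields an orientation near 0 and a vertical ramp near pi/2, up to half a bin width.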
20
Minimum contrast and "cornerness": reject keypoints with low contrast or edge-like responses that are poorly localized.
21
SIFT descriptor
Thresholded image gradients are sampled over a 16x16 array of locations in scale space.
Create an array of orientation histograms: 8 orientations x 4x4 histogram array = 128 dimensions.
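A simplified version of this descriptor can be sketched as below (my own function name; the Gaussian weighting, trilinear interpolation between cells, gradient thresholding and renormalization of the real SIFT descriptor are all omitted):

```python
import numpy as np

def sift_like_descriptor(patch16):
    """128-D descriptor: a 4x4 grid of cells over a 16x16 patch, with an
    8-bin gradient-orientation histogram per cell (4 * 4 * 8 = 128)."""
    gy, gx = np.gradient(patch16.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)
    bins = ((ang + np.pi) / (2 * np.pi) * 8).astype(int) % 8
    desc = np.zeros((4, 4, 8))
    for r in range(4):
        for c in range(4):
            cell = (slice(4 * r, 4 * r + 4), slice(4 * c, 4 * c + 4))
            desc[r, c] = np.bincount(bins[cell].ravel(),
                                     weights=mag[cell].ravel(), minlength=8)
    desc = desc.ravel()
    return desc / (np.linalg.norm(desc) + 1e-12)  # unit norm, for illumination invariance
```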
23
Matas et al.’s maximally stable regions Look for extremal regions http://cmp.felk.cvut.cz/~matas/papers/matas-bmvc02.pdf
24
Mikolajczyk and Schmid LoG features
25
Feature tracking
Identify features and track them over video.
Small differences between consecutive frames, but potentially large differences overall.
Standard approach: KLT (Kanade-Lucas-Tomasi).
26
Good features to track
Use the same window for feature selection as for the tracking itself.
Compute the motion assuming it is small; differentiate.
An affine motion model is also possible, but a bit harder (a 6x6 instead of a 2x2 system).
27
Example
A simple displacement model is sufficient between consecutive frames, but not to compare against a reference template.
28
Example
29
Synthetic example
30
Good features to keep tracking
Perform an affine alignment between the first and last frame.
Stop tracking features with too large a residual error.
31
Live demo OpenCV (try it out!) LKdemo
32
Brightness constancy assumption
Optical flow (small motion), 1D example: I(x, t) = I(x + u, t + 1).
A first-order expansion gives u ≈ −I_t / I_x, with the possibility of iterative refinement.
33
Brightness constancy assumption
Optical flow (small motion), 2D example: I_x u + I_y v + I_t = 0, i.e. 2 unknowns (u, v) but only 1 constraint per pixel.
A point on an isophote I(t) = I can move to any point on the isophote I(t+1) = I: the "aperture" problem.
34
Optical flow
How to deal with the aperture problem?
Assume neighbors have the same displacement (e.g. 3 constraints per pixel if the color-channel gradients are different).
35
Lucas-Kanade
Assume neighbors within a window W have the same displacement and solve in least squares:
min_{u,v} Σ_W (I_x u + I_y v + I_t)², which gives the normal equations
[ Σ I_x²  Σ I_x I_y ; Σ I_x I_y  Σ I_y² ] (u, v)ᵀ = −( Σ I_x I_t, Σ I_y I_t )ᵀ
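These normal equations translate directly into numpy for a single window, assuming one common displacement (u, v) for every pixel in it (function name mine):

```python
import numpy as np

def lucas_kanade_step(img1, img2):
    """One Lucas-Kanade step: solve the 2x2 normal equations
    [sum Ix^2, sum IxIy; sum IxIy, sum Iy^2] (u, v)^T = -(sum IxIt, sum IyIt)^T
    over the whole (small) window."""
    Iy, Ix = np.gradient(img1.astype(float))
    It = img2.astype(float) - img1.astype(float)
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)   # (u, v)
```

Note that A is exactly the matrix M from the feature-extraction slides: the windows where A is well-conditioned (both eigenvalues large) are the good features to track.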
36
Revisiting the small motion assumption
Is this motion small enough? Probably not: it is much larger than one pixel, so 2nd-order terms dominate.
How might we solve this problem?
* From Khurram Hassan-Shafique, CAP5415 Computer Vision, 2003
37
Reduce the resolution! * From Khurram Hassan-Shafique CAP5415 Computer Vision 2003
38
Coarse-to-fine optical flow estimation
Build Gaussian pyramids of image I_{t-1} and image I_t.
A displacement of u = 10 pixels at full resolution becomes u = 5, 2.5 and 1.25 pixels at successive pyramid levels.
(slides from Bradski and Thrun)
39
Coarse-to-fine optical flow estimation
Run iterative L-K at the coarsest level of the pyramids of image I and image J, then warp & upsample and run iterative L-K again at each finer level.
(slides from Bradski and Thrun)
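A toy version of this coarse-to-fine scheme is sketched below, under loud simplifications: 2x2 block averaging stands in for Gaussian smoothing, np.roll provides an integer warp with periodic boundaries, and a single global translation is estimated instead of a dense flow field (all names are mine):

```python
import numpy as np

def downsample(img):
    """One pyramid level: average 2x2 blocks (a crude stand-in for blur + subsample)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    a = img[:h, :w]
    return 0.25 * (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2])

def lk_translation(img1, img2):
    """Single-level Lucas-Kanade for one global translation (u, v)."""
    Iy, Ix = np.gradient(img1)
    It = img2 - img1
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)

def pyramid_lk(img1, img2, levels=3):
    """Coarse-to-fine: estimate at the coarsest level, double the estimate,
    warp image 2 back by it, then refine at the next finer level."""
    pyr1, pyr2 = [img1.astype(float)], [img2.astype(float)]
    for _ in range(levels - 1):
        pyr1.append(downsample(pyr1[-1]))
        pyr2.append(downsample(pyr2[-1]))
    uv = np.zeros(2)
    for l1, l2 in zip(pyr1[::-1], pyr2[::-1]):       # coarsest to finest
        uv *= 2   # d pixels at one level are 2d pixels one level finer
        shift = np.round(uv).astype(int)             # integer warp for simplicity
        warped = np.roll(l2, (-shift[1], -shift[0]), axis=(0, 1))
        uv = shift + lk_translation(l1, warped)      # keep warp + residual consistent
    return uv
```

A 3-pixel shift, too large for the single-level step of the previous slides, is only 0.75 pixels at the coarsest of three levels, where plain L-K succeeds; the refinement at finer levels then recovers the full displacement.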
40
Next class: triangulation and reconstruction
Triangulation requires calibration and correspondences.
[Figure: camera centers C1 and C2, image points m1 and m2, viewing rays L1 and L2 intersecting in the 3D point M]