1 Invariant Local Feature for Object Recognition
Presented by Wyman, 2/05/2006
2 Introduction
- Object recognition: the task of finding 3D objects in 2D images (or even video) and classifying them into one of many known object types
- Closely tied to the success of many computer vision applications: robotics, surveillance, registration, etc.
- A difficult problem; no general and comprehensive solution has yet been found
3 Introduction
- Two main streams of approaches:
  - Model-based object recognition
  - View-based object recognition: 2D representations of the same object viewed at different angles and distances, when available
- Extract features (as representations of the object) and compare them to those in a feature database
4 Matching with Local Features
- One possible solution: matching with invariant local features
- Robust to occlusion and cluttered backgrounds (cf. global features)
- Three phases, each with its own requirements:
  - Detection: repeatedly detected
  - Description: distinctive, invariant
  - Matching: accurate, fast
5 Research Direction
- Study and improve invariant local features: detection, description and matching
- Study and improve object recognition / matching using invariant local features
- Areas to improve: distinctiveness, invariance, efficiency
6 Outline
- State-of-the-art techniques
- Descriptor
- Matching
- Conclusion & future work
7 Outline
- State-of-the-art techniques
- Descriptor
  - Performance evaluation
  - Current extension using color
  - Possible way to improve: color orientation
- Matching
- Conclusion & future work
8 Outline
- State-of-the-art techniques
- Descriptor
  - Performance evaluation
  - Current extension using color
  - Possible way to improve: color orientation
- Matching
  - Cross-bin distance
  - Performance evaluation
  - Possible way to improve: aggregation of content
- Conclusion & future work
9 Performance Evaluation of Descriptors
- We aim to compare the performance of three state-of-the-art local feature descriptors: SIFT, PCA-SIFT and GLOH
- Same experimental setup as "A Performance Evaluation of Local Descriptors" (TPAMI 2005), but with a different evaluation criterion, and hence different results
- In each experiment, each descriptor describes features from:
  - the Harris corner detector
  - the Harris-affine covariant detector, which outputs regions that are invariant to viewpoint change
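A minimal sketch of the Harris corner detection step, assuming an OpenCV implementation (the talk does not specify one); the image file name is hypothetical, and the Harris-affine detector is omitted because it is not part of stock OpenCV.

```python
import cv2
import numpy as np

img = cv2.imread("graf1.png")                               # hypothetical dataset image
gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))

# Harris response map: neighbourhood size 2, Sobel aperture 3, Harris parameter k = 0.04
response = cv2.cornerHarris(gray, 2, 3, 0.04)

# Keep responses above a fraction of the strongest one as corner candidates
threshold = 0.01 * response.max()
corners = np.argwhere(response > threshold)                 # (row, col) coordinates
print(f"{len(corners)} corner candidates")
```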
10 SIFT (Scale Invariant Feature Transform)
Descriptor overview:
- Find the local orientation as the dominant gradient direction (rotation invariance)
- Compute gradient orientation histograms of several small windows, relative to the local orientation (128 values per keypoint; viewpoint invariance)
- Normalize the descriptor to make it invariant to intensity change (illumination invariance)
[Table: which invariances (scale, rotation, illumination, viewpoint) come from the detector vs. the descriptor]
D. Lowe. "Distinctive Image Features from Scale-Invariant Keypoints". IJCV 2004.
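A minimal numpy sketch of the gradient-orientation-histogram idea behind the SIFT descriptor; it is not Lowe's full implementation (no Gaussian weighting, trilinear interpolation or scale-space handling), and the 16x16 patch size is an assumption.

```python
import numpy as np

def sift_like_descriptor(patch, n_cells=4, n_bins=8):
    """patch: square grayscale array (e.g. 16x16) already rotated to the
    dominant orientation. Returns an n_cells*n_cells*n_bins vector (4*4*8 = 128)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx) % (2 * np.pi)

    cell = patch.shape[0] // n_cells
    desc = []
    for i in range(n_cells):
        for j in range(n_cells):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            o = ori[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            hist, _ = np.histogram(o, bins=n_bins, range=(0, 2*np.pi), weights=m)
            desc.append(hist)
    desc = np.concatenate(desc)
    # Normalization gives invariance to (affine) intensity change
    return desc / (np.linalg.norm(desc) + 1e-12)
```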
11 PCA-SIFT
- Rotate the feature region to the dominant gradient direction (same as SIFT)
- Pre-compute an eigenspace for local gradient patches of size 41x41 (2 x 39 x 39 = 3042 elements)
- Keep only 20 components
- A more compact descriptor, but sensitive to viewpoint change
Y. Ke and R. Sukthankar. "PCA-SIFT: A More Distinctive Representation for Local Image Descriptors". CVPR 2004.
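A minimal sketch of the PCA-SIFT projection step, assuming a training set of gradient patches is already available (the eigenspace is learned offline, as on the slide).

```python
import numpy as np

def learn_eigenspace(gradient_patches, n_components=20):
    """gradient_patches: (N, 3042) array, each row the concatenated x- and
    y-gradients of the inner 39x39 of a 41x41 patch (2 * 39 * 39 = 3042)."""
    mean = gradient_patches.mean(axis=0)
    centered = gradient_patches - mean
    # Principal directions via SVD of the centered data
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]                  # (3042,), (20, 3042)

def pca_sift_descriptor(gradient_patch, mean, components):
    """Project one 3042-dim gradient vector onto the 20-dim eigenspace."""
    return components @ (gradient_patch - mean)
```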
12 GLOH (Gradient Location-Orientation Histogram)
- Differs from SIFT in the sampling method: 17 log-polar location bins and 16 orientation bins
- Gives a 17 x 16 = 272 dimensional histogram; PCA is applied and 128 components are kept
- GLOH applies PCA to the orientation histogram, whereas PCA-SIFT applies PCA to the gradient patch
[Figure: the 17 log-polar location bins]
K. Mikolajczyk and C. Schmid. "A Performance Evaluation of Local Descriptors". TPAMI 2005.
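A minimal sketch of GLOH-style log-polar spatial binning: a central bin plus two rings, each ring split into 8 angular sectors (1 + 8 + 8 = 17 bins). The radii below are illustrative assumptions, not the values from the paper.

```python
import numpy as np

def gloh_location_bin(dx, dy, r_inner=6.0, r_mid=11.0, r_outer=15.0):
    """Map an offset (dx, dy) from the keypoint centre to one of 17 location
    bins, or -1 if it falls outside the outer ring."""
    r = np.hypot(dx, dy)
    if r <= r_inner:
        return 0                                    # central bin
    if r > r_outer:
        return -1
    sector = int(((np.arctan2(dy, dx) % (2*np.pi)) / (2*np.pi)) * 8) % 8
    ring = 1 if r <= r_mid else 2
    return 1 + (ring - 1) * 8 + sector              # bins 1..16
```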
13 Performance Evaluation: Data Set
From the Visual Geometry Group:
- Scale + rotation (bark)
- Blurring (bikes)
- Illumination change (leuven)
- Viewpoint change (wall)
- Viewpoint change (graf)
14 Performance Evaluation: Evaluation Criteria
- Match features from the first image to the second based on the nearest neighbor distance ratio: two features are matched if the first nearest neighbor is much closer than the second nearest neighbor
- This differs from the threshold-based criterion used in "A Performance Evaluation of Local Descriptors" (TPAMI 2005)
- Count the number of correct matches and the number of false matches obtained for an image pair
- Plot the results as recall versus 1-precision curves, where recall = #correct matches / total #possible matches and 1-precision = #false matches / (#correct matches + #false matches)
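A minimal sketch of the nearest-neighbor distance-ratio matching and of turning match counts into a (recall, 1-precision) point; the ratio threshold is an assumption, and `is_correct` would come from the ground-truth homography of the image pair.

```python
import numpy as np

def ratio_match(desc1, desc2, ratio=0.8):
    """desc1: (N, D), desc2: (M, D). Return (i, j) pairs passing the ratio test:
    the first nearest neighbor must be much closer than the second."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, j1))
    return matches

def recall_1precision(matches, is_correct, total_possible):
    correct = sum(1 for m in matches if is_correct(m))
    false = len(matches) - correct
    recall = correct / total_possible
    one_minus_precision = false / max(len(matches), 1)
    return recall, one_minus_precision
```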
15 Performance Evaluation
[Recall vs. 1-precision plots for: scale + rotation (bark), illumination change (leuven), viewpoint change (wall), viewpoint change (graf), blurring (bikes)]
16 Performance Evaluation: Result

Descriptor | Distinctiveness | Complexity | Feature size
SIFT       | High            | Medium     | 128
PCA-SIFT   | Medium          | Low        | 20
GLOH       | High            |            | 128

- For accuracy: SIFT
- For speed: PCA-SIFT
- In a large database: ?
17 Start from Scratch
- Comparison of my own descriptor with SIFT: simply designed vs. carefully designed
- Result: SIFT is a carefully designed descriptor; it remains robust as the degree of transformation increases
[Plots: performance under increasing illumination change, increasing affine change and increasing blur]
18 Extension using Color
- Van de Weijer extends local feature descriptors with color information by concatenating a color descriptor K to the shape descriptor S:
  B = (Ŝ, λK̂)
  where B is the combined color-and-shape descriptor, λ is a weighting parameter, and ^ indicates that the vector is normalized.
J. van de Weijer and C. Schmid. "Coloring Local Feature Extraction". ECCV 2006.
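A minimal sketch of this concatenation scheme, assuming the shape descriptor (e.g. SIFT) and a color descriptor are already computed; writing the combination as B = (Ŝ, λK̂) is a reconstruction of the slide's missing formula.

```python
import numpy as np

def combine_shape_color(S, K, lam=0.5):
    """S: shape descriptor, K: color descriptor, lam: weighting parameter."""
    S_hat = S / (np.linalg.norm(S) + 1e-12)      # normalized shape descriptor
    K_hat = K / (np.linalg.norm(K) + 1e-12)      # normalized color descriptor
    return np.concatenate([S_hat, lam * K_hat])  # combined descriptor B
```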
19 Proposed Extension using Color
Problem statement:
- The orientation of a local feature patch is obtained from the monochrome intensity image
- The color feature patches on the right have the same grayscale patches, shown on the left; thus they are assigned the same orientation histogram
- If we can generate a distinct orientation histogram for each of them, we can further improve the distinctiveness of the shape descriptor (SIFT)
20 Feature Matching
- The original distance metric designed for SIFT, PCA-SIFT and GLOH is the bin-to-bin Euclidean distance
- Problems:
  - Sensitive to quantization effects
  - Sensitive to distortion caused by deformation, illumination change and noise
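A small illustrative sketch of why bin-to-bin Euclidean distance is sensitive to quantization: shifting a histogram's mass by a single bin already produces a large distance, even though the underlying distributions are nearly identical.

```python
import numpy as np

def euclidean(h1, h2):
    return np.linalg.norm(h1 - h2)

h = np.zeros(16)
h[5] = 1.0                       # all mass in bin 5
h_shifted = np.roll(h, 1)        # same mass, shifted by one bin

print(euclidean(h, h))           # 0.0
print(euclidean(h, h_shifted))   # ~1.41, as large as for completely disjoint histograms
```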
21 Feature Matching: Diffusion Distance
- Haibin Ling proposed a new distance metric for histogram-based descriptors, called the diffusion distance
- It sums the values over all layers of a distance pyramid with exponentially decreasing size
[Figure: Gaussian blur in 1 direction (1D case) vs. in 3 directions (3D case)]
H. Ling and K. Okada. "Diffusion Distance for Histogram Comparison". CVPR 2006.
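A minimal 1D sketch of the diffusion distance idea: take the difference of the two histograms, then repeatedly Gaussian-smooth and downsample it, summing the L1 norm of every pyramid layer. The kernel width is an assumption, and this simplifies the published method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def diffusion_distance(h1, h2, sigma=1.0):
    d = h1.astype(float) - h2.astype(float)
    dist = np.abs(d).sum()                 # layer 0 of the distance pyramid
    while d.size > 1:
        d = gaussian_filter1d(d, sigma)    # diffuse the difference
        d = d[::2]                         # halve the size (next pyramid layer)
        dist += np.abs(d).sum()
    return dist
```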
22 Feature Matching: Performance Evaluation
- Same setup as the previous experiment
- Recall vs. 1-precision curve for an image pair with affine transformation
23 Feature Matching: Performance Evaluation
- The images in the data set and the evaluation method need to be improved
- Data set: the synthetic deformation data set from Haibin Ling
24 Proposed Extension
- Robust aggregations of the histogram, such as the average orientation direction and the center of mass of the derivatives, can also be used in the comparison
- The diffusion distance can be viewed as a form of comparison using aggregate information: its aggregation of histogram bins is obtained by repeatedly convolving the histogram with Gaussian kernels, and summing the distances between each pair of aggregations of the two histograms gives the diffusion distance
- Proposed aggregations:
  1. Average of gradient magnitude over location bins
  2. Bin reduction in orientation bins
[Figure: histograms A and B aggregated from 128 bins to 64 bins to 32 bins]
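A minimal sketch of one possible reading of the bin-reduction aggregation: compare the two descriptors at several resolutions (128, 64, 32 bins) by merging adjacent bins and accumulating the distances. This is an illustrative interpretation, not the author's final formulation.

```python
import numpy as np

def reduce_bins(h):
    """Merge adjacent bins, halving the histogram length."""
    return h.reshape(-1, 2).sum(axis=1)

def aggregated_distance(h1, h2, levels=(128, 64, 32)):
    dist = 0.0
    for size in levels:
        assert h1.size == size and h2.size == size
        dist += np.abs(h1 - h2).sum()          # distance at this aggregation level
        if size > levels[-1]:
            h1, h2 = reduce_bins(h1), reduce_bins(h2)
    return dist
```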
25 Conclusion and Future Work
Presented:
- Results of a performance evaluation of some state-of-the-art descriptors and feature-matching distance metrics
- Possible ways to improve the description and matching steps
TODO:
- Incorporate color information into local features, to improve the features' distinctiveness
- Design a distance metric for comparing SIFT feature histograms that is invariant to deformation (like the diffusion distance), to improve the features' distinctiveness
26 Q & A Thank you very much!
27 Models of Image Change
Geometry:
- Rotation
- Similarity (rotation + uniform scale)
- Affine (scale dependent on direction); valid for an orthographic camera and a locally planar object
Photometry:
- Affine intensity change (I → aI + b)
28 Image Alignment
- Many applications: 3D reconstruction, motion tracking, indexing and database retrieval, robot navigation, etc.
- Example: image alignment for building a panorama
29 Image Alignment
- Detect features in both images
30 Image Alignment
- Detect features in both images
- Find corresponding pairs
31 Image Alignment
- Detect features in both images
- Find corresponding pairs
- Use these pairs to align the images
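A minimal sketch of this detect / match / align pipeline using OpenCV (an assumption; the talk does not prescribe an implementation); the file names are hypothetical, and the ratio test plus RANSAC homography follow standard practice.

```python
import cv2
import numpy as np

img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)      # hypothetical file names
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# 1) Detect features in both images
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 2) Find corresponding pairs (nearest-neighbor distance ratio test)
matcher = cv2.BFMatcher(cv2.NORM_L2)
pairs = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in pairs if m.distance < 0.8 * n.distance]

# 3) Use these pairs to align the images (homography via RANSAC)
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
aligned = cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))
```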
32 Difficulties
- Problem 1: detect the same point independently in both images; otherwise there is no chance to match!
- We need a repeatable detector
33 Difficulties
- Problem 2: for each point, correctly recognize the corresponding one
- We need a reliable and distinctive descriptor
34 Difficulties
- Problem 3: image transformations may exist between the two images (change in scale, rotation, illumination and viewpoint)
- We need an invariant local feature descriptor