The SIFT (Scale Invariant Feature Transform) Detector and Descriptor

Developed by David Lowe, University of British Columbia. Initial paper: 1999. Newer journal paper: 2004.

Motivation
The Harris operator is not invariant to scale, and its descriptor is not invariant to rotation.¹ For better image matching, Lowe’s goal was to develop an operator that is invariant to scale and rotation. The operator he developed is both a detector and a descriptor and can be used for both image matching and object recognition.
¹ Schmid and Mohr developed a rotation-invariant descriptor for it in 1997.

Idea of SIFT
Image content is transformed into local feature coordinates that are invariant to translation, rotation, scale, and other imaging parameters.

Claimed Advantages of SIFT
Locality: features are local, so they are robust to occlusion and clutter (no prior segmentation needed).
Distinctiveness: individual features can be matched to a large database of objects.
Quantity: many features can be generated for even small objects.
Efficiency: close to real-time performance.
Extensibility: can easily be extended to a wide range of differing feature types, with each adding robustness.

Overall Procedure at a High Level
1. Scale-space extrema detection: search over multiple scales and image locations.
2. Keypoint localization: fit a model to determine location and scale; select keypoints based on a measure of stability.
3. Orientation assignment: compute the best orientation(s) for each keypoint region.
4. Keypoint description: use local image gradients at the selected scale and rotation to describe each keypoint region.
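As a quick end-to-end illustration of these four stages, here is a minimal sketch that runs OpenCV's SIFT implementation (cv2.SIFT_create, available in OpenCV 4.4 and later); the image path is a placeholder.

```python
import cv2

# Load a grayscale image (path is a placeholder).
img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)

# OpenCV's SIFT runs all four stages internally: scale-space extrema
# detection, keypoint localization, orientation assignment, descriptors.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

print(len(keypoints), "keypoints")      # number of detected keypoints
print(descriptors.shape)                # (num_keypoints, 128)
for kp in keypoints[:3]:
    print(kp.pt, kp.size, kp.angle)     # location, scale, orientation
```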

1. Scale-space extrema detection
Goal: identify locations and scales that can be repeatably assigned under different views of the same scene or object.
Method: search for stable features across multiple scales using a continuous function of scale. Prior work has shown that, under a variety of assumptions, the best such function is the Gaussian.
The scale space of an image is a function L(x, y, σ) produced by convolving a Gaussian kernel (at different scales σ) with the input image.

Aside: Image Pyramids
The bottom level is the original image.
The 2nd level is derived from the original image according to some function.
The 3rd level is derived from the 2nd level according to the same function.
And so on.

Aside: Mean Pyramid
The bottom level is the original image.
At the 2nd level, each pixel is the mean of 4 pixels in the original image.
At the 3rd level, each pixel is the mean of 4 pixels in the 2nd level.
And so on.
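A minimal sketch of one mean-pyramid step in NumPy, assuming both image dimensions are even; each output pixel is the average of a 2 × 2 block.

```python
import numpy as np

def mean_pyramid_level(img: np.ndarray) -> np.ndarray:
    """One mean-pyramid step: each output pixel is the mean of a 2x2 block.
    Assumes both image dimensions are even."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Example: build three levels from a random stand-in "image".
level0 = np.random.rand(256, 256)
level1 = mean_pyramid_level(level0)   # 128 x 128
level2 = mean_pyramid_level(level1)   # 64 x 64
```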

Aside: Gaussian Pyramid
At each level, the image is smoothed and reduced in size.
The bottom level is the original image.
At the 2nd level, each pixel is the result of applying a Gaussian mask to the first level and then subsampling to reduce the size.
And so on.
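The same idea with Gaussian pre-filtering can be sketched with OpenCV's pyrDown, which blurs with a small Gaussian kernel and then drops every other row and column; the random array stands in for a real image.

```python
import cv2
import numpy as np

level0 = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in image

# Each pyrDown call Gaussian-smooths the image and halves each dimension.
level1 = cv2.pyrDown(level0)   # 128 x 128
level2 = cv2.pyrDown(level1)   # 64 x 64
```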

Example: Subsampling with Gaussian pre-filtering

Lowe’s Scale-space extrema detection
Scale-space function: L(x, y, σ) = G(x, y, σ) * I(x, y), the convolution of the input image with a Gaussian of width σ.
The Laplacian of Gaussian kernel has been used in other work on scale invariance.
The difference-of-Gaussian kernel is a close approximation to the scale-normalized Laplacian of Gaussian. Using two nearby scales σ and kσ:
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ).
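A minimal sketch of this two-scale difference of Gaussians on a single image, assuming k = √2 and a base σ of 1.6 (illustrative values):

```python
import cv2
import numpy as np

img = (np.random.rand(256, 256) * 255).astype(np.float32)  # stand-in image

sigma, k = 1.6, np.sqrt(2.0)
L1 = cv2.GaussianBlur(img, (0, 0), sigma)        # L(x, y, sigma)
L2 = cv2.GaussianBlur(img, (0, 0), k * sigma)    # L(x, y, k * sigma)
D = L2 - L1                                      # difference-of-Gaussian image
```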

Scale-space extrema detection (why difference of Gaussians)
The Gaussian is the solution of the heat diffusion equation ∂G/∂σ = σ∇²G, so the difference of Gaussians at nearby scales approximates the scale-normalized Laplacian of Gaussian with a constant factor (k − 1).
Hence k does not need to be very close to 1 in practice: even a value as large as √2 has almost no impact on the stability of extrema detection.
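The approximation behind this statement can be spelled out as a short derivation (following Lowe's 2004 paper), replacing the derivative in σ by a finite difference:

```latex
\frac{\partial G}{\partial \sigma} = \sigma \nabla^{2} G
\approx \frac{G(x, y, k\sigma) - G(x, y, \sigma)}{k\sigma - \sigma}
\quad\Longrightarrow\quad
G(x, y, k\sigma) - G(x, y, \sigma) \approx (k - 1)\,\sigma^{2} \nabla^{2} G
```

Since the factor (k − 1) is the same at every scale, it does not affect the locations of the extrema.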

Lowe’s Pyramid Scheme
Scale space is separated into octaves: octave 1 uses scale σ, octave 2 uses scale 2σ, and so on.
In each octave, the initial image is repeatedly convolved with Gaussians to produce a set of scale-space images.
Adjacent Gaussian images are subtracted to produce the difference-of-Gaussian (DoG) images.
After each octave, the Gaussian image is down-sampled by a factor of 2 to produce an image 1/4 the size to start the next octave.

Lowe’s Pyramid Scheme (continued)
The parameter s determines the number of scale samples per octave. Each octave uses s + 3 Gaussian images (including the original), with scales σ_i = 2^(i/s) σ_0 for i = 0, …, s + 2, which yield s + 2 difference images.
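A small sketch of the resulting scale schedule, assuming s = 3 and a base scale σ_0 = 1.6 (illustrative values; the variable names are mine):

```python
s = 3
sigma0 = 1.6

# s + 3 Gaussian images per octave (including the original), s + 2 differences.
for octave in range(3):
    sigmas = [2 ** octave * sigma0 * 2 ** (i / s) for i in range(s + 3)]
    print(f"octave {octave + 1}: " + ", ".join(f"{sig:.2f}" for sig in sigmas))

# octave 1: 1.60, 2.02, 2.54, 3.20, 4.03, 5.08
# octave 2: 3.20, 4.03, 5.08, 6.40, 8.06, 10.16
# ...
```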

Keypoint localization (extrema detection)
Detect the maxima and minima of the difference-of-Gaussian images in scale space.
Each point is compared to its 8 neighbors in the current image and 9 neighbors in each of the scales above and below (26 neighbors in total).
Of the s + 2 difference images, the top and bottom are ignored, so s planes are searched.
For each maximum or minimum found, the output is its location and scale.
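A sketch of the 26-neighbor test on a stack of same-sized DoG images; handling image borders and skipping the top and bottom planes are left to the caller in this simplified version.

```python
import numpy as np

def is_extremum(dogs, s_idx, y, x):
    """Check whether dogs[s_idx][y, x] is a maximum or minimum of its 26
    neighbors: 8 in the same DoG image and 9 in each adjacent scale.
    `dogs` is a list of same-sized 2D arrays; requires 1 <= s_idx <= len(dogs) - 2
    and (y, x) away from the image border. Ties are accepted here."""
    cube = np.stack([d[y - 1:y + 2, x - 1:x + 2]
                     for d in dogs[s_idx - 1:s_idx + 2]])   # 3 x 3 x 3 block
    center = dogs[s_idx][y, x]
    return center == cube.max() or center == cube.min()
```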

Sampling in scale for efficiency
Scale-space extrema detection was evaluated over 32 images that were synthetically transformed and had noise added, measuring the percentage of keypoints detected, the percentage correctly matched, and the average numbers detected and matched (a stability vs. expense trade-off).
How many scales should be used per octave (s)? The more scales evaluated, the more keypoints are found, but only up to a point: for s below 3 the number of stable keypoints increases with s, for s above 3 it decreases, and s = 3 gives the maximum number of stable keypoints.

2. Keypoint localization
Detailed keypoint determination:
Sub-pixel and sub-scale location determination.
Ratio of principal curvatures to reject edges and flat regions (similar in spirit to corner detection).

Keypoint localization
Once a keypoint candidate is found, perform a detailed fit to the nearby data to determine its location, scale, and ratio of principal curvatures.
In the initial work, keypoints were placed at the location and scale of the central sample point. In the newer work, a 3D quadratic function is fit to improve interpolation accuracy, and the Hessian matrix is used to eliminate edge responses.
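The quadratic fit from Lowe's 2004 paper: the DoG function D is expanded around the sample point, the offset x̂ of the extremum is solved for, and the interpolated value D(x̂) is used to reject low-contrast points.

```latex
D(\mathbf{x}) \approx D + \frac{\partial D}{\partial \mathbf{x}}^{\!T}\mathbf{x}
  + \tfrac{1}{2}\,\mathbf{x}^{T}\frac{\partial^{2} D}{\partial \mathbf{x}^{2}}\,\mathbf{x},
\qquad
\hat{\mathbf{x}} = -\left(\frac{\partial^{2} D}{\partial \mathbf{x}^{2}}\right)^{\!-1}
  \frac{\partial D}{\partial \mathbf{x}},
\qquad
D(\hat{\mathbf{x}}) = D + \tfrac{1}{2}\,\frac{\partial D}{\partial \mathbf{x}}^{\!T}\hat{\mathbf{x}}
```

Here x = (x, y, σ)ᵀ is the offset from the sample point.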

Eliminating the Edge Response
Reject flat (low-contrast) points: |D(x̂)| < 0.03 (for image values in the range [0, 1]).
Reject edge points using the ratio of principal curvatures. Let α be the eigenvalue of the 2×2 Hessian H with larger magnitude and β the smaller one, and let r = α/β, so α = rβ. Then
Tr(H)² / Det(H) = (α + β)² / (αβ) = (r + 1)² / r,
which is at a minimum when the two eigenvalues are equal. Keypoints are kept only if r < 10, i.e. Tr(H)²/Det(H) < (10 + 1)²/10; larger ratios indicate edge-like responses and are rejected.
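A sketch of this edge test applied directly to a DoG image, using central finite differences for the 2×2 spatial Hessian; the threshold r = 10 follows the slide.

```python
import numpy as np

def passes_edge_test(dog, y, x, r=10.0):
    """Keep a keypoint only if Tr(H)^2 / Det(H) stays below (r + 1)^2 / r."""
    dxx = dog[y, x + 1] + dog[y, x - 1] - 2.0 * dog[y, x]
    dyy = dog[y + 1, x] + dog[y - 1, x] - 2.0 * dog[y, x]
    dxy = 0.25 * (dog[y + 1, x + 1] - dog[y + 1, x - 1]
                  - dog[y - 1, x + 1] + dog[y - 1, x - 1])
    tr = dxx + dyy
    det = dxx * dyy - dxy * dxy
    if det <= 0:                      # curvatures of opposite sign: reject
        return False
    return tr * tr / det < (r + 1.0) ** 2 / r
```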

3. Orientation assignment
Create a histogram of local gradient directions at the selected scale.
Assign the canonical orientation at the peak of the smoothed histogram.
Each keypoint then specifies stable coordinates (x, y, scale, orientation).
If there are two dominant orientations, use both.
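A simplified sketch of the orientation histogram (36 bins over 0 to 360 degrees, gradients weighted by a Gaussian of width 1.5σ as in Lowe's scheme); histogram smoothing, peak interpolation, and the secondary-peak rule are omitted.

```python
import numpy as np

def dominant_orientation(L, y, x, sigma, nbins=36):
    """Return the dominant gradient orientation (radians) around (y, x) in the
    Gaussian-smoothed image L at the keypoint's scale."""
    radius = int(round(3.0 * 1.5 * sigma))
    hist = np.zeros(nbins)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if not (1 <= yy < L.shape[0] - 1 and 1 <= xx < L.shape[1] - 1):
                continue
            gx = L[yy, xx + 1] - L[yy, xx - 1]          # central differences
            gy = L[yy + 1, xx] - L[yy - 1, xx]
            mag = np.hypot(gx, gy)
            theta = np.arctan2(gy, gx) % (2 * np.pi)
            weight = np.exp(-(dx * dx + dy * dy) / (2 * (1.5 * sigma) ** 2))
            hist[int(theta / (2 * np.pi) * nbins) % nbins] += weight * mag
    return (np.argmax(hist) + 0.5) * 2 * np.pi / nbins   # center of peak bin
```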

Keypoint localization with orientation (example)
On a 233 × 189 image: 832 initial keypoints; 729 keypoints remain after the gradient (contrast) threshold; 536 remain after the principal-curvature ratio threshold.

4. Keypoint Descriptors
At this point, each keypoint has a location, scale, and orientation.
The next step is to compute a descriptor for the local image region about each keypoint that is highly distinctive and as invariant as possible to variations such as changes in viewpoint and illumination.

Normalization
Rotate the window to a standard orientation.
Scale the window size based on the scale at which the point was found.

Lowe’s Keypoint Descriptor
Use the normalized circular region about the keypoint.
Compute the gradient magnitude and orientation at each point in the region.
Weight them by a Gaussian window overlaid on the circle.
Create an orientation histogram over each of the 4 × 4 subregions of the window.
In practice, 4 × 4 descriptors over a 16 × 16 sample array are used; 4 × 4 histograms × 8 directions gives a vector of 128 values.
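A heavily simplified sketch of that layout: a 16 × 16 patch of gradients, assumed already rotated to the keypoint orientation and sampled at its scale, is split into 4 × 4 cells of 8-bin orientation histograms, Gaussian-weighted, normalized, and clamped. The full scheme adds trilinear interpolation between bins, which is omitted here.

```python
import numpy as np

def sift_like_descriptor(patch):
    """patch: 16x16 grayscale array, assumed already rotated to the keypoint
    orientation and sampled at the keypoint's scale. Returns a 128-vector."""
    gy, gx = np.gradient(patch.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)
    # Gaussian weighting over the whole 16x16 window.
    ys, xs = np.mgrid[0:16, 0:16] - 7.5
    w = np.exp(-(xs ** 2 + ys ** 2) / (2 * 8.0 ** 2))
    desc = np.zeros((4, 4, 8))
    for y in range(16):
        for x in range(16):
            b = int(ang[y, x] / (2 * np.pi) * 8) % 8      # orientation bin
            desc[y // 4, x // 4, b] += w[y, x] * mag[y, x]
    desc = desc.ravel()
    desc /= (np.linalg.norm(desc) + 1e-12)                # normalize to unit length
    desc = np.minimum(desc, 0.2)                          # clamp large values
    return desc / (np.linalg.norm(desc) + 1e-12)          # renormalize
```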

Lowe’s Keypoint Descriptor (shown in the figure with 2 × 2 descriptors over an 8 × 8 array)
The descriptor is also invariant to other changes (analogous to the behaviour of complex cells).
In the experiments, a 4 × 4 array of 8-bin histograms is used, for a total of 128 features per keypoint.

Scale Invariant Detectors
Harris-Laplacian¹: find the local maximum of the Harris corner measure in space (image coordinates) and of the Laplacian in scale.
SIFT (Lowe)²: find the local maximum of the difference of Gaussians in both space and scale.
¹ K. Mikolajczyk, C. Schmid. “Indexing Based on Scale Invariant Interest Points”. ICCV 2001.
² D. Lowe. “Distinctive Image Features from Scale-Invariant Keypoints”. IJCV 2004.

Scale Invariant Detectors
Experimental evaluation of detectors with respect to scale change.
Repeatability rate = (# correspondences) / (# possible correspondences).
K. Mikolajczyk, C. Schmid. “Indexing Based on Scale Invariant Interest Points”. ICCV 2001.

Schmid’s Comparison with Harris-Laplacian (affine-invariance comparison)
Translation invariance: both rely on local features, so both are fine.
Rotation invariance: Harris-Laplacian relies on PCA / eigenvalue analysis; SIFT relies on its orientation assignment.
Shear invariance: Harris-Laplacian uses eigenvalues; SIFT does not handle shear directly.
Within about 50 degrees of viewpoint change, SIFT performs better than Harris-Laplacian; beyond about 70 degrees, Harris-Laplacian is better.

Comparison with Harris-Laplacian
Computational time: SIFT uses relatively few floating-point calculations, while Harris-Laplacian relies on an iterative computation that costs much more.

Using SIFT for Matching “Objects”
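A sketch of descriptor matching between an object image and a scene image with Lowe's ratio test, using OpenCV's brute-force matcher; the file names are placeholders.

```python
import cv2

img1 = cv2.imread("object.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# For each descriptor in img1, find its two nearest neighbors in img2.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)

# Lowe's ratio test: keep a match only if it clearly beats the runner-up.
good = []
for pair in knn:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])
print(len(good), "good matches")
```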

Uses for SIFT
Feature points are also used for:
Image alignment (homography, fundamental matrix)
3D reconstruction
Motion tracking
Object recognition
Indexing and database retrieval
Robot navigation
… and more