Interest Points & Descriptors 3 - SIFT

Presentation transcript:

Interest Points & Descriptors 3 - SIFT. Today's lecture: invariant features and SIFT. Interest points and descriptors: recap of Harris corners and correlation; adding scale and rotation invariance; Multi-Scale Oriented Patch descriptors. SIFT (Scale Invariant Feature Transform): interest point detector (DoG) and descriptor.

Harris Corners & Correlation. The classic approach to image feature matching: Harris corners (low autocorrelation / high SSD in all 2D directions) combined with correlation matching (equivalent to SSD for normalised patches). This gives robust image matching compared with pixel-based error functions, BUT: poor invariance properties.
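As an aside not on the original slides, a minimal Python/NumPy sketch of this kind of patch comparison: bias/gain normalisation (subtract the mean, divide by the standard deviation) is what makes SSD equivalent to normalised correlation matching.

```python
import numpy as np

def normalise_patch(patch):
    """Remove bias (mean) and gain (standard deviation) from a patch."""
    p = patch.astype(np.float64)
    return (p - p.mean()) / (p.std() + 1e-8)

def patch_ssd(p1, p2):
    """SSD between two bias/gain-normalised patches; lower is a better match.
    For normalised patches this is monotonically related to correlation."""
    a, b = normalise_patch(p1), normalise_patch(p2)
    return np.sum((a - b) ** 2)
```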

Rotation/Scale Invariance. Consider the original image translated, rotated, and scaled. Is Harris invariant? Translation: YES; rotation: YES; scale: NO. Is correlation invariant? Translation: YES; rotation: NO; scale: NO.

Invariant Features. Local image descriptors that are invariant (unchanged) under image transformations.

Interest Points and Descriptors. Interest points focus attention for image understanding: points with 2D structure in the image, e.g. Harris corners, DoG maxima. Descriptors describe the region around an interest point and must be invariant under geometric distortion and photometric changes, e.g. oriented patches, SIFT.

Canonical Frames

Multi-Scale Oriented Patches. Extract oriented patches at multiple scales. [ Brown, Szeliski, Winder CVPR 2005 ]

Multi-Scale Oriented Patches. Interest point: x, y, scale from multi-scale Harris corners; rotation from the blurred gradient direction. Descriptor: bias/gain-normalised image patch, sampled at 5x the detection scale.

Multi-Scale Oriented Patches. Sample a scaled, oriented patch: an 8x8 patch sampled at 5x the detection scale (so the 8 samples span 40 pixels), bias/gain normalised.

Application: Image Stitching. [ Microsoft Digital Image Pro version 10 ]

Image Sampling: Scale. Compare downsampling ↓2 directly with blurring first and then downsampling ↓2: direct downsampling produces aliasing; pre-blurring avoids it.
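As a small illustration (not from the slides, using SciPy's `gaussian_filter`), the difference between the two sampling schemes:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def downsample_naive(image, factor=2):
    """Keep every `factor`-th row/column; high frequencies alias into the result."""
    return image[::factor, ::factor]

def downsample_antialiased(image, factor=2, sigma=1.0):
    """Gaussian low-pass filter first, then subsample, suppressing the
    frequencies that would otherwise alias."""
    blurred = gaussian_filter(image.astype(np.float64), sigma)
    return blurred[::factor, ::factor]
```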

Image Sampling: Scale. Results at successive downsampling factors: ↓2, ↓4, ↓8.

Image Sampling: Rotation. Problem: the image sample grid (X, Y) and the rotated patch grid (U, V) are not aligned, so samples must be interpolated.

Bilinear Interpolation. Use all 4 adjacent samples I00, I10, I01, I11, weighted by the fractional offsets x and y: I(x, y) = (1 - x)(1 - y)·I00 + x(1 - y)·I10 + (1 - x)·y·I01 + x·y·I11.
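A minimal sketch of that formula in Python/NumPy (illustrative; it assumes (u, v) lies in the image interior):

```python
import numpy as np

def bilinear_sample(image, u, v):
    """Sample `image` at the non-integer location (u, v) = (column, row) using
    the 4 adjacent pixels I00, I10, I01, I11 weighted by fractional offsets."""
    x0, y0 = int(np.floor(u)), int(np.floor(v))
    x, y = u - x0, v - y0                        # fractional offsets in [0, 1)
    I00, I10 = image[y0, x0],     image[y0, x0 + 1]
    I01, I11 = image[y0 + 1, x0], image[y0 + 1, x0 + 1]
    return ((1 - x) * (1 - y) * I00 + x * (1 - y) * I10 +
            (1 - x) * y * I01 + x * y * I11)
```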

The SIFT (Scale Invariant Feature Transform) Detector and Descriptor. Developed by David Lowe, University of British Columbia. Initial paper: ICCV 1999; newer journal paper: IJCV 2004.

Motivation. The Harris operator is not invariant to scale, and correlation is not invariant to rotation (though Schmid and Mohr developed a rotation-invariant descriptor for it in 1997). For better image matching, Lowe's goal was to develop an interest operator that is invariant to scale and rotation. Lowe also aimed to create a descriptor that was robust to the variations corresponding to typical viewing conditions.

Idea of SIFT. Image content is transformed into local feature coordinates that are invariant to translation, rotation, scale, and other imaging parameters.

Claimed Advantages of SIFT. Locality: features are local, so robust to occlusion and clutter (no prior segmentation). Distinctiveness: individual features can be matched to a large database of objects. Quantity: many features can be generated for even small objects. Efficiency: close to real-time performance. Extensibility: can easily be extended to a wide range of differing feature types, with each adding robustness.

Overall Procedure at a High Level. 1. Scale-space extrema detection: search over multiple scales and image locations. 2. Keypoint localization: fit a model to determine location and scale, and select keypoints based on a measure of stability. 3. Orientation assignment: compute the best orientation(s) for each keypoint region. 4. Keypoint description: use local image gradients at the selected scale and rotation to describe each keypoint region.

1. Scale-space extrema detection. Goal: identify locations and scales that can be repeatably assigned under different views of the same scene or object. Method: search for stable features across multiple scales using a continuous function of scale. Prior work has shown that, under a variety of assumptions, the best such function is the Gaussian. The scale space of an image is a function L(x, y, σ) produced by the convolution of a Gaussian kernel (at different scales σ) with the input image.
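As a quick illustration (not from the slides), L(x, y, σ) can be sampled at a few scales with SciPy's Gaussian filter; the particular σ values below are arbitrary.

```python
from scipy.ndimage import gaussian_filter

def scale_space(image, sigmas=(1.0, 1.6, 2.6, 4.0)):
    """L(x, y, sigma): one Gaussian-smoothed copy of the image per sigma."""
    return [gaussian_filter(image.astype(float), s) for s in sigmas]
```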

Aside: Image Pyramids. The bottom level is the original image; the 2nd level is derived from the original image according to some function; the 3rd level is derived from the 2nd level according to the same function; and so on.

Aside: Mean Pyramid. The bottom level is the original image; at the 2nd level, each pixel is the mean of 4 pixels in the original image; at the 3rd level, each pixel is the mean of 4 pixels in the 2nd level; and so on.

Aside: Gaussian Pyramid. At each level the image is smoothed and reduced in size. The bottom level is the original image; at the 2nd level, each pixel is the result of applying a Gaussian filter to the first level and then subsampling to reduce the size; and so on.
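A minimal sketch of that construction (illustrative; the smoothing sigma is a free parameter here):

```python
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(image, levels=4, sigma=1.0):
    """Level 0 is the original image; each subsequent level is the previous
    level smoothed with a Gaussian and subsampled by a factor of 2."""
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        blurred = gaussian_filter(pyramid[-1], sigma)
        pyramid.append(blurred[::2, ::2])
    return pyramid
```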

Example: Subsampling with Gaussian pre-filtering.

Lowe's Scale-space Interest Points. Laplacian of Gaussian kernel, scale-normalised (multiplied by σ²), as proposed by Lindeberg: σ²∇²G. Scale-space detection: find local maxima across scale and space; a good "blob" detector. [ T. Lindeberg IJCV 1998 ]

Lowe's Scale-space Interest Points. The Gaussian satisfies the heat diffusion equation ∂G/∂σ = σ∇²G, so a finite difference of nearby scales approximates the scale-normalised Laplacian: G(x, y, kσ) - G(x, y, σ) ≈ (k - 1)·σ²∇²G. The factor (k - 1) is constant over all scales, so k need not be very small in practice: k has almost no impact on the stability of extrema detection even when it is as significant as √2.

Lowe's Scale-space Interest Points. Scale-space function (a Gaussian convolution): L(x, y, σ) = G(x, y, σ) ∗ I(x, y), where σ is the width of the Gaussian. The difference-of-Gaussian kernel is a close approximation to the scale-normalised Laplacian of Gaussian, so the LoG kernel can be approximated with a difference of separable convolutions at two scales σ and kσ: D(x, y, σ) = (G(x, y, kσ) - G(x, y, σ)) ∗ I(x, y) = L(x, y, kσ) - L(x, y, σ).
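To see the approximation numerically (my own illustration with SciPy; `gaussian_laplace` computes ∇²(G ∗ I)), the DoG response should track (k - 1)·σ²∇²G ∗ I:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

def dog_vs_log(image, sigma=1.6, k=np.sqrt(2)):
    """Return the DoG response and the scale-normalised LoG it approximates:
    D = L(k*sigma) - L(sigma)  ~=  (k - 1) * sigma^2 * LoG(sigma)."""
    img = image.astype(np.float64)
    dog = gaussian_filter(img, k * sigma) - gaussian_filter(img, sigma)
    log = (k - 1) * sigma ** 2 * gaussian_laplace(img, sigma)
    return dog, log
```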

Lowe's Pyramid Scheme. Scale space is separated into octaves: octave 1 uses scale σ, octave 2 uses scale 2σ, etc. In each octave, the initial image is repeatedly convolved with Gaussians to produce a set of scale-space images, and adjacent Gaussian images are subtracted to produce the DoG images. After each octave, the Gaussian image is down-sampled by a factor of 2 to produce an image ¼ the size to start the next octave.

Lowe's Pyramid Scheme. The parameter s determines the number of images per octave: the Gaussian scales within an octave are σᵢ = 2^(i/s)·σ₀ (so, for example, σ_{s+1} = 2^((s+1)/s)·σ₀), giving s+3 Gaussian images per octave including the original (s+2 filters) and s+2 difference images.
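Putting the two pyramid slides together, a compact sketch of the octave structure (an illustration, not Lowe's exact incremental-blur implementation; the parameter names are mine):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_pyramid(image, num_octaves=4, s=3, sigma0=1.6):
    """Build DoG octaves: within an octave the scales are 2**(i/s)*sigma0 for
    i = 0..s+2 (s+3 Gaussian images, s+2 difference images); the image at
    2*sigma0 is downsampled by 2 to start the next octave."""
    k = 2.0 ** (1.0 / s)
    base = image.astype(np.float64)
    octaves = []
    for _ in range(num_octaves):
        gaussians = [gaussian_filter(base, sigma0 * k ** i) for i in range(s + 3)]
        dogs = [g2 - g1 for g1, g2 in zip(gaussians, gaussians[1:])]
        octaves.append(dogs)
        base = gaussians[s][::2, ::2]   # the 2*sigma0 image seeds the next octave
    return octaves
```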

Keypoint localization. Detect maxima and minima of the difference-of-Gaussian in scale space. Of the s+2 difference images per octave, the top and bottom are ignored, so s planes are searched. Each point is compared to its 8 neighbours in the current image and 9 neighbours each in the scales above and below (26 neighbours in total). For each maximum or minimum found, the output is the location and the scale.
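A minimal sketch of the 26-neighbour test (illustrative; it assumes the caller only queries interior pixels of the middle DoG planes):

```python
import numpy as np

def is_scale_space_extremum(dogs, i, y, x):
    """True if pixel (y, x) of DoG plane i is >= or <= all 26 neighbours:
    8 in its own plane plus 9 in each of the planes above and below."""
    value = dogs[i][y, x]
    cube = np.stack([dogs[i - 1][y - 1:y + 2, x - 1:x + 2],
                     dogs[i    ][y - 1:y + 2, x - 1:x + 2],
                     dogs[i + 1][y - 1:y + 2, x - 1:x + 2]])
    return value >= cube.max() or value <= cube.min()
```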

Sampling in scale for efficiency. Scale-space extrema detection: experimental results over 32 images that were synthetically transformed and had noise added, measuring the percentage of keypoints detected and correctly matched and the average numbers detected and matched (a stability vs. expense trade-off). How many scales should be used per octave? Evaluating more scales finds more keypoints, but the number of stable keypoints peaks: for s < 3 the number of stable keypoints still increases with s, for s > 3 it decreases, and s = 3 gives the maximum number of stable keypoints.

Keypoint localization. Once a keypoint candidate is found, perform a detailed fit to nearby data to determine location, scale, and the ratio of principal curvatures. In the initial work, keypoints were found at the location and scale of the central sample point; in newer work, a 3D quadratic function is fit to improve interpolation accuracy. The Hessian matrix is used to eliminate edge responses.
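A sketch of that quadratic fit (my own illustration; `dogs` is assumed to be the list of DoG planes of one octave, and Lowe additionally iterates and re-centres when an offset component exceeds 0.5): the sub-sample offset of the extremum is -H^(-1)·g, with the gradient g and Hessian H of D estimated by finite differences over (x, y, scale).

```python
import numpy as np

def refine_keypoint(dogs, i, y, x):
    """Fit a 3D quadratic to the DoG values around (scale i, row y, col x) and
    return the sub-pixel offset (dx, dy, dscale) plus the interpolated value."""
    D = lambda di, dy, dx: dogs[i + di][y + dy, x + dx]

    # Gradient of D with respect to (x, y, scale), by central differences.
    g = 0.5 * np.array([D(0, 0, 1) - D(0, 0, -1),
                        D(0, 1, 0) - D(0, -1, 0),
                        D(1, 0, 0) - D(-1, 0, 0)])

    # Hessian of D (second derivatives and mixed terms).
    dxx = D(0, 0, 1) + D(0, 0, -1) - 2 * D(0, 0, 0)
    dyy = D(0, 1, 0) + D(0, -1, 0) - 2 * D(0, 0, 0)
    dss = D(1, 0, 0) + D(-1, 0, 0) - 2 * D(0, 0, 0)
    dxy = 0.25 * (D(0, 1, 1) - D(0, 1, -1) - D(0, -1, 1) + D(0, -1, -1))
    dxs = 0.25 * (D(1, 0, 1) - D(1, 0, -1) - D(-1, 0, 1) + D(-1, 0, -1))
    dys = 0.25 * (D(1, 1, 0) - D(1, -1, 0) - D(-1, 1, 0) + D(-1, -1, 0))
    H = np.array([[dxx, dxy, dxs],
                  [dxy, dyy, dys],
                  [dxs, dys, dss]])

    offset = -np.linalg.solve(H, g)
    value = D(0, 0, 0) + 0.5 * g.dot(offset)   # interpolated extremum value
    return offset, value
```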

Eliminating the Edge Response. Reject low-contrast (flat) points: |D(x̂)| < 0.03. Reject edges: keypoints whose ratio of principal curvatures exceeds r = 10. What does this look like? Let α be the eigenvalue of the 2x2 Hessian with larger magnitude and β the smaller, and let r = α/β, so α = rβ. Then Tr(H)²/Det(H) = (α + β)²/(αβ) = (r + 1)²/r, which is at a minimum when the two eigenvalues are equal; keypoints are kept only if Tr(H)²/Det(H) < (r + 1)²/r with r = 10.
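A sketch of the edge test on one DoG plane (illustrative; it follows the trace/determinant form of the curvature-ratio test with r = 10):

```python
def passes_edge_test(dog, y, x, r=10.0):
    """Keep the keypoint only if trace(H)^2 / det(H) < (r + 1)^2 / r, where H
    is the 2x2 spatial Hessian of the DoG estimated by finite differences."""
    dxx = dog[y, x + 1] + dog[y, x - 1] - 2.0 * dog[y, x]
    dyy = dog[y + 1, x] + dog[y - 1, x] - 2.0 * dog[y, x]
    dxy = 0.25 * (dog[y + 1, x + 1] - dog[y + 1, x - 1]
                  - dog[y - 1, x + 1] + dog[y - 1, x - 1])
    trace, det = dxx + dyy, dxx * dyy - dxy * dxy
    if det <= 0:                 # principal curvatures differ in sign: reject
        return False
    return trace * trace / det < (r + 1.0) ** 2 / r
```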

3. Orientation assignment. Create a histogram of local gradient directions at the selected scale, and assign the canonical orientation at the peak of the smoothed histogram. Each key then specifies stable 2D coordinates (x, y, scale, orientation). If there are 2 major orientations, use both.
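A simplified sketch of the orientation histogram (illustrative; 36 bins and the 80% peak rule follow Lowe's paper, but Gaussian weighting of the samples and parabolic peak interpolation are omitted):

```python
import numpy as np

def dominant_orientations(patch, num_bins=36, peak_ratio=0.8):
    """Histogram of gradient directions over a patch already cut out at the
    keypoint's scale.  Returns the angles of all peaks within 80% of the
    highest peak, so a keypoint with two strong orientations yields two."""
    gy, gx = np.gradient(patch.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx) % (2 * np.pi)

    bins = (orientation / (2 * np.pi) * num_bins).astype(int) % num_bins
    hist = np.bincount(bins.ravel(), weights=magnitude.ravel(),
                       minlength=num_bins)

    peaks = np.flatnonzero(hist >= peak_ratio * hist.max())
    return peaks * (2 * np.pi / num_bins)
```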

Keypoint localization with orientation. Example on a 233x189 image: 832 initial keypoints; 729 keypoints remain after the contrast (gradient) threshold; 536 remain after the curvature-ratio threshold.

4. Keypoint Descriptors. At this point each keypoint has a location, scale, and orientation. Next is to compute a descriptor for the local image region about each keypoint that is highly distinctive and as invariant as possible to variations such as changes in viewpoint and illumination.

Normalization. Rotate the window to the standard orientation, and scale the window size based on the scale at which the point was found.

Lowe's Keypoint Descriptor (shown with 2x2 descriptors over an 8x8 sample array). In experiments, a 4x4 array of 8-bin histograms is used, for a total of 128 features per keypoint.

Statistical Motivation. Interest points are subject to error in position, scale, and orientation. [ Geometric Blur, Berg & Malik; GLOH, Mikolajczyk; etc. ]

Biological Motivation. Mimic complex cells in primary visual cortex: Hubel & Wiesel found that cells are sensitive to the orientation of edges but insensitive to their position. This justifies spatial pooling of edge responses. [ "Eye, Brain and Vision" – Hubel and Wiesel 1988 ]

Lowe's Keypoint Descriptor. Use the normalized region about the keypoint; compute the gradient magnitude and orientation at each point in the region; weight them by a Gaussian window overlaid on the region; create an orientation histogram over each of the 4x4 subregions of the window. In practice, 4x4 descriptors over a 16x16 sample array are used: 4x4 subregions times 8 directions gives a vector of 128 values.
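A simplified descriptor sketch (illustrative; it assumes the 16x16 patch has already been rotated to the keypoint orientation and cut out at its scale, and it omits the Gaussian weighting, trilinear interpolation, and clamp-at-0.2 renormalisation used by Lowe):

```python
import numpy as np

def sift_descriptor(patch):
    """128-value descriptor from a 16x16, orientation-normalised patch:
    a 4x4 grid of subregions, each contributing an 8-bin histogram of
    gradient orientations weighted by gradient magnitude."""
    assert patch.shape == (16, 16)
    gy, gx = np.gradient(patch.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    bins = (np.arctan2(gy, gx) % (2 * np.pi) / (2 * np.pi) * 8).astype(int) % 8

    descriptor = []
    for by in range(4):
        for bx in range(4):
            sub = (slice(4 * by, 4 * by + 4), slice(4 * bx, 4 * bx + 4))
            hist = np.bincount(bins[sub].ravel(),
                               weights=magnitude[sub].ravel(), minlength=8)
            descriptor.extend(hist)
    descriptor = np.asarray(descriptor)             # 4 x 4 x 8 = 128 values
    return descriptor / (np.linalg.norm(descriptor) + 1e-8)
```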

Using SIFT for Matching "Objects"


Uses for SIFT. Feature points are also used for: image alignment (homography, fundamental matrix); 3D reconstruction (e.g. Photo Tourism); motion tracking; object recognition; indexing and database retrieval; robot navigation; … many others. [ Photo Tourism: Snavely et al. SIGGRAPH 2006 ]
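For completeness, a short example of detecting and matching SIFT features with OpenCV (assumes an OpenCV build that includes SIFT, e.g. opencv-python 4.4 or later; the file names are placeholders):

```python
import cv2

# Load two views of the same scene in grayscale.
img1 = cv2.imread("scene1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene2.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute 128-dimensional SIFT descriptors.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Nearest-neighbour matching with Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} ratio-test matches")
```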