05 - Feature Detection

Overview
Feature Detection
–Intensity Extrema
–Blob Detection
–Corner Detection
Feature Descriptors
Feature Matching
Conclusion

Overview
The goal of feature detection is to find geometric objects in an image that are visually interesting.
(Figure: two example images, one labeled "boring" and one labeled "interesting")

Overview
Features typically have descriptors that capture local geometric properties in the image. Feature descriptors can be used for a wide range of computer vision applications:
–Align multiple images to each other
–Track the motion of objects in an image sequence
–Perform object or face recognition
–Detect defects in manufactured objects

Feature Detection
Features can be computed from geometric or statistical properties of an image. The most common types of image features are points, lines, and regions.

Feature Detection
In order to be useful, features should be invariant to image translation and rotation:
–Rotating or translating an image will yield the same number of features, but in different locations
In some cases features may also be scale invariant:
–Resizing or blurring an image will yield a subset of the original image features, corresponding to the larger scale geometric objects

Feature Detection
Points are the most basic geometric feature to detect in an image, but finding interesting points is not easy. We look at small neighborhoods in the image to find points that stand out in some way:
–Intensity extrema (maxima and minima)
–Blobs, found using differential properties
–Corners, found using statistical properties

Intensity Extrema
Intensity maxima = positions of stars or galaxies in a space image

Intensity Extrema
Intensity minima = positions of eyes, nose, or mouth in an image of a face

Intensity Extrema
How do we locate intensity extrema?
–Scan the image top-to-bottom and left-to-right
–Look in the NxN neighborhood of each pixel (x,y)
–Intensity maximum if pixel (x,y) > all others in the neighborhood
–Intensity minimum if pixel (x,y) < all others in the neighborhood
(Figure: example grids where 15 is the maximum in its 3x3 region and 9 is the minimum in its 3x3 region)

Intensity Extrema
// Scan interior pixels (a 1-pixel border is skipped so all 8 neighbors exist)
for (int y = 1; y < Ydim - 1; y++)
    for (int x = 1; x < Xdim - 1; x++)
    {
        // Check for maximum: pixel must exceed all 8 neighbors
        PIXEL pixel = Data2D[y][x];
        if ((pixel > Data2D[y - 1][x - 1]) && (pixel > Data2D[y - 1][x]) &&
            (pixel > Data2D[y - 1][x + 1]) && (pixel > Data2D[y][x - 1]) &&
            (pixel > Data2D[y][x + 1]) && (pixel > Data2D[y + 1][x - 1]) &&
            (pixel > Data2D[y + 1][x]) && (pixel > Data2D[y + 1][x + 1]))
            out.Data2D[y][x] = 1;

Intensity Extrema
        // Check for minimum: pixel must be below all 8 neighbors
        if ((pixel < Data2D[y - 1][x - 1]) && (pixel < Data2D[y - 1][x]) &&
            (pixel < Data2D[y - 1][x + 1]) && (pixel < Data2D[y][x - 1]) &&
            (pixel < Data2D[y][x + 1]) && (pixel < Data2D[y + 1][x - 1]) &&
            (pixel < Data2D[y + 1][x]) && (pixel < Data2D[y + 1][x + 1]))
            out.Data2D[y][x] = -1;
    }

Intensity Extrema
What about pixels with the same intensity value?
–Can say maximum if pixel (x,y) >= all others
–Can say minimum if pixel (x,y) <= all others
–Often results in extrema regions instead of isolated points
(Figure: grids where all points with value 15 are maxima in their 3x3 regions, and all points with value 9 are minima in their 3x3 regions)

Intensity Extrema
How can we find important extrema?
–Reduce the number of intensity extrema
–Use a larger neighborhood when scanning the image, or perform Gaussian blurring before scanning (see the sketch below)
–The neighborhood size N or the blurring standard deviation σ defines the scale of the extrema
–Can also look at multiple N or σ values to obtain scale invariant feature points
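A minimal sketch of the Gaussian blurring step (an illustrative helper, not code from the slides): build a normalized 1D kernel and convolve it across rows and then columns, since the 2D Gaussian is separable. Truncating the kernel at radius 3σ is a common choice, keeping about 99.7% of the Gaussian's mass.

#include <cmath>
#include <vector>

// Build a normalized 1D Gaussian kernel; apply it along rows, then columns,
// to blur the image before the extrema scan.
std::vector<float> GaussianKernel(float sigma)
{
    int radius = (int)std::ceil(3 * sigma);
    std::vector<float> kernel(2 * radius + 1);
    float sum = 0;
    for (int i = -radius; i <= radius; i++)
        sum += kernel[i + radius] = std::exp(-(i * i) / (2 * sigma * sigma));
    for (float &w : kernel)
        w /= sum;                      // normalize so the weights sum to 1
    return kernel;
}

Larger σ values smooth away small-scale extrema, so only larger-scale feature points survive the scan.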

Intensity Extrema
(Figures: extrema detected after Gaussian blurring with σ = 1, 2, 4, and 8)

Blob Detection
The goal of blob detection is to locate regions of an image that are visibly lighter or darker than their surrounding regions. One way to detect blobs (sketched below) is to:
–Look at NxN regions in the image and calculate the average intensities inside and outside a radius R
–Light blob: average inside >> average outside
–Dark blob: average inside << average outside
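A minimal sketch of the inside/outside average test at a single pixel, assuming a float image and that (cx, cy) lies at least 2R from the border; IsLightBlob and threshold are illustrative names, not from the slides.

// Compare the average intensity inside radius R against the average in the
// surrounding region; a light blob is distinctly brighter inside.
bool IsLightBlob(float **img, int cx, int cy, int R, float threshold)
{
    float inSum = 0, outSum = 0;
    int inCount = 0, outCount = 0;
    for (int dy = -2 * R; dy <= 2 * R; dy++)
        for (int dx = -2 * R; dx <= 2 * R; dx++)
        {
            if (dx * dx + dy * dy <= R * R)
            { inSum += img[cy + dy][cx + dx]; inCount++; }   // inside circle
            else
            { outSum += img[cy + dy][cx + dx]; outCount++; } // surrounding
        }
    return inSum / inCount > outSum / outCount + threshold;
}

The dark-blob test is the same with the comparison reversed.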

Blob Detection
Another approach is to convolve the image with black / white circle masks of radius R:
–Centers of light blobs = peaks of the white mask response
–Centers of dark blobs = peaks of the black mask response
–Vary the radius R to find different size blobs

Blob Detection
A better approach is to filter the image with a Laplacian of Gaussian (LoG) mask:
–Peaks and pits of the LoG image mark the centers of blobs
–The size of the blobs is given by the sigma of the Gaussian
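For reference, the LoG kernel has a standard closed form (not spelled out on the slide):

\nabla^2 G(x, y; \sigma) = \frac{x^2 + y^2 - 2\sigma^2}{\sigma^4}\, G(x, y; \sigma),
\qquad
G(x, y; \sigma) = \frac{1}{2\pi\sigma^2}\, e^{-(x^2 + y^2)/(2\sigma^2)}

A blob of radius r produces its strongest scale-normalized LoG response near σ = r / √2, which is why σ encodes blob size.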

Blob Detection
(Figure: example blob detection results)

Blob Detection
The LoG filter can be approximated using Difference of Gaussian (DoG) filters:
–Easy to implement in multiscale applications that are already using Gaussian smoothing
The Determinant of Hessian (DoH) can also be used for blob detection in place of LoG:
–det H = I_xx I_yy − I_xy² is also rotationally invariant
–Does not infringe on SIFT's patent of LoG for multiscale blob detection
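The DoG approximation rests on a relation stated in Lowe's SIFT paper: two Gaussians whose scales differ by a constant factor k satisfy

G(x, y; k\sigma) - G(x, y; \sigma) \approx (k - 1)\, \sigma^2 \nabla^2 G(x, y; \sigma)

so subtracting adjacent levels of an existing Gaussian pyramid yields scale-normalized LoG-like images essentially for free.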

Corner Detection
A corner can be defined as the intersection of two edges in an image, or as a point where there are two different dominant edge directions.

Corner Detection
Corner detectors are typically based on local statistical or differential properties of an image.

Corner Detection
Moravec (1977):
–Developed to help navigate the Stanford cart
–The sum of squared differences (SSD) is used to measure the similarity of overlapping patches in the image
–Corner strength is the smallest SSD between a patch and its 8 neighbors (N, S, E, W, NE, NW, SE, SW)
–A corner is present in the image when the corner strength is locally maximal and above a threshold

Corner Detection
SSD calculation between a patch and its copy shifted by (u, v):
SSD(u, v) = Σ_(x,y) [ I(x + u, y + v) − I(x, y) ]²
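A minimal sketch of one SSD term in the Moravec detector, comparing the NxN patch at (x, y) with the patch shifted by (u, v); PatchSSD is an illustrative name, and bounds checks are omitted for brevity.

// Sum of squared differences between a patch and its shifted copy.
float PatchSSD(float **img, int x, int y, int N, int u, int v)
{
    float ssd = 0;
    for (int dy = 0; dy < N; dy++)
        for (int dx = 0; dx < N; dx++)
        {
            float diff = img[y + dy + v][x + dx + u] - img[y + dy][x + dx];
            ssd += diff * diff;
        }
    return ssd;
}

The corner strength at (x, y) is then the minimum of PatchSSD over the 8 unit shifts (u, v) in {-1, 0, 1} x {-1, 0, 1}, excluding (0, 0).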

Corner Detection
Different local neighborhoods for the SSD calculation give different corner responses.
(Figure: SSD responses for interior, edge, corner, and point neighborhoods)

Corner Detection
(Figure: corners detected on the block image)

Corner Detection
(Figure: corners detected on the house image)

Feature Descriptors
Feature detection locates (x, y) points in an image that are interesting in some way.
–These (x, y) locations by themselves are not very useful for computer vision applications
We use feature descriptors to capture what makes these points interesting.
–We want local geometric properties that are invariant to translation, rotation, and scaling

Feature Descriptors
What local geometric information can we obtain from a point in an image?
–We can look at differential properties
–We can look at statistical properties
The most common approach is to look at the image gradient near feature points:
Gradient magnitude = (I_x² + I_y²)^(1/2)
Gradient direction = arctan(I_y / I_x)

Feature Descriptors
Calculate the gradient magnitude and direction in a small region around the feature point. Quantize the angles to 8 directions and calculate a weighted angle histogram to get 8 features, as sketched below.
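A minimal sketch of the 8-bin weighted angle histogram, assuming a float image and an in-bounds square window; AngleHistogram is an illustrative name, and central differences stand in for whatever gradient operator the application uses.

#include <cmath>

const float PI = 3.14159265f;

// Build a gradient-magnitude-weighted histogram of gradient directions,
// quantized to 8 bins, over a window around the feature point (cx, cy).
void AngleHistogram(float **img, int cx, int cy, int radius, float hist[8])
{
    for (int b = 0; b < 8; b++) hist[b] = 0;
    for (int y = cy - radius; y <= cy + radius; y++)
        for (int x = cx - radius; x <= cx + radius; x++)
        {
            float Ix = (img[y][x + 1] - img[y][x - 1]) / 2;  // central difference
            float Iy = (img[y + 1][x] - img[y - 1][x]) / 2;
            float mag = std::sqrt(Ix * Ix + Iy * Iy);
            float angle = std::atan2(Iy, Ix);                // in [-PI, PI]
            int bin = (int)((angle + PI) / (2 * PI) * 8);    // quantize to 8 bins
            if (bin > 7) bin = 7;                            // guard angle == PI
            hist[bin] += mag;                                // weight by magnitude
        }
}

Using atan2 rather than the raw arctan(I_y / I_x) covers the full circle of directions and avoids division by zero when I_x = 0.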

Feature Descriptors
The angle histogram is invariant to translation. What about image rotation?

Feature Descriptors
How can we fix this problem?
–Find the dominant gradient direction
–Rotate the region so the dominant gradient direction is at angle zero
–Recalculate the angle histogram
We can get almost the same result by shifting the angle histogram to put the largest value into the angle-zero bucket, as sketched below.
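A minimal sketch of the histogram-shifting shortcut (ShiftToDominant is an illustrative name): circularly shift the 8 bins so the largest one lands in bucket 0.

// Rotate the histogram so its peak bin becomes bin 0, approximating
// alignment of the region to its dominant gradient direction.
void ShiftToDominant(float hist[8])
{
    int peak = 0;
    for (int b = 1; b < 8; b++)
        if (hist[b] > hist[peak]) peak = b;
    float shifted[8];
    for (int b = 0; b < 8; b++)
        shifted[b] = hist[(b + peak) % 8];   // circular shift by the peak index
    for (int b = 0; b < 8; b++)
        hist[b] = shifted[b];
}

This only approximates true re-rotation, because the gradients were quantized into bins before the shift.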

Feature Descriptors
More geometric information can be obtained by calculating multiple angle histograms over subregions around the feature point.
(Figures: one layout with 4 histograms giving 4x8 = 32 features, another with 17 histograms giving 17x8 = 136 features)

Feature Descriptors
How many feature values are best?
–More feature values => increased descriptive power, space required, and comparison time
–Fewer feature values => decreased descriptive power, space required, and comparison time
The answer depends on the needs of the CV application:
–SIFT uses 4x4x8 = 128 features
–PCA-SIFT uses principal component analysis to reduce 3042 raw features to 36 features

Feature Matching
There are two issues in feature matching:
–Matching strategy: how feature vectors are compared to each other
–Data structures and algorithms: how feature vectors are stored and retrieved
We must handle lots of data quickly and get robust, accurate feature matching results.

Feature Matching
The easiest way to compare feature vectors is to calculate the Euclidean distance between them. Since square roots are slow, some applications compare squared distances instead, or they calculate the sum of absolute differences (the L1 norm).
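A minimal sketch of these comparisons for descriptor vectors of length n (illustrative names); squared Euclidean distance preserves the ranking of matches, so the square root can be skipped entirely.

#include <cmath>

// Squared Euclidean (L2) distance: monotonic in true distance, no sqrt.
float SquaredDistance(const float *a, const float *b, int n)
{
    float sum = 0;
    for (int i = 0; i < n; i++)
        sum += (a[i] - b[i]) * (a[i] - b[i]);
    return sum;
}

// Sum of absolute differences (L1 norm).
float L1Distance(const float *a, const float *b, int n)
{
    float sum = 0;
    for (int i = 0; i < n; i++)
        sum += std::fabs(a[i] - b[i]);
    return sum;
}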

Feature Matching
Another approach is to normalize the feature vectors and calculate the cosine of the angle between them: cos θ = (a · b) / (|a| |b|).
–When cos θ = 1 the feature vectors match
–When cos θ = 0 the feature vectors do not match

Feature Matching
Feature vectors can also be normalized so the dynamic range in each dimension is the same. This way each dimension will have the same weight in the difference calculation.

Feature Matching
Another approach is to normalize the vectors based on the standard deviation of each dimension and calculate the Mahalanobis distance between feature vectors; with independent dimensions this is d(a, b) = ( Σ_i (a_i − b_i)² / σ_i² )^(1/2).

Feature Matching
The most basic data structure for feature vectors is a dynamic 2D array:
–Trivial to insert or remove data
–Requires an O(N) search to match vectors

Feature Matching
A binary tree can be used to quickly insert, delete, and search 1D data. Trees can be generalized to 2D or 3D data values by having 4 or 8 children per node.
(Figures: a quadtree and an octree)

Feature Matching
Simply extending the quadtree/octree concept to more dimensions wastes a lot of space. The K-D tree solves this by creating a binary space partition (BSP) of the data, as in the node sketch below.
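A minimal sketch of a K-D tree node, assuming descriptors are stored as float arrays; each level splits the data on a single dimension, cycling through the dimensions down the tree, which is what makes the partition binary.

// One node of a K-D tree over d-dimensional descriptors.
struct KDNode
{
    const float *point;   // descriptor stored at this node
    int splitDim;         // dimension compared at this level (depth % d)
    KDNode *left;         // subtree with point[splitDim] less than this node's
    KDNode *right;        // subtree with point[splitDim] greater or equal
};

A search descends by comparing the query's splitDim coordinate at each node, backtracking only when the other half-space could still contain a closer match.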

Feature Matching
There are many variations on K-D trees with different BSPs and different search algorithms:
–The best solutions are O(log N)
Finally, a number of hash table techniques have been devised to store and search features:
–Searching is approximate instead of exact
–The best solutions run in almost constant time

Conclusions
The goal of feature detection is to find geometric objects in an image that are visually interesting.
–These features can then be used to create CV applications for image alignment, object tracking and recognition, and defect detection
Feature detection and matching is a large area:
–Different feature detection methods
–Different feature descriptors
–Different feature matching techniques