Lecture 13: Feature Descriptors and Matching

Lecture 13: Feature Descriptors and Matching CS4670/5670: Computer Vision Kavita Bala

Announcements PA 2 out. Artifact voting out: please vote. Schedule will be updated shortly. HW 1 out tonight; no slip days for HW 1. Due on Sunday Mar 13; answers released on Monday.

Another type of feature The Laplacian of Gaussian (LoG) (very similar to a Difference of Gaussians (DoG) – i.e. a Gaussian minus a slightly smaller Gaussian)
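
A rough sketch of this idea (assuming NumPy and SciPy are available; the sigma and k values below are illustrative, not taken from the lecture):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma=1.6, k=1.6):
    """Approximate the LoG response: blur with a larger Gaussian and
    subtract a blur with a slightly smaller Gaussian."""
    img = image.astype(np.float32)
    small = gaussian_filter(img, sigma)       # "slightly smaller" Gaussian
    large = gaussian_filter(img, k * sigma)   # larger Gaussian
    return large - small                      # DoG, proportional to LoG
```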

Comparison of Keypoint Detectors [Tuytelaars & Mikolajczyk 2008]

Choosing a detector What do you want it for? Precise localization in x-y: Harris. Good localization in scale: Difference of Gaussian. Flexible region shape: e.g., Maximally Stable Extremal Regions. The best choice is often application-dependent; there have been extensive evaluations/comparisons [Mikolajczyk et al., IJCV’05, PAMI’05], and all detectors/descriptors shown here work well.

Feature descriptors We know how to detect good points. Next question: How to match them? Answer: Come up with a descriptor for each point, then find similar descriptors between the two images.

Feature descriptors We know how to detect good points. Next question: How to match them? Lots of possibilities (this is a popular research area). Simple option: match square windows around the point. State-of-the-art approach: SIFT (David Lowe, UBC, http://www.cs.ubc.ca/~lowe/keypoints/).
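
A minimal sketch of the simple option above, matching raw square windows with SSD (assuming NumPy; the window size and function names are just illustrative):

```python
import numpy as np

def window_descriptor(image, x, y, half=8):
    """Naive descriptor: the raw pixels of a square window centered at (x, y)."""
    patch = image[y - half:y + half + 1, x - half:x + half + 1]
    return patch.astype(np.float32).ravel()

def ssd(d1, d2):
    """Sum of squared differences between two descriptors (lower = more similar)."""
    diff = d1 - d2
    return float(np.dot(diff, diff))
```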

Image representations Templates: intensity, gradients, etc. Histograms: color, texture, SIFT descriptors, etc.

Local Descriptors Most features can be thought of as templates, histograms (counts), or combinations The ideal descriptor should be Robust Distinctive Compact Efficient Most available descriptors focus on edge/gradient information Capture texture information K. Grauman, B. Leibe

Image Representations: Histograms Global histogram: represent the distribution of features (color, texture, depth, …). (Example image: Space Shuttle cargo bay; images from Dave Kauchak.)

What kind of things do we compute histograms of? Color, e.g., in the L*a*b* or HSV color space. Texture, e.g., filter banks or HOG (Histogram of Oriented Gradients) over regions.
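
One concrete example of a color histogram, as a sketch assuming OpenCV and NumPy: a histogram of the hue channel in HSV space.

```python
import cv2
import numpy as np

def hue_histogram(bgr_image, bins=32):
    """Histogram of the hue channel after converting to HSV."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180])  # OpenCV hue range is 0..179
    return (hist / (hist.sum() + 1e-12)).ravel()             # normalize to sum to 1
```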

Orientation Normalization [Lowe, SIFT, 1999] Compute an orientation histogram (over 0 to 2π), select the dominant orientation, and normalize: rotate to a fixed orientation. T. Tuytelaars, B. Leibe
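
A minimal sketch of the histogram-based orientation step (assuming NumPy; 36 bins is a common choice following Lowe, but the exact bin count is not given on this slide):

```python
import numpy as np

def dominant_orientation(patch, bins=36):
    """Gradient-orientation histogram over the patch; the peak gives the
    dominant orientation used to rotate the patch to a canonical frame."""
    gy, gx = np.gradient(patch.astype(np.float32))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)          # orientations in [0, 2*pi)
    hist, edges = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    peak = int(np.argmax(hist))
    return 0.5 * (edges[peak] + edges[peak + 1])    # center of the peak bin
```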

Rotation invariance for feature descriptors Find the dominant orientation of the image patch. This is given by x_max, the eigenvector of M corresponding to λ_max (the larger eigenvalue). Rotate the patch according to this angle. Figure by Matthew Brown
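
A sketch of that computation, assuming NumPy and a grayscale patch; M here is the gradient second-moment matrix accumulated over the patch:

```python
import numpy as np

def orientation_from_M(patch):
    """Dominant orientation as the angle of x_max, the eigenvector of M
    with the larger eigenvalue lambda_max."""
    gy, gx = np.gradient(patch.astype(np.float32))
    M = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    eigvals, eigvecs = np.linalg.eigh(M)   # eigh returns eigenvalues in ascending order
    x_max = eigvecs[:, -1]                 # eigenvector for the larger eigenvalue
    return np.arctan2(x_max[1], x_max[0])  # rotate the patch by minus this angle
```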

Multiscale Oriented PatcheS descriptor Take a 40x40 square window around the detected feature. Scale to 1/5 size (using prefiltering). Rotate to horizontal. Sample an 8x8 square window centered at the feature. Intensity normalize the window by subtracting the mean and dividing by the standard deviation in the window. CSE 576: Computer Vision Adapted from slide by Matthew Brown

MOPS descriptor You can combine transformations together to get the final transformation: T = ? Pass this transformation matrix to cv2.warpAffine.

Translate: T = M_T1

Rotate: T = M_R M_T1

Scale: T = M_S M_R M_T1

Translate: T = M_T2 M_S M_R M_T1

Crop: cv2.warpAffine also takes care of the cropping
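
Putting the pieces together, a hedged sketch of the composed transform (assuming NumPy and OpenCV; this omits the Gaussian prefiltering mentioned earlier, and the angle is whatever dominant orientation was computed for the feature):

```python
import cv2
import numpy as np

def mops_patch(image, x, y, angle, patch_size=8, window=40):
    """Compose T = M_T2 M_S M_R M_T1 and let cv2.warpAffine resample and crop."""
    scale = patch_size / float(window)                                    # 8/40 = 1/5 size
    T1 = np.array([[1, 0, -x], [0, 1, -y], [0, 0, 1]], np.float32)        # feature -> origin
    c, s = np.cos(-angle), np.sin(-angle)
    R  = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], np.float32)         # rotate to horizontal
    S  = np.array([[scale, 0, 0], [0, scale, 0], [0, 0, 1]], np.float32)  # scale down
    T2 = np.array([[1, 0, patch_size / 2], [0, 1, patch_size / 2],
                   [0, 0, 1]], np.float32)                                # origin -> patch center
    T = T2 @ S @ R @ T1
    patch = cv2.warpAffine(image, T[:2], (patch_size, patch_size)).astype(np.float32)
    return (patch - patch.mean()) / (patch.std() + 1e-8)                  # intensity normalization
```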

Detections at multiple scales

Invariance of MOPS Intensity Scale Rotation

Scale Invariant Feature Transform (SIFT) Do scale-space invariant feature detection using DoG. [Lowe, IJCV 2004]

Scale Invariant Feature Transform Basic idea: DoG for scale-space feature detection. Take a 16x16 square window around the detected feature. Compute the gradient orientation for each pixel. Throw out weak edges (threshold the gradient magnitude). Create a histogram of surviving edge orientations (angle histogram over 0 to 2π). Adapted from slide by David Lowe

SIFT descriptor Create histogram Divide the 16x16 window into a 4x4 grid of cells (2x2 case shown below) Compute an orientation histogram for each cell 16 cells * 8 orientations = 128 dimensional descriptor Adapted from slide by David Lowe
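
A simplified sketch of that construction (assuming NumPy, and using hard binning; the real descriptor also weights gradients by a Gaussian and uses trilinear interpolation, as the next slides note):

```python
import numpy as np

def sift_like_descriptor(patch16):
    """128-D descriptor from a 16x16 patch: 4x4 grid of cells, 8 orientation
    bins per cell, histograms weighted by gradient magnitude."""
    gy, gx = np.gradient(patch16.astype(np.float32))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)
    desc = []
    for i in range(0, 16, 4):                # 4x4 grid of 4x4-pixel cells
        for j in range(0, 16, 4):
            hist, _ = np.histogram(ang[i:i+4, j:j+4], bins=8,
                                   range=(0, 2 * np.pi),
                                   weights=mag[i:i+4, j:j+4])
            desc.append(hist)
    return np.concatenate(desc)              # 16 cells * 8 orientations = 128
```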

SIFT vector formation Computed on a rotated and scaled version of the window: resample the window according to the computed orientation & scale. Based on gradients weighted by a Gaussian.

Ensure smoothness Trilinear interpolation: a given gradient contributes to 8 bins, 4 in space times 2 in orientation.
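
A small sketch of the "2 in orientation" part of that interpolation (assuming NumPy; the spatial interpolation is analogous and omitted here):

```python
import numpy as np

def split_between_orientation_bins(angle, magnitude, n_bins=8):
    """Distribute one gradient's magnitude between its two nearest
    orientation bins instead of dumping it all into a single bin."""
    bin_width = 2 * np.pi / n_bins
    pos = angle / bin_width - 0.5            # position relative to bin centers
    lo = int(np.floor(pos)) % n_bins
    hi = (lo + 1) % n_bins
    w_hi = pos - np.floor(pos)               # fraction assigned to the upper bin
    out = np.zeros(n_bins)
    out[lo] += magnitude * (1.0 - w_hi)
    out[hi] += magnitude * w_hi
    return out
```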

Reduce effect of illumination The 128-dim vector is normalized to 1. Threshold gradient magnitudes to avoid excessive influence of high gradients: after normalization, clamp gradients > 0.2, then renormalize.
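
A direct sketch of those three steps (assuming NumPy; 0.2 is the clamp value from the slide):

```python
import numpy as np

def normalize_for_illumination(desc, clamp=0.2):
    """Normalize to unit length, clamp large entries, renormalize."""
    d = desc / (np.linalg.norm(desc) + 1e-12)  # 128-dim vector normalized to 1
    d = np.minimum(d, clamp)                   # clamp gradients > 0.2
    return d / (np.linalg.norm(d) + 1e-12)     # renormalize
```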

Properties of SIFT Extraordinarily robust matching technique. Can handle changes in viewpoint: up to about 60 degrees of out-of-plane rotation. Can handle significant changes in illumination, sometimes even day vs. night. Fast and efficient; can run in real time. Lots of code available: http://people.csail.mit.edu/albert/ladypack/wiki/index.php/Known_implementations_of_SIFT

Other descriptors HOG: Histogram of Oriented Gradients (Dalal & Triggs): sliding window, pedestrian detection. FREAK: Fast Retina Keypoint: perceptually motivated, http://infoscience.epfl.ch/record/175537/files/2069.pdf Binary descriptors: BRIEF (Binary Robust Independent Elementary Features), ORB (Oriented FAST and Rotated BRIEF), and BRISK (Binary Robust Invariant Scalable Keypoints) are good examples.

Summary Keypoint detection: repeatable and distinctive. Corners, blobs, stable regions; Harris, DoG. Descriptors: robust and selective. Spatial histograms of orientation. SIFT and variants are typically good for stitching and recognition, but you need not stick to one.

Which features match?

Feature matching Given a feature in I1, how to find the best match in I2? Define distance function that compares two descriptors Test all the features in I2, find the one with min distance

Feature distance How to define the difference between two features f1, f2? Simple approach: L2 distance, ||f1 - f2||. Problem: it can give good scores to ambiguous (incorrect) matches.

Feature distance How to define the difference between two features f1, f2? Better approach: ratio distance = ||f1 - f2|| / ||f1 - f2'||, where f2 is the best SSD match to f1 in I2 and f2' is the 2nd-best SSD match to f1 in I2. This gives large values for ambiguous matches.
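
A hedged sketch of matching with the ratio distance (assuming NumPy arrays of descriptors, one per row; the 0.8 threshold is a common choice from Lowe's paper, not given on the slide):

```python
import numpy as np

def match_with_ratio_test(desc1, desc2, threshold=0.8):
    """For each f1 in image 1, find the best and second-best matches in image 2
    and keep the match only if the ratio distance is small (unambiguous)."""
    matches = []
    for i, f1 in enumerate(desc1):
        dists = np.linalg.norm(desc2 - f1, axis=1)   # L2 distance to every f2
        best, second = np.argsort(dists)[:2]         # indices of f2 and f2'
        ratio = dists[best] / (dists[second] + 1e-12)
        if ratio < threshold:
            matches.append((i, int(best), float(ratio)))
    return matches
```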

PA2: Feature detection and matching