Local features and image matching May 7th, 2019

Local features and image matching May 7th, 2019 Yong Jae Lee UC Davis

Last time: RANSAC for robust fitting; image mosaics; lines, translation; fitting a 2D transformation; homography.

Today Mosaics recap: How to warp one image to the other, given H? How to detect which features to match?

How to stitch together a panorama (a.k.a. mosaic)? Basic procedure:
1. Take a sequence of images from the same position, rotating the camera about its optical center.
2. Compute the transformation between the second image and the first.
3. Transform the second image to overlap with the first.
4. Blend the two together to create a mosaic.
5. If there are more images, repeat.
Source: Steve Seitz
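
A minimal sketch of the warp-and-blend step, assuming OpenCV is available and that H maps the second image into the first image's frame (the function name and the naive overwrite blend are illustrative choices, not from the slides):

```python
import cv2
import numpy as np

def stitch_pair(img1, img2, H):
    """Warp img2 into img1's coordinate frame and blend (naively)."""
    h1, w1 = img1.shape[:2]
    h2, w2 = img2.shape[:2]
    # Warp the second image onto a canvas wide enough to hold both images.
    canvas = cv2.warpPerspective(img2, H, (w1 + w2, h1))
    # Naive blend: simply overwrite the left part with the reference image.
    canvas[:h1, :w1] = img1
    return canvas
```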

Mosaics: obtain a wider angle view by combining multiple images. Image from S. Seitz.

Homography: To compute the homography given pairs of corresponding points in the images, we need to set up an equation where the parameters of H are the unknowns.
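
For reference, the projective mapping written out in the h00…h22 notation used on the next slide (a standard formulation; the layout is reconstructed, not copied from the slide images):

```latex
\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} \cong
\begin{bmatrix} h_{00} & h_{01} & h_{02} \\ h_{10} & h_{11} & h_{12} \\ h_{20} & h_{21} & h_{22} \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\;\;\Longrightarrow\;\;
x' = \frac{h_{00}x + h_{01}y + h_{02}}{h_{20}x + h_{21}y + h_{22}}, \qquad
y' = \frac{h_{10}x + h_{11}y + h_{12}}{h_{20}x + h_{21}y + h_{22}}
```

Each point correspondence therefore contributes two linear equations in the nine unknown entries of H.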

Solving for homographies: p′ = Hp is defined only up to a scale factor, so we constrain the Frobenius norm of H to be 1. The problem to be solved is the homogeneous linear system Ah = 0 (in the least-squares sense: minimize ‖Ah‖ subject to ‖h‖ = 1), where the vector of unknowns is h = [h00, h01, h02, h10, h11, h12, h20, h21, h22]^T. There is a whole family of solution homographies, differing only in scale, that represent the same geometric transformation. Adapted from Devi Parikh

Solving for homographies: There are 9 variables h00, …, h22. Are there 9 degrees of freedom? No: we can multiply all hij by a nonzero scalar k without changing the equations, so there is a whole family of solution homographies, differing only in scale, that represent the same geometric transformation. (DOF: the number of variables that are free to vary.)

Enforcing 8 DOF: Impose a unit vector constraint, i.e., solve subject to ‖h‖ = 1. (The alternative of simply fixing h22 = 1 fails if h22 = 0 and gives poor results if h22 is close to zero, so it is not recommended.)
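
A minimal DLT sketch under the unit-norm constraint, using the SVD (the function name and the NumPy implementation are illustrative, not from the lecture):

```python
import numpy as np

def fit_homography(pts, pts_prime):
    """pts, pts_prime: (N, 2) arrays of corresponding points, N >= 4."""
    A = []
    for (x, y), (xp, yp) in zip(pts, pts_prime):
        # Two linear equations per correspondence, in the 9 entries of h.
        A.append([-x, -y, -1, 0, 0, 0, xp * x, xp * y, xp])
        A.append([0, 0, 0, -x, -y, -1, yp * x, yp * y, yp])
    A = np.asarray(A)
    # h is the right singular vector for the smallest singular value;
    # it has unit norm by construction, enforcing the constraint above.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)
```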

Projective: # correspondences? [Figure: a transformation T maps (x, y) to (x′, y′).] How many correspondences are needed for a projective transformation? Source: Alyosha Efros

RANSAC for estimating homography. RANSAC loop:
1. Select four feature pairs (at random).
2. Compute the homography H (exact).
3. Compute inliers: pairs where SSD(pi′, H pi) < ε.
4. Keep the largest set of inliers.
5. Re-compute a least-squares estimate of H on all of the inliers.
Slide credit: Steve Seitz
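
A sketch of the loop above in NumPy (it reuses the hypothetical fit_homography helper from the DLT sketch; the iteration count and the pixel threshold eps are assumed tuning choices):

```python
import numpy as np

def ransac_homography(pts, pts_prime, n_iters=1000, eps=3.0):
    """pts, pts_prime: (N, 2) arrays of putative correspondences."""
    best_H, best_inliers = None, np.zeros(len(pts), dtype=bool)
    P = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous source points
    for _ in range(n_iters):
        # 1. Select four feature pairs at random.
        idx = np.random.choice(len(pts), 4, replace=False)
        # 2. Compute the (exact) homography from the four pairs.
        H = fit_homography(pts[idx], pts_prime[idx])
        # 3. Count inliers: points whose reprojection error is below eps.
        proj = P @ H.T
        proj = proj[:, :2] / proj[:, 2:3]
        err = np.linalg.norm(proj - pts_prime, axis=1)
        inliers = err < eps
        # 4. Keep the largest inlier set.
        if inliers.sum() > best_inliers.sum():
            best_H, best_inliers = H, inliers
    # 5. Re-fit (least squares) on all inliers of the best model.
    if best_inliers.sum() >= 4:
        best_H = fit_homography(pts[best_inliers], pts_prime[best_inliers])
    return best_H, best_inliers
```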

Robust feature-based alignment Source: L. Lazebnik

Robust feature-based alignment Extract features Source: L. Lazebnik

Robust feature-based alignment Extract features Compute putative matches Source: L. Lazebnik

Robust feature-based alignment Extract features Compute putative matches Loop: Hypothesize transformation T (small group of putative matches that are related by T) Source: L. Lazebnik

Robust feature-based alignment Extract features Compute putative matches Loop: Hypothesize transformation T (small group of putative matches that are related by T) Verify transformation (search for other matches consistent with T) Source: L. Lazebnik

Creating and Exploring a Large Photorealistic Virtual Space: a collection of several million images downloaded from Flickr, broken into themes that consist of a few hundred thousand images each. Josef Sivic, Biliana Kaneva, Antonio Torralba, Shai Avidan and William T. Freeman, Internet Vision Workshop, CVPR 2008. http://www.youtube.com/watch?v=E0rboU10rPo

Creating and Exploring a Large Photorealistic Virtual Space. [Figure: scene matching with camera view transformations. First row: the input image with the desired camera view overlaid in green. Second row: the synthesized view from the new camera; the goal is to find images which can fill in the unseen portion of the image (shown in black) while matching the visible portion. Third row: the top matching image found in the dataset of street scenes for each motion. Fourth row: induced camera motion between the two pictures. Final row: composite after Poisson blending.]

Today Mosaics recap: How to warp one image to the other, given H? How to detect which features to match?

Detecting local invariant features Detection of interest points Harris corner detection (Scale invariant blob detection: LoG) (Next time: description of local patches)

Local features: main components Detection: Identify the interest points Description: Extract vector feature descriptor surrounding each interest point. Matching: Determine correspondence between descriptors in two views 3 main components Kristen Grauman

Local features: desired properties Repeatability The same feature can be found in several images despite geometric and photometric transformations Saliency Each feature has a distinctive description Compactness and efficiency Many fewer features than image pixels Locality A feature occupies a relatively small area of the image; robust to clutter and occlusion 4 desired properties

Applications Local features have been used for: Image alignment 3D reconstruction Motion tracking Robot navigation Indexing and database retrieval Object recognition Lana Lazebnik

A hard feature matching problem NASA Mars Rover images

Answer below (look for tiny colored squares…) NASA Mars Rover images with SIFT feature matches Figure by Noah Snavely

Goal: interest operator repeatability. We want to detect (at least some of) the same points in both images, yet we have to be able to run the detection procedure independently per image. (If the detector fires on different points in the two images, there is no chance to find true matches.)

Goal: descriptor distinctiveness. We want to be able to reliably determine which point goes with which. The descriptor must provide some invariance to geometric and photometric differences between the two views.

Local features: main components Detection: Identify the interest points Description: Extract vector feature descriptor surrounding each interest point. Matching: Determine correspondence between descriptors in two views

What points would you choose (for repeatability, distinctiveness)?

Corners as distinctive interest points: we should easily recognize the point by looking through a small window; shifting the window in any direction should give a large change in intensity. “Flat” region: no change in any direction. “Edge”: no change along the edge direction. “Corner”: significant change in all directions. Slide credit: Alyosha Efros, Darya Frolova, Denis Simakov
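
The window-shifting criterion is usually written as the auto-correlation error below (a standard Harris formulation, added for reference; W is the window and w(x, y) an optional weighting):

```latex
E(u, v) = \sum_{(x, y) \in W} w(x, y)\,\bigl[\,I(x + u,\, y + v) - I(x, y)\,\bigr]^2
```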

Corners as distinctive interest points: M is a 2 × 2 matrix of image derivatives, averaged in a neighborhood of the point (notation below).
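
The matrix in question is the second moment matrix (standard notation, reconstructed here because the slide's formula is an image):

```latex
M = \sum_{(x, y) \in W} w(x, y)
\begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix},
\qquad I_x = \frac{\partial I}{\partial x}, \;\; I_y = \frac{\partial I}{\partial y}
```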

What does this matrix reveal? First, consider an axis-aligned corner:

What does this matrix reveal? First, consider an axis-aligned corner: the mixed terms vanish and M is diagonal, with the eigenvalues λ1, λ2 on the diagonal. This means the dominant gradient directions align with the x or y axis. Look for locations where both λ’s are large; if either λ is close to 0, then this is not corner-like. What if we have a corner that is not aligned with the image axes?

What does this matrix reveal? Since M is symmetric, it has an eigenvalue decomposition in terms of a rotation R (below). The eigenvalues of M reveal the amount of intensity change in the two principal orthogonal gradient directions in the window.
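
The decomposition referred to, in standard form (R is the rotation that aligns the axes with the principal gradient directions):

```latex
M = R^{-1} \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} R
```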

Corner response function: “edge”: λ1 >> λ2 (or λ2 >> λ1); “corner”: λ1 and λ2 are both large, λ1 ~ λ2; “flat” region: λ1 and λ2 are both small.
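
A commonly used corner response that separates these three cases without explicitly computing the eigenvalues is the Harris measure (α is an empirical constant, typically 0.04–0.06):

```latex
f = \det(M) - \alpha \,\operatorname{trace}(M)^2
  = \lambda_1 \lambda_2 - \alpha\,(\lambda_1 + \lambda_2)^2
```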

Harris corner detector:
1. Compute the M matrix for each image window to get its cornerness score.
2. Find points whose surrounding window gave a large corner response (f > threshold).
3. Take the points of local maxima, i.e., perform non-maximum suppression.
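
A minimal sketch of these three steps, assuming SciPy is available (window sigma, alpha, and the threshold are illustrative choices):

```python
import numpy as np
from scipy import ndimage

def harris_corners(img, sigma=1.0, alpha=0.05, thresh_rel=0.01):
    """Return (row, col) locations of Harris corners in a grayscale image."""
    img = img.astype(float)
    # Image derivatives.
    Ix = ndimage.sobel(img, axis=1)
    Iy = ndimage.sobel(img, axis=0)
    # Entries of M, averaged over a Gaussian window around each pixel.
    Sxx = ndimage.gaussian_filter(Ix * Ix, sigma)
    Syy = ndimage.gaussian_filter(Iy * Iy, sigma)
    Sxy = ndimage.gaussian_filter(Ix * Iy, sigma)
    # Cornerness score f = det(M) - alpha * trace(M)^2 at every pixel.
    f = (Sxx * Syy - Sxy ** 2) - alpha * (Sxx + Syy) ** 2
    # Threshold, then non-maximum suppression over a 3x3 neighborhood.
    local_max = ndimage.maximum_filter(f, size=3)
    corners = (f == local_max) & (f > thresh_rel * f.max())
    return np.argwhere(corners)
```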

Example of Harris application Kristen Grauman

Example of Harris application Compute corner response at every pixel. Kristen Grauman

Example of Harris application Only local maxima exceeding the average response (thresholded) Kristen Grauman

Properties of the Harris corner detector Rotation invariant? Yes

Properties of the Harris corner detector Rotation invariant? Yes. Translation invariant? Yes.

Properties of the Harris corner detector Rotation invariant? Yes. Translation invariant? Yes. Scale invariant? No: when a corner is magnified, each local window sees only a gently curving edge, so all points will be classified as edges rather than a corner.

Summary Image warping to create a mosaic, given a homography. Interest point detection: Harris corner detector. Next time: Laplacian of Gaussian, automatic scale selection.