FAsT-Match: Fast Affine Template Matching

Presentation transcript:

FAsT-Match: Fast Affine Template Matching Simon Korman, Daniel Reichman, Gilad Tsur, Shai Avidan CVPR 2013 Presented by Lee, YoonSeok Today I am going to talk about FAsT-Match: fast affine template matching.

Review: Boundary Preserving Dense Local Regions

Overview: Template Matching – Related Work, Main Idea, Algorithm, Results, Summary

Generalized Template Matching Find the best transformation (… / Translation / Euclidean / Similarity / Affine / Projective / …) between two given images. FAsT-Match: Fast Affine Template Matching [Simon Korman et al., CVPR 2013]

Generalized Template Matching The algorithm: take a sample of the affine transformations; evaluate each transformation in the sample; return the best. Questions: Which sample should we use? How does it guarantee a bound?

Related Work: Direct methods Lucas, Kanade “An iterative image registration technique with an application to stereo vision” [IJCAI 1981] Baker, Matthews “Lucas-Kanade 20 years on: A unifying framework” [IJCV 04]

Related Work: Indirect methods Lowe “Distinctive image features from scale-invariant keypoints” [IJCV 04] Morel, Yu “ASIFT: A new framework for fully affine invariant image comparison” [SIAM 09] M.A. Fischler, R.C. Bolles “Random sample consensus” [Comm. of the ACM 81]

The Main Idea [figure: the template, the image, and the transformation space (e.g. affine) that maps between them]

Formal Problem Statement Input: a grayscale template I1 of dimension n1×n1 and a grayscale image I2 of dimension n2×n2. Distance with respect to a specific transformation T: ∆_T(I1, I2) = (1/n1²) · Σ_p |I1(p) − I2(T(p))| (normalized SAD over template pixels p). Distance with respect to any transformation in a family A (affinities): ∆_A(I1, I2) = min_{T∈A} ∆_T(I1, I2). Goal: given δ > 0, find a transformation T in A for which: ∆_T(I1, I2) − ∆_A(I1, I2) = O(δ)
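The per-transformation distance ∆_T can be evaluated directly. Below is a minimal NumPy sketch, assuming intensities normalized to [0, 1], nearest-neighbor lookup, and out-of-bounds targets counted as maximal error (the function name and these conventions are mine, not the paper's):

```python
import numpy as np

def sad_under_transform(template, image, T):
    """Normalized SAD Delta_T(I1, I2): map each template pixel p through
    the 2x3 affine matrix T and compare intensities at the target pixel."""
    n1 = template.shape[0]
    h2, w2 = image.shape
    total = 0.0
    for y in range(n1):
        for x in range(n1):
            # T maps template coordinates (x, y) into image coordinates
            tx = T[0, 0] * x + T[0, 1] * y + T[0, 2]
            ty = T[1, 0] * x + T[1, 1] * y + T[1, 2]
            xi, yi = int(round(tx)), int(round(ty))
            if 0 <= xi < w2 and 0 <= yi < h2:
                total += abs(float(template[y, x]) - float(image[yi, xi]))
            else:
                total += 1.0  # maximal error for out-of-bounds targets
    return total / (n1 * n1)
```

For an n1×n1 template this costs O(n1²) per transformation, which is what motivates the sampled estimate later in the talk.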

Simple Algorithm For each affine transformation T: compute the distance ∆_T(I1, I2); return T* with the smallest distance. Infinitely many transformations – need to discretize. “Combinatorial bounds and algorithmic aspects of image matching under projective transformations” [Hundt & Liskiewicz, MFCS 2008]: enumerate ≈ n^18 affine transformations (for n×n images). Guarantee: the best possible transformation.
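The simple algorithm is just an argmin over candidate transformations. A sketch (helper names are mine; `distance` stands for any ∆_T evaluator such as SAD):

```python
import numpy as np

def best_transform(template, image, transforms, distance):
    """Evaluate every candidate transformation and keep the minimizer."""
    best_T, best_d = None, np.inf
    for T in transforms:
        d = distance(template, image, T)
        if d < best_d:
            best_T, best_d = T, d
    return best_T, best_d
```

With a full discretization the candidate list would hold the ≈ n^18 transformations quoted above, which is why the net construction on the following slides is needed.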

Algorithm – take 2 Sample the transformation space – build a net A_δ of transformations. For each affine transformation T in the net A_δ: compute the distance ∆_T(I1, I2); return T* with the smallest distance. Guarantee: ‘δ-away’ from the best possible distance: ∆_{T*}(I1, I2) − ∆_{T_OPT}(I1, I2) = O(δ)

Algorithm – take 3 For each affine transformation T in the net A_δ: estimate the distance ∆_T(I1, I2) by sampling pixels (estimating the SAD to within O(δ)); return T* with the smallest estimated distance. Guarantee: ∆_{T*}(I1, I2) − ∆_{T_OPT}(I1, I2) = O(δ). Thus – total runtime is:
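The sampled estimate replaces the full O(n1²) sum with m random template pixels; by a standard Hoeffding-type argument, m = O(1/δ²) samples give an additive O(δ) error with high probability. A sketch with my own conventions (nearest-neighbor lookup, out-of-bounds samples counted as maximal error):

```python
import numpy as np

def sampled_sad(template, image, T, m, rng):
    """Estimate Delta_T(I1, I2) from m uniformly sampled template pixels."""
    n1 = template.shape[0]
    h2, w2 = image.shape
    xs = rng.integers(0, n1, size=m)
    ys = rng.integers(0, n1, size=m)
    # Map the sampled template coordinates through the 2x3 affine matrix T
    tx = np.rint(T[0, 0] * xs + T[0, 1] * ys + T[0, 2]).astype(int)
    ty = np.rint(T[1, 0] * xs + T[1, 1] * ys + T[1, 2]).astype(int)
    inside = (tx >= 0) & (tx < w2) & (ty >= 0) & (ty < h2)
    errs = np.ones(m)  # out-of-bounds samples count as maximal error
    errs[inside] = np.abs(template[ys[inside], xs[inside]]
                          - image[ty[inside], tx[inside]])
    return errs.mean()
```

The cost per transformation is now independent of the template size, depending only on the sample budget m.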

The Net A_δ Transformations T1 and T2 are x-close when they map every template point at most x apart (here x = δ·n1). Any affine transformation is δn1-close to some transformation in A_δ (A_δ is a δn1-cover of the affine transformations). Possible construction with size:
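One way to realize such a net is to decompose an affine map as T = Translation ∘ R(θ2) ∘ Scale(sx, sy) ∘ R(θ1) and grid each of the six parameters with a step proportional to δ, so neighboring transformations move any template point by O(δ·n1). The step constants and parameter ranges below are illustrative stand-ins, not the paper's exact construction:

```python
import itertools
import numpy as np

def build_net(n1, n2, delta):
    """Grid the 6 affine parameters; each entry is a 2x3 matrix [A | t]."""
    trans = np.arange(0, n2, delta * n1)      # translation grid
    rots = np.arange(0, 2 * np.pi, delta)     # rotation grid (radians)
    scales = np.arange(0.5, 2.0, delta)       # scale grid
    net = []
    for tx, ty, th1, th2, sx, sy in itertools.product(
            trans, trans, rots, rots, scales, scales):
        R1 = np.array([[np.cos(th1), -np.sin(th1)],
                       [np.sin(th1), np.cos(th1)]])
        R2 = np.array([[np.cos(th2), -np.sin(th2)],
                       [np.sin(th2), np.cos(th2)]])
        A = R2 @ np.diag([sx, sy]) @ R1
        net.append(np.column_stack([A, [tx, ty]]))
    return net
```

The net size grows like the product of the six per-parameter grids, so it shrinks rapidly as δ grows; the branch-and-bound scheme on the next slide is what keeps the fine-δ rounds affordable.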

Fast-Match: a Branch-and-Bound Scheme Iteratively increase the net precision (decrease δ). Throw away irrelevant regions of transformation space. T_Closest is guaranteed to move on to the next round (off-net neighbors of above-threshold points are worse than T*).
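The branch-and-bound loop can be illustrated end to end on the translation-only case, where ∆_T is an exact patch SAD. In this toy sketch (the names and the shrinking-slack threshold rule are my own simplifications of the paper's O(δ)-based bound), each round scores the current grid, discards candidates far above the best, then halves the step and expands only around the survivors:

```python
import numpy as np

def branch_and_bound_translations(template, image, step0=8, rounds=4):
    """Coarse-to-fine search over integer translations (x, y)."""
    n1 = template.shape[0]
    h2, w2 = image.shape
    step = step0
    candidates = {(x, y) for x in range(0, w2 - n1 + 1, step)
                  for y in range(0, h2 - n1 + 1, step)}
    best, best_d = None, np.inf
    for _ in range(rounds):
        scored = []
        for (x, y) in candidates:
            d = np.abs(template - image[y:y + n1, x:x + n1]).mean()
            scored.append(((x, y), d))
            if d < best_d:
                best, best_d = (x, y), d
        # Keep candidates within a slack of the best; the slack shrinks
        # as the net refines (simplified stand-in for the paper's bound)
        thresh = best_d + step / n1
        survivors = [p for p, d in scored if d <= thresh]
        step = max(1, step // 2)
        candidates = {(min(max(x + dx, 0), w2 - n1),
                       min(max(y + dy, 0), h2 - n1))
                      for (x, y) in survivors
                      for dx in (-step, 0, step) for dy in (-step, 0, step)}
    return best, best_d
```

The full method applies the same refine-and-prune loop over the six-parameter affine net, with the sampled SAD estimate standing in for the exact patch distance.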

Results Pascal VOC 2010 dataset, 200 random image/template pairs. Template dimensions of 10%, 30%, 50%, 70%, 90%. ‘Comparison’ to a feature-based method – ASIFT. Image degradations (template left intact): Gaussian blur with STD of {0,1,2,4,7,11} pixels; Gaussian noise with STD of {0,5,10,18,28,41}; JPEG compression of quality {75,40,20,10,5,2}.

Result Fast-Match vs. ASIFT – template dimension 50%

Result Fast-Match vs. ASIFT – template dimension 20%

Result Runtimes

Template Dim: 45%

Template Dim: 35%

Template Dim: 25%

Template Dim: 15%

Template Dim: 10%

Bad overlap due to ambiguity

High SAD due to high total variation (TV) and ambiguity

Fast-Match: Summary Handles template matching under arbitrary affine (6 DoF) transformations, with guaranteed error bounds and fast execution. Main ingredients: sampling of the transformation space (based on variation); quick transformation evaluation (‘property testing’); a Branch-and-Bound scheme.

Q&A