1 Image Matching using Local Symmetry Features Daniel Cabrini Hauagge Noah Snavely Cornell University.

2 Outline Introduction Local Symmetry Scale Space Local Symmetry Features Experimental Results Conclusion

3 Introduction Analysis of symmetry: a long-standing problem in computer vision.

4 Introduction

5 Local Symmetry Our method takes an image as input and computes local symmetry scores over the image and across scale space. Scoring local symmetries. Gradient histogram-based score.

6 Local Symmetry Bilateral symmetry 2n-fold rotational symmetry For both symmetry types

7 Local Symmetry

8 Scoring local symmetries Define our score for each symmetry type. If the image f exhibits a perfect symmetry of a given type at location p, then f(q) = f(M_s,p(q)) for all q. Distance function: d(q, r) = |f(q) - f(r)|. A weight mask gives the importance of each set of corresponding point pairs around the center point p in determining the symmetry score at p.
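
The score above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: it assumes a horizontal mirror axis for M_s,p, a Gaussian weight mask, and the absolute intensity difference for d; the function name `symmetry_distance` is hypothetical.

```python
import numpy as np

def symmetry_distance(f, p, radius=8, sigma=4.0):
    """Weighted symmetry distance at pixel p = (row, col) for a horizontal
    mirror axis through p: averages d(q, M(q)) = |f(q) - f(M(q))| over a
    neighbourhood, with a Gaussian weight mask centred on p."""
    r0, c0 = p
    total, weight_sum = 0.0, 0.0
    for dr in range(-radius, radius + 1):
        for dc in range(1, radius + 1):          # points q right of the axis
            q = (r0 + dr, c0 + dc)
            m = (r0 + dr, c0 - dc)               # M_s,p(q): mirrored point
            if not (0 <= q[0] < f.shape[0] and q[1] < f.shape[1]
                    and 0 <= m[1]):
                continue
            w = np.exp(-(dr * dr + dc * dc) / (2 * sigma ** 2))
            total += w * abs(float(f[q]) - float(f[m]))
            weight_sum += w
    return total / weight_sum if weight_sum else 0.0

# A patch that is perfectly mirror-symmetric about column 9 scores exactly 0.
img = np.tile(np.concatenate([np.arange(10), np.arange(9)[::-1]]), (20, 1))
print(symmetry_distance(img, (10, 9)))   # → 0.0 for this symmetric patch
```

A monotone ramp (no mirror symmetry) would score strictly above zero with the same call.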

9 Scoring local symmetries Define a function that we call the local symmetry distance, SD; the symmetry score is SS = L * SD.

10 Gradient histogram-based score

11 Scale space For the purpose of feature detection, we choose a different weight function, where r is the distance to the point of interest, A is a normalization constant, r0 is the ring radius, and ψ controls the width of the ring.
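
One plausible form of such a ring-shaped weight, under the assumption that it is a Gaussian bump in the distance from the center (the paper's exact functional form may differ), is:

```python
import numpy as np

def ring_weight(size, r0, psi):
    """Ring-shaped weight mask: a Gaussian bump around distance r0 from the
    centre, with width controlled by psi, normalised to sum to 1 (the role
    of the constant A). A sketch of the detection weight described above."""
    c = size // 2
    ys, xs = np.mgrid[0:size, 0:size]
    r = np.hypot(ys - c, xs - c)            # distance to the point of interest
    w = np.exp(-((r - r0) ** 2) / (psi ** 2))
    return w / w.sum()                      # A: normalisation constant

mask = ring_weight(size=21, r0=6.0, psi=2.0)
# mask sums to ~1; weight peaks on the ring (radius 6), not at the centre
```

This concentrates the symmetry evidence on a ring around the candidate point, which makes the detector respond to symmetric structure of a particular size.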

12 Scale space

13 Local Symmetry Features Feature detection, by finding local maxima of the score, and feature description, by building a feature vector from local symmetry scores. Feature detector. Feature descriptor.

14 Feature detector We use the SYM-IR function as a detector and represent the support of each feature as a circle of radius s centered at (x, y) in the original image.
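
Keypoint selection from a score map can be sketched as below; this is a generic local-maxima picker with a hypothetical threshold parameter, not the paper's exact detector:

```python
import numpy as np

def local_maxima(score, threshold=0.5):
    """Return (row, col) of pixels that are strict local maxima of the score
    map within their 3x3 neighbourhood and exceed a threshold; a sketch of
    keypoint selection from a symmetry-score map at one scale."""
    s = np.pad(score, 1, mode="constant", constant_values=-np.inf)
    centre = s[1:-1, 1:-1]
    is_max = np.ones_like(score, dtype=bool)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            # compare every pixel against its shifted neighbour
            is_max &= centre > s[1 + dr:s.shape[0] - 1 + dr,
                                 1 + dc:s.shape[1] - 1 + dc]
    return np.argwhere(is_max & (score > threshold))

score = np.zeros((5, 5))
score[2, 2] = 1.0
print(local_maxima(score))   # → [[2 2]]
```

A full detector would repeat this per scale level and keep (x, y, s) triples; the single-scale case shows the core step.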

15 Feature detector

16 Feature detector

17 Feature descriptor SYMD encodes the distribution of the three SYM-I scores around a feature location at the detected scale
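
A descriptor of this flavor can be sketched by pooling the score maps over a grid of cells around the keypoint; the grid layout, mean pooling, and normalization here are assumptions for illustration, not the paper's exact SYMD construction:

```python
import numpy as np

def symd_descriptor(score_maps, x, y, cells=4, cell_size=6):
    """Sketch of a SYMD-style descriptor: for each score map (e.g. the three
    SYM-I scores), average the score over a cells x cells grid of patches
    around (x, y), concatenate, and L2-normalise."""
    half = cells * cell_size // 2
    desc = []
    for m in score_maps:
        for gy in range(cells):
            for gx in range(cells):
                y0 = y - half + gy * cell_size
                x0 = x - half + gx * cell_size
                patch = m[y0:y0 + cell_size, x0:x0 + cell_size]
                desc.append(patch.mean())
    d = np.array(desc)
    return d / (np.linalg.norm(d) + 1e-9)    # L2-normalise

# three score maps -> 3 * 4 * 4 = 48 dimensions
maps = [np.random.rand(64, 64) for _ in range(3)]
d = symd_descriptor(maps, 32, 32)
print(d.shape)   # → (48,)
```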

18 Experimental Results We evaluate the detector, comparing its repeatability to that of the common DoG and MSER detectors. Evaluating detections. Evaluating descriptors.

19 Evaluating detections For each image pair (I1, I2) in the dataset, and each detector, we detect sets of keypoints K1 and K2, and compare these detections using the known homography H12 mapping points from I1 to I2. K1 is rescaled by a factor s so that it has a fixed area A; we denote this scaled detection K1^s, and the same rescaling is also applied to K2. Finally, the relative overlap of the support regions of H12 K1^s and K2 gives an overlap score.
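
Approximating each support region as a circle, an overlap score of this kind can be sketched with the standard circle-intersection formula; `circle_overlap` and `warp_point` are hypothetical helper names, not the paper's code:

```python
import numpy as np

def circle_overlap(c1, r1, c2, r2):
    """Intersection-over-union of two circles: a sketch of the overlap score
    between a warped detection from K1 and a detection from K2."""
    d = np.hypot(c1[0] - c2[0], c1[1] - c2[1])
    if d >= r1 + r2:                           # disjoint circles
        return 0.0
    if d <= abs(r1 - r2):                      # one circle inside the other
        inter = np.pi * min(r1, r2) ** 2
    else:                                      # standard lens-area formula
        a1 = r1**2 * np.arccos((d**2 + r1**2 - r2**2) / (2 * d * r1))
        a2 = r2**2 * np.arccos((d**2 + r2**2 - r1**2) / (2 * d * r2))
        tri = 0.5 * np.sqrt((-d + r1 + r2) * (d + r1 - r2)
                            * (d - r1 + r2) * (d + r1 + r2))
        inter = a1 + a2 - tri
    union = np.pi * (r1**2 + r2**2) - inter
    return inter / union

def warp_point(H, p):
    """Map an (x, y) point through a 3x3 homography such as H12."""
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return (x / w, y / w)

H12 = np.eye(3)                                # identity: already aligned
print(circle_overlap(warp_point(H12, (10, 10)), 5, (10, 10), 5))  # → 1.0
```

Thresholding this overlap (e.g. counting pairs above some fraction) then yields a repeatability number per image pair.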

20 Evaluating detections

21 Evaluating detections

22 Evaluating descriptors A precision-recall curve summarizes the quality of the match scores.
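
Given descriptor match scores and ground-truth labels, a precision-recall curve can be computed as in this sketch (function name hypothetical):

```python
import numpy as np

def precision_recall(scores, labels):
    """Precision and recall at every threshold, sweeping over match scores
    in decreasing order (higher score = more confident match), given 0/1
    ground-truth labels for each putative match."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)                 # correct matches accepted so far
    fp = np.cumsum(1 - labels)             # incorrect matches accepted so far
    precision = tp / (tp + fp)
    recall = tp / labels.sum()
    return precision, recall

p, r = precision_recall([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 1])
print(p)   # [1.  0.5  0.667  0.75] approximately
```

Plotting precision against recall (and averaging, e.g. via the area under the curve) gives the single-number summary used to compare descriptors.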

23 Evaluating descriptors We generate two sets of perfectly matched synthetic detections by creating a set of keypoints K1 on a grid in I1 (in our experiments the spacing between points is 25 pixels and the scale of each keypoint is set to 6.25). We then map these keypoints to I2 using H12, creating a matched set of keys K2. We discard keypoints whose support regions are not fully within the image.
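
The synthetic setup above (a 25-pixel grid at scale 6.25, mapped through H12, with out-of-bounds supports discarded) can be sketched as:

```python
import numpy as np

def grid_keypoints(width, height, spacing=25, scale=6.25):
    """Keypoints (x, y, scale) on a regular grid in the first image."""
    return [(x, y, scale)
            for y in range(spacing, height, spacing)
            for x in range(spacing, width, spacing)]

def map_keypoints(kps, H12, width, height):
    """Map keypoints into the second image via the homography H12 and
    discard those whose circular support is not fully inside it."""
    out = []
    for x, y, s in kps:
        u, v, w = H12 @ np.array([x, y, 1.0])
        u, v = u / w, v / w
        if s <= u < width - s and s <= v < height - s:
            out.append((u, v, s))
    return out

K1 = grid_keypoints(100, 100)
K2 = map_keypoints(K1, np.eye(3), 100, 100)   # identity homography
print(len(K1), len(K2))   # → 9 9
```

Because every keypoint in K2 is the exact image of one in K1, descriptor quality can be measured in isolation from detector repeatability.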

24 Evaluating descriptors

25 Evaluating descriptors

26 Conclusion To evaluate our method, we created a new dataset of image pairs with dramatic appearance changes, and showed that our features are more repeatable and yield complementary information compared to standard features such as SIFT.

27 Thank you for listening.