A NOVEL LOCAL FEATURE DESCRIPTOR FOR IMAGE MATCHING Heng Yang, Qing Wang ICME 2008.


Outline
- Introduction
- Local feature descriptor
- Feature matching
- Experimental results and discussions (image matching experiments, image retrieval experiments)
- Conclusion

Local feature descriptor
Local invariant features are widely used in image matching and other computer vision applications. They are:
- Invariant to image rotation, scale, illumination changes, and even affine distortion
- Distinctive, robust to partial occlusion, and resistant to nearby clutter and noise
Extracting local features involves two steps:
- Detect keypoints: assign the location, scale, and dominant orientation of each keypoint
- Compute a descriptor for each detected region: it should be highly distinctive and as invariant as possible over transformations

Local feature descriptor
At present, the SIFT descriptor is generally considered the most appealing descriptor for practical use. It is based on the image gradients in each interest point's local region. Its drawback in the matching step is high dimensionality (128-D).
SIFT extension: the PCA-SIFT descriptor
- Changes only the descriptor step of SIFT
- Pre-computes an eigenspace for local gradient patches of size 41x41 (2x39x39 = 3042 elements)
- Keeps only 20 components, yielding a more compact descriptor
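The PCA-SIFT projection step above can be sketched with a few lines of NumPy. This is an illustrative sketch, not the authors' code: the training patches are random stand-ins for real gradient patches, so only the shapes (3042-d input, 20-d output) follow the description.

```python
import numpy as np

# Offline: learn an eigenspace from a training set of local gradient patches.
# Each 41x41 patch yields a 2*39*39 = 3042-d vector of x/y gradients.
rng = np.random.default_rng(0)
train = rng.standard_normal((500, 3042))      # stand-in for real gradient patches

mean = train.mean(axis=0)
# Principal axes via SVD of the centered training data.
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
basis = vt[:20]                               # keep the top 20 components

# Online: project a new patch's gradient vector into the 20-d eigenspace.
patch_vec = rng.standard_normal(3042)
descriptor = basis @ (patch_vec - mean)
print(descriptor.shape)                       # (20,)
```

The offline SVD is the "extra offline computation" cost mentioned for PCA-based descriptors; the online step is just one matrix-vector product.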

Local feature descriptor
GLOH (gradient location-orientation histogram)
- Divides the local circular region into 17 location bins
- Quantizes gradient orientations into 16 bins
- Applies PCA to the resulting 17x16 = 272-d space and keeps 128 components
- Computationally more expensive, and needs extra offline computation of the patch eigenspace
This paper presents a local feature descriptor (GDOH) based on the gradient distance and orientation histogram. It reduces the dimensionality of the descriptor while maintaining distinctiveness and robustness comparable to SIFT.

Local feature descriptor
First:
- Image gradient magnitudes and orientations are sampled around the keypoint location
- A weight is assigned to the magnitude of each point using a Gaussian weighting function with σ equal to half the width of the sample region, reducing the emphasis on points far from the center
Second:
- The gradient orientations are rotated relative to the keypoint's dominant direction, to achieve rotation invariance
- The distance of each gradient point to the descriptor center is calculated
Finally:
- The histogram is built over gradient distance and orientation: 8 (distance bins) x 8 (orientation bins) = 64 bins
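The steps above can be sketched as a small NumPy routine. This is a hedged sketch, assuming uniform distance bins and a Gaussian weight with σ equal to half the region width; the helper name `gdoh` and the exact bin-edge choices are illustrative, not taken from the paper.

```python
import numpy as np

def gdoh(mag, ori, dominant_ori, n_dist=8, n_ori=8):
    """Accumulate Gaussian-weighted gradient magnitudes into an
    8 distance-bin x 8 orientation-bin = 64-bin histogram (sketch)."""
    h, w = mag.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - cy, xs - cx)                 # distance to region center
    sigma = w / 2.0                                   # sigma = half the region width
    weight = np.exp(-dist**2 / (2 * sigma**2))        # down-weight far-away points
    rel_ori = (ori - dominant_ori) % (2 * np.pi)      # rotate to dominant direction
    d_bin = np.minimum((dist / (dist.max() + 1e-9) * n_dist).astype(int), n_dist - 1)
    o_bin = np.minimum((rel_ori / (2 * np.pi) * n_ori).astype(int), n_ori - 1)
    hist = np.zeros((n_dist, n_ori))
    np.add.at(hist, (d_bin, o_bin), mag * weight)     # scatter-add into 64 bins
    desc = hist.ravel()
    return desc / (np.linalg.norm(desc) + 1e-9)       # normalize for illumination

mag = np.ones((16, 16))
ori = np.zeros((16, 16))
print(gdoh(mag, ori, 0.0).shape)                      # (64,)
```

Note the dimensionality advantage the slides claim: 64 bins instead of SIFT's 128, with no offline eigenspace needed.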

Feature matching
Given keypoint descriptors extracted from a pair of images, find a set of candidate feature matches using the Best-Bin-First (BBF) algorithm, an approximate nearest-neighbor search method for high-dimensional spaces. Only matches in which the distance ratio of the nearest neighbor to the second-nearest neighbor is below a threshold are kept: correct matches should have the closest neighbor significantly closer than the closest incorrect match.
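The distance-ratio test can be sketched as follows. A brute-force nearest-neighbor scan stands in here for the BBF search used in the paper, and the 0.8 threshold is an assumed value in the spirit of Lowe's criterion (the slides only say "a threshold").

```python
import numpy as np

def ratio_match(desc1, desc2, ratio=0.8):
    """Return (i, j) index pairs passing the nearest/second-nearest ratio test."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j, k = np.argsort(dists)[:2]          # nearest and second-nearest
        if dists[j] < ratio * dists[k]:       # accept only unambiguous matches
            matches.append((i, j))
    return matches

# Synthetic demo: b contains slightly perturbed copies of a plus distractors.
rng = np.random.default_rng(1)
a = rng.standard_normal((5, 64))
b = np.vstack([a + 0.01 * rng.standard_normal(a.shape),   # near-duplicates of a
               rng.standard_normal((20, 64))])            # distractors
print(ratio_match(a, b))
```

Each row of `a` should match its perturbed copy in `b`, while the random distractors fail the ratio test.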

Feature matching
Finding nearest-neighbor feature points: a variant of the k-d tree search algorithm makes indexing in higher-dimensional spaces practical. Best-Bin-First is an approximate algorithm that finds the exact nearest neighbor for a large fraction of queries, and a very close neighbor in the remaining cases.
Standard construction of the k-d tree:
- Begin with a complete set of N points in R^k
- Split the data on the dimension i in which the data exhibits the greatest variance
- Cut at the median value m of the data in that dimension, so an equal number of points fall on either side
- Create an internal node storing i and m, then iterate on both halves of the data
- This yields a balanced binary tree of depth d = log2(N)
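The construction recipe above can be sketched in Python. This is a toy dict-based tree for illustration only, not a production index; the sample points are arbitrary.

```python
import numpy as np

def build_kdtree(points):
    """Split on the max-variance dimension, cut at the median, recurse."""
    points = np.asarray(points, dtype=float)
    if len(points) == 0:
        return None
    if len(points) == 1:
        return {"leaf": points[0]}
    dim = int(np.argmax(points.var(axis=0)))      # dimension of greatest variance
    points = points[np.argsort(points[:, dim])]
    mid = len(points) // 2
    return {"dim": dim,
            "median": points[mid, dim],           # cut at the median value
            "left": build_kdtree(points[:mid]),   # points below the cut
            "right": build_kdtree(points[mid:])}  # points at/above the cut

tree = build_kdtree([[1, 3], [2, 7], [5, 1500], [9, 1000], [20, 7], [20, 8]])
print(tree["dim"])   # splits on the high-variance second coordinate here
```

Because each level halves the point set, the resulting tree is balanced with depth about log2(N), as the slide states.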

NN search using a k-d tree (worked example spanning several slides; the tree diagrams, with nodes labeled a through g and query point x, are omitted here):
- Start at root c with Nearest undefined and dist-sqd = ∞; recurse into the nearer child e, deferring the further child b.
- At e, recurse into the nearer child g, deferring the further child d.
- Reaching leaf g, set Nearest = g and dist-sqd = r.
- Unwinding to e: d²(e, x) > r, and the closest point p of e's further branch satisfies d(p, x) > r, so nothing is updated.
- Unwinding to c: d²(c, x) > r, but the closest point p of c's further branch satisfies d(p, x) < r, so the other branch must be searched: NN(b, x).
- At b, recurse into the nearer child f, deferring the further child.
- Reaching leaf f: r' = d²(f, x) < r, so update dist-sqd = r' and Nearest = f.
- Unwinding to b: d(b, x) > r', and the closest point p of b's further branch satisfies d(p, x) > r', so nothing is updated.
- Unwinding to c, the search finishes with Nearest = f.

Search process: BBF algorithm
Set:
- v: query vector
- Q: priority queue ordered by distance to v (initially empty)
- r: initially the root of T
- vFIRST: initially undefined, with an infinite distance to v
- ncomp: number of comparisons, initially zero
While not finished:
- Search for v in T starting from r, arriving at a leaf c
- Add all directions not taken during the search to Q, in order (each division node on the path contributes one not-taken direction)
- If c is nearer to v than vFIRST, then vFIRST = c
- Set r to the first node in Q (the one nearest to v); ncomp++
- If distance(r, v) > distance(vFIRST, v), finish
- If ncomp > ncompMAX, finish
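The loop above can be sketched with a priority queue of not-taken branches. This is a hedged sketch of the BBF idea: the simple cycling split in `build` and the `max_leaves` cap are illustrative choices, not taken from the slides.

```python
import heapq
import numpy as np

def build(points, depth=0):
    """Toy k-d tree with a cycling split dimension (sketch only)."""
    if len(points) == 0:
        return None
    if len(points) == 1:
        return ("leaf", points[0])
    dim = depth % points.shape[1]
    points = points[np.argsort(points[:, dim])]
    mid = len(points) // 2
    return ("node", dim, points[mid, dim],
            build(points[:mid], depth + 1), build(points[mid:], depth + 1))

def bbf_search(root, q, max_leaves=10):
    """Descend to a leaf, queue every not-taken branch by its distance to q,
    resume from the closest queued branch; stop when no branch can improve."""
    best, best_d = None, np.inf
    queue = [(0.0, 0, root)]                       # (bound distance, tiebreak, node)
    tick, leaves = 1, 0
    while queue and leaves < max_leaves:
        bound, _, node = heapq.heappop(queue)
        if bound >= best_d:                        # no queued branch can improve
            break
        while node[0] == "node":                   # descend to a leaf
            _, dim, cut, left, right = node
            near, far = (left, right) if q[dim] < cut else (right, left)
            heapq.heappush(queue, (abs(q[dim] - cut), tick, far))  # not-taken branch
            tick += 1
            node = near
        leaves += 1
        d = np.linalg.norm(node[1] - q)
        if d < best_d:
            best, best_d = node[1], d
    return best

pts = np.array([[1, 3], [2, 7], [5, 1500], [9, 1000], [20, 7], [20, 8]], float)
print(bbf_search(build(pts), np.array([19.0, 6.0])))
```

The `max_leaves` cap is what makes the search approximate: like ncompMAX in the slide, it bounds the work at the cost of occasionally returning a very close, rather than the exact, neighbor.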

BBF search example (tree diagram omitted; the tree splits on coordinate tests such as x1 > 2, x2 > 3, x2 > 7, x1 > 6, with leaves [1,3], [2,7], [5,1500], [9,1000], [20,7], [20,8]):
- For the requested vector, the initial descent follows the matching branch at each test (e.g. 20 > 2: go right; 8 > 7: go right), queuing each not-taken direction with its distance (18, 14, 1, ...), and arrives at a leaf ([9,1000]), stored as C_MIN with distance 992.
- The distance of the best node in the queue is less than the distance of C_MIN, so the search restarts from that node (deleting it from the queue) and arrives at leaf [20,7], which replaces C_MIN with distance 1.
- The distance of the best remaining node in the queue (14) is NOT less than the distance of C_MIN, so the search finishes.

Experimental results
We compare the performance of SIFT and GDOH in image matching experiments and in an image retrieval application. The dataset for the image matching experiments contains test images covering various transformation types; the dataset for the image retrieval experiment includes 30 images of 10 household items.

Image matching experiments
Target images are rotated by 55 degrees and scaled by 1.6. Target images are rotated by 65 degrees and scaled by 4.

Image matching experiments
Target images are distorted to simulate a 12-degree viewpoint change. The intensity of target images is reduced by 20%.

Image matching experiments
GDOH slightly outperforms SIFT, and performs comparably with SIFT across the various transformation types. The table lists the average matching times of SIFT and GDOH: GDOH is significantly faster than SIFT in the image matching stage, requiring about 63% of SIFT's time to match 65 image pairs.

Image retrieval experiments
We first extract the descriptors of each image in the dataset, then find matches between every pair of images: two features match if the distance ratio of the nearest neighbor to the second-nearest neighbor is below a threshold. The number of matched feature vectors serves as the similarity measure between images; for each image, the top 2 images with the most matches are returned.
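The retrieval ranking above can be sketched as follows. Descriptors and image names here are synthetic stand-ins, and the 0.8 ratio threshold is an assumed value; only the scoring scheme (match count as similarity, top 2 returned) follows the slide.

```python
import numpy as np

def count_matches(q_desc, db_desc, ratio=0.8):
    """Number of query descriptors passing the distance-ratio test."""
    n = 0
    for d in q_desc:
        dists = np.linalg.norm(db_desc - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            n += 1
    return n

def retrieve(q_desc, database, top=2):
    """Rank database images by match count; return the top-scoring names."""
    scores = [(count_matches(q_desc, db), name) for name, db in database.items()]
    return [name for _, name in sorted(scores, reverse=True)[:top]]

rng = np.random.default_rng(2)
query = rng.standard_normal((10, 64))
database = {
    "same_item": np.vstack([query + 0.01 * rng.standard_normal(query.shape),
                            rng.standard_normal((30, 64))]),
    "other_a": rng.standard_normal((40, 64)),
    "other_b": rng.standard_normal((40, 64)),
}
print(retrieve(query, database))   # "same_item" should rank first
```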

Conclusion
GDOH is built from the gradient distance and orientation histogram. It is invariant to image rotation, scale, illumination, and partial viewpoint changes, and is as distinctive and robust as the SIFT descriptor. The dimensionality of GDOH is much lower than that of SIFT, which yields high efficiency in image matching and image retrieval applications.