A Novel Registration Method for Retinal Images Based on Local Features

1,3 Jian Chen, 2 R. Theodore Smith, 1 Jie Tian, and 3 Andrew F. Laine
1 Institute of Automation, Chinese Academy of Sciences, Beijing, China
2 Columbia University, Department of Ophthalmology, New York, NY, USA
3 Columbia University, Department of Biomedical Engineering, New York, NY, USA

1. INTRODUCTION
It is sometimes very hard to automatically detect the bifurcations of the vascular network in retinal images, so general feature-based registration methods fail to register the two images. We propose a novel intensity-invariant local feature descriptor for multimodal registration and describe an automatic retinal image registration framework that leaves out the vascular bifurcation scheme. Quantitative evaluations on 12 pairs of multimodal retinal images show that the proposed framework far outperforms existing algorithms in terms of runtime and accuracy.

2. METHODS
Our algorithm comprises four distinct steps:
1. Detecting Harris corner points in both images.
2. Extracting the local feature around each corner point.
3. Bilateral matching of local features across the two images.
4. Transformation based on the matched features.

2.1 Detecting Harris corner points in both images
The Harris detector is computationally efficient, easy to implement, and invariant to rotation, so it is applied to detect the control-point candidates in retinal images. The basic idea of the Harris detector is to calculate the intensity changes in all directions when the image is convolved with a Gaussian window. The bifurcations and the Harris corners are shown in Figure 1.

Figure 1. (a) Bifurcations of the vascular network detected by a centerline extraction method. (b) Corner points detected by the Harris detector.
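For illustration, a minimal sketch of step 1 in Python with OpenCV (the poster's implementation is in Matlab and is not shown; the library choice, threshold, and parameter values here are assumptions):

```python
import cv2
import numpy as np

def detect_harris_corners(gray, max_corners=500, quality=0.01):
    """Detect Harris corner points as control-point candidates.

    `max_corners` and `quality` are illustrative values, not the poster's.
    """
    gray = np.float32(gray)
    # Harris response: intensity change in all directions over a local window.
    response = cv2.cornerHarris(gray, blockSize=3, ksize=3, k=0.04)
    # Keep the strongest responses above a fraction of the global maximum.
    ys, xs = np.where(response > quality * response.max())
    order = np.argsort(response[ys, xs])[::-1][:max_corners]
    return np.stack([xs[order], ys[order]], axis=1)  # (x, y) coordinates

# Usage: corners = detect_harris_corners(cv2.imread("retina.png", cv2.IMREAD_GRAYSCALE))
```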
2.2 Extracting the local feature around each corner point
A main orientation, relative to the local gradients, must be assigned to each corner point, so that the local feature can be represented relative to this orientation and thereby achieve invariance to image rotation. We introduce a continuous method, averaging squared gradients [6], to assign the orientation to each corner point: it uses the averaged direction perpendicular to the gradient to represent the corner point's orientation. Mathematically, with gradients (Gx, Gy) summed over a local window W around the corner point, the main orientation is

    θ = (1/2) · atan2( Σ_W 2·Gx·Gy , Σ_W (Gx² − Gy²) ) + π/2,

where the added π/2 selects the direction perpendicular to the average gradient.

Next we extract the intensity-invariant feature from this local neighborhood. First, two original descriptors are extracted as in SIFT [5], but with the histogram bins limited to the range [0, 180) degrees, as shown in Figure 2 (a)-(c). Finally, the intensity-invariant descriptor is calculated by equation (*) from A and B, the two original descriptors of size 4x4x8.

Figure 2. Intensity-invariant local feature descriptor. (a) The gradient magnitude and orientation at each image sample point in a region around the corner's location. (b) All gradient orientations are restricted to the range from 0 to 180 degrees. (c) The accumulated gradient magnitude in each orientation. (d)-(f) Another accumulated gradient magnitude, obtained by rotating the original neighborhood around the corner point by 180 degrees. The symmetric descriptor is calculated from (c) and (f) by equation (*).

2.3 Bilateral matching of local features across the two images
We use the Best-Bin-First (BBF) algorithm [5] to match correspondences between the two images. BBF identifies the approximate closest neighbors of points in high-dimensional spaces; it is approximate in the sense that it returns the closest neighbor with high probability. The bilateral BBF algorithm is as simple as the unilateral one: denote the unilateral matches from image I to image J as M(I, J), compute the reverse matches M(J, I) as well, and keep the matches common to both sets as the bilateral matches. Even the bilateral BBF algorithm cannot guarantee that all matches are correct; fortunately, incorrect matches are easy to exclude using the control points' orientations and the geometrical size of the matches.

Figure 3. The matched corner points identified by our method between the two images in Figure 1.
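A minimal sketch of the bilateral matching step, with SciPy's k-d tree standing in for BBF (a nonzero `eps` makes the nearest-neighbor query approximate, in the spirit of BBF; the 128-dimensional descriptor arrays follow the 4x4x8 layout in the text, everything else is an illustrative assumption):

```python
import numpy as np
from scipy.spatial import cKDTree

def bilateral_match(desc_i, desc_j, eps=0.1):
    """Keep only matches present in both M(I,J) and M(J,I).

    desc_i: (Ni, 128) descriptors from image I; desc_j: (Nj, 128) from image J.
    """
    # Unilateral matches I -> J: approximate nearest neighbor in J for each i.
    _, ij = cKDTree(desc_j).query(desc_i, k=1, eps=eps)
    # Unilateral matches J -> I.
    _, ji = cKDTree(desc_i).query(desc_j, k=1, eps=eps)
    # Bilateral matches: i and its match j must choose each other.
    return [(i, int(j)) for i, j in enumerate(ij) if ji[j] == i]

# Usage: matches = bilateral_match(descriptors_img1, descriptors_img2)
```

The mutual-consistency check is the "bilateral" part of the matching; it discards most one-sided mismatches before the orientation- and geometry-based pruning described above.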
2.4 Transformation based on the matched features
The transformation model is adaptively selected according to the number of matches. Linear conformal, affine, and second-order polynomial transformations are used in our framework. The simplest model, linear conformal, requires at least two pairs of control points, so no transformation is applied to the floating image when there is no match or only one match; otherwise we always apply the highest-order transformation that the number of matches supports (a sketch of this selection logic follows the references).

3. RESULTS
We tested the proposed method on 12 retinal image pairs, some of whose vascular networks are difficult to extract, and compared it with the Dual-Bootstrap ICP algorithm (available for download at [8]). Our method took about 90 s to run all 12 cases on a 1.5 GHz Pentium M laptop using Matlab, averaging about 7.5 s per registration; Dual-Bootstrap ICP took about 500 s in total, averaging about 42 s per registration. All 12 image pairs were registered well by our method, whereas 5 of them failed under Dual-Bootstrap ICP. Three cases are shown in Figure 4.

Figure 4. Comparison of the proposed method and DB-ICP. The floating images and reference images are shown in the first and second columns, respectively; the results of DB-ICP are shown in the third column, and the results of the proposed method in the last column.

4. CONCLUSIONS
The proposed method is computationally efficient and fully automatic. It works on very low-quality images from which the vascular network is hard or even impossible to extract. It can deal with large initial misalignments such as perspective distortion, arbitrary rotation, and scaling of up to 1.5 times, and it can also register field images with large non-overlapping areas.

5. ACKNOWLEDGEMENTS
This work is supported in part by NEI (R01 EY), the NYC Community Trust (RTS), and unrestricted funds from Research to Prevent Blindness.

REFERENCES
1. C. Harris and M. J. Stephens, "A combined corner and edge detector," in Alvey Vision Conference, pp. 147–152, 1988.
2. F. Laliberte, L. Gagnon, and Y. L. Sheng, "Registration and fusion of retinal images: an evaluation study," IEEE Trans. Med. Imag., vol. 22, no. 5, pp. 661–673, 2003.
3. N. Ryan, C. Heneghan, and P. de Chazal, "Registration of digital retinal images using landmark correspondence by expectation maximization," Image and Vision Computing, vol. 22, pp. 883–, 2004.
4. T. Chanwimaluang, G. Fan, and S. R. Fransen, "Hybrid retinal image registration," IEEE Trans. Inf. Tech. Biomed., vol. 10, no. 1, 2006.
5. D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
6. L. Hong, Y. Wan, and A. Jain, "Fingerprint image enhancement: algorithm and performance evaluation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 8, Aug. 1998.
7. G. Yang, C. V. Stewart, M. Sofka, and C.-L. Tsai, "Registration of Challenging Image Pairs: Initialization, Estimation, and Decision," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 11, Nov. 2007.
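To make the adaptive model selection of Section 2.4 concrete, here is a minimal sketch (Python with OpenCV/NumPy rather than the poster's Matlab; the cutoffs of 3 matches for affine and 6 for the second-order polynomial, and all function names, are illustrative assumptions):

```python
import numpy as np
import cv2

def fit_transform(src, dst):
    """Pick the transformation model adaptively from the number of matches.

    src, dst: (N, 2) arrays of matched control points (floating -> reference).
    """
    src, dst = np.float32(src), np.float32(dst)
    n = len(src)
    if n < 2:
        return None  # no transformation with zero or one match
    if n < 3:
        # Linear conformal (similarity): rotation, uniform scale, translation.
        m, _ = cv2.estimateAffinePartial2D(src, dst)
        return ("conformal", m)
    if n < 6:
        m, _ = cv2.estimateAffine2D(src, dst)
        return ("affine", m)
    # Second-order polynomial: [1, x, y, x^2, xy, y^2] -> (x', y'),
    # fitted by least squares; requires at least 6 point pairs.
    x, y = src[:, 0], src[:, 1]
    basis = np.column_stack([np.ones(n), x, y, x * x, x * y, y * y])
    coeffs, *_ = np.linalg.lstsq(basis, dst, rcond=None)  # shape (6, 2)
    return ("poly2", coeffs)
```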