Feature-based deformable registration of neuroimages using interest point and feature selection. Leonid Teverovskiy, Center for Automated Learning and Discovery, Carnegie Mellon University.

Presentation transcript:

Feature-based deformable registration of neuroimages using interest point and feature selection. Leonid Teverovskiy, Center for Automated Learning and Discovery, Carnegie Mellon University

Description of the problem. Our task is to align the given neuroimages so that their corresponding anatomical structures have the same coordinates.


Existing approaches. Landmark-based registration: the deformation between images is calculated from user-defined correspondences between certain points, curves, or surfaces on the neuroimages. - Not fully automatic. - The transformation of non-landmark points is interpolated from the transformation of landmark points.

Landmark based registration

Existing approaches. Registration driven by a similarity measure: the deformation model is parameterized, and a numerical optimization procedure is used to find parameters that maximize some similarity measure. - Automatic, but prone to local maxima. - The more degrees of freedom the deformation model has, the harder it is to find optimal parameters for it.
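A minimal sketch of this class of methods, assuming 2D NumPy images, a 6-parameter affine model, SSD as the similarity measure, and SciPy's Powell optimizer; the parameterization and optimizer are illustrative choices, not necessarily those used in the systems the slide refers to.

import numpy as np
from scipy import ndimage, optimize

def warp_affine(image, params):
    """Apply a 6-parameter affine transform (2x2 matrix plus translation) to a 2D image."""
    a11, a12, a21, a22, t1, t2 = params
    matrix = np.array([[a11, a12], [a21, a22]])
    return ndimage.affine_transform(image, matrix, offset=(t1, t2), order=1)

def ssd(params, reference, moving):
    """Sum of squared differences between the reference and the warped moving image."""
    return np.sum((reference - warp_affine(moving, params)) ** 2)

def register_affine(reference, moving):
    """Optimize the affine parameters numerically; like any such scheme, prone to local optima."""
    x0 = np.array([1.0, 0.0, 0.0, 1.0, 0.0, 0.0])  # start from the identity transform
    result = optimize.minimize(ssd, x0, args=(reference, moving), method="Powell")
    return result.x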

Affine registration driven by the sum of squared differences (SSD) of the images. SSD: Sometimes it works.

SSD: Affine registration driven by the sum of squared differences of the images. Sometimes it works.

SSD: Affine registration driven by the sum of squared differences of the images. Sometimes it works.

SSD: Affine registration driven by the sum of squared differences of the images. Sometimes it does not.

Existing approaches. Feature-based registration: a feature vector is computed for each voxel, and correspondences between voxels in the reference image and voxels in the input image are estimated based on the similarity of their feature vectors. - Best results among existing methods. - Existing systems have many hand-tuned parameters, including the components of the feature vectors.

Feature based registration

Our goals. A fully automatic method that selects which features to use depending on - the modality of the images; - the anatomical structures we care to register the most. No restriction on the degrees of freedom of the deformation model.

A few questions… Would it be a hard task to register these two images for a human?

A few questions… Not a hard task.

A few questions… OK, then how about registering this image with a rotated copy of itself?

A few questions… OK, then how about registering this image with a rotated copy of itself? Looks much harder. We can get some idea about how difficult a registration task will be even without seeing the other image!

A few questions… Not an easy task indeed.

A few questions… If there were some points that “stood out”, we could easily find what the rotation was…

A few questions… If there were some points that “stood out”, we could easily find what the rotation was… provided we can determine correspondences correctly.

We are facing two different problems: how to find interesting points in the reference image automatically, and how to find corresponding points in the input image. We can solve both problems using the same mechanism.

Feature Extraction. We compute various rotationally invariant features at different scales.
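A brief sketch of what such a feature bank can look like, using two rotationally invariant quantities (gradient magnitude and Laplacian of Gaussian) at a few scales; the actual feature pool used in this work is listed later in the talk.

import numpy as np
from scipy import ndimage

def invariant_features(image, scales=(1.0, 2.0, 4.0)):
    """Per-pixel feature vectors: an (H, W, 2 * len(scales)) array of rotation-invariant responses."""
    channels = []
    for sigma in scales:
        gx = ndimage.gaussian_filter(image, sigma, order=(0, 1))  # Gaussian derivative along x
        gy = ndimage.gaussian_filter(image, sigma, order=(1, 0))  # Gaussian derivative along y
        channels.append(np.hypot(gx, gy))                         # gradient magnitude: rotation invariant
        channels.append(ndimage.gaussian_laplace(image, sigma))   # Laplacian of Gaussian: rotation invariant
    return np.stack(channels, axis=-1)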

h(Pi|F) is the probability that a given feature vector F “belongs” to a certain pixel Pi in the reference image. If we knew h(Pi|F), we could take a pixel in the input image, compute its feature vector, and read off its most likely correspondences in the reference image.

We could find h(Pi|F)… if we knew what g(F|Pi) (the probability of observing feature F at the pixel Pi) and q(Pi) (the prior) were.
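Reading the slide as standard Bayes' rule over reference-image pixels (notation mine; the sum runs over all candidate pixels P_j):

h(P_i \mid F) = \frac{g(F \mid P_i)\, q(P_i)}{\sum_j g(F \mid P_j)\, q(P_j)}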

Prior q(Pi). If we have a reason to believe that certain pixels in the reference image are more likely to correspond to the given pixel in the input image, we can express our beliefs through the prior. We will use a uniform prior for now.

We can estimate g(F|Pi)! We have applied about 1560 affine transforms to the reference image and computed features for each pixel in each of the resulting 1560 images. Thus we obtain 1560 feature vectors for each anatomical location in the reference image. We assume that the components of the feature vector are independent of each other and distributed according to a Gaussian distribution. We find the MLE of the mean and variance for each Gaussian; g(F|Pi) is the product of these Gaussians.
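A compact sketch of this estimation step, assuming the feature vectors from the ~1560 transformed copies are stacked into one array; the array layout, variable names, and the use of log-likelihoods (to keep the product of Gaussians from underflowing) are my own choices.

import numpy as np

def fit_gaussians(samples):
    """samples: (n_transforms, n_pixels, n_features) feature vectors per anatomical location.
    Returns the MLE mean and variance of every feature at every reference pixel."""
    mean = samples.mean(axis=0)
    var = samples.var(axis=0) + 1e-6   # small floor keeps variances strictly positive
    return mean, var

def log_likelihood(feature_vec, mean, var):
    """log g(F|Pi) for every reference pixel Pi, with independent Gaussian components."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (feature_vec - mean) ** 2 / var, axis=1)

def posterior(feature_vec, mean, var, log_prior=None):
    """h(Pi|F) over all reference pixels; a uniform prior when log_prior is None."""
    logp = log_likelihood(feature_vec, mean, var)
    if log_prior is not None:
        logp = logp + log_prior
    logp -= logp.max()                 # shift for numerical stability before exponentiating
    p = np.exp(logp)
    return p / p.sum()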

h(Pi|F) for a pixel in the input image: blue dots represent the correspondences that carry essentially all of the probability; the probability of other correspondences is negligibly small. We are almost done; we need a way of distinguishing good correspondences…

… from bad correspondences. (The figure again shows h(Pi|F) for a pixel in the input image, with blue dots marking the correspondences that carry essentially all of the probability; the probability of other correspondences is negligibly small.)

Risk. Risk = ∑ h(Pi|F) L(Pi, Po), where L(Pi, Po) is the loss, a geometric distance between the estimated corresponding pixel Pi and the correct corresponding pixel Po. When Po is unknown, we use the MAP estimate of Po instead. Correspondences with low risk are “good” correspondences.
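A small sketch of the risk computation as described on the slide, with Euclidean distance as the loss and the MAP pixel standing in for Po when the true correspondence is unknown; names and array shapes are assumptions.

import numpy as np

def correspondence_risk(post, coords, true_coord=None):
    """post: h(Pi|F) over reference pixels, shape (n_pixels,).
    coords: (n_pixels, 2) coordinates of the candidate pixels Pi.
    true_coord: coordinates of Po if known; otherwise the MAP estimate is used."""
    if true_coord is None:
        true_coord = coords[np.argmax(post)]             # MAP estimate of Po
    loss = np.linalg.norm(coords - true_coord, axis=1)   # geometric distance L(Pi, Po)
    return np.sum(post * loss)                           # Risk = sum_i h(Pi|F) * L(Pi, Po)

A posterior concentrated around one location yields low risk; a diffuse posterior yields high risk, which is what separates “good” correspondences from “bad” ones here.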

Where are we now? We can find correspondences between pixels in the input image and pixels in the reference image using feature vectors computed on the pixels of the input image. And we can also determine interesting points by finding correspondences between pixels in the reference image and pixels in the … reference image!

Feature Selection. Select a feature subset for determining interesting points. Select a feature subset for determining correspondences. Use the sum of squared differences (registration quality) as the means of evaluating feature subsets.
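A hedged sketch of forward selection driven by registration quality; register_and_score is a hypothetical helper that runs the whole registration with a candidate feature subset and returns the resulting SSD, and the stopping rule (no candidate improves the SSD, or at most max_features features) is my own choice.

def forward_selection(reference, moving, feature_pool, register_and_score, max_features=8):
    """Greedy forward selection of features, scored by the SSD of the resulting registration."""
    selected = []
    best_ssd = float("inf")
    while len(selected) < max_features:
        candidates = [f for f in feature_pool if f not in selected]
        scores = {f: register_and_score(reference, moving, selected + [f]) for f in candidates}
        best_feature = min(scores, key=scores.get)
        if scores[best_feature] >= best_ssd:   # stop when no candidate lowers the SSD
            break
        selected.append(best_feature)
        best_ssd = scores[best_feature]
    return selected, best_ssd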

Bird's eye view. From the feature pool, one subset of features is selected for finding interesting points and another for estimating correspondences. Interesting points are found in the reference image; correspondences to the input image are estimated via h(Pi|F); RANSAC fits an affine transform to these correspondences; driving voxels then define a thin-plate spline (TPS) transform; the resulting registration quality is fed back into feature selection.
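The RANSAC-plus-affine step of this pipeline can be sketched with scikit-image (a library choice of mine, not stated in the talk); src and dst are the matched pixel coordinates in the input and reference images.

import numpy as np
from skimage.measure import ransac
from skimage.transform import AffineTransform

def fit_affine_ransac(src, dst, threshold=2.0):
    """src, dst: (n, 2) arrays of matched (x, y) coordinates in the input and reference images."""
    model, inliers = ransac((src, dst), AffineTransform,
                            min_samples=3, residual_threshold=threshold, max_trials=1000)
    return model, inliers   # the inlier correspondences can then serve as driving voxels for TPS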

Experimental results: reference image and input image.

Interesting points

Best correspondences

Driving voxels

Registration results: reference image and registered input image. SSD:

Difference image

More experiments
1. Select a random set of interesting points; select a random subset of features to find correspondences.
2. Select a random set of interesting points; use forward feature selection to find the subset of features used for estimating correspondences.
3. Select a random subset of features to find interesting points; select a random subset of features to find correspondences.

More experiments
4. Select a random subset of features to find interesting points; use forward selection to choose the subset of features used for determining the correspondences.
5. Select a random subset of features to find interesting points; use forward selection to choose the subset of features used for determining the correspondences, this time starting from the interesting-point feature subset with one feature removed.

More experiments
6. Select a random subset of features to find interesting points, then employ forward selection to choose the subset of features used for determining the correspondences. Find a new set of interesting points using this subset of features and iterate.
7. Select a random subset of features to find interesting points, then employ forward selection to choose the subset of features used for determining the correspondences, this time starting from the interesting-point feature subset with one feature removed. Find a new set of interesting points using the selected subset of features and iterate.

More experiments. For each feature selection strategy we run the registration eight times, each time restarting from a random starting point. Each run continues for 20 iterations.

Feature pool. 22 features, all at the finest scale (for now):
First derivative (D1), second derivative (D2), third derivative (D3), fourth derivative (D4), fifth derivative (D5);
Gabor_0_3 (G1), Gabor_0_5 (G2), Gabor_2_7 (G3), Gabor_3_7 (G4), Gabor_4_9 (G5);
Laplacian (L), Harris (H);
Intensity_1_mean (M1), Intensity_1_std (S1), Intensity_2_mean (M2), Intensity_2_std (S2), Intensity_4_mean (M3), Intensity_4_std (S3), Intensity_8_mean (M4), Intensity_8_std (S4), Intensity_16_mean (M5), Intensity_16_std (S5).
“Intensity_n_mean” is the mean of the pixel intensities inside a ring with inner radius log2(n) and outer radius n, centered at the given pixel; “Intensity_n_std” is the standard deviation of the same intensities.
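A sketch of the ring statistics as the slide defines them (inner radius log2(n), outer radius n, centered at the given pixel), computed here for a single pixel by brute force for clarity rather than speed.

import numpy as np

def ring_stats(image, center, n):
    """Intensity_n_mean and Intensity_n_std for the pixel at center = (row, col)."""
    cy, cx = center
    ys, xs = np.mgrid[:image.shape[0], :image.shape[1]]
    r = np.hypot(ys - cy, xs - cx)
    ring = (r >= np.log2(n)) & (r <= n)    # ring with inner radius log2(n), outer radius n
    values = image[ring]
    return values.mean(), values.std()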

Feature selection significantly improves registration results. Here, 100 interesting points were used.

Registration error when 30 interesting points are used, for three ways of choosing them: “Random IP” (at random from all image pixels), “Random edge IP” (at random from image pixels that lie on edges), and “IP selection” (using interesting point selection). Interesting point selection has a greater positive effect on registration accuracy as the number of interesting points decreases.

A typical graph for the case when feature selection strategy number 7 is used. The brown line shows the registration error when an affine deformation is used; the green line, when a thin-plate spline deformation is used.

The feature pool consists of 22 features; 8 features appear to be enough for good registration results.

Histogram of selected interesting point features when feature selection strategy number 7 is used

Histogram of selected correspondence features when feature selection strategy number 7 is used

Histogram of selected interesting point features when feature selection strategy number 6 is used

Histogram of selected correspondence features when feature selection strategy number 6 is used

Registration results at each step of feature subset selection. The reference slice and the input slice are midsagittal slices of neuroimages of different subjects; in addition, the input slice was affinely transformed.

Selected feature subsets at successive steps:
{M4} SSD:
{M4, S1} SSD:
{M4, S1, S5} SSD:
{M4, S1, S5, M5} SSD:
{M4, S1, S5, M5, H} SSD:
{M4, S1, S5, M5, H, G2} SSD:

Thank you