CSci 6971: Image Registration. Lecture 4: First Examples. January 23, 2004. Prof. Chuck Stewart, RPI; Dr. Luis Ibanez, Kitware.


Slide 1: CSci 6971: Image Registration. Lecture 4: First Examples. January 23, 2004. Prof. Chuck Stewart, RPI; Dr. Luis Ibanez, Kitware.

Slide 2: Outline
- Example
- Intensity-based registration
- SSD error function
- Image mapping
- Function minimization:
  - Gradient descent
  - Derivative calculation
  - Algorithm
- Results and discussion

Slide 3: Reading Material
- Paper: Hill, Batchelor, Holden and Hawkes, "Medical Image Registration," Physics in Medicine and Biology 46 (2001) R1-R45.
  - Copies distributed in class and available electronically from Prof. Stewart
  - An excellent introduction to the registration problem, but heavily slanted toward medical applications using similarity transformations

Slide 4: Running Example
- MRI image registration under a similarity transformation (rotation by 10 degrees, translation of 17 mm and 13 mm)

Slide 5: Intensity-Based Registration
- Use the intensities more or less directly
- Compare intensities between:
  - the mapped (transformed) version of the moving image I_m (based on an estimated transformation), and
  - the fixed image I_f
- Need:
  - a pixel-by-pixel error measure
  - a mapping technique
  - a minimization technique

Slide 6: Example Error Measure: SSD

  E(p) = sum over x in R(p) of [ I_m(T(x; p)) - I_f(x) ]^2

where R(p) is the region of intersection between the images and x is a pixel location within that region.
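The SSD measure above can be sketched in a few lines of NumPy. This is a minimal illustration, not the lecture's code; the function name and the boolean-mask representation of the overlap region R(p) are my own choices.

```python
import numpy as np

def ssd(fixed, mapped_moving, mask):
    """Sum of squared intensity differences over the overlap region.

    fixed, mapped_moving: 2-D intensity arrays of the same shape.
    mask: boolean array, True where the mapped moving image
          overlaps the fixed image (the region R(p)).
    """
    diff = mapped_moving[mask] - fixed[mask]
    return float(np.sum(diff ** 2))
```

Note the caveat from a later slide: if the mask is empty (no overlap), the sum is trivially 0, which is why the region's dependence on p can cause problems in practice.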

Slide 7: SSD Example: Initial Alignment (figure)

Slide 8: SSD: Sum of Squared Errors
- Advantages:
  - Simple to compute
  - Differentiable
  - Optimal for Gaussian error distributions
- Disadvantages:
  - Does not allow a varying "gain" between the images, which may be caused by different illuminations or different camera settings
  - Biased by large errors in intensity, e.g. those caused by contrast agent injection

Slide 9: Working in the Parameters
- Remember: the mapping is a parameterized transformation, x' = T(x; p)
- This means that to evaluate the effect of a transformation estimate, what we really want to evaluate is the objective as a function of the parameter vector:

  E(p) = sum over x in R(p) of [ I_m(T(x; p)) - I_f(x) ]^2

Slide 10: Aside: The Role of the Region
- Observe: the region over which the transformation is evaluated depends on the parameters: R = R(p)
- This can cause problems in practice: a transformation resulting in no overlap leads to 0 error!

Slide 11: Evaluating the Objective Function
- Pixel-by-pixel evaluation within the region
- Apply the inverse mapping at each pixel
- Problem: the inverse mapping of a pixel does not "land" on a discrete pixel location!

Slide 12: Many Interpolation Options
- Nearest neighbor
- Bilinear (or trilinear in 3D)
- Spline
- Sinc

Slide 13: Bilinear Interpolation in the Moving Image
- Weighted average of the 4 surrounding pixels (8 surrounding pixels in 3D)
- Each weight falls off linearly with the distance to the pixel in x and in y
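A minimal sketch of bilinear interpolation (my own illustration, not code from the lecture). It assumes the query point lies strictly inside the image so that all four neighbors exist; real implementations must also handle the border.

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinearly interpolate img at continuous location (x, y).

    Weighted average of the 4 surrounding pixels; each weight falls
    off linearly with distance in x and in y.
    Assumes (x, y) is strictly inside the image.
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0          # fractional offsets in [0, 1)
    return ((1 - fx) * (1 - fy) * img[y0, x0] +
            fx       * (1 - fy) * img[y0, x1] +
            (1 - fx) * fy       * img[y1, x0] +
            fx       * fy       * img[y1, x1])
```

At (0.5, 0.5) this returns the plain average of the four neighbors; as the point approaches a pixel center, that pixel's weight approaches 1.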

Slide 14: Bilinear: Resulting Intensity (figure)

Slide 15: Two Options in Practice
- Compute the mapped intensity pixel-by-pixel, without creating an explicit image I_m'
- Create an actual resampled image I_m'

Slide 16: Resetting the Stage
- We have:
  - Formulated the SSD objective function
  - Discussed how to evaluate it
- The next step is how to minimize it with respect to the transformation parameters

Slide 17: Before Proceeding
- We will estimate the parameters of the backward transformation
- Abusing notation, we will minimize the same SSD equation
- It should be understood (implicitly) that this is the inverse transformation and that the parameter values will be different

Slide 18: Thinking Abstractly: Function Minimization
- Function to minimize: E(p)
- Possibilities:
  - Amoeba (simplex) methods (derivative-free)
  - Gradient / steepest descent
  - Linearization (leading to least-squares)
  - Newton's method
  - Many more...

Slide 19: Gradient / Steepest Descent
- Compute the gradient of the objective function (with respect to the transformation parameters), evaluated at the current parameter estimate
- Make a tentative small change in the parameters in the negative gradient direction: p <- p - η ∇E(p), where η is called the "learning rate"
- Re-evaluate the objective function and accept the change if it is reduced (otherwise reduce the learning rate)
- Continue until no further changes are possible
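The accept/reject loop on this slide can be sketched as follows. This is a generic illustration under my own naming, with the slide's rule: take a tentative step against the gradient, keep it only if the objective decreases, and otherwise halve the learning rate.

```python
import numpy as np

def steepest_descent(f, grad, p0, lr=0.1, tol=1e-8, max_iter=1000):
    """Steepest descent with the simple acceptance rule from the slide.

    f: objective function E(p); grad: its gradient; p0: initial estimate.
    A tentative step p - lr * grad(p) is accepted only if it reduces f;
    otherwise the learning rate lr is halved.
    """
    p = np.asarray(p0, dtype=float)
    fp = f(p)
    for _ in range(max_iter):
        g = grad(p)
        trial = p - lr * g
        f_trial = f(trial)
        if f_trial < fp:
            p, fp = trial, f_trial       # accept the step
        else:
            lr *= 0.5                    # reject and shrink the step
        if lr < tol or np.linalg.norm(g) < tol:
            break                        # no further useful change
    return p
```

On a convex quadratic this converges to the minimizer; on the SSD objective it finds only a local minimum, as the later slides note.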

Slide 20: Computing the Derivative
- Issue:
  - Images are discrete
  - Parameters are continuous
- Two methods:
  - Numerical
  - Continuous (eventually numerical as well)
- Abstract definition of the parameter vector: p = (p_1, ..., p_k)

Slide 21: Numerical Derivatives
- Form each partial derivative by taking a small step in each parameter, i = 1, ..., k:

  dE/dp_i ≈ [ E(p + h e_i) - E(p) ] / h,   where e_i is the i-th unit vector

- The choice of step size h can be difficult
- Requires k+1 function evaluations to compute the derivative
- Sometimes this is the only thing you can do!
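The forward-difference scheme above, written out (a generic sketch, not the lecture's code). One evaluation at p plus one per parameter gives the k+1 evaluations the slide counts.

```python
import numpy as np

def numerical_gradient(f, p, h=1e-5):
    """Forward-difference estimate of the gradient of f at p.

    Uses k+1 evaluations of f for k parameters: one at p and one
    at p + h*e_i for each unit vector e_i.
    """
    p = np.asarray(p, dtype=float)
    f0 = f(p)                       # the single shared evaluation at p
    grad = np.zeros_like(p)
    for i in range(p.size):
        stepped = p.copy()
        stepped[i] += h             # step only in parameter i
        grad[i] = (f(stepped) - f0) / h
    return grad
```

The truncation error is O(h), so h trades off against floating-point cancellation; this is the "choice of step size can be difficult" point.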

Slide 22: Continuous Computation of the Derivative
- Apply the chain rule:

  dE/dp = sum over x of 2 [ I_m(T(x; p)) - I_f(x) ] ∇I_m(T(x; p)) dT(x; p)/dp

- The three factors are: the current error at the pixel location, the intensity gradient in the moving image, and the change in the transformation with respect to a change in the parameters

Slide 23: Computing Image Derivatives
- Many ways; the simplest is pixel differences
- More sophisticated methods account for image noise
- Computed at each pixel
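The pixel-difference approach can be sketched with NumPy's `np.gradient`, which uses central differences in the interior and one-sided differences at the borders (an illustration of the simplest method the slide mentions, not the lecture's code).

```python
import numpy as np

def image_gradient(img):
    """Intensity gradient by pixel differences, computed at each pixel.

    Central differences in the interior, one-sided at the borders.
    Returns (gx, gy): derivatives along x (columns) and y (rows).
    """
    gx = np.gradient(img, axis=1)   # horizontal intensity change
    gy = np.gradient(img, axis=0)   # vertical intensity change
    return gx, gy
```

A smoothed-derivative filter (e.g. a Gaussian derivative) would be one of the "more sophisticated methods" that account for noise.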

Slide 24: Derivative in the Moving Image
- Pre-compute the derivatives in the moving image I_m
- During minimization, map pixels back into the moving image coordinate system and interpolate

Slide 25: Image Derivative Example (figure)

Slide 26: dT/dp
- Similarity transform: T(x; p) = ( a x - b y + t_x, b x + a y + t_y )
- where p = (a, b, t_x, t_y), with a = s cos(theta) and b = s sin(theta)
- So the derivative is a 2x4 matrix (Jacobian):

  dT/dp = [ x  -y  1  0 ]
          [ y   x  0  1 ]
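For the (a, b, t_x, t_y) parameterization above, the Jacobian is a direct transcription (my own sketch; the function name is illustrative):

```python
import numpy as np

def similarity_jacobian(x, y):
    """2x4 Jacobian dT/dp of the similarity transform
    T(x, y) = (a*x - b*y + tx, b*x + a*y + ty)
    with respect to p = (a, b, tx, ty), evaluated at pixel (x, y)."""
    return np.array([[x, -y, 1.0, 0.0],
                     [y,  x, 0.0, 1.0]])
```

Note that with this parameterization the transform is linear in p, which is why each Jacobian entry depends only on the pixel coordinates, not on the current parameter estimate.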

Slide 27: Putting It All Together
- At each pixel in the overlap region:
  - Calculate the intensity difference (scalar)
  - Multiply by the 1x2 intensity gradient vector, computed by mapping the pixel location back to the moving image
  - Multiply by the 2x4 Jacobian of the transformation, evaluated at the pixel location
  - The result is a 1x4 gradient vector at each pixel
- Sum each component of the vector over all pixels
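The per-pixel accumulation above can be written in vectorized form. This is my own sketch of the bookkeeping, assuming the per-pixel quantities have already been computed; it includes the factor of 2 from differentiating the squared error, which the slide's recipe leaves implicit (a constant factor does not change the descent direction).

```python
import numpy as np

def ssd_gradient(diffs, grads, jacobians):
    """Accumulate the 1x4 SSD gradient from per-pixel quantities.

    diffs:     (n,)      intensity differences I_m(T(x;p)) - I_f(x)
    grads:     (n, 2)    moving-image intensity gradients, mapped back
    jacobians: (n, 2, 4) transform Jacobians dT/dp at each pixel

    Each pixel contributes 2 * diff * (grad @ J), a 1x4 vector;
    the components are summed over all pixels.
    """
    per_pixel = 2.0 * diffs[:, None] * np.einsum('ni,nij->nj',
                                                 grads, jacobians)
    return per_pixel.sum(axis=0)
```

The `einsum` performs the (1x2) x (2x4) product at every pixel at once; the final `sum` is the "sum each component over all pixels" step.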

Slide 28: Algorithm Outline
- Initialize the transformation
- Repeat:
  - Compute the gradient
  - Make a step in the negative gradient direction
  - Update the mapping equation
  - Remap the image
- Until convergence

Slide 29: Initialization
- Since this is a minimization technique, an initial estimate is required
- There are many ways to generate this estimate:
  - the identity transformation,
  - prior information, or
  - a different technique
- Steepest descent only finds a local minimum of the objective function
- We'll revisit initialization in Lectures 16 and 17

Slide 30: Convergence
- Ideally, the gradient is 0
- In practice, the algorithm is stopped when:
  - the step size becomes too small,
  - the objective function change is sufficiently small, or
  - the maximum number of iterations is reached

Slide 31: Example (figures)
- Initial errors; iterations 100, 200, and 300; final result after 498 iterations

Slide 32: Discussion
- Steepest descent is simple, but has limitations:
  - Local minima
  - Slow (linear) convergence

Slide 33: Summary
- Intensity-based registration is driven by direct functions of the image intensities
- SSD is a common, though simple, example
- Evaluating the SSD objective function (and most other intensity-based functions) requires image interpolation
- Gradient descent is a simple, commonly used minimization technique
- Derivatives may be calculated using either numerical approximations or differentiation of the objective function

Slide 34: Looking Ahead to Lecture 5
- Feature-based registration
- Topics:
  - Features
  - Correspondences
  - Least-squares estimation
  - The ICP algorithm
  - Comparison to intensity-based registration