Lucas-Kanade Image Alignment
Iain Matthews
Paper Reading: Simon Baker and Iain Matthews, "Lucas-Kanade 20 Years On: A Unifying Framework, Part 1", and Project 3 description.
Recall (Image Processing Lecture): some operations preserve the range of f but change its domain. What kinds of operations can such a domain transformation perform? Still other operations operate on both the domain and the range of f.
Face Morphing
Applications of Image Alignment
Image alignment is a ubiquitous computer vision technique:
- Mosaicing
- Tracking
- Parametric and layered motion estimation
- Image registration and alignment
- Face coding / parameterization
- Super-resolution
Generative Model for an Image. A parameterized model maps parameters (shape and appearance) to an image.
Fitting a Model to an Image. What are the best model parameters (shape and appearance) to match an image? This is a nonlinear optimization problem.
Active Appearance Model (Cootes, Edwards, Taylor, 1998). Shape is given by landmarks around a region of interest; appearance is the image warped to a reference frame.
Image Alignment
- Image I(x), template T(x), warp W(x; p)
- Image coordinates x = (x, y)^T
- Warp parameters p = (p_1, p_2, ..., p_n)^T
- Incrementally updated warp W(x; p + Δp)
A concrete affine example follows.
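The slides leave the warp abstract; as a concrete instance, here is a minimal NumPy sketch (the helper name is mine) of the 6-parameter affine warp used as the running example in the Baker-Matthews paper, W(x; p) = ((1+p_1)x + p_3 y + p_5, p_2 x + (1+p_4) y + p_6):

```python
import numpy as np

def warp_affine(points, p):
    """Apply the 6-parameter affine warp W(x; p) from Baker & Matthews.

    points: (N, 2) array of (x, y) coordinates.
    p:      (6,) parameter vector (p1..p6); p = 0 is the identity warp.
    Returns the warped (N, 2) coordinates.
    """
    A = np.array([[1 + p[0], p[2],     p[4]],
                  [p[1],     1 + p[3], p[5]]])
    homog = np.column_stack([points, np.ones(len(points))])  # (N, 3)
    return homog @ A.T  # (N, 2)
```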
Want to: Minimize the Error. Warp the image to compute I(W(x; p)), then compare it against the template: the error image is T(x) - I(W(x; p)).
How to: Minimize the Error. Minimise the SSD with respect to p:
  min_p  Σ_x [I(W(x; p)) - T(x)]^2
Solution: solve for increments Δp to the current estimate,
  min_Δp  Σ_x [I(W(x; p + Δp)) - T(x)]^2,   then update p ← p + Δp.
Generally a nonlinear optimisation problem.
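A sketch of evaluating this SSD objective numerically, assuming the hypothetical warp_affine helper above and bilinear sampling via SciPy (pixels warped outside the image are filled with zero, which a real implementation would mask):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def ssd_error(I, T, p):
    """SSD between template T and image I warped by W(x; p).
    Returns the scalar SSD, the error image T(x) - I(W(x; p)),
    and the warped image I(W(x; p))."""
    h, w = T.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    wpts = warp_affine(pts, p)
    # map_coordinates expects (row, col) = (y, x) order; order=1 is bilinear
    I_warp = map_coordinates(I, [wpts[:, 1], wpts[:, 0]], order=1).reshape(h, w)
    err = T - I_warp
    return np.sum(err ** 2), err, I_warp
```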
Linearize. Taylor series expansion, linearizing a function f about x_0: f(x) ≈ f(x_0) + f'(x_0)(x - x_0). For image alignment: I(W(x; p + Δp)) ≈ I(W(x; p)) + ∇I (∂W/∂p) Δp.
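Written out as a worked restatement (standard first-order expansion, following the paper's notation):

```latex
% First-order Taylor expansion of a scalar function about x_0:
f(x) \approx f(x_0) + f'(x_0)\,(x - x_0)

% Applied to the warped image, linearizing in \Delta p:
I(W(x;\, p + \Delta p)) \approx I(W(x; p))
    + \nabla I \,\frac{\partial W}{\partial p}\, \Delta p
```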
Gradient Descent Solution. Linearizing gives a least squares problem; solve for Δp:
  Δp = H^{-1} Σ_x [∇I ∂W/∂p]^T [T(x) - I(W(x; p))]
where ∇I is the image gradient, T(x) - I(W(x; p)) is the error image, ∂W/∂p is the Jacobian, and H = Σ_x [∇I ∂W/∂p]^T [∇I ∂W/∂p] is the (Gauss-Newton approximation to the) Hessian.
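For completeness, the normal-equations derivation behind this closed form (standard Gauss-Newton; a worked step, not verbatim from the slides):

```latex
% Linearized objective:
\sum_x \Big[ I(W(x;p)) + \nabla I \tfrac{\partial W}{\partial p}\,\Delta p - T(x) \Big]^2

% Setting its derivative with respect to \Delta p to zero:
2 \sum_x \Big[\nabla I \tfrac{\partial W}{\partial p}\Big]^{T}
  \Big[ I(W(x;p)) + \nabla I \tfrac{\partial W}{\partial p}\,\Delta p - T(x) \Big] = 0

% and solving for \Delta p recovers the update above:
\Delta p = H^{-1} \sum_x \Big[\nabla I \tfrac{\partial W}{\partial p}\Big]^{T}
           \big[ T(x) - I(W(x;p)) \big]
```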
Gradient Images. Compute the image gradient ∇I = (I_x, I_y), then warp it with W(x; p) so that it is evaluated at the warped coordinates, matching I(W(x; p)).
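A minimal sketch of the gradient computation with NumPy central differences; in the algorithm these gradient images are then sampled at the warped coordinates, exactly like I itself:

```python
import numpy as np

def image_gradient(I):
    """Central-difference image gradient (I_x, I_y).
    np.gradient differentiates along (rows, cols) = (y, x), so the
    first return value is I_y and the second is I_x."""
    Iy, Ix = np.gradient(I.astype(float))
    return Ix, Iy
```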
Jacobian. Compute the Jacobian ∂W/∂p of the warp. Example: a mesh parameterization, where the warp W(x; p) from template T(x) to image I(x) is controlled by the mesh vertex displacements, p = (p_1, p_2, ..., p_n)^T = (dx_1, dy_1, ..., dx_n, dy_n)^T.
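The mesh Jacobian depends on the triangulation; for the affine running example the Jacobian has a simple closed form (a sketch, helper name mine):

```python
import numpy as np

def jacobian_affine(points):
    """Jacobian dW/dp of the 6-parameter affine warp at each point.

    points: (N, 2) array of (x, y).  Returns (N, 2, 6) where, per point,
        dW/dp = [[x, 0, y, 0, 1, 0],
                 [0, x, 0, y, 0, 1]]
    (independent of p, so it is constant for the affine warp).
    """
    N = len(points)
    J = np.zeros((N, 2, 6))
    x, y = points[:, 0], points[:, 1]
    J[:, 0, 0] = x; J[:, 0, 2] = y; J[:, 0, 4] = 1
    J[:, 1, 1] = x; J[:, 1, 3] = y; J[:, 1, 5] = 1
    return J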
Lucas-Kanade Algorithm
1. Warp I with W(x; p) to compute I(W(x; p))
2. Compute the error image T(x) - I(W(x; p))
3. Warp the gradient of I to compute ∇I at W(x; p)
4. Evaluate the Jacobian ∂W/∂p
5. Compute the Hessian H
6. Compute Δp
7. Update the parameters: p ← p + Δp
A sketch of the full loop follows.
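Below is a minimal end-to-end sketch of these seven steps for the affine warp, reusing the hypothetical helpers (warp_affine, image_gradient, jacobian_affine) sketched earlier; it is a didactic implementation under those assumptions, not the authors' code:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def lucas_kanade(I, T, p, n_iters=50, tol=1e-4):
    """Forwards-additive Lucas-Kanade for the 6-parameter affine warp."""
    h, w = T.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    J = jacobian_affine(pts)                     # 4. Jacobian (constant for affine)
    Ix, Iy = image_gradient(I)
    for _ in range(n_iters):
        wpts = warp_affine(pts, p)
        coords = [wpts[:, 1], wpts[:, 0]]        # (y, x) order for map_coordinates
        I_w  = map_coordinates(I,  coords, order=1)   # 1. warp image
        err  = T.ravel() - I_w                        # 2. error image
        gIx  = map_coordinates(Ix, coords, order=1)   # 3. warped gradient
        gIy  = map_coordinates(Iy, coords, order=1)
        grad = np.stack([gIx, gIy], axis=1)[:, None, :]   # (N, 1, 2)
        sd   = (grad @ J)[:, 0, :]               # steepest-descent images, (N, 6)
        H    = sd.T @ sd                         # 5. Hessian
        dp   = np.linalg.solve(H, sd.T @ err)    # 6. solve for Delta p
        p    = p + dp                            # 7. additive update
        if np.linalg.norm(dp) < tol:
            break
    return p
```

In practice the loop would run over an image pyramid and mask pixels that warp outside the image; this sketch omits both.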
Fast Gradient Descent? To reduce the Hessian computation:
1. Make the Jacobian simple (or constant)
2. Avoid computing gradients on I
Shum-Szeliski Image Alignment
Additive image alignment (Lucas, Kanade): update the warp parameters, W(x; p + Δp).
Compositional alignment (Shum, Szeliski): compose the current warp with an incremental warp, W(x; p) ∘ W(x; Δp). The increment is taken about the identity: W(x; 0 + Δp) = W(x; Δp).
Compositional Image Alignment. Minimise Σ_x [I(W(W(x; Δp); p)) - T(x)]^2. The Jacobian ∂W/∂p is constant, evaluated at (x; 0): "simple".
Compositional Algorithm
1. Warp I with W(x; p) to compute I(W(x; p))
2. Compute the error image T(x) - I(W(x; p))
3. Warp the gradient of I to compute ∇I
4. Evaluate the Jacobian at (x; 0) (constant)
5. Compute the Hessian
6. Compute Δp
7. Update the warp: W(x; p) ← W(x; p) ∘ W(x; Δp)
A sketch of the compositional update for affine warps follows.
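For the affine warp, composition reduces to a 3x3 matrix product; a minimal sketch (helper names mine):

```python
import numpy as np

def params_to_matrix(p):
    """3x3 homogeneous matrix of the affine warp W(x; p)."""
    return np.array([[1 + p[0], p[2],     p[4]],
                     [p[1],     1 + p[3], p[5]],
                     [0.0,      0.0,      1.0]])

def matrix_to_params(M):
    """Inverse of params_to_matrix."""
    return np.array([M[0, 0] - 1, M[1, 0], M[0, 1],
                     M[1, 1] - 1, M[0, 2], M[1, 2]])

def compose(p, dp):
    """W(x; p) o W(x; dp): the incremental warp is applied first."""
    return matrix_to_params(params_to_matrix(p) @ params_to_matrix(dp))
```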
Inverse Compositional. Why compute updates on I? Can we reverse the roles of the images? Yes! [Baker, Matthews CMU-RI-TR-01-03] It can be proved that the two algorithms take the same steps, to first order.
Inverse Compositional
Forwards compositional: compute the incremental warp on I(W(x; p)) and update W(x; p) ← W(x; p) ∘ W(x; Δp).
Inverse compositional: compute the incremental warp on the template T instead, and update with its inverse, W(x; p) ← W(x; p) ∘ W(x; Δp)^{-1}.
Inverse Compositional. Minimise Σ_x [T(W(x; Δp)) - I(W(x; p))]^2. Solution: Δp = H^{-1} Σ_x [∇T ∂W/∂p]^T [I(W(x; p)) - T(x)]. Update: W(x; p) ← W(x; p) ∘ W(x; Δp)^{-1}.
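The closed form follows by linearizing the template term, analogously to the forwards case (a worked restatement):

```latex
% Linearize T(W(x; \Delta p)) about \Delta p = 0, where W(x; 0) is the identity:
T(W(x; \Delta p)) \approx T(x) + \nabla T \,\frac{\partial W}{\partial p}\, \Delta p

% Substituting into the objective and minimizing over \Delta p gives
\Delta p = H^{-1} \sum_x \Big[\nabla T \tfrac{\partial W}{\partial p}\Big]^{T}
           \big[ I(W(x;p)) - T(x) \big],
\qquad
H = \sum_x \Big[\nabla T \tfrac{\partial W}{\partial p}\Big]^{T}
    \Big[\nabla T \tfrac{\partial W}{\partial p}\Big]
```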
Inverse Compositional
- Jacobian is constant: evaluated at (x; 0)
- Gradient of the template is constant
- Hessian is constant
- Everything but the error image can be pre-computed!
Inverse Compositional Algorithm
Pre-compute (once):
3. Evaluate the gradient of the template, ∇T
4. Evaluate the Jacobian ∂W/∂p at (x; 0)
5. Compute the Hessian
Iterate:
1. Warp I with W(x; p) to compute I(W(x; p))
2. Compute the error image I(W(x; p)) - T(x)
6. Compute Δp
7. Update the warp: W(x; p) ← W(x; p) ∘ W(x; Δp)^{-1}
A sketch of the full loop, with the pre-computation hoisted out, follows.
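A minimal sketch of the inverse compositional loop, again assuming the hypothetical helpers (warp_affine, image_gradient, jacobian_affine, params_to_matrix, matrix_to_params) from the earlier sketches; note how only the warp, error image, and update remain inside the loop:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def inverse_compositional(I, T, p, n_iters=50, tol=1e-4):
    """Inverse compositional alignment for the 6-parameter affine warp."""
    h, w = T.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    # --- Pre-computation: everything except the error image ---
    Tx, Ty = image_gradient(T)                       # gradient of the template
    grad = np.stack([Tx.ravel(), Ty.ravel()], axis=1)[:, None, :]
    J  = jacobian_affine(pts)                        # Jacobian at (x; 0)
    sd = (grad @ J)[:, 0, :]                         # steepest-descent images
    H_inv = np.linalg.inv(sd.T @ sd)                 # constant Hessian, inverted once
    # --- Iteration: warp, error image, solve, composed update ---
    for _ in range(n_iters):
        wpts = warp_affine(pts, p)
        I_w = map_coordinates(I, [wpts[:, 1], wpts[:, 0]], order=1)
        err = I_w - T.ravel()                        # I(W(x; p)) - T(x)
        dp  = H_inv @ (sd.T @ err)
        # update: W(x;p) <- W(x;p) o W(x;dp)^{-1}
        M = params_to_matrix(p) @ np.linalg.inv(params_to_matrix(dp))
        p = matrix_to_params(M)
        if np.linalg.norm(dp) < tol:
            break
    return p
```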
Framework
Baker and Matthews (2003) formulated the unifying framework and proved the equivalences.

Algorithm              | Can be applied to | Efficient? | Authors
Forwards Additive      | Any warp          | No         | Lucas, Kanade
Forwards Compositional | Any semi-group    | No         | Shum, Szeliski
Inverse Compositional  | Any group         | Yes        | Baker, Matthews
Inverse Additive       | Simple linear 2D+ | Yes        | Hager, Belhumeur
Example