Agenda
- Project 2: due this Thursday
- Office hours: Wed 10:30-12
- Image blending
- Background: constrained optimization
Recall: goal
Formulation: find the best patch f
Given the vector field v (the pasted gradient), find the values of f in the unknown region Ω that optimize:

$$\min_f \iint_\Omega \|\nabla f - \mathbf{v}\|^2 \quad \text{with } f\big|_{\partial\Omega} = f^*\big|_{\partial\Omega}$$

[Figure: pasted gradient, mask, background, unknown region]
Notation
- Destination image: f* (table)
- Source image: g (table)
- Output image: f (table)
- Ω: set of (i,j) pixel coordinates from f* we want to replace (list of pairs)
- ∂Ω: set of (i,j) pixel coordinates on the border of Ω (list of pairs)
- We'll use p = (i,j) to denote a pixel location
  - g_p is the pixel value at p = (i,j) in the source image
  - f_Ω is the set of pixels we're trying to find
The best f minimizes the score, a sum over all pairs of neighbors in Ω:

$$S(f) = \sum_{p \in \Omega} \sum_{q \in N_p} \big((f_p - f_q) - (g_p - g_q)\big)^2$$

with the constraint that f_p = f*_p for p in ∂Ω (a sketch of computing this score follows).
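A minimal sketch (not from the slides) of how this score could be computed, assuming f and g are same-sized 2-D numpy arrays and mask is a boolean array marking Ω; the name blend_score and the 4-neighborhood choice are my own assumptions.

```python
import numpy as np

def blend_score(f, g, mask):
    """Score S(f): for every pixel p in Omega (mask == True) and each of its
    4-neighbors q, accumulate ((f_p - f_q) - (g_p - g_q))^2, i.e. how well
    the output gradients match the pasted source gradients."""
    score = 0.0
    H, W = f.shape
    for i in range(H):
        for j in range(W):
            if not mask[i, j]:                 # only pairs whose first pixel is in Omega
                continue
            for di, dj in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                qi, qj = i + di, j + dj
                if 0 <= qi < H and 0 <= qj < W:
                    df = f[i, j] - f[qi, qj]   # output gradient
                    dg = g[i, j] - g[qi, qj]   # pasted (source) gradient
                    score += (df - dg) ** 2
    return score
```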
Optimization
- What is the optimal f without the above constraint?
- What is known versus unknown?
- Variational formulation of the solution: the best patch is the one that produces the lowest score S(f), subject to the constraint f_p = f*_p for all p in ∂Ω
- (From here on we drop the Ω subscript on f for convenience)
Optimization
Pretend the constraint wasn't there: how do we find the lowest-scoring f?
1) Brute-force search
   - Keep guessing different patches f and score them
   - Output the best-scoring one
2) Gradient descent
   - Guess a patch f. Update the guess with f = f - α ∇S(f)
How to estimate the gradient ∇S(f)?
- In general, we can always do it numerically (finite differences; see the sketch below)
- For the above quadratic function, we can calculate it in closed form
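A minimal sketch of the numerical option, reusing the hypothetical blend_score above: perturb one pixel at a time and take a finite difference. It needs no calculus but is slow; the closed-form gradient of the quadratic is what you would use in practice.

```python
def numeric_gradient(f, g, mask, eps=1e-3):
    """Finite-difference estimate of dS/df_p for every unknown pixel p in Omega."""
    f = f.astype(float).copy()                 # work on a float copy
    grad = np.zeros_like(f)
    base = blend_score(f, g, mask)
    for i, j in zip(*np.nonzero(mask)):        # only pixels we are free to change
        f[i, j] += eps
        grad[i, j] = (blend_score(f, g, mask) - base) / eps
        f[i, j] -= eps                         # undo the perturbation
    return grad
```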
Constrained optimization
1) Brute-force search
   - Keep guessing different patches f and score them
   - Output the best-scoring one
2) Gradient descent
   - Guess a patch f. Update the guess with f = f - α ∇S(f)
   - What happens when the gradient is zero?
Optimization
1) Brute-force search
   - Keep guessing different patches f and score them
   - Output the best-scoring one
2) Gradient descent
   - Guess a patch f. Update the guess with f = f - α ∇S(f)
3) Closed-form solution (for simple functions): set ∇S(f) = 0 and solve
Constrained optimization
How do we handle the constraints?
1) Brute-force search
   - Keep guessing different patches f and score them
   - Output the best-scoring one
2) Gradient descent
   - Guess a patch f. Update the guess with f = f - α ∇S(f)
   - Correct f_p = f*_p for p in ∂Ω after each gradient update (see the sketch below)
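A minimal sketch of this "correct after each update" idea (projected gradient descent), reusing the hypothetical helpers above; the step size and iteration count are arbitrary assumptions, not values from the slides.

```python
def projected_gradient_blend(f_star, g, mask, step=0.1, iters=500):
    """Gradient descent on S(f) that re-imposes f_p = f*_p on every pixel
    outside Omega (and hence on the boundary) after each update."""
    f = f_star.astype(float).copy()                # initial guess: the destination image
    for _ in range(iters):
        f -= step * numeric_gradient(f, g, mask)   # gradient update
        f[~mask] = f_star[~mask]                   # correct: constrained pixels stay fixed
    return f
```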
Constrained optimization
How do we handle the constraints?
1) Brute-force search
   - Keep guessing different patches f and score them
   - Output the best-scoring one
2) Gradient descent
   - Guess a patch f. Update the guess with f = f - α ∇S(f)
   - What happens when the gradient is zero?
Lagrangian optimization
- If there were no constraint, we'd have a closed-form solution
- Is there a way to get a closed-form solution while respecting the constraint?
Lagrangian optimization
min f(x, y) such that g(x, y) = 0
Imagine we want to synthesize a "two-pixel" patch
Lagrangian optimization
min f(x, y) such that g(x, y) = 0
Write the conditions with a single equation (just for convenience):

$$F(x, y, \lambda) = f(x, y) + \lambda\, g(x, y)$$

At a minimum of F, its gradient is 0. Therefore, the following conditions hold:

$$\frac{\partial F}{\partial x} = \frac{\partial f}{\partial x} + \lambda \frac{\partial g}{\partial x} = 0, \qquad
\frac{\partial F}{\partial y} = \frac{\partial f}{\partial y} + \lambda \frac{\partial g}{\partial y} = 0, \qquad
\frac{\partial F}{\partial \lambda} = g(x, y) = 0$$
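A small worked instance of these conditions for the slides' "two-pixel" patch; the particular score and constraint below are my own illustration, not taken from the slides: x is the unknown pixel, y the boundary pixel, d the pasted gradient, and c the background value the boundary pixel must keep.

```latex
% Hypothetical two-pixel example: match the pasted gradient d while pinning y to c.
\begin{align*}
  \min_{x,y}\ f(x,y) &= \big((x - y) - d\big)^2
      \quad\text{such that}\quad g(x,y) = y - c = 0\\[4pt]
  F(x,y,\lambda) &= \big((x - y) - d\big)^2 + \lambda\,(y - c)\\
  \partial F/\partial x &= 2\big((x - y) - d\big) = 0\\
  \partial F/\partial y &= -2\big((x - y) - d\big) + \lambda = 0\\
  \partial F/\partial \lambda &= y - c = 0\\[4pt]
  \Rightarrow\ y &= c, \qquad x = c + d, \qquad \lambda = 0
\end{align*}
% The unknown pixel ends up at the boundary value plus the source gradient.
```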
Multiple constraints
min f(x, y) such that g1(x, y) = 0, g2(x, y) = 0
What is f(x, y) in our case? What is g1(x, y)?
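One way to read the slide's question, consistent with the next slide (my phrasing): f plays the role of the quadratic blending score S, and each border pixel contributes one equality constraint.

```latex
\begin{align*}
  f(x,y) \;\longleftrightarrow\; & S(f) = \sum_{p \in \Omega}\sum_{q \in N_p}
      \big((f_p - f_q) - (g_p - g_q)\big)^2\\
  g_i(x,y) \;\longleftrightarrow\; & f_p - f^*_p = 0
      \quad \text{(one constraint per border pixel } p \in \partial\Omega\text{)}
\end{align*}
```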
Lagrangian optimization
The constrained optimum satisfies:

$$f_p = f^*_p \quad \text{for } p \in \partial\Omega \text{ (border pixels)}, \qquad
\frac{\partial S}{\partial f_p} = 0 \quad \text{for all other } p \in \Omega$$

Since S is quadratic in f, the above yields a set of linear equations: A f = b, so f = inv(A) b (sketched below).
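A minimal dense-matrix sketch of this last step (my own, consistent with the standard discrete Poisson blend): build one linear equation per unknown pixel and solve A f = b. The name solve_blend and the dense np.linalg.solve call are assumptions made for clarity; large regions would need a sparse solver (e.g. scipy.sparse.linalg.spsolve).

```python
import numpy as np

def solve_blend(f_star, g, mask):
    """Closed-form constrained solution: f_p = f*_p outside Omega, and
    dS/df_p = 0 for each p in Omega, written as one linear equation per
    unknown pixel and solved as A f = b."""
    g = np.asarray(g, dtype=float)
    H, W = f_star.shape
    unknowns = list(zip(*np.nonzero(mask)))            # pixels in Omega
    idx = {p: k for k, p in enumerate(unknowns)}       # pixel -> row/column index
    n = len(unknowns)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for (i, j), k in idx.items():
        for di, dj in ((0, 1), (1, 0), (0, -1), (-1, 0)):   # 4-neighbors
            qi, qj = i + di, j + dj
            if not (0 <= qi < H and 0 <= qj < W):
                continue
            A[k, k] += 1.0
            b[k] += g[i, j] - g[qi, qj]                # pasted source gradient
            if (qi, qj) in idx:
                A[k, idx[(qi, qj)]] -= 1.0             # neighbor is also unknown
            else:
                b[k] += f_star[qi, qj]                 # neighbor is pinned to f*
    f = f_star.astype(float)                           # constrained pixels keep f*
    f[np.nonzero(mask)] = np.linalg.solve(A, b)        # f = inv(A) b for Omega
    return f
```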