CS448f: Image Processing For Photography and Vision – The Gradient Domain

Image Gradients
– We can approximate the derivatives of an image using differences
– Equivalent to convolution by [-1 1]
– Note the zero boundary condition: the first difference is taken against an implicit zero, so dm[0] = m[0]
[Figure: an image m and its derivative dm/dx]
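A minimal NumPy sketch of the 1D case (the toy array is an assumption; prepend=0.0 encodes the zero boundary condition):

```python
import numpy as np

m = np.array([3.0, 5.0, 4.0, 7.0])

# Convolution by [-1 1] with a zero boundary:
# dm[i] = m[i] - m[i-1], and dm[0] = m[0] - 0.
dm = np.diff(m, prepend=0.0)
print(dm)  # [ 3.  2. -1.  3.]
```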

Image Derivatives
– We can get back to the image by integrating – like an integral image
– Differentiating throws away constant terms
  – The boundary condition allowed us to recover it
[Figure: dm/dx and its reconstruction ∫ (dm/dx) dx = m]
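Continuing the sketch above, integration is a running sum, and nothing is lost:

```python
# The cumulative sum undoes the differences; because dm[0] kept the
# boundary term m[0], the constant of integration is recovered too.
m_rec = np.cumsum(dm)
print(np.allclose(m_rec, m))  # True
```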

Image Derivatives
– Can think of it as an extreme coarse/fine decomposition
  – coarse = boundary term
  – fine = gradients

In 2D
– Gradient in X = convolution by [-1 1]
– Gradient in Y = convolution by the same filter turned vertical: [-1 1]ᵀ
– If we take both, we have 2n values to represent n pixels
  – Must be redundant!

Redundancy
d(dm/dx)/dy = d(dm/dy)/dx
The Y derivative of the X gradient = the X derivative of the Y gradient
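The identity holds exactly for finite differences too, which a quick check confirms (the random test image is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
m = rng.random((5, 5))

gx = np.diff(m, axis=1)  # X gradient, shape (5, 4)
gy = np.diff(m, axis=0)  # Y gradient, shape (4, 5)

# Y derivative of the X gradient == X derivative of the Y gradient
print(np.allclose(np.diff(gx, axis=0), np.diff(gy, axis=1)))  # True
```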

Gradient Domain Editing
Gradient domain techniques:
– Take image gradients
– Mess with them
– Try to put the image back together
After you've messed with the gradients, the constraint on the previous slide doesn't necessarily hold anymore.

The Poisson Solve
– Convolving by [-1 1] is a linear operator: Dx
– Taking the Y gradient is some other linear operator: Dy
– We have desired gradient images gx and gy
– We want to find the image that best produces them
– Solve for an image m such that:
    Dx m ≈ gx,  Dy m ≈ gy

The Poisson Solve
How? Using least squares:
    minimize ‖Dx m − gx‖² + ‖Dy m − gy‖²
Setting the gradient of this objective to zero gives the normal equations:
    (Dxᵀ Dx + Dyᵀ Dy) m = Dxᵀ gx + Dyᵀ gy
This is a (discrete) Poisson equation.

The Poisson Solve
– Dx = convolution by [-1 1]
– Dxᵀ = convolution by [1 -1]
– The product Dxᵀ Dx = convolution by [-1 2 -1]
  – The (negated) approximate second derivative
– Dxᵀ Dx + Dyᵀ Dy = convolution by the 5-point stencil
      [ 0 -1  0]
      [-1  4 -1]
      [ 0 -1  0]
  – i.e., the negative Laplacian
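The kernel algebra can be verified directly (a sketch; it uses the fact that the transpose of convolution by a kernel is convolution by the flipped kernel):

```python
import numpy as np

dx = np.array([-1.0, 1.0])     # Dx: convolution by [-1 1]
dxt = dx[::-1]                 # Dxᵀ: convolution by the flipped kernel [1 -1]

second = np.convolve(dxt, dx)  # composing the two convolutions
print(second)                  # [-1.  2. -1.]

# Adding the same stencil along y gives the 5-point negative Laplacian:
lap = np.zeros((3, 3))
lap[1, :] += second            # x contribution
lap[:, 1] += second            # y contribution
print(lap)                     # [[ 0. -1.  0.]
                               #  [-1.  4. -1.]
                               #  [ 0. -1.  0.]]
```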

The Poisson Solve
– We need to invert: A = Dxᵀ Dx + Dyᵀ Dy
– How big is the matrix? One row per pixel: n × n for an n-pixel image, so a one-megapixel image gives a 10⁶ × 10⁶ (but very sparse) matrix
– Anyone know any methods for inverting large sparse matrices?

Solving Large Linear Systems
A = Dxᵀ Dx + Dyᵀ Dy
b = Dxᵀ gx + Dyᵀ gy
We need to solve Ax = b
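As a concrete sketch, the whole system can be built with SciPy's sparse matrices (the 4×5 image size, the helper name diff_matrix, and the direct solve are assumptions made here; for real images a direct solve is impractical, which is why the iterative methods below matter):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

h, w = 4, 5

def diff_matrix(size):
    # (D m)[i] = m[i] - m[i-1], with the zero boundary (D m)[0] = m[0]
    return sp.eye(size) - sp.eye(size, k=-1)

Dx = sp.kron(sp.eye(h), diff_matrix(w))  # x differences on the flattened image
Dy = sp.kron(diff_matrix(h), sp.eye(w))  # y differences

m_true = np.random.rand(h * w)
gx, gy = Dx @ m_true, Dy @ m_true        # consistent target gradients

A = (Dx.T @ Dx + Dy.T @ Dy).tocsr()      # sparse, symmetric positive definite
b = Dx.T @ gx + Dy.T @ gy

print(np.allclose(spsolve(A, b), m_true))  # True
```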

1) Gradient Descent
x = some initial estimate
for (lots of iterations):
    r = b - Ax
    e = rᵀr
    α = e / (rᵀA r)
    x += αr
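A runnable version, reusing A, b, and m_true from the sparse sketch above (the iteration count and zero start are arbitrary choices):

```python
def gradient_descent(A, b, iters=500):
    x = np.zeros_like(b)
    for _ in range(iters):
        r = b - A @ x                    # residual, also the downhill direction
        if r @ r < 1e-20:                # converged to machine precision
            break
        alpha = (r @ r) / (r @ (A @ r))  # optimal step length along r
        x += alpha * r
    return x

print(np.abs(gradient_descent(A, b) - m_true).max())  # small
```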

2) Conjugate Gradient Descent
x = some initial estimate
d = r = b - Ax   (not Ax - b: the residual points downhill)
e_new = rᵀr
for (fewer iterations):
    α = e_new / (dᵀA d)
    x += αd
    r = b - Ax
    e_old = e_new
    e_new = rᵀr
    d = r + (e_new / e_old) d
(See Shewchuk, "An Introduction to the Conjugate Gradient Method Without the Agonizing Pain")
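The same, with the search directions made A-conjugate (note the residual is b - Ax; updating it incrementally saves a matrix multiply per iteration):

```python
def conjugate_gradient(A, b, x0=None, iters=50, eps=1e-20):
    x = np.zeros_like(b) if x0 is None else x0.astype(float).copy()
    r = b - A @ x
    d = r.copy()
    e_new = r @ r
    for _ in range(iters):
        if e_new < eps:
            break
        Ad = A @ d
        alpha = e_new / (d @ Ad)
        x += alpha * d
        r -= alpha * Ad                  # equivalent to r = b - A @ x, but cheaper
        e_old, e_new = e_new, r @ r
        d = r + (e_new / e_old) * d      # new direction, A-conjugate to the old ones
    return x

print(np.abs(conjugate_gradient(A, b) - m_true).max())  # near machine precision
```

The x0 parameter is not in the slide's pseudocode; it is added here so the coarse-to-fine scheme below can warm-start the solver.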

3) Coarse-to-Fine Conjugate Gradient Descent
– Downsample the target gradients
– Solve for a small solution
– Upsample the solution
– Use that as the initial estimate for a new conjugate gradient descent
– Not too many iterations are required at each level
– This is what ImageStack does in -poisson (a sketch follows below)
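A self-contained sketch of the scheme (power-of-two image sizes, the 2× gradient rescaling on downsampling, and the per-level iteration budget are all assumptions; it reuses diff_matrix and conjugate_gradient from the sketches above):

```python
def solve_coarse_to_fine(gx, gy, iters_per_level=20):
    h, w = gx.shape
    Dx = sp.kron(sp.eye(h), diff_matrix(w))
    Dy = sp.kron(diff_matrix(h), sp.eye(w))
    A = (Dx.T @ Dx + Dy.T @ Dy).tocsr()
    b = Dx.T @ gx.ravel() + Dy.T @ gy.ravel()
    if min(h, w) <= 4:                             # coarsest level: solve outright
        return conjugate_gradient(A, b, iters=1000).reshape(h, w)
    # Downsample the target gradients: average 2x2 blocks, then double them,
    # since differences grow with the grid spacing.
    down = lambda a: 0.5 * (a[0::2, 0::2] + a[1::2, 0::2]
                            + a[0::2, 1::2] + a[1::2, 1::2])
    coarse = solve_coarse_to_fine(down(gx), down(gy), iters_per_level)
    x0 = np.kron(coarse, np.ones((2, 2))).ravel()  # upsample as the initial estimate
    return conjugate_gradient(A, b, x0=x0, iters=iters_per_level).reshape(h, w)

m_big = np.random.rand(16, 16)
gx2 = np.diff(m_big, axis=1, prepend=0.0)
gy2 = np.diff(m_big, axis=0, prepend=0.0)
print(np.abs(solve_coarse_to_fine(gx2, gy2) - m_big).max())  # small
```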

4) FFT Method
– We're trying to undo a convolution
– Convolutions are multiplications in Fourier space
– Therefore, go to Fourier space and divide
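A sketch under periodic-boundary assumptions (the circular gradients, and zeroing the unrecoverable DC term, are choices made here; real images need padding or a DCT-based variant to avoid wrap-around artifacts):

```python
import numpy as np

def poisson_fft(gx, gy):
    h, w = gx.shape
    # b = Dxᵀgx + Dyᵀgy with circular boundaries: (Dᵀr)[i] = r[i] - r[i+1]
    b = (gx - np.roll(gx, -1, axis=1)) + (gy - np.roll(gy, -1, axis=0))
    # Transfer function of DᵀD along each axis: |1 - e^(-iw)|^2 = 2 - 2 cos(w)
    wx = 2 * np.pi * np.fft.fftfreq(w)[None, :]
    wy = 2 * np.pi * np.fft.fftfreq(h)[:, None]
    denom = (2 - 2 * np.cos(wx)) + (2 - 2 * np.cos(wy))
    denom[0, 0] = 1.0                   # avoid 0/0 at the DC term...
    m_hat = np.fft.fft2(b) / denom
    m_hat[0, 0] = 0.0                   # ...whose value is unrecoverable anyway
    return np.real(np.fft.ifft2(m_hat))

# Round trip on circular gradients: recovered up to the lost constant.
m = np.random.rand(8, 8)
gx = m - np.roll(m, 1, axis=1)
gy = m - np.roll(m, 1, axis=0)
rec = poisson_fft(gx, gy)
print(np.allclose(rec - rec.mean(), m - m.mean()))  # True
```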

Applications
How might we like to mess with the gradients? Let's try some stuff.
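As one quick experiment, here is a simplified stand-in for gradient-domain HDR compression (the power-law attenuation is an illustrative choice, not Fattal et al.'s actual attenuation function), reusing poisson_fft and the circular gx, gy from the previous sketch:

```python
def compress_gradients(g, strength=0.8):
    # Scale each gradient by |g|^(strength - 1): large gradients are
    # attenuated more than small ones, compressing dynamic range.
    return np.sign(g) * np.abs(g) ** strength

compressed = poisson_fft(compress_gradients(gx), compress_gradients(gy))
```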

Applications
– Poisson Image Editing – Pérez et al. 2003
– GradientShop – Bhat et al. 2009
– Gradient Domain High Dynamic Range Compression – Fattal et al. 2002
– Efficient Gradient-Domain Compositing Using Quadtrees – Agarwala 2007
– Coordinates for Instant Image Cloning – Farbman et al. 2009