Martin Burger — Total Variation, Cetraro, September 2008
Numerical Schemes: wrap-up of approximate formulations of the subgradient relation


Slide 1 — Numerical Schemes
Wrap-up of approximate formulations of the subgradient relation.

Slide 2 — Numerical Schemes
Primal approximation
Primal fixed point
Dual approximation
Dual fixed point
Dual fixed point for the primal relation

Slide 3 — Fixed Point Methods
Matrix form.

Slide 4 — Fixed Point Schemes I: Primal Gradient Method
Based on an approximation of F; fixed-point approach for the first optimality equation.

Slide 5 — Fixed Point Schemes I: Primal Gradient Method
Based on approximation; Rudin–Osher–Fatemi 89.
+ easy to implement, efficient iteration steps
+ global convergence (descent method for the variational problem)
- dependent on the approximation
- slow convergence
- severe step-size restrictions (explicit approximation of the differential operator)
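The primal gradient scheme above can be sketched in 1-D. This is an illustrative reimplementation, not code from the lecture; the function names, the smoothing parameter eps, the weight lam, the step size tau, and the test signal are all assumptions. The step-size bound in the comment is the "severe step-size restriction" the slide refers to.

```python
import numpy as np

def tv_energy(u, f, lam, eps):
    """Smoothed ROF energy: 1/2 ||u-f||^2 + lam * sum sqrt((Du)_i^2 + eps^2)."""
    du = np.diff(u)
    return 0.5 * np.sum((u - f) ** 2) + lam * np.sum(np.sqrt(du ** 2 + eps ** 2))

def tv_gradient_descent(f, lam=0.5, eps=0.1, tau=0.04, iters=300):
    """Explicit gradient descent on the smoothed ROF energy (1-D).

    The TV term is replaced by sum sqrt((Du)^2 + eps^2); the gradient of the
    energy is (u - f) + lam * D^T(Du / sqrt((Du)^2 + eps^2)).  The explicit
    step must satisfy roughly tau < 2 / (1 + 4*lam/eps), which becomes very
    restrictive as eps -> 0.
    """
    u = f.astype(float).copy()
    for _ in range(iters):
        du = np.diff(u)                        # forward differences (Du)
        w = du / np.sqrt(du ** 2 + eps ** 2)   # phi'(Du)
        grad_tv = np.zeros_like(u)             # D^T w
        grad_tv[1:] += w
        grad_tv[:-1] -= w
        u -= tau * ((u - f) + lam * grad_tv)
    return u

# Noisy piecewise-constant test signal (illustrative)
rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
u = tv_gradient_descent(f)
```

Since the step size obeys the descent bound, the smoothed energy decreases monotonically; the slow convergence the slide mentions shows up as the large iteration count needed for a 100-point signal.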

Slide 6 — Fixed Point Schemes I: Primal Gradient Method
Based on approximation; Rudin–Osher–Fatemi 89. Special case of the general fixed point methods for a particular choice of the iteration matrices.


Slide 8 — Fixed Point Schemes II: Dual Gradient Projection Method
Dual methods also eliminate u.

Slide 9 — Fixed Point Schemes II: Dual Gradient Projection Method
Do a gradient step on the quadratic functional and project back onto the constraint set M.
Note: this can be interpreted as a scheme in which the first equation is always satisfied, i.e. as a special case of the general fixed point iteration.

Slide 10 — Fixed Point Schemes II: Dual Gradient Projection Method (Chambolle 05, Chan et al 08, Aujol 08)
+ easy to implement, efficient iteration steps
+ global convergence (descent method for the dual problem)
+ no approximation necessary
- slow convergence
- needs inversion of A*A, hence good for ROF, bad for inverse problems
Obvious generalization for the last point: use preconditioning of A*A.
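For the ROF case A = I, the dual gradient projection step can be sketched in 1-D as follows. This is an illustrative sketch, not the slides' code: with u = f - lam * D^T p, the dual problem is min over |p_i| <= 1 of 1/2 ||lam D^T p - f||^2, the projection is a pointwise clip, and the fixed step 1/(4*lam^2) (chosen from the bound ||D D^T|| <= 4 in 1-D) is an assumption.

```python
import numpy as np

def dual_gradient_projection(f, lam, iters=500):
    """Projected gradient descent on the dual of 1-D ROF denoising (A = I).

    Dual variable p has one entry per forward difference; the dual gradient
    is -lam * D u with u = f - lam * D^T p, and projection onto {|p_i| <= 1}
    is a componentwise clip.
    """
    n = len(f)
    p = np.zeros(n - 1)
    for _ in range(iters):
        # primal variable recovered from the dual one
        dtp = np.zeros(n)
        dtp[1:] += p
        dtp[:-1] -= p
        u = f - lam * dtp
        # gradient step (step size 1/(4*lam^2), absorbed into the lam factor)
        # followed by projection onto the constraint set
        p = np.clip(p + np.diff(u) / (4.0 * lam), -1.0, 1.0)
    dtp = np.zeros(n)
    dtp[1:] += p
    dtp[:-1] -= p
    return f - lam * dtp

# Illustrative test signal
rng = np.random.default_rng(1)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
u = dual_gradient_projection(f, lam=0.5)
```

No smoothing of the TV term is needed here, matching the "+ no approximation necessary" point; a constant input is a fixed point (p stays zero), and for a noisy input the total variation of the output drops.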

Slide 11 — Fixed Point Schemes III: Chambolle's Method
Dual method; again eliminates u.

Slide 12 — Fixed Point Schemes III: Chambolle's Method
Complicated derivation from the dual minimization problem in the original paper.
Note: can be interpreted as a scheme in which the first equation is always satisfied, in addition using the dual fixed point form for the primal subgradient relation, i.e. as a special case of the general iteration.

Slide 13 — Fixed Point Schemes III: Chambolle's Method (Chambolle 04)
+ easy to implement, efficient iteration steps
+ global convergence (descent method for the dual problem)
+ no approximation necessary
- slow convergence
- needs inversion of A*A, hence good for ROF, bad for inverse problems
Obvious generalization for the last point: use preconditioning of A*A.

Slide 14 — Fixed Point Schemes IV: Inexact Uzawa Method (Zhu and Chan 08)
Primal gradient descent, dual projected gradient ascent in the reduced Lagrangian (for u and w).
Coincides with dual gradient projection if A = I and for an appropriate choice of the damping parameter.

Slide 15 — Fixed Point Schemes V: Primal Lagged Diffusivity with Approximation (Vogel et al)
Approximate the smoothed primal optimality condition; semi-implicit treatment of the differential operator.

Slide 16 — Fixed Point Schemes V
Special case of the general fixed point iteration for a particular choice of the iteration matrices.

Slide 17 — Fixed Point Schemes V: Primal Lagged Diffusivity with Approximation
+ acceptable step-size restrictions
+ global convergence (descent method for the variational problem)
- dependent on the approximation
- still slow convergence
- a differential equation with changing coefficients must be solved in each step
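The lagged diffusivity fixed point can be sketched in 1-D. This is an illustrative reconstruction, not the lecture's code: the diffusivity w is frozen at the previous iterate and a linear system is solved per sweep, which is the "differential equation with changing coefficients" of the last point; lam, eps, the dense solver, and the signal are assumptions.

```python
import numpy as np

def smoothed_energy(u, f, lam, eps):
    """Smoothed ROF energy: 1/2 ||u-f||^2 + lam * sum sqrt((Du)_i^2 + eps^2)."""
    du = np.diff(u)
    return 0.5 * np.sum((u - f) ** 2) + lam * np.sum(np.sqrt(du ** 2 + eps ** 2))

def lagged_diffusivity(f, lam=0.5, eps=0.1, iters=15):
    """Lagged diffusivity fixed point for smoothed 1-D ROF denoising.

    Each sweep freezes w = 1/sqrt((Du)^2 + eps^2) at the previous iterate
    and solves the semi-implicit linear system (I + lam * D^T W D) u = f,
    so no explicit step-size restriction applies.
    """
    n = len(f)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]   # (n-1) x n forward-difference matrix
    u = f.astype(float).copy()
    for _ in range(iters):
        w = 1.0 / np.sqrt((D @ u) ** 2 + eps ** 2)
        # W D scales each row of D by the frozen diffusivity
        u = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), f)
    return u

# Illustrative test signal
rng = np.random.default_rng(3)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
u = lagged_diffusivity(f)
```

Each sweep minimizes a quadratic majorizer of the smoothed energy, so the energy decreases monotonically; note how few sweeps are needed compared with the explicit primal gradient method.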

Slide 18 — Thresholding Methods
C is a damping matrix (possibly a perturbation); T is a thresholding operator.

Slide 19 — Thresholding Methods I: Primal Thresholding Method (Daubechies–Defrise–De Mol 03)
Only used for D = -I. Introduce an auxiliary variable.

Slide 20 — Thresholding Methods I: Primal Thresholding Method (Daubechies–Defrise–De Mol 03)
Hence a special case of the general iteration for a particular choice of C and T.

Slide 21 — Thresholding Schemes I: Primal Thresholding Method
+ easy to implement, efficient iteration steps if D = -I
- slow convergence
- cannot be generalized to cases where D is not invertible
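For D = -I the scheme is the iterative soft-thresholding of Daubechies–Defrise–De Mol for min_u 1/2 ||Au - f||^2 + lam ||u||_1; a minimal sketch follows. The function names and the step-size rule tau = 1/||A||^2 are standard for this iteration but chosen here for illustration, not taken from the slides.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal map of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, f, lam, iters=200):
    """Iterative soft-thresholding for min_u 1/2||Au - f||^2 + lam ||u||_1,
    i.e. the D = -I case of the primal thresholding method.

    Each step is a gradient step on the quadratic fidelity term followed by
    the thresholding operator T = soft(., tau*lam).
    """
    tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the spectral norm of A
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        u = soft(u - tau * A.T @ (A @ u - f), tau * lam)
    return u

# Sanity check: for A = I the fixed point is the componentwise shrinkage of f
u_id = ista(np.eye(3), np.array([2.0, 0.5, -1.5]), lam=1.0)
# -> [1.0, 0.0, -0.5]
```

The A = I case converges in one step because the gradient step reproduces f exactly; for ill-conditioned A the same iteration exhibits the slow (linear) convergence listed above.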

Slide 22 — Thresholding Methods II: Alternating Minimization (Yin et al 08, Amat–Pedregal 08)
Use a quadratic penalty for the gradient constraint (Moreau–Yosida regularization); minimize alternately with respect to the variables.


Slide 24 — Thresholding Methods II: Alternating Minimization (Yin et al 08, Amat–Pedregal 08)
Introduce an auxiliary variable; hence a special case of the general iteration.

Slide 25 — Thresholding Schemes II: Alternating Minimization
+ efficient iteration steps if D*D and A*A can be jointly inverted easily (e.g. by FFT)
+ treats the differential operator implicitly, no severe stability bounds
- linear convergence
- smooths the regularization functional
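The alternating minimization of the penalized problem min_{u,d} 1/2||u-f||^2 + lam||d||_1 + mu/2 ||Du - d||^2 can be sketched in 1-D. This is illustrative, not the slides' code: the FFT-based joint inversion mentioned above is replaced by a dense solve with a matrix factored outside the loop, and lam, mu, and the signal are assumptions.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal map of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def alternating_min(f, lam=0.5, mu=10.0, iters=50):
    """Alternate exact minimization over u and d of the quadratic penalty
    formulation; the finite mu smooths the regularization functional, which
    is the last minus point on the slide."""
    n = len(f)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]   # forward-difference matrix
    M = np.eye(n) + mu * D.T @ D               # u-subproblem matrix, fixed over iterations
    d = np.zeros(n - 1)
    for _ in range(iters):
        u = np.linalg.solve(M, f + mu * D.T @ d)   # quadratic u-subproblem
        d = soft(D @ u, lam / mu)                  # shrinkage d-subproblem
    return u

# Illustrative test signal
rng = np.random.default_rng(4)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
u = alternating_min(f)
```

The differential operator sits inside the solved system, so there is no explicit stability bound; only the penalty constant mu trades constraint accuracy against conditioning.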

Slide 26 — Thresholding Methods III: Split Bregman (Goldstein–Osher 08)
Original motivation from the Bregman iteration; can be rewritten in fixed point form.

Slide 27 — Thresholding Methods III: Split Bregman (Goldstein–Osher 08)
Difficult to solve directly, hence a subiteration with thresholding; after renumbering, a special case of the general iteration.

Slide 28 — Thresholding Schemes III: Split Bregman (Goldstein–Osher 08)
+ efficient iteration steps if D*D and A*A can be jointly inverted easily (e.g. by FFT)
+ treats the differential operator implicitly, no severe stability bounds
+ does not need smoothing
- linear convergence
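Split Bregman differs from the alternating minimization above only by an extra Bregman variable b that accumulates the constraint residual, enforcing d = Du in the limit without smoothing the regularizer. A 1-D sketch, with the same illustrative choices (dense solve instead of FFT, assumed lam, mu, signal):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal map of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman(f, lam=0.5, mu=5.0, iters=100):
    """Split Bregman iteration for 1-D ROF denoising.

    u-subproblem: (I + mu D^T D) u = f + mu D^T (d - b)
    d-subproblem: componentwise shrinkage of Du + b
    b-update:     b += Du - d  (Bregman variable, drives Du - d -> 0)
    """
    n = len(f)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]   # forward-difference matrix
    M = np.eye(n) + mu * D.T @ D               # factored conceptually once
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    for _ in range(iters):
        u = np.linalg.solve(M, f + mu * D.T @ (d - b))
        d = soft(D @ u + b, lam / mu)
        b += D @ u - d                         # accumulate the constraint residual
    return u

# Illustrative test signal
rng = np.random.default_rng(5)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
u = split_bregman(f)
```

Because b compensates the shrinkage bias, the scheme solves the unsmoothed TV problem, matching the "+ does not need smoothing" point; convergence remains linear, as the slide notes.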

Slide 29 — Newton-Type Methods
Matrix form.

Slide 30 — Newton-Type Methods
Primal or dual.
+ fast local convergence
- global convergence difficult
- dependent on the approximation (the Newton matrix degenerates)
- needs inversion of a large Newton matrix
Good choice with efficient preconditioning for the linear system in each iteration step.