Total variation minimization: Numerical Analysis, Error Estimation, and Extensions
Martin Burger
Johannes Kepler University Linz, SFB Numerical-Symbolic-Geometric Scientific Computing
Radon Institute for Computational & Applied Mathematics
Westfälische Wilhelms-Universität Münster
Obergurgl, September

Collaborations
Stan Osher, Jinjun Xu, Guy Gilboa (UCLA); Lin He (Linz / UCLA); Klaus Frick, Otmar Scherzer (Innsbruck); Carola Schönlieb (Vienna); Don Goldfarb, Wotao Yin (Columbia)

Introduction
Total variation methods are popular in imaging (and inverse problems) because they
- preserve sharp edges,
- eliminate oscillations (noise),
- create interesting new mathematics.
Many related approaches have appeared in recent years, e.g. ℓ1 penalization / sparsity techniques.

Introduction
Total variation and related methods also have some shortcomings:
- they are difficult to analyze, and error estimates are hard to obtain,
- they introduce systematic errors (clean images are not reconstructed perfectly),
- they pose computational challenges,
- some extensions to other imaging tasks (e.g. inpainting) are not well understood.

ROF Model
The starting point of the analysis is the ROF model for denoising: minimize

    J(u) + (λ/2) ||u - f||²,   J(u) = |u|_TV

over u ∈ BV(Ω). Rudin-Osher-Fatemi 89/92, Acar-Vogel 93, Chambolle-Lions 96, Vogel 95/96, Scherzer-Dobson 96, Chavent-Kunisch 98, Meyer 01, ...

ROF Model
Reconstruction (code by Jinjun Xu). [Figure: clean image, noisy image, ROF reconstruction]
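The reconstruction code itself is not part of the transcript. As an illustration of how such ROF reconstructions are typically computed, here is a minimal NumPy sketch of Chambolle's projection algorithm (Chambolle 03/04) for the model above; the function names and parameter values are illustrative, and this is not Jinjun Xu's code.

    import numpy as np

    def grad(u):
        # forward differences with homogeneous Neumann boundary conditions
        gx = np.zeros_like(u)
        gy = np.zeros_like(u)
        gx[:-1, :] = u[1:, :] - u[:-1, :]
        gy[:, :-1] = u[:, 1:] - u[:, :-1]
        return gx, gy

    def div(px, py):
        # discrete divergence, defined as the negative adjoint of grad
        dx = np.zeros_like(px)
        dx[0, :] = px[0, :]
        dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
        dx[-1, :] = -px[-2, :]
        dy = np.zeros_like(py)
        dy[:, 0] = py[:, 0]
        dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
        dy[:, -1] = -py[:, -2]
        return dx + dy

    def rof_denoise(f, lam=20.0, n_iter=200, tau=0.125):
        # Dual projection iteration: u = f - (1/lam) div p with |p| <= 1 pointwise;
        # tau <= 1/8 guarantees convergence of the fixed-point iteration.
        theta = 1.0 / lam
        px = np.zeros_like(f)
        py = np.zeros_like(f)
        for _ in range(n_iter):
            gx, gy = grad(div(px, py) - f / theta)
            denom = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
            px = (px + tau * gx) / denom
            py = (py + tau * gy) / denom
        return f - theta * div(px, py)

Applied to a noisy 2D float array f, rof_denoise removes oscillations while keeping sharp edges, the behaviour shown on this slide.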

Error Estimation
First question for error estimation: estimate the difference between u (the minimizer of ROF) and f in terms of the regularization parameter λ.
An estimate in the L² norm is standard, but does not yield information about edges.
An estimate in the BV norm is too ambitious: even an arbitrarily small difference in edge location can yield a BV norm of order one!

Error Estimation
We need a better error measure, stronger than L², weaker than BV. A possible choice is the Bregman distance (Bregman 67)

    D_J(u,v) = J(u) - J(v) - ⟨J'(v), u - v⟩,

a real distance for a strictly convex differentiable functional J, but not symmetric. Symmetric version:

    D_J^sym(u,v) = ⟨J'(u) - J'(v), u - v⟩.

Error Estimation
Total variation is neither strictly convex nor differentiable. Define the generalized Bregman distance for each subgradient p ∈ ∂J(v) (Kiwiel 97, Chen-Teboulle 97):

    D_J^p(u,v) = J(u) - J(v) - ⟨p, u - v⟩.

Symmetric version:

    D_J^sym(u,v) = ⟨p_u - p_v, u - v⟩,   p_u ∈ ∂J(u), p_v ∈ ∂J(v).

Error Estimation
Since the TV seminorm is homogeneous of degree one, we have

    J(v) = ⟨q, v⟩  for every q ∈ ∂J(v),

and the Bregman distance becomes

    D_J^q(u,v) = J(u) - ⟨q, u⟩.

Error Estimation
The Bregman distance for TV is not a strict distance; it can vanish for u ≠ v. In particular, d_TV is zero under a change of contrast (Resmerita-Scherzer 06). The Bregman distance is still nonnegative (TV is convex), and it can provide information about edges.
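Both properties, blindness to contrast changes and sensitivity to edge dislocation, are easy to check numerically. A small self-contained example for the discrete 1D total variation J(u) = Σ_i |u_{i+1} - u_i|; the signals and the particular subgradient choice are illustrative:

    import numpy as np

    def tv(u):
        # discrete 1D total variation
        return np.sum(np.abs(np.diff(u)))

    def tv_subgradient(v):
        # one particular subgradient q = D^T sign(Dv) of the discrete TV at v
        s = np.sign(np.diff(v))
        q = np.zeros_like(v)
        q[:-1] -= s
        q[1:] += s
        return q

    def d_tv(u, v):
        # generalized Bregman distance d^q(u, v) = J(u) - J(v) - <q, u - v>
        q = tv_subgradient(v)
        return tv(u) - tv(v) - np.dot(q, u - v)

    v = np.array([0., 0., 1., 1., 0.])              # reference signal with two edges
    print(d_tv(2.0 * v, v))                         # 0.0: a contrast change is invisible
    print(d_tv(np.array([0., 1., 1., 0., 0.]), v))  # 2.0: a shifted edge is detected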

Error Estimation
Let v be piecewise constant, with white background and color values c_j on regions Ω_j. Then we obtain subgradients of the form q = ∇·z, where the vector field z is constructed from the signed distance functions of the region boundaries, with |z| ≤ 1 and z equal to the unit normal on the boundaries.

Error Estimation
The Bregman distances are then given by integrals over the region boundaries; in the limit we obtain, for u piecewise continuous, a quantity measuring how the jumps of u match the edges of v.

Error Estimation
For an estimate in terms of λ we need a smoothness condition on the data: there exists a subgradient q ∈ ∂J(f) (a source condition). Optimality condition for ROF:

    p + λ (u - f) = 0,   p ∈ ∂J(u).

Error Estimation
Subtract q from the optimality condition and take the duality product with u - f:

    d^sym(u,f) + λ ||u - f||² = -⟨q, u - f⟩.

Estimate for the Bregman distance (mb-Osher 04):

    d^sym(u,f) ≤ ||q||² / (2λ).
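The one inequality hidden in this step is Young's inequality; spelled out (a reconstruction of the standard argument, not the slide's own computation):

    -⟨q, u - f⟩ ≤ ||q|| ||u - f|| ≤ (λ/2) ||u - f||² + ||q||² / (2λ),

so that, combined with the identity above,

    d^sym(u,f) + (λ/2) ||u - f||² ≤ ||q||² / (2λ).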

Error Estimation
In practice we have to deal with noisy data f, a perturbation of some exact data g with ||f - g|| ≤ δ. Estimate for the Bregman distance, now with q ∈ ∂J(g):

    d^sym(u,g) ≤ (λ/2) δ² + ||q|| δ + ||q||² / (2λ).

Error Estimation
Optimal choice of the penalization parameter: balancing the first and last terms gives λ ∼ ||q||/δ, i.e. 1/λ of the order of the noise level (δ² being the noise variance), and then

    d^sym(u,g) = O(δ).

Error Estimation
Direct extension to deconvolution / linear inverse problems under the standard source condition

    q = K*w ∈ ∂J(g)

(mb-Osher 04). Extensions: stronger estimates under stronger conditions (Resmerita 05); nonlinear inverse problems (Resmerita-Scherzer 06).

Discretization
Natural choice: primal discretization with piecewise constant functions on a grid.
Problem 1: numerical analysis (characterization of discrete subgradients).
Problem 2: the discrete problems are the same for any anisotropic version of the total variation.

Discretization
In multiple dimensions, nonconvergence of the primal discretization can be shown for the isotropic TV (p = 2). Convergence holds for the anisotropic TV (p = 1) on axis-aligned rectangular grids (Fitzpatrick-Keeling 1997).
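For concreteness, the two discrete variants referred to as p = 2 (isotropic) and p = 1 (anisotropic) can be written as follows; a self-contained sketch with forward differences and Neumann boundaries, where the naming is mine:

    import numpy as np

    def tv_iso(u):
        # isotropic discrete TV (p = 2): sum of Euclidean norms of pixel gradients
        gx = np.diff(u, axis=0, append=u[-1:, :])
        gy = np.diff(u, axis=1, append=u[:, -1:])
        return np.sum(np.sqrt(gx ** 2 + gy ** 2))

    def tv_aniso(u):
        # anisotropic discrete TV (p = 1): |u_x| + |u_y|, favors axis-aligned edges
        gx = np.diff(u, axis=0, append=u[-1:, :])
        gy = np.diff(u, axis=1, append=u[:, -1:])
        return np.sum(np.abs(gx) + np.abs(gy))

The two definitions agree on axis-aligned edges but differ by up to a factor √2 on diagonal ones, which is why the distinction matters for the convergence results quoted above.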

Primal-Dual Discretization
Alternative: perform a primal-dual discretization of the optimality system (a variational inequality): find u and g ∈ K with

    λ (u - f) = ∇·g,   ∫_Ω u ∇·(g̃ - g) dx ≤ 0  for all g̃ ∈ K,

with the convex set

    K = { g : |g(x)| ≤ 1 a.e. }.
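For reference, the dual characterization of the total variation that underlies this formulation (standard in the literature cited above):

    |u|_TV = sup { ∫_Ω u ∇·g dx : g ∈ C_c¹(Ω; ℝ^d), |g(x)| ≤ 1 for all x },

so K is exactly the constraint set of admissible dual fields g.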

Primal-Dual Discretization
Discretize the convex set K with appropriate finite elements: piecewise linear in 1D, Raviart-Thomas elements in multi-D.

Primal / Primal-Dual Discretization
In 1D the primal, primal-dual, and dual discretizations are equivalent, and an error estimate in the Bregman distance follows by analogous techniques. Note that only a natural condition on the data is needed for the proof.

Primal / Primal-Dual Discretization
In multi-D similar estimates hold, with additional work since the projection of a subgradient is not a discrete subgradient. The primal-dual discretization is equivalent to discretized dual minimization (Chambolle 03, Kunisch-Hintermüller 04), which can be used to obtain existence of a discrete solution and stability of p (mb 06/07).

Cartesian Grids
For most imaging applications Cartesian grids are used, and the primal-dual discretization can be reinterpreted as a finite difference scheme in this setup: the image intensity corresponds to the color of a pixel of width h around the grid point. Raviart-Thomas elements are particularly easy on Cartesian grids; the first component is piecewise linear in x and piecewise constant in y, z, etc. This leads to a simple finite difference scheme on a staggered grid.

Extension I: Iterative Refinement & ISS
ROF minimization has a systematic error: the total variation of the reconstruction is smaller than the total variation of the clean image, and image features are left in the residual f - u. [Figure: clean image g, noisy image f, ROF reconstruction u, residual f - u]

Extension I: Iterative Refinement & ISS
Idea: add the residual („noise") back to the image to restore the features that were damped too much, then apply ROF again. Iterative procedure (Osher-mb-Goldfarb-Xu-Yin 04):

    u^{k+1} = argmin_u [ J(u) + (λ/2) ||u - f - v^k||² ],   v^{k+1} = v^k + f - u^{k+1},   v^0 = 0.
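In code the procedure is only a few lines. A hedged sketch; the names are mine, and `denoise` stands for any ROF solver, e.g. the rof_denoise sketch earlier:

    import numpy as np

    def iterative_refinement(f, denoise, n_outer=5):
        # Osher-mb-Goldfarb-Xu-Yin 04: add the residual ("noise") back
        # to the data and denoise again; lam * v is the dual variable p.
        v = np.zeros_like(f)
        u = np.zeros_like(f)
        for _ in range(n_outer):
            u = denoise(f + v)   # ROF step on data plus accumulated residual
            v += f - u           # v^{k+1} = v^k + f - u^{k+1}
        return u

    # usage: u = iterative_refinement(f, lambda g: rof_denoise(g, lam=20.0))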

Extension I: Iterative Refinement & ISS
Improves the reconstructions significantly.


Extension I: Iterative Refinement & ISS
Simple observation from the optimality condition:

    p^{k+1} = λ (f + v^k - u^{k+1}) = λ v^{k+1} ∈ ∂J(u^{k+1}).

Consequently, iterative refinement is equivalent to the Bregman iteration

    u^{k+1} = argmin_u [ D_J^{p^k}(u, u^k) + (λ/2) ||u - f||² ].

Extension I: Iterative Refinement & ISS
The choice of the parameter is less important; it can be kept small (oversmoothing), since the regularizing effect comes from appropriate stopping. Quantitative stopping rules are available, or „stop when you are happy" (S.O.). The limit λ → 0 can be studied and yields a gradient flow for the dual variable („inverse scale space"):

    ∂_t p(t) = f - u(t),   p(t) ∈ ∂J(u(t)),   p(0) = 0

(mb-Gilboa-Osher-Xu 06, mb-Frick-Osher-Scherzer 06).

Extension I: Iterative Refinement & ISS
Non-quadratic fidelities are possible; some caution is needed for L¹ fidelity (He-mb-Osher 05, mb-Frick-Osher-Scherzer 06). Error estimation in the Bregman distance: mb-Resmerita 06, in preparation. For further details see the talk of Klaus Frick.

Extension I: Inverse Scale Space
[Movie by M. Bachmayr, Master Thesis 06]

Extension I: Iterative Refinement & ISS
The application to other regularization techniques, e.g. wavelet thresholding, is straightforward. Starting from soft shrinkage, iterated refinement yields firm shrinkage, and the inverse scale space limit becomes hard shrinkage (Osher-Xu 06). The Bregman distance is a natural sparsity measure: the source condition just requires a sparse signal, and the number of nonzero components is the smoothness measure in the error estimates.
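To make the shrinkage claims concrete, here are the three rules side by side; firm shrinkage is written in the standard two-threshold (Gao-Bruce) form, and identifying it with the rule obtained from iterated refinement is the slide's claim, not verified here:

    import numpy as np

    def soft(x, t):
        # soft shrinkage: proximal map of t*|.|, biases all coefficients by t
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def hard(x, t):
        # hard shrinkage: keeps coefficients above the threshold unchanged
        return np.where(np.abs(x) > t, x, 0.0)

    def firm(x, t1, t2):
        # firm shrinkage (t1 < t2): zero below t1, identity above t2,
        # linear ramp in between, so large coefficients are unbiased
        ax = np.abs(x)
        ramp = np.sign(x) * t2 * (ax - t1) / (t2 - t1)
        return np.where(ax <= t1, 0.0, np.where(ax >= t2, x, ramp))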

Extension I: Iterative Refinement & ISS
Total variation, inverse scale space, and shrinkage techniques can be combined nicely. See the talk by Lin He.

Extension II: Anisotropy
Total variation prefers isotropic structures (circles, spheres) or special anisotropies, while in many applications one wants sharp corners in different directions. Adaptive anisotropy is needed; it can be incorporated in ROF and ISS. See the talk by Benjamin Berkels.

Extension III: Inpainting
It is difficult to construct total variation techniques for inpainting: the original extensions of ROF failed to obtain natural connectivity (see the book by Chan-Shen 05). With inpainting region D and the (noisy) image f given on Ω \ D, try to minimize

    J(u) + (λ/2) ∫_{Ω\D} (u - f)² dx.

Extension III: Inpainting
The optimality condition will have the form

    p + λ χ_{Ω\D} A (u - f) = 0,   p ∈ ∂J(u),

with A a linear operator defining the norm of the fidelity term. In particular p = 0 in D!

Extension III: Inpainting
A different iterated approach (motivated by Cahn-Hilliard inpainting, Bertozzi et al 05): minimize in each step

    (1/(2τ)) ||u - u^k||² + (λ/2) ∫_Ω (u - f^k)² dx + J(u),

where f^k = f on Ω \ D and f^k = u^k in D. The first term provides damping, the second fidelity (fit to f where given, and to the old iterate in the inpainting region), the third smoothing.

Extension III: Inpainting
Letting the damping parameter τ tend to zero yields a continuous flow. For the H^{-1} norm this is a fourth-order flow,

    ∂_t u = Δp + λ χ_{Ω\D} (f - u),   p ∈ ∂J(u).

A stationary solution (existence?) satisfies

    Δp + λ χ_{Ω\D} (f - u) = 0.

Extension III: Inpainting
Result: penguins. [Figure: inpainting result on the penguin test image]

Extension IV: Manifolds
Original motivation: Osher-Marquina 01 used a preconditioned gradient flow for ROF,

    ∂_t u = |∇u| ( ∇·(∇u/|∇u|) + λ (f - u) ),

whose stationary state was assumed to be the ROF minimizer. Computational observation: this is not always true! Trivial observation: for the initial value u(0) = 0 the flow remains zero for all time!

Extension IV: Manifolds
Embarrassing observation: the flow is always created by transport from the initial value. Important observation: the stationary state minimizes ROF on the manifold of images reachable by such transport from the initial value.

Extension IV: Manifolds
Surprising observation: for f the indicator function of a convex set, the flow is equivalent to the gradient flow of the L¹ version of ROF, so there is no loss of contrast! A more detailed analysis for general images is needed. A possible extension is ROF minimization on other manifolds via metric gradient flows.

Download and Contact
Papers and talks: from October: wwwmath1.uni-muenster.de/num