Johann Radon Institute for Computational and Applied Mathematics: 1/31 Signal- und Bildverarbeitung, Image Analysis and Processing Arjan Kuijper Johann Radon Institute for Computational and Applied Mathematics (RICAM) Austrian Academy of Sciences Altenbergerstraße 56 A-4040 Linz, Austria

Johann Radon Institute for Computational and Applied Mathematics: 2/31 Last week
Total variation minimizing models have become one of the most popular and successful methodologies for image restoration. The ROF (Rudin-Osher-Fatemi) model is one of the earliest and best-known examples of PDE-based edge-preserving denoising. It was designed with the explicit goal of preserving sharp discontinuities (edges) in images while removing noise and other unwanted fine-scale detail. However, it has some drawbacks:
–Loss of contrast
–Loss of geometry
–Staircasing
–Loss of texture
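For reference, the model behind these drawbacks can be recalled as follows (a standard recap; f denotes the noisy image and λ the fidelity weight): the ROF approach minimizes a total-variation functional with a quadratic fidelity term, and its gradient-descent flow is the TV flow plus a fidelity force:

```latex
E(u) \;=\; \int_\Omega |\nabla u|\,dx \;+\; \frac{\lambda}{2}\int_\Omega (u-f)^2\,dx,
\qquad
\frac{\partial u}{\partial t} \;=\; \nabla\!\cdot\!\left(\frac{\nabla u}{|\nabla u|}\right) \;-\; \lambda\,(u-f).
```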

Johann Radon Institute for Computational and Applied Mathematics: 3/31 Today
Mean curvature motion
–Curve evolution
–Denoising
–Edge preserving
–Implementation
–Isophote vs. image implementation
Taken from:

Johann Radon Institute for Computational and Applied Mathematics: 4/31 Overview of Evolution Equations
We start with a generalized framework for a number of nonlinear evolution equations that have appeared in the literature. In Alvarez et al. (1993) the interested reader can find an extensive treatment of evolution equations of the general form considered here. Imposing various axioms on a multi-scale analysis, the authors derive a number of evolution equations that are listed here. Here, we distinguish two approaches:
–i) evolution of the luminance function, and
–ii) evolution of the level sets of the image.
These approaches are dual in the sense that one determines the other.
Alvarez, L., Guichard, F., Lions, P.-L., and Morel, J.-M. (1993). Axioms and fundamental equations of image processing. Archive for Rational Mechanics and Analysis, 123(3):199–257.
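In this lecture F denotes the flow (flux) of the luminance, so the general evolution can be written in divergence form (a reconstruction consistent with the 'Choices for F' slide below):

```latex
\frac{\partial L}{\partial t} \;=\; \nabla\!\cdot\!\mathbf{F},
\qquad L(\mathbf{x},0) \;=\; L_0(\mathbf{x}),
```

with different choices of the flow F giving the evolutions listed on the 'Choices for F' slide; the dual, level-set view evolves the isophote curves of L instead.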

Johann Radon Institute for Computational and Applied Mathematics: 5/31

Johann Radon Institute for Computational and Applied Mathematics: 6/31

Johann Radon Institute for Computational and Applied Mathematics: 7/31 Choices for F
F = ∇L: In this case we find the linear diffusion equation. The luminance L is conserved under the flow ∇L.
F = c(|∇L|) ∇L: The heat conduction coefficient c is not a constant anymore but depends on local image properties (Perona and Malik, 1990), resulting in nonlinear or geometry-driven diffusion.
F = ∇L / |∇L|: This choice has been used in Rudin et al. (1992) to remove noise based on nonlinear total variation.
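A minimal numerical sketch of these three choices (my own illustration, not the course's code): an explicit update L ← L + Δt ∇·F on a pixel grid, with the flux F chosen as ∇L, c(|∇L|)∇L (with an illustrative contrast parameter k), or the regularized ∇L/|∇L| (with an illustrative regularization eps):

```python
import numpy as np

def grad(L):
    # forward differences with replicated border (one simple discretization choice)
    Lx = np.diff(L, axis=1, append=L[:, -1:])
    Ly = np.diff(L, axis=0, append=L[-1:, :])
    return Lx, Ly

def div(Fx, Fy):
    # backward differences: discrete adjoint of the forward-difference gradient
    dFx = np.diff(Fx, axis=1, prepend=Fx[:, :1])
    dFy = np.diff(Fy, axis=0, prepend=Fy[:1, :])
    return dFx + dFy

def evolve(L, flux, dt=0.1, steps=50):
    """Explicit evolution  L <- L + dt * div(F)  for a given flux function."""
    L = L.astype(float).copy()
    for _ in range(steps):
        Fx, Fy = flux(L)
        L += dt * div(Fx, Fy)
    return L

# Flux choices from the slide (k and eps are illustrative parameters):
def flux_linear(L):                      # F = grad L  -> linear diffusion
    return grad(L)

def flux_perona_malik(L, k=10.0):        # F = c(|grad L|) grad L
    Lx, Ly = grad(L)
    c = 1.0 / (1.0 + (Lx**2 + Ly**2) / k**2)
    return c * Lx, c * Ly

def flux_tv(L, eps=1e-3):                # F = grad L / |grad L|  (regularized)
    Lx, Ly = grad(L)
    mag = np.sqrt(Lx**2 + Ly**2 + eps**2)
    # note: the explicit scheme then needs a much smaller dt (roughly dt <= eps/4)
    return Lx / mag, Ly / mag
```

For example, evolve(noisy, flux_perona_malik, dt=0.1, steps=100) runs Perona-Malik diffusion; for the TV flux a smaller time step (or a larger eps) is needed, in line with the stability discussion later in this lecture.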

Johann Radon Institute for Computational and Applied Mathematics: 8/31 Curve evolution
Definition 8 (Curve Evolution). A general equation which evolves planar curves as a function of their geometry can be written as shown below. In other words, a curve evolves as a function of curvature (and derivatives of curvature with respect to arc-length) only. The definition above follows from the following consideration: a general evolution of a curve can be written as a combination of a tangential and a normal component (see below), where T and N denote the tangential and normal unit vectors to the curve, respectively. However, the T component only affects the parameterization.
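In standard notation (a reconstruction consistent with the 'Choices for g' slides that follow), the two equations referred to here read:

```latex
\frac{\partial C}{\partial t} \;=\; g\!\left(\kappa,\,\kappa_s,\,\kappa_{ss},\,\dots\right)\mathbf{N}
\qquad\text{and}\qquad
\frac{\partial C}{\partial t} \;=\; \alpha\,\mathbf{T} \;+\; \beta\,\mathbf{N},
```

where κ is the curvature, s the arc-length, and T, N the unit tangent and normal; only the normal component β changes the trace of the curve.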

Johann Radon Institute for Computational and Applied Mathematics: 9/31 F. Cao, Geometric Curve Evolution and Image Processing, LNM 1805, Springer, 2002

Johann Radon Institute for Computational and Applied Mathematics: 10/31 Choices for g
g = c: This choice results in normal motion: isophotes move in the normal direction with velocity c. This equation is equivalent to the morphological operation of erosion (or, depending on the sign of c, dilation) with a disc as structuring element.
g = κ: This equation evolves the curve as a function of curvature and is known as the Euclidean shortening flow. It implies the following evolution of the luminance function (written out below):
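The luminance (level-set) form of the flow ∂C/∂t = κN is standard; written out in Cartesian and gauge coordinates it reads:

```latex
\frac{\partial L}{\partial t}
 \;=\; \kappa\,|\nabla L|
 \;=\; |\nabla L|\;\nabla\!\cdot\!\frac{\nabla L}{|\nabla L|}
 \;=\; L_{vv}
 \;=\; \frac{L_y^2 L_{xx} - 2 L_x L_y L_{xy} + L_x^2 L_{yy}}{L_x^2 + L_y^2},
```

with v the unit direction tangent to the isophote (perpendicular to the gradient). This is the mean curvature motion used in the remainder of the lecture.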

Johann Radon Institute for Computational and Applied Mathematics: 11/31 Choices for g
This equation has been proposed since it is invariant under Euclidean transformations. An elegant generalization, which finds the flow that is invariant with respect to any Lie group action, is as follows: the evolution given by the equation below, where r denotes the arc-length which is invariant under the group, defines a flow which is invariant under the action of the group. These equations locally behave as the geometric heat equation, with r the G-invariant metric (arc-length). If r is the Euclidean arc-length (r_e) we find the Euclidean shortening flow.
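In common notation (a reconstruction), the group-invariant geometric heat flow referred to here is:

```latex
\frac{\partial C}{\partial t} \;=\; \frac{\partial^2 C}{\partial r^2},
```

with r the G-invariant arc-length; for the Euclidean group, r = r_e and ∂²C/∂r_e² = κN, which is exactly the Euclidean shortening flow.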

Johann Radon Institute for Computational and Applied Mathematics: 12/31
This is not the second-order derivative with respect to the spatial coordinates (left), but with respect to the parameterization (right)!

Johann Radon Institute for Computational and Applied Mathematics: 13/31 Choices for g
This evolution is known as the affine shortening flow; inserting the affine arc-length (r_a), we find the flow written out below.
g = a + b κ: A combination of normal motion and Euclidean shortening flow. Note that a and b have different dimensions (the value b/a is not invariant under a spatial rescaling x → λx); we have to work in natural coordinates, or multiply nth-order derivatives with the nth power of scale. The luminance function evolves according to the equation below. This is a Hamilton-Jacobi equation with a parabolic right-hand side. Since there are two independent parameters, it generates a 2-dimensional Entropy Scale Space, with a reaction axis (owing to the hyperbolic term) and a diffusion axis (owing to the parabolic right-hand side).
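In common notation (a hedged reconstruction), the affine shortening flow and the luminance evolution for g = a + bκ read:

```latex
\frac{\partial C}{\partial t} \;=\; \kappa^{1/3}\,\mathbf{N},
\qquad\qquad
\frac{\partial L}{\partial t} \;=\; a\,|\nabla L| \;+\; b\,\kappa\,|\nabla L|
 \;=\; a\,|\nabla L| \;+\; b\,L_{vv},
```

where a|∇L| is the hyperbolic (reaction, erosion/dilation) term and b L_vv the parabolic (diffusion) term.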

Johann Radon Institute for Computational and Applied Mathematics: 14/31 Overview

Johann Radon Institute for Computational and Applied Mathematics: 15/31 Numerical stability
When we approximate a partial differential equation with finite differences in the forward Euler scheme, we want to make large steps in evolution time (or scale) to reach the final evolution in the fastest way, with as few iterations as possible. How large a step are we allowed to make? In other words, can we find a criterion under which the scheme remains stable? A famous answer to the question of stability was derived by von Neumann, and is called the von Neumann stability criterion. Alternative names:
–Courant stability criterion
–CFL condition (Courant-Friedrichs-Lewy condition)
–…

Johann Radon Institute for Computational and Applied Mathematics: 16/31
Consider the 1D linear diffusion equation and its forward-Euler (forward-time, centered-space) finite-difference approximation, written out below. We define the time-space ratio R = Δt/Δx² and rewrite the scheme in the form f(j,n) = 0.
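Explicitly, writing L_j^n ≈ L(jΔx, nΔt), one standard way to write these formulas out (the slide's own notation may differ slightly) is:

```latex
\frac{\partial L}{\partial t} = \frac{\partial^2 L}{\partial x^2},
\qquad
\frac{L_j^{\,n+1}-L_j^{\,n}}{\Delta t}
 \;=\; \frac{L_{j+1}^{\,n}-2L_j^{\,n}+L_{j-1}^{\,n}}{\Delta x^{2}},
\qquad
R \;\equiv\; \frac{\Delta t}{\Delta x^{2}},

f(j,n) \;\equiv\; L_j^{\,n+1}-L_j^{\,n}-R\left(L_{j+1}^{\,n}-2L_j^{\,n}+L_{j-1}^{\,n}\right) \;=\; 0 .
```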

Johann Radon Institute for Computational and Applied Mathematics: 17/31
Let the solution L_j^n of our PDE be a generalized exponential function, with k a general (spatial) wave number. When we insert this solution into our discretized PDE, we get the relation shown below. We look for the wave number for which the increment per step is largest in magnitude, since this worst case over the domain determines the stability condition.
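The standard von Neumann ansatz and the resulting amplification factor ξ (consistent with the expressions used on the next slide) are:

```latex
L_j^{\,n} \;=\; \xi^{\,n}\, e^{\,i\,k\,j\,\Delta x},
\qquad
\xi \;=\; 1 + R\left(e^{\,i k \Delta x} - 2 + e^{-\,i k \Delta x}\right)
     \;=\; 1 + 2R\left(\cos(k\Delta x) - 1\right).
```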

Johann Radon Institute for Computational and Applied Mathematics: 18/31
The amplitude ξ^n of the solution ξ^n e^{ikjΔx} should not explode for large n, so in order to get a stable solution we need the criterion |ξ| ≤ 1. Because cos(kΔx) − 1 is always non-positive, this means that R = Δt/Δx² ≤ 1/2. This is an essential result. When we take too large a step size Δt in order to reach the final time faster, we may find that the result becomes unstable. The von Neumann criterion gives us the largest step with which we can reach the final result; it is safe to stay well under this maximum value, so as not to operate too close to the stability limit.
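A small numerical check of this criterion (my own sketch; the initial profile and step counts are illustrative, not from the slides):

```python
import numpy as np

def heat_1d(L0, R, steps):
    """Forward-time, centered-space evolution of L_t = L_xx with ratio R = dt/dx^2."""
    L = L0.astype(float).copy()
    for _ in range(steps):
        L[1:-1] += R * (L[2:] - 2.0 * L[1:-1] + L[:-2])  # interior update, end values kept fixed
    return L

x = np.linspace(0.0, 1.0, 101)
L0 = np.sin(np.pi * x)                        # smooth initial profile
stable   = heat_1d(L0, R=0.45, steps=500)     # R < 1/2: the profile decays smoothly
unstable = heat_1d(L0, R=0.55, steps=500)     # R > 1/2: round-off errors grow without bound
print(np.abs(stable).max(), np.abs(unstable).max())
```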

Johann Radon Institute for Computational and Applied Mathematics: 19/31
The pixel step Δx is usually unity, so the maximum evolution step size is Δt = 1/2. This is indeed a strong limitation, making many iteration steps necessary. Gaussian derivative kernels improve this situation considerably. We start again with a general possible solution for the luminance function L(x, j, n), where x is the spatial coordinate, j is the discrete spatial grid position, and n is the discrete moment in evolution time of the PDE.

Johann Radon Institute for Computational and Applied Mathematics: 20/31
The Laplacian in 1D is just the second-order spatial derivative. We recall that the convolution of a function f with a kernel g is defined as the integral below; for discrete location j at time step n we then get the blurred (Gaussian-derivative filtered) intensity shown below.
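In common notation (a hedged reconstruction; the kernel here is the second-order Gaussian derivative of scale σ):

```latex
(f * g)(x) \;=\; \int_{-\infty}^{\infty} f(x-y)\, g(y)\, dy,
\qquad
\left(\partial_x^2 G_\sigma * e^{\,i k x}\right)(x)
 \;=\; -\,k^{2}\, e^{-k^{2}\sigma^{2}/2}\; e^{\,i k x},
```

so a single harmonic e^{ikx} is simply multiplied by the factor −k² e^{−k²σ²/2} when the Laplacian is computed with Gaussian derivatives at scale σ.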

Johann Radon Institute for Computational and Applied Mathematics: 21/31
If we divide this by the original intensity function, we find a multiplication factor. We are looking for the largest absolute value of this factor, because then we can take the largest evolution steps. Because the factor is negative everywhere, we need to find its minimum with respect to the wave number; from this extremal factor we then find the maximum size of the time step.
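Carrying out this minimization explicitly (standard calculus on the multiplication factor found above):

```latex
\frac{d}{dk}\left(-k^{2}\, e^{-k^{2}\sigma^{2}/2}\right) = 0
\;\Longrightarrow\;
k^{2} = \frac{2}{\sigma^{2}},
\qquad
\min_{k}\left(-k^{2}\, e^{-k^{2}\sigma^{2}/2}\right) \;=\; -\,\frac{2}{e\,\sigma^{2}} .
```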

Johann Radon Institute for Computational and Applied Mathematics: 22/31
So we find for the Gaussian-derivative implementation ξ = 1 − 2Δt/(e σ²), and |ξ| ≤ 1 thus implies Δt ≤ e σ². Introducing this in the time-space ratio R we get the limiting step size for a stable solution under Gaussian blurring: R ≤ e σ² / Δx². Note that this enables substantially larger step sizes than in the nearest-neighbor case.

Johann Radon Institute for Computational and Applied Mathematics: 23/31
For Gaussian blurring of an image with σ = 0.8 pixels for the Laplacian operator, we get Δs < e · 0.8² ≈ 1.74. We blur a test image up to σ = 64 pixels in two ways:
–a) with normal Gaussian convolution, and
–b) with the numerical implementation of the diffusion equation and Gaussian-derivative calculation of the Laplacian.
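A sketch of this comparison in Python/SciPy (my own reconstruction of the experiment; the random 128×128 test image and the step Δt = 1.5 < e·0.8² are illustrative choices, not the course's test image):

```python
import numpy as np
from scipy import ndimage

def blur_by_diffusion(L, sigma_target, sigma_op=0.8, dt=1.5):
    """Iterate L <- L + dt * Laplacian(L), with the Laplacian computed by
    Gaussian-derivative filtering at scale sigma_op.  For L_t = Laplacian(L),
    diffusion time t and blur scale are related by sigma^2 = 2 t, so we stop
    at t = sigma_target^2 / 2.  dt must stay below e * sigma_op^2 (~1.74)."""
    L = L.astype(float).copy()
    t_end = 0.5 * sigma_target**2
    n_steps = int(np.ceil(t_end / dt))
    dt = t_end / n_steps                     # adjust so the total time is exactly t_end
    for _ in range(n_steps):
        L += dt * ndimage.gaussian_laplace(L, sigma_op, mode="nearest")
    return L

rng = np.random.default_rng(0)
img = rng.random((128, 128))

direct   = ndimage.gaussian_filter(img, sigma=64, mode="nearest")   # a) plain Gaussian blur
iterated = blur_by_diffusion(img, sigma_target=64)                  # b) iterated diffusion
print(np.abs(direct - iterated).max())       # the two results should agree closely
```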

Johann Radon Institute for Computational and Applied Mathematics: 24/31
Alvarez, Guichard, Lions and Morel realized that the P&M variable-conductance diffusion was complicated by the choice of the parameter k. They reasoned that the principal influence on the local conductivity should be to direct the flow in the direction of the gradient only (see the equation below). They named the affine version (right-hand side to the power 1/3) the 'fundamental equation of image processing'. This is the unique model of multi-scale analysis of an image that is affine invariant and morphologically invariant.
L. Alvarez, F. Guichard, P.-L. Lions, and J.-M. Morel. Axioms and fundamental equations of image processing. Archive for Rational Mechanics and Analysis, 123(3):199–257, 1993.
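Written out (a standard reconstruction), the equation and its affine version are:

```latex
\frac{\partial L}{\partial t}
 \;=\; |\nabla L|\;\nabla\!\cdot\!\frac{\nabla L}{|\nabla L|}
 \;=\; \kappa\,|\nabla L|
\qquad\text{and}\qquad
\frac{\partial L}{\partial t}
 \;=\; |\nabla L|\left(\nabla\!\cdot\!\frac{\nabla L}{|\nabla L|}\right)^{\!1/3}
 \;=\; |\nabla L|\;\kappa^{1/3}
\quad\text{(AMSS)} .
```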

Johann Radon Institute for Computational and Applied Mathematics: 25/31 Grayscale invariance
The non-affine version can be written as ∂L/∂t = |∇L| ∇·(∇L/|∇L|) = L_vv. There are a number of differences between this equation and the Perona & Malik equation:
–the flow (or flux) is independent of the magnitude of the gradient;
–there is no extra free parameter, like the edge-strength turnover parameter k;
–in the P&M equation the diffusion decreases when the gradient is large, resulting in contrast-dependent smoothing;
–this equation is gray-scale invariant (the evolution does not change when the grayscale function L is modified by a monotonically increasing or decreasing function f(L), f′ ≠ 0).
This PDE is known as
–Euclidean shortening flow,
–curve shortening,
–mean curvature motion (MCM).
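A minimal sketch of the image-based (gauge-coordinate) implementation of this PDE, computing L_vv with Gaussian derivatives (my own illustration; scale, step size and iteration count are illustrative):

```python
import numpy as np
from scipy import ndimage

def gauss_deriv(L, sigma, dy, dx):
    """Gaussian derivative of order (dy, dx) at scale sigma."""
    return ndimage.gaussian_filter(L, sigma, order=(dy, dx), mode="nearest")

def mcm(L, sigma=1.0, dt=0.1, steps=100, eps=1e-8):
    """Mean curvature motion / Euclidean shortening flow:
       L_t = L_vv = (Ly^2 Lxx - 2 Lx Ly Lxy + Lx^2 Lyy) / (Lx^2 + Ly^2)."""
    L = L.astype(float).copy()
    for _ in range(steps):
        Lx  = gauss_deriv(L, sigma, 0, 1)
        Ly  = gauss_deriv(L, sigma, 1, 0)
        Lxx = gauss_deriv(L, sigma, 0, 2)
        Lyy = gauss_deriv(L, sigma, 2, 0)
        Lxy = gauss_deriv(L, sigma, 1, 1)
        Lvv = (Ly**2 * Lxx - 2.0 * Lx * Ly * Lxy + Lx**2 * Lyy) / (Lx**2 + Ly**2 + eps)
        L += dt * Lvv                          # diffuse only along the isophote direction v
    return L
```

Usage, e.g. denoised = mcm(noisy_image, sigma=1.0, dt=0.1, steps=200); larger steps are possible within the Gaussian-derivative stability bound derived earlier.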

Johann Radon Institute for Computational and Applied Mathematics: 26/31 Numerical examples of the shortening flow
Same test image. By blurring the image the noise is gone, but the edge is gone too. MCM removes the noise and keeps the edge.

Johann Radon Institute for Computational and Applied Mathematics: 27/31
The noise gradually disappears in this nonlinear scale-space evolution, while the edge strength is well preserved. Because the flux term, expressed in Gaussian derivatives, is rotation invariant, the edges are well preserved irrespective of their direction: this is edge-preserving smoothing.

Johann Radon Institute for Computational and Applied Mathematics: 28/31 Why the name 'shortening flow'?
For the metric of the curve, defined as the length of the tangent vector |∂C/∂p|, where p is an arbitrary parametrization of the curve, the evolution of the metric under the flow can be written down explicitly; the total length of the curve then evolves as shown below, so the length is always decreasing with time.
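The standard computation behind this statement is: for the metric g(p,t) = |∂C/∂p| and the curvature flow ∂C/∂t = κN,

```latex
\frac{\partial g}{\partial t} \;=\; -\,\kappa^{2}\, g,
\qquad
\frac{d}{dt}\,\mathrm{Length}(C)
 \;=\; \frac{d}{dt}\int g\,dp
 \;=\; -\int \kappa^{2}\, ds \;\le\; 0 ,
```

so the length decreases monotonically, which is where the name 'shortening flow' comes from.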

Johann Radon Institute for Computational and Applied Mathematics: 29/31 Summary of Numerical Stability
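The two stability bounds derived on the preceding slides can be summarized as:

```latex
\text{nearest-neighbor Laplacian (1D):}\quad \Delta t \;\le\; \tfrac{1}{2}\,\Delta x^{2},
\qquad\qquad
\text{Gaussian-derivative Laplacian at scale }\sigma:\quad \Delta t \;\le\; e\,\sigma^{2}.
```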

Johann Radon Institute for Computational and Applied Mathematics: 30/31 Summary
There is a strong analogy between curve evolution and PDE-based schemes: they can be related directly to one another. Euclidean shortening flow restricts the diffusion to the direction perpendicular to the gradient only. The divergence of the flow in the equation, multiplied by the gradient magnitude, equals the second-order gauge derivative L_vv with respect to v, the direction tangential to the isophote. Implementation with Gaussian derivatives may allow larger time steps.

Johann Radon Institute for Computational and Applied Mathematics: 31/31 Next week
Non-linear diffusion: Mumford-Shah
–Diffusion-reaction equations
–Energy functional
–Edge set
…