Slide 1/48: Mathematische Grundlagen in Vision & Grafik (710.100); more appropriate title: Scale Space and PDE Methods in Image Analysis and Processing
Arjan Kuijper
Johann Radon Institute for Computational and Applied Mathematics (RICAM), Austrian Academy of Sciences, Altenbergerstraße 56, A-4040 Linz, Austria

Slide 2/48: Summary of the previous time
– Observations are necessarily done through a finite aperture. Observed noise is part of the observation.
– The aperture cannot take just any form: there are specific physical constraints on the early-vision front-end kernel.
– We are able to set up a 'first principles' framework from which the exact sensitivity function of the measurement aperture can be derived.
– There exist many such derivations for an uncommitted kernel, all leading to the same unique result: the Gaussian kernel.
– Differentiation of discrete data is done by convolution with the derivative of the observation kernel.

Slide 3/48: Summary of the previous time
The Gaussian kernel...
– is normalized,
– can be cascaded,
– can be made dimensionless using 'natural coordinates',
– is the 'blurred version' of the Dirac delta function,
– is the result of the central limit theorem,
– is anisotropic if the scales are different for the different dimensions,
– acts as a low-pass filter in Fourier space,
– is described by the diffusion equation.
Many functions cannot be differentiated.
– The solution, due to Schwartz, is to regularize the data by convolving them with a smooth test function.
– Taking the derivative of this 'observed' function is then equivalent to convolving with the derivative of the test function.
A well-known variational form of regularization is the so-called Tikhonov regularization.
– A functional is minimized in the L2 sense with the constraint of well-behaving derivatives.
– Tikhonov regularization with inclusion of the proper behavior of all derivatives is essentially equivalent to Gaussian blurring.

Slide 4/48: Today – part 1
– Gaussian derivatives
– Natural limits on observations
– Deblurring Gaussian blur
– Multi-scale derivatives: implementations

Slide 5/48: Gaussian derivatives
– Shape and algebraic structure
– Gaussian derivatives in the Fourier domain
– Zero crossings of Gaussian derivative functions
– The correlation between Gaussian derivatives
– Discrete Gaussian kernels
– Other families of kernels
Taken from B. M. ter Haar Romeny, Front-End Vision and Multi-scale Image Analysis, Dordrecht, Kluwer Academic Publishers, Chapter 4.

Slide 6/48: Shape and algebraic structure
When we take derivatives with respect to x (spatial derivatives) of the Gaussian function repeatedly, a pattern emerges: a polynomial of increasing order, multiplied by the original (normalized) Gaussian function.
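As an illustration (not on the slide itself), writing out the first three spatial derivatives of the 1-D Gaussian makes this pattern explicit:

```latex
G(x;\sigma) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-x^2/(2\sigma^2)}, \qquad
\frac{\partial G}{\partial x} = -\frac{x}{\sigma^2}\, G, \qquad
\frac{\partial^2 G}{\partial x^2} = \frac{x^2-\sigma^2}{\sigma^4}\, G, \qquad
\frac{\partial^3 G}{\partial x^3} = -\frac{x^3-3\sigma^2 x}{\sigma^6}\, G.
```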

Slide 7/48: Hermite polynomials
The Gaussian function itself is a common factor of all higher-order derivatives. The polynomial factors are closely related to the Hermite polynomials:
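The slide's formula did not survive the transcript; the standard identity it refers to is

```latex
\frac{\partial^n}{\partial x^n} G(x;\sigma)
  = \left(\frac{-1}{\sigma\sqrt{2}}\right)^{\!n} H_n\!\left(\frac{x}{\sigma\sqrt{2}}\right) G(x;\sigma),
```

where H_n denotes the (physicists') Hermite polynomial of order n.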

Slide 8/48: Gaussian envelope
The amplitude of the Hermite polynomials explodes for large x, but the Gaussian envelope suppresses any polynomial function.

Slide 9/48: Gaussian envelope
Due to the limited extent of the Gaussian window function, the amplitude of the Gaussian derivative function can be negligible at the location of the larger zeros.
NB: The Gaussian derivatives are not normalized.

Slide 10/48: Orthogonal functions
The Hermite polynomials belong to the family of orthogonal functions on R with respect to their weight function e^{-x²} (shown on the slide for n, m = 0…3). This does not hold for the Gaussian derivatives:
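For reference (the slide's inner-product table is not reproduced here), the standard orthogonality relation of the Hermite polynomials reads

```latex
\int_{-\infty}^{\infty} H_n(x)\, H_m(x)\, e^{-x^2}\, dx \;=\; 2^n\, n!\, \sqrt{\pi}\; \delta_{nm},
```

whereas the corresponding integrals of products of Gaussian derivatives do not vanish when n − m is a nonzero even number.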

Slide 11/48: Fourier domain
The Fourier transform of the derivative of a function is (−iω) times the Fourier transform of the function.
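In formula form (the sign depends on the Fourier convention; the statement above holds for the convention written out here):

```latex
\mathcal{F}\{\partial_x f\}(\omega) \;=\; -\,i\,\omega\; \mathcal{F}\{f\}(\omega),
\qquad\text{with}\quad
\mathcal{F}\{f\}(\omega)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(x)\,e^{\,i\omega x}\,dx .
```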

Slide 12/48: Zero crossings of Gaussian derivative functions
We can define the 'width' of a Gaussian derivative function as the distance to the outermost zero crossing. Only an estimate of the exact analytic solution can be given.
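A minimal numeric sketch (assuming the Hermite relation given above): the zeros of the n-th Gaussian derivative are σ√2 times the roots of H_n, so the outermost zero crossing can be located with NumPy's Hermite tools.

```python
import numpy as np
from numpy.polynomial.hermite import Hermite

def outermost_zero(n, sigma):
    """Outermost zero crossing of the n-th Gaussian derivative.

    Uses d^n/dx^n G(x; sigma) proportional to H_n(x / (sigma*sqrt(2))) * G(x; sigma),
    so the zeros are sigma*sqrt(2) times the roots of the Hermite polynomial H_n.
    """
    roots = Hermite.basis(n).roots()        # roots of the physicists' H_n
    return sigma * np.sqrt(2) * np.max(np.abs(roots))

for n in range(1, 5):
    print(n, outermost_zero(n, sigma=1.0))
```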

Slide 13/48: The correlation between Gaussian derivatives
Higher-order Gaussian derivative kernels tend to become more and more similar. The correlation coefficient r between two Gaussian derivatives of order n and m:
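The slide's formula is not reproduced in the transcript; the correlation coefficient in question is the usual normalized inner product of the two kernels, i.e.

```latex
r_{nm} \;=\; \frac{\displaystyle\int_{-\infty}^{\infty} G^{(n)}(x)\, G^{(m)}(x)\, dx}
               {\sqrt{\displaystyle\int_{-\infty}^{\infty} \bigl(G^{(n)}(x)\bigr)^{2} dx}\;
                \sqrt{\displaystyle\int_{-\infty}^{\infty} \bigl(G^{(m)}(x)\bigr)^{2} dx}} .
```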

Slide 14/48: Correlation
What does it look like:

Slide 15/48: Correlation
For n, m = 0…4 in Fourier space: the correlation is unity when n = m (as expected), negative when n − m = 2, positive when n − m = 4, and complex otherwise. Recall:

Slide 16/48: Correlation
The correlation coefficient between a Gaussian derivative function and its even neighbor (two orders up) quite quickly tends to unity for high differential order (taking the absolute value). Gaussian derivatives are therefore not suitable as a basis.

Slide 17/48: Discrete Gaussian kernels
The optimal kernel for the discretized Gaussian is the 'normalized modified Bessel function of the first kind'. This function is almost equal to the sampled Gaussian kernel for σ > 1.
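A minimal sketch of how such a kernel can be evaluated in practice, assuming the common form T(n, t) = e^{-t} I_n(t) with t = σ² (I_n being the modified Bessel function of the first kind); scipy.special.ive returns exactly e^{-t} I_n(t):

```python
import numpy as np
from scipy.special import ive   # exponentially scaled modified Bessel: ive(n, t) = exp(-t) * I_n(t)

def discrete_gaussian_kernel(sigma, radius):
    """Discrete Gaussian kernel T(n, t) = exp(-t) * I_n(t), with t = sigma**2."""
    t = sigma ** 2
    n = np.arange(-radius, radius + 1)
    kernel = ive(n, t)
    return kernel / kernel.sum()            # renormalize the truncated kernel

def sampled_gaussian_kernel(sigma, radius):
    n = np.arange(-radius, radius + 1)
    g = np.exp(-n ** 2 / (2 * sigma ** 2))
    return g / g.sum()

# For sigma > 1 the two kernels are nearly identical:
sigma, radius = 1.5, 7
print(np.max(np.abs(discrete_gaussian_kernel(sigma, radius) - sampled_gaussian_kernel(sigma, radius))))
```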

Slide 18/48: Other families of kernels: Gabor
Adding a constraint of a specific frequency gives the Gabor family of receptive fields. In real space:
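The slide's formula is an image; in its standard real-space form (stated here for completeness) a 1-D Gabor kernel is a Gaussian modulated by a complex exponential,

```latex
g(x;\sigma,\omega_0) \;=\; e^{-x^2/(2\sigma^2)}\, e^{\,i\omega_0 x},
```

whose real and imaginary parts are the cosine- and sine-modulated Gaussian receptive fields.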

Slide 19/48: Other constraints: α scale space
Relax the separability constraint (the power 2 in Fourier space):
F^{-1}(e^{-‖ω‖² s})  (Gaussian),  F^{-1}(e^{-‖ω‖ s})  (Poisson),  F^{-1}(e^{-‖ω‖^{2α} s})  (α scale space),
each with its corresponding filter PDE. Taken from the PhD thesis of R. Duits.
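A hedged reconstruction of the filter PDE referred to on the slide (the evolution generated by a fractional power of the Laplacian; α = 1 gives the diffusion equation, α = ½ the Poisson scale space):

```latex
\frac{\partial u}{\partial s} \;=\; -\,(-\Delta)^{\alpha}\, u, \qquad u(\cdot,0)=f .
```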

Slide 20/48: Poisson scale space
A comparison for α = ½ (Poisson kernel) and α = 1 (Gaussian kernel). Taken from the PhD thesis of R. Duits.

Slide 21/48: α scale space
A comparison for α = ½, ¾, 1. Taken from the PhD thesis of R. Duits.

Slide 22/48: Summary
– The Gaussian derivatives are characterized by the product of a polynomial function (the Hermite polynomial) and a Gaussian kernel. The order of the Hermite polynomial equals the differential order of the Gaussian derivative.
– Gaussian derivatives are not orthogonal kernels.
– The Gaussian kernel is a special case of the Gabor kernels.
– The Gaussian scale space is a special case of the α scale spaces.

Slide 23/48: Gaussian derivatives, natural limits on observations, deblurring, implementations
Taken from B. M. ter Haar Romeny, Front-End Vision and Multi-scale Image Analysis, Dordrecht, Kluwer Academic Publishers, Chapter 7.

Slide 24/48: Scale limit
For a given order of differentiation there is a limiting scale below which the results are no longer exact. The value of the derivative starts to deviate for scales smaller than σ = 0.6.

Slide 25/48: Scale
In the Fourier domain, leakage (aliasing) occurs.

Slide 26/48: Leakage
Leakage occurs for small scales. Error measure:

Slide 27/48: Accuracy & order
Relation between the scale σ, the order of differentiation n, and the accepted error (left: 5%; right: 1, 5, and 10%):

Slide 28/48: Limits
Data: there is a limit to the order of differentiation for a given operator scale and required accuracy. The limit is due to the Gaussian derivative kernel no longer 'fitting' inside its Gaussian envelope, which shows up as aliasing.

Slide 29/48: Gaussian derivatives, limits, deblurring, implementations

Slide 30/48: Deblurring Gaussian blur
– Modeling
– Wiener filter
– Deblurring with a scale-space approach
Taken from A. Kuijper, Image Restoration in Forensic Research using Minimal Total Variation and Maximum Entropy, M.Sc. Thesis; B. M. ter Haar Romeny, Front-End Vision and Multi-scale Image Analysis, Dordrecht, Kluwer Academic Publishers, Chapter 16.

Slide 31/48: Model
Suppose we know that our obtained image is blurred by some convolution: g = a(f).
Reconstruction is easy (especially in Fourier space): F = A^{-1} G. Isn't it? No.

Slide 32/48: Noise
Problem 1: The blurring kernel A can become infinitesimally small, so F = (1/A) G gives division by zero.
Solution 1: Use the complex conjugate:
F = (A* / (A* A)) G = (A* / |A|²) G.

Slide 33/48: Wiener filter
Problem 2: Actually, there is always noise: g = a(f) + n.
Solution 2: Regularize the filter:
F = (A* / (|A|² + R²)) G.
This is the Wiener filter; R² acts as a regularization term set by the noise-to-signal ratio. The filter response is maximal at |A| = R. (Demo)
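A minimal sketch of this filter in Python, assuming a periodic image model, a PSF array of the same shape as the image centered in the middle of the array, and a user-chosen regularization constant R (not estimated here):

```python
import numpy as np

def wiener_deconvolve(g, psf, R):
    """Wiener deconvolution: F = conj(A) / (|A|^2 + R^2) * G."""
    G = np.fft.fft2(g)
    A = np.fft.fft2(np.fft.ifftshift(psf))        # move the centered PSF to the origin
    F = np.conj(A) / (np.abs(A) ** 2 + R ** 2) * G
    return np.real(np.fft.ifft2(F))
```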

Slide 34/48: Why regularize?
Recall last week:

Slide 35/48: Deblurring with a scale-space approach
We create a Taylor series expansion of the scale space L(x, y, t) up to third order around the point t = 0, and use the diffusion equation.
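Schematically (a reconstruction of the idea, since the slide's formulas are images; the convention ∂L/∂t = ΔL for the diffusion equation is assumed):

```latex
L(x,y,-t) \;\approx\; L \;-\; t\,\frac{\partial L}{\partial t} \;+\; \frac{t^2}{2}\,\frac{\partial^2 L}{\partial t^2} \;-\; \frac{t^3}{6}\,\frac{\partial^3 L}{\partial t^3},
\qquad
\frac{\partial L}{\partial t}=\Delta L \;\Rightarrow\; \frac{\partial^2 L}{\partial t^2}=\Delta^2 L,\quad \frac{\partial^3 L}{\partial t^3}=\Delta^3 L .
```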

Slide 36/48: Choosing the right scale
For deblurring, a negative time is taken. The estimated blur is σ_est; the kernel size of the operator is σ_operator; the total deblurring distance is determined by both.
Note: it is a well-known fact in image processing that subtracting the Laplacian (times some constant depending on the blur) sharpens the image. This is exactly the first-order Taylor approximation!
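A minimal sketch of the first-order version of this idea (sharpening by subtracting a Laplacian); the diffusion-time convention t = σ²/2 and the use of a Gaussian-regularized Laplacian at scale sigma_operator are assumptions of this sketch, not taken from the slide:

```python
import numpy as np
from scipy import ndimage

def taylor_deblur_first_order(blurred, sigma_est, sigma_operator=1.0):
    """First-order scale-space deblurring: L_deblurred ~ L - t * Laplacian(L),
    with the Laplacian computed at a small operator scale for stability."""
    t = 0.5 * sigma_est ** 2                      # assumed convention t = sigma^2 / 2
    lap = ndimage.gaussian_laplace(blurred.astype(float), sigma=sigma_operator)
    return blurred - t * lap
```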

Slide 37/48: Result

Slide 38/48: With noise:

Slide 39/48: Summary
– With a model of the blur (and noise), one can try to deblur images.
– Deblurring is unstable, and can only be carried out analytically when no data are lost, for example through a finite intensity representation (8 bit), noise, or other pixel errors.
– The Wiener filter traditionally does a good job, but requires the estimation of a parameter.
– Deblurring can also be done by expanding the scale space of a blurred image into the negative scale direction by means of a Taylor expansion. This avoids Fourier transforms.

Slide 40/48: Gaussian derivatives, limits, deblurring, implementations

Slide 41/48: Implementations
– Implementation in the spatial domain
– Separable implementation
– Implementation in the Fourier domain
– Boundaries
Taken from B. M. ter Haar Romeny, Front-End Vision and Multi-scale Image Analysis, Dordrecht, Kluwer Academic Publishers, Chapter 5.

Slide 42/48: Implementation in the spatial domain
In the spatial domain, the Gaussian has to be sampled. A sample range of ±4σ suffices; the larger the scale, the larger this kernel becomes. The truncation error is then of order O(10^{-5}). The higher the order of differentiation, the larger the error becomes. Beware of the convolution rules your programming language uses!
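A minimal sketch of such a sampled kernel (truncated at ±4σ), using the closed forms of the first two derivatives written out earlier; only orders 0–2 are spelled out here:

```python
import numpy as np

def gaussian_kernel_1d(sigma, order=0):
    """Sample a Gaussian (or one of its first two derivatives) on [-4*sigma, 4*sigma]."""
    radius = int(np.ceil(4 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    if order == 0:
        return g
    if order == 1:
        return -x / sigma ** 2 * g                      # first derivative
    if order == 2:
        return (x ** 2 - sigma ** 2) / sigma ** 4 * g   # second derivative
    raise NotImplementedError("higher orders omitted in this sketch")
```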

Slide 43/48: Separable implementation
The fastest implementation exploits the separability of the Gaussian kernel (a sketch follows below):
– apply the 1-D convolution to the matrix representing the image,
– transpose the matrix,
– apply the 1-D convolution again,
– transpose again.
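A minimal sketch, using axis-wise 1-D convolutions rather than explicit transposes (which is equivalent); gaussian_kernel_1d is the sampling routine sketched above, and the 'mirror' boundary mode is an assumption:

```python
from scipy import ndimage

def separable_gaussian_filter(image, sigma):
    """Blur a 2-D image with two 1-D Gaussian convolutions (one per axis)."""
    kernel = gaussian_kernel_1d(sigma)                               # from the sketch above
    tmp = ndimage.convolve1d(image, kernel, axis=0, mode='mirror')   # along axis 0 (vertical)
    return ndimage.convolve1d(tmp, kernel, axis=1, mode='mirror')    # along axis 1 (horizontal)
```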

Slide 44/48: Orders of differentiation

Slide 45/48: Implementation in the Fourier domain
The spatial convolutions are not exact, since the Gaussian kernel is truncated. A convolution of two functions in the spatial domain corresponds to a multiplication of their Fourier transforms in the Fourier domain; the inverse Fourier transform then takes us back to the spatial domain. The filter is computed at the same size as the image. Beware of the rules of your programming language for built-in Fourier transforms!
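A minimal sketch of Fourier-domain Gaussian filtering; the transfer function e^{-σ²‖ω‖²/2} is built directly on the FFT frequency grid (multiplying additionally by (iω_x)^n would give the n-th x-derivative):

```python
import numpy as np

def gaussian_filter_fourier(image, sigma):
    """Blur a 2-D image by multiplying its spectrum with the Gaussian transfer function."""
    ny, nx = image.shape
    wy = 2 * np.pi * np.fft.fftfreq(ny)               # angular frequencies along axis 0
    wx = 2 * np.pi * np.fft.fftfreq(nx)               # angular frequencies along axis 1
    WX, WY = np.meshgrid(wx, wy)                      # grids with the same shape as the image
    transfer = np.exp(-0.5 * sigma ** 2 * (WX ** 2 + WY ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * transfer))
```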

Slide 46/48: Boundaries
What is a boundary?

Slide 47/48: Boundary effect due to periodicity in Fourier space (original, x-derivative, y-derivative).

Slide 48/48: What to choose?
Tiling or mirrored tiling? Padding with zeros? Rule of thumb: tiling, and ignore everything that lies within a distance of about σ from the boundary.
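A minimal illustration of these boundary choices with NumPy padding modes; the pad radius r would typically be the kernel radius (e.g. 4σ), and the mapping of NumPy's mode names to the slide's options is an assumption of this sketch:

```python
import numpy as np

image = np.arange(16, dtype=float).reshape(4, 4)
r = 2                                           # pad radius, e.g. the kernel radius

tiled    = np.pad(image, r, mode='wrap')        # periodic tiling (what the FFT implicitly assumes)
mirrored = np.pad(image, r, mode='reflect')     # mirrored tiling
zeros    = np.pad(image, r, mode='constant')    # zero padding
```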