The content of these slides by John Galeotti, © 2012-2015 Carnegie Mellon University (CMU), was made possible in part by NIH NLM contract# HHSN276201000580P, and is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License. To view a copy of this license, visit creativecommons.org/licenses/by-nc/3.0/ or send a letter to Creative Commons, 171 2nd Street, Suite 300, San Francisco, California, 94105, USA. Permissions beyond the scope of this license may be available either from CMU or by email. The most recent version of these slides may be accessed online.

Lecture 7—Image Relaxation: Restoration and Feature Extraction
ch. 6 of Machine Vision by Wesley E. Snyder & Hairong Qi
Spring 2015
BioE 2630 (Pitt) : (CMU RI) : (CMU ECE) : (CMU BME)
Dr. John Galeotti

All images are degraded

- Remember, all measured images are degraded
  - Noise (always)
  - Distortion = blur (usually)
- False edges
  - From noise
- Unnoticed/missed edges
  - From noise + blur

[Figure: plots of an original image and a noisy image]

We need an “un-degrader”…

- To extract “clean” features for segmentation, registration, etc.
- Restoration
  - A posteriori image restoration
  - Removes degradations from images
- Feature extraction
  - Iterative image feature extraction
  - Extracts features from noisy images

Image relaxation

- The basic operation performed by:
  - Restoration
  - Feature extraction (of the type in ch. 6)
- An image relaxation process is a multistep algorithm with the properties that:
  - The output of a step has the same form as the input (e.g., image to image)
    - This allows iteration
  - It converges to a bounded result
  - (Optional) The operation on any pixel depends only on the pixels in some well-defined, finite neighborhood of that pixel

Restoration: An inverse problem

- Assume:
  - An ideal image, f (this is what we want)
  - A measured image, g (this is what we get)
  - A distortion operation, D
  - Random noise, n
- Put it all together: g = D(f) + n (simulated below)
- How do we extract f?
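To make the forward model concrete, here is a minimal sketch (my own illustration, not from the slides) that synthesizes a degraded g from an ideal f, assuming D is a Gaussian blur and n is additive Gaussian noise:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

f = np.zeros((64, 64))
f[16:48, 16:48] = 1.0                       # ideal image f: a bright square

g = gaussian_filter(f, sigma=2.0)           # D(f): a linear blur distortion
g += 0.1 * rng.standard_normal(f.shape)     # + n: additive Gaussian noise
```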

Restoration is ill-posed

- Even without noise
- Even if the distortion is linear blur
  - Inverting linear blur = deconvolution
- But we want restoration to be well-posed…

A well-posed problem

- g = D(f) is well-posed if:
  - For each f, a solution exists,
  - The solution is unique, AND
  - The solution g continuously depends on the data f
- Otherwise, it is ill-posed
  - Usually because it has a large condition number: K >> 1

Condition number, K

- K ≈ δoutput / δinput
- For the linear system b = Ax:
  - K = ||A|| ||A⁻¹|| (computed below)
- K ∈ [1, ∞)
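As a quick illustration (assuming the 2-norm), NumPy can evaluate K = ||A|| ||A⁻¹|| directly:

```python
import numpy as np

A = np.array([[1.00, 0.99],
              [0.99, 1.00]])    # nearly singular, like a heavy blur matrix

K = np.linalg.cond(A)           # K = ||A|| * ||A^-1|| in the 2-norm
print(K)                        # ~199: small input changes -> large output changes
```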

K for convolved blur

- Why is restoration ill-posed for simple blur?
- Why not just linearize a blur kernel into a matrix H, and then take the inverse of that matrix?
  - F = H⁻¹G
- Because H is probably singular
  - If not, H almost certainly has a large K
  - So small amounts of noise in G will make the computed F almost meaningless
- See the book for great examples (and the 1-D sketch below)
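A small 1-D sketch of this failure mode (my own example, using FFT-based circular convolution): the blur's spectrum H is tiny at high frequencies, so dividing G by H amplifies whatever noise lives there:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128

f = np.zeros(n)
f[48:80] = 1.0                                      # ideal 1-D "image"

x = np.arange(-6, 7)
h = np.exp(-0.5 * (x / 1.5) ** 2)
h /= h.sum()                                        # Gaussian blur kernel
H = np.fft.fft(h, n)                                # its spectrum: tiny at high freqs

g = np.real(np.fft.ifft(np.fft.fft(f) * H))         # blur (circular convolution)
g += 1e-3 * rng.standard_normal(n)                  # plus a *little* noise

f_naive = np.real(np.fft.ifft(np.fft.fft(g) / H))   # "F = H^-1 G"
print(np.abs(f_naive - f).max())                    # typically far larger than the
                                                    # signal itself: meaningless
```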

Regularization theory to the rescue!

- How to handle an ill-posed problem?
  - Find a related well-posed problem!
    - One whose solution approximates that of our ill-posed problem
- E.g., try minimizing the data-fit error ‖g − D(f)‖²
- But unless we know something about the noise, this is the exact same problem!

Digression: Statistics

- Remember Bayes’ rule?
  - p(f | g) = p(g | f) · p(f) / p(g)
  - p(f | g) is the a posteriori conditional pdf: this is what we want! It is our discrimination function.
  - p(g | f) is the conditional pdf
  - p(f) is the a priori pdf
  - p(g) is just a normalization constant

Maximum a posteriori (MAP) image processing algorithms

- To find the f underlying a given g:
  1. Use Bayes’ rule to “compute all” p(f_q | g), for each f_q in the set of all possible f
  2. Pick the f_q with the maximum p(f_q | g)
     - p(g) is “useless” here (it’s constant across all f_q)
- This is equivalent to (a toy example follows):
  - f = argmax(f_q) p(g | f_q) · p(f_q)
    - p(g | f_q) is the noise term
    - p(f_q) is the prior term
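A toy, hypothetical illustration of steps 1 and 2 for a single pixel with 256 candidate values (the Gaussian likelihood and the exponential prior here are assumptions made purely for the example):

```python
import numpy as np

g, sigma = 130.0, 20.0                  # measured pixel value, noise std
f_q = np.arange(256, dtype=float)       # the set of all possible f (one pixel)

log_noise = -(g - f_q) ** 2 / (2 * sigma ** 2)   # log p(g | f_q), up to a constant
log_prior = -f_q / 100.0                          # log p(f_q): favors darker values

f_map = f_q[np.argmax(log_noise + log_prior)]     # argmax p(g | f_q) * p(f_q)
print(f_map)                                      # 126.0: pulled below g by the prior
```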

Probabilities of images

- Based on probabilities of pixels
- For each pixel i:
  - p(f_i | g_i) ∝ p(g_i | f_i) · p(f_i)
- Let’s simplify:
  - Assume no blur (just noise)
    - At this point, some people would say we are denoising the image.
  - p(g | f) = ∏ p(g_i | f_i)
  - p(f) = ∏ p(f_i)

Probabilities of pixel values

- p(g_i | f_i)
  - This could be the density of the noise…
  - Such as a Gaussian noise model
    - = constant · e^(something)
- p(f_i)
  - This could be a Gibbs distribution…
  - If you model your image as an N-D Markov field
    - = e^(something)
- See the book for more details; the standard forms are spelled out below
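Spelled out, the standard forms behind those two “e to the something” expressions are (with σ² the noise variance and, for the Gibbs case, an energy U, temperature T, and normalizing constant Z):

```latex
p(g_i \mid f_i) = \frac{1}{\sqrt{2\pi\sigma^2}}
  \exp\!\left( -\frac{(g_i - f_i)^2}{2\sigma^2} \right),
\qquad
p(f) = \frac{1}{Z} \exp\!\left( -\frac{U(f)}{T} \right)
```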

Put the math together

[The slide's derivation is an equation image that did not survive the transcript; a reconstruction follows below.]
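Combining the two previous slides, the derivation presumably runs along these lines (take the log, flip the sign, and maximizing becomes minimizing):

```latex
\hat{f} = \arg\max_{f_q} \prod_i p(g_i \mid f_{q,i})\, p(f_{q,i})
        = \arg\min_{f_q} \sum_i \Big[ -\ln p(g_i \mid f_{q,i}) - \ln p(f_{q,i}) \Big]
```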

Objective functions

- We can rewrite the previous slide’s final equation to use objective functions for our noise and prior terms:
  - f = argmin(f_q) ( ∑ p′(g_i | f_i) + ∑ p′(f_i) )
  - f = argmin(f_q) ( H_n(f, g) + H_p(f) )
- We can also combine these objective functions (sketched in code below):
  - H(f, g) = H_n(f, g) + H_p(f)
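As a minimal sketch (assuming Gaussian noise for H_n and, purely for illustration, a quadratic smoothness prior for H_p):

```python
import numpy as np

def objective(f, g, lam=0.1):
    """H(f, g) = H_n(f, g) + H_p(f) for a 2-D image f and measurement g."""
    H_n = np.sum((f - g) ** 2)                    # -log p(g|f), up to constants
    H_p = lam * (np.sum(np.diff(f, axis=0) ** 2)  # -log p(f): penalize roughness
                 + np.sum(np.diff(f, axis=1) ** 2))
    return H_n + H_p
```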

Purpose of the objective functions

- Noise term H_n(f, g):
  - If we assume independent Gaussian noise for each pixel, we tell the minimization that f should resemble g.
- Prior term (a.k.a. regularization term) H_p(f):
  - Tells the minimization what properties the image should have
  - Often, this means brightness that is:
    - Constant in local areas
    - Discontinuous at boundaries

Minimization is a beast!

- Our objective function is not “nice”
  - It has many local minima
  - So gradient descent will not do well
- We need a more powerful optimizer:
  - Mean field annealing (MFA)
    - Approximates simulated annealing
      - But it’s faster!
    - It’s also based on the mean field approximation of statistical mechanics

MFA

- MFA is a continuation method
  - So it implements a homotopy
    - A homotopy is a continuous deformation of one hyper-surface into another
- MFA procedure:
  1. Distort our complex objective function into a convex hyper-surface (N-surface)
     - The only minimum is now the global minimum
  2. Gradually distort the convex N-surface back into our objective function (a 1-D sketch follows)
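A 1-D continuation sketch in that spirit (my own simplification, not the book's exact MFA update): the edge-preserving penalty below is nearly quadratic, hence convex-ish, when beta is large, and is gradually sharpened back toward the target objective as beta shrinks:

```python
import numpy as np

def mfa_like_denoise(g, betas=np.geomspace(5.0, 0.1, 15),
                     iters=100, lr=0.2, lam=1.0):
    f = g.astype(float).copy()
    for beta in betas:                   # slowly "cool" the objective
        for _ in range(iters):           # descend on the current surface
            grad = 2.0 * (f - g)         # noise-term gradient: stay near g
            d = np.diff(f)               # neighbor differences
            pg = lam * d * np.exp(-d**2 / (2.0 * beta**2))  # saturating prior grad
            grad[1:] += pg               # each difference touches two pixels
            grad[:-1] -= pg
            f -= lr * grad
    return f
```

Each neighbor difference is penalized quadratically when small but contributes almost nothing when large, so edges survive while flat regions are smoothed.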

MFA: Single-pixel visualization

Continuous deformation of a function which is initially convex, to find the global minimum of a non-convex function.

[Figure: merit of each possible pixel value, plotted against the possible values for a single pixel; the initial global minimum of the convex surface deforms into the final global minimum, avoiding the local minima we do not want to end up in.]

Generalized objective functions for MFA

- Noise term:
  - (D(f))_i denotes some distortion (e.g., blur) of image f in the vicinity of pixel i
- Prior term:
  - A roughness parameter (its symbol was lost in transcription) represents a priori knowledge about the roughness of the image, and is altered in the course of MFA
  - (R(f))_i denotes some function of image f at pixel i
  - The prior will seek the f which causes R(f) to be zero (or as close to zero as possible); a tentative reconstruction follows
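The formulas themselves were lost in transcription; consistent with the surrounding slides, the generalized terms presumably take a form like:

```latex
H_n(f, g) = \sum_i \big( g_i - (D(f))_i \big)^2, \qquad
H_p(f) = \beta \sum_i \rho\big( (R(f))_i \big)
```

where ρ is a penalty that is smallest when its argument is zero.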

R(f): choices, choices

- For piecewise-constant images:
  - R(f) = 0 if the image is constant
  - R(f) ≠ 0 if the image is piecewise-constant (why?)
- The noise term will force a piecewise-constant image

R(f): piecewise-planar images

- R(f) = 0 if the image is a plane
- R(f) ≠ 0 if the image is piecewise-planar
- The noise term will force a piecewise-planar image

Graduated nonconvexity (GNC)

- Similar to MFA
  - Uses a descent method
  - Reduces a control parameter
  - Can be derived using MFA as its basis
- “Weak membrane” GNC is analogous to piecewise-constant MFA
  - But different:
    - Its objective function treats the presence of edges explicitly
    - Pixels labeled as edges don’t count in our noise term
    - So we must explicitly minimize the # of edge pixels

Variable conductance diffusion (VCD)

- Idea:
  - Blur an image everywhere,
  - except at features of interest
    - such as edges

VCD simulates the diffusion equation

[The slide's equation image did not survive the transcript; it equates a temporal derivative of f to a spatial derivative term. A reconstruction follows below.]

- Where:
  - t = time
  - ∇_i f = spatial gradient of f at pixel i
  - c_i = conductivity (to blurring)
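The standard variable-conductance diffusion equation, which matches the variable list above (the left side is the slide's temporal derivative, the right its spatial derivative term), is:

```latex
\frac{\partial f_i}{\partial t} = \nabla \cdot \big( c_i \, \nabla_i f \big)
```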

Isotropic diffusion

- If c_i is constant across all pixels:
  - Isotropic diffusion
  - Not really VCD
- Isotropic diffusion is equivalent to convolution with a Gaussian
  - The Gaussian’s variance is defined in terms of t and c_i (see below)
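Concretely, for constant conductivity c, diffusing to time t matches Gaussian smoothing with variance σ² = 2ct, so one blur call replaces many diffusion steps (a sketch; img is just a stand-in 2-D array):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))     # any 2-D float image

c, t = 1.0, 2.0
sigma = np.sqrt(2.0 * c * t)            # Gaussian std from diffusion time
smoothed = gaussian_filter(img, sigma)  # equivalent to isotropic diffusion to time t
```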

VCD

- c_i is a function of spatial coordinates, parameterized by i
  - Typically a property of the local image intensities
  - Can be thought of as a factor by which space is locally compressed
- To smooth except at edges (implemented in the sketch below):
  - Let c_i be small if i is an edge pixel
    - Little smoothing occurs because “space is stretched” or “little heat flows”
  - Let c_i be large at all other pixels
    - More smoothing occurs in the vicinity of pixel i because “space is compressed” or “heat flows easily”
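A minimal explicit update in this spirit (a Perona-Malik-style sketch rather than the book's exact formulation; periodic boundaries via np.roll, purely for brevity):

```python
import numpy as np

def vcd_step(f, dt=0.2, kappa=0.1):
    """One explicit variable-conductance diffusion step on a 2-D image."""
    # differences to the four neighbors (periodic boundary for simplicity)
    dn = np.roll(f, -1, axis=0) - f
    ds = np.roll(f,  1, axis=0) - f
    de = np.roll(f, -1, axis=1) - f
    dw = np.roll(f,  1, axis=1) - f

    def c(d):
        # conductivity: ~1 in flat regions, ~0 across strong edges
        return np.exp(-(d / kappa) ** 2)

    return f + dt * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)

# Repeated application smooths flat regions while preserving edges:
#   for _ in range(50): f = vcd_step(f)
```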

VCD

- A.k.a. anisotropic diffusion
- With repetition, produces a nearly piecewise-uniform result
  - Like the MFA and GNC formulations
  - Equivalent to MFA without a noise term
- Edge-oriented VCD:
  - VCD + diffusion tangential to edges when near edges
- Biased anisotropic diffusion (BAD):
  - Equivalent to MAP image restoration

VCD sample images

[Images from the Scientific Applications and Visualization Group at NIST; not preserved in the transcript.]

Congratulations!

- You have made it through most of the “introductory” material.
- Now we’re ready for the “fun stuff.”
- “Fun stuff” (why we do image analysis):
  - Segmentation
  - Registration
  - Shape analysis
  - Etc.