Lecture 7—Image Relaxation: Restoration and Feature Extraction ch. 6 of Machine Vision by Wesley E. Snyder & Hairong Qi

Presentation transcript:

Lecture 7—Image Relaxation: Restoration and Feature Extraction ch. 6 of Machine Vision by Wesley E. Snyder & Hairong Qi Note: To print these slides in grayscale (e.g., on a laser printer), first change the Theme Background to “Style 1” (i.e., dark text on white background), and then tell PowerPoint’s print dialog that the “Output” is “Grayscale.” Everything should be clearly legible then. This is how I also generate the .pdf handouts.

All images are degraded Remember, all measured images are degraded: noise (always); distortion = blur (usually); false edges (from noise); unnoticed/missed edges (from noise + blur). [Figures: original image plot; noisy image plot]

We need an “un-degrader”… To extract “clean” features for segmentation, registration, etc. Restoration: a-posteriori image restoration removes degradations from images. Feature extraction: iterative image feature extraction extracts features from noisy images.

Image relaxation The basic operation performed by: restoration, and feature extraction (of the type in ch. 6). An image relaxation process is a multistep algorithm with the properties that: The output of a step is the same form as the input (e.g., a 256² image to a 256² image), which allows iteration. It converges to a bounded result. The operation on any pixel is dependent only on those pixels in some well-defined, finite neighborhood of that pixel. (optional)
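
As an illustration of these properties, here is a minimal relaxation-style loop in Python (not an algorithm from the book; a plain 3×3 neighborhood average, used only to show that each step maps an image to an image of the same size, can be iterated, converges to a bounded result, and touches only a finite neighborhood of each pixel):

```python
import numpy as np

def relax(image, steps=10):
    """A minimal relaxation process: each step maps an image to an image
    of the same size, updating every pixel from its 3x3 neighborhood."""
    f = image.astype(float)
    for _ in range(steps):
        padded = np.pad(f, 1, mode='edge')
        # Each pixel depends only on a finite neighborhood (here, the 3x3 mean).
        f = sum(padded[dr:dr + f.shape[0], dc:dc + f.shape[1]]
                for dr in range(3) for dc in range(3)) / 9.0
    return f
```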

Restoration: An inverse problem Assume: an ideal image, f (this is what we want); a measured image, g (this is what we get); a distortion operation, D; and random noise, n. Put it all together: g = D( f ) + n. How do we extract f ?
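
A small sketch of this forward model in Python (the choice of D as a Gaussian blur, n as additive Gaussian noise, and the parameter values are all illustrative assumptions, not the book's):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(f, blur_sigma=2.0, noise_sigma=5.0, seed=0):
    """Forward model g = D(f) + n: Gaussian blur as the distortion D,
    plus additive Gaussian noise n."""
    rng = np.random.default_rng(seed)
    g = gaussian_filter(f.astype(float), sigma=blur_sigma)   # D(f)
    g += rng.normal(0.0, noise_sigma, size=f.shape)          # + n
    return g
```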

Restoration is ill-posed Even without noise Even if the distortion is linear blur Inverting linear blur = deconvolution But we want restoration to be well-posed…

A well-posed problem g = D( f ) is well-posed if: for each f, a solution exists; the solution is unique; AND the solution g continuously depends on the data f. Otherwise, it is ill-posed, usually because it has a large condition number: K >> 1. ** Important for Registration **

Condition number, K K ≈ δ(output) / δ(input). For the linear system b = Ax: K = ||A|| ||A⁻¹||, K ∈ [1, ∞).
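
A quick numerical check of the definition with NumPy (the 2×2 matrix is just an arbitrary nearly-singular example):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])                      # nearly singular
K = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
print(K)                     # ||A|| * ||A^-1||, a very large number
print(np.linalg.cond(A, 2))  # NumPy's built-in condition number agrees
```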

K for convolved blur Why is restoration ill-posed for simple blur? Why not just linearize a blur kernel (i.e., write it as a matrix H), and then take the inverse of that matrix? F = H⁻¹G Because H is probably singular; if not, H almost certainly has a large K. So small amounts of noise in G will make the computed F almost meaningless. See the book for great examples.
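
To see the problem concretely, here is a 1-D toy version (not the book's example): a small blur written as a matrix H, whose condition number is large, so directly solving F = H⁻¹G amplifies even tiny noise:

```python
import numpy as np

n = 63                                    # odd size avoids an exactly singular H
kernel = [0.25, 0.5, 0.25]                # a mild 1-D blur kernel
H = np.zeros((n, n))
for i in range(n):
    for k, w in zip((-1, 0, 1), kernel):  # circulant (wrap-around) blur matrix
        H[i, (i + k) % n] = w

rng = np.random.default_rng(0)
f = np.zeros(n); f[20:40] = 100.0         # the "ideal" 1-D signal
g = H @ f + rng.normal(0.0, 0.1, n)       # blurred plus a tiny bit of noise

print("cond(H) =", np.linalg.cond(H))     # K >> 1
f_hat = np.linalg.solve(H, g)             # the naive F = H^-1 G
print("max restoration error:", np.abs(f_hat - f).max())  # much larger than the 0.1 noise
```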

Regularization theory to the rescue! How to handle an ill-posed problem? Find a related well-posed problem! One whose solution approximates that of our ill-posed problem. E.g., try minimizing the data mismatch ‖ g − D( f ) ‖². But unless we know something about the noise, this is the exact same problem!

Digression: Statistics Remember Bayes’ rule? p( f | g ) = p( g | f ) * p( f ) / p( g ). Here p( f | g ) is the a posteriori conditional pdf (this is what we want! It is our discrimination function), p( g | f ) is the conditional pdf, p( f ) is the a priori pdf, and p( g ) is just a normalization constant.

Maximum a posteriori (MAP) image processing algorithms To find the f underlying a given g: Use Bayes’ rule to “compute all” p( fq | g ), fq ∈ (the set of all possible f ). Pick the fq with the maximum p( fq | g ). p( g ) is “useless” here (it’s constant across all fq). This is equivalent to: f = argmax( fq ) p( g | fq ) * p( fq ), where p( g | fq ) is the noise term and p( fq ) is the prior term.

Probabilities of images Based on probabilities of pixels. For each pixel i: p( fi | gi ) ∝ p( gi | fi ) * p( fi ). Let’s simplify: Assume no blur (just noise). At this point, some people would say we are denoising the image. p( g | f ) = ∏ p( gi | fi ), p( f ) = ∏ p( fi )

Probabilities of pixel values p( gi | fi ): This could be the density of the noise… such as a Gaussian noise model, = constant * e^something. p( fi ): This could be a Gibbs distribution… if you model your image as an N-D Markov field, = e^something. See the book for more details.
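
For concreteness, the two "constant * e^something" forms under the common choices mentioned above look like this (zero-mean Gaussian noise with variance σ², and a Gibbs distribution with energy U(f), temperature T, and partition function Z; these are the standard symbols, not necessarily the book's notation):

```latex
p(g_i \mid f_i) = \frac{1}{\sqrt{2\pi\sigma^2}}
                  \exp\!\left(-\frac{(g_i - f_i)^2}{2\sigma^2}\right),
\qquad
p(f) = \frac{1}{Z}\,\exp\!\left(-\frac{U(f)}{T}\right)
```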

Put the math together Remember, we want: f = argmax( fq ) p( g | fq ) * p( fq ), where fq ∈ (the set of all possible f ). And remember: p( g | f ) = ∏ p( gi | fi ) = constant * ∏ e^something, and p( f ) = ∏ p( fi ) = ∏ e^something, where i ∈ (the set of all image pixels). But we like ∑ something better than ∏ e^something, so take the log and solve for: f = argmin( fq ) ( ∑ p’( gi | fi ) + ∑ p’( fi ) )
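
Spelled out, the log step is the standard monotonic-transform argument (here p’ denotes the negative log of the corresponding density, with constants that do not depend on f dropped):

```latex
\hat f = \arg\max_{f_q} \prod_i p(g_i \mid f_i)\,\prod_i p(f_i)
       = \arg\min_{f_q} \Big( \sum_i -\log p(g_i \mid f_i) \;+\; \sum_i -\log p(f_i) \Big)
```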

Objective functions We can re-write the previous slide’s final equation to use objective functions for our noise and prior terms: f = argmin( fq ) ( ∑ p’( gi | fi ) + ∑ p’( fi ) ) → f = argmin( fq ) ( Hn( f, g ) + Hp( f ) ). We can also combine these objective functions: H( f, g ) = Hn( f, g ) + Hp( f )

Purpose of the objective functions Noise term Hn( f, g ): If we assume independent, Gaussian noise for each pixel, we tell the minimization that f should resemble g. Prior term (a.k.a. regularization term) Hp( f ): Tells the minimization what properties the image should have. Often, this means brightness that is: constant in local areas, and discontinuous at boundaries.
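
A minimal sketch of such objective functions in Python, assuming independent Gaussian noise for the noise term and a simple quadratic neighbor-difference penalty for the prior term (one possible smoothness prior, cruder than the ones the chapter develops; sigma and beta are illustrative weights):

```python
import numpy as np

def H_noise(f, g, sigma=1.0):
    """Gaussian noise term: -log p(g|f) up to a constant; pulls f toward g."""
    return np.sum((f - g) ** 2) / (2.0 * sigma ** 2)

def H_prior(f, beta=1.0):
    """A simple smoothness prior: penalizes differences between neighboring
    pixels, i.e., prefers locally constant brightness."""
    dx = np.diff(f, axis=1)
    dy = np.diff(f, axis=0)
    return beta * (np.sum(dx ** 2) + np.sum(dy ** 2))

def H(f, g, sigma=1.0, beta=1.0):
    """Combined objective H(f, g) = Hn(f, g) + Hp(f) for a 2-D image f."""
    return H_noise(f, g, sigma) + H_prior(f, beta)
```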

Minimization is a beast! Our objective function is not “nice” It has many local minima So gradient descent will not do well We need a more powerful optimizer: Mean field annealing (MFA) Approximates simulated annealing But it’s faster! It’s also based on the mean field approximation of statistical mechanics

MFA MFA is a continuation method, so it implements a homotopy. A homotopy is a continuous deformation of one hyper-surface into another. MFA procedure: Distort our complex objective function into a convex hyper-surface (N-surface); the only minimum is now the global minimum. Gradually distort the convex N-surface back into our objective function.
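
A toy 1-D illustration of the continuation idea (this is a generic smoothing-based homotopy, not the book's mean field annealing equations): start by descending on a heavily smoothed, essentially convex version of a non-convex objective, then keep descending while the smoothing is gradually removed:

```python
import numpy as np

def h(x):
    """A toy non-convex objective with several local minima."""
    return 0.05 * (x - 3.0) ** 2 + np.sin(3.0 * x)

def smoothed(x, sigma, n=2001, span=20.0):
    """Numerically Gaussian-smooth h around x; large sigma gives a nearly
    convex surrogate, and sigma -> 0 recovers h itself."""
    t = np.linspace(-span, span, n)
    w = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    return np.sum(w * h(x + t)) / np.sum(w)

def continuation_minimize(x0=-5.0, sigmas=(8.0, 4.0, 2.0, 1.0, 0.5, 0.1)):
    x = x0
    for sigma in sigmas:            # gradually deform the surrogate back to h
        for _ in range(200):        # plain gradient descent on the surrogate
            eps = 1e-3
            grad = (smoothed(x + eps, sigma) - smoothed(x - eps, sigma)) / (2 * eps)
            x -= 0.1 * grad
    return x

print(continuation_minimize())      # ends near the global minimum of h (x ~ 3.6)
```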

MFA: Single-Pixel Visualization [Figure: the merit of each possible pixel value, plotted over the possible values for a single pixel, showing the initial (convex) deformed function, the final (original) function, and the global minimum; note that the descent does not end up in either of the neighboring local minima.] Continuous deformation of a function which is initially convex to find the (near-) global minimum of a non-convex function.

Generalized objective functions for MFA Noise term: (D( f ))i denotes some distortion (e.g., blur) of image f in the vicinity of pixel i. Prior term: includes a parameter that represents a priori knowledge about the roughness of the image, and that is altered in the course of MFA; (R( f ))i denotes some function of image f at pixel i. The prior will seek the f which causes R( f ) to be zero (or as close to zero as possible).

R( f ): choices, choices Piecewise-constant images: = 0 if the image is constant, ≠ 0 if the image is piecewise-constant (why?). The noise term will force a piecewise-constant image. Derivative = 0 almost everywhere.

R( f ): Piecewise-planar images = 0 if the image is a plane, ≠ 0 if the image is piecewise-planar. The noise term will force a piecewise-planar image. Clearly a better fit (but not necessarily for, e.g., CT).
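
A small numerical illustration of the difference between the two choices (only horizontal derivatives, computed with NumPy's gradient; the book's R( f ) is defined over a full local neighborhood, so this is just a sketch):

```python
import numpy as np

def roughness_pc(f):
    """R(f) for the piecewise-constant prior: first-derivative magnitude.
    Zero where the image is constant; nonzero at brightness changes."""
    return np.abs(np.gradient(f.astype(float), axis=1))

def roughness_pp(f):
    """R(f) for the piecewise-planar prior: second-derivative magnitude.
    Zero wherever the image is locally a plane (a linear ramp)."""
    d = np.gradient(f.astype(float), axis=1)
    return np.abs(np.gradient(d, axis=1))

ramp = np.tile(np.arange(8.0), (8, 1))   # a planar (ramp) image
print(roughness_pc(ramp)[0])             # nonzero everywhere on a ramp
print(roughness_pp(ramp)[0])             # zero: a ramp counts as "planar"
```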

Graduated nonconvexity (GNC) Similar to MFA: uses a descent method, reduces a control parameter, and can be derived using MFA as its basis. “Weak membrane” GNC is analogous to piecewise-constant MFA. But different: its objective function treats the presence of edges explicitly. Pixels labeled as edges don’t count in our prior (smoothness) term, so we must explicitly minimize the # of edge pixels.
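
For reference, the classic weak-membrane energy (in Blake and Zisserman's usual notation, which may differ from the book's) makes the explicit edge treatment visible: the binary line process l_i switches off the smoothness penalty at edge pixels, and each edge pixel costs α, which is why the number of edge pixels must itself be minimized:

```latex
E(f, l) = \sum_i (f_i - g_i)^2
        + \lambda^2 \sum_i (1 - l_i)\,\|(\nabla f)_i\|^2
        + \alpha \sum_i l_i ,
\qquad l_i \in \{0, 1\}
```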

Variable conductance diffusion (VCD) Idea: Blur an image everywhere, except at features of interest such as edges

VCD simulates the diffusion eq. ∂fi/∂t = ∇ ⋅ ( ci ∇i f ) (a temporal derivative on the left, spatial derivatives on the right). Where: t = time, ∇i f = spatial gradient of f at pixel i, ci = conductivity (to blurring).

Isotropic diffusion If ci is constant across all pixels: Not really VCD Isotropic diffusion is equivalent to convolution with a Gaussian The Gaussian’s variance is defined in terms of t and ci
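
Specifically, for constant conductivity the diffusion equation is solved by Gaussian convolution, with the variance growing linearly in time (the standard heat-equation result; σ here is the Gaussian's standard deviation in each spatial dimension):

```latex
\frac{\partial f}{\partial t} = c\,\nabla^2 f
\quad\Longrightarrow\quad
f(\cdot, t) = f(\cdot, 0) * G_{\sigma}, \qquad \sigma^2 = 2\,c\,t
```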

VCD ci is a function of spatial coordinates, parameterized by i Typically a property of the local image intensities Can be thought of as a factor by which space is locally compressed To smooth except at edges: Let ci be small if i is an edge pixel Little smoothing occurs because “space is stretched” or “little heat flows” Let ci be large at all other pixels More smoothing occurs in the vicinity of pixel i because “space is compressed” or “heat flows easily”
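
A compact Perona–Malik-style sketch of this idea in Python (the exponential conductance function and the parameter kappa, which sets how strong a gradient counts as an "edge", are illustrative choices; the book's exact formulation may differ):

```python
import numpy as np

def conductance(d, kappa):
    """Perona-Malik conductance: ~1 in smooth regions, ~0 across strong edges."""
    return np.exp(-(d / kappa) ** 2)

def vcd(f, iters=50, dt=0.2, kappa=10.0):
    """Variable conductance diffusion sketch: blur everywhere except where
    the local gradient is large (edges), so edges are preserved."""
    f = f.astype(float).copy()
    for _ in range(iters):
        p = np.pad(f, 1, mode='edge')      # replicate-border padding
        dN = p[:-2, 1:-1] - f              # differences to the 4 neighbors
        dS = p[2:, 1:-1] - f
        dW = p[1:-1, :-2] - f
        dE = p[1:-1, 2:] - f
        f += dt * (conductance(dN, kappa) * dN + conductance(dS, kappa) * dS +
                   conductance(dW, kappa) * dW + conductance(dE, kappa) * dE)
    return f
```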

VCD A.K.A. Anisotropic diffusion With repetition, produces a nearly piecewise uniform result Like MFA and GNC formulations Equivalent to MFA w/o a noise term Edge-oriented VCD: VCD + diffuse tangential to edges when near edges Biased Anisotropic diffusion (BAD) Equivalent to MAP image restoration BUT, are the pieces separated where we want?

VCD Sample Images From the Scientific Applications and Visualization Group at NIST http://math.nist.gov/mcsd/savg/software/filters/

Various VCD Approaches: Tradeoffs and example images Mirebeau J., Fehrenbach J., Risser L., Tobji S., “Anisotropic Diffusion in ITK”, the Insight Journal Images copied per Creative Commons license http://www.insight-journal.org/browse/publication/953 Then click on the “Download Paper” link in the top-right

Edge Preserving Smoothing Other techniques constantly being developed (but none is perfect) E.g., “A Brief Survey of Recent Edge-Preserving Smoothing Algorithms on Digital Images” https://arxiv.org/abs/1503.07297 SimpleITK filters: BilateralImageFilter Various types of AnisotropicDiffusionImageFilter Various types of CurvatureFlowImageFilter
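
A hedged usage sketch with SimpleITK's procedural interface (the file names are placeholders and the parameter values are illustrative, not tuned; check the SimpleITK documentation for the exact keyword arguments in your installed version):

```python
import SimpleITK as sitk

# Read a noisy image and cast to float, as the diffusion filters expect.
img = sitk.Cast(sitk.ReadImage("noisy_input.nii.gz"), sitk.sitkFloat32)

# Curvature-flow smoothing (an edge-preserving, level-set-based filter).
cf = sitk.CurvatureFlow(img, timeStep=0.125, numberOfIterations=10)

# Gradient (Perona-Malik-style) anisotropic diffusion.
ad = sitk.GradientAnisotropicDiffusion(img, timeStep=0.0625,
                                       conductanceParameter=2.0,
                                       numberOfIterations=10)

sitk.WriteImage(cf, "smoothed_curvature_flow.nii.gz")
sitk.WriteImage(ad, "smoothed_anisotropic_diffusion.nii.gz")
```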

Congratulations! You have made it through most of the “introductory” material. Now we’re ready for the “fun stuff.” “Fun stuff” (why we do image analysis): Segmentation Registration Shape Analysis Etc.