1
Markov Random Fields. Tomer Michaeli, Graduate Course 048926
2
Wiener Filter for 1D GMRF
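As a concrete sketch (not from the slides, and with illustrative parameter values): for a 1D Gaussian MRF prior with energy $(\lambda/2)\sum_i (x_{i+1}-x_i)^2$ and observations $y = x + n$, $n \sim \mathcal{N}(0, \sigma^2 I)$, the Wiener estimate coincides with the MAP estimate and is the solution of a single linear system:

```python
import numpy as np

def wiener_denoise_1d(y, lam=10.0, sigma=0.1):
    """MMSE (= MAP) denoising under a 1D Gaussian MRF prior.
    Prior energy: (lam/2) * sum_i (x[i+1] - x[i])**2; likelihood: y = x + noise.
    The estimate solves (I + lam * sigma^2 * D^T D) x = y."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)                # (n-1) x n finite-difference matrix
    A = np.eye(n) + lam * sigma**2 * (D.T @ D)    # posterior precision, up to scale
    return np.linalg.solve(A, y)

# Toy usage: denoise a noisy step signal.
rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(50), np.ones(50)])
x_hat = wiener_denoise_1d(x + 0.1 * rng.standard_normal(100))
```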
3
Wiener Filter for 2D GMRF. Panels: Original; Blur, small noise; Deblurred; Severe noise; Denoised.
4
Gaussian vs. Weak Spring Potentials
5
Distribution of Derivatives in Natural Images
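The shape this slide refers to is easy to verify: the log-histogram of derivatives of a natural grayscale image is sharply peaked at zero with tails far heavier than the parabola a Gaussian would give. A minimal sketch (the function name and bin count are illustrative):

```python
import numpy as np

def log_derivative_histogram(img, bins=101):
    """Log-density histogram of horizontal derivatives of a grayscale image.
    For natural images this is sharply peaked at 0 with heavy tails: much
    closer to a (hyper-)Laplacian than to the parabola of a Gaussian."""
    dx = np.diff(np.asarray(img, dtype=float), axis=1).ravel()
    hist, edges = np.histogram(dx, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, np.log(hist + 1e-12)   # log scale makes the tails visible
```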
6
Robust Non-Convex Potentials “Lorentzian”
7
Robust Non-Convex Potentials
8
Lorentzian potentials
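For reference, one common parameterization of the Lorentzian robust potential, alongside the hyper-Laplacian and quadratic potentials it is contrasted with on these slides (the exact constants used on the slides may differ):

```python
import numpy as np

def quadratic(x):
    """Gaussian (quadratic) potential, for comparison."""
    return 0.5 * x ** 2

def lorentzian(x, sigma=1.0):
    """Lorentzian robust potential (one common parameterization): it grows
    only logarithmically, so large derivatives (edges) are penalized gently."""
    return np.log1p(0.5 * (x / sigma) ** 2)

def hyper_laplacian(x, alpha=0.8):
    """Hyper-Laplacian potential |x|^alpha with alpha < 1: non-convex and
    matched to the heavy-tailed derivative statistics of natural images."""
    return np.abs(x) ** alpha
```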
9
Robust Non-Convex Potentials. Panels: Input; Quadratic Potentials (Gaussian Prior); Robust Potentials (Hyper-Laplacian Prior).
10
Unstable Reconstruction with Non-Convex Potentials
12
Robust Convex Potentials
13
Symmetric Bound Surrogate
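A hedged sketch of the surrogate idea, assuming the slide's "symmetric bound" is the standard half-quadratic majorizer: bound the robust potential rho(z) at the current point z0 by a symmetric quadratic (w/2) z^2 + const with w = rho'(z0)/z0, then alternate between updating the weights and solving the resulting least-squares problem (iteratively reweighted least squares). All parameter values below are illustrative:

```python
import numpy as np

def irls_denoise_1d(y, lam=5.0, alpha=0.8, iters=30, eps=1e-6):
    """Majorize-minimize with a hyper-Laplacian smoothness prior rho(z) = |z|^alpha.
    Each iteration bounds rho at the current differences z0 by the symmetric
    quadratic (w/2) z**2 + const with w = rho'(z0)/z0 = alpha * |z0|**(alpha - 2),
    then solves the resulting weighted least-squares problem."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)                             # finite differences
    x = y.copy()
    for _ in range(iters):
        z = D @ x
        w = alpha * np.maximum(np.abs(z), eps) ** (alpha - 2)  # guard the z -> 0 blow-up
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)
    return x
```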
14
MRF Examples
15
Pairwise Cliques: manually chosen potentials.
16
Weak Spring. Panels: image with Gaussian noise; denoising with a weak-spring potential.
17
Hyper-Laplacian. Panels: Gaussian Prior ( ); Levin et al. ( ); Hyper-Laplacian ( ); Laplacian Prior ( ).
18
Hyper-Laplacian. Panels: Gaussian Prior ( ); Levin et al. ( ); Hyper-Laplacian ( ); Laplacian Prior ( ).
19
Larger Cliques: learned potentials.
20
Fields of Experts: learned models. Clique structures: square 3x3 cliques; diamond-shaped 5x5 cliques; square 7x7 cliques.
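A sketch of the Fields-of-Experts energy in the usual Roth-and-Black form (Student-t style experts over learned clique filters). The learned filters are not reproduced on the slide, so random zero-mean placeholders stand in for them here:

```python
import numpy as np
from scipy.signal import convolve2d

def foe_energy(img, filters, alphas):
    """Fields-of-Experts energy, roughly
    E(x) = sum_k alpha_k * sum_i log(1 + 0.5 * (J_k * x)_i ** 2),
    i.e. a Student-t style log-expert applied to every filter response;
    each filter's support plays the role of the clique."""
    E = 0.0
    for J, a in zip(filters, alphas):
        r = convolve2d(img, J, mode='valid')   # responses over all clique positions
        E += a * np.log1p(0.5 * r ** 2).sum()
    return E

# Random zero-mean 3x3 placeholder filters (the learned ones are not listed here).
rng = np.random.default_rng(0)
filters = [f - f.mean() for f in rng.standard_normal((8, 3, 3))]
E = foe_energy(rng.random((32, 32)), filters, np.ones(8))
```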
21
Fields of Experts: Denoising
22
Fields of Experts: Inpainting
23
Shrinkage Fields
24
Gibbs sampling example: Bivariate normal distribution. (This and the following slides are adapted from Computer vision: models, learning and inference, ©2011 Simon J.D. Prince.)
25
Gibbs sampling example: Bivariate normal distribution
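A minimal implementation of the example: for a zero-mean, unit-variance bivariate normal with correlation rho, both conditionals are available in closed form, so Gibbs sampling just alternates two 1D Gaussian draws:

```python
import numpy as np

def gibbs_bivariate_normal(rho=0.9, n_samples=5000, seed=0):
    """Gibbs sampling from a zero-mean, unit-variance bivariate normal with
    correlation rho, alternating the two exact conditionals:
    x1 | x2 ~ N(rho * x2, 1 - rho**2), and symmetrically for x2 | x1."""
    rng = np.random.default_rng(seed)
    x1, x2 = 0.0, 0.0
    std = np.sqrt(1.0 - rho ** 2)
    samples = np.empty((n_samples, 2))
    for i in range(n_samples):
        x1 = rho * x2 + std * rng.standard_normal()
        x2 = rho * x1 + std * rng.standard_normal()
        samples[i] = (x1, x2)
    return samples

samples = gibbs_bivariate_normal()
print(np.corrcoef(samples[1000:].T))  # ~0.9 after discarding burn-in
```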
26
Binary Denoising (Before / After). The image is represented as binary discrete variables; some proportion of pixels have randomly changed polarity.
27
Segmentation
28
Multi-label Denoising (Before / After). The image is represented as discrete variables encoding intensity; some proportion of pixels have been randomly changed according to a uniform distribution.
29
Denoising Goal. Panels: Observed Data; Uncorrupted Image.
30
Denoising Goal (continued). Most of the pixels stay the same, but the observed image is not as smooth as the original. Now consider a pdf over binary images that encourages smoothness: a Markov random field.
31
Markov random fields: $\Pr(\mathbf{w}) = \frac{1}{Z}\exp\bigl(-\sum_c \psi_c(\mathbf{w}_c)\bigr)$, where $Z$ is the normalizing constant (partition function), each $\psi_c$ is a cost function that can return any number, and $\mathbf{w}_c$ is a subset of the variables (a clique) whose relationship the cost encodes.
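To make the formula concrete for the binary smoothing example that follows, here is the pairwise-clique cost with an Ising/Potts-style psi; the weight theta is illustrative:

```python
import numpy as np

def smoothness_energy(labels, theta=1.0):
    """Sum of pairwise clique costs for a binary image: every 4-neighbour
    pair pays theta when its two labels disagree (an Ising/Potts-style psi).
    Pr(w) is proportional to exp(-energy); the partition function Z sums
    exp(-energy) over all 2**(H*W) label images, hence is intractable."""
    lab = np.asarray(labels)
    disagree = (np.count_nonzero(lab[1:, :] != lab[:-1, :])
                + np.count_nonzero(lab[:, 1:] != lab[:, :-1]))
    return theta * disagree
```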
32
Smoothing Example
33
Smoothing Example
34
Smoothing Example. Samples: mostly smooth.
35
Max-Flow Problem. Goal: push as much 'flow' as possible through the directed graph from the source to the sink. The flow along each edge cannot exceed that edge's (non-negative) capacity $c_{ij}$.
36
Saturated Edges. When we are pushing the maximum amount of flow, there must be at least one saturated edge on any path from source to sink (otherwise we could push more flow). The set of saturated edges therefore separates the source from the sink.
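A minimal max-flow computation on a toy four-node graph, using scipy's solver (which expects integer capacities); the capacities are made up for illustration:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_flow

# Toy directed graph; node 0 is the source s, node 3 the sink t.
# Entry (i, j) holds the capacity c_ij of edge i -> j (0 means no edge).
capacities = csr_matrix(np.array([
    [0, 3, 2, 0],   # s -> 1 (cap 3), s -> 2 (cap 2)
    [0, 0, 1, 2],   # 1 -> 2 (cap 1), 1 -> t (cap 2)
    [0, 0, 0, 2],   # 2 -> t (cap 2)
    [0, 0, 0, 0],
], dtype=np.int32))

result = maximum_flow(capacities, 0, 3)
print(result.flow_value)  # 4: the saturated edges 1->t and 2->t separate s from t
```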
37
Graph Construction. One node per pixel (here a 3x3 image); an edge from the source to every pixel node; an edge from every pixel node to the sink; reciprocal edges between neighbours. In the minimum cut, EITHER the edge connecting a pixel to the source is cut OR the edge connecting it to the sink, but NOT BOTH (cutting both is unnecessary). Which of the two is cut determines whether the pixel receives label 1 or label 0, giving a one-to-one mapping between possible labellings and possible minimum cuts.
38
Graph Construction (continued). Now add capacities so that the minimum cut minimizes our cost function: unary costs U(0) and U(1) are attached to the links to the source and sink, and either one or the other is paid; pairwise costs sit on the edges between pixel nodes as shown. Why this works is easiest to understand through some worked examples.
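A sketch of the whole construction for binary denoising, assuming absolute-difference unary costs and a Potts pairwise cost with made-up weights lam and w; networkx's generic min-cut is used here rather than a specialized solver:

```python
import numpy as np
import networkx as nx

def binary_mrf_denoise(noisy, lam=2.0, w=1.0):
    """MAP denoising of a binary image via the s-t graph construction:
    source->pixel carries U(0), pixel->sink carries U(1), and reciprocal
    edges of capacity w join 4-neighbours. Pixels left on the source side
    of the minimum cut receive label 1 (their pixel->sink edge was cut)."""
    H, W = noisy.shape
    G = nx.DiGraph()
    for i in range(H):
        for j in range(W):
            p = (i, j)
            G.add_edge('s', p, capacity=lam * abs(0 - noisy[i, j]))  # U(0)
            G.add_edge(p, 't', capacity=lam * abs(1 - noisy[i, j]))  # U(1)
            for q in ((i + 1, j), (i, j + 1)):                       # 4-neighbours
                if q[0] < H and q[1] < W:
                    G.add_edge(p, q, capacity=w)
                    G.add_edge(q, p, capacity=w)
    _, (source_side, _) = nx.minimum_cut(G, 's', 't')
    labels = np.zeros_like(noisy)
    for node in source_side - {'s'}:
        labels[node] = 1
    return labels

# Toy usage: a binary square with 20% of pixels flipped.
rng = np.random.default_rng(0)
clean = np.zeros((20, 20), dtype=int); clean[5:15, 5:15] = 1
noisy = np.where(rng.random(clean.shape) < 0.2, 1 - clean, clean)
denoised = binary_mrf_denoise(noisy)
```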
39
Example 1
40
Example 2
41
Example 3
43
Denoising Results. Panels: Original, then results with increasing pairwise costs.
44
Convex vs. Non-Convex Costs
- Quadratic: convex, submodular
- Truncated quadratic: not convex, not submodular
- Potts model: not convex, not submodular
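For concreteness, the three pairwise costs written as functions of the label difference d (this parameterization and the truncation/penalty constants are assumptions, not taken from the slide):

```python
import numpy as np

def quadratic_cost(d):
    """Quadratic cost: convex and submodular, so exact inference is possible."""
    return np.asarray(d, dtype=float) ** 2

def truncated_quadratic_cost(d, t=4.0):
    """Truncated quadratic: the cap at t makes it non-convex and non-submodular."""
    return np.minimum(np.asarray(d, dtype=float) ** 2, t)

def potts_cost(d, theta=1.0):
    """Potts model: a flat penalty theta whenever the two labels differ."""
    return theta * (np.asarray(d) != 0)
```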
45
What is wrong with convex costs? They pay a lower price for many small changes than for one large one; the result is blurring at large changes in intensity. Panels: Observed noisy image; Denoised result.
46
Denoising Results: Alpha Expansion
47
Applications: Background subtraction.
48
Applications: GrabCut.
49
Applications: Stereo vision.
50
Applications: Shift-map image editing.
52
Applications: Shift-map image editing (continued).