1
Bayesian Inference and Visual Processing: Image Parsing & DDMCMC. Alan Yuille (Dept. of Statistics, UCLA). Tu, Chen, Yuille & Zhu (ICCV 2003).
2
Something Completely Different? No Medical Images. Segmentation and Recognition of Natural Images. But the ideas can be applied to Medical Images…
3
Image Parsing. (I) Images are composed of visual patterns. (II) Parse an image by decomposing it into its patterns.
4
Talk Part I: Generative Model. A stochastic grammar for generating images in terms of visual patterns. Visual patterns can be generic (texture/shading) or objects. A hierarchical probability model – a probability distribution on graphs.
5
Talk Part II: Inference Algorithm. Interpreting an image corresponds to constructing a parse graph. A set of moves constructs the parse graph; the dynamics of the moves use bottom-up & top-down visual processing. Data-Driven Markov Chain Monte Carlo (DDMCMC): discriminative models drive generative models.
6
Part I: Generative Models. Previous related work by our group: Zhu & Yuille 1996 (Region Competition); Tu & Zhu 2002; Tu & Zhu 2003. These theories assumed generic visual patterns only.
7
Generic Patterns & Object Patterns. Generic visual patterns have limitations; object patterns enable us to unify segmentation & recognition.
8
Stochastic Grammar: Parsing Graph. Nodes represent visual patterns; child nodes connect down to the image pixels. Stochastic grammars: Manning & Schütze.
9
Image Patterns. Node attributes: Zeta: pattern type – 66 types: (I) smooth, (II) texture/clutter, (III) shading, (IV) face, (V–LXVI) text characters. L: shape descriptor (the image region being modeled). Theta: model parameters.
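A minimal sketch of how a parse-graph node's attributes (Zeta, L, Theta) might be encoded; the names and types here are illustrative, not the paper's actual data structures.

```python
from dataclasses import dataclass, field
from enum import IntEnum
from typing import Any

class PatternType(IntEnum):
    SMOOTH = 1           # (I) smooth/constant region
    TEXTURE = 2          # (II) texture/clutter
    SHADING = 3          # (III) shading
    FACE = 4             # (IV) face
    TEXT_CHAR_FIRST = 5  # (V)-(LXVI): one type per text character, 62 in all

@dataclass
class ParseNode:
    zeta: PatternType    # pattern type of this visual pattern
    shape: Any           # L: shape descriptor for the covered image region
    theta: Any           # Theta: parameters of the generative model for zeta
    children: list["ParseNode"] = field(default_factory=list)
```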
10
Generative Model. Likelihood: P(I|W). Prior: P(W). Samples drawn from the model were shown on the slide.
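The slide's equations were images; schematically, the region-factorized form used in this line of work is (a sketch, matching the node attributes Zeta, L, Theta above, not the slide's exact notation):

```latex
% W = (K, {(\zeta_k, L_k, \Theta_k)}_{k=1}^{K}) describes K visual patterns.
P(I \mid W) = \prod_{k=1}^{K} P\big(I_{R_k} \mid \zeta_k, L_k, \Theta_k\big),
\qquad
P(W) = p(K) \prod_{k=1}^{K} p(L_k)\, p(\zeta_k)\, p(\Theta_k \mid \zeta_k).
```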
11
Stochastic Grammar Summary. The graph represents the hierarchical decomposition of the image into visual patterns; sampling from the graph generates an image.
12
Part II: Inference Algorithm. We described a model to generate an image: P(I|W) & P(W). We need an algorithm to infer W* from P(I|W) & P(W).
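Concretely, inference seeks W* = argmax_W P(W|I), where P(W|I) is proportional to P(I|W)P(W). A minimal sketch of the objective (the likelihood and prior callables stand in for the model terms above; the parse space is far too large to enumerate, hence the MCMC search that follows):

```python
# Hypothetical scoring of a candidate parse W.
def log_posterior(W, image, log_likelihood, log_prior):
    """Unnormalized log P(W|I) = log P(I|W) + log P(W) + const."""
    return log_likelihood(image, W) + log_prior(W)
```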
13
Inference & Parse Graph. Inference requires constructing a parse graph, with dynamics to create/delete nodes and alter node attributes.
14
Moves: birth & death of text; birth & death of faces; splitting & merging of regions; switching node attributes (model parameters & pattern types); moving region boundaries.
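An illustrative catalogue of these move types as code (the names are mine, not the paper's):

```python
from enum import Enum, auto

class Move(Enum):
    BIRTH_TEXT = auto()         # create a text-character node
    DEATH_TEXT = auto()         # delete a text-character node
    BIRTH_FACE = auto()         # create a face node
    DEATH_FACE = auto()         # delete a face node
    SPLIT_REGION = auto()       # split one region node into two
    MERGE_REGIONS = auto()      # merge two region nodes into one
    SWITCH_ATTRIBUTES = auto()  # change pattern type / model parameters
    MOVE_BOUNDARY = auto()      # adjust a region boundary
```

The birth/death and split/merge pairs are mutually reversible, which is what lets the Markov chain defined next satisfy detailed balance.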
15
Markov Chain Monte Carlo. Design a Markov chain (MC) whose transition kernel satisfies detailed balance with respect to the posterior P(W|I). Then repeated sampling from the MC converges to samples from P(W|I).
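Detailed balance means pi(W) K(W, W') = pi(W') K(W', W) for all state pairs, with pi the target posterior. A toy numerical check on a two-state chain with a Metropolis kernel (always propose the other state, accept with min(1, pi'/pi)):

```python
pi = [0.25, 0.75]  # toy target distribution standing in for P(W|I)
K = [[0.0, min(1.0, pi[1] / pi[0])],
     [min(1.0, pi[0] / pi[1]), 0.0]]
for i in range(2):
    K[i][i] = 1.0 - sum(K[i])  # self-transition mass from rejections

for w in range(2):
    for w2 in range(2):
        assert abs(pi[w] * K[w][w2] - pi[w2] * K[w2][w]) < 1e-12
```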
16
Moves & Sub-kernels. Implement each move by a transition sub-kernel K_i; combine the moves into a full kernel K as a weighted sum of the sub-kernels. At each time step, choose a type of move, then apply it to the graph. Each sub-kernel obeys detailed balance with respect to the posterior.
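A sketch of the full kernel as a mixture of sub-kernels: pick move type i with probability rho(i), then apply that move. Here sub_kernels maps each Move to a function W -> W' assumed to itself satisfy detailed balance (names are illustrative):

```python
import random

def full_kernel_step(W, image, sub_kernels, rho):
    """One step of K = sum_i rho(i) K_i: sample a move type, apply it."""
    moves = list(sub_kernels)
    move = random.choices(moves, weights=[rho[m] for m in moves])[0]
    return sub_kernels[move](W, image)
```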
17
Data-Driven Proposals. Use data-driven proposals to make the Markov chain efficient, following a Metropolis-Hastings design; the proposal probabilities are based on discriminative cues.
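In the Metropolis-Hastings design, a candidate W' drawn from the data-driven proposal Q(W -> W' | I) is accepted with probability min(1, [Q(W' -> W | I) P(W'|I)] / [Q(W -> W' | I) P(W|I)]). A sketch of the acceptance test in log space:

```python
import math, random

def mh_accept(log_post_W, log_post_Wp, log_q_fwd, log_q_bwd):
    """Accept W' with prob min(1, [q_bwd * P(W'|I)] / [q_fwd * P(W|I)])."""
    log_alpha = (log_post_Wp + log_q_bwd) - (log_post_W + log_q_fwd)
    return random.random() < math.exp(min(0.0, log_alpha))
```

Because the acceptance step corrects the proposal, the discriminative cues only have to be roughly right to speed the chain up; they cannot bias the posterior.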
18
Discriminative Methods: edge cues; binarization cues; face region cues (AdaBoost); text region cues (AdaBoost); shape affinity cues; region affinity cues; model parameters & pattern type.
19
Design Proposals I. How to generate proposals: Omega_i(W), the scope of W, is the set of states that can be reached from W with one move of type i. The ideal proposal samples W' in Omega_i(W) with probability proportional to the posterior P(W'|I).
20
Design Proposals II. Re-express the ideal proposal in terms of the ratio P(W'|I)/P(W|I), and set Q(W,W'|I) to approximate this ratio while being easily computable.
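A sketch of this recipe: enumerate the scope, weight each candidate by a cheap approximation ratio_hat(W, W') of P(W'|I)/P(W|I) (this is where the discriminative cues enter), and sample proportionally. The function names are illustrative:

```python
import random

def propose(W, scope, ratio_hat):
    """Draw W' from Omega_i(W) with prob proportional to ratio_hat(W, W')."""
    candidates = list(scope(W))
    weights = [ratio_hat(W, Wp) for Wp in candidates]
    idx = random.choices(range(len(candidates)), weights=weights)[0]
    q_fwd = weights[idx] / sum(weights)  # proposal prob Q(W -> W' | I)
    return candidates[idx], q_fwd
```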
21
Example: Create Node by Splitting. Select a region (node) R_k. Propose a finite set of ways to split R_k based on discriminative cues (edges and edge linking). Consider splitting R_k into R_i & R_j.
22
Example: Create Node by Splitting. Create (split): the denominator involves only the current state W, which is known; approximate the remaining terms using an affinity measure between the candidate sub-regions.
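A sketch of weighting candidate splits of R_k: each candidate boundary comes from bottom-up edge cues, and a split is more attractive when the two sides are dissimilar, so one plausible weighting (an assumption, not the paper's exact formula) is inversely proportional to the affinity:

```python
def split_weights(candidate_splits, affinity):
    """affinity(R_i, R_j) is high for similar regions, so weight by 1/affinity:
    dissimilar halves make a split more probable."""
    return [1.0 / max(affinity(Ri, Rj), 1e-9) for (Ri, Rj) in candidate_splits]
```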
23
Example: Create Face. The bottom-up proposal for a face is driven by AdaBoost & edge detection. This requires splitting an existing region R_k into R_i (face) and R_j (remainder). The parameters for R_k & R_j are known (they are the same).
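A sketch of the bottom-up face-birth proposal: an AdaBoost detector yields windows and confidence scores, and each detection proposes carving a face region out of the region containing it, with proposal weight tied to the detector's confidence. The detector interface here is hypothetical:

```python
def face_birth_proposals(image, detect_faces):
    """detect_faces(image) -> iterable of (window, score) AdaBoost detections."""
    proposals = [{"region": window, "weight": score}
                 for window, score in detect_faces(image)]
    total = sum(p["weight"] for p in proposals)
    if total > 0:
        for p in proposals:
            p["weight"] /= total  # normalized proposal probabilities
    return proposals
```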
24
Example: Death by Merging. Death (merge): select regions R_i & R_j to merge based on a similarity measure (as for splitting), and use the same approximations as for splitting.
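The reverse move, sketched with the same affinity measure so that the split/merge pair stays reversible: similar adjacent neighbours are the most likely merge candidates.

```python
def merge_weights(adjacent_pairs, affinity):
    """Weight each adjacent region pair by affinity: similar pairs merge first."""
    return [affinity(Ri, Rj) for (Ri, Rj) in adjacent_pairs]
```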
25
Node Splitting/Merging. Causes for a split/merge: (i) bottom-up cue – there is probably a face or text here (AdaBoost + edge detection); (ii) bottom-up cue – there is an intensity edge splitting the region; (iii) current state W – the model for a region fits its data poorly; (iv) current state W – two regions are similar (by the affinity measure).
26
Full Strategy: Integration.
27
Control Strategy.
28
AdaBoost – Conditional Probabilities, learned by supervised learning.
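One standard way to read a conditional probability off a boosted classifier (after Friedman, Hastie & Tibshirani): with strong-classifier score F(x) = sum_t alpha_t h_t(x), take P(y=1|x) ≈ sigmoid(2F(x)). The paper trains AdaBoost on labelled face/text windows; its exact calibration may differ from this sketch:

```python
import math

def adaboost_posterior(alphas, weak_learners, x):
    """Logistic link on the boosted score; each h(x) returns -1 or +1."""
    F = sum(a * h(x) for a, h in zip(alphas, weak_learners))
    return 1.0 / (1.0 + math.exp(-2.0 * F))
```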
29
Experiments: Competition & Cooperation.
30
Experiments: More results.
31
Key Ideas: generative models for visual patterns & stochastic grammars; inference as a set of "moves" on the parse graph, implemented by kernels; discriminative (bottom-up) models drive top-down generative models.
32
Discussion: Image parsing gives a way to combine segmentation with recognition. The DDMCMC algorithm is a way to combine discriminative models with generative models. This bottom-up/top-down architecture may be suggestive of brain architecture.