Computer Vision, Lecture 6: Probabilistic Methods in Segmentation

Slide 1: This Lecture
- Probability theory
- Estimation and decision
- Random models
- Mixture models for segmentation and the EM algorithm (16.1, 16.2)
- Linear Gaussian models for segmentation
  - Templates
  - Search

Slide 2: Probability Theory
A continuous random variable is a model for a measurement, such as the air temperature in Lviv, that takes on a range of possible values and cannot be predicted exactly. The density function p(t) tells us which values of the temperature occur more or less often. If, for example, the area under p(t) to the left of 15° is 1/2, then over many measurements the fraction of readings below 15° will be 1/2.
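
As an illustration of this fraction-of-measurements idea, here is a minimal sketch assuming a hypothetical Gaussian temperature model; the mean of 12° and standard deviation of 4° are made-up numbers, not taken from the lecture.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical temperature model: Gaussian with made-up parameters.
mu, sigma = 12.0, 4.0
rng = np.random.default_rng(0)

samples = rng.normal(mu, sigma, size=100_000)      # many measurements
empirical = np.mean(samples < 15.0)                # fraction of readings below 15 degrees
theoretical = norm.cdf(15.0, loc=mu, scale=sigma)  # area under p(t) for t < 15

print(f"empirical fraction {empirical:.3f}, area under density {theoretical:.3f}")
```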

Slide 3: More Probability
A set of independent, identically distributed (iid) random variables with density function p(t) is a model for a collection of measurements (t_1, ..., t_n) in which the value of any one variable does not help predict any of the others. The joint density function is the product of the individual densities,
p(t_1, ..., t_n) = p(t_1) p(t_2) ... p(t_n).
We often use the iid model even when we have good reason to believe that the random variables are not independent.

Slide 4: Examples of Random Images
The images below were produced by models based on probability theory:
- iid, µ = 128, σ = 25
- Not independent, µ = 128, σ = 25
- Different µ, same σ
- Same µ, different σ
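
A small sketch of how images of this kind could be generated, assuming the iid case and one simple way of producing correlated (not independent) pixels; the 5×5 averaging scheme is an illustrative choice, not the method used for the slide's images.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)
mu, sigma, shape = 128.0, 25.0, (256, 256)

# iid Gaussian image: every pixel drawn independently with the same mean and std.
iid_image = rng.normal(mu, sigma, size=shape)

# A not-independent variant: locally average iid noise so neighbouring pixels
# become correlated, then rescale back to standard deviation sigma.
noise = rng.normal(0.0, 1.0, size=shape)
kernel = np.ones((5, 5)) / 25.0
smooth = convolve2d(noise, kernel, mode="same", boundary="symm")
correlated_image = mu + sigma * smooth / smooth.std()

# Clip to the 0..255 range of an 8-bit image for display.
images = np.clip(np.stack([iid_image, correlated_image]), 0, 255).astype(np.uint8)
```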

Slide 5: Mixture Distribution
Suppose we have a sequence of random variables (y_1, ..., y_n). Each variable is a vector with two components, y = (x, ω), where x is continuously distributed and ω takes on one of two values, ω = 1 or ω = 2. The variable x has density function p_1(x) if ω = 1 and p_2(x) if ω = 2. The variables ω_i are iid with P(ω = 1) = P_1 and P(ω = 2) = P_2. The density function of x is then
p(x) = P_1 p_1(x) + P_2 p_2(x).
This is called a mixture distribution, and the model that produces the sequence x_i is called a mixture model. If only x is observed, then ω is a hidden random variable.
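
A minimal sketch of sampling from such a two-component mixture, assuming Gaussian components; the mixing probabilities and component parameters below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
P1, P2 = 0.4, 0.6                                    # mixing probabilities, P1 + P2 = 1
mu1, mu2, sigma1, sigma2 = 60.0, 140.0, 15.0, 20.0   # made-up component parameters

omega = rng.choice([1, 2], size=n, p=[P1, P2])       # hidden labels
x = np.where(omega == 1,
             rng.normal(mu1, sigma1, size=n),
             rng.normal(mu2, sigma2, size=n))        # observed values only
```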

Slide 6: Decision Problem
It is given that a sequence x_i comes from a mixture model and that ω is unknown. Our goal is to find the values of ω from the values of x. This is called a decision problem. If p_1(x) is concentrated at smaller values of x than p_2(x), then it is reasonable to choose some threshold value t and to decide ω = 1 if x < t and ω = 2 if x ≥ t. We see that the segmentation problem is closely related to mixtures and to decision theory.
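
A sketch of the threshold rule, continuing the hypothetical mixture above; the threshold of 100 is an assumed value, not derived in the lecture.

```python
import numpy as np

def threshold_decision(x, t):
    """Decide omega = 1 where x < t and omega = 2 where x >= t."""
    x = np.asarray(x)
    return np.where(x < t, 1, 2)

# Example: with components centred at 60 and 140, a threshold near 100 is plausible.
labels = threshold_decision([52.0, 97.3, 155.0], t=100.0)
print(labels)   # -> [1 1 2]
```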

Slide 7: Estimation Problem
Often, we do not know the exact form of the density function. We may know a formula, but not the values of some of its parameters. For example, we may know that the density is Gaussian,
p(x; µ, σ) = (1 / (σ √(2π))) exp( -(x - µ)² / (2σ²) ),
but not know µ and σ. We are also given a set of iid observations (x_1, ..., x_n), and we need to find the values of µ and σ. It is not possible to find the exact values, but a good estimate can be found from the principle of maximum likelihood. The likelihood function is defined as
L(µ, σ) = p(x_1; µ, σ) p(x_2; µ, σ) ... p(x_n; µ, σ).
As our estimate we choose the values of µ and σ that produce the largest value of the likelihood function for the given observations (x_1, ..., x_n).
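
A small sketch of the maximum likelihood idea for the Gaussian case: evaluate the log-likelihood (the log of the product of densities, i.e. the sum of log densities) on a grid of candidate parameters and keep the best pair. The data and grid ranges are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
x = rng.normal(10.0, 3.0, size=500)        # observations with "unknown" mu, sigma

def log_likelihood(x, mu, sigma):
    """Log of the product of densities = sum of log densities."""
    return np.sum(norm.logpdf(x, loc=mu, scale=sigma))

mus = np.linspace(5.0, 15.0, 101)
sigmas = np.linspace(1.0, 6.0, 101)
ll = np.array([[log_likelihood(x, m, s) for s in sigmas] for m in mus])
i, j = np.unravel_index(np.argmax(ll), ll.shape)
print(f"grid maximum-likelihood estimate: mu ~ {mus[i]:.2f}, sigma ~ {sigmas[j]:.2f}")
```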

Slide 8: Estimation for Gaussian Densities
For a Gaussian random variable the maximum likelihood estimates of µ and σ can be found in closed form: the estimate of µ is the sample mean, (1/n) Σ_i x_i, and the estimate of σ² is the average squared deviation from that mean, (1/n) Σ_i (x_i - mean)².
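
In code, assuming the same Gaussian setting, the closed-form estimates are the sample mean and the 1/n sample variance:

```python
import numpy as np

def gaussian_mle(x):
    """Closed-form maximum likelihood estimates for a Gaussian sample."""
    x = np.asarray(x, dtype=float)
    mu_hat = x.mean()                                 # (1/n) * sum of x_i
    sigma_hat = np.sqrt(np.mean((x - mu_hat) ** 2))   # uses 1/n, not 1/(n - 1)
    return mu_hat, sigma_hat

rng = np.random.default_rng(4)
print(gaussian_mle(rng.normal(10.0, 3.0, size=500)))
```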

Slide 9: Estimation for Mixtures
There is no closed-form solution for estimating mixture parameters. The following iterative procedure is often used. We describe the procedure for estimating the two-component mixture density p(x) = P_1 p_1(x) + P_2 p_2(x) from slide 5. We start with initial estimates of the parameters. We find the next estimates (indicated by +) by first computing, for each observation, the expected value of the hidden label under the current parameters, and then re-estimating the parameters from those expected labels, as in the sketch below.
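
A sketch of one common form of this procedure: the EM updates for a two-component Gaussian mixture. The lecture does not show its exact parameterization, so treating both means, both standard deviations, and the mixing probabilities as unknown is an assumption here.

```python
import numpy as np
from scipy.stats import norm

def em_two_gaussians(x, mu, sigma, P, n_iter=50):
    """EM for p(x) = P[0]*N(mu[0], sigma[0]) + P[1]*N(mu[1], sigma[1])."""
    x = np.asarray(x, dtype=float)
    mu, sigma, P = np.array(mu, float), np.array(sigma, float), np.array(P, float)
    for _ in range(n_iter):
        # E-step: I[l, m] = expected value of the hidden label (responsibility).
        weighted = P * norm.pdf(x[:, None], loc=mu, scale=sigma)   # shape (n, 2)
        I = weighted / weighted.sum(axis=1, keepdims=True)
        # M-step: next parameter estimates (the "+" values).
        Nm = I.sum(axis=0)
        P = Nm / len(x)
        mu = (I * x[:, None]).sum(axis=0) / Nm
        sigma = np.sqrt((I * (x[:, None] - mu) ** 2).sum(axis=0) / Nm)
    return mu, sigma, P

# Example on a hypothetical mixture like the one sampled in the slide 5 sketch.
rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(60, 15, 4000), rng.normal(140, 20, 6000)])
print(em_two_gaussians(x, mu=[50, 150], sigma=[10, 10], P=[0.5, 0.5]))
```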

Slide 10: Expectation Maximization
The procedure above is an example of the Expectation-Maximization (EM) algorithm, which is used for many difficult parameter estimation problems. The algorithm alternates between computing I_lm, the expected values of the hidden variables (the E-step), and finding the next values of the parameters (the M-step). The k-means algorithm can be viewed as analogous to EM: it makes hard assignments of points to clusters where EM uses expected values.

Slide 11: Template Matching
A multivariate Gaussian model can be used when the components of a sequence or an image are not iid. A general formula for the density function is given on page 493. We will consider a simpler case: the observations form a vector x whose components are independent Gaussian variables with identical standard deviation σ but different means, and we collect the mean values into a vector m. The density function is then
p(x; m) = (1 / (σ √(2π)))ⁿ exp( -||x - m||² / (2σ²) ),
where n is the number of components of x.
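
A short sketch of evaluating the corresponding log-density for a given mean vector m; the vectors and σ below are placeholder values.

```python
import numpy as np

def log_density(x, m, sigma):
    """log p(x; m) for independent Gaussian components with equal sigma and means m."""
    x, m = np.asarray(x, float), np.asarray(m, float)
    n = x.size
    return -0.5 * n * np.log(2.0 * np.pi * sigma**2) - np.sum((x - m) ** 2) / (2.0 * sigma**2)

x = np.array([1.0, 2.0, 0.5])
m = np.array([1.0, 1.5, 0.0])
print(log_density(x, m, sigma=0.8))
```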

Slide 12: Choice Among Templates
Suppose we observe a vector x that comes from one of d possible objects, each described by the above model but with a different mean m_1, ..., m_d. To identify the object we find the i with the highest p(x; m_i). This is equivalent to finding the value of i for which ||x - m_i||² is as small as possible. Expanding the square gives
||x - m_i||² = xᵀx - 2 xᵀm_i + m_iᵀm_i.
Since xᵀx does not depend on i, to find the most likely i we need to find the largest value of xᵀm_i - (1/2) m_iᵀm_i. If m_iᵀm_i has the same value for all i, then the highest probability occurs when xᵀm_i is as large as possible. This is the principle of maximum correlation.
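
A sketch of choosing among templates both ways, by smallest squared distance and by the correlation-style score; the template vectors here are placeholders.

```python
import numpy as np

def choose_template(x, templates):
    """Return index of the template with the highest p(x; m_i) (smallest ||x - m_i||^2)."""
    x = np.asarray(x, float)
    M = np.asarray(templates, float)                 # shape (d, n), one template per row
    dist2 = np.sum((M - x) ** 2, axis=1)             # ||x - m_i||^2
    score = M @ x - 0.5 * np.sum(M * M, axis=1)      # x^T m_i - (1/2) m_i^T m_i
    assert np.argmin(dist2) == np.argmax(score)      # the two criteria always agree
    return int(np.argmax(score))

templates = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.6, 0.6, 0.0]]
print(choose_template([0.9, 0.2, 0.1], templates))   # -> 0
```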