1
Pattern Recognition and Machine Learning
Chapter 1: Introduction
2
Example Handwritten Digit Recognition
3
Polynomial Curve Fitting
4
Sum-of-Squares Error Function
5
0th Order Polynomial
6
1st Order Polynomial
7
3rd Order Polynomial
8
9th Order Polynomial
9
Over-fitting. Root-Mean-Square (RMS) Error: E_RMS = √(2E(w*)/N)
10
Polynomial Coefficients
11
Data Set Size: 9th Order Polynomial
12
Data Set Size: 9th Order Polynomial
13
Regularization: penalize large coefficient values by minimizing Ẽ(w) = ½ Σ_n {y(x_n, w) − t_n}² + (λ/2)‖w‖²
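A minimal Python/NumPy sketch of this regularized polynomial fit. The synthetic sin(2πx) data, noise level, order M = 9, and the value of λ are illustrative assumptions, not values taken from the slides.

import numpy as np

# Regularized polynomial curve fitting (ridge form), as a small illustrative sketch.
rng = np.random.default_rng(0)
N, M, lam = 10, 9, np.exp(-18)          # training size, polynomial order, regularizer (assumed)
x = np.linspace(0, 1, N)
t = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(N)   # noisy targets

Phi = np.vander(x, M + 1, increasing=True)   # design matrix: columns 1, x, ..., x^M

# Minimize 0.5*sum_n (t_n - w.phi(x_n))^2 + 0.5*lam*||w||^2
# closed form: w = (lam*I + Phi^T Phi)^{-1} Phi^T t
w = np.linalg.solve(lam * np.eye(M + 1) + Phi.T @ Phi, Phi.T @ t)

E = 0.5 * np.sum((Phi @ w - t) ** 2)         # sum-of-squares error
E_rms = np.sqrt(2 * E / N)                   # root-mean-square error
print(w, E_rms)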
14
Regularization:
15
Regularization:
16
Regularization: E_RMS vs. ln λ
17
Polynomial Coefficients
18
Probability Theory Apples and Oranges
19
Probability Theory: Marginal Probability, Conditional Probability, Joint Probability
20
Probability Theory: Sum Rule, Product Rule
21
The Rules of Probability
Sum rule: p(X) = Σ_Y p(X, Y). Product rule: p(X, Y) = p(Y|X) p(X).
22
Bayes’ Theorem: p(Y|X) = p(X|Y) p(Y) / p(X), i.e. posterior ∝ likelihood × prior
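A tiny numeric illustration of the sum and product rules and Bayes' theorem using the apples-and-oranges boxes of the earlier slide; the box prior and the fruit proportions inside each box are assumed here for illustration.

# Bayes' theorem on the fruit-in-boxes setup (illustrative probabilities).
p_box = {"red": 0.4, "blue": 0.6}                      # prior p(B), assumed
p_fruit_given_box = {                                  # likelihood p(F | B), assumed
    "red":  {"apple": 0.25, "orange": 0.75},
    "blue": {"apple": 0.75, "orange": 0.25},
}

fruit = "orange"
p_fruit = sum(p_box[b] * p_fruit_given_box[b][fruit] for b in p_box)   # sum rule
posterior = {b: p_box[b] * p_fruit_given_box[b][fruit] / p_fruit       # product rule + Bayes
             for b in p_box}
print(posterior)   # posterior proportional to likelihood x prior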
23
Probability Densities
24
Transformed Densities
25
Expectations: E[f] = Σ_x p(x) f(x). Conditional expectation (discrete): E_x[f|y] = Σ_x p(x|y) f(x).
Approximate expectation (discrete and continuous): E[f] ≈ (1/N) Σ_n f(x_n).
26
Variances and Covariances
27
The Gaussian Distribution
28
Gaussian Mean and Variance
29
The Multivariate Gaussian
30
Gaussian Parameter Estimation
Likelihood function
31
Maximum (Log) Likelihood
32
Properties of μ_ML and σ²_ML
33
Curve Fitting Re-visited
34
Maximum Likelihood: determine w_ML by minimizing the sum-of-squares error E(w).
35
Predictive Distribution
36
MAP: A Step towards Bayes
Determine w_MAP by minimizing the regularized sum-of-squares error Ẽ(w).
37
Bayesian Curve Fitting
38
Bayesian Predictive Distribution
39
Model Selection Cross-Validation
40
Curse of Dimensionality
41
Curse of Dimensionality
Polynomial curve fitting, M = 3 Gaussian Densities in higher dimensions
42
Decision Theory. Inference step: determine either p(x, t) or p(t|x). Decision step: for given x, determine the optimal t.
43
Minimum Misclassification Rate
44
Minimum Expected Loss. Example: classify medical images as ‘cancer’ or ‘normal’, with a loss matrix L_kj indexed by the true class (rows) and the decision (columns).
45
Minimum Expected Loss. Regions R_j are chosen to minimize E[L] = Σ_k Σ_j ∫_{R_j} L_kj p(x, C_k) dx.
46
Reject Option
47
Why Separate Inference and Decision?
Minimizing risk (the loss matrix may change over time); the reject option; compensating for unbalanced class priors; combining models.
48
Decision Theory for Regression
Inference step: determine p(t|x). Decision step: for given x, make the optimal prediction, y(x), for t. Loss function: E[L] = ∫∫ L(t, y(x)) p(x, t) dx dt.
49
The Squared Loss Function
50
Generative vs Discriminative
Generative approach: model the joint distribution p(x, t) and use Bayes’ theorem to obtain p(t|x). Discriminative approach: model p(t|x) directly.
51
Entropy: an important quantity in coding theory, statistical physics, and machine learning.
52
Entropy. Coding theory: x is discrete with 8 possible states; how many bits are needed to transmit the state of x? With all states equally likely, H[x] = −8 × (1/8) log₂(1/8) = 3 bits.
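A short computation of the entropy of a discrete distribution, in bits. The non-uniform distribution is the standard 8-state example whose entropy works out to 2 bits.

import numpy as np

# Entropy of a discrete distribution, H[x] = -sum_i p_i log2 p_i (in bits).
def entropy_bits(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 log 0 is taken to be 0
    return -np.sum(p * np.log2(p))

print(entropy_bits(np.full(8, 1/8)))                                  # uniform over 8 states: 3.0 bits
print(entropy_bits([1/2, 1/4, 1/8, 1/16, 1/64, 1/64, 1/64, 1/64]))    # non-uniform: 2.0 bits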
53
Entropy
54
Entropy. In how many ways can N identical objects be allocated among M bins? Entropy is maximized when all the p_i are equal, p_i = 1/M.
55
Entropy
56
Differential Entropy. Put bins of width Δ along the real line. Differential entropy is maximized (for fixed variance σ²) when p(x) is Gaussian, in which case H[x] = ½{1 + ln(2πσ²)}.
57
Conditional Entropy
58
The Kullback-Leibler Divergence
59
Mutual Information
60
Pattern Recognition and Machine Learning
Chapter 2: Probability distributions
61
Parametric Distributions
Basic building blocks: need to determine p(x|θ) given a data set {x_1, …, x_N}. Representation: a point estimate θ* or a full distribution p(θ|D)? Recall curve fitting.
62
Binary Variables (1) Coin flipping: heads = 1, tails = 0. Bernoulli distribution: Bern(x|μ) = μ^x (1 − μ)^(1−x).
63
Binary Variables (2) N coin flips: the number of heads m follows the binomial distribution Bin(m|N, μ) = C(N, m) μ^m (1 − μ)^(N−m).
64
Binomial Distribution
65
Parameter Estimation (1)
ML for Bernoulli. Given a data set D = {x_1, …, x_N}, the maximum likelihood estimate is μ_ML = (1/N) Σ_n x_n = m/N, the observed fraction of heads.
66
Parameter Estimation (2)
Example: observing three heads in a row gives μ_ML = 1. Prediction: all future tosses will land heads up. This is overfitting to D.
67
Beta Distribution: a distribution over μ ∈ [0, 1], Beta(μ|a, b) = [Γ(a + b)/(Γ(a)Γ(b))] μ^(a−1) (1 − μ)^(b−1).
68
Bayesian Bernoulli The Beta distribution provides the conjugate prior for the Bernoulli distribution.
69
Beta Distribution
70
Prior ∙ Likelihood = Posterior
71
Properties of the Posterior
As the size of the data set, N, increases, the posterior mean converges to the maximum likelihood estimate and the posterior variance shrinks towards zero.
72
Prediction under the Posterior
What is the probability that the next coin toss will land heads up? Under the Beta posterior, p(x = 1|D) = (m + a)/(m + a + l + b), where m and l are the observed numbers of heads and tails.
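A minimal sketch of Bayesian updating for the Bernoulli parameter with a conjugate Beta prior. The hyperparameters a0, b0 and the observed coin flips are assumed for illustration.

# Beta-Bernoulli conjugate updating and the predictive probability of heads.
a0, b0 = 2.0, 2.0                      # Beta(a0, b0) prior over mu (assumed)
flips = [1, 1, 1, 0, 1, 0, 1]          # 1 = heads, 0 = tails (assumed data)
m = sum(flips)                         # number of heads
l = len(flips) - m                     # number of tails

a_N, b_N = a0 + m, b0 + l              # posterior is Beta(a0 + m, b0 + l)
p_next_heads = a_N / (a_N + b_N)       # predictive p(x = 1 | D) = (m + a0)/(m + a0 + l + b0)
print(a_N, b_N, p_next_heads)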
73
Multinomial Variables
1-of-K coding scheme: each observation is a vector x with a single element equal to 1 and all others 0, e.g. x = (0, 0, 1, 0, 0, 0)ᵀ.
74
ML Parameter estimation
Given a data set D, maximize the likelihood subject to Σ_k μ_k = 1; enforcing the constraint with a Lagrange multiplier, λ, gives μ_k^ML = m_k/N.
75
The Multinomial Distribution
76
The Dirichlet Distribution
Conjugate prior for the multinomial distribution.
77
Bayesian Multinomial (1)
78
Bayesian Multinomial (2)
79
The Gaussian Distribution
80
Central Limit Theorem The distribution of the sum of N i.i.d. random variables becomes increasingly Gaussian as N grows. Example: N uniform [0,1] random variables.
81
Geometry of the Multivariate Gaussian
82
Moments of the Multivariate Gaussian (1)
thanks to anti-symmetry of z
83
Moments of the Multivariate Gaussian (2)
84
Partitioned Gaussian Distributions
85
Partitioned Conditionals and Marginals
86
Partitioned Conditionals and Marginals
87
Bayes’ Theorem for Gaussian Variables
Given p(x) = N(x|μ, Λ⁻¹) and p(y|x) = N(y|Ax + b, L⁻¹), we have p(y) = N(y|Aμ + b, L⁻¹ + AΛ⁻¹Aᵀ) and p(x|y) = N(x|Σ{AᵀL(y − b) + Λμ}, Σ), where Σ = (Λ + AᵀLA)⁻¹.
88
Maximum Likelihood for the Gaussian (1)
Given i.i.d. data x_1, …, x_N, the log likelihood function is ln p(X|μ, Σ) = −(ND/2) ln 2π − (N/2) ln|Σ| − ½ Σ_n (x_n − μ)ᵀΣ⁻¹(x_n − μ). The sufficient statistics are Σ_n x_n and Σ_n x_n x_nᵀ.
89
Maximum Likelihood for the Gaussian (2)
Set the derivative of the log likelihood function to zero and solve to obtain μ_ML = (1/N) Σ_n x_n. Similarly, Σ_ML = (1/N) Σ_n (x_n − μ_ML)(x_n − μ_ML)ᵀ.
90
Maximum Likelihood for the Gaussian (3)
Under the true distribution, E[μ_ML] = μ but E[Σ_ML] = ((N − 1)/N) Σ. Hence define the unbiased estimate Σ̃ = (1/(N − 1)) Σ_n (x_n − μ_ML)(x_n − μ_ML)ᵀ.
91
Sequential Estimation
Contribution of the Nth data point, x_N: μ_ML^(N) = μ_ML^(N−1) + (1/N)(x_N − μ_ML^(N−1)), i.e. old estimate + correction weight × correction given x_N.
92
The Robbins-Monro Algorithm (1)
Consider θ and z governed by p(z, θ) and define the regression function f(θ) = E[z|θ] = ∫ z p(z|θ) dz. Seek θ* such that f(θ*) = 0.
93
The Robbins-Monro Algorithm (2)
Assume we are given samples from p(z, θ), one at a time.
94
The Robbins-Monro Algorithm (3)
Successive estimates of θ* are then given by θ^(N) = θ^(N−1) + a_{N−1} z(θ^(N−1)). Conditions on a_N for convergence: lim a_N = 0, Σ_N a_N = ∞, and Σ_N a_N² < ∞.
95
Robbins-Monro for Maximum Likelihood (1)
Regarding the expected derivative of the log likelihood, E_x[∂ ln p(x|θ)/∂θ], as a regression function, finding its root is equivalent to finding the maximum likelihood solution θ_ML. Thus θ^(N) = θ^(N−1) + a_{N−1} ∂/∂θ^(N−1) ln p(x_N|θ^(N−1)).
96
Robbins-Monro for Maximum Likelihood (2)
Example: estimate the mean of a Gaussian. The distribution of z is Gaussian with mean μ − μ_ML. For the Robbins-Monro update equation, a_N = σ²/N.
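A minimal sketch of Robbins-Monro stochastic approximation for this example. The true mean, variance, and sample size are illustrative; with a_N = σ²/N the update reduces to the sequential ML estimate of the mean.

import numpy as np

# Robbins-Monro estimation of a Gaussian mean from one sample at a time.
rng = np.random.default_rng(1)
true_mu, sigma2 = 3.0, 4.0                         # assumed for illustration
theta = 0.0                                        # initial estimate

for N, x in enumerate(true_mu + np.sqrt(sigma2) * rng.standard_normal(5000), start=1):
    z = (x - theta) / sigma2                       # derivative of log-likelihood for x_N
    a = sigma2 / N                                 # Robbins-Monro step size a_{N-1} = sigma^2 / N
    theta = theta + a * z                          # theta_N = theta_{N-1} + a_{N-1} z(theta_{N-1})

print(theta)                                       # close to true_mu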
97
Bayesian Inference for the Gaussian (1)
Assume σ² is known. Given i.i.d. data, the likelihood function for μ is p(X|μ) = Π_n N(x_n|μ, σ²). This has a Gaussian shape as a function of μ (but it is not a distribution over μ).
98
Bayesian Inference for the Gaussian (2)
Combined with a Gaussian prior over μ, p(μ) = N(μ|μ_0, σ_0²), this gives the posterior p(μ|X) ∝ p(X|μ) p(μ). Completing the square over μ, we see that p(μ|X) = N(μ|μ_N, σ_N²) …
99
Bayesian Inference for the Gaussian (3)
… where μ_N = (σ²/(Nσ_0² + σ²)) μ_0 + (Nσ_0²/(Nσ_0² + σ²)) μ_ML and 1/σ_N² = 1/σ_0² + N/σ². Note: the posterior mean interpolates between the prior mean and μ_ML, and the posterior variance shrinks as N grows.
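A minimal sketch of these conjugate-update formulas. The known noise variance, prior hyperparameters, and synthetic data are assumed for illustration.

import numpy as np

# Posterior over the Gaussian mean with known variance, using a conjugate Gaussian prior.
rng = np.random.default_rng(2)
sigma2 = 1.0                          # known noise variance (assumed)
mu0, sigma0_2 = 0.0, 10.0             # prior N(mu | mu0, sigma0^2) (assumed)
x = 2.0 + rng.standard_normal(20)     # observed data (synthetic)
N, mu_ml = len(x), x.mean()

mu_N = (sigma2 / (N * sigma0_2 + sigma2)) * mu0 + (N * sigma0_2 / (N * sigma0_2 + sigma2)) * mu_ml
sigma_N2 = 1.0 / (1.0 / sigma0_2 + N / sigma2)    # 1/sigma_N^2 = 1/sigma0^2 + N/sigma^2
print(mu_N, sigma_N2)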
100
Bayesian Inference for the Gaussian (4)
Example: for N = 0, 1, 2 and 10.
101
Bayesian Inference for the Gaussian (5)
Sequential Estimation. The posterior obtained after observing N − 1 data points becomes the prior when we observe the Nth data point.
102
Bayesian Inference for the Gaussian (6)
Now assume μ is known. The likelihood function for λ = 1/σ² is p(X|λ) = Π_n N(x_n|μ, λ⁻¹) ∝ λ^(N/2) exp{−(λ/2) Σ_n (x_n − μ)²}. This has a Gamma shape as a function of λ.
103
Bayesian Inference for the Gaussian (7)
The Gamma distribution: Gam(λ|a, b) = (1/Γ(a)) b^a λ^(a−1) exp(−bλ).
104
Bayesian Inference for the Gaussian (8)
Now we combine a Gamma prior, Gam(λ|a_0, b_0), with the likelihood function for λ to obtain a posterior which we recognize as Gam(λ|a_N, b_N), with a_N = a_0 + N/2 and b_N = b_0 + ½ Σ_n (x_n − μ)² = b_0 + (N/2) σ²_ML.
105
Bayesian Inference for the Gaussian (9)
If both μ and λ are unknown, the joint likelihood function is given by p(X|μ, λ). We need a prior with the same functional dependence on μ and λ.
106
Bayesian Inference for the Gaussian (10)
The Gaussian-gamma distribution: quadratic in μ, linear in λ; a Gamma distribution over λ, independent of μ.
107
Bayesian Inference for the Gaussian (11)
The Gaussian-gamma distribution
108
Bayesian Inference for the Gaussian (12)
Multivariate conjugate priors: μ unknown, Λ known: p(μ) Gaussian; Λ unknown, μ known: p(Λ) Wishart; Λ and μ unknown: p(μ, Λ) Gaussian-Wishart.
109
Student’s t-Distribution
St(x|μ, λ, ν) is obtained by integrating out the precision of a Gaussian under a Gamma prior: an infinite mixture of Gaussians.
110
Student’s t-Distribution
111
Student’s t-Distribution
Robustness to outliers: Gaussian vs t-distribution.
112
Student’s t-Distribution
The D-variate case: where . Properties:
113
Periodic variables. Examples: calendar time, direction, … We require p(θ) ≥ 0, ∫₀^{2π} p(θ) dθ = 1, and p(θ + 2π) = p(θ).
114
von Mises Distribution (1)
This requirement is satisfied by the von Mises distribution p(θ|θ_0, m) = (1/(2π I_0(m))) exp{m cos(θ − θ_0)}, where I_0(m) is the 0th-order modified Bessel function of the 1st kind.
115
von Mises Distribution (4)
116
Maximum Likelihood for von Mises
Given a data set θ_1, …, θ_N, the log likelihood function is N ln(1/(2π I_0(m))) + m Σ_n cos(θ_n − θ_0). Maximizing with respect to θ_0 we directly obtain θ_0^ML = atan2(Σ_n sin θ_n, Σ_n cos θ_n). Similarly, maximizing with respect to m gives an equation involving the ratio I_1(m)/I_0(m), which can be solved numerically for m_ML.
117
Mixtures of Gaussians (1)
Old Faithful data set Single Gaussian Mixture of two Gaussians
118
Mixtures of Gaussians (2)
Combine simple models into a complex model: p(x) = Σ_{k=1}^{K} π_k N(x|μ_k, Σ_k), a sum of K components (e.g. K = 3) weighted by mixing coefficients π_k.
119
Mixtures of Gaussians (3)
120
Mixtures of Gaussians (4)
Determining the parameters μ, Σ, and π by maximum (log) likelihood involves the log of a sum, so there is no closed-form maximum. Solution: use standard iterative numerical optimization methods or the expectation maximization algorithm (Chapter 9).
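A minimal sketch evaluating the log likelihood of a 1-D Gaussian mixture, showing the "log of a sum" structure; the data and mixture parameters are arbitrary illustrative values.

import numpy as np

# Log likelihood of a 1-D Gaussian mixture (no closed-form maximum).
def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.array([1.2, 1.9, 2.1, 5.8, 6.2, 6.5])       # data (assumed)
pi = np.array([0.5, 0.5])                          # mixing coefficients (sum to 1)
mu = np.array([2.0, 6.0])                          # component means
sigma = np.array([0.5, 0.5])                       # component standard deviations

# p(x_n) = sum_k pi_k N(x_n | mu_k, sigma_k^2)
p = np.sum(pi * gauss_pdf(x[:, None], mu, sigma), axis=1)
log_lik = np.sum(np.log(p))
print(log_lik)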
121
The Exponential Family (1)
p(x|η) = h(x) g(η) exp{ηᵀu(x)}, where η is the natural parameter and g(η) can be interpreted as a normalization coefficient, since g(η) ∫ h(x) exp{ηᵀu(x)} dx = 1.
122
The Exponential Family (2.1)
The Bernoulli Distribution. Comparing with the general form we see that η = ln(μ/(1 − μ)), and so μ = σ(η) = 1/(1 + exp(−η)), the logistic sigmoid.
123
The Exponential Family (2.2)
The Bernoulli distribution can hence be written as p(x|η) = σ(−η) exp(ηx), where u(x) = x, h(x) = 1, and g(η) = σ(−η).
124
The Exponential Family (3.1)
The Multinomial Distribution, written in exponential-family form with u(x) = x and η_k = ln μ_k. NOTE: the η_k parameters are not independent, since the corresponding μ_k must satisfy Σ_k μ_k = 1.
125
The Exponential Family (3.2)
Let μ_M = 1 − Σ_{k=1}^{M−1} μ_k. This leads to η_k = ln(μ_k/(1 − Σ_j μ_j)) and μ_k = exp(η_k)/(1 + Σ_j exp(η_j)), the softmax function. Here the η_k parameters are independent.
126
The Exponential Family (3.3)
The Multinomial distribution can then be written as p(x|η) = (1 + Σ_k exp(η_k))⁻¹ exp(ηᵀx), where u(x) = x, h(x) = 1, and g(η) = (1 + Σ_k exp(η_k))⁻¹.
127
The Exponential Family (4)
The Gaussian Distribution in exponential-family form, where u(x) = (x, x²)ᵀ and η = (μ/σ², −1/(2σ²))ᵀ.
128
ML for the Exponential Family (1)
From the definition of g(η) we get −∇ ln g(η) = E[u(x)]. Thus the expected sufficient statistics are determined by the normalization coefficient.
129
ML for the Exponential Family (2)
Given a data set x_1, …, x_N, the likelihood function is Π_n h(x_n) g(η)^N exp{ηᵀ Σ_n u(x_n)}. Thus we have −∇ ln g(η_ML) = (1/N) Σ_n u(x_n); the quantity Σ_n u(x_n) is the sufficient statistic.
130
Conjugate priors. For any member of the exponential family there exists a conjugate prior p(η|χ, ν) ∝ g(η)^ν exp{ν ηᵀχ}. Combining with the likelihood function gives a posterior of the same form; the prior corresponds to ν pseudo-observations with value χ.
131
Noninformative Priors (1)
With little or no information available a priori, we might choose a noninformative prior. λ discrete, K-nomial: p(λ) = 1/K. λ ∈ [a, b] real and bounded: p(λ) = 1/(b − a). λ real and unbounded: a constant prior is improper! A constant prior may also no longer be constant after a change of variable; consider p(λ) constant and λ = η².
132
Noninformative Priors (2)
Translation invariant priors. Consider a density of the form p(x|μ) = f(x − μ). For a corresponding prior over μ, we require ∫_A^B p(μ) dμ = ∫_{A−c}^{B−c} p(μ) dμ for any A and B. Thus p(μ) = p(μ − c) and p(μ) must be constant.
133
Noninformative Priors (3)
Example: the mean of a Gaussian, μ; the conjugate prior is also a Gaussian, N(μ|μ_0, σ_0²). As σ_0² → ∞, this becomes constant over μ.
134
Noninformative Priors (4)
Scale invariant priors. Consider a density of the form p(x|σ) = (1/σ) f(x/σ) and make the change of variable x̃ = cx, σ̃ = cσ. For a corresponding prior over σ, we require ∫_A^B p(σ) dσ = ∫_{A/c}^{B/c} p(σ) dσ for any A and B. Thus p(σ) ∝ 1/σ, and so this prior is improper too. Note that this corresponds to p(ln σ) being constant.
135
Noninformative Priors (5)
Example: for the variance of a Gaussian, σ², if λ = 1/σ² and p(σ) ∝ 1/σ, then p(λ) ∝ 1/λ. We know that the conjugate distribution for λ is the Gamma distribution, Gam(λ|a_0, b_0); a noninformative prior is obtained when a_0 = 0 and b_0 = 0.
136
Nonparametric Methods (1)
Parametric distribution models are restricted to specific forms, which may not always be suitable; for example, consider modelling a multimodal distribution with a single, unimodal model. Nonparametric approaches make few assumptions about the overall shape of the distribution being modelled.
137
Nonparametric Methods (2)
Histogram methods partition the data space into distinct bins of width Δ_i and count the number of observations, n_i, in each bin, giving the density estimate p_i = n_i/(N Δ_i). Often the same width is used for all bins, Δ_i = Δ; Δ acts as a smoothing parameter. In a D-dimensional space, using M bins in each dimension requires M^D bins!
138
Nonparametric Methods (3)
Assume observations drawn from a density p(x) and consider a small region R containing x such that P = ∫_R p(x) dx. The probability that K out of N observations lie inside R is Bin(K|N, P), and if N is large, K ≈ NP. If the volume of R, V, is sufficiently small, p(x) is approximately constant over R and P ≈ p(x)V. Thus p(x) ≈ K/(NV). (V must be small, yet K > 0, which requires N to be large.)
139
Nonparametric Methods (4)
Kernel Density Estimation: fix V, estimate K from the data. Let R be a hypercube of side h centred on x and define the kernel function (Parzen window) k(u), equal to 1 inside the unit cube and 0 otherwise. It follows that K = Σ_n k((x − x_n)/h), and hence p(x) = (1/N) Σ_n (1/h^D) k((x − x_n)/h).
140
Nonparametric Methods (5)
To avoid discontinuities in p(x), use a smooth kernel, e.g. a Gaussian. Any kernel such that k(u) ≥ 0 and ∫ k(u) du = 1 will work. h acts as a smoother.
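A minimal 1-D Gaussian kernel density estimate. The bimodal sample and the bandwidth h are illustrative assumptions.

import numpy as np

# Kernel density estimation with a Gaussian kernel (Parzen window), 1-D case:
# p(x) = (1/N) sum_n N(x | x_n, h^2).
def kde_gaussian(x_query, data, h):
    diff = (x_query[:, None] - data[None, :]) / h
    k = np.exp(-0.5 * diff ** 2) / np.sqrt(2 * np.pi)     # Gaussian kernel
    return k.sum(axis=1) / (len(data) * h)

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(1, 1.0, 100)])  # bimodal sample (assumed)
grid = np.linspace(-5, 5, 201)
p_hat = kde_gaussian(grid, data, h=0.3)                   # h acts as the smoothing parameter
print(np.trapz(p_hat, grid))                              # integrates to approximately 1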
141
Nonparametric Methods (6)
Nearest Neighbour Density Estimation: fix K, estimate V from the data. Consider a hypersphere centred on x and let it grow to a volume, V*, that includes K of the given N data points. Then p(x) ≈ K/(N V*). K acts as a smoother.
142
Nonparametric Methods (7)
Nonparametric models (other than histograms) require storing and computing with the entire data set. Parametric models, once fitted, are much more efficient in terms of storage and computation.
143
K-Nearest-Neighbours for Classification (1)
Given a data set with N_k data points from class C_k and Σ_k N_k = N, we have p(x|C_k) = K_k/(N_k V) and correspondingly p(x) = K/(NV). Since p(C_k) = N_k/N, Bayes’ theorem gives p(C_k|x) = p(x|C_k) p(C_k)/p(x) = K_k/K.
144
K-Nearest-Neighbours for Classification (2)
145
K-Nearest-Neighbours for Classification (3)
K acts as a smoother. For N → ∞, the error rate of the 1-nearest-neighbour classifier is never more than twice the optimal error (obtained from the true conditional class distributions).
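A minimal sketch of K-nearest-neighbour classification by majority vote, i.e. assigning x to the class with the largest K_k among its K nearest training points. The two Gaussian clusters used as training data are assumed for illustration.

import numpy as np

# K-nearest-neighbour classification: p(C_k | x) is approximated by K_k / K.
def knn_predict(X_train, y_train, X_query, K=3):
    preds = []
    for x in X_query:
        d = np.linalg.norm(X_train - x, axis=1)            # Euclidean distances
        nearest = y_train[np.argsort(d)[:K]]                # labels of the K nearest points
        preds.append(np.bincount(nearest).argmax())         # majority vote
    return np.array(preds)

rng = np.random.default_rng(4)
X0 = rng.normal([0, 0], 1.0, size=(50, 2))                  # class 0 (illustrative data)
X1 = rng.normal([3, 3], 1.0, size=(50, 2))                  # class 1
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 50 + [1] * 50)
print(knn_predict(X_train, y_train, np.array([[0.5, 0.5], [2.5, 3.0]]), K=5))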
146
Pattern Recognition and Machine Learning
Chapter 3: Linear models for regression
147
Linear Basis Function Models (1)
Example: Polynomial Curve Fitting
148
Linear Basis Function Models (2)
Generally, y(x, w) = Σ_{j=0}^{M−1} w_j φ_j(x) = wᵀφ(x), where the φ_j(x) are known as basis functions. Typically φ_0(x) = 1, so that w_0 acts as a bias. In the simplest case, we use linear basis functions: φ_d(x) = x_d.
149
Linear Basis Function Models (3)
Polynomial basis functions: φ_j(x) = x^j. These are global; a small change in x affects all basis functions.
150
Linear Basis Function Models (4)
Gaussian basis functions: φ_j(x) = exp{−(x − μ_j)²/(2s²)}. These are local; a small change in x only affects nearby basis functions. μ_j and s control location and scale (width).
151
Linear Basis Function Models (5)
Sigmoidal basis functions: φ_j(x) = σ((x − μ_j)/s), where σ(a) = 1/(1 + exp(−a)). These too are local; a small change in x only affects nearby basis functions. μ_j and s control location and scale (slope).
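A minimal sketch of a linear basis function model with Gaussian basis functions and its maximum likelihood fit via the Moore-Penrose pseudo-inverse (anticipating the least-squares slides below). The basis centres, width s, and sinusoidal data are illustrative choices.

import numpy as np

# Design matrix with a constant bias basis phi_0(x) = 1 plus Gaussian bases
# phi_j(x) = exp(-(x - mu_j)^2 / (2 s^2)).
def design_matrix(x, centres, s):
    phi = np.exp(-0.5 * ((x[:, None] - centres[None, :]) / s) ** 2)
    return np.hstack([np.ones((len(x), 1)), phi])           # prepend bias column

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 25)
t = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(len(x))   # synthetic targets
centres = np.linspace(0, 1, 9)                               # 9 Gaussian basis functions (assumed)
Phi = design_matrix(x, centres, s=0.1)

w_ml = np.linalg.pinv(Phi) @ t                               # Moore-Penrose pseudo-inverse solution
print(w_ml)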
152
Maximum Likelihood and Least Squares (1)
Assume observations from a deterministic function with added Gaussian noise: t = y(x, w) + ε, which is the same as saying p(t|x, w, β) = N(t|y(x, w), β⁻¹). Given observed inputs X = {x_1, …, x_N} and targets t = (t_1, …, t_N)ᵀ, we obtain the likelihood function p(t|X, w, β) = Π_n N(t_n|wᵀφ(x_n), β⁻¹).
153
Maximum Likelihood and Least Squares (2)
Taking the logarithm, we get ln p(t|w, β) = (N/2) ln β − (N/2) ln 2π − β E_D(w), where E_D(w) = ½ Σ_n {t_n − wᵀφ(x_n)}² is the sum-of-squares error.
154
Maximum Likelihood and Least Squares (3)
Computing the gradient and setting it to zero yields Φᵀ(t − Φw) = 0. Solving for w, we get w_ML = (ΦᵀΦ)⁻¹Φᵀt = Φ†t, where Φ is the N × M design matrix with elements Φ_nj = φ_j(x_n) and Φ† = (ΦᵀΦ)⁻¹Φᵀ is the Moore-Penrose pseudo-inverse.
155
Geometry of Least Squares
Consider the N-dimensional space in which t lives and the M-dimensional subspace S spanned by the columns φ_j of Φ. w_ML minimizes the distance between t and its orthogonal projection onto S, i.e. y.
156
Sequential Learning. Data items considered one at a time (a.k.a. online learning); use stochastic (sequential) gradient descent: w^(τ+1) = w^(τ) + η(t_n − w^(τ)ᵀφ(x_n)) φ(x_n). This is known as the least-mean-squares (LMS) algorithm. Issue: how to choose η?
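A minimal sketch of the LMS update using the Gaussian basis functions from the sketch above; the learning rate η and the synthetic data are illustrative assumptions.

import numpy as np

# Least-mean-squares: stochastic gradient descent on the sum-of-squares error,
# processing one (x_n, t_n) pair at a time.
rng = np.random.default_rng(6)
x = rng.uniform(0, 1, 200)
t = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(len(x))
centres, s, eta = np.linspace(0, 1, 9), 0.1, 0.1             # assumed basis and learning rate

def phi(xn):                                                  # feature vector for one input
    return np.concatenate([[1.0], np.exp(-0.5 * ((xn - centres) / s) ** 2)])

w = np.zeros(len(centres) + 1)
for xn, tn in zip(x, t):
    w += eta * (tn - w @ phi(xn)) * phi(xn)                   # w <- w + eta (t_n - w^T phi) phi
print(w)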
157
Regularized Least Squares (1)
Consider the error function E_D(w) + λE_W(w), a data term plus a regularization term; λ is called the regularization coefficient. With the sum-of-squares error function and a quadratic regularizer we get ½ Σ_n {t_n − wᵀφ(x_n)}² + (λ/2) wᵀw, which is minimized by w = (λI + ΦᵀΦ)⁻¹Φᵀt.
158
Regularized Least Squares (2)
With a more general regularizer, we have ½ Σ_n {t_n − wᵀφ(x_n)}² + (λ/2) Σ_j |w_j|^q; q = 1 gives the lasso, q = 2 the quadratic regularizer.
159
Regularized Least Squares (3)
Lasso tends to generate sparser solutions than a quadratic regularizer.
160
Multiple Outputs (1) Analogously to the single-output case we have y(x, W) = Wᵀφ(x) and p(t|x, W, β) = N(t|Wᵀφ(x), β⁻¹I). Given observed inputs X and targets T, we obtain the log likelihood function.
161
Multiple Outputs (2) Maximizing with respect to W, we obtain W_ML = (ΦᵀΦ)⁻¹ΦᵀT. If we consider a single target variable, t_k, we see that w_k = (ΦᵀΦ)⁻¹Φᵀt_k = Φ†t_k, which is identical to the single-output case.
162
The Bias-Variance Decomposition (1)
Recall the expected squared loss, E[L] = ∫ {y(x) − h(x)}² p(x) dx + ∫∫ {h(x) − t}² p(x, t) dx dt, where h(x) = E[t|x]. The second term of E[L] corresponds to the noise inherent in the random variable t. What about the first term?
163
The Bias-Variance Decomposition (2)
Suppose we were given multiple data sets, each of size N. Any particular data set, D, will give a particular function y(x;D). We then have
164
The Bias-Variance Decomposition (3)
Taking the expectation over D yields
165
The Bias-Variance Decomposition (4)
Thus we can write expected loss = (bias)² + variance + noise, where (bias)² = ∫ {E_D[y(x; D)] − h(x)}² p(x) dx, variance = ∫ E_D[{y(x; D) − E_D[y(x; D)]}²] p(x) dx, and noise = ∫∫ {h(x) − t}² p(x, t) dx dt.
166
The Bias-Variance Decomposition (5)
Example: 25 data sets from the sinusoidal, varying the degree of regularization, λ.
167
The Bias-Variance Decomposition (6)
Example: 25 data sets from the sinusoidal, varying the degree of regularization, λ.
168
The Bias-Variance Decomposition (7)
Example: 25 data sets from the sinusoidal, varying the degree of regularization, λ.
169
The Bias-Variance Trade-off
From these plots, we note that an over-regularized model (large λ) will have a high bias, while an under-regularized model (small λ) will have a high variance.
170
Bayesian Linear Regression (1)
Define a conjugate prior over w, p(w) = N(w|m_0, S_0). Combining this with the likelihood function and using results for marginal and conditional Gaussian distributions gives the posterior p(w|t) = N(w|m_N, S_N), where m_N = S_N(S_0⁻¹m_0 + βΦᵀt) and S_N⁻¹ = S_0⁻¹ + βΦᵀΦ.
171
Bayesian Linear Regression (2)
A common choice for the prior is p(w|α) = N(w|0, α⁻¹I), for which m_N = βS_NΦᵀt and S_N⁻¹ = αI + βΦᵀΦ. Next we consider an example …
172
Bayesian Linear Regression (3)
0 data points observed Prior Data Space
173
Bayesian Linear Regression (4)
1 data point observed Likelihood Posterior Data Space
174
Bayesian Linear Regression (5)
2 data points observed Likelihood Posterior Data Space
175
Bayesian Linear Regression (6)
20 data points observed Likelihood Posterior Data Space
176
Predictive Distribution (1)
Predict t for new values of x by integrating over w: p(t|t, α, β) = ∫ p(t|w, β) p(w|t, α, β) dw = N(t|m_Nᵀφ(x), σ_N²(x)), where σ_N²(x) = 1/β + φ(x)ᵀS_Nφ(x).
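A minimal sketch of the posterior and predictive distribution for Bayesian linear regression with the zero-mean isotropic prior. The values of α, β, the 9 Gaussian basis functions, and the sinusoidal data are illustrative assumptions.

import numpy as np

# Bayesian linear regression: posterior N(w | m_N, S_N) with S_N^{-1} = alpha I + beta Phi^T Phi,
# m_N = beta S_N Phi^T t, and predictive variance sigma_N^2(x) = 1/beta + phi(x)^T S_N phi(x).
rng = np.random.default_rng(7)
alpha, beta = 2.0, 25.0                                # assumed hyperparameters
x = rng.uniform(0, 1, 10)
t = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(len(x))
centres, s = np.linspace(0, 1, 9), 0.1

def phi(xq):
    xq = np.atleast_1d(xq)
    return np.hstack([np.ones((len(xq), 1)),
                      np.exp(-0.5 * ((xq[:, None] - centres[None, :]) / s) ** 2)])

Phi = phi(x)
S_N = np.linalg.inv(alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi)
m_N = beta * S_N @ Phi.T @ t

x_new = 0.5
phi_new = phi(x_new)[0]
mean = m_N @ phi_new                                   # predictive mean
var = 1.0 / beta + phi_new @ S_N @ phi_new             # predictive variance
print(mean, var)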
177
Predictive Distribution (2)
Example: Sinusoidal data, 9 Gaussian basis functions, 1 data point
178
Predictive Distribution (3)
Example: Sinusoidal data, 9 Gaussian basis functions, 2 data points
179
Predictive Distribution (4)
Example: Sinusoidal data, 9 Gaussian basis functions, 4 data points
180
Predictive Distribution (5)
Example: Sinusoidal data, 9 Gaussian basis functions, 25 data points
181
Equivalent Kernel (1) The predictive mean can be written y(x, m_N) = Σ_n k(x, x_n) t_n with k(x, x′) = βφ(x)ᵀS_Nφ(x′). This is a weighted sum of the training-data target values, t_n; k is called the equivalent kernel or smoother matrix.
182
Equivalent Kernel (2) Weight of tn depends on distance between x and xn; nearby xn carry more weight.
183
Equivalent Kernel (3) Non-local basis functions have local equivalent kernels: Polynomial Sigmoidal
184
Equivalent Kernel (4) The kernel as a covariance function: consider cov[y(x), y(x′)] = φ(x)ᵀS_Nφ(x′) = (1/β) k(x, x′). We can avoid the use of basis functions and define the kernel function directly, leading to Gaussian Processes (Chapter 6).
185
Equivalent Kernel (5) Σ_n k(x, x_n) = 1 for all values of x; however, the equivalent kernel may be negative for some values of x. Like all kernel functions, the equivalent kernel can be expressed as an inner product: k(x, z) = ψ(x)ᵀψ(z), where ψ(x) = β^(1/2) S_N^(1/2) φ(x).
186
Bayesian Model Comparison (1)
How do we choose the ‘right’ model? Assume we want to compare models M_i, i = 1, …, L, using data D; this requires computing the posterior p(M_i|D) ∝ p(M_i) p(D|M_i), i.e. prior times model evidence (marginal likelihood). The Bayes factor is the ratio of the evidence for two models, p(D|M_i)/p(D|M_j).
187
Bayesian Model Comparison (2)
Having computed p(M_i|D), we can compute the predictive (mixture) distribution p(t|x, D) = Σ_i p(t|x, M_i, D) p(M_i|D). A simpler approximation, known as model selection, is to use the model with the highest evidence.
188
Bayesian Model Comparison (3)
For a model with parameters w, we get the model evidence by marginalizing over w: p(D|M_i) = ∫ p(D|w, M_i) p(w|M_i) dw. Note that the evidence is the normalizing constant of the posterior over w.
189
Bayesian Model Comparison (4)
For a given model with a single parameter, w, consider the approximation p(D) ≈ p(D|w_MAP) (Δw_posterior/Δw_prior), where the posterior is assumed to be sharply peaked.
190
Bayesian Model Comparison (5)
Taking logarithms, we obtain ln p(D) ≈ ln p(D|w_MAP) + ln(Δw_posterior/Δw_prior). With M parameters, all assumed to have the same ratio Δw_posterior/Δw_prior, we get ln p(D) ≈ ln p(D|w_MAP) + M ln(Δw_posterior/Δw_prior); the second term is negative and linear in M.
191
Bayesian Model Comparison (6)
Matching data and model complexity
192
The Evidence Approximation (1)
The fully Bayesian predictive distribution is given by p(t|t) = ∫∫∫ p(t|w, β) p(w|t, α, β) p(α, β|t) dw dα dβ, but this integral is intractable. Approximate it with p(t|t) ≈ p(t|t, α̂, β̂), where (α̂, β̂) is the mode of p(α, β|t), which is assumed to be sharply peaked; a.k.a. empirical Bayes, type II or generalized maximum likelihood, or the evidence approximation.
193
The Evidence Approximation (2)
From Bayes’ theorem we have p(α, β|t) ∝ p(t|α, β) p(α, β), and if we assume p(α, β) to be flat we see that maximizing the posterior amounts to maximizing the evidence p(t|α, β). General results for Gaussian integrals give a closed form for ln p(t|α, β).
194
The Evidence Approximation (3)
Example: sinusoidal data, Mth-degree polynomial.
195
Maximizing the Evidence Function (1)
To maximise the evidence w.r.t. α and β, we define the eigenvector equation (βΦᵀΦ) u_i = λ_i u_i. Thus A = αI + βΦᵀΦ has eigenvalues λ_i + α.
196
Maximizing the Evidence Function (2)
We can now differentiate w.r.t. α and β and set the results to zero, to get α = γ/(m_Nᵀm_N) and 1/β = (1/(N − γ)) Σ_n {t_n − m_Nᵀφ(x_n)}², where γ = Σ_i λ_i/(α + λ_i). N.B. γ depends on both α and β, so these equations are iterated to convergence.
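A minimal sketch of this iterative re-estimation of α and β. The polynomial design matrix, the synthetic sinusoidal data, and the iteration count are illustrative assumptions.

import numpy as np

# Evidence-approximation re-estimation:
# gamma = sum_i lambda_i / (alpha + lambda_i), alpha = gamma / (m_N^T m_N),
# 1/beta = (1/(N - gamma)) sum_n (t_n - m_N^T phi(x_n))^2.
rng = np.random.default_rng(8)
x = np.linspace(0, 1, 30)
t = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(len(x))
Phi = np.vander(x, 6, increasing=True)                        # assumed polynomial basis
N, M = Phi.shape

alpha, beta = 1.0, 1.0
for _ in range(50):
    lam = np.linalg.eigvalsh(beta * Phi.T @ Phi)              # eigenvalues lambda_i (depend on beta)
    S_N = np.linalg.inv(alpha * np.eye(M) + beta * Phi.T @ Phi)
    m_N = beta * S_N @ Phi.T @ t
    gamma = np.sum(lam / (alpha + lam))                       # effective number of parameters
    alpha = gamma / (m_N @ m_N)
    beta = (N - gamma) / np.sum((t - Phi @ m_N) ** 2)
print(alpha, beta, gamma)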
197
Effective Number of Parameters (1)
w_1 is not well determined by the likelihood, whereas w_2 is well determined; γ measures the number of well-determined parameters, comparing the likelihood against the prior.
198
Effective Number of Parameters (2)
Example: sinusoidal data, 9 Gaussian basis functions, β = 11.1.
199
Effective Number of Parameters (3)
Example: sinusoidal data, 9 Gaussian basis functions, β = 11.1. Test set error.
200
Effective Number of Parameters (4)
Example: sinusoidal data, 9 Gaussian basis functions, β = 11.1.
201
Effective Number of Parameters (5)
In the limit N ≫ M, γ = M, and we can consider using the easy-to-compute approximations α = M/(2E_W(m_N)) and β = N/(2E_D(m_N)).
202
Limitations of Fixed Basis Functions
M basis functions along each dimension of a D-dimensional input space require M^D basis functions: the curse of dimensionality. In later chapters, we shall see how we can get away with fewer basis functions by choosing them using the training data.
203
Pattern Recognition and Machine Learning
Chapter 8: graphical models
204
Bayesian Networks Directed Acyclic Graph (DAG)
205
Bayesian Networks. General factorization: p(x) = Π_k p(x_k | pa_k), where pa_k denotes the parents of x_k.
206
Bayesian Curve Fitting (1)
Polynomial
207
Bayesian Curve Fitting (2)
Plate
208
Bayesian Curve Fitting (3)
Input variables and explicit hyperparameters
209
Bayesian Curve Fitting —Learning
Condition on data
210
Bayesian Curve Fitting —Prediction
Predictive distribution: where
211
Generative Models Causal process for generating images
212
Discrete Variables (1) General joint distribution: K² − 1 parameters. Independent joint distribution: 2(K − 1) parameters.
213
Discrete Variables (2) General joint distribution over M variables: K^M − 1 parameters. M-node Markov chain: K − 1 + (M − 1)K(K − 1) parameters.
214
Discrete Variables: Bayesian Parameters (1)
215
Discrete Variables: Bayesian Parameters (2)
Shared prior
216
Parameterized Conditional Distributions
If x_1, …, x_M are discrete, K-state variables, then in general the conditional distribution has O(K^M) parameters. A parameterized form (e.g. a logistic sigmoid of a linear combination of the parents) requires only M + 1 parameters.
217
Linear-Gaussian Models
Directed graph with vector-valued Gaussian nodes: each node is Gaussian, and its mean is a linear function of its parents.
218
Conditional Independence
a is independent of b given c: p(a|b, c) = p(a|c). Equivalently, p(a, b|c) = p(a|c) p(b|c). Notation: a ⊥⊥ b | c.
219
Conditional Independence: Example 1
220
Conditional Independence: Example 1
221
Conditional Independence: Example 2
222
Conditional Independence: Example 2
223
Conditional Independence: Example 3
Note: this is the opposite of Example 1, with c unobserved.
224
Conditional Independence: Example 3
Note: this is the opposite of Example 1, with c observed.
225
“Am I out of fuel?” B = Battery (0 = flat, 1 = fully charged), F = Fuel Tank (0 = empty, 1 = full), G = Fuel Gauge Reading (0 = empty, 1 = full).
226
“Am I out of fuel?” Probability of an empty tank increased by observing G = 0.
227
“Am I out of fuel?” Probability of an empty tank reduced by observing B = 0. This is referred to as “explaining away”.
228
D-separation. A, B, and C are non-intersecting subsets of nodes in a directed graph. A path from A to B is blocked if it contains a node such that either (a) the arrows on the path meet head-to-tail or tail-to-tail at the node and the node is in the set C, or (b) the arrows meet head-to-head at the node and neither the node nor any of its descendants is in the set C. If all paths from A to B are blocked, A is said to be d-separated from B by C. If A is d-separated from B by C, the joint distribution over all variables in the graph satisfies A ⊥⊥ B | C.
229
D-separation: Example
230
D-separation: I.I.D. Data
231
Directed Graphs as Distribution Filters
232
The Markov Blanket Factors independent of xi cancel between numerator and denominator.
233
Cliques and Maximal Cliques
234
Joint Distribution: p(x) = (1/Z) Π_C ψ_C(x_C), where ψ_C(x_C) is the potential over clique C and Z = Σ_x Π_C ψ_C(x_C) is the normalization coefficient; note: with M K-state variables there are K^M terms in Z. Energies and the Boltzmann distribution: ψ_C(x_C) = exp{−E(x_C)}.
235
Illustration: Image De-Noising (1)
Original Image Noisy Image
236
Illustration: Image De-Noising (2)
237
Illustration: Image De-Noising (3)
Noisy Image Restored Image (ICM)
238
Illustration: Image De-Noising (4)
Restored Image (ICM) Restored Image (Graph cuts)
239
Converting Directed to Undirected Graphs (1)
240
Converting Directed to Undirected Graphs (2)
Additional links
241
Directed vs. Undirected Graphs (1)
242
Directed vs. Undirected Graphs (2)
243
Inference in Graphical Models
244
Inference on a Chain
245
Inference on a Chain
246
Inference on a Chain
247
Inference on a Chain
248
Inference on a Chain. To compute local marginals: compute and store all forward messages μ_α(x_n); compute and store all backward messages μ_β(x_n); compute Z at any node x_m; then compute p(x_n) = (1/Z) μ_α(x_n) μ_β(x_n) for all variables required, as in the sketch below.
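A minimal sketch of these forward and backward messages on a chain of discrete variables with pairwise potentials only. The chain length, number of states, and random potentials are illustrative assumptions.

import numpy as np

# Marginals on a chain with pairwise potentials psi_n(x_{n-1}, x_n), via message passing.
K, N = 3, 5                                          # states per variable, chain length (assumed)
rng = np.random.default_rng(9)
psi = rng.uniform(0.5, 2.0, size=(N - 1, K, K))      # psi[n][i, j] = potential on (x_{n+1}=i, x_{n+2}=j)

alpha = [np.ones(K)]                                 # forward messages, mu_alpha(x_1) = 1
for n in range(N - 1):
    alpha.append(psi[n].T @ alpha[-1])               # mu_alpha(x_{n+1}) = sum_{x_n} psi mu_alpha(x_n)

beta = [np.ones(K)]                                  # backward messages, mu_beta(x_N) = 1
for n in reversed(range(N - 1)):
    beta.insert(0, psi[n] @ beta[0])                 # mu_beta(x_n) = sum_{x_{n+1}} psi mu_beta(x_{n+1})

Z = alpha[-1] @ beta[-1]                             # same value at any node
marginals = [a * b / Z for a, b in zip(alpha, beta)] # p(x_n) = mu_alpha(x_n) mu_beta(x_n) / Z
print(np.array(marginals))                           # each row sums to 1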
249
Trees Undirected Tree Directed Tree Polytree
250
Factor Graphs
251
Factor Graphs from Directed Graphs
252
Factor Graphs from Undirected Graphs
253
The Sum-Product Algorithm (1)
Objective: to obtain an efficient, exact inference algorithm for finding marginals, and, in situations where several marginals are required, to allow computations to be shared efficiently. Key idea: the distributive law, ab + ac = a(b + c).
254
The Sum-Product Algorithm (2)
255
The Sum-Product Algorithm (3)
256
The Sum-Product Algorithm (4)
257
The Sum-Product Algorithm (5)
258
The Sum-Product Algorithm (6)
259
The Sum-Product Algorithm (7)
Initialization
260
The Sum-Product Algorithm (8)
To compute local marginals: pick an arbitrary node as root; compute and propagate messages from the leaf nodes to the root, storing received messages at every node; compute and propagate messages from the root to the leaf nodes, storing received messages at every node; compute the product of received messages at each node for which the marginal is required, and normalize if necessary.
261
Sum-Product: Example (1)
262
Sum-Product: Example (2)
263
Sum-Product: Example (3)
264
Sum-Product: Example (4)
265
The Max-Sum Algorithm (1)
Objective: an efficient algorithm for finding the value x^max that maximises p(x), and the value of p(x^max). In general, maximizing each marginal separately does not give the joint maximum: maximum marginals ≠ joint maximum.
266
The Max-Sum Algorithm (2)
Maximizing over a chain (max-product)
267
The Max-Sum Algorithm (3)
Generalizes to tree-structured factor graphs by maximizing as close to the leaf nodes as possible.
268
The Max-Sum Algorithm (4)
Max-Product → Max-Sum: for numerical reasons, work with ln p(x), turning products into sums. Again, use the distributive law, max(a + b, a + c) = a + max(b, c).
269
The Max-Sum Algorithm (5)
Initialization (leaf nodes) Recursion
270
The Max-Sum Algorithm (6)
Termination (root node) Back-track, for all nodes i with l factor nodes to the root (l=0)
271
The Max-Sum Algorithm (7)
Example: Markov chain
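A minimal sketch of max-sum on a Markov chain (log-domain Viterbi): propagate max-messages forward, store back-pointers, then back-track to recover the jointly most probable configuration. The chain length, number of states, and random potentials are illustrative assumptions.

import numpy as np

# Max-sum on a chain with pairwise potentials only (unnormalized joint).
K, N = 3, 6                                         # states per variable, chain length (assumed)
rng = np.random.default_rng(10)
log_psi = np.log(rng.uniform(0.5, 2.0, size=(N - 1, K, K)))   # log pairwise potentials

msg = np.zeros(K)                                   # message into x_1
back = []                                           # back-pointers phi(x_{n+1})
for n in range(N - 1):
    scores = msg[:, None] + log_psi[n]              # score of each (x_n, x_{n+1}) pair
    back.append(np.argmax(scores, axis=0))          # best x_n for each value of x_{n+1}
    msg = np.max(scores, axis=0)                    # message into x_{n+1}

x_max = [int(np.argmax(msg))]                       # termination at the last node
for bp in reversed(back):                           # back-tracking
    x_max.insert(0, int(bp[x_max[0]]))
print(x_max, np.max(msg))                           # joint maximizer and max of the log unnormalized joint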
272
The Junction Tree Algorithm
Exact inference on general graphs. Works by turning the initial graph into a junction tree and then running a sum-product-like algorithm. Intractable on graphs with large cliques.
273
Loopy Belief Propagation
Sum-Product on general graphs. Initial unit messages are passed across all links, after which messages are passed around until convergence (not guaranteed!). Approximate but tractable for large graphs. Sometimes it works well, sometimes not at all.