Slide 1: Probability Theory and Parameter Estimation I
Slide 2: Least Squares and Gauss
How do we solve the least squares problem? Carl Friedrich Gauss solved it in 1795 (age 18). Why is least squares the right problem to solve? Gauss answered that in 1822 (age 46): the least squares solution is optimal in the sense that it is the best linear unbiased estimator of the polynomial coefficients (the Gauss–Markov theorem). Assumptions: the errors have zero mean and equal variances.
Slide 3: Probability Theory (Apples and Oranges)
Slide 4: Probability Theory (Marginal, Conditional, and Joint Probability)
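The slide's equations were evidently images that did not survive extraction; the standard count-based definitions (with $n_{ij}$ the number of trials having $X = x_i$ and $Y = y_j$ out of $N$, and $c_i$ the number having $X = x_i$) are:

```latex
% Joint, marginal, and conditional probability from trial counts
\begin{align}
  p(X = x_i, Y = y_j) &= \frac{n_{ij}}{N} && \text{joint} \\
  p(X = x_i)          &= \frac{c_i}{N} = \sum_j p(X = x_i, Y = y_j) && \text{marginal} \\
  p(Y = y_j \mid X = x_i) &= \frac{n_{ij}}{c_i} && \text{conditional}
\end{align}
```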
Slide 5: Probability Theory (Sum Rule and Product Rule)
Slide 6: The Rules of Probability (Sum Rule and Product Rule)
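Slides 5 and 6 both state the two rules; since the rendered formulas are missing, here are their standard forms:

```latex
\begin{align}
  \text{sum rule:}     \quad & p(X) = \sum_Y p(X, Y) \\
  \text{product rule:} \quad & p(X, Y) = p(Y \mid X)\, p(X)
\end{align}
```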
Slide 7: Bayes’ Theorem
posterior ∝ likelihood × prior
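Written out in full, with the evidence (denominator) obtained from the sum and product rules:

```latex
\begin{align}
  p(Y \mid X) &= \frac{p(X \mid Y)\, p(Y)}{p(X)},
  & p(X) &= \sum_Y p(X \mid Y)\, p(Y)
\end{align}
```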
Slide 8: Probability Densities
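The defining property of a density, in its usual form:

```latex
% Probability of x falling in an interval, plus the usual constraints
\begin{align}
  p(x \in (a, b)) = \int_a^b p(x)\, \mathrm{d}x, \qquad
  p(x) \ge 0, \qquad \int_{-\infty}^{\infty} p(x)\, \mathrm{d}x = 1
\end{align}
```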
Slide 9: Transformed Densities
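For a change of variables $x = g(y)$, the standard transformation rule (presumably what the slide illustrated) is:

```latex
% Densities pick up a Jacobian factor under a change of variables
\begin{align}
  p_y(y) = p_x(x) \left| \frac{\mathrm{d}x}{\mathrm{d}y} \right|
         = p_x(g(y))\, |g'(y)|
\end{align}
```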
Slide 10: Expectations
Conditional expectation (discrete); approximate expectation (discrete and continuous).
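The standard definitions behind these bullet points:

```latex
\begin{align}
  \mathbb{E}[f] &= \sum_x p(x)\, f(x) && \text{discrete} \\
  \mathbb{E}[f] &= \int p(x)\, f(x)\, \mathrm{d}x && \text{continuous} \\
  \mathbb{E}_x[f \mid y] &= \sum_x p(x \mid y)\, f(x) && \text{conditional (discrete)} \\
  \mathbb{E}[f] &\simeq \frac{1}{N} \sum_{n=1}^{N} f(x_n) && \text{approximate}
\end{align}
```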
Slide 11: Variances and Covariances
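In the usual notation:

```latex
\begin{align}
  \operatorname{var}[f] &= \mathbb{E}\!\left[(f(x) - \mathbb{E}[f(x)])^2\right]
                         = \mathbb{E}[f(x)^2] - \mathbb{E}[f(x)]^2 \\
  \operatorname{cov}[x, y] &= \mathbb{E}_{x,y}\!\left[(x - \mathbb{E}[x])(y - \mathbb{E}[y])\right]
                            = \mathbb{E}_{x,y}[x y] - \mathbb{E}[x]\,\mathbb{E}[y]
\end{align}
```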
Slide 12: The Gaussian Distribution
(the figure labels the two parameters: the mean μ and the variance σ²)
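The univariate Gaussian density:

```latex
\begin{align}
  \mathcal{N}(x \mid \mu, \sigma^2)
    = \frac{1}{(2\pi\sigma^2)^{1/2}}
      \exp\!\left\{ -\frac{(x - \mu)^2}{2\sigma^2} \right\}
\end{align}
```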
Slide 13: Gaussian Mean and Variance
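The first two moments, obtained by integrating against the density:

```latex
\begin{align}
  \mathbb{E}[x] &= \int_{-\infty}^{\infty} \mathcal{N}(x \mid \mu, \sigma^2)\, x \,\mathrm{d}x = \mu \\
  \mathbb{E}[x^2] &= \mu^2 + \sigma^2, \qquad
  \operatorname{var}[x] = \mathbb{E}[x^2] - \mathbb{E}[x]^2 = \sigma^2
\end{align}
```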
Slide 14: The Multivariate Gaussian
(the normalizer involves |Σ|, the determinant of the covariance matrix)
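The density over a $D$-dimensional vector $\mathbf{x}$:

```latex
% D-dimensional Gaussian with mean vector mu and D x D covariance Sigma
\begin{align}
  \mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma})
    = \frac{1}{(2\pi)^{D/2}\, |\boldsymbol{\Sigma}|^{1/2}}
      \exp\!\left\{ -\tfrac{1}{2}
        (\mathbf{x} - \boldsymbol{\mu})^{\mathsf{T}}
        \boldsymbol{\Sigma}^{-1}
        (\mathbf{x} - \boldsymbol{\mu}) \right\}
\end{align}
```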
Slide 15: Gaussian Parameter Estimation
Goal: find the parameters (mean and variance) that maximize the likelihood of the observed (blue) data points, i.e. the product of the Gaussian density evaluated at each observation. The data are N observations of the variable x, assumed to be i.i.d. (independent and identically distributed) samples.
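Under the i.i.d. assumption the likelihood function factorizes into a product over observations:

```latex
\begin{align}
  p(\mathbf{x} \mid \mu, \sigma^2)
    = \prod_{n=1}^{N} \mathcal{N}(x_n \mid \mu, \sigma^2),
  \qquad \mathbf{x} = (x_1, \ldots, x_N)^{\mathsf{T}}
\end{align}
```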
Slide 16: Maximum (Log) Likelihood
Maximize the log likelihood with respect to the mean μ, then with respect to the variance σ².
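Taking the log of the likelihood and setting the derivatives to zero gives the familiar estimators:

```latex
\begin{align}
  \ln p(\mathbf{x} \mid \mu, \sigma^2)
    &= -\frac{1}{2\sigma^2} \sum_{n=1}^{N} (x_n - \mu)^2
       - \frac{N}{2} \ln \sigma^2 - \frac{N}{2} \ln(2\pi) \\
  \mu_{\mathrm{ML}} &= \frac{1}{N} \sum_{n=1}^{N} x_n
    && \text{(sample mean)} \\
  \sigma^2_{\mathrm{ML}} &= \frac{1}{N} \sum_{n=1}^{N} (x_n - \mu_{\mathrm{ML}})^2
    && \text{(sample variance)}
\end{align}
```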
Slide 17: Properties of μ_ML and σ²_ML
μ_ML is unbiased, but σ²_ML is a biased estimator: on average it underestimates the true variance.
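Quantitatively, with expectations taken over datasets drawn from the true distribution:

```latex
\begin{align}
  \mathbb{E}[\mu_{\mathrm{ML}}] &= \mu \\
  \mathbb{E}[\sigma^2_{\mathrm{ML}}] &= \frac{N-1}{N}\, \sigma^2 \\
  \tilde{\sigma}^2 &= \frac{N}{N-1}\, \sigma^2_{\mathrm{ML}}
    = \frac{1}{N-1} \sum_{n=1}^{N} (x_n - \mu_{\mathrm{ML}})^2
    && \text{(unbiased correction)}
\end{align}
```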
Slide 18: Maximum Likelihood Estimation
This bias lies at the core of the over-fitting problem encountered with maximum likelihood estimation.
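A short simulation makes the bias tangible. This is a sketch of mine rather than anything from the slides; the variable names (var_ml, trials, and so on) are invented for illustration:

```python
import numpy as np

# Draw many datasets of size N from a Gaussian with known variance and
# compare the average ML variance estimate with the truth. The factor
# (N - 1) / N predicted by the bias formula should show up empirically.
rng = np.random.default_rng(0)
mu, sigma2, N, trials = 0.0, 4.0, 5, 100_000

samples = rng.normal(mu, np.sqrt(sigma2), size=(trials, N))
mu_ml = samples.mean(axis=1)                             # ML estimate of the mean
var_ml = ((samples - mu_ml[:, None]) ** 2).mean(axis=1)  # ML (biased) variance

print(f"true variance:    {sigma2}")
print(f"mean of var_ml:   {var_ml.mean():.3f}")   # ~ (N-1)/N * sigma2 = 3.2
print(f"(N-1)/N * sigma2: {(N - 1) / N * sigma2}")
```

With N = 5 the average ML estimate comes out near 3.2 rather than 4, matching the (N-1)/N factor.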
Slide 19: Curve Fitting Re-visited
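In the probabilistic formulation of curve fitting, the target t is modeled as the polynomial value plus zero-mean Gaussian noise with precision (inverse variance) β:

```latex
\begin{align}
  p(t \mid x, \mathbf{w}, \beta)
    = \mathcal{N}\!\left(t \mid y(x, \mathbf{w}),\, \beta^{-1}\right)
\end{align}
```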
Slide 20: Maximum Likelihood I
Maximize the log likelihood with respect to w. Surprise! Maximizing the log likelihood is equivalent to minimizing the sum-of-squares error function.
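The log likelihood over the whole dataset, in the standard derivation:

```latex
\begin{align}
  \ln p(\mathbf{t} \mid \mathbf{x}, \mathbf{w}, \beta)
    &= -\frac{\beta}{2} \sum_{n=1}^{N} \{ y(x_n, \mathbf{w}) - t_n \}^2
       + \frac{N}{2} \ln \beta - \frac{N}{2} \ln(2\pi) \\
  \intertext{Only the first term depends on $\mathbf{w}$, so maximizing
  over $\mathbf{w}$ is the same as minimizing}
  E(\mathbf{w}) &= \frac{1}{2} \sum_{n=1}^{N} \{ y(x_n, \mathbf{w}) - t_n \}^2
\end{align}
```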
Slide 21: Maximum Likelihood II
Maximize the log likelihood with respect to the precision β. First determine w_ML by minimizing the sum-of-squares error E(w); β_ML then follows in closed form.
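Setting the derivative with respect to β to zero gives:

```latex
\begin{align}
  \frac{1}{\beta_{\mathrm{ML}}}
    = \frac{1}{N} \sum_{n=1}^{N} \{ y(x_n, \mathbf{w}_{\mathrm{ML}}) - t_n \}^2
\end{align}
```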
Slide 22: Predictive Distribution
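Substituting the ML estimates back into the noise model yields a distribution over t for a new input x:

```latex
\begin{align}
  p(t \mid x, \mathbf{w}_{\mathrm{ML}}, \beta_{\mathrm{ML}})
    = \mathcal{N}\!\left(t \mid y(x, \mathbf{w}_{\mathrm{ML}}),\,
                         \beta_{\mathrm{ML}}^{-1}\right)
\end{align}
```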
Slide 23: MAP: A Step towards Bayes
Place a prior over the parameters w, governed by a hyper-parameter α. By Bayes’ theorem, the posterior is proportional to the likelihood times the prior. The maximum posterior (MAP) estimate of w is determined by minimizing a regularized sum-of-squares error.
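With the usual zero-mean isotropic Gaussian prior, this reads:

```latex
\begin{align}
  p(\mathbf{w} \mid \alpha)
    &= \mathcal{N}(\mathbf{w} \mid \mathbf{0},\, \alpha^{-1}\mathbf{I}) \\
  p(\mathbf{w} \mid \mathbf{x}, \mathbf{t}, \alpha, \beta)
    &\propto p(\mathbf{t} \mid \mathbf{x}, \mathbf{w}, \beta)\,
             p(\mathbf{w} \mid \alpha) \\
  \tilde{E}(\mathbf{w})
    &= \frac{\beta}{2} \sum_{n=1}^{N} \{ y(x_n, \mathbf{w}) - t_n \}^2
       + \frac{\alpha}{2}\, \mathbf{w}^{\mathsf{T}} \mathbf{w}
\end{align}
```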
Slide 24: MAP: A Step towards Bayes (continued)
Determine w_MAP by minimizing the regularized sum-of-squares error Ẽ(w). Surprise! Maximizing the posterior is equivalent to minimizing the regularized sum-of-squares error function, with regularization coefficient λ = α/β.