1
CHAPTER 4: Parametric Methods
(Lecture Notes for E. Alpaydın, 2004, Introduction to Machine Learning, © The MIT Press, V1.1)
2
Parametric Estimation
Given a sample X = {x^t}_t, the goal is to infer the probability distribution p(x).
Parametric estimation: assume a form for p(x|θ) and estimate θ, its sufficient statistics, using X; e.g., N(μ, σ²), where θ = {μ, σ²}.
Problem: how can we obtain θ from X?
Assumption: X contains samples of a one-dimensional random variable. Multivariate estimation, where each instance consists of multiple measurements rather than a single one, comes later.
Example: the Gaussian distribution, http://en.wikipedia.org/wiki/Normal_distribution
3
Maximum Likelihood Estimation
A density p with parameters θ is assumed, and x^t ~ p(x|θ).
Likelihood of θ given the sample X: l(θ|X) = p(X|θ) = ∏_t p(x^t|θ)
We look for the θ that maximizes the likelihood of the sample.
Log likelihood: L(θ|X) = log l(θ|X) = ∑_t log p(x^t|θ)
Maximum likelihood estimator (MLE): θ* = argmax_θ L(θ|X)
Homework: given the sample 0, 3, 3, 4, 5 and x ~ N(μ, σ²), use MLE to find (μ, σ²).
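As a minimal illustration (not the homework data), the sketch below computes the Gaussian MLE on an assumed sample; the closed-form estimates it uses are the standard ones shown on the Gaussian slide further below.

```python
import numpy as np

# Minimal sketch: MLE for a univariate Gaussian N(mu, sigma^2).
# The sample below is illustrative, not the homework data.
x = np.array([2.1, 3.4, 2.9, 4.0, 3.2, 2.7])

mu_hat = x.mean()                     # MLE of mu: sample mean
var_hat = ((x - mu_hat) ** 2).mean()  # MLE of sigma^2: mean squared deviation (divides by N, not N-1)

print(mu_hat, var_hat)
```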
4
Examples: Bernoulli/Multinomial
Bernoulli: two states, failure/success, x ∈ {0, 1}:
P(x) = p₀^x (1 − p₀)^(1−x)
L(p₀|X) = log ∏_t p₀^(x^t) (1 − p₀)^(1−x^t)
MLE: p̂₀ = ∑_t x^t / N
Multinomial: K > 2 states, x_i ∈ {0, 1}:
P(x₁, x₂, ..., x_K) = ∏_i p_i^(x_i)
L(p₁, p₂, ..., p_K|X) = log ∏_t ∏_i p_i^(x_i^t)
MLE: p̂_i = ∑_t x_i^t / N
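A short sketch of these two MLE formulas on made-up data (the arrays below are assumptions, not data from the course):

```python
import numpy as np

# Bernoulli: p0_hat = (number of successes) / N
x = np.array([1, 0, 1, 1, 0, 1, 0, 1])
p0_hat = x.mean()

# Multinomial: each x^t is a 1-of-K indicator vector; p_i_hat = sum_t x_i^t / N
X = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
p_hat = X.mean(axis=0)   # one estimate per state

print(p0_hat, p_hat)
```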
5
Gaussian (Normal) Distribution
p(x) = N(μ, σ²): p(x) = (1/(√(2π) σ)) exp(−(x − μ)²/(2σ²))
MLE for μ and σ²: m = (1/N) ∑_t x^t,  s² = (1/N) ∑_t (x^t − m)²
http://en.wikipedia.org/wiki/Probability_density_function
6
Bias and Variance
Unknown parameter θ; estimator d_i = d(X_i) on sample X_i.
Bias: b_θ(d) = E[d] − θ
Variance: E[(d − E[d])²]
Mean square error of the estimator d:
r(d, θ) = E[(d − θ)²] = (E[d] − θ)² + E[(d − E[d])²] = Bias² + Variance
The bias term measures systematic error in the model itself; the variance term measures variation due to the randomness of the sample.
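As an illustration of this decomposition (all settings below are assumptions), the following Monte Carlo sketch estimates the bias, variance, and MSE of two variance estimators and checks that MSE ≈ Bias² + Variance:

```python
import numpy as np

# Monte Carlo sketch: bias/variance of two estimators of sigma^2 for samples from N(0, 4).
# Sample size N and number of repetitions M are arbitrary choices.
rng = np.random.default_rng(0)
mu, sigma2, N, M = 0.0, 4.0, 10, 20000

samples = rng.normal(mu, np.sqrt(sigma2), size=(M, N))
d_mle = samples.var(axis=1, ddof=0)   # MLE estimator (divides by N): biased
d_unb = samples.var(axis=1, ddof=1)   # unbiased estimator (divides by N-1)

for name, d in [("MLE", d_mle), ("unbiased", d_unb)]:
    bias = d.mean() - sigma2
    var = d.var()
    mse = ((d - sigma2) ** 2).mean()
    # mse should be close to bias**2 + var
    print(f"{name}: bias={bias:.3f} var={var:.3f} mse={mse:.3f} bias^2+var={bias**2 + var:.3f}")
```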
7
Bayes' Estimator
Treat θ as a random variable with prior p(θ).
Bayes' rule: p(θ|X) = p(X|θ) p(θ) / p(X)
Maximum a posteriori (MAP): θ_MAP = argmax_θ p(θ|X)
Maximum likelihood (ML): θ_ML = argmax_θ p(X|θ)
Bayes' estimator: θ_Bayes = E[θ|X] = ∫ θ p(θ|X) dθ
Comments: ML simply takes the value of θ that maximizes the likelihood. Compared with ML, MAP additionally takes the prior into account. The Bayes' estimator averages over all possible values of θ, weighted by how probable each is under the posterior p(θ|X).
For MAP see: http://en.wikipedia.org/wiki/Maximum_a_posteriori_estimation
For a comparison see: http://metaoptimize.com/qa/questions/7885/what-is-the-relationship-between-mle-map-em-point-estimation
8
Bayes' Estimator: Example
x^t ~ N(θ, σ₀²) and prior θ ~ N(μ, σ²)
θ_ML = m (the sample mean)
θ_MAP = θ_Bayes = [N/σ₀² / (N/σ₀² + 1/σ²)] m + [1/σ² / (N/σ₀² + 1/σ²)] μ
As N (or the prior variance σ²) grows, the estimate converges to m.
Skip today!
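A small numerical sketch of these estimators; the sample, the prior, and both variances are assumed values:

```python
import numpy as np

# x^t ~ N(theta, sigma0^2), prior theta ~ N(mu, sigma^2); all numbers are illustrative.
rng = np.random.default_rng(1)
sigma0_sq, mu_prior, sigma_sq = 1.0, 0.0, 0.5
theta_true, N = 2.0, 5
x = rng.normal(theta_true, np.sqrt(sigma0_sq), size=N)

m = x.mean()                                   # theta_ML = sample mean
w = (N / sigma0_sq) / (N / sigma0_sq + 1.0 / sigma_sq)
theta_map = w * m + (1 - w) * mu_prior         # = theta_Bayes (Gaussian posterior: mode = mean)

print(m, theta_map)  # MAP/Bayes is pulled toward the prior mean; as N grows it converges to m
```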
9
Parametric Classification (a way of modeling p(C_i|x))
10
Parametric Classification
Using Bayes' theorem:
P(C₁|x) = P(C₁) P(x|C₁) / P(x)
P(C₂|x) = P(C₂) P(x|C₂) / P(x)
Since P(x) is the same in both formulas, we can drop it when comparing classes.
The class likelihoods P(x|C_i) are estimated from the data by ML/MAP.
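A minimal sketch of this recipe with one-dimensional Gaussian class likelihoods; the toy data and the test point are assumptions:

```python
import numpy as np

# Plug ML-estimated class likelihoods and priors into Bayes' rule (toy 1-D data, two classes).
rng = np.random.default_rng(2)
x1 = rng.normal(0.0, 1.0, 40)   # training points from C1
x2 = rng.normal(3.0, 1.5, 60)   # training points from C2

def fit(x):
    return x.mean(), x.var()    # ML estimates m, s^2

def log_gauss(x, m, s2):
    return -0.5 * np.log(2 * np.pi * s2) - (x - m) ** 2 / (2 * s2)

priors = np.array([len(x1), len(x2)], dtype=float)
priors /= priors.sum()
params = [fit(x1), fit(x2)]

x_new = 1.2
g = [log_gauss(x_new, m, s2) + np.log(p)       # g_i(x) = log p(x|C_i) + log P(C_i); P(x) dropped
     for (m, s2), p in zip(params, priors)]
print("choose C%d" % (int(np.argmax(g)) + 1))
```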
11
Given the sample X = {x^t, r^t}_t, where r_i^t = 1 if x^t ∈ C_i and 0 otherwise, the ML estimates are
P̂(C_i) = ∑_t r_i^t / N,  m_i = ∑_t x^t r_i^t / ∑_t r_i^t,  s_i² = ∑_t (x^t − m_i)² r_i^t / ∑_t r_i^t
and the discriminant becomes
g_i(x) = −log s_i − (x − m_i)² / (2 s_i²) + log P̂(C_i)   (constant terms dropped)
12
Equal variances: a single decision boundary, halfway between the two means (for equal priors).
13
Different variances: two decision boundaries. Homework!
14
Regression
Assume r = f(x) + ε with ε ~ N(0, σ²), so p(r|x) ~ N(g(x|θ), σ²): estimating θ again maximizes the probability (likelihood) of the sample.
15
Regression: From Log Likelihood to Error
Maximizing the log likelihood under the Gaussian noise model is equivalent to minimizing the squared error
E(θ|X) = (1/2) ∑_t [r^t − g(x^t|θ)]²
Skip to slide 20!
16
Linear Regression
g(x^t|w₁, w₀) = w₁ x^t + w₀; setting the derivatives of the squared error with respect to w₀ and w₁ to zero gives two linear equations in the two unknowns (the normal equations).
Relationship to what we discussed in Topic 2?
17
Polynomial Regression
g(x^t|w_k, ..., w₁, w₀) = w_k (x^t)^k + ... + w₁ x^t + w₀
Here we get k+1 linear equations in k+1 unknowns, which can be written in matrix form and solved (see the sketch below).
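A minimal least-squares sketch, using an assumed target function and noise level; degree k = 1 reduces to the linear regression of the previous slide:

```python
import numpy as np

# Fit a degree-k polynomial by least squares; the normal equations A w = y are the
# k+1 linear equations in k+1 unknowns mentioned above. Data and k are arbitrary.
rng = np.random.default_rng(3)
x = np.linspace(0, 1, 30)
r = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

k = 3
D = np.vander(x, k + 1, increasing=True)      # columns 1, x, x^2, ..., x^k
A = D.T @ D                                   # (k+1) x (k+1) system
y = D.T @ r
w = np.linalg.solve(A, y)                     # w0, w1, ..., wk

print(w)                                      # k=1 recovers ordinary linear regression
```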
18
Other Error Measures
Square error: E(θ|X) = ∑_t (r^t − g(x^t|θ))²
Relative square error: E(θ|X) = ∑_t (r^t − g(x^t|θ))² / ∑_t (r^t − r̄)²
Absolute error: E(θ|X) = ∑_t |r^t − g(x^t|θ)|
ε-sensitive error: E(θ|X) = ∑_t 1(|r^t − g(x^t|θ)| > ε) (|r^t − g(x^t|θ)| − ε)
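These measures translate directly into code; a small sketch on made-up target and prediction values:

```python
import numpy as np

# The error measures listed above, as functions of targets r and predictions g(x|theta).
def square_error(r, g):
    return np.sum((r - g) ** 2)

def relative_square_error(r, g):
    return np.sum((r - g) ** 2) / np.sum((r - r.mean()) ** 2)

def absolute_error(r, g):
    return np.sum(np.abs(r - g))

def eps_sensitive_error(r, g, eps):
    d = np.abs(r - g)
    return np.sum((d > eps) * (d - eps))      # errors inside the eps tube cost nothing

r = np.array([1.0, 2.0, 3.5])
g = np.array([1.1, 2.4, 3.0])
print(square_error(r, g), absolute_error(r, g), eps_sensitive_error(r, g, 0.2))
```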
19
Bias and Variance
The expected squared error at x decomposes as
E[(r − g(x))² | x] = E[(r − E[r|x])² | x] + (E_X[g(x)] − E[r|x])² + E_X[(g(x) − E_X[g(x)])²]
= noise + bias² + variance
To be revisited next week!
20
Estimating Bias and Variance
M samples X_i = {x^t_i, r^t_i}, i = 1, ..., M, are used to fit g_i(x), i = 1, ..., M. With ḡ(x) = (1/M) ∑_i g_i(x),
Bias²(g) = (1/N) ∑_t [ḡ(x^t) − f(x^t)]²
Variance(g) = (1/(N M)) ∑_t ∑_i [g_i(x^t) − ḡ(x^t)]²
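A simulation sketch of these estimates; the target function, noise level, sample size, and model degree are all assumptions:

```python
import numpy as np

# Draw M samples from f(x) + noise, fit a degree-2 polynomial g_i to each,
# then average over samples and evaluation points as in the formulas above.
rng = np.random.default_rng(4)
f = lambda x: np.sin(2 * np.pi * x)
x_eval = np.linspace(0, 1, 50)                 # the x^t at which bias/variance are measured
M, N, degree = 100, 25, 2

preds = []
for _ in range(M):
    x = rng.uniform(0, 1, N)
    r = f(x) + rng.normal(0, 0.3, N)
    w = np.polyfit(x, r, degree)               # fit g_i
    preds.append(np.polyval(w, x_eval))
preds = np.array(preds)                        # shape (M, len(x_eval))

g_bar = preds.mean(axis=0)                     # average fit over the M samples
bias_sq = np.mean((g_bar - f(x_eval)) ** 2)
variance = np.mean((preds - g_bar) ** 2)
print(bias_sq, variance)
```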
21
Bias/Variance Dilemma
Example: g_i(x) = 2 has no variance and high bias; g_i(x) = ∑_t r_i^t / N has lower bias but nonzero variance.
As we increase complexity, bias decreases (a better fit to the data) and variance increases (the fit varies more with the data).
This trade-off is the bias/variance dilemma (Geman et al., 1992).
22
(Figure: the true function f, the individual fits g_i, and their average ḡ; the gap between ḡ and f illustrates bias, the spread of the g_i around ḡ illustrates variance.)
23
Polynomial Regression
(Figure: polynomial fits of increasing order to the same sample; the best fit is the one with minimum error.)
24
Model Selection
Cross-validation: measure generalization accuracy by testing on data unused during training (a sketch follows below).
Regularization: penalize complex models, E' = error on data + λ · model complexity.
Akaike's information criterion (AIC), Bayesian information criterion (BIC).
Minimum description length (MDL): Kolmogorov complexity, shortest description of the data.
Structural risk minimization (SRM).
Remark: will be discussed in more depth later (Topic 11).
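A minimal k-fold cross-validation sketch for choosing a polynomial degree; the data, noise level, and degree range are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(0, 1, 60)
r = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

def cv_error(x, r, degree, k=5):
    idx = rng.permutation(x.size)              # shuffle, then split into k folds
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        w = np.polyfit(x[train], r[train], degree)
        errs.append(np.mean((r[test] - np.polyval(w, x[test])) ** 2))
    return np.mean(errs)

scores = {d: cv_error(x, r, d) for d in range(1, 8)}
print(min(scores, key=scores.get), scores)     # degree with smallest validation error
```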
25
Bayesian Model Selection
Prior on models, p(model).
Regularization, when the prior favors simpler models.
Bayes: MAP of the posterior, p(model|data).
Average over a number of models with high posterior (voting, ensembles: Chapter 15).
26
CHAPTER 5: Multivariate Methods
Normal distribution: http://en.wikipedia.org/wiki/Normal_distribution
Z-score: see http://en.wikipedia.org/wiki/Standard_score
27
Multivariate Data
Multiple measurements (sensors); d inputs/features/attributes (d-variate); N instances/observations/examples.
The data can be written as an N × d matrix X, where row t is the instance x^t = [x^t_1, ..., x^t_d].
28
Multivariate Parameters
Mean: E[x] = μ = [μ₁, ..., μ_d]ᵀ
Covariance: σ_ij ≡ Cov(x_i, x_j); covariance matrix Σ ≡ Cov(x) = E[(x − μ)(x − μ)ᵀ]
Correlation: ρ_ij ≡ σ_ij / (σ_i σ_j), http://en.wikipedia.org/wiki/Correlation
Example covariance matrix:
[ 16   0   0 ]
[  0  16  -3 ]
[  0  -3   1 ]
29
Parameter Estimation
Sample mean: m = ∑_t x^t / N
Sample covariance: S, with s_ij = ∑_t (x_i^t − m_i)(x_j^t − m_j) / N
Sample correlation: R, with r_ij = s_ij / (s_i s_j)
http://en.wikipedia.org/wiki/Multivariate_normal_distribution
http://webscripts.softpedia.com/script/Scientific-Engineering-Ruby/Statistics-and-Probability/Multivariate-Gaussian-Distribution-35454.html
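A short sketch of these estimates on a made-up N × d data matrix; the generating mean and covariance are assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.multivariate_normal(mean=[0, 5], cov=[[4, 1.5], [1.5, 2]], size=200)  # N=200, d=2

N = X.shape[0]
m = X.mean(axis=0)                              # sample mean vector
S = (X - m).T @ (X - m) / N                     # sample covariance (ML version, divides by N)
R = S / np.outer(np.sqrt(np.diag(S)), np.sqrt(np.diag(S)))  # sample correlation matrix

print(m, S, R, sep="\n")
```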
30
Multivariate Normal Distribution
x ~ N_d(μ, Σ): p(x) = (2π)^(−d/2) |Σ|^(−1/2) exp(−(1/2)(x − μ)ᵀ Σ⁻¹ (x − μ))   (5.9)
The quadratic form (x − μ)ᵀ Σ⁻¹ (x − μ) is the squared Mahalanobis distance between x and μ.
31
Mahalanobis Distance
The Mahalanobis distance between x and μ is √((x − μ)ᵀ Σ⁻¹ (x − μ)). It is based on the correlations between variables, by which different patterns can be identified and analyzed. It differs from the Euclidean distance in that it takes the correlations of the data set into account and is scale-invariant.
http://www.analyzemath.com/Calculators/inverse_matrix_3by3.html
http://en.wikipedia.org/wiki/Mahalanobis_distance
32
Multivariate Normal Distribution
The Mahalanobis distance (x − μ)ᵀ Σ⁻¹ (x − μ) measures the distance from x to μ in terms of Σ (it normalizes for differences in variances and for correlations).
Bivariate case (d = 2): with ρ the correlation between the two variables and z_i = (x_i − μ_i)/σ_i the z-score of x_i,
p(x₁, x₂) = (1 / (2π σ₁ σ₂ √(1 − ρ²))) exp(−(z₁² − 2ρ z₁ z₂ + z₂²) / (2(1 − ρ²)))
Z-score: see http://en.wikipedia.org/wiki/Standard_score
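A small sketch computing the squared Mahalanobis distance, with the bivariate Σ built from assumed σ₁, σ₂, ρ:

```python
import numpy as np

# Squared Mahalanobis distance (x - mu)^T Sigma^-1 (x - mu) on assumed values.
mu = np.array([0.0, 0.0])
sigma1, sigma2, rho = 2.0, 1.0, 0.5
Sigma = np.array([[sigma1**2,             rho * sigma1 * sigma2],
                  [rho * sigma1 * sigma2, sigma2**2]])

x = np.array([2.0, 1.0])
d = x - mu
mahal_sq = d @ np.linalg.solve(Sigma, d)        # solve instead of an explicit inverse
eucl_sq = d @ d

print(mahal_sq, eucl_sq)                        # Mahalanobis discounts directions with large variance
```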
33
Bivariate Normal
(Figure: bivariate normal densities and their isoprobability contours.)
35
Independent Inputs: Naive Bayes
If the x_i are independent, the off-diagonals of Σ are 0, and the Mahalanobis distance reduces to a weighted (by 1/σ_i) Euclidean distance:
p(x) = ∏_i p(x_i) = ∏_i (1/(√(2π) σ_i)) exp(−(1/2) ((x_i − μ_i)/σ_i)²)
If the variances are also equal, it reduces to the ordinary Euclidean distance.
36
Parametric Classification
If p(x|C_i) ~ N_d(μ_i, Σ_i):
p(x|C_i) = (2π)^(−d/2) |Σ_i|^(−1/2) exp(−(1/2)(x − μ_i)ᵀ Σ_i⁻¹ (x − μ_i))
the discriminant functions are
g_i(x) = log p(x|C_i) + log P(C_i) = −(1/2) log|Σ_i| − (1/2)(x − μ_i)ᵀ Σ_i⁻¹ (x − μ_i) + log P(C_i) + constant
37
Estimation of Parameters
P̂(C_i) = ∑_t r_i^t / N
m_i = ∑_t r_i^t x^t / ∑_t r_i^t
S_i = ∑_t r_i^t (x^t − m_i)(x^t − m_i)ᵀ / ∑_t r_i^t
Plugging these into the discriminant:
g_i(x) = −(1/2) log|S_i| − (1/2)(x − m_i)ᵀ S_i⁻¹ (x − m_i) + log P̂(C_i)
38
Different S_i: Quadratic Discriminant
Expanding the quadratic form gives g_i(x) = xᵀ W_i x + w_iᵀ x + w_i0, with
W_i = −(1/2) S_i⁻¹,  w_i = S_i⁻¹ m_i,  w_i0 = −(1/2) m_iᵀ S_i⁻¹ m_i − (1/2) log|S_i| + log P̂(C_i)
(skip)
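A sketch of this quadratic discriminant with per-class mean and covariance estimates; the toy data and the test point are assumptions:

```python
import numpy as np

# Estimate m_i, S_i per class and evaluate g_i(x) for a new point.
rng = np.random.default_rng(7)
X1 = rng.multivariate_normal([0, 0], [[1.0, 0.2], [0.2, 1.0]], 100)
X2 = rng.multivariate_normal([3, 2], [[2.0, -0.5], [-0.5, 0.8]], 150)

def g(x, X, prior):
    m = X.mean(axis=0)
    S = np.cov(X, rowvar=False, bias=True)      # ML covariance estimate
    d = x - m
    return (-0.5 * np.log(np.linalg.det(S))
            - 0.5 * d @ np.linalg.solve(S, d)
            + np.log(prior))

x_new = np.array([1.5, 1.0])
priors = np.array([len(X1), len(X2)]) / (len(X1) + len(X2))
scores = [g(x_new, X1, priors[0]), g(x_new, X2, priors[1])]
print("choose C%d" % (int(np.argmax(scores)) + 1))
```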
39
(Figure: class likelihoods, the posterior for C₁, and the resulting discriminant where P(C₁|x) = 0.5.)
40
Common Covariance Matrix S
Shared common sample covariance: S = ∑_i P̂(C_i) S_i
The discriminant reduces to g_i(x) = −(1/2)(x − m_i)ᵀ S⁻¹ (x − m_i) + log P̂(C_i),
which is a linear discriminant: g_i(x) = w_iᵀ x + w_i0 with w_i = S⁻¹ m_i and w_i0 = −(1/2) m_iᵀ S⁻¹ m_i + log P̂(C_i).
Initially skip!
41
Common Covariance Matrix S
(Figure: with a shared covariance matrix, the classes have identically shaped contours and the decision boundaries are linear.)
Initially skip!
42
Diagonal S
When the x_j, j = 1, ..., d, are independent, Σ is diagonal: p(x|C_i) = ∏_j p(x_j|C_i) (the naive Bayes' assumption).
g_i(x) = −(1/2) ∑_j ((x_j − m_ij)/s_j)² + log P̂(C_i)
Classify based on weighted Euclidean distance (in s_j units) to the nearest mean.
Likely covered in April!
43
Diagonal S
(Figure: axis-aligned contours; the variances may differ between dimensions.)
44
Diagonal S, Equal Variances
g_i(x) = −(1/(2 s²)) ∑_j (x_j − m_ij)² + log P̂(C_i)
Nearest mean classifier: classify based on Euclidean distance to the nearest mean. Each mean can be considered a prototype or template, so this is template matching (see the sketch below).
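A short sketch of the nearest-mean (template-matching) rule; the class means and the new instance are made-up values:

```python
import numpy as np

# With equal priors and S = s^2 I, assign x to the class whose mean is closest in Euclidean distance.
means = {"C1": np.array([0.0, 0.0]), "C2": np.array([3.0, 2.0]), "C3": np.array([-2.0, 4.0])}

x_new = np.array([2.2, 1.5])
dists = {c: np.sum((x_new - m) ** 2) for c, m in means.items()}
print(min(dists, key=dists.get))               # nearest prototype/template wins
```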
45
Diagonal S, Equal Variances
(Figure: circular, equal-size contours; a new instance is assigned to the class of the nearest mean.)
46
Model Selection
As we increase complexity (a less restricted S), bias decreases and variance increases. Assume simple models (allow some bias) to control variance (regularization).

Assumption                     Covariance matrix         No. of parameters
Shared, hyperspheric           S_i = S = s²I             1
Shared, axis-aligned           S_i = S, with s_ij = 0    d
Shared, hyperellipsoidal       S_i = S                   d(d+1)/2
Different, hyperellipsoidal    S_i                       K · d(d+1)/2
47
Discrete Features
Binary features: p_ij ≡ p(x_j = 1 | C_i). If the x_j are independent (naive Bayes'),
p(x|C_i) = ∏_j p_ij^(x_j) (1 − p_ij)^(1−x_j)
and the discriminant is linear:
g_i(x) = ∑_j [x_j log p_ij + (1 − x_j) log(1 − p_ij)] + log P(C_i)
Estimated parameters: p̂_ij = ∑_t x_j^t r_i^t / ∑_t r_i^t
(skip!)
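A small sketch of this binary-feature (Bernoulli naive Bayes) discriminant; the 0/1 feature matrix and labels are made up:

```python
import numpy as np

X = np.array([[1, 0, 1],
              [1, 1, 1],
              [0, 0, 1],
              [0, 1, 0],
              [0, 0, 0]])
y = np.array([0, 0, 0, 1, 1])

def g(x, X, y, c, eps=1e-9):
    Xc = X[y == c]
    p = Xc.mean(axis=0)                         # p_ij_hat = fraction of class-c examples with x_j = 1
    p = np.clip(p, eps, 1 - eps)                # avoid log(0)
    prior = Xc.shape[0] / X.shape[0]
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p)) + np.log(prior)

x_new = np.array([1, 0, 1])
print(max((0, 1), key=lambda c: g(x_new, X, y, c)))
```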
48
Discrete Features
Multinomial (1-of-n_j) features: x_j ∈ {v₁, v₂, ..., v_n_j}, with p_ijk ≡ p(x_j = v_k | C_i). If the x_j are independent,
p(x|C_i) = ∏_j ∏_k p_ijk^(z_jk), where z_jk = 1 if x_j = v_k and 0 otherwise
g_i(x) = ∑_j ∑_k z_jk log p_ijk + log P(C_i)
Estimated parameters: p̂_ijk = ∑_t z_jk^t r_i^t / ∑_t r_i^t
(skip!)
49
Multivariate Regression
Multivariate linear model: r^t = g(x^t | w₀, w₁, ..., w_d) + ε = w₀ + w₁ x₁^t + ... + w_d x_d^t
Multivariate polynomial model: define new higher-order variables z₁ = x₁, z₂ = x₂, z₃ = x₁², z₄ = x₂², z₅ = x₁x₂ and use the linear model in this new z space (basis functions, kernel trick, SVM: Chapter 10). A sketch follows below.
(skip!)
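A minimal sketch of the z-space trick: build the higher-order variables explicitly and solve an ordinary linear least-squares problem. The generating function, coefficients, and noise are assumptions:

```python
import numpy as np

rng = np.random.default_rng(9)
N = 200
x1, x2 = rng.uniform(-1, 1, N), rng.uniform(-1, 1, N)
r = 2 + x1 - 3 * x2 + 0.5 * x1**2 + x1 * x2 + rng.normal(0, 0.1, N)

# z = [1, x1, x2, x1^2, x2^2, x1*x2]: a linear model in z is a quadratic model in x.
Z = np.column_stack([np.ones(N), x1, x2, x1**2, x2**2, x1 * x2])
w, *_ = np.linalg.lstsq(Z, r, rcond=None)       # linear least squares in the z space
print(w)
```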