1
Introduction to Machine Learning: Multivariate Methods. Name: 李政軒
2
Multivariate Data: multiple measurements; d inputs/features/attributes make the data d-variate; N instances/observations/examples.
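In standard notation (the slide's own matrix is not preserved in this transcript), the sample can be written as an N × d data matrix whose row t is the t-th instance and whose column j is the j-th feature:

X = \begin{bmatrix} x_1^1 & x_2^1 & \cdots & x_d^1 \\ x_1^2 & x_2^2 & \cdots & x_d^2 \\ \vdots & \vdots & & \vdots \\ x_1^N & x_2^N & \cdots & x_d^N \end{bmatrix}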
3
Multivariate Parameters
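The equations on this slide are missing from the transcript; the standard population parameters it refers to are the mean vector, the covariance matrix, and the correlations:

\boldsymbol{\mu} = E[\mathbf{x}] = [\mu_1, \ldots, \mu_d]^T, \qquad
\Sigma = \mathrm{Cov}(\mathbf{x}) = E\left[(\mathbf{x}-\boldsymbol{\mu})(\mathbf{x}-\boldsymbol{\mu})^T\right], \qquad
\rho_{ij} = \frac{\sigma_{ij}}{\sigma_i \sigma_j}

where \sigma_{ij} = \mathrm{Cov}(x_i, x_j) and \sigma_i^2 = \sigma_{ii}.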
4
Parameter Estimation
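The estimators themselves are not in the transcript; the standard sample-based estimates are

m_j = \frac{1}{N}\sum_{t=1}^{N} x_j^t, \qquad
s_{ij} = \frac{1}{N}\sum_{t=1}^{N}(x_i^t - m_i)(x_j^t - m_j), \qquad
r_{ij} = \frac{s_{ij}}{s_i\, s_j}

giving the sample mean vector m, the sample covariance matrix S, and the sample correlation matrix R.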
5
Estimation of Missing Values: what to do if certain instances have missing attributes? Ignore those instances: not a good idea if the sample is small. Use 'missing' as an attribute value: it may carry information. Imputation: fill in the missing value ◦ Mean imputation: use the most likely value, e.g., the mean of the observed values of that attribute (see the sketch below) ◦ Imputation by regression: predict the missing value from the other attributes.
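A minimal mean-imputation sketch in Python, assuming the data sits in a NumPy array with NaN marking missing entries (the array name and toy values are illustrative, not from the slides):

import numpy as np

def mean_impute(X):
    # Fill each missing entry with the mean of the observed values in its column.
    X = X.astype(float).copy()
    col_means = np.nanmean(X, axis=0)      # per-attribute mean over observed values
    rows, cols = np.where(np.isnan(X))     # positions of the missing entries
    X[rows, cols] = col_means[cols]        # impute column-wise
    return X

X = np.array([[1.0, 2.0], [np.nan, 4.0], [3.0, np.nan]])
print(mean_impute(X))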
6
Multivariate Normal Distribution
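The density itself is not shown in the transcript; for x ~ N_d(μ, ∑) it is

p(\mathbf{x}) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\!\left[-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x}-\boldsymbol{\mu})\right]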
7
Multivariate Normal Distribution (continued). Mahalanobis distance: (x − μ)ᵀ ∑⁻¹ (x − μ) measures the distance from x to μ in units of ∑; it normalizes for differences in variances and for correlations between attributes. Bivariate case: d = 2.
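A short NumPy sketch of the Mahalanobis distance; μ and ∑ here are illustrative values, not taken from the slides:

import numpy as np

x     = np.array([1.0, 2.0])
mu    = np.array([0.0, 0.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

diff = x - mu
d2 = diff @ np.linalg.solve(Sigma, diff)   # (x - mu)^T Sigma^{-1} (x - mu)
print(np.sqrt(d2))                         # Mahalanobis distance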
8
Bivariate Normal: probability contour plot of the bivariate normal distribution. Its center is given by the mean, and its shape and orientation depend on the covariance matrix.
9
Independent Inputs: Naive Bayes. If the x_i are independent, the off-diagonals of ∑ are 0 and the Mahalanobis distance reduces to a Euclidean distance weighted by 1/σ_i. If the variances are also equal, it reduces to the plain Euclidean distance.
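In standard notation (the slide's own equations are not preserved here), independence makes ∑ diagonal, so the density and the distance factor per dimension:

p(\mathbf{x}) = \prod_{j=1}^{d} p_j(x_j), \qquad
(\mathbf{x}-\boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x}-\boldsymbol{\mu}) = \sum_{j=1}^{d}\left(\frac{x_j-\mu_j}{\sigma_j}\right)^2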
10
Parametric Classification: if p(x | C_i) ~ N(μ_i, ∑_i), the class likelihoods combine with the priors to form the discriminant functions (see below).
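The discriminant functions are missing from the transcript; with Gaussian class likelihoods they take the standard form

g_i(\mathbf{x}) = \log p(\mathbf{x}\mid C_i) + \log P(C_i)
= -\frac{d}{2}\log 2\pi - \frac{1}{2}\log|\Sigma_i| - \frac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_i)^T \Sigma_i^{-1}(\mathbf{x}-\boldsymbol{\mu}_i) + \log P(C_i)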
11
Estimation of Parameters
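The estimators are not in the transcript; the standard maximum-likelihood plug-in estimates, with r_i^t = 1 if x^t belongs to class C_i and 0 otherwise, are

\hat{P}(C_i) = \frac{\sum_t r_i^t}{N}, \qquad
\mathbf{m}_i = \frac{\sum_t r_i^t\, \mathbf{x}^t}{\sum_t r_i^t}, \qquad
S_i = \frac{\sum_t r_i^t\, (\mathbf{x}^t-\mathbf{m}_i)(\mathbf{x}^t-\mathbf{m}_i)^T}{\sum_t r_i^t}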
12
Different S_i: when each class keeps its own covariance estimate S_i, the result is a quadratic discriminant.
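Substituting the estimates into the Gaussian discriminant (standard form; the slide's equation is not preserved) gives

g_i(\mathbf{x}) = -\frac{1}{2}\log|S_i| - \frac{1}{2}(\mathbf{x}-\mathbf{m}_i)^T S_i^{-1}(\mathbf{x}-\mathbf{m}_i) + \log \hat{P}(C_i)

which is quadratic in x, so the decision boundaries are quadratic.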
13
Figure: class likelihoods, the posterior for C_1, and the discriminant boundary where P(C_1 | x) = 0.5.
14
Common Covariance Matrix S: if the classes share a common sample covariance S, the discriminant reduces to a linear discriminant.
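In the standard shared-covariance form (the slide's equations are missing here):

S = \sum_i \hat{P}(C_i)\, S_i, \qquad
g_i(\mathbf{x}) = -\frac{1}{2}(\mathbf{x}-\mathbf{m}_i)^T S^{-1}(\mathbf{x}-\mathbf{m}_i) + \log \hat{P}(C_i) = \mathbf{w}_i^T \mathbf{x} + w_{i0}

with \mathbf{w}_i = S^{-1}\mathbf{m}_i and w_{i0} = -\frac{1}{2}\mathbf{m}_i^T S^{-1}\mathbf{m}_i + \log\hat{P}(C_i), since the quadratic term x^T S^{-1} x is the same for every class and can be dropped.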
15
Covariances may be arbitrary but shared by both classes.
16
Diagonal S: when the x_j, j = 1, ..., d, are independent, ∑ is diagonal and p(x | C_i) = ∏_j p(x_j | C_i). Classify based on the weighted Euclidean distance (in s_j units) to the nearest mean.
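With a diagonal shared covariance, the discriminant becomes (standard form, not preserved in the transcript)

g_i(\mathbf{x}) = -\frac{1}{2}\sum_{j=1}^{d}\left(\frac{x_j - m_{ij}}{s_j}\right)^2 + \log \hat{P}(C_i)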
17
Figure: the variances may be different across dimensions.
18
Diagonal S, equal variances: nearest-mean classifier. Classify based on the Euclidean distance to the nearest mean. Each mean can be considered a prototype or template, so this is template matching.
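A minimal nearest-mean (template matching) sketch in Python; the data and labels are illustrative, not from the slides:

import numpy as np

def fit_means(X, y):
    # One mean vector (prototype/template) per class label.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(means, x):
    # Assign x to the class whose mean is closest in Euclidean distance.
    return min(means, key=lambda c: np.linalg.norm(x - means[c]))

X = np.array([[0.0, 0.1], [0.2, -0.1], [3.0, 3.1], [2.9, 2.8]])
y = np.array([0, 0, 1, 1])
means = fit_means(X, y)
print(predict(means, np.array([2.5, 2.7])))   # -> 1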
19
Figure: all classes have equal, diagonal covariance matrices with equal variances on both dimensions.
20
Model Selection: as we increase complexity (a less restricted S), bias decreases and variance increases. Assume simple models (allow some bias) to control variance: regularization.
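One way to make the complexity ordering concrete is to count the free covariance parameters for d-dimensional inputs and K classes (a standard tally, not reproduced from the slide): shared S = s²I (equal variances): 1; shared diagonal S: d; shared arbitrary S: d(d+1)/2; a separate arbitrary S_i per class: K · d(d+1)/2.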
21
Different cases of the covariance matrices fitted to the same data lead to different boundaries.
22
Discrete Features. Binary features: if the x_j are independent (Naive Bayes'), the discriminant is linear, and the parameters are estimated from class-conditional frequencies (see below).
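In standard Bernoulli Naive Bayes notation (the slide's equations are not in the transcript), with p_ij = p(x_j = 1 | C_i):

g_i(\mathbf{x}) = \sum_{j}\left[x_j \log p_{ij} + (1-x_j)\log(1-p_{ij})\right] + \log P(C_i), \qquad
\hat{p}_{ij} = \frac{\sum_t x_j^t\, r_i^t}{\sum_t r_i^t}

which is linear in the x_j.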
23
Discrete Features (continued). Multinomial (1-of-n_j) features: x_j ∈ {v_1, v_2, ..., v_{n_j}}. If the x_j are independent, the discriminant is again a sum of per-feature log-probabilities (see below).
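Again in standard notation (not preserved in the transcript), let z_{jk} = 1 if x_j = v_k and 0 otherwise, and p_{ijk} = p(x_j = v_k | C_i):

g_i(\mathbf{x}) = \sum_{j}\sum_{k} z_{jk}\log p_{ijk} + \log P(C_i), \qquad
\hat{p}_{ijk} = \frac{\sum_t z_{jk}^t\, r_i^t}{\sum_t r_i^t}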
24
Multivariate Regression. Multivariate linear model: fit the output as a linear function of x_1, ..., x_d. Multivariate polynomial model: define new higher-order variables z_1 = x_1, z_2 = x_2, z_3 = x_1², z_4 = x_2², z_5 = x_1·x_2, and use the linear model in this new z space.
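A small NumPy sketch of fitting the polynomial model as a linear model in the expanded z space; the toy data and coefficients are illustrative, not from the slides:

import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = rng.normal(size=100)
r = 1.0 + 2.0*x1 - x2 + 0.5*x1**2 + rng.normal(scale=0.1, size=100)   # toy targets

# z space: [1, x1, x2, x1^2, x2^2, x1*x2]
Z = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
w, *_ = np.linalg.lstsq(Z, r, rcond=None)    # linear least squares in z
print(np.round(w, 2))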