Published by John O'Neal; modified over 9 years ago.
Slide 1: Overview of Supervised Learning (2015-10-23)
Slide 2: Outline
- Linear Regression and the Nearest-Neighbor Method
- Statistical Decision Theory
- Local Methods in High Dimensions
- Statistical Models, Supervised Learning and Function Approximation
- Structured Regression Models
- Classes of Restricted Estimators
- Model Selection and the Bias-Variance Tradeoff
Slide 3: Notation
- X: inputs, feature vector, predictors, independent variables. Generally X is a vector of p values; qualitative features are coded in X. Sample values of X are written in lower case; x_i is the i-th of N sample values.
- Y: output, response, dependent variable. Typically a scalar (it can be a vector) of real values. Again, y_i is a realized value.
- G: a qualitative response, taking values in a discrete set G, e.g. G = {survived, died}. We often code G via a binary indicator response vector Y.
Slide 4: Problem
200 points are generated in R^2 from an unknown distribution, 100 in each of two classes G = {GREEN, RED}. Can we build a rule to predict the color of future points?
Slide 5: Linear Regression
Code Y = 1 if G = RED, else Y = 0. We model Y as a linear function of X: Y_hat = X^T beta_hat (with the constant variable 1 included in X). Obtain beta_hat by least squares, minimizing the quadratic criterion RSS(beta) = sum_i (y_i - x_i^T beta)^2. Given a model matrix X and a response vector y, the solution is beta_hat = (X^T X)^{-1} X^T y.
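As a minimal sketch of this fit (not the slides' own code): the mixture means (0, 0) and (2, 2), the sample sizes, and the 0.5 classification threshold below are illustrative assumptions standing in for the unknown generating distribution.

```python
import numpy as np

# Hypothetical two-class data standing in for the GREEN/RED example:
# class 0 (GREEN) centered at (0, 0), class 1 (RED) at (2, 2).
rng = np.random.default_rng(0)
X_raw = np.vstack([rng.normal(0.0, 1.0, size=(100, 2)),
                   rng.normal(2.0, 1.0, size=(100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])  # Y = 1 if RED, else 0

# Model matrix with an intercept column of ones.
X = np.hstack([np.ones((200, 1)), X_raw])

# Least squares: beta_hat = (X^T X)^{-1} X^T y (lstsq is numerically safer
# than forming the inverse explicitly).
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Classify as RED wherever the fitted value exceeds 0.5.
y_hat = (X @ beta_hat > 0.5).astype(int)
train_error = np.mean(y_hat != y)
```

The fitted values Y_hat are real numbers, so the 0.5 cutoff turns the regression into a classifier; the decision boundary is the line where x^T beta_hat = 0.5.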
Slide 6: Linear Regression
Classification rule: predict RED if Y_hat > 0.5, else GREEN.
Slide 7: Linear Regression
Figure 2.1: A classification example in two dimensions. The classes are coded as a binary variable (GREEN = 0, RED = 1) and then fit by linear regression. The line is the decision boundary defined by x^T beta_hat = 0.5. The red shaded region denotes the part of input space classified as RED, while the green region is classified as GREEN.
Slide 8: Possible scenarios
Slide 9: K-Nearest Neighbors
The k-nearest-neighbor fit is Y_hat(x) = (1/k) * sum of y_i over the k training points x_i closest to x.
Slide 10: K-Nearest Neighbors
Figure 2.2: The same classification example in two dimensions as in Figure 2.1. The classes are coded as a binary variable (GREEN = 0, RED = 1) and then fit by 15-nearest-neighbor averaging. The predicted class is chosen by majority vote amongst the 15 nearest neighbors.
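A small sketch of the majority-vote rule used in these figures; the training points below are hypothetical, chosen only to make the vote easy to check by hand.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=15):
    """Predict the class of x by majority vote among its k nearest
    training points (Euclidean distance), with labels coded GREEN=0, RED=1."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = y_train[nearest]
    return int(votes.mean() > 0.5)

# Toy check on a hand-made sample: a GREEN cluster near the origin
# and a RED cluster near (5, 5).
X_train = np.array([[0., 0.], [0., 1.], [1., 0.],
                    [5., 5.], [5., 6.], [6., 5.]])
y_train = np.array([0, 0, 0, 1, 1, 1])
pred = knn_predict(X_train, y_train, np.array([0.2, 0.2]), k=3)
```

Note that k-NN has no parameters to fit in advance; all the work happens at prediction time, and the only tuning knob is k itself.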
Slide 11: K-Nearest Neighbors
Figure 2.3: The same classification example; the classes are coded as a binary variable (GREEN = 0, RED = 1) and then predicted by 1-nearest-neighbor classification.
Slide 12: Linear regression vs. k-NN
Slide 13: Linear regression vs. k-NN
Figure 2.4: Misclassification curves for the simulation example above. A test sample of size 10,000 was used. The red curves are test error and the green curves are training error for k-NN classification. The results for linear regression are the larger green and red dots at three degrees of freedom. The purple line is the optimal Bayes error rate.
Slide 14: Other Methods
Slide 15: Statistical decision theory
Slide 16: Regression Function
The regression function f(x) = E(Y | X = x) is the best prediction of Y at any point X = x under squared-error loss.
Slide 19: Bayes Classifier
The Bayes classifier assigns each point to the most probable class given the inputs: G_hat(x) = argmax_g Pr(g | X = x).
Slide 20: Bayes Classifier
Figure 2.5: The optimal Bayes decision boundary for the simulation example above. Since the generating density is known for each class, this boundary can be calculated exactly.
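A sketch of how the Bayes rule is computed when the class densities are known. The isotropic Gaussian densities at (0, 0) and (2, 2) and the equal priors below are illustrative assumptions, not the actual mixture used in the simulation.

```python
import math

def gaussian_pdf(x, y, mu, var=1.0):
    # Isotropic bivariate normal density with covariance var * I.
    return math.exp(-((x - mu[0])**2 + (y - mu[1])**2) / (2 * var)) / (2 * math.pi * var)

def bayes_classify(x, y, prior_red=0.5):
    # With the generating densities known, the Bayes rule picks the class
    # with the largest posterior: prior * density (the shared normalizer
    # Pr(X = x) cancels and can be ignored).
    p_green = (1 - prior_red) * gaussian_pdf(x, y, (0.0, 0.0))
    p_red = prior_red * gaussian_pdf(x, y, (2.0, 2.0))
    return "RED" if p_red > p_green else "GREEN"
```

With equal priors and equal covariances the resulting boundary is the perpendicular bisector of the two means; with the actual mixture densities it becomes the curved boundary of Figure 2.5.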
Slide 21: Curse of dimensionality
Slide 25: Linear Model, Linear Regression Test Error
Slide 26: Curse of dimensionality
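One standard way to quantify the curse: for points uniformly distributed in the unit hypercube in p dimensions, a hypercubical neighborhood that captures a fraction r of the data must have edge length e_p(r) = r^(1/p). A short sketch:

```python
def edge_length(r, p):
    """Edge length of the hypercubical neighborhood capturing a fraction r
    of uniformly distributed points in the unit cube in p dimensions:
    e_p(r) = r**(1/p)."""
    return r ** (1.0 / p)

# In 1 dimension, capturing half the data needs half the axis.
e1 = edge_length(0.5, 1)    # 0.5
# In 10 dimensions, capturing just 1% of the data already needs
# roughly 63% of the range of every coordinate.
e10 = edge_length(0.01, 10)
```

So in high dimensions "local" neighborhoods are no longer local: to gather even a tiny fraction of the data, a neighborhood must span most of each input axis, which is why nearest-neighbor averaging breaks down as p grows.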
Slide 29: Statistical Models
Slide 30: Supervised Learning
Slide 31: Two Types of Supervised Learning
Slide 32: Learning Classification Models
Slide 33: Learning Regression Models
Slide 34: Function Approximation
Slide 35: Function Approximation
Figure 2.10: Least squares fitting of a function of two inputs. The parameters of f_theta(x) are chosen so as to minimize the sum-of-squared vertical errors.
Slide 36: Function Approximation
More generally, maximum likelihood estimation provides a natural basis for estimation, e.g. the multinomial likelihood for a qualitative response.
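For the multinomial case, maximizing the log-likelihood sum_i log p_{g_i} over the class probabilities yields the observed class proportions p_hat_g = N_g / N. A minimal sketch:

```python
from collections import Counter

def multinomial_mle(labels):
    """Maximum likelihood estimates of multinomial class probabilities.
    The log-likelihood sum_i log p_{g_i} is maximized (subject to the
    probabilities summing to one) by the sample proportions N_g / N."""
    counts = Counter(labels)
    n = len(labels)
    return {g: c / n for g, c in counts.items()}

probs = multinomial_mle(["RED", "GREEN", "RED", "RED"])
```

Least squares itself is the special case of maximum likelihood under an additive Gaussian error model, which is why the two criteria coincide for regression with normal errors.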
Slide 37: Structured Regression Models
Slide 38: Classes of Restricted Estimators
Slide 39: Model Selection & the Bias-Variance Tradeoff
Slide 40: Model Selection & the Bias-Variance Tradeoff
Test and training error as a function of model complexity.
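The tradeoff can be seen directly with k-NN, where complexity grows as k shrinks: at k = 1 the training error is driven to zero (each point is its own nearest neighbor), while the test error stays bounded below by the noise in the labels. The one-dimensional problem and 20% label-noise rate below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n):
    # Hypothetical noisy two-class problem in one dimension:
    # true class is sign(x), with 20% of labels flipped at random.
    x = rng.uniform(-2, 2, size=(n, 1))
    y = ((x[:, 0] > 0).astype(int) ^ (rng.random(n) < 0.2)).astype(int)
    return x, y

X_tr, y_tr = make_data(100)
X_te, y_te = make_data(500)

def knn_error(k, X_eval, y_eval):
    # Misclassification rate of k-NN trained on (X_tr, y_tr).
    errs = 0
    for x, y in zip(X_eval, y_eval):
        idx = np.argsort(np.abs(X_tr[:, 0] - x[0]))[:k]
        pred = int(y_tr[idx].mean() > 0.5)
        errs += (pred != y)
    return errs / len(y_eval)

# Maximum complexity (k = 1): training error vanishes, test error does not.
train_err_1 = knn_error(1, X_tr, y_tr)
test_err_1 = knn_error(1, X_te, y_te)
```

Sweeping k from large to small traces out exactly the picture on this slide: training error falls monotonically toward zero while test error is U-shaped, with its minimum at an intermediate complexity.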
Slide 41: Exercises
Page 27: Ex 2.1, 2.2, 2.6.