1
Regression 10-601 Machine Learning
2
Outline
– Regression vs classification
– Linear regression: another discriminative learning method
  – As optimization: gradient descent
  – As matrix inversion (Ordinary Least Squares)
– Overfitting and bias-variance
– Bias-variance decomposition for classification
3
What is regression?
4
Where we are
– Inputs → Classifier → predict category (√)
– Inputs → Density Estimator → probability (√)
– Inputs → Regressor → predict real # (today)
5
Regression examples
6
Prediction of menu prices (Chahuneau, Gimpel, … and Smith, EMNLP 2012)
7
A decision tree for classification: leaves predict Play or Don't Play.
8
A regression tree: each leaf holds the training values that reach it (Play = 30m, 45m; Play = 0m, 0m, 15m; Play = 0m, 0m; Play = 20m, 30m, 45m, …) and predicts roughly their mean (Play ~= 37, Play ~= 5, Play ~= 0, Play ~= 32).
9
Theme for the week: learning as optimization
10
Types of learners
Two types of learners:
1. Generative: make assumptions about how to generate data (given the class), e.g. naïve Bayes
2. Discriminative: directly estimate a decision rule/boundary, e.g. logistic regression
Today: another discriminative learner, but for regression tasks
11
Regression: Least Mean Squares (LMS) as optimization. Toy problem #2.
12
Linear regression
Given an input x we would like to compute an output y. For example:
– predict height from age
– predict Google's price from Yahoo's price
– predict distance from wall from sensors
13
Linear regression
Given an input x we would like to compute an output y. In linear regression we assume that y and x are related by the equation y = wx + ε, where w is a parameter and ε represents measurement or other noise. (In the plot, y is what we are trying to predict and x holds the observed values.)
14
Linear regression
Our goal is to estimate w from training data of (x_i, y_i) pairs. Optimization goal: minimize the squared error (least squares), i.e. find w minimizing Σ_i (y_i - w x_i)². Why least squares?
– minimizes squared distance between measurements and predicted line
– has a nice probabilistic interpretation (see HW)
– the math is pretty
15
Solving linear regression
To optimize: we just take the derivative of this loss w.r.t. w (the term w x_i inside the square is the prediction). Compare to logistic regression…
16
Solving linear regression
To optimize, closed form: we just take the derivative w.r.t. w and set it to 0, giving w = Σ_i x_i y_i / Σ_i x_i², which is covar(X,Y)/var(X) if mean(X)=mean(Y)=0.
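A minimal sketch of this one-variable closed form in Python with NumPy; the setup mirrors the generated examples on the following slides, and all names here are illustrative:

```python
import numpy as np

def fit_univariate(x, y):
    """Least-squares slope for y ~ w*x (no intercept): w = sum(x*y) / sum(x^2)."""
    return np.dot(x, y) / np.dot(x, x)

# Toy data in the style of the next slides: y = 2x + Gaussian noise, std=1
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + rng.normal(scale=1.0, size=x.shape)

w = fit_univariate(x, y)          # recovered w, close to the generating w=2

# After mean-centering, the same formula is exactly covar(X,Y)/var(X)
xc, yc = x - x.mean(), y - y.mean()
w_centered = np.dot(xc, yc) / np.dot(xc, xc)
```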
17
Regression example Generated: w=2 Recovered: w=2.03 Noise: std=1
18
Regression example Generated: w=2 Recovered: w=2.05 Noise: std=2
19
Regression example Generated: w=2 Recovered: w=2.08 Noise: std=4
20
Bias term
So far we assumed that the line passes through the origin. What if the line does not? No problem, simply change the model to y = w_0 + w_1 x + ε. We can use least squares to determine w_0 and w_1 (w_0 is the intercept in the plot).
21
Bias term
So far we assumed that the line passes through the origin. What if the line does not? No problem, simply extend the model to y = w_0 + w_1 x + ε. We can use least squares to determine w_0 and w_1. A simpler solution is coming soon…
22
Multivariate regression
What if we have several inputs? For example, stock prices for Yahoo, Microsoft and Ebay for the Google stock-price prediction task. This becomes a multivariate regression problem. Again, it's easy to model: y = w_0 + w_1 x_1 + … + w_k x_k + ε.
24
Not all functions can be approximated by a line/hyperplane… In some cases we would like to use polynomial or other terms based on the input data, e.g. y = 10 + 3x_1² - 2x_2² + ε. Are these still linear regression problems? Yes. As long as the equation is linear in the coefficients, it is still a linear regression problem!
25
Non-linear basis functions
So far we only used the observed values x_1, x_2, … However, linear regression can be applied in the same way to functions of these values. E.g., to add a term w x_1 x_2, add a new variable z = x_1 x_2, so each example becomes x_1, x_2, …, z. As long as these functions can be computed directly from the observed values, the model is still linear in the parameters and the problem remains a multivariate linear regression problem.
26
Non-linear basis functions
How can we use this to add an intercept term? Add a new "variable" z = 1 with weight w_0.
27
Non-linear basis functions
What type of functions can we use? A few common examples:
– Polynomial: φ_j(x) = x^j for j = 0 … n
– Gaussian
– Sigmoid
– Logs
Any function of the input values can be used; the solution for the parameters of the regression remains the same.
28
General linear regression problem
Using our new notation for the basis functions, linear regression can be written as y = Σ_j w_j φ_j(x), where φ_j(x) can be either x_j for multivariate regression or one of the non-linear basis functions we defined, and φ_0(x) = 1 for the intercept term.
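As a small sketch of this notation in code (Python/NumPy assumed; `poly_design_matrix` is a hypothetical helper name, here building the polynomial basis with φ_0(x) = 1 for the intercept):

```python
import numpy as np

def poly_design_matrix(x, degree):
    """Phi[i, j] = phi_j(x_i) = x_i**j; column j=0 is all ones (the intercept term)."""
    return np.vander(x, N=degree + 1, increasing=True)

x = np.array([0.0, 1.0, 2.0, 3.0])
Phi = poly_design_matrix(x, degree=2)   # columns: 1, x, x^2

# Any linear regression routine can now be run on Phi instead of the raw inputs,
# because y = sum_j w_j * phi_j(x) is still linear in the weights w.
```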
29
Learning/Optimizing Multivariate Least Squares Approach 1: Gradient Descent
30
Gradient descent
31
Gradient Descent for Linear Regression
Goal: minimize the following loss function: J(w) = Σ_{i=1..n} (y_i - Σ_{j=0..k} w_j φ_j(x_i))², where the outer sum runs over the n examples and the inner sum over the k+1 basis vectors.
33
Gradient Descent for Linear Regression
Learning algorithm:
– Initialize weights w = 0
– For t = 1, … until convergence:
  – Predict ŷ_i for each example x_i using w
  – Compute the gradient of the loss: this is a vector g
  – Update: w = w - λg, where λ is the learning rate
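A runnable sketch of this algorithm (Python/NumPy; a fixed iteration count stands in for the convergence test, and the learning rate and names are illustrative):

```python
import numpy as np

def gd_linear_regression(Phi, y, lr=0.01, n_iters=1000):
    """Batch gradient descent on the squared-error loss 0.5*||y - Phi w||^2
    (the 1/2 factor only rescales the gradient, i.e. the effective learning rate)."""
    n, k = Phi.shape
    w = np.zeros(k)                      # initialize weights w = 0
    for _ in range(n_iters):             # fixed budget instead of a convergence check
        y_hat = Phi @ w                  # predictions for all n examples
        g = -Phi.T @ (y - y_hat)         # gradient of the loss w.r.t. w (the vector g)
        w = w - lr * g                   # update: w = w - lambda * g
    return w
```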
34
Gradient Descent for Linear Regression We can use any of the tricks we used for logistic regression: –stochastic gradient descent (if the data is too big to put in memory) –regularization –…
35
Linear regression is a convex optimization problem. Proof sketch: differentiate again; the second derivative with respect to each w_j is proportional to Σ_i φ_j(x_i)², which is never negative, so the loss is convex and gradient descent will again reach a global optimum.
36
Multivariate Least Squares Approach 2: Matrix Inversion
37
OLS (Ordinary Least Squares solution)
Goal: minimize the same loss function, J(w) = Σ_i (y_i - Σ_j w_j φ_j(x_i))², but now solve for w in closed form.
38-45
Notation: stack the n examples into a design matrix Φ with n rows and k+1 columns (one column per basis vector), so that Φ_ij = φ_j(x_i); collect the outputs into the n-entry vector y and the weights into the (k+1)-entry vector w. The loss then becomes J(w) = (y - Φw)^T (y - Φw), and setting its derivative with respect to w to zero gives the normal equations Φ^T Φ w = Φ^T y.
46
recap: Solving linear regression
To optimize, closed form: we just take the derivative w.r.t. w and set it to 0, giving w = Σ_i x_i y_i / Σ_i x_i², which is covar(X,Y)/var(X) if mean(X)=mean(Y)=0.
48
LMS for the general linear regression problem
Deriving w we get: w = (Φ^T Φ)^{-1} Φ^T y, where Φ is the n by k+1 design matrix, y is the n-entry output vector, and w is the (k+1)-entry weight vector. This solution is also known as the 'pseudo inverse'. Another reason to start with an objective function: you can see when two learning methods are the same!
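A sketch of this closed-form solution in Python/NumPy (in practice one solves the linear system rather than forming an explicit inverse; names are illustrative):

```python
import numpy as np

def ols(Phi, y):
    """Closed-form least squares: solve Phi^T Phi w = Phi^T y, i.e. w = (Phi^T Phi)^{-1} Phi^T y."""
    return np.linalg.solve(Phi.T @ Phi, Phi.T @ y)

# Equivalent, numerically more robust NumPy one-liners:
#   w = np.linalg.lstsq(Phi, y, rcond=None)[0]
#   w = np.linalg.pinv(Phi) @ y          # literally the pseudo-inverse
```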
49
LMS versus gradient descent
LMS (closed-form) solution:
+ very simple in Matlab or something similar
- requires a matrix inverse, which is expensive for a large matrix
Gradient descent:
+ fast for large matrices
+ stochastic GD is very memory efficient
+ easily extended to other cases
- parameters to tweak (how to decide convergence? what is the learning rate? …)
50
Regression and Overfitting
51
An example: polynomial basis vectors on a small dataset –From Bishop Ch 1
52
0th Order Polynomial (n=10)
53
1st Order Polynomial
54
3rd Order Polynomial
55
9th Order Polynomial
56
Over-fitting: Root-Mean-Square (RMS) error on the training and test sets as the polynomial order grows.
57
Polynomial Coefficients
58
Data Set Size: 9th Order Polynomial
59
Regularization Penalize large coefficient values
60
Regularization: minimize the squared error plus the penalty term, Σ_i (y_i - Σ_j w_j φ_j(x_i))² + λ‖w‖².
61
Polynomial coefficients for increasing amounts of regularization: none | exp(18) | huge
62
Over-regularization: too large a penalty makes the fit overly smooth (under-fitting).
63
Regularized Gradient Descent for LR
Goal: minimize the following loss function: J(w) = Σ_i (y_i - Σ_j w_j φ_j(x_i))² + λ‖w‖².
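A sketch of the regularized update, assuming the squared L2 penalty form shown above (constants, names, and the fixed iteration budget are illustrative):

```python
import numpy as np

def ridge_gd(Phi, y, lam=0.1, lr=0.01, n_iters=1000):
    """Gradient descent on 0.5*||y - Phi w||^2 + 0.5*lam*||w||^2
    (the 1/2 factors only rescale lr and lam)."""
    w = np.zeros(Phi.shape[1])
    for _ in range(n_iters):
        g = -Phi.T @ (y - Phi @ w) + lam * w   # data-fit gradient + penalty gradient
        w = w - lr * g
    return w
```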
64
Probabilistic Interpretation of Least Squares
65
A probabilistic interpretation
Our least squares minimization solution can also be motivated by a probabilistic interpretation of the regression problem: assume y = Σ_j w_j φ_j(x) + ε, where ε is Gaussian noise. The MLE for w in this model is the same as the solution we derived for the least squares criterion.
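Spelled out (assuming, as stated above, additive Gaussian noise ε with some variance σ²):

```latex
% Model: y_i = w^T \phi(x_i) + \varepsilon_i, with \varepsilon_i \sim \mathcal{N}(0, \sigma^2)
\log p(y \mid X, w)
  = \sum_{i=1}^{n} \log \mathcal{N}\!\left(y_i \mid w^T \phi(x_i), \sigma^2\right)
  = -\frac{1}{2\sigma^2} \sum_{i=1}^{n} \left(y_i - w^T \phi(x_i)\right)^2 + \text{const}
```

Maximizing this log-likelihood over w is therefore the same as minimizing the squared error.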
66
Understanding Overfitting: Bias-Variance
67
Example (Tom Dietterich, Oregon State)
68
Example (Tom Dietterich, Oregon State): the same experiment, repeated with 50 samples of 20 points each.
69
Two sources of error:
– Error 1: the true function f can't be fit perfectly with hypotheses from our class H (lines). Fix: a more expressive set of hypotheses H.
– Error 2: we don't get the best hypothesis from H because of noise/small sample size. Fix: a less expressive set of hypotheses H.
Noise is similar to error 1.
70
Bias-Variance Decomposition: Regression
71
Bias and variance for regression For regression, we can easily decompose the error of the learned model into two parts: bias (error 1) and variance (error 2) –Bias: the class of models can’t fit the data. Fix: a more expressive model class. –Variance: the class of models could fit the data, but doesn’t because it’s hard to fit. Fix: a less expressive model class.
72
Bias – Variance decomposition of error
Fix a test case x, then do this experiment:
1. Draw a size-n sample D = (x_1, y_1), …, (x_n, y_n) (the dataset, including noise)
2. Train a linear regressor h_D using D (so h_D is learned from D)
3. Draw one test example (x, f(x) + ε), where f is the true function and ε is noise
4. Measure the squared error of h_D on that one example x
What's the expected error?
73
Bias – Variance decomposition of error
Notation, to simplify this: write E_D[h_D(x)] for the long-term expectation of the learner's prediction on this x, averaged over many data sets D.
74
Bias – Variance decomposition of error: taking the expectation over data sets D and noise, the expected squared error at x decomposes as noise (σ²) + (f(x) - E_D[h_D(x)])² + E_D[(h_D(x) - E_D[h_D(x)])²].
75
– BIAS²: squared difference between the best possible prediction for x, f(x), and our "long-term" expectation for what the learner will do if we averaged over many data sets D, E_D[h_D(x)].
– VARIANCE: squared difference between our long-term expectation for the learner's performance, E_D[h_D(x)], and what we expect in a representative run on a data set D (ŷ = h_D(x)).
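A small simulation of the experiment described above on synthetic data (the true function, noise level, sample size, and the test point x = 5 from the next figure are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: 2.0 * x                       # true function f (known, since the data is synthetic)
x_test, sigma, n, trials = 5.0, 1.0, 20, 500

preds = []
for _ in range(trials):                     # many draws of a size-n training set D
    x = rng.uniform(0, 10, size=n)
    y = f(x) + rng.normal(scale=sigma, size=n)
    w = np.dot(x, y) / np.dot(x, x)         # train h_D (no-intercept linear regression)
    preds.append(w * x_test)                # h_D(x) at the fixed test point

preds = np.array(preds)
bias2 = (preds.mean() - f(x_test)) ** 2     # squared bias at x_test
variance = preds.var()                      # variance of h_D(x_test) across data sets
# expected squared error at x_test ~= bias2 + variance + sigma**2 (the noise term)
```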
76
[Figure: bias and variance of the fitted lines, illustrated at the test point x=5]
77
Bias-variance decomposition
This is something real that you can (approximately) measure experimentally, if you have synthetic data. Different learners and model classes have different tradeoffs:
– large bias / small variance: few features, highly regularized, highly pruned decision trees, large-k k-NN, …
– small bias / high variance: many features, less regularization, unpruned trees, small-k k-NN, …
78
Bias and variance For classification, we can also decompose the error of a learned classifier into two terms: bias and variance –Bias: the class of models can’t fit the data. –Fix: a more expressive model class. –Variance: the class of models could fit the data, but doesn’t because it’s hard to fit. –Fix: a less expressive model class.
79
Another view of a decision tree Sepal_length<5.7 Sepal_width>2.8
80
Another view of a decision tree: [tree with splits Sepal_length>5.7, Sepal_width>2.8, length>5.1, width>3.1, length>4.6, and Y/N leaves]
81
Another view of a decision tree: [the same tree with the length>4.6 subtree pruned to a single N leaf]
82
Another view of a decision tree
84
Bias-Variance Decomposition: Measuring
85
Bias-variance decomposition
This is something real that you can (approximately) measure experimentally:
– if you have synthetic data
– …or if you're clever
– you need to somehow approximate E_D[h_D(x)]
– i.e., construct many variants of the dataset D
86
Background: “Bootstrap” sampling
Input: dataset D. Output: many variants of D: D_1, …, D_T.
For t = 1, …, T:
– D_t = { }
– For i = 1 … |D|: pick (x, y) uniformly at random from D (i.e., with replacement) and add it to D_t
Some examples never get picked (~37%); some are picked 2x, 3x, ….
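A minimal sketch of this sampling procedure in Python/NumPy (function name and RNG handling are illustrative):

```python
import numpy as np

def bootstrap_variants(D, T, rng=None):
    """Return T bootstrap resamples of dataset D: each samples |D| items with replacement."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(D)
    # On average about 37% (roughly 1/e) of the examples are left out of each variant.
    return [[D[i] for i in rng.integers(0, n, size=n)] for _ in range(T)]
```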
87
Measuring bias-variance with “bootstrap” sampling
Create B bootstrap variants of D (to approximate many draws of D). For each bootstrap dataset:
– T_b is the dataset; U_b are the “out of bag” examples
– train a hypothesis h_b on T_b
– test h_b on each x in U_b
Now for each (x, y) example we have many predictions h_1(x), h_2(x), …, so we can estimate (ignoring noise):
– variance: ordinary variance of h_1(x), …, h_n(x)
– bias: average(h_1(x), …, h_n(x)) - y
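A sketch of this estimate, assuming D is a list of (x, y) pairs and `train_fn` returns a hypothesis that can be called on an x (all names are illustrative):

```python
import numpy as np
from collections import defaultdict

def bias_variance_oob(D, train_fn, B, rng=None):
    """Per-example bias and variance estimated from out-of-bag predictions (noise ignored)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(D)
    preds = defaultdict(list)                     # example index -> out-of-bag predictions
    for _ in range(B):
        idx = rng.integers(0, n, size=n)          # bootstrap sample T_b
        oob = set(range(n)) - set(idx.tolist())   # the "out of bag" examples U_b
        h_b = train_fn([D[i] for i in idx])       # train h_b on T_b
        for i in oob:
            preds[i].append(h_b(D[i][0]))         # test h_b on each held-out x
    bias = {i: np.mean(p) - D[i][1] for i, p in preds.items()}
    variance = {i: np.var(p) for i, p in preds.items()}
    return bias, variance
```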
88
Applying bias-variance analysis
By measuring the bias and variance on a problem, we can determine how to improve our model:
– if bias is high, we need to allow our model to be more complex
– if variance is high, we need to reduce the complexity of the model
Bias-variance analysis also suggests a way to reduce variance: bagging.
89
Bagging
90
Bootstrap Aggregation (Bagging)
– Use the bootstrap to create B variants of D
– Learn a classifier from each variant
– Vote the learned classifiers to predict on a test example
91
Bagging (bootstrap aggregation), breaking it down:
– input: dataset D and YFCL
– output: a classifier h_D-BAG
– use the bootstrap to construct variants D_1, …, D_T
– for t = 1, …, T: train YFCL on D_t to get h_t
– to classify x with h_D-BAG: classify x with h_1, …, h_T and predict the most frequently predicted class for x (majority vote)
Note that you can use any learner you like! You can also test h_t on the “out of bag” examples.
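A sketch of this procedure, with `learn` standing in for YFCL: it is assumed to take a list of (x, y) pairs and return a classifier callable on x (names are illustrative):

```python
import numpy as np
from collections import Counter

def bag(D, learn, T, rng=None):
    """Bagging: train the supplied learner on T bootstrap variants of D and vote."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(D)
    hs = [learn([D[i] for i in rng.integers(0, n, size=n)]) for _ in range(T)]
    def h_bag(x):
        # majority vote over the T learned classifiers
        return Counter(h(x) for h in hs).most_common(1)[0][0]
    return h_bag
```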
92
Experiments Freund and Schapire
93
[Figure: solid lines = naïve Bayes (NB), dashed lines = logistic regression (LR)]
94
Bagged, minimally pruned decision trees
96
Generally, bagged decision trees outperform the linear classifier eventually if the data is large enough and clean enough.
97
Bagging (bootstrap aggregation), experimentally:
– especially with minimal pruning, decision trees have low bias but high variance
– bagging usually improves performance for decision trees and similar methods
– it reduces variance without increasing the bias (much)