Regression Shrinkage and Selection via the Lasso
Author: Robert Tibshirani
Journal of the Royal Statistical Society, Series B, 1996
Presentation: Tinglin Liu, Oct. 27, 2010
Outline
- What's the Lasso?
- Why should we use the Lasso?
- Why will the results of the Lasso be sparse?
- How to find the Lasso solutions?
Lasso (Least Absolute Shrinkage and Selection Operator)
Definition: the Lasso is a coefficient-shrunken version of the ordinary least-squares (OLS) estimate, obtained by minimizing the residual sum of squares subject to the constraint that the sum of the absolute values of the coefficients is no greater than a constant t:
  minimize sum_i (y_i - sum_j beta_j x_ij)^2   subject to   sum_j |beta_j| <= t.
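The constrained problem above can be solved in several ways. A minimal sketch of one later, simpler approach is shown below: cyclic coordinate descent on the equivalent Lagrangian (penalized) form. This is not the quadratic-programming algorithm from the 1996 paper, and the toy data are made up for illustration.

```python
# Coordinate-descent sketch of the Lasso in its Lagrangian form:
#   min (1/2) * sum_i (y_i - sum_j beta_j x_ij)^2 + lam * sum_j |beta_j|.
# NOT the paper's QP algorithm; a later, simpler approach. Toy data only.

def soft_threshold(z, lam):
    """Soft-thresholding operator: sign(z) * max(|z| - lam, 0)."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent: update one coefficient at a time."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding feature j.
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            norm2 = sum(X[i][j] ** 2 for i in range(n))
            # Exact minimizer in beta_j: soft-threshold, then rescale.
            beta[j] = soft_threshold(rho, lam) / norm2
    return beta

# Toy data: the response depends (almost) only on the first predictor,
# so a sufficiently large penalty should zero out the second coefficient.
X = [[1.0, 0.1], [2.0, -0.2], [3.0, 0.3], [4.0, -0.1]]
y = [2.0, 4.1, 5.9, 8.0]
beta = lasso_cd(X, y, lam=5.0)
```

Note how the second coefficient is set exactly to zero, not just made small: that is the variable-selection behaviour the slides describe.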
Lasso (Least Absolute Shrinkage and Selection Operator)
Features: equivalent to the classic sparse-coding expression,
  minimize sum_i (y_i - sum_j beta_j x_ij)^2 + lambda * sum_j |beta_j|.
Here, standardization of the predictor variables is required: each column is centred and scaled so that sum_i x_ij / N = 0 and sum_i x_ij^2 / N = 1.
Gill, P., Murray, W. and Wright, M. (1981) Practical Optimization, Chapter 5. Academic Press.
Lasso (Least Absolute Shrinkage and Selection Operator)
Features: sparse solutions. Let beta_j^o be the full least-squares estimates and let t_0 = sum_j |beta_j^o|. Any value t < t_0 will cause shrinkage of the solutions towards 0, and some coefficients may become exactly 0. Let s = t / t_0 be the scaled Lasso parameter.
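In the special case of an orthonormal design (X'X = I), this shrinkage has a closed form: each full least-squares coefficient is soft-thresholded, beta_j = sign(beta_j^o) * max(|beta_j^o| - gamma, 0), where gamma is determined by t. A small sketch with made-up OLS values:

```python
# Closed-form Lasso shrinkage for an orthonormal design (X'X = I):
# each full least-squares coefficient is soft-thresholded by gamma.
# The OLS values below are made up for illustration.

def shrink(beta_ols, gamma):
    """Apply beta_j = sign(b) * max(|b| - gamma, 0) to each coefficient."""
    out = []
    for b in beta_ols:
        mag = abs(b) - gamma
        out.append((1.0 if b >= 0 else -1.0) * mag if mag > 0 else 0.0)
    return out

beta_ols = [3.0, -1.2, 0.4, -0.1]
beta_lasso = shrink(beta_ols, gamma=0.5)
# Coefficients with |b| <= 0.5 are set exactly to zero;
# larger ones are pulled toward zero by 0.5.
```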
Lasso (Least Absolute Shrinkage and Selection Operator)
Features: the Lasso as a Bayes estimate. Assume that y_i ~ N(sum_j beta_j x_ij, sigma^2) and that each beta_j independently has the double-exponential (Laplace) prior density (tau/2) * exp(-tau * |beta_j|). Then the Lasso regression estimate arises as the Bayes posterior mode.
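The correspondence can be written out explicitly (a sketch of the standard argument, using the sigma^2 and tau from the slide):

```latex
% Negative log-posterior under the Gaussian likelihood and Laplace prior:
\begin{aligned}
-\log p(\beta \mid y)
 &= \frac{1}{2\sigma^2}\sum_i \Bigl(y_i - \sum_j \beta_j x_{ij}\Bigr)^2
    + \tau \sum_j |\beta_j| + \text{const},\\
\hat\beta_{\text{mode}}
 &= \arg\min_\beta \;\sum_i \Bigl(y_i - \sum_j \beta_j x_{ij}\Bigr)^2
    + 2\sigma^2 \tau \sum_j |\beta_j|,
\end{aligned}
```

so the posterior mode is exactly a Lasso estimate with penalty lambda = 2 * sigma^2 * tau.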
Why Lasso?
Prediction accuracy. Assume y = eta(x) + epsilon with E(epsilon) = 0 and Var(epsilon) = sigma^2; then the prediction error of an estimate eta_hat(x) is PE = E(y - eta_hat(x))^2 = ME + sigma^2, where ME = E(eta_hat(x) - eta(x))^2 is the mean squared error of the estimate. OLS estimates often have low bias but large variance; the Lasso can improve overall prediction accuracy by sacrificing a little bias to reduce the variance of the predicted values.
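The bias-variance trade-off behind this claim can be made concrete with a toy shrinkage estimator (an illustration with assumed numbers, not an example from the paper):

```python
# Toy bias-variance trade-off (illustration only): estimate a coefficient
# beta by c * beta_hat, where beta_hat is unbiased with Var(beta_hat) = sigma2.
# Then MSE(c) = (1 - c)^2 * beta^2 + c^2 * sigma2,
# minimized at c* = beta^2 / (beta^2 + sigma2) < 1:
# accepting some bias buys a larger reduction in variance.

def mse(c, beta, sigma2):
    bias2 = (1 - c) ** 2 * beta ** 2   # squared bias of c * beta_hat
    var = c ** 2 * sigma2              # variance of c * beta_hat
    return bias2 + var

beta, sigma2 = 1.0, 1.0                      # made-up true value and variance
c_star = beta ** 2 / (beta ** 2 + sigma2)    # optimal shrinkage factor
# mse(c_star, ...) is strictly below mse(1.0, ...), the unbiased choice.
```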
Why Lasso?
Interpretation. In many cases the response is determined by just a small subset of the predictor variables; because the Lasso sets the remaining coefficients exactly to zero, it identifies that subset and yields a more interpretable model.
Why Sparse?
Geometry of the Lasso. The least-squares criterion equals, up to a constant, the quadratic function (beta - beta^o)' X'X (beta - beta^o), represented by elliptical contours centred at the OLS estimate beta^o. The L1-norm constraint sum_j |beta_j| <= t is a square (a diamond) centred at the origin. The Lasso solution is the first place where the contours touch the square, which will often occur at a corner, where some coefficients are exactly zero.
(Figure: elliptical RSS contours touching the diamond-shaped L1 constraint region.)
Why Sparse?
Geometry of the Lasso (continued). Since the variables are standardized, the principal axes of the contours are at 45 degrees to the coordinate axes. The correlations between the variables influence the axis lengths of the elliptical contours, but have almost no influence on the sparsity of the Lasso solutions.
How to solve the problem?
The absolute-value constraint can be translated into 2^p linear inequality constraints (p stands for the number of predictor variables): sum_j |beta_j| <= t is equivalent to requiring delta' beta <= t for every sign vector delta in {-1, +1}^p, i.e. G beta <= t * 1, where G is a 2^p x p matrix whose rows are the sign vectors. But direct application of this procedure is not practical, because 2^p may be very large.
Lawson, C. and Hansen, R. (1974) Solving Least Squares Problems. Prentice Hall.
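The translation into sign constraints can be sketched for small p (illustrative code, not the procedure from Lawson and Hansen; the exponential blow-up in the number of rows is exactly the practical problem the slide mentions):

```python
# The constraint sum_j |beta_j| <= t equals 2^p linear constraints
# delta . beta <= t over all sign vectors delta in {-1, +1}^p.
from itertools import product

def sign_constraints(p):
    """Rows of the matrix G: all 2^p sign vectors delta in {-1, +1}^p."""
    return [list(delta) for delta in product((-1, 1), repeat=p)]

def satisfies_l1(beta, t):
    """Check G beta <= t * 1, which is equivalent to sum_j |beta_j| <= t."""
    return all(sum(d * b for d, b in zip(delta, beta)) <= t
               for delta in sign_constraints(len(beta)))

G = sign_constraints(3)                       # 2^3 = 8 constraint rows
ok = satisfies_l1([0.5, -0.3, 0.1], t=1.0)    # L1 norm 0.9 <= 1.0
```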
How to solve the problem?
Outline of the algorithm: introduce the inequality constraints sequentially, at each step adding the most violated constraint and re-solving the resulting least-squares problem, until the solution satisfies sum_j |beta_hat_j| <= t. In practice, the average number of iterations required is in the range (0.5p, 0.75p), so the algorithm is acceptable.