PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 3: LINEAR MODELS FOR REGRESSION.


What is Supervised vs. Unsupervised Learning? Regression: prediction, estimation. Classification: detection, hypothesis testing.

Supervised Learning. Given training data D = {x_n, t_n}, find t for an unknown x. Deterministic: build a model t = y(x). Stochastic: model p(t|x), the uncertainty of t for a given x, and then minimize the expected loss.

Linear Basis Function Models Example: Polynomial Curve Fitting

Linear regression. Generally y(x, w) = Σ_{j=0}^{M-1} w_j φ_j(x), where the φ_j(x) are basis functions; the model is linear w.r.t. w. φ_0(x) = 1, so that w_0 acts as a bias. Linear basis functions: φ_d(x) = x_d.

Polynomial basis functions: φ_j(x) = x^j. These are global; a small change in x affects all basis functions.

Gaussian basis functions: φ_j(x) = exp(−(x − μ_j)² / (2s²)). These are local; a small change in x only affects nearby basis functions. μ_j and s control location and scale (width).

Sigmoidal basis functions: φ_j(x) = σ((x − μ_j)/s), where σ(a) = 1/(1 + exp(−a)). These are also local; a small change in x only affects nearby basis functions. μ_j and s control location and scale (slope).
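As a rough NumPy sketch (the function names and parameter choices are my own), these three basis families can be used to build a design matrix Φ with Φ_{nj} = φ_j(x_n):

import numpy as np

def polynomial_basis(x, M):
    """Global basis: phi_j(x) = x**j for j = 0..M-1 (phi_0 = 1 is the bias)."""
    return np.vstack([x**j for j in range(M)]).T

def gaussian_basis(x, centres, s):
    """Local basis: phi_j(x) = exp(-(x - mu_j)^2 / (2 s^2)); mu_j sets location, s the width."""
    return np.exp(-(x[:, None] - centres[None, :])**2 / (2 * s**2))

def sigmoidal_basis(x, centres, s):
    """Local basis: phi_j(x) = sigma((x - mu_j)/s) with the logistic sigmoid."""
    a = (x[:, None] - centres[None, :]) / s
    return 1.0 / (1.0 + np.exp(-a))

x = np.linspace(-1, 1, 5)
Phi = gaussian_basis(x, centres=np.linspace(-1, 1, 9), s=0.2)   # design matrix, shape (5, 9)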

Two approaches to supervised learning: 1. Maximum likelihood estimate: y(x, w) is a deterministic function with added Gaussian noise, t = y(x, w) + ε, ε ~ N(0, β^{-1}). 2. Bayes estimate: w is a stochastic parameter drawn from a prior p(w), and the conjugate prior is a Gaussian, p(w) = N(w | m_0, S_0).

Maximum Likelihood and Least Squares (1). Given observed inputs and targets, D = {x_n, t_n}, the likelihood function is p(t | X, w, β) = Π_n N(t_n | w^T φ(x_n), β^{-1}).

Maximum Likelihood and Least Squares (2). Taking the logarithm, we get ln p(t | w, β) = (N/2) ln β − (N/2) ln(2π) − β E_D(w), where E_D(w) = (1/2) Σ_n (t_n − w^T φ(x_n))² is the sum-of-squares error.

Maximum Likelihood and Least Squares (3). Computing the gradient and setting it to zero yields 0 = Σ_n (t_n − w^T φ(x_n)) φ(x_n)^T. Solving for w, we get w_ML = (Φ^T Φ)^{-1} Φ^T t = Φ† t, where Φ is the design matrix with Φ_{nj} = φ_j(x_n) and Φ† = (Φ^T Φ)^{-1} Φ^T is the Moore-Penrose pseudo-inverse.
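A minimal sketch of this closed-form solution (the function name is mine; lstsq is used as a numerically stable stand-in for the explicit pseudo-inverse):

import numpy as np

def fit_ml(Phi, t):
    """Maximum-likelihood weights w_ML = (Phi^T Phi)^{-1} Phi^T t, computed via a
    least-squares solve rather than forming the pseudo-inverse explicitly."""
    w_ml, *_ = np.linalg.lstsq(Phi, t, rcond=None)
    # Noise precision: 1/beta_ML = (1/N) * sum_n (t_n - w_ML^T phi(x_n))^2
    N = Phi.shape[0]
    beta_ml = N / np.sum((t - Phi @ w_ml) ** 2)
    return w_ml, beta_ml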

Maximum Likelihood and Least Squares (4). Maximizing with respect to the bias, w_0, alone, we see that w_0 = t̄ − Σ_{j=1}^{M-1} w_j φ̄_j, so the bias compensates for the difference between the average target and the weighted average of the basis functions. We can also maximize with respect to β, giving 1/β_ML = (1/N) Σ_n (t_n − w_ML^T φ(x_n))².

Geometry of Least Squares. Consider t = (t_1, ..., t_N)^T in an N-dimensional space. The M-dimensional subspace S is spanned by the columns of Φ (the basis functions evaluated on the data). w_ML minimizes the distance between t and its orthogonal projection onto S, i.e. y.

Sequential Learning. Data items are considered one at a time, using stochastic (sequential) gradient descent: w^(τ+1) = w^(τ) + η (t_n − w^(τ)T φ(x_n)) φ(x_n). This is known as the least-mean-squares (LMS) algorithm. Issue: how to choose the learning rate η?
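A sketch of the LMS update (the learning rate, epoch count, and names below are illustrative assumptions):

import numpy as np

def lms(Phi, t, eta=0.05, epochs=10, seed=None):
    """Least-mean-squares: w <- w + eta * (t_n - w^T phi_n) * phi_n,
    applied one data point at a time (stochastic gradient descent on the squared error)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(Phi.shape[1])
    for _ in range(epochs):
        for n in rng.permutation(len(t)):       # visit the data points in random order
            w += eta * (t[n] - w @ Phi[n]) * Phi[n]
    return w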

Regularized Least Squares. Consider the error function E_D(w) + λ E_W(w): a data term plus a regularization term, where λ is called the regularization coefficient. With the sum-of-squares error function and a quadratic regularizer, we get (1/2) Σ_n (t_n − w^T φ(x_n))² + (λ/2) w^T w, which is minimized by w = (λI + Φ^T Φ)^{-1} Φ^T t.
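A short sketch of this regularized closed-form solution, assuming the Phi and t conventions of the earlier sketches:

import numpy as np

def fit_ridge(Phi, t, lam):
    """Regularized (quadratic-penalty) least squares: w = (lambda*I + Phi^T Phi)^{-1} Phi^T t."""
    M = Phi.shape[1]
    return np.linalg.solve(lam * np.eye(M) + Phi.T @ Phi, Phi.T @ t)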

General Regularizer: (λ/2) Σ_j |w_j|^q. λ defines the size of the penalty and q defines its shape; q = 1 gives the lasso, q = 2 the quadratic regularizer.

Regularized least squares optimizes the least-squares error subject to the regularization constraint: minimize E_D(w) subject to Σ_j |w_j|^q ≤ η. Using a Lagrange multiplier, form the Lagrangian E_D(w) + (λ/2) Σ_j |w_j|^q.

Sparsity: some weights become exactly 0 as λ increases. The lasso tends to generate sparser solutions than a quadratic regularizer.
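To see this sparsity numerically, assuming scikit-learn is available (the data, the alpha values, and the variable names below are illustrative):

import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
w_true = np.array([3.0, -2.0] + [0.0] * 8)          # only two relevant features
y = X @ w_true + 0.1 * rng.normal(size=50)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)
print(np.round(ridge.coef_, 2))   # all coefficients shrunk but typically non-zero
print(np.round(lasso.coef_, 2))   # most irrelevant coefficients driven exactly to zero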

Multiple Outputs (1). Analogously to the single-output case we have y(x, W) = W^T φ(x) with p(t | x, W, β) = N(t | W^T φ(x), β^{-1} I). Given observed inputs X and targets T, we obtain the log likelihood function ln p(T | X, W, β) = (NK/2) ln(β/2π) − (β/2) Σ_n ||t_n − W^T φ(x_n)||².

Multiple Outputs (2). Maximizing with respect to W, we obtain W_ML = (Φ^T Φ)^{-1} Φ^T T. If we consider a single target variable, t_k, we see that w_k = (Φ^T Φ)^{-1} Φ^T t_k = Φ† t_k, which is identical to the single-output case.

Forty mules or forty cleavers? (a Turkish idiom for choosing between two evils) Overfitting or underfitting?

Bias-Variance Trade-off. Recall the expected squared error loss E[L] = ∫∫ (y(x) − t)² p(x, t) dx dt. Minimizing it, the optimal value of y(x) is the conditional expectation h(x) = E[t | x] = ∫ t p(t | x) dt.

Minimum expected squared error loss: E[L] = ∫ (y(x) − h(x))² p(x) dx + ∫∫ (h(x) − t)² p(x, t) dx dt, where h(x) = E[t | x]. The second term of E[L] corresponds to the noise inherent in the random variable t. What about the first term?

The Bias-Variance Decomposition. Given multiple data sets, each of size N, any particular data set D will give a particular function y(x; D). Let E_D[y(x; D)] be the average prediction over all data sets. Then,

The Bias-Variance Decomposition: E_D[(y(x; D) − h(x))²] = (E_D[y(x; D)] − h(x))² + E_D[(y(x; D) − E_D[y(x; D)])²] = (bias)² + variance.

Bias: the difference between the average prediction over all data sets and the desired regression function. Variance: the extent to which the solutions for individual data sets vary around their average; it measures how sensitive the function y(x; D) is to the particular choice of data set.

The Bias-Variance Decomposition. Thus, our goal is to minimize expected loss = (bias)² + variance + noise, where (bias)² = ∫ (E_D[y(x; D)] − h(x))² p(x) dx, variance = ∫ E_D[(y(x; D) − E_D[y(x; D)])²] p(x) dx, and noise = ∫∫ (h(x) − t)² p(x, t) dx dt.
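The following sketch mimics this decomposition empirically; it mirrors the 25-data-set sinusoidal example shown next, but the noise level, basis settings, and names are my own assumptions:

import numpy as np

def bias_variance(lam, L=25, N=25, M=24, s=0.1, seed=0):
    """Empirical (bias)^2 and variance for regularized least squares fitted to
    noisy samples of h(x) = sin(2*pi*x), averaged over L independent data sets."""
    rng = np.random.default_rng(seed)
    x_test = np.linspace(0, 1, 100)
    h = np.sin(2 * np.pi * x_test)                       # true regression function
    centres = np.linspace(0, 1, M)
    phi = lambda x: np.exp(-(x[:, None] - centres)**2 / (2 * s**2))
    preds = []
    for _ in range(L):
        x = rng.uniform(0, 1, N)
        t = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, N)
        Phi = phi(x)
        w = np.linalg.solve(lam * np.eye(M) + Phi.T @ Phi, Phi.T @ t)
        preds.append(phi(x_test) @ w)
    preds = np.array(preds)                              # shape (L, 100)
    y_bar = preds.mean(axis=0)                           # average prediction over data sets
    bias2 = np.mean((y_bar - h) ** 2)
    variance = np.mean(preds.var(axis=0))
    return bias2, variance

# Large lam -> high bias, low variance; small lam -> low bias, high variance.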

But… The Bias-Variance Trade-off. From these plots, we note that an over-regularized model (large λ) will have high bias, while an under-regularized model (small λ) will have high variance.

The Bias-Variance Decomposition: example with 25 data sets from the sinusoidal data set, varying the degree of regularization, λ.

Recall, for supervised learning: 1. Maximum likelihood estimate: y(x, w) is a deterministic function with added Gaussian noise, t = y(x, w) + ε, ε ~ N(0, β^{-1}). 2. Bayes estimate: w is a stochastic parameter drawn from a prior, and the conjugate prior is a Gaussian, p(w) = N(w | m_0, S_0).

BAYES: POSTERIOR ∝ LIKELIHOOD × CONJUGATE PRIOR

Bayesian Linear Regression (1). Define a conjugate prior over w: p(w) = N(w | m_0, S_0). Combining this with the likelihood function and using results for marginal and conditional Gaussian distributions gives the posterior p(w | t) = N(w | m_N, S_N), where m_N = S_N(S_0^{-1} m_0 + β Φ^T t) and S_N^{-1} = S_0^{-1} + β Φ^T Φ.

Bayesian Linear Regression (2). A common choice for the prior is a zero-mean isotropic Gaussian governed by a single precision parameter α, p(w | α) = N(w | 0, α^{-1} I), for which m_N = β S_N Φ^T t and S_N^{-1} = α I + β Φ^T Φ. Next we consider an example…
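A minimal sketch of computing this posterior (the function name is mine; Phi is the design matrix from the earlier sketches):

import numpy as np

def posterior(Phi, t, alpha, beta):
    """Posterior over w for the zero-mean isotropic prior p(w) = N(w | 0, alpha^{-1} I):
       S_N^{-1} = alpha*I + beta*Phi^T Phi,   m_N = beta * S_N Phi^T t."""
    M = Phi.shape[1]
    S_N_inv = alpha * np.eye(M) + beta * Phi.T @ Phi
    S_N = np.linalg.inv(S_N_inv)
    m_N = beta * S_N @ Phi.T @ t
    return m_N, S_N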

Bayesian Linear Regression (3). 0 data points observed; the prior is p(w | α) = N(w | 0, α^{-1} I) with α = 2. (Panels: prior, data space.)

Bayesian Linear Regression (4). 1 data point observed from y(x, w) = w_0 + w_1 x. Select w_0 = −0.3 and w_1 = 0.5, choose x_n from U(x | −1, 1), compute y(x_n, w), and add Gaussian noise with standard deviation 0.2 to obtain t_n. Goal: recover w_0 and w_1. (Panels: likelihood, posterior, data space.)

Bayesian Linear Regression (5). 2 data points observed. (Panels: likelihood, posterior, data space.)

Bayesian Linear Regression (6). 20 data points observed. (Panels: likelihood, posterior, data space.)

Predictive Distribution (1). Forget about estimating w; predict t for new values of x by integrating over w: p(t | x, t, α, β) = ∫ p(t | x, w, β) p(w | t, α, β) dw = N(t | m_N^T φ(x), σ_N²(x)), where σ_N²(x) = 1/β + φ(x)^T S_N φ(x): the first term is the data noise, the second the uncertainty in w.
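A short sketch of the predictive mean and variance, reusing m_N and S_N from the posterior sketch above (the einsum evaluates φ(x)^T S_N φ(x) row by row):

import numpy as np

def predictive(phi_x, m_N, S_N, beta):
    """Predictive distribution p(t | x, D) = N(t | m_N^T phi(x), sigma_N^2(x)), with
       sigma_N^2(x) = 1/beta + phi(x)^T S_N phi(x)  (data noise + uncertainty in w).
       phi_x has shape (n_points, M), one basis-function row per query point."""
    mean = phi_x @ m_N
    var = 1.0 / beta + np.einsum('ni,ij,nj->n', phi_x, S_N, phi_x)
    return mean, var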

Example: sinusoidal data, 9 Gaussian basis functions, 1 data point. Shown are sample functions drawn from the posterior and the predictive distribution: the pink region is the point-wise predictive variance as a function of x, and the red curve is the mean of the corresponding Gaussian predictive distribution.

Example: sinusoidal data, 9 Gaussian basis functions, 2 data points. Note the point-wise uncertainty in the predictions.

Example: Sinusoidal data, 9 Gaussian basis functions, 4 data points

Example: Sinusoidal data, 9 Gaussian basis functions, 25 data points

Equivalent Kernel. Recall the Bayesian posterior with an isotropic Gaussian prior: p(w | t) = N(w | m_N, S_N), where m_N = β S_N Φ^T t and S_N^{-1} = α I + β Φ^T Φ. Note that because the posterior distribution is Gaussian, its mode coincides with its mean; thus the maximum-posterior weight vector is simply w_MAP = m_N.

Equivalent Kernel (1). The predictive mean can be written y(x, m_N) = m_N^T φ(x) = β φ(x)^T S_N Φ^T t = Σ_n k(x, x_n) t_n, with k(x, x') = β φ(x)^T S_N φ(x'). This is a weighted sum of the training-data target values t_n; k is called the equivalent kernel or smoother matrix.

Equivalent kernel k(x, x') for Gaussian basis functions: the weight of t_n depends on the distance between x and x_n; nearby x_n carry more weight.
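A sketch of evaluating the equivalent kernel, assuming the Phi and S_N conventions used in the earlier sketches:

import numpy as np

def equivalent_kernel(phi_x, Phi_train, S_N, beta):
    """Smoother matrix k(x, x_n) = beta * phi(x)^T S_N phi(x_n); the predictive mean
       is then y(x) = sum_n k(x, x_n) * t_n, a weighted sum of the training targets."""
    return beta * phi_x @ S_N @ Phi_train.T     # shape (n_query_points, N_train)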

Examples of equivalent kernels k(x, x') for x = 0, plotted as a function of x'. Non-local basis functions (polynomial, sigmoidal) still have local equivalent kernels.

The kernel as a covariance function: consider cov[y(x), y(x')] = cov[φ(x)^T w, w^T φ(x')] = φ(x)^T S_N φ(x') = β^{-1} k(x, x'). From the form of the equivalent kernel, the predictive means at nearby points are highly correlated, whereas for more distant pairs of points the correlation is smaller.

Properties of the Equivalent Kernel: Σ_n k(x, x_n) = 1 for all values of x; however, the equivalent kernel may be negative for some values of x. Like all kernel functions, the equivalent kernel can be expressed as an inner product, k(x, z) = ψ(x)^T ψ(z), where ψ(x) = β^{1/2} S_N^{1/2} φ(x).

An alternative approach to regression Instead of introducing a set of basis functions, which implicitly determines an equivalent kernel, define a localized kernel directly and use this to make predictions for new input vectors x, given the observed training set.

Bayesian Model Comparison (1). Model: the underlying pdf. Complexity: the order of the linear regression equation. Note that there are infinitely many different models; among them, how do we choose the 'right' one?

Bayesian Model Comparison (2). Assume we want to compare models M_i, i = 1, …, L, using data D; this requires computing the posterior p(M_i | D) ∝ p(M_i) p(D | M_i), i.e. prior × model evidence (marginal likelihood). The Bayes factor is the ratio of evidences for two models, p(D | M_i) / p(D | M_j).

What is Model Evidence? For a model with parameters w, we get the model evidence by marginalizing over w: p(D | M_i) = ∫ p(D | w, M_i) p(w | M_i) dw. Note that the evidence is the normalizing constant of the posterior over parameters, p(w | D, M_i) = p(D | w, M_i) p(w | M_i) / p(D | M_i).

Bayesian Model Comparison (3). Having computed the posterior p(M_i | D) and the predictive distribution p(t | x, M_i, D) for each model, we can compute the predictive (mixture) distribution p(t | x, D) = Σ_i p(t | x, M_i, D) p(M_i | D). Note: p(t | x, D) may be multimodal. A simpler approximation, known as model selection, is to use the single model with the highest evidence.

Approximate Model Evidence. For a given model M_i with a single parameter w, consider the approximation p(D) = ∫ p(D | w) p(w) dw ≈ p(D | w_MAP) Δw_posterior / Δw_prior, where the posterior is assumed to be sharply peaked (width Δw_posterior) and the prior flat (width Δw_prior).

Approximate Model Evidence for a specific M_i. Taking logarithms, we obtain ln p(D) ≈ ln p(D | w_MAP) + ln(Δw_posterior / Δw_prior). With M parameters, all assumed to have the same ratio, we get ln p(D) ≈ ln p(D | w_MAP) + M ln(Δw_posterior / Δw_prior). The second term is negative and linear in M, so it penalizes complexity.

Evidence for models of different complexity: M_1 simplest, M_2 medium, M_3 most complex.

The Evidence Approximation (1). The fully Bayesian predictive distribution is given by p(t | t) = ∫∫∫ p(t | w, β) p(w | t, α, β) p(α, β | t) dw dα dβ, but this integral is intractable. Approximate it with p(t | t) ≈ p(t | t, α̂, β̂) = ∫ p(t | w, β̂) p(w | t, α̂, β̂) dw, where (α̂, β̂) is the mode of p(α, β | t), which is assumed to be sharply peaked.

How do we obtain (α̂, β̂)? From Bayes' theorem we have p(α, β | t) ∝ p(t | α, β) p(α, β), and if we assume p(α, β) to be flat we see that maximizing the posterior is equivalent to maximizing the marginal likelihood p(t | α, β) = ∫ p(t | w, β) p(w | α) dw. General results for Gaussian integrals give this evidence in closed form.

Approximate Log Evidence
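As a sketch, the log marginal likelihood of this linear-Gaussian model can be evaluated directly (the function name is mine; the formula in the comment is the standard closed-form expression for this model):

import numpy as np

def log_evidence(Phi, t, alpha, beta):
    """ln p(t | alpha, beta) = (M/2)ln(alpha) + (N/2)ln(beta) - E(m_N)
                               - (1/2)ln|A| - (N/2)ln(2*pi),
       where A = alpha*I + beta*Phi^T Phi, m_N = beta * A^{-1} Phi^T t, and
       E(m_N) = (beta/2)||t - Phi m_N||^2 + (alpha/2) m_N^T m_N."""
    N, M = Phi.shape
    A = alpha * np.eye(M) + beta * Phi.T @ Phi
    m_N = beta * np.linalg.solve(A, Phi.T @ t)
    E_mN = 0.5 * beta * np.sum((t - Phi @ m_N) ** 2) + 0.5 * alpha * m_N @ m_N
    return (0.5 * M * np.log(alpha) + 0.5 * N * np.log(beta)
            - E_mN - 0.5 * np.linalg.slogdet(A)[1] - 0.5 * N * np.log(2 * np.pi))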

Comparison of Approximate Evidence and Root-Mean-Square Error

Maximizing the Evidence Function (1). To maximize the evidence w.r.t. α and β, we define the eigenvector equation (β Φ^T Φ) u_i = λ_i u_i. Then A = α I + β Φ^T Φ has eigenvalues λ_i + α.

Maximizing the Evidence Function (2). We can now differentiate the log evidence w.r.t. α and β and set the results to zero, to get α = γ / (m_N^T m_N) and 1/β = (1/(N − γ)) Σ_n (t_n − m_N^T φ(x_n))², where γ = Σ_i λ_i / (α + λ_i) depends on both α and β.

Algorithm for estimation: 1. Initialize α, β. 2. Compute the eigenvalues λ_i of β Φ^T Φ. 3. Compute γ = Σ_i λ_i / (α + λ_i). 4. Find m_N = β S_N Φ^T t with S_N^{-1} = α I + β Φ^T Φ. 5. Re-estimate α = γ / (m_N^T m_N) and 1/β = (1/(N − γ)) Σ_n (t_n − m_N^T φ(x_n))². 6. Iterate steps 2–5 until convergence. (A sketch of this loop follows.)
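A compact sketch of the re-estimation loop above (names, initial values, and the fixed iteration count are my assumptions; in practice one would check convergence):

import numpy as np

def estimate_alpha_beta(Phi, t, alpha=1.0, beta=1.0, n_iter=100):
    """Iterative evidence maximization:
       gamma = sum_i lambda_i / (alpha + lambda_i),
       alpha = gamma / (m_N^T m_N),
       1/beta = ||t - Phi m_N||^2 / (N - gamma)."""
    N, M = Phi.shape
    eig = np.linalg.eigvalsh(Phi.T @ Phi)            # eigenvalues of Phi^T Phi (computed once)
    for _ in range(n_iter):
        lam = beta * eig                             # eigenvalues of beta * Phi^T Phi
        gamma = np.sum(lam / (alpha + lam))          # effective number of parameters
        A = alpha * np.eye(M) + beta * Phi.T @ Phi
        m_N = beta * np.linalg.solve(A, Phi.T @ t)   # posterior mean for current alpha, beta
        alpha = gamma / (m_N @ m_N)
        beta = (N - gamma) / np.sum((t - Phi @ m_N) ** 2)
    gamma = np.sum(beta * eig / (alpha + beta * eig))
    return alpha, beta, gamma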

Effective Number of Parameters (1). (Figure: likelihood and prior contours.) w_1 is not well determined by the likelihood; w_2 is well determined by the likelihood. γ is the number of well-determined parameters.

Effective Number of Parameters (2). Example: sinusoidal data, 9 Gaussian basis functions, β = 11.1.

Effective Number of Parameters (3). Example: sinusoidal data, 9 Gaussian basis functions, β = 11.1; test-set error shown.

Effective Number of Parameters (4). Example: sinusoidal data, 9 Gaussian basis functions, β = 11.1.

Effective Number of Parameters (5). In the limit N ≫ M, γ = M, and we can consider using the easy-to-compute approximations α = M / (2 E_W(m_N)) and β = N / (2 E_D(m_N)).

Limitations of Fixed Basis Functions. M basis functions along each dimension of a D-dimensional input space require M^D basis functions: the curse of dimensionality. In later chapters, we shall see how we can get away with fewer basis functions by choosing them using the training data.