PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 3: LINEAR MODELS FOR REGRESSION

Outline Discuss tutorial. Regression Examples. The Gaussian distribution. Linear Regression. Maximum Likelihood estimation.

Polynomial Curve Fitting

Academia Example Predict: the final percentage mark for a student. Features: 6 assignment grades, midterm exam, final exam, project, age. Questions we could ask: I forgot the weights of the components; can you recover them from a spreadsheet of the final grades? I lost the final exam grades; how well can I still predict the final mark? How important is each component, actually? Could I predict someone's final mark well given only their assignments? Given only their exams? Show a real-world example.

The Gaussian Distribution

Central Limit Theorem The distribution of the sum of N i.i.d. random variables becomes increasingly Gaussian as N grows. Example: N uniform [0,1] random variables.
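As an aside (not on the original slide), here is a minimal Python/NumPy sketch of this example: histogram the mean of N uniform [0,1] variables (the sum, rescaled by 1/N) for a few values of N and watch the shape become Gaussian. The sample count and the values of N are arbitrary choices.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    for N in (1, 2, 10):
        # mean of N uniform [0,1] variables, repeated 100,000 times
        means = rng.uniform(0.0, 1.0, size=(100_000, N)).mean(axis=1)
        plt.hist(means, bins=100, density=True, histtype='step', label=f'N = {N}')
    plt.xlabel('mean of N uniform [0,1] samples')
    plt.ylabel('density')
    plt.legend()
    plt.show()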

Reading exponential probability formulas Over an infinite space we cannot assign constant probabilities, since the sum Σ_x p(x) would grow to infinity; the probabilities must decay. A decaying exponential works, e.g. p(n) = (1/2)^n. If there is a relevant feature f(x) and we want to express that "the greater f(x) is, the less probable x is", use p(x) ∝ exp(−f(x)).

Example: exponential form and sample size Fair coin: the longer the sequence of flips, the less likely any particular sequence is: p(n) = 2^(−n), so ln p(n) = −n·ln 2 decreases linearly with the sample size n. [Figure: ln p(n) versus sample size n; speaker note: try a MATLAB plot — a Python sketch follows below.]
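Since the speaker note asks for a plot, here is a minimal sketch of it in Python/matplotlib (standing in for the MATLAB plot mentioned in the note; the range of n is arbitrary).

    import numpy as np
    import matplotlib.pyplot as plt

    n = np.arange(1, 51)            # sample size: number of coin flips
    log_p = -n * np.log(2)          # ln p(n) for p(n) = 2^(-n)
    plt.plot(n, log_p)
    plt.xlabel('sample size n')
    plt.ylabel('ln p(n)')
    plt.title('Log-probability of one particular fair-coin sequence')
    plt.show()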

Exponential Form: Gaussian mean The further x is from the mean, the less likely it is: ln p(x) decreases linearly in the squared distance (x − μ)². [Figure: ln p(x) versus (x − μ)².]

Smaller variance decreases probability The smaller the variance σ², the less likely an x away from the mean is. Equivalently: the greater the precision β = 1/σ², the less likely such an x is. [Figure: ln p(x) versus the precision 1/σ² = β.]

Minimal energy = maximum probability The greater the energy E(x) of the joint state, the less probable the state is: ln p(x) decreases linearly in E(x). [Figure: ln p(x) versus E(x).]

Linear Basis Function Models (1) Generally y(x, w) = Σ_{j=0}^{M−1} w_j φ_j(x) = wᵀφ(x), where the φ_j(x) are known as basis functions. Typically φ_0(x) = 1, so that w_0 acts as a bias. In the simplest case, we use linear basis functions: φ_d(x) = x_d. We can often think of the basis functions as "features" computed from the data vector x.

Linear Basis Function Models (2) Polynomial basis functions: φ_j(x) = x^j. These are global; a small change in x affects all basis functions. [Figure: the functions x^j for different choices of the exponent j.]

Linear Basis Function Models (3) Gaussian basis functions: φ_j(x) = exp(−(x − μ_j)² / (2s²)). These are local; a small change in x only affects nearby basis functions. μ_j and s control location and scale (width). Related to kernel methods.

Linear Basis Function Models (4) Sigmoidal basis functions: φ_j(x) = σ((x − μ_j)/s), where σ(a) = 1/(1 + exp(−a)). These are also local; a small change in x only affects nearby basis functions. μ_j and s control location and scale (slope). σ maps x to the 0–1 range, like a smooth threshold; it can also be read as a probability.
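To make the basis functions concrete, here is a minimal NumPy sketch (not from the slides) that builds the N×M design matrix Φ for scalar inputs in [0, 1]. The helper name design_matrix, the centres, and the width s = 0.2 are illustrative choices; a constant column φ_0(x) = 1 would usually be prepended to carry the bias w_0.

    import numpy as np

    def design_matrix(x, basis='gaussian', M=9, s=0.2):
        """Return the N x M design matrix Phi for scalar inputs x in [0, 1]."""
        x = np.asarray(x, dtype=float).reshape(-1, 1)          # shape (N, 1)
        if basis == 'polynomial':
            return x ** np.arange(M)                           # phi_j(x) = x^j (includes x^0 = 1)
        centres = np.linspace(0.0, 1.0, M)                     # mu_j, shape (M,)
        if basis == 'gaussian':
            return np.exp(-(x - centres) ** 2 / (2 * s ** 2))  # local bumps
        if basis == 'sigmoid':
            return 1.0 / (1.0 + np.exp(-(x - centres) / s))    # smooth steps
        raise ValueError(f'unknown basis: {basis}')

    Phi = design_matrix(np.linspace(0, 1, 25), basis='gaussian')
    print(Phi.shape)   # (25, 9)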

Curve Fitting With Noise

Maximum Likelihood and Least Squares (1) Assume observations from a deterministic function with added Gaussian noise: t = y(x, w) + ε with p(ε|β) = N(ε|0, β⁻¹), which is the same as saying p(t|x, w, β) = N(t | y(x, w), β⁻¹). Given observed inputs X = {x_1, …, x_N} and targets t = (t_1, …, t_N)ᵀ, we obtain the likelihood function p(t|X, w, β) = Π_{n=1}^N N(t_n | wᵀφ(x_n), β⁻¹), where y(x, w) = wᵀφ(x).

Maximum Likelihood and Least Squares (2) Taking the logarithm, we get ln p(t|w, β) = (N/2) ln β − (N/2) ln(2π) − β E_D(w), where E_D(w) = (1/2) Σ_{n=1}^N (t_n − wᵀφ(x_n))² is the sum-of-squares error.

Maximum Likelihood and Least Squares (3) Computing the gradient and setting it to zero yields ∇_w ln p(t|w, β) = β Σ_{n=1}^N (t_n − wᵀφ(x_n)) φ(x_n)ᵀ = 0. Solving for w, we get w_ML = (ΦᵀΦ)⁻¹Φᵀt = Φ†t, where Φ is the N×M design matrix with entries Φ_nj = φ_j(x_n), and Φ† = (ΦᵀΦ)⁻¹Φᵀ is the Moore-Penrose pseudo-inverse. [Speaker notes: For each point n, take the difference between target and prediction and multiply it by the feature vector; the gradient follows from the chain rule. Roughly, w_ML is the target vector times a (pseudo-)inverse of the data matrix. I think of this as a change of basis — see below, and the next slide.]
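A minimal NumPy sketch of the maximum-likelihood fit (illustrative: the sinusoidal data and the noise level are arbitrary, and design_matrix is the hypothetical helper from the earlier sketch). np.linalg.lstsq is used instead of forming the pseudo-inverse explicitly; it is numerically preferable but computes the same w_ML.

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0, 1, 25)
    t = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, size=x.shape)   # noisy sinusoid

    Phi = design_matrix(x, basis='gaussian')        # N x M design matrix
    # w_ML = (Phi^T Phi)^{-1} Phi^T t, solved stably by least squares
    w_ml, *_ = np.linalg.lstsq(Phi, t, rcond=None)

    # ML noise precision: 1/beta_ML is the mean squared residual
    residuals = t - Phi @ w_ml
    beta_ml = 1.0 / np.mean(residuals ** 2)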

Linear Algebra/Geometry of Least Squares Consider the N-dimensional space whose axes are the target values t_1, …, t_N, and let S be the M-dimensional subspace spanned by the columns φ_1, …, φ_M of the design matrix Φ. w_ML makes the prediction vector y = Φw_ML equal to the orthogonal projection of t onto S; it therefore minimizes the distance between t and y. [Speaker notes: Also a nice perspective on the previous derivation. The space S of predicted vectors is spanned by an M-dimensional basis, namely the M column (feature) vectors, even though each of them has N entries. Write the target vector t as its projection onto S plus an orthogonal remainder; y is the predicted vector as a function of w, and w_ML brings y to that projection, minimizing the Euclidean distance to t. Question: can't I just choose the components of the y vector directly in S? Think about transforming y into a different basis.]

Maximum Likelihood and Least Squares (4) Maximizing with respect to the bias, w_0, alone, we see that w_0 = t̄ − Σ_{j=1}^{M−1} w_j φ̄_j, where t̄ = (1/N) Σ_n t_n and φ̄_j = (1/N) Σ_n φ_j(x_n). We can also maximize with respect to β, giving 1/β_ML = (1/N) Σ_{n=1}^N (t_n − w_MLᵀφ(x_n))². [Speaker notes: The bias at the maximum equals the average target minus the weighted average of the features, so the weighted average feature is mapped onto the average target. The ML noise variance 1/β is the average squared residual — E_D(w) rescaled — fitted to the observed variance. So the weight vector fits the observed components (after the change of basis), and the variance fits the observed residual variance given those weights: the "unexplained" or "residual" variance.]

0th Order Polynomial

3rd Order Polynomial

9th Order Polynomial

Over-fitting Root-Mean-Square (RMS) Error: E_RMS = √(2 E(w*) / N). [Figure: training and test RMS error versus model complexity.]

Polynomial Coefficients

Data Set Size: 9th Order Polynomial

1st Order Polynomial

Data Set Size: 9th Order Polynomial

Quadratic Regularization Penalize large coefficient values: Ẽ(w) = (1/2) Σ_{n=1}^N (y(x_n, w) − t_n)² + (λ/2) ‖w‖².

Regularization: [figure: 9th-order polynomial fit for one value of ln λ]

Regularization: [figure: 9th-order polynomial fit for another value of ln λ]

Regularization: E_RMS vs. ln λ. [Figure: training and test RMS error as a function of ln λ.] Bias vs. variance analysis of the error: there are two components to the error.

Regularized Least Squares (1) Consider the error function E_D(w) + λ E_W(w): a data term plus a regularization term, where λ is called the regularization coefficient. With the sum-of-squares error function and a quadratic regularizer E_W(w) = (1/2) wᵀw, we get (1/2) Σ_{n=1}^N (t_n − wᵀφ(x_n))² + (λ/2) wᵀw, which is minimized by w = (λI + ΦᵀΦ)⁻¹Φᵀt. [Speaker notes: Problem in assignment. This inverse always exists for λ > 0.]
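A minimal NumPy sketch of the closed-form regularized solution (illustrative; Phi and t are reused from the earlier maximum-likelihood sketch, and the value of λ is arbitrary). np.linalg.solve is used instead of an explicit matrix inverse.

    import numpy as np

    lam = 1e-3                                    # regularization coefficient lambda
    M = Phi.shape[1]
    # w = (lambda*I + Phi^T Phi)^{-1} Phi^T t
    w_ridge = np.linalg.solve(lam * np.eye(M) + Phi.T @ Phi, Phi.T @ t)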

Regularized Least Squares (2) With a more general regularizer, we have (1/2) Σ_{n=1}^N (t_n − wᵀφ(x_n))² + (λ/2) Σ_{j=1}^M |w_j|^q. The choice q = 1 is the lasso; q = 2 is the quadratic regularizer. [Figure: contours of the regularizer for different values of q.]

Regularized Least Squares (3) Lasso tends to generate sparser solutions than a quadratic regularizer.
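To see the sparsity effect empirically, here is a small sketch using scikit-learn's Lasso and Ridge estimators (assuming scikit-learn is available; the synthetic data, the number of features, and the alpha values are arbitrary illustrations).

    import numpy as np
    from sklearn.linear_model import Lasso, Ridge

    rng = np.random.default_rng(2)
    X = rng.normal(size=(100, 20))
    true_w = np.zeros(20)
    true_w[:3] = [2.0, -1.5, 1.0]                     # only 3 relevant features
    y = X @ true_w + rng.normal(0.0, 0.1, size=100)

    lasso = Lasso(alpha=0.1).fit(X, y)
    ridge = Ridge(alpha=0.1).fit(X, y)
    print('nonzero lasso coefficients:', np.sum(lasso.coef_ != 0))   # typically close to 3
    print('nonzero ridge coefficients:', np.sum(ridge.coef_ != 0))   # typically all 20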

Cross-Validation for Regularization 4-fold cross-validation (a common default is 10 folds). Use it to evaluate λ, and stop at the first minimum of the validation error; a sketch follows below. Jackknife / leave-one-out: leave out a single data point, and repeat for every data point. Exercise: think about doing this for the Bernoulli distribution.
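A minimal sketch of choosing λ by S-fold cross-validation (illustrative; it reuses the hypothetical design_matrix helper and the x, t data from the earlier sketches, and the grid of λ values is arbitrary).

    import numpy as np

    def ridge_fit(Phi, t, lam):
        return np.linalg.solve(lam * np.eye(Phi.shape[1]) + Phi.T @ Phi, Phi.T @ t)

    def cv_error(x, t, lam, S=4):
        """Mean validation error of regularized least squares over S folds."""
        folds = np.array_split(np.random.default_rng(0).permutation(len(x)), S)
        errs = []
        for k in range(S):
            val = folds[k]
            train = np.concatenate([folds[j] for j in range(S) if j != k])
            w = ridge_fit(design_matrix(x[train]), t[train], lam)
            pred = design_matrix(x[val]) @ w
            errs.append(np.mean((t[val] - pred) ** 2))
        return np.mean(errs)

    lambdas = np.exp(np.linspace(-18.0, 0.0, 10))
    best_lam = min(lambdas, key=lambda lam: cv_error(x, t, lam))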

Bayesian Linear Regression (1) Define a conjugate shrinkage prior over the weight vector w: p(w|α) = N(w|0, α⁻¹I). Combining this with the likelihood function and using the results for marginal and conditional Gaussian distributions gives the posterior distribution p(w|t) = N(w|m_N, S_N), where m_N = β S_N Φᵀt and S_N⁻¹ = αI + βΦᵀΦ. The negative log of the posterior is, up to constants and scaling, the sum-of-squares error plus a quadratic regularizer with λ = α/β.
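A minimal NumPy sketch of this posterior computation (illustrative; the values of α and β are arbitrary, and Phi, t are reused from the earlier sketches).

    import numpy as np

    alpha, beta = 2.0, 25.0                        # prior and noise precisions (illustrative)
    M = Phi.shape[1]
    S_N_inv = alpha * np.eye(M) + beta * Phi.T @ Phi
    S_N = np.linalg.inv(S_N_inv)                   # posterior covariance
    m_N = beta * S_N @ Phi.T @ t                   # posterior mean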

Bayesian Linear Regression (3) 0 data points observed. [Figure panels: prior over w; samples from it in data space.]

Bayesian Linear Regression (4) 1 data point observed. [Figure panels: likelihood; posterior; data space.]

Bayesian Linear Regression (5) 2 data points observed. [Figure panels: likelihood; posterior; data space.]

Bayesian Linear Regression (6) 20 data points observed. [Figure panels: likelihood; posterior; data space.]

Predictive Distribution (1) Predict t for a new value of x by integrating over w: p(t|x, t, α, β) = ∫ p(t|x, w, β) p(w|t, α, β) dw = N(t | m_Nᵀφ(x), σ_N²(x)), with σ_N²(x) = 1/β + φ(x)ᵀ S_N φ(x). This integral can be solved analytically.
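A minimal sketch of evaluating the predictive mean and the one-standard-deviation band on a grid of new inputs (illustrative; it continues the earlier sketches and reuses design_matrix, m_N, S_N and beta).

    import numpy as np

    x_new = np.linspace(0, 1, 200)
    Phi_new = design_matrix(x_new)                              # basis features of the new inputs
    pred_mean = Phi_new @ m_N                                   # m_N^T phi(x) for each x
    # predictive variance = noise variance + uncertainty in w
    pred_var = 1.0 / beta + np.sum((Phi_new @ S_N) * Phi_new, axis=1)
    pred_std = np.sqrt(pred_var)                                # one-standard-deviation band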

Predictive Distribution (2) Example: sinusoidal data, 9 Gaussian basis functions, 1 data point. The shaded band shows ±1 standard deviation of the predictive distribution.

Predictive Distribution (3) Example: Sinusoidal data, 9 Gaussian basis functions, 2 data points

Predictive Distribution (4) Example: Sinusoidal data, 9 Gaussian basis functions, 4 data points

Predictive Distribution (5) Example: Sinusoidal data, 9 Gaussian basis functions, 25 data points

Limitations of Fixed Basis Functions Using M basis functions along each dimension of a D-dimensional input space requires M^D basis functions: the curse of dimensionality (e.g. M = 10 per dimension with D = 10 already gives 10^10 basis functions). In later chapters, we shall see how we can get away with fewer basis functions by choosing them using the training data.