Lecture 4: Logistic Regression Machine Learning CUNY Graduate Center

Today: Linear Regression (Bayesian Linear Regression), Bayesians v. Frequentists, and Logistic Regression (a linear model for classification)

Regularization: Penalize large weights by introducing a penalty term into the loss function. This gives Regularized Regression (L2-Regularization, or Ridge Regression), as sketched below.
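A minimal sketch (not from the slides) of ridge regression in NumPy, using the closed-form solution with an L2 penalty on the weights; the function name, toy data, and lambda value are illustrative placeholders.

import numpy as np

def ridge_fit(X, t, lam=0.1):
    # Closed-form ridge solution: w = (X^T X + lam * I)^{-1} X^T t
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ t)

# Toy usage: fit a noisy line; the penalty shrinks the weights toward zero.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(-1, 1, 50)])  # bias column + one feature
t = 0.5 + 2.0 * X[:, 1] + 0.1 * rng.standard_normal(50)
w = ridge_fit(X, t, lam=0.1)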

More regularization The penalty term defines the style of regularization: L2-Regularization, L1-Regularization, or L0-Regularization. The L0-norm penalty selects the optimal subset of features.

Curse of dimensionality Increasing the dimensionality of the features increases the data requirements exponentially. For example, if a single feature can be accurately approximated with 100 data points, optimizing the joint over two features requires 100*100 data points. Models should be small relative to the amount of available data. Dimensionality reduction techniques (feature selection) can help: L0-regularization is explicit feature selection, while L1- and L2-regularization approximate feature selection.

Bayesians v. Frequentists What is a probability? Frequentists: A probability is the likelihood that an event will happen. It is approximated by the ratio of the number of observed events to the number of total events. Assessment is vital to selecting a model. Point estimates are absolutely fine. Bayesians: A probability is a degree of believability of a proposition. Bayesians require that probabilities be prior beliefs conditioned on data. The Bayesian approach "is optimal", given a good model, a good prior, and a good loss function. Don't worry so much about assessment. If you are ever making a point estimate, you've made a mistake. The only valid probabilities are posteriors based on evidence given some prior.

Bayesian Linear Regression The previous MLE derivation of linear regression uses point estimates for the weight vector, w. Bayesians say, "hold it right there". Use a prior distribution over w to estimate the parameters. Alpha is a hyperparameter of the prior over w: alpha is the precision, or inverse variance, of that distribution. Now optimize:
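A reconstruction of the formula the slide presumably shows (standard Bishop notation; beta is the precision of the observation noise):

\[ p(\mathbf{w} \mid \alpha) = \mathcal{N}(\mathbf{w} \mid \mathbf{0}, \alpha^{-1}\mathbf{I}), \qquad p(\mathbf{w} \mid \mathbf{t}) \propto p(\mathbf{t} \mid \mathbf{w}, \beta)\, p(\mathbf{w} \mid \alpha) \]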

Optimize the Bayesian posterior As usual it’s easier to optimize after a log transform.

Optimize the Bayesian posterior Ignoring terms that do not depend on w, this is an IDENTICAL formulation to L2-regularization.
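The omitted derivation, reconstructed from the standard treatment: after the log transform,

\[ \ln p(\mathbf{w} \mid \mathbf{t}) = -\frac{\beta}{2} \sum_{n=1}^{N} \bigl(t_n - \mathbf{w}^{\top}\phi(\mathbf{x}_n)\bigr)^2 - \frac{\alpha}{2} \mathbf{w}^{\top}\mathbf{w} + \text{const}, \]

so maximizing the posterior is equivalent to minimizing the squared error plus an L2 penalty with coefficient \(\lambda = \alpha / \beta\).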

Context Overfitting is bad. Bayesians vs. Frequentists: is one better? Machine Learning uses techniques from both camps.

Logistic Regression A linear model applied to classification. Supervised: target information is available, and each data point xi has a corresponding target ti. Goal: identify a function that maps each xi to a prediction of ti.

Target Variables In binary classification, it is convenient to represent ti as a scalar with a range of [0,1]. Interpret ti as the likelihood that xi is a member of the positive class; this can also represent the confidence of a prediction. For K > 2 classes, ti is often represented as a K-element vector, where tij represents the degree of membership in class j and |ti| = 1. E.g., a 5-way classification vector (for instance, a one-hot vector such as [0, 0, 1, 0, 0] for class 3).

Graphical Example of Classification

Decision Boundaries

Graphical Example of Classification

Classification approaches Generative: models the joint distribution of c and x; has the highest data requirements. Discriminative: models the conditional p(c|x); fewer parameters to approximate. Discriminant Function: may still be trained probabilistically, but not necessarily by modeling a likelihood.

Treating Classification as a Linear model

Relationship between Regression and Classification Since we're classifying two classes, why not set one class to '0' and the other to '1', then use linear regression? Regression outputs range from -infinity to infinity, while class labels are 0 and 1. We can use a threshold, e.g. if y >= 0.5 then class 1, if y < 0.5 then class 0. (Figure: thresholding f(x) >= 0.5 separates Happy/Good/Class A from Sad/Not Good/Class B.)

Odds-ratio Rather than thresholding, we'll relate the regression to the class-conditional probability: the ratio of the odds of predicting y = 1 versus y = 0. If p(y=1|x) = 0.8 and p(y=0|x) = 0.2, the odds ratio is 0.8/0.2 = 4. Use a linear model to predict the odds rather than a class label.

Logit – Log odds ratio function The LHS (the odds) ranges from 0 to infinity; the RHS (the linear model) ranges from -infinity to infinity. Use a log function to map between them. It has the added bonus of dissolving the division (the log of a ratio becomes a difference), leading to easy manipulation.
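The function in question is the logit; writing p for p(y=1|x), the log of the odds maps (0, infinity) onto the whole real line:

\[ \operatorname{logit}(p) = \ln \frac{p}{1-p} = \ln p - \ln(1-p), \qquad \frac{p}{1-p} \in (0, \infty) \;\Rightarrow\; \operatorname{logit}(p) \in (-\infty, \infty) \]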

Logistic Regression A linear model used to predict the log-odds ratio of two classes.

Logit to probability
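A sketch of the algebra behind this slide: set the linear model equal to the log odds and solve for p, which yields the logistic sigmoid.

\[ \mathbf{w}^{\top}\mathbf{x} = \ln \frac{p}{1-p} \;\Rightarrow\; p = \frac{1}{1 + \exp(-\mathbf{w}^{\top}\mathbf{x})} = \sigma(\mathbf{w}^{\top}\mathbf{x}) \]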

Sigmoid function A squashing function that maps the reals to a finite domain, the interval (0, 1).
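A minimal NumPy sigmoid as an illustration (not course code); splitting the cases keeps exp from overflowing for large negative inputs. Assumes the input is a NumPy array.

import numpy as np

def sigmoid(a):
    # sigma(a) = 1 / (1 + exp(-a)), computed in a numerically stable way
    a = np.asarray(a, dtype=float)
    out = np.empty_like(a)
    pos = a >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-a[pos]))
    expa = np.exp(a[~pos])            # safe: a < 0 here
    out[~pos] = expa / (1.0 + expa)
    return out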

Gaussian Class-conditionals Assume the data for each class is generated from a Gaussian distribution. This leads to a Bayesian formulation of logistic regression.
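Sketching the standard result this slide relies on: if both class-conditionals are Gaussian with a shared covariance \(\Sigma\), the posterior is a sigmoid applied to a linear function of x,

\[ p(C_1 \mid \mathbf{x}) = \sigma(\mathbf{w}^{\top}\mathbf{x} + w_0), \quad \mathbf{w} = \Sigma^{-1}(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2), \quad w_0 = -\tfrac{1}{2}\boldsymbol{\mu}_1^{\top}\Sigma^{-1}\boldsymbol{\mu}_1 + \tfrac{1}{2}\boldsymbol{\mu}_2^{\top}\Sigma^{-1}\boldsymbol{\mu}_2 + \ln \frac{p(C_1)}{p(C_2)} \]

The shared-covariance assumption is what makes the quadratic terms cancel, leaving a linear argument.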

Bayesian Logistic Regression

Maximum Likelihood Estimation Logistic Regression with class-conditional Gaussians and a multinomial class distribution. As ever, take the derivative of this likelihood function w.r.t. the parameters.

Maximum Likelihood Estimation of the prior
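The derivation itself is shown only as images in the original slides; the standard result, with \(N_1\) of the \(N\) points in class 1 and prior \(\pi = p(C_1)\), is

\[ \pi_{\text{ML}} = \frac{N_1}{N} \]

i.e. the maximum likelihood estimate of the prior is just the empirical class frequency.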

Discriminative Training Take the derivatives w.r.t. the parameters; be prepared for this for homework. In the generative formulation, we need to estimate the joint distribution of t and x, but in return we get an intuitive regularization technique. Discriminative training instead models p(t|x) directly.

What's the problem with generative training? Formulated discriminatively, in D dimensions this function has D parameters. In the generative case there are 2D means and D(D+1)/2 covariance values: quadratic growth in the number of parameters. We'd rather have linear growth.

Discriminative Training
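The likelihood being optimized (a reconstruction; the slide shows it as an image): with \(y_n = \sigma(\mathbf{w}^{\top}\mathbf{x}_n)\) and binary targets \(t_n \in \{0, 1\}\), the negative log-likelihood is the cross-entropy error

\[ E(\mathbf{w}) = -\ln p(\mathbf{t} \mid \mathbf{w}) = -\sum_{n=1}^{N} \bigl\{ t_n \ln y_n + (1 - t_n) \ln(1 - y_n) \bigr\} \]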

Optimization Take the gradient of the error with respect to w.

Optimization: putting it together
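Putting it together, the gradient works out to the familiar form (a reconstruction of the slide's image):

\[ \nabla E(\mathbf{w}) = \sum_{n=1}^{N} (y_n - t_n)\, \mathbf{x}_n, \qquad y_n = \sigma(\mathbf{w}^{\top}\mathbf{x}_n) \]

i.e. each point contributes its features weighted by the prediction error.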

Optimization We know the gradient of the error function, but how do we find the optimum? Setting the gradient to zero is nontrivial (there is no closed-form solution), so we use numerical approximation.

Gradient Descent Take a guess, move in the direction of the negative gradient, then jump again. For a convex function this will converge. Other methods include Newton-Raphson.
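A minimal batch gradient-descent sketch for logistic regression (an illustration, not the course's code); X is assumed to already contain a bias column, t holds 0/1 targets, and the learning rate and iteration count are arbitrary placeholders.

import numpy as np

def logreg_gradient_descent(X, t, lr=0.1, n_iters=1000):
    # Minimize the cross-entropy error E(w) by batch gradient descent.
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        y = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid of the linear model
        grad = X.T @ (y - t) / len(t)        # mean gradient of the cross-entropy
        w -= lr * grad                       # step against the gradient
    return w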

Multi-class discriminant functions Can extend to multiple classes. Other approaches include constructing K-1 binary classifiers, where each classifier compares cn to "not cn". Computationally simpler, but not without problems.

Exponential Model Logistic Regression is a type of exponential model: a linear combination of weights and features produces a probabilistic model.

Problems with Binary Discriminant functions

K-class discriminant
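For K classes, the usual generalization (not the literal slide content) is a softmax over one weight vector per class:

\[ p(C_k \mid \mathbf{x}) = \frac{\exp(\mathbf{w}_k^{\top}\mathbf{x})}{\sum_{j=1}^{K} \exp(\mathbf{w}_j^{\top}\mathbf{x})} \]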

Entropy A measure of uncertainty, or a measure of "information". High uncertainty equals high entropy. Rare events are more "informative" than common events.

Entropy How much information is received when observing 'x'? If independent, p(x,y) = p(x)p(y) and H(x,y) = H(x) + H(y): the information contained in two unrelated events is equal to the sum of their individual information.

Entropy Binary coding of p(x): -log p(x). "How many bits does it take to represent the value p(x)?" How many "decimal" places? How many binary decimal places? Entropy is the expected value of this observed information.
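The definitions these slides build toward: the information of a single event and its expected value, the entropy,

\[ h(x) = -\log_2 p(x), \qquad H[x] = \mathbb{E}[h(x)] = -\sum_{x} p(x) \log_2 p(x) \]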

Examples of Entropy Uniform distributions have higher entropy than peaked distributions.

Maximum Entropy Logistic Regression is also known as Maximum Entropy. The entropy objective is concave, so maximizing it is a convex optimization problem that converges. Constrain this optimization to enforce good classification: maximize the likelihood of the data while keeping the resulting distribution as even (high-entropy) as possible, and include as many useful features as possible.

Maximum Entropy with Constraints From Klein and Manning Tutorial

Optimization formulation If we let the weights represent likelihoods of values for each feature, then we impose a constraint for each feature i, as sketched below.
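A sketch of the standard per-feature constraint (the exact notation on the slide is not recoverable from the transcript): the model's expected value of each feature must match its empirical expectation,

\[ \sum_{x,y} \tilde{p}(x)\, p(y \mid x)\, f_i(x, y) \;=\; \sum_{x,y} \tilde{p}(x, y)\, f_i(x, y) \]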

Solving the MaxEnt formulation Convex optimization with a concave objective function and linear constraints. Using Lagrange multipliers, the dual representation is exactly the maximum likelihood estimation of Logistic Regression, with one multiplier (weight) per feature i.

Summary Bayesian Regularization: introducing a prior over parameters serves to constrain the weights. Logistic Regression: log odds used to construct a linear model; formulation with Gaussian class-conditionals; discriminative training; gradient descent. Entropy: Logistic Regression as Maximum Entropy.

Next Time Graphical Models. Read Chapters 8.1 and 8.2.