CS480/680: Intro to ML
Lecture 04: Logistic Regression
Yao-Liang Yu, 9/20/18
Outline
- Announcements
- Bernoulli model
- Logistic regression
- Computation
Announcements
- Assignment 1 due next week
Classification revisited
- ŷ = sign(xTw + b)
- How confident are we about ŷ?
- |xTw + b| seems a good indicator, but it is real-valued and hard to interpret; there are many ways to transform it into [0,1]
- Better(?) idea: learn the confidence directly
Conditional probability
- P(Y=1 | X=x): conditional on seeing x, what is the chance of this instance being positive, i.e., Y=1?
- obviously, a value in [0,1]
- P(Y=0 | X=x) = 1 − P(Y=1 | X=x) with two classes; more generally, the conditional probabilities sum to 1
- Notation (simplex): Δ_{c-1} := { p in R^c : p ≥ 0, Σ_k p_k = 1 }
Reduction to a harder problem
- P(Y=1 | X=x) = E[1_{Y=1} | X=x]
- Let Z = 1_{Y=1}; then P(Y=1 | X=x) is the regression function for (X, Z)
- use linear regression for binary Z? Exploit structure! conditional probabilities live in a simplex
- Never reduce to an unnecessarily harder problem
Bernoulli model
- Let P(Y=1 | X=x) = p(x; w), parameterized by w
- Conditional likelihood on {(x_1, y_1), ..., (x_n, y_n)}: L(w) = Π_i P(Y=y_i | X=x_i), which simplifies to Π_i p(x_i; w)^{y_i} (1 − p(x_i; w))^{1 − y_i} if independence holds
- Assuming y_i is {0,1}-valued
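A minimal sketch, not from the slides, of evaluating this conditional log-likelihood given per-example probabilities p_i = p(x_i; w):

```python
import numpy as np

def bernoulli_log_likelihood(p, y):
    """Conditional log-likelihood of {0,1}-valued labels y given
    per-example probabilities p_i = p(x_i; w), assuming the pairs
    (x_i, y_i) are independent."""
    p = np.clip(p, 1e-12, 1 - 1e-12)          # guard against log(0)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# e.g. bernoulli_log_likelihood(np.array([0.9, 0.2]), np.array([1, 0]))
```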
Naïve solution
- Find w to maximize the conditional likelihood
- What is the solution if p(x; w) does not depend on x?
- What is the solution if p(x; w) does not depend on w?
Generalized linear models (GLM)
- y ~ Bernoulli(p), p = p(x; w): logistic regression
- y ~ Normal(μ, σ^2), μ = μ(x; w): (weighted) least-squares regression
- GLM: y ~ exp( θ φ(y) − A(θ) ), where θ is the natural parameter, φ(y) the sufficient statistics, and A(θ) the log-partition function
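As a side note not on the slide, writing the Bernoulli pmf in exponential-family form shows that its natural parameter θ is exactly the log-odds, which is what the logit transform below models linearly:

```latex
P(Y=y) = p^{y}(1-p)^{1-y}
       = \exp\!\Big( y\,\underbrace{\log\tfrac{p}{1-p}}_{\theta}
                     - \underbrace{\log\!\big(1+e^{\theta}\big)}_{A(\theta)} \Big),
\qquad y \in \{0,1\},\quad \varphi(y)=y .
```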
Logit transform
- p(x; w) = wTx? p ≥ 0 is not guaranteed…
- log p(x; w) = wTx? better, but the LHS is negative while the RHS is real-valued…
- Logit transform: log[ p(x; w) / (1 − p(x; w)) ] = wTx, or equivalently the odds ratio p/(1 − p) = exp(wTx)
- Solving for p gives the sigmoid: p(x; w) = 1 / (1 + exp(−wTx))
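A minimal sketch, assuming the sigmoid form above, of how the logit model turns scores wTx into probabilities and hard labels:

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid; see the implementation slide later for a
    numerically safer version."""
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(X, w):
    """Confidence p(x; w) = P(Y=1 | X=x) under the logit-transform assumption."""
    return sigmoid(X @ w)

def predict(X, w):
    """Hard labels: y_hat = 1 iff p > 1/2, i.e. iff wTx > 0."""
    return (X @ w > 0).astype(int)
```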
Prediction with confidence
- ŷ = 1 iff p = P(Y=1 | X=x) > ½, iff wTx > 0
- Decision boundary: wTx = 0
- ŷ = sign(wTx) as before, but now with confidence p(x; w)
Not just a classification algorithm
- Logistic regression does more than classification: it estimates conditional probabilities, under the logit-transform assumption
- Having confidence in the prediction is nice; the price is an assumption that may or may not hold
- If classification is the sole goal, then we are doing extra work; as we shall see, SVM only estimates the decision boundary
More than logistic regression
- F(p) transforms p from [0,1] to R; then equate F(p) to a linear function wTx
- But there are many other choices for F: precisely the inverse of any distribution function!
- The logit is the inverse of the logistic CDF; using the inverse of the standard normal CDF gives probit regression
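A small sketch, assuming SciPy is available, contrasting two admissible choices of F: the logit (inverse of the logistic CDF) and the probit (inverse of the standard normal CDF):

```python
import numpy as np
from scipy.stats import logistic, norm

p = np.linspace(0.01, 0.99, 5)
logit_F = logistic.ppf(p)   # inverse logistic CDF = log(p / (1 - p))
probit_F = norm.ppf(p)      # inverse standard normal CDF

# Going the other way, the CDF maps a real-valued score wTx back to [0,1].
score = 1.3
print(logistic.cdf(score), norm.cdf(score))
```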
Logistic distribution
- Cumulative distribution function: F(x) = 1 / (1 + exp(−(x − μ)/s))
- Mean μ, variance s^2 π^2 / 3
Maximum likelihood
- Minimize the negative log-likelihood: min_w Σ_i [ log(1 + exp(wTx_i)) − y_i wTx_i ]
- Equivalently, min_w −Σ_i [ y_i log p(x_i; w) + (1 − y_i) log(1 − p(x_i; w)) ]
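A minimal gradient-descent sketch, not from the slides (the lecture moves to Newton's method next); the step size eta is an assumed hyperparameter:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nll(w, X, y):
    """Negative log-likelihood: sum_i log(1 + exp(wTx_i)) - y_i wTx_i."""
    z = X @ w
    return np.sum(np.logaddexp(0.0, z) - y * z)

def nll_grad(w, X, y):
    """Gradient XT (p - y) with p_i = sigmoid(wTx_i)."""
    return X.T @ (sigmoid(X @ w) - y)

def fit_gd(X, y, eta=0.1, n_iter=1000):
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        w -= eta * nll_grad(w, X, y)
    return w
```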
Newton's algorithm
- Newton update: w ← w − η (XT S X)^{−1} XT (p − y), where p_i = p(x_i; w) and S = diag( p_i (1 − p_i) )
- The Hessian XT S X is PSD
- Uncertain predictions (p_i near ½) get bigger weight
- η = 1: iterative weighted least-squares (IRLS)
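A compact sketch of the update above with eta = 1 (IRLS); the small ridge term is my addition to keep the Hessian invertible, not part of the slide:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_newton(X, y, eta=1.0, n_iter=20, ridge=1e-8):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = sigmoid(X @ w)
        s = p * (1.0 - p)                                # per-example weights
        H = X.T @ (s[:, None] * X) + ridge * np.eye(d)   # Hessian XT S X (PSD)
        g = X.T @ (p - y)                                # gradient XT (p - y)
        w -= eta * np.linalg.solve(H, g)
    return w
```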
Comparison
(figure on this slide; not captured in the transcript)
A word about implementation
- Numerically computing the exponential can be tricky: it easily underflows or overflows
- The usual trick: estimate the range of the exponents and shift the mean of the exponents to 0
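A sketch of this kind of shifting trick; here the exponents are shifted by their maximum rather than their mean, which is the more common variant I am familiar with:

```python
import numpy as np

def stable_sigmoid(z):
    """Sigmoid that never exponentiates a large positive number."""
    out = np.empty_like(z, dtype=float)
    pos = z >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-z[pos]))
    ez = np.exp(z[~pos])
    out[~pos] = ez / (1.0 + ez)
    return out

def log_sum_exp(z):
    """log(sum_k exp(z_k)), computed after shifting the exponents so the
    largest becomes 0, avoiding overflow."""
    m = np.max(z)
    return m + np.log(np.sum(np.exp(z - m)))
```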
More than 2 classes
- Softmax: P(Y=k | X=x) = exp(w_kT x) / Σ_j exp(w_jT x)
- Again, the entries are nonnegative and sum to 1
- Negative log-likelihood (y is one-hot): −Σ_i Σ_k y_{ik} log P(Y=k | X=x_i)
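A minimal sketch, assuming a weight matrix W with one column per class and a one-hot label matrix Y:

```python
import numpy as np

def softmax(Z):
    """Row-wise softmax, again with the max-shift trick for stability."""
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def multiclass_nll(W, X, Y):
    """-sum_i sum_k Y_ik log P(Y=k | x_i) with P = softmax(X W)."""
    P = softmax(X @ W)
    return -np.sum(Y * np.log(P + 1e-12))
```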
Questions?