CHAPTER 10: Logistic Regression
Binary classification: there are two classes, Y = {0, 1}. The goal is to learn how to correctly classify an input into one of these two classes: class 0 is labeled 0, and class 1 is labeled 1. That is, we would like to learn a function f : X → {0, 1}. Lecture Notes for E Alpaydın 2004 Introduction to Machine Learning © The MIT Press (V1.1)
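The interface described above can be sketched as a function mapping an input to {0, 1}. This is a minimal illustration only; the decision rule here (thresholding a single feature) is a hypothetical stand-in, since the slide does not yet specify how f is learned.

```python
# Sketch: a binary classifier is just a function f : X -> {0, 1}.
# The threshold rule below is an arbitrary placeholder decision rule.

def f(x, threshold=0.5):
    """Classify input x into class 0 or class 1."""
    return 1 if x >= threshold else 0

# Each input is mapped to exactly one of the two class labels.
labels = [f(x) for x in [0.1, 0.7, 0.5, 0.3]]
```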
Since the output must be 0 or 1, we cannot directly use an unbounded linear model to estimate f(x_i). Instead, we estimate the probability p_i = Pr(y_i = 1 | x_i) as f(x_i). We model the log odds of p_i as a linear function of x_i:

ln(p_i / (1 − p_i)) = w · x_i

where g(p_i) = ln(p_i / (1 − p_i)) is the logit function and w is the weight vector. Applying the inverse of g, the logistic function, to the left-hand side gives back p_i: logit⁻¹[ ln(p_i / (1 − p_i)) ] = p_i. Applying it to the right-hand side as well, we get

p_i = logit⁻¹(w · x_i) = 1 / (1 + e^(−w · x_i))