4. Binary dependent variable

Sometimes it is not possible to quantify the y's. Examples: to work or not, to vote for one party or another, etc.

Some difficulties with least squares:
1. Heteroskedasticity: LS is inefficient.
2. Individual tests of significance are not applicable (lack of normality) and R^2 is not representative.
3. LS or GLS can be improved upon (non-linear methods).
4. Prediction is not reliable (fitted values need not be 0 or 1).

The forecasted value X_0'β̂ is interpreted as P(Y = 1).
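A short justification of that last statement (a sketch with assumed notation, X_0 being the regressor values at the forecast point):

  E[Y \mid X_0] = 1 \cdot P(Y = 1 \mid X_0) + 0 \cdot P(Y = 0 \mid X_0) = P(Y = 1 \mid X_0)

so the fitted value X_0'\hat{\beta} estimates the probability of success rather than Y itself.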
4.1 Linear probability model

The theoretical probability that an individual i chooses option Y = 1 is determined by a linear function of the regressors. In sum, it is like LS with a dummy as the dependent variable.

Given that Y ∈ {0, 1}, β is NOT the change in Y for a unit change in X. β measures the change in the probability of success when X changes, all other things being equal.
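A minimal statement of the model described above, with assumed notation (X_i is the regressor vector for individual i):

  P(Y_i = 1 \mid X_i) = E[Y_i \mid X_i] = X_i'\beta, \qquad
  \frac{\partial P(Y_i = 1 \mid X_i)}{\partial X_{ik}} = \beta_k

The second expression is the interpretation stressed on the slide: β_k is the (constant) change in the probability of success per unit change in the k-th regressor, other things equal.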
4.2 & 4.3 Logit & Probit The LPM is easy to use yet has two serious drawbacks: 1.Prediction is not bounded between [0,1] 2.The rate of change is constant (this is common to LPM & LS!) Alternatives: Logit & Probit Non-linear functions that make for a bounded probability between [0,1] Logit: Logistic function accumulative distribution of logistic distribution Probit: accumulative distribution of normal distribution Which one is better? Similar results
4.2 & 4.3 Logit & Probit LPM LS or GLS Now: maximum likelihood (ML), due to the NON linear nature of the function. Before, under CLRM LS = ML ML will account for heteroskedasticity, is consistent, and asymptotically normal Individual hypothesis tests are analogous to those of LS