1 BINARY CHOICE MODELS: PROBIT ANALYSIS In the case of probit analysis, the sigmoid function is the cumulative standardized normal distribution.


2 BINARY CHOICE MODELS: PROBIT ANALYSIS The maximum likelihood principle is again used to obtain estimates of the parameters.
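As a hedged sketch of what maximum likelihood estimation involves here, the following Python fragment fits a probit model on simulated data. The dataset, variable names, and coefficient values are illustrative only, not the GRAD data used in the slides:

```python
# Probit estimation by maximum likelihood on simulated data.
# All names and values are illustrative, not the slides' dataset.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 540                                  # same sample size as in the slides
x = rng.normal(50.0, 10.0, n)
X = np.column_stack([np.ones(n), x])     # constant plus one regressor
beta_true = np.array([-2.0, 0.05])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

def neg_loglik(beta):
    z = X @ beta
    # log L = sum of y*log Phi(z) + (1-y)*log Phi(-z)
    return -np.sum(y * norm.logcdf(z) + (1.0 - y) * norm.logcdf(-z))

result = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
beta_hat = result.x                      # ML estimates of the parameters
```

With a sample of this size the slope estimate lands close to the 0.05 used to generate the data; Stata's `probit` command performs the same maximization, reporting the iteration log seen in the next slide.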

. probit GRAD ASVABC SM SF MALE
[Stata output, Number of obs = 540: the iteration log and the coefficient estimates, standard errors, z statistics, and confidence intervals for ASVABC, SM, SF, MALE, and _cons did not survive transcription]
Here is the result of the probit regression using the example of graduating from high school. BINARY CHOICE MODELS: PROBIT ANALYSIS

BINARY CHOICE MODELS: PROBIT ANALYSIS As with logit analysis, the coefficients have no direct interpretation. However, we can use them to quantify the marginal effects of the explanatory variables on the probability of graduating from high school.

5 As with logit analysis, the marginal effect of Xi on p can be written as the product of the marginal effect of Z on p and the marginal effect of Xi on Z. BINARY CHOICE MODELS: PROBIT ANALYSIS

6 The marginal effect of Z on p is given by f(Z), the standardized normal density function. The marginal effect of Xi on Z is given by βi. BINARY CHOICE MODELS: PROBIT ANALYSIS
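In symbols, the marginal effect is f(Z) × βi, with f the standard normal density. A minimal check in Python, using the value Z = 1.881 reported later in this sequence and a hypothetical coefficient:

```python
# Marginal effect in a probit model: dp/dX_i = f(Z) * beta_i,
# where f is the standard normal density.
# Z = 1.881 is the value reported in this sequence; beta_i is hypothetical.
from scipy.stats import norm

Z = 1.881
beta_i = 0.065                   # hypothetical coefficient on X_i
f_Z = norm.pdf(Z)                # standard normal density, about 0.068
marginal_effect = f_Z * beta_i   # about 0.004
```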

7 As with logit analysis, the marginal effects vary with Z. A common procedure is to evaluate them for the value of Z given by the sample means of the explanatory variables. BINARY CHOICE MODELS: PROBIT ANALYSIS

8 BINARY CHOICE MODELS: PROBIT ANALYSIS . sum GRAD ASVABC SM SF MALE
[Stata output: Obs, Mean, Std. Dev., Min, and Max for GRAD, ASVABC, SM, SF, and MALE; the numerical values did not survive transcription]

Probit: Marginal Effects
[Table with columns mean, b, product, f(Z), and f(Z)b for ASVABC, SM, SF, MALE, and the constant; surviving entries: SM mean 11.58, b –0.008, f(Z)b –0.001; constant mean 1.00, b –1.451; Total (= Z) 1.881. The other values did not survive transcription]
BINARY CHOICE MODELS: PROBIT ANALYSIS In this case Z is equal to 1.881 when the X variables are equal to their sample means.

BINARY CHOICE MODELS: PROBIT ANALYSIS We then calculate f(Z).

11 BINARY CHOICE MODELS: PROBIT ANALYSIS The estimated marginal effects are f(Z) multiplied by the respective coefficients. We see that a one-point increase in ASVABC increases the probability of graduating from high school by 0.4 percent.

Every extra year of schooling of the mother decreases the probability of graduating by 0.1 percent. Father's schooling has no discernible effect. Males have a 0.4 percent higher probability than females. BINARY CHOICE MODELS: PROBIT ANALYSIS
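The table's arithmetic can be sketched as follows. Only the SM coefficient (–0.008) and the total Z = 1.881 survive in this transcript, so the check is limited to the mother's-schooling effect:

```python
# Reproduce one row of the marginal-effects table: f(Z) * b for SM.
# Z = 1.881 and b_SM = -0.008 are the values that survive in the text.
from scipy.stats import norm

Z = 1.881
b_SM = -0.008
effect_SM = norm.pdf(Z) * b_SM   # about -0.001, matching the table
```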

Logit, probit, and linear probability estimates compared
[Table with columns f(Z)b (logit), f(Z)b (probit), and b (linear) for ASVABC, SM, SF, and MALE; surviving row: SM –0.001, –0.001, –0.002. The other entries did not survive transcription]
The logit and probit results are displayed for comparison. The coefficients in the regressions are very different because different mathematical functions are being fitted. BINARY CHOICE MODELS: PROBIT ANALYSIS

Nevertheless the estimates of the marginal effects are usually similar. BINARY CHOICE MODELS: PROBIT ANALYSIS

However, if the outcomes in the sample are divided between a large majority and a small minority, they can differ. BINARY CHOICE MODELS: PROBIT ANALYSIS

This is because the observations are then concentrated in a tail of the distribution. Although the logit and probit functions share the same sigmoid outline, their tails are somewhat different. BINARY CHOICE MODELS: PROBIT ANALYSIS

This is the case here, but even so the estimates are identical to three decimal places. According to a leading authority, Amemiya, there are no compelling grounds for preferring logit to probit or vice versa. BINARY CHOICE MODELS: PROBIT ANALYSIS

18 Finally, for comparison, the estimates for the corresponding regression using the linear probability model are displayed. BINARY CHOICE MODELS: PROBIT ANALYSIS

If the outcomes are evenly divided, the LPM coefficients are usually similar to those for logit and probit. However, when one outcome dominates, as in this case, they are not very good approximations. BINARY CHOICE MODELS: PROBIT ANALYSIS

Binary Response Models: Interpretation II Probit: g(0) ≈ 0.4. Logit: g(0) = 0.25. Linear probability model: g(0) = 1.
–To make the logit and probit slope estimates comparable, we can multiply the probit estimates by 0.4/0.25 = 1.6.
–The logit slope estimates should be divided by 4 to make them roughly comparable to the LPM (linear probability model) estimates.
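These scale factors come from the slope of each link function at zero; a quick check in Python:

```python
# Densities at zero: probit phi(0) is about 0.4, the logistic density
# at 0 is exactly 0.25, and the LPM slope is 1. Hence the factor
# 0.4/0.25 = 1.6 for probit vs logit, and division by 4 for logit vs LPM.
from math import exp, pi, sqrt

probit_g0 = 1.0 / sqrt(2.0 * pi)             # phi(0) = 0.3989...
logit_g0 = exp(0.0) / (1.0 + exp(0.0))**2    # logistic density at 0 = 0.25
probit_to_logit = probit_g0 / logit_g0       # roughly 1.6
```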