Published by Avice Ramsey. Modified over 9 years ago.
1
Structure of the class
1. The linear probability model
2. Maximum likelihood estimation
3. Binary logit models and some other models
4. Multinomial models
2
The Linear Probability Model
3
The linear probability model
When the dependent variable is binary (0/1; for example, Y = 1 if the firm innovates, 0 otherwise), OLS is called the linear probability model. How should one interpret β_j? Provided that E(u|X) = 0 holds, β_j measures the change in the probability of success for a one-unit change in X_j (ΔX_j = 1).
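As a minimal sketch (in Python rather than Stata, with made-up data), the LPM is just OLS on a 0/1 outcome; the slope estimates the change in P(y = 1) per unit of x. Note how the fitted line can produce "probabilities" outside [0;1], one of the limits discussed next.

```python
# Minimal LPM sketch: closed-form simple OLS on a binary outcome.
# The data are made up for illustration (x could be, e.g., log R&D intensity).

def ols(x, y):
    """Closed-form simple OLS: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0, 0, 0, 1, 1, 1]          # binary outcome (e.g. inno)
a, b = ols(x, y)
# b estimates the change in P(y = 1) for a one-unit change in x
print(round(b, 3))
# The fitted value at x = 5 exceeds 1: a "fallacious prediction"
print(a + b * 5.0 > 1.0)
```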
4
Limits of the linear probability model
1. Non-normality of the errors
2. Heteroskedastic errors
3. Fallacious predictions
5
Overcoming the limits of the LPM
1. Non-normality of the errors: increase the sample size
2. Heteroskedastic errors: use robust estimators
3. Fallacious predictions: perform non-linear or constrained regressions
6
Persistent use of the LPM
Although it has limits, the LPM is still used:
1. In the process of data exploration (early stages of the research).
2. It is a good indicator of the marginal effect at the representative observation (at the mean).
3. When dealing with very large samples, least squares can overcome the complications imposed by maximum likelihood techniques: computation time, endogeneity and panel data problems.
7
The LOGIT/PROBIT Model
8
Probability, odds and logit/probit
We need to explain the occurrence of an event: the LHS variable takes two values, y = {0;1}. In fact, we need to explain the probability of occurrence of the event, conditional on X: P(Y = y | X) ∈ [0;1]. OLS estimation is not adequate, because predictions can lie outside the interval [0;1]. We need a transformation that maps a real number z ∈ ]-∞;+∞[ into P(Y = y | X) ∈ [0;1]. The logit/probit transformation links a real number z ∈ ]-∞;+∞[ to P(Y = y | X) ∈ [0;1]. It is also called the link function.
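A minimal Python sketch of the logit link: it maps any real z into a probability strictly between 0 and 1 (the probit link is analogous, using the normal CDF instead).

```python
import math

# The logistic (logit) link function: z in ]-inf; +inf[ -> (0, 1).
def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

print(logistic(0.0))    # 0.5: z = 0 corresponds to even odds
print(0.0 < logistic(-10.0) < logistic(10.0) < 1.0)
```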
9
Binary Response Models: Logit and Probit. The link function approach
10
Maximum likelihood estimation
OLS cannot be of much help here; we will use Maximum Likelihood Estimation (MLE) instead. MLE is an alternative to OLS. It consists of finding the parameter values which are most consistent with the data we have. The likelihood is defined as the joint probability of observing a given sample, given the parameters involved in the generating function. One way to distinguish between OLS and MLE is as follows: OLS adapts the model to the data you have, so you only have one model, derived from your data. MLE instead supposes there is an infinity of candidate models, and chooses the model most likely to explain your data.
11
Likelihood functions
Let us assume that you have a sample of n random observations. Let f(y_i) be the probability that y_i = 1 or y_i = 0. The joint probability of observing the n values of y_i is given by the likelihood function:
L = f(y_1) × f(y_2) × … × f(y_n) = ∏ f(y_i)
12
Knowing p (the logit probability) and having defined f(.), we come up with the likelihood function:
L = ∏ p_i^y_i (1 - p_i)^(1 - y_i)
13
Log likelihood (LL) functions
The log transform of the likelihood function (the log likelihood) is much easier to manipulate, and is written:
LL = Σ [ y_i ln(p_i) + (1 - y_i) ln(1 - p_i) ]
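A quick Python sketch of the Bernoulli log likelihood (the fitted probabilities below are illustrative, not estimated): a better fit gives a log likelihood closer to zero.

```python
import math

# Bernoulli log likelihood for a binary model.
# p[i] is the model's predicted P(y_i = 1); y is the observed 0/1 outcome.
def log_likelihood(y, p):
    return sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
               for yi, pi in zip(y, p))

y = [1, 0, 1, 1]
p_poor = [0.8, 0.3, 0.6, 0.9]    # illustrative fitted probabilities
p_good = [0.9, 0.1, 0.9, 0.9]    # probabilities closer to the outcomes

ll_poor = log_likelihood(y, p_poor)
ll_good = log_likelihood(y, p_good)
print(ll_good > ll_poor)   # better fit: LL is higher (closer to zero)
print(ll_good < 0)         # LL of a discrete sample is always negative
```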
14
Maximum likelihood estimations
The LL function can yield an infinity of values for the parameters β. Given the functional form of f(.) and the n observations at hand, which values of the parameters β maximize the likelihood of my sample? In other words, what are the most likely values of my unknown parameters β, given the sample I have?
15
Maximum likelihood estimations
You can imagine that the computer generates all possible values of β, computes a likelihood value for each (vector of) values, and then chooses the (vector of) β for which the likelihood is highest. In practice, there is no analytical solution to this non-linear problem. Instead, we rely on an optimization algorithm (Newton-Raphson). The LL is globally concave and has a unique maximum. The gradient is used to compute the parameters of interest, and the Hessian is used to compute the variance-covariance matrix.
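The Newton-Raphson idea can be sketched for a deliberately simplified one-parameter logit (made-up data, no constant; real software such as Stata iterates on the full parameter vector, using the matrix Hessian):

```python
import math

# Newton-Raphson for a one-parameter logit: maximize the LL in beta.
# Data are illustrative; the update is beta <- beta - score / hessian.
x = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
y = [0, 0, 1, 0, 1, 1]

def p(b, xi):
    return 1.0 / (1.0 + math.exp(-b * xi))

b = 0.0
for _ in range(25):
    score = sum(xi * (yi - p(b, xi)) for xi, yi in zip(x, y))      # gradient of LL
    hess = -sum(xi * xi * p(b, xi) * (1 - p(b, xi)) for xi in x)   # second derivative
    b -= score / hess

print(round(b, 3))   # the beta at which the score is (numerically) zero
```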
16
Binary Dependent Variable: Research questions
We want to explore the factors affecting the probability of being a successful innovator (inno = 1): Why?
17
Logistic Regression with STATA
Stata instruction: logit
logit y x1 x2 x3 … xk [if] [weight] [, options]
Options:
noconstant: estimates the model without the constant
robust: estimates robust variances, also in case of heteroskedasticity
if: selects the observations to include in the analysis
weight: weights the different observations
18
Interpretation of Coefficients
A positive coefficient indicates that the probability of innovation success increases with the corresponding explanatory variable. A negative coefficient implies that the probability to innovate decreases with the corresponding explanatory variable.
Warning! One of the problems encountered in interpreting probabilities is their non-linearity: the probabilities do not vary in the same way according to the level of the regressors. This is the reason why, in practice, it is usual to calculate the probability of the event occurring at the average point of the sample.
19
Let’s run the more complete model:
logit inno lrdi lassets spe biotech
20
Interpretation of Coefficients
Using the sample mean values of rdi, lassets, spe and biotech, we compute the conditional probability:
21
Marginal Effects
It is often useful to know the marginal effect of a regressor on the probability that the event (innovation) occurs. As the probability is a non-linear function of the explanatory variables, the change in probability due to a change in one explanatory variable is not identical depending on whether the other variables are held at their mean, median, first quartile, etc.
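For the logit, the marginal effect of regressor j at a given point has the closed form β_j · p · (1 - p). A small sketch (the coefficient and the evaluation point are illustrative, not real estimates); it also shows why the effect shrinks away from p = 0.5:

```python
# Logit marginal effect at a chosen point: dP/dx_j = beta_j * p * (1 - p).
# beta_j and the evaluation probabilities below are illustrative values.
def logit_marginal_effect(beta_j, p_at_point):
    return beta_j * p_at_point * (1 - p_at_point)

me_mid = logit_marginal_effect(0.752, 0.5)    # effect is largest at p = 0.5
me_tail = logit_marginal_effect(0.752, 0.9)   # smaller near the tails
print(me_mid > me_tail)
```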
22
Goodness of Fit Measures
In ML estimation, there is no such measure as the R². But the log likelihood can be used to assess the goodness of fit. Note the following:
The higher the number of observations, the lower the joint probability, and the more the LL measure tends towards -∞.
For a given number of observations, the better the fit, the higher the LL measure (since it is always negative, the closer it is to zero).
The philosophy is to compare two models by looking at their LL values: one is the constrained model, the other the unconstrained model.
23
Goodness of Fit Measures
A model is said to be constrained when the observer sets the parameters associated with some variables to zero. A model is said to be unconstrained when the observer relaxes this assumption and allows these parameters to differ from zero. For example, we can compare two models: one with no explanatory variables, one with all our explanatory variables. The one with no explanatory variables implicitly assumes that all parameters are equal to zero. Hence it is the constrained model, because we (implicitly) constrain the parameters to be nil.
24
The likelihood ratio test (LR test)
The most used measure of goodness of fit in ML estimation is the likelihood ratio test. The LR statistic is twice the difference in log likelihood between the unconstrained and the constrained model: LR = 2(LL_u - LL_c). This statistic follows a χ² distribution. If the difference in the LL values is small (large), it is because the set of explanatory variables brings insignificant (significant) information. The null hypothesis H0 is that the added explanatory variables bring no significant information. High LR values lead the observer to reject H0 and accept the alternative hypothesis Ha that the set of explanatory variables does significantly explain the outcome.
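A minimal sketch computing the LR statistic, and alongside it the McFadden pseudo R², from two log likelihood values (the values are illustrative, not real estimates):

```python
# LR statistic and McFadden pseudo R-squared from two log likelihoods.
# The LL values below are made up for illustration.
ll_constrained = -250.0    # e.g. intercept-only model
ll_unconstrained = -210.0  # e.g. model with all regressors

lr = 2 * (ll_unconstrained - ll_constrained)         # chi2-distributed statistic
pseudo_r2 = 1 - ll_unconstrained / ll_constrained    # McFadden pseudo R2, in [0, 1]

print(lr, round(pseudo_r2, 3))   # 80.0 0.16
```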
25
The McFadden Pseudo R²
We also use the McFadden pseudo R² (1973). Its interpretation is analogous to the OLS R², but it is biased downward and generally remains low. The pseudo R² also compares the unconstrained and the constrained models, as the ratio of their log likelihoods: pseudo R² = 1 - LL_u / LL_c. It lies between 0 and 1.
26
Goodness of Fit Measures
[Stata output comparing the constrained model and the unconstrained model]
27
Other Binary Choice Models
The logit model is only one way of modeling binary choices. The probit model is another: it is actually more widely used than the logit and assumes a normal distribution (not a logistic one) for the z values. The complementary log-log model is used where the occurrence of the event is very rare, the distribution of z being asymmetric.
28
Probit model: P(y = 1 | X) = Φ(z), the standard normal CDF
Complementary log-log model: P(y = 1 | X) = 1 - exp(-exp(z))
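The three link functions can be sketched side by side in Python; all map any real z into (0, 1), but the cloglog is asymmetric while logit and probit are symmetric around z = 0.

```python
import math

# The three binary-choice link functions, each mapping z in R to (0, 1).
def logit_cdf(z):
    return 1.0 / (1.0 + math.exp(-z))

def probit_cdf(z):                     # standard normal CDF, via erf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def cloglog_cdf(z):                    # asymmetric around z = 0
    return 1.0 - math.exp(-math.exp(z))

for f in (logit_cdf, probit_cdf, cloglog_cdf):
    # all three are increasing and bounded in (0, 1)
    assert 0.0 < f(-3.0) < f(0.0) < f(3.0) < 1.0

print(logit_cdf(0.0), probit_cdf(0.0))   # both 0.5 at z = 0
print(cloglog_cdf(0.0) != 0.5)           # cloglog is not symmetric
```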
29
Likelihood functions and Stata commands
Example:
logit inno rdi lassets spe pharma
probit inno rdi lassets spe pharma
cloglog inno rdi lassets spe pharma
30
Probability Density Functions
31
Cumulative Distribution Functions
32
Comparison of models

                    OLS        Logit      Probit     C log-log
Ln(R&D intensity)   0.110      0.752      0.422      0.354
                    [3.90]***  [3.57]***  [3.46]***  [3.13]***
ln(Assets)          0.125      0.997      0.564      0.493
                    [8.58]***  [7.29]***  [7.53]***  [7.19]***
Spe                 0.056      0.425      0.224      0.151
                    [1.11]     [1.01]     [0.98]     [0.76]
Biotech Dummy       0.442      3.799      2.120      1.817
                    [7.49]***  [6.58]***  [6.77]***  [6.51]***
Constant            -0.843     -11.634    -6.576     -6.086
                    [3.91]**   [6.01]***  [6.12]***  [6.08]***
Observations        431

Absolute t values in brackets (OLS), z values for the other models. * 10%, ** 5%, *** 1%.
33
Comparison of marginal effects

                    OLS      Logit    Probit   C log-log
Ln(R&D intensity)   0.110    0.082    0.090    0.098
ln(Assets)          0.125    0.110    0.121    0.136
Specialisation      0.056    0.046    0.047    0.042
Biotech Dummy       0.442    0.368    0.374    0.379

For the logit, probit and cloglog models, marginal effects have been computed for a one-unit variation (around the mean) of the variable at stake, holding all other variables at their sample mean values.
34
Multinomial LOGIT Models
35
Multinomial models
Let us now focus on the case where the dependent variable has several outcomes (i.e. is multinomial). For example, innovative firms may need to collaborate with other organizations. One can code this type of interaction as follows:
Collaborate with a university (modality 1)
Collaborate with large incumbent firms (modality 2)
Collaborate with SMEs (modality 3)
Do it alone (modality 4)
Or, studying firm survival:
Survival (modality 1)
Liquidation (modality 2)
Mergers & acquisitions (modality 3)
36
Multiple alternatives without obvious ordering
Choice of a single alternative out of a number of distinct alternatives, e.g.: which means of transportation do you use to get to work? Bus, car, bicycle, etc.
An example of an ordered structure, by contrast: how do you feel today? Very well, fairly well, not too well, miserably.
38
Random Utility Model
The RUM underlies the economic interpretation of discrete choice models. It was developed by Daniel McFadden for econometric applications (see JoEL, January 2001, for the Nobel lecture; also Manski (2001), "Daniel McFadden and the Econometric Analysis of Discrete Choice", Scandinavian Journal of Economics, 103(2), 217-229). Preferences are functions of biological taste templates, experiences, and other personal characteristics; some of these are observed, others unobserved. The framework allows for taste heterogeneity. The discussion below is in terms of individual utility (e.g. migration, transport mode choice), but similar reasoning applies to firm choices.
39
Random Utility Model
Individual i's utility from choice j can be decomposed into two components: U_ij = V_ij + ε_ij.
V_ij is deterministic: it is common to everyone with the same characteristics and constraints, and captures the representative tastes of the population (e.g. the effects of time and cost on travel mode choice).
ε_ij is random: it reflects the idiosyncratic tastes of i and the unobserved attributes of choice j.
40
Random Utility Model
V_ij is a function of the attributes of alternative j (e.g. price and time) and of observed consumer and choice characteristics z. We are interested in estimating the parameters attached to these attributes. Let us forget about z for now, for simplicity.
41
RUM and binary choices
Consider two choices, e.g. bus or car. We observe whether an individual uses one or the other. Define y = 1 if the individual travels by bus and y = 0 if by car. What is the probability that we observe an individual choosing to travel by bus? Assume utility maximisation: the individual chooses bus (y = 1) rather than car (y = 0) if the utility of commuting by bus exceeds the utility of commuting by car.
42
RUM and binary choices
So the individual chooses bus if U_bus > U_car, i.e. if V_bus - V_car > ε_car - ε_bus. So the probability that we observe an individual choosing bus travel is:
P(y = 1) = P(ε_car - ε_bus < V_bus - V_car)
43
The linear probability model
Assume the probability depends linearly on the observed characteristics (price and time). Then you can estimate it by linear regression, where the dependent variable y is the "dummy variable" for mode choice (1 if bus, 0 if car). Other consumer and choice characteristics can be included (the z's in the first slide of this section).
44
Probits and logits
Common assumptions:
Cumulative normal distribution function: "probit"
Logistic function: "logit"
Estimation by maximum likelihood.
45
A discrete choice underpinning
Choice between M alternatives. The decision is determined by the utility level U_ij that individual i derives from choosing alternative j. Let:
U_ij = V_ij + ε_ij,  where i = 1,…,N individuals and j = 0,…,J alternatives   (1)
The alternative providing the highest level of utility will be chosen.
46
The probability that alternative j will be chosen is:
P(y_i = j) = P(U_ij > U_ik, for all k ≠ j)
In general, this requires solving multidimensional integrals: analytical solutions do not exist.
47
Exception: if the error terms ε_ij in (1) are assumed to be independently and identically standard extreme value distributed, then an analytical solution exists. In this case, similar to the binary logit, it can be shown that the choice probabilities are:
P(y_i = j) = exp(V_ij) / Σ_k exp(V_ik)
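These multinomial logit choice probabilities are a softmax over the deterministic utilities. A small Python sketch (the utilities are illustrative values):

```python
import math

# Multinomial logit choice probabilities: P(j) = exp(v_j) / sum_k exp(v_k).
def choice_probabilities(v):
    m = max(v)                          # subtract the max for numerical stability
    e = [math.exp(vj - m) for vj in v]
    s = sum(e)
    return [ej / s for ej in e]

probs = choice_probabilities([1.0, 0.5, -0.2])   # illustrative utilities
print([round(pj, 3) for pj in probs])
print(abs(sum(probs) - 1.0) < 1e-12)   # probabilities sum to one
```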
48
Likelihood functions
Let us assume that you have a sample of n random observations. Let f(y_i) be the probability that y_i = j. The joint probability of observing the n values of y_i is given by the likelihood function. We need to specify the function f(.). It comes from the empirical discrete distribution of an event that can have several outcomes: this is the multinomial distribution.
49
The maximum likelihood function
The maximum likelihood function reads:
L = ∏_i ∏_j p_ij^d_ij,  where d_ij = 1 if individual i chooses alternative j, and 0 otherwise
50
The maximum likelihood function
The log transform of the likelihood yields:
LL = Σ_i Σ_j d_ij ln(p_ij)
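Since d_ij picks out the alternative actually chosen, the multinomial log likelihood is simply the sum of the log probabilities of the observed choices. A sketch with illustrative probabilities:

```python
import math

# Multinomial log likelihood: sum over individuals of log P(chosen alternative).
# choices[i] is the index chosen by individual i; probs[i][j] is P(i chooses j).
def mlogit_loglik(choices, probs):
    return sum(math.log(probs[i][j]) for i, j in enumerate(choices))

probs = [[0.7, 0.2, 0.1],    # illustrative fitted probabilities
         [0.1, 0.6, 0.3],
         [0.2, 0.2, 0.6]]
ll = mlogit_loglik([0, 1, 2], probs)
print(ll < 0)   # negative, as for the binary case
```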
51
Multinomial logit models
Stata instruction: mlogit
mlogit y x1 x2 x3 … xk [if] [weight] [, options]
Options:
noconstant: omits the constant
robust: controls for heteroskedasticity
if: selects observations
weight: weights observations
52
Multinomial logit models
use mlogit.dta, clear
mlogit type_exit log_time log_labour entry_age entry_spin cohort_*
The base outcome, chosen by Stata, is the one with the highest empirical frequency. The output reports the goodness of fit, the parameter estimates, standard errors and z values.
53
Interpretation of coefficients
The interpretation of the coefficients always refers to the base category. Does the probability of being bought out decrease over time? No! What the coefficient says is that, relative to survival, the probability of being bought out decreases over time.
54
Interpretation of coefficients
The interpretation of the coefficients always refers to the base category. Is the probability of being bought out lower for spinoffs? No! What the coefficient says is that, relative to survival, the probability of being bought out is lower for spinoffs.
55
Marginal Effects
Elasticities: the relative change of p_ij if x increases by 1 per cent.
56
Independence of irrelevant alternatives (IIA)
The model assumes that each pair of outcomes is independent of all other alternatives; in other words, the other alternatives are irrelevant. From a statistical viewpoint, this is tantamount to assuming independence of the error terms across pairs of alternatives. A simple way to test the IIA property is to estimate the model leaving out one modality (the restricted model), and to compare its parameters with those of the complete model:
If IIA holds, the parameters should not change significantly.
If IIA does not hold, the parameters should change significantly.
57
Multinomial logit and "IIA"
There are many applications in economics and geography journals (and other research areas). The multinomial logit model is the workhorse of multiple choice modelling in all disciplines: it is easy to compute. But it has a drawback.
58
Independence of Irrelevant Alternatives
Consider the market shares: red bus 20%, blue bus 20%, train 60%. IIA implies that if the red bus company shuts down, the market shares become: blue bus 20% + 5% = 25%, train 60% + 15% = 75%, because the ratio of blue bus trips to train trips must stay at 1:3.
59
Independence of Irrelevant Alternatives
The model assumes that the "unobserved" attributes of all alternatives are perceived as equally similar. But will people unable to travel by red bus really switch to travelling by train? The most likely outcome (assuming the supply of bus seats is elastic) is: blue bus 40%, train 60%. This failure of multinomial/conditional logit models is known as the Independence of Irrelevant Alternatives (IIA) assumption.
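The red bus / blue bus example can be reproduced numerically: with utilities chosen (hypothetically) so that the multinomial logit shares are 20/20/60, dropping one alternative rescales the remaining shares proportionally, exactly as IIA dictates.

```python
import math

# Multinomial logit shares, and what IIA predicts when one alternative is dropped.
def shares(v):
    e = [math.exp(vj) for vj in v]
    s = sum(e)
    return [ej / s for ej in e]

# Hypothetical utilities giving shares 0.2 (red bus), 0.2 (blue bus), 0.6 (train)
v_red, v_blue, v_train = 0.0, 0.0, math.log(3.0)
full = shares([v_red, v_blue, v_train])
reduced = shares([v_blue, v_train])          # red bus company shuts down

print([round(s, 2) for s in full])           # [0.2, 0.2, 0.6]
print([round(s, 2) for s in reduced])        # [0.25, 0.75]: blue:train stays 1:3
```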
60
Independence of irrelevant alternatives (IIA)
H0: The IIA property is valid. H1: The IIA property is not valid.
The H statistic (H stands for Hausman) follows a χ² distribution with M degrees of freedom (M being the number of parameters).
61
STATA application: the IIA test
H0: The IIA property is valid. H1: The IIA property is not valid.
mlogtest, hausman
Omitted variable
62
Application of the IIA test
mlogtest, hausman
H0: The IIA property is valid. H1: The IIA property is not valid.
We compare the parameters of the model "liquidation relative to bought-out" estimated simultaneously with "survival relative to bought-out", with the parameters of the model "liquidation relative to bought-out" estimated without "survival relative to bought-out".
63
Application of the IIA test
mlogtest, hausman
H0: The IIA property is valid. H1: The IIA property is not valid.
The conclusion is that the outcome "survival" significantly alters the choice between liquidation and bought-out. In fact, for a company, being bought out must be seen as a way to remain active, at the cost of losing control over economic decisions, notably investment.
64
Cramer-Ridder Test
Often you want to know whether certain alternatives can be merged into one: e.g., do you have to distinguish between employment states such as "unemployment" and "nonemployment"? The Cramer-Ridder test has as its null hypothesis that the alternatives can be merged. It takes the form of an LR test: 2(logL_U - logL_R) ~ χ².
65
Derive the log likelihood value of the restricted model in which two alternatives (here, A and N) have been merged:
logL_R = logL_P + n_A log(n_A / (n_A + n_N)) + n_N log(n_N / (n_A + n_N))
where logL_P is the log likelihood of the pooled model (with A and N merged into one category), and n_A and n_N are the number of times A and N have been chosen.
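Assuming the restricted log likelihood takes the pooled-LL-plus-frequency-split form just described (the numbers below are illustrative, not from a real estimation), it can be computed as:

```python
import math

# Restricted log likelihood for the Cramer-Ridder test, under the assumed form:
# pooled-model LL plus the LL of splitting the merged category by frequency.
def restricted_loglik(ll_pooled, n_a, n_n):
    n = n_a + n_n
    return ll_pooled + n_a * math.log(n_a / n) + n_n * math.log(n_n / n)

ll_r = restricted_loglik(-500.0, n_a=80, n_n=40)   # illustrative values
print(ll_r < -500.0)   # the split term is always non-positive
```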
66
Exercise
use http://www.stata-press.com/data/r8/sysdsn3
tabulate insure
mlogit insure age male nonwhite site2 site3