Slide 1: Linear beta pricing models: cross-sectional regression tests
FINA790C, Spring 2006, HKUST
Slide 2: Motivation
The F test and the ML likelihood ratio test are not without drawbacks:
– We need T > N.
– To get around this we could form portfolios (but this is not without problems).
– When the model is rejected we don't know why (e.g. do expected returns depend on factor loadings or on characteristics?).
Slide 3: Cross-sectional regression
Can we use information from the whole cross-section of stock returns to test linear beta pricing models?
Fama-MacBeth two-pass cross-sectional regression methodology:
– Estimate each asset's beta from a time-series regression
– Cross-sectional regression of asset returns on a constant, the betas (and possibly other characteristics)
– Run the cross-sectional regression each period and average the coefficients over time
Slide 4: Linear beta pricing model
At time t the returns on the N securities are R_t = [R_1t R_2t … R_Nt]' with variance matrix Σ_R.
Let f_t = [f_1t … f_Kt]' be the vector of time-t values taken by the K factors, with variance matrix Σ_f.
The linear beta pricing model is E[R_it] = λ_0 + λ'β_i for i = 1, …, N, or
E[R_t] = λ_0 1 + Bλ, where B = E[(R_t − E(R_t))(f_t − E(f_t))'] Σ_f^{-1}.
Slide 5: Return generating process
From the definition of B, the time series for R_t is
R_t = E[R_t] + B(f_t − E(f_t)) + u_t, with E[u_t] = 0 and E[u_t f_t'] = 0_{N×K}.
Imposing the linear beta pricing model gives
R_t = λ_0 1 + B(f_t − E(f_t) + λ) + u_t.
Slide 6: CSR method: description
Define γ = [λ_0 λ']' (a (K+1)×1 vector) and X = [1 B] (an N×(K+1) matrix).
Assume N > K and rank(X) = K+1.
Then E[R_t] = [1 B][λ_0 λ']' = Xγ.
Slide 7: CSR – first pass
In the first step we estimate Σ_f and B with the usual estimators:
Σ_f* = (1/T) ∑_{t=1}^{T} (f_t − μ_f*)(f_t − μ_f*)', where μ_f* = (1/T) ∑_{t=1}^{T} f_t
B* = [(1/T) ∑_{t=1}^{T} (R_t − μ_R*)(f_t − μ_f*)'] Σ_f*^{-1}, where μ_R* = (1/T) ∑_{t=1}^{T} R_t
In practice we can use a rolling estimation period prior to the testing period.
Slide 8: CSR – second pass
In the second step, for each t = 1, …, T we use the estimate B* of the beta matrix and run a cross-sectional regression of returns on the estimated betas:
γ_t* = (X*'Q*X*)^{-1} X*'Q* R_t (feasible GLS with weighting matrix Q*), where X* = [1 B*].
The time-series average is
γ** = (1/T) ∑_{t=1}^{T} (X*'Q*X*)^{-1} X*'Q* R_t = (X*'Q*X*)^{-1} X*'Q* μ_R*.
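Continuing with the hypothetical names from the previous sketch, the second pass might look as follows; Q is any N×N weighting matrix (the identity for OLS, an estimated inverse covariance matrix for feasible GLS):

```python
def second_pass(R, B, Q=None):
    """Period-by-period cross-sectional GLS of R_t on X* = [1 B*].

    Returns the (T, K+1) array of gamma_t* and their time-series average gamma**.
    """
    T, N = R.shape
    X = np.column_stack([np.ones(N), B])        # X* = [1 B*], N x (K+1)
    Q = np.eye(N) if Q is None else Q
    D = np.linalg.solve(X.T @ Q @ X, X.T @ Q)   # (X*'Q*X*)^{-1} X*'Q*
    gammas = R @ D.T                            # row t is gamma_t* = D R_t
    return gammas, gammas.mean(axis=0)          # gamma** = time-series average
```

Because X* and Q* do not vary with t here, averaging the per-period γ_t* is the same as regressing μ_R* on X*, as the slide notes.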
Slide 9: Fama-MacBeth OLS
Fama-MacBeth set Q = I_N, so γ_OLS,t* = (X*'X*)^{-1} X*' R_t.
The time-series average is γ_OLS** = (X*'X*)^{-1} X*' μ_R*.
The variance of γ_OLS,t* is estimated by
(1/T) ∑_{t=1}^{T} (γ_OLS,t* − γ_OLS**)(γ_OLS,t* − γ_OLS**)'.
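The Fama-MacBeth standard errors then come from the time-series variability of the per-period estimates. A sketch consistent with the formula above, dividing by T once more to get the covariance of the averaged coefficients (names hypothetical):

```python
def fama_macbeth_se(gammas):
    """Fama-MacBeth standard errors for gamma** from the per-period gamma_t*.

    gammas : (T, K+1) array of per-period cross-sectional estimates.
    """
    T = gammas.shape[0]
    dev = gammas - gammas.mean(axis=0)
    var_gamma_t = dev.T @ dev / T       # (1/T) sum_t (gamma_t* - gamma**)(gamma_t* - gamma**)'
    cov_avg = var_gamma_t / T           # covariance of the time-series average gamma**
    return np.sqrt(np.diag(cov_avg))    # ignores estimation error in the betas
```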
Slide 10: Issues in CSR methodology
– We do not observe the true beta matrix B, only B* measured with error: what is the effect on the sampling distribution of the estimates?
– How is the CSR methodology related to the maximum likelihood methodology?
Slide 11: Sampling distribution of γ**
Let D = (X'QX)^{-1} X'Q with X = [1 B].
Basic Result: if (R_t', f_t')' is stationary and serially independent, then under standard assumptions, as T→∞, √T(γ** − γ) converges in distribution to a multivariate normal with mean zero and covariance
V = D Σ_R D' + D Π D' − D(Γ + Γ')D'.
Slide 12: Where does V come from?
Write μ_R* = X*γ + (μ_R* − E(R_t)) − (B* − B)λ.
So √T(γ** − γ) = (X*'Q*X*)^{-1} X*'Q* √T(μ_R* − E(R_t)) − (X*'Q*X*)^{-1} X*'Q* √T(B* − B)λ.
Error in estimating γ therefore comes from two sources:
– using average rather than expected returns
– using estimated rather than true betas
Slide 13: Comparing V to the Fama-MacBeth variance estimator
Treating B* as if it were the true B, the variance of γ_OLS,t* (which is what the Fama-MacBeth estimator measures) is
(X'X)^{-1} X' Σ_R X (X'X)^{-1} = D Σ_R D',
i.e. only the first term of V. So in general the Fama-MacBeth standard errors are incorrect because of the errors-in-variables problem.
Slide 14: Special case: conditional homoscedasticity of residuals given factors
Suppose we also assume that, conditional on the values of the factors f_t, the time-series regression residuals u_t have zero expectation, constant covariance Σ_U, and are serially uncorrelated.
This will hold if (R_t', f_t')' is iid and jointly multivariate normal.
Slide 15: Asymptotic variance for the special case
Recall γ = [λ_0 λ']' (a (K+1)×1 vector) and define the (K+1)×(K+1) bordered matrix
Σ_f† = [ 0    0_K' ]
       [ 0_K  Σ_f  ]
Then the Basic Result holds with
V = Σ_f† + (1 + λ' Σ_f^{-1} λ) D Σ_U D'.
Asymptotically valid standard errors are obtained by substituting consistent estimates for the various parameters.
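A sketch of how this corrected covariance matrix could be computed from the first-pass quantities in the earlier snippets; lam denotes the K-vector of estimated risk premia taken from γ**, and the names are again hypothetical:

```python
def shanken_covariance(Sigma_f, Sigma_U, B, lam, Q=None):
    """V = Sigma_f_dagger + (1 + lam' Sigma_f^{-1} lam) D Sigma_U D',
    evaluated at consistent estimates."""
    N, K = B.shape
    X = np.column_stack([np.ones(N), B])        # X* = [1 B*]
    Q = np.eye(N) if Q is None else Q
    D = np.linalg.solve(X.T @ Q @ X, X.T @ Q)   # (X*'Q*X*)^{-1} X*'Q*
    Sigma_f_dag = np.zeros((K + 1, K + 1))      # bordered matrix: zero row/column for lambda_0
    Sigma_f_dag[1:, 1:] = Sigma_f
    c = 1.0 + lam @ np.linalg.solve(Sigma_f, lam)   # 1 + lam' Sigma_f^{-1} lam
    return Sigma_f_dag + c * D @ Sigma_U @ D.T
```

Asymptotic standard errors for γ** would then be sqrt(diag(V)/T); comparing them with the plain Fama-MacBeth standard errors shows the size of the correction for estimation error in the betas.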
Slide 16: Example: Sharpe-Lintner-Black CAPM
For K = 1 (a single market factor with variance σ_M²), the previous expression simplifies to
V = [ 0   0    ] + (1 + λ_1²/σ_M²) D Σ_U D'.
    [ 0   σ_M² ]
The usual Fama-MacBeth variance estimator (which ignores estimation error in the betas) understates the correct variance except under the null hypothesis that λ_1 (the market risk premium) = 0.
Slide 17: Maximum likelihood and two-pass CSR
MLE estimates B and γ simultaneously and thereby solves the errors-in-variables problem.
The asymptotic covariance matrix of the two-pass cross-sectional regression GLS estimator γ** is the same as that of the MLE.
That is, two-pass GLS is consistent and asymptotically efficient as T→∞.
Slide 18: Two-pass GLS
For fixed T, as N→∞, however, the two-pass GLS estimator still suffers from an errors-in-variables problem from using B* (i.e. two-pass GLS is not N-consistent).
We can make the two-pass GLS estimator N-consistent as well through a simple modification (see Litzenberger and Ramaswamy (1979), Shanken (1992)).
Slide 19: Modified two-pass CSR
For example: the Sharpe-Lintner-Black CAPM estimated with two-pass OLS.
The errors-in-variables problem applies to the betas, i.e. to the lower right-hand block of X*'X*. Note that
E(β*'β*) = β'β + tr(Σ_U)/(T σ_M²*).
So deduct the last term from the lower right-hand block of X*'X*; this adjustment corrects for the EIV problem as N→∞.
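A sketch of this modification for the one-factor case with two-pass OLS, reusing the hypothetical names from the earlier snippets (beta is the N-vector of first-pass betas, Sigma_U the first-pass residual covariance, and sigma2_M the estimated market variance):

```python
def modified_ols_capm(R, beta, Sigma_U, sigma2_M):
    """N-consistent two-pass OLS for the one-factor model.

    Subtracts the EIV bias tr(Sigma_U)/(T*sigma2_M) from the lower-right
    block of X*'X* before solving the cross-sectional regressions.
    """
    T, N = R.shape
    X = np.column_stack([np.ones(N), beta])               # X* = [1 beta*]
    XtX_adj = X.T @ X
    XtX_adj[1, 1] -= np.trace(Sigma_U) / (T * sigma2_M)   # correct the beta*'beta* block
    gammas = np.linalg.solve(XtX_adj, X.T @ R.T).T        # adjusted gamma_t*, stacked (T, 2)
    return gammas, gammas.mean(axis=0)
```

Only the β*'β* block is adjusted because 1'β* is an unbiased estimate of 1'β; the bias from measurement error appears only in the squared-beta term.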