1
Lecture Week 3: Topics in Regression Analysis
2
Overview
– Multiple regression
– Dummy variables
– Tests of restrictions
– 2nd hour: some issues in cost of capital
3
Multiple Regression
Same principle as simple regression – but difficult to draw.
Key elements:
– Definition of variables: source, measurement, number of observations, raw, log, squared, etc.
– Method of estimation: typically ordinary least squares for linear forms
– The output
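For reference, a minimal statement of the model the slide describes (the notation below is added here, not taken from the slide): a linear regression with k explanatory variables,

\[ Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \dots + \beta_k X_{ki} + u_i , \]

where ordinary least squares chooses the estimates \(\hat\beta_0, \dots, \hat\beta_k\) that minimize the sum of squared residuals \(\sum_i \hat u_i^2\).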
4
Log-log formulation
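The slide's own equation did not survive extraction; the standard log-log specification it presumably refers to is

\[ \ln Y_i = \beta_0 + \beta_1 \ln X_i + u_i , \]

in which the slope \(\beta_1 = d\ln Y / d\ln X\) is interpreted as the elasticity of Y with respect to X.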
5
Defining an equation in EViews
Once you've created a workfile and loaded or imported your data, click on:
Objects… New Object… Equation
Then type in the list of variables, e.g. xssretsp c xsretftse
Once you have the first result, you can use Proc… Specify/Estimate to adjust the equation.
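For readers working outside EViews, a rough equivalent of the same specification in Python/statsmodels is sketched below. The variable names xssretsp (excess S&P return) and xsretftse (excess FTSE return) come from the slide; the file name and data layout are assumptions for illustration only.

import pandas as pd
import statsmodels.api as sm

# Hypothetical data file holding the two excess-return series named on the slide
data = pd.read_csv("excess_returns.csv")

y = data["xssretsp"]                      # dependent variable
X = sm.add_constant(data[["xsretftse"]])  # EViews' 'c' term = the constant
results = sm.OLS(y, X).fit()              # ordinary least squares
print(results.summary())                  # output comparable to the EViews equation view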
6
Looking at multiple regression output

Dependent Variable: FTSE100
Method: Least Squares
Date: 02/04/04   Time: 13:49
Sample (adjusted): 1/02/2003 10/20/2003
Included observations: 208 after adjusting endpoints

Variable          Coefficient    Std. Error    t-Statistic    Prob.
C                   59.4569       56.35278       1.055084     0.2926
FTSE100(-1)          0.98504       0.014153     69.59705      0.0000
MON                  8.022789      8.33181       0.962911     0.3367

R-squared              0.9594      Mean dependent var       3975.715
Adjusted R-squared     0.9590      S.D. dependent var        238.2850
S.E. of regression    48.234       Akaike info criterion      10.6043
Sum squared resid   476948.9       Schwarz criterion          10.6524
Log likelihood     -1099.852       F-statistic              2423.397
Durbin-Watson stat     2.2199      Prob(F-statistic)           0.000000
7
What the third box means
– R-squared: proportion of the variance of Y explained by the regression
– Adjusted R-squared: takes account of the number of variables
– S.E. of regression: standard deviation of the residuals
– Log likelihood: a function of log(sum of squared residuals)
– Durbin-Watson: test for 1st-order autocorrelation
– Akaike and Schwarz information criteria: used for selecting between non-nested models with different numbers of parameters
– F-statistic: tests the null that all slope coefficients are zero
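Two of these statistics in formula form (standard definitions, not taken from the slide), with n the number of observations, k the number of estimated coefficients and \(\hat u_t\) the residuals:

\[ \bar R^2 = 1 - (1 - R^2)\,\frac{n-1}{n-k}, \qquad DW = \frac{\sum_{t=2}^{n}(\hat u_t - \hat u_{t-1})^2}{\sum_{t=1}^{n}\hat u_t^2} \approx 2(1 - \hat\rho_1), \]

where \(\hat\rho_1\) is the first-order autocorrelation of the residuals; values of DW near 2 indicate no first-order autocorrelation.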
8
Dummy variables
Intercept dummy: Y = a + bX + c·Dummy
[Figure: two parallel regression lines of Y against X, labelled Dummy = 0 and Dummy = 1, with the Dummy = 1 line shifted up by c.]
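A minimal sketch of an intercept dummy in Python/statsmodels, loosely modelled on the MON (Monday) dummy in the output above; the file name and column names are assumptions for illustration:

import pandas as pd
import statsmodels.api as sm

# Hypothetical daily FTSE 100 file indexed by date
df = pd.read_csv("ftse100_daily.csv", index_col=0, parse_dates=True)

df["ftse_lag1"] = df["FTSE100"].shift(1)            # lagged index level, as in FTSE100(-1)
df["mon"] = (df.index.dayofweek == 0).astype(int)   # intercept dummy: 1 on Mondays, 0 otherwise
df = df.dropna()

X = sm.add_constant(df[["ftse_lag1", "mon"]])       # intercept shifts by the MON coefficient when mon = 1
results = sm.OLS(df["FTSE100"], X).fit()
print(results.params)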
9
A few points on tests
– Classical tests are based on "nested" hypotheses.
– Many of them ask: "Do the data support the alternative hypothesis against the null?"
– Core methodology: the difference in fit between the alternative hypothesis (typically the unrestricted form) and the null hypothesis (typically the restricted form).
– t tests, F tests and likelihood-based tests are all ultimately based on what happens to the sum of squared residuals in the presence/absence of the restriction.
– Example: in the unrestricted form b1, b2, b3 can be anything; the null imposes the restriction b2 = 0.
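The standard F-test of restrictions that this logic leads to (a textbook formula, not shown on the slide): with SSR_r and SSR_u the restricted and unrestricted sums of squared residuals, q the number of restrictions, n the sample size and k the number of parameters in the unrestricted model,

\[ F = \frac{(SSR_r - SSR_u)/q}{SSR_u/(n-k)} \;\sim\; F_{q,\,n-k} \quad \text{under the null.} \]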