Econometric Modelling

Introduction
- To examine some econometric results from various financial models
- To use the results to determine the levels of significance of the variables and whether the results fit the theory
- To use the results to test specific restrictions
- To suggest some potential problems when assessing model results

Carrying out a regression
- Set out the model/theory, including expected signs and magnitudes of the coefficients
- Gather data
- Estimate the model using a relevant technique
- Interpret results, assess diagnostic tests
- If model fails the diagnostic tests, respecify model

Stock Price Return Model Given the following model, we wish to obtain estimates of the constant and slope coefficients: s(t) = β0 + β1 y(t) + β2 p(t) + β3 i(t) + β4 rp(t) + u(t)

Variables

Estimation We would estimate this model using ordinary least squares (OLS), although as we will find out later other methods may be more appropriate. The model is estimated using monthly data from 2000m1 to 2005m12, giving 6 years of data and 72 observations.
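
A minimal sketch of how this estimation might be carried out in Python with statsmodels; the DataFrame and column names (s, y, p, i, rp) are placeholders standing in for the slides' actual dataset, which is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 72  # monthly data, 2000m1 to 2005m12
df = pd.DataFrame(rng.normal(size=(n, 5)), columns=["s", "y", "p", "i", "rp"])  # placeholder data

X = sm.add_constant(df[["y", "p", "i", "rp"]])  # constant plus the four regressors
ols_res = sm.OLS(df["s"], X).fit()
print(ols_res.summary())  # coefficients, t-statistics, adjusted R-squared, F-statistic, DW
```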

Results

Coefficients The signs on the coefficients are as we hypothesised, with the possible exception of p. However, as this variable is insignificant, its sign is of less importance. For y, a unit rise in y gives a 0.8 of a unit rise in the dependent variable s(t); for p, a unit rise in p gives a 0.2 of a unit rise in s(t), and so on.

T-statistics We first test whether the 4 variables are individually different from 0, using the t-test (we usually ignore the constant). E.g. for y: (0.8 − 0)/0.2 = 4. The critical value is 2.000 (5%, 72 − 5 = 67 degrees of freedom; 60 d.o.f. is the nearest entry in the tables). As 4 > 2 we reject the null hypothesis that the coefficient on y is 0, so y is said to be significantly different from 0. The t-statistics are 1 for p, 10 for i and 2.333 for rp, so we conclude that y, i and rp are significant and p is insignificant. This result suggests we might consider removing p from our model.
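
The same t-test can be checked numerically; a small sketch using the figures quoted on the slide (coefficient 0.8, standard error 0.2) and the exact 5% critical value rather than the rounded table value:

```python
from scipy import stats

b_y, se_y = 0.8, 0.2                  # coefficient and standard error for y from the slide
t_stat = (b_y - 0.0) / se_y           # H0: the coefficient on y equals 0
crit = stats.t.ppf(0.975, df=72 - 5)  # two-sided 5% critical value with 67 d.o.f. (~1.996)
print(t_stat, crit, abs(t_stat) > crit)  # 4.0 exceeds the critical value, so reject H0
```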

R-squared The adjusted R-squared statistic is 0.58, which indicates relatively good explanatory power. The F-statistic for the significance of the goodness of fit is 25. The critical value for F(4, 67) is 2.53 (5%). As 25 > 2.53, the goodness of fit of the regression is significant; in other words, the joint explanatory power of the variables is significantly different from 0.
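
A quick check of the overall-significance test, using the exact F(4, 67) critical value from scipy (the tables quote 2.53 for the nearest entry):

```python
from scipy import stats

f_stat = 25.0                            # F-statistic for the goodness of fit from the slide
crit = stats.f.ppf(0.95, dfn=4, dfd=67)  # roughly 2.51; close to the 2.53 table value
print(f_stat > crit)                     # True: the regression is jointly significant
```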

DW statistic We first need to find the dl and du values from the tables. As k is 4 and we have 72 observations, the critical values are dl = 1.49 and du = 1.74. The DW statistic is 1.84, which lies between du (1.74) and 4 − du (2.26), so we do not reject the null hypothesis of no 1st-order autocorrelation.
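
Continuing the earlier statsmodels sketch (ols_res), the Durbin-Watson statistic can be computed directly; dl = 1.49 and du = 1.74 are the tabulated bounds quoted on the slide:

```python
from statsmodels.stats.stattools import durbin_watson

dw = durbin_watson(ols_res.resid)  # the slide reports 1.84 for its own data
print(1.74 < dw < 4 - 1.74)        # inside (du, 4 - du): no evidence of 1st-order autocorrelation
```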

LM test for higher-order autocorrelation Given that we have monthly data, we test for up to 12th-order autocorrelation. The critical value for chi-squared(12) is 21.026. As the LM statistic of 12.8 is less than 21.026, we do not reject the null hypothesis of no 12th-order autocorrelation.
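
A sketch of the corresponding Breusch-Godfrey LM test in statsmodels, again continuing from the earlier ols_res object; the slide's statistic of 12.8 refers to its own data, not this placeholder regression:

```python
from scipy import stats
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

lm_stat, lm_pval, f_val, f_pval = acorr_breusch_godfrey(ols_res, nlags=12)  # test up to lag 12
crit = stats.chi2.ppf(0.95, df=12)    # 21.026
print(lm_stat, crit, lm_stat < crit)  # statistic below the critical value: no 12th-order autocorrelation
```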

White’s test for heteroskedasticity The test statistic follows a chi-squared distribution with 14 degrees of freedom (including the cross-product terms). The critical value with 14 degrees of freedom is 23.685 from the chi-squared tables. As 15.2 < 23.685, we do not reject the null hypothesis of no heteroskedasticity.
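
White's test is also available in statsmodels; a sketch continuing from the same fitted model, compared with the chi-squared(14) critical value quoted on the slide:

```python
from scipy import stats
from statsmodels.stats.diagnostic import het_white

lm_stat, lm_pval, f_val, f_pval = het_white(ols_res.resid, ols_res.model.exog)  # with cross products
crit = stats.chi2.ppf(0.95, df=14)    # 23.685 with 14 d.o.f. (levels, squares and cross products)
print(lm_stat, crit, lm_stat < crit)  # statistic below the critical value: no heteroskedasticity detected
```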

Market Model According to the market model, the return on an asset is determined by a constant and the return to the market index.

Market Model As before we would wish to run an OLS regression, then interpret the coefficients, t-statistics and the various diagnostic tests for autocorrelation and heteroskedasticity. In this model we would expect β > 0; the closer it is to 1, the more closely asset i follows the market index. If we have 100 days of daily data for the regression, we get the following result:

Market Model

Market Model The result shows that a unit rise in the market produces a 0.9 of a unit rise in asset i. This suggests the asset closely follows the market and would be considered relatively safe. The t-statistic shows that the market index is significant: (0.9 − 0)/0.1 = 9, and the critical value is 1.98. As 9 > 1.98, we reject the null hypothesis and conclude that the market has a significant effect on asset i. The usual diagnostic tests would be produced and interpreted as before.
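
A sketch of the market model regression and the significance test; the return series here are generated placeholders rather than the 100 days of data used on the slide:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
r_market = rng.normal(size=100)                             # 100 daily market returns (illustrative)
r_asset = 0.9 * r_market + rng.normal(scale=0.5, size=100)  # illustrative asset returns

mm_res = sm.OLS(r_asset, sm.add_constant(r_market)).fit()
beta, se_beta = mm_res.params[1], mm_res.bse[1]
t_zero = (beta - 0.0) / se_beta        # slide: (0.9 - 0)/0.1 = 9
crit = stats.t.ppf(0.975, df=100 - 2)  # about 1.98
print(beta, t_zero, abs(t_zero) > crit)
```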

The Market-Adjusted-Returns Model Based on the original model, termed the market model, we can test a restriction using the t-test to determine whether the model we have is in fact an alternative specification, termed the market-adjusted-returns model.

Test This model implies the following restriction: β = 1. We can use a t-test to determine whether this condition holds.

Test A t-test can be used to determine if β = 1. The critical value is the same as before; the test is: t = (estimated β − 1) / se(β) = (0.9 − 1)/0.1 = −1

Test As 1 < 2 (ignore the sign; the t-statistic is taken in absolute value), we fail to reject the null hypothesis and conclude that the market-adjusted-returns model applies. In this case the null hypothesis is: H0: β = 1.
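
The restriction test can be reproduced with the slide's numbers (estimate 0.9, standard error 0.1); a short sketch:

```python
from scipy import stats

beta_hat, se_beta = 0.9, 0.1
t_one = (beta_hat - 1.0) / se_beta     # H0: beta = 1, giving (0.9 - 1)/0.1 = -1
crit = stats.t.ppf(0.975, df=100 - 2)  # about 1.98
print(abs(t_one) > crit)               # False: fail to reject the market-adjusted-returns model
```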

F-test of a restriction The main use of the F-test is to determine whether a group of explanatory variables are jointly equal to 0. However, we can also use it to test alternative theories or restrictions. The most common restriction is that the coefficients on 2 or more explanatory variables sum to 1; the best example of this is the Cobb-Douglas production function.

Cobb-Douglas Production Function This model suggests that output is a function of capital and labour. In logarithmic form it can be expressed as: ln q = β0 + β1 ln k + β2 ln l + u

Restriction We can then test whether constant returns to scale apply by testing the restriction that the coefficients on capital and labour sum to 1. Constant returns to scale mean that a proportionate increase in all inputs produces the same proportionate increase in output. The restriction allows us to rewrite the Cobb-Douglas production function in terms of output per unit of labour (divide through by l).

Test for Constant Returns to Scale
- Run the regression in its unrestricted form with both explanatory variables (k and l)
- Collect the RSS (unrestricted)
- Run the restricted version, with constant returns to scale imposed, and again collect the RSS (restricted)
- Use the formula used previously (see over, and the sketch below)
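
A sketch of these steps in statsmodels; the data and column names (q, k, l, all assumed to be already in logs) are placeholders, and the unrestricted model is assumed to have three estimated parameters:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
data = pd.DataFrame(rng.normal(size=(60, 3)), columns=["q", "k", "l"])  # placeholder logged series

# Unrestricted: log output on a constant, log capital and log labour
unres = sm.OLS(data["q"], sm.add_constant(data[["k", "l"]])).fit()
rss_u = unres.ssr

# Restricted (coefficients sum to 1): log output per worker on a constant and log capital per worker
restr = sm.OLS(data["q"] - data["l"], sm.add_constant(data["k"] - data["l"])).fit()
rss_r = restr.ssr

n, k_params, m = 60, 3, 1  # observations, parameters in the unrestricted model, restrictions
f_stat = ((rss_r - rss_u) / m) / (rss_u / (n - k_params))
print(f_stat)
```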

Test for Constant Returns to Scale (cont.) F = [(RSS(restricted) − RSS(unrestricted)) / m] / [RSS(unrestricted) / (n − k)], where m is the number of restrictions, n the number of observations and k the number of parameters in the unrestricted model.

Test for Constant Returns to Scale (cont.) If we get an RSS (unrestricted) of 1.2 and an RSS (restricted) of 1.4, and we have 60 observations, we would get the following F statistic:
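
Plugging the slide's numbers into the restriction F-test formula, and assuming one restriction and three estimated parameters in the unrestricted model (so 57 residual degrees of freedom):

```python
from scipy import stats

rss_u, rss_r, n, k_params, m = 1.2, 1.4, 60, 3, 1
f_stat = ((rss_r - rss_u) / m) / (rss_u / (n - k_params))  # = 9.5
crit = stats.f.ppf(0.95, dfn=m, dfd=n - k_params)          # about 4.01 for F(1, 57)
print(f_stat, crit, f_stat > crit)                         # 9.5 > 4.01: reject constant returns to scale
```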

Constant Returns to Scale (cont.) We would reject the null hypothesis of constant returns to scale, and therefore use the unrestricted model with capital and labour included separately. The null hypothesis is: H0: β1 + β2 = 1 (the coefficients on capital and labour sum to 1).

Conclusion When running an OLS regression, we need to assess the coefficients, t-statistics and diagnostic tests. We can also use the t-statistic to determine if a coefficient equals 1. The F-test can also be used to test a specific restriction in a model.