
Slide 1. Basic Econometrics, Chapter 5: TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing (Prof. Himayatullah, May 2004)

Slide 2. 5-1. Statistical Prerequisites
See Appendix A for key concepts such as probability, probability distributions, Type I error, Type II error, the level of significance, the power of a statistical test, and the confidence interval.

Slide 3. 5-2. Interval Estimation: Some Basic Ideas
How "close" is, say, β̂₂ to β₂?
Pr(β̂₂ - δ ≤ β₂ ≤ β̂₂ + δ) = 1 - α (5.2.1)
The random interval β̂₂ - δ ≤ β₂ ≤ β̂₂ + δ, if it exists, is known as a confidence interval;
β̂₂ - δ is the lower confidence limit and β̂₂ + δ is the upper confidence limit.

Slide 4. 5-2. Interval Estimation: Some Basic Ideas (continued)
(1 - α) is the confidence coefficient; α, with 0 < α < 1, is the level of significance.
Equation (5.2.1) does not mean that the probability of β₂ lying between the given limits is (1 - α); it means that the probability of constructing an interval that contains β₂ is (1 - α).
(β̂₂ - δ, β̂₂ + δ) is a random interval.

Slide 5. 5-2. Interval Estimation: Some Basic Ideas (continued)
In repeated sampling, the intervals will enclose the true value of the parameter in (1 - α)·100% of the cases.
For a specific sample, we cannot say that the probability is (1 - α) that a given fixed interval includes the true β₂.
If the sampling (probability) distributions of the estimators are known, one can make confidence-interval statements such as (5.2.1).
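Not part of the original slides: a minimal Monte Carlo sketch of this repeated-sampling interpretation. It draws many samples from a two-variable model with known parameters (the values 24, 0.5, and 6 are arbitrary illustration choices) and counts how often the conventional 95% t-interval for β₂ (introduced in Section 5-3 below) covers the true value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
beta1, beta2, sigma = 24.0, 0.5, 6.0        # "true" parameters (illustrative)
X = np.arange(80, 280, 20, dtype=float)     # fixed regressor values, n = 10
n, alpha = len(X), 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)

reps, covered = 10_000, 0
for _ in range(reps):
    Y = beta1 + beta2 * X + rng.normal(0, sigma, n)
    x = X - X.mean()
    b2 = (x * (Y - Y.mean())).sum() / (x ** 2).sum()       # OLS slope
    b1 = Y.mean() - b2 * X.mean()
    resid = Y - b1 - b2 * X
    se_b2 = np.sqrt(((resid ** 2).sum() / (n - 2)) / (x ** 2).sum())
    lo, hi = b2 - t_crit * se_b2, b2 + t_crit * se_b2      # t-interval for the slope
    covered += (lo <= beta2 <= hi)

print(f"Empirical coverage: {covered / reps:.3f} (nominal 0.95)")
```

The printed coverage should sit close to 0.95, which is exactly the sense in which (5.2.1) is a probability statement.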

Slide 6. 5-3. Confidence Intervals for Regression Coefficients
Z = (β̂₂ - β₂)/se(β̂₂) = (β̂₂ - β₂)·√(Σxᵢ²)/σ ~ N(0, 1) (5.3.1)
Since σ is unknown, we use σ̂ instead, so:
t = (β̂₂ - β₂)/se(β̂₂) = (β̂₂ - β₂)·√(Σxᵢ²)/σ̂ ~ t(n-2) (5.3.2)
This gives an interval for β₂:
Pr(-t_α/2 ≤ t ≤ t_α/2) = 1 - α (5.3.3)

Slide 7. 5-3. Confidence Intervals for Regression Coefficients (continued)
The confidence interval for β₂ is
Pr[β̂₂ - t_α/2·se(β̂₂) ≤ β₂ ≤ β̂₂ + t_α/2·se(β̂₂)] = 1 - α (5.3.5)
The confidence interval for β₁ is
Pr[β̂₁ - t_α/2·se(β̂₁) ≤ β₁ ≤ β̂₁ + t_α/2·se(β̂₁)] = 1 - α (5.3.7)
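A hedged sketch, not from the slides: intervals (5.3.5) and (5.3.7) computed by hand for the Table 3-2 consumption-income data behind regression (3.6.2), which the deck quotes later on slide 20; the data values are reproduced here from the textbook example and should give β̂₂ ≈ 0.5091.

```python
import numpy as np
from scipy import stats

# Table 3-2: weekly family income (X) and consumption expenditure (Y)
X = np.array([80, 100, 120, 140, 160, 180, 200, 220, 240, 260], dtype=float)
Y = np.array([70, 65, 90, 95, 110, 115, 120, 140, 155, 150], dtype=float)
n = len(Y)

x = X - X.mean()
b2 = (x * (Y - Y.mean())).sum() / (x ** 2).sum()           # slope estimate
b1 = Y.mean() - b2 * X.mean()                              # intercept estimate
resid = Y - b1 - b2 * X
sigma2_hat = (resid ** 2).sum() / (n - 2)

se_b2 = np.sqrt(sigma2_hat / (x ** 2).sum())
se_b1 = np.sqrt(sigma2_hat * (X ** 2).sum() / (n * (x ** 2).sum()))
t_crit = stats.t.ppf(0.975, df=n - 2)                      # alpha = 5%, two-sided

print(f"beta2: {b2:.4f}, 95% CI: [{b2 - t_crit * se_b2:.4f}, {b2 + t_crit * se_b2:.4f}]")
print(f"beta1: {b1:.4f}, 95% CI: [{b1 - t_crit * se_b1:.4f}, {b1 + t_crit * se_b1:.4f}]")
```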

Slide 8. 5-4. Confidence Interval for σ²
Pr[(n-2)·σ̂²/χ²_α/2 ≤ σ² ≤ (n-2)·σ̂²/χ²_(1-α/2)] = 1 - α (5.4.3)
The interpretation of this interval is: if we establish (1 - α) confidence limits on σ² and maintain a priori that these limits will include the true σ², we shall be right in the long run (1 - α)·100 percent of the time.
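A minimal sketch (my addition, not the slides') of interval (5.4.3) on the same Table 3-2 data; note that the textbook's χ²_α/2 denotes the upper-tail critical value, which corresponds to scipy's `chi2.ppf(1 - alpha/2, df)`.

```python
import numpy as np
from scipy import stats

X = np.array([80, 100, 120, 140, 160, 180, 200, 220, 240, 260], dtype=float)
Y = np.array([70, 65, 90, 95, 110, 115, 120, 140, 155, 150], dtype=float)
n = len(Y)
x = X - X.mean()
b2 = (x * (Y - Y.mean())).sum() / (x ** 2).sum()
resid = Y - (Y.mean() - b2 * X.mean()) - b2 * X
sigma2_hat = (resid ** 2).sum() / (n - 2)

alpha, df = 0.05, n - 2
chi2_upper = stats.chi2.ppf(1 - alpha / 2, df)   # textbook's chi2_{alpha/2}
chi2_lower = stats.chi2.ppf(alpha / 2, df)       # textbook's chi2_{1-alpha/2}
print(f"sigma^2_hat = {sigma2_hat:.2f}")
print(f"95% CI for sigma^2 (5.4.3): [{df * sigma2_hat / chi2_upper:.2f}, "
      f"{df * sigma2_hat / chi2_lower:.2f}]")
```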

Slide 9. 5-5. Hypothesis Testing: General Comments
The stated hypothesis is known as the null hypothesis, H₀.
H₀ is tested against an alternative hypothesis, H₁.
5-6. Hypothesis Testing: The Confidence-Interval Approach
One-sided or one-tail test: H₀: β₂ ≤ β₂* versus H₁: β₂ > β₂*

Slide 10. 5-6. Hypothesis Testing: The Confidence-Interval Approach (continued)
Two-sided or two-tail test: H₀: β₂ = β₂* versus H₁: β₂ ≠ β₂*
Construct β̂₂ - t_α/2·se(β̂₂) ≤ β₂ ≤ β̂₂ + t_α/2·se(β̂₂).
Values of β₂ lying in this interval are plausible under H₀ with 100·(1 - α)% confidence.
If β₂* lies in this interval, we do not reject H₀ (the finding is statistically insignificant).
If β₂* falls outside this interval, we reject H₀ (the finding is statistically significant).

Slide 11. 5-7. Hypothesis Testing: The Test-of-Significance Approach
A test of significance is a procedure by which sample results are used to verify the truth or falsity of a null hypothesis.
Testing the significance of regression coefficients: the t-test
Pr[β̂₂ - t_α/2·se(β̂₂) ≤ β₂ ≤ β̂₂ + t_α/2·se(β̂₂)] = 1 - α (5.7.2)

Slide 12. 5-7. Hypothesis Testing: The Test-of-Significance Approach (continued)
Table 5-1: Decision rules for the t-test of significance

  Type of hypothesis   H₀          H₁          Reject H₀ if
  Two-tail             β₂ = β₂*    β₂ ≠ β₂*    |t| > t_α/2, df
  Right-tail           β₂ ≤ β₂*    β₂ > β₂*    t > t_α, df
  Left-tail            β₂ ≥ β₂*    β₂ < β₂*    t < -t_α, df
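A sketch of the two-tail rule from Table 5-1 (my addition): test H₀: β₂ = β₂* against H₁: β₂ ≠ β₂* on the Table 3-2 data, with the hypothesized value β₂* = 0.3 chosen purely for illustration.

```python
import numpy as np
from scipy import stats

X = np.array([80, 100, 120, 140, 160, 180, 200, 220, 240, 260], dtype=float)
Y = np.array([70, 65, 90, 95, 110, 115, 120, 140, 155, 150], dtype=float)
n = len(Y)
x = X - X.mean()
b2 = (x * (Y - Y.mean())).sum() / (x ** 2).sum()
resid = Y - (Y.mean() - b2 * X.mean()) - b2 * X
se_b2 = np.sqrt(((resid ** 2).sum() / (n - 2)) / (x ** 2).sum())

beta2_star, alpha, df = 0.3, 0.05, n - 2          # hypothesized value (illustrative)
t_stat = (b2 - beta2_star) / se_b2
t_crit = stats.t.ppf(1 - alpha / 2, df)
p_value = 2 * stats.t.sf(abs(t_stat), df)         # two-tail p-value

print(f"t = {t_stat:.3f}, t_crit = {t_crit:.3f}, p = {p_value:.4g}")
print("Reject H0" if abs(t_stat) > t_crit else "Do not reject H0")
```

The one-tail rows of Table 5-1 follow the same pattern, using `stats.t.ppf(1 - alpha, df)` and the sign of the t statistic instead.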

Slide 13. 5-7. Hypothesis Testing: The Test-of-Significance Approach (continued)
Testing the significance of σ²: the χ² test
Under the normality assumption we have:
χ² = (n-2)·σ̂²/σ² ~ χ²(n-2) (5.4.1)
From (5.4.2) and (5.4.3) on page 520 we obtain the decision rules summarized in Table 5-2 (next slide).

Slide 14. 5-7. Hypothesis Testing: The Test-of-Significance Approach (continued)
Table 5-2: A summary of the χ² test

  H₀          H₁          Reject H₀ if
  σ² = σ₀²    σ² > σ₀²    df·σ̂²/σ₀² > χ²_α, df
  σ² = σ₀²    σ² < σ₀²    df·σ̂²/σ₀² < χ²_(1-α), df
  σ² = σ₀²    σ² ≠ σ₀²    df·σ̂²/σ₀² > χ²_α/2, df  or  < χ²_(1-α/2), df
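A sketch of Table 5-2's two-tail row (my addition): test H₀: σ² = σ₀² on the same residuals, with σ₀² = 50 chosen only for illustration.

```python
import numpy as np
from scipy import stats

X = np.array([80, 100, 120, 140, 160, 180, 200, 220, 240, 260], dtype=float)
Y = np.array([70, 65, 90, 95, 110, 115, 120, 140, 155, 150], dtype=float)
n = len(Y)
x = X - X.mean()
b2 = (x * (Y - Y.mean())).sum() / (x ** 2).sum()
resid = Y - (Y.mean() - b2 * X.mean()) - b2 * X
df = n - 2
sigma2_hat = (resid ** 2).sum() / df

sigma2_0, alpha = 50.0, 0.05                      # hypothesized variance (illustrative)
chi2_stat = df * sigma2_hat / sigma2_0            # test statistic from Table 5-2
hi = stats.chi2.ppf(1 - alpha / 2, df)            # chi2_{alpha/2, df}
lo = stats.chi2.ppf(alpha / 2, df)                # chi2_{1-alpha/2, df}
print(f"chi2 = {chi2_stat:.2f}, critical values: [{lo:.2f}, {hi:.2f}]")
print("Reject H0" if (chi2_stat > hi or chi2_stat < lo) else "Do not reject H0")
```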

Slide 15. 5-8. Hypothesis Testing: Some Practical Aspects
1) The meaning of "accepting" or "rejecting" a hypothesis
2) The null hypothesis and the rule of thumb
3) Forming the null and alternative hypotheses
4) Choosing α, the level of significance

Slide 16. 5-8. Hypothesis Testing: Some Practical Aspects (continued)
5) The exact level of significance: the p-value [see page 132]
6) Statistical significance versus practical significance
7) The choice between confidence-interval and test-of-significance approaches to hypothesis testing
[Warning: read pages 117-134 carefully]

Slide 17. 5-9. Regression Analysis and Analysis of Variance
TSS = ESS + RSS
F = [MSS of ESS]/[MSS of RSS] = β̂₂²·Σxᵢ²/σ̂² (5.9.1)
If the uᵢ are normally distributed and H₀: β₂ = 0 holds, then F follows the F distribution with 1 and n-2 degrees of freedom.

Slide 18. 5-9. Regression Analysis and Analysis of Variance (continued)
F provides a test statistic for the null hypothesis that the true β₂ is zero: compare this F ratio with the critical F value obtained from the F tables at the chosen level of significance, or obtain the p-value of the computed F statistic, to make the decision.

Slide 19. 5-9. Regression Analysis and Analysis of Variance (continued)
Table 5-3: ANOVA table for the two-variable regression model

  Source of variation       Sum of squares (SS)   Degrees of freedom (df)   Mean sum of squares (MSS)
  ESS (due to regression)   Σŷᵢ² = β̂₂²·Σxᵢ²       1                         β̂₂²·Σxᵢ²
  RSS (due to residuals)    Σûᵢ²                  n-2                       Σûᵢ²/(n-2) = σ̂²
  TSS                       Σyᵢ²                  n-1
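A sketch (my addition) that builds the entries of Table 5-3 and the F ratio (5.9.1) from the Table 3-2 data; with these figures F should come out near the value reported on slide 22 and equal the square of the slope's t ratio.

```python
import numpy as np
from scipy import stats

X = np.array([80, 100, 120, 140, 160, 180, 200, 220, 240, 260], dtype=float)
Y = np.array([70, 65, 90, 95, 110, 115, 120, 140, 155, 150], dtype=float)
n = len(Y)
x, y = X - X.mean(), Y - Y.mean()
b2 = (x * y).sum() / (x ** 2).sum()

ESS = b2 ** 2 * (x ** 2).sum()          # explained sum of squares, df = 1
TSS = (y ** 2).sum()                    # total sum of squares, df = n - 1
RSS = TSS - ESS                         # residual sum of squares, df = n - 2

F = (ESS / 1) / (RSS / (n - 2))         # equation (5.9.1)
p_value = stats.f.sf(F, 1, n - 2)
print(f"ESS = {ESS:.2f}, RSS = {RSS:.2f}, TSS = {TSS:.2f}")
print(f"F(1, {n - 2}) = {F:.2f}, p = {p_value:.3g}")
```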

Slide 20. 5-10. Application of Regression Analysis: The Problem of Prediction
From the data of Table 3-2 we obtained the sample regression (3.6.2): Ŷᵢ = 24.4545 + 0.5091Xᵢ, where Ŷᵢ is the estimator of the true E(Yᵢ).
There are two kinds of prediction, as follows:

Slide 21. 5-10. Application of Regression Analysis: The Problem of Prediction (continued)
Mean prediction: prediction of the conditional mean value of Y corresponding to a chosen X, say X₀, that is, the point on the population regression line itself (see pages 137-138 for details).
Individual prediction: prediction of an individual Y value corresponding to X₀ (see pages 138-139 for details).
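A sketch (my addition) using the standard two-variable formulas for the two prediction variances, which the slides defer to pages 137-139: var(Ŷ₀) = σ̂²·[1/n + (X₀ - X̄)²/Σxᵢ²] for the mean prediction and σ̂²·[1 + 1/n + (X₀ - X̄)²/Σxᵢ²] for an individual prediction. X₀ = 100 is just a convenient value inside the sample range.

```python
import numpy as np
from scipy import stats

X = np.array([80, 100, 120, 140, 160, 180, 200, 220, 240, 260], dtype=float)
Y = np.array([70, 65, 90, 95, 110, 115, 120, 140, 155, 150], dtype=float)
n = len(Y)
x = X - X.mean()
b2 = (x * (Y - Y.mean())).sum() / (x ** 2).sum()
b1 = Y.mean() - b2 * X.mean()
sigma2_hat = ((Y - b1 - b2 * X) ** 2).sum() / (n - 2)

X0 = 100.0                                       # chosen X value inside the sample range
Y0_hat = b1 + b2 * X0
t_crit = stats.t.ppf(0.975, n - 2)
leverage = 1 / n + (X0 - X.mean()) ** 2 / (x ** 2).sum()

se_mean = np.sqrt(sigma2_hat * leverage)         # mean prediction: point on the population line
se_indiv = np.sqrt(sigma2_hat * (1 + leverage))  # individual prediction: adds the error variance

print(f"Y_hat at X0 = {Y0_hat:.2f}")
print(f"95% interval for E(Y|X0): +/- {t_crit * se_mean:.2f}")
print(f"95% interval for individual Y0: +/- {t_crit * se_indiv:.2f}")
```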

Slide 22. 5-11. Reporting the Results of Regression Analysis
An illustration:
Ŷᵢ = 24.4545 + 0.5091Xᵢ (5.11.1)
se = (6.4138) (0.0357)    r² = 0.9621
t = (3.8128) (14.2405)    df = 8
p = (0.002517) (0.000000289)    F₁,₈ = 202.87
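Not in the slides: a statsmodels sketch that reproduces the reported quantities (coefficients, standard errors, t ratios, p-values, r², F) in one call, assuming statsmodels is installed.

```python
import numpy as np
import statsmodels.api as sm

X = np.array([80, 100, 120, 140, 160, 180, 200, 220, 240, 260], dtype=float)
Y = np.array([70, 65, 90, 95, 110, 115, 120, 140, 155, 150], dtype=float)

model = sm.OLS(Y, sm.add_constant(X)).fit()
print(model.params)                  # intercept ~ 24.4545, slope ~ 0.5091
print(model.bse)                     # standard errors ~ (6.4138, 0.0357)
print(model.tvalues)                 # t ratios ~ (3.8128, 14.2405)
print(model.pvalues)                 # exact p-values
print(model.rsquared, model.fvalue)  # r^2 ~ 0.9621, F ~ 202.87
print(model.summary())               # conventional reporting table
```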

Slide 23. 5-12. Evaluating the Results of Regression Analysis
Normality test: the chi-square (χ²) goodness-of-fit test
χ²_(N-1-k) = Σ(Oᵢ - Eᵢ)²/Eᵢ (5.12.1)
Oᵢ is the observed number of residuals (ûᵢ) in interval i; Eᵢ is the expected number of residuals in interval i; N is the number of classes or groups; k is the number of parameters to be estimated.
If the p-value of the obtained χ²_(N-1-k) is high (i.e., χ²_(N-1-k) is small), the normality hypothesis cannot be rejected.

Slide 24. 5-12. Evaluating the Results of Regression Analysis (continued)
Normality test: the chi-square (χ²) goodness-of-fit test
H₀: uᵢ is normally distributed
H₁: uᵢ is not normally distributed
Calculated χ²_(N-1-k) = Σ(Oᵢ - Eᵢ)²/Eᵢ (5.12.1)
Decision rule: if the calculated χ²_(N-1-k) exceeds the critical χ²_(N-1-k), then H₀ can be rejected.
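A rough sketch (my addition) of how (5.12.1) can be applied: bin the residuals into N equal-probability classes of the fitted normal, compare observed and expected counts, and refer the statistic to χ² with N-1-k df. The choices N = 4 and k = 2 (mean and standard deviation estimated) are arbitrary, and with only 10 residuals the test is illustrative at best.

```python
import numpy as np
from scipy import stats

# residuals from the Table 3-2 regression, recomputed here
X = np.array([80, 100, 120, 140, 160, 180, 200, 220, 240, 260], dtype=float)
Y = np.array([70, 65, 90, 95, 110, 115, 120, 140, 155, 150], dtype=float)
x = X - X.mean()
b2 = (x * (Y - Y.mean())).sum() / (x ** 2).sum()
u = Y - (Y.mean() - b2 * X.mean()) - b2 * X

N, k = 4, 2                                       # classes; parameters estimated from u
edges = stats.norm.ppf([0.25, 0.50, 0.75], loc=u.mean(), scale=u.std(ddof=1))
lo_edges = np.r_[-np.inf, edges]
hi_edges = np.r_[edges, np.inf]
observed = np.array([((u >= a) & (u < b)).sum() for a, b in zip(lo_edges, hi_edges)])
expected = np.full(N, len(u) / N)                 # equal-probability bins under normality

chi2_stat = ((observed - expected) ** 2 / expected).sum()   # (5.12.1)
df = N - 1 - k
print(f"chi2 = {chi2_stat:.3f}, df = {df}, p = {stats.chi2.sf(chi2_stat, df):.3f}")
```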

Slide 25. 5-12. Evaluating the Results of Regression Analysis (continued)
The Jarque-Bera (JB) test of normality
This test first computes the skewness (S) and kurtosis (K) of the residuals and uses the following statistic:
JB = n·[S²/6 + (K - 3)²/24] (5.12.2)
Mean = x̄ = Σxᵢ/n;  SD² = Σ(xᵢ - x̄)²/(n-1)
S = m₃/m₂^(3/2);  K = m₄/m₂²;  m_k = Σ(xᵢ - x̄)^k/n

Slide 26. 5-12. (Continued)
Under the null hypothesis H₀ that the residuals are normally distributed, Jarque and Bera showed that in large samples (asymptotically) the JB statistic given in (5.12.2) follows the chi-square distribution with 2 df.
If the p-value of the computed chi-square statistic in an application is sufficiently low, one can reject the hypothesis that the residuals are normally distributed; but if the p-value is reasonably high, one does not reject the normality assumption.
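A sketch (my addition) of (5.12.2) computed directly from the moment definitions on the previous slide, with scipy's built-in `jarque_bera` as a cross-check (it uses the same formula).

```python
import numpy as np
from scipy import stats

X = np.array([80, 100, 120, 140, 160, 180, 200, 220, 240, 260], dtype=float)
Y = np.array([70, 65, 90, 95, 110, 115, 120, 140, 155, 150], dtype=float)
x = X - X.mean()
b2 = (x * (Y - Y.mean())).sum() / (x ** 2).sum()
u = Y - (Y.mean() - b2 * X.mean()) - b2 * X       # OLS residuals
n = len(u)

def m(k):
    """k-th sample central moment, as defined on slide 25."""
    return ((u - u.mean()) ** k).sum() / n

S = m(3) / m(2) ** 1.5                            # skewness
K = m(4) / m(2) ** 2                              # kurtosis
JB = n * (S ** 2 / 6 + (K - 3) ** 2 / 24)         # (5.12.2)
p = stats.chi2.sf(JB, df=2)                       # asymptotic chi-square(2) p-value
print(f"JB = {JB:.3f}, p = {p:.3f}")
print(stats.jarque_bera(u))                       # library cross-check
```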

Slide 27. 5-13. Summary and Conclusions
1. Estimation and hypothesis testing constitute the two main branches of classical statistics.
2. Hypothesis testing answers this question: is a given finding compatible with a stated hypothesis or not?
3. There are two mutually complementary approaches to answering this question: the confidence interval and the test of significance.

Slide 28. 5-13. Summary and Conclusions (continued)
4. The confidence-interval approach has a specified probability of including within its limits the true value of the unknown parameter. If the null-hypothesized value lies in the confidence interval, H₀ is not rejected; if it lies outside this interval, H₀ can be rejected.

Slide 29. 5-13. Summary and Conclusions (continued)
5. The significance-test procedure develops a test statistic that follows a well-defined probability distribution (such as the normal, t, F, or chi-square). Once a test statistic is computed, its p-value can easily be obtained. The p-value of a test is the lowest significance level at which we would reject H₀; it gives the exact probability of obtaining the estimated test statistic under H₀. If the p-value is small, one can reject H₀; if it is large, one may not reject H₀.

Slide 30. 5-13. Summary and Conclusions (continued)
6. A Type I error is the error of rejecting a true hypothesis; a Type II error is the error of accepting a false hypothesis. In practice, one should be careful in fixing the level of significance α, the probability of committing a Type I error, at arbitrary values such as 1%, 5%, or 10%. It is better to quote the p-value of the test statistic.

Slide 31. 5-13. Summary and Conclusions (continued)
7. This chapter introduced the normality test to find out whether uᵢ follows the normal distribution. Since in small samples the t, F, and chi-square tests require the normality assumption, it is important that this assumption be checked formally.

Slide 32. 5-13. Summary and Conclusions (ended)
8. If the model is deemed practically adequate, it may be used for forecasting purposes. But one should not go too far outside the sample range of the regressor values; otherwise, forecasting errors can increase dramatically.

