1
REGRESSION (CONTINUED)
LECTURE 4 REGRESSION (CONTINUED): Analysis of Variance; Standard Errors & Confidence Intervals; Prediction Intervals; Examination of Residuals
Supplementary Readings: Wilks, chapters 6 and 9; Bevington, P.R., and Robinson, D.K., Data Reduction and Error Analysis for the Physical Sciences, McGraw-Hill, 1992.
2
What should we require of them?
Recall from last time… Define e_i = y_i − ŷ_i; we call these the residuals. What should we require of them?
3
What should we require of them?
Recall from last time… What should we require of them? That they be GAUSSIAN.
4
Analysis of Variance (“ANOVA”)?
Recall from last time… Analysis of Variance (“ANOVA”)? [Figure: χ²(n=5), Gaussian data]
5
Analysis of Variance (“ANOVA”)
… is guaranteed by the linear regression procedure. Why “n-2”?
6
Analysis of Variance (“ANOVA”)
Define:
7
Analysis of Variance (“ANOVA”) 1 and n-2 degrees of freedom
Define: 1 and n-2 degrees of freedom
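The definitions announced on these two slides did not survive transcription. A standard reconstruction of the sums of squares and the F ratio, consistent with the table on the next slide (though not necessarily in the lecture's own notation), is:

```latex
\begin{aligned}
\mathrm{SST} &= \sum_{i=1}^{n} (y_i - \bar{y})^2 && \text{(total sum of squares, $n-1$ df)} \\
\mathrm{SSR} &= \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2 && \text{(regression sum of squares, $1$ df)} \\
\mathrm{SSE} &= \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 = \sum_{i=1}^{n} e_i^2 && \text{(residual sum of squares, $n-2$ df)} \\
\mathrm{SST} &= \mathrm{SSR} + \mathrm{SSE}, \qquad
F = \frac{\mathrm{MSR}}{\mathrm{MSE}} = \frac{\mathrm{SSR}/1}{\mathrm{SSE}/(n-2)} \sim F_{1,\,n-2}
\end{aligned}
```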
8
Analysis of Variance (“ANOVA”) 1 and n-2 degrees of freedom
Source       df     SS     MS          F-test
Total        n-1    SST
Regression   1      SSR    MSR = SSR   MSR/MSE
Residual     n-2    SSE    MSE = se²
(F-test with 1 and n-2 degrees of freedom)
9
Analysis of Variance (“ANOVA”) for Simple Linear Regression
Source       df     SS     MS          F-test
Total        n-1    SST
Regression   1      SSR    MSR = SSR   MSR/MSE
Residual     n-2    SSE    MSE = se²
We’ll discuss ANOVA further in the next lecture (“multivariate regression”)
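As a concrete illustration of how the table entries are computed, here is a minimal NumPy sketch; the function and variable names are my own, not the lecture's:

```python
import numpy as np

def anova_simple_linear(x, y):
    """ANOVA table entries for a simple linear regression y = a + b*x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y)
    b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    a = y.mean() - b * x.mean()
    y_hat = a + b * x

    sst = np.sum((y - y.mean()) ** 2)        # total SS,      n-1 df
    ssr = np.sum((y_hat - y.mean()) ** 2)    # regression SS, 1 df
    sse = np.sum((y - y_hat) ** 2)           # residual SS,   n-2 df

    msr = ssr / 1.0                          # regression mean square
    mse = sse / (n - 2)                      # residual mean square = se**2
    f = msr / mse                            # compare with F(1, n-2)
    return {"SST": sst, "SSR": ssr, "SSE": sse, "MSR": msr, "MSE": mse, "F": f}
```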
10
‘Goodness of Fit’
11
‘Goodness of Fit’ For simple linear regression
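The expression that followed on this slide is presumably the coefficient of determination; for simple linear regression it reduces to the squared linear correlation between x and y:

```latex
R^2 = \frac{\mathrm{SSR}}{\mathrm{SST}} = 1 - \frac{\mathrm{SSE}}{\mathrm{SST}} = r_{xy}^{2}
\qquad \text{(simple linear regression)}
```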
12
‘Goodness of Fit’ Outside the “support” of the regression (beyond the range of predictor values used in the fit), the fitted relationship cannot, in general, be assumed to hold…
13
‘Goodness of Fit’ Outside the “support” of the regression (beyond the range of predictor values used in the fit), the fitted relationship cannot, in general, be assumed to hold…
14
‘Goodness of Fit’ Reliability Bias
15
Analysis of Variance (“ANOVA”)
Under Gaussian assumptions, the linear regression estimates of the parameters a and b are unbiased and are themselves Gaussian distributed. The standard errors in the regression parameters are:
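The standard-error expressions themselves were lost in transcription; the usual forms for simple linear regression (a reconstruction, with s_e² = SSE/(n-2)) are:

```latex
s_b = \frac{s_e}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2}}, \qquad
s_a = s_e \sqrt{\frac{1}{n} + \frac{\bar{x}^{\,2}}{\sum_{i=1}^{n}(x_i-\bar{x})^2}}
```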
16
Confidence Intervals The estimated regression slope ‘b’ is likely to be within some range of the true ‘b’
17
Confidence Intervals This naturally defines a t test for the presence of a trend:
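A conventional statement of the interval and of the corresponding trend test (standard results for simple linear regression; the slide's own notation may differ):

```latex
\hat{b} \pm t_{\alpha/2,\;n-2}\, s_b, \qquad
t = \frac{\hat{b}}{s_b} \sim t_{n-2} \quad \text{under } H_0\colon b = 0
```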
18
Prediction Intervals The MSE in a predicted value, or ‘Prediction Error’,
is larger than the nominal MSE, and it increases as the predictor value at which the prediction is made departs from its sample mean. Note that s_y approaches s_e as the ‘training’ sample becomes large.
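In its standard form (a reconstruction; x_0 denotes the predictor value at which the prediction is made), the prediction error referred to above is:

```latex
s_y^2 = s_e^2\left[\,1 + \frac{1}{n} + \frac{(x_0-\bar{x})^2}{\sum_{i=1}^{n}(x_i-\bar{x})^2}\,\right] \;\ge\; s_e^2,
\qquad s_y^2 \to s_e^2 \ \text{as } n \to \infty
```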
19
Linear Correlation ‘r’ suffers from sampling error both in the regression slope and the estimates of variance…
20
Linear Correlation ‘r’ suffers from sampling error both in the regression slope and the estimates of variance…
21
Linear Correlation Coefficient
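One common way of accounting for that sampling error is a t-test on r, included here for reference as a standard result rather than the lecture's own derivation:

```latex
t = r\,\sqrt{\frac{n-2}{1-r^2}} \;\sim\; t_{n-2} \quad \text{under } H_0\colon \rho = 0
```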
22
Examining Residuals Heteroscedasticity
A trend in residual variance (heteroscedasticity) violates the assumption of identically distributed Gaussian residuals…
23
Examining Residuals Heteroscedasticity
Often a simple transformation of the original data will yield more closely Gaussian residuals…
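A minimal sketch of the idea, assuming a positive-valued predictand whose scatter grows with its level; the log transform used here is one common choice, not something prescribed by the lecture:

```python
import numpy as np

def fit_line(x, y):
    """Least-squares intercept and slope."""
    b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    return y.mean() - b * x.mean(), b

x = np.linspace(1.0, 10.0, 50)
y = np.exp(0.3 * x + np.random.normal(scale=0.2, size=x.size))  # multiplicative noise

# Residual spread grows with x when fitting the raw data...
a_raw, b_raw = fit_line(x, y)
resid_raw = y - (a_raw + b_raw * x)

# ...but is roughly constant after a log transform of the predictand.
a_log, b_log = fit_line(x, np.log(y))
resid_log = np.log(y) - (a_log + b_log * x)
```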
24
Examining Residuals Leverage Points can still be a problem!
25
Examining Residuals Autocorrelation Durbin-Watson Statistic
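For reference, the Durbin-Watson statistic computed from the regression residuals e_t is (a standard definition, where ρ₁ is the lag-one autocorrelation of the residuals):

```latex
d = \frac{\sum_{t=2}^{n} (e_t - e_{t-1})^2}{\sum_{t=1}^{n} e_t^{2}} \;\approx\; 2\,(1-\rho_1)
```

Values of d near 2 indicate little lag-one autocorrelation in the residuals; values well below 2 indicate positive serial correlation.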
26
Examining Residuals Autocorrelation
Suppose we have the simple (‘first-order autoregressive’) model. Then we can still use all of the results based on Gaussian statistics, but with a modified (effective) sample size. For example:
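The model and the modified sample size are reconstructed below in a commonly quoted form (following Wilks; the lecture's own symbols may differ), where ρ₁ is the lag-one autocorrelation and the model is written for anomalies about the mean:

```latex
y_t = \rho_1\, y_{t-1} + \varepsilon_t, \qquad
n' = n\,\frac{1-\rho_1}{1+\rho_1} \qquad \text{(effective sample size for tests on a mean)}
```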
27
Examining Residuals Autocorrelation
Suppose we have the simple (‘first-order autoregressive’) model. Then we can still use all of the results based on Gaussian statistics, but with a modified (effective) sample size; the appropriate modification is different for tests of variance.
28
Examining Residuals Autocorrelation
Suppose we have the simple (‘first-order autoregressive’) model. Then we can still use all of the results based on Gaussian statistics, but with a modified (effective) sample size; the appropriate modification is different again for correlations.
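For reference, counterparts commonly quoted in the climate-statistics literature, which may not be exactly the forms shown in the lecture: one for tests on a variance, and one for the correlation between two series with lag-one autocorrelations ρ₁ and ρ₁′:

```latex
n'_{\mathrm{var}} = n\,\frac{1-\rho_1^{2}}{1+\rho_1^{2}}, \qquad
n'_{\mathrm{corr}} \approx n\,\frac{1-\rho_1\,\rho_1'}{1+\rho_1\,\rho_1'}
```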
29
We can remove the serial correlation through
Examining Residuals Suppose we have the simple (‘first-order autoregressive’) model. We can remove the serial correlation through…
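The method cut off at the end of this slide is most likely pre-whitening of the series with its lag-one autocorrelation; a minimal sketch of that idea follows (the estimator of rho1 and the differencing form are assumptions, not taken from the slide):

```python
import numpy as np

def prewhiten(y):
    """Remove first-order (AR(1)) serial correlation: y'_t = y_t - rho1 * y_{t-1}."""
    y = np.asarray(y, float)
    anom = y - y.mean()
    rho1 = np.sum(anom[1:] * anom[:-1]) / np.sum(anom ** 2)  # lag-1 autocorrelation
    return y[1:] - rho1 * y[:-1], rho1
```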