Inference about the Slope and Intercept
Recall, we have established that the least squares estimates b0 and b1 are linear combinations of the Yi's. Further, we have shown that they are unbiased and have variances Var(b1) = σ²/SXX and Var(b0) = σ²(1/n + X̄²/SXX), where SXX = Σ(Xi − X̄)². In order to make inference we assume that the εi's have a Normal distribution, that is εi ~ N(0, σ²). This in turn means that the Yi's are normally distributed. Since both b0 and b1 are linear combinations of the Yi's, they also have a Normal distribution.
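To make these formulas concrete, here is a minimal numpy sketch on simulated data (not a course dataset; all names and values are illustrative) that computes b0 and b1 and plugs the simulation's true σ² into the two variance formulas:

```python
import numpy as np

# Hypothetical simulated data; any paired (x, y) sample works for the illustration.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 25)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, size=x.size)

n = x.size
x_bar, y_bar = x.mean(), y.mean()
Sxx = np.sum((x - x_bar) ** 2)
Sxy = np.sum((x - x_bar) * (y - y_bar))

b1 = Sxy / Sxx                                 # least squares slope
b0 = y_bar - b1 * x_bar                        # least squares intercept

sigma2 = 1.0 ** 2                              # true error variance used in the simulation
var_b1 = sigma2 / Sxx                          # Var(b1) = sigma^2 / Sxx
var_b0 = sigma2 * (1.0 / n + x_bar ** 2 / Sxx) # Var(b0) = sigma^2 (1/n + xbar^2 / Sxx)
print(b0, b1, var_b0, var_b1)
```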
Inference for β1 in Normal Error Regression Model
The least squares estimate of β1 is b1. Because it is a linear combination of normally distributed random variables (the Yi's), we have the following result: b1 ~ N(β1, σ²/SXX). We estimate the variance of b1 by S²/SXX, where S² is the MSE, which has n-2 df. Claim: The distribution of (b1 − β1) / (S/√SXX) is t with n-2 df. Proof: sketched below.
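A brief worked version of the standard argument behind the claim, using the fact that b1 and S² are independent under the normal error model:

```latex
% Sketch: why (b_1 - \beta_1)/(S/\sqrt{S_{XX}}) is t with n-2 df.
% Under the normal error model, b_1 and S^2 (= MSE) are independent.
\[
  Z \;=\; \frac{b_1 - \beta_1}{\sigma/\sqrt{S_{XX}}} \;\sim\; N(0,1),
  \qquad
  \frac{(n-2)S^2}{\sigma^2} \;\sim\; \chi^2_{\,n-2}.
\]
\[
  \frac{b_1 - \beta_1}{S/\sqrt{S_{XX}}}
  \;=\;
  \frac{Z}{\sqrt{\bigl[(n-2)S^2/\sigma^2\bigr]\big/(n-2)}}
  \;\sim\; t_{\,n-2}.
\]
```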
Tests and CIs for β1
The hypothesis of interest about the slope in a Normal linear regression model is H0: β1 = 0. The test statistic for this hypothesis is t* = b1 / (S/√SXX), the slope estimate divided by its estimated standard deviation. We compare this test statistic to a t distribution with n-2 df to obtain the P-value…. Further, a 100(1-α)% CI for β1 is b1 ± tα/2, n-2 · S/√SXX, where tα/2, n-2 is the upper α/2 quantile of the t distribution with n-2 df.
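A minimal Python/scipy sketch of these calculations, assuming x and y are numpy arrays; the function name and layout are illustrative, not part of the course material:

```python
import numpy as np
from scipy import stats

def slope_test_and_ci(x, y, alpha=0.05):
    """t test of H0: beta1 = 0 and a 100(1-alpha)% CI for beta1 (simple linear regression)."""
    n = x.size
    x_bar, y_bar = x.mean(), y.mean()
    Sxx = np.sum((x - x_bar) ** 2)
    b1 = np.sum((x - x_bar) * (y - y_bar)) / Sxx
    b0 = y_bar - b1 * x_bar
    resid = y - (b0 + b1 * x)
    s2 = np.sum(resid ** 2) / (n - 2)               # MSE, with n-2 df
    se_b1 = np.sqrt(s2 / Sxx)                       # estimated sd of b1
    t_stat = b1 / se_b1                             # test statistic for H0: beta1 = 0
    p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
    ci = (b1 - t_crit * se_b1, b1 + t_crit * se_b1)
    return t_stat, p_value, ci
```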
Important Comment
Similar results can be obtained about the intercept β0 in a Normal linear regression model; see the book for more details. However, in many cases the intercept does not have any practical meaning, and therefore it is not necessary to make inference about it.
Example
We have data on Violent and Property Crimes in 23 US Metropolitan Areas. The data contain the following three variables:
violcrim = number of violent crimes
propcrim = number of property crimes
popn = population in 1000's
We are interested in the relationship between the size of the city and the number of violent crimes….
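A hedged sketch of how such a fit could be run in Python with statsmodels. The file name crime.csv and the loading step are hypothetical, since the dataset itself is not reproduced here; the variable names match the slide:

```python
import pandas as pd
import statsmodels.api as sm

# "crime.csv" is a hypothetical file name standing in for the course dataset.
crime = pd.read_csv("crime.csv")          # expected columns: violcrim, propcrim, popn
X = sm.add_constant(crime["popn"])        # popn = population in 1000's, plus an intercept
fit = sm.OLS(crime["violcrim"], X).fit()  # regress violent crimes on city size
print(fit.summary())                      # slope and intercept estimates, t tests, R^2, ANOVA pieces
```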
Prediction of Mean Response
Very often, we want to use the estimated regression line to make a prediction about the mean of the response for a particular X value (assumed to be fixed). We know that the least squares line Ŷ = b0 + b1X is an estimate of E(Y) = β0 + β1X. Now, we can pick a point Xh in the range of the X's used to fit the regression; then Ŷh = b0 + b1Xh is an estimate of E(Yh) = β0 + β1Xh. Claim: Var(Ŷh) = σ²(1/n + (Xh − X̄)²/SXX). Proof: sketched below. This is the variance of the estimate of E(Y) when X = Xh.
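A short worked derivation of the claimed variance, obtained by writing Ŷh in terms of Ȳ and b1:

```latex
% Variance of \hat{Y}_h = b_0 + b_1 X_h.  Since b_0 = \bar{Y} - b_1\bar{X},
% we can write \hat{Y}_h = \bar{Y} + b_1 (X_h - \bar{X}); \bar{Y} is
% uncorrelated with b_1, so the two variances add.
\[
  \operatorname{Var}(\hat{Y}_h)
  \;=\; \operatorname{Var}(\bar{Y}) + (X_h - \bar{X})^2 \operatorname{Var}(b_1)
  \;=\; \frac{\sigma^2}{n} + (X_h - \bar{X})^2\,\frac{\sigma^2}{S_{XX}}
  \;=\; \sigma^2\!\left[\frac{1}{n} + \frac{(X_h - \bar{X})^2}{S_{XX}}\right].
\]
```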
Confidence Interval for E(Yh)
For a given Xh, a 100(1-α)% CI for the mean value of Y is Ŷh ± tα/2, n-2 · s(Ŷh), where s(Ŷh) = S·√(1/n + (Xh − X̄)²/SXX). Note, the CI above will be wider the further Xh is from X̄.
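A minimal Python sketch of this interval, assuming x and y are numpy arrays and x_h is a scalar in the range of the data (function name illustrative):

```python
import numpy as np
from scipy import stats

def mean_response_ci(x, y, x_h, alpha=0.05):
    """100(1-alpha)% CI for E(Y) at X = x_h in simple linear regression."""
    n = x.size
    x_bar = x.mean()
    Sxx = np.sum((x - x_bar) ** 2)
    b1 = np.sum((x - x_bar) * (y - y.mean())) / Sxx
    b0 = y.mean() - b1 * x_bar
    s2 = np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2)          # MSE
    y_hat = b0 + b1 * x_h                                    # estimate of E(Y) at x_h
    se_fit = np.sqrt(s2 * (1.0 / n + (x_h - x_bar) ** 2 / Sxx))
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
    return y_hat - t_crit * se_fit, y_hat + t_crit * se_fit
```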
Example
Consider the snow gauge data. Suppose we wish to predict the mean loggain when the device was calibrated at density 0.5, that is, when Xh = 0.5….
Prediction of New Observation
We want to use the regression line to predict a particular value of Y for a given X = Xh,new, a new point taken after the n observations. The predicted value of a new point measured when X = Xh,new is Ŷh,new = b0 + b1Xh,new. Note, this predicted value is the same as the estimate of E(Y) at Xh,new, but it should have larger variance. The predicted value has two sources of variability. One is due to the regression line being estimated by b0 + b1X. The second one is due to εh,new, i.e., points don't fall exactly on the line. To calculate the variance of the prediction we look at the difference Yh,new − Ŷh,new.
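A short worked derivation of the resulting variance, using the fact that the new error εh,new is independent of the fitted line:

```latex
% The new observation Y_{h,new} = \beta_0 + \beta_1 X_{h,new} + \varepsilon_{h,new}
% is independent of the n points used to fit the line, so the two sources of
% variability add:
\[
  \operatorname{Var}(Y_{h,\mathrm{new}} - \hat{Y}_{h,\mathrm{new}})
  \;=\; \sigma^2 + \operatorname{Var}(\hat{Y}_{h,\mathrm{new}})
  \;=\; \sigma^2\!\left[1 + \frac{1}{n} + \frac{(X_{h,\mathrm{new}} - \bar{X})^2}{S_{XX}}\right].
\]
```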
Prediction Interval for New Observation
A 100(1-α)% prediction interval for Yh,new when X = Xh,new is Ŷh,new ± tα/2, n-2 · S·√(1 + 1/n + (Xh,new − X̄)²/SXX). This is not a confidence interval; CIs are for parameters, and here we are estimating a value of a random variable.
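A minimal Python sketch of the prediction interval; note that the only change from the mean-response CI is the extra "1 +" inside the square root (names are illustrative):

```python
import numpy as np
from scipy import stats

def prediction_interval(x, y, x_new, alpha=0.05):
    """100(1-alpha)% prediction interval for a new Y observed at X = x_new."""
    n = x.size
    x_bar = x.mean()
    Sxx = np.sum((x - x_bar) ** 2)
    b1 = np.sum((x - x_bar) * (y - y.mean())) / Sxx
    b0 = y.mean() - b1 * x_bar
    s2 = np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2)          # MSE
    y_hat = b0 + b1 * x_new
    # Same standard error as the mean-response CI, plus the extra "1 +" for epsilon_new.
    se_pred = np.sqrt(s2 * (1.0 + 1.0 / n + (x_new - x_bar) ** 2 / Sxx))
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
    return y_hat - t_crit * se_pred, y_hat + t_crit * se_pred
```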
Confidence Bands for E(Y)
Confidence bands capture the true mean of Y, E(Y) = β0 + β1X, everywhere over the range of the data. For this we use the Working-Hotelling procedure, which gives the following boundary values at any given Xh: Ŷh ± W·s(Ŷh), where W² = 2·F(2, n-2); α and F(2, n-2); α is the upper α-quantile of an F distribution with 2 and n-2 df (Table B.4).
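A minimal Python sketch of the Working-Hotelling band evaluated over a grid of X values, assuming numpy arrays (names illustrative):

```python
import numpy as np
from scipy import stats

def working_hotelling_band(x, y, x_grid, alpha=0.05):
    """Simultaneous 100(1-alpha)% confidence band for E(Y) over the points in x_grid."""
    n = x.size
    x_bar = x.mean()
    Sxx = np.sum((x - x_bar) ** 2)
    b1 = np.sum((x - x_bar) * (y - y.mean())) / Sxx
    b0 = y.mean() - b1 * x_bar
    s2 = np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2)          # MSE
    W = np.sqrt(2 * stats.f.ppf(1 - alpha, 2, n - 2))        # Working-Hotelling multiplier
    y_hat = b0 + b1 * x_grid
    se_fit = np.sqrt(s2 * (1.0 / n + (x_grid - x_bar) ** 2 / Sxx))
    return y_hat - W * se_fit, y_hat + W * se_fit
```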
Decomposition of Sum of Squares
The total sum of squares (SS) in the response variable is SSTO = Σ(Yi − Ȳ)². The total SS can be decomposed into two main sources: error SS and regression SS. The error SS is SSE = Σ(Yi − Ŷi)². The regression SS is SSR = Σ(Ŷi − Ȳ)². It is the amount of variation in the Yi's that is explained by the linear relationship of Y with X.
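A small Python helper that computes the three sums of squares and can be used to verify numerically that SSTO = SSR + SSE (names illustrative):

```python
import numpy as np

def ss_decomposition(x, y):
    """Return (SSTO, SSR, SSE) for the simple linear regression of y on x."""
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    y_hat = b0 + b1 * x
    ssto = np.sum((y - y.mean()) ** 2)      # total SS
    sse = np.sum((y - y_hat) ** 2)          # error SS
    ssr = np.sum((y_hat - y.mean()) ** 2)   # regression SS
    return ssto, ssr, sse                   # ssto == ssr + sse up to rounding error
```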
Claims
First, SSTO = SSR + SSE, that is, Σ(Yi − Ȳ)² = Σ(Ŷi − Ȳ)² + Σ(Yi − Ŷi)². Proof: see the sketch below.
An alternative decomposition is …. Proof: exercise.
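A worked sketch of the first claim:

```latex
% Write Y_i - \bar{Y} = (\hat{Y}_i - \bar{Y}) + (Y_i - \hat{Y}_i) and expand the square:
\[
  \sum_i (Y_i - \bar{Y})^2
  = \sum_i (\hat{Y}_i - \bar{Y})^2
  + \sum_i (Y_i - \hat{Y}_i)^2
  + 2\sum_i (\hat{Y}_i - \bar{Y})(Y_i - \hat{Y}_i).
\]
% The cross term is zero because the residuals e_i = Y_i - \hat{Y}_i satisfy the
% normal equations \sum_i e_i = 0 and \sum_i X_i e_i = 0, hence \sum_i \hat{Y}_i e_i = 0.
% Therefore SSTO = SSR + SSE.
```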
Analysis of Variance Table
The decomposition of SS discussed above is usually summarized in an analysis of variance (ANOVA) table as follows:

Source of Variation   df     SS     MS                F
Regression            1      SSR    MSR = SSR/1       F* = MSR/MSE
Error                 n-2    SSE    MSE = SSE/(n-2)
Total                 n-1    SSTO

Note that the MSE is S², our estimate of σ².
Coefficient of Determination
The coefficient of determination is R² = SSR/SSTO = 1 − SSE/SSTO. It must satisfy 0 ≤ R² ≤ 1. R² gives the proportion of the variation in the Yi's that is explained by the regression line (often quoted as a percentage).
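A minimal Python version of R²; for simple linear regression with an intercept it should agree with np.corrcoef(x, y)[0, 1]**2, which previews the next claim (function name illustrative):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination R^2 = SSR/SSTO = 1 - SSE/SSTO."""
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    y_hat = b0 + b1 * x
    ssto = np.sum((y - y.mean()) ** 2)
    sse = np.sum((y - y_hat) ** 2)
    return 1.0 - sse / ssto      # equals np.corrcoef(x, y)[0, 1] ** 2 here
```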
Claim
R² = r², that is, the coefficient of determination is the square of the correlation coefficient. Proof: see the sketch below.
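A short worked proof, using SSR = b1²·SXX (which follows from Ŷi − Ȳ = b1(Xi − X̄)) and r = SXY/√(SXX·SYY):

```latex
% With S_{XY} = \sum_i (X_i-\bar{X})(Y_i-\bar{Y}) and S_{YY} = \sum_i (Y_i-\bar{Y})^2 = SSTO:
\[
  R^2 = \frac{\mathrm{SSR}}{\mathrm{SSTO}}
      = \frac{b_1^2\, S_{XX}}{S_{YY}}
      = \left(\frac{S_{XY}}{S_{XX}}\right)^{\!2}\frac{S_{XX}}{S_{YY}}
      = \frac{S_{XY}^2}{S_{XX}\,S_{YY}}
      = r^2 .
\]
```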
Important Comments about R²
It is a useful measure, but…
There is no absolute rule about how big it should be.
It is not resistant to outliers.
It is not meaningful for models with no intercept.
It is not useful for comparing models unless one set of predictors is a subset of the other.
ANOVA F Test
The ANOVA table gives us another test of H0: β1 = 0. The test statistic is F* = MSR/MSE, which we compare to an F distribution with 1 and n-2 df. Derivations …
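A minimal Python sketch of the F test (names illustrative); for simple linear regression the F statistic equals the square of the t statistic for the slope:

```python
import numpy as np
from scipy import stats

def anova_f_test(x, y):
    """ANOVA F test of H0: beta1 = 0; F* = MSR/MSE with (1, n-2) df."""
    n = x.size
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    y_hat = b0 + b1 * x
    ssr = np.sum((y_hat - y.mean()) ** 2)
    sse = np.sum((y - y_hat) ** 2)
    msr, mse = ssr / 1.0, sse / (n - 2)
    f_stat = msr / mse                       # equals the square of the t statistic for b1
    p_value = stats.f.sf(f_stat, 1, n - 2)
    return f_stat, p_value
```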