The basic idea of the Analysis of Variance (ANOVA) approach to regression analysis
In the previous topics, we developed the basic regression model and demonstrated how to make predictions. In this topic, we will view regression analysis from the perspective of analysis of variance. The new perspective will not enable us to do anything new, but the analysis of variance approach will come into its own when we take up multiple regression models and other types of linear statistical models.
Partitioning variance in the total sum of squares
The analysis of variance approach is based on partitioning the total sum of squares and the degrees of freedom associated with the response variable Y. The total deviations $Y_i - \bar{Y}$ are shown by the vertical lines in figure (a); the total variation, denoted by SSTO (the total sum of squares), is the sum of these squared deviations. If all Y observations are the same, SSTO = 0, and the greater the variation among the $Y_i$, the larger SSTO is. Thus SSTO is a measure of the uncertainty in Y.

When we utilize the predictor variable X, the variation reflecting the remaining uncertainty in Y is that of the Y observations around the fitted regression line, $Y_i - \hat{Y}_i$. These deviations are shown by the vertical lines in figure (b), and the measure of variation in Y when X is taken into account is the familiar error sum of squares, SSE. If all Y observations fall on the fitted regression line, SSE = 0, meaning the variation in Y is perfectly explained by X; the greater the variation of the Y observations around the fitted line, the larger SSE is.

The difference between SSTO and SSE is a third sum of squares, the regression sum of squares SSR. Its deviations, shown by the vertical lines in figure (c), are simply the differences between the fitted values and the mean, $\hat{Y}_i - \bar{Y}$. If the regression line is horizontal, so that $\hat{Y}_i = \bar{Y}$ for every observation, then SSR = 0; otherwise SSR is positive. SSR may be considered the part of the variation in Y associated with the regression line: the larger SSR is in relation to SSTO, the more of the variation in Y the regression model explains, and the better the model is. SSR is sometimes denoted SSM, which stands for the model sum of squares.

The partition is

$$\sum (Y_i - \bar{Y})^2 = \sum (Y_i - \hat{Y}_i)^2 + \sum (\hat{Y}_i - \bar{Y})^2$$

that is, SSTO = SSE + SSR ("total sum of squares" = "error sum of squares" + "regression sum of squares").
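A minimal R sketch (with simulated data and made-up coefficients) that verifies the partition numerically:

```r
# Simulate a simple linear regression and verify SSTO = SSE + SSR
set.seed(1)
x <- 1:25
y <- 3 + 2 * x + rnorm(25, sd = 5)
fit <- lm(y ~ x)

ssto <- sum((y - mean(y))^2)           # total sum of squares
sse  <- sum(residuals(fit)^2)          # error sum of squares
ssr  <- sum((fitted(fit) - mean(y))^2) # regression sum of squares

c(SSTO = ssto, SSE = sse, SSR = ssr, SSE.plus.SSR = sse + ssr)
```

The first and last entries agree, confirming SSTO = SSE + SSR.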
Partitioning degrees of freedom
$$\sum (Y_i - \bar{Y})^2 = \sum (Y_i - \hat{Y}_i)^2 + \sum (\hat{Y}_i - \bar{Y})^2 \qquad \text{SSTO} = \text{SSE} + \text{SSR}$$

Degrees of freedom: $n - 1 = (n - 2) + 1$

Corresponding to the partitioning of the total sum of squares SSTO, there is a partitioning of the associated degrees of freedom (df). SSTO has n − 1 degrees of freedom: one df is lost because the deviations $Y_i - \bar{Y}$ sum to zero, i.e., the sample mean is used to estimate the population mean. SSE has n − 2 df: two df are lost because two parameters, the intercept and the slope, are estimated in the regression function. SSR has 1 df: of the two df associated with the fitted regression line, one is lost because the deviations $\hat{Y}_i - \bar{Y}$ are subject to the constraint $\sum (\hat{Y}_i - \bar{Y}) = 0$.
Degrees of freedom of error (an intuitive flavor)
Q: What is the minimum number of data points needed to estimate this regression?

$Y = \beta_0 + \beta_1 X + \varepsilon$, so $\varepsilon = Y - \beta_0 - \beta_1 X$

$n = 2$: $df_E = 0$; $n = 3$: $df_E = 1$. So $df_E = n - 2 = n - p - 1$, where $p$ is the number of predictors ($p = 1$ in this case).

Let us now look at the concept of degrees of freedom in an intuitive way, starting with a simple linear regression with one independent variable X and one dependent variable Y. What is the minimum number of data points needed to estimate this regression? For example, say Y is a person's height and we try to predict it from the person's weight X. You can expect that someone who weighs more might be taller, but the relationship is clearly not one-to-one: there is some error associated with it. A regression essentially draws a line of best fit through all of the observations, so how many people do we need in the study to fit one? With one observation we cannot run a regression: we cannot draw a line of best fit through a single point. With two observations, both points are used to build the regression line. The fact that both points lie exactly on the line does not mean the line is a good fit; the relationship between X and Y cannot be assessed when all of the observations are used up in building the model. Only after we get a third observation does the model gain some freedom with which to assess the fit: we have one degree of freedom because the third observation allows the fitted line to differ from the data points themselves. If we throw another observation into the mix, we have two degrees of freedom, because two additional observations give the model a bit more flexibility. In general, if we have n observations, two of them are (in effect) used to build the linear model, and the remaining n − 2 are the degrees of freedom left for assessing the model, i.e., for assessing the random errors.
Degrees of freedom of error (an intuitive flavor)
Q: What is the minimum number of data points needed to estimate this regression, and how many degrees of freedom are left?

$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \varepsilon$, so $\varepsilon = Y - \beta_0 - \beta_1 X_1 - \beta_2 X_2$

$n = 3$: $df_E = 0$; $n = 4$: $df_E = 1$. So $df_E = n - 3 = n - p - 1$, where $p$ is the number of predictors ($p = 2$ in this case).

Now suppose we throw in a second variable X2: say Y is the height, X1 is the weight, and X2 is the person's mother's height. What is the minimum number of data points needed to estimate this regression? Visually, the model lives in a three-dimensional space: X1 on the horizontal axis, X2 on an axis coming out of the page, and Y going up. Start with three points. Unlike the two-dimensional case, with two X variables we are fitting a plane of best fit, and any three points in three-dimensional space can have a plane passed exactly through them. When we introduce a fourth point, the plane gains some freedom relative to those four points, meaning we have one degree of freedom. So for this three-dimensional model we need at least 3 points to build the model and 4 points to get one degree of freedom; with 5 points we have 2 degrees of freedom. In general, the error degrees of freedom with two X variables is n − 3: the additional X variable in the model costs one more degree of freedom. This leads to the formula $df_E = n - p - 1$, where n is the number of observations and p is the number of X variables; see the R check after this paragraph.
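A quick R check (simulated data, hypothetical variables) that the residual degrees of freedom follow $n - p - 1$:

```r
# Residual df is n - p - 1: 8 for one predictor, 7 for two, with n = 10
set.seed(2)
n  <- 10
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- 1 + x1 + x2 + rnorm(n)

df.residual(lm(y ~ x1))       # p = 1: n - 2 = 8
df.residual(lm(y ~ x1 + x2))  # p = 2: n - 3 = 7
```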
The Analysis of Variance (ANOVA) table
Source of Variation | SS | df | MS | E{MS}
Regression | $SSR = \sum (\hat{Y}_i - \bar{Y})^2 = b_1^2 \sum (X_i - \bar{X})^2$ | 1 | $MSR = SSR/1$ | $\sigma^2 + \beta_1^2 \sum (X_i - \bar{X})^2$
Error | $SSE = \sum (Y_i - \hat{Y}_i)^2$ | $n - 2$ | $MSE = SSE/(n-2)$ | $\sigma^2$
Total | $SSTO = \sum (Y_i - \bar{Y})^2$ | $n - 1$ | |

The breakdown of the total sum of squares and the associated degrees of freedom is displayed in the form of an analysis of variance table (ANOVA table). Note that SSR can also be computed as $b_1^2 \sum (X_i - \bar{X})^2$. The mean squares of interest are the sums of squares divided by their degrees of freedom. In addition, the ANOVA table contains a column of expected mean squares. The mean of the sampling distribution of MSE is $E\{MSE\} = \sigma^2$: MSE is an unbiased estimator of the random error variance, whether or not X and Y are linearly related. The mean of the sampling distribution of MSR is also $\sigma^2$ when $\beta_1 = 0$, which happens when the regression line is flat and coincides with the horizontal line at $\bar{Y}$; otherwise MSR tends to be larger than MSE. This suggests that a comparison of MSR and MSE is useful for testing whether or not $\beta_1 = 0$: if MSR and MSE are about the same, $\beta_1$ is plausibly 0 and the regression line is flat; if MSR is substantially greater than MSE, $\beta_1$ is likely not 0 and the regression line is not flat. This is indeed the basic idea underlying the ANOVA F test.

Comments: $E\{MSE\} = \sigma^2$, so MSE is an unbiased estimator of the error variance. $E\{MSR\} = \sigma^2 + \beta_1^2 \sum (X_i - \bar{X})^2 > \sigma^2$ when $\beta_1 \neq 0$; hence MSR tends to be larger than MSE.
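R produces this table directly; a minimal sketch with simulated data:

```r
# ANOVA table for a simple linear regression: rows for the predictor
# (regression, df = 1) and Residuals (error, df = n - 2)
set.seed(3)
x <- 1:25
y <- 3 + 2 * x + rnorm(25, sd = 5)
anova(lm(y ~ x))
```

The "Mean Sq" column gives MSR and MSE, and their ratio is the F statistic discussed next.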
The F test of $H_0$: $\beta_1 = 0$ versus $H_a$: $\beta_1 \neq 0$
Source of Variation | SS | df | MS | E{MS}
Regression | $SSR = \sum (\hat{Y}_i - \bar{Y})^2$ | 1 | $MSR = SSR/1$ | $\sigma^2 + \beta_1^2 \sum (X_i - \bar{X})^2$
Error | $SSE = \sum (Y_i - \hat{Y}_i)^2$ | $n - 2$ | $MSE = SSE/(n-2)$ | $\sigma^2$
Total | $SSTO = \sum (Y_i - \bar{Y})^2$ | $n - 1$ | |

The test statistic for the analysis of variance approach is denoted $F^*$ (or $F_s$). It is the ratio of MSR to MSE and follows an F distribution with degrees of freedom (1, n − 2):

$$F^* = \frac{MSR}{MSE} \sim F(1,\, n-2)$$

Reject $H_0$ if $F^* > F(1-\alpha;\, 1,\, n-2)$.
Example 1: $H_0$: $\beta_1 = 0$ versus $H_a$: $\beta_1 \neq 0$
Source of Variation | SS | df | MS | F
Regression | 252378 | 1 | |
Error | 54825 | 23 | |
Total | 307203 | 24 | |

$$F^* = \frac{MSR}{MSE} = \underline{\qquad} \sim F(\ \_\ ,\ \_\ )$$

Suppose the partial ANOVA table above is given. At the $\alpha = 0.05$ level, reject $H_0$ if $F^* > F(0.95;\, 1,\, 23)$; if so, conclude that there is a linear association between X and Y.
Example 1 (continued): $H_0$: $\beta_1 = 0$ versus $H_a$: $\beta_1 \neq 0$

Source of Variation | SS | df | MS | F
Regression | 252378 | 1 | 252378/1 = 252378 | 252378/2384 = 105.88
Error | 54825 | 23 | 54825/23 = 2384 |
Total | 307203 | 24 | |

$$F^* = \frac{MSR}{MSE} = 105.88 \sim F(1,\, 23)$$

We test the two-sided hypothesis that $\beta_1$ equals 0, given the partial ANOVA table. Reject $H_0$ if $F^* > F(0.95;\, 1,\, 23)$. Since $F^*$ exceeds the critical value, we conclude that there is a linear association between X and Y.
$$F^* = 105.88 \sim F(1,\, 23)$$

Method 1 (use R): reject $H_0$ if $F^* > F(0.95;\, 1,\, 23) = 4.28$.

Method 2 (use the F table): reject $H_0$ if $F^* > F(0.95;\, 1,\, 20) = 4.35$ (df = 20 being the nearest denominator entry in a typical table).

The estimated P-value is < 0.001, since $F^* > F(0.999;\, 1,\, 23) = 14.2$.

Now let's find the critical value. Using the qf function in R, we compute the exact critical value to be 4.28; a sketch follows below.
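A minimal R sketch of both the critical value and the P-value, using the qf and pf functions:

```r
# Exact critical value and P-value for Example 1 (df = 1 and 23)
qf(0.95, df1 = 1, df2 = 23)                        # 4.28: reject H0 since 105.88 > 4.28
pf(105.88, df1 = 1, df2 = 23, lower.tail = FALSE)  # P-value, far below 0.001
```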
Example 2: An investigative study collected 16 observations from random locations in the Wabash River, recording the amount of particulates in the water (X) and the number of fish (Y) at each location. Complete the following ANOVA table for the regression analysis. Does the result suggest that the amount of particulates in the water (across locations) affects the number of fish?

Source of Variation | SS | df | MS | F
Regression | 4.5 | | |
Error | | | |
Total | 85.2 | | |

$$F^* = \frac{MSR}{MSE} = \underline{\qquad} \sim F(\ \_\ ,\ \_\ )$$

What is the F(______) critical value if we use the F table? Reject $H_0$ if $F^* >$ ______ (use R) or ______ (use the F table). Conclude that there ______ (is / isn't) a linear association between the amount of particulates and the number of fish.
Example 2 (solution): With 16 observations, the completed ANOVA table is:

Source of Variation | SS | df | MS | F
Regression | 4.5 | 1 | 4.5 | 0.781
Error | 80.7 | 14 | 5.76 |
Total | 85.2 | 15 | |

$$F^* = \frac{MSR}{MSE} = \frac{4.5}{5.76} = 0.781 \sim F(1,\, 14)$$

Reject $H_0$ if $F^* > F(0.95;\, 1,\, 14) = 4.6$ (from R), or 4.75 using the nearest entry in the F table. Since 0.781 < 4.6, we fail to reject $H_0$ and conclude that there isn't a linear association between the amount of particulates in the water and the number of fish.
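A quick R verification of the completed table:

```r
# Fill in Example 2's table: SSE = SSTO - SSR, then MS, F*, critical value
ssr  <- 4.5
ssto <- 85.2
sse  <- ssto - ssr            # 80.7
msr  <- ssr / 1               # 4.5
mse  <- sse / 14              # 5.76 (df = n - 2 = 14)
msr / mse                     # F* = 0.781
qf(0.95, df1 = 1, df2 = 14)   # 4.60: fail to reject H0
```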
Equivalence of the F test (ANOVA) and the two-sided t test (SLR)
$H_0$: $\beta_1 = 0$ versus $H_a$: $\beta_1 \neq 0$

The t test statistic: $t^* = \dfrac{b_1}{s\{b_1\}} \sim t(n-2)$

The F test statistic: $F^* = \dfrac{MSR}{MSE} \sim F(1,\, n-2)$

$$F^* = \frac{MSR}{MSE} = \frac{b_1^2 \sum (X_i - \bar{X})^2}{MSE} = \frac{b_1^2}{s^2\{b_1\}} = (t^*)^2, \qquad \text{since } s^2\{b_1\} = MSE \big/ \textstyle\sum (X_i - \bar{X})^2$$

For a given $\alpha$ level, the F test for $\beta_1$ is algebraically equivalent to the two-tailed t test. To see this, recall that the t test statistic is $b_1$ divided by its standard error $s\{b_1\}$. The F statistic is MSR/MSE, where $MSR = b_1^2 \sum (X_i - \bar{X})^2$; with some rearrangement, the F statistic is exactly the square of the t statistic. The critical values are also equivalent in the sense that $t^2 = F$:

$$\left(t(1-\alpha/2;\, n-2)\right)^2 = F(1-\alpha;\, 1,\, n-2)$$

For example, at $\alpha = 0.05$ and df = 23, the t critical value is 2.069, and squaring it gives $2.069^2 = 4.28 = F(0.95;\, 1,\, 23)$. Thus, at any given significance level, we can use either the t test or the F test for testing $\beta_1$. Note that the F test and t test are equivalent only in SLR and only for the two-sided test.
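A minimal R sketch (simulated data) showing $F^* = (t^*)^2$ and the matching critical values:

```r
# In SLR, the anova F statistic is the square of the slope's t statistic
set.seed(4)
x <- 1:25
y <- 3 + 2 * x + rnorm(25, sd = 5)
fit <- lm(y ~ x)

t_star <- coef(summary(fit))["x", "t value"]
f_star <- anova(fit)["x", "F value"]
c(t_squared = t_star^2, F = f_star)  # identical

qt(0.975, df = 23)^2  # 4.28, equal to qf(0.95, 1, 23)
```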
The F test (ANOVA) and the t test are not always equivalent
1. They are equivalent in the simple linear regression (SLR) problem, and will not be so in multiple regression.
2. They are equivalent when $H_0$: $\beta_1 = 0$. The hypothesis $H_0$: $\beta_1 = \beta_1^*$ ($\neq 0$) can still be tested with a t test, but under $H_0$: $\beta_1 = \beta_1^* \neq 0$ the test statistic $F^*$ has a non-central F distribution, which requires extra steps and is not covered in this course.
3. In SLR, the t test is more flexible and more commonly used than the F test. We will continue to compare them in MLR.
Introducing the General Linear Test (GLT) approach
$H_0$: $\beta_1 = 0$ versus $H_a$: $\beta_1 \neq 0$

Full model (under $H_a$): $Y_i = \beta_0 + \beta_1 X_i + \varepsilon_i$, with $SSE(F) = \sum (Y_i - \hat{Y}_i)^2 = SSE$ and $df_F = n - 2$

Reduced model (under $H_0$): $Y_i = \beta_0 + \varepsilon_i$, with $SSE(R) = \sum (Y_i - \bar{Y})^2 = SSTO$ and $df_R = n - 1$

ANOVA is an example of the general linear test (GLT) approach for linear statistical models. This approach is very useful in building complicated regression models, and the idea is easy to understand here in the context of simple linear regression. We begin with the model considered appropriate for the data, the full (unrestricted) model. In simple linear regression, the full model has all the parameters, $Y = \beta_0 + \beta_1 X + \varepsilon$; it is the model under $H_a$. Under the full model, the error sum of squares is $SSE(F)$ with df = n − 2, as before; the F in $SSE(F)$ stands for "full". The reduced model has a reduced set of parameters: under $H_0$, $\beta_1 = 0$, and the model reduces to $Y = \beta_0 + \varepsilon$. Since $\beta_1 = 0$, the regression line is horizontal and $\hat{Y}_i = \bar{Y}$ (it can be shown that $b_0 = \bar{Y}$), so the error sum of squares for this model is $SSE(R) = \sum (Y_i - \bar{Y})^2$ with df = n − 1; the R in $SSE(R)$ stands for "reduced".

The logic now is to compare the two error sums of squares $SSE(F)$ and $SSE(R)$. The greater the difference between them, the more the full model differs from the reduced model, which suggests that $H_a$ holds. In other words, if the full model has a much smaller residual error $SSE(F)$ than the reduced model, then the full model is much better at reducing the random error, and X helps to explain a significant amount of the variation in Y. The test statistic is a function of the difference $SSE(R) - SSE(F)$ ("Is the reduction in SSE significant?") and follows an F distribution if $H_0$ is true:

$$F^* = \frac{\left[SSE(R) - SSE(F)\right] / (df_R - df_F)}{SSE(F) / df_F} = \frac{MSR}{MSE} \sim F(1,\, n-2)$$

In simple linear regression, the test statistic of the general linear test is identical to the ANOVA test statistic.
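A minimal R sketch (simulated data) of the full-versus-reduced comparison; anova() on the pair of fitted models carries out the general linear test:

```r
# General linear test: compare reduced (intercept-only) and full models
set.seed(5)
x <- 1:25
y <- 3 + 2 * x + rnorm(25, sd = 5)

reduced <- lm(y ~ 1)  # under H0: beta1 = 0, so Yhat = Ybar
full    <- lm(y ~ x)  # under Ha: full model
anova(reduced, full)  # same F statistic as anova(full)
```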
The General Linear Test can be extended to multiple parameters ($\beta_1$, $\beta_2$, ...)
Given the number of additional parameters in the full (more complex) model compared to the reduced model, does the full model yield a larger reduction in SSE than we would expect from adding a similar number of unrelated (i.e., useless) predictor variables?

$$F^* = \frac{\left[SSE(R) - SSE(F)\right] / (df_R - df_F)}{SSE(F) / df_F} \sim F(df_R - df_F,\, df_F)$$

The general linear test can easily be extended to assess multiple parameters $\beta_1, \beta_2, \ldots$; we will see it again in multiple linear regression. The GLT is a very general tool. A sketch follows below.
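A minimal R sketch (simulated data, hypothetical predictors) of a multi-parameter GLT, testing whether two extra predictors jointly improve the model:

```r
# GLT for H0: beta2 = beta3 = 0 in a three-predictor model
set.seed(6)
n  <- 30
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)
y  <- 1 + 2 * x1 + rnorm(n)       # x2 and x3 are truly unrelated to y

reduced <- lm(y ~ x1)             # H0: beta2 = beta3 = 0
full    <- lm(y ~ x1 + x2 + x3)   # full model
anova(reduced, full)              # F with df (2, n - 4)
```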
Summary

- The basic idea of ANOVA is to study the sources and proportions of variance in the data.
- The F test (ANOVA) and the t test (simple linear model) are not always equivalent.
- The ANOVA test extends to the general linear test/model (GLT/GLM).

In summary, in this topic we discussed the basic idea of ANOVA, compared the F and t tests, and finally introduced the general linear test (GLT), which is also referred to as the GLM, the general linear model.