1 Fitting Equations to Data

2 A Common situation: Suppose that we have a single dependent variable Y (continuous numerical) and one or several independent variables, X1, X2, X3, ... (also continuous numerical, although there are techniques that allow you to handle categorical independent variables). The objective will be to “fit” an equation to the data collected on these measurements that explains the dependence of Y on X1, X2, X3, ...

3 What is the value of these equations?

4 Equations give very precise and concise descriptions (models) of data, explaining how dependent variables are related to independent variables.

5 Examples
Linear models: Y = blood pressure, X = age; Y = β0 + β1X + ε.
Exponential growth or decay models: Y = average of the 5 best times for the 100m during an Olympic year, X = the Olympic year; Y = αe^(βX) + ε.
Another growth model (the Gompertz model): Y = size of a cancerous tumor, X = time after implantation; Y = αe^(−βe^(−kX)) + ε.

6 Note the presence of the random error term, ε (random noise). This is an important term in any statistical model. Without this term the model is deterministic and doesn’t require statistical analysis.

7 What is the value of these equations?
1. Equations give very precise and concise descriptions (models) of data and of how dependent variables are related to independent variables.
2. The parameters of the equations usually have very useful interpretations relative to the phenomenon being studied.
3. The equations can be used to calculate and estimate very useful quantities related to the phenomenon, such as relative extrema and future or out-of-range values.
4. Equations can provide the framework for comparison.

8 The Multiple Linear Regression Model: an important statistical model

9 Again we assume that we have a single dependent variable Y and p (say) independent variables X1, X2, X3, ..., Xp. The equation (model) that generally describes the relationship between Y and the independent variables is of the form:
Y = f(X1, X2, ..., Xp | θ1, θ2, ..., θq) + ε
where θ1, θ2, ..., θq are unknown parameters of the function f and ε is a random disturbance (usually assumed to have a normal distribution with mean 0 and standard deviation σ).

10 In Multiple Linear Regression we assume the following model:
Y = β0 + β1X1 + β2X2 + ... + βpXp + ε
This model is called the Multiple Linear Regression Model. Here β0, β1, β2, ..., βp are unknown parameters and ε is a random disturbance assumed to have a normal distribution with mean 0 and standard deviation σ.
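
To make the model concrete, here is a minimal simulation sketch in Python (not from the slides; all parameter values are illustrative):

```python
import numpy as np

# Simulate Y = beta0 + beta1*X1 + beta2*X2 + eps, with eps ~ N(0, sigma^2).
rng = np.random.default_rng(0)
n = 100
X1 = rng.uniform(0, 10, n)
X2 = rng.uniform(0, 10, n)
beta0, beta1, beta2, sigma = 5.0, 2.0, -1.0, 3.0   # illustrative values
eps = rng.normal(0, sigma, n)                      # the random disturbance
Y = beta0 + beta1 * X1 + beta2 * X2 + eps
```

Without the `eps` term the simulated Y would be an exact function of X1 and X2; the disturbance is what makes estimation a statistical problem.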

11 The importance of the Linear model
1. It is the simplest form of a model in which each independent variable has some effect on the dependent variable Y. When fitting models to data one tries to find the simplest form of a model that still adequately describes the relationship between the dependent variable and the independent variables. The linear model is often the first model to be fitted, and it is abandoned only if it turns out to be inadequate.

12 2. In many instances a linear model is the most appropriate model to describe the dependence relationship between the dependent variable and the independent variables. This will be true if the dependent variable increases at a constant rate as any of the independent variables is increased while holding the other independent variables constant.

13 3. Many non-linear models can be put into the form of a linear model by appropriately transforming the dependent variable and/or any or all of the independent variables. This important fact (i.e. that many non-linear models are linearizable) ensures the wide utility of the linear model.
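
For example, the exponential growth model of slide 5 is linearizable: taking logarithms of both sides (and treating the disturbance as multiplicative, an assumption made here for the illustration) gives a linear model in X:

```latex
Y = \alpha e^{\beta X}\varepsilon
\quad\Longrightarrow\quad
\ln Y = \ln\alpha + \beta X + \ln\varepsilon
      = \beta_0 + \beta_1 X + \varepsilon'
```

so regressing ln Y on X fits the model with ordinary least squares.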

14 An Example
The following data come from an experiment that investigated the source from which corn plants in various soils obtain their phosphorus. The concentration of inorganic phosphorus (X1) and the concentration of organic phosphorus (X2) were measured in the soil of n = 18 test plots. In addition, the phosphorus content (Y) of corn grown in the soil was also measured. The data are displayed below:

15
Inorganic P (X1)  Organic P (X2)  Plant P (Y)  |  Inorganic P (X1)  Organic P (X2)  Plant P (Y)
      0.4              53             64       |       12.6             58             51
      0.4              23             60       |       10.9             37             76
      3.1              19             71       |       23.1             46             96
      0.6              34             61       |       23.1             50             77
      4.7              24             54       |       21.6             44             93
      1.7              65             77       |       23.1             56             95
      9.4              44             81       |        1.9             36             54
     10.1              31             93       |       26.8             58            168
     11.6              29             93       |       29.9             51             99

16 Coefficients:
Intercept   56.2510241   (β0)
X1           1.78977412  (β1)
X2           0.08664925  (β2)
Equation: Y = 56.2510241 + 1.78977412 X1 + 0.08664925 X2
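
These coefficients can be reproduced directly from the data on slide 15; a minimal sketch using NumPy least squares (keying in the data as given):

```python
import numpy as np

# Soil data from slide 15: inorganic P (x1), organic P (x2), plant P (y).
x1 = np.array([0.4, 0.4, 3.1, 0.6, 4.7, 1.7, 9.4, 10.1, 11.6,
               12.6, 10.9, 23.1, 23.1, 21.6, 23.1, 1.9, 26.8, 29.9])
x2 = np.array([53, 23, 19, 34, 24, 65, 44, 31, 29,
               58, 37, 46, 50, 44, 56, 36, 58, 51.0])
y  = np.array([64, 60, 71, 61, 54, 77, 81, 93, 93,
               51, 76, 96, 77, 93, 95, 54, 168, 99.0])

X = np.column_stack([np.ones_like(x1), x1, x2])    # design matrix with intercept
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)   # should be approximately [56.251, 1.790, 0.087]
```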

18 Summary of the Statistics used in Multiple Regression

19 The Least Squares Estimates: the values β̂0, β̂1, ..., β̂p that minimize
S(β0, β1, ..., βp) = Σi [yi − (β0 + β1xi1 + ... + βpxip)]²
Note: ŷi = β̂0 + β̂1xi1 + ... + β̂pxip = the predicted value of yi

20 The Analysis of Variance Table Entries
a) Adjusted Total Sum of Squares (SS_Total): SS_Total = Σi (yi − ȳ)²
b) Residual Sum of Squares (SS_Error): SS_Error = Σi (yi − ŷi)²
c) Regression Sum of Squares (SS_Reg): SS_Reg = Σi (ŷi − ȳ)²
Note: Σi (yi − ȳ)² = Σi (ŷi − ȳ)² + Σi (yi − ŷi)², i.e. SS_Total = SS_Reg + SS_Error

21 The Analysis of Variance Table

Source       Sum of Squares   d.f.    Mean Square                        F
Regression   SS_Reg           p       SS_Reg/p = MS_Reg                  MS_Reg/s²
Error        SS_Error         n−p−1   SS_Error/(n−p−1) = MS_Error = s²
Total        SS_Total         n−1

22 Uses:
1. To estimate σ² (the error variance): use s² = MS_Error.
2. To test the hypothesis H0: β1 = β2 = ... = βp = 0: use the test statistic
F = MS_Reg/s² = [(1/p)SS_Reg]/[(1/(n−p−1))SS_Error].
Reject H0 if F > Fα(p, n−p−1).

23 3. To compute other statistics that are useful in describing the relationship between Y (the dependent variable) and X1, X2, ..., Xp (the independent variables).
a) R² = the coefficient of determination = SS_Reg/SS_Total = the proportion of variance in Y explained by X1, X2, ..., Xp
1 − R² = the proportion of variance in Y that is left unexplained by X1, X2, ..., Xp = SS_Error/SS_Total

24 b) Ra² = “R² adjusted” for degrees of freedom
= 1 − [the proportion of variance in Y left unexplained by X1, X2, ..., Xp, adjusted for d.f.]
= 1 − [(1/(n−p−1))SS_Error]/[(1/(n−1))SS_Total]
= 1 − [(n−1)SS_Error]/[(n−p−1)SS_Total]
= 1 − [(n−1)/(n−p−1)][1 − R²]

25 c) R = √R² = the multiple correlation coefficient of Y with X1, X2, ..., Xp = the maximum correlation between Y and a linear combination of X1, X2, ..., Xp.
Comment: the statistics F, R², Ra² and R are equivalent statistics (each is a monotone function of the others).
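
A short sketch computing these summary statistics from a fitted model (the function name and interface are our own):

```python
import numpy as np

def regression_summary(y, yhat, p):
    """Return R^2, adjusted R^2, R and F for a model with p predictors."""
    n = len(y)
    ss_total = np.sum((y - np.mean(y)) ** 2)
    ss_error = np.sum((y - yhat) ** 2)
    ss_reg = ss_total - ss_error                 # SS_Total = SS_Reg + SS_Error
    r2 = ss_reg / ss_total
    r2_adj = 1 - (n - 1) / (n - p - 1) * (1 - r2)
    F = (ss_reg / p) / (ss_error / (n - p - 1))
    return r2, r2_adj, np.sqrt(r2), F
```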

26 Using SPSS Note: The use of another statistical package such as Minitab is similar to using SPSS

27 After starting the SPSS program the following dialogue box appears:

28 If you select Opening an existing file and press OK, the following dialogue box appears.

29 The following dialogue box appears:

30 If the variable names are in the file, ask it to read the names. If you do not specify the Range, the program will identify it. Once you click OK, two windows will appear

31 One that will contain the output:

32 The other containing the data:

33 To perform any statistical analysis, select the Analyze menu:

34 Then select Regression and Linear.

35 The following Regression dialogue box appears

36 Select the Dependent variable Y.

37 Select the Independent variables X1, X2, etc.

38 If you select the Method - Enter.

39 All variables will be put into the equation. There are also several other methods that can be used:
1. Forward selection
2. Backward elimination
3. Stepwise regression

41 Forward selection
1. This method starts with no variables in the equation.
2. Carries out statistical tests on variables not in the equation to see which have a significant effect on the dependent variable.
3. Adds the most significant.
4. Continues until all variables not in the equation have no significant effect on the dependent variable.

42 Backward Elimination
1. This method starts with all variables in the equation.
2. Carries out statistical tests on variables in the equation to see which have no significant effect on the dependent variable.
3. Deletes the least significant.
4. Continues until all variables in the equation have a significant effect on the dependent variable.

43 Stepwise Regression (uses both forward and backward techniques)
1. This method starts with no variables in the equation.
2. Carries out statistical tests on variables not in the equation to see which have a significant effect on the dependent variable.
3. It then adds the most significant.
4. After a variable is added it checks to see if any variables added earlier can now be deleted.
5. Continues until all variables not in the equation have no significant effect on the dependent variable.
(A code sketch of the forward-selection step follows below.)
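
For illustration, here is a minimal sketch of the forward-selection loop using statsmodels p-values (the pandas interface and the α = 0.05 entry threshold are our assumptions; SPSS's own entry criterion may differ):

```python
import statsmodels.api as sm

def forward_select(X, y, alpha=0.05):
    """Greedy forward selection: X is a pandas DataFrame, y a Series."""
    remaining = list(X.columns)
    selected = []
    while remaining:
        # p-value of each candidate variable when added to the current model
        pvals = {}
        for cand in remaining:
            fit = sm.OLS(y, sm.add_constant(X[selected + [cand]])).fit()
            pvals[cand] = fit.pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break                 # nothing left has a significant effect
        selected.append(best)
        remaining.remove(best)
    return selected
```

Backward elimination runs the same loop in reverse (start with all variables, repeatedly drop the least significant); stepwise regression interleaves the two.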

44 All of these methods are procedures for attempting to find the best equation. The best equation is the equation that is the simplest (not containing variables that are not important) yet adequate (containing variables that are important).

45 Once the dependent variable, the independent variables and the Method have been selected, pressing OK will perform the analysis.

46 The output will contain the following table. R² and R² adjusted measure the proportion of variance in Y that is explained by X1, X2, X3, etc. (67.6% and 67.3%). R is the multiple correlation coefficient (the maximum correlation between Y and a linear combination of X1, X2, X3, etc.).

47 The next table is the Analysis of Variance Table. The F test is testing whether the regression coefficients of the predictor variables are all zero, i.e. that none of the independent variables X1, X2, X3, etc. has any effect on Y.

48 The final table in the output gives the estimates of the regression coefficients, their standard errors, and the t tests for testing whether they are zero. Note: Engine size has no significant effect on Mileage.

49 The estimated equation is read off the coefficients table below:

50 Note from the equation that Mileage decreases:
1. with increases in Engine Size (not significant, p = 0.432),
2. with increases in Horsepower (significant, p = 0.000),
3. with increases in Weight (significant, p = 0.000).

51 Properties of the Least Squares Estimators:
1. Normally distributed (if the error terms are normally distributed).
2. Unbiased estimators of the linear parameters β0, β1, β2, ..., βp.
3. Minimum variance (minimum standard error) among all unbiased estimators of the linear parameters β0, β1, β2, ..., βp.

52 Comments: the standard error of each estimate β̂i depends on
1. the error variance s² (and s),
2. s_Xi, the standard deviation of Xi (the ith independent variable),
3. the sample size n,
4. the correlations between all pairs of variables.

53 The standard error of β̂i, S.E.(β̂i) = s_β̂i:
– decreases as s decreases,
– decreases as s_Xi increases,
– decreases as n increases,
– increases as the correlation between pairs of independent variables increases.
In fact, the standard error of the least squares estimates can be extremely high if there is a high correlation between one of the independent variables and a linear combination of the remaining independent variables (the problem of Multicollinearity).
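
The multicollinearity effect is easy to demonstrate by simulation (a sketch with made-up parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
for rho in (0.0, 0.99):          # uncorrelated vs. nearly collinear predictors
    x2 = rho * x1 + np.sqrt(1 - rho ** 2) * rng.normal(size=n)
    X = np.column_stack([np.ones(n), x1, x2])
    y = 1 + 2 * x1 + 3 * x2 + rng.normal(size=n)
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    s2 = np.sum((y - X @ beta_hat) ** 2) / (n - 3)    # MS_Error
    print(rho, np.sqrt(s2 * np.diag(XtX_inv)))        # standard errors
```

The standard errors of the slope estimates grow sharply as the correlation between x1 and x2 approaches 1.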

54 The Covariance Matrix, the Correlation Matrix and the (XᵀX)⁻¹ Matrix
The covariance matrix of the least squares estimates is Cov(β̂) = σ²(XᵀX)⁻¹, where X is the design matrix (with a leading column of 1s) and σ² is the error variance; its diagonal entries are the variances Var(β̂i) and its off-diagonal entries are the covariances Cov(β̂i, β̂j).

55 The Correlation Matrix: its entries are the correlations Corr(β̂i, β̂j) = Cov(β̂i, β̂j)/[S.E.(β̂i) S.E.(β̂j)].

56 The (XᵀX)⁻¹ matrix

57 If we multiply each entry in the (XᵀX)⁻¹ matrix by s² = MS_Error, this matrix turns into the estimated covariance matrix of β̂:

58 These matrices can be used to compute standard errors for linear combinations of the regression coefficients. Namely, for L = a0β̂0 + a1β̂1 + ... + apβ̂p = aᵀβ̂:
Var(aᵀβ̂) = σ² aᵀ(XᵀX)⁻¹a, estimated by s² aᵀ(XᵀX)⁻¹a, so that
S.E.(aᵀβ̂) = s √(aᵀ(XᵀX)⁻¹a)
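
A sketch of the computation (the helper name is our own; X must include the leading column of 1s):

```python
import numpy as np

def se_linear_combination(X, y, a):
    """Standard error of a^T beta-hat for the linear model y = X beta + eps."""
    n, k = X.shape                          # k = p + 1 parameters
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    resid = y - X @ beta_hat
    s2 = resid @ resid / (n - k)            # s^2 = MS_Error
    return np.sqrt(s2 * a @ XtX_inv @ a)    # sqrt of s^2 a^T (X^T X)^{-1} a
```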

61 An Example
Suppose one is interested in how the cost per month (Y) of heating a plant is determined by the average atmospheric temperature in the month (X1) and the number of operating days in the month (X2). The data on these variables were collected for n = 25 months selected at random and are given on the following page.
Y = cost per month of heating the plant
X1 = average atmospheric temperature in the month
X2 = the number of operating days for the plant in the month

62 The Least Squares Estimates:

                 Constant   X1      X2
Estimate         912.6      -7.24   20.29
Standard Error   110.28      0.80    4.577

The Covariance Matrix:

           Constant   X1        X2
Constant   12162      -49.203   -464.36
X1                    .6339     .76796
X2                              20.947

The Correlation Matrix:

           Constant   X1       X2
Constant   1.000      -.1764   -.0920
X1                    1.000    .0210
X2                             1.000

The (XᵀX)⁻¹ Matrix:

           Constant    X1              X2
Constant   2.778747    -0.011242      -0.106098
X1                     0.14207×10⁻³    0.175467×10⁻³
X2                                     0.478599×10⁻²

63 The Analysis of Variance Table

Source       df   SS       MS       F
Regression    2   541871   270936   61.899
Error        22    96287     4377
Total        24   638158

64 Summary Statistics (R², R² adjusted = Ra², and R)
R² = 541871/638158 = .8491 (explained variance in Y: 84.91%)
Ra² = 1 − [1 − R²][(n−1)/(n−p−1)] = 1 − [1 − .8491][24/22] = .8354 (83.54%)
R = √.8491 = .9215 = multiple correlation coefficient

67 Three-dimensional Scatter-plot of Cost, Temp and Days.

68 Example: the Motor Vehicle data
Variables:
1. (Y) mpg – Mileage
2. (X1) engine – Engine size
3. (X2) horse – Horsepower
4. (X3) weight – Weight

70 Select Analyze->Regression->Linear

71 To print the correlation matrix or the covariance matrix of the estimates, select Statistics

72 Check the box for the covariance matrix of the estimates.

73 Here is the table giving the estimates and their standard errors.

74 Here is the table giving the correlation matrix and covariance matrix of the regression estimates. What is missing in SPSS are the covariances and correlations with the intercept estimate (the constant).

75 This can be found by using the following trick:
1. Introduce a new variable (called constnt).
2. The new “variable” takes on the value 1 for all cases.

76 Select Transform->Compute

77 The following dialogue box appears. Type in the name of the target variable (constnt) and type ‘1’ for the Numeric Expression.

78 This variable is now added to the data file

79 Add this new variable (constnt) to the list of independent variables

80 Under Options, make sure the box “Include constant in equation” is unchecked. The coefficient of the new variable will be the constant.

81 Here are the estimates of the parameters with their standard errors. Note the agreement of the parameter estimates and their standard errors with those calculated previously.

82 Here is the correlation matrix and the covariance matrix of the estimates.

83 Testing for Hypotheses related to Multiple Regression.

84 Testing for Hypotheses related to Multiple Regression
The General Linear Hypothesis
H0: h11β1 + h12β2 + h13β3 + ... + h1pβp = h1
    h21β1 + h22β2 + h23β3 + ... + h2pβp = h2
    ...
    hq1β1 + hq2β2 + hq3β3 + ... + hqpβp = hq
where h11, h12, h13, ..., hqp and h1, h2, h3, ..., hq are known coefficients.

85 Examples
1. H0: β1 = 0
2. H0: β1 = 0, β2 = 0, β3 = 0
3. H0: β1 = β2
4. H0: β1 = β2, β3 = β4
5. H0: β1 = (1/2)(β2 + β3)
6. H0: β1 = (1/2)(β2 + β3), β3 = (1/3)(β4 + β5 + β6)

86 When testing hypotheses there are two models of interest:
1. The Complete Model: Y = β0 + β1X1 + β2X2 + β3X3 + ... + βpXp + ε
2. The Reduced Model: the model implied by H0.
You are interested in knowing whether the complete model can be simplified to the reduced model.

87 Some Comments
1. The complete model contains more parameters and will always provide a better fit to the data than the reduced model.
2. The Residual Sum of Squares (R.S.S.) for the complete model will always be smaller than the R.S.S. for the reduced model.
3. If the reduction in the R.S.S. is small as we change from the reduced model to the complete model, the reduced model should be accepted as providing an adequate fit.
4. If the reduction in the R.S.S. is large as we change from the reduced model to the complete model, the reduced model should be rejected as providing an adequate fit and the complete model should be kept.
These principles form the basis for the following test.

88 Testing the General Linear Hypothesis
The F-test for H0 is performed by carrying out two runs of a multiple regression package.

89 Run 1: Fit the complete model, resulting in the following ANOVA table:

Source             df      Sum of Squares
Regression         p       SS_Reg
Residual (Error)   n−p−1   SS_Error
Total              n−1     SS_Total

90 Run 2: Fit the reduced model (q parameters eliminated), resulting in the following ANOVA table:

Source             df        Sum of Squares
Regression         p−q       SS¹_Reg
Residual (Error)   n−p+q−1   SS¹_Error
Total              n−1       SS_Total

91 The Test: the test is carried out using the test statistic
F = [SS_H0 / q] / s²
where SS_H0 = SS¹_Error − SS_Error = SS_Reg − SS¹_Reg and s² = SS_Error/(n−p−1).
The test statistic F has an F-distribution with ν1 = q d.f. in the numerator and ν2 = n − p − 1 d.f. in the denominator if H0 is true.
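
The two-run recipe condenses to a few lines; a sketch (the function name is our own) that also returns the p-value via scipy:

```python
from scipy import stats

def general_linear_f_test(ss_error_full, ss_error_reduced, n, p, q):
    """F test of H0 (q constraints) from full and reduced residual SS."""
    ss_h0 = ss_error_reduced - ss_error_full
    s2 = ss_error_full / (n - p - 1)
    F = (ss_h0 / q) / s2
    return F, stats.f.sf(F, q, n - p - 1)   # p-value = P(F(q, n-p-1) > F)
```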

92 Distribution of the test statistic when H0 is true

93 The Critical Region: Reject H0 if F > Fα(q, n − p − 1).

94 The ANOVA Table for the Test:

Source                               df      Sum of Squares   Mean Square        F
Regression (for the reduced model)   p−q     SS¹_Reg          [1/(p−q)]SS¹_Reg   MS¹_Reg/s²
Departure from H0                    q       SS_H0            (1/q)SS_H0         MS_H0/s²
Residual (Error)                     n−p−1   SS_Error         s²
Total                                n−1     SS_Total

95 Some Examples: four independent variables X1, X2, X3, X4.
The Complete Model: Y = β0 + β1X1 + β2X2 + β3X3 + β4X4 + ε

96 1) a) H0: β3 = 0, β4 = 0 (q = 2)
b) The Reduced Model: Y = β0 + β1X1 + β2X2 + ε
Dependent variable: Y
Independent variables: X1, X2

97 2) a) H0: β3 = 4.5, β4 = 8.0 (q = 2)
b) The Reduced Model: Y − 4.5X3 − 8.0X4 = β0 + β1X1 + β2X2 + ε
Dependent variable: Y − 4.5X3 − 8.0X4
Independent variables: X1, X2

98 Example: the Motor Vehicle data
Variables:
1. (Y) mpg – Mileage
2. (X1) engine – Engine size
3. (X2) horse – Horsepower
4. (X3) weight – Weight

99 Suppose we want to test H0: β1 = 0 against HA: β1 ≠ 0, i.e. engine size (engine) has no effect on mileage (mpg).
The full model: Y = β0 + β1X1 + β2X2 + β3X3 + ε
              (mpg)     (engine)  (horse)  (weight)
The reduced model: Y = β0 + β2X2 + β3X3 + ε

100 The ANOVA table for the full model (residual sum of squares: 7720.835649):

101 The ANOVA table for the reduced model (residual sum of squares: 7733.138452):
The reduction in the residual sum of squares = 7733.138452 − 7720.835649 = 12.30280251

102 The ANOVA table for testing H0: β1 = 0 against HA: β1 ≠ 0:

103 Now suppose we want to test H0: β1 = 0, β2 = 0 against HA: β1 ≠ 0 or β2 ≠ 0, i.e. engine size (engine) and horsepower (horse) have no effect on mileage (mpg).
The full model: Y = β0 + β1X1 + β2X2 + β3X3 + ε
              (mpg)     (engine)  (horse)  (weight)
The reduced model: Y = β0 + β3X3 + ε

104 The ANOVA table for the full model (residual sum of squares: 7720.835649):

105 The ANOVA table for the reduced model (residual sum of squares: 8299.023):
The reduction in the residual sum of squares = 8299.023 − 7720.835649 = 578.1875392

106 The ANOVA table for testing H0: β1 = 0, β2 = 0 against HA: β1 ≠ 0 or β2 ≠ 0:

107 Testing the General Linear Hypothesis: Another Example

108 In the following example, weight gain was measured along with the amount of protein in the diet due to the following sources:
– Beef,
– Pork, and
– two types of cereal.

109 Dependent Variable
Y = Weight Gain
Independent Variables
X1 = the amount of protein in the diet due to the Beef source
X2 = the amount of protein in the diet due to the Pork source
X3 = the amount of protein in the diet due to the Cereal 1 source
X4 = the amount of protein in the diet due to the Cereal 2 source

110 The Multiple Linear model
Y = β0 + β1X1 + β2X2 + β3X3 + β4X4 + ε
or
Weight Gain = β0 + β1(Beef) + β2(Pork) + β3(Cereal 1) + β4(Cereal 2) + ε

111 The weight gains are given in the table below:

case   Beef   Pork   Cereal 1   Cereal 2   Weight Gain
 1     3.48   8.95     9.26       4.72        43.05
 2     1.77   4.93     2.77       0.45        34.29
 3     6.39   3.01     4.92       1.79        31.79
 4     9.97   0.67     8.56       8.42        41.94
 5     7.41   4.19     8.41       4.43        45.29
 6     3.58   4.1      2.05       1.1         32.02
 7     1.2    2.64     6.03       5.55        26.93
 8     6.8    0.97     4.8        5.98        36.45
 9     2.3    9.95     0.89       6.74        31.52
10     6.47   0.6      9.17       7.27        39.67
11     5.08   4.98     8.65       3.24        37.72
12     0.62   2.24     7.79       0.08        29.01
13     6.47   2.19     2.5        3.08        31.15
14     7.35   0.18     0.67       7.87        31.89

112 The summary statistics of the regression computation are given below:

Regression Statistics
Multiple R          0.89243188
R Square            0.79643465
Adjusted R Square   0.70596117
Standard Error      3.03382552
Observations        14

113 The estimates of the regression coefficients and their standard errors are given below:

            Coefficients   Standard Error   t Stat        P-value      Lower 95%    Upper 95%
Intercept   19.4614989     2.9165956         6.67267651   9.1339E-05   12.8636963   26.0593016
X1           1.47769633    0.41288474        3.57895604   0.00594044    0.54368545   2.41170721
X2           0.97584224    0.32968801        2.95989606   0.01596221    0.23003558   1.7216489
X3           0.94351642    0.26479013        3.56326123   0.00608811    0.34451907   1.54251378
X4          -0.0344526     0.36188355       -0.0952035    0.92623923   -0.8530907    0.78418551

114 ANOVA

Source       df   SS           MS           F           Significance F
Regression    4   324.093267   81.0233168   8.8029618   0.00355147
Residual      9    82.8368756   9.20409729
Total        13   406.930143

115 Note that βi is the rate of increase in weight gain with respect to increases in protein from the given source. One would of course be interested in whether weight gain increases with protein for any of the sources, that is, in testing the null hypothesis H0: β1 = 0, β2 = 0, β3 = 0 and β4 = 0 against the alternative hypothesis HA: at least one βi ≠ 0.

116 This can be achieved by using the ANOVA table below:

Source       df   SS           MS           F           Significance F
Regression    4   324.093267   81.0233168   8.8029618   0.00355147
Residual      9    82.8368756   9.20409729
Total        13   406.930143

117 (Figure: the F distribution when H0 is true, showing the test statistic (the F ratio) and its significance (the p-value).)

118 The F distribution describes the behaviour of the F statistic when H0 is true. If the associated p-value is small, H0 should be rejected in favour of HA. The usual cut-off values are α = .05 or α = .01.
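
As a quick check, the “Significance F” in the table above can be recovered from the F distribution with (4, 9) degrees of freedom, e.g. with scipy:

```python
from scipy import stats

# F = 8.8029618 on (4, 9) d.f. from the weight-gain ANOVA table
print(stats.f.sf(8.8029618, 4, 9))   # approximately 0.00355
```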

119 However, one would also be interested in making more specific comparisons, namely comparing the effects on weight gain of
– the two meat sources, and
– the two cereal sources.

120 In this case we would be interested in testing the null hypothesis H0: β1 = β2, β3 = β4 against the alternative hypothesis HA: β1 ≠ β2 or β3 ≠ β4.

121 Then, assuming H0: β1 = β2, β3 = β4, the reduced model becomes
Y = β0 + β1(X1 + X2) + β3(X3 + X4) + ε
Dependent variable: Y
Independent variables: (X1 + X2) and (X3 + X4)
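
Fitting this reduced model only requires forming the summed predictors; a NumPy sketch using the data of slide 111:

```python
import numpy as np

# Columns of the slide-111 table: beef, pork, cereal 1, cereal 2, weight gain.
beef    = np.array([3.48, 1.77, 6.39, 9.97, 7.41, 3.58, 1.20, 6.80, 2.30, 6.47, 5.08, 0.62, 6.47, 7.35])
pork    = np.array([8.95, 4.93, 3.01, 0.67, 4.19, 4.10, 2.64, 0.97, 9.95, 0.60, 4.98, 2.24, 2.19, 0.18])
cereal1 = np.array([9.26, 2.77, 4.92, 8.56, 8.41, 2.05, 6.03, 4.80, 0.89, 9.17, 8.65, 7.79, 2.50, 0.67])
cereal2 = np.array([4.72, 0.45, 1.79, 8.42, 4.43, 1.10, 5.55, 5.98, 6.74, 7.27, 3.24, 0.08, 3.08, 7.87])
gain    = np.array([43.05, 34.29, 31.79, 41.94, 45.29, 32.02, 26.93, 36.45, 31.52, 39.67, 37.72, 29.01, 31.15, 31.89])

# Under H0: beta1 = beta2 and beta3 = beta4, the common slopes multiply the sums.
Z = np.column_stack([np.ones_like(gain), beef + pork, cereal1 + cereal2])
coef, rss, *_ = np.linalg.lstsq(Z, gain, rcond=None)
print(coef, rss)   # rss should be approximately [130.798], as on slide 122
```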

122 The ANOVA table for the reduced model:

Source       df   SS           MS            F            Significance F
Regression    2   276.132469   138.066235    11.6112813   0.0019451
Residual     11   130.797674    11.8906976
Total        13   406.930143

123 The ANOVA table for the complete model:

Source       df   SS           MS           F           Significance F
Regression    4   324.093267   81.0233168   8.8029618   0.00355147
Residual      9    82.8368756   9.20409729
Total        13   406.930143

124 The ANOVA table for carrying out the test:

Source                df   SS           MS            F            Significance F
β1 + β2 = 0,
β3 + β4 = 0            2   276.132469   138.066235    15.0005188   0.00136222
β1 = β2, β3 = β4       2    47.9607982   23.9803991     2.60540478  0.12802848
Residual               9    82.8368756    9.20409729
Total                 13   406.930143

125 DUMMY VARIABLES

