Applications: The General Linear Model. Transformations.


Applications: The General Linear Model

Transformations

Transformations to Linearity. Many non-linear curves can be put into a linear form by appropriate transformations of either the dependent variable Y or some (or all) of the independent variables $X_1, X_2, \ldots, X_p$. This is what gives the Linear Model its wide utility. We have seen that, through the use of dummy variables, categorical independent variables can be incorporated into a Linear Model. We will now see that, through the technique of variable transformation, many examples of non-linear behaviour can also be converted to linear behaviour.

Intrinsically Linear (Linearizable) Curves. 1. Hyperbolas: $y = \dfrac{x}{ax - b}$. Linear form: $1/y = a - b(1/x)$, or $Y = \beta_0 + \beta_1 X$. Transformations: $Y = 1/y$, $X = 1/x$, $\beta_0 = a$, $\beta_1 = -b$.

2. Exponential: $y = \alpha e^{\beta x} = \alpha\gamma^x$. Linear form: $\ln y = \ln\alpha + \beta x = \ln\alpha + x\ln\gamma$, or $Y = \beta_0 + \beta_1 X$. Transformations: $Y = \ln y$, $X = x$, $\beta_0 = \ln\alpha$, $\beta_1 = \beta = \ln\gamma$.

3. Power functions: $y = a x^b$. Linear form: $\ln y = \ln a + b\ln x$, or $Y = \beta_0 + \beta_1 X$. Transformations: $Y = \ln y$, $X = \ln x$, $\beta_0 = \ln a$, $\beta_1 = b$.

4. Logarithmic functions: $y = a + b\ln x$. Linear form: $y = a + b\ln x$, or $Y = \beta_0 + \beta_1 X$. Transformations: $Y = y$, $X = \ln x$, $\beta_0 = a$, $\beta_1 = b$.

5. Other special functions: $y = a e^{b/x}$. Linear form: $\ln y = \ln a + b(1/x)$, or $Y = \beta_0 + \beta_1 X$. Transformations: $Y = \ln y$, $X = 1/x$, $\beta_0 = \ln a$, $\beta_1 = b$.
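To make the linearization concrete, here is a minimal Python sketch (simulated data; the parameter values α = 2.0 and β = 0.3 are hypothetical) that fits the exponential curve of case 2 by regressing $\ln y$ on $x$:

```python
import numpy as np

# Simulated data from y = alpha * exp(beta * x); alpha = 2.0 and
# beta = 0.3 are hypothetical values chosen for illustration
rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 30)
y = 2.0 * np.exp(0.3 * x) * rng.lognormal(0.0, 0.05, x.size)

# Linearize: Y = ln y gives Y = beta0 + beta1*X with
# beta0 = ln(alpha), beta1 = beta, X = x
slope, intercept = np.polyfit(x, np.log(y), 1)
alpha_hat = np.exp(intercept)   # back-transform the intercept
beta_hat = slope
print(alpha_hat, beta_hat)      # should be near 2.0 and 0.3
```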

The Box-Cox Family of Transformations
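For a positive response $y$, the family is conventionally written as

$$
y^{(\lambda)} =
\begin{cases}
\dfrac{y^{\lambda} - 1}{\lambda}, & \lambda \neq 0,\\[4pt]
\ln y, & \lambda = 0,
\end{cases}
$$

so that $\lambda = 1$ leaves the data essentially unchanged and $\lambda = 0$ corresponds to the ln-transformation.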

The Transformation Staircase

Graph of ln(x)

The effect of the transformation

The ln-transformation is a member of the Box-Cox family of transformations with $\lambda = 0$. If you decrease the value of $\lambda$, the effect of the transformation will be greater. If you increase the value of $\lambda$, the effect of the transformation will be less.

The effect of the ln transformation: it spreads out values that are close to zero and compacts values that are large.
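A maximum-likelihood choice of $\lambda$ can be sketched with SciPy (simulated, right-skewed data; the distribution parameters are hypothetical):

```python
import numpy as np
from scipy import stats

# Simulated right-skewed data (a hypothetical example)
rng = np.random.default_rng(0)
y = rng.lognormal(mean=1.0, sigma=0.8, size=200)

# stats.boxcox returns the transformed data and the
# maximum-likelihood estimate of lambda
y_bc, lam = stats.boxcox(y)
print(lam)   # close to 0 here, i.e. close to the ln-transformation
```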

The Bulging Rule: the direction in which a curved scatterplot bulges indicates which variable to transform and in which direction on the transformation staircase (x up, y up, y down, or x down).

Non-Linear Models: non-linearizable models.

Non-Linear Growth Models: many models cannot be transformed into a linear model. The Mechanistic Growth Model. Equation: $y = \alpha(1 - \beta e^{-kx}) + \varepsilon$, or (ignoring $\varepsilon$) “rate of increase in Y” $= \dfrac{dY}{dx} = k(\alpha - Y)$.

The Logistic Growth Model. Equation: $y = \dfrac{\alpha}{1 + \beta e^{-kx}} + \varepsilon$, or (ignoring $\varepsilon$) “rate of increase in Y” $= \dfrac{dY}{dx} = \dfrac{k}{\alpha}\,Y(\alpha - Y)$.

The Gompertz Growth Model. Equation: $y = \alpha e^{-\beta e^{-kx}} + \varepsilon$, or (ignoring $\varepsilon$) “rate of increase in Y” $= \dfrac{dY}{dx} = kY\ln\dfrac{\alpha}{Y}$.

Polynomial Regression models

Polynomial Models: $y = \beta_0 + \beta_1 x + \beta_2 x^2 + \beta_3 x^3$. Linear form: $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3$. Variables: $Y = y$, $X_1 = x$, $X_2 = x^2$, $X_3 = x^3$.

Suppose that we have two variables 1. Y – the dependent variable (response variable) 2. X – the independent variable (explanatory variable, factor)

Assume that we have collected data on two variables X and Y. Let $(x_1, y_1), (x_2, y_2), (x_3, y_3), \ldots, (x_n, y_n)$ denote the pairs of measurements on the two variables X and Y for n cases in a sample (or population).

The assumption will be made that $y_1, y_2, y_3, \ldots, y_n$ are: 1. independent random variables; 2. normally distributed; 3. have the common variance $\sigma^2$; 4. the mean of $y_i$ is given by the linear model below.

Each $y_i$ is assumed to be randomly generated from a normal distribution with mean $\mu_i$, given by the model, and standard deviation $\sigma$.

The Model: the matrix formulation.

The Normal Equations
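In the usual matrix notation, the model and the Normal Equations take the form

$$
\mathbf{y} = X\boldsymbol{\beta} + \boldsymbol{\varepsilon}, \qquad X^{\top}X\,\hat{\boldsymbol{\beta}} = X^{\top}\mathbf{y},
$$

where $X$ is the design matrix built from the derived variables.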

Example In the following example two quantities are being measured X = amount of an additive to a chemical process Y = the yield of the process

Graph X vs Y

The Model – Cubic polynomial (degree 3). Comment: a cubic polynomial in x can be fitted to y by defining the variables $X_1 = x$, $X_2 = x^2$, and $X_3 = x^3$, then fitting the linear model $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \varepsilon$.
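A minimal Python sketch of this cubic fit, using simulated additive/yield data (all numeric values are hypothetical):

```python
import numpy as np

# Simulated version of the additive-vs-yield example
# (true coefficients 5, 2, -0.5, 0.03 are hypothetical)
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 25)
y = 5 + 2 * x - 0.5 * x**2 + 0.03 * x**3 + rng.normal(0.0, 1.0, x.size)

# Define X1 = x, X2 = x^2, X3 = x^3 and fit the linear model with an intercept
X = np.column_stack([np.ones_like(x), x, x**2, x**3])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # solves the normal equations
print(beta_hat)   # estimates of beta0..beta3
```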

Response Surface Models: extending polynomial regression models to k independent variables.

Response Surface models (2 independent vars.): dependent variable Y and two independent variables $x_1$ and $x_2$. (These ideas are easily extended to more than two independent variables.) The Model (a cubic response surface model) contains all powers and cross-products of $x_1$ and $x_2$ up to degree 3; compare this with a linear model in $x_1$ and $x_2$ alone.

The response surface model can be put into the form of a linear model: $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \beta_4 X_4 + \beta_5 X_5 + \beta_6 X_6 + \beta_7 X_7 + \beta_8 X_8 + \beta_9 X_9 + \varepsilon$, by defining the $X_j$ to be the powers and cross-products of $x_1$ and $x_2$.
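One natural assignment of the nine derived variables, sketched in Python (the exact indexing of $X_1, \ldots, X_9$ is an assumption for illustration):

```python
import numpy as np

def cubic_surface_design(x1, x2):
    """Derived variables X1..X9 for a cubic response surface in x1, x2.

    The assignment of indices to terms is an assumption for illustration.
    """
    return np.column_stack([
        x1, x2,                                 # X1, X2: linear terms
        x1**2, x1 * x2, x2**2,                  # X3-X5: quadratic terms
        x1**3, x1**2 * x2, x1 * x2**2, x2**3,   # X6-X9: cubic terms
    ])

# Usage: augment with an intercept column and fit by least squares
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
X = np.column_stack([np.ones_like(x1), cubic_surface_design(x1, x2)])
print(X.shape)   # (5, 10): intercept plus X1..X9
```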

More generally, consider the random variable Y with: 1. $E[Y] = g(U_1, U_2, \ldots, U_k) = \beta_1\phi_1(U_1, U_2, \ldots, U_k) + \beta_2\phi_2(U_1, U_2, \ldots, U_k) + \cdots + \beta_p\phi_p(U_1, U_2, \ldots, U_k)$, and 2. $\operatorname{var}(Y) = \sigma^2$, where $\beta_1, \beta_2, \ldots, \beta_p$ are unknown parameters and $\phi_1, \phi_2, \ldots, \phi_p$ are known functions of the nonrandom variables $U_1, U_2, \ldots, U_k$. Assume further that Y is normally distributed.

Now suppose that n independent observations of Y, $(y_1, y_2, \ldots, y_n)$, are made corresponding to n sets of values of $(U_1, U_2, \ldots, U_k)$: $(u_{11}, u_{12}, \ldots, u_{1k})$, $(u_{21}, u_{22}, \ldots, u_{2k})$, …, $(u_{n1}, u_{n2}, \ldots, u_{nk})$. Let $x_{ij} = \phi_j(u_{i1}, u_{i2}, \ldots, u_{ik})$ for $j = 1, 2, \ldots, p$ and $i = 1, 2, \ldots, n$. Then $E[y_i] = \beta_1 x_{i1} + \cdots + \beta_p x_{ip}$, or, in matrix form, $\mathbf{y} = X\boldsymbol{\beta} + \boldsymbol{\varepsilon}$.

Polynomial Regression Model: one variable U. Quadratic Response Surface Model: two variables $U_1$, $U_2$.

Trigonometric Polynomial Models

$y = \beta_0 + \gamma_1\cos(2\pi f_1 x) + \delta_1\sin(2\pi f_1 x) + \cdots + \gamma_k\cos(2\pi f_k x) + \delta_k\sin(2\pi f_k x)$. Linear form: $Y = \beta_0 + \gamma_1 C_1 + \delta_1 S_1 + \cdots + \gamma_k C_k + \delta_k S_k$. Variables: $Y = y$, $C_1 = \cos(2\pi f_1 x)$, $S_1 = \sin(2\pi f_1 x)$, …, $C_k = \cos(2\pi f_k x)$, $S_k = \sin(2\pi f_k x)$.
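A sketch of this fit in Python, building the C and S columns and solving by least squares (the frequencies and coefficients in the simulation are hypothetical):

```python
import numpy as np

def trig_design(x, freqs):
    """Columns [1, C1, S1, C2, S2, ...] of the trigonometric polynomial."""
    cols = [np.ones_like(x)]
    for f in freqs:
        cols.append(np.cos(2 * np.pi * f * x))   # C_j
        cols.append(np.sin(2 * np.pi * f * x))   # S_j
    return np.column_stack(cols)

# Simulated signal with frequencies 3 and 5 (hypothetical values)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = (1.0 + 2.0 * np.cos(2 * np.pi * 3 * x)
     - np.sin(2 * np.pi * 5 * x)
     + rng.normal(0.0, 0.2, x.size))

coef, *_ = np.linalg.lstsq(trig_design(x, [3, 5]), y, rcond=None)
print(np.round(coef, 2))   # near [1, 2, 0, 0, -1]
```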

General set of models: the Normal Equations, given the data, have the same matrix form as above.

Two important special cases: Polynomial Models and Trig-polynomial Models.

Orthogonal Polynomial Models

Definition. Consider the values $x_0, x_1, \ldots, x_n$. The polynomials $p_0(x), p_1(x), p_2(x), \ldots$ are orthogonal relative to $x_0, x_1, \ldots, x_n$ if $\sum_{i=0}^{n} p_j(x_i)\,p_l(x_i) = 0$ for $j \neq l$. If in addition $\sum_{i=0}^{n} p_j(x_i)^2 = 1$ for each j, they are called orthonormal.

Consider the model $y = \alpha_0 p_0(x) + \alpha_1 p_1(x) + \alpha_2 p_2(x) + \cdots + \varepsilon$. This is equivalent to a polynomial model. Rather than the basis for this model being $1, x, x^2, x^3, \ldots$, the basis is $p_0(x), p_1(x), p_2(x), p_3(x), \ldots$: polynomials of degree 0, 1, 2, 3, etc.

The Normal Equations, given the data: with an orthogonal basis the cross-product matrix $X^{\top}X$ is diagonal, so each coefficient is estimated independently of the others.

Derivation of Orthogonal Polynomials With equally spaced data points

Suppose $x_0 = a$, $x_1 = a + b$, $x_2 = a + 2b$, …, $x_n = a + nb$.

To do the calculations we need the values $p_i(x_j)$. These values depend only on: 1. n = the number of observations; 2. i = the degree of the polynomial; and 3. j = the index of $x_j$.
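For equally spaced points the orthogonal basis can also be produced numerically. A sketch using a QR decomposition of the Vandermonde matrix (this reproduces the orthonormal polynomial values up to sign and scaling):

```python
import numpy as np

# Equally spaced points x_j = a + j*b; a and b only rescale the basis
n, degree = 10, 3
x = np.arange(n, dtype=float)

# Vandermonde columns 1, x, x^2, x^3; QR orthonormalizes them, so
# column j of Q holds the degree-j orthonormal polynomial at x_0..x_{n-1}
V = np.vander(x, degree + 1, increasing=True)
Q, R = np.linalg.qr(V)

print(np.round(Q.T @ Q, 12))   # identity matrix: the basis is orthonormal
```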

Orthogonal Linear Contrasts for Polynomial Regression

The Use of Dummy Variables

In the examples so far the independent variables are continuous numerical variables. Suppose that some of the independent variables are categorical. Dummy variables are artificially defined variables designed to convert a model including categorical independent variables to the standard multiple regression model.

Example: Comparison of Slopes of k Regression Lines with Common Intercept

Situation: k treatments or k populations are being compared. For each of the k treatments we have measured both Y (the response variable) and X (an independent variable). Y is assumed to be linearly related to X, with the slope dependent on treatment (population), while the intercept is the same for each treatment.

The Model: for an observation from treatment i, $y = \beta_0 + \beta_i x + \varepsilon$, $i = 1, 2, \ldots, k$ (common intercept $\beta_0$, treatment-specific slopes $\beta_1, \ldots, \beta_k$).

This model can be artificially put into the form of the Multiple Regression model by using dummy variables, variables that are artificially defined, to handle the categorical independent variable Treatments.

In this case we define a new variable for each category of the categorical variable. That is, we define $X_i$ for each category of treatments as follows: $X_i = x$ if the observation is from treatment i, and $X_i = 0$ otherwise.

Then the model can be written as follows. The Complete Model: $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_k X_k + \varepsilon$, where the $X_i$ are the dummy variables defined above.

In this case: Dependent Variable: Y. Independent Variables: $X_1, X_2, \ldots, X_k$.

In the above situation we would likely be interested in testing the equality of the slopes, namely the Null Hypothesis $H_0: \beta_1 = \beta_2 = \cdots = \beta_k$ (q = k – 1 restrictions).

The Reduced Model: Dependent Variable: Y. Independent Variable: $X = X_1 + X_2 + \cdots + X_k$.
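A sketch of the complete-versus-reduced comparison in Python (simulated data; the treatment count, slopes and noise level are hypothetical):

```python
import numpy as np

# Simulated data: k = 3 hypothetical treatments with common intercept 4.0
# and treatment slopes 1.0, 1.5, 2.0 (all values chosen for illustration)
rng = np.random.default_rng(0)
k, n_per = 3, 12
treat = np.repeat(np.arange(k), n_per)
x = rng.uniform(0.0, 10.0, k * n_per)
y = 4.0 + np.array([1.0, 1.5, 2.0])[treat] * x + rng.normal(0.0, 1.0, x.size)

# Dummy slope variables: X_i = x for cases from treatment i, else 0
Xd = np.zeros((x.size, k))
Xd[np.arange(x.size), treat] = x

def rss(X, y):
    """Residual sum of squares of the least-squares fit (intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return float(np.sum((y - X1 @ beta) ** 2))

rss_complete = rss(Xd, y)              # k separate slopes
rss_reduced = rss(Xd.sum(axis=1), y)   # common slope: X = X1 + ... + Xk
q, df_res = k - 1, x.size - (k + 1)
F = ((rss_reduced - rss_complete) / q) / (rss_complete / df_res)
print(F)   # compare with the F(q, df_res) distribution
```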

Example: In the following example we are measuring Yield (Y) as it depends on the amount (X) of a pesticide. Again we will assume that the dependence of Y on X is linear. (I should point out that the concepts used in this discussion can easily be adapted to the non-linear situation.)

Suppose that the experiment is going to be repeated for three brands of pesticide: A, B and C. The quantity, X, of pesticide in this experiment was set at 3 different levels: 2 units/hectare, 4 units/hectare and 8 units/hectare. Four test plots were randomly assigned to each of the nine combinations of brand and level of pesticide.

Note that we would expect a common intercept for each brand of pesticide, since when the amount of pesticide, X, is zero the three brands of pesticide would be equivalent.

The data for this experiment (yields for brands A, B and C at X = 2, 4 and 8) are given in the following table:

The data as it would appear in a data file, with columns Pesticide, X (Amount), $X_1$, $X_2$, $X_3$ and Y. The variables $X_1$, $X_2$ and $X_3$ are the “dummy” variables.

Fitting the complete model: the ANOVA table (df, SS, MS, F and Significance F for Regression, Residual and Total) and the coefficient estimates for the Intercept, $X_1$, $X_2$ and $X_3$.

Fitting the reduced model: the ANOVA table and the coefficient estimates for the Intercept and X.

The ANOVA table for testing the equality of slopes, with rows for “common slope zero”, “slope comparison”, Residual and Total (df, SS, MS, F and Significance F).

Example: Comparison of Intercepts of k Regression Lines with a Common Slope (One-way Analysis of Covariance)

Situation: k treatments or k populations are being compared. For each of the k treatments we have measured both Y (the response variable) and X (an independent variable). Y is assumed to be linearly related to X, with the intercept dependent on treatment (population), while the slope is the same for each treatment. Y is called the response variable, while X is called the covariate.

The Model: for an observation from treatment i, $y = \alpha_i + \beta x + \varepsilon$, $i = 1, 2, \ldots, k$ (treatment-specific intercepts $\alpha_i$, common slope $\beta$).

Equivalent Forms of the Model: 1) 2)

This model can be artificially put into the form of the Multiple Regression model by the use of dummy variables to handle the categorical independent variable Treatments.

In this case we define a new variable for each category of the categorical variable. That is, we define $X_i$ for categories $i = 1, 2, \ldots, (k - 1)$ of treatments as follows: $X_i = 1$ if the observation is from treatment i, and $X_i = 0$ otherwise.

Then the model can be written as follows. The Complete Model: $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_{k-1} X_{k-1} + \beta X + \varepsilon$, where $\beta_0$ is the intercept for treatment k and $\beta_i$ is the difference between the intercepts of treatments i and k.
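A sketch of the corresponding design matrix in Python (the function name and the choice of the last treatment as baseline are illustrative assumptions):

```python
import numpy as np

def ancova_design(treat, x, k):
    """Design matrix: intercept, k-1 zero/one dummies, and the covariate X.

    treat holds treatment labels 0..k-1; treatment k-1 is the baseline.
    (Names and label coding are illustrative, not from the slides.)
    """
    n = len(x)
    D = np.zeros((n, k - 1))
    for i in range(k - 1):
        D[:, i] = (treat == i).astype(float)   # X_i = 1 for treatment i
    return np.column_stack([np.ones(n), D, x])

# Usage with k = 3 treatments and a covariate
treat = np.array([0, 0, 1, 1, 2, 2])
x = np.array([1.0, 2.0, 1.5, 2.5, 1.0, 3.0])
print(ancova_design(treat, x, 3))
```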

In this case: Dependent Variable: Y. Independent Variables: $X_1, X_2, \ldots, X_{k-1}$, X.

In the above situation we would likely be interested in testing the equality of the intercepts, namely the Null Hypothesis $H_0: \beta_1 = \beta_2 = \cdots = \beta_{k-1} = 0$ (q = k – 1 restrictions).

The Reduced Model: Dependent Variable: Y Independent Variable: X

Example: In the following example we are interested in comparing the effects of five workbooks (A, B, C, D, E) on the performance of students in Mathematics. For each workbook, 15 students are selected (a total of n = 15 × 5 = 75). Each student is given a pretest (pretest score ≡ X) and a final test (final score ≡ Y). The data are given on the following slide.

The data The Model:

Graphical display of data

Some comments: 1. The linear relationship between Y (Final Score) and X (Pretest Score) models the differing aptitudes for mathematics. 2. The shifting up and down of this linear relationship measures the effect of workbooks on the final score Y.

The Model:

The data as it would appear in a data file.

The data as it would appear in a data file with the dummy variables ($X_1$, $X_2$, $X_3$, $X_4$) added.

Here is the data file in SPSS with the dummy variables ($X_1$, $X_2$, $X_3$, $X_4$) added. They can be added within SPSS.

Fitting the complete model. The dependent variable is the final score, Y. The independent variables are the pre-score X and the four dummy variables $X_1$, $X_2$, $X_3$, $X_4$.

The Output

The Output - continued

The interpretation of the coefficients The common slope

The interpretation of the coefficients The intercept for workbook E

The interpretation of the coefficients The changes in the intercept when we change from workbook E to other workbooks.

The model can be written as follows. The Complete Model: $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \beta_4 X_4 + \beta X + \varepsilon$. 1. When the workbook is E, then $X_1 = 0, \ldots, X_4 = 0$ and the intercept is $\beta_0$. 2. When the workbook is A, then $X_1 = 1, \ldots, X_4 = 0$, and hence $\beta_1$ is the change in the intercept when we change from workbook E to workbook A.

Testing for the equality of the intercepts. The reduced model: the independent variable is only X (the pre-score).

Fitting the reduced model. The dependent variable is the final score, Y. The independent variable is only the pre-score X.

The Output for the reduced model: lower $R^2$.

The Output - continued: increased RSS (residual sum of squares).

The F Test
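For nested linear models the statistic has the standard form

$$
F = \frac{(\mathrm{RSS}_{\text{reduced}} - \mathrm{RSS}_{\text{complete}})/q}{\mathrm{RSS}_{\text{complete}}/(n - p)},
$$

where q is the number of restrictions, p is the number of parameters in the complete model, and under $H_0$ the statistic follows an $F(q,\, n - p)$ distribution.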

The Reduced model The Complete model

The F test

Testing for zero slope. The reduced model: the independent variables are $X_1$, $X_2$, $X_3$, $X_4$ (the dummies).

The Reduced model The Complete model

The F test

The Analysis of Covariance This analysis can also be performed by using a package that can perform Analysis of Covariance (ANACOVA) The package sets up the dummy variables automatically

Here is the data file in SPSS. The Dummy variables are no longer needed.

In SPSS, to perform ANACOVA you select from the menu: Analyze -> General Linear Model -> Univariate.

This dialog box will appear

You now select: 1. the dependent variable Y (Final Score); 2. the Fixed Factor (the categorical independent variable – workbook); 3. the covariate (the continuous independent variable – pretest score).

The output: the ANOVA table. Compare this with the previously computed table.

The output: the ANOVA table. This is the sum of squares in the numerator when we attempt to test if the slope is zero (and allow the intercepts to be different).

Another application of the use of dummy variables: the dependent variable, Y, is linearly related to X, but the slope changes at one or several known values of X (nodes).

The model: Y is piecewise linear in X, with a separate slope on each interval between consecutive nodes $x_1, x_2, \ldots, x_k$.

Now define $X_1 = \min(X, x_1)$, $X_2 = \min(\max(X - x_1, 0),\, x_2 - x_1)$, etc.

Then the model can be written $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \varepsilon$, where $\beta_i$ is the slope over the i-th interval.

An Example. In this example we are measuring Y at time X. Y is growing linearly with time. At time X = 10, an additive is added to the process, which may change the rate of growth. The data:

Graph

Now define the dummy variables $X_1 = \min(X, 10)$ and $X_2 = \max(X - 10, 0)$.

The data as it appears in SPSS – x1, x2 are the dummy variables

We now regress y on x1 and x2.
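The same regression can be sketched in Python (simulated data; the node at X = 10 comes from the example, while the intercept and slopes are hypothetical):

```python
import numpy as np

# Simulated growth with a possible slope change at the node X = 10
# (intercept 2, slopes 0.8 and 1.4 are hypothetical values)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 20.0, 40)
y = (2.0 + 0.8 * np.minimum(x, 10) + 1.4 * np.maximum(x - 10, 0)
     + rng.normal(0.0, 0.5, x.size))

x1 = np.minimum(x, 10)      # X1: increases with x up to the node, then constant
x2 = np.maximum(x - 10, 0)  # X2: zero before the node, then increases with x

X = np.column_stack([np.ones_like(x), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # [intercept, slope before the node, slope after the node]
```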

The Output

Graph

Testing for no change in slope. Here we want to test $H_0: \beta_1 = \beta_2$ vs $H_A: \beta_1 \neq \beta_2$. The reduced model is $Y = \beta_0 + \beta_1(X_1 + X_2) + \varepsilon = \beta_0 + \beta_1 X + \varepsilon$.

Fitting the reduced model We now regress y on x.

The Output

Graph – fitting a common slope

The test for the equality of slopes