Hypothesis testing and Estimation


Linear Regression: Hypothesis Testing and Estimation

Assume that we have collected data on two variables X and Y. Let (x1, y1), (x2, y2), (x3, y3), …, (xn, yn) denote the pairs of measurements on the two variables X and Y for the n cases in a sample (or population).

The Statistical Model

Each yi is assumed to be randomly generated from a normal distribution with mean μi = α + βxi and standard deviation σ (α, β and σ are unknown). [Figure: the line Y = α + βX, with intercept α and slope β, and the normal distribution of yi centred on α + βxi at each xi.]

The Data: the data fall roughly about a straight line. The Linear Regression Model: Y = α + βX (the true line is unseen).

The Least Squares Line: fitting the best straight line to “linear” data.

Let Y = a + bX denote an arbitrary equation of a straight line, where a and b are known values. This equation can be used to predict, for each value of X, the value of Y. For example, if X = xi (as for the ith case) then the predicted value of Y is ŷi = a + b xi.

The residual, ri = yi − ŷi = yi − (a + b xi), can be computed for each case in the sample. The residual sum of squares, RSS = Σ (yi − a − b xi)², is a measure of the “goodness of fit” of the line Y = a + bX to the data.

The optimal choice of a and b will result in the residual sum of squares attaining a minimum. If this is the case then the line Y = a + bX is called the Least Squares Line.

The equation for the least squares line. Let Sxx = Σ (xi − x̄)², Syy = Σ (yi − ȳ)² and Sxy = Σ (xi − x̄)(yi − ȳ). Then the least squares estimates are b = Sxy / Sxx and a = ȳ − b x̄.
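The computation above is easy to carry out in software. The following short Python sketch (not part of the original slides; the function and variable names are illustrative) computes a, b and the estimate of σ from paired data:

import numpy as np

def least_squares_line(x, y):
    # Fit Y = a + bX by least squares and estimate sigma.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    Sxx = np.sum((x - x.mean()) ** 2)
    Syy = np.sum((y - y.mean()) ** 2)
    Sxy = np.sum((x - x.mean()) * (y - y.mean()))
    b = Sxy / Sxx                      # slope
    a = y.mean() - b * x.mean()        # intercept
    rss = Syy - Sxy ** 2 / Sxx         # residual sum of squares
    s = np.sqrt(rss / (n - 2))         # estimate of sigma, n - 2 df
    return a, b, s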

Linear Regression: Hypothesis Testing and Estimation

The Least Squares Line: fitting the best straight line to “linear” data.

Computing Formulae: Sxx = Σ xi² − (Σ xi)²/n, Syy = Σ yi² − (Σ yi)²/n, Sxy = Σ xi yi − (Σ xi)(Σ yi)/n.

Then the slope of the least squares line can be shown to be b = Sxy / Sxx.

and the intercept of the least squares line can be shown to be a = ȳ − b x̄.

The residual sum of squares. Computing formula: RSS = Syy − (Sxy)² / Sxx.

Estimating σ, the standard deviation in the regression model. Computing formula: s = √( RSS / (n − 2) ). This estimate of σ is said to be based on n − 2 degrees of freedom.

Sampling distributions of the estimators

The sampling distribution of the slope of the least squares line: it can be shown that b has a normal distribution with mean β and standard deviation σ/√Sxx.

Thus z = (b − β)/(σ/√Sxx) has a standard normal distribution, and t = (b − β)/(s/√Sxx) has a t distribution with df = n − 2.

(1 − α)100% Confidence Limits for the slope β: b ± tα/2 · s/√Sxx, where tα/2 is the critical value for the t-distribution with n − 2 degrees of freedom.

Testing the slope, H0: β = β0 (commonly β0 = 0). The test statistic t = (b − β0)/(s/√Sxx) has a t distribution with df = n − 2 if H0 is true.

The Critical Region: reject H0 if |t| > tα/2, df = n − 2. This is a two-tailed test; one-tailed tests are also possible.
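As a further illustration (again not from the original slides), the confidence limits and the two-tailed test for the slope can be computed with scipy, continuing the least_squares_line sketch above:

import numpy as np
from scipy import stats

def slope_inference(x, y, alpha=0.05, beta0=0.0):
    # Confidence limits for the slope and the t-test of H0: beta = beta0.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    Sxx = np.sum((x - x.mean()) ** 2)
    Syy = np.sum((y - y.mean()) ** 2)
    Sxy = np.sum((x - x.mean()) * (y - y.mean()))
    b = Sxy / Sxx
    s = np.sqrt((Syy - Sxy ** 2 / Sxx) / (n - 2))
    se_b = s / np.sqrt(Sxx)                        # standard error of b
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
    ci = (b - t_crit * se_b, b + t_crit * se_b)    # (1 - alpha)100% limits
    t_stat = (b - beta0) / se_b
    p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)
    return b, ci, t_stat, p_value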

The sampling distribution of the intercept of the least squares line: it can be shown that a has a normal distribution with mean α and standard deviation σ·√(1/n + x̄²/Sxx).

Thus z = (a − α)/(σ·√(1/n + x̄²/Sxx)) has a standard normal distribution, and t = (a − α)/(s·√(1/n + x̄²/Sxx)) has a t distribution with df = n − 2.

(1 − α)100% Confidence Limits for the intercept α: a ± tα/2 · s·√(1/n + x̄²/Sxx), where tα/2 is the critical value for the t-distribution with n − 2 degrees of freedom.

Testing the intercept, H0: α = α0. The test statistic t = (a − α0)/(s·√(1/n + x̄²/Sxx)) has a t distribution with df = n − 2 if H0 is true.

The Critical Region: reject H0 if |t| > tα/2, df = n − 2.

Example

The following data show the per capita consumption of cigarettes per month (X) in various countries in 1930, and the death rates from lung cancer for men in 1950.

TABLE: Per capita consumption of cigarettes per month (Xi) in n = 11 countries in 1930, and the death rates, Yi (per 100,000), from lung cancer for men in 1950.

Country (i)      Xi    Yi
Australia        48    18
Canada           50    15
Denmark          38    17
Finland         110    35
Great Britain   110    46
Holland          49    24
Iceland          23     6
Norway           25     9
Sweden           30    11
Switzerland      51    25
USA             130    20

Fitting the Least Squares Line

Fitting the Least Squares Line. First compute the following three quantities (from the sums Σxi = 664, Σyi = 226, Σxi² = 54404, Σyi² = 6018, Σxiyi = 16914): Sxx = 14322.5, Syy = 1374.7 and Sxy = 3271.8.

Computing the estimates of the slope (b), intercept (a) and standard deviation (s): b = Sxy/Sxx = 3271.8/14322.5 = 0.228, a = ȳ − b x̄ = 20.55 − 0.228 × 60.36 = 6.756, and s = √((Syy − Sxy²/Sxx)/(n − 2)) ≈ 8.35.

95% Confidence Limits for the slope β: 0.0706 to 0.3862, using t.025 = 2.262, the critical value for the t-distribution with 9 degrees of freedom.

95% Confidence Limits for the intercept α: −4.34 to 17.85, using t.025 = 2.262, the critical value for the t-distribution with 9 degrees of freedom.

The fitted line: Y = 6.756 + 0.228 X. 95% confidence limits for the slope: 0.0706 to 0.3862. 95% confidence limits for the intercept: −4.34 to 17.85.
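As a quick check (a sketch, not part of the slides), running the functions defined earlier on the cigarette data should reproduce approximately the values quoted above:

x = [48, 50, 38, 110, 110, 49, 23, 25, 30, 51, 130]   # cigarettes per capita, 1930
y = [18, 15, 17, 35, 46, 24, 6, 9, 11, 25, 20]        # lung cancer deaths per 100,000, 1950

a, b, s = least_squares_line(x, y)               # roughly a = 6.756, b = 0.228
b, ci, t_stat, p_value = slope_inference(x, y)   # ci roughly (0.0706, 0.3862)
print(a, b, s, ci, t_stat, p_value)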

Testing for a positive slope. H0: β = 0 versus HA: β > 0. The test statistic is t = b/(s/√Sxx) = 0.228/0.0698 ≈ 3.27.

The Critical Region: reject H0 if t > t0.05 = 1.833, df = 11 − 2 = 9. A one-tailed test.

Since t ≈ 3.27 > 1.833, we reject H0 and conclude that β > 0: death rates from lung cancer increase with cigarette consumption.

Confidence Limits for Points on the Regression Line. The intercept a is a specific point on the regression line: it is the y-coordinate of the point on the regression line when x = 0, i.e. the predicted value of y when x = 0. We may also be interested in other points on the regression line, e.g. when x = x0. In this case the y-coordinate of the point on the regression line when x = x0 is a + b x0.

[Figure: the regression line y = a + b x, showing the point (x0, a + b x0).]

(1 − α)100% Confidence Limits for α + βx0: (a + b x0) ± tα/2 · s·√(1/n + (x0 − x̄)²/Sxx), where tα/2 is the α/2 critical value for the t-distribution with n − 2 degrees of freedom.

Prediction Limits for new values of the dependent variable y. An important application of the regression line is prediction: knowing the value of x (x0), what is the value of y? The expected value of y when x = x0 is α + βx0; this in turn can be estimated by ŷ = a + b x0.

The predictor ŷ = a + b x0 gives only a single value for y. A more appropriate piece of information would be a range of values: a range that has a fixed probability of capturing the value of y, that is, a (1 − α)100% prediction interval for y.

(1 − α)100% Prediction Limits for y when x = x0: (a + b x0) ± tα/2 · s·√(1 + 1/n + (x0 − x̄)²/Sxx), where tα/2 is the α/2 critical value for the t-distribution with n − 2 degrees of freedom.
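A sketch (not from the slides) of how the confidence limits for α + βx0 and the prediction limits for a new y at x = x0 might be computed; the names are illustrative:

import numpy as np
from scipy import stats

def limits_at_x0(x, y, x0, alpha=0.05):
    # Confidence limits for a + b*x0 and prediction limits for a new y at x0.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    Sxx = np.sum((x - x.mean()) ** 2)
    Syy = np.sum((y - y.mean()) ** 2)
    Sxy = np.sum((x - x.mean()) * (y - y.mean()))
    b = Sxy / Sxx
    a = y.mean() - b * x.mean()
    s = np.sqrt((Syy - Sxy ** 2 / Sxx) / (n - 2))
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
    y0 = a + b * x0
    half_conf = t_crit * s * np.sqrt(1 / n + (x0 - x.mean()) ** 2 / Sxx)
    half_pred = t_crit * s * np.sqrt(1 + 1 / n + (x0 - x.mean()) ** 2 / Sxx)
    return (y0 - half_conf, y0 + half_conf), (y0 - half_pred, y0 + half_pred)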

Example. In this example we are studying building fires in a city and are interested in the relationship between: X = the distance from the building that raises the alarm to the closest fire hall, and Y = cost of the damage (in $1000s). The data were collected on n = 15 fires.

The Data

Scatter Plot

Computations

Computations Continued

Computations Continued

Computations Continued

95% Confidence Limits for the slope β: 4.07 to 5.77, using t.025 = 2.160, the critical value for the t-distribution with 13 degrees of freedom.

95% Confidence Limits for the intercept α: 7.21 to 13.35, using t.025 = 2.160, the critical value for the t-distribution with 13 degrees of freedom.

Least Squares Line: y = 4.92x + 10.28

(1 − α)100% Confidence Limits for α + βx0: (a + b x0) ± tα/2 · s·√(1/n + (x0 − x̄)²/Sxx), where tα/2 is the α/2 critical value for the t-distribution with n − 2 degrees of freedom.

95% Confidence Limits for a + b x0 :

95% Confidence Limits for a + b x0

(1 − α)100% Prediction Limits for y when x = x0: (a + b x0) ± tα/2 · s·√(1 + 1/n + (x0 − x̄)²/Sxx), where tα/2 is the α/2 critical value for the t-distribution with n − 2 degrees of freedom.

95% Prediction Limits for y when x = x0

95% Prediction Limits for y when x = x0

Linear Regression Summary: Hypothesis Testing and Estimation

(1 − α)100% Confidence Limits for the slope β: b ± tα/2 · s/√Sxx, where tα/2 is the critical value for the t-distribution with n − 2 degrees of freedom.

Testing the slope, H0: β = β0 (commonly β0 = 0). The test statistic t = (b − β0)/(s/√Sxx) has a t distribution with df = n − 2 if H0 is true.

(1 − α)100% Confidence Limits for the intercept α: a ± tα/2 · s·√(1/n + x̄²/Sxx), where tα/2 is the critical value for the t-distribution with n − 2 degrees of freedom.

Testing the intercept, H0: α = α0. The test statistic t = (a − α0)/(s·√(1/n + x̄²/Sxx)) has a t distribution with df = n − 2 if H0 is true.

(1 − α)100% Confidence Limits for α + βx0: (a + b x0) ± tα/2 · s·√(1/n + (x0 − x̄)²/Sxx), where tα/2 is the α/2 critical value for the t-distribution with n − 2 degrees of freedom.

(1 − α)100% Prediction Limits for y when x = x0: (a + b x0) ± tα/2 · s·√(1 + 1/n + (x0 − x̄)²/Sxx), where tα/2 is the α/2 critical value for the t-distribution with n − 2 degrees of freedom.

Correlation

Definition. The statistic r = Sxy / √(Sxx · Syy) is called Pearson's correlation coefficient.

Properties: (1) −1 ≤ r ≤ 1, i.e. |r| ≤ 1 and r² ≤ 1. (2) |r| = 1 (r = +1 or −1) if the points (x1, y1), (x2, y2), …, (xn, yn) lie along a straight line (r = +1 for positive slope, r = −1 for negative slope).

The test for independence (zero correlation). H0: X and Y are independent; HA: X and Y are correlated. The test statistic: t = r√(n − 2)/√(1 − r²). The Critical Region: reject H0 if |t| > tα/2 (df = n − 2). This is a two-tailed critical region; the critical region could also be one-tailed.
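A sketch in Python (illustrative, not from the slides) of Pearson's r and the independence test above:

import numpy as np
from scipy import stats

def correlation_test(x, y, alpha=0.05):
    # Pearson's r and the t-test of H0: X and Y are independent (zero correlation).
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    Sxx = np.sum((x - x.mean()) ** 2)
    Syy = np.sum((y - y.mean()) ** 2)
    Sxy = np.sum((x - x.mean()) * (y - y.mean()))
    r = Sxy / np.sqrt(Sxx * Syy)
    t_stat = r * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
    reject = abs(t_stat) > t_crit          # True means reject independence
    return r, t_stat, reject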

Example. In this example we are studying building fires in a city and are interested in the relationship between: X = the distance from the building that raises the alarm to the closest fire hall, and Y = cost of the damage (in $1000s). The data were collected on n = 15 fires.

The Data

Scatter Plot

Computations

Computations Continued

Computations Continued

The correlation coefficient and the test for independence (zero correlation). The test statistic is t = r√(n − 2)/√(1 − r²). We reject H0: independence if |t| > t0.025 = 2.160 (df = 13). Here |t| exceeds the critical value, so H0: independence is rejected.

Relationship between Regression and Correlation

Recall that b = Sxy/Sxx. Also, since r = Sxy/√(Sxx·Syy), we have b = r·√(Syy/Sxx) = r·(sy/sx). Thus the slope of the least squares line is simply the ratio of the standard deviations times the correlation coefficient.

The test for independence (zero correlation). H0: X and Y are independent; HA: X and Y are correlated. Uses the test statistic t = r√(n − 2)/√(1 − r²). Note: since b = r·√(Syy/Sxx) and s² = Syy(1 − r²)/(n − 2), this statistic is algebraically identical to the statistic t = b/(s/√Sxx) used for testing the slope.

The two tests, the test for independence (zero correlation), H0: X and Y are independent versus HA: X and Y are correlated, and the test for zero slope, H0: β = 0 versus HA: β ≠ 0, are equivalent.

The test statistic for independence, t = r√(n − 2)/√(1 − r²), is identical in value to the test statistic for zero slope.

Regression (in general)

In many experiments we will have collected data on a single variable Y (the dependent variable) and on p (say) other variables X1, X2, X3, …, Xp (the independent variables). One is interested in determining a model that describes the relationship between Y (the response, or dependent, variable) and X1, X2, …, Xp (the predictor, or independent, variables). This model can be used for: prediction, and controlling Y by manipulating X1, X2, …, Xp.

The Model is an equation of the form Y = f(X1, X2, …, Xp | θ1, θ2, …, θq) + ε, where θ1, θ2, …, θq are unknown parameters of the function f and ε is a random disturbance (usually assumed to have a normal distribution with mean 0 and standard deviation σ).

Example: Y = blood pressure, X = age. The model is Y = α + βX + ε, thus θ1 = α and θ2 = β. This model is called the Simple Linear Regression Model: Y = α + βX.

Example: Y = average of the five best times for running the 100 m, X = the year. The model is Y = α e^(−βX) + γ + ε, thus θ1 = α, θ2 = β and θ3 = γ. This model is called the Exponential Regression Model: Y = α e^(−βX) + γ.

Example: Y = gas mileage (mpg) of a car brand, X1 = engine size, X2 = horsepower, X3 = weight. The model is Y = β0 + β1X1 + β2X2 + β3X3 + ε. This model is called the Multiple Linear Regression Model.

The Multiple Linear Regression Model

In Multiple Linear Regression we assume the following model: Y = β0 + β1X1 + β2X2 + … + βpXp + ε. This model is called the Multiple Linear Regression Model. Here β0, β1, β2, …, βp are unknown parameters and ε is a random disturbance assumed to have a normal distribution with mean 0 and standard deviation σ.

The importance of the Linear model. 1. It is the simplest form of a model in which each independent variable has some effect on the dependent variable Y. When fitting models to data one tries to find the simplest form of a model that still adequately describes the relationship between the dependent variable and the independent variables. The linear model is often the first model to be fitted, and it is abandoned only if it turns out to be inadequate.

2. In many instances a linear model is the most appropriate model to describe the dependence relationship between the dependent variable and the independent variables. This will be true if the dependent variable increases at a constant rate as any of the independent variables is increased while holding the other independent variables constant.

3. Many non-linear models can be linearized (put into the form of a linear model by appropriately transforming the dependent variable and/or any or all of the independent variables). This important fact, that many non-linear models are linearizable, ensures the wide utility of the linear model.

An Example. The following data come from an experiment investigating the source from which corn plants in various soils obtain their phosphorus. The concentration of inorganic phosphorus (X1) and the concentration of organic phosphorus (X2) were measured in the soil of n = 18 test plots. In addition, the phosphorus content (Y) of corn grown in the soil was also measured. The data are displayed below:

Inorganic Phosphorous X1 Organic X2 Plant Available Y 0.4 53 64 12.6 58 51 23 60 10.9 37 76 3.1 19 71 23.1 46 96 0.6 34 61 50 77 4.7 24 54 21.6 44 93 1.7 65 56 95 9.4 81 1.9 36 10.1 31 26.8 168 11.6 29 29.9 99

Coefficients: Intercept 56.2510241 (b0); X1 1.78977412 (b1); X2 0.08664925 (b2). Equation: Y = 56.2510241 + 1.78977412 X1 + 0.08664925 X2

The Multiple Linear Regression Model

In Multiple Linear Regression we assume the following model: Y = β0 + β1X1 + β2X2 + … + βpXp + ε. This model is called the Multiple Linear Regression Model. Here β0, β1, β2, …, βp are unknown parameters and ε is a random disturbance assumed to have a normal distribution with mean 0 and standard deviation σ.

Summary of the Statistics used in Multiple Regression

The Least Squares Estimates: the values of b0, b1, …, bp that minimize the residual sum of squares Σ (yi − b0 − b1x1i − … − bpxpi)².

The Analysis of Variance Table entries: a) Adjusted Total Sum of Squares, SSTotal = Σ(yi − ȳ)²; b) Residual Sum of Squares, SSError = Σ(yi − ŷi)²; c) Regression Sum of Squares, SSReg = Σ(ŷi − ȳ)². Note: SSTotal = SSReg + SSError.

The Analysis of Variance Table

Source       Sum of Squares   d.f.        Mean Square                     F
Regression   SSReg            p           MSReg = SSReg/p                 MSReg/s²
Error        SSError          n − p − 1   MSError = SSError/(n−p−1) = s²
Total        SSTotal          n − 1

Uses: 1. To estimate σ² (the error variance): use s² = MSError. 2. To test the hypothesis H0: β1 = β2 = … = βp = 0: use the test statistic F = MSReg/MSError and reject H0 if F > Fα(p, n − p − 1).

3. To compute other statistics that are useful in describing the relationship between Y (the dependent variable) and X1, X2, …, Xp (the independent variables). a) R² = the coefficient of determination = SSReg/SSTotal = the proportion of variance in Y explained by X1, X2, …, Xp; 1 − R² = SSError/SSTotal = the proportion of variance in Y that is left unexplained by X1, X2, …, Xp.

b) Ra² = “R² adjusted” for degrees of freedom = 1 − [the proportion of variance in Y left unexplained by X1, X2, …, Xp, adjusted for d.f.] = 1 − (SSError/(n − p − 1)) / (SSTotal/(n − 1)).

c) R = √R² = the multiple correlation coefficient of Y with X1, X2, …, Xp = the maximum correlation between Y and a linear combination of X1, X2, …, Xp. Comment: the statistics F, R², Ra² and R are equivalent statistics.
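The quantities in this section can be reproduced with a short Python sketch (illustrative only; the design-matrix layout and names are assumptions, not the slides' SPSS output):

import numpy as np
from scipy import stats

def multiple_regression(X, y, alpha=0.05):
    # X: n x p matrix of predictors, y: n-vector. Least squares fit, ANOVA, R^2, F.
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])           # add intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)   # least squares estimates
    y_hat = Xd @ beta
    ss_total = np.sum((y - y.mean()) ** 2)
    ss_error = np.sum((y - y_hat) ** 2)
    ss_reg = ss_total - ss_error
    ms_reg = ss_reg / p
    ms_error = ss_error / (n - p - 1)               # = s^2
    F = ms_reg / ms_error
    F_crit = stats.f.ppf(1 - alpha, p, n - p - 1)
    r2 = ss_reg / ss_total                          # coefficient of determination
    r2_adj = 1 - ms_error / (ss_total / (n - 1))    # adjusted R^2
    R = np.sqrt(r2)                                 # multiple correlation coefficient
    return beta, F, F > F_crit, r2, r2_adj, R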

Using Statistical Packages To perform Multiple Regression

Using SPSS Note: The use of another statistical package such as Minitab is similar to using SPSS

After starting the SPSS program the following dialogue box appears:

If you select Opening an existing file and press OK the following dialogue box appears

The following dialogue box appears:

If the variable names are in the file, ask it to read the names. If you do not specify the Range, the program will identify the Range. Once you click OK, two windows will appear.

One that will contain the output:

The other containing the data:

To perform any statistical Analysis select the Analyze menu:

Then select Regression and Linear.

The following Regression dialogue box appears

Select the Dependent variable Y.

Select the Independent variables X1, X2, etc.

If you select the Method - Enter.

All variables will be put into the equation. There are also several other methods that can be used: Forward Selection, Backward Elimination, Stepwise Regression.

Forward Selection. This method starts with no variables in the equation. It carries out statistical tests on variables not in the equation to see which have a significant effect on the dependent variable, and adds the most significant. It continues until all variables not in the equation have no significant effect on the dependent variable.

Backward Elimination. This method starts with all variables in the equation. It carries out statistical tests on variables in the equation to see which have no significant effect on the dependent variable, and deletes the least significant. It continues until all variables in the equation have a significant effect on the dependent variable.

Stepwise Regression (uses both forward and backward techniques). This method starts with no variables in the equation. It carries out statistical tests on variables not in the equation to see which have a significant effect on the dependent variable, and then adds the most significant. After a variable is added it checks to see if any variables added earlier can now be deleted. It continues until all variables not in the equation have no significant effect on the dependent variable.

All of these methods are procedures for attempting to find the best equation The best equation is the equation that is the simplest (not containing variables that are not important) yet adequate (containing variables that are important)
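Outside SPSS, forward selection can be sketched as follows (a rough illustration using statsmodels and a p-value entry criterion; the threshold and function names are assumptions, not SPSS's exact algorithm):

import statsmodels.api as sm

def forward_selection(df, response, alpha_enter=0.05):
    # df: a pandas DataFrame holding the response and candidate predictors.
    # Greedily add the most significant predictor until none qualifies.
    remaining = [c for c in df.columns if c != response]
    selected = []
    while remaining:
        pvals = {}
        for cand in remaining:
            X = sm.add_constant(df[selected + [cand]])
            model = sm.OLS(df[response], X).fit()
            pvals[cand] = model.pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] < alpha_enter:
            selected.append(best)
            remaining.remove(best)
        else:
            break
    return selected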

Once the dependent variable, the independent variables and the Method have been selected, press OK and the Analysis will be performed.

The output will contain the following table. R² and R² adjusted measure the proportion of variance in Y that is explained by X1, X2, X3, etc. (67.6% and 67.3%). R is the multiple correlation coefficient (the maximum correlation between Y and a linear combination of X1, X2, X3, etc.).

The next table is the Analysis of Variance Table. The F test tests whether the regression coefficients of the predictor variables are all zero, i.e. whether none of the independent variables X1, X2, X3, etc. has any effect on Y.

The final table in the output gives the estimates of the regression coefficients, their standard errors and the t tests for testing whether they are zero. Note: engine size has no significant effect on Mileage.

The estimated equation is read from the coefficients table shown below:

Note that the equation implies Mileage decreases: with increases in Engine Size (not significant, p = 0.432), with increases in Horsepower (significant, p = 0.000), and with increases in Weight (significant, p = 0.000).

Logistic regression

Recall the simple linear regression model: y = β0 + β1x + ε, where we are trying to predict a continuous dependent variable y from a continuous independent variable x. This model can be extended to the multiple linear regression model: y = β0 + β1x1 + β2x2 + … + βpxp + ε. Here we are trying to predict a continuous dependent variable y from several continuous independent variables x1, x2, …, xp.

Now suppose the dependent variable y is binary: it takes on two values, “Success” (1) or “Failure” (0). We are interested in predicting y from a continuous independent variable x. This is the situation in which Logistic Regression is used.

Example. We are interested in how the success (y) of a new antibiotic cream in curing acne depends on the amount (x) that is applied daily. The values of y are 1 (Success) or 0 (Failure). The values of x range over a continuum.

The Logistic Regression Model. Let p denote P[y = 1] = P[Success]; this quantity will increase with the value of x. The ratio p/(1 − p) is called the odds ratio; this quantity will also increase with the value of x, ranging from zero to infinity. The quantity ln[p/(1 − p)] is called the log odds ratio.

Example: odds ratio and log odds ratio. Suppose a die is rolled and Success = “roll a six”, so p = 1/6. The odds ratio is (1/6)/(5/6) = 1/5 = 0.2. The log odds ratio is ln(0.2) ≈ −1.61.

The Logistic Regression Model assumes the log odds ratio is linearly related to x, i.e. ln[p/(1 − p)] = β0 + β1x. In terms of the odds ratio: p/(1 − p) = e^(β0 + β1x).

The Logistic Regression Model. Solving for p in terms of x: p = e^(β0 + β1x) / (1 + e^(β0 + β1x)), or equivalently p = 1 / (1 + e^(−(β0 + β1x))).
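A small Python sketch (not from the slides; parameter values are purely illustrative) of the curve of p versus x, and of the facts used in the interpretation slides that follow:

import numpy as np

def logistic_p(x, b0, b1):
    # P[y = 1] as a function of x under the logistic regression model.
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))

# p = 0.50 at x = -b0/b1, and the slope of p there is b1/4.
b0, b1 = -4.0, 2.0                  # illustrative parameter values
x_half = -b0 / b1                   # x where p = 0.50 (here 2.0)
print(logistic_p(x_half, b0, b1))   # prints 0.5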

Interpretation of the parameter β0 (determines the intercept of the curve of p versus x). [Figure: p versus x for different values of β0.]

Interpretation of the parameter β1 (together with β0, it determines where p = 0.50): p = 0.50 when x = −β0/β1.

Also, β1/4 is the rate of increase in p with respect to x when p = 0.50 (since dp/dx = β1 p(1 − p), which equals β1/4 at p = 0.50).

Interpretation of the parameter β1 (determines the slope of the curve where p = 0.50). [Figure: p versus x for different values of β1.]

The data will, for each case, consist of: a value for x, the continuous independent variable, and a value for y (1 or 0, Success or Failure). There is a total of n = 250 cases.

Estimation of the parameters. The parameters are estimated by maximum likelihood, which generally requires a statistical package such as SPSS.
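For readers not using SPSS, maximum likelihood estimation of β0 and β1 can be sketched with statsmodels (an illustration with assumed argument names, not the slides' SPSS output):

import numpy as np
import statsmodels.api as sm

def fit_simple_logistic(x, y):
    # x: continuous predictor, y: 0/1 outcomes; returns ML estimates (b0, b1).
    X = sm.add_constant(np.asarray(x, dtype=float))   # design matrix with intercept
    fit = sm.Logit(np.asarray(y), X).fit(disp=0)      # maximum likelihood estimation
    return fit.params[0], fit.params[1]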

Using SPSS to perform Logistic regression Open the data file:

Choose from the menu: Analyze -> Regression -> Binary Logistic

The following dialogue box appears Select the dependent variable (y) and the independent variable (x) (covariate). Press OK.

Here is the output The Estimates and their S.E.

The parameter Estimates

Interpretation of the parameter β0 (determines the intercept). Interpretation of the parameter β1 (together with β0, determines where p = 0.50): p = 0.50 when x = −β0/β1.

Another interpretation of the parameter β1: β1/4 is the rate of increase in p with respect to x when p = 0.50.

The Logistic Regression Model. The dependent variable y is binary: it takes on two values, “Success” (1) or “Failure” (0). We are interested in predicting y from a continuous independent variable x.

The Logistic Regression Model. Let p denote P[y = 1] = P[Success]; this quantity will increase with the value of x. The ratio p/(1 − p) is called the odds ratio; this quantity will also increase with the value of x, ranging from zero to infinity. The quantity ln[p/(1 − p)] is called the log odds ratio.

The Logistic Regression Model assumes the log odds ratio is linearly related to x, i.e. ln[p/(1 − p)] = β0 + β1x. In terms of the odds ratio: p/(1 − p) = e^(β0 + β1x).

The Logistic Regression Model in terms of p: p = e^(β0 + β1x) / (1 + e^(β0 + β1x)).

[Figure: the graph of p versus x, an S-shaped (sigmoid) curve.]

The Multiple Logistic Regression model

Here we attempt to predict the outcome of a binary response variable Y from several independent variables X1, X2 , … etc

Multiple Logistic Regression: an example. In this example we are interested in determining the risk that infants who were born prematurely will develop BPD (bronchopulmonary dysplasia). More specifically, we are interested in developing a predictive model that will determine the probability of developing BPD from X1 = gestational age and X2 = birth weight.

For n = 223 infants in a prenatal ward the following measurements were determined: X1 = gestational age (weeks), X2 = birth weight (grams) and Y = presence of BPD.
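A hedged sketch of how this kind of multiple logistic regression fit might be carried out in Python (the argument names ga, bw and bpd are placeholders for whatever the actual data file uses; this is not the slides' SPSS run):

import numpy as np
import statsmodels.api as sm

def fit_bpd_model(ga, bw, bpd):
    # ga: gestational age (weeks), bw: birth weight (grams), bpd: 1 if BPD present, else 0.
    X = sm.add_constant(np.column_stack([ga, bw]))   # intercept, X1, X2
    fit = sm.Logit(np.asarray(bpd), X).fit(disp=0)   # maximum likelihood estimation
    b0, b1, b2 = fit.params
    # Estimated risk: p = 1 / (1 + exp(-(b0 + b1*GA + b2*BW)))
    return fit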

The data

The results

Graph: Showing Risk of BPD vs GA and BrthWt

Non-Parametric Statistics