1 Chapter 8 – Regression 2: Basic review, estimating the standard error of the estimate, and shortcut problems and solutions.



2 You can use the regression equation when: (1) the relationship between X and Y is linear, (2) r falls outside the CI.95 around 0.000 and is therefore a statistically significant correlation, and (3) X is within the range of X scores observed in your sample.

3 Simple problems using the regression equation: t_Y' = r × t_X
t_Y' = .150 × 0.40 = 0.06
t_Y' = .40 × -1.70 = -0.68
t_Y' = .40 × 1.70 = 0.68

4 Predictions from raw data:
1. Calculate the t score for X.
2. Solve the regression equation.
3. Transform the estimated t score for Y into a raw score.

5 Predicting from and to raw scores. Problem: Estimate the midterm point total given a study time of 400 minutes. It is given that the estimated mean of study time is 560 minutes and the estimated standard deviation is 216.02. It is given that the estimated mean of midterm points is 76 and their estimated standard deviation is 7.98. The estimated correlation coefficient is .851.

6 Predicting from and to raw scores. 1. Translate raw X to a t_X score: t_X = (X - X̄)/s_X = (400 - 560)/216.02 = -0.74

7 Use the regression equation. 2. Find the value of t_Y': t_Y' = r × t_X = .851 × -0.74 = -0.63

8 Translate t_Y' to raw Y': Y' = Ȳ + (t_Y' × s_Y) = 76 + (-0.63 × 7.98) = 70.97
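The three-step calculation above can be sketched in a few lines of Python (the function name `predict_raw` is mine, not from the slides):

```python
# Raw-score prediction via the t-score regression equation (t_Y' = r * t_X),
# using the figures from the study-time example above.

def predict_raw(x, x_bar, s_x, y_bar, s_y, r):
    """Translate raw X to t_X, apply t_Y' = r * t_X, translate back to raw Y'."""
    t_x = (x - x_bar) / s_x          # step 1: raw X to a t score
    t_y_prime = r * t_x              # step 2: the regression equation in t scores
    return y_bar + t_y_prime * s_y   # step 3: t_Y' back to a raw Y' score

# Study time: mean 560 min, s = 216.02; midterm points: mean 76, s = 7.98; r = .851
y_pred = predict_raw(400, 560, 216.02, 76, 7.98, 0.851)   # close to 70.97
```

Without the intermediate rounding the slides use, the result is 70.97 to two decimals.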

9 A caution:
- Never assume that a correlation will stay linear outside of the range you originally observed.
- Therefore, never use the regression equation to make predictions from X values outside of the range you found in your sample.
- Example: measuring the heights of children.

10 Reviewing the r table and reporting the results of calculating r from a random sample

11 How the r table is laid out: the important columns.
- Column 1 of the r table shows degrees of freedom for correlation and regression (df_REG): df_REG = n_P - 2.
- Column 2 shows the CI.95 for varying degrees of freedom.
- Column 3 shows the absolute value of the r that falls just outside the CI.95. Any r this far or further from 0.000 falsifies the hypothesis that rho = 0.000 and can be used in the regression equation to make predictions of Y scores for people who were not in the original sample but who were part of the population from which the sample is drawn.

12 [r table slide; the numeric columns did not survive the transcript. The annotations:] Find your degrees of freedom (n_P - 2) in the df column. If r falls within the 95% CI around 0.000, the result is not significant: you cannot reject the null hypothesis, and you must assume that rho = 0.000. Does the absolute value of r equal or exceed the value in the critical-value column? Then r is significant with alpha = .05. If r is significant, you can consider it an unbiased, least squares estimate of rho and use it in the regression equation to estimate Y scores.

13 Example: Anchovy pizza and horror films, rho = 0.000. H_1: People who enjoy food with strong flavors also enjoy other strong sensations. H_0: There is no relationship between enjoying food with strong flavors and enjoying other strong sensations. [Data slide: ratings of anchovies and horror films on a 0-9 scale.] Can we reject the null hypothesis?

14 Can we reject the null hypothesis? [Scatterplot of horror film ratings against pizza ratings.]

15 Can we reject the null hypothesis? We do the math and we find that: r = .352, df = 8.

16 [r table slide; the table values did not survive the transcript.]

17 This finding falls within the CI.95 around 0.000.
- We call such findings "nonsignificant."
- Nonsignificant is abbreviated n.s.
- We would report this finding as follows: r(8) = 0.352, n.s.
- Given that it fell inside the CI.95, we must assume that rho actually equals zero and that our sample r is .352 instead of 0.000 solely because of sampling fluctuation.
- We go back to predicting that everyone will score at the mean of Y.

18 How to report a significant r.
- For example, let's say that you had a sample (n_P = 30) and r = -.400.
- Looking under n_P - 2 = 28 df_REG, we find the interval consistent with the null is between -.361 and +.361.
- So we are outside the CI.95 for rho = 0.000.
- We would write that result as r(28) = -.400, p < .05.
- That tells you the df_REG, the value of r, and that you can expect an r that far from 0.000 five or fewer times in 100 when rho = 0.000.
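The reporting convention just described can be captured in a small helper. The function `report_r` is hypothetical (not from the slides), and the two critical values must be looked up in an r table for the given df_REG; .361 (.05) and .463 (.01) are the standard values for 28 df:

```python
# Hypothetical helper that formats a Pearson r result in the slides' report
# style: r(df_REG) = value, followed by n.s., p<.05, or p<.01.

def report_r(r, n_p, crit_05, crit_01):
    df = n_p - 2                      # df_REG = n_P - 2
    if abs(r) >= crit_01:
        tag = "p<.01"                 # beyond the Column 4 value
    elif abs(r) >= crit_05:
        tag = "p<.05"                 # outside the CI.95 around 0.000
    else:
        tag = "n.s."                  # inside the CI.95 around 0.000
    return f"r({df})={r:.3f}, {tag}"

# df_REG = 28: critical values .361 (.05) and .463 (.01) from a standard r table
print(report_r(-0.400, 30, 0.361, 0.463))   # r(28)=-0.400, p<.05
```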

19 Then there is Column 4.
- Column 4 shows the values that lie outside a CI.99.
- (The CI.99 itself isn't shown like the CI.95 in Column 2 because it isn't important enough.)
- However, Column 4 gives you bragging rights.
- If your r is as far or further from 0.000 as the number in Column 4, you can say there is one or fewer chance in 100 of an r being this far from zero (p < .01).
- For example, let's say that you had a sample (n_P = 30) and r = -.525.
- The critical value at .01 is .463. You are further from 0.000 than that, so you can brag.
- You write that result as r(28) = -.525, p < .01.

20 To summarize:
- If r falls inside the CI.95 around 0.000, it is nonsignificant (n.s.) and you can't use the regression equation (e.g., r(28) = .300, n.s.).
- If r falls outside the CI.95, but not as far from 0.000 as the number in Column 4, you have a significant finding and can use the regression equation (e.g., r(28) = -.400, p < .05).
- If r is as far or further from zero as the number in Column 4, you can use the regression equation and brag while doing it (e.g., r(28) = -.525, p < .01).

21 [r table slide with the .05 and .01 columns marked; the table values did not survive the transcript.]

22 Can you reject H_0? r = .386, n_P = 19, df_REG = 17. [r table excerpt not recoverable from the transcript.]

23 Can you reject H_0? r = -.552, n_P = 47, df_REG = 45. [r table excerpt not recoverable from the transcript.]

24 How much better than the mean can we guess?

25 Improved prediction.
- If we can use the regression equation rather than the mean to make individualized estimates of Y scores, how much better are our estimates?
- We are making predictions about scores on the Y variable from our knowledge of the statistically significant correlation between X & Y and the fact that we know someone's X score.
- The average unsquared error when we predict that everyone will score at the mean of Y equals s_Y, the ordinary standard deviation of Y.
- How much better than that can we do?

26 Estimating the standard error of the estimate the (very) long way.
- Calculate the correlation (which includes calculating s for Y).
- If the correlation is significant, you can use the regression equation to make individualized predictions of scores on the Y variable.
- The average unsquared error of prediction when you do that is called the estimated standard error of the estimate.

27 Example for prediction error.
- A study was performed to investigate whether the quality of an image affects reading time.
- The experimental hypothesis was that reduced quality would slow down reading time.
- Quality was measured on a scale of 1 to 10. Reading time was in seconds.

28 Quality vs. reading time data: compute the correlation. [Data table: quality on a 1-10 scale, reading time in seconds; the values did not survive the transcript.] Is there a relationship? Check for linearity. Compute r.

29 Calculate t scores for X. ΣX = 39.25, n = 7, X̄ = 5.61. SS_W = Σ(X - X̄)² = 4.73. MS_W = 4.73/(7 - 1) = 0.79. s_X = 0.89. t_X = (X - X̄)/s_X.

30 Calculate t scores for Y. ΣY = 52.5, n = 7, Ȳ = 7.50. SS_W = Σ(Y - Ȳ)² = 3.78. MS_W = 3.78/(7 - 1) = 0.63. s_Y = 0.79. t_Y = (Y - Ȳ)/s_Y.

31 Plot the t scores. [Scatterplot of t_Y against t_X; the points did not survive the transcript.]

32 t score plot with best fitting line: linear? YES!

33 Calculate r. Σ(t_X - t_Y)² = 21.48. Σ(t_X - t_Y)²/(n_P - 1) = 21.48/6 = 3.580. r = 1 - (1/2 × 3.580) = 1 - 1.790 = -0.790.
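The formula used here, r = 1 - (1/2) × Σ(t_X - t_Y)²/(n_P - 1), is algebraically equivalent to the usual Pearson formula whenever the t scores are computed with the n - 1 (sample) standard deviation. A quick sketch with made-up data checks the equivalence:

```python
# Check that the slides' difference-score formula for r matches the usual
# Pearson computation. The data below are made up for illustration.
import math

def t_scores(xs):
    """t scores using the n-1 (sample) standard deviation, as on the slides."""
    n = len(xs)
    mean = sum(xs) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return [(x - mean) / s for x in xs]

def r_from_t(xs, ys):
    """r = 1 - (1/2) * sum((t_X - t_Y)^2) / (n_P - 1)."""
    tx, ty = t_scores(xs), t_scores(ys)
    n = len(xs)
    return 1 - 0.5 * sum((a - b) ** 2 for a, b in zip(tx, ty)) / (n - 1)

def r_pearson(xs, ys):
    """The usual Pearson r: average cross-product of t scores."""
    tx, ty = t_scores(xs), t_scores(ys)
    return sum(a * b for a, b in zip(tx, ty)) / (len(xs) - 1)

xs = [1, 3, 4, 6, 8]
ys = [9, 7, 6, 4, 1]   # strongly negative relationship, like the slide's example
```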

34 Check whether r is significant. r = -.790, df = n_P - 2 = 5, α = .05. Look in the r table: with 5 df_REG, the CI.95 goes from -.754 to +.754. r(5) = -.790, p < .05. r is significant!

35 We can calculate Y' for every raw X. [Table of X values and their regression-equation estimates Y'; the values did not survive the transcript.]

36 Can we show mathematically that regression estimates are better than mean estimates? To calculate the standard deviation, we take the deviations of Y from the mean of Y (Ȳ = 7.50), square them, add them up, divide by degrees of freedom, and then take the square root. To calculate the standard error of the estimate, s_EST, we take the deviations of each raw Y score from its regression equation estimate (Y'), square them, add them up, divide by degrees of freedom, and take the square root. We expect, of course, that there will be less error if we use regression.
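The long-way procedure just described can be sketched as follows. The data are made up (the slide's reading-time scores did not survive the transcript), so this shows the method, not the slide's numbers:

```python
# The long way: predict each Y from the regression line, then take
# sqrt(sum of squared residuals / (n - 2)). Made-up data for illustration.
import math

xs = [2.0, 4.0, 5.0, 6.0, 8.0]
ys = [9.0, 8.0, 6.5, 5.0, 3.0]
n = len(xs)

mx, my = sum(xs) / n, sum(ys) / n
sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))
sy = math.sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
r = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)

# Regression estimate for each X: Y' = Ybar + r * s_Y * (X - Xbar) / s_X
preds = [my + r * sy * (x - mx) / sx for x in xs]
ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))   # sum of squared residuals
s_est = math.sqrt(ss_res / (n - 2))                     # estimated standard error of the estimate
```

As expected, `s_est` comes out smaller than `sy`, the error of predicting everyone at the mean.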

37 Estimated standard error of the estimate. SS_RES = Σ(Y - Y')² = 1.49. MS_RES = 1.49/(7 - 2) = 0.30. s_EST = √MS_RES = 0.546.

38 How much better? s_EST = 0.546 versus s_Y = 0.79: about 31% less error when we use the regression equation instead of the mean to predict.

39 Mathematical magic. There is usually an alternative formula for calculating statistics that is easier to perform. We went through a lot of extra steps to calculate s_EST = 0.546. It is not necessary to calculate all of the Y's.

40 Another way to phrase it: how much error did we get rid of?
- Treat it as a weight loss problem.
- If Jack is 30 pounds overweight and he loses 40% of it, how much is he still overweight?
- He lost .400 × 30 pounds = 12 pounds.
- He has 30 - 12 = 18 pounds left to lose.

41 SS_Y = error to start; r² = proportion of error lost.
- SS_Y is the total amount of error we start with when predicting scores on Y. It is the amount of error when everyone is predicted to score at the mean.
- The proportion of error you get rid of by using the regression equation as your predictor equals Pearson's correlation coefficient squared (r²).

42 To get the total error left, find how much you got rid of, then subtract from what you started with.
- Amount you got rid of: SS_Y × r².
- Amount left: SS_RES = SS_Y - (SS_Y × r²).
- Average amount of squared error left: MS_RES = SS_RES/df_REG = SS_RES/(n_P - 2).
- s_EST = the square root of MS_RES.

43 Computing s_EST the easier way! We already knew that SS_Y = 3.80 and r = -0.79. SS_RES = SS_Y - (SS_Y × r²) = 3.80 - (3.80 × (-0.79)²) = 1.43. MS_RES = 1.43/(7 - 2) = 0.286. s_EST = √0.286 = 0.535.
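The shortcut can be written as a small function (the name `s_est` is mine); it reproduces the figures above from just SS_Y, r, and n_P:

```python
# Shortcut from the slides: SS_RES = SS_Y - SS_Y * r^2,
# MS_RES = SS_RES / (n_P - 2), s_EST = sqrt(MS_RES).
import math

def s_est(ss_y, r, n_p):
    ss_res = ss_y - ss_y * r ** 2   # error left after regression
    ms_res = ss_res / (n_p - 2)     # average squared error left
    return math.sqrt(ms_res)        # average unsquared error left

# Reading-time example: SS_Y = 3.80, r = -.79, n_P = 7
value = s_est(3.80, -0.79, 7)       # about 0.535
```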

44 How much better? s_EST = 0.535 versus s_Y = 0.79: about 32% less error when we use the regression equation instead of the mean to predict. The small difference from the figure we got when calculating the long way is due to rounding error.

45 Stating the obvious:
- The estimated standard deviation (s) was the estimated average unsquared distance of scores in the population from mu.
- The estimated standard error of the estimate (s_EST) is the estimated average unsquared distance of scores in the population from the regression-equation-based predicted Y scores.
- Both reflect the error of prediction. Using the regression equation individualizes prediction and, if r is significant, leads to less error.

46 Do one yourself. Assume the original sum of squares for error is SS_Y = 420.00, n_P = 22, and the sum of the squared differences between the t_X and t_Y scores is 12.60.
- What is r?
- Is r statistically significant? Write the results as you would in a report.
- What is the estimated average unsquared distance of Y scores from the regression line?
- What percent improvement is obtained when s is compared to s_EST?

47 Answers: What is r? Is it significant? Compute r: Σ(t_X - t_Y)² = 12.60. Σ(t_X - t_Y)²/(n_P - 1) = 12.60/21 = .600. r = 1 - (1/2 × .600) = .700. Is r significant? r(20) = .700, p < .01.

48 What is the estimated average unsquared distance of scores in the population from the regression line? That is the same as asking, "What is the estimated standard error of the estimate?" SS_RES = SS_Y - (SS_Y × r²) = 420.00 - [420.00 × (0.70)²] = 214.20. MS_RES = 214.20/20 = 10.71. s_EST = √10.71 = 3.27.

49 What percent improvement is obtained when s is compared to s_EST? MS_W = SS_W/df = 420.00/21 = 20.00. s = √20.00 = 4.47.

50 Last and (perhaps) least:
- Proportion improvement = (s - s_EST)/s.
- (4.47 - 3.27)/4.47 = .268.
- Percent improvement = proportion improvement × 100.
- In this case there was about a 26.8% improvement in unsquared error when you use the regression equation rather than the mean as your basis for predicting Y scores.
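The whole self-test can be checked in a few lines, assuming SS_Y = 420.00 (a value consistent with the slide's answers of s = 4.47 and s_EST = 3.27):

```python
# Worked answers to the self-test, using the slides' shortcut formulas.
# Givens (SS_Y assumed as 420.00): n_P = 22, sum of squared t-score diffs = 12.60.
import math

n_p = 22
ss_y = 420.00
sum_sq_diff = 12.60

r = 1 - 0.5 * sum_sq_diff / (n_p - 1)        # 1 - (1/2)(.600) = .700
ss_res = ss_y - ss_y * r ** 2                # error left after regression
s_est = math.sqrt(ss_res / (n_p - 2))        # about 3.27
s_y = math.sqrt(ss_y / (n_p - 1))            # about 4.47
improvement = (s_y - s_est) / s_y            # about .268, i.e. ~26.8%
```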

51 End chapter 8 slides here Slides past here were not covered in lecture and will not be on the exam.

52 Error types: Type 1 error.
- A Type 1 error occurs when you accidentally get a random sample with an r outside the range predicted by the null hypothesis even though rho = 0.000. This forces you to reject the null hypothesis when there really is no relationship between X and Y in the population as a whole.
- Scientists are conservative and set up conditions to avoid Type 1 errors.

53 Error types: Type 2 error.
- A Type 2 error can only occur when there really is a correlation between X and Y in the population, but you accidentally get a sample r that falls within the range predicted by the null hypothesis. You must then fail to reject the null and assume rho = 0.000.
- This is incorrect and results in a Type 2 error.

54 Alpha levels.
- Any result can be found by chance.
- However, some results are so strong that they are very unlikely.
- "Unlikely" is defined as occurring by chance 5 (or fewer) times in 100.
- The risk of getting a weird sample that causes a Type 1 error is called alpha: α = .05.