
Regression & Correlation (1)


1 Regression & Correlation (1)
A relationship between 2 variables, X and Y
The relationship seen as a straight line
Two problems
How can we tell if our regression line is useful?
Test of hypothesis about the slope, β1
Correlation
Useful features of r
Test of hypothesis about ρ
Examples

2 A relationship between two variables X & Y
We often have pairs of scores for a given set of cases. For example, we might have: # of years of education and annual income, IQ and GPA, or income and # of books in the household. More generally, we have any X and Y, and our question is: does knowing something about X tell us anything about Y?

3 A relationship between two variables X & Y
Does knowing something about X tell us anything about Y? For example, knowing how many years of education a person has, could you usefully estimate their annual income, or the number of cigarettes they smoke in a year?

4 A relationship between two variables X & Y
Often, the answer to that question is, Yes – there is a relationship between the X and Y scores you have measured. On average, as number of years of education goes up (across a set of people), number of cigarettes smoked per year goes down.

5 A relationship between two variables X & Y
In the graph on the next slide, we see two things: Y goes down as X goes up. At each value of X, there is some variability in Y – but substantially less than there is in Y overall.

6 [Scatterplot: X = years of education, Y = cigarettes per year.] Note that the range of the Y values for any one value of X is small, compared to the whole range of Y in the data set.

7 The relationship seen as a straight line
The relationship between an X and a Y can be described using the equation for a straight line:
Y = β0 + β1X + ε
where β0 is the Y-intercept, β1 is the slope, and ε is error. Note: this is the theoretical population equation relating Y to X – the equation we would obtain if we measured X and Y for all cases in the population.
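To make the model concrete, here is a minimal simulation sketch (not from the slides; numpy is assumed, and the parameter values β0 = 2.0, β1 = −0.5, σ = 1.0 are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population parameters, chosen only for this sketch.
beta0, beta1, sigma = 2.0, -0.5, 1.0

x = rng.uniform(8, 20, size=100)        # X scores for 100 cases
eps = rng.normal(0.0, sigma, size=100)  # epsilon: random error around the line
y = beta0 + beta1 * x + eps             # the model Y = beta0 + beta1*X + epsilon
```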

8 Two problems
Y = β0 + β1X + ε
In principle, this equation would let us predict the value of Y for a given X without error IF:
A. X were the only variable that influenced Y. Usually, it isn't.
B. We knew the population values of β0 and β1. Usually, we don't.
Note that we can never have the true equation Y = β0 + β1X + ε – that is, we can never know what β0 and β1 are. We can only obtain estimates of these values from our sample data. And most of the time, we'll be working with criterion variables Y that are influenced by many X variables.

9 Two problems
Be sure to distinguish between:
A. Actual values of Y in the population.
B. Values of Y we would predict using Y = β0 + β1X + ε if we had the population values of β0 and β1.
C. Values of Y we predict on the basis of the X–Y relationship in our sample data: Ŷ = β̂0 + β̂1X. (Why no ε here?)

10 Two problems
When we predict Y on the basis of X for a given case, two things can cause the predicted values to be different from the values we would find if we actually measured Y for that case:
1. We don't know the population values of β0 and β1 – only the sample estimates β̂0 and β̂1. Note that if we did know β0 and β1, this source of error would disappear.

11 Two problems
2. In the population, Y is not uniquely determined by X. As a result, for each value of X, there is a distribution of Y values. Relative to our predicted Ŷ for a given value of X, the observed values of Y will sometimes be higher and sometimes lower. These "errors" are random – over the long term, they will cancel each other out. But even if we knew β0 and β1, this source of error would still exist.

12 Two problems
In other words:
We don't have population values for the slope and the intercept of the line relating X to Y. That's one problem.
Even if we had population values for the slope and the intercept, the equation relating X to Y would still not perfectly predict Y. That's the other problem.

13 How can we tell if our regression line is useful?
The line is useful if the predicted values of Y are close to the observed values of Y (in the sample). We use our sample X and Y values to compute the regression line, Ŷ = β̂0 + β̂1X. We then use this line to predict the same Y values, and compare our predicted values with the observed values in the sample data. If the prediction is good, we can then use the regression line to predict Y for values of X not in our sample.

14 How can we tell if our regression line is useful?
(Yi – Ŷi) = Yi – (β̂0 + β̂1Xi), since Ŷi = β̂0 + β̂1Xi.
Therefore, the sum of the squared deviations of predicted Y values from actual Y values is:
SSE = Σ[Yi – (β̂0 + β̂1Xi)]²
β̂0 and β̂1 are the "least squares estimators" of β0 and β1 – giving smaller SSE than any other values of β0 and β1 would. Note that SSE is a sum of squared deviations – squared so that the sum is not zero.
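A minimal sketch of this comparison in Python (the data are hypothetical; numpy is assumed, and np.polyfit performs the least-squares fit that the next slides compute by hand):

```python
import numpy as np

# Hypothetical sample data, for illustration only.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9, 14.2])

# Least-squares fit of a straight line: returns [slope, intercept].
b1_hat, b0_hat = np.polyfit(x, y, 1)

y_hat = b0_hat + b1_hat * x     # predicted Y values
sse = np.sum((y - y_hat) ** 2)  # sum of squared errors (SSE)
```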

15 [Graph: a horizontal line at height Ȳ, plotted against X.] When there is no relation between X and Y, the best estimator of the Y value for any case is the mean, Ȳ. Notice that the slope of this line is zero!

16 How can we tell if our regression line is useful?
If X is completely unrelated to Y, the best estimate we could make of Y would be the mean, Ȳ, for any value of X. We find out whether our regression line is useful by asking whether its slope is different from 0.
H0: β1 = 0 [Why not β̂1?]

17 How can we tell if our regression line is useful?
To test that null hypothesis, we use the fact that β̂1 is one slope taken from the sampling distribution of β̂1:
β̂1 = SSXY / SSXX    β̂0 = Ȳ – β̂1X̄
where SSXY = Σ(Xi – X̄)(Yi – Ȳ) = ΣXiYi – (ΣXi)(ΣYi)/n
Note: if we had drawn a different sample, we would have gotten a different β̂1 from that sampling distribution.
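The same estimators written out directly from these formulas, as a sketch (numpy assumed; the function name is mine):

```python
import numpy as np

def least_squares_estimates(x, y):
    """Intercept and slope via the SSxy/SSxx formulas on this slide."""
    n = len(x)
    ss_xy = np.sum(x * y) - np.sum(x) * np.sum(y) / n
    ss_xx = np.sum(x ** 2) - np.sum(x) ** 2 / n
    b1_hat = ss_xy / ss_xx                     # slope estimate
    b0_hat = np.mean(y) - b1_hat * np.mean(x)  # intercept estimate
    return b0_hat, b1_hat
```

For any sample, this should agree with np.polyfit(x, y, 1) up to floating-point error.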

18 How can we tell if our regression line is useful?
SSXX = Σ(Xi – X̄)² = ΣX² – (ΣX)²/n    (n = sample size)
For the sampling distribution of β̂1:
the mean = β1
the standard deviation σβ̂1 = σ / √SSXX

19 How can we tell if our regression line is useful?
We estimate β1 by sβ1 = s √SSXX Where s = SSE n-2 ^ ^

20 Test of hypothesis about the slope, β1
Since σ is unknown, we use t to test H0.
One-tailed: H0: β1 = 0, HA: β1 < 0 (or β1 > 0)
Two-tailed: H0: β1 = 0, HA: β1 ≠ 0
Test statistic: t = (β̂1 – 0) / sβ̂1

21 Test of hypothesis about the slope, β1
Rejection region:
One-tailed: tobt < –tα (or tobt > tα)
Two-tailed: |tobt| > tα/2
tcrit is based on n – 2 degrees of freedom.
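The whole slope test collected in one place, as a sketch (two-tailed version; numpy and scipy are assumed):

```python
import numpy as np
from scipy import stats

def slope_t_test(x, y, alpha=0.05):
    """Two-tailed t-test of H0: beta1 = 0. Returns (t_obt, t_crit, reject)."""
    n = len(x)
    ss_xy = np.sum(x * y) - np.sum(x) * np.sum(y) / n
    ss_xx = np.sum(x ** 2) - np.sum(x) ** 2 / n
    b1 = ss_xy / ss_xx
    b0 = np.mean(y) - b1 * np.mean(x)
    sse = np.sum((y - (b0 + b1 * x)) ** 2)
    s_b1 = np.sqrt(sse / (n - 2)) / np.sqrt(ss_xx)
    t_obt = (b1 - 0) / s_b1
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)  # critical value, df = n - 2
    return t_obt, t_crit, abs(t_obt) > t_crit
```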

22 Correlation
The Pearson correlation coefficient r is a numerical, descriptive measure of the strength and direction of the relationship between two variables X and Y:
r = SSXY / √(SSXX · SSYY)
r gives much the same information as β̂1. However, r is "scale-less", and –1 ≤ r ≤ 1. Those two qualities of r are not true of β̂1.
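Computed directly from the sums-of-squares quantities (a sketch; numpy assumed):

```python
import numpy as np

def pearson_r(x, y):
    """r = SSxy / sqrt(SSxx * SSyy)."""
    n = len(x)
    ss_xy = np.sum(x * y) - np.sum(x) * np.sum(y) / n
    ss_xx = np.sum(x ** 2) - np.sum(x) ** 2 / n
    ss_yy = np.sum(y ** 2) - np.sum(y) ** 2 / n
    return ss_xy / np.sqrt(ss_xx * ss_yy)
```

For any sample this should agree with np.corrcoef(x, y)[0, 1] up to floating-point error.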

23 Useful features of r
r indexes the X–Y relationship:
r > 0 means Y increases as X increases
r < 0 means Y decreases as X increases
r = 0 means there is no relationship between X and Y
r is the sample correlation coefficient. We can use it to estimate rho (ρ), the population correlation coefficient, and use r to test H0: ρ = 0. This test of hypothesis about ρ is equivalent to a test of H0: β1 = 0.

24 Test of hypothesis about ρ
One-tailed: H0: ρ = 0, HA: ρ < 0 (or ρ > 0)
Two-tailed: H0: ρ = 0, HA: ρ ≠ 0
Test statistic: t = (r – 0) / √((1 – r²) / (n – 2))
tcrit has n – 2 degrees of freedom.
Note: this test is equivalent to the test of the hypothesis that the slope of the regression line, β1, is 0.
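As a short sketch (numpy and scipy assumed; the function name is mine):

```python
import numpy as np
from scipy import stats

def rho_t_test(r, n, alpha=0.05, two_tailed=True):
    """t statistic for H0: rho = 0, with the matching critical value."""
    t_obt = (r - 0) / np.sqrt((1 - r ** 2) / (n - 2))
    q = 1 - alpha / 2 if two_tailed else 1 - alpha
    t_crit = stats.t.ppf(q, df=n - 2)
    return t_obt, t_crit
```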

25 Example 1
H0: ρ = 0   HA: ρ ≠ 0
Test statistic: t = (r – 0) / √((1 – r²) / (n – 2))
tcrit = t(df = 5, α/2 = .025) = 2.571

26 Example 1 – Sum formulas
First, calculations involving X: ΣX = 74, (ΣX)² = 5476, ΣX² = 922
Then, analogous calculations involving Y: ΣY = 82, (ΣY)² = 6724, ΣY² = 1076
Then, calculations involving X and Y: ΣXY = 976

27 Example 1 – Sums of squares formulas
SSXY = Σ(Xi – X̄)(Yi – Ȳ) = ΣXiYi – (ΣXi)(ΣYi)/n
SSXX = Σ(Xi – X̄)² = ΣX² – (ΣX)²/n
SSYY = Σ(Yi – Ȳ)² = ΣY² – (ΣY)²/n
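A quick check that the shortcut forms equal the definitional forms (sketch with hypothetical values; numpy assumed):

```python
import numpy as np

x = np.array([3.0, 7.0, 12.0, 16.0])  # hypothetical X scores
y = np.array([5.0, 9.0, 11.0, 18.0])  # hypothetical Y scores
n = len(x)

ss_xy_def = np.sum((x - x.mean()) * (y - y.mean()))  # definitional form
ss_xy_short = np.sum(x * y) - x.sum() * y.sum() / n  # shortcut form
assert np.isclose(ss_xy_def, ss_xy_short)            # the two forms match
```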

28 Example 1 – calculate r
SSXY = 109.143, SSXX = 139.714, SSYY = 115.429
r = SSXY / √(SSXX · SSYY) = .859

29 Example 1 – do t-test
t = (r – 0) / √((1 – r²) / (n – 2)) = .859 / √((1 – .738) / 5) ≈ 3.75
Since 3.75 > 2.571, reject H0: a significant correlation exists.
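The whole example reproduced from the sums on the slides (n = 7, inferred from df = n – 2 = 5; numpy and scipy assumed):

```python
import numpy as np
from scipy import stats

n = 7                               # df = n - 2 = 5 on the slide
sum_x, sum_x2 = 74, 922
sum_y, sum_y2 = 82, 1076
sum_xy = 976

ss_xy = sum_xy - sum_x * sum_y / n  # 109.143
ss_xx = sum_x2 - sum_x ** 2 / n     # 139.714
ss_yy = sum_y2 - sum_y ** 2 / n     # 115.429

r = ss_xy / np.sqrt(ss_xx * ss_yy)           # 0.859
t_obt = r / np.sqrt((1 - r ** 2) / (n - 2))  # about 3.75
t_crit = stats.t.ppf(0.975, df=n - 2)        # 2.571
print(t_obt > t_crit)                        # True -> reject H0
```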

30 Example 2
Note – these are the Greek letter rho (ρ), NOT the English letter P.
H0: ρ = 0   HA: ρ > 0
Test statistic: t = (r – 0) / √((1 – r²) / (n – 2))
tcrit = t(df = 7 – 2 = 5, α = .05) = 2.015

31 Example 2 – Sum formulas
First, calculations involving X: ΣX = 4.2, (ΣX)² = 17.64, ΣX² = 2.86
Then, analogous calculations involving Y: ΣY = 32, (ΣY)² = 1024, ΣY² = 161.5
Then, calculations involving X and Y: ΣXY = 21.35

32 Example 2 – calculate r
SSXY = 21.35 – (4.2)(32)/7 = 2.15
SSXX = 2.86 – 17.64/7 = .34

33 Example 2 – calculate r
SSYY = 161.5 – 1024/7 = 15.2143
r = SSXY / √(SSXX · SSYY) = 2.15 / √(.34 × 15.2143) = .945

34 Example 2 – do t-test
t = (r – 0) / √((1 – r²) / (n – 2)) = .945 / √((1 – .893) / 5) ≈ 6.46
Since 6.46 > 2.015, reject H0: a significant correlation exists.
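And the same check for Example 2, from the sums given above (numpy and scipy assumed; n = 7 from the slide's df):

```python
import numpy as np
from scipy import stats

n = 7                               # df = 7 - 2 = 5 on the slide
sum_x, sum_x2 = 4.2, 2.86
sum_y, sum_y2 = 32, 161.5
sum_xy = 21.35

ss_xy = sum_xy - sum_x * sum_y / n  # 2.15
ss_xx = sum_x2 - sum_x ** 2 / n     # 0.34
ss_yy = sum_y2 - sum_y ** 2 / n     # 15.2143

r = ss_xy / np.sqrt(ss_xx * ss_yy)           # 0.945
t_obt = r / np.sqrt((1 - r ** 2) / (n - 2))  # about 6.5
t_crit = stats.t.ppf(0.95, df=n - 2)         # 2.015 (one-tailed, alpha = .05)
print(t_obt > t_crit)                        # True -> reject H0
```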

