Slide 1: G89.2228 Lecture 10a
– Revisited example: Okazaki's inferences from a survey
– Inferences on correlation
– Correlation: power and effect size
– Regression: expected Y given X
– Inference on regression
– Return to example
Slide 2: Example: Okazaki's inferences from a survey
– Does self-construal account for the relation of adverse functioning with Asian status?
– Survey of 348 students.
– Self-reported Interdependence was correlated .53 with self-reported Fear of Negative Evaluation.
– [Illustrative scatterplot (simulated) of r = .53]
Slide 3: Review of correlation definitions
– In a population with variables X and Y, the correlation is ρ_XY = Cov(X, Y) / (σ_X σ_Y).
– If we have a sample from the population, we can calculate the product-moment estimate:
  r_XY = Σ(X_i − X̄)(Y_i − Ȳ) / √[Σ(X_i − X̄)² · Σ(Y_i − Ȳ)²]
– To estimate the population value, the (X, Y) pairs should be representative.
– The sampling distribution of r_XY is not simple: the standard error of r actually depends on knowing ρ.
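The product-moment formula above translates directly into code; a minimal illustration (the helper name is mine, not from the lecture):

```python
import math

def pearson_r(xs, ys):
    """Product-moment correlation: covariance over the product of SDs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Perfectly linear data gives r = 1
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # -> 1.0
```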
Slide 4: Inferences on correlation
– Testing H0: ρ = 0 when either X or Y is normally distributed:
  – A statistic that can be justified from a regression approach is t = r√(N − 2) / √(1 − r²), on N − 2 degrees of freedom.
  – We usually do not compute a standard error for r, because it depends on ρ itself.
– For other inferences on one or more correlations, we use Fisher's z transformation:
  z′ = (1/2) ln[(1 + r) / (1 − r)]
– The standard error of z′ is 1/√(N − 3).
– Howell shows how CIs and comparisons of correlations from independent samples can be computed using z′.
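These formulas can be sketched in a few lines (helper names are mine; Fisher's z′ is simply atanh(r)):

```python
import math

def fisher_z(r):
    """Fisher's z transformation: z' = (1/2) ln((1+r)/(1-r)), i.e. atanh(r)."""
    return 0.5 * math.log((1 + r) / (1 - r))

def se_fisher_z(n):
    """Approximate standard error of z'; note it depends only on N, not on rho."""
    return 1 / math.sqrt(n - 3)

def t_for_r(r, n):
    """t statistic for H0: rho = 0, on N-2 degrees of freedom."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
```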
Slide 5: Example: Okazaki's correlation
– Test of H0: ρ = 0, with r = .53 and N = 348:
  t = .53√346 / √(1 − .53²) ≈ 11.6, so the null hypothesis is rejected.
– Confidence interval for ρ:
  – Compute z′ = (1/2) ln(1.53/.47) ≈ .590.
  – Compute the 95% confidence interval on the z′ scale: .590 ± 1.96/√345, i.e. (.485, .696).
  – Transform back using r = (e^{2z′} − 1) / (e^{2z′} + 1), giving approximately (.45, .60).
– Note that the resulting confidence interval is asymmetric about r = .53.
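The arithmetic on this slide can be checked directly from r = .53 and N = 348 (recomputed here, not taken from the slide):

```python
import math

r, n = 0.53, 348

# t-test of H0: rho = 0 on N-2 df
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)  # about 11.6: reject H0

# 95% CI for rho via Fisher's z transformation
z = math.atanh(r)                      # z' = 0.590
se = 1 / math.sqrt(n - 3)              # 0.054
lo, hi = z - 1.96 * se, z + 1.96 * se  # CI on the z' scale
ci = (math.tanh(lo), math.tanh(hi))    # back-transform: about (0.45, 0.60)
# The interval is asymmetric about r = 0.53, as the slide notes
```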
Slide 6: Correlation: power and effect size
– Cohen's rule of thumb for correlation effect sizes (for ρ and for differences on Fisher's z′ scale) is: small = .1, medium = .3, large = .5.
– Example (Okazaki, continued): N = 348 gives 97% power to detect ρ = .20 with a two-tailed test at α = .05. If ρ = .10, this N would give only 47% power.
– The Power and Precision program and Howell's approximate method give similar results.
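The quoted power figures can be reproduced, approximately, from the normal approximation to Fisher's z′; a sketch (the function name is mine, and this is the approximate method, not the Power and Precision program's algorithm):

```python
import math
from statistics import NormalDist

def power_for_rho(rho, n, alpha=0.05):
    """Approximate power of the two-tailed test of H0: rho = 0,
    treating Fisher's z' as normal with SE = 1/sqrt(N-3)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    shift = math.atanh(rho) * math.sqrt(n - 3)
    return nd.cdf(shift - z_crit) + nd.cdf(-shift - z_crit)

print(power_for_rho(0.20, 348))  # about 0.96, near the slide's 97%
print(power_for_rho(0.10, 348))  # about 0.46, near the slide's 47%
```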
Slide 7: Regression: expected Y given X
– When Y and X are correlated, the expected value of Y varies with X: E(Y|X) is not constant across different choices of X.
– We could chop up the plot of Y against X and compute separate means of Y for different value ranges of X.
– Often this set of conditional expectations of Y given X can be described by a linear model: E(Y|X) = a* + b*X.
– Instead of estimating many means of Y|X, we estimate a* and b*, the Y-intercept and the slope of the line.
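The "chop up the plot" idea can be illustrated with binned means of Y (toy data and helper name are my own construction):

```python
from collections import defaultdict

def conditional_means(pairs, bin_width=1.0):
    """Estimate E(Y|X) crudely: group X into ranges, average Y in each."""
    bins = defaultdict(list)
    for x, y in pairs:
        bins[int(x // bin_width)].append(y)
    return {b * bin_width: sum(ys) / len(ys) for b, ys in sorted(bins.items())}

# Toy data where roughly E(Y|X) = 2X: the bin means rise linearly with X
pairs = [(0.5, 1.2), (0.7, 1.3), (1.5, 3.1), (1.8, 3.5), (2.5, 5.0), (2.9, 5.9)]
print(conditional_means(pairs))
```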
Slide 8: Regression coefficients as parameters
– If Y and X are known to have a bivariate normal distribution, then the relation between them is known to be linear, and the conditional distribution of Y given X is expressed with the parameters a* and b*.
– a* and b* may also derive meaning from structural models in which Y is assumed to be caused by X. This assumption cannot be tested, but the strength of the causal path under the model can be assessed.
– In some cases we do not assume that a* and b* have any deep meaning, or that the true relation between Y and X is exactly linear. Instead, linear regression is used as an approximate predictive model.
Slide 9: Estimating regression statistics
– b* and a* can be estimated using ordinary least squares. The resulting estimates are:
  b = Σ(X_i − X̄)(Y_i − Ȳ) / Σ(X_i − X̄)² = r_XY (S_Y / S_X)
  a = Ȳ − b X̄
– They minimize the sum of squared residuals Σ(Y_i − Ŷ_i)², where Ŷ_i = a + b X_i is the predicted value of Y_i.
– The slope of Y regressed on X is not generally the same as the slope of X regressed on Y.
– The constant a estimates a*, the expected value of Y when X = 0.
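A least-squares sketch matching these formulas (toy data of my own), including a check that the slope of Y on X differs from the slope of X on Y:

```python
def ols(xs, ys):
    """Ordinary least squares: returns (a, b) minimizing sum((y - a - b*x)^2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx          # intercept: predicted Y when X = 0
    return a, b

xs = [1, 2, 3, 4, 5]
ys = [2, 3, 5, 4, 7]
a_yx, b_yx = ols(xs, ys)     # regress Y on X
a_xy, b_xy = ols(ys, xs)     # regress X on Y: the slope is different
print(b_yx, b_xy)            # 1.1 vs about 0.743
```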
Slide 10: Inference on regression
– The regression model is Y_i = a* + b* X_i + e_i, where the standard deviation of the residuals is estimated by S_YX = √[Σ(Y_i − Ŷ_i)² / (N − 2)].
– The estimates a and b will have approximately normal distributions because of the central limit theorem.
– The standard error of b, s_b = S_YX / (S_X √(N − 1)), is based on N − 2 degrees of freedom.
Slide 11: Inference on regression (continued)
– To test H0: b* = 0, construct a t-test, t = b/s_b, on N − 2 degrees of freedom.
– To construct a 95% CI around the regression parameter, compute b ± t_{.975, N−2} · s_b.
– The t-test will be identical to that for the correlation.
– The CI is about b*, not ρ, and hence won't correspond to the one for the correlation (calculated using Fisher's z transformation).
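Putting the last two slides together in code (helper name mine; 1.96 is the large-sample critical value, so the CI is only approximate for small N):

```python
import math

def slope_inference(xs, ys):
    """OLS slope with its t statistic and an approximate 95% CI.
    Uses s_b = S_YX / (S_X * sqrt(N-1)) = S_YX / sqrt(sum sq. X deviations)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    s_yx = math.sqrt(ss_res / (n - 2))   # residual SD on N-2 df
    s_b = s_yx / math.sqrt(sxx)          # standard error of the slope
    t = b / s_b                          # test of H0: b* = 0
    return b, t, (b - 1.96 * s_b, b + 1.96 * s_b)
```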
Slide 12: Okazaki: predicting Fear of Negative Evaluation from Interdependence
– From the data in her Table 2, we compute:
  – Mean of Interdependence = 4.49; Var(Interdependence) = .65, S_X = .808
  – Mean of FNE = 38.52; Var(FNE) = 104.08, S_Y = 10.202
– Compute b and a:
  b = r_YX (S_Y / S_X) = (.53)(10.202/.808) = 6.69
  Ŷ = 8.46 + 6.69X
– Compute the standard error:
  S_b = S_YX / [S_X √(N − 1)] = .575
– Test statistic and CI:
  t(N − 2) = b/S_b = 6.69/.575 = 11.6
  CI: b ± (1.96)(S_b) ⇒ (5.56, 7.82)
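The slide's numbers can be reproduced from the summary statistics alone (recomputed here; small rounding differences from the slide are expected):

```python
import math

r, n = 0.53, 348
sx, sy = 0.808, 10.202
mean_x, mean_y = 4.49, 38.52

b = r * sy / sx                        # about 6.69
a = mean_y - b * mean_x                # about 8.47; the slide reports 8.46

# Residual SD; the sqrt((N-1)/(N-2)) correction barely matters at this N
s_yx = sy * math.sqrt((1 - r ** 2) * (n - 1) / (n - 2))
s_b = s_yx / (sx * math.sqrt(n - 1))   # about 0.575
t = b / s_b                            # about 11.6 on N-2 df
ci = (b - 1.96 * s_b, b + 1.96 * s_b)  # about (5.56, 7.82)
```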