1
Statistical Analysis of the Nonequivalent Groups Design
2
Analysis Requirements
- Pre-post
- Two-group
- Treatment-control (dummy-coded)

Design notation:
N O X O
N O   O
3
Analysis of Covariance

y_i = β0 + β1 X_i + β2 Z_i + e_i

where:
y_i = outcome score for the i-th unit
β0 = coefficient for the intercept
β1 = pretest coefficient
β2 = mean difference for treatment
X_i = covariate (pretest)
Z_i = dummy variable for treatment (0 = control, 1 = treatment)
e_i = residual for the i-th unit
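This model can be sketched as an ordinary least-squares regression with a treatment dummy. The data below are simulated for illustration (group sizes, means, and error variances are assumptions, not the slides' actual dataset):

```python
import numpy as np

# ANCOVA as dummy-variable regression: y_i = b0 + b1*X_i + b2*Z_i + e_i.
# Simulated example data (illustrative assumptions only).
rng = np.random.default_rng(0)
n = 500
Z = np.repeat([0, 1], n)                      # 0 = control, 1 = treatment
X = rng.normal(50, 7, 2 * n) + 5 * Z          # program group starts ~5 points higher
y = 20 + 0.6 * X + 10 * Z + rng.normal(0, 5, 2 * n)

# Design matrix: intercept, pretest covariate, treatment dummy
A = np.column_stack([np.ones_like(X), X, Z])
b0, b1, b2 = np.linalg.lstsq(A, y, rcond=None)[0]
print(b0, b1, b2)  # b2 estimates the treatment effect
```

With an error-free pretest, b2 lands near the simulated treatment effect of 10.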
4
The Bivariate Distribution
The program group has a 5-point pretest advantage and scores 15 points higher on the posttest.
5
Regression Results

y_i = 18.7 + .626 X_i + 11.3 Z_i

Predictor   Coef      StErr     t       p
Constant    18.714    1.969     9.50    0.000
pretest     0.62600   0.03864   16.20   0.000
Group       11.2818   0.5682    19.85   0.000

The result is biased! The true effect is β2 = 10, but:

CI.95(β2) = β2 ± 2SE(β2) = 11.2818 ± 2(.5682) = 11.2818 ± 1.1364

so the CI runs from 10.1454 to 12.4182 and excludes the true effect of 10.
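The slide's interval uses the ±2 standard errors rule of thumb; a quick check that it excludes the true effect of 10:

```python
# 95% CI as coefficient +/- 2 standard errors (values from the slide's output).
coef, se = 11.2818, 0.5682
lo, hi = coef - 2 * se, coef + 2 * se
print(round(lo, 4), round(hi, 4))   # 10.1454 12.4182
covers_true_effect = lo <= 10 <= hi
print(covers_true_effect)           # False: the interval misses the true effect
```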
6
The Bivariate Distribution
The regression line slopes are biased. Why?
7-9
Regression and Error
[figures: Y vs. X scatterplots for three conditions: no measurement error; measurement error on the posttest only; measurement error on the pretest only]
10-13
How Regression Fits Lines
Method of least squares: minimize the sum of the squares of the residuals from the regression line. Note that least squares minimizes on y, not x.
[figure: Y vs. X scatterplot with the fitted regression line and vertical residuals]
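The least-squares criterion on vertical (y) residuals has the familiar closed form slope = cov(x, y)/var(x). A tiny worked example with made-up points:

```python
import numpy as np

# Least squares minimizes vertical residuals; slope = cov(x, y) / var(x).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.9])

slope = np.cov(x, y, ddof=0)[0, 1] / np.var(x)
intercept = y.mean() - slope * x.mean()

def ssr(b0, b1):
    """Sum of squared vertical residuals for the line y = b0 + b1*x."""
    return float(np.sum((y - (b0 + b1 * x)) ** 2))

print(slope, intercept)
# Any other slope gives a larger sum of squared residuals:
print(ssr(intercept, slope) <= ssr(intercept, slope + 0.1))  # True
```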
14-17
How Error Affects Slope
[figures: Y vs. X scatterplots for three conditions]
No measurement error: no effect on the slope. Measurement error on the posttest only: adds variability around the regression line, but doesn't affect the slope. Measurement error on the pretest only: affects the slope, flattening the regression lines.
18-19
How Error Affects Slope
[figure: Y vs. X, the null case]
Notice that the true result in all three cases should be a null (no-effect) one.
20
How Error Affects Slope
But with measurement error on the pretest, we get a pseudo-effect.
[figure: Y vs. X showing the pseudo-effect]
21
Where Does This Leave Us?
- Traditional ANCOVA looks like it should work for the NEGD, but it's biased.
- The bias results from the effect of pretest measurement error under the least squares criterion.
- Slopes are flattened, or "attenuated".
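The attenuation and the resulting pseudo-effect can be demonstrated by simulation. Everything here is an illustrative assumption: a null experiment (true treatment effect of zero) where the program group simply starts 5 points higher:

```python
import numpy as np

# Null experiment: no treatment effect, but the program group starts higher.
# Pretest measurement error flattens the slope and creates a pseudo-effect.
rng = np.random.default_rng(1)
n = 5000
Z = np.repeat([0, 1], n)
true_pre = rng.normal(50, 7, 2 * n) + 5 * Z            # 5-point pretest advantage
post = 10 + 1.0 * true_pre + rng.normal(0, 3, 2 * n)   # true effect = 0

def ancova(pretest):
    """Fit post = b0 + b1*pretest + b2*Z; return (b0, slope, effect)."""
    A = np.column_stack([np.ones_like(pretest), pretest, Z])
    return np.linalg.lstsq(A, post, rcond=None)[0]

_, slope_clean, effect_clean = ancova(true_pre)
noisy_pre = true_pre + rng.normal(0, 5, 2 * n)         # add pretest error
_, slope_noisy, effect_noisy = ancova(noisy_pre)

print(slope_clean, effect_clean)   # slope near 1, effect near 0
print(slope_noisy, effect_noisy)   # flattened slope, spurious positive effect
```

With the error-free pretest the estimated effect is essentially zero; with pretest error, the attenuated slope under-adjusts for the group difference and a positive pseudo-effect appears.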
22
What's the Answer?
- If it's a pretest problem, let's fix the pretest.
- If we could remove the error from the pretest, it would fix the problem.
- Can we adjust pretest scores for error?
- What do we know about error?
23
What's the Answer?
- We know that with no error, reliability = 1; with all error, reliability = 0.
- Reliability estimates the proportion of true score.
- Unreliability = 1 - reliability.
- This is the proportion of error!
- Use this to adjust the pretest.
24-25
What Would a Pretest Adjustment Look Like?
[figures: original pretest distribution; adjusted pretest distribution]
26-27
How Would It Affect Regression?
[figures: Y vs. X showing the regression line and the pretest distribution]
28
How Far Do We Squeeze the Pretest?
- Squeeze inward an amount proportionate to the error.
- If reliability = .8, we want to squeeze in about 20% (i.e., 1 - .8).
- Or: we want the pretest to retain 80% of its original width.
[figure: Y vs. X with the pretest distribution narrowed]
29-33
Adjusting the Pretest for Unreliability

X_adj = X̄ + r(X - X̄)

where:
X_adj = adjusted pretest value
X = original pretest value
X̄ = pretest mean
r = reliability
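The adjustment shrinks each score toward the mean by the reliability. A minimal sketch with simulated scores and an assumed reliability of .8:

```python
import numpy as np

# Reliability correction: X_adj = mean + r * (X - mean).
# Simulated pretest scores; r = .8 is an assumed reliability.
rng = np.random.default_rng(2)
X = rng.normal(50, 7, 1000)
r = 0.8

X_adj = X.mean() + r * (X - X.mean())

print(X_adj.mean(), X.mean())   # the mean is unchanged
print(X_adj.std() / X.std())    # the SD shrinks by exactly the factor r
```

This is why, as the later slides note, the adjusted means match the unadjusted means and only the variability changes.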
34
Reliability-Corrected Analysis of Covariance

y_i = β0 + β1 X_adj + β2 Z_i + e_i

where:
y_i = outcome score for the i-th unit
β0 = coefficient for the intercept
β1 = pretest coefficient
β2 = mean difference for treatment
X_adj = covariate adjusted for unreliability
Z_i = dummy variable for treatment (0 = control, 1 = treatment)
e_i = residual for the i-th unit
35
Regression Results

y_i = -3.14 + 1.06 X_adj + 9.30 Z_i

Predictor   Coef      StErr     t       p
Constant    -3.141    3.300     -0.95   0.342
adjpre      1.06316   0.06557   16.21   0.000
Group       9.3048    0.6166    15.09   0.000

The result is unbiased! With the true effect β2 = 10:

CI.95(β2) = β2 ± 2SE(β2) = 9.3048 ± 2(.6166) = 9.3048 ± 1.2332

so the CI runs from 8.0716 to 10.5380 and includes the true effect of 10.
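Putting the pieces together, here is an end-to-end sketch of the reliability-corrected ANCOVA on simulated data. The group sizes, variances, and the within-group adjustment are all illustrative assumptions:

```python
import numpy as np

# Reliability-corrected ANCOVA: shrink the observed pretest toward each
# group's mean by the reliability, then run the dummy-variable regression.
rng = np.random.default_rng(3)
n = 2000
Z = np.repeat([0, 1], n)
true_pre = rng.normal(50, 7, 2 * n) + 5 * Z
X = true_pre + rng.normal(0, 3.5, 2 * n)                    # observed pretest
y = 10 + 1.0 * true_pre + 10 * Z + rng.normal(0, 3, 2 * n)  # true effect = 10

r = 49 / (49 + 3.5 ** 2)        # reliability = true variance / total variance
X_adj = X.copy()
for g in (0, 1):                # adjust within each group
    m = X[Z == g].mean()
    X_adj[Z == g] = m + r * (X[Z == g] - m)

def effect(cov):
    """Treatment coefficient from y = b0 + b1*cov + b2*Z."""
    A = np.column_stack([np.ones_like(cov), cov, Z])
    return np.linalg.lstsq(A, y, rcond=None)[0][2]

raw_effect = effect(X)          # biased: attenuation inflates the estimate
corrected_effect = effect(X_adj)
print(raw_effect, corrected_effect)
```

The uncorrected analysis over-estimates the effect; the corrected covariate roughly recovers the true effect of 10.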
36
Graph of Means

            MEAN                 STD DEV
            pretest   posttest   pretest   posttest
Comp        49.991    50.008     6.985     7.549
Prog        54.513    64.121     7.037     7.381
ALL         52.252    57.064     7.360     10.272
37
Adjusted Pretest
- Note that the adjusted means are the same as the unadjusted means.
- The only thing that changes is the standard deviation (variability).

            MEAN                           STD DEV
            pretest   adjpre    posttest   pretest   adjpre   posttest
Comp        49.991    49.991    50.008     6.985     3.904    7.549
Prog        54.513    54.513    64.121     7.037     4.344    7.381
ALL         52.252    52.252    57.064     7.360     4.706    10.272
38
Original Regression Results
[figure: original analysis; pseudo-effect = 11.28]
39
Corrected Regression Results
[figure: original pseudo-effect = 11.28 vs. corrected effect = 9.31]