Statistical Analysis of the Nonequivalent Groups Design
Analysis Requirements
- Pre-post
- Two-group
- Treatment-control (dummy-coded)

Design notation:
N O X O
N O   O
Analysis of Covariance

y_i = β0 + β1·X_i + β2·Z_i + e_i

where:
- y_i = outcome score for the ith unit
- β0 = coefficient for the intercept
- β1 = pretest coefficient
- β2 = mean difference for treatment
- X_i = covariate (pretest)
- Z_i = dummy variable for treatment (0 = control, 1 = treatment)
- e_i = residual for the ith unit
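As an illustration, the ANCOVA model above can be fit by ordinary least squares on simulated data. The group sizes, score scale, and true 10-point treatment effect below are assumptions for the sketch, not values from the slides; with an error-free pretest, the estimated β2 should land near the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # units per group (illustrative)

# Dummy-coded treatment: 0 = control, 1 = treatment.
z = np.repeat([0, 1], n)

# Pretest: the program group starts 5 points higher (selection difference).
pre = rng.normal(50 + 5 * z, 5)

# Posttest: pretest ability plus an assumed true 10-point treatment effect.
post = pre + 10 * z + rng.normal(0, 2, 2 * n)

# ANCOVA: regress the posttest on the pretest covariate and the dummy.
X = np.column_stack([np.ones(2 * n), pre, z])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
b0, b1, b2 = beta
print(round(b2, 2))  # estimated treatment effect, close to 10 here
```

With no measurement error in the covariate, β2 recovers the treatment effect; the rest of the deck shows what goes wrong when the pretest is unreliable.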
The Bivariate Distribution
- Program group has a 5-point pretest advantage.
- Program group scores 15 points higher on the posttest.
Regression Results
- The result is biased!
- CI.95(β2) = β2 ± 2·SE(β2) ≈ 11.28 ± 2(.5682) = 11.28 ± 1.14
- CI ≈ 10.14 to 12.42, which does not include the true effect of 10.
- Fitted model: y_i = β0 + β1·X_i + β2·Z_i
[Table: Predictor, Coef, StErr, t, p for Constant, pretest, Group]
The Bivariate Distribution
- Regression line slopes are biased. Why?
Regression and Error (figure, three panels)
- No measurement error
- Measurement error on the posttest only
- Measurement error on the pretest only
How Regression Fits Lines (figure)
- Method of least squares: minimize the sum of the squares of the residuals from the regression line.
- Least squares minimizes on y, not x.
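The point that least squares minimizes vertical (y) residuals rather than horizontal (x) residuals can be seen by comparing the y-on-x slope with the slope implied by regressing x on y. This is a simulated sketch, not part of the original deck; the data and true slope of 1 are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 1000)
y = x + rng.normal(0, 1, 1000)  # all the noise is in y

# Slope of the y-on-x regression (minimizes vertical residuals):
slope_yx = np.polyfit(x, y, 1)[0]

# Slope of the same line implied by regressing x on y
# (minimizes horizontal residuals):
slope_xy = 1 / np.polyfit(y, x, 1)[0]

# The two criteria fit different lines: vertical least squares recovers
# the true slope of 1 here, while the x-on-y line is much steeper.
print(round(slope_yx, 2), round(slope_xy, 2))
```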
How Error Affects Slope (figure, three panels)
- No measurement error: no effect on the slope.
- Measurement error on the posttest only: adds variability around the regression line, but doesn’t affect the slope.
- Measurement error on the pretest only: affects the slope, flattening the regression lines.
- Notice that the true result in all three cases should be a null (no-effect) one.
- But with measurement error on the pretest, we get a pseudo-effect.
Where Does This Leave Us?
- Traditional ANCOVA looks like it should work for the NEGD, but it’s biased.
- The bias results from the effect of pretest measurement error under the least squares criterion.
- Slopes are flattened or “attenuated.”
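The attenuation bias can be demonstrated with a minimal simulation (all numbers are illustrative assumptions): the true effect is null, but the groups differ by 5 points at pretest. With an error-free pretest, ANCOVA correctly estimates an effect near zero; adding pretest measurement error flattens the slope and manufactures a pseudo-effect.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000  # units per group (illustrative)

z = np.repeat([0, 1], n)
true_pre = rng.normal(50 + 5 * z, 5)       # groups differ at pretest
post = true_pre + rng.normal(0, 2, 2 * n)  # NO treatment effect (null case)

# Observed pretest contains measurement error; with error variance equal
# to true-score variance, reliability is 25 / (25 + 25) = .5 here.
obs_pre = true_pre + rng.normal(0, 5, 2 * n)

def ancova_effect(pre, post, z):
    """Return the estimated treatment coefficient (beta2) from ANCOVA."""
    X = np.column_stack([np.ones(len(z)), pre, z])
    return np.linalg.lstsq(X, post, rcond=None)[0][2]

print(round(ancova_effect(true_pre, post, z), 2))  # near 0: unbiased
print(round(ancova_effect(obs_pre, post, z), 2))   # positive pseudo-effect
```

The pseudo-effect arises because the attenuated slope under-adjusts for the groups' pretest difference.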
What’s the Answer?
- If it’s a pretest problem, let’s fix the pretest.
- If we could remove the error from the pretest, it would fix the problem.
- Can we adjust pretest scores for error?
- What do we know about error?
What’s the Answer?
- We know that with no error, reliability = 1; with all error, reliability = 0.
- Reliability estimates the proportion of true-score variance.
- Unreliability = 1 − Reliability.
- This is the proportion of error!
- Use this to adjust the pretest.
What Would a Pretest Adjustment Look Like? (figure)
- Original pretest distribution
- Adjusted pretest distribution
How Would It Affect Regression? (figure)
- The regression line
- The pretest distribution
How Far Do We Squeeze the Pretest? (figure)
- Squeeze inward an amount proportionate to the error.
- If reliability = .8, we want to squeeze in about 20% (i.e., 1 − .8).
- In other words, we want the pretest to retain 80% of its original width.
Adjusting the Pretest for Unreliability

X_adj = X̄ + r(X − X̄)

where:
- X_adj = adjusted pretest value
- X̄ = original pretest mean
- X = original pretest value
- r = reliability
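The adjustment formula translates directly into code: each score is shrunk toward the mean by the reliability r, which leaves the mean unchanged and reduces the spread to r times its original width. The example values below are illustrative.

```python
import numpy as np

def adjust_pretest(x, r):
    """X_adj = mean + r * (X - mean): shrink scores toward the mean by r."""
    return x.mean() + r * (x - x.mean())

x = np.array([40.0, 45.0, 50.0, 55.0, 60.0])  # illustrative pretest scores
x_adj = adjust_pretest(x, r=0.8)

# The mean is unchanged, but the standard deviation shrinks to 80% (= r).
print(x_adj)
print(round(x_adj.std() / x.std(), 2))
```

Note that the spread shrinks by a factor of r, matching the earlier slide: with reliability .8, the adjusted pretest retains 80% of its original width.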
Reliability-Corrected Analysis of Covariance

y_i = β0 + β1·X_adj + β2·Z_i + e_i

where:
- y_i = outcome score for the ith unit
- β0 = coefficient for the intercept
- β1 = pretest coefficient
- β2 = mean difference for treatment
- X_adj = covariate adjusted for unreliability
- Z_i = dummy variable for treatment (0 = control, 1 = treatment)
- e_i = residual for the ith unit
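Putting the pieces together, here is a sketch of the reliability-corrected ANCOVA on simulated data. The sample sizes, the reliability of .5 (known by construction here; in practice it must be estimated), and the true 10-point effect are all assumptions; the correction is applied within each group, consistent with the later note that the adjusted means equal the unadjusted means.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000  # units per group (illustrative)

z = np.repeat([0, 1], n)
true_pre = rng.normal(50 + 5 * z, 5)
post = true_pre + 10 * z + rng.normal(0, 2, 2 * n)  # true effect = 10
obs_pre = true_pre + rng.normal(0, 5, 2 * n)        # reliability = .5

# Reliability-correct the pretest within each group: X_adj = mean + r*(X - mean).
r = 0.5
x_adj = np.empty_like(obs_pre)
for g in (0, 1):
    m = obs_pre[z == g].mean()
    x_adj[z == g] = m + r * (obs_pre[z == g] - m)

def ancova_effect(pre, post, z):
    """Return the estimated treatment coefficient (beta2) from ANCOVA."""
    X = np.column_stack([np.ones(len(z)), pre, z])
    return np.linalg.lstsq(X, post, rcond=None)[0][2]

print(round(ancova_effect(obs_pre, post, z), 2))  # biased: well above 10
print(round(ancova_effect(x_adj, post, z), 2))    # corrected: near 10
```

Shrinking the pretest restores the regression slope, so the group difference is adjusted by the right amount and the estimate returns to the neighborhood of the true effect.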
Regression Results
- The result is unbiased!
- CI.95(β2) = β2 ± 2·SE(β2) = 9.3048 ± 2(.6166) = 9.3048 ± 1.2332
- CI = 8.0716 to 10.5380, which includes the true effect of 10.
- Fitted model: y_i = β0 + β1·X_adj + β2·Z_i
[Table: Predictor, Coef, StErr, t, p for Constant, adjpre, Group]
Graph of Means
[Table: pretest and posttest mean and standard deviation for the Comp, Prog, and ALL groups]
Adjusted Pretest
- Note that the adjusted means are the same as the unadjusted means.
- The only thing that changes is the standard deviation (variability).
[Table: pretest, adjpre, and posttest mean and standard deviation for the Comp, Prog, and ALL groups]
Original Regression Results (figure)
- Pseudo-effect = 11.28
Corrected Regression Results (figure)
- Original pseudo-effect = 11.28
- Corrected effect = 9.31