1
Paper presented at the American Evaluation Association 2010 conference. PLEASE do not distribute; cite only with permission; this work is under journal review.
Emil Coman, PhD 1, Marlene Berg, MA 2
1 Senior Research Scientist-Evaluator; 2 Associate Director of Training
Institute for Community Research, Hartford, CT, USA
2
Statement of problem
Evaluating intervention effects in community-based settings with quasi-experimental designs is challenging. Structural equation modeling (SEM), or covariance structure analysis (CSA), is well equipped for the task:
i. It is much more flexible in modeling (e.g., Graham, 2008)
ii. It allows for modeling unobserved (latent) variables
iii. It can test and compare the fit of alternative models
Graham, J. M. (2008). The General Linear Model as Structural Equation Modeling. Journal of Educational and Behavioral Statistics, 33(4), 485.
3
The problem
SEM's flexibility 'causes' conclusion flexibility, but we want a firm, single answer to the question 'did it work?', i.e., did the intervention change the intervention group for the better relative to the comparison group?
4
The YARP problem
Youth Action Research for Prevention: 'Did it work?'
5
The YARP
Youth Action Research for Prevention: a three-year summer and after-school research project designed to reduce drug and sexual risk behavior among urban minority adolescents in Hartford, CT. Youth identified a youth-related problem in their community, developed a research model and an action plan addressing that issue, gathered and interpreted data, and actively engaged in social action to promote changes in their community. By learning to manage and negotiate their identities in various contexts, youth were expected to increase their Internal Locus of Control (ILC).
Berg, M., Coman, E., & Schensul, J. (2009). Youth Action Research for Prevention: A Multi-level Intervention Designed to Increase Efficacy and Empowerment Among Urban Youth. American Journal of Community Psychology, 43(3), 345-359.
6
SEM solutions
Regular analyses like ANOVA face the same challenges but cannot deal with all of them, e.g., assumptions such as (Prof. David Garson's web notes):
i. homogeneity of variances across the groups
ii. multivariate normality
iii. adequate sample size
iv. random sampling
SEM seems to handle these well. In addition, SEM can test and "prune away" assumptions (Rhodes, 2010).
Rhodes, W. (2010). Heterogeneous Treatment Effects: What Does a Regression Estimate? Evaluation Review, 34(4), 334.
7
SEM reminder
Models are specified by replacing correlations (double-headed arrows) with causal paths (one-directional arrows), or by deleting them (forcing them to equal zero). This produces degrees of freedom for testing the fit of the model. Fit is simply the extent to which the model yields implied (estimated) means and covariances similar to the observed ones. To test a specific parameter, one sets it to a value (e.g., zero) and checks the worsening of fit.
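To make the last step concrete, here is a minimal sketch (not from the original slides) of the nested-model chi-square difference test that quantifies this "worsening of fit"; the chi-square and df values are hypothetical placeholders.

```python
# Minimal sketch: the "worsening of fit" check described above is a nested-model
# chi-square difference (likelihood ratio) test. Values below are hypothetical.
from scipy.stats import chi2

def chisq_difference_test(chisq_free, df_free, chisq_constrained, df_constrained):
    """Compare a constrained model to the freer model it is nested in."""
    delta_chisq = chisq_constrained - chisq_free   # fit worsens (or stays equal)
    delta_df = df_constrained - df_free            # one df gained per added constraint
    p_value = chi2.sf(delta_chisq, delta_df)       # small p => constraint is untenable
    return delta_chisq, delta_df, p_value

# Hypothetical example: constraining one parameter adds 1 df
print(chisq_difference_test(chisq_free=12.3, df_free=5,
                            chisq_constrained=18.9, df_constrained=6))
```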
8
SEM setup (simplest model)
The problem is that the test of equal post-intervention means can be done while assuming equality (or not) across groups of each of these parameters:
- baseline means (m_i)
- baseline variances (v_i)
- the baseline-to-post-intervention causal path (p_i)
- the posttest error variances (e_i)
[Path diagram: Pre-test Internal Locus of Control → Post-test Internal Locus of Control, with group-specific parameters m_i, v_i, p_i, e_i and post-test intercept τ_i]
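As a small illustration (my sketch, not the authors' AMOS model), the parameters listed above fully determine the model-implied post-test moments in each group via post = τ + p·pre + error; the parameter values below are hypothetical, with names following the slide's labels.

```python
# Illustrative sketch only: model-implied post-test mean and variance for the
# simple autoregressive model  post = tau + p * pre + error,  per group.
# Parameter values are hypothetical; labels m, v, p, e, tau follow the slides.

def implied_posttest_moments(m, v, p, e, tau):
    implied_mean = tau + p * m          # intercept plus path times baseline mean
    implied_variance = p**2 * v + e     # explained variance plus error variance
    return implied_mean, implied_variance

# Hypothetical comparison vs. intervention group parameters
groups = {
    "comparison":   dict(m=3.0, v=0.50, p=0.60, e=0.30, tau=1.20),
    "intervention": dict(m=3.0, v=0.50, p=0.60, e=0.30, tau=1.45),
}
for name, pars in groups.items():
    print(name, implied_posttest_moments(**pars))
```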
9
One SEM result – paths equal, rest not
Note: A = model-implied intercepts; B = model-implied means; C = model-implied variances; "*,=" marks a parameter estimated but set equal across groups; the first number is for the comparison group, the second for the intervention group.
10
[Figure: SEM modeling for comparing post-intervention observed outcome means in multiple-group structured means models; comparison of models 4vmpe and 5Tmvpe, Δχ² p = .003.]
Intervention successful, but not all parameters may be equal.
11
[Figure: SEM modeling for comparing post-intervention observed outcome means in multiple-group structured means models; single-constraint models (1m_Pre-Means=, 1v_Variances=, 1p_Paths=, 1ErVars=) and their T-constrained versions (2Tm, 2Tv, 2Tp, 2Te), up to 4vmpe and 5Tmvpe; Δχ² p values range from .003 and .006 to .42 and .45.]
Intervention successful in 2 of the 5 tests and unsuccessful in the other 3; not very good to report.
12
AMOS 16 setup – all possible models
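To spell out what "all possible models" appears to mean here (a reconstruction on my part, not the authors' AMOS file): each of the five sets of cross-group equality constraints (m, v, p, e, and the post-test intercept test T) can be imposed or left free, and the sketch below enumerates those combinations with labels in the style of the decision-tree slide.

```python
# Sketch (my reconstruction, not the authors' AMOS setup): enumerate candidate
# multiple-group models as every combination of cross-group equality constraints
# on m (pre-test means), v (pre-test variances), p (paths), e (error variances),
# and T (the post-test intercept test), labeled as on the decision-tree slide.
from itertools import combinations

constraints = ["T", "m", "v", "p", "e"]
models = []
for k in range(len(constraints) + 1):
    for combo in combinations(constraints, k):
        label = f"{k}{''.join(combo)}" if k else "0_none"
        models.append(label)

print(len(models))   # 32 candidate models
print(models[:8])    # e.g. ['0_none', '1T', '1m', '1v', '1p', '1e', '2Tm', '2Tv']
```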
13
ALL MODELS: Decision-tree SEM modeling for comparing post-intervention observed outcome means in multiple-group structured means models
[Decision-tree figure: from single-constraint models (1m_Pre-Means=, 1v_Variances=, 1p_Paths=, 1ErVars=, 1T_Post-Means Test) through 2-, 3-, and 4-constraint combinations (e.g., 2vp, 2Tm, 3vmp, 4vmpe, 4Tvmp) up to 5Tmvpe.]
Note: Rounded rectangles and bold text: models with good fit; snipped interrupted rectangles: fit not adequate; numbers to the right of boxes: fit rank ordered from best fitting (1) up; numbers on arrows: significance of the worsening of fit (Δχ²); m = means; v = variances; p = autoregressive path; e = error variances; T = test of post-test equal intercepts; W1 = wave 1, W2 = wave 2.
14
Some fit better than others
15
Estimates of Intervention Effects on Internal Locus of Control as Standardized Effect Sizes from All Alternative SMM Models
Note: Numbers above bars are p values for the nested-model comparisons of the equality-of-post-intervention-outcome-intercept constraint; F's = fit rank order of the non-constrained (baseline) model according to p(χ²). Labels: m = means; v = variances; p = autoregressive path; e = error variance; T = the test of equal post-intervention intercepts, τ_C2 = τ_T2.
16
Additional problems
1. Unreliability of the composite measure
2. Power of the different alternative SEM models to detect the specific (focal) parameter, i.e., the equality of post-test intercepts
SOLUTIONS
1. Re-model with a latent ILC instead of the composite, with the measurement error variance fixed to the variance of the observed ILC variable times one minus its reliability (MacKinnon, 2008, p. 186), as in the sketch below
2. Assess the power of the tests for all models
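A minimal sketch of the reliability correction named in Solution 1, assuming the observed ILC composite's variance and reliability (e.g., Cronbach's alpha) are known; the numbers are hypothetical.

```python
# Sketch of Solution 1's correction (hypothetical numbers): when a single observed
# composite stands in for a latent variable, its error variance can be fixed to
#   Var(observed) * (1 - reliability)   (MacKinnon, 2008, p. 186),
# with its loading on the latent ILC fixed to 1.
def fixed_error_variance(observed_variance, reliability):
    return observed_variance * (1.0 - reliability)

ilc_variance = 0.64      # hypothetical observed ILC composite variance
ilc_reliability = 0.78   # hypothetical reliability (e.g., Cronbach's alpha)
print(fixed_error_variance(ilc_variance, ilc_reliability))  # value to fix in the model
```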
17
Solution 1
All models showed a significant positive intervention effect.
18
Solution 2: Decision-Tree (DT) approach
[Decision-tree figure, as before, now with the power of each test added.]
Note: Rounded rectangles and bold text: models with good fit; snipped interrupted rectangles: fit not adequate; numbers next to boxes: to the left, in interrupted pentagons, the power of the test, and to the right, fit rank ordered from best fitting (1) up; numbers on arrows: significance of the worsening of fit (Δχ²); m = means; v = variances; p = autoregressive path; e = error variances; T = test of post-test equal intercepts; W1 = wave 1, W2 = wave 2.
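For Solution 2, here is a sketch of one common way to compute the post-hoc power of a chi-square difference test (my illustration, not necessarily the authors' exact procedure): treat the observed Δχ² as an estimate of the noncentrality parameter and evaluate the noncentral chi-square tail beyond the α = .05 critical value.

```python
# Sketch (illustration only): post-hoc power of a chi-square difference test,
# approximating the noncentrality parameter (ncp) by the observed delta-chi-square
# (Satorra-Saris-style logic). Input values are hypothetical.
from scipy.stats import chi2, ncx2

def posthoc_power(delta_chisq, delta_df, alpha=0.05):
    critical = chi2.ppf(1 - alpha, delta_df)       # rejection threshold under H0
    ncp = delta_chisq                              # crude ncp estimate (see lead-in)
    return ncx2.sf(critical, delta_df, ncp)        # P(reject | alternative is true)

print(posthoc_power(delta_chisq=8.8, delta_df=1))  # ~0.84 for these hypothetical inputs
```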
19
Conclusions
1. The importance of a priori specification of alternative models
2. The utility of post-hoc power analysis
3. The need to attend to the chi-square fit measure
4. The utility of modeling measurement errors
5. The Decision-Tree (DT) approach helps with organizing, deciding upon, and presenting appropriate alternative SEM tests of intervention effects
Limitations: over-fitting concerns; no measurement modeling; etc.
20
Thank you!
http://evaluationhelp.wordpress.com
www.icrweb.com