Standards of Evidence for Prevention Programs
Brian R. Flay, D.Phil.
Distinguished Professor, Public Health and Psychology, University of Illinois at Chicago
Chair, Standards Committee, Society for Prevention Research
Presented at the Society for Research in Child Development, Atlanta, April 7, 2005
Standards of Evidence
Each government agency and academic group that has reviewed programs for lists has its own set of standards.
They are all similar but not equal
– E.g., CSAP admits more studies to its list than DE does
All concern the rigor of the research
The Society for Prevention Research recently created standards for the field
Our innovation was to consider standards for efficacy, effectiveness and dissemination
Standards for 3 Levels: Efficacy, Effectiveness and Dissemination
Efficacy
– What effects can the intervention have under ideal conditions?
Effectiveness
– What effects does the intervention have under real-world conditions?
Dissemination
– Is an effective intervention ready for broad application or distribution?
Desirable
– Additional criteria that provide added value to evaluated interventions
Overlapping Standards
Efficacy Standards are the basic standards, required at all 3 levels
Effectiveness Standards include all Efficacy Standards plus others
Dissemination Standards include all Efficacy and Effectiveness Standards plus others
Four Kinds of Validity (Cook & Campbell, 1979; Shadish, Cook & Campbell, 2002)
Construct validity
– Program description and measures of outcomes
Internal validity
– Was the intervention the cause of the change in the outcomes?
External validity (generalizability)
– Was the intervention tested on relevant participants and in relevant settings?
Statistical validity
– Can accurate effect sizes be derived from the study?
Specificity of the Efficacy Statement
“Program X is efficacious for producing Y outcomes for Z population.”
– The program (or policy, treatment, strategy) is named and described
– The outcomes for which efficacy is claimed are clearly stated
– The population to which the claim can be generalized is clearly defined
Program Description
Efficacy
– Intervention must be described at a level that would allow others to implement or replicate it
Effectiveness
– Manuals, training and technical support must be available
– The intervention should be delivered under the same kinds of conditions as one would expect in the real world
– A clear theory of causal mechanisms should be stated
– Clear statement of “for whom?” and “under what conditions?” the intervention is expected to work
Dissemination
– Provider must have the ability to “go-to-scale”
Program Outcomes
ALL levels
– Claimed public health or behavioral outcome(s) must be measured
  Attitudes or intentions cannot substitute for actual behavior
– At least one long-term follow-up is required
  The appropriate interval may vary by type of intervention and state-of-the-field
Measures
Efficacy
– Psychometrically sound
  Valid
  Reliable (internal consistency, test-retest or inter-rater reliability)
  Data collectors independent of the intervention
Effectiveness
– Implementation and exposure must be measured
  Level and integrity (quality) of implementation
  Acceptance/compliance/adherence/involvement of target audience in the intervention
Dissemination
– Monitoring and evaluation tools available
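The internal-consistency reliability mentioned above is usually summarized with Cronbach's alpha. A minimal Python sketch, assuming a respondents-by-items matrix of scale scores (the data below are invented purely for illustration):

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal-consistency reliability for a respondents-by-items score matrix."""
    k = scores.shape[1]                          # number of items in the scale
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 5 respondents answering a 4-item attitude scale
scores = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
# Values around 0.8 or higher are conventionally treated as acceptable reliability.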
Desirable Standards for Measures
For ALL levels
– Multiple measures
– Mediating variables (or immediate effects)
– Moderating variables
– Potential side-effects
– Potential iatrogenic (negative) effects
Design for Causal Clarity
At least one comparison group
– No-treatment, usual care, placebo or wait-list
Assignment to conditions must maximize causal clarity
– Random assignment is “the gold standard”
– Other acceptable designs
  Repeated time-series designs
  Regression-discontinuity
  Well-done matched controls
  – Demonstrated pretest equivalence on multiple measures
  – Known selection mechanism
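In school-based trials, random assignment is typically carried out at the school level, often within matched pairs or blocks. A minimal sketch of block-randomized assignment, assuming a made-up list of school IDs already matched on size and baseline rates (all names here are hypothetical):

import random

# Hypothetical matched pairs of school IDs (paired, e.g., on enrollment and baseline prevalence)
matched_pairs = [("A1", "A2"), ("B1", "B2"), ("C1", "C2"), ("D1", "D2")]

random.seed(2005)  # fixed seed so the assignment is reproducible and auditable
assignment = {}
for pair in matched_pairs:
    # Within each matched pair, a coin flip decides which school receives the program
    program_school, control_school = random.sample(pair, k=2)
    assignment[program_school] = "program"
    assignment[control_school] = "control"

for school, condition in sorted(assignment.items()):
    print(school, condition)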
Level of Randomization
In many drug and medical trials, individuals are randomly assigned
In educational trials, classrooms or schools must be the unit of assignment
– Students within classes/schools are not statistically independent; they are more alike than students in other classes/schools
Need large studies
– 4 or more schools per condition, preferably 10 or more, in order to have adequate statistical power
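The power cost of clustering can be made concrete with the design effect. A rough sketch, assuming an intraclass correlation (ICC) of 0.02 and 100 students per school; both numbers are illustrative assumptions, not figures from the talk:

def design_effect(cluster_size: int, icc: float) -> float:
    """Variance inflation from randomizing intact clusters instead of individuals."""
    return 1 + (cluster_size - 1) * icc

def effective_n(n_clusters: int, cluster_size: int, icc: float) -> float:
    """Number of independent observations the clustered sample is 'worth'."""
    total_n = n_clusters * cluster_size
    return total_n / design_effect(cluster_size, icc)

icc, per_school = 0.02, 100
for schools_per_condition in (4, 10, 20):
    n_eff = effective_n(schools_per_condition, per_school, icc)
    print(f"{schools_per_condition:>2} schools x {per_school} students "
          f"-> effective n of about {n_eff:.0f} per condition")
# With ICC = 0.02, 400 students in 4 schools behave like only ~134 independent students,
# which is why many schools per condition are needed for adequate power.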
Generalizability of Findings
Efficacy
– Sample is defined
  Who it is (from what “defined” population)
  How it was obtained (sampling methods)
Effectiveness
– Description of real-world target population and sampling methods
– Degree of generalizability should be evaluated
Desirable
– Subgroup analyses
– Dosage studies/analyses
– Replication with different populations
– Replication with different program providers
Precision of Outcomes: Statistical Analysis
Statistical analysis allows unambiguous causal statements
– At the same level as randomization, and includes all cases assigned to conditions
– Test (and adjust) for pretest differences
– Adjustments for multiple comparisons
– Analyses of (and adjustments for) attrition
  Rates, patterns and types
Desirable
– Report extent and patterns of missing data
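Analyzing "at the same level as randomization" is commonly handled with a multilevel (mixed-effects) model that gives the randomized unit its own random intercept. A sketch using statsmodels on simulated data; the column names, effect sizes and sample sizes are all assumptions for illustration, not anything reported in the talk:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Simulate a small school-randomized trial: 8 schools, 50 students each
schools = np.repeat(np.arange(8), 50)
condition = schools % 2                           # alternate schools into program/control
school_effect = rng.normal(0, 0.3, 8)[schools]    # school-level clustering
pretest = rng.normal(0, 1, len(schools))
outcome = 0.3 * condition + 0.5 * pretest + school_effect + rng.normal(0, 1, len(schools))

data = pd.DataFrame({"school": schools, "condition": condition,
                     "pretest": pretest, "outcome": outcome})

# A random intercept for school keeps the test at the level of randomization;
# adjusting for pretest handles chance baseline differences between conditions.
model = smf.mixedlm("outcome ~ condition + pretest", data, groups=data["school"])
result = model.fit()
print(result.summary())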
Precision of Outcomes: Statistical Significance
Statistically significant effects
– Results must be reported for all measured outcomes
– Efficacy can be claimed only for constructs with a consistent pattern of statistically significant positive effects
– There must be no statistically significant negative (iatrogenic) effects on important outcomes
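When every measured outcome is reported, the multiple-comparison adjustment called for above can be applied across the full set of p-values. A sketch using a Holm correction; the outcome names and p-values are invented, and the talk does not prescribe any particular correction method:

from statsmodels.stats.multitest import multipletests

# Hypothetical raw p-values, one per measured outcome in a trial
outcomes = {"30-day smoking": 0.004, "alcohol use": 0.03, "violence": 0.21, "GPA": 0.048}

reject, p_adjusted, _, _ = multipletests(list(outcomes.values()),
                                         alpha=0.05, method="holm")
for (name, p_raw), p_adj, significant in zip(outcomes.items(), p_adjusted, reject):
    print(f"{name:15s} raw p={p_raw:.3f}  adjusted p={p_adj:.3f}  significant={significant}")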
Precision of Outcomes: Practical Value
Efficacy
– Demonstrated practical significance in terms of public health (or other relevant) impact
– Report of effects for at least one follow-up
Effectiveness
– Report empirical evidence of practical importance
Dissemination
– Clear cost information available
Desirable
– Cost-effectiveness or cost-benefit analyses
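Practical significance is usually reported as a standardized effect size, and cost information can then be turned into a simple cost-effectiveness ratio. A sketch with entirely invented means, standard deviations and costs:

import math

def cohens_d(mean_program, mean_control, sd_program, sd_control, n_program, n_control):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_program - 1) * sd_program**2 + (n_control - 1) * sd_control**2)
                          / (n_program + n_control - 2))
    return (mean_program - mean_control) / pooled_sd

# Hypothetical trial result: fewer past-month use occasions in the program group
d = cohens_d(mean_program=1.8, mean_control=2.3, sd_program=1.9, sd_control=2.0,
             n_program=400, n_control=400)
print(f"Effect size d = {d:.2f} (negative = reduction in use, the intended direction)")

# Hypothetical cost-effectiveness ratio: delivery cost per student over cases prevented per student
cost_per_student = 150.0              # assumed delivery cost
cases_prevented_per_student = 0.03    # assumed absolute risk reduction in problem use
print(f"Cost per case prevented = ${cost_per_student / cases_prevented_per_student:,.0f}")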
Precision of Outcomes: Replication
Consistent findings from at least two different high-quality studies/replicates that meet all of the other criteria for efficacy, each of which has adequate statistical power
– Flexibility may be required in the application of this standard in some substantive areas
When more than 2 studies are available, the preponderance of evidence must be consistent with that from the 2 most rigorous studies
Desirable
– The more replications the better
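One simple way to judge whether the preponderance of evidence is consistent across replications is an inverse-variance (fixed-effect) combination of the study effect sizes; the standard itself does not mandate this method, and the effect sizes and standard errors below are invented for three hypothetical trials:

import math

# Hypothetical (name, effect size, standard error) triples from three replications
studies = [("Trial 1", 0.25, 0.08), ("Trial 2", 0.18, 0.10), ("Trial 3", 0.05, 0.15)]

weights = [1 / se**2 for _, _, se in studies]                        # inverse-variance weights
pooled = sum(w * d for (_, d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect = {pooled:.2f} (SE {pooled_se:.2f})")
for (name, d, se), w in zip(studies, weights):
    print(f"{name}: d={d:.2f}, weight={w / sum(weights):.0%}")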
Two Reasons for Replication
1. Scientific confidence
– All findings must be replicated to avoid chance findings
– Preferably by independent evaluators
  This is very rare so far in Prevention Science
2. Generalizability
– Initial efficacy and effectiveness trials usually do not involve representative samples
– Need to establish that a program that is effective with one group in one location is effective with others
Additional Desirable Criteria for Dissemination
– Organizations that choose to adopt a prevention program that barely or not quite meets all criteria should seriously consider undertaking a replication study as part of the adoption effort, so as to add to the body of knowledge
– A clear statement of the factors that are expected to assure the sustainability of the program once it is implemented
Embedded Standards (figure showing the Efficacy, Effectiveness, Dissemination and Desirable levels)
Examples of Programs That Come Close to Meeting These Standards
Life Skills Training (Botvin)
– Multiple RCTs with different populations, implementers and types of training
– Only one long-term follow-up
– Independent replications of short-term effects are now appearing (as well as some failures)
– No independent replications of long-term effects yet
Olds’ Home Nursing Program
– Multiple replications
Programs for Which the Research Meets the Standards, but Which Do Not Work
DARE
– Many quasi-experimental and non-experimental studies suggested effectiveness
– Multiple RCTs found no effects (Ennett et al., 1994, meta-analysis)
Hutchinson (Peterson et al., 2000)
– Well-designed RCT
– Published results found no long-term effects
– But no published information on the program or short-term effects
– Published findings cannot be interpreted because of the lack of information; they certainly cannot be interpreted to suggest that social influences approaches can never have long-term effects
How the Standards Can Be Used When Considering Programs
– Has the program been evaluated in a randomized controlled trial (RCT)?
– Were classrooms or schools randomized to program and control (no-program or alternative-program) conditions?
– Has the program been evaluated on populations like yours?
– Have the findings been replicated?
– Were the evaluators independent of the program developers?
Phases of Research in Prevention Program Development (Flay, 1986)
1. Basic Research
2. Hypothesis Development
3. Component Development and Pilot Studies
4. Prototype Studies of Complete Programs
5. Efficacy Trials of Refined Programs
6. Treatment Effectiveness Trials
– Generalizability of effects under standardized delivery
7. Implementation Effectiveness Trials
– Effectiveness with real-world variations in implementation
8. Demonstration Studies
– Implementation and evaluation in multiple systems
School-based Prevention/Promotion Studies Are Large and Complex
Large randomized trials
– With multiple schools or other units per condition
Comparisons with “treatment as usual”
Measurement of implementation process and program integrity
Assessment of effects on presumed mediators
– Helps test theories
Multiple measures/sources of data
– Surveys of students, parents, teachers, staff, community
– Teacher and parent reports of behavior
– School records for behavior and achievement
Multiple, independent trials of promising programs
– At both efficacy and effectiveness levels
Cost-effectiveness analyses
Much of Prevention Is STUCK … We’re Spinning Our Wheels
At the Efficacy Trial phase
– How can we get more programs into effectiveness trials?
At the Effectiveness Trial phase
– How can we get more proven programs adopted?
At the “Model Program” phase
– How can we ensure the ongoing effectiveness of model programs?
Lots more prevention research is needed – at all levels!