
Cornell University
Cornell Office for Research on Evaluation (CORE)
Evidence-Based Programs and the RCTs “Gold Standard” Debate: An Alternative Model
William M. Trochim, Cornell University
Presentation to the Centers for Disease Control and Prevention, May 27, 2014

Overview
Increased mandates for evidence-based programs (EBP)
What constitutes evidence?
– The evidence “hierarchy”
An Evolutionary Evaluation Perspective
– What is EE? (use papers from EERS and Denver and the EE paper)
– EE objections to the RCT “gold standard”
  - The importance of lifecycles and the need for criteria for when to mount an RCT (ontogeny)
  - The danger of monocultures (phylogeny)
The ComPEAT option
– Competitive Practice Evaluation and Assessment Trials (ComPEAT whitepaper)
Next Steps

Evidence and RCTs
The evidence hierarchy

How the Evidence Hierarchy Gets Interpreted
The Coalition for Evidence-Based Policy
– Top Tier Evidence: “The standard we use to evaluate candidates for the Top Tier, based on the Congressional legislative language, is: ‘Interventions shown in well-conducted randomized controlled trials, preferably conducted in typical community settings, to produce sizeable, sustained benefits to participants and/or society.’” (2-page-overview-of-our-solicitation-process-and-review-criteria)

Increased calls for mandates for RCTs
One example of the type of pressure being exerted:
“When authorizing a new program or reauthorizing an existing program, Congress should specifically mandate experimental evaluation of the program… Congress has the moral imperative to ensure that it allocates taxpayer dollars effectively. Experimental evaluations are the only way to determine to a high degree of certainty the effectiveness of social programs. Congress should not cave in to interest groups that are opposed to rigorous evaluation of their programs. Congress should mandate that all recipients of federal funding, if selected for participation, must cooperate with evaluations in order to receive future funding.”
Muhlhausen, D.B. (2011). Evaluating Federal Social Programs: Finding Out What Works and What Does Not. Heritage Foundation, Backgrounder #2578.

Standard Objections to RCTs
– Too difficult to do: cannot be implemented in many contexts
– Too expensive
– Too artificial: not generalizable
– Too simplistic: don’t capture dynamic complexity

Evolutionary Evaluation Objections to RCTs
Ontogeny Objection
– RCTs need to be linked to the lifecycle phase of the development of a program
– Programs naturally develop through different phases
– The danger of “premature experimentation”
– The need for more rigorous standards for RCTs
Phylogeny Objection
– Mandates for EBP can lead to significant reductions in variation
– Variation is essential for evolution to occur: no variation, no evolution
– The evolutionary danger of monocultures

A Competitive Practice Evaluation &amp; Assessment Trial (ComPEAT)
Designed for situations where practitioners believe they have a program that can successfully compete with the EB program(s)
– Most appropriate when EBPs are mandated (in fact or in practice)
Most practitioner-driven programs do not have the resources or expertise to conduct an RCT
Compares a practice-evolved program directly with the relevant EBP
Does not require control groups
Allows the current practice-evolved program to be conducted as normal, except for the addition of pre-post measurement of key outcome(s)

Steps in a ComPEAT
1. If necessary, practitioners petition for an exception to the mandate in order to conduct a ComPEAT
2. Identify the mandated EBP(s) most directly related to the trial program
3. Identify the outcome(s) in the definitive EBP trial(s) on which the EBP showed significant treatment effects
4. Conduct the trial program as normally done
5. Measure the outcome(s) pre and post
6. Estimate the effect size (ES) for the trial program
7. Statistically compare the trial ES with the EBP ES (see the sketch below)
8. If there is no statistical difference, or the trial program outperforms the EBP, the ComPEAT is considered successful and the program becomes a candidate for funding of a more definitive experimental test
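As a rough illustration of steps 5 through 7, the following Python sketch estimates a pre-post effect size for the trial program and compares it with an effect size reported in the EBP's definitive trial. Everything here is an assumption for illustration: the standardized-mean-gain formula, the large-sample variance approximation, the z-test for comparing two independent effect sizes, and all of the numbers; none of it comes from the ComPEAT whitepaper.

# Illustrative sketch only: pre-post effect size for the trial program,
# compared against a published EBP effect size (all numbers hypothetical).
import math
import numpy as np
from scipy import stats

def prepost_effect_size(pre, post):
    """Standardized mean gain: (mean post - mean pre) / SD of pre scores."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    n = len(pre)
    d = (post.mean() - pre.mean()) / pre.std(ddof=1)
    r = np.corrcoef(pre, post)[0, 1]                  # pre-post correlation
    var_d = (1.0 / n + d**2 / (2 * n)) * 2 * (1 - r)  # common large-sample approximation
    return d, math.sqrt(var_d)

def compare_effect_sizes(d_trial, se_trial, d_ebp, se_ebp):
    """Two-sided z-test for the difference between two independent effect sizes."""
    z = (d_trial - d_ebp) / math.sqrt(se_trial**2 + se_ebp**2)
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return z, p

# Hypothetical data: 40 participants measured before and after the trial program,
# plus an effect size and standard error reported for the mandated EBP.
rng = np.random.default_rng(0)
pre = rng.normal(50, 10, 40)
post = pre + rng.normal(4, 6, 40)
d_trial, se_trial = prepost_effect_size(pre, post)
z, p = compare_effect_sizes(d_trial, se_trial, d_ebp=0.35, se_ebp=0.08)
print(f"trial d = {d_trial:.2f} (SE {se_trial:.2f}); z = {z:.2f}, p = {p:.3f}")

Because a pre-post design has no control group, a result like this is only suggestive (maturation and regression to the mean are uncontrolled), which is why a successful ComPEAT qualifies a program for a more definitive experimental test rather than replacing one.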

Additional Considerations
The need for a detailed description and model of the ComPEAT program
– Could use well-established program documentation approaches
  - Getting to Outcomes (GTO)
  - Systems Evaluation Protocol (SEP)
– It’s unlikely there will be a close EBP match for many programs: every program is unique to some degree (proximal similarity)
The danger of “creaming” in a ComPEAT
The need for some audit procedures to assure the quality of a ComPEAT

Advantages of ComPEAT
It acknowledges the potential value of evolved practice
It encourages program variation and avoids the problem of program monocultures
It deals with programs that are already being successfully implemented
It may encourage better interventions
– It’s important to know if the practice-evolved program can compete with the EBP
It provides a low-cost alternative to lots of RCTs: one could do a cost comparison of ComPEAT vs. EBP (even a marginally effective practice-evolved program may be more cost-effective than a high-cost EBP; see the sketch below)
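The cost-comparison point can be made with a back-of-the-envelope calculation. The sketch below uses entirely hypothetical costs and effect sizes (none of these figures appear in the presentation) to show how a cheap, modestly effective practice-evolved program can deliver more effect per dollar than a high-cost EBP.

def cost_per_effect(cost_per_participant, effect_size):
    """Rough cost-effectiveness ratio: dollars per standardized effect-size unit."""
    return cost_per_participant / effect_size

# Hypothetical figures for illustration only.
practice = cost_per_effect(cost_per_participant=150, effect_size=0.25)   # $600 per ES unit
ebp = cost_per_effect(cost_per_participant=1200, effect_size=0.40)       # $3,000 per ES unit
print(f"practice-evolved: ${practice:.0f} per ES unit; EBP: ${ebp:.0f} per ES unit")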

Next Steps
Develop some pilot studies of ComPEAT
Integrate the ComPEAT idea into the existing EBP perspective
– ComPEATs are a way to identify potentially promising programs rapidly and at lower cost
– Have ComPEATs be a prerequisite, when appropriate, before mounting a new RCT
– Work with funders to set aside a proportion of evaluation funding reserved for EBPs to do ComPEATs as well