+ IDENTIFYING AND IMPLEMENTING EDUCATIONAL PRACTICES SUPPORTED BY RIGOROUS EVIDENCE: A USER FRIENDLY GUIDE
Presented by Kristi Hunziker, University of Utah, Department of Educational Psychology, School Psychology Program
US Office of Education, H325K080308

+ How to evaluate whether an educational intervention is supported by rigorous evidence: An overview

+ Step 1. Is the intervention backed by "strong" evidence of effectiveness?
Quality of studies needed to establish "strong" evidence: randomized controlled trials that are well-designed and implemented.
Quantity of evidence needed: trials showing effectiveness in two or more typical school settings, including a setting similar to that of your schools/classrooms.
= Strong Evidence

+ Step 2. If the intervention is not backed by "strong" evidence, is it backed by "possible" evidence of effectiveness?
Types of studies that can comprise "possible" evidence:
Randomized controlled trials whose quality or quantity are good but fall short of "strong" evidence, and/or
Comparison-group studies in which the intervention and comparison groups are very closely matched in academic achievement, demographics, and other characteristics.

+ Step 2. If the intervention is not backed by "strong" evidence, is it backed by "possible" evidence of effectiveness?
Types of studies that do not comprise "possible" evidence:
Pre-post studies.
Comparison-group studies in which the intervention and comparison groups are not closely matched.
"Meta-analyses" that include the results of such lower-quality studies.

+ Step 3. If the answers to both questions above are “no,” one may conclude that the intervention is not supported by meaningful evidence.

+ Step 1. Is the intervention backed by "strong" evidence of effectiveness?
Key items to look for in the study's description of the intervention and the random assignment process:
The study should clearly describe the intervention, including: (i) who administered it, who received it, and what it cost; (ii) how the intervention differed from what the control group received; and (iii) the logic of how the intervention is supposed to affect outcomes.
Be alert to any indication that the random assignment process may have been compromised.
The study should provide data showing that there are no systematic differences between the intervention and control groups prior to the intervention.
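Checking the last item above does not require anything elaborate: one can compare the groups' pre-intervention characteristics directly. The sketch below is a minimal, hypothetical illustration in Python; the scores, variable names, and the 0.25 standard-deviation rule of thumb are assumptions for the example, not requirements from the guide.

```python
# Minimal sketch: screening for baseline differences between intervention and
# control groups on one pre-intervention measure (e.g., prior test scores).
# The data and the 0.25 standard-deviation cutoff are illustrative only.
from statistics import mean, stdev

intervention_baseline = [72, 68, 75, 70, 66, 74, 71, 69]  # hypothetical scores
control_baseline      = [70, 67, 73, 71, 65, 76, 69, 68]  # hypothetical scores

diff = mean(intervention_baseline) - mean(control_baseline)
pooled_sd = (stdev(intervention_baseline) + stdev(control_baseline)) / 2
standardized_diff = diff / pooled_sd

print(f"Baseline difference: {diff:.2f} points "
      f"({standardized_diff:.2f} standard deviations)")

# A sizable standardized baseline difference is a warning sign that the
# random assignment process may have been compromised.
if abs(standardized_diff) > 0.25:
    print("Warning: groups differ noticeably at baseline.")
```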

+ Step 1. Is the intervention backed by "strong" evidence of effectiveness?
Key items to look for in the study's collection of outcome data:
The study should use outcome measures that are "valid" – i.e., that accurately measure the true outcomes that the intervention is designed to affect.
The percentage of study participants that the study has lost track of when collecting outcome data should be small, and should not differ between the intervention and control groups.
The study should collect and report outcome data even for those members of the intervention group who do not participate in or complete the intervention.
The study should preferably obtain data on long-term outcomes of the intervention, so that you can judge whether the intervention's effects were sustained over time.
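The attrition item lends itself to a simple calculation: overall attrition (the share of the original sample missing outcome data) and differential attrition (how much that share differs between the two groups). The sketch below works through that arithmetic with made-up counts; the numbers are purely illustrative.

```python
# Minimal sketch: overall and differential attrition, using hypothetical counts.
def attrition_rate(randomized: int, with_outcome_data: int) -> float:
    """Share of originally assigned participants lost to follow-up."""
    return (randomized - with_outcome_data) / randomized

# Hypothetical study: 200 students randomly assigned to each group.
intervention_attrition = attrition_rate(randomized=200, with_outcome_data=178)
control_attrition = attrition_rate(randomized=200, with_outcome_data=186)

overall = (intervention_attrition + control_attrition) / 2
differential = abs(intervention_attrition - control_attrition)

print(f"Intervention attrition: {intervention_attrition:.1%}")
print(f"Control attrition:      {control_attrition:.1%}")
print(f"Overall attrition:      {overall:.1%}")
print(f"Differential attrition: {differential:.1%}")
# Large overall attrition, or attrition that differs between the groups,
# undermines the comparability that random assignment created.
```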

+ Step 1. Is the intervention backed by "strong" evidence of effectiveness?
Key items to look for in the study's reporting of results:
If the study makes a claim that the intervention is effective, it should report (i) the size of the effect, and (ii) statistical tests showing the effect is unlikely to be the result of chance.
A study's claim that the intervention's effect on a subgroup (e.g., Hispanic students) is different from its effect on the overall population in the study should be treated with caution.
The study should report the intervention's effects on all the outcomes that the study measured, not just those for which there is a positive effect.
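As a concrete illustration of the two numbers a study should report, the hypothetical sketch below computes an effect size (Cohen's d) and a significance test (Welch's t-test) for two groups of made-up scores. The guide does not prescribe these particular statistics; they are common choices used here only to show what "size of the effect" and "statistical tests" look like in practice.

```python
# Minimal sketch: reporting (i) an effect size and (ii) a significance test
# for a two-group comparison. Scores are hypothetical; the guide does not
# mandate Cohen's d or a t-test specifically.
from statistics import mean, stdev
from scipy import stats  # requires SciPy

intervention_scores = [82, 79, 88, 85, 77, 90, 84, 81, 86, 83]
control_scores      = [78, 75, 80, 77, 74, 82, 79, 76, 81, 78]

# Effect size: Cohen's d with a pooled standard deviation.
n1, n2 = len(intervention_scores), len(control_scores)
s1, s2 = stdev(intervention_scores), stdev(control_scores)
pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
cohens_d = (mean(intervention_scores) - mean(control_scores)) / pooled_sd

# Significance test: Welch's t-test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(intervention_scores, control_scores,
                                  equal_var=False)

print(f"Effect size (Cohen's d): {cohens_d:.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A claimed effect should come with both numbers: how big the effect is,
# and how unlikely it would be under chance alone.
```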

+ Step 1. Is the intervention backed by "strong" evidence of effectiveness?
Quantity of evidence needed to establish "strong" evidence of effectiveness:
The intervention should be demonstrated effective, through well-designed randomized controlled trials, in more than one site of implementation;
These sites should be typical school or community settings, such as public school classrooms taught by regular teachers; and
The trials should demonstrate the intervention's effectiveness in school settings similar to yours, before you can be confident it will work in your schools/classrooms.

+ Step 2. If the intervention is not backed by “strong” evidence, is it backed by “possible” evidence of effectiveness? This is a judgment call that depends, for example, on the extent of the flaws in the randomized trials of the intervention and the quality of any nonrandomized studies that have been done. The following are a few factors to consider in making these judgments:

+ Step 2. If the intervention is not backed by "strong" evidence, is it backed by "possible" evidence of effectiveness?
Circumstances in which a comparison-group study can constitute "possible" evidence:
The study's intervention and comparison groups should be very closely matched in academic achievement levels, demographics, and other characteristics prior to the intervention.
The comparison group should not be composed of individuals who had the option to participate in the intervention but declined.
The study should preferably choose the intervention/comparison groups and outcome measures "prospectively" – i.e., before the intervention is administered.
How do the results generalize to minority populations?
The study should meet the checklist items listed above for a well-designed randomized controlled trial (other than the item concerning the random assignment process). That is, the study should use valid outcome measures, report tests for statistical significance, and so on.
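One common way to judge whether groups are "very closely matched" is to compute a standardized mean difference for each baseline characteristic. The sketch below assumes that approach; the data, characteristic names, and the 0.25 cutoff are illustrative assumptions, not requirements from the guide.

```python
# Minimal sketch: standardized mean differences on baseline characteristics
# for a comparison-group (non-randomized) study. The data and the 0.25 cutoff
# are illustrative; the guide asks only that groups be very closely matched.
from statistics import mean, stdev

baseline = {
    "prior_reading_score": ([61, 64, 59, 66, 62, 63],
                            [60, 65, 58, 67, 61, 62]),
    "attendance_rate":     ([0.93, 0.91, 0.95, 0.90, 0.94, 0.92],
                            [0.92, 0.90, 0.96, 0.89, 0.93, 0.91]),
}

for characteristic, (treated, comparison) in baseline.items():
    pooled_sd = (stdev(treated) + stdev(comparison)) / 2
    smd = (mean(treated) - mean(comparison)) / pooled_sd
    flag = "  <-- poorly matched" if abs(smd) > 0.25 else ""
    print(f"{characteristic}: standardized difference = {smd:+.2f}{flag}")
```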

+ Step 2. If the intervention is not backed by "strong" evidence, is it backed by "possible" evidence of effectiveness?
Studies that do not meet the threshold for "possible" evidence of effectiveness include:
Pre-post studies;
Comparison-group studies in which the intervention and comparison groups are not well-matched; and
Meta-analyses that combine the results of individual studies which do not themselves meet the threshold for "possible" evidence.

+ Step 3. If the answers to both questions above are “no,” one may conclude that the intervention is not supported by meaningful evidence.

+ Where to find evidence-based interventions
The What Works Clearinghouse, established by the U.S. Department of Education's Institute of Education Sciences to provide educators, policymakers, and the public with a central, independent, and trusted source of scientific evidence of what works in education.
The Promising Practices Network, a web site that highlights programs and practices that credible research indicates are effective in improving outcomes for children, youth, and families.
Blueprints for Violence Prevention, a national violence prevention initiative to identify programs that are effective in reducing adolescent violent crime, aggression, delinquency, and substance abuse.
Development of practice research networks.
The International Campbell Collaboration, which offers a registry of systematic reviews of evidence on the effects of interventions in the social, behavioral, and educational arenas.
Social Programs That Work, a series of papers developed by the Coalition for Evidence-Based Policy on social programs that are backed by rigorous evidence of effectiveness.