Preliminary Results – Not for Citation Investing in Innovation (i3) Fund Evidence & Evaluation Webinar April 25, 2012 Note: These slides are intended as guidance only. Please refer to the official documents published in the Federal Register.

Agenda
 Evidence Standards—eligibility requirement
  - Requirements
  - Review process
 Independent Evaluation—program requirement
  - Guidance on qualities of high-quality evaluation plans
 Questions & answers
  - Live: submission via the Webinar chat function
  - Post-webinar: to

Evidence Standards—Eligibility Requirement

Evidence Eligibility Requirements Are Specific to the Type of Grant
 An application must meet the applicable evidence requirements before an award is made
 Any application failing to meet the applicable evidence requirements is not eligible for a grant award, regardless of its score on the Selection Criteria
 Any application failing to meet the applicable evidence requirements will not be considered for a different type of i3 grant award

Scale-up Grants: Require “Strong Evidence” of Effectiveness
 High internal validity of the evidence
  - Studies designed and implemented in ways that support causal inference
 High external validity of the evidence
  - Studies based on a sufficient representation of participants and settings that the findings support scaling
 Minimum size of evidence base
  - More than one well-designed and well-implemented experimental or quasi-experimental study, OR
  - One large, well-designed and well-implemented multi-site randomized controlled trial

Caution: Not All Associations Support Causal Inferences
 (Mis-)interpretive statement: Based on the evidence, early care & education programs should reduce pupil-teacher ratios in order to lower the expulsion rate.
 Problem: There are competing explanations for why expulsions increase with increases in pupil-teacher ratios.
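To make the caution concrete, here is a small, hypothetical simulation (not based on any i3 or early-childhood data) in which an unmeasured factor drives both pupil-teacher ratios and expulsion rates. The naive correlation between the two is strong even though, by construction, the ratio has no causal effect, which is why such an association alone cannot support the interpretive statement above.

```python
# Hypothetical simulation (illustrative only): a third factor can create an
# association between pupil-teacher ratio and expulsion rate even when the
# ratio itself has no causal effect on expulsions.
import numpy as np

rng = np.random.default_rng(0)
n_programs = 1_000

# Unobserved confounder, e.g., overall program resources/quality (illustrative).
program_quality = rng.normal(size=n_programs)

# Lower-quality programs tend to have higher pupil-teacher ratios...
pupil_teacher_ratio = 12 - 2 * program_quality + rng.normal(scale=1.5, size=n_programs)

# ...and also higher expulsion rates, independent of the ratio itself.
expulsion_rate = 5 - 1.5 * program_quality + rng.normal(scale=1.0, size=n_programs)

# The naive correlation is strongly positive even though ratio was not a cause.
print(np.corrcoef(pupil_teacher_ratio, expulsion_rate)[0, 1])
```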

Validation Grants: Require “Moderate Evidence” of Effectiveness
 Validity of the evidence
  - High internal validity and moderate external validity, or
  - High external validity and moderate internal validity
 Minimum size of evidence base
  - At least one well-designed experimental or quasi-experimental study (may have small sample sizes OR may fail to demonstrate equivalence between the intervention & comparison groups, but has no other major flaws), or
  - A correlational study with strong statistical controls for selection bias and for discerning the influence of other potential confounds

Development Grants: Require Evidence to Support the Proposed Intervention
 Theoretical support for the proposed practice, strategy, or program that is based on research findings or reasonable hypotheses, including related research or theories in education and other sectors, and
 Some empirical evidence of the promise of the proposed practice, strategy, or program based on prior implementation and evaluation of something similar, albeit potentially on a limited scale or in a limited setting

Evidence Standards—Eligibility Review Process

Responsibility for the Evidence Eligibility Reviews
 The Institute of Education Sciences (IES) conducts the reviews and reports its findings to OII
 IES uses What Works Clearinghouse (WWC) evidence standards to judge the causal validity of the evidence
  - IES uses experienced WWC consultants
 Reviews are limited to the evidence specifically cited in the application as supporting the eligibility requirement

Evidence Citation Tips
 Provide a list of full citations for the evidence you want ED to include in the evidence eligibility review in Appendix D of your application
 If possible, also append the evidence itself (if space allows)
 Include in the list only those citations that are relevant to the intervention being proposed
 For Scale-up and Validation applications, include in the list only those citations that are primary analyses of the effect of the intervention being proposed

Evidence Reviews: Scale-up Applications
Two focal questions:
1. Does the evidence include a sufficient number and quality of studies?
  - More than one well-designed and well-implemented experimental or quasi-experimental study, or
  - One large, well-designed and well-implemented multi-site randomized controlled trial?
2. Does the evidence include a reasonable representation of the kinds of participants and settings proposed to receive the practice, strategy, or program under the Scale-up grant, such that the results can be expected to generalize?

Evidence Reviews: Validation Applications
Question 1 of 3: What is the internal validity of the evidence of effectiveness?
 High = At least one well-designed and well-implemented experimental or quasi-experimental study supporting the effectiveness of the practice, strategy, or program
 Moderate = (a) At least one well-designed experimental or quasi-experimental study, with no major flaws except possibly failure to demonstrate equivalence between the intervention & comparison groups, or (b) at least one correlational study with strong statistical controls for possible selection bias
 Low = No study with high or moderate internal validity

Evidence Reviews: Validation (Cont’d)
Question 2 of 3: What is the external validity of the evidence?
 High = Evidence pertains to the kinds of participants and settings that are the focus of the Validation grant application
 Moderate = Evidence pertains to participants and settings that overlap with but may be more limited than those that are the focus of the application
 Low = No study with high or moderate external validity

Evidence Reviews: Validation (Cont’d)
Question 3 of 3: Does the evidence of effectiveness meet one of the following conditions?
1. High internal validity and at least moderate external validity (as defined above), or
2. High external validity and at least moderate internal validity (as defined above)

Evidence Review: Development Applications
Two focal questions:
1. Is there a well-reasoned, theoretically supported rationale for the proposed practice, strategy, or program based on research findings or reasonable hypotheses?
2. Is there empirical evidence of the promise that the proposed practice, strategy, or program (or one similar to it) will improve student outcomes (albeit, possibly, based on implementation on a limited scale or in a limited setting compared with the participants and settings proposed to receive the treatment under the Development grant)?

What Works Clearinghouse Evidence Standards Used to Judge Internal Validity of Evidence

How Does the WWC Assess Research Evidence?
 Type of design: Does the study design allow us to draw causal conclusions?
 Strength of data: Does the study focus on relevant outcomes and measure them appropriately?
 Adequacy of statistical procedures: Are the data analyzed properly?

WWC Standards Apply to Causal Designs
 Eligible Designs
  - Randomized controlled trials (RCTs)
  - Quasi-experimental designs (QEDs)
 Potentially Eligible Designs
  - Regression discontinuity (RDD)
  - Single case (SCD)
 Ineligible Designs
  - Anecdotes and testimonials
  - Case studies
  - Descriptive
  - Correlational

Key Elements of the WWC RCT/QED Standards
[Flow chart: Randomization? → Attrition? → Baseline Equivalence? determine the rating]
 An RCT with low attrition Meets Evidence Standards
 An RCT with high attrition, or a QED, Meets Evidence Standards with Reservations if it demonstrates baseline equivalence; otherwise it Does Not Meet Evidence Standards
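The decision flow pictured on this slide can be summarized in a few lines of code. The sketch below is only an illustration of that logic, not an official WWC tool; its three inputs correspond to the elements named on the slide (randomization, attrition, baseline equivalence).

```python
# Illustrative sketch of the WWC RCT/QED decision flow shown on this slide.
def wwc_rct_qed_rating(randomized: bool,
                       low_attrition: bool,
                       baseline_equivalence: bool) -> str:
    """Return the evidence rating implied by the RCT/QED flow chart."""
    if randomized and low_attrition:
        return "Meets Evidence Standards"
    # High-attrition RCTs and all QEDs must demonstrate baseline equivalence
    # on the analytic sample to receive a rating with reservations.
    if baseline_equivalence:
        return "Meets Evidence Standards with Reservations"
    return "Does Not Meet Evidence Standards"

# Example: an RCT with high attrition that demonstrates baseline equivalence.
print(wwc_rct_qed_rating(randomized=True, low_attrition=False,
                         baseline_equivalence=True))
```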

Caution 1: RCTs with high sample attrition must demonstrate baseline equivalence
[Flow chart repeated from the previous slide: a high-attrition RCT that demonstrates baseline equivalence Meets Evidence Standards with Reservations; one that does not Does Not Meet Evidence Standards]

Standards Account for Overall and Differential Attrition Rates
Sample maximum attrition thresholds for meeting WWC evidence standards:

Overall Attrition | Maximum Treatment–Control Differential Attrition
15% | 10.7%
20% | 10.0%
30% | 8.2%
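For illustration, the sketch below checks a study's attrition against the three sample points in the table, interpolating linearly between them. Treat this as an approximation for discussion only; the actual WWC standard uses a full published attrition boundary rather than these three points.

```python
# Rough illustration using only the three sample points from the slide's table.
import numpy as np

OVERALL = [0.15, 0.20, 0.30]              # overall attrition (from the slide)
MAX_DIFFERENTIAL = [0.107, 0.100, 0.082]  # corresponding max differential attrition

def attrition_acceptable(overall: float, differential: float) -> bool:
    """Check a study's attrition against the interpolated sample thresholds."""
    limit = np.interp(overall, OVERALL, MAX_DIFFERENTIAL)
    return differential <= limit

# Example: 20% overall attrition with a 6-point treatment-control difference.
print(attrition_acceptable(overall=0.20, differential=0.06))  # True
```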

Caution 2: All QEDs must demonstrate baseline equivalence between the treatment and control groups
[Flow chart repeated: a QED (no randomization) that demonstrates baseline equivalence Meets Evidence Standards with Reservations; one that does not Does Not Meet Evidence Standards]

Baseline Equivalence Standard
 Equivalence must be demonstrated on the analytic sample

Baseline Difference (Treatment vs. Control) | Result
< .05 sd | No adjustment needed
.05–.25 sd | Statistical adjustment needed
> .25 sd | Does not meet standards
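A minimal sketch of the classification in the table above, assuming a simple pooled-standard-deviation standardized difference. The WWC prescribes specific effect-size formulas (e.g., Hedges' g with a small-sample correction), so this is a simplification for illustration only.

```python
# Simplified sketch of the baseline-equivalence rule in the table above.
import numpy as np

def baseline_equivalence_category(treatment: np.ndarray, comparison: np.ndarray) -> str:
    """Classify the baseline difference on the analytic sample."""
    n_t, n_c = len(treatment), len(comparison)
    pooled_sd = np.sqrt(((n_t - 1) * treatment.var(ddof=1) +
                         (n_c - 1) * comparison.var(ddof=1)) / (n_t + n_c - 2))
    diff = abs(treatment.mean() - comparison.mean()) / pooled_sd
    if diff < 0.05:
        return "Satisfies equivalence; no statistical adjustment needed"
    if diff <= 0.25:
        return "Satisfies equivalence only with statistical adjustment"
    return "Does not meet standards"

# Example with hypothetical baseline test scores for the analytic sample.
rng = np.random.default_rng(1)
print(baseline_equivalence_category(rng.normal(0.10, 1, 200),
                                    rng.normal(0.00, 1, 200)))
```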

Other Criteria
 Studies must measure impacts on relevant outcomes
 Outcome measures must be reliable
 Outcomes must not be over-aligned with the intervention
 There must be no confound between the treatment condition and one or more other factors that could also be the cause of differences in outcomes between the treatment and comparison groups

Questions & Answers
Please submit your questions on evidence eligibility requirements via the Webinar chat function now

Independent Evaluation—Program Requirement and Guidance

Evaluation Requirements
All i3 Grantees MUST:
1. Conduct an independent project evaluation
2. Cooperate with technical assistance provided by the Department or its contractors
3. Share the results of any evaluation broadly
4. Share the data sets for Validation and Scale-up evaluations
Note: The quality of an applicant’s project evaluation is also a selection criterion.

Selection Criterion D: Quality of Project Evaluation
 Understanding of implementation and intermediate outcomes (Scale-up, Validation, and Development): “The extent to which the methods of evaluation will provide high-quality implementation data and performance feedback, and permit periodic assessment of progress toward achieving intended outcomes.”
 Sufficient funding to carry out evaluation (Scale-up, Validation, and Development): “The extent to which the proposed project plan includes sufficient resources to carry out the project evaluation effectively.”

Selection Criterion D: Quality of Project Evaluation (Cont’d)
 Generates information to support follow-on scaling or other activities:
  - Scale-up and Validation: “The extent to which the evaluation will provide sufficient information about the key elements and approach of the project so as to facilitate replication or testing in other settings.”
  - Development: “The extent to which the proposed evaluation will provide sufficient information about the key elements and approach of the project to facilitate further development, replication, or testing in other settings.”

Notes on Selection Criterion D: Quality of Project Evaluation
 Design & methodology for the evaluation:
  - Scale-up: “The extent to which the methods of evaluation will include a well-designed experimental study, or if a well-designed experimental study of the project is not possible, the extent to which the methods of evaluation will include a well-designed quasi-experimental design.”
  - Validation: “The extent to which the methods of evaluation will include a well-designed experimental study or a well-designed quasi-experimental study.”

Guidance on Qualities of High-quality Evaluation Plans

Preliminary Results – Not for Citation Evaluation Goals  Aligned with i3 performance measures  Increase strength of evidence on the impact or promise of i3-supported interventions  i3 performance measures set the expectation for all i3 independent evaluations to provide high-quality implementation data and performance feedback  Other expectations vary by grant type 33

Evaluation Goals: Scale-up & Validation Projects
 Estimate the impacts of the i3-supported intervention as implemented at scale
  - Expectation they will be “well-designed and well-implemented,” meaning designed, implemented, and reported in a manner to meet WWC evidence standards
 Provide information on key elements of the intervention for replication or testing in other settings
 Reflect changes to the intervention or delivery model
 Reflect the nature of sites served, implementation settings, and participants
 Document costs/resource use

Evaluation Goals: Development Projects
 Provide evidence of the promise of the practice, strategy, or program for improving student outcomes:
  - Using a research design that provides a comparison for the outcomes of the intervention group (e.g., a randomized controlled trial or strong matched comparison group design)
  - Using valid outcome measures (e.g., following the WWC standards)
 Provide information on key elements of the intervention for further development, replication, or testing in other settings

Evaluation Plan Tips (1 of 6)
 Provide a fully developed plan in the application
  - Maximizes the ability to adequately address key questions
  - Minimizes the time spent at the beginning of the grant specifying details and negotiating revisions with ED
 Key components of a high-quality evaluation plan
  - Logic model (What is the intervention? Who is it intended to serve? What outcomes is it expected to produce? How?)
  - Research questions (What do you want to learn? Who will the results pertain to?)
  - Proposed methods (What is the sampling frame? How will treatment and comparison groups be formed? What is the minimum detectable effect size? What data collection measures and methods will be used? How will the data be analyzed and reported? A simple sketch of a minimum detectable effect size calculation follows this slide.)
  - Coherent links among the above components
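Because the proposed-methods component asks applicants to state a minimum detectable effect size (MDES), here is a minimal sketch of one common approximation for a two-arm, individually randomized design with 80% power and a two-sided 5% test. It ignores clustering, which many i3 evaluations would need to account for, and all numbers in the example are hypothetical.

```python
# Minimal sketch of an MDES approximation for a simple two-arm, individually
# randomized design. Illustrative only; clustered designs need different formulas.
from statistics import NormalDist

def mdes(n_total: int, prop_treated: float = 0.5, r_squared: float = 0.0,
         alpha: float = 0.05, power: float = 0.80) -> float:
    """Approximate MDES in standard deviation units."""
    z = NormalDist()
    multiplier = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)  # ~2.80 for defaults
    se = ((1 - r_squared) / (prop_treated * (1 - prop_treated) * n_total)) ** 0.5
    return multiplier * se

# Example: 800 students, half treated, pretest explaining 50% of outcome variance.
print(round(mdes(n_total=800, r_squared=0.5), 3))  # about 0.14 SD
```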

Evaluation Plan Tips (2 of 6)
 Provide a well-developed logic model to guide the evaluation plan:
  - Clear description of the i3-supported intervention/scaling mechanism
  - Key elements of the intervention in sufficient detail to guide replication
  - Target population for the innovation
  - Expected pathways through which the innovation is expected to affect intermediate and ultimate outcomes of interest
  - Expected mechanisms of change—e.g., shows how factors such as content, organization, duration, training, and other procedures are expected to lead to the intended outcomes

Evaluation Plan Tips (3 of 6)
 Clarity on key research questions in the evaluation plan
  - Creates shared expectations with partners and ED about what will be learned
 Clarity on aspects of the logic model the evaluation will examine
 Design (methods, sample, data collection, analysis) appropriate to address the key questions

Evaluation Plan Tips (4 of 6)
 Specify proposed methods
  - For questions about program effectiveness (i.e., causal inference questions):
    - Rely on experimental methods, when possible (NOTE: it is challenging to meet WWC evidence standards with quasi-experimental methods, as noted above)
    - Describe how treatment and control groups will be formed and monitored

Evaluation Plan Tips (5 of 6)
 Summarize data collection and analysis procedures
  - Describe data sources, samples, data collection procedures and timeline, and analysis methods for each research question
  - Ensure critical components of the logic model are represented in the data collection plan
    - NOTE: ensure measurement of implementation fidelity
    - NOTE: comparative studies should document the experiences of comparison participants

Evaluation Plan Tips (6 of 6)
 Ensure a good match among logic model, research questions, proposed methods, and i3 goals:
  - Sample design should yield a sample representative of the relevant population group for the i3 project & support strong impact analyses (relevant to the intervention as implemented under i3)
  - Analysis plan should be appropriate for addressing the study questions and well-matched to the sample design
  - Plan for reporting the findings should be consistent with the evaluation goals and design

Questions & Answers
Please submit your questions on evaluation via the Webinar chat function now

Other Important Resources
Note: These slides are intended as guidance only. Please refer to the official Notices in the Federal Register.
Investing in Innovation Fund Website:
  - Notices of Final Revisions to Priorities, Requirements, and Selection Criteria
  - Application Packages for each competition (includes the respective Notice Inviting Applications)
  - Frequently Asked Questions
What Works Clearinghouse Website:
  - Reference Resources, Procedures and Standards Handbook
  - Quick Review Protocol
All questions about i3 may be sent to