2016 AmeriCorps Texas All-Grantee Meeting February 25-26, 2016

Presentation transcript:

2016 AmeriCorps Texas All-Grantee Meeting February 25-26, 2016

Understanding Evaluation Requirements and Evidence Levels

Overview of Session:
- Overview
- Evaluation Capacity Survey
- Evaluation Requirements and Timeline
- Small vs. Large Grantee Requirements
- Understanding Levels of Evidence
- Pernilla Johansson from the Texas Evaluation Network

Evaluation Requirements by application cycle:
- Original Grant Application: Evaluation Plan not required; Evaluation Report not required
- 1st Recompete Application: Evaluation Plan required; Evaluation Report not required
- 2nd Recompete Application: Evaluation Plan required (builds off the last evaluation); Evaluation Report required
- 3rd+ Recompete Application: Evaluation Plan required (builds off the last evaluation); Evaluation Report required

Evaluation Requirements by grantee size:
- Small Grantee (<$500,000): internal or independent evaluation; impact or process evaluation
- Large Grantee (>$500,000): independent impact evaluation

Evaluation Requirements: Levels of Evidence
- No Evidence
- Pre-Preliminary
- Preliminary
- Moderate
- Strong

Levels of Evidence: No Evidence
- No systematic collection of qualitative or quantitative data on the program model
- May be evidence-informed

Levels of Evidence: Pre-Preliminary
- Data collection on at least one component of the logic model has occurred
- Data collection processes and results are fully described
- The link between the logic model and the data is described

Levels of Evidence: Preliminary, Option 1
- Outcome study of your own program: a pre-post test without a comparison group, OR a post-test only with a comparison group
- Includes data beyond performance measure data
- Yields promising results
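As a rough illustration of the pre-post design named in Option 1 (not part of the presentation), the sketch below computes the average change between pre-test and post-test scores and runs a paired t-test; the column names and data are hypothetical.

```python
# Minimal sketch of a pre-post outcome analysis (no comparison group).
# Column names and scores are invented for illustration.
import pandas as pd
from scipy import stats

scores = pd.DataFrame({
    "participant_id": [1, 2, 3, 4, 5],
    "pre_score":  [52, 61, 47, 70, 58],
    "post_score": [60, 66, 55, 74, 63],
})

# Average change from pre-test to post-test.
change = scores["post_score"] - scores["pre_score"]
print(f"Mean change: {change.mean():.1f} points")

# Paired t-test: is the average change different from zero?
t_stat, p_value = stats.ttest_rel(scores["post_score"], scores["pre_score"])
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

A design like this can show promising results, but without a comparison group it cannot rule out that outcomes would have improved anyway.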

Levels of Evidence: Preliminary, Option 2 (Replication with Fidelity)
- Submit at least one RCT or QED of the intervention that will be replicated
- The evaluation found positive results
- The evaluation was conducted by an independent entity
- Describe how your approach is the same as the evaluated intervention
- Describe how it will be replicated with fidelity
- May submit a process evaluation demonstrating replication with fidelity

Levels of Evidence: Moderate
- Conducted at least one quasi-experimental design (QED) or randomized controlled trial (RCT) of your own program
- The studies evaluate the same intervention as described in the application
- The studies demonstrate evidence of effectiveness for one or more desired outcomes in the logic model
- The studies were conducted by an independent entity
- The ability to generalize findings beyond a single site may be limited
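Both QEDs and RCTs estimate impact by comparing outcomes for people who received the intervention with outcomes for a similar group who did not (randomly assigned in an RCT; statistically matched in a QED). The sketch below is a minimal, hypothetical version of that comparison; the groups and scores are invented for illustration.

```python
# Minimal sketch of the core treatment-vs-comparison contrast behind a QED or RCT.
# Data and variable names are invented for illustration.
import numpy as np
from scipy import stats

treatment = np.array([68, 74, 71, 80, 65, 77])   # beneficiaries who received the intervention
comparison = np.array([63, 70, 66, 72, 61, 69])  # a similar group that did not

# Estimated impact: difference in mean outcomes between the two groups.
impact = treatment.mean() - comparison.mean()
print(f"Estimated impact: {impact:.1f} points")

# Two-sample t-test: is the difference larger than chance alone would suggest?
t_stat, p_value = stats.ttest_ind(treatment, comparison)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

In practice an independent evaluator would also adjust for baseline differences between the groups (especially in a QED), but the basic impact estimate is this difference in outcomes.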

Levels of Evidence: Strong
- Conducted at least one quasi-experimental design (QED) or randomized controlled trial (RCT) of your own program
- The studies evaluate the same intervention as described in the application
- The studies demonstrate evidence of effectiveness for one or more desired outcomes in the logic model
- The studies were conducted by an independent entity
- Findings can be generalized beyond a single site
- The intervention has been tested nationally, regionally, or at the state level, OR has been conducted in multiple locations within a local geographic area
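What separates Strong from Moderate evidence is mainly generalizability: the same impact should appear across multiple sites or regions rather than a single location. As a hypothetical illustration, the sketch below computes a separate impact estimate for each site and an average across sites; the site names and figures are invented.

```python
# Minimal sketch of checking whether an impact estimate holds across sites.
# Site names and outcome means are invented for illustration.
import pandas as pd

results = pd.DataFrame({
    "site":            ["Austin", "Dallas", "El Paso", "Houston"],
    "treatment_mean":  [72.0, 69.5, 70.8, 74.2],
    "comparison_mean": [66.1, 65.0, 66.4, 69.9],
})

# Site-level impact estimates and the average across sites.
results["impact"] = results["treatment_mean"] - results["comparison_mean"]
print(results[["site", "impact"]])
print(f"Average impact across sites: {results['impact'].mean():.1f} points")
```

Consistent positive impacts across sites support the claim that findings generalize beyond a single location.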

Resources
- CNCS Evaluation Knowledge Network: http://www.nationalservice.gov/resources/evaluation
- OneStar Evaluation Resources: http://onestarfoundation.org/americorpstexas/grantee-resources/#Evaluation and Evidence Building

2016 AmeriCorps Texas All-Grantee Meeting February 25-26, 2016