Evaluation Designs Adrienne DiTommaso, MPA, CNCS Office of Research and Evaluation

Learning objectives
By the end of this presentation, you will be able to:
–Explain evaluation design
–Describe the differences between types of evaluation designs
–Identify the key elements of each type of evaluation design
–Understand the key considerations in selecting a design for conducting an evaluation of your AmeriCorps program

What is evaluation design?
Evaluation design is the structure that provides the information needed to answer each of your evaluation questions.
Your intended evaluation design should be based on and aligned with the following:
–Your program’s theory of change and logic model
–Primary purpose of the evaluation and key research questions
–Resources available for the evaluation
–Funder’s evaluation requirements

Evaluation designs and CNCS requirements
Do these evaluation study designs meet CNCS requirements?
–Process Design (Non-Experimental Design Studies): Large Grantees – No; Small Grantees/EAP Programs – Yes
–Outcome Design (Non-Experimental Design Studies): Large Grantees – No; Small Grantees/EAP Programs – Yes
–Outcome (Impact) Design (Quasi-Experimental* or Experimental Design Studies): Large Grantees – Yes; Small Grantees/EAP Programs – Yes
*Fulfills the CNCS evaluation design requirement for large, recompeting grantees if a reasonable comparison group is identified and appropriate matching/propensity scoring is used in the analysis.

Basic types of evaluation designs
The two “sides” of a program’s logic model align with the two types of evaluation designs: Process and Outcome.

Process evaluation
Goals:
–Documents what the program is doing
–Documents to what extent and how consistently the program has been implemented as intended
–Informs changes or improvements in the program’s operations
Common features:
–Does not require a comparison group
–Includes qualitative and quantitative data collection
–Does not require advanced statistical methods

Process evaluation designs
Common methods include:
–Review of program documents and records
–Review of administrative data
–Interviews and focus groups
–Direct observation
Types of analysis:
–Thematic identification
–Confirmation of findings across sources (triangulation)
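Thematic identification and triangulation are usually done in qualitative analysis software such as NVivo, but the minimal sketch below (a hypothetical illustration, not the program's actual procedure; the sources and theme codes are made up) shows the underlying idea: excerpts from different data sources are tagged with theme codes, and themes that recur across sources are the strongest candidates for findings.

```python
# Minimal sketch of tallying coded themes across data sources (triangulation).
from collections import defaultdict

# Hypothetical coded excerpts: (source, theme) pairs produced during coding.
coded_excerpts = [
    ("client interview", "barriers to access"),
    ("client interview", "trust in members"),
    ("partner focus group", "barriers to access"),
    ("activity log review", "barriers to access"),
    ("partner focus group", "trust in members"),
]

sources_by_theme = defaultdict(set)
for source, theme in coded_excerpts:
    sources_by_theme[theme].add(source)

# Themes confirmed by multiple source types carry more weight in the findings.
for theme, sources in sources_by_theme.items():
    print(f"{theme}: appears in {len(sources)} source type(s) -> {sorted(sources)}")
```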

Facilitated example: Process evaluation
Evaluation Design Crosswalk: Process Evaluation
Research question 1: What kinds of clients are seeking financial education services?
–Evaluation design: Process evaluation
–Methods: Client interviews (25); document review of client intake forms and member activity logs
–Data to be collected, when, and by whom: Evaluator will conduct interviews when clients begin the program; documents reviewed quarterly
–Analysis plan: Thematic analysis of interview transcripts using NVivo; coding and thematic analysis of documents
Research question 2: How are clients accessing the program?
–Evaluation design: Process evaluation
–Methods: Client interviews (same as above); partner focus groups (4)
–Data to be collected, when, and by whom: Same interviews as above; evaluator will hold focus groups quarterly
–Analysis plan: Same as above; thematic analysis of focus group transcripts using NVivo

Outcome evaluation
Goals:
–Identifies the results or effects of a program
–Measures program beneficiaries’ changes in knowledge, attitude(s), and/or behavior(s) that result from a program
Common features:
–Typically requires quantitative data
–Often requires advanced statistical methods
–May include a comparison group (impact evaluation)

What is a comparison or control group?
A group of individuals not participating in the program or receiving the intervention.
A comparison or control group is necessary to determine whether the program, rather than some other factor, is causing observed changes.
“Comparison group” is associated with a quasi-experimental design; “control group” is associated with an experimental design.

Outcome evaluation designs

Non-experimental designs
Outcomes are tracked only for the intervention group.
There are several variations within the category of non-experimental outcome designs, differing only in the number and timing of outcome measurement points:
a) Single group post-test:              X  0
b) Single group pre- and post-test:  0  X  0
X = intervention is administered; 0 = measurement is taken
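As a concrete illustration of the single group pre- and post-test design (0 X 0), here is a minimal sketch, not part of the original slides, using hypothetical scores and Python's scipy library. It compares mean scores before and after the intervention with a paired t-test.

```python
# Minimal sketch of analyzing a single-group pre/post design (0 X 0).
# Hypothetical data; in practice, scores come from your pre/post instruments.
from scipy import stats

pre_scores = [62, 55, 70, 58, 66, 61, 59, 73]   # measurement before the intervention (0)
post_scores = [68, 60, 74, 63, 71, 65, 66, 78]  # measurement after the intervention (0)

# Paired t-test: each participant serves as their own baseline.
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
mean_change = sum(post_scores) / len(post_scores) - sum(pre_scores) / len(pre_scores)

print(f"Mean change: {mean_change:.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
# Note: without a comparison group, this change cannot be attributed
# solely to the program (history, maturation, etc. are not ruled out).
```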

Quasi-experimental designs
Defined by collecting data on two or more study groups: an intervention group and a comparison group.
The intervention and comparison groups are identified from pre-existing or self-selected groups and are not formed through a random assignment process.
Pre-existing differences between the intervention and comparison groups at the outset of the intervention may lead to inaccurate estimates of the program’s effects.
Intervention group:  0  X  0
Comparison group:    0     0
X = intervention is administered; 0 = measurement is taken
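To make the pre/post-with-comparison-group logic concrete, here is a minimal sketch (not from the slides; the group means are hypothetical) of a simple difference-in-differences calculation, which compares the change in the intervention group to the change in the comparison group.

```python
# Minimal difference-in-differences sketch for a 0 X 0 / 0 _ 0 design.
# Hypothetical group means; a real analysis would also adjust for covariates
# and estimate standard errors (e.g., via a regression with an interaction term).

intervention_pre, intervention_post = 54.0, 63.0  # intervention group means (0 ... 0)
comparison_pre, comparison_post = 55.0, 58.0      # comparison group means (0 ... 0)

change_intervention = intervention_post - intervention_pre
change_comparison = comparison_post - comparison_pre

# The comparison group's change stands in for what would have happened
# to the intervention group without the program.
did_estimate = change_intervention - change_comparison
print(f"Difference-in-differences estimate of program effect: {did_estimate:.1f}")
```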

Types of quasi-experimental designs
–Regression discontinuity
–Difference-in-differences
–Comparative interrupted time series
–Pre/post-test with matched comparison group, with the group constructed using:
 Propensity score matching
 Case matching
 Instrumental variables
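The sketch below illustrates one of the matching approaches named above, propensity score matching, in broad strokes. It is an illustration with made-up data and variable names, not the evaluation's actual procedure: a model predicts the probability of participation from observed characteristics, and each participant is paired with the non-participant whose predicted probability is closest.

```python
# Minimal propensity score matching sketch (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: rows are individuals, columns are observed characteristics
# (e.g., age, baseline score); `treated` flags program participants.
X = np.array([[23, 50], [35, 62], [29, 55], [41, 48], [30, 60], [27, 52]])
treated = np.array([1, 1, 1, 0, 0, 0])

# Step 1: estimate propensity scores (predicted probability of participation).
model = LogisticRegression().fit(X, treated)
scores = model.predict_proba(X)[:, 1]

# Step 2: match each treated individual to the nearest-scoring untreated individual.
treated_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]
matches = {
    int(i): int(control_idx[np.argmin(np.abs(scores[control_idx] - scores[i]))])
    for i in treated_idx
}
print("Treated -> matched comparison:", matches)
# Outcomes would then be compared across the matched pairs.
```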

Experimental designs
Defined by collecting data on two or more study groups: an intervention group and a control group.
Random assignment techniques (e.g., a lottery draw) are used by the evaluator to assign study participants to either the intervention or the control group.
Because random assignment ensures the study groups are equivalent prior to the intervention, experimental designs are often considered the most credible designs for showing impact.
Intervention group (randomly assigned):  0  X  0
Control group (randomly assigned):       0     0
X = intervention is administered; 0 = measurement is taken
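Random assignment itself is straightforward to implement. The following minimal sketch (illustrative, with hypothetical participant IDs) shows a simple lottery-style split of applicants into intervention and control groups.

```python
# Minimal random assignment sketch (lottery draw).
import random

participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]  # hypothetical IDs

rng = random.Random(42)   # fixed seed so the assignment can be reproduced and documented
shuffled = participants[:]
rng.shuffle(shuffled)

half = len(shuffled) // 2
intervention_group = shuffled[:half]
control_group = shuffled[half:]

print("Intervention:", intervention_group)
print("Control:     ", control_group)
# Because assignment is random, the two groups are equivalent in expectation
# before the intervention begins.
```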

Facilitated example: Outcome evaluation
Evaluation Design Crosswalk: Outcome Evaluation
Research question: Do clients exit the program with increased knowledge of personal finance concepts relevant to their needs?
–Evaluation design: Outcome evaluation
–Methods: Randomized control trial; clients will be randomly assigned to treatment at the time of application to the program. Control group individuals are deferred for 6 months, then become eligible to participate in the program.
–Data to be collected, when, and by whom: Client and control group knowledge of personal finance concepts. Pre-test during application; post-test for the treatment group upon completion of the program and for the control group at 6 months post-deferment. Collected by the evaluator via paper-and-pencil and online survey.
–Analysis plan: Statistical analysis with descriptive statistics and a between-groups t-test using Stata software.
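The analysis plan above calls for descriptive statistics and a between-groups t-test in Stata. Purely as an illustration of that step, with hypothetical post-test scores and Python's scipy standing in for Stata, a minimal version looks like this:

```python
# Minimal sketch of the between-groups comparison described in the analysis plan.
# Hypothetical post-test knowledge scores; the actual evaluation specifies Stata.
from statistics import mean, stdev
from scipy import stats

treatment_scores = [78, 85, 72, 90, 81, 76, 88]
control_scores = [70, 74, 68, 77, 72, 69, 75]

# Descriptive statistics for each group.
for name, scores in [("Treatment", treatment_scores), ("Control", control_scores)]:
    print(f"{name}: n={len(scores)}, mean={mean(scores):.1f}, sd={stdev(scores):.1f}")

# Independent-samples (between-groups) t-test.
t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```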

Evaluation designs and CNCS requirements (recap)
Do these evaluation study designs meet CNCS requirements?
–Process Design (Non-Experimental Design Studies): Large Grantees – No; Small Grantees/EAP Programs – Yes
–Outcome Design (Non-Experimental Design Studies): Large Grantees – No; Small Grantees/EAP Programs – Yes
–Outcome (Impact) Design (Quasi-Experimental* or Experimental Design Studies): Large Grantees – Yes; Small Grantees/EAP Programs – Yes
*Fulfills the CNCS evaluation design requirement for large, recompeting grantees if a reasonable comparison group is identified and appropriate matching/propensity scoring is used in the analysis.

Questions?