Outcomes Evaluation A good evaluation is: useful to its audience, practical to implement, conducted ethically, and technically accurate.

Evaluation Outcomes/impact evaluation: measurement of changes in attitudes, knowledge, health status, behavior, and nutrition status.

6 Steps of Evaluation (1) Establish an evaluation plan from the beginning of the program. (2) Obtain buy-in from administrators. (3) Allow enough staff time to make evaluation a priority.

6 Steps of Evaluation (4) Obtain permission from, and encourage participation by, program participants. (5) Be flexible and creative.

6 Steps of Evaluation (6) Use a strong research design and measures that generate the data you need to support your program's goals.

Validity Internal validity: the extent to which an observed effect can be attributed to the planned intervention.

Validity External validity: the extent to which an observed impact can be generalized to other settings and populations with similar characteristics.

Threats to Validity History, measurement, and selection.

History External events (H), internal programmatic or participant events (I), and treatment effects (X).

Measurement The methods used to collect data. Instruments need to be reliable and valid.
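
One common check on instrument reliability is internal consistency. Below is a minimal sketch, assuming a hypothetical 5-item scale scored 1-5; the data and the cronbach_alpha helper are illustrative, not part of the original slides.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score array."""
    k = items.shape[1]                         # number of items on the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 participants x 5 items
scores = np.array([
    [4, 5, 4, 4, 5],
    [2, 3, 2, 3, 2],
    [5, 5, 4, 5, 5],
    [3, 3, 3, 2, 3],
    [4, 4, 5, 4, 4],
    [2, 2, 3, 2, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")  # high (~0.96) here
```

Values near or above 0.8 are usually read as adequate internal consistency, though the threshold depends on the stakes of the measurement.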

Selection Define eligibility to participate in the program (the criteria). Is there a difference between those who stay in the program and those who drop out?

Selection Of those eligible, who accepts and who refuses? Acceptance rate = attended/eligible = 3222/5000 = 64%.

Selection Dropout rate = dropouts/those who initially said yes = 1600/3222 = 50%.

Selection Lost to follow-up = can't be found/completed the program = 300/1622 = 18%.
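
The three rates chain together: completers are those who accepted minus those who dropped out. A minimal sketch using the numbers from the slides above:

```python
# Participation and attrition rates using the figures from the slides above
eligible  = 5000                    # met the eligibility criteria
accepted  = 3222                    # said yes and attended
dropped   = 1600                    # started but dropped out
completed = accepted - dropped      # 1622 finished the program
lost      = 300                     # completers who can't be found at follow-up

print(f"Acceptance rate:   {accepted / eligible:.0%}")   # 64%
print(f"Dropout rate:      {dropped / accepted:.0%}")    # 50%
print(f"Lost to follow-up: {lost / completed:.0%}")      # 18%
```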

Selection Identify contextual or structural variables to decrease selection bias

Regression Effects If participants score high on the pretest, there is little room for improvement, which can make the program appear to have poor impact. Is the pretest score a threat to validity?
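
Regression toward the mean can mimic this even with no ceiling: groups selected for extreme pretest scores drift back toward the average on retest. A minimal simulation sketch with hypothetical score distributions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_skill = rng.normal(50, 10, n)            # stable underlying ability
pretest  = true_skill + rng.normal(0, 8, n)   # observed score = skill + noise
posttest = true_skill + rng.normal(0, 8, n)   # fresh noise, NO intervention

high = pretest > 65                           # select the high pretest scorers
print(f"High scorers, pretest mean:  {pretest[high].mean():.1f}")   # ~71
print(f"High scorers, posttest mean: {posttest[high].mean():.1f}")  # ~63
# The selected group "worsens" with no treatment at all, so a program
# serving extreme scorers can look harmful (or effective) spuriously.
```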

Synergistic Effects These threats can work together to lower internal validity.

Evaluation Designs Choose a design that increases internal validity.

Evaluation Designs The design is selected based on the objectives of the program, the purpose of the evaluation, the availability of evaluation resources, and the type of health and behavior problem, setting, and audience.

Evaluation Designs Notation R = random assignment; E = intervention group; C = control group (a true control when randomized, a comparison group otherwise); X = treatment.

Evaluation Designs Notation N = number of subjects; O = observation (data collection point); T = time.

One Group One group measured around one intervention: posttest only (E X O) or one-group pretest-posttest (E O X O). Non-experimental; no random assignment.

One Group No control/comparison group

One Group What are some of the main weaknesses of this design for increasing internal validity?

One Group When could this design be used appropriately?

Nonequivalent Comparison E O X O / C O O. Comparison group: any group not formed by random assignment.
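
A common analysis for this design compares the change in each group, a difference-in-differences estimate. A minimal sketch; the pre/post scores are hypothetical and illustrative only:

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post scores, nonequivalent comparison group design
e_pre  = np.array([60, 55, 62, 58, 64, 57, 61, 59])   # intervention group, pre
e_post = np.array([70, 66, 71, 65, 75, 68, 72, 69])   # intervention group, post
c_pre  = np.array([59, 56, 61, 60, 63, 58, 62, 57])   # comparison group, pre
c_post = np.array([61, 57, 63, 61, 66, 59, 64, 58])   # comparison group, post

e_gain = e_post - e_pre
c_gain = c_post - c_pre

# Program effect over and above the comparison group's trend
print(f"Estimated effect: {e_gain.mean() - c_gain.mean():.1f} points")
t, p = stats.ttest_ind(e_gain, c_gain)   # compare the two sets of gain scores
print(f"t = {t:.2f}, p = {p:.3f}")
```

Because the groups were not randomized, a gain-score comparison like this reduces, but does not eliminate, selection threats.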

Nonequivalent Comparison What threats to internal validity are lessened?

Baseline Data

Time Series E O O O X O O O. Shows the pattern of the outcome variable and the stability of the outcome measure; collect the outcome variable unobtrusively.

Time Series Multiple data points increase the power of the design; collect them at equal intervals.
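
One standard way to analyze an interrupted time series is segmented regression, which estimates the pre-existing trend plus the level and trend changes at the intervention. A minimal sketch with hypothetical monthly data:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical outcome: 6 observations before and 6 after the program (X)
y = np.array([42, 43, 41, 44, 43, 42,    # pre-intervention  (O O O ...)
              47, 48, 49, 48, 50, 49])   # post-intervention (... O O O)
t = np.arange(len(y))                    # time, at equal intervals
after = (t >= 6).astype(int)             # 1 once the program has started
t_after = np.where(after == 1, t - 6, 0) # time elapsed since program start

X = sm.add_constant(np.column_stack([t, after, t_after]))
model = sm.OLS(y, X).fit()
# Coefficients: intercept, pre-existing trend, level change at X,
# and change in trend after X
print(model.params)
```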

Time Series Must still try to control for history, selection, and measurement threats.

Time Series With a control or comparison group, the design is much stronger: better control over the history threat to validity.

Time Series with Comparison Group E O O O X O O O / C O O O O O O

True Experimental R E O X O / R C O O. Random assignment establishes two groups that are not significantly different at baseline; best control of threats to validity.
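
A minimal sketch of random assignment followed by a baseline equivalence check; the participant IDs and baseline scores are hypothetical:

```python
import random
from scipy import stats

# Hypothetical participants with baseline scores
participants = {f"p{i:02d}": score for i, score in
                enumerate([61, 58, 64, 55, 67, 59, 62, 60, 66, 57, 63, 56])}

ids = list(participants)
random.seed(42)
random.shuffle(ids)                        # R: random assignment
half = len(ids) // 2
e_group, c_group = ids[:half], ids[half:]  # intervention and control groups

e_base = [participants[i] for i in e_group]
c_base = [participants[i] for i in c_group]
t, p = stats.ttest_ind(e_base, c_base)     # baseline equivalence check
print(f"Baseline means: E = {sum(e_base)/len(e_base):.1f}, "
      f"C = {sum(c_base)/len(c_base):.1f} (p = {p:.2f})")
```

A non-significant baseline difference is expected under randomization; with small groups, report the means anyway so readers can judge balance.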

True Experimental Advantages? Disadvantages?

Post-then-pre Appropriate for assessing behavior change when participants have limited knowledge at the beginning of the program.

Post-then-pre Example of a typical pre-test question –Do you include one food rich in vitamin C in your diet daily?

Post-then-pre Implement the program (see handout, Table 2). After the program, give only a posttest.

Post-then-pre Q1 asks about behavior now, as a result of the program. Q2 asks what the behavior had been before the program (i.e., the retrospective pretest question).
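
Because Q1 and Q2 come from the same respondent on the same form, the natural analysis is a paired comparison. A minimal sketch with hypothetical 1-5 ratings:

```python
import numpy as np
from scipy import stats

# Hypothetical post-then-pre responses collected at the end of the program:
# "now" = Q1 (behavior after the program), "then" = Q2 (recalled prior behavior)
now  = np.array([4, 5, 4, 3, 5, 4, 4, 5, 3, 4])
then = np.array([2, 3, 2, 2, 3, 2, 3, 3, 2, 2])

t, p = stats.ttest_rel(now, then)          # paired test, same respondents
improved = np.mean(now > then)
print(f"Mean change: {np.mean(now - then):.1f} points; "
      f"{improved:.0%} report improvement (t = {t:.2f}, p = {p:.3f})")
```

Keep in mind that retrospective "then" ratings can be biased by recall, which is part of why this design suits settings where a true pretest would be uninterpretable.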

Post-then-pre U of NE handout data analysis problem 6 U of NE ETHT report

Success Stories Testimonials, qualitative information, audience testing.

Case Study The story of an individual; can be biased and cannot be generalized.

More Services Available Change in the environment

Evaluation Plans Read the Moving to the Future example; use PERT and Gantt charts to schedule the work.
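
A Gantt chart can be sketched in a few lines. This minimal example uses matplotlib with hypothetical tasks and weeks; the timeline is illustrative, not from the slides:

```python
import matplotlib.pyplot as plt

# Hypothetical evaluation timeline: (task, start week, duration in weeks)
tasks = [
    ("Write objectives",        0, 2),
    ("Design instruments",      2, 3),
    ("Collect baseline data",   5, 2),
    ("Deliver program",         7, 8),
    ("Collect follow-up data", 15, 2),
    ("Analyze & report",       17, 4),
]

fig, ax = plt.subplots(figsize=(8, 3))
for i, (name, start, length) in enumerate(tasks):
    ax.barh(i, length, left=start)         # one horizontal bar per task
ax.set_yticks(range(len(tasks)))
ax.set_yticklabels([t[0] for t in tasks])
ax.invert_yaxis()                          # first task on top, Gantt style
ax.set_xlabel("Week")
ax.set_title("Evaluation plan Gantt chart (hypothetical)")
plt.tight_layout()
plt.show()
```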

Evaluation Plans Must be in place: objectives; specifications of the intervention and program methods.

Evaluation Plans Must be in place: measurement and data collection procedures; description of methods.

Evaluation Plans For each measure, specify when, from whom, how, and by whom data will be collected.

Evaluation Plans Worksheets 1 & 2 (Worthen & Sanders, 1987); fill out the forms as a team.

END Evaluation Questions?