College of Public Health and Human Sciences
Public Health Policy Institute
Making a Policy Evaluation Plan
Presenter: Jangho Yoon, PhD
Date: August 23, 2013

Evaluation

Types of evaluation

Relevance
  Purpose: a priori assessment of the need for the program.
  Main question: Is the program appropriate for the defined problem?

Adequacy
  Purpose: assessment of the extent to which the program may address a problem.
  Main question: Is the program adequate in size and scope to make a difference?

Progress
  Purpose: assess the degree to which program implementation complies with the plan.
  Main question: Are appropriate personnel, equipment, and financial resources available in the right quantity, in the right place, and at the right time to meet program needs?

Efficiency
  Purpose: determine whether program results could be obtained less expensively.
  Main question: Are program benefits sufficient for the cost incurred?

Effectiveness
  Purpose: assess whether program results meet predetermined objectives.
  Main question: Did the program meet its stated objectives?

Impact
  Purpose: assess the long-term effects of the program.
  Main question: Did the program produce the observed effect?

Sustainability
  Purpose: assess whether the effects of the program are likely to continue.
  Main question: What is the likelihood that the program will be self-sustaining?

Why? Improved decision making. When? All phases of program development and implementation.

Evaluation tools by program time line

Policy gap analysis (need assessment)
  Evaluation types: Relevance, Adequacy
  Evaluation methods: surveys, expert opinions, available data

Implementation
  Evaluation type: Progress
  Evaluation method: monitoring

Assessing results
  Evaluation types: Efficiency, Effectiveness, Impact, Sustainability
  Evaluation methods: economic analysis, experiment, survey, expert opinion

Major realms and components

Measurement
Program effects must be measured accurately.

Numbers: the number of people served by the program, dollars budgeted and expended by the program, net value in cost analysis, etc.

Rates: a ratio of two measures. Standardization (e.g., clients served per dollar spent) is useful for making comparisons across a number of similar programs of substantially different size.

Prevalence and incidence: prevalence is the number of people at any time who actually show evidence of a disease; incidence is the number of people who succumb to the condition or disease within a given time range.
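A minimal sketch in Python, using hypothetical figures, of how standardizing to a common rate allows comparison across programs of very different size:

# Hypothetical programs of substantially different size (illustrative numbers only)
programs = {
    "Program A": {"clients_served": 500, "dollars_spent": 250_000},
    "Program B": {"clients_served": 4_200, "dollars_spent": 2_800_000},
}
for name, p in programs.items():
    # Rate: a ratio of two measures, here clients served per $1,000 spent
    rate = p["clients_served"] / (p["dollars_spent"] / 1_000)
    print(f"{name}: {rate:.2f} clients served per $1,000 spent")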

Measurement
Perception: a broad category of measures that are generally defined along ordinal or interval scales. A common measure of attitudes, beliefs, or perceptions uses a five-point scale: (1) strongly disagree; (2) disagree; (3) neutral; (4) agree; (5) strongly agree.

Particular measures may be more or less appropriate in certain settings: rates are more useful for discussing geographic areas or programs, while perception measures are often more useful when the individual is the unit of analysis.
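A minimal sketch of summarizing a five-point perception measure when the individual is the unit of analysis; the numeric coding and responses are assumptions for illustration, not from the slides:

# Assumed ordinal coding of the five-point response labels
scale = {"strongly disagree": 1, "disagree": 2, "neutral": 3, "agree": 4, "strongly agree": 5}
responses = ["agree", "neutral", "strongly agree", "agree"]  # hypothetical survey responses
codes = [scale[r] for r in responses]
print(f"Mean perception score: {sum(codes) / len(codes):.2f} on a 1-5 scale")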

Designing evaluation
Pretest-posttest, single-group design, and its limitations
Experimental design: the pinnacle of evaluation research efforts

Pretest-posttest, single-group design

           T1         T2
Group P    O_P1   X   O_P2

Effect of program: O_P2 - O_P1
(O_P1 and O_P2 are the program group's observations at times T1 and T2; X denotes the program.)
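A minimal sketch, using hypothetical observations, of the single-group effect estimate:

# Hypothetical observations for the program group, e.g. % regularly active
o_p1 = 42.0  # observation at T1, before the program (X)
o_p2 = 51.0  # observation at T2, after the program
effect = o_p2 - o_p1  # single-group estimate: O_P2 - O_P1
print(f"Estimated program effect: {effect:.1f} percentage points")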

Design problems
Suppose a health department implements a program to promote regular leisure-time physical activity.

Design problems
Additional events: something else caused the outcome.
Time trends (or maturation): the change reflects a general "upward" time trend in physical activity.
Drop-outs: participants dropped out of the program during the evaluation time period.
Regression to the mean: a statistical phenomenon that occurs whenever you have a nonrandom sample from a population and two measures that are imperfectly correlated (see the simulation sketch below).

Regression to the mean
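A minimal simulation sketch (assumed parameters) of regression to the mean: two imperfectly correlated measures of the same people, with a nonrandomly selected high-scoring subgroup whose average falls at follow-up even though no program operated:

import random

random.seed(0)
n = 10_000
true_scores = [random.gauss(50, 10) for _ in range(n)]
# Pre and post measures share the true score but carry independent noise,
# so they are imperfectly correlated
pre = [t + random.gauss(0, 10) for t in true_scores]
post = [t + random.gauss(0, 10) for t in true_scores]

# Nonrandom sample: enroll only people with high pretest scores
selected = [i for i in range(n) if pre[i] > 65]
mean_pre = sum(pre[i] for i in selected) / len(selected)
mean_post = sum(post[i] for i in selected) / len(selected)
print(f"Selected group: pretest mean {mean_pre:.1f}, posttest mean {mean_post:.1f}")
# The posttest mean drifts back toward 50 with no intervention at all.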

Experimental design
Pretest-posttest design with treatment and control groups
Effect of program: (O_P2 - O_P1) - (O_C2 - O_C1), where O_C1 and O_C2 are the control group's observations at T1 and T2.
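A minimal sketch, with hypothetical observations, of the treatment-control effect estimate (a simple difference-in-differences):

# Hypothetical observations, e.g. % regularly active in each group
o_p1, o_p2 = 42.0, 51.0  # program group at T1 and T2
o_c1, o_c2 = 43.0, 46.0  # control group at T1 and T2
# Effect of program: (O_P2 - O_P1) - (O_C2 - O_C1)
effect = (o_p2 - o_p1) - (o_c2 - o_c1)
print(f"Estimated program effect: {effect:.1f} percentage points")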

Experimental design
Multiple group pretest-posttest design