Needs Assessment and Program Evaluation

Needs Assessment is: A type of applied research; data are collected for a purpose. It can be either a descriptive or an exploratory study and can use quantitative methods, qualitative methods, or a combination of both; we have more confidence in our data when both methods produce similar findings. It most often involves collecting information from community participants or from existing data (indicators or case records). In some cases, subjects or participants can be involved in the research design, data collection, and analysis.

Data Collection for Needs Assessment

Qualitative                                        | Quantitative
Conversational Interview                           | Structured Surveys
Formal (structured) Interview                      | Program Monitoring (using case records)
Ethnography (study of a culture or specific group) | Social Indicator Analysis (using data collected by others)
Focus Groups                                       | Time Series Analysis
Nominal Group Technique                            | Mapping Techniques
Delphi Approach                                    | Network Analysis

Needs assessment is most commonly undertaken: to document the needs of community residents and plan interventions (community organization); to provide justification for new programs or services; as a component of a grant proposal; and to identify gaps in services.

Most commonly used techniques for empirical studies: formal interviews with clients, workers, community residents, or key informants; surveys (respondents are usually asked to rank a list of needs); focus groups; and social indicator or case record analysis.

Other characteristics of needs assessment: There are no interventions. The work is not easily tied to the theoretical literature. There are few standardized methods for instrument development or data collection; instruments and findings vary substantially by setting and population group. Tests for reliability/validity are limited to pre-testing the instrument. Some ideals of quantitative method are not necessary here; for example, sampling is often convenience or purposive rather than random.

Benefits/limitations of social indicators: They already exist, are easy to use, and can be used to make comparisons across different groups and locations. However, they may not actually measure the concept they are intended to measure, may not be available at regular time intervals, and agency data may be unreliable or missing.

Social Indicators Track Trends over Time

Economic Indicator                             | Time 1 | Time 2 | Time 3
Unemployment Rate                              | 10.0%  | 12.0%  | 6%
Number of Children under 18 in Poverty         |        |        |
Percentage of all Children under 18 in Poverty | 15.0%  | 20.0%  | 18%

Time series analysis requires that trends be displayed in a chart
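As a minimal illustration of inspecting such a trend before charting it (the indicator values below are hypothetical, not from the table above), the direction of change between observations can be tabulated:

```python
# Sketch: tabulate a social-indicator time series and its period-over-period
# change. All data points are hypothetical, for illustration only.

def trend_changes(series):
    """Return the change between each consecutive pair of observations."""
    return [b - a for a, b in zip(series, series[1:])]

# Hypothetical unemployment rates (%) over five annual observations.
unemployment = [10.0, 12.0, 6.0, 7.5, 7.0]

changes = trend_changes(unemployment)
for period, change in enumerate(changes, start=1):
    direction = "up" if change > 0 else "down" if change < 0 else "flat"
    print(f"Period {period}: {change:+.1f} points ({direction})")
```

The same series of values would then be plotted against time to produce the chart itself.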

Time series limitations: Time series analysis does not establish cause and effect and does not control for confounding variables; there may be unexplained effects. The data pertain to the general population rather than to subgroups; consequently, the sample cannot be divided into control and experimental groups.

Program evaluations measure: program effectiveness, efficiency, quality, and participant satisfaction.

Program evaluation can also measure: how or why a program is or is not effective.

Program evaluation looks at a program or a component of a program. It is often used to assess whether a program has actually been implemented in the intended manner.

The program’s goals and objectives serve as the starting place for program evaluations. Objectives must: be measurable and time-limited and contain an evaluation mechanism; be developed in relation to a specific program or intervention plan; specify the processes and tasks to be completed; and incorporate the program’s theory of action, describing how the program works and what it is expected to do (outcomes). To start an evaluation, the evaluator must find out what program participants identify as the goal (evaluability assessment).

Examples of goals and objectives. Goal: To end homelessness among women and children in Fresno. Objectives: (1) Provide at least 50 new shelter beds by April. Evaluation criteria: number of shelter beds created. (2) Increase referrals for emergency shelter by social service workers by 50% by June. Evaluation criteria: documentation of referral sources. (3) Provide training to social service professionals in order to increase referrals for care by May. Evaluation criteria: improvement in worker knowledge of homelessness as measured by a pre- and post-test.
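The training objective above is evaluated with a pre- and post-test of worker knowledge. A minimal sketch of that comparison, using hypothetical scores:

```python
# Sketch: compare mean pre-test and post-test knowledge scores for a
# training objective. All scores are hypothetical, for illustration only.

def mean(scores):
    """Arithmetic mean of a list of scores."""
    return sum(scores) / len(scores)

pre_scores  = [55, 60, 48, 70, 62]   # before training
post_scores = [72, 78, 65, 81, 75]   # after training, same workers

improvement = mean(post_scores) - mean(pre_scores)
print(f"Mean improvement: {improvement:.1f} points")
```

A real evaluation would also test whether the improvement is statistically significant (e.g., with a paired t-test), which this sketch omits.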

A theory of action for a hunger program might be: an advisory committee is formed to improve food bank services → service delivery improves → more food is provided → families miss fewer meals → there is less hunger.

Evaluations can measure process or outcomes. Qualitative methods are used to answer how and why questions (process). Quantitative methods are used to answer what questions: what outcomes were produced, and was the program successful, effective, or efficient?

Most common types: Outcome evaluation (quantitative; may or may not use control groups to measure effectiveness). Goal attainment (have the objectives been achieved?). Process evaluation (qualitative; looks at how or why a program works or doesn’t work). Implementation analysis (mixed methods; was the program implemented in the manner intended?). Program monitoring (mixed methods; is the program meeting its goals? conducted while the program is in progress).

Client satisfaction surveys are often used as one component of a program evaluation. They can provide valuable information about how clientele perceive the program and may suggest how the program can be changed to make it more effective or accessible. Client satisfaction surveys also have methodological limitations.

Limitations include: It is difficult to define and measure “satisfaction.” Few standardized satisfaction instruments that have been tested for validity and reliability exist. Most surveys find that 80-90% of participants are satisfied with the program; most researchers are skeptical that such levels of satisfaction exist, so most satisfaction surveys are believed to be unreliable. Since agencies want to believe their programs are good, question wording may be biased. Clients who are dependent on the program for services or who fear retaliation may not provide accurate responses.

Problems with client satisfaction surveys can be addressed by: pre-testing to ensure face validity and reliability; asking respondents to indicate their satisfaction level with various components of the program; and ensuring that administration of the survey is separated from service delivery and that the confidentiality of clients/consumers is protected.
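Asking about program components separately, as suggested above, yields more usable data than a single global rating. A small sketch of tabulating component-level satisfaction (the component names and responses are hypothetical; “satisfied” is defined here as a rating of 4 or 5 on a 5-point scale):

```python
# Sketch: percent satisfied (rating >= 4 on a 1-5 scale) for each program
# component. Component names and responses are hypothetical.

responses = {
    "intake process": [5, 4, 3, 4, 5, 2, 4],
    "staff courtesy": [5, 5, 4, 5, 4, 4, 5],
    "facility hours": [2, 3, 3, 4, 2, 3, 4],
}

def percent_satisfied(ratings, threshold=4):
    """Percentage of ratings at or above the satisfaction threshold."""
    satisfied = sum(1 for r in ratings if r >= threshold)
    return 100.0 * satisfied / len(ratings)

for component, ratings in responses.items():
    print(f"{component}: {percent_satisfied(ratings):.0f}% satisfied")
```

Breaking results out by component makes it easier to see where a program can be improved, even when the overall satisfaction rate is uniformly high.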

Process and most implementation evaluations: Assume that the program is a “black box” with input, throughput, and output. They use some mixture of interviews, document analysis, observations, or semi-structured surveys, and gather information from a variety of organizational participants: administrators, front-line staff, and clients. These evaluations also examine communication patterns, program policies, and the interaction among individuals, groups, programs, or organizations in the external environment.

Use the following criteria to determine the type of evaluation: the research question to be addressed; the amount of resources and time that can be allocated for the research; ethics (can you reasonably construct control groups or hold confounding variables constant?); whether the evaluation will be conducted by an internal or external evaluator; who the audience for the evaluation is; how the data will be used; and who will be involved in the evaluation.

Choice of sampling methods. Probability sampling: random, systematic, or cluster sampling. Nonprobability sampling: purposive, convenience, quota, or snowball sampling.
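A hedged sketch contrasting two of the probability methods above, drawing a simple random sample and a systematic sample from a hypothetical client roster:

```python
import random

# Sketch: simple random vs. systematic sampling from a hypothetical
# client roster of 100 people.

roster = [f"client_{i:03d}" for i in range(1, 101)]

rng = random.Random(42)  # fixed seed so the example is reproducible

# Probability sampling 1: simple random sample of 10 clients.
simple_random = rng.sample(roster, k=10)

# Probability sampling 2: systematic sample -- every k-th client after a
# random start, where k = population size / desired sample size.
k = len(roster) // 10
start = rng.randrange(k)
systematic = roster[start::k]

print(len(simple_random), len(systematic))  # both yield 10 clients
```

Both are probability samples because every client has a known, nonzero chance of selection; a convenience sample (e.g., whoever happens to visit the agency that week) offers no such guarantee.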

Types of research design: experimental design; quasi-experimental design (constructed control/experimental groups); one-group-only designs; and time series.

Common types of designs

Design                       | Random Sample | Pre-Test | Post-Test | Control Group | Exp Group
Experimental                 | Yes           | Yes      | Yes       | Yes           | Yes
One-Shot Case Study          | No            | No       | Yes       | No            | Yes
One-Group Pre-test/Post-test | No            | Yes      | Yes       | No            | Yes
Two-Group Post-test          | Yes           | No       | Yes       | Yes           | Yes
Time Series                  | No            | Yes      | Yes       | No            | Yes
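As an illustration of the two-group post-test design above (the group assignments and scores are hypothetical), the analysis reduces to comparing post-test outcomes across the randomly assigned groups:

```python
# Sketch: two-group post-test design -- compare mean post-test outcomes
# for randomly assigned experimental and control groups.
# All scores are hypothetical, for illustration only.

experimental = [14, 17, 15, 16, 18]  # received the intervention
control      = [11, 12, 10, 13, 12]  # did not

def mean(xs):
    """Arithmetic mean of a list of scores."""
    return sum(xs) / len(xs)

effect = mean(experimental) - mean(control)
print(f"Estimated effect: {effect:.1f} points")
```

Because assignment is random, the control group stands in for what the experimental group would have looked like without the intervention, so no pre-test is required.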

Problems with designs. Internal validity: can plausible alternative explanations be found for the cause-and-effect relationship between the independent and dependent variables? Are there confounding variables that influence the results? External validity: can the findings be generalized to other settings or people?

Common threats to internal & external validity: selection bias; history; maturation; testing; instrumentation; mortality; statistical regression; interaction among any of these factors; the Hawthorne effect; interaction of selection and intervention; and multiple-treatment interaction.