Chambliss/Schutt, Making Sense of the Social World, 4th Edition
Chapter 11: Evaluation Research
© 2012 SAGE Publications
Evaluation Research
Evaluation research is conducted for a distinctive purpose: to investigate social programs.
Evaluation Terms
Inputs: resources, raw materials, clients, and staff that go into a program
Program process: the complete treatment or service delivered by the program
Outputs: the services delivered or new products produced by the program process
Outcomes: the impact of the program process on the cases processed
Feedback: information about service delivery system outputs, outcomes, or operations that is available to any program input
Stakeholders: individuals and groups who have some basis of concern with the program; they often set the research agenda and control research findings
History of Evaluation Research
Began after the expansion of the federal government during the Great Depression and WWII
Became more important with the Great Society programs of the 1960s, because program evaluation became a requirement
The New Jersey Income Maintenance Experiment was the first large-scale experiment to test social policy in action
Evaluation research firms declined in the early 1980s as Great Society programs also declined
The Government Performance and Results Act of 1993 required some type of evaluation for all government programs
Design Alternatives
Black box or program theory: Is it important how the program gets its results?
  Black box: if program results are of primary importance, how the program works may be of secondary importance
  Program theory: a descriptive or prescriptive model of how a program operates and produces effects
    Descriptive program theory specifies the impacts that are generated and how this occurs (suggesting a causal mechanism, intervening factors, and context); it is generally empirically based
    Prescriptive program theory specifies what the program ought to do but has not yet been empirically tested or observed
Design Alternatives, cont’d.
Researcher or stakeholder orientation: Is the primary audience for the evaluation a set of social scientific peers or a funding agency?
  Stakeholder approaches encourage researchers to be responsive to stakeholders (also called responsive evaluation)
  Social science approaches emphasize the importance of researcher expertise and the maintenance of some autonomy in order to develop the most trustworthy, unbiased program evaluation
  Integrative approaches attempt to cover issues of concern to both stakeholders (including participants) and evaluators, balancing stakeholder concerns with scientific credibility
Design Alternatives, cont’d.
Quantitative or qualitative methods
  Evaluation research that attempts to identify the effects of a social program typically uses quantitative methods
  Qualitative methods are useful for investigating program process, learning how individuals react to treatment, understanding the actual operation of programs, and understanding more complex social programs
Simple or complex outcomes
  Most evaluation research attempts to measure multiple outcomes
  Unanticipated outcomes can be missed
  Single outcomes may miss the process of how a program works
Foci of Evaluation Research
1. Needs assessment: an attempt to determine whether a new program is needed or an old one is still required
2. Evaluability assessment: a determination of whether a program can be evaluated within the available time and resources
3. Process evaluation (implementation assessment): evaluation research that investigates the process of service delivery
Foci of Evaluation Research, cont’d.
4. Impact analysis: evaluation research that compares what happened after a program was implemented with what would have happened had there been no program at all
5. Efficiency analysis: cost-benefit and cost-effectiveness evaluation
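These two foci reduce to simple comparisons. The sketch below is a minimal, hypothetical illustration, not an analysis from the text: all numbers, the employment outcome, and the dollar value per outcome unit are assumptions. It estimates impact as the difference in mean outcomes between a program group and a comparison group, then summarizes efficiency as cost per unit of outcome gained (cost-effectiveness) and as a benefit-cost ratio (cost-benefit).

```python
# Minimal sketch of impact and efficiency analysis (all values hypothetical).
from statistics import mean

# Outcome scores (e.g., months employed in the follow-up year) for a program
# group and a comparison group that did not receive the program -- made-up data.
program_group = [9, 7, 8, 10, 6, 9]
comparison_group = [5, 6, 4, 7, 5, 6]

# Impact analysis: the estimated effect is the difference between what happened
# with the program and what would have happened without it, here approximated
# by the comparison group's mean outcome.
impact = mean(program_group) - mean(comparison_group)
print(f"Estimated impact: {impact:.2f} additional months employed per participant")

# Efficiency analysis: cost-effectiveness relates cost to units of outcome;
# cost-benefit compares monetized benefits with costs.
program_cost = 60_000                 # total program cost (assumed)
participants = len(program_group)
benefit_per_month = 2_500             # assumed dollar value of one outcome unit

cost_per_unit_gained = program_cost / (impact * participants)
benefit_cost_ratio = (impact * participants * benefit_per_month) / program_cost

print(f"Cost-effectiveness: ${cost_per_unit_gained:,.0f} per additional month employed")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
```

In a real impact analysis, the comparison group would come from an experimental or quasi-experimental design so that it credibly represents what would have happened without the program.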
Ethical Issues
The direct impact of social programs on participants and their families heightens attention to human subjects concerns
Needs assessments, evaluability assessments, process analyses, and cost-benefit analyses raise few special ethical considerations
When program impact is the focus, human subjects problems multiply
Federally mandated IRBs must assess all research for adherence to ethical practice guidelines
Solving Ethical Issues
To lessen any detrimental program impact:
  Minimize the number of participants in the control group
  Use the minimum sample size needed
  Test only new parts of the program, not the entire program
  Compare treatments that vary in intensity rather than in presence and absence
  Vary treatments between settings rather than among individuals in a single setting
Obstacles to Evaluation Research
Evaluation research can miss important outcomes or aspects of the program process
Researchers can be subjected to cross-pressures by stakeholders
Answering to stakeholders can compromise scientific design standards
Researchers may be pressured to avoid null findings, or may find their research findings ignored
Evaluation reports may need to be oversimplified for a lay audience and are thus subject to some distortion