EVAL 6000: Foundations of Evaluation
Dr. Chris L. S. Coryn
Kristin A. Hobson
Fall 2011
Agenda
Stage Three theories – Peter Rossi
Use-oriented theories and theorists
– Utilization-focused evaluation – Michael Patton
– Participatory evaluation – Brad Cousins
Questions and discussion
Encyclopedia of Evaluation entries
“Evaluation research is more than the application of methods…it is also a political and managerial activity, an input into…policy decisions and allocations” — Peter H. Rossi
Biographical Sketch
Born in 1921 in New York City
Ph.D. in Sociology, Columbia University
B.S. in Sociology, City College
Professor Emeritus of Sociology at the University of Massachusetts; held positions at Harvard, the University of Chicago, and Johns Hopkins University
Published numerous books, research monographs, and articles
Led many high-stakes national-level evaluations
Rossi’s View of Evaluation
Influenced by Campbell, Cronbach, and Scriven
Major function of social research in public policy formulation and change is to evaluate the effectiveness of public programs
Emphasis on empirically testing social theories as part of program evaluation
Rossi’s Influence
Extensive and diverse
Sociological (e.g., books on life histories of American families)
Methodological (e.g., survey research)
Primarily evaluation theory and methodology
Rossi’s Major Contributions
Tailored evaluation
Comprehensive evaluation
Theory-driven evaluation
Demystification
The “good enough” rule
The metallic and plastic laws of evaluation
Rossi’s Theory of Social Programming
Social interventions are conservative and incremental
Central task is to design programs that serve the disadvantaged well
Recognizes the political and economic constraints placed on social programs
Rossi’s Theory of Knowledge Construction
Both realist and empiricist in orientation
Simultaneously emphasizes fallibilism and multiplism
Questions the philosophical warrants for a singular epistemology, and questions the legitimacy and value of epistemology more generally
Rossi’s Theory of Valuing
Similar to Scriven in many respects
Social need is a crucial criterion for value claims
Integrates both prescriptive and descriptive theories (though never clearly explicates how to integrate them)
Rossi’s Theory of Knowledge Use
Distinguishes between instrumental, conceptual, and persuasive uses
Not clear about contingencies to guide choices to facilitate types of use
Demystification (e.g., the nature of social problems and their amelioration) has been criticized for being too “scientistic”
Rossi’s Theory of Evaluation Practice
Clearly describes trade-offs and priorities depending on various circumstances (e.g., innovations, modifications, established programs)
Recognizes constraints associated with trade-offs and priorities (e.g., comprehensive versus tailored evaluations)
See Table 9.1, p. 383
Evaluation Theory Tree
Use-Oriented Theorists
Fetterman
King
Preskill
“This class of theories [use] is concerned with designing evaluations that are intended to inform decision making…to ensure that evaluation results have a direct impact on decision making and organizational change” — Marvin C. Alkin
Use-Oriented Theories
Originated from decision-oriented theories
Decision-oriented theorists emphasize evaluation as assisting key decision makers in making informed decisions
Evaluations should be designed to ensure direct impact on decision making and organizational change
“Evaluations should be judged by their utility and actual use…[and]… evaluators should facilitate the evaluation process and design any evaluation with careful consideration of how everything that is done, from beginning to end, will affect use” — Michael Q. Patton
Utilization-Focused Evaluation
Explicitly geared to ensure that evaluations make an impact and are used
Evaluation is guided in collaboration with a targeted group of priority users
All aspects of the evaluation are chosen and applied to help targeted users obtain and apply findings to their intended use, and to maximize the likelihood that they will
In the interest of getting findings used, draws on any legitimate evaluation approach
Situational Analysis
What decisions, if any, are the evaluation findings expected to influence?
When will decisions be made? By whom?
When, then, must the evaluation findings be presented to be timely and influential?
What is at stake in the decisions? For whom?
What controversies or issues surround the decision?
What is the history and context of the decision-making process?
What other factors (values, politics, personalities, promises already made) will affect the decision making?
Situational Analysis
How much influence do you expect the evaluation to have, realistically?
To what extent has the outcome of the decision already been determined?
What data and findings are needed to support decision making?
What needs to be done to achieve that level of influence?
How will we know afterward if the evaluation was used as intended?
“[Practical participatory evaluation]…seeks to understand programs with the expressed intention of informing and improving their implementation” — J. Bradley Cousins
Participatory Evaluation
Evaluator works collaboratively in partnership with a select group of intended users
The evaluator’s role is to provide technical support and training, and to assure and maintain quality control
Involves a broad group of stakeholder participants
Participatory Evaluation
Modified from more limited stakeholder-based approaches
Stakeholders are engaged in the entire evaluation process (e.g., design, data collection, analysis, reporting, application of findings)
Assumes that involvement will increase buy-in, credibility, and use
“[The CIPP model encourages evaluators to engage a]…representative stakeholder review panel to help define the evaluation questions, shape evaluation plans, review draft reports and disseminate findings” — Daniel L. Stufflebeam
Improvement- and Accountability-Oriented Approaches
Expansive and seek comprehensiveness in considering the full range of questions and criteria needed to assess a program
Often employ the assessed needs of a program’s stakeholders as the foundational criteria for assessing a program
Improvement- and Accountability-Oriented Approaches
Usually reference all pertinent technical and economic criteria for judging the merit or quality of programs
Examine all relevant outcomes, not just those keyed to program objectives
Use multiple qualitative and quantitative assessment methods to provide cross-checks on findings
Decision- and Accountability-Oriented Studies
Emphasizes that program evaluation should be used proactively to help improve a program as well as retrospectively to judge its value
Philosophical underpinnings include an objectivist orientation to finding best answers to context-limited questions and subscription to the principles of a well-functioning democratic society, especially human rights, an enlightened citizenry, equity, excellence, conservation, probity, and accountability
Decision- and Accountability-Oriented Studies
Serves stakeholders by engaging them in focusing an evaluation and assessing draft evaluation reports; addressing their most important questions plus those required to assess the program’s value; providing timely, relevant information to assist decision making; producing an accountability record; and issuing needed summative evaluation reports
This approach is best represented by Stufflebeam’s context, input, process, and product (CIPP) model for evaluation
CIPP Model
Evaluation Roles: Context, Input, Process, Product

Formative evaluation: Prospective application of CIPP information to assist decision making and quality assurance.
– Context: Guidance for determining areas for improvement and for choosing and ranking goals (based on assessing needs, problems, assets, and opportunities, plus contextual dynamics).
– Input: Guidance for choosing a program strategy (based on identifying and assessing alternative strategies and resource allocation plans); examination of the work plan.
– Process: Guidance for implementing the operational plan (based on monitoring and judging activities and delivering periodic evaluative feedback).
– Product: Guidance for continuing, modifying, adopting, or terminating the effort (based on assessing outcomes and side effects).

Summative evaluation: Retrospective use of CIPP information to sum up the effort’s merit, worth, probity, equity, feasibility, efficiency, safety, cost, and significance.
– Context: Comparison of goals and priorities to assessed needs, problems, assets, opportunities, and relevant contextual dynamics.
– Input: Comparison of the program’s strategy, design, and budget to those of critical competitors and to the goals and targeted needs of beneficiaries.
– Process: Full description of the actual process and record of costs; comparison of the designed and actual processes and costs.
– Product: Comparison of outcomes and side effects to goals and targeted needs and, as feasible, to results of competitive programs; interpretation of results against the effort’s assessed context, inputs, and processes.
Encyclopedia Entries
CIPP Model (Context, Input, Process, Product)
Cost-Benefit Analysis
Cost-Effectiveness
Goal
Indicators
Meta-Analysis
Monitoring
Needs Assessment
Objectives
Objectives-Based Evaluation
Outcomes
Outputs
Success Case Method
Tyler, Ralph W.