Thinking About Program Evaluation
HUS 3720
Instructor: Terry Wimberley, Ph.D.
Evaluation
Evaluation is the systematic assessment of the worth or merit of some object.
Evaluation
Evaluation is the systematic acquisition and assessment of information to provide useful feedback about some object.
Goals of Evaluation
The goal of evaluation is to provide useful feedback to a variety of stakeholders, including sponsors, donors, client groups, administrators, staff, and other relevant constituencies.
Goals of Evaluation
The major goal of evaluation should be to influence decision-making or policy formulation through the provision of empirically driven feedback.
Evaluation Strategies
Evaluation strategies are broad, overarching perspectives on evaluation, encompassing the most general groups or “camps” of evaluators (although the best evaluation work borrows from a variety of these camps).
Evaluation Strategies
Scientific-Experimental Models: prioritize impartiality, accuracy, objectivity, and the validity of the information generated.
Evaluation Strategies
Scientific-Experimental Models include:
– experimental and quasi-experimental designs
– objectives-based research (education)
– econometric models: cost-effectiveness and cost/benefit analysis
– theory-driven evaluation
Evaluation Strategies
Management-Oriented Models: emphasize comprehensiveness in evaluation, placing evaluation within the context of organizational activities. Examples include:
– Program Evaluation and Review Technique (PERT)
– Critical Path Method (CPM)
– Units, Treatments, Observing Operations, and Settings (UTOS)
– Context, Input, Process, Product (CIPP)
Evaluation Strategies
Qualitative Models: emphasize the importance of observation, the need to attend to the evaluation context, and the value of human interpretation in the evaluation process.
Evaluation Strategies
Qualitative Models include:
– naturalistic evaluation
– critical theory and art criticism
– “grounded theory”
Evaluation Strategies
Participant-Oriented Approaches: emphasize the importance of participation in the evaluation by program stakeholders. Examples include:
– Total Quality Management (TQM)
– The Learning Organization
– Covey approaches
Types of Evaluation
Formative evaluations seek to strengthen or improve the object being evaluated. They typically focus on program delivery, quality, and organizational context (personnel, procedures, etc.).
Types of Evaluation
Summative evaluations examine the effects or outcomes of some object: describing what happens subsequent to delivery of the program or technology; assessing whether the object can be said to have caused the outcome; determining the overall impact of the causal factor beyond the immediate target outcomes; and estimating the relative costs associated with the object.
Formative Evaluation Types
Evaluability assessment determines whether an evaluation is feasible and how stakeholders can help shape its usefulness.
Formative Evaluation Types
Needs assessment determines who needs the program, how great the need is, and what might work to meet the need.
Formative Evaluation Types
Structured conceptualization helps stakeholders define the program or technology, the target population, and the possible outcomes.
Formative Evaluation Types
Implementation evaluation monitors the fidelity of the program or technology delivery.
Formative Evaluation Types
Process evaluation investigates the process of delivering the program or technology, including alternative delivery procedures.
Summative Evaluation
Outcome evaluations investigate whether the program or technology caused demonstrable effects on specifically defined target outcomes.
Summative Evaluation
Impact evaluation is broader and assesses the overall or net effects (intended or unintended) of the program or technology as a whole.
Summative Evaluation
Cost-effectiveness and cost-benefit analysis address questions of efficiency by standardizing outcomes in terms of their dollar costs and values.
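To make the arithmetic concrete, here is a minimal sketch in Python. Every figure (program names, costs, placement counts, dollar value per placement) is hypothetical, invented purely for illustration:

```python
# Hypothetical figures for two job-training programs.
programs = {
    "Program A": {"cost": 250_000, "placements": 125},
    "Program B": {"cost": 180_000, "placements": 75},
}
BENEFIT_PER_PLACEMENT = 4_000  # assumed dollar value of one job placement

for name, p in programs.items():
    # Cost-effectiveness: dollars spent per unit of outcome; the outcome
    # itself stays in its natural units (placements).
    cost_per_placement = p["cost"] / p["placements"]
    # Cost-benefit: outcomes are also monetized, which makes a ratio possible.
    bc_ratio = (p["placements"] * BENEFIT_PER_PLACEMENT) / p["cost"]
    print(f"{name}: ${cost_per_placement:,.0f} per placement, "
          f"benefit-cost ratio {bc_ratio:.2f}")
```

The difference between the two analyses shows up in the two computed quantities: cost-effectiveness leaves the outcome in natural units, while cost-benefit converts it to dollars.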
Summative Evaluation
Secondary analysis reexamines existing data to address new questions or to use methods not previously employed.
Summative Evaluation
Meta-analysis integrates the outcome estimates from multiple studies to arrive at an overall or summary judgment on an evaluation question.
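One standard way this integration is done is inverse-variance weighting. The sketch below implements the fixed-effect version in Python; the effect sizes and standard errors are made up for the example:

```python
import math

# Hypothetical (effect size, standard error) pairs from five studies.
studies = [(0.30, 0.12), (0.45, 0.20), (0.15, 0.10), (0.52, 0.25), (0.28, 0.15)]

# Fixed-effect model: each study is weighted by the inverse of its variance,
# so more precise studies count for more in the summary.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * effect for (effect, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```

A random-effects model would add a between-study variance term, but the weighting idea is the same.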
Formative Evaluation Questions
What is the definition and scope of the problem or issue, or what's the question? Formulating and conceptualizing methods might be used here, including brainstorming, focus groups, nominal group technique, Delphi methods, brainwriting, stakeholder analysis, synectics, lateral thinking, input-output analysis, and concept mapping.
Where is the problem and how big or serious is it? The most common method used here is needs assessment, which can include analysis of existing data sources, sample surveys, interviews of constituent populations, qualitative research, expert testimony, and focus groups.
How should the program or technology be delivered to address the problem? Some of the methods already listed apply here, as do detailing methodologies like simulation techniques; multivariate methods like multi-attribute utility theory or exploratory causal modeling; decision-making methods; and project planning and implementation methods like flow charting, PERT/CPM, and project scheduling.
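Since PERT/CPM appears both here and under the management-oriented models, a minimal critical-path computation may help. The task network below is entirely hypothetical:

```python
# Each task: (duration in days, prerequisite tasks).
tasks = {
    "design":  (5,  []),
    "build":   (10, ["design"]),
    "test":    (4,  ["build"]),
    "train":   (3,  ["design"]),
    "rollout": (2,  ["test", "train"]),
}

earliest_finish = {}

def finish(task):
    # Earliest finish = own duration + latest earliest-finish among prerequisites.
    if task not in earliest_finish:
        duration, preds = tasks[task]
        earliest_finish[task] = duration + max((finish(p) for p in preds), default=0)
    return earliest_finish[task]

# The project cannot finish before its longest (critical) path does.
print(f"Critical path length: {max(finish(t) for t in tasks)} days")  # 21 days
```

Tasks whose delay would push that total out form the critical path; here it is design → build → test → rollout.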
How well is the program or technology delivered? Qualitative and quantitative monitoring techniques, the use of management information systems, and implementation assessment would be appropriate methodologies here.
Summative Evaluation Questions
What type of evaluation is feasible? Evaluability assessment can be used here, as well as standard approaches for selecting an appropriate evaluation design.
What was the effectiveness of the program or technology? One would choose from observational and correlational methods for demonstrating whether desired effects occurred, and from quasi-experimental and experimental designs for determining whether observed effects can reasonably be attributed to the intervention and not to other sources.
What is the net impact of the program? Econometric methods for assessing cost-effectiveness and cost/benefit would apply here, along with qualitative methods that enable us to summarize the full range of intended and unintended impacts.
Key Concepts in Evaluation Research
Policy Space: the issues and forces that define the range of pertinent dialogue possible on any particular “problem” recognized by a significant number of stakeholders.
Key Concepts
Stakeholders: persons, groups, agencies, or interest groups that have a vested interest in the resolution of a social problem.
Program Effectiveness: Three Meanings
Marginal Effectiveness: program performance compared to some benchmark.
Relative Effectiveness: effectiveness of the program compared to no intervention or change.
Cost Effectiveness: comparisons made in terms of unit outcome per dollar spent.
Program Credibility: Validity
Validity: the extent to which a variable measures what it is supposed to measure.
Internal Validity
Internal validity: the approximate truth of inferences regarding cause-effect or causal relationships.
External Validity
External validity involves generalizing from your study context to other people, places, or times, whereas construct validity involves generalizing from your program or measures to the concept of your program or measures.
Construct Validity
Construct validity refers to the degree to which inferences can legitimately be made from the operations in your study to the theoretical constructs on which those operations were conceptually based. Like external validity, construct validity is related to generalizing.
Chance & Statistical Conclusion Validity
Statistical Conclusion Validity: asks whether the statistical inference in a study has been done properly.
Reliability
Reliability: achieving the same outcome when the intervention is performed repeatedly. Predictability!
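One common way to put a number on this for a measure is test-retest reliability: administer the same instrument twice and correlate the scores. A minimal sketch with invented scores (statistics.correlation requires Python 3.10+):

```python
from statistics import correlation  # Python 3.10+

# Hypothetical scores for the same eight clients, measured on two occasions.
time1 = [12, 18, 15, 22, 9, 30, 25, 14]
time2 = [13, 17, 16, 21, 11, 28, 26, 15]

# A Pearson correlation near 1.0 indicates the measure behaves
# predictably across repeated administrations.
print(f"Test-retest reliability: {correlation(time1, time2):.2f}")
```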
Measurement
DEFINITION: Measurement consists of rules for assigning numbers to attributes of objects.
In mathematical terms, measurement is a functional mapping from the set of objects to the set of real numbers.
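Written out (a standard formalization, not on the original slide): with $O$ the set of objects, a measurement is a function $m : O \to \mathbb{R}$ that assigns to each object $o \in O$ a real number $m(o)$.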
Properties of Measurement
Magnitude: the property of magnitude exists when an object that has more of the attribute than another object is given a bigger number by the rule system. This relationship must hold for all objects in the “real world.”
Properties of Measurement
Intervals: the property of intervals concerns the relationship of differences between objects. If a measurement system possesses the property of intervals, the unit of measurement means the same thing throughout the scale of numbers.
Properties of Measurement
Rational Zero: a measurement system possesses a rational zero if an object with none of the attribute is assigned the number zero by the system of rules. The object need not actually exist in the “real world”; it is, after all, somewhat difficult to visualize a “man with no height.”
Property of Rational Zero
The property of rational zero is necessary for ratios between numbers to be meaningful. Only in a measurement system with a rational zero would it make sense to argue that a person with a score of 30 has twice as much of the attribute as a person with a score of 15. In many applications of statistics this property is not necessary to make meaningful inferences.
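The three properties can be stated compactly in the same notation (standard formulations, with $a \succ b$ meaning "$a$ has more of the attribute than $b$"):

Magnitude: $a \succ b \iff m(a) > m(b)$.
Intervals: equal attribute differences yield equal numeric differences, so $m(a) - m(b)$ means the same thing anywhere on the scale.
Rational zero: an object with none of the attribute satisfies $m(o) = 0$, which is what makes a ratio like $m(a)/m(b) = 2$ interpretable as "twice as much."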
Data Types
Nominal Scales: nominal scales are measurement systems that possess none of the three properties discussed earlier. Nominal scales are subdivided into two groups: renaming and categorical.
Data Types
Nominal-renaming occurs when each object in the set is assigned a different number, that is, renamed with a number. Examples of nominal-renaming are Social Security numbers or the numbers on the backs of baseball players.
Data Types
Nominal-categorical occurs when objects are grouped into subgroups and each object within a subgroup is given the same number. The subgroups must be mutually exclusive, that is, an object may not belong to more than one category or subgroup. An example of nominal-categorical measurement is grouping people into categories based upon stated political party preference (Republican, Democrat, or Other) or upon sex (Male or Female).
Data Types
Ordinal Scales: ordinal scales are measurement systems that possess the property of magnitude but not the property of intervals. The property of rational zero is not important if the property of intervals is not satisfied. Any time ordering, ranking, or rank ordering is involved, the possibility of an ordinal scale should be examined. As with a nominal scale, computation of most statistics is not appropriate when the scale type is ordinal.
Data Types
Interval Scales: interval scales are measurement systems that possess the properties of magnitude and intervals, but not the property of rational zero. Most standard statistical procedures are appropriate when the scale type is interval.
Data Types
Ratio Scales: ratio scales are measurement systems that possess all three properties: magnitude, intervals, and rational zero. The added power of a rational zero allows ratios of numbers to be meaningfully interpreted; e.g., the ratio of John's height to Mary's height is 1.32, whereas such a statement is not possible with interval scales.
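A quick numeric illustration of why the distinction matters; Celsius temperature is the classic interval-scale example, and height in centimeters is a ratio scale (the specific numbers are invented):

```python
# Interval scale: Celsius has an arbitrary zero, so ratios are not meaningful.
c_morning, c_noon = 10.0, 20.0
print(c_noon / c_morning)        # 2.0, but noon is NOT "twice as hot"
print(c_noon - c_morning)        # 10.0, the difference IS meaningful

# Ratio scale: height has a rational zero, so the ratio is meaningful.
john_cm, mary_cm = 185.0, 140.0
print(round(john_cm / mary_cm, 2))  # 1.32, John is 1.32 times Mary's height
```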