
1 Special Topics in Single-Case Design Intervention Research Tom Kratochwill

2 Goals of the Presentation
Review general assessment issues in single-case design research; Feature the importance of intervention integrity and intensity monitoring and promotion; Review some cost-analysis issues in single-case design research; Discuss features and challenges of social validity assessment in intervention research.

3 General Considerations in Assessment/Measurement

4 Choice of the Dependent Variable
A major consideration in single-case design intervention research is that dependent variable assessment must be repeated across time.
Traditional norm-referenced measurement does not easily lend itself to repeated assessment (sometimes called indirect assessment; e.g., self-report checklists and rating scales, informant-report checklists and rating scales).
Standardized instruments used for normative comparisons may not provide accurate data (e.g., checklists and rating scales; Reid & Maag, 1994).
Specification of the conditions of measurement should be noted (e.g., natural versus analogue, human observers versus automated recording, etc.).
Reid, R., & Maag, J. W. (1994). How many fidgets in a pretty much: A critique of behavior rating scales for identifying students with ADHD. Journal of School Psychology, 32.
Shapiro, E. S., & Kratochwill, T. R. (Eds.) (2000). Conducting school-based assessments of child and adolescent behavior. New York: Guilford.

5 Measurement Systems Commonly Used in Single-Case Design Research
Automated Recording (e.g., galvanic skin response, heart rate, blood pressure); Permanent Products (e.g., academic responses in math, reading, spelling, number of clothes items on the floor); Direct Observation (e.g., assessment of behavior as it occurs as noted by human observers, video recording, computer-based/phone-based apps or systems).

6 Quality of Assessment
Quality of assessment is typically determined by the reliability and validity of measurement (now typically referred to as "evidence-based assessment"). Some clarification of terms for reliability:
Reliability of effect (usually established through replication of the intervention in the same or a repeated investigation);
Reliability of the procedures (sometimes referred to as intervention fidelity or integrity);
Reliability of measurement (usually determined by assessment agreement measures and required in the WWC Pilot Standards and other guidelines*).

7 Remember WWC Standard 2: Inter-Assessor Agreement
Each Outcome Variable for Each Case Must be Measured Systematically by More than One Assessor.
Researcher Needs to Collect Inter-Assessor Agreement:
In each phase;
On at least 20% of the data points in each condition (i.e., baseline, intervention).
Rate of Agreement Must Meet Minimum Thresholds (e.g., 80% agreement or Cohen's kappa of 0.60).
If No Outcomes Meet These Criteria, the Study Does Not Meet Design Standards.
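To make the thresholds concrete, here is a minimal Python sketch (not part of the WWC standards themselves) of how point-by-point percent agreement and Cohen's kappa might be computed for two observers who code each interval as 1 (occurrence) or 0 (non-occurrence); the observer data are hypothetical.

    # Minimal sketch: percent agreement and Cohen's kappa for two observers.
    # Assumes interval recording coded 1 = occurrence, 0 = non-occurrence (illustrative only).

    def percent_agreement(obs_a, obs_b):
        agreements = sum(a == b for a, b in zip(obs_a, obs_b))
        return 100.0 * agreements / len(obs_a)

    def cohens_kappa(obs_a, obs_b):
        n = len(obs_a)
        p_observed = sum(a == b for a, b in zip(obs_a, obs_b)) / n
        # Chance agreement from each observer's marginal rate of scoring "occurrence"
        p_a, p_b = sum(obs_a) / n, sum(obs_b) / n
        p_chance = p_a * p_b + (1 - p_a) * (1 - p_b)
        return (p_observed - p_chance) / (1 - p_chance)

    observer_1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
    observer_2 = [1, 1, 0, 0, 0, 0, 1, 1, 0, 1]
    print(percent_agreement(observer_1, observer_2))       # 90.0, meets the 80% threshold
    print(round(cohens_kappa(observer_1, observer_2), 2))  # 0.80, above the 0.60 threshold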

8 Observer Agreement in the WWC Pilot Standards
The WWC Panel did not specify the method of observer agreement (e.g., using only measures that control for chance agreement).
Hartmann, Barrios, and Wood (2004) reported over 20 measures to assess interobserver agreement.
A good practice is to report measures that control for chance agreement and to report agreement for occurrences and non-occurrences of the dependent variable.
It is also good practice to assess accuracy: the degree to which observer data reflect actual performance or some true measure of behavior (e.g., video performance of the behavior).
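A hedged sketch of occurrence and non-occurrence agreement, the reporting practice recommended above; restricting each index to the "relevant" intervals, as shown here, is one common way these indices are computed, and the 1/0 interval-coding format is an assumption.

    # Illustrative occurrence and non-occurrence agreement for interval data
    # (1 = behavior scored in the interval, 0 = not scored); hypothetical data format.

    def occurrence_agreement(obs_a, obs_b):
        # Agreement over intervals in which at least one observer scored an occurrence
        relevant = [(a, b) for a, b in zip(obs_a, obs_b) if a == 1 or b == 1]
        return 100.0 * sum(a == b for a, b in relevant) / len(relevant)

    def nonoccurrence_agreement(obs_a, obs_b):
        # Agreement over intervals in which at least one observer scored a non-occurrence
        relevant = [(a, b) for a, b in zip(obs_a, obs_b) if a == 0 or b == 0]
        return 100.0 * sum(a == b for a, b in relevant) / len(relevant)

Reporting both indices guards against inflated agreement when the behavior occurs at very high or very low rates.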

9 Artifact, Bias, and Complexity of Assessment
Researcher needs to consider factors that can obscure observer agreement: Direct assessment by observers can be reactive; Observers can drift in their assessment of the dependent variable; Observers can have expectancies for improvement that are conveyed by the researcher; Coding systems can be too complex and lead to error.

10 Clinical Trials and Tribulations: Measuring Outcomes in Intervention Research on Selective Mutism
Child outcomes in anxiety disorder research:
The Three Response Systems (theory-based): Motor, Cognitive, Psychophysiological. These constitute the gold standard in treatment research on children's fears and phobias (Morris & Kratochwill, 1985).
The standard in treatment of selective mutism was child speech (Kratochwill, 1981).
Morris, R. J., & Kratochwill, T. R. (1985). Treating children's fears and phobias: A behavioral approach. New York: Pergamon Press.
Kratochwill, T. R. (1981). Selective mutism: Implications for research and treatment. Hillsdale, NJ: Erlbaum.

11 Selective Mutism: Motor
Speech (with select persons)
Speech prompted-analogue
Speech natural-initiated
Social Engagement
Parent Outcomes: Change in Parent Response to the Child
Teacher Outcomes: Change in Teacher Response to the Child

12 Selective Mutism: Cognitive
Thoughts of avoidance
Thinking something bad is going to happen
Negative self-talk about school, home, friends, etc.

13 Selective Mutism: Psychophysiological
Arousal as reflected in GSR
Heart Rate
Blood Pressure

14 Journal Article Reporting Standards for Quantitative Research in Psychology: The APA Publications and Communications Board Task Force Report
Mark Appelbaum (University of California, San Diego), Harris Cooper (Duke University), Rex B. Kline (Concordia University, Montréal), Evan Mayo-Wilson (Johns Hopkins University), Arthur M. Nezu (Drexel University), Stephen M. Rao (Cleveland Clinic, Cleveland, Ohio)

15 Measures and covariates
• Define all primary and secondary measures and covariates, including measures collected but not included in this report.
Data collection
• Describe methods used to collect data.
Quality of measurements
• Describe methods used to enhance the quality of measurements, including
• Training and reliability of data collectors
• Use of multiple observations

16 Instrumentation
• Provide information on validated or ad hoc instruments created for individual studies, for example, psychometric and biometric properties.
Masking
• Report whether participants, those administering the experimental manipulations, and those assessing the outcomes were aware of condition assignments.
• If masking took place, provide a statement regarding how it was accomplished and if and how the success of masking was evaluated.

17 Psychometrics
• Estimate and report values of reliability coefficients for the scores analyzed (i.e., the researcher's sample), if possible. Provide estimates of convergent and discriminant validity where relevant.
• Report estimates related to the reliability of measures, including
• Interrater reliability for subjectively scored measures and ratings
• Test–retest coefficients in longitudinal studies in which the retest interval corresponds to the measurement schedule used in the study
• Internal consistency coefficients for composite scales in which these indices are appropriate for understanding the nature of the instruments being employed in the study
• Report the basic demographic characteristics of other samples if reporting reliability or validity coefficients from those sample(s), such as those described in test manuals or in the norming information about the instrument.

18 Reference: Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., & Rao, S. M. (2018). Journal Article Reporting Standards for Quantitative Research in Psychology: The APA Publications and Communications Board Task Force Report. American Psychologist, 73(1).

19 Follow-up Assessment Some Considerations in Follow-up Assessment:
Importance of Follow-up Assessment; Distinction Between Follow-up and Generalization; Intervention “on or off” During Follow-up Interval; Type of Follow-up Assessment Measures; Length of Follow-up Assessment.

20 Importance of Follow-up Assessment
Adds to Credibility of the Intervention Research; Provides the Researcher with Information on the Durability of the Intervention; May Provide Information to the Researcher on the Generalized Effects of the Intervention; May be required in the RFP or journal.


22 Distinction Between Follow-up and Generalization Assessment
Follow-up is an Optional Assessment Process and can be used to Assess Generalization;
Generalization can be Measured in Stimulus or Response Modes;
A Useful Conceptual Tool for the Types of Generalization Assessment that can be Measured is the "Generalization Map."
[Allen, J. S., Tarnowski, K. J., Simonian, S., Elliott, D., & Drabman, R. S. (1991). The generalization map revisited: Assessment of generalized effects in child and adolescent behavior therapy. Behavior Therapy, 22(3).
Drabman, R. S., Hammer, D., & Rosenbaum, M. S. (1979). Assessing generalization in behavior modification with children: The generalization map. Behavioral Assessment, 1.]

23 Intervention “on or off” During Follow-up Interval
A Major Issue that the Researcher Must Consider in Follow-up Assessment is Whether the Intervention will be Continued or Discontinued During the Follow-up Interval. For example: Intervention “on” in original or modified fashion; Intervention “off” but re-established if needed; Intervention “off” based on pre-established criteria.

24 Type of Follow-up Assessment Measures Consider the following options:
The researcher may adopt the original assessment protocol used during the intervention trial; The researcher may adopt an abbreviated form of the original assessment (e.g., direct observation but in a short time period); The researcher may adopt an alternative form of the original assessment (e.g., use self-report or checklist and rating scales rather than direct observation).

25 Length of Follow-up Assessment
Factors to consider include the following: Nature of the problem/issue under consideration; Importance of follow-up for the problem/issue under consideration; Recommendations from prior research/researchers in the area of the intervention/problem under consideration; Policy of the funding agency or journal for follow-up assessment; Practical and logistical issues surrounding the investigation (e.g., availability of the participants, cost, research staff).

26 Intervention Integrity

27 Intervention Integrity: A Developing Standard
Assessing (and especially promoting) intervention or treatment integrity is a more recent and developing consideration in intervention research (see Sanetti & Kratochwill, 2014).
Sanetti, L. M., & Kratochwill, T. R. (Eds.) (2014). Treatment Integrity: A Foundation for Evidence-Based Practice in Applied Psychology. Washington, DC: American Psychological Association.

28 Representative Features of Intervention Integrity Data
Across Design Intervention Phases
Across Therapists/Intervention Agents
Across Situations
Across Sessions
Across Cases

29 Where does intervention integrity fit?
1. Screening/assessment data suggest prevention/intervention is warranted
2. Evidence-based intervention selected and implemented
3. Program outcomes (SO) assessed and intervention integrity (II) assessed
4. Data reviewed; data-based decisions:
5a. + SO, + II: Continue intervention
5b. + SO, - II: Continue intervention and promote intervention integrity
5c. - SO, - II: Implement strategies to promote intervention integrity
5d. - SO, + II: Change intervention

30 Treatment Integrity: Definition
"Fidelity of implementation is traditionally defined as the determination of how well an intervention is implemented in comparison with the original program design during an efficacy and/or effectiveness study" (O'Donnell, 2008, p. 33).
Five distinct dimensions of intervention integrity were proposed: adherence, exposure, quality of delivery, participant responsiveness, and program differentiation (Dane & Schneider, 1998).
Treatment integrity encompasses three different aspects: treatment adherence, therapist competence, and treatment differentiation (Waltz, Addis, Koerner, & Jacobson, 1993).
The degree to which treatment is delivered as intended (Yeaton & Sechrest, 1981).
Treatment integrity is the extent to which required intervention components are delivered as designed in a competent manner while proscribed procedures are avoided by an interventionist trained to deliver the intervention in a particular setting to a particular population (Perepletchikova, 2014).
Implementation is defined as a specified set of activities designed to put into practice an activity or program of known dimensions (Sanetti & Kratochwill, 2009).
"The degree to which an intervention program is implemented as planned" (Gresham et al., 1993, p. 254).
The "extent to which patients follow the instructions that are given to them for prescribed treatments" (Haynes et al., 2002, p. 2).
"Implementation is defined as a specified set of activities designed to put into practice an activity or program of known dimensions" (Fixsen et al., 2005, p. 5).

31 Intervention Integrity Data Definition of Intervention
Intervention Integrity in Research
[Sanetti, L. M. H., Gritter, K. L., & Dobey, L. M. (2011). Treatment integrity of interventions with children in the school psychology literature from 1995 to 2008. School Psychology Review, 40, 72-84.]

32 Intervention Integrity Assessment in School Psychology Practice
[Cochran, W. S., & Laux, J. M. (2008). A survey investigating school psychologists' measurement of treatment integrity in school-based interventions and their beliefs about its importance. Psychology in the Schools, 45.]

33 Implementation Supports
Negative reinforcement
Intervention manual
Test driving interventions
Expert consultation
Intervention scripts
Video support
Intervention choice
Performance feedback
Implementation planning
Classroom Check-Up
Motivational interviewing
Instructional coaching
Participant modeling
Treatment planning protocol
Role play
Prompts
Collaborative consultation
Self-monitoring
Direct training
Sanetti & Collier-Meek (2014)

34 Perspectives on intervention integrity
Performance Feedback (Noell et al., 1997, 2002, 2005)
Negative Reinforcement (DiGennaro et al., 2005, 2007)
Behavior Performance Deficit / text/Skype cues

35 Contributions from related fields
Psychology
Education
Medicine
Health Psychology
Prevention Science
Intervention integrity may be more complicated than a behavior performance deficit.

36 Shifting conceptualization of treatment integrity
Prevention/intervention programs require implementers to change their behavior.
Implementing interventions with a high level of integrity can be considered an adult behavior change process.

37 Mental Health and Education Related Fields
Health Action Process Approach (HAPA)
Theory of Reasoned Action
Transtheoretical Approach
Behavioral theory

38 Focus on the HAPA Looked for a theory that…
Is conceptually clear and consistent;
Is parsimonious but would enable us to describe, explain, and predict behavior change;
Explicates behavior performance, not just development of behavioral intention;
Enables us to design effective interventions that would produce change in the predicted behaviors;
Has empirical support and practical utility.

39 Empirical Support
Healthy eating
Exercise behaviors
Breast self-examination
Seat belt use
Dental flossing
See Schwarzer, R. (2008). Modeling health behavior change: How to predict and modify the adoption and maintenance of health behaviors. Applied Psychology, 57.

40 Project PRIME: Planning Realistic Intervention Implementation and Maintenance by Educators
Funded by the Institute of Education Sciences, U.S. Department of Education (R324A100051)

41 Project PRIME PRIME was designed originally to prevent teachers’ level of intervention implementation from declining. Developed a system of supports to facilitate teachers’ intervention implementation. Delivered through a problem-solving consultation model (Bergan & Kratochwill, 1990; Kratochwill & Bergan, 1990).

42 Three Components of PRIME:
1. Implementation Planning: A detailed guide walks consultants and consultees through the process of:
Action planning
Coping planning

43 Action Planning
Specifies how and under what circumstances an intended intervention action is to be completed
Intention to implement
Initiation of implementation

44 Action Planning Continued…
Completion of Action Plan helps to:
Define the intervention components and steps
All aspects of the intervention are accounted for by a component (e.g., a social skills lesson)
Each step corresponds to behavior(s) performed to implement an intervention component (e.g., introducing the topic of the social skills lesson)
Detail logistical planning on implementation
Identify potential resource barriers

45 Action Planning Continued…
Logistical planning questions answered for each implementation step include:
Where will the step be implemented?
How often will the step be implemented?
For how long will the step be implemented?
What resources (materials, time, space, personnel, etc.) are needed to complete this step?
Throughout implementation, the Action Plan can be updated to reflect changes in implementation.
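As an illustration only, the logistical answers for a single implementation step could be organized as a simple data structure; the field names and values below are assumptions for the sketch, not PRIME's actual planning forms.

    # Hypothetical record for one Action Plan step, mirroring the slide's planning questions.
    action_plan_step = {
        "component": "social skills lesson",
        "step": "introduce the topic of the lesson",
        "where": "general education classroom",
        "how_often": "twice per week",
        "for_how_long": "10 minutes per occasion",
        "resources_needed": ["lesson script", "scheduled class time", "co-teacher support"],
    }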

46 Coping Planning Completion of Coping Plan helps to:
Identify up to 4 of the most significant barriers to intervention implementation
Develop "coping" strategies to address identified barriers
Barriers to implementation are listed (e.g., major changes in class schedule due to statewide testing)
Throughout implementation, the coping plan is updated to reflect changes in implementation

47 Three Components of PRIME:
2. Implementation Beliefs Assessment (IBA)
Assesses the implementer's:
Outcome expectancies
Self-efficacy (implementation, maintenance, recovery)

48 Three Components of PRIME:
3. Strategies to Increase Implementation Intention and Self-efficacy
Eight strategies have been identified. Detailed guides walk consultants through strategy implementation. Strategies include:
Participant modeling
Role play

49 Multi-Tiered Implementation Supports (MTIS)
Tier 3: Intensive strategies that typically require ongoing support
Tier 2: Selected strategies based on the implementer's intervention integrity and stage of learning
Tier 1: Feasible and widely relevant implementation support strategies that can be easily embedded into typical programs

50 MTIS: Tier 1 Strategies
Intervention manual
Intervention scripts
Collaborative/Expert consultation
Instructional coaching
Intervention choice
Test driving interventions
Direct training

51 MTIS: Tier 1 & 2 Strategies
Implementation Planning
Treatment planning protocol

52 MTIS: Tier 2 Strategies
Role play
Participant modeling

53 MTIS: Tier 2 & 3 Strategies
Motivational interviewing
Self-monitoring
Prompts
Video support

54 MTIS: Tier 3 Strategies
Performance feedback
Performance feedback with negative reinforcement

55 Direct Training to Promote Intervention Integrity
In a review of the single-case design literature on school-based interventions, Fallon, Kurtz, and Mueller (2018) concluded that direct training to promote intervention integrity is an evidence-based practice. Direct training included instructions, modeling, practice, and feedback. Fallon, L. M., Kurtz, K. D., & Mueller, M. R. (2018). Direct training to improve educators’ treatment integrity: A systematic review of single-case design studies. School Psychology Quarterly, 33,

56 Take away messages: Intervention integrity is important, and cannot be assumed in prevention/intervention program implementation; Intervention integrity can be conceptualized as a behavior change process; Related fields can inform prevention/ intervention science practice in psychology and education; Multiple strategies implemented in a tiered model may be necessary to promote intervention integrity.

57 Intervention Intensity

58 Intervention Intensity: Definition and Dimensions
Intervention intensity typically refers to intervention strength; traditionally, the most commonly recognized dimension is the dose of the intervention (Codding & Lane, 2015).
Codding, R. S., & Lane, K. L. (2015). A spotlight on treatment intensity: An important and often overlooked component of intervention inquiry. Journal of Behavioral Education, 24, 1-10.

59 Dimensions of Intervention Intensity
Dose
Learning trial/session
Session length
Session frequency
Length of intervention
Cumulative intensity
Group size
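As a hedged worked example, the dose-related dimensions above are often combined into a single cumulative intensity figure (dose per session x session frequency x length of intervention); the values below are hypothetical.

    # Worked example (hypothetical values): one common way to compute cumulative intensity.
    dose_per_session = 20        # learning trials per session
    sessions_per_week = 3        # session frequency
    weeks_of_intervention = 10   # length of intervention
    cumulative_intensity = dose_per_session * sessions_per_week * weeks_of_intervention
    print(cumulative_intensity)  # 600 learning trials delivered across the intervention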

60 Dimensions of Intervention Intensity (Continued)
Intervention Design
Form
Positive corrective feedback
Pace of instruction
Opportunities to practice or respond
Transitions between subjects or classes
Goal specificity
Intervention complexity

61 Dimensions of Intervention Intensity (Continued)
Interventionist Characteristics
Experience, knowledge, skills
Implementation Characteristics
Intervention management and planning
Materials and tangible resources required
Deviation from classroom routines

62 Cost-Analysis in Single-Case Design Applied and Clinical Research

63 Goals of the Presentation
Provide a perspective on the importance of cost-analysis;
Provide an overview of the different aspects of cost-analysis, including:
Basic Cost-Analysis
Cost-Effectiveness Analysis
Cost-Benefit Analysis (Benefit-Cost Analysis)
Cost-Utility Analysis
Cost-Feasibility Analysis

64 Goals Continued Provide some challenges in cost-analysis in single-case design research; Provide some pros and cons in conducting various types of cost-analysis; Provide some examples of using the results of cost-analysis in single-case design research.

65 Some Historical Considerations
Cost factors in applied and clinical research have always been a consideration in the development and implementation of prevention and intervention programs, especially in educational and psychological research (Levin et al., 2018).
Formal assessment of cost and cost-analysis is only now coming into focus in reporting applied and clinical research, including single-case design investigations.

66 Cost-Analysis Approaches
Basic Cost-Analysis
Cost-Effectiveness Analysis
Cost-Benefit Analysis
Cost-Utility Analysis
Cost-Feasibility Analysis

67 Cost-Analysis Components Include Several Considerations:
What is the cost question?
What is the measure of cost?
What is the measure of outcomes?
What are the positive aspects of the cost-analysis?
What are the negative aspects of the cost-analysis?

68 Basic Cost-Analysis
A basic cost-analysis addresses the question of what it will cost to implement the intervention with a specific participant(s) or unit of analysis (e.g., small group, classroom).
The measure of cost is the monetary value of the resources to implement the intervention.
The measure of outcome is the effectiveness of the intervention.

69 Basic Cost Analysis Example
Example: An intervention (B) is implemented in an ABAB single-case design at a given cost per participant. When replications of the ABAB design across participants are conducted, we can report a summary measure of costs:
S1 ABAB = $475.00
S2 ABAB = $450.00
S3 ABAB = $465.00
S4 ... etc.
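A minimal sketch of how the per-participant costs above might be summarized across replications; the dollar figures for S1-S3 come from the slide, while the choice of summary statistics (mean and range) is illustrative.

    # Summarizing per-participant implementation costs across ABAB replications.
    costs = {"S1": 475.00, "S2": 450.00, "S3": 465.00}
    mean_cost = sum(costs.values()) / len(costs)
    print(f"Mean cost per participant: ${mean_cost:.2f}")                      # $463.33
    print(f"Range: ${min(costs.values()):.2f} to ${max(costs.values()):.2f}")  # $450.00 to $475.00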

70 Considerations in Basic Cost-Analysis
An outcome measure(s) in the single-case design study is used to establish intervention effectiveness; Can be useful with a small number of cost variables and outcome measures; Can be challenging when there are multiple cost variables and outcome measures.

71 Cost-Effectiveness Analysis
Cost-effectiveness analysis addresses the question of which intervention provides a certain positive outcome for the lowest cost. The measure of cost is the monetary value of the resources to implement the intervention. The measure of outcome is the effectiveness of the intervention.

72 Cost-Effectiveness Example
Example: Two interventions (B and C) are implemented in an Alternating Treatment Design and provide nearly identical positive outcomes (both produce more positive outcomes relative to baseline). However, Intervention B costs $1,000 to implement for the participant and Intervention C costs $1,700 to implement for the participant.
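A small sketch of the cost-effectiveness comparison implied by this example: the dollar costs come from the slide, while the outcome gains are hypothetical placeholders standing in for the "nearly identical" effects.

    # Cost-effectiveness ratio: cost per unit of outcome for each intervention.
    interventions = {
        "B": {"cost": 1000.0, "outcome_gain": 12.0},  # hypothetical gain over baseline
        "C": {"cost": 1700.0, "outcome_gain": 12.0},
    }
    for name, d in interventions.items():
        print(f"Intervention {name}: ${d['cost'] / d['outcome_gain']:.2f} per unit of outcome")
    # With nearly identical outcomes, B provides the same benefit at lower cost.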

73 Considerations in Cost-Effectiveness
Outcome measures in the single-case design establish the effectiveness;
Can be useful with a single or small number of outcome measures;
Can be challenging when there are multiple outcome measures, especially when each yields different results;
Can be challenging when more than two alternative interventions are compared.

74 Cost-Utility Analysis
Cost-Utility Analysis addresses the question of which intervention/program yields the greatest utility at the lowest cost (or the highest utility at a specified cost); Utility refers to value or satisfaction on a range of outcomes; The measure of cost is the monetary value of the resources to implement the intervention; The measure of outcome is a unit(s) of utility.

75 Cost-Utility Example
Example: Two interventions (B and C) are compared in an alternating treatment design with one primary outcome measure. The two interventions produce nearly identical outcomes, but "consumers" have a preference for intervention C. In this case the "social validity" measure includes subjective evaluation and normative comparison as the primary feature of cost-utility assessment.
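A sketch, under stated assumptions, of how consumer preference might be folded into a cost-utility comparison: the costs, outcome values, and preference weights are hypothetical, and this is only one of many ways a utility metric could be constructed.

    # Cost per unit of utility, where utility = outcome gain weighted by consumer preference.
    def cost_utility_ratio(cost, outcome_gain, preference_weight):
        return cost / (outcome_gain * preference_weight)

    ratio_b = cost_utility_ratio(cost=1000.0, outcome_gain=12.0, preference_weight=0.5)
    ratio_c = cost_utility_ratio(cost=1100.0, outcome_gain=12.0, preference_weight=0.9)
    print(round(ratio_b, 2), round(ratio_c, 2))  # lower ratio = more utility per dollar
    # Despite its higher cost, C yields more utility per dollar once preference is weighted in.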

76 Considerations in Cost-Utility
The researcher can include participant preferences in the effectiveness analysis; Participant preferences are taken into account in any decisions made to adopt the intervention; The researcher can include multiple measures of effectiveness into a total utility metric; There may be challenges in preference measurement (as in social validity assessment); Useful in comparison of two or more intervention alternatives.

77 Cost-Benefit Analysis
Cost-Benefit Analysis addresses the question of which intervention provides a certain level of benefits for the lowest cost (or the highest level of benefits for a certain cost). Cost-Benefit Analysis also addresses the question of whether the benefits of a single alternative intervention are larger than the costs;
The measure of cost is the monetary value of the resources to implement the intervention (all variables are translated into costs);
The measure of outcomes is the monetary value of benefits (i.e., the outcome is converted to dollars).

78 Cost-Benefit Example
Example: An intervention (B) is implemented in a multiple baseline design across participants and provides a positive outcome in reducing disruptive classroom behavior (positive outcomes relative to baseline). Follow-up (short and long term) indicates that the reduction in classroom disruptive behavior has resulted in savings on the costs of other disciplinary programs not needed with the participants involved in the tested program.
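A brief sketch of the benefit-cost arithmetic once outcomes are monetized; all dollar values below are hypothetical and are not taken from the slide.

    # Net benefit and benefit-cost ratio after converting outcomes to dollars.
    intervention_cost = 2000.0          # cost of implementing B across the design
    avoided_discipline_costs = 3500.0   # monetized savings from disciplinary programs not needed
    net_benefit = avoided_discipline_costs - intervention_cost
    benefit_cost_ratio = avoided_discipline_costs / intervention_cost
    print(net_benefit, round(benefit_cost_ratio, 2))  # 1500.0 1.75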

79 Considerations in Cost-Benefit
The researcher must consider the absolute worth of a project/intervention; The researcher can compare the cost-benefits findings across a wide variety of intervention research studies; The researcher may have challenges in converting all benefits to a monetary value, especially immediately following the program implementation.

80 Cost-Feasibility Analysis
Cost-Feasibility Analysis addresses the question of whether a single intervention can be implemented within an existing budget; The measure of cost is the monetary value of the resources to implement the intervention (all variables are translated into costs); No outcome measures are involved in this analysis.

81 Cost-Feasibility Analysis Example
Example: A single-case researcher must conduct a Goal 2 pilot study with a multiple baseline design across participants. The researcher must determine if he/she has the monetary resources to implement the intervention across two cohorts of 5 participants each.
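A minimal sketch of the feasibility check described here; the per-participant cost and the available budget are hypothetical.

    # Cost-feasibility: can the intervention be implemented within the existing budget?
    cost_per_participant = 450.0
    participants = 2 * 5                 # two cohorts of 5 participants each
    total_cost = cost_per_participant * participants
    budget = 5000.0
    print("Feasible within budget" if total_cost <= budget else "Not feasible within budget")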

82 Considerations in Cost-Feasibility Analysis
The researcher has the option to consider alternatives in the analysis that can be eliminated prior to conducting the study; The determination of alternatives does not take into account the “worth” of the intervention because no outcome measures have been assessed to determine feasibility (unless a “pilot” study has been conducted).

83 Standards of Evidence: Conducting and Reporting Costs*
Table 1. SPR standards for economic evaluation of prevention programs (section and related standards)
I. Standards for framing an economic evaluation
I.1. State the empirical question being addressed by the economic evaluation
I.2. Describe in detail the program being evaluated and its comparator
I.3. Describe the evaluation of the prevention program's efficacy or effectiveness in terms of its impact on behavioral and other noneconomic outcomes
I.4. Determine and describe the perspectives from which analyses are conducted
I.5. Describe the time period and systems included and excluded in the

84 Standards of Evidence: Conducting and Reporting Costs
II. Standards for estimating costs of prevention programs
II.1. Plan cost analyses prospectively and then conduct them concurrently with program trials
II.2. Use an ingredients method in cost analysis
II.3. Describe comprehensively the units and resources needed to implement the intervention, disaggregated by time
II.4. Include resources consumed but not paid for directly
II.5. Resources needed to support program adoption, implementation, sustainability, and monitoring should be included in cost estimates

85 Standards of Evidence: Conducting and Reporting Costs
III. Standards for valuing effects of prevention programs
III.1. Estimate findings for each program outcome separately from benefit estimates and describe the context of the evaluation
III.2. Balance the rigor of direct valuation of outcomes with the validity of indirect valuation in contemporary society
III.3. Consider outcomes with negative monetary values as negative benefits rather than part of program costs

86 Standards of Evidence: Conducting and Reporting Costs
IV. Standards for summary metrics
IV.1. Estimate all costs and benefits in current monetary units or in monetary units for the most recent year available
IV.2. Estimate current values for benefits and costs that accrue over time by selecting and reporting a reputable discount rate
IV.3. Estimate and report the total, per-participant average, and marginal costs of the program
IV.4. When applying benefits across multiple outcomes to generate total economic values, avoid double counting of economic impact
IV.5. Use the net present value with a confidence interval as the principal summary metric of benefit-cost analyses
IV.6. Describe the advantages and limitations of any additional summary metrics that are included in the evaluation. Some metrics should be used only when certain conditions are met
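To illustrate IV.2 and IV.5, a hedged sketch of a net present value calculation with a discount rate; the 3% rate and the yearly net-benefit stream are hypothetical, and a confidence interval around the estimate would still be needed to satisfy IV.5.

    # Net present value: discount benefits and costs that accrue over time to year-0 dollars.
    def net_present_value(net_benefits_by_year, discount_rate):
        return sum(nb / (1 + discount_rate) ** year
                   for year, nb in enumerate(net_benefits_by_year))

    yearly_net_benefits = [-2000.0, 800.0, 900.0, 1000.0]  # year-0 cost, benefits in years 1-3
    print(round(net_present_value(yearly_net_benefits, 0.03), 2))  # about 540 under these assumptions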

87 Standards of Evidence: Conducting and Reporting Costs
V. Standards for handling estimate uncertainty
V.1. Test the uncertainty in estimates and report the manner in which it is handled
VI. Standards for reporting economic evaluations
VI.1. The principle of transparency should guide the reporting of economic evaluation results
VI.2. Use a two-step reporting process that summarizes the most essential features and results of an evaluation in a table or brief report and offers supporting technical detail elsewhere
VI.3. When Monte Carlo analysis is performed, present a histogram of the net present value distribution as well as the percentage of simulations that return a positive net present value
(Source: Crowley et al., 2018)
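A sketch of the kind of Monte Carlo summary VI.3 asks for, reporting the percentage of simulations with a positive net present value; the cost and benefit distributions below are hypothetical assumptions, not values from the standards.

    # Monte Carlo pass over the NPV: report the share of simulations returning a positive value.
    import random

    def simulate_npv(discount_rate=0.03, n_sims=10_000):
        npvs = []
        for _ in range(n_sims):
            cost = random.gauss(2000.0, 200.0)                          # year-0 implementation cost
            benefits = [random.gauss(900.0, 150.0) for _ in range(3)]   # benefits in years 1-3
            npv = -cost + sum(b / (1 + discount_rate) ** (year + 1)
                              for year, b in enumerate(benefits))
            npvs.append(npv)
        pct_positive = 100.0 * sum(v > 0 for v in npvs) / n_sims
        return npvs, pct_positive

    npvs, pct_positive = simulate_npv()
    print(f"{pct_positive:.1f}% of simulations return a positive NPV")
    # A histogram of `npvs` would supply the distribution the standard asks to present.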

88 Challenges with Cost Assessment*
What counts as a cost? What counts as a benefit? How do we assess the impact value of an intervention?

89 Challenges (Continued)*
For ongoing programs, over what time interval should costs/benefits be evaluated?
How do we extrapolate benefits beyond the data collection period?
(Source: Crowley et al., 2018)


91 Cost-Analysis References
Crowley, D. M., Dodge, K., Barnett, S., Corso, P., Duffy, S., Graham, P., Greenberg, M. T., Hill, L., Haskins, R., Jones, D. E., Karoly, L., Kuklinski, M., & Plotnick, R. (2018). Standards of evidence for conducting and reporting economic evaluations in prevention science. Prevention Science, 19(3).
Crowley, D. M., Hill, L. G., Kuklinski, M. R., & Jones, D. E. (2013). Research priorities for economic analyses of prevention: Current issues and future directions. Prevention Science.

92 Cost-Analysis References
Levin, H. M., & McEwan, P. J. (2001). Cost-effectiveness analysis: Methods and applications (Vol. 4). Thousand Oaks, CA: Sage.
Levin, H., Belfield, C., Muennig, P. A., & Rouse, C. (2006). The costs and benefits of an excellent education for all of America's children. New York: Columbia University.
Levin, H. M., McEwan, P. J., Belfield, C., Bowden, A. B., & Shand (2018). Economic evaluation in education: Cost-effectiveness and benefit-cost analysis (3rd ed.). Los Angeles: SAGE.

93 Social Validity in Single-Case Design Research

94 Social Validity in Intervention Research
Social Validity involves three questions about an intervention (Kazdin, 2011):
Are the goals of the intervention relevant to the person's life?
Is the intervention acceptable to "consumers" and others involved in the procedures?
Are the outcomes of the intervention important (i.e., do changes make a difference in lives of persons involved)?
Kazdin, A. E. (2011). Single-case research designs: Methods for clinical and applied settings. New York: Oxford.
Carter, S. L., & Wheeler, J. J. (2019). The social validity manual: Subjective evaluations of interventions (2nd ed.). San Diego: Academic Press.

95 Social Validity in Intervention Research
Three social validation methods can be used in intervention research:
Social Comparison
Subjective Evaluation
Sustainability of the Intervention

96 Social Validity in Intervention Research
Social Comparison
Normative social comparison data can be used when such information provides a good benchmark for positive functioning;
Typically, social comparison involves identification of a "peer group" similar to the client;
The peer group would consist of persons who are functioning in an adequate or positive manner;
Sometimes standardized assessment instruments can be used for social comparison purposes (e.g., checklists and rating scales).

97 Social Validity in Intervention Research
Some challenges with social comparison: It is not so easy to establish normative comparisons; Normative comparisons may be unrealistic; Normative range of functioning may be an impossible goal given the level of impairment; Normative goals may not reflect overall quality of life; Standardized instruments used for normative comparisons may not provide accurate data (e.g., checklists and rating scales; again, see Reid & Maag, 1994).

98 Social Validity in Intervention Research
Subjective Evaluation
Involves the assessment by significant others who have knowledge of the client and can make a judgment of the need for intervention and the outcomes of intervention;
Individuals rendering a "subjective" judgment may be parents, teachers, or professionals who have expert status in an area;
Specific goals may be established and serve as the basis for intervention and a benchmark for outcome evaluation.

99 Social Validity in Intervention Research
Some challenges with subjective evaluation:
Global ratings may be biased;
Persons completing the rating(s) may perceive a small change as major when, in fact, not much of significance has occurred;
Subjective ratings may not correspond to other outcome data (e.g., direct observation).

100 Social Validity in Intervention Research
Sustainability
The degree to which the effects of an intervention are sustained over time (Kennedy, 2005). The assessment is a measure of how long the program stays in place or is adopted.
Kennedy, C. H. (2005). Single-case designs for educational research. Boston: Allyn and Bacon.

101 Social Validity in Intervention Research
Some challenges with sustainability:
Requires a long time to conduct the analysis;
Factors other than consumer acceptance may influence ratings/evaluation of sustainability;
Sustainability is an indirect index of social validity.

102 Questions and Discussion

