Masked Visual Analysis (MVA)
A method to ensure control of the Type I error rate when visually analyzing single-case studies
Type I Error Control
A Type I error is committed when an analyst concludes there was an effect when there really was not. In conventional statistical analyses, researchers often set the Type I error rate to .05. With traditional visual analyses it is difficult to know how likely it is that a researcher will incorrectly conclude there was an effect.
Type I Error Studies
Estimate  Study
.24       Matyas & Greenwood
          Stocks & Williams
          Fisch
          Borckardt, Murphy, Nash, & Shaw
/.01      Carter, 2009
MVA Steps for Randomized Designs
1. Plan the study
2. Split the research team into two groups: (a) an intervention team and (b) an analysis team
3. The intervention team makes the random assignment, but does not tell the analysis team
4. The intervention team conducts the study
5. The intervention team creates a masked graph
6. The analysis team analyzes the masked graph
Example Application (Thanks to Kendall DeLoatche)
Study to examine the effect of parent-child interaction therapy (PCIT) on the number of praises given by a parent during interaction with their child
Design Type: Multiple Baseline Across 4 Participants
Intervention Schedule: Baseline lengths of 3, 4, 5, and 6
Randomization: Randomize the order of participants for intervention
[Graph: labeled praises by session, Dyadic Parent-Child Interaction Coding System (DPICS)]
Compute the p-value
The p-value is computed as:
p = (# specifications) / (# possible assignments)
# possible assignments = 4 × 3 × 2 × 1 = 24
p = 1/24 = .0417
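The arithmetic above can be sketched in a few lines of Python (the function name `mva_p_value` is ours, not part of any MVA software):

```python
from math import factorial

def mva_p_value(num_specifications, num_possible):
    """Masked visual analysis p-value: the number of specifications made,
    divided by the number of equally likely random assignments.
    (Function name is ours, for illustration only.)"""
    return num_specifications / num_possible

num_possible = factorial(4)            # 4 participants: 4! = 24 possible orders
p = mva_p_value(1, num_possible)       # correct on the first specification
print(round(p, 4))                     # 0.0417
```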
Type I error control If there were no treatment effects the data would be the same regardless of which random assignments were made. As a consequence, the Analysis Team would make the same decisions and the same specification (e.g., always say the order is 1, 2, 4, 3). Because the assignments are made randomly the probability that the assignment corresponds to the one the Analysis Team would pick is 1 out of the # possible (e.g., the order 1, 2, 4, 3 would be selected randomly 1 out of 24 times).
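This argument can be checked with a small simulation, a sketch that assumes the Analysis Team's fixed pick under the null is the order (1, 2, 4, 3), as in the example:

```python
import random
from itertools import permutations

# Under the null the masked graph looks the same for every assignment, so the
# Analysis Team's specification is a fixed order, e.g. (1, 2, 4, 3).
fixed_specification = (1, 2, 4, 3)
orders = list(permutations([1, 2, 3, 4]))   # the 24 possible assignments

random.seed(0)                              # reproducible sketch
trials = 100_000
hits = sum(random.choice(orders) == fixed_specification for _ in range(trials))
print(hits / trials)   # close to 1/24 ≈ 0.0417
```

Because the random assignment matches any fixed specification with probability 1/24, falsely declaring an effect happens at most about 4.2% of the time.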
MVA Steps for Response-Guided Randomized Designs
1. Set study parameters
The research team agrees upon:
Design type (e.g., MB)
Minimums (e.g., a minimum of 5 observations per phase)
Randomization (e.g., random order of participants in MB)
2. Split into two teams
Analysis Team: visually analyze the data and direct the Intervention Team
Intervention Team: conduct the study based on the agreed-upon parameters and the direction of the Analysis Team
3. Conduct the study
The Intervention Team begins the study and sends the collected outcome data to the Analysis Team.
The Analysis Team analyzes the data and decides when it would be appropriate to make a random assignment.
The Intervention Team makes random assignments when directed by the Analysis Team and continues to collect and send the outcome measures, but never discloses the results of the random assignments.
The Analysis Team indicates when the study should be concluded.
4. Compute the p-value
The Analysis Team specifies what they believe are the results of the random assignments.
The Intervention Team indicates whether they are correct.
If not correct, the Analysis Team continues to make specifications until a correct specification is made.
The p-value is computed as:
p = (# specifications) / (# possible assignments)
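The sequential specification procedure can be sketched as follows (the helper `masked_p_value` and the example orders are hypothetical, not part of any published MVA tooling):

```python
def masked_p_value(specifications, true_assignment, num_possible):
    """p-value for the sequential specification procedure: the number of
    specifications made up to and including the correct one, divided by
    the number of possible random assignments. (Helper name is ours.)"""
    for k, guess in enumerate(specifications, start=1):
        if guess == true_assignment:
            return k / num_possible
    raise ValueError("the true assignment was never specified")

# Hypothetical run: 24 possible orders, correct on the second specification.
p = masked_p_value([(1, 2, 3, 4), (1, 2, 4, 3)], (1, 2, 4, 3), 24)
print(round(p, 4))   # 0.0833
```

Note that each incorrect specification doubles, triples, etc. the p-value, so the procedure penalizes wrong guesses rather than allowing unlimited retries.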
Example 1: Multiple Baseline Design – 4 Participants
Step 1: Set study parameters
Dependent Variable? % of time on task
Design Type? Multiple Baseline Across Participants
Minimums?
At least 5 baseline observations
Staggers of at least 2 observations
Treatment phases with at least 3 observations
Example 1: Multiple Baseline Design – 4 Participants
Step 1: Set study parameters
Randomization? Randomize the order of participants for intervention
How many possible assignments of participants to treatment order (who is 1st, 2nd, 3rd, and 4th)?
4! = 24 possible assignments
If the treatment has no effect, the probability that a masked visual analyst could identify the correct order is p = 1/24 = .0417
Example 1: Multiple Baseline Design – 4 Participants
Step 2: Split into two teams
Step 3: Conduct the study
[Masked multiple-baseline graphs, updated across sessions: % time on task by session for each participant]
Example 1: Multiple Baseline Design – 4 Participants
Step 4: Compute the p-value
The Analysis Team makes a specification.
The Intervention Team indicates whether it is correct.
If the treatment has no effect, the probability that a masked visual analyst could have identified the correct order is p = 1/24 = .0417
Example 2: Multiple Baseline Design – 3 Participants
Step 1: Set study parameters
Design Type? Multiple Baseline Across Participants
Minimums?
At least 5 baseline observations
Staggers of at least 3 observations
Treatment phases with at least 5 observations
If there is an outlier, at least 3 additional observations in the phase
Example 2: Multiple Baseline Design – 3 Participants
Dependent Variable? % of intervals with prosocial behavior
Randomization?
How many possible assignments of participants to treatment order (who is 1st, 2nd, and 3rd)? 3! = 6 possible assignments
What if we randomly select from Participant 1, Participant 2, Participant 3, and no one? 4! = 24 possible assignments; if the first specification is correct, p = 1/24 = .0417
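The two randomization schemes can be compared by enumeration; this sketch uses hypothetical participant labels, with "no one" as a dummy entry that never receives treatment:

```python
from itertools import permutations

# Hypothetical labels; "no one" is a dummy who never receives treatment,
# added only to enlarge the randomization sample space.
three_way = list(permutations(["P1", "P2", "P3"]))
four_way = list(permutations(["P1", "P2", "P3", "no one"]))

print(len(three_way))   # 6  -> smallest possible p is 1/6 ≈ .17
print(len(four_way))    # 24 -> smallest possible p is 1/24 ≈ .042
```

With only 3 participants, adding the dummy entry is what makes a p-value below .05 attainable at all.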
Example 2: Multiple Baseline Design – 3 Participants
Step 2: Split into two teams
Step 3: Conduct the study
Example 2: Multiple Baseline Design – 3 Participants
Step 4: Compute the p-value
The Analysis Team makes a specification.
The Intervention Team indicates whether it is correct.
If the treatment has no effect, the probability that a masked visual analyst could have identified the assignments is p = 1/24 = .0417
Example 3: Multiple Probe Design
Step 1: Set study parameters
Design Type? Multiple Probe with 5 Participants
Minimums?
At least 5 observations in each phase
At least 3 consecutive observations prior to intervention
At least 3 consecutive observations after an intervention
Temporal staggers of at least 2 observations
Randomization? Random assignment of treatment to blocks of observations, where there is one mystery block for each participant at the point the participant becomes eligible for intervention
2^5 = 32, so if the specification is correct with 5 blocks, p = 1/32 = .031
Example 3: Multiple Probe Design
Step 2: Split into two teams
Step 3: Conduct the study
[Masked multiple-probe graph: A and B phases with a mystery block (?) for each of Dave, John, Bob, Dan, and Theresa]
Example 3: Multiple Probe Design
Step 4: Compute the p-value
The Analysis Team makes a specification.
The Intervention Team indicates whether it is correct.
Yes? p = 1/32 = .031
Example 4: Reversal Design
Step 1: Set study parameters
Dependent Variable? Number of disruptive behaviors
Design Type? Reversal
Minimums?
At least 5 observations per phase
At least 3 phase changes (at least ABAB)
Randomization? Random assignment of treatment to blocks of observations
Because each assignment has 2 possibilities, 5 assignments are needed to obtain more than 20 possible assignments and a p-value < .05.
2^5 = 32, so if the specification is correct, p = 1/32 = .031
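The count of 2^5 = 32 assignments can be verified by enumerating the binary choices; the 0/1 coding of each block decision below is our own illustration:

```python
from itertools import product

# Each of the 5 block randomizations is a binary choice (our 0/1 coding:
# change phase at this block vs. continue the current phase), so the
# sample space has 2**5 = 32 equally likely assignments.
assignments = list(product([0, 1], repeat=5))
print(len(assignments))       # 32
print(1 / len(assignments))   # 0.03125, i.e. p ≈ .031 for a correct first specification
```

With only 4 binary assignments the sample space would be 2^4 = 16, making the smallest possible p-value 1/16 = .0625, which is why 5 assignments are required.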
Example 4: Reversal Design
Step 2: Split into two teams
Step 3: Conduct the study
Example 4: Reversal Design
Step 4: Compute the p-value
The Analysis Team makes a specification.
The Intervention Team indicates whether it is correct.
If the treatment has no effect, the probability that a masked visual analyst could have identified the assignments is p = 1/32 = .031
Example 5: Alternating Treatments Design
Step 1: Set study parameters
Design Type? Alternating Treatments (2 treatments)
Minimums? At least 5 alternating pairs
Randomization? Random assignment of one observation in each pair to A and one to B
Because each assignment has 2 possibilities, 5 assignments are needed to obtain more than 20 possible assignments and a p-value < .05.
2^5 = 32, so if the specification is correct with 5 pairs, p = 1/32 = .031
Example 5: Alternating Treatments Design
Step 2: Split into two teams
Step 3: Conduct the study
Example 5: Alternating Treatments Design
Step 4: Compute the p-value
The Analysis Team makes a specification.
The Intervention Team indicates whether it is correct.
Yes? p = 1/64 = .016
No? Make a second specification. If correct this time, p = 2/64 = .031
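Assuming the study ran to 6 alternating pairs (so 2^6 = 64 equally likely assignments, consistent with the 1/64 above), the two quoted p-values follow directly:

```python
# Assumption: the study ended with 6 alternating pairs, giving
# 2**6 = 64 equally likely assignments of A and B within pairs.
num_possible = 2 ** 6

# Correct on the first specification:
print(1 / num_possible)   # 0.015625, i.e. p ≈ .016
# Correct on the second specification:
print(2 / num_possible)   # 0.03125, i.e. p ≈ .031
```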
Applications and Illustrations
Byun, T. M., Hitchcock, E., & Ferron, J. M. (2017). Masked visual analysis: Minimizing type I error in response-guided single-case design for communication disorders. Journal of Speech, Language, and Hearing Research, 60, 1455-1466.
DeLoatche, K. J. (2015). Parent-child interaction therapy as a treatment for ADHD in early childhood: A multiple baseline single-case design (Unpublished doctoral dissertation). University of South Florida, Tampa.
Dickerson, E. (2016). Computerized cognitive remediation therapy (CCRT): Investigating change in the psychological and cognitive function of adolescent psychiatric patients (Unpublished doctoral dissertation). Northeastern University, Boston.
Ferron, J., & Jones, P. K. (2006). Tests for the visual analysis of response-guided multiple-baseline data. Journal of Experimental Education, 75.
Ferron, J. M., Joo, S.-H., & Levin, J. R. (accepted). A Monte-Carlo evaluation of masked-visual analysis in response-guided versus fixed-criteria multiple-baseline designs. Journal of Applied Behavior Analysis.
Ferron, J. M., & Levin, J. R. (2014). Single-case permutation and randomization statistical tests: Present status, promising new developments. In T. R. Kratochwill & J. R. Levin (Eds.), Single-case intervention research: Statistical and methodological advances. Washington, DC: American Psychological Association.
Hinojosa, S. M. (2016). Teacher child interaction therapy: An ecological approach to intervening with young children who display disruptive behaviors (Unpublished doctoral dissertation). University of South Florida, Tampa.
Hua, Y., Yuan, C., Monroe, K., Hinzman, M. L., Alqahtani, S., Abdulmohsen, A., & Kern, A. M. (2016). Effects of the reread-adapt and answer-comprehend and goal setting intervention on decoding and reading comprehension skills of young adults with intellectual disabilities. Developmental Neurorehabilitation.
McKenney, E. L., Mann, K. A., Brown, D. L., & Jewell, J. D. (2017). Addressing cultural responsiveness in consultation: An empirical demonstration. Journal of Educational and Psychological Consultation.
McKeown, D., Kimball, K., & Ledford, J. (2015). Effects of asynchronous audio feedback on the story revision practices of students with emotional/behavioral disorders. Education and Treatment of Children, 38.
Ottley, J. R., Coogle, C. G., Rahn, N. L., & Spear, C. (2017). Impact of bug-in-ear professional development on early childhood co-teachers' use of communication strategies. Topics in Early Childhood Special Education, 36.