Project VIABLE: Critical Components of DBR Training to Enhance Rater Accuracy
Jessica Amon*, Shannon Brooks*, Stephen Kilgus**, Sandra M. Chafouleas**, & Chris Riley-Tillman*
*East Carolina University; **University of Connecticut

INTRODUCTION

This study represents one of several investigations initiated under Project VIABLE. Through Project VIABLE, empirical attention is being directed toward the development and evaluation of formative measures of social behavior involving direct behavior rating (DBR). The goal of Project VIABLE is to examine DBR through three phases of investigation: (1) foundations of measurement, (2) decision making and validity, and (3) feasibility.

DBR refers to the rating of a specified behavior at least daily, followed by the sharing of that information with someone other than the rater. The question of how much training is necessary to facilitate adequate rater accuracy among DBR users has only recently begun to be explored. Extant research suggests that training incorporating practice and performance feedback produces greater accuracy than a brief familiarization session alone (Schlientz et al., 2009). Additional findings indicate that practice with feedback improves rater accuracy over and above practice alone when rating student disruptive behavior. Beyond these findings, a review of literature in related fields points to additional training components of interest; in particular, work in industrial/organizational psychology supports the idea that training calling attention to (a) common rater errors and/or (b) a rater frame-of-reference may increase accuracy.

The purpose of this study was therefore to examine the impact of adding Frame-of-Reference training (FOR) and Rater Error Training (RET) to standard DBR training involving practice and feedback (Standard). In addition, the amount of exposure to practice with feedback was evaluated.

MATERIALS & METHODS

Participants were 177 undergraduate students recruited from a university in the southeast. Participants were assigned a priori to one of six conditions, each pairing one of three types of training (Standard, FOR, or FOR+RET) with one of two levels of exposure (3 or 6 modeling clips).

In all conditions, participants first completed items pertaining to demographics and DBR familiarity. Next, participants viewed 3 one-minute pre-test clips and rated one student on a specific behavior after each clip. All participants were then given a brief presentation on DBR. In the FOR+RET conditions, participants were also shown common examples of rating errors (e.g., the halo effect, leniency/severity, central tendency, primacy/recency). The presenter then demonstrated the correct way to rate a student's behavior. Standard-condition participants were provided with the true score for each pre-test clip; FOR and FOR+RET participants were given both the true score and an explanation of that score. Next, participants rated one student on one behavior in each of 3 or 6 (depending on condition) modeling clips. In the FOR and FOR+RET conditions, participants also wrote an explanation of why they chose each rating. The presenter then offered feedback on each clip: in the Standard conditions, feedback consisted of the true score; in the FOR and FOR+RET conditions, feedback included the true score as well as a replay of the clip during which the presenter pointed out the reasons for each rating. Finally, all participants viewed and rated 6 experimental clips.
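As a compact summary of the design described above, the six condition cells can be enumerated in a few lines of Python; this sketch is purely illustrative, using only the condition labels given in this section:

    # Enumerate the six experimental conditions: 3 training types x 2 exposure levels.
    from itertools import product

    trainings = ["Standard", "FOR", "FOR+RET"]
    exposures = [3, 6]  # number of modeling clips rated with feedback

    conditions = list(product(trainings, exposures))
    print(conditions)  # [('Standard', 3), ('Standard', 6), ..., ('FOR+RET', 6)]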
RESULTS

Six outcome variables were of interest: the accuracy with which participants rated each of the six experimental video clips. Accuracy was calculated as the absolute value of the difference between an individual's rating and the true score for a given clip, A = |x_i - x_true|; lower scores indicate greater accuracy. The absolute value was taken so that no particular pattern of rating could erroneously lead one group to appear more accurate than another. See Table 1 for a summary of descriptive statistics for accuracy scores by condition, target behavior, and rate of behavior.

A repeated measures MANOVA revealed a statistically significant (a) main effect of experimental clip (Wilks' lambda, F = 4.571, p < .001, partial η² = .932), (b) two-way interaction between experimental clip and practice level (Wilks' lambda, F = 3.923, p = .002, partial η² = .105), and (c) three-way interaction among experimental clip, type of training, and practice level (Wilks' lambda, F = 2.896, p = .002, partial η² = .08). The statistically significant three-way interaction suggests that the moderating influence of type of training on the effect of practice level varied across experimental clips; in other words, the effect of practice level was not consistent across types of training. Furthermore, the within-subjects effect of experimental clip suggests that these relationships were not consistent from clip to clip (e.g., a statistically significant difference between ST-3 and ST-6 on experimental clip 1 may not have existed on experimental clip 4).

A series of post hoc comparisons was then conducted to further elucidate differences among groups in mean rating accuracy. Comparisons were kept within experimental clip, as these were judged to be the most meaningful contrasts; comparisons between groups across clips were considered uninterpretable, because any difference could reflect either the clip or the training content. All possible unique (within dependent variable) comparisons were made, yielding 90 contrasts in total (15 unique contrasts for each of 6 dependent variables). Of these 90 contrasts, four were statistically significant at the .0006 level. See Table 2 for a summary of these contrasts.
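To make the scoring and testing steps concrete, the sketch below reproduces the pipeline in Python on simulated data; the column names, factor labels, true scores, and ratings are illustrative assumptions, not study data:

    # Illustrative sketch of the analysis pipeline, run on simulated (not study) data.
    import numpy as np
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    rng = np.random.default_rng(0)
    n = 177  # sample size reported in Materials & Methods

    # Hypothetical between-subjects design: 3 training types x 2 practice levels.
    df = pd.DataFrame({
        "training": rng.choice(["Standard", "FOR", "FOR+RET"], size=n),
        "practice": rng.choice([3, 6], size=n),
    })

    # Accuracy for each of six experimental clips: A = |rating - true score|,
    # where lower scores indicate greater accuracy.
    true_scores = rng.uniform(2.0, 8.0, size=6)  # hypothetical true scores
    for k in range(6):
        ratings = true_scores[k] + rng.normal(0.0, 1.0, size=n)  # simulated ratings
        df[f"acc{k + 1}"] = np.abs(ratings - true_scores[k])

    # Joint multivariate test of the between-subjects factors on the six accuracy
    # scores; mv_test() reports Wilks' lambda for each model term. (The poster's
    # repeated measures MANOVA additionally treats clip as a within-subjects factor.)
    manova = MANOVA.from_formula(
        "acc1 + acc2 + acc3 + acc4 + acc5 + acc6 ~ training * C(practice)", data=df)
    print(manova.mv_test())

    # Post hoc criterion: spreading alpha = .05 over all 90 unique contrasts gives
    # .05 / 90 ~ .00056, consistent with the .0006 significance level used above.
    print(0.05 / 90)

    # Cohen's d with a pooled SD, the effect size reported for the Table 2 contrasts.
    def cohens_d(x, y):
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        nx, ny = len(x), len(y)
        pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
        return (x.mean() - y.mean()) / np.sqrt(pooled_var)

Note that the MANOVA call above tests the between-subjects factors jointly across the six accuracy scores; recovering the within-subjects clip effect reported on the poster would additionally require contrasts across the six outcomes, as in a full repeated-measures formulation.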
Table 1. Average un-transformed absolute accuracy scores, M (SD), across all experimental clips.

                      Academic Engagement                    Disruptive Behavior
    Condition         Low        Medium     High             Low        Medium     High
    Standard (3)¹     (1.88)     (1.22)     (0.96)           (1.30)     (1.10)     (0.46)
    Standard (6)¹     (0.53)     (1.13)     (0.70)           (0.92)     (0.87)     (0.58)
    FOR (3)²          (1.50)     (1.16)     .724 (0.63)      (0.80)     (1.12)     (0.49)
    FOR (6)²          (1.33)     (1.87)     (0.87)           (0.90)     (1.08)     (0.44)
    FOR+RET (3)³      (1.36)     (1.05)     (1.59)           (0.48)     (1.10)     (0.69)
    FOR+RET (6)³      (0.92)     (1.12)     (1.16)           (1.43)     (1.00)     (0.77)

¹ Standard training. ² Frame-of-Reference training. ³ Frame-of-Reference + Rater Error Training.
Note. Low, medium, and high refer to the percentage of time the behavior was displayed during each particular clip by the student of interest (0-25%, %, and %, respectively). Each of the six levels (3 levels × 2 behaviors) corresponds to one experimental video clip of student behavior rated by participants.

Table 2. Statistically significant comparisons of average rating accuracy.

    Clip    Comparison                      Estimate    SE    t    p    Cohen's d
    4       ST (3) vs. FOR (6)*
            ST (3) vs. FOR+RET (6)*
            FOR (6)* vs. FOR+RET (3)
            FOR+RET (3) vs. FOR+RET (6)*

* The most accurate group within the comparison.

SUMMARY AND CONCLUSIONS

The current investigation extends the literature on training components that promote rating accuracy with DBR. Prior research has produced somewhat mixed findings: work by Harrison and Riley-Tillman (2010) and Schlientz et al. (2009) suggested a "more is better" approach, with the most intensive training producing the most accurate raters, whereas the findings of LeBel and colleagues (2009) indicated that moderate levels of training may be sufficient. It is with this latter finding that the current investigation is aligned. Results of the current study were generally consistent across groups, with most groups not exhibiting statistically significantly greater accuracy than the others. However, the data did suggest that for certain clips, more practice with feedback led to greater accuracy regardless of the type of training given. This indicates that, as long as sufficient opportunity to practice is provided, a less intensive form of training (e.g., Standard training) may still produce accurate ratings of student behavior. This suggests the potential feasibility of DBR training in school settings, as relatively efficient training procedures might be incorporated as a first step in improving rater accuracy. It is recommended that future DBR-related work focus on the development of a standardized DBR training package; subsequent investigations should then examine the use of that package, including both its feasibility and effectiveness in applied settings.

BIBLIOGRAPHY

Harrison, S., & Riley-Tillman, T. C. (2010, March). Direct Behavior Ratings: Training strategies to improve accuracy. Presentation at the National Association of School Psychologists Annual Convention, Chicago, IL.

LeBel, T. J., Briesch, A. M., Kilgus, S. P., Riley-Tillman, T. C., Chafouleas, S. M., & Christ, T. J. (2009, February). Behavioral specificity and wording impact on Direct Behavior Rating accuracy. Poster presented at the National Association of School Psychologists Annual Convention, Boston, MA.

Schlientz, M. D., Riley-Tillman, T. C., Briesch, A. M., Walcott, C. M., & Chafouleas, S. M. (2009). The impact of training on the accuracy of Direct Behavior Ratings (DBRs). School Psychology Quarterly, 24.

CONTACTS

For additional information, please direct all correspondence to Chris Riley-Tillman or Jessica Amon. Preparation of this poster was supported by a grant from the Institute of Education Sciences (IES), U.S. Department of Education (R324B060014).