1
Engagement 2.0
Bryant Patten, Director of Learning and Technology, MedU
2
MedU
We build Virtual Patient (VP) cases
501(c)(3) non-profit
Hanover, NH
14 people
29,000 new students each year
1,000,000+ cases viewed per year
3
Disclosure: Employee of MedU
6
By engagement we mean: "the extent to which students are willing and able to take on the learning task at hand" or "the learner actively engages in cognitive processes for learning."
Our focus is cognitive engagement.
7
History: pre-2013
Tracked time on case and performance, but never showed performance
Time on card / all cards was not meaningful
8
Why
Committed to continuous improvement of our metrics
Research
9
Engagement Score Algorithm V1.0 (2013)
Components: answerScore, timeScore, toolbarScore, summaryScore
Equally weighted
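As a rough illustration of how V1.0's equal weighting could work in code, the sketch below averages four component scores and maps the total to the color bands listed on the poster slide further down (Red < 0.3, Yellow 0.3–0.5, Green > 0.5). It assumes each component has already been normalized to [0, 1]; the function names and that normalization are assumptions, not MedU's actual implementation.

```python
# Hypothetical sketch of the V1.0 engagement score: four equally weighted
# components, each assumed to be pre-normalized to [0, 1]. Cut-points follow
# the poster slide below (Red < 0.3, Yellow 0.3-0.5, Green > 0.5).

def engagement_score_v1(answer_score: float, time_score: float,
                        toolbar_score: float, summary_score: float) -> float:
    """Average of the four equally weighted components."""
    components = (answer_score, time_score, toolbar_score, summary_score)
    return sum(components) / len(components)

def engagement_band(score: float) -> str:
    """Map a total score to the red/yellow/green bands."""
    if score < 0.3:
        return "red"
    if score <= 0.5:
        return "yellow"
    return "green"

# Example: strong answers and summary, weak toolbar use.
total = engagement_score_v1(0.65, 1.0, 0.25, 1.0)
print(total, engagement_band(total))  # 0.725 green
```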
10
Summary Statement / Machine Learning
LightSIDE system
Current version scores 0 or 1
Code directly embedded in the CASUS display system
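For readers unfamiliar with this kind of scoring, a binary (0/1) summary-statement classifier can be prototyped along the lines below. This is a generic scikit-learn sketch with toy data, not the LightSIDE model or the CASUS integration; the training examples, features, and pipeline are all assumptions.

```python
# Hypothetical sketch of a binary (0/1) summary-statement scorer.
# Generic scikit-learn pipeline with toy data; NOT the LightSIDE/CASUS code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = acceptable summary statement, 0 = not acceptable.
train_texts = [
    "65 yo man with 2 hours of substernal chest pain radiating to the left arm",
    "patient is sick and needs help",
]
train_labels = [1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Score a new summary statement as 0 or 1.
new_summary = "58 yo woman with acute onset pleuritic chest pain and dyspnea"
summary_score = int(model.predict([new_summary])[0])
```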
11
Development and Validation of an Engagement Metric for Virtual Patient Cases
Norm Berman, MD; Anthony Artino, PhD
Geisel School of Medicine at Dartmouth and Uniformed Services University of the Health Sciences

BACKGROUND
Isolated online environments require learner autonomy and may not inherently foster learner cognitive engagement. Many clerkship directors are using time on case as an indicator of engagement, but empirical evidence suggests this approach is not optimal. Little is known about the factors in a virtual patient (VP) case that will promote cognitive engagement, which we define as "the degree to which students are willing and able to take on the learning task at hand."

OBJECTIVES
To develop and validate a computer-generated dynamic engagement score based on student interactions with MedU VP cases.

METHODS
Engagement Score Development: We developed an engagement score that includes four equally weighted components based on student interactions with the case, each of which is tracked by the VP software. A scoring algorithm and preliminary cut-points for determining low, moderate or good engagement were developed after reviewing log data from 20 randomly selected students.
Engagement Score Validation:
Content: Six medical educators were surveyed to establish content validity of the score components.
Response process: Four faculty members reviewed log data for 10 cases and scored student engagement as either low, moderate or good. We then assessed rater agreement with the empirically derived scoring cut-points using Pearson correlation, and we assessed inter-rater reliability using intra-class correlation for these ratings.
Consequence: We displayed the engagement score to students as a routine aspect of MedU case use.

RESULTS
Engagement Score Components:
Time on page: > 20 seconds
MCQ answer accuracy: cumulative percent
Use of clinical reasoning toolbar: scaled score (0–12)
Summary statement automated analysis and case match: binary score (0, 1)
Total score values: Red < 0.3; Yellow 0.3–0.5; Green > 0.5
Validity Evidence:
Content: All educators agreed that the components of the score reflect engagement.
Response process: Mean Pearson correlation = 0.98; mean inter-rater reliability = 0.98.
Consequence: Display of the engagement score to students impacts their behavior. Good engagement increased from 72% in week 1 to 86% in week 5.

CONCLUSIONS
A machine-generated engagement metric, based on student actions in a VP case, is feasible. Validity evidence suggests these scores may reflect important aspects of students' cognitive engagement with the VP cases.

IMPLICATIONS
The engagement score appears to be a good indicator of student interaction with MedU cases, and may be better than time on case. The engagement score, as an indicator of cognitive engagement, can serve as an important outcome measure in efforts to improve the design of VPs. The next step in collecting validity evidence for our engagement scores will include correlating these scores with students' self-reported cognitive engagement using a survey instrument that is currently being validated.

REFERENCES
Artino AR. Think, feel, act: motivational and emotional influences on military students' online academic success. J Comput High Educ. 2009;21:146–166.
Berman NB, Fall LH, Smith S, Levine DA, Maloney CG, Potts M, Siegel B, Foster-Johnson L. Integration strategies for using virtual patients in clinical clerkships. Acad Med. 2009;84(7):942–949.
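To make the response-process analysis above concrete, the sketch below shows one way the reported rater-agreement statistics could be computed: Pearson correlation between each faculty rater and the algorithm's cut-point categories, and a simple one-way intra-class correlation across raters. The example ratings, the 0/1/2 coding of low/moderate/good, and the ICC(1,1) formulation are assumptions, not the authors' actual data or analysis.

```python
# Hypothetical sketch of the response-process analysis: Pearson correlation
# between faculty ratings and the algorithm's cut-point categories, plus a
# simple one-way ICC(1,1) for inter-rater reliability. All data are made up.
import numpy as np
from scipy.stats import pearsonr

# Faculty ratings for 10 cases, coded low=0, moderate=1, good=2 (assumed coding).
faculty = np.array([
    [2, 1, 2, 0, 1, 2, 2, 1, 0, 2],   # rater 1
    [2, 1, 2, 0, 1, 2, 2, 2, 0, 2],   # rater 2
    [2, 1, 2, 1, 1, 2, 2, 1, 0, 2],   # rater 3
    [2, 1, 2, 0, 1, 2, 2, 1, 0, 2],   # rater 4
])
# Categories derived from the empirically determined score cut-points.
algorithm = np.array([2, 1, 2, 0, 1, 2, 2, 1, 0, 2])

# Mean Pearson correlation between each rater and the algorithm categories.
mean_r = np.mean([pearsonr(rater, algorithm)[0] for rater in faculty])

# One-way random-effects ICC(1,1) across the four raters.
k, n = faculty.shape                      # raters, cases
grand = faculty.mean()
case_means = faculty.mean(axis=0)
ms_between = k * ((case_means - grand) ** 2).sum() / (n - 1)
ms_within = ((faculty - case_means) ** 2).sum() / (n * (k - 1))
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

print(f"mean Pearson r = {mean_r:.2f}, ICC(1,1) = {icc:.2f}")
```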
12
Response from educators
Strong favorable response
Some summative use
Requests for more student information
13
Response from students
Anecdotal negative reaction when used summatively
Intense interest in the algorithm when used summatively
We do NOT tell students the score components
Odd combination of Digital Native and Digital Immigrant
14
Performance of students
Close to 90% getting green engagement
80% getting credit on the summary statement
65% correct answers
15
Engagement Score Algorithm V1.5
CORE components: answerScore, timeScore, clicks (hyperlinks & images)
Equally weighted
Not yet displayed
16
Concerns
Students gaming the system
Allowing faculty full access via publication
Tension between faculty and students: "requiring engagement" vs. "spying on me"
17
Engagement Score Algorithm V2.0
Currently under development
Components: multi-part answerScore, timeScore, toolbarScore (with semantic analysis), click tracking, multi-part summaryScore, targeted engagement tools
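Because V2.0 adds more components and a later slide mentions a parameterized algorithm, one plausible shape is a weighted combination with configurable weights, sketched below. The component names follow this slide; the weights, normalization, and function signature are assumptions, not the in-development implementation.

```python
# Hypothetical sketch of a parameterized V2.0-style score: named components
# combined with configurable weights. Component names follow the slide; the
# weights and [0, 1] normalization are assumptions, not MedU's algorithm.
from typing import Dict

DEFAULT_WEIGHTS: Dict[str, float] = {
    "answer_score": 1.0,
    "time_score": 1.0,
    "toolbar_score": 1.0,
    "click_score": 1.0,
    "summary_score": 1.0,
}

def engagement_score_v2(components: Dict[str, float],
                        weights: Dict[str, float] = DEFAULT_WEIGHTS) -> float:
    """Weighted average of component scores, each assumed to be in [0, 1]."""
    total_weight = sum(weights[name] for name in components)
    weighted = sum(weights[name] * value for name, value in components.items())
    return weighted / total_weight

score = engagement_score_v2({
    "answer_score": 0.7,
    "time_score": 0.9,
    "toolbar_score": 0.5,
    "click_score": 0.6,
    "summary_score": 1.0,
})
```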
18
Summary Statement / Machine Scoring V2.0
iParadigms system (Turnitin)
Five values of 0, 1 or 2: accuracy, narrows the differential diagnosis (DD), semantic qualifiers, transformative language, global summary
Web services model
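As an illustration of what a web-services scoring result for this rubric might carry, here is a small data structure holding the five 0/1/2 dimensions named above. The field names and the total() helper are assumptions for the example; this is not the iParadigms/Turnitin API.

```python
# Hypothetical shape for a V2.0 summary-statement scoring result returned by
# a web service. The five 0/1/2 dimensions come from the slide; field names
# and the total() helper are assumptions, not the Turnitin API.
from dataclasses import dataclass

@dataclass
class SummaryScoreV2:
    accuracy: int               # 0, 1 or 2
    narrows_dd: int             # narrows the differential diagnosis
    semantic_qualifiers: int
    transformative_language: int
    global_summary: int

    def total(self) -> int:
        """Sum of the five rubric values (0-10)."""
        return (self.accuracy + self.narrows_dd + self.semantic_qualifiers
                + self.transformative_language + self.global_summary)

result = SummaryScoreV2(accuracy=2, narrows_dd=1, semantic_qualifiers=2,
                        transformative_language=1, global_summary=2)
```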
19
Possible Future Directions
Next-generation machine learning
Parameterized algorithm
Feed our learning analytics (LA) work
Standards and collaboration
20
Engagement Standard
Does one exist? Is it time to create one?
21
Tin Can API (Experience API / xAPI)
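For context, Tin Can (xAPI) statements could in principle carry an engagement result; the illustrative statement below records a scaled engagement score and a color band for a VP case. The actor, activity ID, verb choice, and extension URI are made up for the example.

```python
# Illustrative Tin Can (xAPI) statement reporting an engagement score for a
# VP case. Actor, activity ID, and the extension URI are made-up examples.
statement = {
    "actor": {"mbox": "mailto:student@example.edu", "name": "Example Student"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.org/vp-cases/case-17",
        "definition": {"name": {"en-US": "Example VP case"}},
    },
    "result": {
        "score": {"scaled": 0.72},   # overall engagement score in [0, 1]
        "extensions": {
            "https://example.org/xapi/engagement-band": "green",
        },
    },
}
```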
22
IMS Caliper Analytics™ Interoperability Standards reached Candidate Final Release status on May 6, 2015