Slide 1
What Does That Mean? Author Selection of Virtual Patient Metrics
Rachel Ellaway, David Topps, Richard Witham
Northern Ontario School of Medicine
Slide 2
Designs for learning
– PBL
– Simulation
– CAL
– OSCE
– Games
– Virtual Patients
– Design using patterns, methods, templates
Slide 3
Virtual Patients
“an interactive computer simulation of real-life clinical scenarios for the purpose of health professions training, education, or assessment. Users may be learners, teachers, or examiners” (Ellaway, Candler et al. 2006)
– Response to changing needs
– Technological possibilities
– Breadth of applications
– Current themes
Slide 4
Problem Statement
“virtual patients can … be used in learner assessment, but scoring rubrics should emphasize non-analytical clinical reasoning rather than completeness of information or algorithmic approaches. Potential variations in VP design are practically limitless, yet few studies have rigorously explored design issues”
Cook, D. A. and Triola, M. M. (2009). Virtual patients: a critical literature review and proposed next steps. Medical Education (in press).
Slide 5
Metrics, assessment and feedback
– Educational game = rules + experience + simulation + educational design
– Educational design: objectives / intervention / feedback / outcomes
– Formative: feedback at each decision point, outcomes guided
– Summative: feedback at the end of the activity, outcomes unguided
– Game rules: agency, feedback, assessment
Slide 6
OpenLabyrinth
– Pattern-based: medical model
– Narrative-based: timeline, character, motive, causality
– Game-based: branching, strategy, scores, counters and rules
– OpenLabyrinth is an open-source VP authoring, delivery and feedback system. It supports all three forms, although without strong templating it is most useful for narrative- and game-based VPs.
Slide 7
Different kinds of OL VP metrics
– Reaching end point(s)
– Time taken
– Number of steps taken
– Patient model: survival, pulse, BP
– Professional model: DDx, Rx
– Other counters (keys, strength, chance factors)
– Steps/areas visited or avoided
– Sequence of steps
– Confidence of decision
– Aggregate/function of some or all of the above (see the sketch below)
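To make the list above concrete, here is a minimal sketch of what a per-session metrics record and an aggregate score could look like. The field names and weighting scheme are illustrative assumptions, not OpenLabyrinth's own data model.

```python
from dataclasses import dataclass, field

# Hypothetical per-session record of the measures listed above; the field
# names are illustrative, not OpenLabyrinth's actual data model.
@dataclass
class SessionMetrics:
    reached_endpoints: list[str] = field(default_factory=list)   # end point(s) reached
    total_seconds: float = 0.0                                   # time taken
    steps: list[str] = field(default_factory=list)               # sequence of node ids visited
    counters: dict[str, float] = field(default_factory=dict)     # e.g. pulse, BP, keys
    confidence: dict[str, float] = field(default_factory=dict)   # confidence per decision

    @property
    def step_count(self) -> int:                                 # number of steps taken
        return len(self.steps)

def aggregate_score(m: SessionMetrics, weights: dict[str, float]) -> float:
    """An aggregate/function of the metrics above: a simple weighted sum.
    The weighting scheme is the author's choice, not a fixed OL feature."""
    return (weights.get("steps", 0.0) * m.step_count
            + weights.get("time", 0.0) * m.total_seconds
            + sum(weights.get(k, 0.0) * v for k, v in m.counters.items()))
```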
Slide 8
OL metrics (time and sequence)
[Figure: traversal sequence, key nodes, time per decision]
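The time-per-decision metric labelled in the figure can be derived from a timestamped visit log. A minimal sketch, assuming a hypothetical (node, timestamp) log format rather than OpenLabyrinth's actual export:

```python
# Derive time-per-decision from a timestamped visit log.
# The (node_id, seconds_from_start) format is assumed for illustration.
def time_per_decision(visits: list[tuple[str, float]]) -> dict[str, float]:
    """Seconds spent on each node before the user moved on."""
    times: dict[str, float] = {}
    for (node, t), (_, t_next) in zip(visits, visits[1:]):
        times[node] = times.get(node, 0.0) + (t_next - t)
    return times

# Example: three decisions, timestamps in seconds from session start.
log = [("intake", 0.0), ("order_ecg", 42.5), ("give_aspirin", 65.0), ("end", 71.2)]
print(time_per_decision(log))  # {'intake': 42.5, 'order_ecg': 22.5, 'give_aspirin': 6.2}
```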
Slide 9
OL metrics (counters and sequence)
[Figure: counter value change over time – trend, max, min, end values]
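The counter summaries labelled in the figure (trend, max, min, end values) reduce to simple arithmetic over a time series. A sketch, assuming a hypothetical (timestamp, value) series format:

```python
# Summarise a counter's change over time: trend, max, min, and end value.
# The (timestamp, value) series format is illustrative.
def counter_summary(series: list[tuple[float, float]]) -> dict[str, float]:
    values = [v for _, v in series]
    t0, v0 = series[0]
    t1, v1 = series[-1]
    return {
        "trend": (v1 - v0) / (t1 - t0) if t1 > t0 else 0.0,  # units per second
        "max": max(values),
        "min": min(values),
        "end": v1,
    }

# Example: a 'patient health' counter sampled at each decision point.
health = [(0.0, 100.0), (40.0, 90.0), (70.0, 95.0), (120.0, 60.0)]
print(counter_summary(health))
# {'trend': -0.333..., 'max': 100.0, 'min': 60.0, 'end': 60.0}
```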
Slide 10
OpenLabyrinth Design
– Three design dimensions: narrative, simulation, game
– Three implementation dimensions: topology, rules, content
– Authoring process:
  – Deductive: objectives > key points > narrative > critical story path (CSP) > branching > rules > media
  – Inductive: narrative > CSP > branching > key points > objectives > rules > media
– Recurring issues with the best use of metrics
Slide 11
Author selection of metrics
– Focus on the critical story path, branching distractors, or multiple clues/resources
– Typically one successful endpoint plus several failure endpoints; few mazes or phases
– Counters: time, patient health (typically not money or reputation)
– Conditionals used to regulate flow rather than to measure (see the sketch below)
– Largely formative; summative uses have tended to fall back on tried-and-tested modes, e.g. key feature problems
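The distinction between a conditional that regulates flow and a metric that measures can be shown in a few lines. A hypothetical sketch, not OpenLabyrinth's rule syntax:

```python
# A conditional used to regulate flow rather than to measure: the counter
# gates which node the learner may enter next, but never appears in a score.
# Rule shape and names are hypothetical, not OpenLabyrinth's rule syntax.
def next_node(counters: dict[str, float], choice: str) -> str:
    if choice == "discharge_patient" and counters.get("patient_health", 0) < 50:
        return "crash_scenario"   # redirect: patient too unstable to discharge
    return choice

print(next_node({"patient_health": 42}, "discharge_patient"))  # crash_scenario
```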
Slide 12
Four dimensions of validation
– Face validity: presentation and interface – not a metrics issue
– Content validity: relation to domain and context – not a metrics issue
– Predictive validity: functions as predicted by practitioners/experts
– Convergent and discriminant validity: performance correlates with other measures as predicted by practitioners/experts
Slide 13
Predictive validity techniques
– Design: review overall options and design; standard setting (modified Angoff: probabilistic expert estimate of the performance of minimally passing learners – a worked example follows)
– Suitability: pilot with 3-5 representative candidates to evaluate understandability, accessibility, performability, usability and applicability
– Runtime: review and validate the different ways a VP can be executed by a candidate
  – Experts/authors (problems with non-linear extrapolation of expertise)
  – Excellent candidates
  – Minimally passing candidates
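The modified Angoff procedure above reduces to averaging expert probability estimates per item and summing across items. A sketch with made-up ratings:

```python
# Modified Angoff standard setting: each expert estimates the probability
# that a minimally passing learner gets each item/decision right; the cut
# score is the mean estimate summed over items. Ratings below are made up.
ratings = {           # expert -> per-item probability estimates
    "expert_a": [0.7, 0.5, 0.9, 0.6],
    "expert_b": [0.6, 0.4, 0.8, 0.7],
    "expert_c": [0.8, 0.5, 0.9, 0.5],
}
n_items = 4
item_means = [sum(r[i] for r in ratings.values()) / len(ratings)
              for i in range(n_items)]
cut_score = sum(item_means)   # expected items correct for a borderline learner
print(item_means, cut_score)  # [0.7, 0.466..., 0.866..., 0.6] -> cut ≈ 2.63
```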
Slide 14
Metrics and Standards
– SCORM 1 has basic tracking (pilot: Peter – MedBiq 2005), with the expectation of more in SCORM 2 – watch this space
– The MedBiquitous Virtual Patient (MVP) specification is an extension of SCORM; tracking is a base functional requirement: nodes visited and counter values in a session
– OL implements MVP (at least two variants)
– However, no standard currently exists that models objective measures of performance in VPs; tracking is purely for runtime
Slide 15
R&D: telemetric research
– VERSE (Virtual Educational Research Services Environment)
– Remote telemetric tracking and database for Second Life, haptics (Omni), OpenLabyrinth, Mitsubishi light surfaces
– Generic data tracking model (see the sketch below)
– Requires major storage and parsing
– Creates new opportunities for metrics development and modeling
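A generic data tracking model implies one event shape shared by all tracked devices. A sketch, assuming hypothetical field names rather than the actual VERSE schema:

```python
from dataclasses import dataclass
import time

# A generic tracking event shared across heterogeneous devices
# (Second Life, haptics, OpenLabyrinth, ...). Field names are assumptions,
# not the actual VERSE schema.
@dataclass
class TelemetryEvent:
    source: str        # e.g. "openlabyrinth", "omni_haptic", "second_life"
    session_id: str
    timestamp: float
    kind: str          # e.g. "node_visit", "grip_force", "avatar_move"
    payload: dict      # device-specific detail, parsed downstream

event = TelemetryEvent("openlabyrinth", "s-001", time.time(),
                       "node_visit", {"node": "order_ecg"})
```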
Slide 16
R&D: telemetric research
– Network-enabled platform
– Edge services = device + wrapper (see the sketch below)
– Heterogeneous devices: virtual patients (OpenLabyrinth), mannequins (Laerdal SimMan 3G), light fields (virtualised cameras), 3D visualization (RSV and Volseg), multiple data sources (CMA, Medline)
– Integrated service model for connecting, controlling and intertwining heterogeneous devices (physical, online, endpoint, model, source, renderer, aggregator)
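"Edge services = device + wrapper" suggests each heterogeneous device is wrapped behind one common control-and-telemetry interface. A hypothetical sketch, not the HSVO API:

```python
from abc import ABC, abstractmethod

# "Edge service = device + wrapper": each heterogeneous device sits behind
# the same control/telemetry interface so the networked platform can connect,
# control and intertwine them. The interface is hypothetical, not the HSVO API.
class EdgeService(ABC):
    @abstractmethod
    def connect(self) -> None: ...
    @abstractmethod
    def send_command(self, command: str, **params) -> None: ...
    @abstractmethod
    def poll_telemetry(self) -> list[dict]: ...

class SimManWrapper(EdgeService):
    """Wraps a Laerdal SimMan 3G mannequin (transport details omitted)."""
    def connect(self) -> None: ...
    def send_command(self, command: str, **params) -> None: ...
    def poll_telemetry(self) -> list[dict]:
        return [{"kind": "vitals", "pulse": 88, "bp": "120/80"}]
```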
Slide 17
HSVO Service Architecture
[Figure: HSVO service architecture diagram]
Slide 18
Alien VPs and metrics
– Physiognomic models – physiognomes
– Ontology and AI
– Persistent avatars
– EHRs as VPs
– Data shadows
– Increasing convergence, augmented by new mashups: geotagging, transponder feeds, Twitter feeds
Slide 19
Where next?
– Current VP models provide rich but relatively under-used metrics
– New VP models produce rapidly increasing dimensions and detail
– Key issues:
  – Testing validation methods
  – Testing the correlation between VP and real-world performance
  – Identifying and separating causal and coincidental factors
  – Developing a predictive mapping between VP design, the selection and use of metrics, and the reliability of conclusions
– Platform and design development are still in flux: creativity and opportunity are ahead of knowledge
– Standards will follow once an evidence base is established and validated