1
Safety & usability - methods for evaluating the user interface from the ePrescribing & Common User Interface programmes James Fone & Kit Lewis
2
e.g.:
Contextual observation
Participatory design
Usability testing
Participatory risk assessment
Heuristic evaluation
3
Usability and Safety – A New Method for Expert Evaluation of Healthcare IT Systems
Kit Lewis, User Experience Lead, ePrescribing
28th April 2009
5
A bit of design history…
1943 – The US Air Force noted a high frequency of a certain type of “pilot error”, and sent a psychologist (Alphonse Chapanis) to investigate…
6
Relief & stress release
Wartime, long missions, high stress, little rest – relief to have made it home alive…
7
Wheels-up after landing
Only in certain types of aircraft (P-47, B-17, B-25)… Why?
8
Proximity & similarity of levers
Levers in close proximity with similar shape and actuation
Flap lever – retract after landing
Landing gear lever – DON’T retract after landing
* P-47 cockpit schematic (1942)
9
Proximity & similarity of levers
Levers physically separated with mnemonically-shaped handles
Flap lever – retract after landing
Landing gear lever – DON’T retract after landing
Tail-hook lever – retract after landing
* F-4B cockpit schematic (1960)
10
‘Error trap’ in aviation
11
Error traps in healthcare?
We can reduce these with better design
We don’t have much influence on these
12
Healthcare UI error traps
Small set of categories, derived from:
Basic error types from psychology & human factors
Industry-standard usability heuristics (initially Nielsen 1990, then refined and extended by many others)
THEA method for error prediction in early-stage design (Harrison et al, York & Newcastle Universities, 2001)
Observed errors reported in the literature (numerous references to usability-related problems)
CUI programme: experience producing UI standards and usability best practice
13
Making actions (and sequences) logically and visually clear and distinct from each other (and the “background”)
Reducing the need for the user to remember what to do next, where they are now or what information is available
Ensuring that the current “situation” is both logically and visually clear, and that the effects of actions are understood both before and after the fact
15
Step 1 Nurse has just finished administering Patient A’s medications. Time to move on to the next patient.
16
Step 2 Nurse checks Patient B’s wristband against the patient banner on-screen, and prepares to administer medications.
17
Step 3 Patient A calls from the next bed, asking for his as-required medication for pain. Nurse clicks back to Patient A’s drug chart.
18
Step 4 Nurse administers Patient A’s medication, then clicks to return to Patient B. But because he’s in a hurry, he clicks twice by mistake…
19
Result Patient C’s medications are administered to Patient B.
21
Step 1 A doctor using a tablet PC wants to prescribe 500 micrograms of betamethasone for Patient A. She enters “betam” and selects “betamethasone”.
22
Step 2 The system returns a list of all the standard prescriptions for betamethasone. As she walks down the corridor, the doctor prepares to select her prescription.
23
Step 3 As she’s walking along, she inadvertently selects the standard prescription adjacent to the one she wanted…
24
Step 4 Now that the standard prescription has been selected, the doctor hits “OK” to complete the process while moving aside to allow a trolley past.
25
Result Patient A has just been prescribed a dose of 5000 micrograms of betamethasone, a 10-fold dose error.
26
Scoring method Evaluation follows a consistent scenario
Starting assumption: no problems, score = 100%
Each error trap uncovered reduces the overall score
Error traps are scored according to severity of problem: Severe – 30 points, Major – 15 points, Minor – 5 points
Multiple instances of the same underlying problem are not counted
Category scores are weighted according to potential risk to patients (total = 1.0), e.g. “Dissociating a task from its object” has a 0.2 weighting
27
Category weighting
0.2 weighting - Potential to result in severe / fatal errors
A - Dissociating a task from its object
B - Disguising the switching of context
0.13 or 0.1 weighting - Potential to result in major errors
D - Over-complicating the selection of an item or parameter
E - Warning the user inappropriately
L - Requiring memory or calculation within and between screens
0.07 or 0.04 weighting - Potential to result in minor errors
C - Confusing the sequence or state of a process
F - Creating the illusion of support
G - Creating the illusion of completeness
H - Using confusing or inconsistent words, symbols and layout
J - Overloading the user with information
L - Not providing feedback in response to user input
28
Scoring method Example
An error trap is found within Category D: Over-complicating the selection of an item or parameter
“Drug product search results are presented alphanumerically, not in ascending order of calculated strength, making mis-selection errors more likely and potentially increasing their impact”
The problem is severe: subtract 30 points
The weighting for the category is 0.1
So the error trap reduces the overall score by 30 x 0.1 = 3%
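The scoring arithmetic on this slide can be sketched in a few lines of Python. This is an illustrative sketch only: the severity deductions and the Category D weighting come from the slides, but the function name, the data structures, and the assumption that error traps arrive already de-duplicated are assumptions for illustration.

```python
# Illustrative sketch of the "error trap" scoring method.
# Severity deductions (severe / major / minor) and the example Category D
# weighting come from the slides; everything else is assumed.

SEVERITY_DEDUCTION = {"severe": 30, "major": 15, "minor": 5}

def overall_score(error_traps, category_weights):
    """Compute the overall usability score as a percentage.

    error_traps: iterable of (category, severity) pairs, already
        de-duplicated so each underlying problem counts only once.
    category_weights: dict mapping category letter to its risk
        weighting (weightings sum to 1.0 across all categories).
    """
    score = 100.0  # starting assumption: no problems
    for category, severity in error_traps:
        score -= SEVERITY_DEDUCTION[severity] * category_weights[category]
    return score

# Worked example from the slide: one severe trap (30 points) in
# Category D (weighting 0.1) reduces the score by 30 x 0.1 = 3%.
print(overall_score([("D", "severe")], {"D": 0.1}))  # 97.0
```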
29
ePrescribing system evaluation
Aim: assess the functional capabilities and safety features of 11 existing hospital ePrescribing systems
Questionnaire (functionality & technology platform)
Scenario-based demonstration assessed by a panel of 30 clinicians and ePrescribing experts
Scenario covers admission, in-patient prescribing, pharmacy verification, decision support, administration & discharge
Scenario is closely related to the NHS CFH ePrescribing Hazard Framework & Functional Specification
“Error trap” expert usability evaluation
30
Perception of completeness
31
Total points scored per category
Hard to evaluate objectively
32
Error trap score vs. panel’s scores
At the end of the scripted demo, the panelists voted Yes / No to the question: Is this system safe?
The panel’s total score against the NHS CFH ePrescribing Hazard Framework
The total score of the panel’s subjective usability questions, asked after each section of the demo
Using the “Error trap” method
33
Conclusion
Strong correlation between the different scoring methods (aided by a high-quality scenario)
The expert usability evaluation score is a very good predictor of how clinicians and eP experts will view a prescribing system
Strong validation of the categories and overall approach
Still dependent on the “expert” and the scenario
Supplements end-user involvement: early and often, using prototypes (paper, interactive, full system)
34
Thank you. Questions after James’s presentation…