Slide 1: Reasoning about human error with interactive systems based on formal models of behaviour
Paul Curzon, Queen Mary, University of London
Slide 2: Acknowledgements
- Ann Blandford (UCL)
- Rimvydas Rukšėnas (QMUL)
- Jonathan Back (UCL)
- George Papatzanis (QMUL)
- Dominic Furniss (UCL)
- Simon Li (UCL)
- …plus various QMUL/UCL students
Slide 3: Background
- The design of computer systems (including safety-critical systems) has historically focused on the hardware and software components of an interactive system.
- People have typically been left outside the boundary of the system considered for verification.
Slide 4: Can we bring users into the development process?
We want an approach that:
- talks at the same level of abstraction as established software development;
- accounts for the cognitive causes of error;
- does not require historical data to establish error probabilities;
- does not demand a strong cognitive science background of the analyst.
Slide 5: The Human Error Modelling (HUM) project
- Systematically investigate human error and its causes.
- Formalise the results in a user model that is included in the "system" for verification: a model of cognitively plausible behaviour.
- Investigate ways of "informalising" the knowledge to make it usable in practice, with a focus on dynamic context-aware systems.
- Improve understanding of actual usability design practice.
Slide 6: Systematic errors
- Many errors are systematic: they have cognitive causes and are NOT due to a lack of knowledge of what should be done.
- If we understand the patterns of such errors, we can minimise their likelihood through better design.
- If we formalise the behaviour from which they emerge, we can develop verification tools to identify problems.
Slide 7: Post-completion errors (PCEs)
- Characterised by a clean-up or confirmation operation that remains after the main goal has been achieved.
- Infrequent but persistent.
- Examples: leaving the original on the photocopier; leaving the petrol filler cap behind at the petrol station; etc.
Slide 8: Experiments, e.g. fire engine dispatch [screenshot]
Slide 9: Call prioritization [screenshot]
Slide 10: The structure of specifications [figure]
Slide 11: Generic user model in SAL
Cognitive principles:
- Non-determinism
- Relevance
- Salience
- Mental vs. physical
- Pre-determined goals
- Reactive behaviour
- Voluntary completion
- Forced termination

SAL skeleton (bodies elided on the slide):

    UserModel{goals, actions, …} = …
    TRANSITION
      ([] g, slc: Commit_Action: …)
      []
      ([] a: Perform_Action: …)
      []
      Exit_Task: …
      []
      Abort_Task: …
      []
      Idle: …
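To make the shape of this structure concrete, the following Python sketch re-expresses one step of such a user model as a nondeterministic choice among guarded transitions. It is illustrative only, not the authors' SAL: the state fields (goals, committed, done, goal_achieved, finished) and the guard/effect details are assumed stand-ins for the elided slide bodies.

    import random

    def step(state: dict) -> dict:
        """One nondeterministic step of a toy user model (assumed field names)."""
        choices = []
        # Commit_Action: mentally commit to any goal not yet committed or done
        for g in state["goals"] - state["committed"] - state["done"]:
            choices.append(lambda s, g=g: {**s, "committed": s["committed"] | {g}})
        # Perform_Action: physically perform any committed action
        for a in state["committed"]:
            choices.append(lambda s, a=a: {**s, "committed": s["committed"] - {a},
                                           "done": s["done"] | {a}})
        # Exit_Task: voluntary completion once the goal is perceived as achieved
        if state["goal_achieved"]:
            choices.append(lambda s: {**s, "finished": True})
        # Abort_Task: forced termination is always possible; Idle: do nothing
        choices.append(lambda s: {**s, "finished": True})
        choices.append(lambda s: s)
        return random.choice(choices)(state)

Here random.choice merely simulates one possible run; a model checker such as SAL's explores every choice exhaustively, which is what lets it find rare but systematic errors.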
Slide 12: Recent work: salience and cognitive load
Our early work suggested the importance of salience and cognitive load. Humans rely on various cues to correctly perform interactive tasks:
- procedural cues are internal;
- sensory cues are provided by interfaces;
- sensory cues can strengthen procedural cueing (Chung & Byrne, 2004).
Cognitive load can affect the strength of both sensory and procedural cues.
Slide 13: Aims
- Determine the relationship between salience and cognitive load.
- Extend (refine) our cognitive architecture with salience and load rules.
- Assess the formalisation by modelling the task used in the empirical studies.
- Highlight further areas where empirical studies are needed.
Slide 14: Approach
- Use the fire engine dispatch task to develop an understanding of the link between cognitive load and salience.
- Re-analyse all previous experiments to refine and validate that understanding, identifying the load and salience of individual elements.
- Informally devise a rule for the relationship.
- Formalise the informal rule in the user model.
- Model and verify one detailed experimental scenario: fire engine dispatch.
- Compare the model's predicted results with those from the experiment.
Slide 15: Experimental setting
- Hypothesis: slip errors are more likely when the salience of cues is not sufficient to influence attentional control.
- Variables: intrinsic and extraneous cognitive load.
Slide 16: Fire engine dispatch [screenshot]
Slide 17: Results [figure]
Slide 18: Interpretation of empirical data
- High intrinsic load reduces the salience of procedural cues.
- High intrinsic and extraneous load together may reduce the salience of sensory cues.
Slide 19: Formal salience and load rules
Types: Salience = {High, Low, None}; Load = {High, Low}

Procedural rule:

    if default = High ∧ intrinsic = High
    then procedural = Low
    else procedural = default

Sensory rule:

    if default = High ∧ intrinsic = High ∧ extraneous = High
    then sensory ∈ {High, Low}    (chosen nondeterministically)
    else sensory = default
Slide 20: Levels of overall salience

    HighestSalience(…) ≡ procedural = High ∨ (procedural = Low ∧ sensory = High)
    HighSalience(…)    ≡ procedural = None ∧ sensory = High
    LowSalience(…)     ≡ …
Slide 21: Choice priorities

    [] g, slc: Commit_Action:
          HighestSalience(g, …)
        ∨ (HighSalience(g, …) ∧ ¬(∃ h: HighestSalience(h, …)))
        ∨ (LowSalience(g, …) ∧ ¬(∃ h: HighestSalience(h, …) ∨ HighSalience(h, …)))
        … --> commit[…] = committed; status = …
Slide 22: Correctness verification
- Use model checking to reason about properties of the combined user model and fire engine dispatch system model.
- Compare the verification results with the actual results from the experiment.
Slide 23: Correctness verification

    Functional correctness:  System ⊨ EVENTUALLY(PerceivedGoalAchieved)
    'Decide mode' goal:      System ⊨ ALWAYS(RouteConstructed → ModeChosen)
Slide 24: Formal verification & empirical data

    Load (Extraneous + Intrinsic) | Initialize error | Mode error | Term error
    Low + High                    | …                | …          | …
    Low + Low                     | …                | …          | …
    High + High                   | …                | …          | …
Slide 25: Results (again) [figure]
Slide 26: Summary
- An abstract (simple) formalisation of salience & load shows close correlation with the empirical data for some errors:
  - Initialization error: match
  - Mode error: false positives
  - Termination error: false negative in one condition
- Further refinement of the salience & load rules requires new empirical studies.
- Demonstrates how empirical studies and formal modelling can feed each other.