1
Human Performance Metrics for ATM Validation Brian Hilburn NLR Amsterdam, The Netherlands
2
Overview Why consider Human Performance? How / When is HumPerf considered in validation? Difficulties in studying HumPerf Lessons Learnt Toward a comprehensive perspective… ( example data )
3
Traffic Growth in Europe: chart of movements (millions) vs. year, 1975-2010, showing actual traffic plus high, medium and low forecasts.
4
Accident Factors
5
Why consider HUMAN metrics?
- Unexpected human (ab)use of equipment etc.
- New types of errors and failures
- Costs of real-world data are high
- New technologies often include new & hidden risks
- Operator error vs. designer error
- Transition(s) and change(s) are demanding
Implementation (and failure) is very expensive!
6
Famous Human Factors disasters
- Titanic
- Three Mile Island
- Space Shuttle
- Bhopal
- Cali B-757
- Paris A-320
- FAA/IBM ATC
7
When human performance isn't considered...
8
9
What is being done to cope? Near- and medium-term solutions:
- RVSM
- BRNAV
- FRAP
- Civil / military airspace integration
- Link 2000
- Enhanced surveillance
- ATC tools
10
ATM: The Building Blocks
- Displays (e.g. CDTI)
- Tools (e.g. CORA)
- Procedures (e.g. FF-MAS transition)
- Operational concepts (e.g. Free Flight)
11
Monitoring in Free Flight: the Ops Con drives the ATCo's task!
12
NLR Free Flight validation studies
- Human factors design & measurements
- Ops Con + displays + procedures + algorithms
- Retrofit automation & displays
  - TOPAZ: no safety impairment
  - no pilot workload increase with 3 times present en-route traffic
  - delay, fuel & emission savings
- ATC controller impact(s)
  - collaborative workload reduction
- Info at NLR website
13
The aviation system test bed: an Experiment Scenario Manager injects scenario 'events' into both stations, linked by data links and two-way radio; system data and human data are collected from each side.
14
Evaluating ATCo Interaction with New Tools
- Human Factors trials
- ATCos + pilots
- Real-time simulation
- Subjective data
- Objective data also
15
Objective Measures
- Heart rate
- Respiration
- Scan pattern
- Pupil diameter
- Blink rate
- Scan randomness
Integrated with subjective instruments... HEART Analysis Toolkit
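Most of these physiological measures reduce to simple time-series statistics. As a minimal, hypothetical sketch (not the NLR HEART toolkit itself), the function below computes RMSSD, a standard time-domain heart-rate-variability index, from a list of inter-beat intervals:

```python
def rmssd(ibi_ms):
    """Root mean square of successive differences between inter-beat
    intervals (in ms) -- a common time-domain HRV index. Higher RMSSD
    generally indicates greater parasympathetic activity, i.e. lower load."""
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

# Example: four inter-beat intervals recorded during a task segment
print(round(rmssd([800, 810, 790, 805]), 2))
```

In a validation trial this kind of index would be computed per scenario segment and compared against the traffic-load or automation condition.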
16
Correlates of Pupil Diameter: emotion, age, relaxation / alertness, habituation, binocular summation, incentive (for easy problems), testosterone level, political attitude, sexual interest, information-processing load, light reflex, dark reflex, lid-closure reflex, volitional control, accommodation, stress, impulsiveness, taste, alcohol level.
17
Pupil Diameter by Traffic Load
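Because pupil diameter responds to so many factors besides workload, it is typically baseline-corrected: the resting diameter is subtracted from the task diameter, as in the PupDiam(TX) - PupDiam(base) operationalisation used later in these results. A minimal illustrative sketch (function name is hypothetical):

```python
def baseline_corrected(pupil_task_mm, pupil_rest_mm):
    """Workload index: mean pupil diameter during the task minus the
    mean resting baseline (both in mm). A positive value suggests
    increased information-processing load, other factors held constant."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(pupil_task_mm) - mean(pupil_rest_mm)

# Example: samples under high traffic vs. a resting baseline
print(round(baseline_corrected([4.2, 4.4], [3.9, 4.1]), 2))
```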
18
Automation: assistance or burden? Conflict detection & resolution tools, an arrival management tool and a communication (datalink) tool, shown on a timeline running from traffic pre-acceptance through hand-off.
19
Low traffic: visual scan trace, 120 sec.
20
High traffic: visual scan trace, 120 sec.
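The "scan randomness" measure listed earlier can be quantified in several ways; one common approach (a sketch, not necessarily the analysis used in these trials) is the Shannon entropy of gaze transitions between areas of interest: a stereotyped scan yields low entropy, a more random scan under high traffic yields higher entropy.

```python
from math import log2
from collections import Counter

def transition_entropy(aoi_sequence):
    """Shannon entropy (bits) of transitions between areas of interest
    (AOIs). Input is the ordered sequence of AOI labels fixated."""
    pairs = list(zip(aoi_sequence, aoi_sequence[1:]))
    counts = Counter(pairs)
    n = len(pairs)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A strictly alternating scan between two AOIs vs. a fixed stare
print(round(transition_entropy("ABAB"), 3))
print(transition_entropy("AAAA"))
```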
21
Positive effect of automation on heart rate variability
22
Positive effect of automation on pupil size
23
Better detection of unconfirmed ATC data up-links
24
No (!) positive effect on subjective workload
25
Objective vs Subjective Measures. The Catch-22 of introducing automation: "I'll use it if I trust it. But I cannot trust it until I use it!"
26
Automation & Traffic Awareness
27
Converging data: The VINTHEC approach. Team Situation Awareness:
- EXPERIMENTAL: correlate behavioural markers with physiological measures
VS
- ANALYTICAL: Game Theory; predictive model of teamwork
28
Free Routing (FRAP): Implications and challenges
Implications: airspace definition, automation tools, training, ATCo working methods, ops procedures.
Challenges: operational, technical, political, human factors.
29
Sim 1: Monitoring for FR Conflicts
- ATS routes
- Direct routing: airways plus direct routes
- Free routes: structure across sectors
30
Sim 1: Conflict detection response time (secs), by route structure.
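Analyses like this one reduce to grouping detection response times by experimental condition. A minimal illustrative sketch (condition labels and data are hypothetical):

```python
def mean_rt_by_condition(trials):
    """Given (condition, response_time_secs) pairs from a conflict-
    detection task, return the mean response time per condition."""
    sums, counts = {}, {}
    for condition, rt in trials:
        sums[condition] = sums.get(condition, 0.0) + rt
        counts[condition] = counts.get(condition, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}

# Hypothetical trials across two route structures
print(mean_rt_by_condition([("ATS", 4.0), ("ATS", 6.0), ("Free", 9.0)]))
```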
31
Studying humans in ATM validation
- Decision-making biases: ATC = skilled, routine, stereotyped
- Reluctance: organisational / personal (job threat)
- Operational rigidity: unrealistic scenarios
- Transfer problems: skills hinder interacting with the system
- Idiosyncratic performance: system is strategy-tolerant
- Inability to verbalise skilled performance: automaticity
32
Moving from CONSTRUCT to CRITERION: Evidence from CTAS Automation Trials Time-of-flight estimation error, by traffic load and automation level.
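The move from construct to criterion replaces an abstract notion like situation awareness with a measurable error, here operationalised (as in the synthesis table later in the deck) as |ATA - ETA|: the absolute difference between the controller's estimated and the actual time of arrival. A hypothetical sketch:

```python
def mean_sa_error(estimates):
    """Criterion-level SA proxy: mean |ATA - ETA| over (eta, ata)
    pairs, in seconds. Smaller error = better traffic prediction."""
    return sum(abs(ata - eta) for eta, ata in estimates) / len(estimates)

# Two estimates: 10 s late on one flight, 5 s early on another
print(mean_sa_error([(100, 110), (200, 195)]))
```

This kind of criterion can then be compared directly across traffic-load and automation conditions, as in the CTAS trials.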
33
Controller Resolution Assistant (CORA)
EUROCONTROL, Bretigny (F); POC: Mary Flynn
- Computer-based tools (e.g. MTCD, TP, etc.)
- Near-term operational
- Two phases:
  - CORA 1: identifies conflicts, controller solves
  - CORA 2: system provides advisories
34
CORA: The Challenges
Technical challenges... Ops challenges... HF challenges:
- Situation awareness
- Increased monitoring demands
- Cognitive overload
- Mis-calibrated trust
- Degraded manual skills
- New selection / training requirements
- Loss of job satisfaction
35
CORA: Experiment
- Controller preference for resolution order
- Context specificity
- Time benefits (response time) of CORA
36
Synthesis of results

Construct        | Operationalised definition   | Result
-----------------|------------------------------|-----------------------------
SA               | |ATA - ETA|                  | Auto x Traffic
Workload         | PupDiam(TX) - PupDiam(base)  | Datalink display reduces WL
Decision making  | Response bias                | Intent benefits strategies
Vigilance        | RT to alerts                 | FF = CF
Attitude         | Survey responses             | FF OK, but need intent info
37
Validation strategy
- Full mission simulation: address human behaviour in the working context
- Converging data sources (modelling, simulation (FT, RT), etc.)
- Comprehensive data (objective and subjective)
- Operationalise terms (SA, WL)
- Assessment of strategies: unexpected behaviours, or covert decision-making strategies
38
Human Performance Metrics: Potential Difficulties
- Participant reactivity
- Cannot probe infrequent events
- Better links sometimes needed to operational issues
- Limits of some (e.g. physiological) measures:
  - intrusiveness
  - non-monotonicity / task dependence
  - reliability, sensitivity
  - time-on-task, motor artefacts
- Partial picture: motivational, social, organisational aspects
39
Using HumPerf Metrics
- Choose the correct population
- Battery of measures for converging evidence
- Adequate training / familiarisation
- Recognise that behaviour is NOT the inner process
- More use of cognitive elicitation techniques
- Operator (i.e. pilot / ATCo) preferences: weak experimentally, but strong organisationally?
40
Validation metrics: Comprehensive and complementary
- Subjective measures are easy, cheap, face-valid
- Subjective measures can tap acceptance (w.r.t. new technology)
- Objective and subjective measures can dissociate
- Do they tap different aspects (e.g. of workload)? E.g. training needs identified
- Both are necessary, neither sufficient
41
Operationalise HF validation criteria
- HF world (SA, workload) vs. Ops world (nav accuracy, efficiency)
- The gap limits dialogue between the HF and Ops worlds
- Moving from construct (SA) to criterion (traffic prediction accuracy)
42
Summing Up: Lessons Learnt
- Perfect USER versus perfect TEST SUBJECT (experts?)
- Objective vs subjective measures: both necessary, neither sufficient
- Operationalise terms: pragmatic, bridge worlds
- Part-task testing in design; full-mission validation
- Knowledge elicitation: STRATEGIES
43
Summing Up (2)...
Why consider Human Performance?
» New ATM tools etc. needed to handle demand
» Humans are the essential link in the system
How / when is HumPerf considered in validation?
» Often too little, too late...
Lessons Learnt
» Role of objective versus subjective measures
» Choosing the correct test population
» Realising the potential limitations of experts
Toward a comprehensive perspective...
» Bridging the experimental and operational worlds
44
Thank You... for further information: Brian Hilburn NLR Amsterdam tel: +31 20 511 36 42 hilburn@nlr.nl www.nlr.nl