1
Human & Organizational Performance – H.O.P.
Bob Edwards, M.Eng.
2
The Complex Adaptive Environment
The more complex our organization or process is, the more error-prone it is and the less tolerant of error it becomes. (Conklin, 2012)
3
H.O.P. helps us improve our operational learning …
… it’s not like a traditional program . . .
4
H.O.P. improves our traditional programs by increasing our collaboration & operational learning.
5
Works hand in hand with our Safety and Quality Management System approach:
VPP (4 elements)
ANSI Z10 (PDCA)
OHSAS (risk-based & OE)
Upcoming ISO standard (similar to Z10)
Internal system (i.e., framework)
ISO 9001 (quality)
6
Our goal with H.O.P. is to become a lot less surprised by human error and failure . . . and instead, to become a lot more interested in learning!
7
Through improved operational learning, our organizations are becoming more Reliable and Resilient!
8
History of Human Performance
9
Nuclear Power Plants
10
Aviation Safety
11
Automotive Safety
12
General Industry
13
The Principles of Human Performance
People make errors.
Error-likely situations are predictable.
Individual behaviors are influenced.
Operational upsets can be avoided.
Our response to failure matters.
14
Todd Conklin, Ph.D.
15
Implementation Process
Leadership commitment
Selected & trained pilot sites
Implemented Learning Teams
Developed advocates / coaches
Created centers of excellence
Shared success stories
16
HOP - GE Businesses
17
HOP – Other Companies
18
The Principles of Human Performance
People make errors.
Error-likely situations are predictable.
Individual behaviors are influenced.
Operational upsets can be avoided.
Our response to failure matters.
19
Was this Kenny’s FAULT?
20
What about an injured firefighter?
21
To err is human . . . to forgive is divine. (Alexander Pope) . . . and neither is Marine Corps policy!
22
Free Billy!
23
Blame is very common but not helpful at all!
24
The Principles of Human Performance
People make errors.
Error-likely situations are predictable.
Individual behaviors are influenced.
Operational upsets can be avoided.
Our response to failure matters.
25
“Mistakes arise directly from the way the mind handles information, not through stupidity or carelessness.” (Edward de Bono, PhD)
26
Error Frequencies
Simple math error with self-check: 3 in 100
(P.L. Clemens, 2002)
27
Error Frequencies
Inspector oversight of operator: 1 in 10
(P.L. Clemens, 2002)
28
Error Frequencies
High stress / dangerous activity: 3 in 10
(P.L. Clemens, 2002)
29
Error Tolerance is Strongly Situational
A batting average of .330 is worthy of the Hall of Fame . . . but would that be acceptable for a concert pianist? (P.L. Clemens, 2002)
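These per-attempt rates compound quickly across repeated attempts. As an illustrative worked example (my arithmetic, not from the slides), the probability of at least one error in n independent attempts with per-attempt error probability p is:

```latex
% Probability of at least one error over repeated attempts (illustrative):
\[
P(\text{at least one error}) = 1 - (1 - p)^n
\]
% Using Clemens's self-checked math rate, p = 3/100, over n = 20 attempts:
\[
1 - (1 - 0.03)^{20} \approx 1 - 0.544 = 0.456
\]
```

So even the mildest rate on these slides, repeated twenty times, produces an error somewhere almost half the time, assuming independent attempts.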
30
Operational Errors
Immediate results
31
Organizational Errors
Delayed results
32
Robots vs. Humans
33
Error is not a choice . . . error is not a violation.
34
Deviation Types
Intentional deviation (rule breaking)
Unintentional deviation (error)
Normalized deviation (common)
35
HOP is NOT the absence of rules or discipline
HOP is the notion that if you depend on a person doing something 100% right, 100% of the time . . . you will be disappointed . . . A LOT. (Baker, 2014)
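To put rough numbers on that disappointment (an illustrative calculation, not from the deck, using the best-case skill-based rate of 1 error in 10,000 cited later in these slides and an assumed workload):

```latex
% Expected errors per worker per year at a 1-in-10,000 rate (assumed workload):
\[
40 \ \text{tasks/day} \times 250 \ \text{days/year} = 10{,}000 \ \text{opportunities/year}
\]
\[
10{,}000 \times \frac{1}{10{,}000} = 1 \ \text{expected error per worker per year}
\]
```

Across a site of a few hundred workers, that is hundreds of errors a year from the most reliable performance mode alone.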
36
The Texting Trash Hauler
37
HOP Terminology
Error traps
Provocative error traps
Trigger
Latent conditions
39
Error Trap
41
Provocative Error Trap
42
Workers don’t cause failures
Workers “trigger” latent conditions that lie dormant in our organizations. Errors are consequences, not causes. (Conklin, 2012 / Dekker, 2006)
43
Error is common . . . Being surprised by it doesn’t make things better.
44
The Principles of Human Performance
People make errors.
Error-likely situations are predictable.
Individual behaviors are influenced.
Operational upsets can be avoided.
Our response to failure matters.
45
People Are As Safe As They Need To Be, Without Being Overly Safe…
In Order To Get Their Job Done. (Conklin)
46
Drivers Are As Safe As They Need To Be, Without Being Overly Safe…
In Order To Get To Their Destination. (Edwards)
47
Beyond the Safety Triangle
[Safety triangle diagram: 300 near misses at the base, injuries and DAFW cases above. Beyond the triangle, the focus shifts from reducing frequency to reducing severity.]
48
Changing Behavior?
Behavior modification targets the observable; behavior change targets hidden belief systems (values).
What gets rewarded gets repeated. (Edgar Schein)
49
Performance Modes – Error Rates
Knowledge-based – error rate 1:2. Why? No knowledge or reference point.
Rule-based – error rate 1:1,000. Why? Misinterpretation or bad application.
Skill-based – error rate 1:10,000. Why? Complacency.
[Axes: attention to task (low → high); familiarity with task (low → high).] (Jens Rasmussen)
50
Performance Modes – Error Rates
Knowledge-based (error rate 1:2): inadequate knowledge, no knowledge, no reference point. Defenses: training / demonstration; coaching / mentoring / feedback.
Rule-based (error rate 1:1,000): misapplication of good rules, application of bad rules, failure to apply a good rule. Defenses: procedures & control panels; checklists / cross-checks.
Skill-based (error rate 1:10,000): omissions, slips / trips / lapses, complacency. Defenses: rumble strips / drift sensing; automation / auto-shutoffs.
[Axes: attention to task (low → high); familiarity with task (low → high).] (Jens Rasmussen)
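As a minimal sketch (not part of the presentation) of how these mode/rate/defense pairings could be encoded, say in a pre-job planning aid, the snippet below hard-codes Rasmussen's nominal rates from this slide; the PerformanceMode class and the expected_errors helper are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class PerformanceMode:
    name: str
    nominal_error_rate: float  # errors per opportunity, per the slide
    typical_failures: list[str]
    matched_defenses: list[str]

MODES = [
    PerformanceMode("knowledge-based", 1 / 2,
                    ["no knowledge or reference point"],
                    ["training / demonstration", "coaching / mentoring / feedback"]),
    PerformanceMode("rule-based", 1 / 1_000,
                    ["misapplication of good rules", "application of bad rules",
                     "failure to apply a good rule"],
                    ["procedures & control panels", "checklists / cross-checks"]),
    PerformanceMode("skill-based", 1 / 10_000,
                    ["omissions", "slips / lapses", "complacency"],
                    ["rumble strips / drift sensing", "automation / auto-shutoffs"]),
]

def expected_errors(mode: PerformanceMode, opportunities: int) -> float:
    """Expected error count over a number of task opportunities."""
    return mode.nominal_error_rate * opportunities

for mode in MODES:
    # e.g. 10,000 repetitions of a task performed in each mode
    print(f"{mode.name}: ~{expected_errors(mode, 10_000):,.0f} errors "
          f"per 10,000 opportunities; defenses: {', '.join(mode.matched_defenses)}")
```

The point of the sketch is the pairing itself – right defense for the right mode – not the precision of the rates, which are order-of-magnitude figures.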
51
Right defense for the right mode
52
The Principles of Human Performance
People make errors.
Error-likely situations are predictable.
Individual behaviors are influenced.
Operational upsets can be avoided.
Our response to failure matters.
53
Great performance is not the absence of errors. . .
. . . it’s the presence of defenses. (Conklin, 2012)
54
Procedures are important…
But they are not sufficient to create safety. Our organizations have become complex webs of procedures that are incomplete and difficult. (Conklin)
55
Defenses
Types of defenses
Strength of defenses
Layers of defenses
Sustainability of defenses
56
Hierarchy of Controls?
Elimination
Substitution
Engineering controls
Administrative controls
Behavior (cultural)
PPE
Not so focused on it being a hierarchy; more focused on ownership and effectiveness.
57
Layers of Defenses
Example: walking near a cliff – absence of barriers or fences, improving technique, taking extra care, paying attention, margin (proximity of the treadway to the edge).
Scope of defenses: all procedure steps → critical steps → all risk-important actions.
Skydiving example: the main chute fails to open 0.1% of the time, yet the fatality rate is 0.001%.
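One way to read the skydiving figures on this slide – assuming (my assumption, not stated on the slide) that the reserve chute is a second layer with a conditional failure rate of about 1%:

```latex
% Layered defenses: a fatality requires every layer to fail (illustrative reading):
\[
P(\text{fatality}) \approx P(\text{main fails}) \times P(\text{reserve fails} \mid \text{main fails})
= 0.001 \times 0.01 = 10^{-5} = 0.001\%
\]
```

Each added layer multiplies down the probability that all defenses fail at once – the arithmetic behind “layers of defenses.”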
58
Strength of Defenses
59
Sustainability of Defenses
. . . and one year later . . .
60
We want to make it easier to do right
than to do wrong!
61
The Principles of Human Performance
People make errors.
Error-likely situations are predictable.
Individual behaviors are influenced.
Operational upsets can be avoided.
Our response to failure matters.
62
Workers Are Masters of Complex Adaptive Behavior… (Conklin)
63
Work as Planned vs. Work in Practice “Masters of the blue line”
Normally Successful! (Conklin, 2012)
64
Accidents are unexpected combinations of normal variability.
(Conklin)
65
Success is also the unexpected combinations of normal variability. (Conklin)
66
3 Parts of an Event (Conklin)
67
3 Parts of an Event
The challenge: not to let post-event hindsight bias our judgment of the pre-event context. (Conklin) How do you know . . . ?
68
“Underneath every seemingly obvious, simple story of error, there is a second, deeper story: a more complicated story, a story about the system in which people work.” (Dekker, 2006)
69
Our traditional approach . . . looked for a root cause.
[Linear-approach diagram: Event ← 5 ← 4 ← 3 ← 2 ← 1 ← root cause?]
The problem is, the failure was not linear . . . and there almost NEVER is one root cause.
70
Start back in the process . . . and move toward the event.
Many of the things we find that led to the failure were not identified in traditional hazard assessments!
System weaknesses: production pressure, fear of reporting, latent conditions, inadequate defenses, resource constraints, errors, hazards & risks, flawed processes, local factors, normal variability, system deficiencies, near misses, design shortcomings, poor communication. (Edwards/Baker/Howe, 2014)
71
The Pressure to Fix . . . Outweighs the Desire to Learn!
[Timeline chart: an event triggers immediate response and containment; learning and information build more slowly over time and lead to the best solutions.]
72
Everybody knows . . . Audible Alarm
73
“. . . blame is the enemy of understanding.” (Andrew Hopkins)
74
Beyond Taylorism
Break the concept that the planner is smarter than the worker
Bring the worker and planner together to create the plan
Anticipate that human error will occur
Expect operational drift
Understand why the drift occurs
Learn from “masters of the blue line”
Learn from failure
Learn from success
75
Sharp end: workers – highest influence over the system, highest injury potential; they know the most about the hazards.
Blunt end: front-line supervisors, managers, leaders, the company, customers, regulators. (Conklin)
76
Operational Learning
77
. . . you want to understand why it made sense for people to do what they did . . . in their context (not yours!). (Dekker, 2006)
Speaker notes: Safe assumption – people came to do work. Michelle story – “What are you thinking?”
78
Daily Operational Learning
Pre-job brief / post-job brief. Possible questions:
What went well today?
What surprised you?
What did you learn today?
What will help us tomorrow?
Leads to frequent, meaningful daily improvements; sometimes uncovers significant system problems.
79
Event Response
Respond seriously and deliberately to events, near misses and good catches
Remember, events are “information rich!”
Promote a culture of learning & collaboration
Learn before taking action
Reduce operational complexity
Fix processes and systems – NOT people
80
How do we do this?
Pull the right people together
Give them time and space to learn and discover
Create an open dialog (trust)
Ask meaningful questions
Listen and learn (be interested)
Empower the team to help solve the problems
Speaker notes: Talking – you didn’t come to work today to… Pull away from the shop and do it there; brains can’t take a rest from the job. Learn – teach the coach. Discover – for a four-year-old, everything is discovery; we think we’ve seen it all, so turn that switch back on (coupling; look at where we learned). Trust – quiet guy / fire; Catoosa County trench work; open-door policy; near misses. “Tell me about the kind of work you do” builds trust and common ground – we all want to be successful. Questions – that’s your goal as a coach. Interested – normally you’re thinking about your response instead. Notes – don’t sanitize; all we need them to know is that it is complex, not a black-line problem (VP / stone-and-sword joke; hang the charts anyway). Empower – help them think it through.
81
Shift the question from “why” . . . to “how.” (Conklin)
Speaker note: physics – gas, spark, blew up.
82
Operational Learning
Not an investigation
Not worried about collusion
Not focused on the “one true story”
Not focused on the one “root cause”
It is the story as each person saw the event
It is the story of complexity
It is the story of normal variability and coupling
It is the story of how work gets done
Speaker notes: Don’t care about the event itself. Example – pissing contest on the phone and a fork truck; the buggies are old, the yokes are bent. Put the “one true story” in last.
83
The Learning Team Process
Determine the need for a Learning Team
1st session – learning mode only
Provide “soak time”
2nd session – start in learning mode
Define defenses / build new ones
Track actions & criteria for closure
Tell the story
Speaker notes: Sessions run about an hour. Soak time – overnight (“we couldn’t figure out how to do it…”); how many sessions? Defenses – what do you want to do differently? Crane story – crane program, not high risk; weak-signal story; moving crane; put the letter in (used to be mad); use your same crane actions. Creates cultural confidence. Free will vs. determinism – we have more influence than we think.
84
Learning Team Session One
. . . is to learn and discover . . . and not to fix!
Speaker note: cure for cancer.
85
Sample questions for Session 1:
How far back in the process should we start?
Tell me about your work. How hard is it to get things done?
How doable are your procedures?
Do you have the right tools?
What were the conditions leading up to the event?
What other near misses have you seen in this area?
What worked well? What failed or went wrong?
Where else could a similar event happen?
What else should I know? Who should this be shared with?
How did the employee’s actions (or inaction) make sense in their context? (not yours!)
Who else should we invite to the next session?
86
Wall of Discovery
Speaker notes: Add in Bob’s story – longest forks used in a narrow aisle. Add a blown-up picture of George; it came from George the fork-truck driver – more powerful.
88
Soak Time
At least overnight (if at all possible)
Allows time to process learnings
Allows time to go look
Allows time to study and research
How many times have you said, “Let me sleep on it”?
89
Sample questions for Session 2:
What else did you think of since we last met?
What worked well and what did not?
What do you want to do to make it better?
How strong are our defenses?
Who owns the action items?
When should we follow up to make sure things are getting done and working better?
How can we tell the story? Who should we tell it to?
Speaker note: Help carry people past the “retrain” notion.
90
Immediately following an event …
DO ASK: What is the current condition of the person or people? AVOID ASKING: “Is it a recordable?” “When will the line be running again?”
DO ASK: Are our operations safe and stable at this time? AVOID ASKING: “Why did this happen?” “Who did this?” “Were they following procedure?”
DO ASK: Tell me the story of how this happened.
DO ASK: What will it take for us to be able to show that it is safe to run again? AVOID ASKING: “Unless you can prove it’s not safe, turn it back on!”
DO ASK: Is this an opportunity for operational learning to help us understand? AVOID ASKING: “What is the root cause and corrective action?”
Speaker notes: This slide gives some examples of simple questions you can use to help facilitate the CET. The question becomes: how do you know when you know enough, when you are done asking questions? The more investigations you do using the CET, the less necessary it is to dictate an end point – you will know, because you have a very accurate picture of the situation and its contributing factors. But if this is one of your first times, try this rule of thumb: if you cannot answer question 10 from the list on this page, you probably don’t know enough yet. The final question presupposes that humans make the best decisions they know how to in a given situation; if an employee knew a failure would occur as a result of their actions, they would not have made that decision. Assuming that an employee understood the implications of the action that led to a failure is called hindsight bias. In light of this, you cannot fully understand the context of the event until you understand how the employee’s actions made sense to them at the time. Although this rule of thumb can be helpful, it is not an end-all-be-all; knowing how it made sense for the employee to turn on a sump sucker tells us very little about the event context. Remember to use all you’ve learned about human factors and organizational performance during these investigations. There is no one tool or trick that will cover every scenario.
91
When do we need to learn?
Post-event (injury / quality / operations)
Near miss or close call
Good catch
Interesting successes
High-risk operations
Challenging design problems
Anytime you can’t explain something
92
Bias and other Obstacles
Hindsight bias
Groupthink
Confirmation bias
Counterfactuals
Similarity bias
Need for blame
Admission of blame
Irrationality
Bounded rationality
Fundamental attribution error
Irrational escalation bias (sunk cost)
93
Tracking Actions
Speaker notes: Example of how it was tracked – it helped with the culture. Be cautious not to add bureaucracy.
94
Learning Team Closure Criteria
Continue the effort with the Learning Team until the team has implemented sufficient levels of defenses to satisfy:
The Learning Team
The affected employees
Management
AND . . . we have communicated our story.
95
We can NOT engineer out the possibility of every mistake…
. . . we cannot error-proof the world. We CAN build layers of defenses that create space for mistakes.
96
Learning from success . . .
97
Safety Defined
Safety is not the freedom from risk . . . it is the freedom from unacceptable risk. (M. Bidez, 2013)
98
When we believe we know the answer . . .
. . . we stop asking questions . . . we stop listening . . . we stop learning!
99
The power to ask the right questions . . .
. . . comes from acknowledging that you don’t know the right answer.
100
Workplaces and organizations are easier to manage than the minds of individual workers. You cannot change the human condition, but you can change the conditions under which people work. (Dr. James Reason)
101
Nevada military depot mortar explosion kills seven Marines
(Reuters) – A mortar explosion at a U.S. Army munitions depot in Nevada killed seven Marines from Camp Lejeune, North Carolina, and injured seven other service members during a live-fire training exercise, military officials said on Tuesday. (March 18, 2013; also reported by Paul Szoldra, Mar 19, 2013)
“Marines: Human error to blame for deadly blast in Nevada” (Jim Michaels, USA TODAY, May 29, 2013) – A training accident in Nevada that killed seven Marines during a live-fire exercise earlier this year was caused by “human error,” the Marines said in a statement Wednesday.
102
“I have never been especially impressed by the heroics of people convinced they are about to change the world. I am more awed by those who struggle to make one small difference.” (Ellen Goodman)
103
Recommended Reading
“Pre-Accident Investigations” (Todd Conklin)
“The Field Guide To Understanding Human Error” (Sidney Dekker)
“Managing The Unexpected” (Weick & Sutcliffe)
Check out Todd Conklin’s podcasts.
Contact: Bob Edwards
104