Published by Peter Park. Modified over 9 years ago.
First, Do No Harm: Building a Culture of Patient Safety at Novant Health
Physician Education Part 1: Safety Concepts and Theory
Each day, hundreds of patients are entrusted to our care. Caring for others or supporting those who provide patient care is a responsibility that we each take seriously. It is the reason that we chose a career in healthcare. We each can likely recall a situation that resulted in harm to a patient or to an employee. We'll call these safety "events" of harm. We don't intend for these events to happen, and often when we look back on an event, it is easy to identify how the errors that led to it could have been prevented. How does it make you feel when you realize that an event could have been prevented and that a patient or employee could have been spared from harm? The thought can make your heart sink. Safety of patients and employees has always been a priority at Novant. Yet we are sharpening our focus to create a culture of safety, adopting and ingraining shared values and beliefs about how we act and interact, so that we can make our organization an even safer place. This initiative is about each one of us and the actions we can and will be expected to take to improve the safety and performance of our organization. ©2009 Healthcare Performance Improvement, LLC. ALL RIGHTS RESERVED. Prepared for Novant Health for their non-exclusive, internal use only.
Goal and Objectives
Goal: Understand the Novant Safety Behaviors and commit to making them personal work habits.
Objectives:
- Describe what we mean by building and sustaining our patient safety culture.
- Explain why people make errors in complex systems and how we can reduce errors from propagating through these systems.
- Present an overview of the Safety Behaviors here at Novant in preparation for the second part of our CME program.
Today we have three objectives. First, I want to share with you what we mean by building and sustaining our patient safety culture. Second, I want to cover some fundamentals: an understanding of why people experience errors in complex systems, and how concepts like Crew Resource Management have been developed and applied over the past decade to help share information and reduce errors from propagating through these systems. And last, I want to give you an overview of the Safety Behaviors here at Novant in preparation for the second part of our CME program, where we will explain how physicians can use them in practice, with the overall goal of reducing harm in our hospital by 80% every two years.
Why are we here? Mary, Nicholas, Lizzy, Molly, Carson, Kiko, Damon, Mary Beth, Richard, Megan
With this in mind, I'd like to turn it over to the President of Acute Care Services for Novant Health, Greg Beier, for his view on why we are embarking on this journey toward a culture of patient safety.
Safety Culture – “A 747 a Day”
The 2000 IOM report, To Err is Human: Building a Safer Health System, estimated 44,000 to 98,000 Americans dying annually from medical errors. 98,000 = 270 people per day (747 capacity); 44,000 = 120 people per day (737 capacity). Let's talk about the numbers around medical error in the USA. Here are the results of some studies done in the last few years that have brought focus to medical errors in the healthcare industry. The Institute of Medicine came out with a report in 2000: To Err is Human. Perhaps this was the 9/11 wake-up call for the healthcare industry. The numbers actually validated a previous study by Dr. Lucian Leape, a researcher from Harvard, who published similar numbers five years earlier. The findings of the study brought us to realize that, as a healthcare industry, we weren't as safe as we perhaps once thought. The report projected that 44,000 to 98,000 deaths occur each year as the result of medical errors. A retrospective study conducted by HealthGrades and released in 2005 supported the projections of the IOM report, revealing that 298,865 patients had died as a result of preventable patient safety incidents during the 3-year period 2001 to 2003. That's 99,622 people a year, in line with the higher-end death toll estimate of the IOM report. A good way to emphasize the enormity of this number: a 747 jumbo jet holds 268 passengers (if configured with a business and first class section), so this is like crashing one per day and killing everyone on board. Or another way to compare: in 2005, a total of 37,006,027 people were admitted to the hospital (source: AHA Fast Facts on US Hospitals). Assuming 98,000 deaths per year, approximately 1 of every 378 people admitted to a US hospital dies as a result of a preventable medical error. An additional point to emphasize: as in all other industries, we are challenged to do more and get better results with little or no increase in resources. That means that we need to work smarter.
Practicing behaviors that will lead to safer, more reliable, and more productive performance will help us do just that.
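The comparisons above are simple arithmetic and easy to verify. A quick back-of-the-envelope check, using only the figures quoted on this slide:

```python
# Back-of-the-envelope check of the figures quoted above.
IOM_HIGH = 98_000             # upper IOM estimate of annual deaths from medical error
IOM_LOW = 44_000              # lower IOM estimate
ADMISSIONS_2005 = 37_006_027  # US hospital admissions, 2005 (AHA Fast Facts)

deaths_per_day_high = IOM_HIGH / 365  # roughly a full 747 every day
deaths_per_day_low = IOM_LOW / 365    # roughly a full 737 every day
odds = ADMISSIONS_2005 / IOM_HIGH     # admissions per preventable death

print(round(deaths_per_day_high))  # 268
print(round(deaths_per_day_low))   # 121
print(round(odds))                 # 378 -> "1 of every 378 people admitted"
```

The result matches the slide's "1 in 378" figure; the per-day numbers land within rounding of the quoted 270 and 120.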
Published Cases
Savannah, GA: 500-bed academic institution; 89% reduction in 2 years; 50% reduction in 18 months; AHA Quest for Quality Award 2004; TJC Eisenberg Quality Award 2005.
HPI, a Reliability company: comprehensive safety culture engagement; over 140 hospitals nationwide.
A deviation from standard of care or practice expectations that…
Safety Event Classification (SEC SM)
Serious Safety Event: reaches the patient; results in moderate to severe harm or death. Cause analysis: Root Cause Analysis (RCA) required.
Precursor Safety Event: reaches the patient; results in minimal to no detectable harm. Cause analysis: RCA or Apparent Cause Analysis (ACA).
Near Miss Safety Event: does not reach the patient; the error is caught by a last strong detection barrier designed to prevent the event. Cause analysis: no formal analysis required.
At Novant we are carefully tracking these "serious events of harm" to our patients. We look at the events from a patient's perspective and ask ourselves, "Did we deviate from a standard of care or practice expectation?" and if so, did the deviation reach the patient and result in moderate to severe harm or death? If so, it is classified as a Serious Safety Event, and we're going to put it on a chart to measure it in order to track our improvement over time. You can think of these events like Sentinel Events, NQF Never Events, or state-reportable events, but they're more than that. They are us being hard on ourselves, comparing ourselves to best practices and asking, "Did we really do all we could for that patient?" It's not a legal definition; it's our definition, and since we look at it from a harm perspective (was the harm moderate to severe, up to and including patient mortality?), it gives us a clear delineation to separate it from other events. So, if the errors (deviations) reached the patient but resulted in minimal to no detectable harm, then the event is classified as a Precursor Safety Event. We'll still keep track of it and do some kind of investigation to hopefully learn some valuable lessons, but we won't track it for now on a chart. And if the errors do not reach the patient, whether because of a good defensive barrier or a lucky catch, then the event is classified as a Near Miss Event, and we'll see if there are any obvious things we did well or not so well from a lessons-learned standpoint to apply in the future.
What's important to realize is that for every Serious Safety Event that occurs in our system, there are dozens, maybe hundreds, of Precursor Events out there, and hundreds, maybe thousands, of Near Miss Events. And from a human performance standpoint, the causes of the Near Misses are the same as the causes of the Precursor Events, which are the same as the causes of the Serious Safety Events; it's just that on one fateful day, in the case of the Serious Safety Event, the holes in the Swiss Cheese all lined up, resulting in an event of harm to a patient.
© 2006 Healthcare Performance Improvement, LLC. ALL RIGHTS RESERVED.
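The classification scheme described above amounts to a short decision rule. Here is a minimal sketch of that rule; the function name and the harm-level encoding are ours, chosen for illustration, and this is not an official Novant or HPI tool:

```python
def classify_safety_event(deviation, reached_patient, harm):
    """Classify an occurrence per the SEC scheme described above.

    deviation: True if care deviated from a standard of care or practice expectation
    reached_patient: True if the error reached the patient
    harm: one of 'none', 'minimal', 'moderate', 'severe', 'death'
          (an illustrative encoding, not an official harm scale)
    """
    if not deviation:
        return None  # no deviation -> not a safety event at all
    if not reached_patient:
        return "Near Miss Safety Event"   # caught by a detection barrier; no formal analysis
    if harm in ("moderate", "severe", "death"):
        return "Serious Safety Event"     # Root Cause Analysis (RCA) required
    return "Precursor Safety Event"       # minimal/no detectable harm; RCA or ACA

print(classify_safety_event(True, True, "death"))    # Serious Safety Event
print(classify_safety_event(True, False, "none"))    # Near Miss Safety Event
```

The ordering of the checks mirrors the narrative: first ask whether there was a deviation, then whether it reached the patient, and only then grade the harm.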
Serious Safety Event Rate (SSER SM)
Rolling 12-month rate of Serious Safety Events per 10,000 adjusted patient days.
SSER = (# SSE during past 12 months / # APD for past 12 months) x 10,000
Why a 12-month rolling average? It smoothes the curve for infrequent events, and it encourages sustainability in reliable safety performance (it takes 12 months for an event to "drop out" of the average).
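The SSER formula above is straightforward to compute from monthly counts. A minimal sketch, where the function name and the list-of-months data layout are assumptions made for illustration:

```python
def sser(monthly_sse, monthly_apd):
    """Rolling 12-month Serious Safety Event Rate per 10,000 adjusted patient days.

    monthly_sse: Serious Safety Events per month, most recent month last
    monthly_apd: adjusted patient days per month, same order
    Returns the SSER for the most recent month (requires >= 12 months of data).
    """
    if len(monthly_sse) < 12 or len(monthly_apd) < 12:
        raise ValueError("need at least 12 months of data")
    # Sum the trailing 12 months of events and of adjusted patient days.
    return sum(monthly_sse[-12:]) / sum(monthly_apd[-12:]) * 10_000

# Illustrative numbers only: 3 events in 12 months, 25,000 APD per month.
print(round(sser([0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0], [25_000] * 12), 4))  # 0.1
```

Because each month's events stay in the trailing window for a full year, a single bad month keeps the rate elevated for 12 months, which is exactly the sustainability incentive the slide describes.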
Serious Safety Event Rate (SSER SM), 1000-bed hospital
[Chart: monthly number of patients harmed with the rolling SSER overlaid; SSER JAN 2005: 1.21; SSER JAN 2007 shown with the resulting percent reduction.]
Novant Health (9 hospitals)
Rolling 12-month rate of Serious Safety Events per 10,000 adjusted patient days. Here we see the current Serious Safety Event Rate at Novant as of June 1st. The blue bars correlate to the right side of the chart; they are the number of patients harmed each month. If anybody ever asks you, "What do those blue bars represent?" I want you to say: PEOPLE! Those blue bars are the people who were harmed in our hospitals due to a deviation in a standard of care. The red line is a rolling 12-month average that correlates to the left side of the chart. It takes into account the last 12 months' worth of data and is normalized per adjusted patient days to account for both inpatient and outpatient numbers. It also gives us a more sustainable metric, so we are not so focused on one particularly good or bad month. Is anyone troubled by the fact that our rate is dramatically rising? Well, I can see why you might not like this trend, but in reality it is an indication that we are starting to recognize many more of the instances where a patient was harmed due to some kind of failure on our part. We are seeing a healthier reporting culture being established, and leaders on your Medical Review Committees are taking a closer look at cases to hold ourselves to a higher standard than we have in the past. Soon we expect to see the curve flatten out as we establish a baseline SSER. Then, based on our hard work together moving safety behaviors to practice habits, we will see a dramatic drop in the rate: 80% over a two-year period if we all practice these Safety Behaviors together!
Journey to Improving Reliability
[Chart: frequency of failure (from 10^-1 down to 10^-6) over time, progressing toward optimized outcomes. Process elements: process design, evidence-based best practices, technology enablers, process optimization/simplification, tactical interventions. Integrated with behavior elements: behavior accountability, behavior expectations, knowledge & skills (error prevention), reinforce & build accountability.]
Why Do Events Happen?
Active errors by individuals result in initiating action(s) that can lead to an EVENT of HARM. Multiple barriers (technology, processes, and people) are designed to stop active errors: our "defense in depth." Latent weaknesses exist in the barriers. Two strategies to eliminate safety events: #1 Prevent the human errors; #2 Find and fix system and process problems.
The Swiss Cheese Effect is a model that explains how human error results in events of harm. This model was developed by James Reason, a psychologist studying aviation safety events in the UK. Complex systems fail in complex ways. The healthcare system is designed wherever possible with defense-in-depth such that single human errors do not result in harm. The defense-in-depth is shown by the series of barriers. The barriers are invariably imperfect (shown by the holes in the barriers, and hence the name "Swiss Cheese Effect"). For harm to occur, the system must be triggered by an active error and experience a total breakdown of all of the barriers in the system intended to prevent that error from reaching the patient. Safety is defined as the absence of harm. Patient safety is then defined as no harm to patients. Events are the enemy of the complex system, not human error. Human error is only undesirable because errors can lead to events. (On average, a sentinel event is the result of eight human errors.) Active errors trigger the system. The system, represented by a series of barriers against that error reaching the patient, works to find and fix the active error. We all know from working in the system that the system is not perfect. The system is full of holes, hence the name "Swiss Cheese Effect." In healthcare, the barriers are more often additional people rather than technology such as forcing functions in computers and barcode scanners. There are two basic approaches to reducing the event rate. First, reduce the human error rate.
The event rate is proportional to the human error rate so any decrease in human error rate will reduce the number of events. Behavior-based approaches to safety culture are designed to reduce the human error rate, and thereby event rate, by 80% in two years. Second, find and fix the holes in the Swiss cheese. Continuous improvement approaches to system reliability are designed to reduce event rate by 50% in two years. Novant plans to continue finding and fixing the system problems, add behavior-based prevention through patient safety culture, and measure the results by the reduction in the numbers of sentinel plus critical events. Adapted from Dr. James Reason, Managing the Risks of Organizational Accidents, 1997
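The Swiss Cheese logic (an event requires an active error plus a hole in every barrier) can be illustrated numerically, which also shows why both strategies reduce the event rate. All probabilities below are invented for this sketch; they are not Novant data:

```python
def event_probability(p_active_error, barrier_hole_probs):
    """Chance that a given act results in harm, per the Swiss Cheese model.

    An event requires an active error AND, independently, every barrier
    failing to catch it (each barrier has a 'hole' probability).
    """
    p = p_active_error
    for p_hole in barrier_hole_probs:
        p *= p_hole  # each barrier must also fail for harm to occur
    return p

# Invented illustrative numbers: 1% error rate, three imperfect barriers.
baseline = event_probability(0.01, [0.1, 0.2, 0.5])       # about 1 in 10,000 acts
strategy1 = event_probability(0.002, [0.1, 0.2, 0.5])     # cut the human error rate 80%
strategy2 = event_probability(0.01, [0.05, 0.1, 0.5])     # shrink two of the holes
```

In this toy model, strategy 1 cuts the event probability by exactly the 80% cut in error rate, and strategy 2 by the product of the barrier improvements, which is why the two approaches compound when pursued together.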
Influencing Behaviors at the Sharp End
Design of: culture, structure, technology & environment, work processes, policy & protocol. Outcome: behaviors of individuals & groups.
Now, let us consider why care providers, trying their best to do a good job for their patients, experience errors. The Sharp End model is shown in the figure above. The Sharp End model was developed by Cook and Woods to explain why error occurs in a complex system. (The expression "sharp end" was coined by James Reason; the model was first depicted by Cook & Woods.) The system is represented by the inverted triangle. The system shapes behavior through five types of factors. Saving culture for last: structure is how we're organized (what units, what service lines, who specializes and who doesn't). The more we specialize, the more we have to hand off, so we automatically introduce more communication errors. Policy and protocol are the guidance documents for the do's and don'ts of the system. Process includes how things are done: efficiently or not, too many steps or not enough, best practice or just how we've always done things. Technology includes barcode scanners, computerized order entry, and robotic medication dispensing, while environment is physical in nature: the layout of the unit, the room, the space, and even the design of the medical device or tool. Culture is the shared values and beliefs of the people. What the people truly believe, not what they think they believe or what they say, has a strong effect on behavior. What is important is noticed, and what is important gets done. Human error is experienced when the care provider, at the sharp (pointed) end, experiences a difference between the ideal condition (defined by the blunt end of the system) and the real world (nearer to the sharp end). This difference challenges the provider, thereby increasing the probability of an error. The most common example is time pressure.
Time pressure is the difference between the ideal (sufficient time) and the real (time allowed). Time pressure increases the probability of human error by factors ranging from four to seven times. The best way to reduce human error is to improve the system. Novant's first approach has always been to improve the system or process. Safety culture adds to the system approach. Adding a behavior-based approach to safety culture prepares care providers with simple behaviors to prevent human error when subjected to a system problem, thereby accelerating the rate of improvement. "You have to manage a system. The system doesn't manage itself." (W. Edwards Deming) "A bad system will DEFEAT a good person every time." (W. Edwards Deming) Adapted from R. Cook and D. Woods, Operating at the Sharp End: The Complexity of Human Error (1994)
As Humans, We Work in 3 Modes
Knowledge-Based Performance: "Figuring It Out Mode." Rule-Based Performance: "If-Then Response Mode." Skill-Based Performance: "Auto-Pilot Mode."
Introduction: brief review slide; no need to go into details, as the next slides provide details for each mode of performance. Human beings experience three different types of errors: skill-based errors, rule-based errors, and knowledge-based errors. In skill-based performance, a well-developed skill pattern exists in your brain, developed through practice and repetition of an act: it becomes "auto-pilot." In rule-based performance, we perceive a situation and our brain scans for a rule, usually learned through education or experience, and we act to apply the rule. In knowledge-based performance, we're in a new or unfamiliar situation. We have no developed skill, and we are not aware of an established rule to apply. This is a problem-solving, or figuring-it-out, mode. Now we'll talk about each mode of human performance, the types of errors we experience in each, and specific error prevention strategies.
Skill-Based Performance
What you're doing at the time: very routine, frequent tasks that you can do without even thinking about it, like you're on auto-pilot. About 3 in 1,000 acts are performed in error (pretty reliable!).
Errors we experience: Slip (errors of commission: the act is performed wrong); Lapse (errors of omission: you fail to do what you meant to do); Fumble (motor skill errors).
Error prevention strategy: stop and think before acting.
Rule-Based Performance
What you're doing at the time: responding to a situation by recalling and using a rule that you learned either through education or experience.
Errors you experience, with the matching error prevention strategy:
- Used the wrong rule (you were taught or learned the wrong response for the situation): educate about the right rule.
- Misapplied a rule (you knew the right response but picked another response instead): think a second time.
- Non-compliance (chose not to follow the rule, usually thinking that not following the rule was the better option at the time): reduce burden, increase risk awareness, improve coaching.
In rule-based performance, you perceive a situation and your brain scans for a rule, usually learned through education or experience, and you act to apply the rule. In this use, the term "rule" means more than policy or law. Rules describe our knowledge of how the world works. They are learned principles. We have rules for everything: oil and water do not mix, and everything that goes up must come down. In rule-based performance, there is familiarity with the task (we know the rule), yet conscious thought is applied to determine what rule fits best with the current situation. Rule-based errors occur in three varieties. Wrong rule errors occur when an incorrect answer is learned as the right answer. When asked the price of a first class stamp, you respond by saying, "42 cents." Yet the cost of a first class stamp now is 44 cents. What happens the first two weeks of the year when you write a check? Misapplication of a rule occurs when your thinking becomes confused. This is not a knowledge problem (you know the right answer) but a critical thinking problem. As an example, ask a group of people to spell three words out loud. Say the word MOST. Say the word COAST. Say the word BOAST. Then ask the question: what do you put in the toaster? Many in the group will quickly respond "toast."
We know the correct answer to the question, yet we became biased by pattern and responded with an answer that seemed a best fit for the situation. Non-compliance occurs when the rule is known and thought about at the time, but a choice is made to do otherwise, usually thinking that a better result can be achieved with the same or less effort. For example, short-cutting seatbelt use because you are in a hurry. The probability of experiencing a rule-based error is 1 in 100, or 1%. This also is a pretty reliable mode for us humans. Experts live in rule-based thinking: they know what to do, and they stop when they don't know what to do. There are different error prevention strategies based on the type of rule-based error. For a wrong rule error, teach your colleague the correct rule. For rule misapplication, coach your colleague to think a second time about the response before acting. For non-compliance, coach your colleague by either reinforcing a professional standard or teaching the risk or consequences of the non-compliance. About 1 in 100 choices are made in error (not too bad!).
Knowledge Based Performance
What you're doing at the time: problem solving in a new, unfamiliar situation (better called "lack of knowledge-based performance"). You come up with the answer by using what you do know, taking a guess, or figuring it out by trial and error.
Errors you experience: you came up with the wrong answer (a mistake).
Error prevention strategy: STOP and find an expert who knows the right answer.
In knowledge-based performance, you're problem solving in a new or unfamiliar situation; you don't know the rules, or perhaps no rule exists for the situation, and you certainly don't have an established skill. You lack or have very little familiarity with the task, and a high degree of conscious thought has to be applied to figure it out. This mode could better be called lack-of-knowledge-based performance, because you're outside your area of practice or facing a very complex case. When faced with a situation that doesn't fit with anything that you have learned, you attempt to cobble together an answer by breaking down the situation into manageable pieces that you can recall. Here is a contrast between a rule-based problem and a knowledge-based problem. Problem 1: What is 4 + 4? The answer is 8. This is a rule-based problem, and you're able to respond lightning fast with high accuracy. Problem 2: What is the volume of a cylinder 4 meters in diameter and 8 meters tall? The answer is ??? This is a knowledge-based problem for most of us, in which our response is very slow, error-prone, and relies on a good memory of middle school math. In knowledge-based performance, 3 to 6 of 10 decisions (30-60%) will be wrong: a very high probability of human error. The best strategy to prevent knowledge-based error is to STOP when you find yourself in knowledge-based performance and find an expert source: someone who, or something that, can provide the rules. If you see a coworker operating in knowledge-based performance, intervene to stop them and offer assistance, or help them find an expert source.
Change your knowledge-based error into someone else's rule-based success. Physicians have an excellent strategy for preventing knowledge-based errors: call for a consult. If the case is complex or beyond the physician's experience or expertise, they consult with a specialist who knows the rules. 30-60 of 100 decisions are made in error (yikes!).
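Incidentally, the "knowledge-based" cylinder problem above becomes a rule-based one the moment someone supplies the rule, V = pi * r^2 * h:

```python
import math

def cylinder_volume(diameter, height):
    """Volume of a cylinder: V = pi * r^2 * h, with r = diameter / 2."""
    radius = diameter / 2
    return math.pi * radius ** 2 * height

# The slide's example: 4 meters in diameter, 8 meters tall.
print(round(cylinder_volume(4, 8), 2))  # 100.53 (cubic meters)
```

This is exactly the "find an expert source" strategy in miniature: once the rule is in hand, the slow, error-prone figuring-out step disappears.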
Power Distance (Geert Hofstede)
The extent to which the less powerful expect and accept that power is distributed unequally. A measure of interpersonal power or influence, superior-to-subordinate, as perceived by the subordinate. Leads to strong authority gradients: the perception of authority as perceived by the subordinate.
In our discussion and formation of our Novant Safety Behaviors, one thing we wanted to specifically address was the issue of power distance and authority gradient. Power distance is the extent to which the less powerful expect and accept that power is distributed unequally. I want to emphasize that power distance is given as much as it is taken. In the 1960s and 1970s, Dutch psychologist Geert Hofstede conducted extensive research on this topic as part of "Hofstede's Dimensions." One of these dimensions, which describe how humans interact with one another, is a particularly interesting measure known as the Power Distance Index, or PDI. This index measures power distance among cultures and indicates attitudes toward hierarchy, in particular how a culture values and respects authority. The higher the power distance in a culture, the less likely those in subordinate roles will question the actions or directions of individuals in authority. From a culture standpoint, countries like Saudi Arabia, Singapore, the Philippines, and certain Latin American countries have extremely high power distances, where certain gender, social, or subordinate professional groups do not question the actions of those in positions of authority. They ranked at the top of the scale, while the United States was toward the bottom, indicating a moderate to low power distance culture. What's interesting is that despite the relatively flat power distance rating in the general American culture, it is viewed as quite high among certain professional groups in the hospital arena.
And so we can see that while physicians don't always recognize the effect, nurses and technicians will view it as quite high, leading them to be reluctant to raise questions and voice concerns to members of the medical staff. USA: moderate to low power distance (38th of 50 countries); surgeons and anesthesiologists view it as low, while nurses view it as significantly higher.
Korean Airlines Flight 801
Contributing layers: high power distance; minor technical failure; bad weather; fatigue.
The phenomenon is best understood by reflecting on the crash of a 747 in Guam in 1997, described by Malcolm Gladwell in his book Outliers. Captain Park Yong-chul of Korean Air flight 801 was an award-winning, 42-year-old pilot with almost 9,000 flight hours, including 3,200 hours in the 747. On the night of August 5, 1997, he was confronted by four layers in the Swiss Cheese: he was fatigued after a long day of flying, was in bad weather on a night approach to the Won Pat International Airport in Guam, and was flying with a minor technical malfunction that had altered the type of instrument approach they were conducting. Despite these three stage-setting layers in the Swiss Cheese, the biggest problem in the cockpit was the fourth: the extremely high power distance resulting from the Korean culture. Captain Park's First Officer and Flight Engineer both suspected something was wrong, that the plane was off course on the approach, but in that culture it was extremely difficult for subordinates to question the authority and decisions of senior officers. They had all grown up in a culture where enormous attention is given to the relative standing of people in a conversation, and deference is given to superiors so as not to give offense. In one situation, a copilot who pointed out his own error was slapped by the Captain with the back of his hand for making the error. Imagine the fear that infused the Korean Air cockpits at that time. These two junior officers both tried to hint and hope to the Captain, staying firmly within the cultural boundaries of their high power-distance communications structure, but neither clearly voiced their safety concerns to Captain Park. As they descended from their final approach altitude, the aircraft was over 2 miles farther away from the runway than they thought, heading right for Nimitz Hill, a 600-foot mountain southwest of the airport.
Despite their suspicions that the plane was in extremis, in bad weather with obstructions nearby, the two subordinate officers did not take deliberate action until six seconds before impact, when they both finally called for a missed approach (a procedure to climb and go around for another landing attempt), a maneuver they should have performed much, much earlier. On the cockpit recordings you can hear the ground proximity warning alarms going off and the terror in their voices. 228 people were killed. In the wake of this and seven other accidents over a ten-year period, Korean Airlines finally took decisive action and brought in an outsider from Delta Airlines to help improve their safety performance. They focused on the cultural elements that stood in the way of good information sharing and decision making. They literally transformed their culture and have been accident-free since 1999. They have made such a turnaround, in fact, that in 2006 they were given a Phoenix Award by the Air Transport Association for their safety culture transformation efforts.
Authority Gradient: the perception of authority as perceived by the subordinate. Culturally embedded and handed down. Requires active measures to overcome in order to communicate clearly and share vital information.
Authority gradient is a subset of power distance, and it develops over time as subordinates learn not to cross the line established in high power distance cultures. This is a learned trait that is handed down from one generation to another, and it is often strengthened by outward symbols within an organization. Often those on the superior side of the gradient do not even recognize that it exists, yet from the subordinate's standpoint the signs are obvious. So in a hospital, the first thing we see when we arrive in our car is reserved parking spaces for physicians; when we enter the lobby we find a sign for the physicians' lounge; and even in our communications we refer to physicians by their title. Now, these are all well-earned signs of respect that I am not even remotely suggesting we take away. However, it is important for members of the medical staff to recognize how these totems help create authority gradients in our culture, and then take measured steps to counter them in order to make staff feel comfortable raising questions when they are unclear about something or voicing safety concerns. Of course, the strongest contributor to authority gradients is how we interact with one another from an interpersonal relationship standpoint. And so, when one physician has a bad night after being woken up numerous times by nurses who don't have all the information they should, and that physician lashes out in a verbally abusive manner, he contributes to authority gradients that then become very difficult to overcome.
And his words and actions may affect a different physician five years down the road, because that particular nurse once had a bad experience calling in the middle of the night and now, when a patient most needs it, is afraid to make the call, just like the copilot of Korean Airlines flight 801 who was afraid to voice his concerns and make the call for the aircraft to climb and go around for another approach.
Crew Resource Management
The airline industry developed the concept of Crew Resource Management, or CRM, in the 1970s to help aircrew cross the authority gradient and share information needed for the safe conduct of operations. CRM grew out of a NASA workshop convened in 1979 to address the issue of air safety after a number of terrible accidents that were primarily attributable to pilot error, and specifically related to intimidating cockpit environments set by Captains who thought copilots should only "speak when spoken to." The disaster at Tenerife in 1977 has historically been used as the best example of how this played out in practice. Captain Jacob van Zanten was KLM Royal Dutch Airlines' chief 747 instructor pilot, training nearly all of KLM's other 747 pilots. He had an impeccable reputation, even being featured in KLM's inflight magazine as the poster boy for the airline. On March 27, 1977, events unfolded to create a sense of urgency on his part to take off in poor weather at a small airfield on Tenerife, where his and other aircraft had diverted after a bomb scare at their destination airport in the Canary Islands. After some confused communications, he prepared for takeoff, even though other members of his crew had voiced muted concern over the clearance. He confidently told them he knew what he was doing and began his takeoff roll. Eight seconds before impact, his crew saw the outline of another 747, from Pan Am, which had been directed by the tower to taxi down the runway in the opposite direction to get ready for takeoff after the KLM jet. Captain van Zanten attempted to pull up early, but it was to no avail, and his aircraft struck the Pan Am jet just aft of its cockpit. The crash killed 583 people and was directly attributable to poor communications in an environment that did not promote information sharing and the voicing of safety concerns.
The healthcare industry has seized upon and adopted many of the core concepts from CRM over the last decade to help teams operate more effectively, and I know many folks from Novant have seen this firsthand through the Team Effectiveness Training in which some staff and physicians have participated. While our First, Do No Harm culture transformation is new and distinct from that TET work, you will find that several of our Novant Safety Behaviors cover similar themes. Practice with a Questioning Attitude is all about helping staff improve their critical thinking skills. It is not about asking questions; it's about questioning the answers. And when staff question you as a physician, it is not intended to question your knowledge, skill, or authority, but to help them recognize those situations when something doesn't seem quite right or when they need to seek greater understanding – and then feel comfortable asking that question, or stopping the line, when uncertain. Supporting Each Other through Crosscheck and Assist, or through the ARCC technique, improves human reliability through cross-monitoring of human performance. ARCC is a measured-response tool that gives people a graduated, structured way to elevate safety concerns, helping them feel comfortable crossing authority gradients. We'll cover each of these Safety Behaviors and Error Prevention Tools in much greater detail in Part 2 of our CME program.
21
Assertiveness
The willingness to state and maintain a position until convinced otherwise by facts. Requires initiative and courage to act.

Behavior Continuum:

PASSIVE                ASSERTIVE             OVER-AGGRESSIVE
'Too nice'             Actively involved     Dominating
Procrastinates         Ready for action      Intimidating
Avoids conflict        Useful contributor    Abusive
'Along for the ride'   Speaks up             Hostile

ARCC can help us all become more assertive in the best interest of our patients. Assertiveness means we're willing to state and maintain a position until convinced otherwise by the facts. It sometimes takes courage – but our patients deserve it! It doesn't mean we have to be over-aggressive, mean, or rude. As you can see on this Behavior Continuum, being assertive is the sweet spot between being passive and being over-aggressive. If we're too passive, we may feel intimidated, want to avoid conflict, or simply not be interested, and this can lead to a failure to point out when something isn't quite right for our patients. If we're too aggressive, the traits you see here come out – dominating, intimidating, abusive, or hostile – creating a poisonous work environment where people don't want to share information or cross-check one another. So we need to find that sweet spot in the middle: assertiveness, where we are actively involved, useful contributors who speak up for safety in the best interest of our patients. We can all use the ARCC technique to help us become more assertive.
22
Five Principles of High Reliability Organizations (HROs)
Three Principles of Anticipation:
- Preoccupation with Failure – regarding small, seemingly inconsequential errors as a symptom that something's wrong
- Sensitivity to Operations – paying attention to what's happening on the front line
- Reluctance to Simplify – encouraging diversity in experience, perspective, and opinion

Two Principles of Containment:
- Commitment to Resilience – developing capabilities to detect, contain, and bounce back from events that do occur
- Deference to Expertise – pushing decision making down and around to the person with the most relevant knowledge and expertise

What is it that HROs do to improve reliability? Weick and Sutcliffe describe five areas of focus – three known as Principles of Anticipation and two as Principles of Containment. Let's start at the bottom, with Deference to Expertise. This is all about flattening out the hierarchies in our organization. It gets back to the topic of power distance and authority gradients we covered in our discussion of ARCC. The other day a surgeon shared with me a quote one of his partners used to say to his team before starting the day's procedures: "This is my operating room – no independent thought!" Well, you can imagine the impact that had on people's willingness to share information and point out mistakes. No effort was made to pull in the collective expertise of the members of that team, and eventually that neurosurgeon performed a wrong-site surgery on a patient undergoing a brain operation – an error that changed the patient's life. Sensitivity to Operations gets to the heart of the two functions of a leader that started our discussion: building and reinforcing accountability for our compliance expectations, and finding and fixing system problems that make it difficult for our people to perform effectively. This is why leaders in HROs make a deliberate practice of getting out of their seats each day to observe the work areas under their control and influence behaviors.
There's no better way to find the roadblocks that make it difficult to comply with our expectations, and to let our people know firsthand what we expect from them. The top element – Preoccupation with Failure – is critical to becoming an HRO. In HROs, leaders are fanatically preoccupied with failure: they always treat seemingly small errors and problems as indications that something bigger is lurking around the corner.
23
Novant Safety Behaviors & Error Prevention Tools
1. Practice with a Questioning Attitude
   A. Stop, Reflect & Resolve in the face of uncertainty
2. Communicate Clearly
   A. Use SBAR-Q to share information
   B. Communicate using three-way repeat backs and read backs
   C. Use phonetic and numeric clarifications
3. Know & Comply with Red Rules
   A. Practice 100% compliance with Red Rules
   B. Expect Red Rule compliance from all team members
   C. If compliance with a Red Rule is not possible, STOP action until any uncertainty can be resolved
4. Self-check: Focus on Task
   A. Use the STAR technique
5. Support Each Other
   A. Cross-check and Assist
   B. Use 5:1 Feedback to encourage safe behavior
   C. Speak up using ARCC – "I have a concern"

On April 27th, a group of over 200 Novant leaders, staff, and physicians met in Statesville to go over these concepts and data in much greater detail, breaking into discussion and focus groups to select Safety Behaviors and specific evidence-based error prevention tools that can easily be adopted system-wide to help us make fewer errors. Here you see the outcome of their work, which we now need to turn into practiced habit across Novant. On the left are the five safety behaviors that will help us dramatically reduce our error rates: Practice with a Questioning Attitude; Communicate Clearly; Know & Comply with Red Rules; Self-check: Focus on Task; and Support Each Other. There's a specific reason we put them in this order. When we think about how we approach the many tasks that make up our day, we should build reliability through the adoption of low-risk safety behaviors in this process order: practice with a questioning attitude (watch for those red flags); communicate clearly (tell others, seek to understand); know and comply with Red Rules (stick with what we know is right); self-check (now check yourself); and support each other (and check others). On the right of the slide is a one-page handout that is in your packet.
The bullets below each Safety Behavior are the specific evidence-based Error Prevention Tools that we will now cover in much greater detail.
24
Novant Contact Information
Sue DeCamp-Freeze, Senior Director, Clinical Improvement, (704)
Catherine Fenyves, Patient Safety Manager, (704)