
Crisis Resource Management (CRM)
CRM concepts originated in aviation as Crew Resource Management, after it was found that the majority of plane crashes were caused by communication errors. CRM is a blend of technical and non-technical skills that improved cockpit performance and enhanced safety. It was adapted by Dr. Gaba for use with anesthesiologists.

CRM Principles
• Know the environment
• Anticipate and plan
• Call for help early
• Exercise leadership and followership
• Distribute the workload
• Mobilize and use all resources
• Communicate effectively
• Monitor, cross-check, and use available data
• Prevent and manage fixation errors
• Re-evaluate and use cognitive aids
• Practice effective teamwork
• Allocate attention wisely
• Set priorities dynamically
Rall M, Gaba D. Miller's Anesthesia, 6th ed.

TeamSTEPPS Tools
Team events:
• Briefs – planning
• Huddles – problem solving
• Debriefs – process improvement
Situation monitoring – shared mental model
DESC script:
• D – Describe the situation
• E – Express concerns
• S – Suggest alternatives
• C – Consequences
CUS words:
• C – I am Concerned
• U – I am Uncomfortable
• S – This is a Safety issue

Team Performance Observation Tool
• Used to rate team performance in simulation scenarios
• Can guide debriefing discussion

The first assessment tool I would like to highlight is the Team Performance Observation Tool (TPOT), which is part of the TeamSTEPPS training program from the AHRQ and the Department of Defense. TeamSTEPPS seeks to teach skills that improve teamwork and communication among providers, ultimately resulting in safer patient care. The tool is divided into five main sections: Team Structure, Leadership, Situation Monitoring, Mutual Support, and Communication. These sections correspond directly to the topics covered in the TeamSTEPPS course, which suggests the tool can be used not only to assess actual teamwork but also as an assessment of learning of the TeamSTEPPS curriculum. Within each major category, several subtopics are evaluated as well. Each item is scored on a Likert scale defined by both word anchors and numbers. Providing both numerical and word anchors can be confusing for some raters, as there may be a tendency to score based on the numbers instead of the words. Additionally, the numbers give a false sense that the differences between the word anchors are equally spaced, which may not be true. The benefit of this tool lies in its simplicity: there are only 25 items to rate, and the language is identical to the course language, which will likely improve reliability.
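To make the scoring mechanics concrete, here is a minimal sketch of aggregating TPOT-style ratings into section and overall scores. It assumes a 1-5 Likert scale and uses invented ratings, five items per section, purely for illustration; the real tool defines its own 25 items across these five sections.

```python
from statistics import mean

# Hypothetical ratings: five TPOT sections, each with five items
# scored on an assumed 1-5 Likert scale (invented data).
ratings = {
    "Team Structure":       [4, 5, 3, 4, 4],
    "Leadership":           [3, 4, 4, 5, 4],
    "Situation Monitoring": [4, 4, 3, 4, 5],
    "Mutual Support":       [5, 4, 4, 3, 4],
    "Communication":        [4, 3, 4, 4, 4],
}

for section, items in ratings.items():
    print(f"{section}: mean {mean(items):.2f}")

# Overall mean across all 25 item ratings.
overall = mean(r for items in ratings.values() for r in items)
print(f"Overall: {overall:.2f}")
```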

Using Simulation in TeamSTEPPS® Training http://www.ahrq.gov/teamsteppstools/simulation/index.html

Observable Behaviors
• Brief team on situation and goals
• Encourage the sharing of information
• Use commonly understood terminology
• Direct communication to specific person(s)
• Confirm communication (closed loop)
• Ask for clarification
• State directions and information clearly
• Explain medical information using commonly understood language
• Health professional confirms communication from patient
• Update patient on changing conditions
• Verbalize problems
• Announce plans, seek confirmation or consultation
• Introduce self and other health care workers, explain roles
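In practice, an observer working from a checklist like this often simply tallies each behavior as it occurs. A minimal sketch, using shortened, hypothetical labels for the items above:

```python
from collections import Counter

# Hypothetical observation log from one scenario; each entry is a
# shortened label for a checklist behavior seen by the observer.
log = [
    "brief team", "closed loop", "ask clarification",
    "closed loop", "verbalize problem", "closed loop",
]

# Frequency of each observed behavior, most common first.
print(Counter(log).most_common())
```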

Individual Rating Tools

Teamwork Rating Tools

Evaluation of Teams
Characteristics of effective teams:
• Team leadership
• Backup behavior
• Mutual performance monitoring
• Communication
• Shared mental model
• Mutual trust

Over the next several slides I will review many of the teamwork evaluation tools that have been published in the literature. Most of them cover the major topics above in some format. You will see that there is a wide variety of approaches to the evaluation process, each with advantages and disadvantages. Certain features may be relevant for some of you in your specific environments, while for others those same features may severely limit the tool's ability to deliver useful information to you as the evaluator of teams. As we move through the following slides, we will discuss a variety of topics relevant to assessment, including the variability in measuring similar constructs or behaviors, variability in measurement scales, how reliability and validity can impact your assessment tool, how checklist items are anchored, and how the training of raters may impact the usefulness of your assessment.


The second tool was published by Guise et al. to evaluate different teamwork behaviors related to obstetric events; the goal was to develop a brief tool that could be used to objectively evaluate teamwork both in a simulation environment and during everyday clinical care. In this study the authors created standardized videos of poor, average, and excellent teamwork, which were then presented in a blinded fashion to three evaluators, all of whom had training in CRM, and the authors determined the kappa statistic to quantify inter-rater reliability.

The CTS (Clinical Teamwork Scale) contains a total of 15 items for evaluation, divided into categories similar to those we saw above in the TeamSTEPPS version; the questions on the left of the tool are described more fully in the paper. Here the rating scale is slightly different: it is an overall 11-point scale, with descriptive words above the numbers to help anchor them. When the authors created the videos, the three scenarios were meant to mirror the Poor, Average, and Good categories, in the hope that raters would pick a numerical value within the matching category. The authors give two reasons for choosing an 11-point scale. First, clinical teams in practice are likely to perform above average, yet additional improvement is always possible. Second, raters tend to evaluate teams favorably, which narrows the portion of the rating scale actually used; this halo effect, combined with respondents' tendency to rate themselves and others highly, can cluster all of the ratings in a small range, and the authors were concerned that a 3- or 5-point scale would be unable to discriminate smaller differences in performance among qualified teams.
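For illustration, here is a small sketch of mapping an 11-point (0-10) rating back to word anchors of the kind the authors describe. The specific band boundaries and anchor words are assumptions for the example, not values taken from the published instrument:

```python
def band(score: int) -> str:
    """Map a 0-10 rating to an assumed word anchor (illustrative only)."""
    if not 0 <= score <= 10:
        raise ValueError("score must be on the 0-10 scale")
    if score == 0:
        return "Unacceptable"  # assumed anchor
    if score <= 3:
        return "Poor"
    if score <= 6:
        return "Average"
    if score <= 9:
        return "Good"
    return "Perfect"           # assumed anchor

print([band(s) for s in (0, 2, 5, 8, 10)])
```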


In this paper, the authors set out to develop and evaluate a participant rating scale for assessing high-performance teamwork skills in simulated medical settings. The primary objective was to develop a scale that is brief enough for practical use in training settings, evaluates medical team behaviors representing a range of CRM skills, and is sufficiently behavioral and transparent to be used reliably even by naïve training participants with little or no CRM knowledge or experience. Methods: resident physicians and nurses participated in a crisis resource management simulation, and the authors used an evaluation tool based on previously published targeted criteria. Following participation in a scenario, the participants retrospectively rated the performance of their team using the measure, with a 4-point rating scale, and then debriefing occurred with feedback and instruction about CRM and teamwork.

Here is an example of the assessment tool, which differs from the previous ones in that there are fewer items to be rated, fewer rating levels, and several items that can be marked as N/A if they did not occur. Participants reflected on the performance of their team and rated how frequently each behavior occurred on a 4-point scale. After analysis of the data, the authors found that the lowest end of the rating scale was used less than 5% of the time, so the two lowest levels were combined, resulting in a 3-point scale. The combination of the original design with this analysis of the data produced a brief measure that the authors argue can be used practically in training settings. A possible downside is that it focuses only broadly on CRM principles.
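The post-hoc collapse of the scale is easy to express in code. A sketch with invented ratings, using the under-5% usage threshold reported by the authors:

```python
from collections import Counter

# Invented 4-point ratings (1 = lowest) from many participants.
ratings = [2, 3, 4, 3, 2, 4, 4, 3, 1, 3, 4, 2, 3, 4, 3, 4, 2, 3, 4, 4, 3]

counts = Counter(ratings)
lowest = min(counts)
# If the lowest level is used less than 5% of the time, merge it
# into the next level up, turning the 4-point scale into a 3-point one.
if counts[lowest] / len(ratings) < 0.05:
    ratings = [lowest + 1 if r == lowest else r for r in ratings]

print(sorted(Counter(ratings).items()))
```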

The goal of this study was to develop a valid, reliable, and feasible teamwork assessment measure for emergency resuscitation team performance. The authors accomplished this through a review of the literature, drafting of a novel instrument based on expert input, review by experts for face and content validity, instrument testing on 56 recorded hospital and simulated resuscitation events for validity and reliability, and, finally, assessment of live simulated resuscitations.

To develop this tool, the authors reviewed the literature for previously published instruments, from which they abstracted and refined elements and resuscitation items into a list of 11 items grouped into three categories: leadership, teamwork, and task management. Rating prompts were included to guide completion, and all items were rated on a scale of 0-4 based on frequency of occurrence. The tool also contained an overall global rating on a scale of 1-10, since the binary nature of checklists can overlook the more holistic components of clinical competence. When assessed for validity and reliability, content validity was appropriate; internal consistency was high (Cronbach's alpha of 0.98); and inter-rater reliability was fair (Cohen's kappa of 0.55), although when the raw data were examined, ratings in the areas without agreement varied by only 1 point.
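For readers unfamiliar with the statistics mentioned, here is a from-first-principles sketch of Cohen's kappa for two raters. The ratings are invented, chosen so the result lands near the reported 0.55:

```python
from collections import Counter

rater_a = [3, 4, 2, 4, 3, 3, 4, 2, 3, 4]
rater_b = [3, 4, 3, 4, 3, 2, 4, 2, 3, 3]

n = len(rater_a)
# Observed agreement: fraction of items where the raters match.
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected chance agreement from each rater's marginal distribution.
ca, cb = Counter(rater_a), Counter(rater_b)
expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n**2

kappa = (observed - expected) / (1 - expected)
print(f"observed {observed:.2f}, expected {expected:.2f}, kappa {kappa:.2f}")
```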

Six members of an operating team prospectively collected intraoperative precursor events during 464 major adult cardiac surgical cases at 3 hospitals, yielding 1,627 reports of problematic precursor events. The authors describe this as a "simple questionnaire" that sought to collect information in real time, during the temporal phase of the operation in which the events occurred, to limit the bias arising from retrospective assessment of events after the occurrence of adverse outcomes.

This is a two-page assessment tool, designed a little differently from the others in that the data were prospectively collected. There is a large amount of data potentially to be collected, which requires extensive training. Response rates varied based on the consistency of team members working together, variations in data collection processes, and organizational differences among OR settings.

The ANTS (Anaesthetists' Non-Technical Skills) framework is a behaviorally anchored teamwork assessment tool derived from attitudinal surveys of anesthesiologists, real-time observations of anesthesiologists caring for patients, and quality assurance reviews of critical incidents with adverse outcomes. It was developed using psychological research techniques to identify the relevant skills and structure them into a meaningful hierarchy, and it was designed to describe the main non-technical skills that are important for good anesthetic practice.

To evaluate the reliability of the system, the authors expected that the elements in each category would be closely related to each other (internal consistency) and that individual elements would relate to their own categories better than to other categories. They found this to be the case, based on correlations and on consistency among elements tested with Cronbach's alpha. What about rater accuracy, the degree of raters' agreement with the baseline reference? Averaged across scenarios, 88-97% of raters matched the reference rating to within 1 scale point. Based on evaluation of participant comments, this limitation in accuracy seemed to occur because raters did not know where to set the boundaries for each scale point, and the authors hypothesize that it would be resolvable with training and calibration. Overall, regarding usability, anesthesiologists thought that the system addressed an important area of anaesthetic practice that is currently not well described, and they did not appear to have any major problems using it. Additionally, the layout and design were judged appropriate.
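The "within 1 scale point of the reference" accuracy measure is straightforward to compute; a sketch with invented rater and reference values:

```python
# Invented reference (expert) ratings and one rater's ratings.
reference = [3, 4, 2, 4, 3]
rater     = [3, 2, 2, 4, 4]

# Fraction of ratings within 1 scale point of the reference.
within_one = sum(abs(r - x) <= 1 for r, x in zip(rater, reference))
print(f"{within_one / len(reference):.0%} within 1 point")
```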

These authors attempted to construct a non-specialty-specific, widely applicable, behavior-based assessment tool for gauging teamwork skills across the health care professions. The CATS assessment is intended to evaluate team rather than individual performance, and a behavior is scored for the team regardless of which team member exhibits it. Thus, if you are looking for an evaluation tool for the individual members of a team, this may not be the best choice.

Categories: the main areas are coordination, situational awareness, and communication. The behavior markers were selected from CRM behavior-based markers used in aviation and the military, and a table of descriptions for all of the anchors was also created. Should a crisis arise, CATS has a section designed to capture the specific leadership skills needed for effective team coordination. The tool was tested by a group of physicians and non-clinical patient safety/quality improvement specialists while watching videotaped simulations of an emergency c-section as well as code training scenarios; some live surgeries were also observed. Based on these observations, the layout, scoring, and observation process were revised. Raters could choose among 3 ratings for each behavior: observed and good, variation in quality, and expected but not observed. The total score divided by the total number of marks made yields a quality score, which can be used to track teaching initiatives.
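A minimal sketch of the quality-score arithmetic just described. The numeric weights assigned to the three marks are an assumption for illustration; only the "total score / total marks made" formula comes from the description above:

```python
# Assumed weights for the three CATS marks (illustrative values).
WEIGHTS = {"good": 1.0, "variation": 0.5, "not_observed": 0.0}

# One hypothetical set of marks made during an observation.
marks = ["good", "good", "variation", "not_observed", "good", "variation"]

# Quality score = total score / total marks made.
quality = sum(WEIGHTS[m] for m in marks) / len(marks)
print(f"quality score: {quality:.2f}")  # 4.0 over 6 marks -> 0.67
```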

Best Practices
• Grounded in theory
• Multi-source measurement
• Design to meet specific learning outcomes
• Capture relevant competencies
• Measure multiple levels of performance
• Link to the training event
• Observable behaviors
• Capture performance processes
• Create diagnostic power
• Train observers
• Facilitate debriefing
Rosen MA, et al. Sim Healthcare 3:33-41, 2008.

In a short time we have covered a variety of teamwork assessment tools. Some have been designed for specific specialties, others for use across specialties, and there is also the distinction between assessing an individual within the team and assessing the team as a whole. For many of these checklists we have considered reliability (does the tool produce reproducible results?) and validity (are the results actually correct?), and why these concepts matter when designing or using a checklist to measure a complex behavior. Rosen et al. put together the list of best practices above for thinking about, and potentially designing, a team assessment tool. Now that you have a mental framework for the various types of tools available, we would like to give you an opportunity to watch a teamwork activity, use an assessment tool to rate the teamwork, and then spend some time discussing the activity and the assessment tool. To introduce and take you through the next part of this activity, I would like to turn this over to Susan Eller, an EM nurse by training and the director of interprofessional education in our simulation center.

Hunt, EA
