Module 1 Practical Evidence-Based Medicine: An Introduction


1 Module 1 Practical Evidence-Based Medicine: An Introduction
Welcome to module 1. This is an introductory module designed to give the neurology resident an overview of the principles of evidence-based medicine. Many of the concepts described in this lecture will be dealt with in more detail in subsequent modules.

2 A Scenario ? We start with a case. A patient presents within 48 hours of developing right-sided facial weakness. You are convinced that the patient has Bell’s palsy and decide to start steroids. You ask a colleague what dose of steroids she typically uses to treat Bell’s Palsy. Your colleague states that she does not use steroids in this situation because she believes they are ineffective. A discussion ensues.

3 Give steroids? Clinical Reasoning
We introduce a metaphor. We have a decision needing support. In this case, the decision pertains to whether we should use steroids for treating this patient with Bell’s Palsy. Our clinical reasoning will support this decision. Other names for evidence-based medicine (EBM) are reason-based medicine or science-based medicine. Some clinicians get understandably annoyed when they are encouraged to practice EBM. After all, it is quite natural to think that one practices reasonably. However, beyond the informal, often successful, implicit reasoning used by the practicing clinician, is an explicit, transparent way of reasoning that supplements the informal methods and improves patient care. That, in essence, is what evidence-based medicine is—explicit reason-based medicine.

4 Evidence-Based Medicine An Introduction
Different kinds of reasoning Faulty Reasoning Deduce from principles Induce from evidence A hierarchy of evidence Counting and the 2 x 2 table Sources of Error Systematic error Random error When logic isn’t enough Clinical Reasoning Give steroids? This slide presents an overview of the topics to be covered. First, we will delve into the different types of reasoning used in medicine. We start by discussing faulty reasoning. Subsequently, we discuss basic properties of evidence-based inferences, emphasizing that there is a hierarchy of evidence from weak to strong. Later, we discuss the need for quantitative analysis of evidence, while emphasizing that the statistical tools necessary to be an effective practitioner of EBM are not overwhelming. Next, we will introduce the major sources of error, touching upon systematic error and random error. Finally, we conclude with a discussion of what to do when logic is not enough to guide our decisions. Faulty reasoning: unfortunately, poorly reasoned arguments pervade medicine. Often they do so in subtle, unrecognized ways.

5 Deceitful You may have at one time heard somebody say that they don’t believe the results of a study because it was sponsored by a pharmaceutical company. The implication is that the authors of the study are intentionally deceiving us. The logical consequence of assuming that people lie is that one can choose to believe only those studies that agree with our preconceptions. Trusting researchers is a necessary step to begin the EBM process. In uncommon circumstances, that trust is violated. Fortunately, such instances of fraud seem rare. This is not to say that results cannot be biased or have a certain “spin”. However, bias is not the same as lying. We start with the assumption that researchers are not fabricating evidence.

6 “The use of steroids for Bell’s palsy has become the standard of care in the community.”
“The consequences of disfiguring facial weakness are so devastating that the use of steroids is mandatory.” Fortunately, deceit is not something commonly encountered in the medical literature or in our discussions with colleagues regarding the best course of action. However, there is a type of unreasonable argument that is very commonly encountered. This slide gives two examples of such unreasoned arguments. Superficially, both statements seem to provide reasons to support the use of steroids for Bell’s palsy.

7 Deceitful Fallacious However, on further inspection, it becomes clear that these arguments, along with many others, are logical fallacies.

8 Fallacious Irrelevant Rhetoric Psychological appeal Emotion-Driven
Persuasion Logical fallacies are unreasonable. They are irrelevant to the question that is being considered. They use rhetoric and often have a powerful psychological appeal. They are often emotion driven and are often very persuasive. Indeed, their primary purpose is to persuade. Oftentimes, they are more persuasive than reason-based arguments. They are commonly used in medicine.

9 Irrelevant to the question
For patients with Bell’s Palsy does the early use of steroids vs no steroids improve facial functional recovery? PICO When we say that fallacious arguments are unreasonable because they are irrelevant, we mean that they are irrelevant to a specific question. In this case, the question is “In patients with Bell’s palsy, does the early use of steroids improve facial functional recovery?” There are three explicitly identified parts in this question. There is a patient population with a condition. There is an intervention or an action. There is an outcome. Any reasoned discussion about whether steroids should be used must be focused on this specific question.

10 PICO Question Patient Intervention Co-intervention Outcome
Using the PICO technique aids in explicitly defining a relevant clinical question. Identifying the patient population, intervention (and co-intervention) and the pertinent outcomes is key to determining what is relevant and irrelevant.
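As an aside not found on the original slides, the PICO structure lends itself to being written out explicitly. The sketch below is purely illustrative; the class name and field names are hypothetical and simply mirror the components named on this slide.

```python
from dataclasses import dataclass

@dataclass
class PicoQuestion:
    patient: str          # P: the patient population and condition
    intervention: str     # I: the intervention or action being considered
    co_intervention: str  # C: the co-intervention (comparison)
    outcome: str          # O: the outcome that matters to the patient

# The scenario's question, made explicit
bells_palsy_question = PicoQuestion(
    patient="Patients presenting within 48 hours of onset of Bell's palsy",
    intervention="Early steroids",
    co_intervention="No steroids",
    outcome="Recovery of normal facial function",
)
print(bells_palsy_question)
```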

11 Popularity “The use of steroids for Bell’s palsy has become the standard of care in the community.” The first statement that we encountered is an example of the fallacy of popularity. It argues that most people use steroids for Bell’s palsy, therefore you should use them too. The fact that an intervention is popular does not mean that it is effective. The popularity of the intervention is irrelevant to its effectiveness. This argument, however, is oftentimes quite persuasive. How often have you heard this in your practice? Doing procedure X has become the “standard of care.”

12 Begging the Question “The consequences of disfiguring facial weakness are so devastating that the use of steroids is mandatory.” This statement is an example of the fallacy of “begging the question.” Here the discussant makes a convincing argument that bad outcomes are bad. From this however, it does not logically follow that the use of steroids is necessary because the relevant question is “do steroids reduce the probability of bad outcomes?” This is an example of begging the question. The implication is, “you have to do something.”

13 Irrelevant Outcomes I’ll be sued. I won’t be reimbursed
There are other commonly encountered fallacies related to their irrelevance to the outcome of interest. For our question the relevant outcome is normal facial function. One fallacy based on an irrelevant outcome which is quite persuasive is “I better give steroids or I will be sued if the patient does not do well”. Of course, this is irrelevant to whether steroids are effective. Another irrelevant outcome that is, unfortunately, commonly considered and may persuade clinicians to perform a procedure is whether they will be reimbursed. The next time you are rounding, listen carefully for examples of fallacious reasoning. You will hear them over and over again. Gently point them out to the perpetrator—even if it is your attending.

14 Give steroids? Getting beyond… Where I Trained…
We return to our metaphor of a decision needing support. Oftentimes when you ask clinicians why they do what they do, the common response is “where I trained, that is what we did.” Assuming you were trained by competent professors and that the science of medicine has not changed since your training, this logic seems reasonable. However, these assumptions may be incorrect. We need to get beyond this traditional justification by being more explicit.

15 Deceitful Fallacious Reasoned
We start by assuming that people are telling us the truth. Next, we identify and discard fallacious logic. It is time to be reasonable.

16 Reasoned Relevant Logical appeal Data-Driven Truth
Evidence-based inferences are reason-based. They are relevant to the clinical question at hand. They are based upon logic. They are data-driven. Their purpose is not to persuade but to find the truth.

17 Evidence-Based Medicine An Introduction
Different kinds of reasoning Faulty Reasoning Deduce from principles Induce from evidence A hierarchy of evidence Counting and the 2 x 2 table Sources of Error Systematic error Random error When logic isn’t enough Clinical Reasoning Give steroids? Overview slide: Let us discuss different types of reasoning.

18 Give steroids? Principles
We find that there are several ways of reasonably approaching the question of the use of steroids for patients with Bell’s palsy. During our training we are taught numerous established medical and neurological principles. Perhaps, these principles are sufficient to justify the use of steroids in patients with Bell’s palsy.

19 Deductive Inference From Principles
The right side of the brain controls the left side of the body My patient can’t control the left side of his body My patient has a problem with the right side of his brain Deduction is one way we reason. Here is an example of a useful deductive inference commonly made by neurologists based upon principles of neuroanatomy. Much of what we do when localizing a lesion relies on our knowledge of these basic established principles. Can we make a similar deductive inference regarding steroids and Bell’s palsy?

20 Deductive Inference CN VII swollen by inflammation CN VII is compressed within the temporal bone Steroids reduce inflammation Steroids will reduce compression of CN VII within temporal bone and speed recovery Here is one potential deductive inference that seems to support the use of steroids in patients with Bell’s palsy. We start with a couple of premises. We know from autopsy studies on people who happened to die with Bell’s palsy that the facial nerve is inflamed and swollen. We also know that cranial nerve 7 is compressed within the temporal bone. The compression leads to focal demyelination and sometimes axonal loss, resulting in facial weakness. We also know that steroids reduce inflammation and swelling. Can we not then deduce that steroids will reduce compression of cranial nerve 7 within the temporal bone and speed recovery for patients with Bell’s palsy? This is a reasonable argument. In this instance, however, the argument is not particularly convincing. There are other premises that have not been considered. For example, there is some evidence that Bell’s palsy is caused by an infection, specifically Herpes simplex. We know that steroids interfere with the immune response. Perhaps steroids would worsen outcomes by interfering with the body’s natural immunological defenses.

21 Deductive Inference CN VII swollen by inflammation CN VII is compressed within the temporal bone Steroids reduce inflammation However, in many cases, Bell’s palsy is caused by an infection, specifically Herpes simplex. We know that steroids interfere with the immune response. Perhaps steroids would worsen outcomes by interfering with immunological defenses.

22 Give steroids? Principles
As in this instance, deductions from established principles are sometimes insufficient to support a decision. They are often useful to suggest a course of action—that is generate a hypothesis—but not sufficient to completely support a decision. Principles

23 Decision Principles There are times when deductions from established principles are enough to support a decision. One such example is commonly used to criticize EBM: the parachute. The argument goes that the proponents of EBM would require a randomized controlled trial to see if it is safe to jump out of an airplane with or without a parachute. This criticism fails to recognize, as you will soon see, that evidence is only one of the pillars used in clinical reasoning. Established principles form the first pillar. The principles of gravity and aerodynamics, along with vast experience, completely support the decision to wear a parachute when you jump out of airplanes (or not to jump in the first place).

24 Reasoned Relevant Reason Logical appeal Data-Driven Deduction Truth
(Principles) We have introduced the first type of reason-based argument to help us decide whether to use steroids—a deductive argument following principles. We make these types of deductions daily in taking care of our patients. Deduction is a very useful form of reasoning.

25 Give steroids? Experience Principles
In the case of steroids and Bell’s palsy, our deductions are not sufficient to support the use of steroids. We must rely on another method of reasoning. We turn to our experience. Principles

26 Analogy and Inductive Inference
Jane has Bell’s Palsy. If we treat her with steroids, she will get better. John had Bell’s Palsy and was treated with steroids; he got better. Sue had Bell’s Palsy and was treated with steroids; she got better. Bob had Bell’s Palsy and wasn’t treated with steroids; he didn’t get better. We recall seeing three patients with Bell’s palsy over the past 5 years. We list our experience with these cases: John, Sue, and Bob. (Technically, this is a case series.) They all had Bell’s palsy. Two were treated with steroids and got better. One was not treated with steroids and did not get better. We conclude that our current patient with Bell’s palsy should be treated with steroids. This is an example of an inductive inference. It is the type of reasoning most commonly meant when one refers to evidence-based medicine. Induction does not replace deductions from established principles. Rather, the two forms of reasoning complement each other. The inductive inference is based upon knowledge of what happened to previous patients with Bell’s palsy. Each case (John, Sue, and Bob) is referred to as an analogy, or, in medical speak, an anecdote. We make an inference about what we should do for our patient based upon the outcomes of previous Bell’s palsy patients.

27 Reasoned Relevant Reason Induction (Evidence) Logical appeal
Data-Driven Truth Induction (Evidence) Deduction (Principles) This is also a reasoned argument. It is induction, not deduction. Induction flows from experience (AKA evidence). Deduction flows from established principles.

28 Evidence Cases Experience is Evidence. In a real sense, the terms experience and evidence are synonymous in the context of this discussion. “Evidence” in the context of EBM refers to our collective experience of what happens to patients with the condition of interest.

29 Evidence-Based Medicine An Introduction
Different kinds of reasoning Faulty Reasoning Deduce from principles Induce from evidence A hierarchy of evidence Counting and the 2 x 2 table Sources of Error Systematic error Random error When logic isn’t enough Clinical Reasoning Give steroids? Overview. Now we will talk about some basic properties of evidence-based inferences (or experience-based inferences). Specifically, we acknowledge that there is a hierarchy of evidence.

30 Evidence-Based Inferences…
Are never certain Are not valid or invalid Hierarchy It is important to realize that evidence-based inferences are never certain. Induction is never absolutely certain. Inductive arguments cannot be judged to be valid or invalid. They are relatively strong or relatively weak (or anything in between). When you are reviewing a clinical study of patients during journal club, it is incorrect to come to the conclusion that the study was valid or invalid. The evidence provided from such studies will be strong or weak or somewhere in between. By their nature, such studies cannot be completely convincing—even a large randomized masked trial, which would provide stronger evidence than our 3 patient case series, cannot be completely convincing. The inferences made by induction are never certain. Strong Weak

31 This inference seems weak
Jane has Bell’s Palsy. If we treat her with steroids, she will get better. John had Bell’s Palsy and was treated with steroids, he got better Sue had Bell’s Palsy and was treated with steroids, she got better Bob had Bell’s Palsy and wasn’t treated with steroids, he didn’t get better Let us go back to our 3-patient case series evidence-based inference. Now that we realize that evidence-based inferences are judged to be strong or weak, you would probably judge this inference to be weak. You are right. Why? This inference seems weak

32 Inferences from informally recalled experience often mislead
Often too few cases Selective recall: remember those That are more recent With extreme results That support our pre-conceptions Experts not immune to these limitations Inferences from informally recalled experience often mislead us. They mislead for two reasons: 1. Oftentimes our experience is too limited. Our collection of cases is too small. We have seen too few cases to be able to make an inference with any strong degree of confidence. When you look at our 3-patient case series, one of the things that made you judge the inference to be weak was the small number of cases. This is one of the reasons we seek advice from experts. An expert most likely has had experience with more cases. Her collection of cases is larger than ours. This, coupled with her superior knowledge of established principles, makes her inferences stronger than a non-expert’s. 2. However, there is another problem with informally recalled experience that not even experts are immune to. We tend to selectively recall the cases that we see. We remember those that are the most recent, those with the most extreme results, and those that support our preconceptions. These facts have been empirically validated in the psychological literature.

33 Give steroids? Evidence Principles
Formally recorded cases allow for stronger evidence-based inferences. Is there some way we can make a stronger evidence-based inference regarding the use of steroids for Bell’s palsy? Collecting a larger number of cases and not relying on our selective memories would be two steps to improve our inference. Principles

34 Of all Bell’s Palsy Cases: 319
More, unselected Cases It becomes clear just from inspection of our initial three-patient case series that we need more cases and that we want to make sure the cases are unselected. We do not want to fall into the trap of selective recall that plagues informally recalled experience. To accomplish this, let us do a hypothetical census over the past ten years of all of the Bell’s palsy cases seen at our institution. We scour all of the records from 1998 to 2008 and find that there were 319 cases of Bell’s palsy seen at our institution. As best we can, we abstract from these patients’ charts demographic characteristics, co-morbidities, steroid use and outcomes. Three hundred nineteen seems like a lot of cases. How will we make sense of these data? One way would be to list the cases like we did with our three-patient case series. Obviously, this wouldn’t be very practical with 319 cases. Census from 1998 to 2008 Of all Bell’s Palsy Cases: 319

35 319 Cases Rats.. I’m going to have to start counting these cases
Given the number of cases we conclude that we are going to have to summarize the data in some meaningful way. To do that, we need to use statistics. Do not fear. The amount of statistical knowledge needed to be an effective practitioner of EBM is not overwhelming. In fact, if you can count and do some simple arithmetic, you have all the tools you need. 319 Cases

36 Evidence-Based Medicine An Introduction
Different kinds of reasoning Faulty Reasoning Deduce from principles Induce from evidence A hierarchy of evidence Counting and the 2 x 2 table Sources of Error Systematic error Random error When logic isn’t enough Overview slide. That brings us to a discussion of counting and the usefulness of the 2 x 2 table.

37 Outcome All Good 239 Poor 80 TOTAL 319 Outcome
We take our 319 cases and we count the number of patients that had a good outcome versus a poor outcome relative to the recovery of facial function. We see that 239 had a good outcome and 80 had a poor outcome. TOTAL 319

38 Outcome All Good 75% Poor 25% TOTAL 100% Outcome Distribution
If we express those outcomes as percentages, we find that 75% of our patients had a good outcome and 25% had a poor outcome. This simple table describes the outcome distribution. It shows how the patients are distributed across the two possible outcomes. Outcome is referred to as a variable. A variable is a characteristic that can take on more than one value. In this case, it can take on two values: good or poor. TOTAL 100%
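As a minimal worked illustration of this counting step (using the module’s hypothetical census counts of 239 good and 80 poor outcomes), the percentages follow directly from the raw tallies:

```python
# Hypothetical census counts from the slides
good, poor = 239, 80
total = good + poor  # 319 cases

print(f"Good outcome: {good / total:.0%}")  # ~75%
print(f"Poor outcome: {poor / total:.0%}")  # ~25%
```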

39 Treatment All Steroids 167 No Steroids 152 TOTAL 319
Steroid Distribution Treatment All Steroids 167 No Steroids 152 This is the distribution of another variable: whether or not the patients were treated with steroids. One hundred sixty-seven patients received steroids and 152 patients did not. We are making progress in summarizing our data. TOTAL 319

40 Relationships between variables
Steroids and outcomes What we are really interested in is the relationship between variables. We want to know what the relationship is between steroids and outcomes. Were the patients who received steroids more likely to have good outcomes than patients who did not receive steroids?

41 2 X 2 Table Outcome Treatment Good Poor All Steroids 167 No Steroids
152 Total 239 80 319 To analyze this, we look at an extremely powerful tool--the 2 x 2 table. It is worth your time to understand exactly what a 2 x 2 table is telling you. This is the most important quantitative tool we have to make sense of our data. In the rows, we have patients who received steroids and patients who did not receive steroids. The “all” column at the right of the table gives the total number of patients who received steroids and the total number who did not. These are the same numbers as in the steroid distribution table. Within the columns, we list the patients with good and poor outcomes. The bottom row lists their totals, 239 and 80. This represents the outcome distribution.
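For readers who like to see the bookkeeping spelled out, here is a short sketch of how a 2 x 2 table can be tallied from individual case records. The records below are invented placeholders, not the actual census data:

```python
from collections import Counter

# Each record: (received_steroids, good_outcome) -- invented example cases
cases = [(True, True), (True, False), (False, True), (False, False), (True, True)]

table = Counter(cases)
for treated in (True, False):
    label = "Steroids   " if treated else "No steroids"
    good = table[(treated, True)]
    poor = table[(treated, False)]
    print(f"{label}  good={good}  poor={poor}  all={good + poor}")
```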

42 Expected if No Relationship
Outcome Treatment Good Poor All Steroids 125 42 167 No Steroids 114 38 152 Total 239 80 319 Filling in the 2 x 2 table, we can see the number of patients on steroids who had a good outcome versus a poor outcome, and the number of patients not on steroids who had a good outcome versus a poor outcome, as they would appear if there were no relationship between treatment and outcome.

43 Expected if no Relationship
Outcome Treatment Good Poor All Steroids 75% 25% 100% No Steroids Total For this hypothetical data, percentaging the numbers in the rows, we see that both steroid-treated and untreated patients had a 75% chance of a good outcome. There is no association between steroid use and outcome. The previous slide shows the actual number of cases in each cell of the 2 x 2 table you would expect if there were no association between steroids and outcomes. The lack of a relationship becomes clear when you calculate the percentage of patients within each row having a good outcome.
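The slides do not show how the “expected if no relationship” cells were obtained, but they are consistent with the standard calculation in which each expected cell count is the row total times the column total divided by the grand total. A minimal sketch, assuming that calculation:

```python
# Marginal totals from the hypothetical census
row_totals = {"steroids": 167, "no_steroids": 152}
col_totals = {"good": 239, "poor": 80}
grand_total = 319

# Expected cell counts if treatment and outcome were unrelated
for treatment, r in row_totals.items():
    for outcome, c in col_totals.items():
        expected = r * c / grand_total
        print(f"{treatment:12s} {outcome:5s} expected = {expected:.0f}")
# Prints roughly 125, 42, 114, 38 -- matching the slide
```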

44 “Actual” Outcome Treatment Good Poor All Steroids 150 17 167
No Steroids 89 63 152 Total 239 80 319 Let us change the results of our hypothetical census for the rest of the discussion. This is the “actual” data. This is the distribution of outcomes by steroids.

45 “Actual” Outcome Treatment Good Poor All Steroids 90% 10% 100%
No Steroids 59% 41% Total 75% 25% Calculating percentages across the rows, we find an association between steroids and having a good outcome. Ninety percent of patients who got steroids had a good outcome versus 59% of patients who did not get steroids. The 2 x 2 table makes the presence of an association clear. However, to see the association, one must look at the entire table. It would be simpler if we had a single number that described the association between steroid use and outcomes. Statisticians have spent careers devising ways of calculating a single number to describe the association between variables.

46 2 X 2 Table Outcome Treatment Good Poor All Steroids a b 167
No Steroids c d 152 Total 239 80 319 I am going to illustrate some of the commonly used measures of association here. You will hear more about these in future modules. The purpose of all of them is to describe the strength of association between variables. For example, if you received steroids, how much more likely were you to have a good outcome? The ways of calculating these measures of association are commonly described using this “a, b, c, d” notation. The specific notation is not important. Once you understand how a 2 x 2 table works, these measures are easily calculated.

47 Measures of Association
Outcome Treatment Good Poor All Steroids a b 167 No Steroids c d 152 Total 239 80 319 Outcome Treatment Good Poor All Steroids 90% 10% 100% No Steroids 59% 41% Total 75% 25% The relative risk of a good outcome is defined as the proportion of treated patients with a good outcome divided by the proportion of untreated patients with a good outcome, that is, [a/(a+b)] / [c/(c+d)]. In our example: 90% / 59% = 1.5. Here is how you calculate three measures of association using the abcd notation. The first, and the one that we will use for this example, is known as the relative risk. In this example, the relative risk of a good outcome is calculated by dividing the proportion of patients on steroids with good outcomes (the numerator) by the proportion of patients not on steroids with good outcomes (the denominator). Another measure of association is the risk difference. To calculate the risk difference you subtract the proportion of patients not on steroids with good outcomes from the proportion of patients on steroids with good outcomes. The final measure of the association between variables we will consider today is the odds ratio. The odds of a good outcome on steroids is calculated as a divided by b. The odds of a good outcome in patients not on steroids is c divided by d. Dividing the odds of a good outcome on steroids by the odds of a good outcome off steroids gives you the odds ratio. In another module, you will hear more about these measures of association. Realize that they are just methods of measuring the strength of the association between one variable and another.

48 Measures of Association
Outcome Treatment Good Poor All Steroids a b 167 No Steroids c d 152 Total 239 80 319 Outcome Treatment Good Poor All Steroids 90% 10% 100% No Steroids 59% 41% Total 75% 25% The risk difference for a good outcome is defined as the proportion of treated patients with a good outcome minus the proportion of untreated patients with a good outcome, that is, a/(a+b) - c/(c+d). In our example: 90% - 59% = 31%.

49 Measures of Association
Outcome Treatment Good Poor All Steroids a b 167 No Steroids c d 152 Total 239 80 319 Outcome Treatment Good Poor All Steroids 90% 10% 100% No Steroids 59% 41% Total 75% 25% Odds: the odds of a good outcome on treatment = a/b; the odds of a good outcome not treated = c/d. ODDS RATIO = (a/b) divided by (c/d). In our example, the odds ratio is 90%/10% divided by 59%/41% = 6.25.
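The three measures just described can be reproduced in a few lines from the “actual” census counts shown earlier (a = 150, b = 17, c = 89, d = 63). A minimal sketch using those numbers:

```python
# "Actual" hypothetical census counts from the slides
a, b = 150, 17   # steroids: good, poor outcomes
c, d = 89, 63    # no steroids: good, poor outcomes

p_treated = a / (a + b)    # proportion of treated patients with a good outcome (~90%)
p_untreated = c / (c + d)  # proportion of untreated patients with a good outcome (~59%)

relative_risk = p_treated / p_untreated    # ~1.5
risk_difference = p_treated - p_untreated  # ~0.31, i.e. 31%
odds_ratio = (a / b) / (c / d)             # ~6.25

print(f"RR = {relative_risk:.2f}, RD = {risk_difference:.0%}, OR = {odds_ratio:.2f}")
```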

50 Bell’s Palsy patients receiving steroids were 1.5 times more likely to have good outcomes. Therefore, I should offer my patients with Bell’s Palsy steroids. Maybe we have our answer. We know that in our study patients on steroids were more likely to have a better outcome. Should we be using steroids for our patients with Bell’s palsy?

51 Evidence-Based Medicine An Introduction
Different kinds of reasoning Faulty Reasoning Deduce from principles Induce from evidence A hierarchy of evidence Counting and the 2 x 2 table Sources of Error Systematic error Random error When logic isn’t enough Our hypothetical census provides stronger evidence than our 3-patient case series. How much stronger is this evidence? We will systematically consider the sources of error in our census. How strong is the inference from our study? There are two sources of error to consider.

52 Bias [Figure: a scale from 0.4 to 1.6 with markers for Truth and Measured] Systematic Error: incorrect results from poor study design or execution; more likely to be too high or too low. Risk of Bias Measured: Semi-quantitatively Class of Evidence One source of error is bias. Bias, otherwise known as systematic error, is a tendency for a study to give incorrect results that are either too high or too low. This results from problems with study design or execution. The ruler on the top of the slide shows the relative risk we calculated. Let us assume that the truth is that steroids are of no benefit in Bell’s palsy. We measured a relative risk of 1.5. The difference between the truth and what we measured could be explained by bias. Unfortunately, we usually cannot directly measure bias in a study because we do not normally know the truth. We do the study to discover the truth. Hence, we can only determine the risk of bias in a study. How can we measure the risk of bias? Unfortunately, we can only measure the risk of bias semi-quantitatively. We do not have hard quantitative methods of measuring bias risk (unlike measuring random error). Often, the risk of bias is measured by giving a study a grade. For example, the AAN grades a study using a four-tiered grading system. Class I studies are judged to have a low risk of bias and Class IV studies are judged to have a high risk of bias. You will hear more about that when we talk about practice guidelines. Bias results from poor study design or execution. The risk of bias is determined based upon study characteristics known to increase or decrease the bias in a study. What are the potential sources of bias in our study?

53 Our Study Poor Good +St -St
Patients not receiving steroids were more often older, diabetic and hypertensive Sometimes had to “guess” the outcome from the record. In our study, we started with a cohort of patients who had Bell’s palsy. They are represented in this little cartoon here by a blue dot. Each dot is a patient. Of course, I cannot show you 319 single dots. Some of the patients were given steroids and some were not given steroids. The patients were followed and their facial functional outcomes were determined to be either poor or good. As we have seen, the 2 x 2 table shows us the relationship between getting steroids and outcome. As we inspect the co-morbidities of our patients, we observe that the patients not receiving steroids were more often older, more often diabetic, and more often hypertensive than the patients getting steroids. Age, diabetes, and hypertension are independent risk factors for poor outcomes in patients with Bell’s palsy. In our study, we cannot determine whether the poorer outcome in patients who didn’t get steroids was related to steroid use or to these other risk factors. This study is confounded. Confounding is a major source of bias. Another source of bias in this study is the way we determined patient outcomes. Since we were retrospectively reviewing charts, at times we had to make an educated guess as to how the patients did—whether they had good or poor outcomes. Perhaps some of the data abstracters unconsciously tended to assume that patients who got steroids did better. This would be an example of another common source of bias--misclassification. For example, patients with poor outcomes may have been misclassified as having good outcomes.

54 Confounding Misclassification Poor Good +St -St Major Sources of Bias
Our study had two major sources of bias. There were confounding differences between the patients who did and did not get steroids, and these differences might explain the association between steroid use and improved outcome independent of whether the steroids have any effect. Another common source of bias is misclassification. Patients who were determined to have a poor outcome may actually have had a good outcome, and vice versa. They were misclassified. -St

55 Less Bias The Randomized Masked Trial
Poor Good +St How can you minimize bias? You reduce the risk of bias by changing the study design. The best design is the randomized masked trial. Why is that design better? Unlike our observational study, in the randomized masked trial, patients are randomly allocated to receive steroids or not receive steroids. Because they are randomly allocated, by chance, the patients on steroids are usually pretty similar to the patients not on steroids relative to confounding variables such as age, diabetes and hypertension. Randomization minimizes confounding. Additionally, masking the outcome assessors to treatment allocation helps avoid misclassification bias. If the outcome assessor is unaware of whether the patient is getting steroids or not, they are less likely to misclassify the outcome based upon their expectations. R -St

56 Randomized Masked Trial
What is the risk of Bias? Randomized Masked Trial Single Case Report Hierarchy Low High In our hierarchy of evidence, we can see that the studies with the highest risk of bias are case reports. The study design with the lowest risk of bias is the randomized masked trial. Our original three-patient case series is only a little stronger than a case report. Our hypothetical census is stronger, but because of confounding and misclassification it still has a moderately high risk of bias. Our census would be graded Class III using the AAN classification of evidence scheme. Our Study Class III

57 Chance Random (Sampling) Error --Incorrect result from bad luck
Equally likely to be too high or too low -Measured by: P-values Confidence intervals Chance One source of error is systematic error (bias). Systematic error relates to study design and execution. The other source of error is random error (chance). Random error relates to the number of patients in the study (sample size). Even in the best designed and executed randomized masked trial, one can get an incorrect result by bad luck. Unlike systematic error, which tends to distort the association between variables in one direction or another, random error is equally likely to make the result too high or too low. There is an entire module dedicated to a discussion of random error. The essence of the concept is fairly easy to grasp. Consider a fair coin: the truth is that if you toss it an infinite number of times, half the time it would come up heads and half the time it would come up tails. If you tossed the coin only four times (a sample of 4), there is a reasonable chance that, just by bad luck, the coin would come up heads four times. You could be misled to believe that the truth is that the coin is weighted to always come up heads. That is an example of random error, or sampling error. By chance, the coin came up heads four times in a row. The more tosses you make, the less likely it is that a fair coin would come up heads each time. The more cases you include in a study, the less likely you are to get an incorrect result by chance. There is an entire branch of mathematics dedicated to measuring the contribution of chance to the results of a clinical trial. Unlike bias, the contribution of random error can be precisely measured using p-values or, better, confidence intervals. As you will find out, calculating these numbers is not difficult and can easily be done with a simple spreadsheet. The important thing is to understand the concept.
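To make the coin example concrete, the chance of getting all heads purely by luck shrinks rapidly as the number of tosses (the sample size) grows. A minimal sketch of that arithmetic:

```python
# Probability that a fair coin comes up heads on every toss, by chance alone
for n_tosses in (4, 10, 20, 50):
    p_all_heads = 0.5 ** n_tosses
    print(f"{n_tosses:2d} tosses: P(all heads) = {p_all_heads:.3g}")
# With 4 tosses the probability is 0.0625 -- a misleading run of heads is quite plausible.
# With 50 tosses it is about 9e-16 -- essentially impossible by chance.
```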

58 Were there enough cases?
Hierarchy More Less How much random error is in our hypothetical study? To determine this, we can plug the numbers from our 2 x 2 table into a calculator to find the 95% confidence interval of the relative risk of 1.5. (Such calculators can be found on the web.) We find that the 95% confidence interval is 1.3 to 1.8. That would mean that if we repeated the study in the same way, 95 out of 100 times we would expect to get a result between 1.3 and 1.8. This confidence interval is relatively narrow-- there is not a lot of random error in our hypothetical study. Relative Risk of Good outcome on steroids: 1.5 (95% confidence interval 1.3 to 1.8)
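The slide relies on an online calculator, but the quoted interval can also be reproduced by hand with the common log method for the confidence interval of a relative risk (an assumption on my part about which method the calculator uses). A sketch:

```python
import math

# "Actual" census counts: a, b = steroids good/poor; c, d = no steroids good/poor
a, b, c, d = 150, 17, 89, 63

rr = (a / (a + b)) / (c / (c + d))
# Standard error of ln(RR), then a 95% interval back-transformed from the log scale
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
low = math.exp(math.log(rr) - 1.96 * se_log_rr)
high = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI {low:.1f} to {high:.1f}")  # about 1.5 (1.3 to 1.8)
```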

59 Magnitude of Effect + Quality of Evidence
Conclusion Magnitude of Effect + Quality of Evidence Strong Relative Risk Good Outcome 1.5 (95% CI 1.3 to 1.8) + Class III After assessing our hypothetical study’s risk of bias and its random error, as well as the magnitude of the association between steroids and facial functional outcomes, we come to a conclusion. If our hypothetical study were the strongest evidence available regarding the question of steroid use in Bell’s palsy, we would have to admit that the evidence is insufficient to support our decision because of a moderately high risk of bias. (In module 15 you will find out that there is other evidence regarding this question.) Weak

60 Give steroids? Evidence Principles
It turns out for this question, with the evidence we have reviewed, that principles and evidence are not enough. Principles

61 Evidence-Based Medicine An Introduction
Different kinds of reasoning Faulty Reasoning Deduce from principles Induce from evidence A hierarchy of evidence Counting and the 2 x 2 table Sources of Error Systematic error Random error When logic isn’t enough Clinical Reasoning Give steroids? Overview: What do we do when logic is not enough? How do we fill in the gap to support our decision?

62 Reasoned Relevant Reason Logical appeal Intuition Data-Driven
Truth Intuition (Judgment) Induction (Evidence) Deduction (Principles) In that instance, we rely on the informal reasoning known as intuition. We use our judgment to fill in the gap. We use our best guess.

63 Give steroids? Judgment Evidence Principles
Most reasoned clinical decisions can be thought of as being based upon three pillars. The first pillar is that of principles, the second evidence, and the third judgment. The relative contribution of each pillar depends on the clinical question, how much is known about the specific disease state and therapy, and the number and quality of relevant clinical studies. Rather than just thinking about how you were trained to do things, it is better to think explicitly about your decisions in this way. This is practicing evidence-based medicine. Principles

64 Judgment Hypothesis Give Steroids? Individualization
What you do when principles and evidence are not enough: Your best guess. Important for us (and experts) to distinguish hypothesis from principles. Evidence separates them. Individualization Often the evidence does not perfectly apply to your patient Taking into account your patient’s values Judgment Evidence Principles Give Steroids? What is this judgment? It has two components. The first is hypothesis generation. When principles and evidence are not enough to support a decision, you make your best guess. You may believe some clinicians are better at guessing than others. When you do not know the answer, it is hard to know whose guess (if any) is correct. It is important to be able to distinguish judgment from principles—to distinguish the hypotheses you generate through judgment from established principles. Know what you do not know. Know what is not known. The second important aspect of judgment is individualization. All patients are unique. No study--even the best randomized masked trial--is completely applicable to your patient. The evidence that comes from a study is helpful. It can be a major reason why you make a particular decision, but you must always judge how well it applies to your patient’s unique circumstances.

65 Action Incorporating patient values
Benefits Risks Judgment requires a consideration of a patient’s particular values. For one patient, the relative benefits and risks of a procedure may lead them to make one decision, while another patient under the same circumstances, but with a completely different value set, might make a different choice. Oftentimes, we make these value judgments implicitly. They are difficult to quantify. Be aware that you are making them.

66 Summary of Concepts Three pillars of clinical decision making
Judgment Evidence Principles Evidence varies from Strong to Weak If you can count, you can practice EBM Two sources of Error: Bias and Chance You have heard an overview of important concepts of evidence-based medicine. Remember that clinical decision making can be explicitly thought of as being built upon three pillars: #1, established principles of medicine and neurology; #2, evidence; and #3, judgment, when evidence and principles are not enough. Evidence exists in a hierarchy that extends from the relatively weak informally recalled experience of an individual clinician to the large randomized masked trial. Realize that counting, the 2 x 2 table and simple arithmetic calculations are the only statistical tools needed to be an effective practitioner of EBM. Finally, understand that the strength of evidence is determined by assessing two sources of error: bias and chance.

67 As you have also seen, EBM follows a predefined process.
Define a question to determine what is relevant. Gather evidence: collections of similar cases. Determine how good that evidence is and what it shows. Use that evidence to help make a decision on how to act in the best interest of your patient. These EBM concepts and processes will be discussed in more detail throughout the course.

68 Decision Judgment Principles Action Question Conclusion Evidence

