Instructor Resource Chapter 18. Copyright © Scott B. Patten, 2015. Permission granted for classroom use with Epidemiology for Canadian Students: Principles, Methods & Critical Appraisal (Edmonton: Brush Education Inc.).

Chapter 18. Causal judgement in epidemiology

Objectives Describe the role of judgement in causal reasoning in epidemiology. Describe causal reasoning in the context of primary, secondary, and tertiary prevention. List classic criteria for weighing judgements about causality. Describe how greater-than-additive risk helps identify causal mechanisms.

Critical appraisal and causal judgement Findings of causality first require valid epidemiological estimates—estimates, in other words, that arise from properly conducted, well-designed studies. Findings of causality also require judgement. Causal significance is never self-evident—so, in critical appraisal, judgements about causation must always be carefully weighed. Lists of criteria can help because they provide a structure for weighing causal evidence.

Criteria for causal judgement The most widely used criteria derive from 2 classic sources: a 1965 paper by the British epidemiologist Sir Austin Bradford Hill, and a 1964 report on smoking and health from the US Surgeon General.

Epidemiology and causation The concept of causation, as it applies in epidemiology, differs from concepts of causation encountered in other types of scientific research. Rather than searching directly for the underlying pathophysiological events that lead to disease, epidemiologists seek modifiable links in specific causal chains of events that ultimately lead to diseases and other adverse health outcomes.

Epidemiology and causation (continued) This reflects the connection of epidemiology to the public-health goal of prevention. It is not necessary to understand every link in a chain of causal events to prevent a disease—interrupting a single link may be enough.

Epidemiology and causation (continued) The ultimate test for an epidemiological finding is its ability to make a difference in clinical practice or public health. Typically, this would unfold in terms of primary, secondary, and tertiary prevention (but especially with the ideal goal of primary prevention). These terms will be defined in a few minutes.

Critical appraisal In critical appraisal, you start by assessing the validity of a study’s estimates of epidemiological effect before you consider any claims it makes about causality. This is a hierarchical relationship: valid estimates of exposure-disease association precede conclusions about exposure-disease causality (disease etiology).

Critical appraisal (continued) The hierarchical relationship between validity and causality usually means that an invalid estimate cannot support causal inference. However, it is important not to be too dogmatic about this. Nondifferential misclassification bias, for example, makes associations appear weaker than they are, so there may be instances in which an invalid estimate nevertheless provides evidence of a strong association.
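To make the bias-toward-the-null point concrete, here is a minimal Python sketch using hypothetical cohort counts and assumed sensitivity and specificity of exposure measurement (none of these numbers come from the chapter); applying the same misclassification to cases and non-cases pulls the observed risk ratio toward 1.

```python
# A minimal sketch (hypothetical numbers, not from the chapter) showing how
# nondifferential exposure misclassification attenuates a risk ratio toward 1.

def misclassify(true_exposed, true_unexposed, sens, spec):
    """Redistribute counts given imperfect exposure measurement."""
    observed_exposed = sens * true_exposed + (1 - spec) * true_unexposed
    observed_unexposed = (true_exposed + true_unexposed) - observed_exposed
    return observed_exposed, observed_unexposed

# True cohort: 1,000 exposed (risk 0.20) and 1,000 unexposed (risk 0.10),
# so the true risk ratio is 2.0.
cases_exp, cases_unexp = 200, 100
noncases_exp, noncases_unexp = 800, 900

# Assumed sensitivity/specificity of exposure measurement, identical in cases
# and non-cases (which is what makes the misclassification nondifferential).
sens, spec = 0.8, 0.9

obs_cases_exp, obs_cases_unexp = misclassify(cases_exp, cases_unexp, sens, spec)
obs_non_exp, obs_non_unexp = misclassify(noncases_exp, noncases_unexp, sens, spec)

risk_exp = obs_cases_exp / (obs_cases_exp + obs_non_exp)
risk_unexp = obs_cases_unexp / (obs_cases_unexp + obs_non_unexp)
print(f"Observed risk ratio: {risk_exp / risk_unexp:.2f}")  # about 1.6, down from 2.0
```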

Critical appraisal (continued) As a task of critical appraisal, assessing a study’s conclusions about causality differs from assessing its vulnerability to random error and bias. Random error and bias are relatively objective concepts: confidence intervals quantify vulnerability to random error, and concrete mechanisms link methodological features such as error rates and selection probabilities to systematic error. Causality, however, is more subjective.
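As a hedged illustration of how a confidence interval quantifies vulnerability to random error, this sketch computes a 95% confidence interval for a risk ratio from hypothetical cohort counts; the counts and the use of the standard large-sample formula for the standard error of the log risk ratio are assumptions made for illustration, not material from the chapter.

```python
import math

# Hypothetical cohort counts (for illustration only): 200/1,000 exposed cases
# versus 100/1,000 unexposed cases, i.e. a risk ratio of 2.0.
a, n1 = 200, 1000   # exposed: cases, total
c, n0 = 100, 1000   # unexposed: cases, total

rr = (a / n1) / (c / n0)

# Large-sample standard error of ln(RR), then a 95% confidence interval.
se_log_rr = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI {lower:.2f} to {upper:.2f}")  # RR = 2.00, 95% CI 1.60 to 2.50
```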

Critical appraisal (continued) Critical appraisers often find themselves taking issue with investigators’ judgements about the causal implications of their findings—even findings based on estimates that they agree are valid. Critical appraisers sometimes conclude that such investigators are “biased.” This is a different use of “bias” from the technical sense of systematic error and internal validity discussed so far in the course.

Primary prevention Primary prevention seeks to interrupt a destructive chain of causal events. For example: if people are becoming sick from drinking contaminated water, the contaminated water supply is a modifiable link. Chains of causality leading up to this modifiable link may explain why a population has become exposed to unclean water—e.g., war or deficient civic institutions. Additional links on the causal chain may also explain why the unclean water makes people sick: these links may involve reproduction of microbes, molecular actions of toxins, etc. But the key preventive link is the modifiable link.

Secondary prevention Secondary prevention, which involves screening for diseases, also brings a narrow focus to chains of causality. The links between screening activities and improved health depend on many factors—for example, whether people participate in screening, whether screening tests are accurate, and whether follow-up treatments are effective. These are all links on a causal chain.

Tertiary prevention Tertiary prevention—reducing negative impacts of established disease—targets specific links in chains of causality that lead to better outcomes of clinically established diseases (e.g., the administration of drugs or delivery of other treatment or rehabilitative practices). The effectiveness of a treatment—a drug, for example—depends on chains of causal events ranging from regulatory and economic issues (both strongly related to availability of a drug) to the drug’s molecular actions.

Causal webs Chains of causality link together into causal webs. Within this universe of causality, epidemiological studies tend to be opportunistic: they look for the best opportunities to intervene in processes of causation.

Causal criteria: strength of association, consistency, specificity, temporality, biological gradient, biological plausibility, coherence, experimental evidence, and analogy.

Strength of association Strength of association is a causal criterion identified by Bradford Hill. It says that strong associations are more likely to indicate causality than weak associations. Epidemiological studies are almost always subject to some source of error, such as random error, bias, or confounding. It is less likely that data indicating a strong association would arise from minor study-design defects. So, although a weak association does not preclude causality, a strong association is more suggestive.

Strength of association (continued) Note that “strength” depends on the metric of association: a large ratio may correspond to a small absolute difference, and vice versa. Precision also matters: an imprecise estimate with a strong point estimate may still be consistent with a weak effect.
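A minimal sketch with invented risks shows how the same kind of data can look “strong” on a ratio scale while the absolute difference is tiny, and vice versa (all numbers are hypothetical, chosen only for illustration).

```python
# Invented risks illustrating that "strength" depends on the metric:
# a large ratio can accompany a tiny absolute difference, and vice versa.
examples = {
    "rare outcome":   (0.003, 0.001),  # risk in exposed, risk in unexposed
    "common outcome": (0.45, 0.30),
}

for label, (risk_exposed, risk_unexposed) in examples.items():
    risk_ratio = risk_exposed / risk_unexposed
    risk_difference = risk_exposed - risk_unexposed
    print(f"{label}: risk ratio = {risk_ratio:.1f}, risk difference = {risk_difference:.3f}")

# rare outcome:   risk ratio = 3.0, risk difference = 0.002
# common outcome: risk ratio = 1.5, risk difference = 0.150
```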

Consistency The criterion of consistency refers to consistent findings among many studies investigating a possible association (i.e., within a literature of studies), not to the findings of any particular study. Again, this criterion reflects practical realities. No single epidemiological study is perfect. All studies are vulnerable—in varying ways and degrees—to random and systematic error. However, different studies are likely to have somewhat different vulnerabilities to error. A consistent result seen in different types of studies conducted in different times and places is more likely to represent a causal effect.

Specificity The criterion of specificity holds that a specific cause produces a specific effect. It is rarely used; however, it was included in Hill’s paper and also in the Surgeon General’s report. The idea is rooted in infectious disease, where it was once believed that an infectious agent would rarely be present in healthy individuals (e.g., see Koch’s or Pasteur’s postulates).

Specificity (continued) The most important risk-factor associations for chronic diseases tend not to be in any way specific. For example, obesity is a risk factor for type 2 diabetes, but lots of obese people do not have diabetes and lots of people with diabetes are not obese.

Temporality Temporality refers to the question of whether exposure to a risk factor precedes the disease outcome. This criterion is important because a cause must precede its effect. Studies that fail to clarify the temporal relationship between exposure and disease therefore fail to establish a necessary feature of a causal relationship. Some study designs are inherently better at clarifying the issue of temporality than others.

Biological gradient Biological evidence may suggest that a higher level or intensity of exposure should lead to greater incidence of disease. In such cases, a biological gradient seen in an epidemiological study may provide supportive evidence for causation. However, a biological gradient might not be seen even if there is a causal effect, for example because of a saturation effect or a threshold effect.
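A small sketch with invented incidence figures illustrates a saturation effect: risk rises across the first exposure levels and then plateaus, so the overall dose-response gradient flattens even though a causal effect is present.

```python
# Invented incidence figures illustrating a saturation effect: a real causal
# effect is present, but the dose-response gradient flattens at higher doses.
exposure_levels = ["none", "low", "medium", "high"]
risk = [0.02, 0.06, 0.10, 0.10]  # risk plateaus between "medium" and "high"

for level, r in zip(exposure_levels, risk):
    print(f"{level:>6}: risk = {r:.2f}")

# Step-wise risk differences between adjacent exposure levels:
steps = [round(higher - lower, 2) for lower, higher in zip(risk, risk[1:])]
print("step-wise risk differences:", steps)  # [0.04, 0.04, 0.0] -> gradient flattens
```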

Biological plausibility This criterion links the interpretation of epidemiological information to biological data from basic science. If an epidemiological estimate suggests an effect that is not very plausible according to other elements of current biological knowledge, it is likely that the finding represents an artifact rather than truth.

Biological plausibility (continued) The main problem with this criterion is philosophical. Thinking back to the story of John Snow, it would have been a mistake to dismiss the link between sewage-contaminated water and cholera just because the biological agent (the cholera bacillus) was unknown. Epidemiological data often precede, anticipate, or motivate the collection of relevant biological data and should therefore not be subordinated to biological data.

Coherence This criterion is similar to biological plausibility, but engages a wider field of inquiry, beyond biology. Coherence refers to the fit of an epidemiological estimate with results from any type of scientific research or theory. As with the issue of biological plausibility, such considerations are important, but should not be viewed as necessary features of causal interpretation.

Experimental evidence Hill’s paper referred to experimental or “semiexperimental” evidence as a criterion for weighing causality. In current epidemiology, experimental evidence generally means evidence from studies that employed randomization. Randomization is a uniquely powerful strategy for addressing the issue of confounding in epidemiological research, but all strategies to control confounding—not just experimentation or randomization—are central to causal reasoning in epidemiology.

Analogy Analogy refers to causal inference drawn from comparison or correspondence to other causal associations. Hill listed this criterion, but its usefulness is questionable and it is rarely used.

Component cause models While criteria are often used to weigh causal judgements, another model of causation has been put forward (by Rothman). This is the component cause model or “causal pie” model. It proposes that sets of component causes combine to produce a causal mechanism.

Component cause model (continued) The model creates some new terminology and simplifies other terminology. A “necessary” cause is present in each causal combination of component causes. A triggering or precipitating exposure is just the last exposure to complete a causal mechanism. The induction period is the period of time between exposure to a component cause (an identified risk factor) and the other exposures that complete the causal mechanism.

Component cause model (continued) The component cause model suggests that greater-than-additive interactions between 2 component causes indicate their coparticipation in at least 1 causal mechanism. This has interesting implications for epidemiological analyses, which have tended to use relative measures (e.g., risk ratios rather than risk differences). Component cause models suggest that difference-based measures may align better with causal reasoning.

Component cause model (continued) A statistical interaction between an exposure and an extraneous variable is usually taken as an indication of effect modification. If the parameter in question is a risk difference, the interaction may indicate that the 2 variables participate together in an etiological mechanism.
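The following sketch, using hypothetical risks for two exposures A and B, shows one way to check for a greater-than-additive (superadditive) interaction on the risk-difference scale; a positive interaction contrast is the kind of signal the component cause model interprets as coparticipation in at least one causal mechanism. The risks are invented for illustration, not drawn from the chapter.

```python
# Hypothetical risks for two exposures A and B (not from the chapter), used to
# check for greater-than-additive interaction on the risk (difference) scale.
r00 = 0.01  # risk with neither exposure
r10 = 0.03  # risk with A only
r01 = 0.04  # risk with B only
r11 = 0.10  # risk with both A and B

# If A and B acted through entirely separate mechanisms, their joint risk
# would be roughly additive: baseline risk plus each exposure's excess risk.
expected_if_additive = r00 + (r10 - r00) + (r01 - r00)   # 0.06
interaction_contrast = r11 - expected_if_additive        # +0.04

print(f"expected joint risk under additivity: {expected_if_additive:.2f}")
print(f"observed joint risk:                  {r11:.2f}")
print(f"interaction contrast:                 {interaction_contrast:+.2f}")
# A positive contrast (greater-than-additive interaction) suggests that A and B
# participate together in at least one causal mechanism.
```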

End