Validity in epidemiological research Deepti Gurdasani.

What do we mean by validity?

General and epidemiological definitions of validity — “The truth, soundness or force of a statement” — “The extent to which a variable or intervention or study measures what it is supposed to measure or accomplishes what it is supposed to accomplish” — “In short, were we right?”

Internal and external validity Internal validity — the study's ability to examine what it sets out to examine External validity — the extent to which the study's results can be appropriately generalised to the population at large — there is no external validity (generalisability) without internal validity — but a study can have internal validity without external validity

So what factors do we need to consider when assessing the validity of results from an epidemiological study? (Is the association, if any, real?) — chance — bias — confounding

Chance

Random error — Chance — “a measure of how likely it is that an event will occur” — The presence of random variation must always be kept in mind when designing studies and when interpreting data and results — Relates to sampling variation (we only sample a subset of the population) and study size (logistical restrictions on how many people we can sample) — Epidemiology attempts to measure the “true” association between an exposure and disease
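Sampling variation can be made concrete with a small simulation (a hedged sketch: the 10% "true" risk, the sample sizes, and the 200 repeats are illustrative assumptions, not figures from the lecture). Repeated random samples from the same population give risk estimates that scatter around the truth, and the scatter shrinks as the sample grows:

```python
# Sketch of random error (sampling variation): repeated random samples
# from a population with a fixed "true" risk give estimates that scatter
# around the truth; larger samples scatter less. Numbers are illustrative.
import random
import statistics

random.seed(1)
TRUE_RISK = 0.10  # assumed "true" disease risk in the source population

def sample_risk(n):
    """Estimate the risk from one random sample of n individuals."""
    cases = sum(random.random() < TRUE_RISK for _ in range(n))
    return cases / n

# Spread (standard deviation) of the estimate across 200 repeated samples,
# for increasing sample sizes.
spread = {}
for n in (50, 500, 5000):
    estimates = [sample_risk(n) for _ in range(200)]
    spread[n] = statistics.stdev(estimates)
    print(f"n={n}: spread of risk estimates = {spread[n]:.4f}")
```

The point the slide makes: random error never disappears, but its magnitude is driven by study size, which is why it features in both design and interpretation.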

Statistical inference — We use a statistical framework to interpret our data — Informs our decision as to whether the observed association may be real — Statistical hypothesis testing

Power, type I and type II errors

— H1 true and H0 “rejected” (a) – correct decision
— H0 true and H0 “rejected” (b) – incorrect decision – type I error
— H1 true and H0 “accepted” (c) – incorrect decision – type II error
— H0 true and H0 “accepted” (d) – correct decision

Test \ Truth        H1 true        H0 true
H0 “rejected”       a (correct)    b (type I)     (a+b)
H0 “accepted”       c (type II)    d (correct)    (c+d)
                    (a+c)          (b+d)          N

Probability of a type I error (“rejecting” the null when it is true) = α = b / (b+d)
Probability of a type II error (“accepting” the null when it is false) = β = c / (a+c)
Statistical power (probability of “rejecting” the null when it is false) = 1 − β = a / (a+c)
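The type I error rate and power can be estimated by simulation (a hedged illustration: the two-sample z-test with known unit variance, n = 50 per group, a true mean difference of 0.5, and a two-sided 5% significance level are all assumptions chosen for the sketch, not values from the lecture):

```python
# Monte Carlo estimate of the type I error rate (alpha) and power (1 - beta)
# for a two-sample z-test. All design values here are illustrative.
import math
import random

random.seed(42)

def two_sample_z(xs, ys):
    """Two-sample z statistic assuming known unit variances."""
    n, m = len(xs), len(ys)
    diff = sum(xs) / n - sum(ys) / m
    se = math.sqrt(1.0 / n + 1.0 / m)
    return diff / se

def rejection_rate(mu_x, mu_y, n=50, sims=2000, z_crit=1.96):
    """Fraction of simulated studies in which H0: mu_x == mu_y is rejected."""
    rejections = 0
    for _ in range(sims):
        xs = [random.gauss(mu_x, 1.0) for _ in range(n)]
        ys = [random.gauss(mu_y, 1.0) for _ in range(n)]
        if abs(two_sample_z(xs, ys)) > z_crit:
            rejections += 1
    return rejections / sims

alpha_hat = rejection_rate(0.0, 0.0)  # H0 true: estimates the type I error rate
power_hat = rejection_rate(0.5, 0.0)  # H0 false: estimates power (1 - beta)
print(f"estimated type I error rate: {alpha_hat:.3f}")
print(f"estimated power:             {power_hat:.3f}")
```

With these assumed values, the rejection rate under a true null sits near the nominal 5%, while the rejection rate under the alternative estimates the study's power.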

Bias

Bias — a definition — “a partiality that prevents objective consideration of an issue or situation” — “a one-sided inclination of the mind”

Bias in epidemiology — “deviation of results or inferences from the truth” — “any trend in the collection, analysis, interpretation, publication, or review of data that can lead to conclusions that are systematically different from the truth” — Last, 2001

Bias in epidemiology is a systematic error — at the level of the study (design) — if the design, process and procedures of the study are unbiased — then the study is valid — evaluating the role of bias as an alternative explanation for an observed association is a fundamental step in interpreting any study result

Types of bias Two principal types of within-study bias, corresponding to the two stages of a study: — selection bias (stage 1: select groups to study) — information bias (stage 2: gather information on each group)

Selection bias — this bias occurs when there is a systematic difference in the characteristics of people who were selected for the study and those who were not — AND where those characteristics are related to the exposure and outcome of interest

Selection bias: one relevant group in the population (exposed cases in this example) has a higher probability of being included in the study sample Individuals have different probabilities of being in the study according to their exposure and outcome status

Approaches to reducing bias — bias cannot usually be controlled for or measured directly (unlike confounding) — it must therefore be avoided, or reduced, in the design of the study — in some contexts you can estimate the possible impact of a biased design — case-control studies — cohort studies

Approaches to reducing bias — case-control studies Selection bias — ensure response rates are equivalent between cases and controls — controls must be from the same sampling frame as cases (the same population base) - thus have the potential to be a case from that population — select more than one control group Information bias — adhere to protocol-based data collation without reference to case-control status — validate and standardize exposure or outcome assessment (collate information that predates the outcome) — use multiple data sources

Approaches to reducing bias — cohort and cross-sectional studies Selection bias — ensure response rates are high — collate data on non-responders (demographic information) - can provide insights into possible differential loss (bias) and generalisability — estimate the impact of differential loss — reduce loss to follow-up and attrition (central “flagging” for case ascertainment, hospital records) Information bias — validate and standardize exposure or outcome assessment — detection bias — compare effect estimates by disease stage — use multiple information sources

Confounding

To confound — a definition — “to cause to become confused or perplexed” — “to fail to distinguish; mix up” — “that which contradicts or confuses”

Epidemiological definition of confounding — “Distortion of the effect estimate of an exposure on an outcome, caused by the presence of an extraneous factor associated both with the exposure and the outcome,” Last, 4th edition

Confounding in epidemiology — confounding is a central issue in epidemiology — this phenomenon can distort the observed exposure—disease relation — leading to an inappropriate interpretation of the exposure—disease relation

Confounding and epidemiology: a further discussion of the problem — experimental and observational study designs — randomisation — equality of comparisons

Bias and confounding — bias (systematic error) leads us to observe an association in our sample population that differs from that which exists in the total population — confounding is not an artefact; given the absence of systematic and random error, we would see the same association between exposure and disease in our sample population as in the total population — concern for confounding comes into play when we interpret the observed association

Random error and confounding

Examples of confounding in epidemiology

Birth order and Down syndrome [figure: prevalence of Down syndrome at birth by birth order; x-axis: birth order, y-axis: affected babies per 1000 live births]

Maternal age and Down syndrome [figure: prevalence of Down syndrome at birth by maternal age; x-axis: maternal age, y-axis: affected babies per 1000 live births]

Birth order, maternal age and prevalence of Down syndrome

A classical definition of confounding — confounding can be thought of as a mixing of effects — a confounding factor, therefore, must have an effect and must be imbalanced between the exposure groups to be compared — (1) a confounder must be associated with the disease — (2) AND, a confounder must be associated with the exposure

Assessment of confounding - 1 Stratified analysis — previous example illustrated a stratified analysis — compare (“eye-ball”) effect estimates among strata of the possible confounder — do you still see an association within strata? — is the crude estimate similar to the stratum-specific estimates? — if not – the association is likely to be due to confounding: that is, the effect of the risk factor is simply due to its association with the extraneous factor (the confounder)
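The "eye-ball" comparison of crude and stratum-specific estimates can be sketched numerically (a hedged illustration: the counts below are hypothetical, constructed so that the stratifying factor fully explains the crude association, in the spirit of the birth-order/maternal-age example):

```python
# Stratified analysis sketch: a crude risk ratio suggests an association,
# but stratum-specific risk ratios show none, because exposure is more
# common in the higher-risk stratum. All counts are hypothetical.

def risk_ratio(exp_cases, exp_total, unexp_cases, unexp_total):
    """Risk ratio: risk in the exposed divided by risk in the unexposed."""
    return (exp_cases / exp_total) / (unexp_cases / unexp_total)

# stratum -> (exposed cases, exposed total, unexposed cases, unexposed total)
# Within each stratum the risk is identical in exposed and unexposed,
# but exposure is far more common in the high-risk ("old") stratum.
strata = {
    "young": (2, 100, 18, 900),   # risk 0.02 in both exposure groups
    "old":   (45, 900, 1, 20),    # risk 0.05 in both exposure groups
}

crude = risk_ratio(
    sum(s[0] for s in strata.values()),
    sum(s[1] for s in strata.values()),
    sum(s[2] for s in strata.values()),
    sum(s[3] for s in strata.values()),
)
stratum_rrs = {name: risk_ratio(*counts) for name, counts in strata.items()}

print(f"crude risk ratio: {crude:.2f}")  # suggests an association
for name, rr in stratum_rrs.items():
    print(f"  {name}: RR = {rr:.2f}")    # no association within strata
```

Because the crude estimate differs markedly from the (null) stratum-specific estimates, the crude association here is entirely due to confounding by the stratifying factor.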

A third requirement for a confounder

Requirements for a confounder — (1) a confounder must be associated with the disease — (2) and, a confounder must be associated with the exposure — (3) and, a confounder must NOT have an effect on the exposure (or vice versa) THUS, a confounder has no causal relation with the exposure of interest

“Intermediates” and confounding — variation in a factor that is caused by the exposure (and is thus an intermediate step in the causal pathway between exposure and disease) is likely to have properties (1) and (2) – see previous slide — causal intermediates are NOT confounders - they are part of the association we are studying — we therefore do not (usually) take the effect of intermediates into account in our analysis

Confounders or intermediates? — Social class, dietary patterns and risk of coronary heart disease (CHD)? — Genetic variation in the CRP gene, CRP levels in blood, and CHD risk? — Aspirin use, vitamin intake and risk of colorectal cancer?

Classical definition of confounding — (1) a confounder must be associated with the disease — (2) and, a confounder must be associated with the exposure — (3) and, a confounder must not have an effect on the exposure (or vice versa)

Other definitions of confounding — collapsibility — counterfactual

Assessing and controlling for confounding: study design — randomisation — matching (frequency and individual)* — restriction Limitation/disadvantages? *efficiency?

Assessing and controlling for confounding: analytical approaches — stratified analysis* — standardisation — conditional analysis — multivariable (regression) analysis Limitation/disadvantages? * Sparse data problem, residual confounding
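One classical analytical approach from the list above, pooling stratum-specific estimates, is the Mantel-Haenszel method. A minimal sketch (the 2x2 counts are hypothetical, chosen only to show the calculation):

```python
# Mantel-Haenszel pooled odds ratio: combines stratum-specific 2x2 tables
# to give a single exposure-disease estimate controlled for the stratifying
# (confounding) factor. Counts are hypothetical.

def mantel_haenszel_or(tables):
    """tables: list of (a, b, c, d) per stratum, where
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

tables = [
    (10, 90, 5, 95),   # stratum 1 of the confounder
    (40, 60, 25, 75),  # stratum 2 of the confounder
]
or_mh = mantel_haenszel_or(tables)
print(f"Mantel-Haenszel OR = {or_mh:.2f}")
```

Like any stratified method, it inherits the sparse-data problem flagged on the slide: with many strata and few subjects per stratum, the stratum-specific tables become unstable, which is one motivation for multivariable regression instead.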

Actual and potential confounders — analytical strategies and the choice of confounders — conceptual (causal) choices — it is inappropriate to rely on statistical significance to identify confounders, although it can inform conceptual choices