Validity in epidemiological research Deepti Gurdasani.


1 Validity in epidemiological research Deepti Gurdasani

2 What do we mean by validity?

3 General and epidemiological definitions of validity — “The truth, soundness or force of a statement” — “The extent to which a variable or intervention or study measures what it is supposed to measure or accomplishes what it is supposed to accomplish” — “In short, were we right?”

4 Internal and external validity
— internal validity: the study has the ability to examine what it sets out to examine
— external validity: the study's results can appropriately be generalised to the population at large
— there is no external validity (generalisability) without internal validity
— but a study can have internal validity without external validity

5 So what factors do we need to consider when assessing the validity of results from an epidemiological study? (Is the association, if any, real?) — chance — bias — confounding

6 Chance

7 Random error — Chance — “a measure of how likely it is an event will occur” — The presence of random variation must always be kept in mind in designing studies and in interpreting data and results — Relates to sampling variation (we only sample a subset of the population) and study size (logistical restrictions on how many people we can sample) — Epidemiology attempts to measure “true” association between an exposure and disease
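The role of sampling variation can be made concrete with a short simulation. This sketch is an illustrative addition, not from the slides; the true risk, sample sizes, and seed are all made-up values.

```python
import random

random.seed(42)

TRUE_RISK = 0.20        # the "true" disease risk in the population (assumed)
SAMPLE_SIZE = 100       # logistical limit on how many people we can sample

def sample_risk(n, p):
    """Draw n individuals at random and return the observed disease proportion."""
    cases = sum(1 for _ in range(n) if random.random() < p)
    return cases / n

# Repeating the same small study gives a different estimate each time: chance
estimates = [sample_risk(SAMPLE_SIZE, TRUE_RISK) for _ in range(5)]
print(estimates)

# Larger studies reduce random error: estimates cluster nearer the truth
big_estimates = [sample_risk(10_000, TRUE_RISK) for _ in range(5)]
print(big_estimates)
```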

8 Statistical inference — We use a statistical framework to interpret our data — Informs our decision as to whether the observed association may be real — Statistical hypothesis testing

9 Power, type I and type II errors
— H1 true and H0 “rejected” (a): correct decision
— H0 true and H0 “rejected” (b): incorrect decision, a type I error
— H1 true and H0 “accepted” (c): incorrect decision, a type II error
— H0 true and H0 “accepted” (d): correct decision

                 Truth
Test             H1 true       H0 true
H0 “rejected”    a             b (type I)    (a+b)
H0 “accepted”    c (type II)   d             (c+d)
                 (a+c)         (b+d)         N

— probability of a type I error (“rejecting” the null when it is true) = α = b / (b+d)
— probability of a type II error (“accepting” the null when it is false) = β = c / (a+c)
— statistical power (probability of “rejecting” the null when it is false) = 1 − β = a / (a+c)
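These error rates can be checked empirically. The sketch below is an illustrative addition, not part of the original slides: the exposure risks, sample size, and the simple two-proportion z-test are all assumptions. It simulates many hypothetical studies under H0 and under H1 and counts how often the null is “rejected”.

```python
import math
import random

random.seed(1)

def two_prop_z(x1, n1, x2, n2):
    """Two-sample z statistic for a difference in proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se if se > 0 else 0.0

def simulate(p_exposed, p_unexposed, n=200, reps=2000, z_crit=1.96):
    """Fraction of simulated studies in which H0 is 'rejected'."""
    rejections = 0
    for _ in range(reps):
        x1 = sum(random.random() < p_exposed for _ in range(n))
        x2 = sum(random.random() < p_unexposed for _ in range(n))
        if abs(two_prop_z(x1, n, x2, n)) > z_crit:
            rejections += 1
    return rejections / reps

# Under H0 (no true difference): rejection rate approximates alpha (type I error)
print(simulate(0.20, 0.20))
# Under H1 (a real difference): rejection rate is the power; 1 - power is beta
print(simulate(0.30, 0.15))
```

With these invented parameters the type I error rate lands near the nominal 0.05, and the power is high because the true difference is large relative to the sample size.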

10 Bias

11 Bias — a definition — “a partiality that prevents objective consideration of an issue or situation” — “a one-sided inclination of the mind”

12 Bias in epidemiology — “deviation of results or inferences from the truth” — “any trend in the collection, analysis, interpretation, publication, or review of data that can lead to conclusions that are systematically different from the truth” — Last, 2001

13 Bias in epidemiology is a systematic error — at the level of the study (design) — if the design, process and procedures of the study are unbiased — then the study is valid — evaluating the role of bias as an alternative explanation for an observed association is a fundamental step in interpreting any study result

14 Types of bias Two principal types of within-study bias, corresponding to two stages of a study
— selection bias (1. select the groups to study)
— information bias (2. gather information on each group)

15 Selection bias — this bias occurs when there is a systematic difference in the characteristics of people who were selected for the study and those who were not — AND where those characteristics are related to the exposure and outcome of interest

16 Selection bias: one relevant group in the population (exposed cases in this example) has a higher probability of being included in the study sample — individuals have different probabilities of being in the study according to their exposure and outcome status
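This mechanism can be simulated. The sketch below is an illustrative addition (the population size, exposure prevalence, risk, and inclusion probabilities are all invented): there is no true association, but over-sampling exposed cases manufactures one.

```python
import random

random.seed(7)

def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table: a=exposed cases, b=exposed controls,
    c=unexposed cases, d=unexposed controls."""
    return (a * d) / (b * c)

def tabulate(sample):
    a = sum(e and s for e, s in sample)            # exposed cases
    b = sum(e and not s for e, s in sample)        # exposed controls
    c = sum(s and not e for e, s in sample)        # unexposed cases
    d = sum(not e and not s for e, s in sample)    # unexposed controls
    return a, b, c, d

# Hypothetical population with NO true exposure-disease association
population = [(random.random() < 0.5, random.random() < 0.1)
              for _ in range(100_000)]             # (exposed, case)

# Unbiased sampling: everyone has the same probability of inclusion
unbiased = [p for p in population if random.random() < 0.2]
print(round(odds_ratio(*tabulate(unbiased)), 2))   # close to 1.0

# Selection bias: exposed cases are twice as likely to be included
biased = [p for p in population
          if random.random() < (0.4 if (p[0] and p[1]) else 0.2)]
print(round(odds_ratio(*tabulate(biased)), 2))     # close to 2.0: spurious
```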

17 Approaches to reducing bias
— bias usually cannot be controlled for or measured directly (unlike confounding)
— it must therefore be avoided, or reduced, in the design of the study
— in some contexts you can estimate the possible impact of a biased design
— case-control studies
— cohort studies

18 Approaches to reducing bias — case-control studies
Selection bias
— ensure response rates are equivalent between cases and controls
— controls must be drawn from the same sampling frame as cases (the same population base), and thus have the potential to become a case in that population
— select more than one control group
Information bias
— adhere to protocol-based data collection without reference to case-control status
— validate and standardise exposure or outcome assessment (collect information that predates the outcome)
— use multiple data sources

19 Approaches to reducing bias — cohort and cross-sectional studies
Selection bias
— ensure response rates are high
— collect data on non-responders (demographic information): this can provide insights into possible differential loss (bias) and generalisability
— estimate the impact of differential loss
— reduce loss to follow-up and attrition (central “flagging” for case ascertainment, hospital records)
Information bias
— validate and standardise exposure or outcome assessment
— detection bias: compare effect estimates by disease stage
— use multiple information sources

20 Confounding

21 To confound — a definition — “to cause to become confused or perplexed” — “to fail to distinguish; mix up” — “that which contradicts or confuses”

22 Epidemiological definition of confounding — “Distortion of the effect estimate of an exposure on an outcome, caused by the presence of an extraneous factor associated both with the exposure and the outcome” — Last, 4th edition

23 Confounding in epidemiology — confounding is a central issue in epidemiology — this phenomenon can distort the observed exposure–disease relation — leading to an inappropriate interpretation of the exposure–disease relation

24 Confounding and epidemiology: a further discussion of the problem — experimental and observational study designs — randomisation — equality of comparisons

25 Bias and confounding — bias (systematic error) leads us to observe an association in our sample population that differs from that which exists in the total population — confounding is not an artefact; given the absence of systematic and random error, we would see the same association between exposure and disease in our sample population as in the total population — concern for confounding comes into play when we interpret the observed association

26 Random error and confounding

27 Examples of confounding in epidemiology

28 Birth order and Down syndrome [figure: prevalence of Down syndrome at birth (affected babies per 1000 live births) by birth order]

29 Maternal age and Down syndrome [figure: prevalence of Down syndrome at birth (affected babies per 1000 live births) by maternal age]

30 Birth order, maternal age and prevalence of Down syndrome

31 A classical definition of confounding — confounding can be thought of as a mixing of effects — a confounding factor, therefore, must have an effect and must be imbalanced between the exposure groups to be compared — (1) a confounder must be associated with the disease — (2) AND, a confounder must be associated with the exposure

32 Assessment of confounding - 1 Stratified analysis
— the previous example illustrated a stratified analysis
— compare (“eye-ball”) effect estimates among strata of the possible confounder
— do you still see an association within strata?
— is the crude estimate similar to the stratum-specific estimates?
— if not, the association is likely to be due to confounding: that is, the effect of the risk factor is simply due to its association with the extraneous factor (the confounder)
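A stratified analysis of this kind can be sketched in code. The simulation below is a toy version of the Down syndrome example, added for illustration; every probability is invented. Maternal age (the confounder) raises both the chance of a high birth order and the risk of disease, while birth order itself has no effect.

```python
import random

random.seed(3)

# (older_mother, high_birth_order, affected) per birth; probabilities invented
people = []
for _ in range(500_000):
    older_mother = random.random() < 0.3
    high_birth_order = random.random() < (0.6 if older_mother else 0.2)
    affected = random.random() < (0.008 if older_mother else 0.002)
    people.append((older_mother, high_birth_order, affected))

def risk(sub, exposed):
    """Disease risk among those with the given birth-order status."""
    group = [a for (_m, e, a) in sub if e == exposed]
    return sum(group) / len(group)

def rate_ratio(sub):
    return risk(sub, True) / risk(sub, False)

# Crude analysis: high birth order looks like a risk factor
print(round(rate_ratio(people), 2))

# Stratified analysis: within maternal-age strata the association vanishes
young = [p for p in people if not p[0]]
old = [p for p in people if p[0]]
print(round(rate_ratio(young), 2), round(rate_ratio(old), 2))
```

The crude rate ratio is well above 1, while both stratum-specific ratios sit near 1: the crude estimate differs from the stratum-specific estimates, the signature of confounding.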

33 A third requirement for a confounder

34 Requirements for a confounder — (1) a confounder must be associated with the disease — (2) and, a confounder must be associated with the exposure — (3) and, a confounder must NOT have an effect on the exposure (or vice versa) THUS, a confounder has no causal relation with the exposure of interest

35 “Intermediates” and confounding — variation in a factor that is caused by the exposure (and is thus an intermediate step in the causal pathway between exposure and disease) is likely to have properties (1) and (2) — see previous slide — causal intermediates are NOT confounders: they are part of the association we are studying — we therefore do not (usually) adjust for intermediates in our analysis

36 Confounders or intermediates? — social class, dietary patterns and risk of coronary heart disease (CHD)? — genetic variation in the CRP gene, CRP levels in blood, and CHD risk? — aspirin use, vitamin intake and risk of colorectal cancer?

37 Classical definition of confounding — (1) a confounder must be associated with the disease — (2) and, a confounder must be associated with the exposure — (3) and, a confounder must not have an effect on the exposure (or vice versa)

38 Other definitions of confounding — collapsibility-based — counterfactual

39 Assessing and controlling for confounding: study design
— randomisation
— matching (frequency and individual)*
— restriction
Limitations/disadvantages?
*efficiency?
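Why randomisation works can be shown in a few lines. This sketch is an illustrative addition (the covariate, allocation probabilities, and sample size are invented): random allocation makes exposure independent of every covariate, measured or not, so the groups are comparable by design.

```python
import random
import statistics

random.seed(11)

n = 10_000
confounder = [random.gauss(50, 10) for _ in range(n)]   # e.g. age (invented)

# Observational "allocation": older people are more likely to be exposed
obs_exposed = [random.random() < (0.7 if c > 50 else 0.3) for c in confounder]

# Randomised allocation: a coin flip that ignores the confounder entirely
rct_exposed = [random.random() < 0.5 for _ in range(n)]

def mean_by_group(flags):
    """Mean confounder value among the exposed and the unexposed."""
    exp = [c for c, e in zip(confounder, flags) if e]
    unexp = [c for c, e in zip(confounder, flags) if not e]
    return statistics.mean(exp), statistics.mean(unexp)

print(mean_by_group(obs_exposed))   # mean ages differ between groups
print(mean_by_group(rct_exposed))   # mean ages balanced by design
```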

40 Assessing and controlling for confounding: analytical approaches
— stratified analysis*
— standardisation
— conditional analysis
— multivariable (regression) analysis
Limitations/disadvantages?
*sparse data problem, residual confounding
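Stratified analysis can be made quantitative with the Mantel–Haenszel summary odds ratio. The estimator itself is standard; the two strata below are invented numbers, chosen so that each stratum-specific OR is exactly 1 while the crude (pooled) OR is not.

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel summary odds ratio across strata of a confounder.
    Each stratum is a 2x2 table (a, b, c, d): a=exposed cases,
    b=exposed controls, c=unexposed cases, d=unexposed controls."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Two invented strata; the stratum-specific OR is 1.0 in each
strata = [(5, 95, 10, 190), (40, 60, 10, 15)]

# Pooling the tables ignores the confounder and suggests an association
a, b, c, d = (sum(col) for col in zip(*strata))
print(round(a * d / (b * c), 2))              # 2.98 (crude, confounded)
print(round(mantel_haenszel_or(strata), 2))   # 1.0 (adjusted)
```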

41 Actual and potential confounders — analytical strategies and choice of confounders — conceptual choices — inappropriate to rely on statistical significance to identify confounding, although it can inform conceptual choices

