
1 Please take an i>clicker from the box in front of the room

2 Classification Schemes for Error
Szklo and Nieto
–Bias (systematic error): selection bias; information/measurement bias
–Confounding
–Chance (random error)
Other common approach
–Bias (systematic error): selection bias; information/measurement bias; confounding bias
–Chance (random error)
Think of the "BIG 4" in all of your work

3 Descriptive studies
–Objective: estimate measures of disease occurrence (e.g., prevalence or incidence)
Analytic studies
–Objective: estimate measures of association between exposure (predictors) and outcome (e.g., disease)

4 Bias in Clinical Research: Measurement Bias
In descriptive studies
In analytic studies
–Misclassification of dichotomous exposure & outcome variables: non-differential misclassification; differential misclassification; magnitude and direction of bias
–Mis-measurement of interval scale variables
–Advanced topics (mention only): misclassification of multi-level categorical variables; misclassification of confounding variables; back-calculating to the truth

5 Measurement Bias
Definition
–Bias that is caused when any measurement collected about or from subjects is not completely reproducible or valid (accurate); applies to any type of variable: exposure, outcome, or confounder
–aka: misclassification bias; information bias (S&N text); identification bias
–"Misclassification" is what happens when there is error in the measurement of a categorical variable, for which everyone is "classified"

6 Misclassification of Dichotomous Variables

7 What does d/(b+d) refer to?
Sensitivity - A
Specificity - B
Positive predictive value - C
Negative predictive value - D
Characterizing the Measurement of a Dichotomous Variable (e.g., present vs absent)

8 What does d/(b+d) refer to?
Answer: Specificity - B
(Sensitivity - A; Positive predictive value - C; Negative predictive value - D)
Characterizing the Measurement of a Dichotomous Variable (present vs absent)

9 Terms Used to Characterize Measurement/Classification of Dichotomous Variables (Terms for Validity)
Sensitivity = a/(a+c)
–the ability of a measurement to identify correctly those who HAVE the characteristic (disease or exposure) of interest
Specificity = d/(b+d)
–the ability of a measurement to identify correctly those who do NOT have the characteristic of interest
Positive predictive value = a/(a+b)
Negative predictive value = d/(c+d)
Applies to any dichotomous variable, not just diagnoses
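A minimal sketch (not part of the original slides) of how these four validity measures fall out of the 2x2 classification table, using the cell labels a, b, c, d above; the counts in the usage line are hypothetical.

```python
# Minimal sketch: validity measures for a dichotomous measurement.
# Cell layout, as in the slides: rows = measured result, columns = truth.
#   a = measured +, truly +    b = measured +, truly -
#   c = measured -, truly +    d = measured -, truly -

def validity_measures(a: float, b: float, c: float, d: float) -> dict:
    return {
        "sensitivity": a / (a + c),  # classified + among the truly +
        "specificity": d / (b + d),  # classified - among the truly -
        "ppv": a / (a + b),          # truly + among those classified +
        "npv": d / (c + d),          # truly - among those classified -
    }

# Hypothetical counts, for illustration only
print(validity_measures(a=90, b=15, c=10, d=85))
```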

10 Causes for Misclassification
Questionnaire problems
–inaccurate recall
–socially desirable responses
–ambiguous questions
–under- or overzealous interviewers
Biological specimen collection
–problems in specimen collection, processing, or storage
Biological specimen testing
–inherent limits of detection
–faulty instruments
Data management
–problems in coding
Study design or analytic problems (see Problem Set)
–incorrect time period assessed, particularly for exposure
–lumping of outcome variables (composite variables)

11 Descriptive Study: Measurement Bias (1982 California Governor Election)
SOURCE POPULATION = CALIFORNIA
STUDY SAMPLE = PRE-ELECTION POLL (Field Poll)
Pre-election poll: Bradley +7% over Deukmejian

12 Descriptive Study: Measurement Bias (1982 California Governor Election)
SOURCE POPULATION = CALIFORNIA
STUDY SAMPLE = PRE-ELECTION POLL (Field Poll, one of the largest pre-election surveys)
Pre-election poll: Bradley +7%; actual election: Deukmejian 49%, Bradley 48%
"Bradley Effect" = respondents who favored Deukmejian sought to avoid appearing racist and hence did not state their true choice in the pre-election survey

13 Contrast with Selection Bias
[Diagram: SOURCE POPULATION sampled into STUDY SAMPLE]
Uneven dispersion of arrows, e.g., Dewey backers were over-represented

14 Descriptive Biomedical Studies: Measurement Bias
e.g., prevalence of: flossing, condom use, exercise, etc.
"Social desirability bias": humans tend to give socially desirable responses

15 Bias in Clinical Research: Measurement Bias
Measurement bias in descriptive studies
Measurement bias in analytic studies
–Misclassification of dichotomous exposure & outcome variables: non-differential misclassification; differential misclassification; magnitude and direction of bias
–Mismeasurement of interval scale variables
–Advanced topics (mention only): misclassification of multi-level categorical variables; misclassification of confounding variables; back-calculating to the truth

16 Non-Differential Misclassification of Exposure: Imperfect Sensitivity
[Diagram: SOURCE POPULATION vs STUDY SAMPLE, 2x2 of diseased (+/-) by exposed (+/-)]
Problems with sensitivity in the measurement of exposure, independent of disease status
e.g., case-control study, exposure = alcohol abuse
Evenly weighted arrows = non-differential

17 Non-differential Misclassification of Exposure
Truth: no misclassification (100% sensitivity/specificity)
Exposure   Cases   Controls
Yes        50      20
No         50      80
OR = (50/50) / (20/80) = 4.0
With 70% sensitivity in exposure classification (applied equally to cases and controls):
Exposure   Cases          Controls
Yes        50-15 = 35     20-6 = 14
No         50+15 = 65     80+6 = 86
OR = (35/65) / (14/86) = 3.3
Effect of non-differential misclassification of dichotomous exposures: bias "toward the null" value of 1.0
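A short sketch of the slide's arithmetic, assuming (as the slide does) that only sensitivity is imperfect (70%) and that it applies equally to cases and controls; the function name is illustrative, not from the lecture.

```python
# Minimal sketch: 70% sensitivity, 100% specificity, applied identically
# (non-differentially) to cases and controls.
sens = 0.70

def observed_cells(true_exposed, true_unexposed, sens):
    # Only sens * true_exposed are classified exposed; the missed ones
    # spill into the unexposed cell.
    obs_exposed = sens * true_exposed
    obs_unexposed = true_unexposed + (1 - sens) * true_exposed
    return obs_exposed, obs_unexposed

case_exp, case_unexp = observed_cells(50, 50, sens)       # 35, 65
ctrl_exp, ctrl_unexp = observed_cells(20, 80, sens)       # 14, 86
print((case_exp / case_unexp) / (ctrl_exp / ctrl_unexp))  # ~3.3, down from the true 4.0
```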

18 Non-Differential Misclassification of Exposure: Imperfect Specificity
[Diagram: SOURCE POPULATION vs STUDY SAMPLE, 2x2 of diseased (+/-) by exposed (+/-)]
Problems with specificity of exposure measurement, independent of disease status
e.g., exposure = self-reported second-hand smoke exposure

19 Non-differential Misclassification of Exposure
Truth: no misclassification (100% sensitivity/specificity)
Exposure   Cases   Controls
Yes        50      20
No         50      80
OR = (50/50) / (20/80) = 4.0
With 70% specificity in exposure classification (applied equally to cases and controls):
Exposure   Cases          Controls
Yes        50+15 = 65     20+24 = 44
No         50-15 = 35     80-24 = 56
OR = (65/35) / (44/56) = 2.4
Effect of non-differential misclassification of dichotomous exposures: bias toward the null value of 1.0

20 No Misclassification
[Diagram: SOURCE POPULATION vs STUDY SAMPLE; true cell counts as in the 2x2 tables above (50, 50, 20, 80)]
e.g., exposure = self-reported second-hand smoke exposure
OR = 4.0

21 Non-Differential Misclassification of Exposure: Imperfect Specificity
[Diagram: SOURCE POPULATION vs STUDY SAMPLE; observed cells 65/35 in cases and 44/56 in controls]
e.g., exposure = self-reported second-hand smoke exposure
OR = 2.4; differences between the groups become blurred

22 Non-Differential Misclassification of Exposure: Imperfect Specificity and Sensitivity
[Diagram: SOURCE POPULATION vs STUDY SAMPLE, 2x2 of diseased (+/-) by exposed (+/-)]
Problems with sensitivity, independent of disease status
Problems with specificity, independent of disease status

23 Assuming no sampling error, what will the observed OR be?
Exposure   Cases   Controls
Yes        50      20
No         50      80
True OR = (50/50) / (20/80) = 4.0
But now assume non-differential exposure misclassification, with problems in both sensitivity and specificity: sensitivity = 70%, specificity = 70%
OR = 3.1 - A
OR = 2.8 - B
OR = 2.4 - C
OR = 1.6 - D
OR = 1.2 - E

24 Assuming no sampling error, what will the observed OR be?
(Same setup: true OR = 4.0; non-differential exposure misclassification with sensitivity = 70% and specificity = 70%)
Answer: OR = 1.6 - D
(Worked calculation on the next slide.)

25 Non-Differential Misclassification of Exposure: Imperfect Sensitivity and Specificity (sensitivity = 0.7, specificity = 0.7)
True (gold standard) distribution, SOURCE POPULATION:
Exposure   Cases   Controls
Yes        50      20
No         50      80
True OR = (50/50) / (20/80) = 4.0
STUDY SAMPLE: applying sensitivity = 0.70 and specificity = 0.70 within both cases and controls
(first term = truly exposed, second term = truly unexposed):
Classified as exposed:    cases 35 + 15 = 50;   controls 14 + 24 = 38
Classified as unexposed:  cases 15 + 35 = 50;   controls  6 + 56 = 62
Observed table:
Exposure   Cases   Controls
Yes        50      38
No         50      62
Observed OR = (50/50) / (38/62) = 1.6
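For readers who want to reproduce these numbers, here is a minimal Python sketch (not part of the original lecture) that applies an assumed non-differential sensitivity and specificity to a true 2x2 table; the function names are illustrative.

```python
# Minimal sketch: apply non-differential exposure misclassification (same
# sensitivity/specificity in cases and controls) to a true 2x2 table and
# compare the observed odds ratio with the true one.

def misclassify(true_exposed, true_unexposed, sens, spec):
    """Observed exposed/unexposed counts after imperfect classification."""
    obs_exposed = sens * true_exposed + (1 - spec) * true_unexposed
    obs_unexposed = (1 - sens) * true_exposed + spec * true_unexposed
    return obs_exposed, obs_unexposed

def odds_ratio(case_exp, case_unexp, ctrl_exp, ctrl_unexp):
    return (case_exp / case_unexp) / (ctrl_exp / ctrl_unexp)

# Slide 25: true table (cases 50/50, controls 20/80), sens = spec = 0.70
case_exp, case_unexp = misclassify(50, 50, sens=0.70, spec=0.70)  # 35+15=50, 15+35=50
ctrl_exp, ctrl_unexp = misclassify(20, 80, sens=0.70, spec=0.70)  # 14+24=38,  6+56=62
print(odds_ratio(case_exp, case_unexp, ctrl_exp, ctrl_unexp))     # ~1.6 (true OR = 4.0)
```

The same function reproduces the next examples: sens = 0.90 and spec = 0.80 give an observed OR of about 2.4, and the rare-exposure table on slide 29 (cases 50/500, controls 20/800) gives about 1.3.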

26 Assuming no sampling error, what will the observed OR be?
Exposure   Cases   Controls
Yes        50      20
No         50      80
True OR = (50/50) / (20/80) = 4.0
But now assume non-differential exposure misclassification, with problems in both sensitivity and specificity: sensitivity = 90%, specificity = 80%
OR = 3.5 - A
OR = 3.2 - B
OR = 3.0 - C
OR = 2.8 - D
OR = 2.4 - E

27 Assuming no sampling error, what will the observed OR be?
(Same setup: true OR = 4.0; exposure misclassification with sensitivity = 90% and specificity = 80%)
Answer: OR = 2.4 - E
(Worked calculation on the next slide.)

28 Non-Differential Misclassification of Exposure: Imperfect Sensitivity and Specificity (sensitivity = 0.9, specificity = 0.8)
True (gold standard) distribution:
Exposure   Cases   Controls
Yes        50      20
No         50      80
True OR = (50/50) / (20/80) = 4.0
Applying sensitivity = 0.90 and specificity = 0.80 within both cases and controls:
Classified as exposed:    cases 45 + 10 = 55;   controls 18 + 16 = 34
Classified as unexposed:  cases  5 + 40 = 45;   controls  2 + 64 = 66
Observed table:
Exposure   Cases   Controls
Yes        55      34
No         45      66
Observed OR = (55/45) / (34/66) = 2.4
Seemingly respectable Sn and Sp result in substantial bias

29 Non-Differential Misclassification of Exposure: Imperfect Sensitivity & Specificity and Uncommon Exposure (e.g., radon exposure; sensitivity = 0.9, specificity = 0.8)
True (gold standard) distribution:
Exposure   Cases   Controls
Yes        50      20
No         500     800
True OR = (50/500) / (20/800) = 4.0
Applying sensitivity = 0.90 and specificity = 0.80 within both cases and controls:
Classified as exposed:    cases 45 + 100 = 145;   controls 18 + 160 = 178
Classified as unexposed:  cases  5 + 400 = 405;   controls  2 + 640 = 642
Observed table:
Exposure   Cases   Controls
Yes        145     178
No         405     642
Observed OR = (145/405) / (178/642) = 1.3
An exposure with a higher, more balanced prevalence is more resilient to misclassification; a rare exposure is hit much harder

30 Non-differential Misclassification of Exposure: Magnitude of Bias on the Odds Ratio (true OR = 4.0)
Sensitivity   Specificity   Prev. of exposure in controls   Observed OR
0.90          0.90          0.08                            2.2
0.90          0.90          0.20                            2.8
0.90          0.90          0.37                            3.0
0.90          0.60          0.20                            1.9
0.90          0.95          0.20                            3.2
0.60          0.85          0.20                            1.9
0.90          0.85          0.20                            2.6

31 Bias as a function of non-differential imperfect sensitivity and specificity of exposure measurement
[Figure: apparent odds ratio (1.0-2.8) vs. specificity of exposure measurement (0.50-1.00), with separate curves for sensitivity = 0.9, 0.7, 0.5]
Case-control study; true OR = 2.67; prevalence of exposure in controls = 0.2
Copeland et al. AJE 1977

32 Bias as a function of non-differential imperfect sensitivity and specificity of exposure measurement
[Figure: same plot as the previous slide]
True OR = 2.67; prevalence of exposure in controls = 0.2
When does the apparent OR fall below 2?
Copeland et al. AJE 1977

33 Non-Differential Misclassification of Exposure in a Cohort Study: Effect of Sensitivity, Specificity and Prevalence of Exposure
[Figure: apparent risk ratio as a function of sensitivity (U) and specificity (V) at different exposure prevalences]
True risk ratio = 10; all apparent RR < 8
If prevalence of exposure > 0.25, sensitivity has increasing influence; the bias depends on the prevalence of exposure
Flegal et al. AJE 1986

34 In the presence of non-differential misclassification of exposure (e.g., sensitivity and specificity of 80%), what can we say in our Discussion section about any measures of association derived from the exposure?
Are an underestimate of truth - A
Will, on average, underestimate truth - B
Tend to underestimate truth - C
Are an overestimate of truth - D
Need more information - E

35 In the presence of non-differential misclassification of exposure (e.g., sensitivity and specificity of 80%), what can we say in our Discussion section about any measures of association derived from the exposure?
Are an underestimate of truth - A
Will, on average, underestimate truth - B
Tend to underestimate truth - C
Are an overestimate of truth - D
Need more information - E
(See the next slide for the reasoning.)

36 Non-differential misclassification of exposure and "bias towards the null"
In any single study, non-differentiality by itself does not guarantee that the observed measure of association is falsely low
–Reason: in any single study, the observed results are a function of bias plus CHANCE
–Only if a study were repeated many times over and the findings averaged could we say that the observed measure of association is biased towards the null
Don't say: "Because we had non-differential misclassification of exposure, our findings are an underestimate of the true measure of association." (i.e., do not be definitive)
Instead, say: "Because imperfect sensitivity and specificity of exposure measurement were generally the same irrespective of outcome, our findings tend to be (or, on average are likely to be) an underestimate of the true association."
Jurek et al. IJE 2005

37 Non-Differential Misclassification of Exposure: Rules of Thumb Regarding Sensitivity & Specificity
Example source population:
Exposure   Cases   Controls
Yes        50      100
No         50      300
True OR = (50/50) / (100/300) = 3.0
–Sens + Spec = 1 gives OR = 1 (no effect)
–Sens + Spec > 1 but < 2 gives an attenuated effect
–Sens + Spec < 1 gives a reversal of effect (behaves like a coding error)
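A brief algebraic sketch (not part of the original slide) of why these rules of thumb hold under non-differential misclassification, written in LaTeX for clarity.

```latex
% Sketch: why Sn + Sp = 1 forces the observed OR to 1 (non-differential case).
% Let p be the true exposure prevalence in a group of size N.
\[
  \text{apparent exposed} \;=\; Sn\,pN + (1-Sp)(1-p)N
  \;=\; N\bigl[(1-Sp) + p\,(Sn + Sp - 1)\bigr].
\]
% If Sn + Sp = 1, the term in p vanishes, so cases and controls show the same
% apparent exposure odds and the observed OR is 1.  If Sn + Sp < 1, the apparent
% prevalence falls as the true prevalence rises, which reverses the association.
% If 1 < Sn + Sp < 2, the dependence on p is damped, which attenuates the effect.
```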

38 Non-Differential Misclassification of Outcome
[Diagram: SOURCE POPULATION vs STUDY SAMPLE, 2x2 of diseased (+/-) by exposed (+/-)]
Problems with outcome sensitivity, independent of exposure status
Problems with outcome specificity, independent of exposure status
Evenly weighted arrows = non-differential

39 Bias as a function of non-differential imperfect sensitivity and specificity of outcome measurement in a cohort study
[Figure: apparent risk ratio vs. specificity of outcome measurement, with separate curves for sensitivity = 0.9, 0.7, 0.5]
True risk ratio = 2.0; cumulative incidence in the unexposed = 0.05
Bias is steep with changes in specificity; sensitivity has relatively less influence
Copeland et al. AJE 1977

40 Non-Differential Misclassification of Outcome: Effect of Incidence of Outcome
[Figure: apparent risk ratio vs. specificity of outcome measurement, for cumulative incidences (exposed/unexposed) of 0.2/0.1, 0.1/0.05, and 0.05/0.025]
True risk ratio = 2.0; sensitivity of outcome measurement held fixed at 0.9
Copeland et al. AJE 1977

41 Special Situation in a Cohort or Cross-sectional Study: Misclassification of Outcome
If specificity of outcome measurement is 100%, any degree of imperfect sensitivity, if non-differential, will not bias the risk ratio or prevalence ratio
The risk difference, however, is reduced by a factor of (1 minus sensitivity); in this example with 70% sensitivity, by 30% (true risk difference = 0.1; biased = 0.07)
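A minimal numeric sketch of this special situation, assuming illustrative true risks of 0.20 (exposed) and 0.10 (unexposed) so that the true risk difference is 0.1 as in the slide; the function and variable names are hypothetical.

```python
# Minimal sketch: outcome misclassification with 100% specificity and 70%
# sensitivity, non-differential by exposure. True risks are illustrative.
sens, spec = 0.70, 1.00
true_risk_exposed, true_risk_unexposed = 0.20, 0.10  # true RR = 2.0, RD = 0.10

def observed_risk(true_risk, sens, spec):
    # Observed "diseased" = detected true cases + false positives among non-cases
    return sens * true_risk + (1 - spec) * (1 - true_risk)

obs_exp = observed_risk(true_risk_exposed, sens, spec)      # 0.14
obs_unexp = observed_risk(true_risk_unexposed, sens, spec)  # 0.07
print(obs_exp / obs_unexp)   # risk ratio still 2.0 (unbiased)
print(obs_exp - obs_unexp)   # risk difference shrinks from 0.10 to 0.07 (scaled by sensitivity)
```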

42 When specificity of outcome is 100% in a cohort or cross-sectional study
[Figure: apparent risk ratio vs. specificity of outcome measurement, curves for sensitivity = 0.9, 0.7, 0.5; at specificity = 1.0 all curves reach the true risk ratio]
True risk ratio = 2.0; cumulative incidence in the unexposed = 0.05
Copeland et al. AJE 1977

43 When specificity of outcome measurement is 100% in a cohort or cross-sectional study
Worth knowing about when defining outcomes, such as choosing cutoffs for continuous variables on ROC curves
Choosing the most specific cutoff (or the 100% specificity cutoff) will lead to the least biased ratio measures of association
[Example of an ROC curve]

44 Efficacy of a pertussis (whooping cough) vaccine in adults
RCT: pertussis vaccine (approved in kids) vs. control vaccine for the prevention of pertussis in adults (Ward et al. NEJM 2005)
What should you choose as your primary outcome variable?
>5 days of cough - A
Microbiologic diagnosis of pertussis - B
Abnormal chest x-ray - C
>5 days of cough + microbiologic diagnosis of pertussis - D

45 Efficacy of a pertussis (whooping cough) vaccine in adults
RCT: pertussis vaccine (approved in kids) vs. control vaccine for the prevention of pertussis in adults (Ward et al. NEJM 2005)
What should you choose as your primary outcome variable?
>5 days of cough - A
Microbiologic diagnosis of pertussis - B
Abnormal chest x-ray - C
>5 days of cough + microbiologic diagnosis of pertussis - D
(See the trial results on the next slide.)

46 Efficacy of a pertussis (whooping cough) vaccine in adults
Acellular vaccine vs. control vaccine for the prevention of pertussis in adults (Ward et al. NEJM 2005)
Outcome: cough > 5 days
–No. of events: 2672 (and lots of statistical power)
–Result: no significant difference between groups
Outcome: cough + microbiologic pertussis confirmation
–No. of events: 10
–Result: rate ratio = 0.08 (92% vaccine efficacy); 95% CI = 0.01 to 0.68

47 Pervasiveness of Non-Differential Misclassification
The direction of this bias is typically towards the null; it is therefore called a "conservative" bias
The goal, however, is to get the truth
Consider how much underestimation of effects must be occurring in research
How many "negative" studies are truly "positive"?

48 Differential Misclassification of Exposure (Weinstock et al. AJE 1991)
Nested case-control study in the Nurses' Health Study cohort
Cases: women with new melanoma diagnoses
Controls: women without melanoma, selected by incidence density sampling
Measurement of exposure: questionnaire about self-reported "tanning ability," administered shortly after melanoma development

49 [Figure: self-reported tanning ability when the question was asked after diagnosis vs. before diagnosis (NHS baseline); one set of responses was virtually unchanged, the other substantially changed]

50 "Tanning Ability" and Melanoma: Differential Misclassification of Exposure
[Diagram: SOURCE POPULATION vs STUDY SAMPLE; melanoma (yes/no) by tanning ability (+/-)]
Imperfect specificity of exposure measurement, mostly in cases
Bias away from the null, leading to a spurious association

51 Differential Misclassification of Exposure: Exposures During Pregnancy and Congenital Malformations
[Diagram: SOURCE POPULATION vs STUDY SAMPLE; congenital malformation by exposure (+/-)]
Cases more likely than controls to remember a variety of exposures
Cases might be more likely than controls to falsely state a variety of exposures
Uneven weighting of arrows = differential

52 Differential Misclassification of Exposure: Magnitude of Bias on the Odds Ratio
[Table not captured in the transcript; true OR = 3.9]

53 Misclassification of Dichotomous Exposure or Outcome: Summary of Bias

54 Relating Last Week to This Week: Relating Validity/Reproducibility of Individual Dichotomous Measurements to Measurement Bias in Inferences in Analytic Studies
Validity
–How the sensitivity and specificity of a measurement result in measurement bias was covered in the prior slides
Reproducibility
–Recall that a measurement with imperfect reproducibility will typically lack perfect validity when used in practice (unless it is repeated many, many times)

55 Reproducibility and Validity of a Measurement
With only one shot at the measurement, most of the time you will be off the center of the target

56 Imperfect reproducibility leads to 90% sensitivity and 90% specificity of height measurement – non-differential with respect to outcome

57 [Diagram] Reproducibility of Measurement and Systematic Error of Measurement → Validity of Measurement in Practice → Validity of Analytic Inferences Derived from Measurement ("Measurement Bias")

58 Bias in Clinical Research: Measurement Bias
Measurement bias in descriptive studies
Measurement bias in analytic studies
–Misclassification of dichotomous exposure & outcome variables: non-differential misclassification; differential misclassification; magnitude and direction of bias
–Mismeasurement of interval scale variables
–Advanced topics (mention only): misclassification of multi-level categorical variables; misclassification of confounding variables; back-calculating to the truth

59 Effect of Lack of Validity and Reproducibility in Interval Scale Measurements
Lack of validity (systematic error)
–Measurements are systematically off truth by some multiplicative factor or absolute difference
Lack of reproducibility (random error)
–Measurements are off truth by some random factor or difference
[Figures: "truth vs. observed" dot plots illustrating each case]

60 Relating the Validity and Reproducibility of Measurements to Measurement Bias in Analytic Studies: Interval Scale Variables
Validity (systematic error), e.g., systematic error in an interval scale outcome variable
–Results move systematically up or down the scale by a given factor or absolute difference
–Whether this biases the result depends on the measure of association (mean, ratio of means, difference in means)

61 Relating the Reproducibility and Validity of Measurements to Measurement Bias in Analytic Studies: Interval Scale Variables
Reproducibility (random error), e.g., random error in an exposure variable
Assuming:
–the true exposure is normally distributed with variance σ²_True
–the random error is normally distributed with variance σ²_E
Then the observed regression coefficient equals the true regression coefficient times
  σ²_True / (σ²_True + σ²_E)
i.e., the reproducibility of the measurement (the intraclass correlation coefficient, ICC)
The greater the measurement error, the greater the attenuation (bias) towards the null (e.g., if the ICC is 0.5, the measure of association is halved)
This is "regression dilution bias"
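A small simulation sketch (not from the lecture) illustrating regression dilution bias; the variances and true slope are arbitrary choices that make the ICC equal 0.5.

```python
# Minimal simulation sketch of regression dilution bias: a noisily measured
# exposure attenuates the regression slope by roughly the ICC,
# var_true / (var_true + var_error). All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_beta = 2.0
var_true, var_error = 1.0, 1.0  # ICC = 1 / (1 + 1) = 0.5

x_true = rng.normal(0, np.sqrt(var_true), n)
x_observed = x_true + rng.normal(0, np.sqrt(var_error), n)  # add measurement error
y = true_beta * x_true + rng.normal(0, 1, n)

# Slope of y on the error-laden exposure
beta_observed = np.polyfit(x_observed, y, 1)[0]
print(beta_observed)  # ~1.0, i.e. true_beta * ICC = 2.0 * 0.5
```

The halved slope when the ICC is 0.5 matches the slide's statement that the measure of association is halved.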

62 Relating the Reproducibility and Validity of Measurements to Measurement Bias in Analytic Studies – Interval Scale Variables See Extra Slides for Additional Examples

63 Advanced Topics
Misclassification of multi-level categorical exposure variables
–some of the rules change regarding direction of bias
–see Extra Slides for examples
Mis-measurement of confounding variables
–When confounding variables are mis-measured, the net result is failure to fully control (adjust) for that variable
  You are left with "residual confounding": you have not fully adjusted for the variable
  "Adjusted" measures of association may be over- or under-estimated
–A very common problem: researchers focus mainly on optimal measurement of exposure and outcome, and by the time confounders surface, everyone is too exhausted
–e.g., when controlling for smoking, does classifying people into smokers and non-smokers based on current smoking capture the essence of the exposure? (A rough simulation of residual confounding follows below.)
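As a rough illustration of residual confounding from a mis-measured confounder, here is a small simulation sketch (not from the lecture); all numbers and variable names are invented for the example.

```python
# Minimal simulation sketch of residual confounding: adjusting for a noisily
# measured confounder only partly removes its effect. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
confounder = rng.normal(size=n)
exposure = confounder + rng.normal(size=n)              # exposure driven partly by the confounder
outcome = 2.0 * confounder + rng.normal(size=n)         # outcome depends on the confounder ONLY
confounder_measured = confounder + rng.normal(size=n)   # mis-measured confounder (ICC = 0.5)

def adjusted_exposure_coef(conf):
    # Ordinary least squares of outcome on intercept + exposure + confounder
    X = np.column_stack([np.ones(n), exposure, conf])
    coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return coefs[1]  # coefficient for exposure

print(adjusted_exposure_coef(confounder))           # ~0: full adjustment, no spurious exposure effect
print(adjusted_exposure_coef(confounder_measured))  # clearly > 0: residual confounding remains
```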

64 Advanced Topics
Back-calculating to unbiased results
–Thus far, the truth about measurement quality and about the exposure-outcome relationships has been assumed, and we have then predicted what the bias in the observed results will be
–In practice, we have observed results and sometimes a guess about measurement quality
–When the extent of classification errors (e.g., PPV, NPV, sensitivity & specificity, ICC) is known, it is possible to back-calculate to the truth (see the sketch below)
–If the exact classification errors are not known, it is possible to perform sensitivity analyses to estimate a range of study results given a range of possible classification errors
–This is "quantitative bias analysis"
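A minimal sketch of the back-calculation idea for a non-differentially misclassified exposure, assuming the sensitivity and specificity are known exactly; it simply inverts the misclassification equation and uses the slide-25 observed table as input.

```python
# Minimal sketch: back-calculate a "corrected" 2x2 table when the sensitivity
# and specificity of exposure classification are assumed known and
# non-differential. Input numbers are the observed (misclassified) slide-25 data.

def corrected_counts(obs_exposed, obs_unexposed, sens, spec):
    """Invert obs_exposed = sens*T + (1 - spec)*(N - T) for the true exposed count T."""
    n = obs_exposed + obs_unexposed
    true_exposed = (obs_exposed - (1 - spec) * n) / (sens + spec - 1)
    return true_exposed, n - true_exposed

# Observed data from slide 25, with sens = spec = 0.70 assumed known
case_exp, case_unexp = corrected_counts(50, 50, 0.70, 0.70)  # -> 50, 50
ctrl_exp, ctrl_unexp = corrected_counts(38, 62, 0.70, 0.70)  # -> 20, 80
print((case_exp / case_unexp) / (ctrl_exp / ctrl_unexp))     # recovers the true OR = 4.0
```

In practice the sensitivity and specificity are themselves uncertain, so a quantitative bias analysis would repeat this correction over a plausible range of values.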

65 Managing Measurement Bias
[Diagram: targets illustrating poor vs good reproducibility and poor vs good validity]
Prevention and avoidance are critical
–the study design phase is critical
–use state-of-the-art techniques, blinding, SOPs, and replicates
Little can be done after the study (but back-correction may be possible)
Become an expert in the measurement of your primary variables
For the other variables, seek out the advice of experts (teams)
Optimize the reproducibility/validity of your measurements!

66 Extra slides

67 Mismeasurement of Interval Scale Variables: Summary of Bias When Relating to Perfectly Measured Variables

68 Relating the Reproducibility of Measurements to Measurement Bias in Analytic Studies: Interval Scale Variables
Correlating one interval scale measurement with another, e.g., weight and cholesterol
–The correlation (r = correlation coefficient) is attenuated in direct proportion to the ICC of the measurements
–e.g., if the ICC of both weight and cholesterol is 0.80, there is 20% attenuation
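A small simulation sketch (not from the slides) of this attenuation, using the classical relationship observed r ≈ true r × sqrt(ICC_x × ICC_y); the sample size and true correlation are arbitrary.

```python
# Minimal simulation sketch of correlation attenuation: with ICC = 0.80 for
# both measurements, the observed correlation is about 80% of the true one.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
icc = 0.80
err_sd = np.sqrt((1 - icc) / icc)  # error variance giving ICC = 0.8 when the trait variance is 1

x_true = rng.normal(size=n)
y_true = 0.5 * x_true + np.sqrt(1 - 0.5**2) * rng.normal(size=n)  # true r = 0.5

x_obs = x_true + err_sd * rng.normal(size=n)  # add measurement error to each trait
y_obs = y_true + err_sd * rng.normal(size=n)

print(np.corrcoef(x_true, y_true)[0, 1])  # ~0.50 (true correlation)
print(np.corrcoef(x_obs, y_obs)[0, 1])    # ~0.40 = 0.50 * sqrt(0.8 * 0.8), i.e. 20% attenuation
```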

69 Non-differential Misclassification of Multi-level Exposure
[Figure: exposure-response curves, truth vs. observed, with misclassification between adjacent exposure categories]
Misclassification between adjacent exposure categories
Bias away from the null
Dosemeci et al. AJE 1990

70 Misclassification of Multi-level Exposure
[Figure: exposure-response curves, truth vs. observed, with misclassification between adjacent and non-adjacent exposure categories]
Misclassification between adjacent and non-adjacent exposure categories
Appearance of a J-shaped relationship
Dosemeci et al. AJE 1990

