Presentation transcript:

1 Polls hit target on presidential race after season of discontent
WILL LESTER, Associated Press; Friday, November 5, 2004
Public opinion polls didn't have another "Dewey Defeats Truman" moment this year despite months of widespread grumbling about challenges facing the industry. In fact, polls taken just before the voting forecast the presidential election results quite accurately. The polling business came under fire recently because of worries about cell-phone-only users who are not polled, low response rates to traditional telephone polling and unpredictable heavy voter turnout. Some polls a few months before Tuesday's election showed widely divergent results.
Elections are one of the best chances to measure polling results against reality. And this year, there were no surprises. Most national polls taken just before the election showed President Bush with a slight lead, while others showed the race even. The final poll by the Pew Research Center for the People & the Press projected 51 percent for Bush and 48 percent for Sen. John Kerry -- exactly where the results stood the morning after the election. The George Washington University-Battleground Poll had the same final result as Pew. Ipsos-Public Affairs, which polls for The Associated Press, forecast a finish of Bush 50, Kerry 48.

2 Bias in Clinical Research: Measurement Bias
Misclassification of dichotomous exposure & outcome variables
– non-differential misclassification
– differential misclassification
– magnitude and direction of bias
Misclassification of multi-level and continuous variables
– some of the rules change
Advanced topics
– misclassification of confounding variables
– back-calculating to the truth

3 Measurement Bias
Definition
– bias that is caused when the information collected about or from subjects is not completely valid (accurate)
– any type of variable: exposure, outcome, or confounder
– aka: misclassification bias; information bias (text); identification bias
Misclassification is the immediate result of an error in measurement.

4 Misclassification of Dichotomous Variables: Terms Related to Measurement Validity
Sensitivity
– the ability of a measurement to identify correctly those who have the characteristic (disease or exposure) of interest
Specificity
– the ability of a measurement to identify correctly those who do NOT have the characteristic of interest
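To make these two definitions concrete, here is a minimal Python sketch (not part of the original slides) that computes sensitivity and specificity from a hypothetical validation study comparing a measurement against a gold standard; the counts are invented purely for illustration.

```python
# Hypothetical validation study: measurement vs. gold standard (counts are invented).
true_positives = 90   # gold standard positive, measurement positive
false_negatives = 10  # gold standard positive, measurement negative
true_negatives = 160  # gold standard negative, measurement negative
false_positives = 40  # gold standard negative, measurement positive

# Sensitivity: proportion of those WITH the characteristic correctly identified.
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: proportion of those WITHOUT the characteristic correctly identified.
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.2f}")  # 0.90
print(f"specificity = {specificity:.2f}")  # 0.80
```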

5 Causes for Misclassification
Questionnaire problems
– inaccurate recall
– ambiguous questions
– under- or overzealous interviewers
Biological specimen collection
– problems in specimen collection, processing, or storage
Biological specimen testing
– inherent limits of detection
– faulty instruments
Data management
– problems in coding
Design or analytic problems
– incorrect time period assessed
– improper lumping of variables (composite variables)

6 Non-Differential Misclassification of Exposure: Imperfect Sensitivity
[2 x 2 diagrams: exposure (+/-) by disease (+/-) in the source population vs. the study sample]
Problems with sensitivity in the measurement of exposure, independent of disease status
e.g., case-control study where exposure = alcohol abuse

7 Non-differential Misclassification of Exposure
Truth: no misclassification (100% sensitivity/specificity)
  Exposure   Cases   Controls
  Yes        50      20
  No         50      80
  OR = (50/50)/(20/80) = 4.0
With 70% sensitivity in exposure classification
  Exposure   Cases            Controls
  Yes        50 - 15 = 35     20 - 6 = 14
  No         50 + 15 = 65     80 + 6 = 86
  OR = (35/65)/(14/86) = 3.3
Effect of non-differential misclassification of 2 exposure categories: bias of the OR toward the null value of 1.0
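As a concrete check of the arithmetic on this slide, here is a minimal Python sketch (not from the original slides; the function names are my own) that applies imperfect classification to the true counts and recomputes the odds ratio.

```python
# Sketch (illustrative, not from the slides): effect of imperfect classification
# of a dichotomous exposure on a case-control odds ratio.

def misclassify(n_exposed, n_unexposed, sensitivity, specificity):
    """Apply imperfect sensitivity/specificity to true exposed/unexposed counts."""
    observed_exposed = n_exposed * sensitivity + n_unexposed * (1 - specificity)
    observed_unexposed = n_exposed * (1 - sensitivity) + n_unexposed * specificity
    return observed_exposed, observed_unexposed

def odds_ratio(cases, controls):
    """cases and controls are (exposed, unexposed) pairs."""
    return (cases[0] / cases[1]) / (controls[0] / controls[1])

true_cases, true_controls = (50, 50), (20, 80)           # true OR = 4.0
obs_cases = misclassify(*true_cases, 0.70, 1.0)          # 70% sensitivity, perfect specificity
obs_controls = misclassify(*true_controls, 0.70, 1.0)
print(round(odds_ratio(obs_cases, obs_controls), 1))     # 3.3, attenuated from 4.0
```

Swapping in perfect sensitivity and 70% specificity in the same way reproduces the 2.4 shown on slide 9.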

8 Non-Differential Misclassification of Exposure: Imperfect Specificity
[2 x 2 diagrams: exposure (+/-) by disease (+/-) in the source population vs. the study sample]
Problems with specificity of exposure measurement, independent of disease status
e.g., exposure = self-reported second-hand smoke exposure

9 Non-differential Misclassification of Exposure
Truth: no misclassification (100% sensitivity/specificity)
  Exposure   Cases   Controls
  Yes        50      20
  No         50      80
  OR = (50/50)/(20/80) = 4.0
With 70% specificity in exposure classification
  Exposure   Cases            Controls
  Yes        50 + 15 = 65     20 + 24 = 44
  No         50 - 15 = 35     80 - 24 = 56
  OR = (65/35)/(44/56) = 2.4
Effect of non-differential misclassification of 2 exposure categories: bias of the OR toward the null value of 1.0

10 Non-Differential Misclassification of Exposure: Imperfect Specificity and Sensitivity
[2 x 2 diagrams: exposure (+/-) by disease (+/-) in the source population vs. the study sample]
Problems with sensitivity, independent of disease status
Problems with specificity, independent of disease status

11 Non-Differential Misclassification of Exposure: Imperfect Sensitivity and Specificity
True distribution (gold standard), source population:
  Exposure   Cases   Controls
  Yes        50      20
  No         50      80
  True OR = (50/50)/(20/80) = 4.0
Study distribution with sensitivity 0.90 and specificity 0.80:
  Cases:    observed exposed = 45 + 10 = 55;  observed unexposed = 5 + 40 = 45
  Controls: observed exposed = 18 + 16 = 34;  observed unexposed = 2 + 64 = 66
  Exposure   Cases   Controls
  Yes        55      34
  No         45      66
  Observed OR = (55/45)/(34/66) = 2.4
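The same bookkeeping can be written out cell by cell; this short sketch (again illustrative, not from the slides) reproduces the observed counts above from sensitivity 0.90 and specificity 0.80.

```python
# Sketch (not from the slides): cell-by-cell reclassification of the true counts
# with sensitivity 0.90 and specificity 0.80, as on this slide.
sens, spec = 0.90, 0.80
for group, (exposed, unexposed) in {"cases": (50, 50), "controls": (20, 80)}.items():
    stay_exposed = exposed * sens              # true exposed kept as exposed
    false_exposed = unexposed * (1 - spec)     # true unexposed called exposed
    stay_unexposed = unexposed * spec          # true unexposed kept as unexposed
    false_unexposed = exposed * (1 - sens)     # true exposed called unexposed
    print(group,
          round(stay_exposed + false_exposed, 1),
          round(false_unexposed + stay_unexposed, 1))
# cases 55.0 45.0 ; controls 34.0 66.0  ->  observed OR = (55/45)/(34/66) = 2.4
```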

12 Non-Differential Misclassification of Exposure: Imperfect Sensitivity & Specificity and an Uncommon Exposure
e.g., radon exposure
True distribution (gold standard), source population:
  Exposure   Cases   Controls
  Yes        30      10
  No         70      190
  True OR = (30/70)/(10/190) = 8.1
Study distribution with sensitivity 0.90 and specificity 0.80:
  Cases:    observed exposed = 27 + 14 = 41;  observed unexposed = 3 + 56 = 59
  Controls: observed exposed = 9 + 38 = 47;   observed unexposed = 1 + 152 = 153
  Exposure   Cases   Controls
  Yes        41      47
  No         59      153
  Observed OR = (41/59)/(47/153) = 2.3

13 Non-differential Misclassification of Exposure: Magnitude of Bias on the Odds Ratio (True OR = 4.0)
  Sensitivity   Specificity   Prev of exposure in controls   Observed OR
  0.90          0.90          0.077                          2.2
  0.90          0.90          0.20                           2.8
  0.90          0.90          0.368                          3.0
  0.90          0.60          0.20                           1.9
  0.90          0.95          0.20                           3.2
  0.60          0.85          0.20                           1.9
  0.90          0.85          0.20                           2.6
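Rows of this table can be reproduced directly from the true OR, the exposure prevalence in controls, and the assumed sensitivity and specificity; the sketch below (mine, not from the slides) does that calculation.

```python
# Sketch (not from the slides): observed OR as a function of true OR, exposure
# prevalence in controls, and non-differential sensitivity/specificity.

def observed_or(true_or, prev_controls, sens, spec):
    odds_controls = prev_controls / (1 - prev_controls)
    prev_cases = (true_or * odds_controls) / (1 + true_or * odds_controls)
    def obs_odds(p):  # apply misclassification to a true exposure prevalence
        p_obs = p * sens + (1 - p) * (1 - spec)
        return p_obs / (1 - p_obs)
    return obs_odds(prev_cases) / obs_odds(prev_controls)

print(round(observed_or(4.0, 0.20, 0.90, 0.90), 1))  # 2.8 (second table row)
print(round(observed_or(4.0, 0.20, 0.90, 0.60), 1))  # 1.9 (fourth table row)
```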

14 Bias as a function of non-differential imperfect sensitivity and specificity of exposure measurement
[Figure (Copeland et al., AJE 1977): apparent odds ratio (1.0 to 2.8) plotted against specificity of exposure measurement (0.50 to 1.00), with separate curves for sensitivity of 0.9, 0.7, and 0.5. True OR = 2.67; prevalence of exposure in controls = 0.2.]

15 Bias as a function of non-differential imperfect sensitivity and specificity of exposure measurement
[Same figure as slide 14 (Copeland et al., AJE 1977), repeated.]

16 Non-Differential Misclassification of Exposure in a Cohort Study: Effect of Prevalence of Exposure
[Figure (Flegal et al., AJE 1986); in the legend, U = sensitivity and V = specificity.]

17 Non-Differential Misclassification of Exposure: Effect at Different Magnitudes of True Association
[Figure (Flegal et al., AJE 1986); in the legend, U = sensitivity and V = specificity.]

18 Non-Differential Misclassification of Exposure: Rules of Thumb Regarding Sensitivity & Specificity
Source population:
  Exposure   Cases   Controls
  Yes        50      100
  No         50      300
  True OR = (50/50)/(100/300) = 3.0
– Sensitivity + specificity = 1 gives OR = 1 (no effect)
– Sensitivity + specificity > 1 but < 2 gives an attenuated effect
– Sensitivity + specificity < 1 gives a reversal of effect
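These rules of thumb can be checked numerically against the source population on this slide; the sketch below (mine, not from the slides) reuses the same misclassification arithmetic as the earlier examples.

```python
# Sketch (not from the slides): checking the three rules of thumb against this
# slide's source population (true OR = 3.0; cases 50/50, controls 100/300).
cases, controls = (50, 50), (100, 300)

def observed_odds(exposed, unexposed, sens, spec):
    obs_exp = exposed * sens + unexposed * (1 - spec)
    obs_unexp = exposed * (1 - sens) + unexposed * spec
    return obs_exp / obs_unexp

for sens, spec in [(0.6, 0.4), (0.9, 0.8), (0.4, 0.4)]:
    or_obs = observed_odds(*cases, sens, spec) / observed_odds(*controls, sens, spec)
    print(f"sens + spec = {sens + spec:.1f}: observed OR = {or_obs:.2f}")

# sens + spec = 1.0 -> OR = 1.00 (no apparent effect)
# sens + spec = 1.7 -> OR ~ 2.04 (attenuated; true OR = 3.0)
# sens + spec = 0.8 -> OR ~ 0.82 (direction of effect reversed)
```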

19 Non-Differential Misclassification of Outcome
[2 x 2 diagrams: exposure (+/-) by disease (+/-) in the source population vs. the study sample]
Problems with sensitivity, independent of exposure status
Problems with specificity, independent of exposure status

20 Bias as a function of non-differential imperfect sensitivity and specificity of outcome measurement in a cohort study
[Figure (Copeland et al., AJE 1977): apparent risk ratio plotted against specificity of outcome measurement, with separate curves for sensitivity of 0.9, 0.7, and 0.5. True risk ratio = 2.0; cumulative incidence in the unexposed = 0.05.]

21 Non-Differential Misclassification of Outcome: Effect of Incidence of Outcome
[Figure (Copeland et al., AJE 1977): apparent risk ratio plotted against specificity of outcome measurement, with curves for cumulative incidences of the outcome (exposed/unexposed) of 0.2/0.1, 0.1/0.05, and 0.05/0.025. True risk ratio = 2.0; sensitivity of outcome measurement held fixed at 0.9.]

22 Special Situation in a Cohort or Cross-sectional Study: Misclassification of Outcome
If specificity of outcome measurement is 100%, any degree of imperfect sensitivity, if non-differential, will not bias the risk ratio or prevalence ratio.
The risk difference, however, is reduced by a factor of (1 minus sensitivity); in this example with 70% sensitivity, the reduction is 30% (truth = 0.1; biased = 0.07).
[Tables comparing the truth with outcome classification at 70% sensitivity]
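A short sketch (mine, not from the slides) illustrates this special case. The slide's underlying risks are not in the transcript, so the true risks below (0.20 exposed, 0.10 unexposed) are assumed for illustration; they give the quoted risk difference of 0.1.

```python
# Sketch (assumed true risks, not from the slides): outcome measured with 100%
# specificity but only 70% sensitivity in a cohort study.
sens, spec = 0.70, 1.00
risk_exposed, risk_unexposed = 0.20, 0.10      # assumed; true RD = 0.1, true RR = 2.0

def observed_risk(true_risk):
    # detected cases plus false positives among non-cases (none here, since spec = 1.0)
    return true_risk * sens + (1 - true_risk) * (1 - spec)

rr_true = risk_exposed / risk_unexposed
rr_obs = observed_risk(risk_exposed) / observed_risk(risk_unexposed)
rd_true = risk_exposed - risk_unexposed
rd_obs = observed_risk(risk_exposed) - observed_risk(risk_unexposed)
print(round(rr_true, 2), round(rr_obs, 2), round(rd_true, 2), round(rd_obs, 2))
# 2.0 2.0 0.1 0.07  -> risk ratio unchanged, risk difference attenuated by 30%
```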

23 When specificity of outcome is 100% in a cohort or cross-sectional study
[Figure (Copeland et al., AJE 1977): same plot as slide 20 (apparent risk ratio vs. specificity of outcome measurement for sensitivity 0.9, 0.7, 0.5; true risk ratio = 2.0; cumulative incidence in the unexposed = 0.05); at specificity = 1.00 the curves converge on the true risk ratio.]

24 In contrast, 100% specificity of exposure measurement still results in bias
[Figure (Copeland et al., AJE 1977): same plot as slide 14 (apparent odds ratio vs. specificity of exposure measurement for sensitivity 0.9, 0.7, 0.5; true OR = 2.67; prevalence of exposure in controls = 0.2); even at specificity = 1.00, imperfect sensitivity leaves the apparent OR below the true OR.]

25 When specificity of outcome measurement is 100% in a cohort or cross-sectional study
– Worth knowing about when choosing a cutoff for continuous variables on ROC curves
– Choosing the most specific cutoff (or the cutoff giving 100% specificity) will lead to the least biased ratio measures of effect

26 Pervasiveness of Non-Differential Misclassification
– The direction of this bias is typically toward the null
– Therefore, this is called a "conservative" bias
– The goal, however, is to get the truth
– Consider how much underestimation of effects must be occurring in research
– How many "negative" studies are truly "positive"?

27 Differential Misclassification of Exposure (Weinstock et al., AJE 1991)
– Nested case-control study in the Nurses' Health Study cohort
– Cases: women with new melanoma diagnoses
– Controls: women without melanoma, selected by incidence density sampling
– Measurement of exposure: questionnaire about self-reported "tanning ability," administered shortly after melanoma development

28 [Results compared for the tanning-ability question asked after diagnosis vs. the same question asked before diagnosis (NHS baseline).]

29 "Tanning Ability" and Melanoma: Differential Misclassification of Exposure
[2 x 2 diagram: tanning ability (yes/no) by melanoma (+/-) in the source population vs. the study sample]
– Imperfect specificity of exposure measurement, mostly in cases
– Bias away from the null

30 Differential Misclassification of Exposure: Exposures During Pregnancy and Congenital Malformations
[2 x 2 diagrams: exposure (+/-) by congenital malformation (+/-) in the source population vs. the study sample]
– Cases more likely than controls to remember a variety of exposures
– Cases might be more likely than controls to falsely state a variety of exposures
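A small numerical sketch (invented numbers, not from the slides) shows how this kind of recall bias plays out: when cases report exposure with higher sensitivity and lower specificity than controls, a truly null exposure can appear harmful.

```python
# Sketch (invented numbers, not from the slides): differential misclassification.
# Cases recall/report the exposure with higher sensitivity and lower specificity
# than controls, so a truly null exposure (true OR = 1.0) appears harmful.

def observed_counts(n_exposed, n_unexposed, sens, spec):
    obs_exp = n_exposed * sens + n_unexposed * (1 - spec)
    obs_unexp = n_exposed * (1 - sens) + n_unexposed * spec
    return obs_exp, obs_unexp

# True exposure prevalence 20% in both groups (500 cases, 500 controls).
cases_obs = observed_counts(100, 400, sens=0.95, spec=0.85)     # over-reporting by cases
controls_obs = observed_counts(100, 400, sens=0.80, spec=0.95)  # under-reporting by controls
or_obs = (cases_obs[0] / cases_obs[1]) / (controls_obs[0] / controls_obs[1])
print(round(or_obs, 2))   # ~1.8: bias away from the null (the truth is 1.0)
```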

31 Differential Misclassification of Exposure: Magnitude of Bias on the Odds Ratio (True OR = 3.9)
[Table not reproduced in the transcript.]

32 Misclassification of Dichotomous Exposure or Outcome: Summary of Effects
– Non-differential misclassification: bias toward the null
– Differential misclassification: bias toward or away from the null, depending on the pattern of errors
– Special case: in a cohort or cross-sectional study, 100% specificity of outcome measurement leaves the risk or prevalence ratio unbiased by non-differential imperfect sensitivity

33 Non-differential Misclassification of Multi-level Exposure (Dosemeci et al., AJE 1990)
[Figure: dose-response relationship under the truth vs. with misclassification between adjacent exposure categories.]
– Misclassification between adjacent exposure categories
– Bias away from the null
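A simple sketch (invented numbers, not Dosemeci's) shows how this can happen with a 3-level exposure: shifting part of the highest category into the adjacent middle category, equally in cases and controls, moves the middle-category odds ratio away from the null.

```python
# Sketch (invented numbers, not from Dosemeci et al.): non-differential
# misclassification between adjacent categories of a 3-level exposure.
# 30% of the truly "high" group is recorded as "medium" in cases and controls alike.

true_cases = {"low": 100, "medium": 67, "high": 133}     # true ORs ~2.0 and ~4.0 vs. low
true_controls = {"low": 300, "medium": 100, "high": 100}

def shift_high_to_medium(counts, rate=0.30):
    moved = counts["high"] * rate
    return {"low": counts["low"],
            "medium": counts["medium"] + moved,
            "high": counts["high"] - moved}

def or_vs_low(cases, controls, level):
    return (cases[level] / cases["low"]) / (controls[level] / controls["low"])

obs_cases, obs_controls = shift_high_to_medium(true_cases), shift_high_to_medium(true_controls)
for level in ("medium", "high"):
    print(level,
          round(or_vs_low(true_cases, true_controls, level), 1),   # true OR
          round(or_vs_low(obs_cases, obs_controls, level), 1))     # observed OR
# medium: true ~2.0 -> observed ~2.5 (away from the null); high: ~4.0 -> ~4.0
```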

34 Misclassification of Multi-level Exposure (Dosemeci et al., AJE 1990)
[Figure: dose-response relationship under the truth vs. with misclassification between adjacent and non-adjacent exposure categories.]
– Misclassification between adjacent and non-adjacent exposure categories
– Appearance of a J-shaped relationship

35 Measurement Error in Continuous Exposure Variables
– Systematic error: responses move systematically up or down the scale; no real effect in analytic studies
– Random error: assume the true exposure is normally distributed with variance σ²T and the random error is normally distributed with variance σ²E. Then the observed regression coefficient equals the true regression coefficient times σ²T / (σ²T + σ²E)
– i.e., the greater the measurement error (the poorer the reproducibility), the greater the attenuation (bias) toward the null
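A quick simulation (mine, not from the slides) illustrates this attenuation factor: the slope estimated from the error-containing exposure shrinks toward zero by roughly σ²T / (σ²T + σ²E).

```python
# Sketch (not from the slides): regression dilution from random error in a
# continuous exposure. Expected attenuation factor = var_true / (var_true + var_error).
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 2.0
var_true, var_error = 1.0, 0.5

x_true = rng.normal(0.0, np.sqrt(var_true), n)
y = beta * x_true + rng.normal(0.0, 1.0, n)                     # outcome depends on true exposure
x_observed = x_true + rng.normal(0.0, np.sqrt(var_error), n)    # exposure measured with random error

slope_true = np.cov(x_true, y)[0, 1] / np.var(x_true, ddof=1)
slope_observed = np.cov(x_observed, y)[0, 1] / np.var(x_observed, ddof=1)

print(round(slope_true, 2))                           # ~2.0
print(round(slope_observed, 2))                       # ~1.33, i.e. about 2.0 * 0.67
print(round(var_true / (var_true + var_error), 2))    # 0.67 attenuation factor
```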

36 Advanced Topics
Misclassification of confounding variables
– the net result is failure to fully control (adjust) for that variable (residual confounding)
– measures of association may be over- or under-estimated
Back-calculating to unbiased results
– thus far, the truth about relationships has been assumed to be known
– in practice, we have only the observed results
– when the extent of the classification errors (e.g., sensitivity and specificity) is known, it is possible to back-calculate to the truth
– if the exact classification errors are not known, sensitivity analyses can estimate a range of study results given a range of possible classification errors
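For a dichotomous exposure, the back-calculation is simple algebra; the sketch below (mine, not from the slides) inverts the misclassification and recovers the true counts behind slide 11's observed data.

```python
# Sketch (not from the slides): back-calculating true exposed/unexposed counts from
# observed counts when sensitivity and specificity of the exposure measure are known.
# With N = observed_exposed + observed_unexposed,
#   observed_exposed = true_exposed*sens + (N - true_exposed)*(1 - spec)
# so true_exposed = (observed_exposed - N*(1 - spec)) / (sens + spec - 1).

def back_calculate(observed_exposed, observed_unexposed, sens, spec):
    n = observed_exposed + observed_unexposed
    true_exposed = (observed_exposed - n * (1 - spec)) / (sens + spec - 1)
    return true_exposed, n - true_exposed

# Observed counts from slide 11 (sensitivity 0.90, specificity 0.80):
for obs, label in [((55, 45), "cases"), ((34, 66), "controls")]:
    exp, unexp = back_calculate(*obs, 0.90, 0.80)
    print(label, round(exp), round(unexp))
# cases 50 50 ; controls 20 80 -- the true distributions from slide 11
```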

37 Managing Measurement Bias
[Diagram: targets illustrating poor reproducibility, poor validity, good reproducibility, and good validity.]
– Prevention and avoidance are critical
– The study design phase is critical; little can be done after the study is over
– Become an expert in the measurement of your primary variables
– For the other variables, seek out the advice of other experts
– Optimize the reproducibility/validity of your measurements!

