October 15
In Chapter 19:
19.1 Preventing Confounding
19.2 Simpson’s Paradox
19.3 Mantel-Haenszel Methods
19.4 Interaction
§19.1 Confounding
- Confounding is a systematic distortion in a measure of association due to the influence of “lurking” variables.
- Confounding occurs when the effects of an extraneous lurking factor get mixed with the effects of the explanatory variable. (The word confounding means “to mix together” in Latin.)
- When groups are unbalanced with respect to determinants of the outcome, comparisons will tend to be confounded.
Techniques that Mitigate Confounding
- Randomization – see Ch 2; randomizing an exposure balances the groups with respect to potential confounders (especially effective in large samples)
- Restriction – imposes uniformity in the study base; participants are made homogeneous with respect to the potential confounder
Mitigating Confounding, cont.
- Matching – balances confounders; requires matched analysis techniques (e.g., §18.6)
- Regression models – mathematically adjust for confounding variables
- Stratification – subdivides the data into homogeneous groups before pooling results
§19.2 Simpson’s Paradox
Simpson’s paradox is a severe form of confounding in which there is a reversal in the direction of an association caused by the confounding variable.
Simpson’s Paradox – Example
Gender bias? Are male applicants more likely to get accepted into a particular graduate school? Data reveal:

All Applicants   Accepted  Rejected  Total
Male                  198       162    360
Female                 88       112    200
Total                 286       274    560

Male incidence of acceptance   = 198/360 = 0.55
Female incidence of acceptance =  88/200 = 0.44
RR = 0.55 / 0.44 = 1.25 (males 25% more likely to be accepted)
Simpson’s Paradox – Example
Consider the lurking variable “major applied to”:
- Business School (240 applicants)
- Art School (320 applicants)
Perhaps males were more likely to apply to the major with the higher acceptance rate? To evaluate this hypothesis, stratify the data according to the lurking variable as follows:
Stratified Data – Example

All Applicants   Accepted  Rejected  Total
Male                  198       162    360
Female                 88       112    200
Total                 286       274    560

Stratify by major applied to:

Business School Applicants   Success  Failure  Total
Male                              18      102    120
Female                            24       96    120
Total                             42      198    240
p^(male) = 18 / 120 = 0.15    p^(female) = 24 / 120 = 0.20

Art School Applicants        Success  Failure  Total
Male                             180       60    240
Female                            64       16     80
Total                            244       76    320
p^(male) = 180 / 240 = 0.75    p^(female) = 64 / 80 = 0.80
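As a quick check on the arithmetic, the short Python sketch below (not part of the text; the names and layout are illustrative only) reproduces the crude acceptance rates, the crude RR of 1.25, and the stratum-specific rates that reverse the direction of the association.

```python
# Acceptance counts from the example: (accepted, total applicants)
crude = {"male": (198, 360), "female": (88, 200)}
strata = {
    "Business": {"male": (18, 120),  "female": (24, 120)},
    "Art":      {"male": (180, 240), "female": (64, 80)},
}

def rate(accepted, total):
    return accepted / total

# Crude (unstratified) comparison: males appear favored
p_m, p_f = rate(*crude["male"]), rate(*crude["female"])
print(f"Crude: male {p_m:.2f}, female {p_f:.2f}, RR = {p_m / p_f:.2f}")   # RR = 1.25

# Stratified comparison: within each school, females are favored
for school, counts in strata.items():
    p_m, p_f = rate(*counts["male"]), rate(*counts["female"])
    print(f"{school}: male {p_m:.2f}, female {p_f:.2f}, RR = {p_m / p_f:.2f}")
```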
Stratified Data, cont.
- Overall, men had the higher acceptance rate.
- Within each school, women had the higher acceptance rate.
- How do we reconcile this paradox? The answer lies in the fact that men were more likely to apply to the art school, and the art school had a much higher acceptance rate.
- The lurking variable MAJOR confounded the observed relation between GENDER and ACCEPT.
Stratified Analysis, cont.
- By stratifying the data, we achieved like-to-like comparisons and mitigated confounding.
- We can then combine the strata-specific estimates to derive a summary measure of effect that shows the true relation between GENDER and ACCEPT.
§19.3 Mantel-Haenszel Methods
The Mantel-Haenszel estimate is a summary measure of effect adjusted for confounding.
M-H Summary RR – Example

Business School (Stratum 1)   Success  Failure  Total
Male                               18      102    120
Female                             24       96    120
Total                              42      198    240
RR^1 = (18 / 120) / (24 / 120) = 0.75

Art School (Stratum 2)        Success  Failure  Total
Male                              180       60    240
Female                             64       16     80
Total                             244       76    320
RR^2 = (180 / 240) / (64 / 80) = 0.94

Combining the strata gives a Mantel-Haenszel summary RR of 0.90. This RR suggests that men were 10% less likely than women to be accepted to the graduate school.
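A minimal sketch (not from the text) of how the summary RR can be computed by hand, assuming the standard Mantel-Haenszel risk-ratio estimator Σ(aᵢ·N₀ᵢ/Tᵢ) / Σ(cᵢ·N₁ᵢ/Tᵢ); the book’s own notation may differ:

```python
# Each stratum: (exposed cases a, exposed total n1, unexposed cases c, unexposed total n0)
# "Exposed" = male applicant, "case" = accepted, per the example.
strata = [
    (18, 120, 24, 120),    # Business School
    (180, 240, 64, 80),    # Art School
]

# Mantel-Haenszel summary risk ratio: sum(a*n0/T) / sum(c*n1/T), T = stratum total
num = sum(a * n0 / (n1 + n0) for a, n1, c, n0 in strata)
den = sum(c * n1 / (n1 + n0) for a, n1, c, n0 in strata)
print(f"M-H RR = {num / den:.2f}")   # 0.90, matching the slide
```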
Mantel-Haenszel Inference
- CIs for M-H estimates are calculated by computer; see the text for formulas.
- Results are tested for significance with a chi-square test statistic (H0: RR = 1).
- For the illustrative example: M-H RR = 0.90 (95% CI 0.78 to 1.04); χ² = 1.84, df = 1, P = 0.175.
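For readers who want to verify the computer output, the sketch below reproduces the reported interval and test using the Greenland-Robins variance for ln(M-H RR) and the classic Mantel-Haenszel chi-square statistic; these are standard formulas, though they may not be exactly the ones printed in the text.

```python
from math import log, exp, sqrt
from scipy.stats import chi2   # only used to convert chi-square to a P-value

# (exposed cases a, exposed total n1, unexposed cases c, unexposed total n0) per stratum
strata = [(18, 120, 24, 120), (180, 240, 64, 80)]

num = sum(a * n0 / (n1 + n0) for a, n1, c, n0 in strata)
den = sum(c * n1 / (n1 + n0) for a, n1, c, n0 in strata)
rr_mh = num / den                                             # 0.90

# Greenland-Robins variance of ln(M-H RR)
var = sum((n1 * n0 * (a + c) - a * c * (n1 + n0)) / (n1 + n0) ** 2
          for a, n1, c, n0 in strata) / (num * den)
ci_lo, ci_hi = (exp(log(rr_mh) + z * 1.96 * sqrt(var)) for z in (-1, 1))
print(f"M-H RR = {rr_mh:.2f}, 95% CI {ci_lo:.2f} to {ci_hi:.2f}")   # 0.90 (0.78 to 1.04)

# Mantel-Haenszel chi-square test of H0: RR = 1
observed = sum(a for a, n1, c, n0 in strata)
expected = sum(n1 * (a + c) / (n1 + n0) for a, n1, c, n0 in strata)
v = sum(n1 * n0 * (a + c) * ((n1 + n0) - (a + c)) / ((n1 + n0) ** 2 * (n1 + n0 - 1))
        for a, n1, c, n0 in strata)
x2 = (observed - expected) ** 2 / v
print(f"chi-square = {x2:.2f}, df = 1, P = {chi2.sf(x2, 1):.3f}")   # 1.84, P = 0.175
```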
Other Mantel-Haenszel Statistics
Mantel-Haenszel methods are available for other measures of effect, such as odds ratios, rate ratios, and risk differences. Mantel-Haenszel methods for ORs are described on pp. 471–473.
§19.4 Interaction
- Statistical interaction occurs when a statistical model does not adequately predict the joint effects of two or more explanatory factors.
- Statistical interaction = heterogeneity of the effect measures.
- Our example had strata-specific RRs of 0.75 and 0.94. Do these effect measures reflect the same underlying relationship, or is there heterogeneity? We can test this question with a chi-square interaction statistic.
Test for Interaction
A. Hypotheses. H0: strata-specific measures in the population are homogeneous (no interaction) vs. Ha: strata-specific measures are heterogeneous (interaction).
B. Test statistic. A chi-square interaction statistic is calculated by the computer program. (Several such statistics are used; WinPepi cites Rothman, 1986, Formula 12-59 and Fleiss, 1981, Formula 10.35.)
C. P-value. Convert the chi-square statistic to a P-value; interpret.
Test for Interaction – Example
Strata-specific RR estimates from the illustrative example are submitted to a test of interaction.
A. H0: RR1 = RR2 (no interaction) vs. Ha: RR1 ≠ RR2 (interaction)
B. Hand calculation (next slide) shows chi-square = 0.78 with 1 df. [WinPepi calculated 0.585 using a slightly different formula.]
C. P = 0.38. The evidence against H0 is not significant. Retain H0 and assume no interaction.
Interaction Statistic – Hand Calculation
Ad hoc interaction statistic presented in the text:
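The formula itself is not reproduced in this transcript. One statistic consistent with the hand-calculated value of 0.78 is Σᵢ (ln RR̂ᵢ − ln RR_MH)² / Var(ln RR̂ᵢ), with df = number of strata − 1; the Python sketch below works through the calculation under that assumption (it may not be exactly the formula printed in the text).

```python
from math import log
from scipy.stats import chi2

# (a = exposed cases, n1 = exposed total, c = unexposed cases, n0 = unexposed total)
strata = [(18, 120, 24, 120), (180, 240, 64, 80)]
rr_mh = 0.90                      # Mantel-Haenszel summary RR from §19.3

x2 = 0.0
for a, n1, c, n0 in strata:
    ln_rr = log((a / n1) / (c / n0))            # stratum-specific ln(RR)
    var_ln_rr = 1/a - 1/n1 + 1/c - 1/n0         # approximate variance of ln(RR)
    x2 += (ln_rr - log(rr_mh)) ** 2 / var_ln_rr

df = len(strata) - 1
print(f"interaction chi-square = {x2:.2f}, df = {df}, P = {chi2.sf(x2, df):.2f}")
# 0.78, df = 1, P = 0.38 -- matching the hand calculation cited on the previous slide
```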
Example of Interaction – Asbestos, Lung Cancer, Smoking
Case-control data (table not reproduced here):
- Smokers had an OR of lung cancer for asbestos of 60.
- Non-smokers had an OR of 2.
- Apparent heterogeneity in the effect measure (“interaction”).
Test for Interaction – Asbestos Example
A. H0: OR1 = OR2 versus Ha: OR1 ≠ OR2
B. Chi-square interaction statistic = 21.38, 1 df (output from WinPepi > Compare2.exe > Program B)
C. P = 3.8 × 10⁻⁶. Conclude “significant interaction.”
When interaction is present, avoid summary adjustments because they would obscure the interaction.