1 From Propensity Scores And Mediation To External Validity
WHAT'S NEW IN CAUSAL INFERENCE: From Propensity Scores And Mediation To External Validity
Judea Pearl, University of California, Los Angeles

2 OUTLINE
Unified conceptualization of counterfactuals, structural equations, and graphs
Propensity scores demystified
Direct and indirect effects (mediation)
External validity mathematized

3 TRADITIONAL STATISTICAL INFERENCE PARADIGM
A joint distribution P generates the Data; inference returns Q(P), aspects of P.
e.g., infer whether customers who bought product A would also buy product B: Q = P(B | A)

4 THE STRUCTURAL MODEL PARADIGM
A data-generating model M produces the joint distribution and the Data; inference returns Q(M), aspects of M.
M is the invariant strategy (mechanism, recipe, law, protocol) by which Nature assigns values to the variables in the analysis.
"Think Nature, not experiment!"

5 FAMILIAR CAUSAL MODEL: AN ORACLE FOR MANIPULATION
Here is a causal model we all remember from high school: a circuit diagram (inputs feeding gates X, Y, Z, producing an output). There are four interesting points to notice in this example:
(1) It qualifies as a causal model because it contains the information needed to confirm or refute every action, counterfactual, and explanatory sentence concerning the operation of the circuit. For example, anyone can figure out what the output would be if we set Y to zero, if we changed this OR gate to a NOR gate, or if we performed any of the billions of combinations of such actions.
(2) The logical functions alone (the Boolean input-output relation) are insufficient for answering such queries.
(3) These actions were not specified in advance; they have no special names and they do not show up in the diagram. In fact, the great majority of the action queries this circuit can answer were never considered by its designer.
(4) So how does the circuit encode this extra information? Through two encoding tricks: (4.1) the symbolic units correspond to stable physical mechanisms (the logic gates), and (4.2) each variable has precisely one mechanism that determines its value.

6 STRUCTURAL CAUSAL MODELS
Definition: A structural causal model is a 4-tuple ⟨V, U, F, P(u)⟩, where
V = {V1, ..., Vn} are endogenous variables,
U = {U1, ..., Um} are background variables,
F = {f1, ..., fn} are functions determining V, vi = fi(v, u),
P(u) is a distribution over U.
P(u) and F induce a distribution P(v) over the observable variables.

7 CAUSAL MODELS AND COUNTERFACTUALS
Definition: The sentence "Y would be y (in situation u), had X been x," denoted Yx(u) = y, means: the solution for Y in the mutilated model Mx (i.e., the equation for X replaced by X = x), with input U = u, is equal to y.
The Fundamental Equation of Counterfactuals: Yx(u) = YMx(u)
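To make the mutilation operation concrete, here is a minimal Python sketch (the toy Boolean equations and all names are illustrative, not taken from the slides): an SCM is stored as a set of structural functions plus background values u, and Yx(u) is obtained by solving the model in which the equation for X is replaced by the constant x.

```python
# Minimal SCM sketch (illustrative; the toy equations are not from the slides).
# An SCM is a set of structural functions plus background values u.

def solve(functions, u):
    """Solve a recursive (acyclic) SCM by evaluating each equation in causal order."""
    v = {}
    for name, f in functions.items():   # assumes functions are listed in causal order
        v[name] = f(v, u)
    return v

def counterfactual(functions, u, intervention):
    """Y_x(u): solve the mutilated model M_x, in which the equation for each
    intervened variable X is replaced by the constant X = x."""
    mutilated = dict(functions)
    for var, value in intervention.items():
        mutilated[var] = lambda v, u, value=value: value   # replace f_X by X = x
    return solve(mutilated, u)

# Toy model: X = U1, Z = X OR U2, Y = X AND Z  (hypothetical, for illustration only)
F = {
    "X": lambda v, u: u["U1"],
    "Z": lambda v, u: v["X"] or u["U2"],
    "Y": lambda v, u: v["X"] and v["Z"],
}
u = {"U1": 1, "U2": 0}
print(solve(F, u)["Y"])                     # factual Y(u)
print(counterfactual(F, u, {"X": 0})["Y"])  # counterfactual Y_{X=0}(u)
```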

8 READING COUNTERFACTUALS FROM SEM
Data shows: a = 0.7, b = 0.5, g = 0.4.
A student named Joe measured X = 0.5, Z = 1, Y = 1.9.
Q1: What would Joe's score be had he doubled his study time?

9 READING COUNTERFACTUALS
Q1: What would Joe's score be had he doubled his study time?
Answer: Joe's score would be 1.9. Or, in counterfactual notation: YZ=2(uJoe)

10 READING COUNTERFACTUALS
Q2: What would Joe’s score be, had the treatment been 0 and had he studied at whatever level he would have studied had the treatment been 1?
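A sketch of how Q1 and Q2 can be computed by the three-step recipe (abduction, action, prediction). The linear equations Z = a·X + uZ and Y = b·Z + g·X + uY, and the assignment of a, b, g to those particular arrows, are assumptions of this sketch, since the slide's own equations appear only in a figure; the printed numbers therefore illustrate the procedure rather than reproduce the slide's answers.

```python
# Hedged sketch: assumes the linear SEM  Z = a*X + uZ,  Y = b*Z + g*X + uY,
# with a = 0.7, b = 0.5, g = 0.4 (this edge-to-coefficient mapping is an assumption).
a, b, g = 0.7, 0.5, 0.4
X_obs, Z_obs, Y_obs = 0.5, 1.0, 1.9          # Joe's measurements

# Step 1 (abduction): recover Joe's background factors from his observed data.
uZ = Z_obs - a * X_obs
uY = Y_obs - b * Z_obs - g * X_obs

# Steps 2-3 (action + prediction) for Q1: set Z to twice its observed value.
Y_q1 = b * (2 * Z_obs) + g * X_obs + uY
print("Q1: Y_{Z=2}(u_Joe) =", Y_q1)

# Q2: treatment X = 0, but study time held at the level it would have
# attained had the treatment been X = 1, i.e., Z_{X=1}(u_Joe).
Z_x1 = a * 1 + uZ
Y_q2 = b * Z_x1 + g * 0 + uY
print("Q2: Y_{X=0, Z_{X=1}}(u_Joe) =", Y_q2)
```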

11 CAUSAL MODELS AND COUNTERFACTUALS
Definition: The sentence "Y would be y (in situation u), had X been x," denoted Yx(u) = y, means: the solution for Y in the mutilated model Mx (i.e., the equation for X replaced by X = x), with input U = u, is equal to y.
Joint probabilities of counterfactuals: P(Yx = y, Zw = z) = Σ{u: Yx(u) = y, Zw(u) = z} P(u)
In particular: P(y | do(x)) = P(Yx = y) = Σ{u: Yx(u) = y} P(u)

12 THE FIVE NECESSARY STEPS OF CAUSAL ANALYSIS
Define: Express the target quantity Q as a function Q(M) that can be computed from any model M.
Assume: Formulate causal assumptions A using some formal language.
Identify: Determine if Q is identifiable given A.
Estimate: Estimate Q if it is identifiable; approximate it, if it is not.
Test: Test the testable implications of A (if any).

13 THE LOGIC OF CAUSAL ANALYSIS
A causal model MA embodies the causal assumptions A, with logical implications A*. Causal inference maps the queries of interest Q into Q(P), the identified estimands, and yields T(MA), the testable implications of A. Statistical inference then takes the data D and produces estimates of Q(P) (provisional claims), together with goodness-of-fit and model-testing of T(MA).

14 IDENTIFICATION IN SCM
Find the effect of X on Y, P(y | do(x)), given the causal assumptions shown in a graph G over X, Y, and auxiliary variables Z1, ..., Z6.
Can P(y | do(x)) be estimated if only a subset, Z, of these variables can be measured?

15 ELIMINATING CONFOUNDING BIAS: THE BACK-DOOR CRITERION
P(y | do(x)) is estimable if there is a set Z of variables such that Z d-separates X from Y in GX, the graph with all arrows emanating from X removed.
Moreover, P(y | do(x)) = Σz P(y | x, z) P(z) ("adjusting" for Z), and the back-door condition implies ignorability: Yx ⫫ X | Z.
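A minimal plug-in sketch of the resulting adjustment estimator; the simulated data and all names are hypothetical, and the point is only the formula P(y | do(x)) = Σz P(y | x, z) P(z).

```python
import numpy as np

def backdoor_adjustment(x_val, y_val, X, Y, Z):
    """Plug-in estimate of P(Y=y | do(X=x)) = sum_z P(Y=y | X=x, Z=z) P(Z=z),
    for discrete X, Y, Z given as 1-D integer arrays."""
    est = 0.0
    for z in np.unique(Z):
        pz = np.mean(Z == z)
        mask = (X == x_val) & (Z == z)
        if mask.sum() == 0:
            continue            # empty stratum; a real analysis must handle this
        est += np.mean(Y[mask] == y_val) * pz
    return est

# Hypothetical binary example in which Z confounds X and Y.
rng = np.random.default_rng(0)
Z = rng.integers(0, 2, 100_000)
X = (rng.random(100_000) < 0.2 + 0.6 * Z).astype(int)
Y = (rng.random(100_000) < 0.1 + 0.3 * X + 0.4 * Z).astype(int)

print("naive    P(Y=1 | X=1)     =", np.mean(Y[X == 1] == 1))
print("adjusted P(Y=1 | do(X=1)) =", backdoor_adjustment(1, 1, X, Y, Z))
```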

16 EFFECT OF WARM-UP ON INJURY (after Shrier & Platt, 2008)
Effect of Warm-up Exercises (X) on Injury (Y): the figure (omitted) shows a rich covariate structure, with annotations marking variables one must watch out for ("Watch out!", "No, no!") and a front-door path.

17 PROPENSITY SCORE ESTIMATOR (Rosenbaum & Rubin, 1983)
P(y | do(x)) = ? Adjustment for the scalar propensity score L = P(X = 1 | Z) replaces adjustment for the full set Z.
Theorem: Σz P(y | x, z) P(z) = Σl P(y | x, l) P(l), i.e., adjusting for L yields the same estimand as adjusting for Z.
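A hedged sketch of the practical upshot: estimate L = P(X = 1 | Z) (here with a logistic regression, an illustrative choice) and stratify on it instead of on the full covariate set Z. The data-generating process and the binning scheme are assumptions of the sketch, not part of the slide.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ps_adjusted_effect(X, Y, Z, n_strata=20):
    """Estimate E[Y | do(X=1)] - E[Y | do(X=0)] by stratifying on the
    estimated propensity score L(z) = P(X=1 | Z=z)."""
    ps = LogisticRegression().fit(Z, X).predict_proba(Z)[:, 1]
    edges = np.quantile(ps, np.linspace(0, 1, n_strata + 1))
    strata = np.clip(np.digitize(ps, edges[1:-1]), 0, n_strata - 1)
    effect = 0.0
    for s in range(n_strata):
        m = strata == s
        if m.sum() == 0 or X[m].min() == X[m].max():
            continue            # stratum lacks one of the treatment arms
        effect += (Y[m][X[m] == 1].mean() - Y[m][X[m] == 0].mean()) * m.mean()
    return effect

# Hypothetical data: Z is an admissible (back-door) set of two covariates.
rng = np.random.default_rng(1)
Z = rng.normal(size=(50_000, 2))
X = (rng.random(50_000) < 1 / (1 + np.exp(-(Z[:, 0] + Z[:, 1])))).astype(int)
Y = 2.0 * X + Z[:, 0] + 0.5 * Z[:, 1] + rng.normal(size=50_000)

print("PS-stratified effect estimate:", ps_adjusted_effect(X, Y, Z))  # roughly 2.0
```

This reproduces ordinary adjustment for Z only when Z itself is admissible, which is exactly the warning on the next slide.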

18 WHAT PROPENSITY SCORE (PS) PRACTITIONERS NEED TO KNOW
The asymptotic bias of PS is EQUAL to that of ordinary adjustment (for the same Z).
Including an additional covariate in the analysis CAN SPOIL the bias-reduction potential of others.
In particular, instrumental variables tend to amplify bias.
Choosing a sufficient set for PS requires knowledge of the model.

19 SURPRISING RESULT: Instrumental variables are bias-amplifiers in linear models (Bhattacharya & Vogt, 2007; Wooldridge, 2009)
In a linear model with instrument Z, unobserved confounder U, and path coefficients c0 (X → Y), c1, c2, c3, the "naive" (unadjusted) bias is compared with the bias after adjusting for Z (the bias formulas appear in the figure, omitted here).

20 INTUITION
When Z is allowed to vary, it absorbs (or explains) some of the changes in X. When Z is held fixed, the burden falls on U alone and is transmitted to Y, resulting in a higher bias.
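This intuition can be checked with a small simulation (a sketch; the coefficient values below are arbitrary choices, not the ones in the slide's figure). Regressing Y on X alone gives the "naive" bias; adding the instrument Z as a regressor makes the bias larger, because U remains unobserved.

```python
import numpy as np

# Linear model with unobserved confounder U and instrument Z (coefficients arbitrary):
#   X = c3*Z + c1*U + eX,   Y = c0*X + c2*U + eY
rng = np.random.default_rng(2)
n = 1_000_000
c0, c1, c2, c3 = 1.0, 1.0, 1.0, 2.0
Z = rng.normal(size=n)
U = rng.normal(size=n)
X = c3 * Z + c1 * U + rng.normal(size=n)
Y = c0 * X + c2 * U + rng.normal(size=n)

def ols(y, *regressors):
    """OLS coefficients of y on the given regressors (intercept dropped from output)."""
    A = np.column_stack([np.ones(n), *regressors])
    return np.linalg.lstsq(A, y, rcond=None)[0][1:]

b_naive = ols(Y, X)[0]        # Y ~ X      : coefficient on X without adjustment
b_adj   = ols(Y, X, Z)[0]     # Y ~ X + Z  : adjusting for the instrument Z
print("naive bias    :", b_naive - c0)  # about c1*c2 / (c3**2 + c1**2 + 1)
print("adjusted bias :", b_adj - c0)    # larger: about c1*c2 / (c1**2 + 1)
```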

21 WHAT'S BETWEEN AN INSTRUMENT AND A CONFOUNDER? Should we adjust for Z?
ANSWER: Yes, if the condition on the path coefficients shown in the figure holds; no, otherwise.
CONCLUSION: Adjusting for a parent of Y is safer than adjusting for a parent of X.

22 CONCLUSIONS
The prevailing practice of adjusting for all covariates, especially those that are good predictors of X (the "treatment assignment," Rubin, 2009), is totally misguided.
The "outcome mechanism" is as important, and much safer.
As X-rays are to the surgeon, graphs are for causation.

23 REGRESSION VS. STRUCTURAL EQUATIONS (THE CONFUSION OF THE CENTURY)
Regression (claimless, nonfalsifiable): Y = ax + εY
Structural (empirical, falsifiable): Y = bx + uY
Claim (regardless of distributions): E(Y | do(x)) = E(Y | do(x), do(z)) = bx
The mothers of all questions:
Q. When would b equal a? A. When all back-door paths are blocked (uY ⫫ X).
Q. When is b estimable by regression methods? A. Graphical criteria are available.

24 TWO PARADIGMS FOR CAUSAL INFERENCE
Observed: P(X, Y, Z, ...). Conclusions needed: P(Yx = y), P(Xy = x | Z = z), ...
How do we connect the observables X, Y, Z, ... to the counterfactuals Yx, Xz, Zy, ...?
N-R model: counterfactuals are primitives, new variables; a "super-distribution" over both.
Structural model: counterfactuals are derived quantities; subscripts modify the model and the distribution.

25 "SUPER" DISTRIBUTION IN THE N-R MODEL
A joint distribution is postulated over the observables X, Y, Z and every counterfactual variable Yx=0, Yx=1, Xz=0, Xz=1, Xy=0, ..., across units u1, u2, u3, u4 (table omitted).
Consistency constraint: x = 0 ⟹ Yx=0 = Y; in general, Y = xY1 + (1 − x)Y0 (rows that violate this are inconsistent).

26 ARE THE TWO PARADIGMS EQUIVALENT?
Yes (Galles and Pearl, 1998; Halpern, 1998).
In the N-R paradigm, Yx is defined by consistency: Y = xY1 + (1 − x)Y0. In SCM, consistency is a theorem.
Moreover, a theorem in one approach is a theorem in the other.
Difference: clarity of assumptions and their implications.

27 AXIOMS OF STRUCTURAL COUNTERFACTUALS
Yx(u) = y: Y would be y, had X been x (in state U = u). (Galles, Pearl, Halpern, 1998):
Definiteness
Uniqueness
Effectiveness
Composition (generalized consistency)
Reversibility
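For reference, the last three properties can be written out as follows; this is a sketch of the standard statements in Galles and Pearl (1998) and Halpern (1998), since the slide lists only their names.

```latex
% Effectiveness, composition, and reversibility (sketch of the standard statements).
\begin{align*}
  &\text{Effectiveness:} && X_{xw}(u) = x \\
  &\text{Composition:}   && W_{x}(u) = w \;\Rightarrow\; Y_{xw}(u) = Y_{x}(u) \\
  &\text{Reversibility:} && \bigl(Y_{xw}(u) = y\bigr) \wedge \bigl(W_{xy}(u) = w\bigr) \;\Rightarrow\; Y_{x}(u) = y
\end{align*}
```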

28 FORMULATING ASSUMPTIONS: THREE LANGUAGES
1. English: Smoking (X), Cancer (Y), Tar (Z), Genotypes (U)
2. Counterfactuals: (equalities and independence assumptions among the counterfactuals Zx, Xy, Yz, ...; formulas in the figure, omitted)
3. Structural: (the structural equations; shown in the figure, omitted)
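For concreteness, one standard way to write the structural version of this model (a sketch; the slide's exact equations appear only in a figure):

```latex
% Structural rendering of the smoking--tar--cancer model (a sketch):
% U = genotype, X = smoking, Z = tar deposits, Y = cancer.
\begin{align*}
  x &= f_1(u, \varepsilon_1), \\
  z &= f_2(x, \varepsilon_2), \\
  y &= f_3(z, u, \varepsilon_3),
\end{align*}
% with \varepsilon_1, \varepsilon_2, \varepsilon_3 jointly independent and independent of u.
```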

29 COMPARISON BETWEEN THE N-R AND SCM LANGUAGES
Expressing scientific knowledge
Recognizing the testable implications of one's assumptions
Locating instrumental variables in a system of equations
Deciding if two models are equivalent or nested
Deciding if two counterfactuals are independent given another
Algebraic derivations of identifiable estimands

30 GRAPHICAL – COUNTERFACTUAL SYMBIOSIS
Every causal graph expresses counterfactual assumptions, e.g., for X, Y, Z:
1. Missing arrows (e.g., no arrow Y → Z) encode exclusion assumptions;
2. Missing arcs (no hidden common cause of Y and Z) encode independence assumptions;
these assumptions are consistent, and readable from the graph.
Express assumptions in graphs; derive estimands by graphical or algebraic methods.

31 EFFECT DECOMPOSITION (direct vs. indirect effects)
Why decompose effects?
What is the definition of direct and indirect effects?
What are the policy implications of direct and indirect effects?
When can direct and indirect effects be estimated consistently from experimental and nonexperimental data?

32 WHY DECOMPOSE EFFECTS?
To understand how Nature works
To comply with legal requirements
To predict the effects of new types of interventions: signal routing, rather than variable fixing

33 LEGAL IMPLICATIONS OF DIRECT EFFECT
Can data prove an employer guilty of hiring discrimination? Gender (X), Qualifications (Z), Hiring (Y).
What is the direct effect of X on Y (averaged over z)? Adjust for Z? No! No!

34 FISHER'S GRAVE MISTAKE (after Rubin, 2005)
What is the direct effect of soil treatment (X) on yield (Y), with plant density (Z) as a mediator and a latent factor in the background?
Compare treated and untreated lots of the same density? No! No!
Proposed solution (?): "principal strata"

35 NATURAL INTERPRETATION OF AVERAGE DIRECT EFFECTS
Robins and Greenland (1992) called it the "pure" direct effect. Model: z = f(x, u), y = g(x, z, u).
Natural Direct Effect of X on Y: the expected change in Y when we change X from x0 to x1 and, for each u, keep Z constant at whatever value it attained before the change.
In linear models, DE = Natural Direct Effect.
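In counterfactual notation, the natural direct effect described above is commonly written as follows (a sketch of the standard definition; the slide's own formula appears only in a figure):

```latex
% Natural (pure) direct effect of a change from x0 to x1 on Y.
\[
  \mathit{DE}_{x_0,x_1}(Y) \;=\; E\bigl[\,Y_{x_1,\,Z_{x_0}}\bigr] \;-\; E\bigl[\,Y_{x_0}\bigr]
\]
```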

36 DEFINITION AND IDENTIFICATION OF NESTED COUNTERFACTUALS
Consider the quantity Q = E[Y_{x, Z_{x*}}].
Given M and P(u), Q is well defined: given u, Zx*(u) is the solution for Z in Mx*, call it z, and the corresponding Y value is the solution for Y in Mxz.
Can Q be estimated from data?
Experimental: reduce Q to a nest-free expression. Nonexperimental: reduce Q to a subscript-free expression.

37 DEFINITION OF INDIRECT EFFECTS
Model: z = f(x, u), y = g(x, z, u). There is no controlled indirect effect.
Indirect Effect of X on Y: the expected change in Y when we keep X constant, say at x0, and let Z change to whatever value it would have attained had X changed to x1.
In linear models, IE = TE - DE.
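Correspondingly, the natural indirect effect in counterfactual notation (again a sketch of the standard definition, not reproduced from the slide's figure):

```latex
% Natural indirect effect: X is held at x0 while Z takes the value it would
% have attained had X been x1.
\[
  \mathit{IE}_{x_0,x_1}(Y) \;=\; E\bigl[\,Y_{x_0,\,Z_{x_1}}\bigr] \;-\; E\bigl[\,Y_{x_0}\bigr]
\]
```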

38 POLICY IMPLICATIONS OF INDIRECT EFFECTS
What is the indirect effect of X on Y? The effect of Gender (X) on Hiring (Y) if sex discrimination is eliminated, i.e., the direct Gender → Hiring link is ignored while the path through Qualification (Z) remains active.
Deactivating a link: a new type of intervention.

39 MEDIATION FORMULAS
The natural direct and indirect effects are identifiable in Markovian models (no confounding) and are given by:
DE = Σz [E(Y | x1, z) - E(Y | x0, z)] P(z | x0)
IE = Σz E(Y | x0, z) [P(z | x1) - P(z | x0)]
These formulas are applicable to linear and non-linear models, continuous and discrete variables, regardless of distributional form.
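A minimal plug-in sketch of these two formulas for binary X and Z; the numbers and names are hypothetical, and the final line illustrates the point made on slide 44 that TE - DE need not equal IE.

```python
def mediation_effects(E_y, P_z, x0=0, x1=1, z_values=(0, 1)):
    """Plug-in natural direct and indirect effects (no-confounding case).
    E_y[(x, z)] = E[Y | X=x, Z=z];  P_z[(z, x)] = P(Z=z | X=x)."""
    DE = sum((E_y[(x1, z)] - E_y[(x0, z)]) * P_z[(z, x0)] for z in z_values)
    IE = sum(E_y[(x0, z)] * (P_z[(z, x1)] - P_z[(z, x0)]) for z in z_values)
    return DE, IE

# Hypothetical numbers, for illustration only.
E_y = {(0, 0): 0.1, (0, 1): 0.4, (1, 0): 0.3, (1, 1): 0.7}   # E[Y | x, z]
P_z = {(0, 0): 0.8, (1, 0): 0.2, (0, 1): 0.3, (1, 1): 0.7}   # P(Z=z | X=x)

DE, IE = mediation_effects(E_y, P_z)
TE = (sum(E_y[(1, z)] * P_z[(z, 1)] for z in (0, 1))
      - sum(E_y[(0, z)] * P_z[(z, 0)] for z in (0, 1)))
print(DE, IE, TE, TE - DE)   # note: TE - DE differs from IE in nonlinear settings
```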

40 In linear systems
Linear model plus an exposure-mediator interaction (coefficient gxz), with mediation path coefficients m1 and m2 on X → Z → Y; the resulting effect formulas appear in the figure (omitted).

41 MEDIATION FORMULAS IN UNCONFOUNDED MODELS
(Applied to the unconfounded model X → Z → Y; formulas shown in the figure, omitted.)

42 In linear systems
Disabling mediation leaves the direct path, giving DE; disabling the direct path leaves the mediated part, giving IE = TE - DE. Outside linear systems, IE is NOT in general equal to TE - DE (figure omitted).

43 MEDIATION FORMULA FOR BINARY VARIABLES
With binary X, Z, Y, the ingredients E(Y | x, z) = gxz and E(Z | x) = hx are estimated directly from the 2 x 2 x 2 table of counts n1, ..., n8 (table omitted) and plugged into the mediation formulas.

44 RAMIFICATION OF THE MEDIATION FORMULA
DE should be averaged over mediator levels; IE should NOT be averaged over exposure levels.
TE - DE need not equal IE:
TE - DE = the proportion for whom mediation is necessary;
IE = the proportion for whom mediation is sufficient.
TE - DE informs interventions on indirect pathways; IE informs interventions on direct pathways.

45 TRANSPORTABILITY -- WHEN CAN WE EXTRAPOLATE EXPERIMENTAL FINDINGS TO DIFFERENT POPULATIONS?
Experimental study in LA (X → Y, with Z = age): the causal effect of X on Y is measured.
Observational study in NYC: only nonexperimental data are measured.
Problem: we find that the LA population is younger. What can we say about the causal effect in NYC?
Intuition: age-specific effects carry over; reweight them by the target population's age distribution.

46 TRANSPORT FORMULAS DEPEND ON THE STORY
Three causal diagrams over X, Y, Z (figure omitted) that differ only in the role of Z:
a) Z represents age
b) Z represents language skill
c) Z represents a bio-marker

47 TRANSPORTABILITY (Pearl and Bareinboim, 2010)
Definition 1 (Transportability): Given two populations, denoted Π and Π*, characterized by models M = <F, V, U> and M* = <F, V, U + S>, respectively, a causal relation R is said to be transportable from Π to Π* if
1. R(Π) is estimable from the set I of interventional studies on Π, and
2. R(Π*) is identified from I, P*, G, and G + S.
S = external factors responsible for the difference M ≠ M*.

48 TRANSPORT FORMULAS DEPEND ON THE STORY
The same three diagrams, now with a selection node S marking where the populations differ (figure omitted):
a) Z represents age
b) Z represents language skill
c) Z represents a bio-marker
Which of them licenses transport of the causal effect, and with what formula?
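For model (a), where only the age distribution differs between the two populations (S points into Z), the transport formula takes the familiar re-calibration form below; this is a sketch of the standard result in Pearl and Bareinboim (2010), and the other two models call for different formulas.

```latex
% Transport formula when only the Z (age) distribution differs across populations:
% re-weight the z-specific causal effects by the target population's P*(z).
\[
  P^{*}(y \mid do(x)) \;=\; \sum_{z} P(y \mid do(x), z)\, P^{*}(z)
\]
```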

49 WHICH MODEL LICENSES THE TRANSPORT OF THE CAUSAL EFFECT?
Six candidate diagrams, (a) through (f), each placing the selection node S differently relative to X, Y, Z, and W (figure omitted).

50 DETERMINE IF THE CAUSAL EFFECT IS TRANSPORTABLE
A larger diagram over X, W, Z, Y, V, T with a selection node S (figure omitted). What measurements need to be taken in the study and in the target population? The transport formula for this model is shown in the figure.

51 CONCLUSIONS: I TOLD YOU CAUSALITY IS SIMPLE
Formal basis for causal and counterfactual inference (complete)
Unification of the graphical, potential-outcome, and structural-equation approaches
Friendly and formal solutions to century-old problems and confusions
No other method can do better (theorem)

52 Thank you for agreeing with everything I said.

