1
THE MATHEMATICS OF CAUSE AND EFFECT: With Reflections on Machine Learning Judea Pearl Departments of Computer Science and Statistics UCLA
2
1.The causal revolution – from statistics to counterfactuals 2.The fundamental laws of causal inference 3.From counterfactuals to practical victories a)policy evaluation b)attribution c)mediation d)generalizability – external validity e)missing data OUTLINE
3
TRADITIONAL STATISTICAL INFERENCE PARADIGM
Data → Joint Distribution P → Inference → Q(P) (aspects of P)
e.g., infer whether customers who bought product A would also buy product B: Q = P(B | A)
4
FROM STATISTICAL TO CAUSAL ANALYSIS: 1. THE DIFFERENCES
Data → Joint Distribution P → (change) → Joint Distribution P′ → Inference → Q(P′) (aspects of P′)
How does P change to P′? A new oracle is needed.
e.g., estimate P′(cancer) if we ban smoking.
5
FROM STATISTICAL TO CAUSAL ANALYSIS: 1. THE DIFFERENCES
Data → Joint Distribution P → (change) → Joint Distribution P′ → Inference → Q(P′) (aspects of P′)
e.g., estimate the probability that a customer who bought A would buy B if we were to double the price.
6
THE STRUCTURAL MODEL PARADIGM
Data → Data-Generating Model M → Inference → Q(M) (aspects of M)
M – the invariant strategy (mechanism, recipe, law, protocol) by which Nature assigns values to the variables in the analysis; M induces the joint distribution.
"A painful de-crowning of a beloved oracle!"
7
THE CAUSAL HIERARCHY – A SYNTACTIC DISTINCTION
What kind of questions should the oracle answer?
1. Observational questions: "What if we see A?" (What is?) – P(y | A)
2. Action questions: "What if we do A?" (What if?) – P(y | do(A))
3. Counterfactual questions: "What if we had done things differently?" (Why?) – P(y_{A′} | A)
Options: "With what probability?"
9
STRUCTURAL CAUSAL MODELS: THE WORLD AS A COLLECTION OF SPRINGS
Definition: A structural causal model is a 4-tuple M = ⟨V, U, F, P(u)⟩, where
V = {V_1, ..., V_n} are endogenous variables,
U = {U_1, ..., U_m} are background (exogenous) variables,
F = {f_1, ..., f_n} are functions determining V: v_i = f_i(v, u),
P(u) is a distribution over U.
P(u) and F induce a distribution P(v) over the observable variables, e.g., y = f_Y(x, u_Y).
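The definition above can be made concrete in a few lines. This is a minimal sketch, not from the slides: a two-variable SCM with invented mechanisms f_X, f_Y and an invented P(u) of two independent fair coins.

```python
import random

# A two-variable SCM sketch: V = {X, Y}, U = {U1, U2}, with made-up mechanisms.
def f_X(u1):
    return u1                     # X is determined by its background cause alone

def f_Y(x, u2):
    return x ^ u2                 # Y is determined by X and its own background noise

def sample_P_u():
    """P(u): here, two independent fair coins (an illustrative choice)."""
    return int(random.random() < 0.5), int(random.random() < 0.5)

def sample_P_v():
    """P(u) and F together induce the observable distribution P(v)."""
    u1, u2 = sample_P_u()
    x = f_X(u1)
    y = f_Y(x, u2)
    return x, y
```

Drawing repeatedly from `sample_P_v` realizes exactly the statement on the slide: the pair (P(u), F) fixes P(v); nothing else about the world needs to be specified.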
10
COUNTERFACTUALS ARE EMBARRASSINGLY SIMPLE
Definition: The sentence "Y would be y (in situation u), had X been x," denoted Y_x(u) = y, means: the solution for Y in a mutilated model M_x (i.e., the model with the equation for X replaced by X = x), with input U = u, is equal to y.
The Fundamental Equation of Counterfactuals: Y_x(u) = Y_{M_x}(u)
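The "mutilated model" recipe is short enough to execute. A minimal sketch (the solver, the chain model M, and the helper name `counterfactual_Y` are all illustrative assumptions, not from the slides):

```python
# Each variable maps to its structural function f(v, u); dict order is the
# evaluation order (sufficient for this recursive toy model).
def solve(model, u):
    v = {}
    for name, f in model.items():
        v[name] = f(v, u)
    return v

# An illustrative chain: X := U, Y := X.
M = {
    "X": lambda v, u: u,
    "Y": lambda v, u: v["X"],
}

def counterfactual_Y(model, u, x):
    """Y_x(u): replace the equation for X by the constant X = x, then solve
    the mutilated model M_x with input U = u and read off Y."""
    M_x = dict(model)
    M_x["X"] = lambda v, _u: x
    return solve(M_x, u)["Y"]
```

With u = 1 the factual Y is 1, but `counterfactual_Y(M, 1, 0)` returns 0: "had X been 0, Y would have been 0" – exactly the semantics Y_x(u) = Y_{M_x}(u).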
11
THE TWO FUNDAMENTAL LAWS OF CAUSAL INFERENCE
1. The Law of Counterfactuals: Y_x(u) = Y_{M_x}(u) (M generates and evaluates all counterfactuals.)
2. The Law of Conditional Independence (d-separation): separation in the model ⇒ independence in the distribution.
12
THE LAW OF CONDITIONAL INDEPENDENCE
Each function summarizes millions of micro processes.
[Graph: C (Climate) → S (Sprinkler), C → R (Rain); S → W (Wetness), R → W; each variable has its own background variable U_1, ..., U_4]
13
THE LAW OF CONDITIONAL INDEPENDENCE
Each function summarizes millions of micro processes. Still, if the U's are independent, the observed distribution P(C, R, S, W) must satisfy certain constraints that are (1) independent of the f's and of P(U), and (2) readable from the structure of the graph.
[Graph: C (Climate) → S (Sprinkler), C → R (Rain); S → W (Wetness), R → W, with background variables U_1, ..., U_4; e.g., S ⊥ R | C]
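The claim can be checked by brute force: simulate the sprinkler world with any mechanisms you like, and the graph-implied constraint S ⊥ R | C holds regardless. A sketch (all numeric probabilities are invented for illustration):

```python
import random

# Simulate the sprinkler world: C drives both S and R; W responds to S and R.
random.seed(3)
samples = []
for _ in range(100_000):
    c = random.random() < 0.5                     # climate (rainy season?)
    s = random.random() < (0.1 if c else 0.8)     # sprinkler: f_S(C, U2)
    r = random.random() < (0.7 if c else 0.1)     # rain:      f_R(C, U3)
    w = s or r                                    # wetness:   f_W(S, R, U4)
    samples.append((c, s, r, w))

def prob(event, given):
    pool = [t for t in samples if given(t)]
    return sum(event(t) for t in pool) / len(pool)

# The graph implies S _||_ R | C, independent of the f's and P(U):
lhs = prob(lambda t: t[1] and t[2], lambda t: t[0])   # P(S, R | C=1)
rhs = prob(lambda t: t[1], lambda t: t[0]) * prob(lambda t: t[2], lambda t: t[0])
```

Changing the four conditional probabilities changes every number in the simulation, yet `lhs` stays (approximately) equal to `rhs` – the constraint depends only on the missing arrow between S and R.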
14
D-SEPARATION: NATURE'S LANGUAGE FOR COMMUNICATING ITS STRUCTURE
[Graph: C (Climate) → S (Sprinkler), C → R (Rain); S → W (Wetness), R → W]
Every missing arrow advertises an independency, conditional on a separating set.
Applications:
1. Model testing
2. Structure learning
3. Reducing "what if I do" questions to symbolic calculus
4. Reducing scientific questions to symbolic calculus
15
SEEING VS. DOING Effect of turning the sprinkler ON
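The seeing/doing contrast can be made numeric in the same sprinkler world. A sketch (the world and all its probabilities are invented for illustration): observing the sprinkler on is evidence about the season and changes the probability of rain, while forcing it on is not.

```python
import random

def draw(do_s=None):
    """One sample from a toy sprinkler world; do_s, if given, overrides
    the sprinkler's own mechanism (an intervention)."""
    c = random.random() < 0.5                        # rainy season?
    if do_s is None:
        s = random.random() < (0.1 if c else 0.8)    # sprinkler reacts to climate
    else:
        s = do_s                                     # do(S = do_s)
    r = random.random() < (0.7 if c else 0.1)        # rain depends on climate only
    w = s or r                                       # wet if sprinkler or rain
    return c, s, r, w

random.seed(1)
N = 50_000
see = [draw() for _ in range(N)]
do = [draw(do_s=True) for _ in range(N)]

# Seeing: P(R=1 | S=1). An observed "on" suggests a dry season, so rain is rarer.
on = [row for row in see if row[1]]
p_r_seeing = sum(row[2] for row in on) / len(on)

# Doing: P(R=1 | do(S=1)). Forcing the sprinkler says nothing about the season,
# so rain keeps its marginal probability.
p_r_doing = sum(row[2] for row in do) / N
```

In this world `p_r_seeing` lands near 0.17 while `p_r_doing` lands near 0.40: conditioning and intervening are different operations on the same model.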
16
THE LOGIC OF CAUSAL ANALYSIS
- Causal model (M_A), built from causal assumptions A
- A* – logical implications of A
- Q – queries of interest; Q(P) – identified estimands (causal inference)
- T(M_A) – testable implications
- Data (D) yield Q̂ – estimates of Q(P) – and goodness-of-fit / model testing (statistical inference)
- Output: provisional claims
17
THE MACHINERY OF CAUSAL CALCULUS
Rule 1 (Ignoring observations): P(y | do(x), z, w) = P(y | do(x), w)
  if (Y ⊥ Z | X, W) in G_X̄ (the graph with arrows into X removed)
Rule 2 (Action/observation exchange): P(y | do(x), do(z), w) = P(y | do(x), z, w)
  if (Y ⊥ Z | X, W) in G_X̄Z̲ (arrows into X and out of Z removed)
Rule 3 (Ignoring actions): P(y | do(x), do(z), w) = P(y | do(x), w)
  if (Y ⊥ Z | X, W) in G_X̄Z̄(W), where Z(W) is the set of Z-nodes that are not ancestors of any W-node in G_X̄
Completeness Theorem (Shpitser & Pearl, 2006)
18
DERIVATION IN CAUSAL CALCULUS
Smoking → Tar → Cancer, with an unobserved Genotype confounding Smoking and Cancer.
Alternating the probability axioms with Rules 2 and 3 reduces P(c | do(s)) to an expression free of do-operators – the front-door formula:
P(c | do(s)) = Σ_t P(t | s) Σ_s′ P(c | t, s′) P(s′)
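The front-door formula P(c | do(s)) = Σ_t P(t | s) Σ_s′ P(c | t, s′) P(s′) can be estimated from purely observational data even though the genotype is hidden. A simulation sketch (the world and every numeric parameter below are invented for illustration):

```python
import random

# Hidden genotype G confounds Smoking S and Cancer C; S affects C only via Tar T.
random.seed(2)
data = []
for _ in range(30_000):
    g = random.random() < 0.5
    s = random.random() < (0.8 if g else 0.2)
    t = random.random() < (0.9 if s else 0.1)
    c = random.random() < (0.1 + 0.5 * t + 0.3 * g)
    data.append((int(s), int(t), int(c)))           # G stays unobserved

def P(event, given=lambda row: True):
    rows = [row for row in data if given(row)]
    return sum(event(row) for row in rows) / len(rows)

def front_door(s_val):
    """P(C=1 | do(S=s)) = sum_t P(t|s) * sum_s' P(c|t,s') * P(s')."""
    total = 0.0
    for t_val in (0, 1):
        p_t_given_s = P(lambda r: r[1] == t_val, lambda r: r[0] == s_val)
        adjust = sum(
            P(lambda r: r[2] == 1, lambda r: r[1] == t_val and r[0] == sp)
            * P(lambda r: r[0] == sp)
            for sp in (0, 1)
        )
        total += p_t_given_s * adjust
    return total
```

In this world the true interventional probabilities are 0.70 for do(S=1) and 0.30 for do(S=0), and the front-door estimate recovers them from (S, T, C) alone; the naive conditional P(c | s) would be biased upward by the hidden genotype.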
19
EFFECT OF WARM-UP ON INJURY (After Shrier & Platt, 2008) No, no!
20
TRANSPORTABILITY OF KNOWLEDGE ACROSS DOMAINS (with E. Bareinboim)
1. A theory of causal transportability: When can causal relations learned from experiments be transferred to a different environment in which no experiment can be conducted?
2. A theory of statistical transportability: When can statistical information learned in one domain be transferred to a different domain in which (a) only a subset of variables can be observed, or (b) only a few samples are available?
21
EXTERNAL VALIDITY (how transportability is seen in other sciences)
Extrapolation across studies requires "some understanding of the reasons for the differences." (Cox, 1958)
"'External validity' asks the question of generalizability: To what population, settings, treatment variables, and measurement variables can this effect be generalized?" (Shadish, Cook and Campbell, 2002)
"An experiment is said to have 'external validity' if the distribution of outcomes realized by a treatment group is the same as the distribution of outcomes that would be realized in an actual program." (Manski, 2007)
"A threat to external validity is an explanation of how you might be wrong in making a generalization." (Trochim, 2006)
22
MOVING FROM THE LAB TO THE REAL WORLD...
[Diagrams: lab settings H1, H2 and the real world, each a model over X, Y, Z, W]
H1: everything is assumed to be the same – trivially transportable!
H2: everything is assumed to be different – not transportable!
23
MOTIVATION: WHAT CAN EXPERIMENTS IN LA TELL ABOUT NYC?
Experimental study in LA – measured: X (intervention), Y (outcome), Z (age)
Observational study in NYC – measured: X (observation), Y (outcome), Z (age)
Needed: the causal effect of X on Y in NYC.
Transport formula (calibration): P*(y | do(x)) = Σ_z P(y | do(x), z) P*(z)
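The calibration step is just a reweighting: take the age-specific effects measured in LA and average them under NYC's age distribution. A sketch with invented numbers (the two dictionaries are illustrative, not data from any study):

```python
def transport(p_y_do_x_given_z, p_star_z):
    """P*(y | do(x)) = sum_z P(y | do(x), z) * P*(z):
    z-specific effects from the source experiment, reweighted by the
    target population's distribution of Z."""
    return sum(p_y_do_x_given_z[z] * p_star_z[z] for z in p_star_z)

# Age-specific causal effects measured in the LA experiment (made-up numbers)
p_y_la = {"young": 0.2, "old": 0.6}
# Age distribution observed in NYC (made-up numbers)
p_z_nyc = {"young": 0.3, "old": 0.7}

effect_nyc = transport(p_y_la, p_z_nyc)   # 0.2*0.3 + 0.6*0.7 = 0.48
```

Note that the answer (0.48) differs from what LA's own age mix would give; the formula is valid only under the story in which age is the sole source of difference, which is exactly what the next slides examine.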
24
TRANSPORT FORMULAS DEPEND ON THE STORY
(a) Z represents age
(b) Z represents language skill
[Diagrams (a) and (b): X → Y with a selection node S pointing into Z; S marks the factors producing differences between populations. Which transport formula applies?]
25
TRANSPORT FORMULAS DEPEND ON THE STORY
(a) Z represents age
(b) Z represents language skill
(c) Z represents a bio-marker
[Diagrams (a)–(c): X → Y with a selection node S pointing into Z; in (c), Z is affected by X. Which transport formula applies?]
26
GOAL: ALGORITHM TO DETERMINE IF AN EFFECT IS TRANSPORTABLE
INPUT: an annotated causal graph (with selection nodes S marking the factors creating differences)
OUTPUT:
1. Transportable or not?
2. Measurements to be taken in the experimental study
3. Measurements to be taken in the target population
4. A transport formula
[Example graph over X, Y, Z, V, W, U, T with selection nodes S]
27
TRANSPORTABILITY REDUCED TO CALCULUS
Theorem: A causal relation R is transportable from ∏ to ∏* if and only if it is reducible, using the rules of do-calculus, to an expression in which S is separated from do(·).
[Example graph: X → Y with Z, W and a selection node S]
28
RESULT: ALGORITHM TO DETERMINE IF AN EFFECT IS TRANSPORTABLE
INPUT: an annotated causal graph (with selection nodes S marking the factors creating differences)
OUTPUT:
1. Transportable or not?
2. Measurements to be taken in the experimental study
3. Measurements to be taken in the target population
4. A transport formula
5. Completeness (Bareinboim, 2012)
[Example graph over X, Y, Z, V, W, U, T with selection nodes S]
29
WHICH MODEL LICENSES THE TRANSPORT OF THE CAUSAL EFFECT X → Y?
[Six models (a)–(f) over X, Y with Z, W; a selection node S marks the external factors creating disparities. Transport is licensed in some models (Yes) and blocked in others (No), depending on where S enters the graph.]
30
STATISTICAL TRANSPORTABILITY (Transfer Learning)
Why should we transport statistical information? i.e., why not re-learn things from scratch?
1. Measurements are costly. Limit measurements to a subset V* of variables, called the "scope".
2. Samples are scarce. Pooling samples from diverse populations will improve precision, if differences can be filtered out.
31
STATISTICAL TRANSPORTABILITY
Definition (Statistical Transportability): A statistical relation R(P) is said to be transportable from ∏ to ∏* over V* if R(P*) is identified from P, P*(V*), and D, where P*(V*) is the marginal distribution of P* over the subset of variables V*.
Example: R = P*(y | x) is transportable over V* = {X, Z}, i.e., R is estimable without re-measuring Y.
Transfer learning: if few samples (N_2) are available from ∏* and many samples (N_1) from ∏, then estimating R = P*(y | x) through the transported expression achieves a much higher precision.
[Diagrams: a model over X, Z, Y with a selection node S pointing into Z]
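In the S → Z scenario of this slide, the transported estimator has the form P*(y | x) = Σ_z P(y | x, z) P*(z | x): Y's behaviour given (x, z) is borrowed from the large source study, and only the cheap (X, Z) margin is measured in the target. A sketch with invented numbers:

```python
def transported_p_y_given_x(p_y_given_xz_source, p_z_given_x_target):
    """P*(y|x) = sum_z P(y|x,z) * P*(z|x): conditional behaviour of Y comes
    from the source (many samples); the (X,Z) margin comes from the target."""
    return sum(p_y_given_xz_source[z] * p_z_given_x_target[z]
               for z in p_z_given_x_target)

# P(y | x, z) estimated in the source study for a fixed x (made-up numbers)
p_y_src = {"z0": 0.1, "z1": 0.5}
# P*(z | x) estimated in the target, over the scope V* = {X, Z} (made-up)
p_z_tgt = {"z0": 0.4, "z1": 0.6}

r_est = transported_p_y_given_x(p_y_src, p_z_tgt)  # 0.1*0.4 + 0.5*0.6 = 0.34
```

The precision gain is exactly the slide's point: the noisy part, P(y | x, z), is estimated from the N_1 source samples rather than from the scarce N_2 target samples.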
32
META-ANALYSIS OR MULTI-SOURCE LEARNING
[Nine source studies (a)–(i), each a variant of the model over X, Z, W, Y, some carrying selection nodes S]
Target population: R = P*(y | do(x))
33
CAN WE GET A BIAS-FREE ESTIMATE OF THE TARGET QUANTITY?
Target population: R = P*(y | do(x))
Is R identifiable from (d) and (h)?
R(∏*) is identifiable from studies (d) and (h).
R(∏*) is not identifiable from studies (d) and (i).
[Diagrams (a), (d), (h), (i): variants of the model over X, Z, W, Y with selection nodes S]
34
FROM META-ANALYSIS TO META-SYNTHESIS
The problem: how to combine the results of several experimental and observational studies, each conducted on a different population and under a different set of conditions, so as to construct an aggregate measure of effect size that is "better" than any one study in isolation.
35
META-SYNTHESIS REDUCED TO CALCULUS
Theorem: Let {∏_1, ∏_2, ..., ∏_K} be a set of studies and {D_1, D_2, ..., D_K} their selection diagrams (relative to ∏*). A relation R(∏*) is "meta-estimable" if it can be decomposed into terms Q_k such that each Q_k is transportable from some D_k.
Open problem: systematic decomposition.
36
BIAS VS. PRECISION IN META-SYNTHESIS
Principle 1: Calibrate estimands before pooling (to minimize bias).
Principle 2: Decompose into sub-relations before calibrating (to improve precision).
[Diagrams (a), (d), (g), (h), (i) with pooling and calibration arrows]
37
BIAS VS. PRECISION IN META-SYNTHESIS
[Same diagrams (a), (d), (g), (h), (i), now showing composition of sub-relations followed by pooling]
38
MISSING DATA: A SEEMINGLY STATISTICAL PROBLEM (Mohan & Pearl, 2012)
- Pervasive in every experimental science.
- Huge literature, powerful software industry, deeply entrenched culture.
- Current practices are based on a statistical characterization (Rubin, 1976) of a problem that is inherently causal.
- Consequence: like alchemy before Boyle and Dalton, the field craves (1) theoretical guidance and (2) performance guarantees.
39
ESTIMATE P(X,Y,Z)

Sample # | X* Y* Z* | Rx Ry Rz
    1    |  1  0  0 |  0  0  0
    2    |  1  0  1 |  0  0  0
    3    |  1  m  m |  0  1  1
    4    |  0  1  m |  0  0  1
    5    |  m  1  m |  1  0  1
    6    |  m  0  1 |  1  0  0
    7    |  m  m  0 |  1  1  0
    8    |  0  1  m |  0  0  1
    9    |  0  0  m |  0  0  1
   10    |  1  0  m |  0  0  1
   11    |  1  0  1 |  0  0  0
40
WHAT CAN CAUSAL THEORY DO FOR MISSING DATA?
Q-1. What should the world be like for a given statistical procedure to produce the expected result?
Q-2. Can we tell from the postulated world whether any method can produce a bias-free result? How?
Q-3. Can we tell from data if the world does not work as postulated?
None of these questions can be answered by a statistical characterization of the problem. All can be answered using causal models.
41
MISSING DATA: TWO PERSPECTIVES
"Causal inference is a missing data problem." (Rubin, 2012)
"Missing data is a causal inference problem." (Pearl, 2012)
Why is missingness a causal problem?
- Which mechanism causes missingness makes a difference in whether, and how, we can recover information from the data.
- Mechanisms require causal language to be properly described – statistics is not sufficient.
- Different causal assumptions lead to different routines for recovering information from data, even when the assumptions are indistinguishable by any statistical means.
42
ESTIMATE P(X,Y,Z)
[Same data table as above, now annotated with a missingness graph: X → X* ← Rx. The proxy X* equals X when Rx = 0 and reads "m" (missing) when Rx = 1.]
43
NAIVE ESTIMATE OF P(X,Y,Z)
Line (listwise) deletion keeps only the complete cases:

Row # | X Y Z | Rx Ry Rz
   1  | 1 0 0 |  0  0  0
   2  | 1 0 1 |  0  0  0
  11  | 1 0 1 |  0  0  0

The line-deletion estimate is generally biased; it is justified only under MCAR.
[Missingness graph for MCAR: X, Y, Z with Rx, Ry, Rz disconnected from them]
44
SMART ESTIMATE OF P(X,Y,Z)
[Missingness graph over X, Y, Z and Rx, Ry, Rz under which P(X,Y,Z) is recoverable despite the missingness; same data table as above]
45
SMART ESTIMATE OF P(X,Y,Z)

Sample # | X* Y* Z*
    1    |  1  0  0
    2    |  1  0  1
    3    |  1  m  m
    4    |  0  1  m
    5    |  m  1  m
    6    |  m  0  1
    7    |  m  m  0
    8    |  0  1  m
    9    |  0  0  m
   10    |  1  0  m
   11    |  1  0  1
46
SMART ESTIMATE OF P(X,Y,Z)
Step 1 – compute P(Y | Ry = 0), using every row where Y is observed:

Row # | Y*
   1  | 0
   2  | 0
   4  | 1
   5  | 1
   6  | 0
   8  | 1
   9  | 0
  10  | 0
  11  | 0
47
SMART ESTIMATE OF P(X,Y,Z)
Step 2 – compute P(X | Y, Rx = 0, Ry = 0), using every row where both X and Y are observed:

Row # | X* Y*
   1  |  1  0
   2  |  1  0
   4  |  0  1
   8  |  0  1
   9  |  0  0
  10  |  1  0
  11  |  1  0
48
SMART ESTIMATE OF P(X,Y,Z)
Step 3 – compute P(Z | X, Y, Rx = 0, Ry = 0, Rz = 0), using the complete cases:

Row # | X* Y* Z*
   1  |  1  0  0
   2  |  1  0  1
  11  |  1  0  1

Combining the three steps: P(X,Y,Z) = P(Y | Ry = 0) · P(X | Y, Rx = 0, Ry = 0) · P(Z | X, Y, Rx = 0, Ry = 0, Rz = 0).
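The three steps can be run directly on the slide's 11-sample table. A sketch in Python (the helper names are illustrative; "m" marks a missing value):

```python
# The 11 samples from the slide as (X*, Y*, Z*) strings.
rows = [
    ("1", "0", "0"), ("1", "0", "1"), ("1", "m", "m"), ("0", "1", "m"),
    ("m", "1", "m"), ("m", "0", "1"), ("m", "m", "0"), ("0", "1", "m"),
    ("0", "0", "m"), ("1", "0", "m"), ("1", "0", "1"),
]

def cond_prob(idx, value, given):
    """Empirical P(V_idx = value | given), using every row in which V_idx
    and all conditioning variables are observed (their R's are 0)."""
    pool = [r for r in rows
            if r[idx] != "m" and all(r[i] == v for i, v in given)]
    return sum(r[idx] == value for r in pool) / len(pool)

def smart_estimate(x, y, z):
    """P(X,Y,Z) = P(Y|Ry=0) * P(X|Y,Rx=0,Ry=0) * P(Z|X,Y,Rx=0,Ry=0,Rz=0)."""
    p_y = cond_prob(1, y, [])                 # 9 rows with Y observed
    p_x = cond_prob(0, x, [(1, y)])           # rows with X and Y observed
    p_z = cond_prob(2, z, [(0, x), (1, y)])   # complete cases matching (x, y)
    return p_y * p_x * p_z
```

For example, P(X=1, Y=0, Z=0) comes out as (6/9) · (4/5) · (1/3): each factor uses every row in which the variables it involves are observed, instead of throwing away all incomplete rows.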
49
INESTIMABLE P(X,Y,Z)
[Missingness graph over X, Y, Z and Rx, Ry, Rz under which P(X,Y,Z) is not recoverable]
50
RECOVERABILITY FROM MISSING DATA
Definition: Given a missingness model M, a probabilistic quantity Q is said to be recoverable if there exists an algorithm that produces a consistent estimate of Q for every dataset generated by M. (That is, in the limit of large samples, Q is estimable as if no data were missing.)
Theorem: Q is recoverable iff it is decomposable into terms of the form Q_j = P(S_j | T_j), such that:
1. For each variable V that is in T_j, R_V is also in T_j.
2. For each variable V that is in S_j, R_V is either in T_j or in S_j.
e.g., P(X,Y,Z) = P(Z | X, Y, R_x = 0, R_y = 0, R_z = 0) · P(X | Y, R_x = 0, R_y = 0) · P(Y | R_y = 0)
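The theorem's two conditions are purely syntactic, so a decomposition can be checked mechanically. A sketch (the checker and its naming convention are illustrative):

```python
def recoverable_term(S, T):
    """Check the theorem's two conditions on a single term P(S | T).
    Variables are names like "X"; the missingness indicator of "X" is "R_X"."""
    substantive = lambda v: not v.startswith("R_")
    c1 = all("R_" + v in T for v in T if substantive(v))
    c2 = all("R_" + v in T or "R_" + v in S for v in S if substantive(v))
    return c1 and c2

# The decomposition used on the "smart estimate" slides:
# P(X,Y,Z) = P(Z | X,Y,R_X,R_Y,R_Z) * P(X | Y,R_X,R_Y) * P(Y | R_Y)
terms = [
    ({"Z"}, {"X", "Y", "R_X", "R_Y", "R_Z"}),
    ({"X"}, {"Y", "R_X", "R_Y"}),
    ({"Y"}, {"R_Y"}),
]
all_ok = all(recoverable_term(S, T) for S, T in terms)   # every term passes
```

By contrast, a term like P(X | Y) fails condition 1 (R_Y is absent), which is why naive conditioning on partially observed variables is not licensed.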
51
AN IMPOSSIBILITY THEOREM FOR MISSING DATA
Two statistically indistinguishable models, yet P(X,Y) is recoverable in (a) and not in (b).
No universal algorithm exists that decides recoverability (or guarantees unbiased results) without looking at the model.
[(a): X → Y with X → Rx – e.g., treatment (X) missing because of the accident/injury story; (b): X → Y with Y → Rx – e.g., education (X) missing for reasons driven by Y]
52
NO ESTIMATION WITHOUT CAUSATION
Two statistically indistinguishable models; P(X) is recoverable in both, but through two different methods.
No universal algorithm exists that produces an unbiased estimate whenever such an estimate exists.
[(a) and (b): models over X, Y, Rx that differ only in which variable causes Rx]
53
CONCLUSIONS
- Counterfactuals, the building blocks of scientific thought, are encoded meaningfully and conveniently in structural models.
- Identifiability and testability are computable tasks.
- The counterfactual–graphical symbiosis has led to major advances in the empirical sciences, including policy evaluation, mediation analysis, generalizability, credit/blame determination, missing data, and heterogeneity.
THINK NATURE, NOT DATA
54
Thank you