
THE MATHEMATICS OF CAUSAL MODELING Judea Pearl, Department of Computer Science, UCLA

OUTLINE
- Modeling: Statistical vs. Causal
- Causal Models and Identifiability
- Inference to three types of claims:
  - Effects of potential interventions
  - Claims about attribution (responsibility)
  - Claims about direct and indirect effects
- Robustness of Causal Claims

TRADITIONAL STATISTICAL INFERENCE PARADIGM
[Diagram: Data → Inference → Joint Distribution P → Q(P) (aspects of P)]
e.g., infer whether customers who bought product A would also buy product B: Q = P(B | A).

THE CAUSAL INFERENCE PARADIGM
[Diagram: Data-Generating Model M → Joint Distribution → Data → Inference → Q(M) (aspects of M)]
Some Q(M) cannot be inferred from P alone. e.g., infer whether customers who bought product A would still buy A if we doubled the price.
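To make the contrast concrete, here is a minimal Python sketch (the purchase data and probabilities are invented for illustration): the associational query Q = P(B | A) is computable directly from samples of the joint distribution, while the interventional query about doubling the price is a property of the generating model M and cannot be answered from the same samples without causal assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulated purchase records (hypothetical data-generating process).
buy_A = rng.random(n) < 0.3
buy_B = np.where(buy_A, rng.random(n) < 0.6, rng.random(n) < 0.2)

# Associational query: Q = P(B | A), answerable from the joint distribution.
q = buy_B[buy_A].mean()
print(f"P(B | A) ~= {q:.3f}")

# The interventional query -- "would they still buy A if we doubled the
# price?" -- is a property of the generating model M, not of P alone; it
# cannot be computed from these samples without causal assumptions.
```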

FROM STATISTICAL TO CAUSAL ANALYSIS: 1. THE DIFFERENCES
Probability and statistics deal with static relations: data → joint distribution → inferences from passive observations.
Causal analysis deals with changes (dynamics), i.e., what remains invariant when P changes; P itself does not tell us how it ought to change (e.g., curing symptoms vs. curing diseases; analogy: mechanical deformation).
Causal model + data + assumptions (and experiments) → effects of interventions, causes of effects, explanations.

FROM STATISTICAL TO CAUSAL ANALYSIS: 1. THE DIFFERENCES (CONT)
CAUSAL concepts: spurious correlation, randomization, confounding/effect, instrument, holding constant, explanatory variables.
STATISTICAL concepts: regression, association/independence, "controlling for"/conditioning, odds and risk ratios, collapsibility.
Causal and statistical concepts do not mix. "No causes in, no causes out" (Cartwright, 1989): causal assumptions + statistical assumptions + data → causal conclusions.
Causal assumptions cannot be expressed in the mathematical language of standard statistics. Non-standard mathematics is required: structural equation models (Wright, 1920; Simon, 1960) and counterfactuals (Neyman-Rubin, Lewis).

WHAT'S IN A CAUSAL MODEL? Oracle that assigns truth value to causal sentences: Action sentences: B if we do A. Counterfactuals: B would be different if A were true. Explanation: B occurred because of A. Optional: with what probability?

FAMILIAR CAUSAL MODEL: ORACLE FOR MANIPULATION
[Diagram: a logic circuit with internal variables X, Y, Z, an input, and an output]
Here is a causal model we all remember from high school: a circuit diagram. Four points are worth noticing in this example: (1) It qualifies as a causal model because it contains the information needed to confirm or refute every action, counterfactual, and explanatory sentence concerning the operation of the circuit. For example, anyone can figure out what the output would be if we set Y to zero, if we changed this OR gate to a NOR gate, or if we performed any of the billions of combinations of such actions. (2) A purely logical (Boolean input-output) specification is insufficient for answering such queries. (3) These actions were not specified in advance; they have no special names and do not show up in the diagram. In fact, the great majority of the action queries this circuit can answer were never considered by its designer. (4) How, then, does the circuit encode this extra information? Through two encoding tricks: (4.1) the symbolic units correspond to stable physical mechanisms (the logical gates), and (4.2) each variable has precisely one mechanism that determines its value.

CAUSAL MODELS AND CAUSAL DIAGRAMS
Definition: A causal model is a 3-tuple M = ⟨V, U, F⟩ with a mutilation operator do(x): M → Mx, where:
(i) V = {V1, …, Vn} are endogenous variables,
(ii) U = {U1, …, Um} are background variables,
(iii) F is a set of n functions, fi: (V \ Vi) ∪ U → Vi, with vi = fi(pai, ui), PAi ⊆ V \ Vi, Ui ⊆ U.
[Diagram: example causal diagram over U1, U2, I, W, Q, P, with PAQ the parents of Q]

CAUSAL MODELS AND MUTILATION
Definition (continued): (iv) Mx = ⟨U, V, Fx⟩, X ⊆ V, x a realization of X, where Fx = {fi : Vi ∉ X} ∪ {X = x}. (Replace all functions fi corresponding to X with the constant functions X = x.)

CAUSAL MODELS AND MUTILATION (CONT)
[Diagrams: the example model over U1, U2, I, W, Q, P and its mutilated version Mp, in which the equation for P is replaced by the constant P = p0]

PROBABILISTIC CAUSAL MODELS
Definition (Probabilistic Causal Model): a pair ⟨M, P(u)⟩, where M is a causal model and P(u) is a probability assignment to the variables in U.
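A minimal Python sketch of Definitions (i)-(iv) (the three-variable example model is invented for illustration): F holds one function per endogenous variable, and do(x) mutilates M by replacing the function of each intervened variable with a constant.

```python
# A causal model M = <V, U, F>: one function per endogenous variable,
# vi = fi(pai, ui).  This tiny acyclic example is invented for illustration.
F = {
    "X": lambda v, u: u["u1"],
    "Z": lambda v, u: v["X"] + u["u2"],
    "Y": lambda v, u: 2 * v["Z"] + u["u1"],
}
ORDER = ["X", "Z", "Y"]  # a topological order of the causal diagram

def solve(F, u):
    """Solve the recursive equations for all endogenous variables."""
    v = {}
    for name in ORDER:
        v[name] = F[name](v, u)
    return v

def do(F, x_vals):
    """Mutilation: replace fi for each Xi in x_vals by the constant Xi = xi."""
    Fx = dict(F)
    for name, val in x_vals.items():
        Fx[name] = lambda v, u, val=val: val
    return Fx

u = {"u1": 1.0, "u2": 0.5}
print(solve(F, u))                   # pre-intervention solution
print(solve(do(F, {"X": 0.0}), u))   # solution in the mutilated model M_x
```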

CAUSAL MODELS AND COUNTERFACTUALS
Definition (Potential Response): The sentence "Y would be y (in unit u), had X been x," denoted Yx(u) = y, is the solution for Y in the mutilated model Mx, with the equations for X replaced by X = x ("unit-based potential outcome").
Joint probabilities of counterfactuals: P(Yx = y, Zw = z) = Σ{u : Yx(u) = y, Zw(u) = z} P(u).
In particular: P(Yx = y) = Σ{u : Yx(u) = y} P(u).

3 STEPS TO COMPUTING COUNTERFACTUALS
[Diagram: firing-squad model; U (court order) → C (captain) → riflemen A and B → D (prisoner's death)]
Consider the counterfactual sentence S5: "If the prisoner is dead, he would still be dead if A were not to have shot," D ⇒ D¬A. The antecedent {¬A} should still be treated as an interventional surgery, but only after we fully account for the evidence given, D. This calls for three steps:
1. Abduction: interpret the past in light of the evidence.
2. Action: bend the course of history (minimally) to account for the hypothetical antecedent (¬A).
3. Prediction: project the consequences onto the future.

COMPUTING PROBABILITIES OF COUNTERFACTUALS
P(S5): The prisoner is dead; how likely is it that he would be dead had A not shot? P(D¬A | D) = ?
Suppose we are not entirely ignorant of U but can assess a degree of belief P(u). The same three steps apply to the computation of the counterfactual probability; the only difference is that we now use the evidence to update P(u) into P(u | D), and draw probabilistic instead of logical conclusions.
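The three steps can be traced in a few lines of Python for the firing-squad model (the prior P(u) on the court order is a hypothetical number): abduction updates P(u) given D, action mutilates the equation for A, and prediction solves the mutilated model.

```python
# Three-step computation of P(D_{not-A} | D) for the firing-squad model
# (C = U; A = C; B = C; D = A or B).  A sketch; the prior P(u) is invented.
P_u = {0: 0.3, 1: 0.7}  # hypothetical prior on the court order U

def solve(u, do_A=None):
    C = u
    A = C if do_A is None else do_A
    B = C
    return int(A or B)  # D

# Step 1 (abduction): update P(u) given the evidence D = 1.
evidence = {u: P_u[u] for u in P_u if solve(u) == 1}
Z = sum(evidence.values())
P_u_given_D = {u: p / Z for u, p in evidence.items()}

# Steps 2-3 (action + prediction): mutilate A := 0, project D forward.
p_cf = sum(p for u, p in P_u_given_D.items() if solve(u, do_A=0) == 1)
print(f"P(D_notA | D) = {p_cf:.2f}")  # = 1.00: B would still have shot
```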

CAUSAL INFERENCE MADE EASY (1985-2000)
Inference with nonparametric structural equations, made possible through graphical analysis.
Mathematical underpinning of counterfactuals through nonparametric structural equations.
Graphical-counterfactual symbiosis.

IDENTIFIABILITY
Definition: Let Q(M) be any quantity defined on a causal model M, and let A be a set of assumptions. Q is identifiable relative to A iff P(M1) = P(M2) ⇒ Q(M1) = Q(M2) for all M1, M2 that satisfy A.
In other words, Q can be determined uniquely from the probability distribution P(v) of the endogenous variables V together with the assumptions A.
In this talk: A = assumptions encoded in the diagram; Q1 = P(y | do(x)), the causal effect (= P(Yx = y)); Q2 = P(Yx′ = y′ | x, y), the probability of necessity; Q3 = the direct effect.

THE FUNDAMENTAL THEOREM OF CAUSAL INFERENCE
Causal Markov Theorem: Any distribution generated by a Markovian structural model M (recursive, with independent disturbances) can be factorized as
P(v1, …, vn) = Πi P(vi | pai),
where pai are the (values of the) parents of Vi in the causal diagram associated with M.
Corollary (Truncated Factorization, Manipulation Theorem): The distribution generated by an intervention do(X = x) in a Markovian model M is given by the truncated factorization
P(v1, …, vn | do(x)) = Π{i : Vi ∉ X} P(vi | pai), for all v consistent with X = x.
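A small sketch of the truncated factorization on a three-variable Markovian chain X → Z → Y (the conditional-probability tables are invented): the interventional distribution is obtained by deleting the factor for the manipulated variable and clamping its value everywhere else.

```python
import itertools

# Markovian factorization P(x, z, y) = P(x) P(z | x) P(y | z); all binary.
# The numerical tables are invented for illustration.
P_x = {0: 0.6, 1: 0.4}
P_z_given_x = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.2, (1, 1): 0.8}  # (z, x)
P_y_given_z = {(0, 0): 0.8, (1, 0): 0.2, (0, 1): 0.3, (1, 1): 0.7}  # (y, z)

def p_post_intervention(x0):
    """Truncated factorization: P(z, y | do(X=x0)) = P(z | x0) P(y | z).
    The factor P(x) is removed; x is clamped to x0 in the remaining factors."""
    return {(z, y): P_z_given_x[(z, x0)] * P_y_given_z[(y, z)]
            for z, y in itertools.product((0, 1), repeat=2)}

print(p_post_intervention(1))
# Sanity check: the post-intervention entries sum to one.
assert abs(sum(p_post_intervention(1).values()) - 1.0) < 1e-12
```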

RAMIFICATIONS OF THE FUNDAMENTAL THEOREM
[Diagram: Smoking (X) → Tar in Lungs (Z) → Cancer (Y), with unobserved U affecting both X and Y; the intervention sets X = x]
Given P(x, y, z), should we ban smoking?
Pre-intervention: P(x, z, y) = Σu P(u) P(x | u) P(z | x) P(y | z, u).
Post-intervention: P(z, y | do(x)) = Σu P(u) P(z | x) P(y | z, u).

RAMIFICATIONS OF THE FUNDAMENTAL THEOREM (CONT)
To compute P(y, z | do(x)) from observational data, we must eliminate u from the post-intervention expression. (A graphical problem.)

THE BACK-DOOR CRITERION
Graphical test of identification: P(y | do(x)) is identifiable in G if there is a set Z of variables such that Z d-separates X from Y in Gx (the subgraph of G with all arrows emanating from X removed).
[Diagrams: G and Gx over Z1, …, Z6, X, Y, with a candidate blocking set Z highlighted]
Moreover, P(y | do(x)) = Σz P(y | x, z) P(z) ("adjusting" for Z).
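A simulation sketch of back-door adjustment (the generating model, with a single confounder Z of X and Y, is invented for illustration): naive conditioning on X is biased by the back-door path X ← Z → Y, while the adjustment formula recovers the interventional contrast.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical generating model: Z -> X, Z -> Y, X -> Y; all binary.
z = rng.binomial(1, 0.5, n)
x = rng.binomial(1, np.where(z == 1, 0.8, 0.2))
y = rng.binomial(1, 0.2 + 0.3 * x + 0.4 * z)

# Naive conditioning is biased by the back-door path X <- Z -> Y.
naive = y[x == 1].mean() - y[x == 0].mean()

# Back-door adjustment: P(y | do(x)) = sum_z P(y | x, z) P(z).
def p_y_do_x(x0):
    return sum(y[(x == x0) & (z == z0)].mean() * (z == z0).mean()
               for z0 in (0, 1))

print(f"naive:    {naive:.3f}")                      # ~0.54, confounded
print(f"adjusted: {p_y_do_x(1) - p_y_do_x(0):.3f}")  # ~0.30, the causal effect
```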

RULES OF CAUSAL CALCULUS
Rule 1 (Ignoring observations): P(y | do{x}, z, w) = P(y | do{x}, w), if Z is d-separated from Y given X, W in the graph with arrows into X removed.
Rule 2 (Action/observation exchange): P(y | do{x}, do{z}, w) = P(y | do{x}, z, w), if Z is d-separated from Y given X, W in the graph with arrows into X and arrows out of Z removed.
Rule 3 (Ignoring actions): P(y | do{x}, do{z}, w) = P(y | do{x}, w), if Z is d-separated from Y given X, W in the graph with arrows into X and into Z(W) removed, where Z(W) is the set of Z-nodes that are not ancestors of any W-node once arrows into X are removed.

DERIVATION IN CAUSAL CALCULUS
[Diagram: Smoking → Tar → Cancer, with unobserved Genotype affecting Smoking and Cancer]
P(c | do{s})
= Σt P(c | do{s}, t) P(t | do{s})                  (Probability Axioms)
= Σt P(c | do{s}, do{t}) P(t | do{s})              (Rule 2)
= Σt P(c | do{s}, do{t}) P(t | s)                  (Rule 2)
= Σt P(c | do{t}) P(t | s)                         (Rule 3)
= Σs′ Σt P(c | do{t}, s′) P(s′ | do{t}) P(t | s)   (Probability Axioms)
= Σs′ Σt P(c | t, s′) P(s′ | do{t}) P(t | s)       (Rule 2)
= Σs′ Σt P(c | t, s′) P(s′) P(t | s)               (Rule 3)
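The resulting front-door estimand can be checked numerically. In this sketch the generating model is invented for illustration: the confounder U is simulated so we know the truth, but the estimator never touches it.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000

# Hypothetical model matching the diagram: U -> S, U -> C, S -> T -> C,
# with U (genotype) unobserved by the analyst.
u = rng.binomial(1, 0.5, n)
s = rng.binomial(1, 0.2 + 0.6 * u)            # smoking
t = rng.binomial(1, 0.1 + 0.8 * s)            # tar
c = rng.binomial(1, 0.1 + 0.3 * t + 0.4 * u)  # cancer

# Front-door estimand derived above:
# P(c | do{s}) = sum_t P(t | s) * sum_s' P(c | t, s') P(s')
def p_c_do_s(s0):
    total = 0.0
    for t0 in (0, 1):
        p_t = t[s == s0].mean() if t0 == 1 else 1 - t[s == s0].mean()
        inner = sum(c[(t == t0) & (s == s1)].mean() * (s == s1).mean()
                    for s1 in (0, 1))
        total += p_t * inner
    return total

print(f"P(c | do(s=1)) ~= {p_c_do_s(1):.3f}")  # ~0.57 (model truth: 0.57)
print(f"P(c | do(s=0)) ~= {p_c_do_s(0):.3f}")  # ~0.33 (model truth: 0.33)
```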

RECENT RESULTS ON IDENTIFICATION
Theorem (Tian, 2002): We can identify P(v | do{x}) (x a singleton) if and only if there is no child Z of X connected to X by a bi-directed path.
[Diagram: X with children Z1, …, Zk linked to X by bi-directed (confounding) arcs]

RECENT RESULTS ON IDENTIFICATION (CONT)
Do-calculus is complete: a complete graphical criterion is available for identifying the causal effect of any set on any other set.
References: Shpitser and Pearl, 2006 (AAAI, UAI).

OUTLINE
- Modeling: Statistical vs. Causal
- Causal models and identifiability
- Inference to three types of claims:
  - Effects of potential interventions
  - Claims about attribution (responsibility)

DETERMINING THE CAUSES OF EFFECTS (The Attribution Problem)
"Your Honor! My client (Mr. A) died BECAUSE he used that drug."
Court to decide if it is MORE PROBABLE THAN NOT that A would be alive BUT FOR the drug: P(? | A is dead, took the drug) > 0.50.

THE PROBLEM
Theoretical: What is the meaning of PN(x, y), the "probability that event y would not have occurred if it were not for event x, given that x and y did in fact occur"?
Answer: PN(x, y) = P(Yx′ = y′ | X = x, Y = y).
Practical: Under what conditions can PN(x, y) be learned from statistical data, i.e., observational, experimental, and combined?

WHAT IS INFERABLE FROM EXPERIMENTS?
Simple experiment: Q = P(Yx = y | z), Z nondescendants of X.
Compound experiment: Q = P(YX(z) = y | z).
Multi-stage experiment: etc.

CAN FREQUENCY DATA DECIDE LEGAL RESPONSIBILITY?

                 Experimental           Nonexperimental
                 do(x)     do(x′)       x         x′
Deaths (y)          16        14         2        28
Survivals (y′)     984       986       998       972
Total            1,000     1,000     1,000     1,000

Nonexperimental data: drug usage predicts longer life.
Experimental data: the drug has a negligible effect on survival.
Plaintiff: Mr. A is special; he actually died, and he used the drug by choice.
Court to decide (given both data sets): Is it more probable than not that A would be alive but for the drug?

TYPICAL THEOREMS (Tian and Pearl, 2000)
Bounds given combined nonexperimental and experimental data:
max{0, [P(y) − P(yx′)] / P(x, y)} ≤ PN ≤ min{1, [P(y′x′) − P(x′, y′)] / P(x, y)}.
Identifiability under monotonicity (combined data):
PN = [P(y | x) − P(y | x′)] / P(y | x) + [P(y | x′) − P(yx′)] / P(x, y),
a corrected excess risk ratio.
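Applied to the frequency table above, the bounds pin PN down exactly. A sketch; the only added assumption is that the nonexperimental sample splits evenly between drug users and non-users, so P(x) = P(x′) = 0.5.

```python
# Tian & Pearl (2000) bounds on the probability of necessity, evaluated on
# the frequency table above, assuming P(x) = P(x') = 0.5 (see lead-in).
P_yx  = 16 / 1000             # experimental: P(y_x), death rate under do(x)
P_yx_ = 14 / 1000             # experimental: P(y_x'), death rate under do(x')
P_x_y   = 0.5 * (2 / 1000)    # nonexperimental: P(x, y)
P_x__y_ = 0.5 * (972 / 1000)  # nonexperimental: P(x', y')
P_y = 0.5 * (2 / 1000) + 0.5 * (28 / 1000)  # observed death rate P(y)

lower = max(0.0, (P_y - P_yx_) / P_x_y)
upper = min(1.0, ((1 - P_yx_) - P_x__y_) / P_x_y)
print(f"{lower:.3f} <= PN <= {upper:.3f}")  # both bounds equal 1: PN = 1
```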

SOLUTION TO THE ATTRIBUTION PROBLEM (CONT)
WITH PROBABILITY ONE: PN = P(y′x′ | x, y) = 1.
From population data to an individual case; the combined data tell more than each study alone.

OUTLINE
- Modeling: Statistical vs. Causal
- Causal models and identifiability
- Inference to three types of claims:
  - Effects of potential interventions
  - Claims about attribution (responsibility)
  - Claims about direct and indirect effects

QUESTIONS ADDRESSED What is the semantics of direct and indirect effects? Can we estimate them from data? Experimental data?

WHY DECOMPOSE EFFECTS?
Direct (or indirect) effects may be more transportable.
Indirect effects may be prevented or controlled.
Direct (or indirect) effects may be forbidden.
[Examples: Pill → Pregnancy → Thrombosis, alongside a direct Pill → Thrombosis path; Gender → Qualification → Hiring]

TOTAL, DIRECT, AND INDIRECT EFFECTS HAVE SIMPLE SEMANTICS IN LINEAR MODELS
[Diagram: X → Z (coefficient b), Z → Y (coefficient c), X → Y (coefficient a)]
z = bx + ε1, y = ax + cz + ε2
Total effect: a + bc. Direct effect: a. Indirect effect: bc.
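These identities are easy to confirm by simulation (a sketch; the coefficient values are arbitrary): regressing y on x alone recovers the total effect a + bc, while including z recovers the direct effect a.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
a, b, c = 0.5, 0.8, 1.2  # hypothetical path coefficients

x = rng.normal(size=n)
z = b * x + rng.normal(size=n)
y = a * x + c * z + rng.normal(size=n)

# Total effect = slope of y on x alone; direct effect = coefficient of x
# when z is also included; indirect effect = their difference (= b*c).
te = np.polyfit(x, y, 1)[0]
de = np.linalg.lstsq(np.column_stack([x, z, np.ones(n)]), y, rcond=None)[0][0]
print(f"total    ~= {te:.3f}  (a + b*c = {a + b * c:.3f})")
print(f"direct   ~= {de:.3f}  (a = {a:.3f})")
print(f"indirect ~= {te - de:.3f}  (b*c = {b * c:.3f})")
```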

SEMANTICS BECOMES NONTRIVIAL IN NONLINEAR MODELS (even when the model is completely specified)
[Diagram: X → Z → Y, X → Y]
z = f(x, ε1), y = g(x, z, ε2)
Is the direct effect dependent on z? Is the indirect effect void of operational meaning?

THE OPERATIONAL MEANING OF DIRECT EFFECTS
[Diagram: X → Z → Y, X → Y; z = f(x, ε1), y = g(x, z, ε2)]
"Natural" direct effect of X on Y: the expected change in Y per unit change of X, when we keep Z constant at whatever value it attained before the change.
In linear models: NDE = controlled direct effect.

POLICY IMPLICATIONS (Who cares?)
What is the direct effect of X on Y? E.g., the effect of Gender on Hiring if sex discrimination is eliminated.
[Diagram: X (Gender) → Z (Qualification) → Y (Hiring), with the indirect path through Z ignored]

THE OPERATIONAL MEANING OF INDIRECT EFFECTS
[Diagram: X → Z → Y, X → Y; z = f(x, ε1), y = g(x, z, ε2)]
"Natural" indirect effect of X on Y: the expected change in Y when we keep X constant, say at x0, and let Z change to whatever value it would have attained under a unit change in X.
In linear models: NIE = TE − DE.

LEGAL DEFINITIONS TAKE THE NATURAL CONCEPTION (FORMALIZING DISCRIMINATION)
"The central question in any employment-discrimination case is whether the employer would have taken the same action had the employee been of a different race (age, sex, religion, national origin, etc.) and everything else had been the same." [Carson v. Bethlehem Steel Corp., 70 FEP Cases 921, 7th Cir. (1996)]
x = male, x′ = female; y = hire, y′ = not hire; z = applicant's qualifications.
NO DIRECT EFFECT.

SEMANTICS AND IDENTIFICATION OF NESTED COUNTERFACTUALS
Consider the quantity Q = E[Y x, Zx*(u)]: the expectation of Y under X = x, with Z held at the value it would attain under X = x*.
Given ⟨M, P(u)⟩, Q is well defined: for each u, Zx*(u) is the solution for Z in Mx* (call it z), and Yxz(u) is the solution for Y in Mxz.
Can Q be estimated from data?

GRAPHICAL CONDITION FOR EXPERIMENTAL IDENTIFICATION OF AVERAGE NATURAL DIRECT EFFECTS
Theorem: The average natural direct effect is experimentally identifiable if there exists a set W such that (Y ⊥ Z | W) holds in GXZ.
[Example diagram omitted]

HOW DOES THE PROOF GO?
Proof: Write the natural direct effect as a sum of counterfactual factors; each factor is identifiable by experimentation.

GRAPHICAL CRITERION FOR COUNTERFACTUAL INDEPENDENCE
[Three example diagrams over X, Z, Y with background variables U1, U2, U3, illustrating when the criterion holds and when it fails]

GRAPHICAL CONDITION FOR NONEXPERIMENTAL IDENTIFICATION OF AVERAGE NATURAL DIRECT EFFECTS
Identification conditions: (1) there exists a W such that (Y ⊥ Z | W) in GXZ; (2) there exist additional covariates that render all counterfactual terms identifiable.

IDENTIFICATION IN MARKOVIAN MODELS
Corollary 3: The average natural direct effect in Markovian models is identifiable from nonexperimental data, and it is given by the mediation formula
NDE(x*, x; Y) = Σz [E(Y | x, z) − E(Y | x*, z)] P(z | x*).
[Diagram: X → Z → Y, X → Y]
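A sketch of the mediation formula on simulated Markovian data (the generating probabilities are invented; in this model the true natural direct effect is 0.30 by construction):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500_000

# Markovian model X -> Z -> Y, X -> Y, no unobserved confounding; binary.
x = rng.binomial(1, 0.5, n)
z = rng.binomial(1, 0.3 + 0.4 * x)
y = rng.binomial(1, 0.1 + 0.3 * x + 0.4 * z)

# Mediation formula (Corollary 3):
# NDE(x*, x; Y) = sum_z [E(Y | x, z) - E(Y | x*, z)] P(z | x*)
def nde(x_star, x_val):
    return sum((y[(x == x_val) & (z == z0)].mean()
                - y[(x == x_star) & (z == z0)].mean())
               * (z[x == x_star] == z0).mean()
               for z0 in (0, 1))

print(f"NDE(0 -> 1) ~= {nde(0, 1):.3f}  (true direct effect: 0.300)")
```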

RELATIONS BETWEEN TOTAL, DIRECT, AND INDIRECT EFFECTS
Theorem 5: The total, direct, and indirect effects obey the equality TE(x, x*; Y) = DE(x, x*; Y) − IE(x*, x; Y).
In words, the total effect (on Y) associated with the transition from x* to x equals the difference between the direct effect associated with this transition and the indirect effect associated with the reverse transition, from x to x*.

GENERAL PATH-SPECIFIC EFFECTS (Def.)
[Diagrams: graph over X, W, Z, Y and the active subgraph g, with z* = Zx*(u) on the deactivated paths]
Form a new model, M*g, specific to the active subgraph g.
Definition: the g-specific effect. Nonidentifiable even in Markovian models.

ANSWERS TO QUESTIONS
Graphical conditions for estimability from experimental and nonexperimental data; these conditions hold in Markovian models.
Useful in answering a new type of policy question, involving mechanism blocking instead of variable fixing.

THE OVERRIDING THEME
1. Define Q(M) as a counterfactual expression.
2. Determine conditions for the reduction of Q(M) to an expression over the observed distribution P.
3. If the reduction is feasible, Q is inferable.
Demonstrated on three types of queries: Q1 = P(y | do(x)), the causal effect (= P(Yx = y)); Q2 = P(Yx′ = y′ | x, y), the probability of necessity; Q3 = the direct effect.

OUTLINE
- Modeling: Statistical vs. Causal
- Causal Models and Identifiability
- Inference to three types of claims:
  - Effects of potential interventions
  - Claims about attribution (responsibility)
  - Claims about direct and indirect effects
  - Actual causation and explanation
- Robustness of Causal Claims

ROBUSTNESS: MOTIVATION
[Diagram: Smoking (x) → Cancer (y) with coefficient a; unobserved Genetic Factors u affecting y]
In linear systems: y = ax + u. If cov(x, u) = 0, then a is identifiable: a = Ryx.
The claim a = Ryx is sensitive to the assumption cov(x, u) = 0; a is non-identifiable if cov(x, u) ≠ 0.

ROBUSTNESS: MOTIVATION (CONT)
[Diagram: Price of Cigarettes (Z) → Smoking (x) → Cancer (y), with unobserved Genetic Factors u]
Z is an instrumental variable: cov(z, u) = 0. Then a is identifiable, a = Ryz / Rxz, even if cov(x, u) ≠ 0, and the claim "a = Ryx" can be corroborated: if the instrumental estimand agrees with Ryx, the claim is likely to be true.
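A simulation sketch of the instrumental-variable estimand (the coefficients and the linear generating model are invented for illustration): the regression slope Ryx is biased when cov(x, u) ≠ 0, while cov(z, y) / cov(z, x) recovers a.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
a, b = 0.7, 0.5  # hypothetical: causal effect a, instrument strength b

u = rng.normal(size=n)   # unobserved genetic factor: cov(x, u) != 0
zi = rng.normal(size=n)  # instrument (price of cigarettes): cov(z, u) = 0
x = b * zi + u + rng.normal(size=n)
y = a * x + u + rng.normal(size=n)

ols = np.cov(x, y)[0, 1] / np.var(x)            # R_yx: biased by confounding
iv = np.cov(zi, y)[0, 1] / np.cov(zi, x)[0, 1]  # a = cov(z,y) / cov(z,x)
print(f"OLS ~= {ols:.3f}   IV ~= {iv:.3f}   (true a = {a})")
```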

ROBUSTNESS: MOTIVATION (CONT)
[Diagram: several instruments Z1 (Price of Cigarettes), Z2 (Peer Pressure), Z3 (Anti-smoking Legislation), …, Zn, each pointing at Smoking (x)]
Invoking several instruments: if a0 = a1 = a2, the claim "a = a0" is more likely correct.
Greater surprise: if a1 = a2 = a3 = … = an = q, the claim "a = q" is highly likely to be correct.

ROBUSTNESS: MOTIVATION (CONT)
Given a parameter a in a general graph: assume we have several independent estimands of a, and a1 = a2 = … = an. Find the degree to which a is robust to violations of the model assumptions.

ROBUSTNESS: ATTEMPTED FORMULATION
Bad attempt: Parameter a is robust (over-identified) if there exist two distinct functions f1, f2 of the observed distribution such that a = f1(P) = f2(P).

ROBUSTNESS: MOTIVATION (CONT)
[Diagram: Smoking (x) → Cancer (y) → Symptom (s), with coefficients a and b and unobserved Genetic Factors u]
Is a robust if a0 = a1?

ROBUSTNESS: MOTIVATION (CONT)
Symptoms do not act as instruments: a remains non-identifiable if cov(x, u) ≠ 0. Why? Taking a noisy measurement (s) of an observed variable (y) cannot add new information.

ROBUSTNESS: MOTIVATION (CONT)
[Diagram: Smoking (x) → Cancer (y) → Symptoms S1, S2, …, Sn, with unobserved Genetic Factors u]
Adding many symptoms does not help: a remains non-identifiable.

INDEPENDENT: BASED ON DISTINCT SETS OF ASSUMPTIONS
[Two diagrams over z, x, y, u, each supporting its own estimand of a under its own set of assumptions]

RELEVANCE: FORMULATION Definition 8 Let A be an assumption embodied in model M, and p a parameter in M. A is said to be relevant to p if and only if there exists a set of assumptions S in M such that S and A sustain the identification of p but S alone does not sustain such identification. Theorem 2 An assumption A is relevant to p if and only if A is a member of a minimal set of assumptions sufficient for identifying p.

ROBUSTNESS: FORMULATION Definition 5 (Degree of over-identification) A parameter p (of model M) is identified to degree k (read: k-identified) if there are k minimal sets of assumptions each yielding a distinct estimand of p.

ROBUSTNESS: FORMULATION (CONT)
[Diagrams G1, G2, G3 over x, y, z: minimal assumption sets for the parameter b (on x → y) and for the parameter c (on y → z)]

FROM MINIMAL ASSUMPTION SETS TO MAXIMAL EDGE SUPERGRAPHS: FROM PARAMETERS TO CLAIMS
Definition: A claim C is identified to degree k in model M (graph G) if there are k edge-supergraphs of G that permit the identification of C, each yielding a distinct estimand.
e.g., claim (total effect): TE(x, z) = q.
[Diagrams: edge-supergraphs of the chain x → y → z yielding distinct estimands, e.g., TE(x, z) = Rzx and TE(x, z) = Ryx Rzy·x; an analogous nonparametric construction]

SUMMARY OF ROBUSTNESS RESULTS
A formal definition of the ROBUSTNESS of causal claims: "A claim is robust when it is insensitive to violations of some of the model assumptions relevant to substantiating that claim."
Graphical criteria and algorithms for computing the degree of robustness of a given causal claim.

CONCLUSIONS
Structural-model semantics, enriched with logic and graphs, leads to formal interpretation and practical assessment of a wide variety of causal and counterfactual relationships.