1
Kolmogorov (1933): Stochastic dependence of events  [(Ω, 𝒜, P), P(A | B)]

Ω: set of possible outcomes

σ-algebra 𝒜 (set of possible events): 𝒜 is a set of subsets of Ω with
(a) Ω ∈ 𝒜
(b) if A ∈ 𝒜, then Ā ∈ 𝒜
(c) if A₁, A₂, … ∈ 𝒜, then A₁ ∪ A₂ ∪ … ∈ 𝒜

Probability measure P: 𝒜 → [0, 1] with
(a) P(Ω) = 1
(b) P(A) ≥ 0 for all A ∈ 𝒜
(c) P(∪ᵢ Aᵢ) = Σᵢ P(Aᵢ) if Aᵢ ∩ Aⱼ = ∅ for all i ≠ j

Conditional probability: P(A | B) = P(A ∩ B) / P(B)

Stochastic dependence of A and B: P(A | B) ≠ P(A), or equivalently P(A ∩ B) ≠ P(A) P(B)
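A minimal Python sketch of these definitions on a finite outcome set with equally likely outcomes; the coin-toss events A and B are my own illustration, not part of the slide:

```python
# Sketch: conditional probability and stochastic dependence of two events.
from itertools import product

omega = list(product("HT", repeat=2))            # Omega: outcomes of two coin tosses
P = {w: 1 / len(omega) for w in omega}           # probability measure on subsets of Omega

def prob(event):
    """P(A) for an event A, i.e. a subset of Omega."""
    return sum(P[w] for w in event)

A = {w for w in omega if w[0] == "H"}            # event: first toss is heads
B = {w for w in omega if w.count("H") == 2}      # event: both tosses are heads

p_A_given_B = prob(A & B) / prob(B)              # P(A | B) = P(A ∩ B) / P(B)
print(p_A_given_B, prob(A))                      # 1.0 vs 0.5 -> P(A | B) != P(A)
print(prob(A & B) != prob(A) * prob(B))          # True: A and B are stochastically dependent
```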
2
still Kolmogorov (1933): Regressive dependence of random variables  [(Ω, 𝒜, P), E(Y | X)]

Ω, 𝒜, P: same as before

Random variables on (Ω, 𝒜, P):
Y: Ω → ℝ (real-valued), X another random variable on Ω.
X and Y must be "measurable", i.e., all events associated with X and Y are elements of 𝒜.

Regression or conditional expectation E(Y | X): that function of X whose values are the conditional expected values E(Y | X = x).

Regressive dependence of Y on X: E(Y | X) ≠ E(Y)

More general framework than before, because Y and X can be indicator variables for events A and B, i.e., Y = I_A and X = I_B.
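A small sketch of regressive dependence with made-up data (values and variable names are illustrative only):

```python
# Sketch: E(Y | X) differs from E(Y), i.e. Y is regressively dependent on X.
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=100_000)            # X: binary regressor
y = 2.0 + 1.5 * x + rng.normal(size=x.size)     # Y depends on X (hypothetical model)

# E(Y | X = x) estimated by the mean of Y within each value of X.
e_y_given_x = {v: y[x == v].mean() for v in (0, 1)}
print(e_y_given_x)   # roughly {0: 2.0, 1: 3.5}
print(y.mean())      # roughly 2.75 -> E(Y | X) != E(Y): regressive dependence

# Indicator special case: with Y = I_A and X = I_B, E(Y | X = 1) is just P(A | B),
# so the event framework of the previous slide is contained in this one.
```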
3
Kolmogorov's framework handles causal and noncausal dependencies alike, but it does not distinguish between the two.

Prototypical examples of noncausal dependencies:
1. An experiment with treatment and control in which the more seriously ill people tend to select themselves into the treatment condition.
2. The regressive dependence of the older sibling's size on the younger sibling's size.

Prototypical example of a causal dependency:
3. An experiment with truly random assignment of units to treatment conditions.

E(Y | X) can be highly misleading if interpreted causally. It may indicate a positive dependence when in fact there is a negative individual causal effect for each and every individual in the population (see Steyer et al., 2000, "Causal Regression Models I", MPR-Online).
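A hedged numeric sketch of this warning, with hypothetical numbers that are not taken from Steyer et al. (2000): a treatment whose individual causal effect is −1 for every unit can still show a positive observed regression when healthier units select themselves into treatment.

```python
# Sketch: positive E(Y | X) contrast despite a negative individual causal effect.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

w = rng.integers(0, 2, size=n)                # W: baseline health (0 = ill, 1 = healthy)
p_treat = np.where(w == 1, 0.8, 0.2)          # self-selection: healthy units treat more often
x = rng.binomial(1, p_treat)                  # X: treatment indicator

y0 = 3.0 * w + rng.normal(size=n)             # potential outcome without treatment
y1 = y0 - 1.0                                 # treatment lowers Y by 1 for *everyone*
y = np.where(x == 1, y1, y0)                  # observed outcome

naive = y[x == 1].mean() - y[x == 0].mean()   # E(Y | X = 1) - E(Y | X = 0)
true_effect = (y1 - y0).mean()                # average individual causal effect

print(round(naive, 2))        # about +0.8: positive "dependence"
print(round(true_effect, 2))  # -1.0: the causal effect is negative for every unit
```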
4
Structural Prerequisites (Steyer, 1992)  [(Ω, 𝒜, P), E(Y | X), (C_t, t ∈ T), D]

(1) (Ω, 𝒜, P) and E(Y | X): same as before.

(2) Monotonically nondecreasing family of σ-algebras C_t ⊆ 𝒜, i.e. C₁ ⊆ C₂ ⊆ C₃ ⊆ … ⊆ 𝒜, used to define a preorderedness relation between events and random variables. [Random variables generate σ-algebras ⊆ 𝒜.]

(3) D ⊆ 𝒜, a sub-σ-algebra of 𝒜, used to define "potential confounders" W (random variables): their generated σ-algebra is a subset of D.

Pre-orderedness: W ≺ X ≺ Y.
5
Causality conditions (Steyer, 1992)

Strict Causality: E(Y | X, W) = E(Y | X) for each potential confounder W.

Strong Causality: E(Y | X, W) = E(Y | X) + f(W) for each potential confounder W.

Weak Causality (= Unconfoundedness): if W is a potential confounder, then, for each value x of X:
E(Y | X = x) = ∫ E(Y | X = x, W = w) P_W(dw),
i.e., if W is discrete: E(Y | X = x) = Σ_w E(Y | X = x, W = w) P(W = w).
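A sketch of the discrete-W form of the Weak Causality condition, reusing the kind of hypothetical self-selection data from the previous sketch (all names and numbers are illustrative):

```python
# Sketch: compare E(Y | X = x) with sum_w E(Y | X = x, W = w) * P(W = w).
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
w = rng.integers(0, 2, size=n)                          # discrete confounder W
x = rng.binomial(1, np.where(w == 1, 0.8, 0.2))         # self-selected treatment X
y = 3.0 * w - 1.0 * x + rng.normal(size=n)              # outcome Y

def adjusted_mean(y, x, w, x_value):
    """sum_w E(Y | X = x, W = w) * P(W = w), the discrete-W side of Weak Causality."""
    return sum(y[(x == x_value) & (w == w_value)].mean() * np.mean(w == w_value)
               for w_value in np.unique(w))

for x_value in (0, 1):
    lhs = y[x == x_value].mean()               # E(Y | X = x)
    rhs = adjusted_mean(y, x, w, x_value)      # adjusted mean
    print(x_value, round(lhs, 2), round(rhs, 2))
# The two sides differ here, so Weak Causality does not hold under this self-selection;
# in this linear example rhs(1) - rhs(0) recovers the true effect of -1.
```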
6
Sufficient conditions for Weak Causality (Steyer, 1992)

1. Stochastic independence of X and D implies Weak Causality. [If D is defined to be generated by U, the random variable whose values are the observational units drawn from the population, then this independence can be deliberately created via random assignment of units to treatment conditions.]

2. Both the Strict and the Strong Causality conditions imply Weak Causality.
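A sketch of sufficient condition 1, again with hypothetical data: if X is assigned at random (independent of the unit variable U and hence of any potential confounder W), the simple group contrast already behaves causally.

```python
# Sketch: random assignment makes X independent of W, so E(Y | X) is unconfounded.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
w = rng.integers(0, 2, size=n)           # the same kind of confounder as before
x = rng.binomial(1, 0.5, size=n)         # but now: random assignment, independent of W
y = 3.0 * w - 1.0 * x + rng.normal(size=n)

diff = y[x == 1].mean() - y[x == 0].mean()
print(round(diff, 2))   # about -1.0: the simple group difference now matches the causal effect
```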
7
Necessary conditions for Weak Causality (Steyer, 1992)

1. See the definition of Weak Causality itself (the condition is directly empirically testable).

2. If E(Y | X, W) = γ₀ + γ₁X + γ₂W, then Weak Causality implies E(Y | X) = β₀ + β₁X with β₁ = γ₁. (For statistical tests in the normal-distribution case see von Davier, 2001.)
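A sketch of this necessary condition in the linear case (illustrative data; the slopes, not the statistical test of von Davier, 2001): under Weak Causality the X-slope of the simple regression equals the X-slope of the regression on X and W.

```python
# Sketch: compare the simple-regression slope of Y on X with the X-slope of Y on (X, W).
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
w = rng.normal(size=n)
x = rng.binomial(1, 0.5, size=n)             # random assignment -> Weak Causality holds
y = 1.0 + 2.0 * x + 0.5 * w + rng.normal(size=n)

X_simple = np.column_stack([np.ones(n), x])
X_full = np.column_stack([np.ones(n), x, w])
beta_simple = np.linalg.lstsq(X_simple, y, rcond=None)[0]
beta_full = np.linalg.lstsq(X_full, y, rcond=None)[0]

print(round(beta_simple[1], 3), round(beta_full[1], 3))  # both close to 2.0
# A marked discrepancy between the two X-slopes would speak against Weak Causality.
```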
8
More recent work related to the Theory of Causal Regression Models
- Nonorthogonal analysis of variance, in cooperation with Wüthrich-Martone (dissertation finished May 2001)
- Integrating Rubin's approach: Steyer et al. (2000a, b; see MPR-Online)
9
Yet to be done
1. Causal modeling with categorical variables:
- How to test causality?
- How to analyze causal effects?
2. Extending the theory to the whole distribution of the random variables [instead of focusing only on E(Y | X)].
3. Testing and analyzing causal models with qualitative stochastic regressors (ANOVA with stochastic regressors).
4. Developing the theory of nonexperimental design, i.e., causal analysis in panel studies etc.
5. Extending the theory to systems of regression models.
10
How to construct latent variables: basic concepts of Classical Test Theory

Primitives
–The set of possible events of the random experiment: Ω = Ω_U × Ω_O
–Test score variables Y_i: Ω → ℝ
–Projection U: Ω → Ω_U

Definition of the theoretical variables
–True score variable: τ_i := E(Y_i | U)
–Measurement error variable: ε_i := Y_i − τ_i
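A minimal simulation sketch of this decomposition (the numeric model for the true scores is my own illustration):

```python
# Sketch: tau_i := E(Y_i | U) and eps_i := Y_i - tau_i, with the usual consequences.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

ability = rng.normal(size=n)            # one value per observational unit u
tau = 5.0 + 2.0 * ability               # tau_i := E(Y_i | U): a function of the unit
eps = rng.normal(scale=1.0, size=n)     # measurement error eps_i := Y_i - tau_i
y = tau + eps                           # observed test score Y_i

print(round(eps.mean(), 3))                                 # ~0: E(eps_i) = 0
print(round(np.cov(tau, eps)[0, 1], 3))                     # ~0: Cov(tau_i, eps_i) = 0
print(round(tau.var() + eps.var(), 2), round(y.var(), 2))   # Var(Y) = Var(tau) + Var(eps)
```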
11
[Path diagram with test score variables Y1–Y6]
12
τ-congenerity: τ_i = λ_ij0 + λ_ij1 τ_j, with λ_ij0, λ_ij1 ∈ ℝ and λ_ij1 > 0

Uncorrelated errors: Cov(ε_i, ε_j) = 0 for i ≠ j

Equal error variances: Var(ε_i) = Var(ε_j)

τ-congenerity implies Y_i = λ_i0 + λ_i1 η + ε_i
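A small numpy sketch of τ-congeneric test scores; the loadings and intercepts are made-up illustration values, not estimates.

```python
# Sketch: Y_i = lambda_i0 + lambda_i1 * eta + eps_i with uncorrelated errors.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
eta = rng.normal(size=n)                       # common latent variable

lambdas0 = np.array([1.0, 0.5, 2.0])           # intercepts lambda_i0
lambdas1 = np.array([1.0, 1.5, 0.8])           # positive loadings lambda_i1
errors = rng.normal(size=(n, 3))               # uncorrelated errors across tests
Y = lambdas0 + eta[:, None] * lambdas1 + errors

# Congenerity implies a rank-one structure for the between-test covariances:
# Cov(Y_i, Y_j) = lambda_i1 * lambda_j1 * Var(eta) for i != j.
print(np.round(np.cov(Y, rowvar=False), 2))
```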
13
[Path diagram with test score variables Y1–Y3]
15
Structural equation models: LISREL notation, exogenous and endogenous variables
16
Structural equation models – model equations for structured means
Measurement model for y: y = τ_y + Λ_y η + ε
Measurement model for x: x = τ_x + Λ_x ξ + δ
Structural model: η = α + Bη + Γξ + ζ
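A sketch that generates data from these three equations with small made-up parameter values (one exogenous factor ξ, one endogenous factor η, three indicators each); purely illustrative, not a fitted model.

```python
# Sketch: data generated from the LISREL equations with structured means.
import numpy as np

rng = np.random.default_rng(6)
n = 50_000

# Structural model: eta = alpha + B*eta + Gamma*xi + zeta   (scalar case, B = 0 here)
alpha, B, Gamma = 0.5, 0.0, 0.7
xi = rng.normal(size=n)
zeta = rng.normal(scale=0.5, size=n)
eta = (alpha + Gamma * xi + zeta) / (1.0 - B)

# Measurement models: x = tau_x + Lambda_x*xi + delta,  y = tau_y + Lambda_y*eta + epsilon
tau_x, Lambda_x = np.array([0.0, 0.2, 0.4]), np.array([1.0, 0.8, 0.9])
tau_y, Lambda_y = np.array([1.0, 1.2, 0.8]), np.array([1.0, 0.7, 1.1])
x = tau_x + xi[:, None] * Lambda_x + rng.normal(scale=0.5, size=(n, 3))
y = tau_y + eta[:, None] * Lambda_y + rng.normal(scale=0.5, size=(n, 3))

# Structured means show up in the indicator means: E(y) = tau_y + Lambda_y * E(eta).
print(np.round(y.mean(axis=0), 2))   # approx tau_y + Lambda_y * (alpha + Gamma * E(xi))
```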
17
Structural equation models
18
Category probabilities in IRT models
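The slide gives only this heading, so as one concrete instance here is a sketch of category probabilities under the partial credit model; the choice of this particular IRT model is my assumption, not taken from the slide.

```python
# Sketch: partial credit model category probabilities
# P(X = k | theta) = exp(sum_{j<=k}(theta - delta_j)) / sum_m exp(sum_{j<=m}(theta - delta_j)),
# with the empty sum for k = 0 defined as 0.
import numpy as np

def pcm_category_probs(theta, deltas):
    """Category probabilities P(X = 0), ..., P(X = K) for one item with thresholds deltas."""
    cumulative = np.concatenate(([0.0], np.cumsum(theta - np.asarray(deltas))))
    weights = np.exp(cumulative - cumulative.max())   # subtract max for numerical stability
    return weights / weights.sum()

print(np.round(pcm_category_probs(theta=0.0, deltas=[-1.0, 0.0, 1.0]), 3))
# probabilities for categories 0..3 at theta = 0; they sum to 1
```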