1
Discrete Multivariate Analysis
Analysis of Multivariate Categorical Data
2
References: Fienberg, S. (1980), The Analysis of Cross-Classified Categorical Data, MIT Press, Cambridge, Mass. Fingleton, B. (1984), Models for Category Counts, Cambridge University Press, Cambridge. Agresti, A. (1990), Categorical Data Analysis, Wiley, New York.
3
Example 1: In this study n = 1237 individuals were examined, measuring X = Systolic Blood Pressure and Y = Serum Cholesterol.
4
Example 2: The following data were taken from a study of parole success involving 5587 parolees in Ohio between 1965 and 1972 (a ten percent sample of all parolees during this period).
5
The study involved a dichotomous response Y
Success (no major parole violation) or Failure (returned to prison either as a technical violator or with a new conviction), based on a one-year follow-up. The predictors of parole success were: type of offense committed (Person offense or Other offense), Age (25 or older, or Under 25), Prior record (No prior sentence or Prior sentence), and Drug or alcohol dependency (No drug or alcohol dependency, or Drug and/or alcohol dependency).
6
The data were randomly split into two parts
The data were randomly split into two parts. The counts for each part are displayed in the table, with those for the second part in parentheses. The second part of the data was set aside for a validation study of the model to be fitted in the first part.
7
Table
8
Multiway Frequency Tables
Two-Way Table (diagram: variables A and B)
9
Three-Way Table (diagram: variables A, B, C)
10
Three-Way Table, alternative layout (diagram: variables A, B, C)
11
Four-Way Table (diagram: variables A, B, C, D)
12
Analysis of a Two-way Frequency Table:
13
Frequency Distribution (Serum Cholesterol and Systolic Blood Pressure)
14
Joint and Marginal Distributions (Serum Cholesterol and Systolic Blood Pressure)
The Marginal distributions allow you to look at the effect of one variable, ignoring the other. The joint distribution allows you to look at the two variables simultaneously.
15
Conditional Distributions ( Systolic Blood Pressure given Serum Cholesterol )
The conditional distribution allows you to look at the effect of one variable, when the other variable is held fixed or known.
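To make the three kinds of distribution concrete, here is a minimal Python sketch (NumPy assumed; the counts are hypothetical, not the study's actual data) computing the joint, marginal and conditional distributions from a two-way table of counts:

import numpy as np

# Hypothetical table of counts: rows = Serum Cholesterol categories,
# columns = Systolic Blood Pressure categories (not the actual study data).
x = np.array([[30, 20, 10],
              [40, 60, 25],
              [15, 30, 35]], dtype=float)

N = x.sum()
joint = x / N                       # joint distribution p_ij
row_marginal = joint.sum(axis=1)    # marginal distribution of Cholesterol (ignoring Blood Pressure)
col_marginal = joint.sum(axis=0)    # marginal distribution of Blood Pressure (ignoring Cholesterol)

# Conditional distribution of Blood Pressure given each Cholesterol category:
# divide each row by its row total.
cond_bp_given_chol = x / x.sum(axis=1, keepdims=True)

print(joint, row_marginal, col_marginal, cond_bp_given_chol, sep="\n")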
16
Conditional Distributions (Serum Cholesterol given Systolic Blood Pressure)
17
GRAPH: Conditional distributions of Systolic Blood Pressure given Serum Cholesterol
18
Notation: Let xij denote the frequency (number of cases) for which X (the row variable) is i and Y (the column variable) is j.
19
Different Models The Multinomial Model:
Here the total number of cases N is fixed and xij follows a multinomial distribution with parameters pij
20
The Product Multinomial Model:
Here the row (or column) totals Ri are fixed and for a given row i, xij follows a multinomial distribution with parameters pj|i
21
The Poisson Model: In this case we observe over a fixed period of time and all counts in the table (including Row, Column and overall totals) follow a Poisson distribution. Let mij denote the mean of xij.
22
Independence
23
Multinomial Model: if X and Y are independent then pij = pi• p•j, and the estimated expected frequency in cell (i,j) in the case of independence is Eij = Ri Cj / N, where Ri and Cj denote the row and column totals of the observed table.
24
The same can be shown for the other two models – the Product Multinomial model and the Poisson model
namely, the estimated expected frequency in cell (i,j) in the case of independence is Eij = Ri Cj / N. Standardized residuals are defined for each cell: rij = (xij − Eij) / √Eij.
25
The Chi-Square Statistic: χ² = Σi Σj (xij − Eij)² / Eij = Σi Σj rij².
The Chi-Square test for independence: reject H0 (independence) if χ² exceeds the upper-α critical value of the chi-square distribution with (r − 1)(c − 1) degrees of freedom.
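As an illustration, a short Python sketch (NumPy and SciPy assumed; the counts are hypothetical) of the expected frequencies under independence, the standardized residuals and the chi-square test:

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical two-way table of counts (rows = X categories, columns = Y categories).
x = np.array([[30, 20, 10],
              [40, 60, 25],
              [15, 30, 35]], dtype=float)

N = x.sum()
row_tot = x.sum(axis=1, keepdims=True)      # R_i
col_tot = x.sum(axis=0, keepdims=True)      # C_j

expected = row_tot @ col_tot / N            # E_ij = R_i C_j / N under independence
std_resid = (x - expected) / np.sqrt(expected)
chi2_stat = (std_resid ** 2).sum()          # Pearson chi-square

# The same test via SciPy, which also returns the p-value and d.f. = (r-1)(c-1).
stat, p, dof, exp = chi2_contingency(x, correction=False)
print(chi2_stat, stat, p, dof)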
26
Table: Expected frequencies, Observed frequencies, Standardized residuals.
χ² = (p = )
27
Example: In this example N = 57,407 cases were studied in which individuals were victimized twice by crimes. The crime of the first victimization (X) and the crime of the second victimization (Y) were noted. The data are tabulated on the following slide.
28
Table 1: Frequencies
29
Table 2: Expected Frequencies (assuming independence)
30
Table 3: Standardized residuals
31
Table 4: Conditional distribution of the second victimization given the first victimization (%)
32
Log Linear Model
33
Recall, if the two variables, rows (X) and columns (Y), are independent, then pij = pi• p•j and hence mij = N pi• p•j.
34
In general, let ln mij = u + u1(i) + u2(j) + u12(i,j)   (1), where the u-terms sum to zero over each of their subscripts. Equation (1) is called the log-linear model for the frequencies xij.
35
Note: X and Y are independent if u12(i,j) = 0 for all i and j.
In this case the log-linear model becomes ln mij = u + u1(i) + u2(j).
36
Comment: The log-linear model for a two-way frequency table is similar to the model for a two-factor factorial experiment (two-way ANOVA).
37
Three-way Frequency Tables
38
Example: Data from the Framingham Longitudinal Study of Coronary Heart Disease (Cornfield [1962]). Variables: Systolic Blood Pressure (X): < 127, …, 167+ (four categories); Serum Cholesterol: < 200, …, 260+ (four categories); Heart Disease: Present, Absent. The data are tabulated on the next slide.
39
Three-way Frequency Table
40
Log-Linear model for three-way tables
Let mijk denote the expected frequency in cell (i,j,k) of the table; then in general ln mijk = u + u1(i) + u2(j) + u3(k) + u12(i,j) + u13(i,k) + u23(j,k) + u123(i,j,k), where each u-term sums to zero over every one of its subscripts.
41
Hierarchical Log-linear models for categorical Data
For three-way tables, the hierarchical principle: if an interaction is in the model, also keep the lower-order interactions and main effects associated with that interaction.
42
1. Model: (All Main effects model)
ln mijk = u + u1(i) + u2(j) + u3(k) i.e. u12(i,j) = u13(i,k) = u23(j,k) = u123(i,j,k) = 0. Notation: [1][2][3] Description: Mutual independence between all three variables.
43
2. Model: ln mijk = u + u1(i) + u2(j) + u3(k) + u12(i,j) i.e. u13(i,k) = u23(j,k) = u123(i,j,k) = 0. Notation: [12][3] Description: Independence of Variable 3 with variables 1 and 2.
44
3. Model: ln mijk = u + u1(i) + u2(j) + u3(k) + u13(i,k) i.e. u12(i,j) = u23(j,k) = u123(i,j,k) = 0. Notation: [13][2] Description: Independence of Variable 2 with variables 1 and 3.
45
4. Model: ln mijk = u + u1(i) + u2(j) + u3(k) + u23(j,k) i.e. u12(i,j) = u13(i,k) = u123(i,j,k) = 0. Notation: [23][1] Description: Independence of Variable 1 with variables 2 and 3.
46
5. Model: ln mijk = u + u1(i) + u2(j) + u3(k) + u12(i,j) + u13(i,k) i.e. u23(j,k) = u123(i,j,k) = 0. Notation: [12][13] Description: Conditional independence between variables 2 and 3 given variable 1.
47
6. Model: ln mijk = u + u1(i) + u2(j) + u3(k) + u12(i,j) + u23(j,k) i.e. u13(i,k) = u123(i,j,k) = 0. Notation: [12][23] Description: Conditional independence between variables 1 and 3 given variable 2.
48
7. Model: ln mijk = u + u1(i) + u2(j) + u3(k) + u13(i,k) + u23(j,k) i.e. u12(i,j) = u123(i,j,k) = 0. Notation: [13][23] Description: Conditional independence between variables 1 and 2 given variable 3.
49
8. Model: ln mijk = u + u1(i) + u2(j) + u3(k) + u12(i,j) + u13(i,k) + u23(j,k) i.e. u123(i,j,k) = 0. Notation: [12][13][23] Description: Pairwise relations among all three variables, with each two-variable interaction unaffected by the value of the third variable.
50
9. Model: (the saturated model)
ln mijk = u + u1(i) + u2(j) + u3(k) + u12(i,j) + u13(i,k) + u23(j,k) + u123(i,j,k) Notation: [123] Description: No simplifying dependence structure.
51
Hierarchical Log-linear models for 3 way table
Model: Description
[1][2][3]: Mutual independence between all three variables.
[1][23]: Independence of Variable 1 with variables 2 and 3.
[2][13]: Independence of Variable 2 with variables 1 and 3.
[3][12]: Independence of Variable 3 with variables 1 and 2.
[12][13]: Conditional independence between variables 2 and 3 given variable 1.
[12][23]: Conditional independence between variables 1 and 3 given variable 2.
[13][23]: Conditional independence between variables 1 and 2 given variable 3.
[12][13][23]: Pairwise relations among all three variables, with each two-variable interaction unaffected by the value of the third variable.
[123]: The saturated model.
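As an illustration of how such models can be fitted in software, here is a minimal sketch of fitting the conditional-independence model [12][13] as a Poisson log-linear model; it assumes the pandas and statsmodels packages are available and uses a hypothetical 2 x 2 x 2 table in long format:

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical 2 x 2 x 2 table: one row per cell with its count.
df = pd.DataFrame({
    "A": ["a1", "a1", "a1", "a1", "a2", "a2", "a2", "a2"],
    "B": ["b1", "b1", "b2", "b2", "b1", "b1", "b2", "b2"],
    "C": ["c1", "c2", "c1", "c2", "c1", "c2", "c1", "c2"],
    "count": [35, 12, 20, 18, 25, 30, 10, 40],
})

# Model [12][13]: conditional independence of B and C given A.
# The Poisson log-linear model contains all main effects plus the AB and AC interactions.
fit = smf.glm("count ~ C(A)*C(B) + C(A)*C(C)", data=df,
              family=sm.families.Poisson()).fit()
print(fit.deviance, fit.df_resid)   # the deviance is the G-squared lack-of-fit statistic

Other models in the list above are obtained by changing the formula, e.g. "count ~ C(A) + C(B) + C(C)" for [1][2][3].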
52
Maximum Likelihood Estimation
Log-Linear Model
53
For any model it is possible to determine the maximum likelihood estimators of the parameters.
Example: two-way table, independence, multinomial model, so that pij = pi• p•j and mij = N pi• p•j.
Log-likelihood: apart from an additive constant, l = Σi Σj xij ln pij, where Σi Σj pij = 1. With the model of independence, l = Σi Ri ln pi• + Σj Cj ln p•j, where Ri and Cj are the row and column totals, with the constraints Σi pi• = 1 and Σj p•j = 1.
Maximizing l subject to these constraints (for example with Lagrange multipliers, setting the partial derivatives with respect to pi• and p•j equal to zero) gives the estimates pi• = Ri / N and p•j = Cj / N.
Hence the estimated expected frequency in cell (i,j) under independence is mij = N (Ri / N)(Cj / N) = Ri Cj / N, the estimate used earlier in the chi-square test of independence.
62
Comments: Maximum likelihood estimates can be computed for any hierarchical log-linear model (i.e. also with more than two variables). In certain situations the equations need to be solved numerically, for example by iterative proportional fitting. For the saturated model (all interactions and main effects), the estimate of mijk… is simply xijk… .
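A minimal sketch of the numerical case: iterative proportional fitting for the no-three-factor-interaction model [12][13][23], which has no closed-form estimates. The counts are hypothetical and NumPy is assumed:

import numpy as np

def ipf_no_three_factor(x, n_iter=50):
    # Iterative proportional fitting for the [12][13][23] model (no three-factor
    # interaction) applied to a three-way table x with axes (variable 1, 2, 3).
    m = np.ones_like(x, dtype=float)
    for _ in range(n_iter):
        # Adjust m in turn so that each set of two-way margins matches the data.
        m *= x.sum(axis=2, keepdims=True) / m.sum(axis=2, keepdims=True)  # [12] margins
        m *= x.sum(axis=1, keepdims=True) / m.sum(axis=1, keepdims=True)  # [13] margins
        m *= x.sum(axis=0, keepdims=True) / m.sum(axis=0, keepdims=True)  # [23] margins
    return m

# Hypothetical 2 x 2 x 2 table of counts.
x = np.array([[[35, 12], [20, 18]],
              [[25, 30], [10, 40]]], dtype=float)
print(ipf_no_three_factor(x))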
63
Discrete Multivariate Analysis
Analysis of Multivariate Categorical Data
64
Multiway Frequency Tables
Two-Way Table (diagram: variables A and B)
65
Four-Way Table (diagram: variables A, B, C, D)
66
Log Linear Model
67
Two-way table: ln mij = u + u1(i) + u2(j) + u12(i,j), where each u-term sums to zero over every one of its subscripts. The multiplicative form: mij = exp(u) · exp(u1(i)) · exp(u2(j)) · exp(u12(i,j)).
68
Log-Linear model for three-way tables
Let mijk denote the expected frequency in cell (i,j,k) of the table; then in general ln mijk = u + u1(i) + u2(j) + u3(k) + u12(i,j) + u13(i,k) + u23(j,k) + u123(i,j,k), where each u-term sums to zero over every one of its subscripts.
69
Log-Linear model for three-way tables
Let mijk denote the expected frequency in cell (i,j,k) of the table; then in general ln mijk = u + u1(i) + u2(j) + u3(k) + u12(i,j) + u13(i,k) + u23(j,k) + u123(i,j,k), or, in the multiplicative form, mijk = exp(u) · exp(u1(i)) · exp(u2(j)) · exp(u3(k)) · exp(u12(i,j)) · exp(u13(i,k)) · exp(u23(j,k)) · exp(u123(i,j,k)).
70
Comments: The log-linear model is similar to the ANOVA models for factorial experiments. The ANOVA models are used to understand the effects of categorical independent variables (factors) on a continuous dependent variable (Y). The log-linear model is used to understand dependence amongst categorical variables. The presence of interactions indicates dependence between the variables involved in those interactions.
71
Hierarchical Log-linear models for categorical Data
For three-way tables, the hierarchical principle: if an interaction is in the model, also keep the lower-order interactions and main effects associated with that interaction.
72
1. Model: (All Main effects model)
ln mijk = u + u1(i) + u2(j) + u3(k) i.e. u12(i,j) = u13(i,k) = u23(j,k) = u123(i,j,k) = 0. Notation: [1][2][3] Description: Mutual independence between all three variables.
73
2. Model: ln mijk = u + u1(i) + u2(j) + u3(k) + u12(i,j) i.e. u13(i,k) = u23(j,k) = u123(i,j,k) = 0. Notation: [12][3] Description: Independence of Variable 3 with variables 1 and 2.
74
3. Model: ln mijk = u + u1(i) + u2(j) + u3(k) + u13(i,k) i.e. u12(i,j) = u23(j,k) = u123(i,j,k) = 0. Notation: [13][2] Description: Independence of Variable 2 with variables 1 and 3.
75
4. Model: ln mijk = u + u1(i) + u2(j) + u3(k) + u23(j,k) i.e. u12(i,j) = u13(i,k) = u123(i,j,k) = 0. Notation: [23][1] Description: Independence of Variable 1 with variables 2 and 3.
76
5. Model: ln mijk = u + u1(i) + u2(j) + u3(k) + u12(i,j) + u13(i,k) i.e. u23(j,k) = u123(i,j,k) = 0. Notation: [12][13] Description: Conditional independence between variables 2 and 3 given variable 1.
77
6. Model: ln mijk = u + u1(i) + u2(j) + u3(k) + u12(i,j) + u23(j,k) i.e. u13(i,k) = u123(i,j,k) = 0. Notation: [12][23] Description: Conditional independence between variables 1 and 3 given variable 2.
78
7. Model: ln mijk = u + u1(i) + u2(j) + u3(k) + u13(i,k) + u23(j,k) i.e. u12(i,j) = u123(i,j,k) = 0. Notation: [13][23] Description: Conditional independence between variables 1 and 2 given variable 3.
79
8. Model: ln mijk = u + u1(i) + u2(j) + u3(k) + u12(i,j) + u13(i,k) + u23(j,k) i.e. u123(i,j,k) = 0. Notation: [12][13][23] Description: Pairwise relations among all three variables, with each two-variable interaction unaffected by the value of the third variable.
80
9. Model: (the saturated model)
ln mijk = u + u1(i) + u2(j) + u3(k) + u12(i,j) + u13(i,k) + u23(j,k) + u123(i,j,k) Notation: [123] Description: No simplifying dependence structure.
81
Hierarchical Log-linear models for 3 way table
Model: Description
[1][2][3]: Mutual independence between all three variables.
[1][23]: Independence of Variable 1 with variables 2 and 3.
[2][13]: Independence of Variable 2 with variables 1 and 3.
[3][12]: Independence of Variable 3 with variables 1 and 2.
[12][13]: Conditional independence between variables 2 and 3 given variable 1.
[12][23]: Conditional independence between variables 1 and 3 given variable 2.
[13][23]: Conditional independence between variables 1 and 2 given variable 3.
[12][13][23]: Pairwise relations among all three variables, with each two-variable interaction unaffected by the value of the third variable.
[123]: The saturated model.
82
Goodness of Fit Statistics
These statistics can be used to check if a log-linear model will fit the observed frequency table
83
Goodness of Fit Statistics
The Chi-squared statistic: χ² = Σ (x − m)² / m, summed over all cells, with m the fitted expected frequency. The Likelihood Ratio statistic: G² = 2 Σ x ln(x / m). d.f. = # cells − # parameters fitted. We reject the model if χ² or G² is greater than the upper-α critical value of the chi-square distribution with these degrees of freedom.
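A short Python sketch (NumPy and SciPy assumed; the observed and fitted values are hypothetical) computing both statistics, the degrees of freedom and a p-value:

import numpy as np
from scipy.stats import chi2

def goodness_of_fit(observed, fitted, n_params):
    # Pearson chi-square and likelihood-ratio G-squared for a fitted log-linear model.
    x = np.asarray(observed, dtype=float).ravel()
    m = np.asarray(fitted, dtype=float).ravel()
    X2 = ((x - m) ** 2 / m).sum()
    pos = x > 0                                # cells with x = 0 contribute 0 to G-squared
    G2 = 2.0 * (x[pos] * np.log(x[pos] / m[pos])).sum()
    df = x.size - n_params                     # d.f. = # cells - # parameters fitted
    return X2, G2, df, chi2.sf(G2, df)         # p-value based on G-squared

# Hypothetical observed counts and fitted values from a model with 3 fitted parameters.
obs = np.array([20.0, 30.0, 25.0, 25.0])
fit = np.array([22.5, 27.5, 22.5, 27.5])
print(goodness_of_fit(obs, fit, n_params=3))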
84
Example: Variables: Systolic Blood Pressure (B), Serum Cholesterol (C), Coronary Heart Disease (H).
85
Goodness of fit testing of Models
Goodness-of-fit statistics (likelihood-ratio χ² and Pearson χ², with degrees of freedom and p-values) were computed for the models [B][C][H], [B][CH], [C][BH], [H][BC], [BC][BH], [BH][CH] (n.s.), [CH][BC] and [BC][BH][CH] (n.s.); [BH][CH] and [BC][BH][CH] are the models marked n.s. (no significant lack of fit). Possible models: 1. [BH][CH]: B and C independent given H. 2. [BC][BH][CH]: the all two-factor interaction model.
86
Model 1: [BH][CH] Log-linear parameters
Heart Disease – Blood Pressure Interaction
87
Multiplicative effect
Log-Linear Model
88
Heart Disease - Cholesterol Interaction
89
Multiplicative effect
90
Model 2: [BC][BH][CH] Log-linear parameters
Blood pressure-Cholesterol interaction:
91
Multiplicative effect
92
Heart Disease – Blood Pressure Interaction
93
Multiplicative effect
94
Heart Disease - Cholesterol Interaction
95
Multiplicative effect
96
Another Example: In this study, N = 4353 males were classified according to:
Occupation category, Educational level, Academic aptitude.
97
Occupation categories
Self-employed, Business; Teacher/Education; Self-employed, Professional; Salaried Employed. Education levels: Low, Low/Med, Med, High/Med, High.
98
Academic Aptitude: Low, Low/Med, High/Med, High
99
Tables: cross-classification of Academic Aptitude (Low, LMed, Med, HMed, High) by Education (Low, LMed, HMed, High), with row and column totals, for each of the occupation categories: Self-employed, Business; Teacher; Self-employed, Professional; Salaried Employed (cell counts omitted).
101
This is similar to looking at all the bivariate correlations
It is common to handle a multiway table by testing for independence in all two-way tables. This is similar to looking at all the bivariate correlations. In this example we learn that Education is related to Aptitude and Education is related to Occupational category. Can we do better than this?
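A sketch of this all-pairs screening in Python, assuming a pandas data frame with one row per individual; the column names are hypothetical placeholders:

import itertools
import pandas as pd
from scipy.stats import chi2_contingency

def all_pairwise_tests(df, cols=("Occupation", "Education", "Aptitude")):
    # Chi-square test of independence for every two-way marginal table.
    results = {}
    for a, b in itertools.combinations(cols, 2):
        table = pd.crosstab(df[a], df[b])          # two-way marginal table of counts
        stat, p, dof, _ = chi2_contingency(table)
        results[(a, b)] = (stat, dof, p)
    return results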
102
Fitting various log-linear models
The simplest model that fits is [Apt,Ed][Occ,Ed]. This model implies conditional independence between Aptitude and Occupation given Education.
103
Log-linear Parameters
Aptitude – Education Interaction
104
Aptitude – Education Interaction (Multiplicative)
105
Occupation – Education Interaction
106
Occupation – Education Interaction (Multiplicative)
107
Conditional Test Statistics
108
Suppose that we are considering two Log-linear models and that Model 2 is a special case of Model 1.
That is, the parameters of Model 2 are a subset of the parameters of Model 1. Also assume that Model 1 has been shown to adequately fit the data.
109
In this case one is interested in testing whether the differences in the expected frequencies between Model 1 and Model 2 are simply due to random variation. The likelihood ratio chi-square statistic that achieves this goal is G²(2|1) = G²(2) − G²(1), referred to a chi-square distribution with d.f.(2) − d.f.(1) degrees of freedom.
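A small Python helper for this conditional test (SciPy assumed). The example values are taken from two nested models in the laundry-detergent table given later; the p-value is computed here rather than quoted from the slides:

from scipy.stats import chi2

def conditional_lr_test(G2_simpler, df_simpler, G2_fuller, df_fuller):
    # G2(2|1) = G2(2) - G2(1), referred to a chi-square distribution with
    # df(2) - df(1) degrees of freedom; Model 2 is a special case of Model 1.
    delta_G2 = G2_simpler - G2_fuller
    delta_df = df_simpler - df_fuller
    return delta_G2, delta_df, chi2.sf(delta_G2, delta_df)

# Model 1 = [13][24][34] (G2 = 11.9, d.f. = 14), Model 2 = [1][3][24] (G2 = 22.4, d.f. = 17).
print(conditional_lr_test(22.4, 17, 11.9, 14))   # G2(2|1) = 10.5 on 3 d.f.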
110
Example
111
Goodness of Fit test for the all k-factor models
Conditional tests for zero k-factor interactions
112
Conclusions: The four-factor interaction is not significant: G²(3|4) = 0.7 (p = 0.705). The all three-factor model provides an adequate fit: G²(3) = 0.7 (p = 0.705). All the three-factor interactions are not significantly different from 0: G²(2|3) = 9.2 (p = 0.239). The all two-factor model provides an adequate fit: G²(2) = 9.9 (p = 0.359). There are significant two-factor interactions: G²(1|2) = 33.0 (p = ). Conclude that the model should contain main effects and some two-factor interactions.
113
There may also be a natural sequence of progressively complicated models that one might want to identify. In the laundry detergent example the variables are: 1. Softness of laundry used; 2. Previous use of Brand M; 3. Temperature of laundry water used; 4. Preference of Brand X over Brand M.
114
A natural order of increasingly complex models that might be considered:
[1][2][3][4]: the all-main-effects model, independence amongst all four variables.
[1][3][24]: since previous use of Brand M may be highly related to preference for Brand M, add first the 2-4 interaction.
[1][34][24]: Brand M is recommended for hot water, so add second the 3-4 interaction.
[13][34][24]: Brand M is also recommended for soft laundry, so add third the 1-3 interaction.
[13][234], [134][234]: finally add some possible three-factor interactions.
115
Likelihood Ratio G² for various models
Model                      d.f.   G²
[1][3][24]                  17    22.4
[1][24][34]                 16    18
[13][24][34]                14    11.9
[13][23][24][34]            13    11.2
[12][13][23][24][34]        11    10.1
[1][234]                          14.5
[134][24]                   10    12.2
[13][234]                   12    8.4
[24][34][123]                9
[123][234]                   8    5.6
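For illustration, the conditional statistics for two steps of this nested sequence can be computed from the values in the table above (SciPy assumed); the p-values are computed here and are not quoted from the slides:

from scipy.stats import chi2

# (d.f., G2) pairs read from the table above; each step tests the added interaction
# with the conditional statistic G2(2|1) = G2(2) - G2(1).
steps = [
    ("add [34]: [1][3][24] vs [1][24][34]",   17, 22.4, 16, 18.0),
    ("add [13]: [1][24][34] vs [13][24][34]", 16, 18.0, 14, 11.9),
]
for label, df2, G2_2, df1, G2_1 in steps:
    delta, ddf = G2_2 - G2_1, df2 - df1
    print(label, ": G2(2|1) =", round(delta, 1), "on", ddf, "d.f., p =",
          round(chi2.sf(delta, ddf), 3))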
117
Discrete Multivariate Analysis
Analysis of Multivariate Categorical Data
118
Log-Linear model for three-way tables
Let mijk denote the expected frequency in cell (i,j,k) of the table; then in general ln mijk = u + u1(i) + u2(j) + u3(k) + u12(i,j) + u13(i,k) + u23(j,k) + u123(i,j,k), where each u-term sums to zero over every one of its subscripts.
119
Hierarchical Log-linear models for categorical Data
For three-way tables, the hierarchical principle: if an interaction is in the model, also keep the lower-order interactions and main effects associated with that interaction.
120
Models for three-way tables
121
1. Model: (All Main effects model)
ln mijk = u + u1(i) + u2(j) + u3(k) i.e. u12(i,j) = u13(i,k) = u23(j,k) = u123(i,j,k) = 0. Notation: [1][2][3] Description: Mutual independence between all three variables. Comment: For any model the parameters (u, u1(i) , u2(j) , u3(k)) can be estimated in addition to the expected frequencies (mijk) in each cell
122
2. Model: ln mijk = u + u1(i) + u2(j) + u3(k) + u12(i,j) i.e. u13(i,k) = u23(j,k) = u123(i,j,k) = 0. Notation: [12][3] Description: Independence of Variable 3 with variables 1 and 2.
123
3. Model: ln mijk = u + u1(i) + u2(j) + u3(k) + u13(i,k) i.e. u12(i,j) = u23(j,k) = u123(i,j,k) = 0. Notation: [13][2] Description: Independence of Variable 2 with variables 1 and 3.
124
4. Model: ln mijk = u + u1(i) + u2(j) + u3(k) + u23(j,k) i.e. u12(i,j) = u13(i,k) = u123(i,j,k) = 0. Notation: [23][1] Description: Independence of Variable 1 with variables 2 and 3.
125
5. Model: ln mijk = u + u1(i) + u2(j) + u3(k) + u12(i,j) + u13(i,k) i.e. u23(j,k) = u123(i,j,k) = 0. Notation: [12][13] Description: Conditional independence between variables 2 and 3 given variable 1.
126
6. Model: ln mijk = u + u1(i) + u2(j) + u3(k) + u12(i,j) + u23(j,k) i.e. u13(i,k) = u123(i,j,k) = 0. Notation: [12][23] Description: Conditional independence between variables 1 and 3 given variable 2.
127
7. Model: ln mijk = u + u1(i) + u2(j) + u3(k) + u13(i,k) + u23(j,k) i.e. u12(i,j) = u123(i,j,k) = 0. Notation: [13][23] Description: Conditional independence between variables 1 and 2 given variable 3.
128
8. Model: ln mijk = u + u1(i) + u2(j) + u3(k) + u12(i,j) + u13(i,k) + u23(j,k) i.e. u123(i,j,k) = 0. Notation: [12][13][23] Description: Pairwise relations among all three variables, with each two-variable interaction unaffected by the value of the third variable.
129
9. Model: (the saturated model)
ln mijk = u + u1(i) + u2(j) + u3(k) + u12(i,j) + u13(i,k) + u23(j,k) + u123(i,j,k) Notation: [123] Description: No simplifying dependence structure.
130
Goodness of Fit Statistics
The Chi-squared statistic: χ² = Σ (x − m)² / m, summed over all cells, with m the fitted expected frequency. The Likelihood Ratio statistic: G² = 2 Σ x ln(x / m). d.f. = # cells − # parameters fitted. We reject the model if χ² or G² is greater than the upper-α critical value of the chi-square distribution with these degrees of freedom.
131
Conditional Test Statistics
132
In this case one is interested in testing whether the differences in the expected frequencies between Model 1 and Model 2 are simply due to random variation. The likelihood ratio chi-square statistic that achieves this goal is G²(2|1) = G²(2) − G²(1), referred to a chi-square distribution with d.f.(2) − d.f.(1) degrees of freedom.
133
Stepwise selection procedures
Forward Selection Backward Elimination
134
Forward Selection: Starting with a model that underfits the data, log-linear parameters that are not in the model are added step by step until a model that does fit is achieved. At each step the log-linear parameter that is most significant is added to the model. To determine the significance of a parameter being added we use the statistic G²(2|1) = G²(2) − G²(1), where Model 1 contains the parameter and Model 2 does not.
135
Backward Elimination:
Starting with a model that overfits the data, log-linear parameters that are in the model are deleted step by step until a model that still fits the data and has the smallest number of significant parameters is achieved. At each step the log-linear parameter that is least significant is deleted from the model. To determine the significance of a parameter being deleted we use the statistic G²(2|1) = G²(2) − G²(1), where Model 1 contains the parameter and Model 2 does not.
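A rough Python sketch of backward elimination (forward selection is analogous). The helper fit_g2 is hypothetical: it is assumed to return (G², d.f.) for the hierarchical log-linear model built from a given list of interaction terms, for example via a Poisson GLM as sketched earlier:

from scipy.stats import chi2

def backward_eliminate(terms, fit_g2, alpha=0.05):
    # terms: candidate interaction terms; only terms whose removal keeps the model
    # hierarchical should be offered for deletion (hierarchical principle).
    terms = list(terms)
    while True:
        G2_full, df_full = fit_g2(terms)
        least = None
        for t in terms:
            reduced = [s for s in terms if s != t]
            G2_red, df_red = fit_g2(reduced)
            p = chi2.sf(G2_red - G2_full, df_red - df_full)   # G2(2|1) test for dropping t
            if least is None or p > least[1]:
                least = (t, p)
        if least is None or least[1] < alpha:   # every remaining term is significant
            return terms
        terms.remove(least[0])                  # drop the least significant term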
137
K = knowledge N = Newspaper R = Radio S = Reading L = Lectures
140
Continuing after 10 steps
141
The final step
142
The best model was found at the previous step:
[LN][KLS][KR][KN][LR][NR][NS]
143
Modelling of response variables
Independent → Dependent
144
Logit Models: To date we have not worried about whether any of the variables were dependent or independent variables. The logit model is used when we have a single binary dependent variable.
146
The variables: Type of seedling (T): Longleaf seedling, Slash seedling. Depth of planting (D): Too low, Too high. Mortality (M), the dependent variable: Dead, Alive.
147
The Log-linear Model. Note: mij1 = number dead when T = i and D = j;
mij2 = number alive when T = i and D = j; mij1 / mij2 = mortality ratio when T = i and D = j.
148
Hence ln(mij1 / mij2) = ln mij1 − ln mij2 = [uM(1) − uM(2)] + [uTM(i,1) − uTM(i,2)] + [uDM(j,1) − uDM(j,2)] + [uTDM(i,j,1) − uTDM(i,j,2)], since all terms of the log-linear model that do not involve the Mortality subscript cancel in the difference.
149
The logit model: ln(mij1 / mij2) = w + wT(i) + wD(j) + wTD(i,j), where (using the sum-to-zero constraints, e.g. uM(2) = −uM(1)) w = 2 uM(1), wT(i) = 2 uTM(i,1), wD(j) = 2 uDM(j,1) and wTD(i,j) = 2 uTDM(i,j,1).
150
Thus, corresponding to a log-linear model there is a logit model predicting the log ratio of the expected frequencies of the two categories of the dependent variable. Also, (k + 1)-factor interactions with the dependent variable in the log-linear model determine k-factor interactions in the logit model: k + 1 = 1 gives the constant term in the logit model, and k + 1 = 2 gives the main effects in the logit model.
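A minimal numeric sketch of this correspondence, using hypothetical log-linear estimates rather than values from the seedling study: under the sum-to-zero constraints each logit parameter is twice the corresponding log-linear parameter involving the dependent variable, and exponentiating it gives its multiplicative effect on the odds (here the mortality ratio):

import numpy as np

u_M  = 0.15                                   # hypothetical main effect of Mortality (level "Dead")
u_TM = {"Longleaf": 0.20, "Slash": -0.20}     # hypothetical Type x Mortality interaction
u_DM = {"Too low": -0.10, "Too high": 0.10}   # hypothetical Depth x Mortality interaction

print("constant:", 2 * u_M, "multiplicative:", round(np.exp(2 * u_M), 3))
for name, u in {**u_TM, **u_DM}.items():
    w = 2 * u                                 # logit parameter
    print(name, "logit effect:", w, "multiplicative effect on odds:", round(np.exp(w), 3))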
151
1 = Depth, 2 = Mort, 3 = Type
152
Log-Linear parameters for Model: [TM][TD][DM]
153
Logit Model for predicting the Mortality
155
The best model found by forward selection was
[LN][KLS][KR][KN][LR][NR][NS]. To fit a logit model to predict K (Knowledge) we need to fit a log-linear model containing the important interactions with K, namely [LNRS][KLS][KR][KN]. The logit model will then contain: main effects for L (Lectures), N (Newspapers), R (Radio), and S (Reading), and a two-factor interaction effect for L and S.
156
The Logit Parameters for the Model: [LNSR][KLS][KR][KN]
(Multiplicative effects are given in brackets; logit parameters = 2 × log-linear parameters.) The constant term: (0.798). The main effects on Knowledge: Lectures: Lect (1.307), None (0.765); Newspaper: News (1.383), None (0.723); Reading: Solid (1.405), Not (0.712); Radio: Radio (1.162), None (0.861). The two-factor interaction effect of Reading and Lectures on Knowledge.
157
Fitting a Logit Model with a Polytomous Response Variable
158
Example: NA – Not available
159
The variables: Race: white, black. Age: < 22, ≥ 22.
Father's education: GS, some HS, HS grad, NA. Respondent's education: GS, some HS, HS grad (the response, i.e. dependent, variable).
161
Techniques for handling a Polytomous Response Variable
Two approaches: 1. Consider the categories two at a time, doing this for all possible pairs of categories. 2. Look at the continuation ratios: 1 vs 2; 1, 2 vs 3; 1, 2, 3 vs 4; etc. (a sketch of the second approach is given below).
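A small Python sketch of the continuation-ratio approach (NumPy assumed, hypothetical counts): the ordered response is split into a sequence of binary comparisons, each of which can then be analysed with an ordinary logit model:

import numpy as np

def continuation_ratio_tables(counts):
    # Split a table whose last axis is an ordered polytomous response into
    # binary tables for the continuation ratios: 1 vs 2, 1-2 vs 3, 1-3 vs 4, ...
    K = counts.shape[-1]
    tables = []
    for k in range(1, K):
        lower = counts[..., :k].sum(axis=-1)   # categories 1..k combined
        upper = counts[..., k]                 # category k+1
        tables.append(np.stack([lower, upper], axis=-1))
    return tables

# Hypothetical 2 x 4 table: two predictor cells, four ordered response categories.
x = np.array([[10, 20, 30, 15],
              [25, 18, 12,  5]], dtype=float)
for t in continuation_ratio_tables(x):
    print(t)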
165
Causal or Path Analysis for Categorical Data
166
When the data are continuous, a causal pattern may be assumed to exist amongst the variables.
The path diagram: a diagram summarizing the causal relationships. A straight arrow is drawn from a variable to another variable on which it has a causal effect (X → Y). A curved, double-headed arrow is drawn between variables that are simply correlated (X ↔ Y).
167
Example 1
The variables: Job Stress, Smoking, Heart Disease. The path diagram links these three variables. In path analysis for continuous variables, one is interested in determining the contribution along each path (the path coefficients).
168
Example 2
The variables: Job Stress, Alcoholic Drinking, Smoking, Heart Disease. The path diagram links these four variables.
169
In the analysis of categorical data there are no path coefficients, but path diagrams can point to the appropriate logit analysis. Example: In this example the data consist of two-wave, two-variable panel data for a sample of n = 3398 schoolboys, looking at "membership in" and "attitude towards" the leading crowd.
170
The path diagram (variables A, B, C, D) suggests predicting B from A, then
C from A and B, and finally D from A, B and C.
174
Example 2: In this example we are looking at:
Socioeconomic Status (SES), Sex, IQ, Parental Encouragement for Higher Education (PE), and College Plans (CP).
176
The Path Diagram (variables: Sex, SES, IQ, PE, CP)
177
The path diagram suggests
Predicting Parental Encouragement from Sex, Socioeconomic Status, and IQ, and then predicting College Plans from Parental Encouragement, Sex, Socioeconomic Status, and IQ.
179
Logit Parameters: Model [ABC][ABD][ACD][BCD]
180
Two factor Interactions
183
Logit Parameters for Predicting College Plans Using Model 9:
Logit Parameters for Predicting College Plans Using Model 9: [ABCD][BCE][AE][DE]