In the Name of God, the Most Gracious, the Most Merciful.
Multivariate Analysis of Variance
MANOVA Model
How the method works
A new variable is created that combines all the dependent variables on the left-hand side of the equation, with weights chosen so that the differences between group means are maximized; that is, the F-statistic from ANOVA, the ratio of explained variance to error variance, is maximized. The simplest significance test treats this first new variable just like a single dependent variable in ANOVA and uses the same tests as ANOVA. Additional multivariate tests can also be computed that involve further new variables derived from the initial set of dependent variables.
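A minimal numerical sketch of this idea, assuming NumPy and synthetic data (the names below are illustrative, not from the slides): the weights of the composite variables are the eigenvectors of W⁻¹B, where B and W are the between-groups and within-groups sums-of-squares-and-cross-products (SSCP) matrices, and each eigenvalue is the explained-to-error variance ratio that the corresponding composite achieves.

```python
import numpy as np

def manova_variates(X, groups):
    """Composite variables (discriminant variates) that maximize
    between-group mean differences, via eig(W^-1 B)."""
    X = np.asarray(X, dtype=float)
    grand_mean = X.mean(axis=0)
    p = X.shape[1]
    B = np.zeros((p, p))  # between-groups SSCP matrix
    W = np.zeros((p, p))  # within-groups (error) SSCP matrix
    for g in np.unique(groups):
        Xg = X[groups == g]
        d = (Xg.mean(axis=0) - grand_mean).reshape(-1, 1)
        B += len(Xg) * (d @ d.T)
        C = Xg - Xg.mean(axis=0)
        W += C.T @ C
    # Eigenvectors of W^-1 B are the weights of the composites; each
    # eigenvalue is the explained-to-error variance ratio achieved.
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(W, B))
    order = np.argsort(eigvals.real)[::-1]
    return eigvals.real[order], eigvecs.real[:, order]

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 3)) + np.repeat(np.eye(3), 30, axis=0)  # 3 groups, 3 DVs
groups = np.repeat([0, 1, 2], 30)
ratios, weights = manova_variates(X, groups)
print("explained/error variance ratios:", ratios.round(3))
```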
Advantages
Assumptions
Sum of Squares
The sum-of-squares measure found in MANOVA, like that reported in ANOVA, measures the squared deviations from the mean both within and between the groups defined by the independent variable(s). In MANOVA, the sums of squares are additionally controlled for the covariance between the independent variables.
There are six different methods of calculating the sum of squares. Type I, the hierarchical or sequential sum of squares, is appropriate when the groups in the MANOVA are of equal sizes. Type I provides a breakdown of the sums of squares for the whole model, but it is particularly sensitive to the order in which the independent variables are entered into the model: a variable entered first is not adjusted for any of the other variables; one entered second is adjusted for the first; one entered third is adjusted for the two entered before it.
Type II, the partially sequential sum of squares, has the advantage over Type I that it is not affected by the order in which the variables are entered. It displays the sum of squares after controlling for the other main effects and interactions, but it is only robust when there are equal numbers of participants in each group.
Type III sum of squares can be used in models with unequal group sizes, although there must be at least one participant in each cell. It calculates each sum of squares after the independent variable in question has been adjusted for all other independent variables in the model.
Type IV sum of squares can be used when there are empty cells in the model, but it is generally considered more suitable to use Type III under these conditions, since Type IV is not thought to test lower-order effects well.
Type V has been developed for use where there are cells with missing data. It examines the effects according to the degrees of freedom available: if the degrees of freedom for an effect fall below a given level, that effect is not taken into account. The cells that remain in the model have at least the degrees of freedom the full model would have without any cells being excluded, and for those cells the Type III sums of squares are calculated. However, Type V sums of squares are sensitive to the order in which the independent variables are entered, since that order determines which cells are excluded.
Type VI sum of squares is used for testing hypotheses where the independent variables are coded with negative and positive signs, e.g. +1 = male, -1 = female. Type III is the most frequently used, as it has the advantages of Types IV, V, and VI without the corresponding restrictions.
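As a hedged illustration of Types I–III (the data, factor names, and effect sizes below are invented), statsmodels can print all three tables for an unbalanced two-factor design; the Type I table changes with the order of entry, and Type III is only meaningful with sum-to-zero contrasts. Types IV–VI are not implemented there.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
# An unbalanced two-factor design, where the order of entry matters for Type I.
df = pd.DataFrame({
    "a": np.repeat(["a1", "a2"], [40, 20]),
    "b": rng.choice(["b1", "b2"], size=60),
})
df["y"] = (df["a"] == "a2") * 1.0 + (df["b"] == "b2") * 0.5 + rng.normal(size=60)

m = ols("y ~ C(a) * C(b)", data=df).fit()
print(sm.stats.anova_lm(m, typ=1))  # sequential: each term adjusted only for earlier ones
print(sm.stats.anova_lm(m, typ=2))  # each main effect adjusted for the other main effect

# Type III is interpretable only with sum-to-zero contrasts.
m3 = ols("y ~ C(a, Sum) * C(b, Sum)", data=df).fit()
print(sm.stats.anova_lm(m3, typ=3))
```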
Principal Components Analysis
Principal components analysis is a technique for forming new variables that are linear composites of the original variables. Its objective is to reduce the number of variables to a few components, each forming a new variable, such that the retained components explain the maximum amount of variance in the data. Principal components analysis can therefore be viewed as a dimension-reduction technique.
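A short sketch of this, assuming scikit-learn is available, on synthetic data in which six observed variables are driven by two underlying dimensions:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# Six observed variables driven by two underlying dimensions plus noise.
scores = rng.normal(size=(200, 2))
X = scores @ rng.normal(size=(2, 6)) + 0.3 * rng.normal(size=(200, 6))

pca = PCA(n_components=2)
components = pca.fit_transform(X)  # the new composite variables
print("variance explained by retained components:",
      pca.explained_variance_ratio_.round(3))
```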
Factor Analysis
Factor analysis can also be viewed as a dimension-reduction technique. Its objective is to identify the underlying factor(s), or latent constructs, that explain the intercorrelations among the variables: to find statistically independent factors and to reduce the dimensionality of the data. There are two major differences between the methods. First, principal components analysis emphasizes explaining the variance in the data, whereas the objective of factor analysis is to explain the correlations among the indicators. Second, in principal components analysis the variables form an index, while in factor analysis the variables or indicators reflect the presence of unobservable construct(s) or factor(s).
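A parallel sketch with scikit-learn's FactorAnalysis, again on invented data: six indicators are generated from two latent constructs plus unique noise, and the fitted loadings show how each indicator reflects the factors.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
# Six indicators reflecting two latent constructs plus unique noise.
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 6)) + 0.3 * rng.normal(size=(200, 6))

fa = FactorAnalysis(n_components=2, random_state=0)
factor_scores = fa.fit_transform(X)   # estimated scores on the latent factors
print(fa.components_.T.round(2))      # loadings: indicators x factors
```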
Discriminant Function Analysis (DFA)
Description: DFA uses a set of independent variables (IVs) to separate cases into groups you define; the grouping variable is the dependent variable (DV), and it is categorical. DFA creates new variables from linear combinations of the independent set, defined so that they separate the groups as far apart as possible. How well the model performs is usually reported as its classification efficiency, that is, how many cases would be correctly assigned to their groups using the new variables from DFA. The new variables can also be used to classify a new set of cases.
How the method works
DFA creates a new variable from the independent variables. This new variable defines a line onto which the group centers plot as far apart from each other as possible; in other words, it provides the maximum separation between groups of cases. The process repeats with successive new variables that further separate the group centers.
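The sketch below illustrates this with scikit-learn's LinearDiscriminantAnalysis on synthetic data: fit_transform returns the new variables (at most one fewer than the number of groups), score reports the classification efficiency on the cases used to fit, and predict classifies new cases.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
# Three groups of 50 cases measured on four independent variables.
X = np.vstack([rng.normal(loc=m, size=(50, 4)) for m in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 50)

lda = LinearDiscriminantAnalysis()
variates = lda.fit_transform(X, y)  # at most (groups - 1) = 2 new variables
print("classification efficiency on the fitted cases:", lda.score(X, y))
print("groups assigned to new cases:", lda.predict(rng.normal(loc=1.0, size=(3, 4))))
```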
- In statistical testing, MANOVA employs a discriminant function: the variate of the dependent variables that maximizes the difference between groups.
- Discriminant analysis employs a single nonmetric variable as the dependent variable. Its categories are taken as given, and the independent variables are used to form variates that differ maximally between the groups defined by those categories.
- MANOVA uses the set of metric variables as dependent variables, and the objective becomes finding groups of respondents that exhibit differences on that set. The groups of respondents are not prespecified; instead, the researcher uses one or more independent (nonmetric) variables to form the groups. Even while forming these groups, MANOVA retains the ability to assess the impact of each nonmetric variable separately.
- The dependent variables in MANOVA (a set of metric variables) are thus the independent variables in discriminant analysis, and the single nonmetric dependent variable of discriminant analysis becomes the independent variable in MANOVA. Both use the same methods to form the variates and to assess the statistical significance between groups; the differences center on the objectives of the analyses and the role of the nonmetric variable(s).
MULTIVARIATE LINEAR REGRESSION MODELS
Regression analysis is the statistical methodology for predicting the values of one or more response (dependent) variables from a collection of predictor (independent) variable values. It can also be used to assess the effects of the predictor variables on the responses.
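A minimal sketch on made-up data: with a matrix of responses Y, a single least-squares solve estimates all response equations at once, B̂ = (X′X)⁻¹X′Y.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])  # intercept + 3 predictors
B_true = rng.normal(size=(4, 2))                            # 2 response variables
Y = X @ B_true + 0.2 * rng.normal(size=(n, 2))

# One least-squares solve estimates all response equations at once.
B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
print("estimation error:\n", np.round(B_hat - B_true, 2))   # near zero
```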
Canonical Correlation
- Canonical correlation is the appropriate technique for identifying relationships between two sets of variables. When it is known that one set of variables is the predictor (independent) set and the other is the criterion (dependent) set, the objective of canonical correlation analysis is to determine whether the predictor set affects the criterion set.
- However, it is not necessary to designate the two sets of variables as dependent and independent; in such cases the objective is simply to ascertain the relationship between the two sets of variables.
- Canonical correlation analysis is also a data-reduction technique: an additional objective is to determine the minimum number of canonical correlations needed to adequately represent the association between the two sets of variables.
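A sketch, assuming scikit-learn and synthetic data in which a single shared dimension links the two sets: CCA extracts paired canonical variates, and the correlation between each pair is a canonical correlation. With the data below the first should be large and the second near zero, which bears on how many correlations are needed to represent the association.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(6)
shared = rng.normal(size=(300, 1))  # one common dimension links the two sets
X = shared @ rng.normal(size=(1, 4)) + rng.normal(size=(300, 4))
Y = shared @ rng.normal(size=(1, 3)) + rng.normal(size=(300, 3))

cca = CCA(n_components=2)
Xc, Yc = cca.fit_transform(X, Y)
# The canonical correlations are the correlations between paired variates.
for k in range(2):
    r = np.corrcoef(Xc[:, k], Yc[:, k])[0, 1]
    print(f"canonical correlation {k + 1}: {r:.3f}")
```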
Most of the dependence methods are special cases of canonical correlation analysis, including MANOVA and multiple-group discriminant analysis. When the criterion variables are dummy variables representing multiple groups, canonical correlation analysis reduces to multiple-group discriminant analysis; when the predictor variables are dummy variables representing the groups formed by the various factors, it reduces to MANOVA. In fact, SPSS does not have a separate procedure for canonical correlation analysis; one has to use MANOVA to perform it.
Logistic Regression
Logistic regression is normally recommended when the independent variables do not satisfy the multivariate normality assumption. Discriminant analysis assumes that the data come from a multivariate normal distribution, whereas logistic regression makes no such distributional assumption. Since the multivariate normality assumption will clearly be violated for a mixture of categorical and continuous variables, we suggest that in such cases one use logistic regression. When there are no categorical variables, logistic regression should be used if the multivariate normality assumption is violated; discriminant analysis should be used when it is not, because discriminant analysis is computationally more efficient.
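A sketch of the mixed-predictors case the paragraph recommends for logistic regression, assuming scikit-learn and pandas (the variable names and effect sizes are invented): the categorical predictor is dummy-coded and fit alongside the continuous one, with no normality assumption required.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "income": rng.normal(50, 10, size=400),              # continuous predictor
    "region": rng.choice(["north", "south"], size=400),  # categorical predictor
})
logit = -5 + 0.1 * df["income"] + 0.8 * (df["region"] == "south")
df["buyer"] = rng.random(400) < 1 / (1 + np.exp(-logit))

# Dummy-code the categorical predictor and fit; no normality is assumed.
X = pd.get_dummies(df[["income", "region"]], drop_first=True)
model = LogisticRegression(max_iter=1000).fit(X, df["buyer"])
print(dict(zip(X.columns, model.coef_[0].round(2))))
```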
How to interpret MANOVA results
If the treatments result in statistically significant differences in the vector of dependent variable means, the researcher then examines the results to understand how each treatment affects the dependent measures. Three steps are involved: (1) interpreting the effects of covariates, if included; (2) assessing which dependent variable(s) exhibited differences across the groups of each treatment; and (3) identifying whether the groups differ on a single dependent variable or on the entire dependent variate.
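Before these interpretive steps, the overall multivariate tests can be obtained, for example, with statsmodels' MANOVA procedure (a sketch on made-up data; the formula lists the dependent variables on the left-hand side):

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(8)
df = pd.DataFrame({
    "group": np.repeat(["control", "treat1", "treat2"], 40),
    "dv1": rng.normal(size=120),
    "dv2": rng.normal(size=120),
})
df.loc[df["group"] == "treat2", ["dv1", "dv2"]] += 0.8  # shift one group's mean vector

fit = MANOVA.from_formula("dv1 + dv2 ~ group", data=df)
print(fit.mv_test())  # Wilks' lambda, Pillai's trace, Hotelling-Lawley, Roy's root
```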
When a significant effect is found, we say that there is a main effect, meaning that there are significant differences between the dependent variables of the two or more groups defined by the treatment. With two levels of the treatment, a significant main effect ensures that the two groups are significantly different. With three or more levels, however, a significant main effect does not guarantee that all groups are significantly different, only that there is at least one significant difference between a pair of groups. If there is more than one treatment in the analysis, the researcher must examine the interaction terms to see whether they are significant and, if so, whether they permit an interpretation of the main effects. If a treatment has more than two levels, the researcher must perform a series of additional tests between the groups to see which pairs differ significantly.
Although the multivariate tests of MANOVA enable us to reject the null hypothesis that the groups’ means are all equal, they do not pinpoint where the significant differences lie when there are more than two groups. Multiple t tests without any form of adjustment are not appropriate for testing the significance of differences between the means of paired groups, because the probability of a Type I error increases with the number of intergroup comparisons made (similar to the problem of using multiple univariate ANOVAs instead of MANOVA). If the researcher wants to systematically examine group differences across specific pairs of groups for one or more dependent measures, two types of statistical tests should be used: post hoc and a priori. Post hoc tests examine the dependent variables between all possible pairs of groups and are applied after the data patterns are established.
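For example, a Tukey HSD post hoc test on one dependent measure can be sketched with statsmodels (synthetic data; the group labels are invented); it tests all pairwise group differences while holding the familywise Type I error rate at alpha:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(9)
dv = np.concatenate([rng.normal(loc=m, size=30) for m in (0.0, 0.0, 0.7)])
group = np.repeat(["g1", "g2", "g3"], 30)

# Tukey HSD tests every pair while controlling the familywise error rate.
print(pairwise_tukeyhsd(dv, group, alpha=0.05))
```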
The interaction term represents the joint effect of two or more treatments. Any time a research design has two or more treatments, the researcher must examine the interactions before any statement can be made about the main effects. Interaction effects are evaluated with the same criteria as main effects. If the statistical tests indicate that the interaction is nonsignificant, the effects of the treatments are independent: in factorial designs, independence means that the effect of one treatment (i.e., the group differences) is the same at each level of the other treatment(s), so the main effects can be interpreted directly. If the interactions are statistically significant, it is critical that the researcher identify the type of interaction (ordinal versus disordinal), because this has a direct bearing on the conclusions that can be drawn from the results.
An ordinal interaction occurs when the effects of a treatment are not equal across all levels of another treatment, but the group difference(s) are always in the same direction. A disordinal interaction occurs when the differences between levels “switch” depending on how they are combined with levels of another treatment: the effects of one treatment are positive for some levels and negative for other levels of the other treatment.