Analysis of Variance (ANOVA)
Hypotheses: $H_0\!: \mu_i = \mu_G$ for all groups $i$; $H_1\!: \exists\, i \ (\mu_i \neq \mu_G)$. Logic: $S^2_{\text{within}}$ reflects error variability, while $S^2_{\text{between}}$ reflects error variability plus the treatment effect. Here $\bar{x}$ = grand mean, $\bar{x}_j$ = group mean, $k$ = number of groups, $n_j$ = number of participants in a group, and $N$ = total number of participants.
Classic ANOVA Table. Total variability = between variability + within variability. Sums of squares: $SS_{\text{total}} = \sum_{j=1}^{k}\sum_{i=1}^{n_j} (x_{ij} - \bar{x})^2$, $SS_{\text{between}} = \sum_{j=1}^{k} n_j (\bar{x}_j - \bar{x})^2$, $SS_{\text{within}} = \sum_{j=1}^{k}\sum_{i=1}^{n_j} (x_{ij} - \bar{x}_j)^2$, so that $SS_{\text{total}} = SS_{\text{between}} + SS_{\text{within}}$.
Computation. Degrees of freedom: $df_{\text{between}} = k - 1$, $df_{\text{within}} = N - k$. Mean squares: $MS_{\text{between}} = SS_{\text{between}} / df_{\text{between}}$, $MS_{\text{within}} = SS_{\text{within}} / df_{\text{within}}$. Test statistic: $F = MS_{\text{between}} / MS_{\text{within}}$. A sketch of the full computation follows below.
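A minimal NumPy sketch of these computations; the three groups and all scores are made up for illustration:

```python
import numpy as np

# Three hypothetical groups (made-up scores), 4 participants each.
groups = [np.array([4.0, 5.0, 6.0, 5.5]),
          np.array([6.5, 7.0, 8.0, 7.5]),
          np.array([5.0, 5.5, 4.5, 6.0])]

k = len(groups)                               # number of groups
N = sum(len(g) for g in groups)               # total participants
grand_mean = np.concatenate(groups).mean()    # x-bar

# Sums of squares
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

# Degrees of freedom, mean squares, and F
df_between, df_within = k - 1, N - k
ms_between = ss_between / df_between
ms_within = ss_within / df_within
F = ms_between / ms_within
print(f"F({df_between},{df_within}) = {F:.3f}")
```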
ANOVA
Analysis of Variance (ANOVA). From ANOVA to GLM: if the independent variables are continuous and the dependent variable is also continuous, we perform a multiple regression. If the independent variables are discrete and the dependent variable is still continuous, we perform an ANOVA. In GLM form: $x_{ij} = \mu + \tau_j + e_{ij}$, where $\mu$ = grand mean, $\tau_j$ = treatment effect, and $e_{ij}$ = error.
Analysis of Variance (ANOVA) Example
Analysis of Variance (ANOVA). Using the GLM approach through a coding matrix. Logic: if, for example, there are 3 groups and we know that participant number 12 is part of neither the first nor the second group, then we know that this participant is necessarily part of group 3.
Analysis of Variance (ANOVA). Performing ANOVA using GLM through a coding matrix. Logic: in other words, there are only 2 degrees of freedom in group assignment, so the third group column is eliminated. A value of 1 is assigned to the participants of group $i$ and a value of 0 to the other groups, while the last group receives a value of -1 in every column (to balance the coding).
Analysis of Variance (ANOVA). Then, for each subject, we associate its corresponding group coding; these codes are the independent variables ($X$), as in the sketch below.
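A short sketch of building such a coding matrix in NumPy; the function name `effect_coding` and the group labels are illustrative, not from the original slides:

```python
import numpy as np

def effect_coding(labels, k):
    """Build an N x (k-1) coding matrix: participants of group i get 1
    in column i, other groups get 0, and the last group gets -1 in
    every column (to balance the coding)."""
    X = np.zeros((len(labels), k - 1))
    for row, g in enumerate(labels):
        if g == k - 1:
            X[row, :] = -1      # last group: -1 everywhere
        else:
            X[row, g] = 1       # group g: 1 in its own column
    return X

labels = [0, 0, 1, 1, 2, 2]     # hypothetical assignment of 6 subjects to 3 groups
X = effect_coding(labels, k=3)
# X = [[ 1,  0], [ 1,  0], [ 0,  1], [ 0,  1], [-1, -1], [-1, -1]]
```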
Analysis of Variance (ANOVA). The model is then the regression $Xa = y$, solved through the SSCP (sums of squares and cross-products) matrix: $a = (X^\top X)^{-1} X^\top y$.
R-Square. $R^2$ is obtained from the SSCP results, and the corresponding test statistic is $F = \dfrac{R^2 / p}{(1 - R^2)/(N - p - 1)}$, where $N$ is the number of participants and $p$ the number of predictors (independent variables). A combined sketch follows below.
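A combined sketch, assuming the effect-coded design from the earlier example and made-up scores, of solving $a = (X^\top X)^{-1} X^\top y$ and computing $R^2$ and $F$:

```python
import numpy as np

# Effect-coded design for 6 hypothetical subjects in 3 groups, plus made-up scores.
X = np.array([[ 1,  0], [ 1,  0],      # group 1
              [ 0,  1], [ 0,  1],      # group 2
              [-1, -1], [-1, -1]])     # group 3
y = np.array([4.0, 5.0, 7.0, 8.0, 5.0, 6.0])

X1 = np.column_stack([np.ones(len(y)), X])   # prepend an intercept column
a = np.linalg.solve(X1.T @ X1, X1.T @ y)     # a = (X'X)^{-1} X'y via the SSCP matrix

y_hat = X1 @ a
r2 = 1 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()

N, p = len(y), X.shape[1]                    # participants, predictors
F = (r2 / p) / ((1 - r2) / (N - p - 1))      # F with (p, N - p - 1) degrees of freedom
```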
ANOVA Table. The null hypothesis is that the $R^2$ between the predictors and the criterion is zero. Since $F(3,32) = 5.86938$, $p < 0.05$, we reject $H_0$ and accept $H_1$: there is at least one group that differs from the others.
ANOVA Terminology. Coefficient of determination ($R^2$): the proportion of explained variation.
ANOVA. Now you know it! ANOVA is a special case of regression. The same logic can be applied to the t-test, factorial ANOVA, ANCOVA, and simple effects (Tukey, Bonferroni, LSD, etc.).
Principal Component Analysis (PCA)
PCA
Why:
- To discover or to reduce the dimensionality of the data set.
- To identify new meaningful underlying variables.
Assumptions:
- Sample size: about 300 (in general)
- Normality
- Linearity
- Absence of outliers among cases
PCA Illustration. (Figure: data cloud with the first and second principal components.)
PCA Preliminary: the raw data matrix $X$ is standardized into z scores, $z_x = (x - \bar{x})/s_x$, giving the matrix $Z_x$ (data and z scores shown on the original slide).
PCA Preliminary: from the z scores, the SSCP matrix and the correlation matrix are computed: SSCP $= Z^\top Z$ and $M = Z^\top Z / (N - 1)$. A sketch follows below.
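A minimal NumPy sketch of the standardization and correlation steps; the data values are made up:

```python
import numpy as np

# Hypothetical raw data: 5 observations on 3 variables.
X = np.array([[2.0, 4.1, 1.0],
              [3.0, 5.2, 0.5],
              [4.0, 6.8, 2.0],
              [5.0, 8.1, 1.5],
              [6.0, 9.9, 3.0]])

Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # column-wise z scores
M = Z.T @ Z / (len(X) - 1)                         # correlation matrix from the SSCP of Z
```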
PCA Eigenvalues and eigenvectors. Let's define a random vector as $v^{(0)} = [1, 1]^\top$. Now, if we compute the product of the correlation matrix $M$ with $v^{(0)}$ and re-multiply the result by $M$ again, again, and again, what will the result be after $k$ iterations?
PCA Eigenvalues and eigenvectors. After convergence: (1) the direction of the stable vector is the eigenvector ($v$); (2) the stable vector's lengthening factor is the eigenvalue ($\lambda$).
PCA Eigenvalues and eigenvectors. Once the first eigenvector (and associated eigenvalue) has been identified, its contribution is removed from the matrix (deflation: $M \leftarrow M - \lambda_1 v_1 v_1^\top$), and the process is repeated until all the eigenvectors and eigenvalues have been extracted, as in the sketch below.
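A sketch of this extraction loop in NumPy: power iteration for the dominant eigenpair, then the standard deflation step to "remove" it. The function names are illustrative; this assumes a symmetric, positive semi-definite matrix such as a correlation matrix:

```python
import numpy as np

def power_iteration(M, n_iter=1000):
    """Start from v(0) = [1, ..., 1]^T and repeatedly multiply by M;
    the direction converges to the dominant eigenvector, and the
    lengthening factor is the corresponding eigenvalue."""
    v = np.ones(M.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = M @ v
        v = w / np.linalg.norm(w)
    lam = v @ M @ v                     # Rayleigh quotient = eigenvalue
    return lam, v

def extract_all(M):
    """Extract every eigenpair: after each one is found, deflate the
    matrix (M <- M - lam * v v^T) and repeat on the remainder."""
    M = M.copy()
    pairs = []
    for _ in range(M.shape[0]):
        lam, v = power_iteration(M)
        pairs.append((lam, v))
        M = M - lam * np.outer(v, v)    # remove the extracted component
    return pairs
```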
PCA Eigenvalues and eigenvectors. There will be as many eigenvector/eigenvalue pairs as there are variables, and each eigenvector will be orthogonal to the others.
(Figure: the eigenvalues $\lambda_1, \lambda_2, \lambda_3, \lambda_4$ of $M$.)
PCA Eigenvalues and eigenvectors. How many are important? Plot the eigenvalues. Method 1: if the points on the graph tend to level out (show an "elbow"), the eigenvalues past the elbow are usually close enough to zero to be ignored. Method 2: limit the variance accounted for, e.g. keep enough components to explain 90%; a sketch of this method follows below.
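A sketch of Method 2 with hypothetical eigenvalues:

```python
import numpy as np

# Hypothetical eigenvalues of a correlation matrix, sorted in descending order.
eigvals = np.array([2.6, 0.9, 0.3, 0.2])

# Keep just enough components to account for 90% of the variance.
explained = np.cumsum(eigvals) / eigvals.sum()   # [0.65, 0.875, 0.95, 1.0]
n_keep = int(np.searchsorted(explained, 0.90) + 1)   # -> 3 components here
```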
PCA Eigenvalues and eigenvectors. (Figure: illustration of the data x1-x4 and the selected eigenvectors $\lambda_1$ and $\lambda_2$; the retained eigenvectors form the transposed loading matrix $F^\top$.)
PCA VARIMAX Rotation. Why? To improve readability: the VARIMAX rotation aims at finding a solution where an original variable loads highly on one particular factor and loads as low as possible on the other factors. The rotation is applied through a rotation matrix, in two dimensions $R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$.
PCA VARIMAX Rotation. The algorithm maximizes the VARIMAX index $V$, the sum of the variances of the squared component loadings, each variance being of the form $\sum (x - \bar{x})^2$. Written out, $V$ is a long equation that contains the rotation angle; an optimization technique is then used to find the angle that maximizes $V$, as in the sketch below.
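A sketch of this optimization for a two-component loading matrix, using a simple grid search over the rotation angle; the loading values and function names are illustrative, not from the original slides:

```python
import numpy as np

def rotate(L, theta):
    """Rotate a p x 2 loading matrix by angle theta (radians)."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return L @ R

def varimax_index(L):
    """V: sum over components of the variance of the squared loadings."""
    return (L ** 2).var(axis=0).sum()

# Hypothetical 4-variable, 2-component loading matrix.
L = np.array([[0.8, 0.4], [0.7, 0.5], [0.3, 0.8], [0.2, 0.9]])

thetas = np.linspace(0.0, np.pi / 2, 1000)       # candidate rotation angles
best = max(thetas, key=lambda t: varimax_index(rotate(L, t)))
U = rotate(L, best)   # rotated loadings: each variable loads more cleanly
```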
PCA VARIMAX Rotation. Maximizing $V$ yields the rotated component-loading matrix $U$ and the final value of the index $V$ (both shown on the original slide).
PCA VARIMAX Rotation. (Figure: loadings of x1-x4 on components $\lambda_1$ and $\lambda_2$, before and after rotation.)