
1 Principal Components and Factor Analysis Principal components

2 Intro In many disciplines we study phenomena or constructs that cannot be directly measured ▫(self-esteem, personality, intelligence) We often must take multiple measurements for each case, and in the end we may have more data than can be readily interpreted ▫Items are representations of underlying or latent factors.  We want to know what these factors are ▫We have an idea of the phenomena that a set of items represent (construct validity). Because of this, we'll want to "reduce" them to a smaller set of factors

3 Purpose of PCA/FA To find underlying latent constructs ▫As manifested in multiple items/variables To assess the associations among multiple factors To produce usable scores that reflect critical aspects of any complex phenomenon As an end in itself, and as a major step toward creating error-free measures

4 These problems can be addressed with factor analysis (general) The goal is to explain a set of data with fewer dimensions than the total number of observations per person It identifies linear combinations of variables that will account for the variance in the data set

5 Basic Concept If two items are highly correlated ▫They may represent the same phenomenon ▫If they tell us about the same underlying variance, combining them to form a single measure is reasonable for two reasons  Parsimony  Reduction in Error Suppose one is just a little better than the other at representing this underlying phenomenon? And suppose you have 3 variables, or 4, or 5, or 100? FACTOR ANALYSIS (general) looks for the phenomena underlying the observed variance and covariance in a set of variables. These phenomena are called "factors" or "principal components."

6 [Figure] Example of PCA/FA: two factors, Substance Use (F1) and Psychosocial Functioning (F2). F1's indicators are Alcohol Use (X1), Marijuana Use (X2), and Hard Drug Use (X3); F2's are Distress (Y1), Self-Esteem (Y2), and Powerlessness (Y3). Each factor has 3 main (bold-faced) loadings and 3 inconsequential (dashed-line) loadings.

7 PCA/FA While often used similarly, PCA and FA are distinct from one another Principal Components Analysis ▫Extracts all the factors underlying a set of variables ▫The number of factors = the number of variables ▫Completely explains the variance in each variable Factor Analysis ▫Analyzes only the shared variance  Error is estimated apart from shared variance

8 FA vs. PCA conceptually FA produces factors; PCA produces components Factors cause variables; components are aggregates of the variables The underlying causal model is fundamentally distinct between the two ▫Some do not consider PCA as part of the FA family*

9 Contrasting the underlying models* PCA ▫Extraction is the process of forming PCs as linear combinations of the measured variables, as we have done with our other techniques  PC1 = b11X1 + b21X2 + … + bk1Xk  PC2 = b12X1 + b22X2 + … + bk2Xk  PCf = b1fX1 + b2fX2 + … + bkfXk Common factor model ▫Each measure X has two contributing sources of variation: the common factor ξ and the specific or unique factor δ:  X1 = λ1ξ + δ1  X2 = λ2ξ + δ2  Xk = λkξ + δk
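A minimal R sketch of the PCA side of this contrast (the data matrix X here is fabricated purely for illustration): the weights b are the eigenvectors of the correlation matrix, and the variance of each resulting linear combination is its eigenvalue. The common factor model, by contrast, is what factanal() estimates in base R.
set.seed(1)
X <- matrix(rnorm(100 * 4), ncol = 4)   # 100 cases, k = 4 measured variables (made up)
R <- cor(X)                             # correlation matrix
e <- eigen(R)                           # eigen decomposition of R
W <- e$vectors                          # column f holds the weights b1f ... bkf
PC <- scale(X) %*% W                    # PCf = b1f*X1 + ... + bkf*Xk
round(diag(cov(PC)), 3)                 # component variances...
round(e$values, 3)                      # ...equal the eigenvalues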

10 FA vs. PCA PCA ▫PCA is mathematically precise in orthogonalizing dimensions ▫PCA redistributes all variance into orthogonal components ▫PCA uses all variable variance and treats it as true variance FA ▫FA is conceptually realistic in identifying common factors ▫FA distributes common variance into orthogonal factors ▫FA recognizes measurement error, separating it from true factor variance

11 FA vs. PCA In some sense, PCA and FA are not so different conceptually from what we have been doing since multiple regression ▫Creating linear combinations ▫PCA especially falls along the lines of what we've already been doing What is different from previous methods is that there is no IV/DV distinction ▫Just a single set of variables

12 Summary PCA's goal is to analyze variance and reduce the observed variables PCA reproduces the R matrix perfectly PCA aims to extract as much variance as possible with the fewest factors PCA gives a unique solution FA analyzes covariance (communality) FA is a close approximation to the R matrix FA's goal is to explain as much of the covariance as possible with a minimum number of factors that are tied specifically to assumed constructs FA can give multiple solutions depending on the method and the estimates of communality

13 Questions Three general goals: data reduction, describe relationships and test theories about relationships How many interpretable factors exist in the data? How many factors are needed to summarize the pattern of correlations?

14 Questions Which factors account for the most variance? How well does the factor structure fit a given theory? What would each subject’s score be if they could be measured directly on the factors? What does each factor mean? What is the percentage of variance in the data accounted for by the factors? (FA or how much by the most notable components in PCA)

15 Assumptions/Issues Assumes reliable variables/correlations ▫Very much affected by missing data, outlying cases and truncated data ▫Data screening methods (e.g. transformations, etc.) may improve poor factor analytic results Normality ▫Univariate - normally distributed variables make the solution stronger but not necessary if we are using the analysis in a purely descriptive manner ▫Multivariate – is assumed when assessing the number of factors

16 Assumptions/Issues No outliers ▫Influence on correlations would bias results Variables as outliers ▫Some variables don't work ▫Explain very little variance ▫Relate poorly with the factors ▫Low squared multiple correlations as DV with other items as predictors ▫Low loadings

17 Assumptions/Issues Factorable R matrix ▫Need inter-item correlations > .30 or PCA/FA isn't going to do much for you ▫Large inter-item correlations do not guarantee a solution either  While two variables may be highly correlated, they may not be correlated with the others ▫The matrix of partial correlations (each adjusted for the other variables) and Kaiser's measure of sampling adequacy can help assess factorability  Kaiser's measure is the ratio of the sum of squared correlations to the sum of squared correlations plus the sum of squared partial correlations  It approaches 1 if the partials are small; we typically desire about .6+ Multicollinearity/Singularity ▫In PCA it is not a problem; no matrix inversion is necessary  As such, PCA is one solution for dealing with collinearity in regression ▫Investigate tolerances, det(R)
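A short sketch of Kaiser's measure, assuming the psych package is available (R below stands for the item correlation matrix):
# install.packages("psych")   # if not already installed
library(psych)
KMO(R)   # overall MSA approaches 1 when partial correlations are small; ~.6+ is desirable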

18 Assumptions/Issues Sample Size and Missing Data ▫True missing data are handled in the usual ways ▫Factor analysis via maximum likelihood needs large samples, which is one of its few drawbacks The more reliable the correlations are, the smaller the number of subjects needed Need enough subjects for stable estimates How many? Depends on the nature of the data and the number of parameters to be estimated ▫For example, a simple setting with few variables and clean data might not need as many ▫Even several hundred data points might not provide a meaningful result (PCA), or even converge on a solution (FA), for a more complex problem with messy data and lower correlations among the variables

19 Other issues No readily defined criteria by which to judge the outcome ▫Before, we had R², canonical correlation, classification rates Choice of rotation depends entirely on the researcher's estimation of interpretability Often used when other outcomes/analyses are not so hot, just to have something to talk about*

20 Extraction Methods for Factor Analytic Approaches There are many (dozens at least) All extract orthogonal sets of factors (components) that reproduce the R matrix Different techniques – some maximize variance, others minimize the residual matrix (R – reproduced R) With a large, stable sample, interpretations will be similar

21 Extraction Methods for Factor Analytic Approaches Usually solutions are difficult to interpret without a rotation The output will differ depending on ▫Extraction method ▫Communality estimates ▫Number of factors extracted ▫Rotational Method

22 Extraction Methods for Factor Analytic Approaches PCA vs. FA (family) PCA ▫begins with 1s on the diagonal of the correlation matrix ▫as such, all variance is extracted and each variable is given equal weight FA ▫begins with communality estimates (e.g. squared multiple correlation, reliability estimate) on the diagonal ▫analyzes only common/shared variance

23 Extraction Methods PCA ▫Extracts maximum variance with each component ▫First component is a linear combination of variables that maximizes component score variance for the cases ▫The second (etc.) extracts the max. variance from the residual matrix left over after extracting the first component (therefore orthogonal to the first) ▫If all components retained, all variance explained

24 PCA First factor is the one that accounts for the most variance ▫It is the latent phenomena to which the items are as a group most strongly associated Second represents the factor that accounts for the most of what is left And so on till there is none left to account for

25 PCA Factors are linear combinations of variables. ▫These combinations are based on weights (eigenvectors) developed by the analysis FA and PCA are not much different than canonical correlation in terms of generating canonical variates from linear combinations of variables ▫Although there are now no “sides” of the equation, and you’re not necessarily correlating the “factors”, “components”, “variates”, etc. The factor loading for each item/variable is the r between it and the factor (i.e., the underlying shared variance) However, unlike many of the analyses so far there is no statistical criterion to compare the linear combination to ▫In MANOVA we create linear combinations that maximally differentiate groups ▫In Canonical correlation one linear combination is used to maximally correlate with another

26 PCA Once again we come to eigenvalues and eigenvectors Eigenvalues ▫Conceptually can be considered to measure the strength (relative length) of an axis ▫Derived from an eigen analysis of a square symmetric matrix (covariance or correlation) Eigenvector ▫Each eigenvalue has an associated eigenvector. An eigenvalue is the length of an axis; the eigenvector determines its orientation in space. ▫The values in an eigenvector are not unique, because any coordinates that describe the same orientation would be acceptable.

27 Data Example data of women's height and weight
height  weight  Zheight  Zweight
57       93    -1.774   -1.965
58      110    -1.471   -0.873
60       99    -0.864   -1.580
59      111    -1.168   -0.809
61      115    -0.561   -0.552
60      122    -0.864   -0.103
62      110    -0.258   -0.873
61      116    -0.561   -0.488
62      122    -0.258   -0.103
63      128     0.045    0.283
62      134    -0.258    0.668
64      117     0.349   -0.424
63      123     0.045   -0.039
65      129     0.652    0.347
64      135     0.349    0.732
66      128     0.955    0.283
67      135     1.259    0.732
66      148     0.955    1.567
68      142     1.562    1.182
69      155     1.865    2.017
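A quick R sketch entering these data and reproducing the z-score columns (scale() standardizes each variable):
height <- c(57, 58, 60, 59, 61, 60, 62, 61, 62, 63,
            62, 64, 63, 65, 64, 66, 67, 66, 68, 69)
weight <- c(93, 110, 99, 111, 115, 122, 110, 116, 122, 128,
            134, 117, 123, 129, 135, 128, 135, 148, 142, 155)
X <- cbind(height, weight)
Z <- scale(X)    # z-scores; should match the Zheight and Zweight columns above
round(Z, 2)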

28 Data transformation Consider two variables, height and weight X would be our data matrix, w our eigenvector (coefficients) Multiplying our original data by these weights* results in a column vector of values ▫z1 = Xw Multiplying a matrix by a vector produces a linear combination The variance of this linear combination is the eigenvalue

29 Data transformation Consider a gal 5'0" tall and 122 pounds She is -.86 SD from the mean height and -.10 SD from the mean weight for these data The first eigenvector associated with the normalized data* is [.707, .707], so the resulting value for that data point is (-.86)(.707) + (-.10)(.707) ≈ -.68 So with the top graph we have taken the original data point and projected it onto a new axis, -.68 units from the origin If we do this for all data points we will have projected them onto a new axis/component/dimension/factor/linear combination The length of the new axis is the eigenvalue
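Continuing the height/weight sketch above, the projection can be checked directly (the sign of an eigenvector is arbitrary, so the score may print as +.68):
e  <- eigen(cor(X))          # eigen decomposition of the correlation matrix
w1 <- e$vectors[, 1]         # roughly (.707, .707) for two standardized variables
z1 <- Z %*% w1               # projections of all cases onto the new axis
round(z1[6], 2)              # the 60-inch, 122-pound case: about -.68
round(var(z1), 3)            # variance of the projections...
round(e$values[1], 3)        # ...equals the first eigenvalue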

30 Data transformation Suppose we have more than one dimension/factor? In our discussion of the techniques thus far, we have said that each component or dimension is independent of the previous one What does independent mean? ▫r = 0 What does this mean geometrically in the multivariate sense? It means that the next axis specified is perpendicular to the previous one Note how r is represented even here ▫The cosine of the 90° angle formed by the two axes is… 0 Had the lines been on top of each other (i.e. perfectly correlated), the angle formed by them would be zero, whose cosine is 1 ▫r = 1

31 Data transformation The other eigenvector associated with the data is (-.707, .707) Doing as we did before, we'd create that second axis, and then could plot the data points along these new axes* We now have two linear combinations, each of which is interpretable as the vector of projections of the original data points onto a directed line segment Note how the basic shape of the original data has been perfectly maintained The effect has been to rotate the configuration (45°) to a new orientation while preserving its essential size and shape ▫It is an orthogonal transformation ▫Note that we have been talking of specifying/rotating axes, but rotating the points themselves would give us the same result

32 Stretching and shrinking Note that with what we have now there are two new variables, Z1 and Z2, with very different variances ▫Z1 much larger If we want them to be equal we can simply standardize* those Z1 and Z2 values ▫s² = 1 for both In general, multiplying a matrix by a scalar will shrink or stretch the plot Here, let Z be the matrix of the Z variables and D a diagonal matrix with the standard deviations on the diagonal The resulting plot would now be circular

33 Singular value decomposition Given a data matrix X, we can use one matrix operation to stretch or shrink the values ▫Multiply by a scalar  < 1 → shrink  > 1 → stretch We've just seen how to rotate the values ▫Matrix multiplication In general we can start with a matrix X and get ▫Zs = X W D⁻¹ W here is the matrix that specifies the rotation by some amount (degrees) With a little reworking ▫X = Zs D W' What this means is that any data matrix X can be decomposed into three parts: ▫A matrix of uncorrelated variables with variance/sd = 1 (Zs) ▫A stretching and shrinking transformation (D) ▫An orthogonal rotation (W') Finding these components is called singular value decomposition
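A sketch of this decomposition with base R's svd(), applied to the centered height/weight data from before:
Xc <- scale(X, center = TRUE, scale = FALSE)   # centered data matrix
s  <- svd(Xc)                                  # Xc = U diag(d) V'
Zs <- s$u * sqrt(nrow(Xc) - 1)                 # uncorrelated variables with sd = 1
D  <- diag(s$d / sqrt(nrow(Xc) - 1))           # stretching/shrinking transformation
W  <- s$v                                      # orthogonal rotation
max(abs(Xc - Zs %*% D %*% t(W)))               # ~0: X is recovered as Zs D W'
round(cov(Zs), 3)                              # identity matrix: uncorrelated, variance 1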

34 The Determinant The determinant of a var/covar matrix provides a single measure to characterize the variance and covariance in the data Generalized variance ▫10.87*242.46 – 44.51*44.51 ≈ 654 Note ▫44.51/(10.87*242.46)^.5 = r = .867 Var/covar matrix for the height and weight data:
 10.87   44.51
 44.51  242.46
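The same quantities can be checked in R from the data entered earlier:
S <- cov(X)                          # ~10.87 and ~242.46 on the diagonal, ~44.51 off it
det(S)                               # ~654, the generalized variance
S[1, 2] / sqrt(S[1, 1] * S[2, 2])    # ~.867, the correlation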

35 Geometric interpretation of the determinant Suppose we use the values from our variance/covariance matrix and plot them geometrically as coordinates The vectors emanating from the origin to those points define a parallelogram In general, the skinnier the parallelogram, the larger the correlation ▫Here ours is pretty skinny due to an r = .867 The determinant is the area of this parallelogram, and thus will be smaller with larger correlations ▫Recall our collinearity problem The top figure is the parallelogram associated with the raw variables' var/covar matrix The bottom is that of the components Z1 and Z2, which have zero correlation

36 The scatterplots illustrate two variables with increasing correlations: 0, .25, .50, .75, 1. The smaller plots are the plot of the correlation matrix in 2d space (e.g. the coordinates for the 4th diagram are 1, .75 for one point and .75, 1 for the other). The eigenvalues associated with the correlation matrix are the lengths of the major and minor axes, e.g. 1.75 and .25 for diagram 4. Also drawn is the ellipse specified by the axes

37 My head hurts! What is all this noise?

38 PCA It may take a while for that stuff to sink in, but at this point we have the tools necessary to jump into the application of PCA "Principal components" Extraction process and resulting characteristics

39 Meaning of "Principal Components" "Component" analyses are those that are based on the "full" correlation matrix 1.00s in the diagonal "Principal" analyses are those for which each successive factor... accounts for maximum available variance is orthogonal to (uncorrelated with, independent of) all prior factors full solution (as many factors as variables), i.e. accounts for all the variance

40 Application of PC analysis Components analysis is a kind of “data reduction” start with an inter-related set of “measured variables” identify a smaller set of “composite variables” that can be constructed from the “measured variables” and that carry as much of their information as possible A “Full components solution”... has as many components as variables accounts for 100% of the variables’ variance each variable has a final communality of 1.00 A “Truncated components solution” … has fewer components than variables accounts for <100% of the variables’ variance each variable has a communality < 1.00

41 The steps of a PC analysis Compute the correlation matrix Extract a full components solution Determine the number of components to “keep” total variance accounted for variable communalities interpretability replicability “Rotate” the components and interpret (name) them Compute “component scores” “Apply” components solution theoretically -- understand the meaning of the data reduction statistically -- use the component scores in other analyses

42 PC Factor Extraction Extraction is the process of forming PCs as linear combinations of the measured variables as we have done with our other techniques  PC1 = b11X1 + b21X2 + … + bk1Xk  PC2 = b12X1 + b22X2 + … + bk2Xk  PCf = b1fX1 + b2fX2 + … + bkfXk The goal is to reproduce as much of the information in the measured variables with as few PCs as possible Here's the thing to remember… We usually perform factor analyses to "find out how many groups of related variables there are" … however … The mathematical goal of extraction is to "reproduce the variables' variance, efficiently"

43 3 variable example Consider 3 variables with the correlations displayed In a 3d sense we might envision their relationship as shown, with the shadows being what the scatterplots would roughly look like for each bivariate relationship among X1, X2, and X3

44 The first component identified

45 The variance of this component, its eigenvalue, is 2.063 In other words it accounts for twice as much variance as any single variable* Note: with 3 variables, 2.063/3 = .688, i.e. 68.8% of the variance accounted for*

46 PCA In principal components, we extract as many components as there are variables As mentioned previously, each component is uncorrelated with the previous If we save the component scores and were to look at their graph it would resemble something like this

47 How do we interpret the components? As we have done with the other techniques, the component loadings can inform us as to their interpretation As before, they are the original variables' correlations with the component In this case, all variables load nicely on the first component, which is probably the only one to interpret since the others do not account for nearly as much variance

48 Here is an example of magazine readership Underlined loadings are >.30 How might this be interpreted?

49 Applied example Six items ▫Three sadness, three relationship quality ▫N = 300 PCA

50 Start with the Correlation Matrix

51 Communalities are 'Estimated' A measure of how much variance of the original variables is accounted for by the observed factors Uniqueness is 1 – communality With PCA retaining all components, communality always = 1 As we'll see with FA, the approach will be different ▫The initial value is the multiple R² for the association between an item and all the other items in the model Why 1.0? ▫PCA analyzes all the variance for each variable ▫FA only the shared variance

52 What are we looking for? Any factor whose eigenvalue is less than 1.0 is in most cases not going to be retained for interpretation ▫Unless it is very close or has a readily understood and interesting meaning *Loadings that are: ▫more than .5 are good ▫between .3 and .5 are ok ▫less than .3: small Matrix reproduction ▫All the information about the correlation matrix is maintained ▫Correlations can be reproduced exactly in PCA  Sum of cross-loadings

53 Assessing the variance accounted for The eigenvalue is an index of the strength of the factor, the amount of variance it accounts for. It is the sum of the squared loadings for that factor/component The proportion of variance accounted for is the eigenvalue divided by the number of items or variables

54 Factor Loadings Eigenvalue of factor 1 = .609² + .614² + .593² + .728² + .767² + .764² = 2.80
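Checking the sum in R with the loadings shown on the slide:
load1 <- c(.609, .614, .593, .728, .767, .764)
sum(load1^2)        # ~2.80, the eigenvalue of factor 1
sum(load1^2) / 6    # proportion of variance accounted for (6 items)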

55 Reproducing the correlation matrix (R) Sum the products of the loadings for two variables across all factors ▫For RQ1 and RQ2:  (.61 * .61) + (.61 * .57) + (-.12 * -.41) + (-.45 * .33) + (.06 * .05) + (.20 * -.16) = .59, the original correlation  If we kept only the first two factors, the reproduced correlation = .72 Note that an index of the quality of a factor analysis (as opposed to PCA) is the extent to which the factor loadings can reproduce the correlation matrix*
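A sketch of the reproduction for RQ1 and RQ2, using the loadings quoted above:
rq1 <- c(.61, .61, -.12, -.45, .06, .20)   # RQ1's loadings on the six components
rq2 <- c(.61, .57, -.41,  .33, .05, -.16)  # RQ2's loadings
sum(rq1 * rq2)             # ~.59, the original correlation
sum(rq1[1:2] * rq2[1:2])   # ~.72, keeping only the first two components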

56 Variance Accounted For For items ▫The sum of the squared loadings (i.e., weights) across the factors is the amount of variance accounted for in each item ▫Item 1:  .61² + .61² + (-.12)² + (-.45)² + .06² + .20²  = .37 + .37 + .015 + .20 + .004 + .04 ≈ 1.0  For the first two factors: .74 For components ▫How much variance is accounted for by the components that will be retained?

57 When is it appropriate to use PCA? PCA is largely a descriptive procedure In our examples, we are looking at variables with decent correlations. However, if the variables are largely uncorrelated, PCA won't do much for you ▫It may just provide components that each correspond to an individual variable, i.e. nothing is gained One may use Bartlett's sphericity test to determine whether such an approach is appropriate It tests the null hypothesis that the R matrix is an identity matrix (1s on the diagonal, 0s off the diagonal) When the determinant of R is small (recall from before that this implies strong correlation), the chi-square statistic will be large → reject H0, and PCA would be appropriate for data reduction One should note though that it is a powerful test, and will usually result in rejection with typical sample sizes One may instead refer to estimation of practical effect rather than a statistical test ▫Are the correlations worthwhile?
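A sketch of Bartlett's test using the usual chi-square approximation based on the determinant of R (n is the sample size, p the number of variables):
bartlett_sphericity <- function(R, n) {
  p     <- ncol(R)
  chisq <- -((n - 1) - (2 * p + 5) / 6) * log(det(R))   # small det(R) -> large chi-square
  df    <- p * (p - 1) / 2
  c(chisq = chisq, df = df, p = pchisq(chisq, df, lower.tail = FALSE))
}
bartlett_sphericity(cor(X), n = nrow(X))   # reject H0 -> data reduction is worthwhile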

58 How should the data be scaled? In most of our examples we have been using the R matrix instead of the var-covar matrix As PCA seeks to maximize variance, it can be sensitive to scale differences across variables Variables with a larger range of scores would thus have more of an impact on the linear combination created As such, the R matrix should be used, except perhaps in cases where the items are on the same scale (e.g. Likert) The values involved will change (e.g. eigenvalues), though the general interpretation may not
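In R this choice corresponds to the scale. argument of prcomp() (X is any numeric data matrix):
prcomp(X, scale. = FALSE)   # covariance-based: variables with large variance dominate
prcomp(X, scale. = TRUE)    # correlation-based: each variable standardized first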

59 How many components should be retained? Kaiser’s Rule ▫What we’ve already suggested i.e. eigenvalues over 1 ▫The idea is that any component should account for at least as much as a single variable Another perspective on this is to retain as many components as will account for X amount of variance ▫Practical approach Scree Plot ▫Look for the elbow  Look for the point after which the remaining eigenvalues decrease in linear fashion and retain only those ‘above’ the elbow ▫Not really a good primary approach though may be consistent with others
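A sketch of both rules with base R: list the eigenvalues, compare them to 1, and draw the scree plot.
ev <- eigen(cor(X))$values
ev                                  # Kaiser's rule: retain components with eigenvalue > 1
cumsum(ev) / sum(ev)                # or retain enough components for a set % of variance
plot(ev, type = "b", xlab = "Component", ylab = "Eigenvalue", main = "Scree plot")
abline(h = 1, lty = 2)              # reference line for Kaiser's rule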

60 How many components should be retained? Horn's Procedure This is a different approach, which suggests creating a set of random data of the same size (N cases and p variables) The idea is that, in maximizing the variance accounted for, PCA has a good chance of capitalizing on chance Even with random data, the first eigenvalue will be > 1 As such, retain components with eigenvalues greater than that produced by the largest component of the random data
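A bare-bones sketch of the procedure (a common variant compares each observed eigenvalue to the mean random eigenvalue in the same position; psych::fa.parallel does this as well, if that package is available):
parallel_eigen <- function(n, p, reps = 100) {
  sims <- replicate(reps, eigen(cor(matrix(rnorm(n * p), n, p)))$values)
  rowMeans(sims)                    # mean random eigenvalue for each component
}
obs  <- eigen(cor(X))$values
rand <- parallel_eigen(nrow(X), ncol(X))
which(obs > rand)                   # components to retain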

61 Rotation Sometimes our loadings will be a little difficult to interpret initially Given such a case we can ‘rotate’ the solution such that the loadings perhaps make more sense ▫This is typically done in factor analysis but is possible here too An orthogonal rotation is just a shift to a new set of coordinate axes in the same space spanned by the principal components

62 Rotation You can think of it as shifting the axes or rotating the 'egg' The gist is that the relations among the items are maintained, while maximizing their more natural loadings and minimizing 'off-loadings'* Note that since PCA initially creates independent components, orthogonal rotations that maintain this independence are typically used ▫Loadings will be either large or small, with little in between Varimax is the most common rotation ▫It maximizes the variance of the squared loadings for each component (hence 'varimax')
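A sketch of a varimax rotation in base R; X stands for any item-level data matrix, and keeping 2 components is just for illustration:
p   <- prcomp(X, scale. = TRUE)
L   <- p$rotation[, 1:2] %*% diag(p$sdev[1:2])   # loadings = eigenvectors * sqrt(eigenvalues)
rot <- varimax(L)
rot$loadings                                     # rotated loadings: large or small, little in between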

63 Other issues: How do we assess validity? Cross-validation ▫Holdout sample, as we have discussed before ▫About a 2/3, 1/3 split ▫Using the eigenvectors from the original components, we can create new components with the new data and see how much variance each accounts for ▫Hope it's similar to the original solution Jackknife ▫With smaller samples, conduct PCA multiple times, each with a specific case held out ▫Using the eigenvectors, calculate the component score for the case held out ▫Compare the eigenvalues for the components involved Bootstrap ▫In the absence of a holdout sample, we can create a bootstrapped sample to perform the same function

64 Other issues: Factoring items vs. factoring scales Items are often factored as part of the process of scale development Check if the items “go together” like the scale’s author thinks Scales (composites of items) are factored to … ▫examine construct validity of “new” scales ▫test “theory” about what constructs are interrelated Remember, the reason we have scales is that individual items are typically unreliable and have limited validity

65 Other issues: Factoring items vs. factoring scales The limited reliability and validity of items means that they will be measured with less precision, and so their intercorrelations for any one sample will be "fraught with error" Since factoring starts with R, factoring of items is likely to yield spurious solutions -- replication of item-level factoring is very important! Is the issue really "items vs. scales"? ▫No -- it is really the reliability and validity of the "things being factored", scales having these properties more than items

66 Other issues: When is it appropriate to use PCA? Another reason to use PCA, which isn't a great one obviously, is that the maximum likelihood estimation involved in an Exploratory Factor Analysis does not converge PCA will always give a result (it does not require matrix inversion) and so can often be used in such a situation We'll talk more on this later, but in data reduction situations EFA is to be preferred for social scientists and others who use imprecise measures

67 Other issues: Selecting Variables for a Factor Analysis Sometimes a researcher has access to a data set that someone else has collected -- an "opportunistic data set" While this can be a real money/time saver, be sure to recognize the possible limitations Be sure the sample represents a population you want to talk about Carefully consider variables that "aren't included" and the possible effects their absence has on the resulting factors ▫this is especially true if the data set was chosen to be "efficient", with variables chosen to cover several domains You should plan to replicate any results obtained from opportunistic data

68 Other issues: Selecting the Sample for a Factor Analysis How many? Keep in mind that R, and so the factor solution, is the same no matter how many cases are used -- so the point is the representativeness and stability of the correlations Advice about the subject/variable ratio varies pretty dramatically ▫5-10 cases per variable ▫300 cases minimum (maybe + # of items) Consider that, as for other statistics, your standard error for a correlation decreases with increasing sample size

69 A note about SPSS SPSS does provide a means for principal components analysis However, its presentation (much like many textbooks for that matter) blurs the distinction between PCA and FA, such that they are easily confused Although they are both data dimension reduction techniques, they do go about the process differently, have different implications regarding the results and can even come to different conclusions

70 A note about SPSS In SPSS, the menu is ‘factor’ analysis (even though ‘principal components’ is the default technique setting) Unlike other programs PCA isn’t even a separate procedure (it’s all in the Factor syntax) In order to perform PCA, make sure you have principal components selected as your extraction method, analyze the correlation matrix, and specify the number of factors to be extracted equals the number of variables Even now, your loadings will be different from other programs, which are scaled such that the sum of their squared values = 1 In general be cautious when using SPSS

71 No frills PCA in R*
pca <- princomp(Dataset, cor = TRUE)   # cor = TRUE analyzes the correlation matrix
pca                                    # brief printout
summary(pca)                           # variance accounted for by each component
pca$loadings                           # loadings (unit-length eigenvectors)
pca$scores                             # component scores
plot(pca)                              # scree plot

72 Other functions
http://rss.acs.unt.edu/Rdoc/library/pcaMethods/html/00Index.html
library(pcaMethods)
▫pca  Uses modern PCA approaches (Bayesian, nonlinear, etc.)
 nipalsPca  Uses the 'nipals' technique to estimate missing values first; can be specified in the 'pca' function
 bpca  Same, but uses a Bayesian method
 robustPCA
▫Q2  Can perform cross-validation

