Dimension Reduction Methods
statistical methods that provide information about point scatters in multivariate space ("factor analytic methods")
–simplify complex relationships between cases and/or variables
–make it easier to recognize patterns
identify and describe 'dimensions' that underlie the input data
–may be more fundamental than those directly measured, yet hidden from view
reduce the dimensionality of the research problem
–benefit = simplification; fewer variables to worry about
identify sets of variables with similar "behaviour"
How?
Basic Ideas
imagine a point scatter in multivariate space:
–the specific values of the numbers used to describe the variables don't matter
–we can do anything we want to the numbers, provided we don't distort the spatial relationships that exist among cases
some kinds of manipulations help us think about the shape of the scatter in more productive ways
imagine a two-dimensional scatter of points that show a high degree of correlation…
[figure: x–y scatter with means x̄ and ȳ, and the orthogonal regression line through the scatter]
Why bother?
more "efficient" description
–the 1st variable captures the maximum variance
–the 2nd variable captures the maximum amount of residual variance, at right angles (orthogonal) to the first
the 1st variable may capture so much of the information content in the original data set that we can ignore the remaining axis
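The "efficiency" claim above can be checked numerically. Below is a minimal sketch in Python with NumPy, on made-up data: for two strongly correlated variables, the eigendecomposition of the covariance matrix gives the orthogonal regression axes, and the first axis captures nearly all of the variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two strongly correlated variables (e.g. hypothetical length/width measurements).
x = rng.normal(0.0, 1.0, 500)
y = 0.9 * x + rng.normal(0.0, 0.3, 500)
data = np.column_stack([x, y])

# Eigendecomposition of the covariance matrix gives the orthogonal
# regression axes: the first eigenvector is the line through the scatter
# that captures maximum variance; the second is at right angles to it.
cov = np.cov(data, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)
eigenvalues = eigenvalues[::-1]          # eigh returns ascending order

share = eigenvalues[0] / eigenvalues.sum()
print(f"variance captured by the first axis: {share:.1%}")
```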
other advantages…
you can score original cases (and variables) in the new space, and plot them…
–spatial arrangements may reveal relationships that were hidden in higher-dimension space
–may reveal subsets of variables based on correlations with new axes…
[figure: scatter of cases on the original axes, length vs. width]
[figure: the same scatter on derived axes, "size" and "shape"]
[figure: vessel categories (storage/cooking, cooking, service?, ritual candelero) plotted against public–private and domestic–ritual axes]
Principal Components Analysis (PCA)
why:
–clarify relationships among variables
–clarify relationships among cases
when:
–significant correlations exist among variables
how:
–define new axes (components)
–examine correlation between axes and variables
–find scores of cases on new axes
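The why/when/how recipe can be sketched in a few lines of NumPy. Everything here is illustrative: the data are synthetic and the variable structure (two correlated variables plus one independent one) is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 100 cases, 3 variables; the first two are correlated.
base = rng.normal(size=100)
X = np.column_stack([
    base + rng.normal(scale=0.4, size=100),
    base + rng.normal(scale=0.4, size=100),
    rng.normal(size=100),                    # mostly independent variable
])

# Standardize, so the analysis works on the correlation matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# 1. Define new axes (components): eigenvectors of the correlation matrix.
R = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# 2. Loadings: correlations between original variables and each component.
loadings = eigvecs * np.sqrt(eigvals)

# 3. Scores: positions of the original cases on the new axes.
scores = Z @ eigvecs

# Total variance: ≈ 3.0, one unit per standardized variable.
print(eigvals.sum())
```

Note that the squared loadings on each component sum to that component's eigenvalue, which is the definition used on the later slide.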
[figure: variables x1–x4 plotted against components pc 1 and pc 2, with correlations r = 1, r = 0, r = −1 marked]
component loading: the correlation between a variable and a component
eigenvalue: the sum of all squared loadings on one component
eigenvalues
the sum of all eigenvalues = 100% of the variance in the original data
proportion accounted for by each eigenvalue = ev/n (n = # of vars.)
for a correlation matrix, the variance in each variable = 1
–if an eigenvalue < 1, that component explains less variance than one of the original variables
–but 0.7 may be a better threshold…
'scree plots' show the trade-off between loss of information and simplification
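As a worked example of the thresholds above, take a hypothetical set of eigenvalues from a PCA of six standardized variables (the values are invented for illustration):

```python
import numpy as np

# Hypothetical eigenvalues from a PCA of 6 standardized variables.
eigenvalues = np.array([3.1, 1.4, 0.7, 0.4, 0.25, 0.15])
n_vars = eigenvalues.size

# Each eigenvalue's share of total variance is ev / n: the total is n
# for a correlation matrix, since each standardized variable has variance 1.
proportions = eigenvalues / n_vars
cumulative = np.cumsum(proportions)

kaiser = (eigenvalues >= 1.0).sum()     # components "worth" one variable
relaxed = (eigenvalues >= 0.7).sum()    # the looser 0.7 threshold

print(proportions.round(3))
print(f"Kaiser keeps {kaiser} components, 0.7 threshold keeps {relaxed}")
```

Plotting `eigenvalues` against component number gives the scree plot; the "elbow" marks where extra components stop buying much information.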
Mandara Region knife morphology
J. Yellen – San ethnoarchaeology (1977)
CAMP: the camp identification number (1–16)
LENGTH: the total number of days the camp was occupied
INDIVID: the number of individuals in the principal period of occupation of the camp (not all individuals were at the camp for the entire LENGTH of occupation)
FAMILY: the number of families occupying the site
ALS: the absolute limit of scatter; the total area (m²) over which debris was scattered
BONE: the number of animal bone fragments recovered from the site
PERS_DAY: the actual number of person-days of occupation (not the product of INDIVID × LENGTH, since not all individuals were at the camp for the entire time)
Correspondence Analysis (CA)
like a special case of PCA: transforms a table of numerical data into a graphic summary
–hopefully a simplified, more interpretable display
–deeper understanding of the fundamental relationships/structure inherent in the data
–a map of basic relationships, with much of the "noise" eliminated
–usually reduces the dimensionality of the data…
CA – basic ideas
derived from methods of contingency table analysis
most suited for analysis of categorical data: counts, presence/absence data
–possibly better to use PCA for continuous (i.e., ratio) data
–but CA makes no assumptions about the distribution of the input variables…
simultaneously R-mode and Q-mode analysis
–derives two sets of eigenvalues and eigenvectors (CA axes, analogous to PCA components)
–input data are scaled so that both sets of eigenvectors occupy very comparable spaces
–can reasonably compare both variables and cases in the same plots
CA output
CA (factor) scores
–for both cases and variables
percentage of total inertia per axis
–like variance in PCA; relates to the dispersal of points around an average value
–inertia not accounted for by the displayed axes shows up as distortion in a graphic display
loadings
–correlations between rows/columns and axes
–which of the original entities are best accounted for by which axis?
"mass"
as in PCA, new axes maximize the spread of observations in rows/columns
–spread is measured as inertia, not variance
–based on a "chi-squared" distance, assessed separately for cases and variables (rows and columns)
contributions to the definition of CA axes are weighted on the basis of row/column totals
–ex.: pottery counts from different assemblages; larger collections will have more influence than smaller ones
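The masses, chi-squared distances, and inertia described above can be sketched directly. This is a minimal from-scratch CA (the pottery counts are invented for illustration): the standardized residuals of the contingency table are decomposed by SVD, and the squared singular values are the inertias of the CA axes.

```python
import numpy as np

# Hypothetical pottery counts: rows = assemblages, columns = ware types.
N = np.array([[30, 10,  5],
              [10, 40, 15],
              [ 5, 20, 60]], dtype=float)

P = N / N.sum()                    # correspondence matrix
r = P.sum(axis=1)                  # row masses (assemblage weights)
c = P.sum(axis=0)                  # column masses (ware weights)

# Standardized residuals: chi-squared distances weighted by mass, so
# larger assemblages carry more influence in defining the axes.
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sigma, Vt = np.linalg.svd(S, full_matrices=False)

inertia = sigma**2                 # the "variance" of each CA axis
share = inertia / inertia.sum()

# Principal coordinates place rows and columns in comparable spaces,
# so assemblages and wares can be read off the same plot.
row_scores = (U * sigma) / np.sqrt(r)[:, None]
col_scores = (Vt.T * sigma) / np.sqrt(c)[:, None]

print(share.round(3))
```

The last singular value of a contingency table is always (numerically) zero, which is why CA reduces dimensionality by at least one.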
"Israeli political economic concerns"
residential codes:
As/Af – Asia or Africa
Eu/Am – Europe or America
Is/AA – Israel, dad lives in Asia or Africa
Is/EA – Israel, dad lives in Europe or America
Is/Is – Israel, dad lives in Israel
"Israeli political economic concerns"
"worry" codes:
ENR – enlisted relative
SAB – sabotage
MIL – military situation
POL – political situation
ECO – economic situation
OTH – other
MTO – more than one worry
PER – personal economics
Ksar Akil – Upper Palaeolithic, Lebanon
Data > Frequency > COUNT
Statistics > Data Reduction > CA
Multidimensional Scaling (MDS)
aim: define a low-dimension space that preserves the distances between cases in the original high-dimension space…
closely related to CA/PCA, but uses an iterative location-shifting procedure…
–may produce a lower-dimension solution than CA/PCA
–not simultaneously Q-mode and R-mode…
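One variant, classical "metric" MDS, actually has a closed-form solution rather than an iterative one: double-centre the squared distance matrix and take its leading eigenvectors as coordinates (non-metric MDS is the iterative, rank-preserving version). A minimal NumPy sketch on made-up points:

```python
import numpy as np

# Hypothetical high-dimensional points; suppose we only keep their distances.
rng = np.random.default_rng(2)
X = rng.normal(size=(6, 5))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

# Classical (metric) MDS: double-centre the squared distance matrix,
# then use the leading eigenvectors as low-dimension coordinates.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
B = -0.5 * J @ (D**2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Two-dimensional configuration approximating the original distances.
k = 2
coords = eigvecs[:, :k] * np.sqrt(np.maximum(eigvals[:k], 0.0))

# Keeping all non-zero axes recovers the original distances exactly.
full = eigvecs[:, :5] * np.sqrt(np.maximum(eigvals[:5], 0.0))
D_hat = np.linalg.norm(full[:, None, :] - full[None, :, :], axis=-1)
print(np.allclose(D_hat, D))
```

Plotting the distances in `coords` against the original `D` gives the Shepard diagram mentioned below: the tighter the relationship, the less distortion the reduction introduced.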
[figure: the same cases A–D configured by 'non-metric' MDS and by 'metric' MDS]
"Shepard Diagram"
Discriminant Function Analysis (DFA)
aims:
–calculate a function that maximizes the ability to discriminate among 2 or more groups, based on a set of descriptive variables
–assess variables in terms of their relative importance and relevance to discrimination
–classify new cases not included in the original analysis
[figure: scatter of two groups of cases on var A vs. var B]
DFA
number of discriminant functions = number of groups − 1
–each subsequent function is orthogonal to the last
–each is associated with an eigenvalue that reflects how much 'work' that function does in discriminating between groups
stepwise vs. complete DFA
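For the simplest case of two groups (one discriminant function), the function can be written down directly as Fisher's linear discriminant. A minimal NumPy sketch with two invented groups described by two variables, in the spirit of the var A / var B plot above:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two hypothetical groups described by two variables (cf. var A / var B).
group1 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(40, 2))
group2 = rng.normal(loc=[2.0, 1.0], scale=0.5, size=(40, 2))

# Fisher's discriminant: the direction w that maximizes between-group
# separation relative to within-group scatter, w = Sw^-1 (m1 - m2).
m1, m2 = group1.mean(axis=0), group2.mean(axis=0)
Sw = np.cov(group1, rowvar=False) + np.cov(group2, rowvar=False)
w = np.linalg.solve(Sw, m1 - m2)

# Classify a new case by which group's mean score it lands closer to;
# this is how DFA assigns cases not included in the original analysis.
def classify(case):
    score = case @ w
    return 1 if abs(score - m1 @ w) < abs(score - m2 @ w) else 2

print(classify(np.array([0.1, -0.2])))   # near group 1
print(classify(np.array([2.1, 0.9])))    # near group 2
```

With more than two groups, the same idea generalizes to an eigenproblem whose eigenvalues measure how much 'work' each successive function does.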
Figure 6.5: Factor structure coefficients: These values show the correlation between Miccaotli ceramic categories and the first two discriminant functions. Categories exhibiting high positive or negative values are the most important for discriminating among A-clusters.
Figure 6.4: Case scores calculated for the first two functions generated by discriminant analysis, using Miccaotli A-cluster membership as the grouping variable and posterior estimates of ceramic category proportions as discriminating variables.
Figure 6.6: Factor structure coefficients generated by four separate DFA runs using binary grouping variables derived from Miccaotli A-cluster memberships. A single discriminant function is associated with each A-cluster.