Clustering and Testing in High-Dimensional Data
M. Radavičius, G. Jakimauskas, J. Sušinskas (Institute of Mathematics and Informatics, Vilnius, Lithuania)
The problem
Let X = X_N be a sample of size N assumed to satisfy a d-dimensional Gaussian mixture model, where the dimension d is large. Because of the large dimension it is natural to project the sample onto k-dimensional (k = 1, 2, …) linear subspaces using the projection pursuit method (Huber (1985), Friedman (1987)), which gives the best selection of these subspaces. If the distribution of the standardized sample on the complement space becomes standard Gaussian, this linear subspace H is called the discriminant subspace. For example, if we have q Gaussian mixture components with equal covariance matrices, then the dimension of the discriminant subspace is q − 1. Having an estimate of the discriminant subspace, we can perform classification much more easily using the projected sample.
The sequential procedure applied to the standardized sample is the following (for k = 1, 2, …, until the hypothesis of the discriminant subspace holds for some k; a sketch of the loop follows the list):
1. Find the best k-dimensional linear subspace using the projection pursuit method (Rudzkis and Radavičius (1999)).
2. Fit a Gaussian mixture model to the sample projected onto the k-dimensional linear subspace (Rudzkis and Radavičius (1995)).
3. Test goodness-of-fit of the estimated d-dimensional model assuming that the distribution on the complement space is standard Gaussian. If the test fails, increase k and go to step 1.
The problem in step 1 is to find the basis vectors in the high-dimensional space (we do not address this problem here). The problem in step 3 (in the common approach) is comparing a nonparametric density estimate with a parametric one in a high-dimensional space.
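A minimal sketch of this loop, assuming hypothetical helpers projection_pursuit and complement_gof_test in place of the cited procedures of Rudzkis and Radavičius; GaussianMixture is scikit-learn's standard EM fitter, used here as a stand-in:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def sequential_subspace_search(X, q, k_max, projection_pursuit, complement_gof_test):
    """Increase k until the projected model passes the goodness-of-fit test.

    projection_pursuit(X, k)       -> (N x k projected sample, d x k basis)  [hypothetical]
    complement_gof_test(X, B, gmm) -> True if the complement space looks
                                      standard Gaussian                      [hypothetical]
    """
    # The procedure is applied to the standardized sample.
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    for k in range(1, k_max + 1):
        # Step 1: best k-dimensional subspace via projection pursuit.
        X_proj, B = projection_pursuit(X, k)
        # Step 2: Gaussian mixture fitted to the projected sample.
        gmm = GaussianMixture(n_components=q).fit(X_proj)
        # Step 3: test goodness-of-fit, assuming a standard Gaussian complement.
        if complement_gof_test(X, B, gmm):
            return k, B, gmm  # estimated discriminant subspace of dimension k
    return None  # no k up to k_max passed the test
```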
We present a simple, data-driven and computationally efficient procedure for testing goodness-of-fit. The procedure is based on the well-known interpretation of goodness-of-fit testing as a classification problem, a special sequential data partition procedure, randomization and resampling, and elements of sequential testing. Monte Carlo simulations are used to assess the performance of the procedure. The procedure can also be applied to testing independence of components in high-dimensional data. We present some preliminary computer simulation results.
Introduction
Let $X = \{X(1), \dots, X(N)\}$ be a sample of i.i.d. random vectors from a mixture distribution $F = \sum_{i=1}^{q} p_i F_i$ with unknown component weights $p_i$ and component distributions $F_i$. Consider the general classification problem of estimating the a posteriori probabilities
$$\pi_i(x) = P(\nu = i \mid X = x), \qquad i = 1, \dots, q,$$
from the sample, where $\nu$ denotes the unobserved component label.
Under these assumptions we have
$$\pi_i(x) = \frac{p_i f_i(x)}{\sum_{j=1}^{q} p_j f_j(x)}, \qquad i = 1, \dots, q,$$
where $f_i$ denotes the density of the ith Gaussian component. Usually the EM algorithm is used to estimate the a posteriori probabilities. Denote by $\theta^{(t)}$ the parameter estimates after the tth iteration; then the EM algorithm is the following iterative procedure: the E-step computes the posterior probabilities $\pi_i^{(t)}(x)$ from $\theta^{(t)}$, and the M-step re-estimates the weights, means and covariances as posterior-weighted sample moments. The EM algorithm converges to some local maximum of the likelihood function, which is usually not the global maximum.
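A minimal NumPy sketch of these EM iterations for a q-component Gaussian mixture (an illustrative implementation under the assumptions above, not the authors' code):

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gaussian_mixture(X, q, n_iter=100, seed=0):
    """EM iterations for a q-component Gaussian mixture; returns posteriors pi_i(x)."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    # Initialize weights, means and covariances.
    p = np.full(q, 1.0 / q)
    mu = X[rng.choice(N, q, replace=False)]
    cov = np.array([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(q)])
    for _ in range(n_iter):
        # E-step: posteriors pi_i(x) = p_i f_i(x) / sum_j p_j f_j(x).
        dens = np.column_stack(
            [p[i] * multivariate_normal.pdf(X, mu[i], cov[i]) for i in range(q)]
        )
        post = dens / dens.sum(axis=1, keepdims=True)
        # M-step: posterior-weighted sample moments.
        w = post.sum(axis=0)
        p = w / N
        mu = (post.T @ X) / w[:, None]
        for i in range(q):
            Xc = X - mu[i]
            cov[i] = (post[:, i, None] * Xc).T @ Xc / w[i] + 1e-6 * np.eye(d)
    return post, p, mu, cov
```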
Suppose that for some subspace H the following equality holds:
$$\pi_i(x) = \pi_i(P_H x) \quad \text{for all } x \in \mathbb{R}^d,\ i = 1, \dots, q,$$
where $P_H$ denotes the orthogonal projection onto H, and that H has the maximal dimension among subspaces with this property; then H is called the discriminant subspace. We lose no information on the a posteriori probabilities when we project the sample onto the discriminant subspace. An estimate of the discriminant subspace can be obtained using the projection pursuit procedure (see, e.g., J. H. Friedman (1987), S. A. Aivazyan (1996), R. Rudzkis, M. Radavičius (1998)).
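A quick numerical illustration of this invariance (an assumed example with two equal-covariance Gaussian components, so the discriminant subspace is the line spanned by the difference of the means, of dimension q − 1 = 1):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Two equal-covariance Gaussian components in R^5; the discriminant
# subspace H is spanned by mu1 - mu0.
d, p1 = 5, 0.3
mu0, mu1 = np.zeros(d), np.array([2.0, 1.0, 0.0, 0.0, 0.0])

def posterior(x):
    """A posteriori probability of component 1 given x."""
    f0 = multivariate_normal.pdf(x, mu0, np.eye(d))
    f1 = multivariate_normal.pdf(x, mu1, np.eye(d))
    return p1 * f1 / ((1 - p1) * f0 + p1 * f1)

h = (mu1 - mu0) / np.linalg.norm(mu1 - mu0)      # basis vector of H
rng = np.random.default_rng(0)
x = rng.normal(size=d)
# Perturb x only in the complement of H: the projection onto H is unchanged.
x_same_proj = x + (np.eye(d) - np.outer(h, h)) @ rng.normal(size=d)
print(posterior(x), posterior(x_same_proj))      # identical posteriors
```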
Test statistics
Let $X = \{X(1), \dots, X(N)\}$ be a sample of size N of i.i.d. random vectors with a common distribution function F on $\mathbb{R}^d$. Let $\mathcal{H}$ and $\mathcal{A}$ be two disjoint classes of d-dimensional distributions. Consider the nonparametric hypothesis testing problem
$$H\colon F \in \mathcal{H} \quad \text{versus} \quad A\colon F \in \mathcal{A}.$$
Let $F_H \in \mathcal{H}$ and consider a mixture model
$$F^{(p)} = (1 - p)\, F_H + p\, F$$
of the two populations, with d.f. $F_H$ and F, respectively. Fix p and let $Y = Y^{(p)}$ denote a random vector with the mixture distribution $F^{(p)}$. Let $Z = Z^{(p)}$ be the posterior probability of the population F given Y, i.e.
$$Z = \frac{p\, f(Y)}{(1 - p)\, f_H(Y) + p\, f(Y)}.$$
Here f and $f_H$ denote the distribution densities of F and $F_H$, respectively. Let us introduce the loss function $l(F, F_H) = E(Z - p)^2$.
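As an illustration of the loss, here is a small Monte Carlo sketch (with assumed one-dimensional populations F_H = N(0, 1) and F = N(δ, 1), chosen only for the example): l(F, F_H) ≈ 0 exactly when f = f_H.

```python
import numpy as np
from scipy.stats import norm

def loss(f, f_H, sample_mixture, p=0.5, n=100_000, seed=0):
    """Monte Carlo estimate of l(F, F_H) = E(Z - p)^2 under the mixture F^(p)."""
    rng = np.random.default_rng(seed)
    Y = sample_mixture(rng, n, p)                  # draws from (1-p) F_H + p F
    Z = p * f(Y) / ((1 - p) * f_H(Y) + p * f(Y))   # posterior of population F
    return np.mean((Z - p) ** 2)

def make_mixture(delta):
    """Sampler for the assumed mixture of N(0,1) and N(delta,1)."""
    def sample(rng, n, p):
        labels = rng.random(n) < p
        return np.where(labels, rng.normal(delta, 1, n), rng.normal(0, 1, n))
    return sample

for delta in (0.0, 1.0):
    est = loss(lambda y: norm.pdf(y, delta, 1), norm.pdf, make_mixture(delta))
    print(f"delta={delta}: l = {est:.4f}")  # ~0 when F = F_H (delta = 0)
```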
Let $\mathcal{P} = \{P_k,\ k = 1, 2, \dots\}$ be a sequence of partitions of $\mathbb{R}^d$, possibly dependent on Y, and let $\{\mathcal{A}_k\}$ be the corresponding sequence of σ-algebras generated by these partitions. A computationally efficient choice of $\mathcal{P}$ is the sequential dyadic coordinate-wise partition minimizing the mean square error at each step. Let $X^{(H)} = \{X^{(H)}(1), X^{(H)}(2), \dots, X^{(H)}(M)\}$ be a sample of size M of i.i.d. vectors with distribution $F_H$. It is also supposed that $X^{(H)}$ is independent of X. Set $\hat p = N/(N + M)$, the weight of the sample X in the pooled sample Y formed from X and $X^{(H)}$.
In view of the definition of the loss function, a natural choice of test statistic would be a $\chi^2$-type statistic
$$\chi_k^2 = E_{MN}\big(\hat Z_k - \hat p\big)^2$$
for some k, which can be treated as a smoothing parameter; $\hat Z_k$ denotes the piecewise-constant estimate of Z on the partition $P_k$. Here $E_{MN}$ stands for the expectation with respect to the empirical distribution $\hat F$ of Y. However, since the optimal value of k is unknown, we prefer the following definition of the test statistics:
$$T_k = \frac{\chi_k^2 - a_k}{b_k}, \qquad k = 1, \dots, K,$$
where $a_k$ and $b_k$ are centering and scaling parameters to be specified.
We have selected the following test statistics:
$$T_k = \frac{\chi_k^2 - a_k}{b_k}, \qquad \chi_k^2 = \sum_j \frac{\nu_{kj} + \mu_{kj}}{N + M}\big(\hat Z_{kj} - \hat p\big)^2, \qquad \hat Z_{kj} = \frac{\nu_{kj}}{\nu_{kj} + \mu_{kj}},$$
where $\nu_{kj}$ ($\mu_{kj}$) is the number of points of the sample X (of the sample $X^{(H)}$) in the jth cell of the kth partition $P_k$.
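A small sketch of this computation from the cell counts (the normalization follows the reconstructed formula above, so treat it as an assumption):

```python
import numpy as np

def test_statistic(nu, mu, a_k, b_k):
    """T_k from cell counts nu (sample X) and mu (sample X^(H)) of one partition.

    nu, mu : integer arrays of counts per cell; a_k, b_k : centering and
    scaling parameters (e.g. calibrated by Monte Carlo under H).
    """
    N, M = nu.sum(), mu.sum()
    p_hat = N / (N + M)
    total = nu + mu
    cells = total > 0                      # ignore empty cells
    Z_hat = nu[cells] / total[cells]       # cell-wise posterior estimate
    chi2 = np.sum(total[cells] / (N + M) * (Z_hat - p_hat) ** 2)
    return (chi2 - a_k) / b_k
```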
Illustration of the sequential dyadic partitioning procedure
Here we have an example (at some step) of the sequential partitioning procedure with two samples of two-dimensional data. The next split is selected among all current cells and all coordinate-wise divisions (in this case d = 2) so as to achieve the minimum mean square error of the grouping; a sketch of one such step follows.
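A minimal sketch of one greedy step of this dyadic coordinate-wise splitting (the score used here, the within-cell sum of squared errors of the 0/1 sample labels, is one way to read the mean-square-error criterion above):

```python
import numpy as np

def sse(labels):
    """Within-cell sum of squared errors of the 0/1 sample labels."""
    return 0.0 if len(labels) == 0 else np.sum((labels - labels.mean()) ** 2)

def best_dyadic_split(cells, points, labels):
    """One greedy step: try halving every cell along every coordinate and
    return the (cell, dimension) pair that most reduces the total SSE.

    cells  : list of (lower, upper) box bounds, each a length-d array
    points : (n, d) array of pooled data; labels : 0/1 array (which sample)
    """
    best = None
    for c, (lo, hi) in enumerate(cells):
        inside = np.all((points >= lo) & (points < hi), axis=1)
        before = sse(labels[inside])
        for dim in range(points.shape[1]):
            mid = (lo[dim] + hi[dim]) / 2            # dyadic midpoint
            left = inside & (points[:, dim] < mid)
            right = inside & (points[:, dim] >= mid)
            gain = before - sse(labels[left]) - sse(labels[right])
            if best is None or gain > best[0]:
                best = (gain, c, dim)
    return best  # (SSE reduction, cell to split, dimension to split along)
```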
Preliminary simulation results
The computer simulations have been performed by the Monte Carlo method (typically 100 independent replications). The sample sizes of X and $X^{(H)}$ were taken equal (typically N = M = 1000). The first task is to evaluate, by computer simulation, the test statistics $T_k$ in the case when the hypothesis H holds. The centering and scaling parameters of the test statistics were selected so that the distribution of the test statistic is approximately standard Gaussian for each k not very close to 1 or K. The simulation results show that, for a very wide range of dimensions, sample sizes and distributions, the behaviour of the test statistics under the hypothesis H is very similar.
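One plausible calibration of a_k and b_k (an assumption; the slides do not spell out the calibration) is to standardize χ²_k by its empirical Monte Carlo moments under H:

```python
import numpy as np

def calibrate(chi2_under_H):
    """chi2_under_H: (replications x K) array of chi^2_k values simulated
    under the hypothesis H. Returns a_k, b_k such that
    T_k = (chi^2_k - a_k) / b_k has mean 0 and variance 1 for each k."""
    a_k = chi2_under_H.mean(axis=0)
    b_k = chi2_under_H.std(axis=0, ddof=1)
    return a_k, b_k
```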
Fig. 1. Behaviour of T_k when the hypothesis holds. Sample size N = 1000, dimension d = 100; both samples are drawn from the d-dimensional standard Gaussian distribution. Shown are the maxima and minima over 100 realizations, together with the corresponding maxima and minima after excluding the 5 per cent largest values at each point.

Fig. 2. Behaviour of T_k when the hypothesis does not hold. Sample size N = 1000, dimension d = 10, q = 3; Gaussian mixture with means (−4, −3, 0, 0, 0, …), (0, 6, 0, 0, 0, …), (4, −3, 0, 0, 0, …). The sample is projected onto a one-dimensional subspace. This is an extreme lack-of-fit situation.

Fig. 3. Behaviour of T_k (control data). A control example for the data in Fig. 2, with the data projected onto the true two-dimensional discriminant subspace.

Fig. 4. Behaviour of T_k when the hypothesis does not hold. Sample size N = 1000, dimension d = 10, q = 3; Gaussian mixture with means (−4, −1, 0, 0, 0, …), (0, 2, 0, 0, 0, …), (4, −1, 0, 0, 0, …). The sample is projected onto a one-dimensional subspace.

Fig. 5. Behaviour of T_k (control data). A control example for the data in Fig. 4, with the data projected onto the true two-dimensional discriminant subspace.

Fig. 6. Behaviour of T_k when the hypothesis does not hold. Sample size N = 1000, dimension d = 10, q = 3; Gaussian mixture with means (−4, −0.5, 0, 0, 0, …), (0, 1, 0, 0, 0, …), (4, −0.5, 0, 0, 0, …). The sample is projected onto a one-dimensional subspace.

Fig. 7. Behaviour of T_k (control data). A control example for the data in Fig. 6, with the data projected onto the true two-dimensional discriminant subspace.

Fig. 8. Behaviour of T_k when the hypothesis does not hold. Sample size N = 1000, dimension d = 20; standard Cauchy distribution. The sample X^{(H)} is simulated with two independent blocks of components of sizes d_1 = d/2 and d_2 = d/2.

Fig. 9. Behaviour of T_k (control data). A control example for the data in Fig. 8, with the sample X^{(H)} simulated from the same distribution as the sample X.

Fig. 10. Behaviour of T_k when the hypothesis does not hold. Sample size N = 1000, dimension d = 10; Student t distribution with 3 degrees of freedom. The sizes of the independent blocks of components are d_1 = 1 and d_2 = d − 1.

Fig. 11. Behaviour of T_k (control data). A control example for the data in Fig. 10, with the sample X^{(H)} simulated from the same distribution as the sample X.
End.