
1 Parallel ICA Algorithm and Modeling — Hongtao Du, March 25, 2004

2 Outline
Review
– Independent Component Analysis
– FastICA
– Parallel ICA
Parallel Computing Laws
Parallel Computing Models
Model for pICA

3 Independent Component Analysis (ICA)
A linear transformation that minimizes the higher-order statistical dependence between components.
ICA model: X = A·S, where the observed mixtures X are generated from the sources S by an unknown mixing matrix A.
What is independence? The source signals S are:
– statistically independent
– not more than one is Gaussian distributed
Weight matrix (unmixing matrix) W: Y = W·X estimates the sources, ideally with W ≈ A⁻¹ (up to permutation and scaling).
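A minimal numerical illustration of this mixing/unmixing model (a NumPy sketch; the two-source setup and all variable names are illustrative assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two statistically independent, non-Gaussian sources (uniform noise).
S = rng.uniform(-1, 1, size=(2, 1000))

# An unknown mixing matrix A produces the observed signals: X = A @ S.
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])
X = A @ S

# ICA seeks a weight (unmixing) matrix W so that Y = W @ X recovers
# the sources up to permutation and scaling.  In this toy example the
# ideal W is simply the inverse of A.
W = np.linalg.inv(A)
Y = W @ X   # Y ~ S
```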

4 Methods to minimize statistical dependence
– Mutual information (InfoMax)
– K-L divergence or relative entropy (Output Divergence)
– Non-Gaussianity (FastICA)

5 FastICA Algorithm
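The algorithm details appeared as an image on the original slide and did not survive extraction. As a hedged reconstruction, the standard one-unit fixed-point iteration from Hyvärinen and Oja (1997), which this slide presumably showed, can be sketched as:

```python
import numpy as np

def fastica_one_unit(X, max_iter=200, tol=1e-6, seed=None):
    """One-unit FastICA on whitened data X (channels x samples).

    Uses the tanh nonlinearity g(u) = tanh(u); the fixed-point update is
        w <- E{ x g(w^T x) } - E{ g'(w^T x) } w,
    followed by renormalization to unit length.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    w = rng.standard_normal(n)
    w /= np.linalg.norm(w)
    for _ in range(max_iter):
        u = w @ X                                   # projections, shape (m,)
        g, g_prime = np.tanh(u), 1.0 - np.tanh(u) ** 2
        w_new = (X * g).mean(axis=1) - g_prime.mean() * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < tol:         # converged up to sign
            return w_new
        w = w_new
    return w
```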

6 Parallel ICA (pICA)
– Internal Decorrelation
– External Decorrelation
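Both decorrelation steps can be realized as an orthogonalization of the estimated weight vectors; a minimal deflation-style sketch (function and variable names are assumptions, not from the slides):

```python
import numpy as np

def decorrelate(W):
    """Decorrelate the rows of W (estimated weight vectors) by
    Gram-Schmidt-style deflation: subtract from each vector its
    projections onto the previously accepted vectors, then renormalize.

    In pICA, internal decorrelation applies this within the weight
    vectors of one sub-matrix on a single node; external decorrelation
    applies the same operation across vectors gathered from two nodes.
    """
    W = W.astype(float)
    for i in range(W.shape[0]):
        for j in range(i):
            W[i] -= (W[i] @ W[j]) * W[j]
        W[i] /= np.linalg.norm(W[i])
    return W
```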


8 Performance Comparison (4 Processors)

9 Parallel Computing
Classified by instruction delivery mechanism and data stream (Flynn's taxonomy):

                       | Single Instruction Flow | Multiple Instruction Flow
Single Data Stream     | SISD                    | MISD (Pipeline)
Multiple Data Stream   | SIMD (MPI, PVM)         | MIMD (Distributed)

10 Analogies:
– SISD: do it yourself, no help
– SIMD: rowing, one master, several slaves
– MISD: assembly line in car manufacturing
– MIMD: distributed sensor network
The pICA algorithm for hyperspectral image analysis (a high-volume data set) is SIMD.

11 Parallel Computing Laws and Models
– Amdahl's Law
– Gustafson's Law
– BSP Model
– LogP Model

12 Amdahl's Law
The first law for parallel computing (1967). It limits the speedup achievable by parallel applications:

Speedup = 1 / (s + p / N)

where
N: number of processors
s: serial fraction
p: parallel fraction (s + p = 1)
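A quick numerical check of the bound (illustrative values):

```python
def amdahl_speedup(s, N):
    """Amdahl speedup for serial fraction s (p = 1 - s) on N processors."""
    return 1.0 / (s + (1.0 - s) / N)

# Even with only 5% serial work, the speedup saturates near 1/s = 20,
# no matter how many processors are added.
for N in (4, 16, 64, 1024):
    print(N, round(amdahl_speedup(0.05, N), 2))
```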

13 The speedup is bounded by 1/s, so the serial part should be limited and very fast.
Problem: a parallel computer must also be a fast sequential computer.

14 Gustafson's Law
An improvement of Amdahl's law that considers data size: in a parallel program, if the quantity of data increases, the sequential fraction decreases. The scaled speedup is

Speedup = s + p·N = N − (N − 1)·s
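For contrast with Amdahl's law, a sketch showing that the scaled speedup keeps growing with N (same illustrative serial fraction):

```python
def gustafson_speedup(s, N):
    """Scaled speedup when the parallel workload grows with N:
    S = s + (1 - s) * N = N - (N - 1) * s."""
    return s + (1.0 - s) * N

# Grows roughly linearly in N instead of saturating at 1/s.
for N in (4, 16, 64, 1024):
    print(N, gustafson_speedup(0.05, N))
```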

15 Parallel Computing Models
The Amdahl and Gustafson laws define limits without considering the properties of the computer architecture, so they cannot predict the real performance of a parallel application. Parallel computing models integrate the computer architecture and the application architecture.

16 Purpose:
– predicting computing cost
– evaluating the efficiency of programs
Impacts on performance:
– computing node (processor, memory)
– communication network
– T_app = T_comp + T_comm

17 Centric vs. Distributed
Parallel Random Access Machine (PRAM):
– synchronous processors
– shared memory
Distributed-memory parallel computer:
– distributed processors and memory
– interconnected by a communication network
– each processor has fast access to its own memory, slow access to remote memory
(Diagram: processors P1–P4 attached to one shared memory, vs. processor–memory pairs P1/M1 … P4/M4 connected by a network.)

18 Bulk Synchronous Parallel (BSP)
A model for distributed-memory parallel computers.
Assumptions:
– N identical processors, each with its own memory
– interconnected by a predictable network
– each processor can perform synchronization
Applications are composed of supersteps separated by global synchronizations. Each superstep includes:
– a computation step
– a communication step
– a synchronization step

19 T_superstep = w + g·h + l
– w: maximum computing time over the processors
– g: 1 / (network bandwidth), i.e., cost per unit of message traffic
– h: amount of message traffic transferred per processor
– l: synchronization time
An algorithm can be described by its w and h.
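A minimal BSP cost calculator (all parameter values are illustrative assumptions):

```python
def bsp_superstep_time(w, g, h, l):
    """BSP superstep cost: local computation w, plus g per unit of
    message traffic times the h-relation size, plus the barrier l."""
    return w + g * h + l

def bsp_program_time(supersteps, g, l):
    """Total BSP time: sum of superstep costs over (w, h) pairs."""
    return sum(bsp_superstep_time(w, g, h, l) for w, h in supersteps)

# e.g. three supersteps on a machine with g = 2 time units per message
# unit and a barrier cost of l = 50:
print(bsp_program_time([(1000, 40), (500, 200), (1200, 10)], g=2, l=50))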

20 LogP Model
An improvement of the BSP model: it decomposes the communication cost (g) into three parts:
– Latency (L): time for a message to cross the network
– Overhead (o): processor time lost in send/receive I/O
– Gap (g): minimum interval between two consecutive messages
(Diagram: a message timeline showing the send overhead o, the network latency L, and the receive overhead o.)

21 T_superstep = w + (L + 2·o)·h + l
Execution time is the time of the slowest process. The total time for a message to be transferred from processor A to processor B is L + 2·o.
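The same superstep cost with communication expanded per LogP (a sketch; parameter values are illustrative):

```python
def logp_superstep_time(w, L, o, h, l):
    """Superstep cost under the LogP decomposition: each of the h
    messages pays L + 2*o (send overhead, wire latency, receive
    overhead); l is the synchronization cost."""
    return w + (L + 2 * o) * h + l

# A single point-to-point message from A to B costs just L + 2*o:
print(logp_superstep_time(w=0, L=10, o=3, h=1, l=0))   # -> 16
```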

22 (Diagram: two message timelines between P1 and P2. When g > o, the sender must wait out the gap g between consecutive messages; when g < o, the overhead o dominates and messages are issued back-to-back.)

23 Given the finite capacity of the network, at most ⌈L/g⌉ messages can be in transit from or to any processor at a time.
Drawbacks:
– does not address message size: what if all messages are very small?
– does not consider the global capacity of the network

24 Model for pICA
Features:
– SIMD
– high-volume data set transfer at the first stage
– low-volume data transfer at the other stages
Combine the BSP and LogP models:
– Stage 1:
  Pipeline: hyperspectral image transfer, one-unit (weight vector) estimations
  Parallel: internal decorrelations in sub-matrices
– Other stages:
  Parallel: external decorrelations

25 T = T_stage1 + T_stage2 + … + T_stagek, where the number of layers is k = log2(P)
T_stage1 = (w_one-unit + w_internal-decorrelation) + (L + 2·o)·h_hyperspectral-image + g·h_weight-vectors + l_stage1
T_stagei = w_external-decorrelation + g·h_weight-vectors + l_stagei,  i = 2, …, k
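These stage costs can be assembled into a small estimator of the total pICA time (a sketch; all parameter names and values are illustrative assumptions):

```python
import math

def pica_time(P, w_unit, w_int, w_ext, L, o, g,
              h_image, h_weights, l_sync):
    """Total pICA time under the combined BSP/LogP model: stage 1
    pipelines the hyperspectral image transfer with one-unit estimation
    and internal decorrelation; the remaining k - 1 stages, with
    k = log2(P), perform external decorrelations."""
    k = int(math.log2(P))
    t = (w_unit + w_int) + (L + 2 * o) * h_image + g * h_weights + l_sync
    t += (k - 1) * (w_ext + g * h_weights + l_sync)
    return t

print(pica_time(P=4, w_unit=5000, w_int=800, w_ext=400,
                L=10, o=3, g=2, h_image=2000, h_weights=50, l_sync=50))
```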

31 Another Topic
Optimization of parallel computing:
– heterogeneous parallel computing network
– minimize overall time
– trade-off between computation (individual computer properties) and communication (network)

32 References
A. Hyvärinen and E. Oja, "A fast fixed-point algorithm for independent component analysis," Neural Computation, vol. 9, pp. 1483–1492, 1997.
P. Comon, "Independent component analysis, a new concept?," Signal Processing, vol. 36, no. 3, pp. 287–314, April 1994, Special Issue on Higher-Order Statistics.
A. J. Bell and T. J. Sejnowski, "An information maximisation approach to blind separation and blind deconvolution," Neural Computation, vol. 7, no. 6, pp. 1129–1159, 1995.
S. Amari, A. Cichocki, and H. Yang, "A new learning algorithm for blind signal separation," Advances in Neural Information Processing Systems, vol. 8, 1996.
T.-W. Lee, M. Girolami, A. J. Bell, and T. J. Sejnowski, "A unifying information-theoretic framework for independent component analysis," International Journal on Mathematical and Computer Modeling, 1998.

