
1 Parallel Sparse Matrix Algorithms for Numerical Computing: Matrix-Vector Multiplication

2 Introduction Matrix computation is an important part of numerical computing. Sparse matrices play a central role in matrix computation and arise in many kinds of applications.

3 Outline Importance of parallel sparse matrix computing. Introduction to sparse matrices. Introduction to matrix-vector multiplication. How to solve it with parallel technology. Conclusion.

4 Sparse Matrix Concept. Sparse Matrix Storage.

5 Concept In the mathematical subfield of numerical analysis, a sparse matrix is a matrix populated primarily with zeros (Stoer & Bulirsch 2002, p. 619). Given a matrix A of size m*n with NZ nonzero elements, A is a sparse matrix if NZ << m*n.

6 Sparse Matrix Storage A dedicated data structure to store the sparse matrix. The new data structure should be easy to build from the traditional dense representation, take less space to store, and allow fast addressing of the elements.

7 Example Matrix A[4*4]:
0 0 0 2
1 0 0 6
0 1 0 0
0 0 0 0
Dense storage: 4 x 4 x B = 16B (B = bytes per element).
The new structure stores only the non-zero elements as (row, col, value) triples:
0 3 2
1 0 1
1 3 6
2 1 1
Triplet storage: 4 x 3 x B = 12B.
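
For illustration only (this snippet is not part of the original deck), the 4*4 example can be written out in C to compare the two layouts; the array names are arbitrary.

    #include <stdio.h>

    int main(void) {
        /* Dense 4x4 example matrix from the slide: 16 stored values. */
        int dense[4][4] = {
            {0, 0, 0, 2},
            {1, 0, 0, 6},
            {0, 1, 0, 0},
            {0, 0, 0, 0}
        };
        /* Triplet (row, col, value) layout: only the 4 nonzeros, 12 stored values. */
        int triplets[12] = {
            0, 3, 2,
            1, 0, 1,
            1, 3, 6,
            2, 1, 1
        };
        printf("dense storage  : %zu bytes\n", sizeof(dense));
        printf("triplet storage: %zu bytes\n", sizeof(triplets));
        return 0;
    }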

8 Matrix-Vector Multiplication Matrix-vector multiplication y = Ax is defined by y_i = Σ_j A_ij x_j, where A is an m*n matrix and x is a vector with n elements.
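
A minimal serial sketch (not from the original slides) of this multiplication over the triplet storage introduced above; the function name SparseMult echoes the call on slide 17, but the argument list shown here is an assumption.

    /* Sketch: computes y = A*x where sp holds nz (row, col, value) triplets
       in the flat layout built by producesparse1d. Signature is assumed. */
    void SparseMult(const int *sp, int nz, const int *x, int *y, int yrows) {
        for (int i = 0; i < yrows; i++)
            y[i] = 0;                          /* clear the result vector */
        for (int k = 0; k < nz; k++) {
            int row = sp[3 * k];               /* row index of the nonzero */
            int col = sp[3 * k + 1];           /* column index */
            int val = sp[3 * k + 2];           /* stored value */
            y[row] += val * x[col];            /* accumulate its contribution */
        }
    }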

9 Parallel Method Produce matrix. Produce vector. Transform matrix to sparse matrix. Broadcast vector to each slave processor. Partition sparse matrix. Send each buffer to its slave processor. Each slave does matrix-vector multiplication. Send results to master. Done.

10 Data structure and input parameters User input matrix: 2D array. User input vector: 1D array. Usage: exefilename

11 Produce Matrix
void producematrix(int **_mt, int _row, int _col, int _zero) {
    int residualzero = _zero;                  // zeros still to place
    int residualelements = _row * _col;        // cells still to fill
    for (int i = 0; i < _row; i++) {
        for (int j = 0; j < _col; j++) {
            double p = rand() / (RAND_MAX + 1.0);           // uniform random in [0,1)
            if (p < (double)residualzero / residualelements) {
                _mt[i][j] = 0;                              // place one of the remaining zeros
                residualzero--;
            } else {
                _mt[i][j] = rand() % 49 + 1;                // random non-zero value in 1..49
            }
            residualelements--;
        }
    }
}

12 Transform Matrix to Sparse Matrix
int *producesparse1d(int **_mt, int _mtrow, int _mtcol, int _nonzero) {
    // flat triplet storage: (row, col, value) for each non-zero element
    int *tempsp = (int *)malloc(sizeof(int) * _nonzero * 3);
    int m = 0;
    for (int i = 0; i < _mtrow; i++) {
        for (int j = 0; j < _mtcol; j++) {
            if (_mt[i][j] != 0) {
                tempsp[m]     = i;             // row index
                tempsp[m + 1] = j;             // column index
                tempsp[m + 2] = _mt[i][j];     // value
                m = m + 3;
            }
        }
    }
    return tempsp;
}
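
A hypothetical driver (not in the original deck) showing how the two helpers above could be combined on the master; the sizes and allocation scheme are assumptions.

    #include <stdlib.h>

    /* Assumes producematrix() and producesparse1d() from slides 11-12 are in scope. */
    int main(void) {
        int rows = 4, cols = 4, zeros = 12;            /* assumed sizes for illustration */
        int **mt = malloc(rows * sizeof(int *));       /* allocate the dense matrix */
        for (int i = 0; i < rows; i++)
            mt[i] = malloc(cols * sizeof(int));

        producematrix(mt, rows, cols, zeros);          /* fill with the requested sparsity */

        int nonzero = 0;                               /* count the nonzeros actually placed */
        for (int i = 0; i < rows; i++)
            for (int j = 0; j < cols; j++)
                if (mt[i][j] != 0) nonzero++;

        int *sp = producesparse1d(mt, rows, cols, nonzero);   /* convert to triplets */

        free(sp);
        for (int i = 0; i < rows; i++) free(mt[i]);
        free(mt);
        return 0;
    }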

13 Broadcast Vector to each slave processor
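
The slide itself carries no code, so the following is only a sketch of the broadcast step, assuming the vector is an int array vect of length n already filled on rank 0 (the master).

    #include <mpi.h>

    /* Sketch of slide 13: every rank, master and slaves alike, calls MPI_Bcast
       with the same arguments; rank 0 supplies the vector and the others
       receive a copy. vect and n are assumed names, not from the original code. */
    static void broadcast_vector(int *vect, int n) {
        MPI_Bcast(vect, n, MPI_INT, 0, MPI_COMM_WORLD);
    }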

14 Partition Sparse Matrix
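
No code survives on this slide either; one plausible partitioning (an assumption, not necessarily the authors' exact scheme) splits the flat triplet array into contiguous runs of whole (row, col, value) triples, one run per slave.

    /* Sketch of slide 14: divide nz triplets across nslaves contiguous chunks.
       counts[k] is how many ints slave k receives, displs[k] is where that
       slave's run starts in the flat triplet array. All names are assumed. */
    static void partition_triplets(int nz, int nslaves, int *counts, int *displs) {
        int base = nz / nslaves, extra = nz % nslaves, offset = 0;
        for (int k = 0; k < nslaves; k++) {
            int triples = base + (k < extra ? 1 : 0);   /* whole triplets for slave k */
            counts[k] = triples * 3;                    /* ints to send */
            displs[k] = offset;                         /* starting int offset */
            offset += counts[k];
        }
    }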

15 Send each buffer to each slave processor
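
A sketch of the send step on the master, reusing the counts/displs layout from the partition sketch above; mapping slave k to rank k+1 and the tag value are assumptions.

    #include <mpi.h>

    /* Sketch of slide 15: the master (rank 0) sends each slave its run of triplets.
       sp is the flat triplet array; counts/displs come from the partition step. */
    static void send_buffers(const int *sp, const int *counts, const int *displs, int nslaves) {
        for (int k = 0; k < nslaves; k++) {
            MPI_Send(sp + displs[k], counts[k], MPI_INT, k + 1, 0, MPI_COMM_WORLD);
        }
    }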

16

17 Parallel logic
MPI_Status stat;
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
MPI_Comm_rank(MPI_COMM_WORLD, &myid);
MPI_Bcast(vect, n, MPI_INT, 0, MPI_COMM_WORLD);                                 // broadcast vector to the slaves
if (myid == 0) {                                                                // master
    for (k = 1; k < numprocs; k++)
        MPI_Send(sendbuffer[k], sendcount[k], MPI_INT, k, 0, MPI_COMM_WORLD);   // send each buffer to its slave
    for (k = 1; k < numprocs; k++)
        MPI_Recv(result[k], n, MPI_INT, k, 1, MPI_COMM_WORLD, &stat);           // collect the partial results
} else {                                                                        // slave
    MPI_Recv(recvbuffer, maxcount, MPI_INT, 0, 0, MPI_COMM_WORLD, &stat);       // receive buffer from master
    SparseMult(recvbuffer, vect);                                               // local matrix-vector multiplication
    MPI_Send(slaveresult, n, MPI_INT, 0, 1, MPI_COMM_WORLD);                    // send result to master
}
MPI_Finalize();

18 Results

19 Conclusions & Analysis More time is spent on communication with each processor than on computation. The imbalance between communication and computation is the bottleneck.

20 Bibliography
[1] Blaise, B. (2009). Lawrence Livermore National Laboratory. Retrieved May 2009 from https://computing.llnl.gov/tutorials/parallel_comp/
[2] Bruce Hendrickson, Robert Leland, and Steve Plimpton. An Efficient Parallel Algorithm for Matrix-Vector Multiplication. Sandia National Laboratories, Albuquerque, NM 87185.
[3] Stoer, Josef; Bulirsch, Roland (2002). Introduction to Numerical Analysis (3rd ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-387-95452-3.
[4] L. M. Romero and E. L. Zapata. Data Distributions for Sparse Matrix Vector Multiplication. University of Malaga. J. Parallel Computing, vol. 21, no. 4, April 1999.
[5] Martin Johnson. Numerical Algorithm, lecture notes in Parallel Computing. IIMS, Massey University at Albany, Auckland, New Zealand, 2009.
[6] Message Passing Interface. http://en.wikipedia.org/wiki/Message_Passing_Interface
[7] R. Raz. On the complexity of matrix product. SIAM Journal on Computing, 32:1356-1369, 2003.
[8] Sparse matrix. http://en.wikipedia.org/wiki/Sparse_matrix
[9] Sparse matrix. http://baike.baidu.com/view/891721.htm
[10] V. Pan. How to Multiply Matrices Faster. Lecture Notes in Computer Science, volume 179. Springer-Verlag, 1985.
[11] Wang Shun and Wang Xiao Ge. Parallel Algorithm for Matrix-Vector Multiplication. Tsinghua University Library, China.

21 Questions?

