The Future of LAPACK and ScaLAPACK www.netlib.org/lapack-dev Jim Demmel UC Berkeley 27 March 2006.


1 The Future of LAPACK and ScaLAPACK www.netlib.org/lapack-dev Jim Demmel UC Berkeley 27 March 2006

2 Outline Motivation for new Sca/LAPACK Challenges (or research opportunities…) Goals of new ScaLAPACK Highlights of progress

3 Motivation LAPACK and ScaLAPACK are widely used –Adopted by Cray, Fujitsu, HP, IBM, IMSL, MathWorks, NAG, NEC, SGI, … –>56M web hits @ Netlib (incl. CLAPACK, LAPACK95)

4 Impact (with NERSC, LBNL) Cosmic Microwave Background Analysis, BOOMERanG collaboration, MADCAP code (Apr. 27, 2000). ScaLAPACK

5 Motivation LAPACK and ScaLAPACK are widely used –Adopted by Cray, Fujitsu, HP, IBM, IMSL, MathWorks, NAG, NEC, SGI, … –>56M web hits @ Netlib (incl. CLAPACK, LAPACK95) Many ways to improve them, based on –Own algorithmic research –Enthusiastic participation of research community –User/vendor survey –Opportunities and demands of new architectures, programming languages New releases planned (NSF support)

6 Participants UC Berkeley: –Jim Demmel, Ming Gu, W. Kahan, Beresford Parlett, Xiaoye Li, Osni Marques, Christof Voemel, David Bindel, Yozo Hida, Jason Riedy, Jianlin Xia, Jiang Zhu, undergrads… U Tennessee, Knoxville –Jack Dongarra, Julien Langou, Julie Langou, Piotr Luszczek, Stan Tomov Other Academic Institutions –UT Austin, UC Davis, Florida IT, U Kansas, U Maryland, North Carolina SU, San Jose SU, UC Santa Barbara –TU Berlin, FU Hagen, U Carlos III Madrid, U Manchester, U Umeå, U Wuppertal, U Zagreb Research Institutions –CERFACS, LBL Industrial Partners –Cray, HP, Intel, MathWorks, NAG, SGI

7 Challenges For all large scale computing, not just linear algebra! Example …

8 Parallelism in the Top500

9 Challenges For all large scale computing, not just linear algebra! Example … your laptop

10 CPU Trends Relative processing power will continue to double every 18 months; 256 logical processors per chip in late 2010

11 Challenges For all large scale computing, not just linear algebra! Example … your laptop Exponentially growing gaps between –Floating point time << 1/Memory BW << Memory Latency

12 Commodity Processor Trends

Metric (annual change) | Typical value in 2006 | Predicted value in 2010 | Typical value in 2020
Single-chip floating-point performance (+59%/yr) | 4 GFLOP/s | 32 GFLOP/s | 3300 GFLOP/s
Memory bus bandwidth (+23%/yr) | 1 GWord/s = 0.25 word/flop | 3.5 GWord/s = 0.11 word/flop | 27 GWord/s = 0.008 word/flop
Memory latency (-5.5%/yr) | 70 ns = 280 FP ops = 70 loads | 50 ns = 1600 FP ops = 170 loads | 28 ns = 94,000 FP ops = 780 loads

Source: Getting Up to Speed: The Future of Supercomputing, National Research Council, National Academies Press, Washington DC, 2004, ISBN 0-309-09502-6. Will our algorithms run at a high fraction of peak?

13 Challenges For all large scale computing, not just linear algebra! Example … your laptop Exponentially growing gaps between –Floating point time << 1/Memory BW << Memory Latency –Floating point time << 1/Network BW << Network Latency

14 Parallel Processor Trends

Metric (annual change) | Typical value in 2004 | Predicted value in 2010
# Processors (+20%/yr) | 4,000 | 12,000
Network bandwidth (+26%/yr) | 65 MWord/s = 0.03 word/flop | 260 MWord/s = 0.008 word/flop
Network latency (-15%/yr) | 5 μs = 20K FP ops | 2 μs = 64K FP ops

Source: Getting Up to Speed: The Future of Supercomputing, National Research Council, National Academies Press, Washington DC, 2004, ISBN 0-309-09502-6. Will our algorithms scale up to more processors?

15 Challenges For all large scale computing, not just linear algebra! Example … your laptop Exponentially growing gaps between –Floating point time << 1/Memory BW << Memory Latency –Floating point time << 1/Network BW << Network Latency Heterogeneity (performance and semantics) Asynchrony Unreliability

16 What do users want? High performance, ease of use, … Survey results at www.netlib.org/lapack-dev (small but interesting sample): –What matrix sizes do you care about? 1000s: 34%; 10,000s: 26%; 100,000s or 1Ms: 26% –How many processors, on distributed memory? >10: 34%; >100: 31%; >1000: 19% –Do you use more than double precision? Sometimes or frequently: 16% –Would Automatic Memory Allocation help? Very useful: 72%; Not useful: 14%

17 Goals of next Sca/LAPACK 1. Better algorithms – Faster, more accurate 2. Expand contents – More functions, more parallel implementations 3. Automate performance tuning 4. Improve ease of use 5. Better software engineering 6. Increased community involvement

18 Goal 1: Better Algorithms Faster –But provide “usual” accuracy, stability More accurate –But provide “usual” speed –Or at any cost

19 Goal 1a – Faster Algorithms (Highlights) MRRR algorithm for symmetric eigenproblem / SVD: –Parlett / Dhillon / Voemel / Marques / Willems Up to 10x faster HQR: –Byers / Mathias / Braman Extensions to QZ: –Kågström / Kressner Faster Hessenberg, tridiagonal, bidiagonal reductions: –van de Geijn / Quintana, Bischof / Lang, Howell / Fulton Recursive blocked layouts for packed formats: –Gustavson / Kågström / Elmroth / Jonsson

20 Goal 1a – Faster Algorithms (Highlights) MRRR algorithm for symmetric eigenproblem / SVD: –Parlett / Dhillon / Voemel / Marques / Willems –Faster and more accurate than previous algorithms –New sequential, first parallel versions out in 2006

21 Timing of Eigensolvers (1.2 GHz Athlon, only matrices where time >.1 sec)

22

23

24 Timing of Eigensolvers (only matrices where time >.1 sec)

25 Accuracy Results (old vs new Grail) max_i ||Tq_i − λ_i q_i|| / (nε) and ||QQ^T − I|| / (nε)

26 Goal 1a – Faster Algorithms (Highlights) MRRR algorithm for symmetric eigenproblem / SVD: –Parlett / Dhillon / Voemel / Marques / Willems –Faster and more accurate than previous algorithms –New sequential, first parallel versions out in 2006 –Numerical evidence shows DC faster if It “deflates” often, which is hard to predict in advance. So having both algorithms is important.
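The D&C-versus-MRRR tradeoff noted above can be tried directly, since SciPy exposes both LAPACK drivers (xSYEVD and xSYEVR) through `scipy.linalg.eigh`. A small sketch, assuming a recent SciPy with the `driver` argument:

```python
import numpy as np
from scipy.linalg import eigh

# Compare LAPACK's MRRR driver (xSYEVR) against divide-and-conquer (xSYEVD)
# on a random symmetric matrix; both are selectable via `driver`.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
A = (A + A.T) / 2  # symmetrize

w_mrrr, v_mrrr = eigh(A, driver="evr")  # MRRR
w_dc, v_dc = eigh(A, driver="evd")      # divide and conquer

# Both should agree to roughly machine precision on the eigenvalues.
print(np.max(np.abs(w_mrrr - w_dc)))
```

Timing the two calls on matrices with and without heavy deflation is the experiment the slide's "having both algorithms is important" remark refers to.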

27 Goal 1a – Faster Algorithms (Highlights) MRRR algorithm for symmetric eigenproblem / SVD: –Parlett / Dhillon / Voemel / Marques / Willems Up to 10x faster HQR: –Byers / Mathias / Braman –SIAM SIAG/LA Prize in 2003 –Sequential version out in 2006 –More on performance later

28 Goal 1a – Faster Algorithms (Highlights) MRRR algorithm for symmetric eigenproblem / SVD: –Parlett / Dhillon / Voemel / Marques / Willems Up to 10x faster HQR: –Byers / Mathias / Braman Extensions to QZ: –Kågström / Kressner –LAPACK Working Note (LAWN) #173 –On 26 real test matrices, speedups up to 11.9x, 4.4x average

29 Comparison of ScaLAPACK QR and new parallel multishift QZ Execution times in secs for 4096 x 4096 random problems Ax = sx and Ax = sBx, using processor grids including 1-16 processors. Note: work(QZ) > 2 * work(QR) but Time(// QZ) << Time (//QR)!! Times include cost for computing eigenvalues and transformation matrices. Adlerborn-Kågström-Kressner, SIAM PP’2006

30 Goal 1a – Faster Algorithms (Highlights) MRRR algorithm for symmetric eigenproblem / SVD: –Parlett / Dhillon / Voemel / Marques / Willems Up to 10x faster HQR: –Byers / Mathias / Braman Extensions to QZ: –Kågström / Kressner Faster Hessenberg, tridiagonal, bidiagonal reductions: –van de Geijn / Quintana, Howell / Fulton, Bischof / Lang –Full nonsymmetric eigenproblem, n=1500: 3.43x faster (HQR: 5x faster, reduction: 14% faster) –Bidiagonal reduction (LAWN #174), n=2000: 1.32x faster –Sequential versions out in 2006

31 Goal 1a – Faster Algorithms (Highlights) MRRR algorithm for symmetric eigenproblem / SVD: –Parlett / Dhillon / Voemel / Marques / Willems Up to 10x faster HQR: –Byers / Mathias / Braman Extensions to QZ: –Kågström / Kressner Faster Hessenberg, tridiagonal, bidiagonal reductions: –van de Geijn / Quintana, Howell / Fulton, Bischof / Lang Recursive blocked layouts for packed formats: –Gustavson / Kågström / Elmroth / Jonsson –SIAM Review article 2004

32 Recursive Layouts and Algorithms Still merges multiple elimination steps into a few BLAS 3 operations Best speedups for packed storage of symmetric matrices

33 Goal 1b – More Accurate Algorithms Iterative refinement for Ax=b, least squares –“Promise” the right answer for O(n^2) additional cost Jacobi-based SVD –Faster than QR, can be arbitrarily more accurate Arbitrary precision versions of everything –Using your favorite multiple precision package

34 Goal 1b – More Accurate Algorithms Iterative refinement for Ax=b, least squares –“Promise” the right answer for O(n^2) additional cost –Iterative refinement with extra-precise residuals –Extra-precise BLAS needed (LAWN #165)

35 More Accurate: Solve Ax=b (plot comparing the error of conventional Gaussian elimination with extra-precise iterative refinement; reference levels ε and n^{1/2}ε)

36 Goal 1b – More Accurate Algorithms Iterative refinement for Ax=b, least squares –“Promise” the right answer for O(n^2) additional cost –Iterative refinement with extra-precise residuals –Extra-precise BLAS needed (LAWN #165) –“Guarantees” based on condition number estimates Condition estimate < 1/ε ⇒ reliable answer and tiny error bounds No bad bounds in 6.2M tests Can condition estimators lie?

37 Yes, but rarely, unless they cost as much as matrix multiply (= cost of LU factorization) –Demmel / Diament / Malajovich (FCM 2001) But what if matrix multiply costs only O(n^2)? –More later

38 Goal 1b – More Accurate Algorithms Iterative refinement for Ax=b, least squares –“Promise” the right answer for O(n^2) additional cost –Iterative refinement with extra-precise residuals –Extra-precise BLAS needed (LAWN #165) –“Guarantees” based on condition number estimates –Get tiny componentwise bounds too Each x_i accurate Slightly different condition number –Extends to least squares –Release in 2006
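A minimal numerical sketch of the scheme above, under the assumption that only the residual needs extra precision: `np.longdouble` stands in for the extra-precise BLAS of LAWN #165, and SciPy's LU stands in for LAPACK's saved factorization.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def refine(A, b, sweeps=3):
    """Solve Ax=b, then refine with residuals accumulated in extended precision."""
    lu, piv = lu_factor(A)                  # O(n^3), done once
    x = lu_solve((lu, piv), b)
    Aq, bq = A.astype(np.longdouble), b.astype(np.longdouble)
    for _ in range(sweeps):
        r = bq - Aq @ x.astype(np.longdouble)              # extra-precise residual
        x = x + lu_solve((lu, piv), r.astype(np.float64))  # O(n^2) per sweep
    return x

# Ill-conditioned test case: an 8x8 Hilbert matrix
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true
x = refine(A, b)
```

The real code additionally uses condition estimates to decide when the refined answer can be "guaranteed"; that gating logic is omitted here.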

39 Goal 1b – More Accurate Algorithms Iterative refinement for Ax=b, least squares –Promise the right answer for O(n^2) additional cost Jacobi-based SVD –Faster than QR, can be arbitrarily more accurate –LAWNs #169, #170 –Can be arbitrarily more accurate on tiny singular values –Yet faster than QR iteration!

40 Goal 1b – More Accurate Algorithms Iterative refinement for Ax=b, least squares –Promise the right answer for O(n^2) additional cost Jacobi-based SVD –Faster than QR, can be arbitrarily more accurate Arbitrary precision versions of everything –Using your favorite multiple precision package –Quad, quad-double, ARPREC, MPFR, … –Using Fortran 95 modules
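To illustrate the type-generic idea (not the project's actual Fortran 95 modules): the same elimination kernel can be instantiated over any scalar type. Here it runs over Python's exact rationals as a stdlib stand-in for a multiple-precision package.

```python
from fractions import Fraction

def solve_exact(A, b):
    """Gaussian elimination with partial pivoting over exact rationals."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(y)] for row, y in zip(A, b)]
    for k in range(n):
        # partial pivoting: bring the largest remaining pivot to row k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# 2x2 example: x + 2y = 5, 3x + 4y = 11  ->  x = 1, y = 2
print(solve_exact([[1, 2], [3, 4]], [5, 11]))
```

Swapping `Fraction` for a quad-double or MPFR wrapper type changes the precision without touching the algorithm, which is the point of the Fortran 95 module approach.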

41 Iterative Refinement: for speed What if double precision is much slower than single? –Cell processor in Playstation 3: 256 GFlops single, 25 GFlops double –Pentium SSE2: single twice as fast as double Given Ax=b in double precision –Factor in single, do refinement in double –If κ(A) < 1/ε_single, runs at the speed of single 1.9x speedup on Intel-based laptop Applies to many algorithms, if the difference is large
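A sketch of the single/double scheme just described, with `numpy.linalg.solve` on a float32 copy standing in for a reusable single-precision LU factorization:

```python
import numpy as np

def mixed_precision_solve(A, b, sweeps=5):
    # "Factor" in single precision (stand-in for a saved float32 LU)
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(sweeps):
        r = b - A @ x                               # residual in double precision
        dx = np.linalg.solve(A32, r.astype(np.float32))
        x = x + dx.astype(np.float64)               # correction applied in double
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 100))
x_true = rng.standard_normal(100)
b = A @ x_true
x = mixed_precision_solve(A, b)
rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

As the slide states, this converges to double-precision accuracy as long as κ(A) stays below 1/ε_single; the speedup comes from doing all O(n^3) work in single precision.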

42 Goal 2 – Expanded Content Make content of ScaLAPACK mirror LAPACK as much as possible

43 Missing Drivers in Sca/LAPACK (LAPACK | ScaLAPACK)

Linear Equations
  LU: xGESV | PxGESV
  Cholesky: xPOSV | PxPOSV
  LDL^T: xSYSV | missing
Least Squares (LS)
  QR: xGELS | PxGELS
  QR + pivot: xGELSY | missing
  SVD/QR: xGELSS | missing (intent?)
  SVD/D&C: xGELSD | missing
  SVD/MRRR: missing | missing
  QR + iterative refine.: missing | missing
Generalized LS
  LS + equality constr.: xGGLSE | missing
  Generalized LM: xGGGLM | missing
  Above + iterative ref.: missing | missing

44 More missing drivers (LAPACK | ScaLAPACK)

Symmetric EVD
  QR / Bisection+Invit: xSYEV / X | PxSYEV / X
  D&C: xSYEVD | PxSYEVD
  MRRR: xSYEVR | missing
Nonsymmetric EVD
  Schur form: xGEES / X | missing driver
  Vectors too: xGEEV / X | missing
SVD
  QR: xGESVD | PxGESVD
  D&C: xGESDD | missing (intent?)
  MRRR: missing | missing
  Jacobi: missing | missing
Generalized Symmetric EVD
  QR / Bisection+Invit: xSYGV / X | PxSYGV / X
  D&C: xSYGVD | missing (intent?)
  MRRR: missing | missing
Generalized Nonsymmetric EVD
  Schur form: xGGES / X | missing
  Vectors too: xGGEV / X | missing
Generalized SVD
  Kogbetliantz: xGGSVD | missing (intent)
  MRRR: missing | missing

45 Goal 2 – Expanded Content Make content of ScaLAPACK mirror LAPACK as much as possible New functions (highlights) –Updating / downdating of factorizations: Stewart, Langou –More generalized SVDs: Bai, Wang

46 New GSVD Algorithm (Bai et al, UC Davis); PSVD, CSD on the way Given m × n A and p × n B, factor A = U Σ_a X and B = V Σ_b X

47 Goal 2 – Expanded Content Make content of ScaLAPACK mirror LAPACK as much as possible New functions (highlights) –Updating / downdating of factorizations: Stewart, Langou –More generalized SVDs: Bai, Wang –More generalized Sylvester/Lyapunov equations: Kågström, Jonsson, Granat –Structured eigenproblems O(n^2) version of roots(p) –Gu, Chandrasekaran, Bindel et al Selected matrix polynomials: –Mehrmann How should we prioritize missing functions?

48 New algorithm for roots(p) To find roots of polynomial p –roots(p) does eig(C(p)) –Costs O(n^3); stable, reliable O(n^2) alternatives –Newton, Jenkins-Traub, Laguerre, … –Stable? Reliable? New: exploit “semiseparable” structure of C(p) –Low rank of any submatrix of the upper triangle of C(p) is preserved under QR iteration –Complexity drops from O(n^3) to O(n^2), stable in practice Related work: Gemignani, Bini, Pan, et al. Ming Gu, Shiv Chandrasekaran, Jiang Zhu, Jianlin Xia, David Bindel, David Garmire, Jim Demmel

C(p) = [ -p_1  -p_2  …  -p_d ]
       [  1     0    …   0   ]
       [  0     1    …   0   ]
       [  …     …    …   …   ]
       [  0     …    1   0   ]
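The classical O(n^3) baseline on this slide is easy to state in code. A sketch of roots-via-companion-matrix follows; the new semiseparable algorithm replaces the dense `eig` call and is not reproduced here.

```python
import numpy as np

def roots_via_companion(p):
    """Roots of the monic polynomial x^d + p[0] x^(d-1) + ... + p[d-1],
    computed as the eigenvalues of the companion matrix C(p)."""
    d = len(p)
    C = np.zeros((d, d))
    C[0, :] = -np.asarray(p, dtype=float)  # first row: -p_1 ... -p_d
    C[1:, :-1] = np.eye(d - 1)             # subdiagonal of ones
    return np.linalg.eigvals(C)            # O(d^3) dense eigensolve

# (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6
r = np.sort(roots_via_companion([-6, 11, -6]).real)
print(np.allclose(r, [1, 2, 3]))
```

The semiseparable observation is that QR iteration keeps every submatrix of C(p)'s upper triangle low-rank, so the dense O(d^2) iterate can be stored and updated in O(d) data per step.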

49 Goal 3 – Automate Performance Tuning Widely used in performance tuning of Kernels –ATLAS (PhiPAC) – BLAS - www.netlib.org/atlas –FFTW – Fast Fourier Transform – www.fftw.org –Spiral – signal processing - www.spiral.net –OSKI – Sparse BLAS – bebop.cs.berkeley.edu/oski

50 Optimizing blocksizes for mat-mul Finding a Needle in a Haystack – So Automate

51 Goal 3 – Automate Performance Tuning Widely used in performance tuning of kernels Sca/LAPACK makes 1300 calls to ILAENV() to get block sizes, etc. –Never been systematically tuned Extend automatic tuning techniques of ATLAS, etc. to these other parameters –Automation important as architectures evolve Convert ScaLAPACK data layouts on the fly –Important for ease-of-use too
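A toy version of such a tuning loop: time a blocked matrix multiply over candidate block sizes and keep the fastest. The block sizes and problem size are arbitrary choices for illustration, not values from the project.

```python
import time
import numpy as np

def blocked_matmul(A, B, nb):
    """C = A @ B computed block-by-block with block size nb."""
    n = A.shape[0]
    C = np.zeros_like(A)
    for i in range(0, n, nb):
        for k in range(0, n, nb):
            for j in range(0, n, nb):
                C[i:i+nb, j:j+nb] += A[i:i+nb, k:k+nb] @ B[k:k+nb, j:j+nb]
    return C

n = 256
A = np.random.rand(n, n)
B = np.random.rand(n, n)
timings = {}
for nb in (16, 32, 64, 128):
    t0 = time.perf_counter()
    C = blocked_matmul(A, B, nb)
    timings[nb] = time.perf_counter() - t0
best = min(timings, key=timings.get)
print("best block size:", best)
```

ATLAS-style tuners run exactly this kind of search (over many more parameters) at install time; the ILAENV block sizes are the parameters such a search would replace.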

52 ScaLAPACK Data Layouts 1D Block Cyclic 1D Cyclic 2D Block Cyclic
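The 2D block-cyclic map itself is simple to write down. A sketch of the global-index-to-process mapping (the helper name `owner` is mine, not ScaLAPACK's):

```python
# For block size nb on a prow x pcol process grid, block (bi, bj) of the
# matrix lives on process (bi mod prow, bj mod pcol) -- the 2D block-cyclic
# distribution. 1D block-cyclic is the special case prow == 1 (or pcol == 1).
def owner(i, j, nb, prow, pcol):
    """Process-grid coordinates owning global matrix entry (i, j)."""
    return ((i // nb) % prow, (j // nb) % pcol)

# nb = 2 on a 2 x 2 grid: the 2x2 blocks of a 6x6 matrix cycle over 4 processes
print(owner(0, 0, 2, 2, 2), owner(0, 2, 2, 2, 2), owner(2, 0, 2, 2, 2))
# -> (0, 0) (0, 1) (1, 0)
```

Converting between 1D and 2D layouts on the fly (next slide) amounts to permuting blocks between the two instances of this map.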

53 Times obtained on 60 processors (dual AMD Opteron 1.4 GHz cluster with Myrinet interconnect, 2 GB memory) Speedups for using a 2D processor grid range from 2x to 8x Cost of redistributing from 1D to the best 2D layout: 1% - 10%

54 Goal 4: Improved Ease of Use Which do you prefer?

CALL PDGESV( N, NRHS, A, IA, JA, DESCA, IPIV, B, IB, JB, DESCB, INFO )

A \ B

CALL PDGESVX( FACT, TRANS, N, NRHS, A, IA, JA, DESCA, AF, IAF, JAF, DESCAF, IPIV, EQUED, R, C, B, IB, JB, DESCB, X, IX, JX, DESCX, RCOND, FERR, BERR, WORK, LWORK, IWORK, LIWORK, INFO )

55 Goal 4: Improved Ease of Use Easy interfaces vs access to details –Some users want access to all details, because Peak performance matters Control over memory allocation –Other users want “simpler” interface Automatic allocation of workspace No universal agreement across systems on “easiest interface” Leave decision to higher level packages Keep expert driver / simple driver / computational routines Add wrappers for other languages –Fortran95, Java, Matlab, Python, even C –Automatic allocation of workspace Add wrappers to convert to “best” parallel layout
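As an example of what such a wrapper can look like (a sketch built on SciPy's existing low-level `dgesv` binding, not the project's own wrappers), workspace allocation and error handling hide behind a Matlab-style call:

```python
import numpy as np
from scipy.linalg.lapack import dgesv

def backslash(A, b):
    """Matlab-style A \\ b on top of LAPACK's simple driver xGESV."""
    lu, piv, x, info = dgesv(A, b)   # the wrapper allocates all workspace
    if info != 0:
        raise np.linalg.LinAlgError(f"dgesv failed, info={info}")
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([[9.0], [8.0]])
x = backslash(A, b)                  # solves 3x + y = 9, x + 2y = 8
print(x.ravel())
```

The expert-driver analogue (xGESVX, with equilibration, condition estimates, and error bounds) would keep its long argument list underneath while the simple entry point stays one line, which is the layering the slide argues for.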

56 Goal 5: Better SW Engineering: What could go into Sca/LAPACK? For all linear algebra problems For all matrix structures For all data types For all programming interfaces Produce best algorithm(s) w.r.t. performance and accuracy (including condition estimates, etc) For all architectures and networks Need to prioritize, automate!

57 Goal 5: Better SW Engineering How to map multiple SW layers to emerging HW layers? How much better are asynchronous algorithms? Are emerging PGAS languages better? Statistical modeling to limit performance tuning costs, improve use of shared clusters Only some things understood well enough for automation now –Telescoping languages, Bernoulli, Rose, FLAME, … Research Plan: explore above design space Development Plan to deliver code (some aspects) –Maintain core in F95 subset –Friendly wrappers for other programming environments –Use variety of source control, maintenance, development tools

58 Goal 6: Involve the Community To help identify priorities –More interesting tasks than we are funded to do –See www.netlib.org/lapack-dev for list To help identify promising algorithms –What have we missed? To help do the work –Bug reports, provide fixes –Again, more tasks than we are funded to do –Already happening: thank you!

59 Fast Matrix Multiplication (1) (Cohn, Kleinberg, Szegedy, Umans) Can think of fast convolution of polynomials p, q as –Map p (and q) into the group algebra Σ_i p_i z^i ∈ C[G] of the cyclic group G = { z^i } –Multiply elements of C[G] (use divide & conquer = FFT) –Extract coefficients For matrix multiply, need a non-abelian group satisfying the triple product property –There are subsets X, Y, Z of G where xyz = 1 with x ∈ X, y ∈ Y, z ∈ Z implies x = y = z = 1 –Map matrix A into the group algebra via Σ_{x,y} A_{xy} x^{-1}y, and B via Σ_{y',z} B_{y'z} y'^{-1}z –Since x^{-1}y · y'^{-1}z = x^{-1}z iff y = y', we get Σ_y A_{xy} B_{yz} = (AB)_{xz} Search for fast algorithms reduced to search for groups with certain properties –Fastest algorithm so far is O(n^2.38), same as Coppersmith/Winograd

60 Fast Matrix Multiplication (2) (Cohn, Kleinberg, Szegedy, Umans) 1. Embed A, B in group algebra (exact) 2. Perform FFT (roundoff) 3. Reorganize results into new matrices (exact) 4. Multiply new matrices recursively (roundoff) 5. Reorganize results into new matrices (exact) 6. Perform IFFT (roundoff) 7. Extract C = AB from group algebra (exact)

61 Fast Matrix Multiplication (3) (Demmel, Dumitriu, Holtz, Kleinberg) Thm 1: Any algorithm of this class for C = AB is “numerically stable” –||C_comp − C|| ≤ c·n^d·ε·||A||·||B|| + O(ε²) –c and d are “modest” constants –Like Strassen Let ω be the exponent of matrix multiplication, i.e. no algorithm is faster than O(n^ω). Thm 2: For all η > 0 there exists an algorithm with complexity O(n^{ω+η}) that is numerically stable in the sense of Thm 1.
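Thm 1's error model can be checked empirically for the most familiar member of the "fast" class, Strassen's algorithm (not the group-theoretic construction itself). One recursion level in double precision, with a generous hand-picked constant standing in for c·n^d·ε:

```python
import numpy as np

def strassen1(A, B):
    """One level of Strassen's algorithm for even-dimension square A, B."""
    n = A.shape[0] // 2
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)
    C = np.empty_like(A)
    C[:n, :n] = M1 + M4 - M5 + M7
    C[:n, n:] = M3 + M5
    C[n:, :n] = M2 + M4
    C[n:, n:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(2)
A = rng.standard_normal((128, 128))
B = rng.standard_normal((128, 128))
err = np.linalg.norm(strassen1(A, B) - A @ B)
bound = 1e-12 * np.linalg.norm(A) * np.linalg.norm(B)  # stand-in for c n^d eps
print(err < bound)
```

The observed error is norm-wise small relative to ||A||·||B||, as the theorem predicts; note the bound is norm-wise, not the componentwise bound conventional multiplication satisfies.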

62 Conclusions Lots to do in Dense Linear Algebra –New numerical algorithms –Continuing architectural challenges Parallelism, performance tuning –Ease of use, software engineering Grant support, but success depends on contributions from community www.netlib.org/lapack-dev www.cs.berkeley.edu/~demmel

