Slide 1: Real Acceleration for Mathematica®
Simon McIntosh-Smith, VP of Applications, ClearSpeed Technology
simon@clearspeed.com
Wolfram Technology Conference, 12th October 2007
www.clearspeed.com – ENVISION. ACCELERATE. ARRIVE.
Copyright © 2006 ClearSpeed Technology plc. All rights reserved.
Slide 2: Agenda
– Introduction
– Accelerators
– ClearSpeed math acceleration technology
– Accelerating Mathematica
– Summary
Slide 3: Introduction
Slide 4: Introduction
– Mathematica® is being used to solve more and more computationally intensive problems
– General-purpose CPUs keep getting faster, but a new wave of application accelerators is emerging that could give much greater performance
  – Much as GPUs have done for graphics
– ClearSpeed has been developing hardware accelerators specifically focused on scientific computing, which accelerate the low-level math libraries used by Mathematica
Slide 5: Accelerators
Slide 6: Accelerator technologies
Visualization and media processing
– Good for graphics, video, game physics, speech, …
– Graphics Processing Units (GPUs) are well established in the mainstream
– But there was a time not too long ago when your PC still did all its graphics in software on the main CPU…
– Can be applied to some 32-bit applications today (64-bit support is coming, at much lower speed), but GPUs are currently fairly hard to program and very power hungry – around 200 W!
Embedded content processing
– Data mining, encryption, XML, compression
– Field Programmable Gate Arrays (FPGAs) are often used here, mainly to accelerate integer-intensive codes
– FPGAs are poor at floating point, especially 64-bit, and cut corners on precision, so accuracy suffers
– Very hard to program and to get good performance from
Slide 7: Accelerator technologies (continued)
Math accelerators
– Mostly floating point; 64-bit performance is crucial, with high precision and true IEEE 754 floating point
– Can accelerate numerically intensive applications in finance, oil and gas, economics, electromagnetics, bioinformatics, and many, many more
– This is what ClearSpeed has developed
To accelerate Mathematica, a true math accelerator is needed…
Slide 8: The other benefit of accelerators – low power
– Running 1 watt for one year costs about $1 (worked through in the sketch after this slide)
– Modern CPUs can consume around 100 W
  – $100/year running cost for the CPU alone if used 24/7
  – Significant associated CO2 emissions
– Accelerators typically bring significant performance-per-watt gains
  – Examples later in this presentation show 1 CPU plus a 25 W ClearSpeed board running as fast as a 4-CPU (8-core) machine
  – That power consumption reduction of around 275 W, if applied 24/7, is roughly a $275 annual energy cost saving
  – Not to mention how much smaller and quieter the accelerated system can be…
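Not part of the original deck: a minimal Mathematica sketch of the arithmetic behind the "$1 per watt-year" rule of thumb and the ~$275 saving above. The electricity price of roughly $0.11 per kWh is an assumption of mine, not a figure from the slides.

  pricePerKWh = 0.11;                      (* assumed electricity price in $/kWh *)
  hoursPerYear = 24*365;                   (* 8,760 hours of 24/7 operation *)
  dollarsPerWattYear = (1/1000.) hoursPerYear pricePerKWh
  (* ≈ 0.96, i.e. roughly $1 per watt per year *)
  annualSaving = (275/1000.) hoursPerYear pricePerKWh
  (* ≈ 265, in the same ballpark as the ~$275 quoted on the slide *)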
Slide 9: ClearSpeed's Math Acceleration Technology
Slide 10: What are ClearSpeed's products?
Math accelerator boards: ClearSpeed Advance™ e620 & X620
– Dual ClearSpeed CSX600 coprocessors
– R∞ ≈ 66 GFLOPS for 64-bit matrix multiply (DGEMM) calls (see the sketch after this slide)
  – The hardware also supports 32-bit floating point
– PCI Express x8 and 133 MHz PCI-X 2/3rds support
– 1 GByte of memory on the board
– Linux drivers today for Red Hat and SUSE
– Low power: 25 to 33 watts
Significantly accelerates the low-level math library used by Mathematica (MKL)
– Target functions: Level 3 BLAS and LAPACK
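Not from the deck: one rough way to check the sustained DGEMM rate Mathematica obtains through MKL (and CSXL when installed) is to time a large real matrix multiply and apply the standard 2·n³ flop count. The matrix size is arbitrary, and this assumes a Mathematica version whose AbsoluteTiming returns plain seconds.

  n = 4000;                                (* arbitrary, large enough to hit the DGEMM path *)
  a = Table[Random[], {n}, {n}];
  t = First[AbsoluteTiming[Dot[a, a];]];   (* wall-clock seconds for the multiply *)
  gflops = 2. n^3/(t 10^9)                 (* rough sustained 64-bit GFLOPS estimate *)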
Slide 11: Which MKL functions can ClearSpeed accelerate?
Previous release – CSXL 2.51 and before:
– L3 BLAS: large matrix arithmetic (preferably at least 1,000 on a side):
  – DGEMM – real matrix multiply
– LAPACK: factorize and solve for large systems of linear equations (see the sketch after this slide):
  – LU (DGETRF)
New release – CSXL 2.52:
– L3 BLAS:
  – ZGEMM – complex matrix multiply
  – DTRSM – triangular solve
  – Future release: DTRMM, DSYRK and others
– LAPACK:
  – LU (DGETRS)
  – QR (DGEQRF, DORGQR & DORMQR)
  – Cholesky (DPOTRF & DPOTRS)
  – Future release: complex versions of the above
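Not from the deck: a sketch of the kind of large dense solve that exercises the LU routines (DGETRF/DGETRS) listed above. Which MKL or CSXL routines are actually invoked is decided inside Mathematica and the libraries; the size is arbitrary but kept above the "at least 1,000 on a side" guideline.

  n = 2000;                                (* above the ~1,000-per-side guideline *)
  a = Table[Random[], {n}, {n}];           (* dense coefficient matrix *)
  b = Table[Random[], {n}];                (* right-hand side *)
  AbsoluteTiming[LinearSolve[a, b];]       (* LU factorize and solve *)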
Slide 12: Software development kit (SDK)
– C compiler with vector extensions (ANSI C-based commercial compiler), assembler, libraries, ddd/gdb-based debugger, newlib-based C run-time library, etc.
– ClearSpeed Advance development boards
– Available for Linux and Windows
Slide 13: Accelerating Mathematica
Slide 14: Mathematica uses libraries underneath
[Software stack diagram: Mathematica calls a BLAS & LAPACK library (Intel's MKL) in software, which runs on the CPU in hardware]
Slide 15: Mathematica using accelerated libraries
[Software stack diagram: Mathematica calls Intel's MKL together with ClearSpeed's CSXL library in software, running on the CPU and the ClearSpeed Advance™ board in hardware]
Slide 16: Plug-and-Play – no changes to your notebooks
– Mathematica has used MKL since v5.2
– ClearSpeed provides a modified kernel
  – Uses a modified "math" script that launches the kernel
  – Sets the library path to pick up CSXL as well as MKL
– Functions supported in Mathematica today include:
  – Dot[]
  – Det[]
  – LUDecomposition[]
  – LinearSolve[]
  – Inverse[]
  – CholeskyDecomposition[] – new!
  – QRDecomposition[] – new!
– If your notebooks spend a high percentage of total runtime in these functions, and a lot of time in each call to them, then you may have a candidate for ClearSpeed acceleration (see the sketch after this slide)!
– It is very likely that other functions are also accelerated
  – If you find more, let us know!
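Not from the deck: a quick way to judge whether a notebook is a candidate is to time a few of the supported functions at problem sizes representative of your own work and see how long each call takes. The size below is arbitrary.

  n = 3000;                                (* pick a size representative of your notebook *)
  a = Table[Random[], {n}, {n}];
  AbsoluteTiming[Dot[a, a];]               (* real matrix multiply *)
  AbsoluteTiming[LUDecomposition[a];]      (* LU factorization *)
  AbsoluteTiming[Inverse[a];]              (* matrix inversion *)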
Slide 17: What kind of notebooks could be accelerated?
– ClearSpeed has been collaborating with ScienceOps to discover what kinds of problems are accelerated
– Early results show a good breadth of applications being accelerated
  – Performance improvements
  – Ability to run larger problem sets
– Initial results show speedups ranging from 2–5X
Slide 18: Example notebooks
– Benchmarked on a fast server for comparison: 4 processors, each dual core (8 cores total), AMD Opteron 870 (2 GHz) with 32 GBytes of memory, running Linux RHEL4-64
– Comparisons are between:
  – using 2 Opteron cores on their own,
  – using all 8 Opteron cores on their own, and
  – using 2 Opteron cores with a single ClearSpeed Advance accelerator board
– We haven't yet re-benchmarked these notebooks on our latest release or on the new PCI Express version of our board, both of which should increase performance
Slide 19: Example notebook descriptions
ANOVA
– Analysis of variance: a linear least-squares minimisation, fitting a curve to sampled data (see the sketch after this slide)
Microarray
– Microarray data analysis: determines coexpression networks – sets of genes that are commonly expressed together under different experimental conditions. Calculates distance metrics
ImageDecode
– Progressive decoding of images using the Haar wavelet transform; grayscale images are used in this example
Spatial Auto Regression (SAR)
– Simple regressions iterating on large, dense matrices
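Not from the deck: a toy least-squares fit in the spirit of the ANOVA notebook, solved via the normal equations so that the heavy work lands in Dot[] and LinearSolve[], two of the accelerated functions. The sizes and data are invented, and the real notebook's method may well differ.

  m = 5000; p = 400;                       (* invented numbers of samples and predictors *)
  x = Table[Random[], {m}, {p}];           (* design matrix *)
  y = Table[Random[], {m}];                (* sampled data to fit *)
  beta = LinearSolve[Dot[Transpose[x], x], Dot[Transpose[x], y]];   (* normal equations *)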
Slide 20: Example – ANOVA
– The ANOVA notebook benefits from a 2X speedup with 4,000 predictors
– Two cores with a ClearSpeed accelerator are equivalent in performance to an eight-core machine!
Slide 21: Example – Microarray
– The Microarray notebook benefits from nearly a 3X speedup with 4,000 inputs
– Larger problems may receive even more speedup
  – Data sets with over 6,000 expression levels exist for yeast
Slide 22: Example – ImageDecode
– ImageDecode notebook speedup ranges from 2–3X depending on the image size
– When tuned, this speedup should also be achieved for images around 960x960 in size (already around 1.6X)
Slide 23: Example – Spatial Auto Regression
– SAR notebook speedup is nearly 2X
– Larger problems should receive even more speedup
  – Run-times are quite substantial too
Slide 24: New CholeskyDecomposition[] performance
  A = Table[Random[], {n}, {n}];
  B = Dot[Transpose[A], A];                (* symmetric positive definite test matrix *)
  Clear[A];
  AbsoluteTiming[CholeskyDecomposition[B];]
Slide 25: New QRDecomposition[] performance
  A = Table[Random[], {n}, {n}];
  B = Dot[Transpose[A], A];                (* square test matrix *)
  Clear[A];
  AbsoluteTiming[QRDecomposition[B];]
Slide 26: New complex Dot[] performance
  A = Table[Complex[1.5, 1.5], {n}, {n}];  (* constant complex test matrix *)
  AbsoluteTiming[Dot[Transpose[A], A];]
Slide 27: The Challenge
– Mathematica does a great job of choosing the right method for the right problem…
– …which makes it hard to know which method is going to be used and when!
– Consequently it's proving very difficult to know in advance what is going to be accelerated and by how much
– Call to action: can you think of any applications that should be significantly accelerated by ClearSpeed?
Slide 28: Summary
Slide 29: Summary
– Accelerators can be used to significantly increase performance and performance per watt across a range of interesting applications in Mathematica
– You need a real 64-bit math accelerator for Mathematica to deliver the precision you depend upon
– ClearSpeed can accelerate notebooks making intensive use of Dot[], Det[], LUDecomposition[], LinearSolve[], Inverse[], CholeskyDecomposition[] and QRDecomposition[]
  – More in the future as the libraries are developed
– Plug-and-play – no changes to your notebooks
– What could you do if you added 66 GFLOPS of matrix-crunching power to your Mathematica performance?