Slide 1: Compiler Requirements and Directions
Experts in numerical algorithms and HPC services
Rob Meyer, September 10, 2009
Slide 2: NAG in 30 Seconds
- What We Do: High performance computing user support; mathematical, statistical, and data analysis components
- Users: Enterprise/ISV developers; analysts/researchers/modelers
- Strengths: Algorithmic content; software engineering expertise; compilers & tools
- Customers: UK research community; ISVs (finance, science, retail analytics, etc.); finance, pharmaceuticals, energy, etc.
- Offices: UK (Oxford, Manchester), US (Chicago), Japan (Tokyo), Greater China (Taipei)
- Origins: Started 1970 from four British universities; not-for-profit (no shareholders); 80+ staff, >50% technical, 25 PhDs
Slide 3: Q1. Language support for HPC
- Provocative short answer – we aren't
- Longer, politically correct answer – but we're working on it:
  - Standard-compliant Fortran compiler using the latest standard (2003)*
  - OpenMP library for nodes
  - MPI-based library for loosely coupled systems
  (a minimal sketch of the node/cluster split follows this slide)
- No magic wands (really) – managing communications is the key to scalability
- PGAS languages and other vendor libraries/tools will help, but application design (for communications) and data distribution will have a profound impact on scalability
- MPI, hard as it is, survives for a reason
- If this was easy we'd be playing golf (or hanging out in the pub)
* But co-arrays will be part of Fortran 2008
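The node-versus-cluster split in this list can be made concrete with a small hybrid program. This is a hedged, purely illustrative sketch in C (not NAG code; the data values and array size are dummies): an OpenMP reduction handles the shared-memory work within a node, and a single MPI collective combines the per-rank results across the loosely coupled part of the system.

```c
/* Minimal sketch (not NAG code): hybrid MPI + OpenMP reduction.
 * OpenMP parallelizes the loop within a node; MPI combines the
 * partial results across ranks. Data and sizes are arbitrary. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Each rank owns a local chunk of data (dummy values here). */
    enum { N_LOCAL = 1000000 };
    static double x[N_LOCAL];
    for (int i = 0; i < N_LOCAL; ++i)
        x[i] = 1.0 / (double)(rank * N_LOCAL + i + 1);

    /* Node-level parallelism: OpenMP threads share the local sum. */
    double local_sum = 0.0;
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < N_LOCAL; ++i)
        local_sum += x[i];

    /* Cross-node parallelism: one collective combines all ranks. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f (%d ranks)\n", global_sum, nprocs);

    MPI_Finalize();
    return 0;
}
```

Built and launched in the usual way (e.g. mpicc with OpenMP enabled, then mpirun), all inter-node traffic in this sketch is confined to the single MPI_Reduce call, which is exactly the kind of communication discipline the slide stresses.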
Slide 4: Q2. Performance Optimization
- Provocative short answer – we aren't (again)
- Longer, politically correct answer – we're (still) working on it
- Inter-process communication is still the critical issue:
  - Hardware providers keep communication costs low
  - Programmers reduce communications (see the sketch after this slide)
- Dynamic optimization? Potentially helpful, along with a range of other techniques
- Hardware-resource-aware middleware?
- Ultimately, massively parallel performance will still depend more on the programmer than the compiler
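To illustrate "programmers reduce communications", here is a hedged sketch (the function names and the choice of eight quantities are invented): batching several per-iteration scalars into one collective replaces many latency-bound calls with a single one, which is often the first and cheapest communication optimization available to the application author rather than the compiler.

```c
/* Hedged sketch (names and quantity count invented): reducing eight
 * scalars one at a time costs eight latency-bound collectives;
 * packing them into one buffer costs a single collective. */
#include <mpi.h>
#include <stdio.h>

#define NQUANTITIES 8   /* e.g. residual norms, energies, ... */

/* Costly pattern: one collective call per scalar. */
static void reduce_one_by_one(double local[NQUANTITIES],
                              double global[NQUANTITIES])
{
    for (int i = 0; i < NQUANTITIES; ++i)
        MPI_Allreduce(&local[i], &global[i], 1, MPI_DOUBLE,
                      MPI_SUM, MPI_COMM_WORLD);
}

/* Cheaper pattern: one collective over the whole packed buffer. */
static void reduce_batched(double local[NQUANTITIES],
                           double global[NQUANTITIES])
{
    MPI_Allreduce(local, global, NQUANTITIES, MPI_DOUBLE,
                  MPI_SUM, MPI_COMM_WORLD);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    double local[NQUANTITIES], global[NQUANTITIES];
    for (int i = 0; i < NQUANTITIES; ++i)
        local[i] = (double)i;           /* dummy per-rank values */

    reduce_one_by_one(local, global);   /* NQUANTITIES collectives */
    reduce_batched(local, global);      /* one collective */

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        printf("global[0] = %f\n", global[0]);

    MPI_Finalize();
    return 0;
}
```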
Slide 5: Q3. Support for heterogeneous cores?
- A series of partial answers:
  - OpenMP support for conventional cores
  - Linking to GPUs for appropriate code
  - Have built/tested equivalent routines for GPUs (a dispatch sketch follows this slide)
- Rhetorical question – what if we are successful?
  - Portability of code?
  - Complexity of code?
  - Correctness of results?
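The portability/complexity/correctness questions can be pictured with a dispatch skeleton. Everything below is a hypothetical stub, not NAG's actual design or any vendor API: a single entry point hides whether the "equivalent routine" runs on conventional cores or on a GPU, which is precisely where those questions bite.

```c
/* Hedged sketch (invented stubs only): one public entry point
 * dispatching between equivalent host and GPU routines.  A real
 * GPU path would call CUDA/vendor code rather than this stub. */
#include <stdio.h>
#include <stddef.h>

/* Hypothetical back ends: scale a vector in place. */
static void scale_host(double *x, size_t n, double a)
{
    for (size_t i = 0; i < n; ++i)      /* could be an OpenMP loop */
        x[i] *= a;
}

static void scale_gpu(double *x, size_t n, double a)
{
    /* Stub standing in for a kernel launch plus copy-back. */
    scale_host(x, n, a);
}

static int gpu_available(void)          /* hypothetical runtime probe */
{
    return 0;
}

/* Callers see one interface; the back-end choice is hidden, which is
 * where portability, code complexity and "do both paths give the
 * same answers?" all become the library's problem. */
static void scale(double *x, size_t n, double a)
{
    if (gpu_available() && n > 100000)  /* offload only when it pays */
        scale_gpu(x, n, a);
    else
        scale_host(x, n, a);
}

int main(void)
{
    double x[4] = {1.0, 2.0, 3.0, 4.0};
    scale(x, 4, 2.0);
    printf("%g %g %g %g\n", x[0], x[1], x[2], x[3]);
    return 0;
}
```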
Slide 6: Q4. Tools for serial code/parallel novices?
- Not likely (that we'll do it)
- Not likely to be successful in a broad, sustained fashion
- Would we be better off using a community of experts to parallelize code and training the owners, rather than trying to develop "HPC for Dummies" tools?
Slide 7: Q5. Support for pre-compiled libraries?
- Our own:
  - Expanding the number of routines optimized for multi/many-core
  - Linking to Intel, AMD, etc. vendor libraries from our code where they have optimized for multi-core (see the sketch after this slide)
  - Identifying options for updating/re-inventing the MPI-based library
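A concrete (and hedged) picture of linking to vendor-optimized libraries beneath a stable interface: the call below goes through the standard CBLAS dgemm entry point, so the same source can be linked against MKL or any other multi-core-optimized BLAS. Whether a particular vendor library ships a cblas.h header is an assumption here; some expose a different C interface.

```c
/* Illustrative only: application code written against the standard
 * CBLAS interface, so the underlying multi-core BLAS can be swapped
 * at link time without changing the caller. */
#include <stdio.h>
#include <cblas.h>   /* provided by the chosen BLAS implementation */

int main(void)
{
    /* C = alpha*A*B + beta*C with small 2x2 matrices (row-major). */
    double A[4] = {1.0, 2.0,
                   3.0, 4.0};
    double B[4] = {5.0, 6.0,
                   7.0, 8.0};
    double C[4] = {0.0, 0.0,
                   0.0, 0.0};

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2,        /* M, N, K */
                1.0, A, 2,      /* alpha, A, lda */
                B, 2,           /* B, ldb */
                0.0, C, 2);     /* beta, C, ldc */

    printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]);
    return 0;
}
```

Any correct BLAS prints C = [19 22; 43 50]; only the performance changes with the implementation linked in.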
Slide 8: Some last thoughts
- The landscape is getting more (not less) complex:
  - Threading tools
  - Languages
  - Processor options
- Another view: if our objective is to produce more useful results in a given amount of time, given
  - hardware & languages
  - user software
  - user expertise
  - vendor/community software tools
  - vendor/community people expertise
  ...are we putting too much emphasis on compilers and tools? Should we put more emphasis on training & supporting users?