Published by Philippa Wilkerson. Modified over 8 years ago.
Applications Scaling Panel
Jonathan Carter (LBNL)
Mike Heroux (SNL)
Phil Jones (LANL)
Kalyan Kumaran (ANL)
Piyush Mehrotra (NASA Ames)
John Michalakes (NCAR)
Nir Paikowsky (ScaleMP)
Bronis de Supinski (LLNL)
Trey White (ORNL)
Panel Question #1
1. Are there more appropriate domain-specific performance metrics for science and engineering HPC applications than the canonical "percent of peak" or "parallel efficiency and scalability"? If so, what are they? Are these metrics driving for weak scaling, strong scaling, or both?
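For reference, the canonical metrics the question names can be computed as follows. This is a minimal sketch with illustrative, made-up numbers, not measurements from any of the panelists' applications.

```python
def percent_of_peak(achieved_gflops, peak_gflops):
    """Fraction (in %) of the theoretical peak FLOP rate actually sustained."""
    return 100.0 * achieved_gflops / peak_gflops

def strong_scaling_efficiency(t1, tp, p):
    """Strong scaling: fixed problem size. Efficiency = speedup (T1/Tp) / p."""
    return (t1 / tp) / p

def weak_scaling_efficiency(t1, tp):
    """Weak scaling: problem size grows with p, so ideal Tp equals T1."""
    return t1 / tp

# Illustrative numbers: 120 GFLOP/s sustained on a 1000 GFLOP/s node
# is 12% of peak.
print(percent_of_peak(120.0, 1000.0))

# Illustrative numbers: 100 s on 1 process vs. 1.6 s on 64 processes
# gives a 62.5x speedup, i.e. ~97.7% strong-scaling efficiency.
print(strong_scaling_efficiency(100.0, 1.6, 64))
```

The debate the question raises is that both metrics can mislead: an application can post a high percent of peak on an unrepresentative kernel, or high parallel efficiency simply by running slowly everywhere, which is why domain-specific figures of merit (e.g., simulated time per wall-clock hour) are often proposed instead.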
Panel Question #2
2. Similar to the HPC "disruptive technologies" (memory, power, etc.) thought to be needed in many hardware roadmaps over the next decade to reach exascale, are there looming computational challenges (models, algorithms, software) whose resolution will be game-changing or required over the next decade?
Panel Question #3
3. What is the role of local (node-based) floating-point accelerators (e.g., Cell, GPUs) for key science and engineering applications in the next 3-5 years? Is there unexploited or unrealized concurrency in the applications you are familiar with? If so, what and where is it?
Panel Question #4
4. Should applications continue with current programming models (Fortran, C, C++, PGAS, etc.) and paradigms (e.g., flat MPI, hybrid MPI/OpenMP) over the next decade? If not, what needs to change?
Panel Question #5
5. How might HPC system-attribute priorities change over the next decade for the science and engineering applications you are familiar with? Attributes to consider: node peak flops; mean time to interrupt; wide-area network bandwidth; node memory capacity; local storage capacity; archival storage capacity; memory latency; interconnect latency; disk latency; interconnect bandwidth; memory bandwidth; and disk bandwidth.