Contemporary Languages in Parallel Computing
Raymond Hummel
Current Languages
Standard Languages
- Distributed Memory Multiprocessors: MPI
- Shared Memory Multiprocessors: OpenMP, pthreads
- Graphics Processing Units: CUDA, OpenCL
MPI
Stands for: Message Passing Interface
Pros:
- Extremely Scalable
- Portable
- Can harness a multitude of hardware setups
Cons:
- Complicated Software
- Complicated Hardware
- Complicated Setup
MPI
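To make those trade-offs concrete, here is a minimal MPI program in C. This is a sketch, assuming an MPI implementation such as MPICH or Open MPI is installed (the mpicc/mpirun commands in the comment come from those toolchains); even a trivial program must start the runtime and track process ranks explicitly.

```c
/* Minimal MPI sketch: each process prints its rank, and rank 0
 * passes a token to rank 1 to show explicit message passing.
 * Build:  mpicc hello.c -o hello      Run:  mpirun -np 4 ./hello */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, token = 42;

    MPI_Init(&argc, &argv);                /* start the MPI runtime     */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID         */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    if (size > 1) {
        if (rank == 0)          /* send one int to rank 1, tag 0 */
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1) {
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Rank 1 received %d from rank 0\n", token);
        }
    }

    MPI_Finalize();                        /* shut down the runtime */
    return 0;
}
```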
OpenMP
Stands for: Open Multi-Processing
Pros:
- Incremental Parallelization
- Fairly Portable
- Simple Software
Cons:
- Limited Use-Case (shared-memory systems only)
OpenMP
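A minimal sketch of what incremental parallelization means in practice: the program below is ordinary serial C except for a single pragma. The -fopenmp flag is GCC's; other compilers have their own equivalents.

```c
/* Minimal OpenMP sketch: a serial SAXPY loop parallelized by a
 * single directive.  Build:  gcc -fopenmp saxpy.c -o saxpy */
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void)
{
    static float x[N], y[N];
    const float a = 2.0f;

    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* The only parallel-specific line in the program: iterations
     * are divided among however many threads the runtime provides. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f (max threads: %d)\n", y[0], omp_get_max_threads());
    return 0;
}
```

Remove the pragma and this is still a correct serial program, which is exactly why OpenMP can be adopted one loop at a time.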
POSIX Threads
Stands for: Portable Operating System Interface Threads
Pros:
- Portable
- Fine-Grained Control
Cons:
- All-or-Nothing (no incremental parallelization)
- Complicated Software
- Limited Use-Case (shared-memory systems only)
POSIX Threads
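For comparison, the same kind of worker-thread structure written directly against the pthreads API; a minimal sketch. Creation, argument passing, and joining are all manual, which is the price of the fine-grained control.

```c
/* Minimal pthreads sketch: spawn four workers and wait for them.
 * Build:  gcc workers.c -pthread -o workers */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static void *worker(void *arg)
{
    long id = (long)arg;               /* thread index, passed by value */
    printf("worker %ld running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    for (long i = 0; i < NTHREADS; i++)     /* explicit creation */
        pthread_create(&tid[i], NULL, worker, (void *)i);

    for (int i = 0; i < NTHREADS; i++)      /* explicit synchronization */
        pthread_join(tid[i], NULL);

    return 0;
}
```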
CUDA
Stands for: Compute Unified Device Architecture
Pros:
- Manufacturer Support
- Low-Level Hardware Access
Cons:
- Limited Use-Case
- Only Compatible with NVIDIA Hardware
CUDA
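A minimal CUDA C sketch (a SAXPY kernel), assuming NVIDIA hardware and the nvcc compiler. The __global__ function runs on the GPU across a grid of thread blocks, which is the kind of low-level hardware mapping the slide refers to.

```c
/* Minimal CUDA sketch.  Build:  nvcc saxpy.cu -o saxpy */
#include <stdio.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  /* global thread ID */
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    float *x, *y;

    cudaMallocManaged(&x, n * sizeof(float));  /* unified memory, visible */
    cudaMallocManaged(&y, n * sizeof(float));  /* to both CPU and GPU     */
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* Launch enough 256-thread blocks to cover all n elements. */
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();                   /* wait for the GPU */

    printf("y[0] = %f\n", y[0]);               /* expect 4.0 */
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```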
OpenCL
Stands for: Open Computing Language
Pros:
- Portability
- Heterogeneous Platforms (CPUs, GPUs, accelerators)
- Works with All Major Manufacturers
Cons:
- Complicated Software
- Special Tuning Required
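And a minimal OpenCL sketch in C for contrast: squaring eight numbers. Error handling is omitted for brevity, yet the host still has to walk through platform, device, context, queue, program, and buffer setup; that boilerplate is the "complicated software" cost of its portability.

```c
/* Minimal OpenCL sketch.  Build:  gcc square.c -lOpenCL -o square */
#include <CL/cl.h>
#include <stdio.h>

static const char *src =
    "__kernel void square(__global float *v) {"
    "    int i = get_global_id(0);"
    "    v[i] = v[i] * v[i];"
    "}";

int main(void)
{
    float data[8] = {0, 1, 2, 3, 4, 5, 6, 7};
    size_t n = 8;

    /* Boilerplate: pick a platform and device, build a context/queue. */
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* The kernel is compiled from source at run time, which is what
     * lets the same program target hardware from any manufacturer. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "square", NULL);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);
    clSetKernelArg(k, 0, sizeof(buf), &buf);
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data,
                        0, NULL, NULL);

    for (int i = 0; i < 8; i++)
        printf("%g ", data[i]);   /* expect 0 1 4 9 16 25 36 49 */
    printf("\n");
    return 0;
}
```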
Future Languages
Developing Languages
- D
- Rust
- Harlan
D
- Performance of Compiled Languages
- Memory Safety
- Expressiveness of Dynamic Languages
- Includes a Concurrency-Aware Type System
- Nearing Maturity
Rust
- Designed for creating large client-server programs that run on the Internet
- Focused on Safety, Memory Layout, and Concurrency
- Still undergoing major changes
Harlan
- Experimental language based on Scheme
- Designed to take care of the boilerplate of GPU programming
- Could be expanded to schedule work automatically across both CPU and GPU, depending on available resources
Questions?