Models and Languages for Parallel Computation
D. Skillicorn and D. Talia, ACM Computing Surveys, Vol. 30, No. 2, June 1998
Presented by Russell Zuck, June 28, 2005
Overview
- Motivation
- Difficulties of parallel programming
- Evaluation criteria
- Categories of parallel programming models
- Summary of trends in model development
- A related perspective and observation
Motivation
- What programming models are out there?
- How well do they help solve the problems inherent in parallel programming?
- How difficult are they to use?
- What types of architectures do they target?
Difficulties of Parallel Programming
- Many architectures, most of them special purpose; no standards
- Many languages and tools; optimizers don't work well
- Wide range of programming paradigms
- Lack of skilled programmers ("parallel thinking")
- Lack of a substantial market
Evaluation Criteria
- Easy to program: a software development methodology should cover
  - Decomposition
  - Mapping
  - Communication
  - Synchronization
- Software development methodology
  - How do you determine correctness?
  - Sequential testing and debugging techniques don't extend well to parallel systems
  - Large state space to test, due to the extra degree of freedom
  - Testing is limited to a couple of architectures
Evaluation Criteria (continued)
- Architecture independence
- Easy to understand: can a large number of people become proficient at it?
- Guaranteed performance: how much will execution performance differ from one architecture to another?
- Cost measures: execution time, processor utilization, development costs
Parallel Programming Models
- Models differ in the degree to which concepts are abstracted away from the programmer:
  - Parallelism
  - Decomposition
  - Mapping
  - Communication
  - Synchronization
- The ideal model: the programmer is not required to be explicitly involved in any of the above operations
Nothing Explicit
- Complete abstraction from parallel operations
- These exist only as "we are nearly there" types of languages
- Examples: optimizing compilers; Haskell, a higher-order functional language
Parallelism Explicit
- The programmer must identify which computations can run in parallel
- Expressed via library functions or syntax extensions
- Example: Fortran
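To make the distinction concrete, here is a minimal Python sketch (not the Fortran the slide cites) of the "parallelism explicit" level: the programmer marks the parallel map via a library call, while decomposition, mapping, and scheduling are left to the runtime.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

# The parallel map below is the only thing the programmer states
# explicitly; the executor handles decomposition and scheduling.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```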
Decomposition Explicit
- The programmer must specify the division of the problem into parallelizable pieces
- Expressed via library functions
- Example: BSP (Bulk Synchronous Parallelism)
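A BSP computation alternates local-computation "supersteps" with bulk barrier synchronizations. The following Python sketch (threads and a barrier, standing in for a real BSP library) shows the programmer explicitly decomposing the data into per-worker chunks:

```python
import threading

NPROC = 4
data = list(range(16))

# Decomposition is explicit: the programmer divides the data into chunks.
chunks = [data[p::NPROC] for p in range(NPROC)]
partial = [0] * NPROC
barrier = threading.Barrier(NPROC)

def worker(pid):
    # Superstep: purely local computation on this worker's chunk...
    partial[pid] = sum(chunks[pid])
    # ...followed by a bulk synchronization ending the superstep.
    barrier.wait()

threads = [threading.Thread(target=worker, args=(p,)) for p in range(NPROC)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = sum(partial)
print(total)  # 120
```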
Mapping Explicit
- The programmer must specify the distribution of program pieces across processors
- Examples: RPC; Paralf, an annotated functional language
Communication Explicit
- The programmer is responsible for interprocessor communication
- Example: ABCL/1
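At this level, every value that moves between workers is sent and received by hand. A minimal Python sketch (a thread-safe queue standing in for ABCL/1's message passing):

```python
import queue
import threading

# Communication is explicit: the programmer writes each send/receive pair.
channel = queue.Queue()

def producer():
    channel.put([1, 2, 3])   # explicit send

t = threading.Thread(target=producer)
t.start()
msg = channel.get()          # explicit receive (blocks until data arrives)
t.join()
print(msg)  # [1, 2, 3]
```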
Everything Explicit
- Almost nothing is abstracted away from the programmer
- Examples: Orca, PRAM, MPI
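The following Python sketch is loosely modeled on MPI-style point-to-point messaging (it is not MPI's actual API): worker creation, the mapping of work onto "ranks", and every communication are all written by hand.

```python
import queue
import threading

# Point-to-point channel from rank 0 to rank 1, plus a result channel.
ch01 = queue.Queue()
out = queue.Queue()

def rank0():
    ch01.put(42)             # explicit send to rank 1

def rank1():
    value = ch01.get()       # explicit receive from rank 0
    out.put(value + 1)

# Worker creation and the mapping of code to ranks are also explicit.
ranks = [threading.Thread(target=rank0), threading.Thread(target=rank1)]
for r in ranks:
    r.start()
for r in ranks:
    r.join()

result = out.get()
print(result)  # 43
```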
Summary of Trends
- So which is the best compromise? As with most things, the middle of the road seems best: a model with a medium amount of abstraction from the programmer
- Trends in model development:
  - Work on models with little abstraction is diminishing
  - Effort is concentrating on models with midrange abstraction, which conceal some aspects of parallelism while preserving expressiveness
  - Some hope still resides with highly abstract models... don't hold your breath too long!
Another Perspective
- Speculation about the future
- One solution: as parallel machines regain popularity, more manufacturers will be willing to produce them; eventually a handful of standard architectures will emerge to cover SIMD, shared-memory MIMD, and fixed-connection MIMD
- Alternate solution: use one or two development languages and develop virtual machines (middleware) for each type of architecture, similar to the Java paradigm
[Diagram: one programming language targets three virtual machines (VM1, VM2, VM3), which map onto shared-memory MIMD, fixed-connection MIMD, and SIMD architectures, respectively.]