Reinventing Explicit Parallel Programming for Improved Engineering of High Performance Computing Software

Anthony Skjellum, Purushotham Bangalore, Jeff Gray, Fei Cao, Barrett Bryant
University of Alabama at Birmingham, Department of Computer and Information Sciences
http://www.cis.uab.edu

ICSE 2004 W3S – Software Engineering for HPCS Applications
Premises of Position Paper

- Explicit parallel programming needs to be "reinvented"
- MPI, BSP, and their relatives are too low level, and too hard to develop and maintain against
- "Accidental complexity" results from their use
- Improve, rather than replace, explicit parallel programming
- Refactor middleware by capturing and orthogonalizing features, raising the level of abstraction, and exploring patterns
- Middleware only subtracts from scalability/performance, never adds to it (Brightwell's law), so let's add less middleware where possible
Premises of Position Paper

- Several software engineering areas show promise to help: OO, generic programming, generative programming, AOSD, MDA, MIC, component architectures, design patterns
- New notations should, where possible, map as well as or better to existing and emerging hardware and architectures
- Scalability and performance of the resulting applications remain key requirements
- Refocus certain academic effort away from reimplementing MPI-1.2 systems for lowest latency and highest bandwidth
Latest Motivator for Change – CRA Workshop Report

- Replacing an object-based MPI (an object-based standard) with another would not be a big step forward
- MPI programs are now looked on as a legacy-code "problem"
- Huge investment in MPI codes (e.g., at the US DOE)
- More MPI codes are being developed all the time

Source: Computing Research Association, "Report of the Workshop on the Roadmap for the Revitalization of High-End Computing," Daniel A. Reed (Ed.), Washington, D.C., June 16-18, 2003.
Some MPI Realities

- After 10 years, MPI training is still an issue (though MPI is widely taught at the basic level!)
- MPI is not necessarily used well
- Myths persist about how to use MPI correctly
- "Unsafe" MPI programs abound (see the sketch below)
- Codes tie themselves to the quirks of particular MPI implementations
- Some codes use ill-suited MPI features (e.g., ALLTOALL, IPROBE)
- Others fail to use advanced MPI features where they would help (e.g., ISEND/IRECV vs. SEND/RECV)
- Fallacies persist about posting receives early vs. late
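To make the "unsafe programs" point concrete, here is a minimal C sketch (ours, not from the slides) of the classic pairwise exchange. The blocking version is "unsafe" in the MPI sense: it is correct only if the implementation happens to buffer the sends, and it deadlocks for large messages. The nonblocking rewrite is portable.

```c
/* Sketch: an "unsafe" pairwise exchange and its safe nonblocking rewrite.
 * Assumes an even number of ranks. Build with, e.g.: mpicc exchange.c */
#include <mpi.h>
#include <stdlib.h>

#define N (1 << 20)

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size % 2 != 0) MPI_Abort(MPI_COMM_WORLD, 1); /* pairing needs even size */

    int peer = rank ^ 1;                  /* pair ranks 0-1, 2-3, ... */
    double *out = malloc(N * sizeof *out);
    double *in  = malloc(N * sizeof *in);
    for (int i = 0; i < N; i++) out[i] = rank;

    /* UNSAFE: every rank blocks in MPI_Send first. This deadlocks whenever
     * the message exceeds the implementation's internal buffering:
     *
     *   MPI_Send(out, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
     *   MPI_Recv(in,  N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
     */

    /* Safe: post both operations without blocking, then wait on both. */
    MPI_Request req[2];
    MPI_Irecv(in,  N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &req[0]);
    MPI_Isend(out, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &req[1]);
    MPI_Waitall(2, req, MPI_STATUSES_IGNORE);

    free(out); free(in);
    MPI_Finalize();
    return 0;
}
```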
Some Application Realities

- Parallel codes often lag their sequential counterparts and are more "rigid"
- Data distribution is conflated with the correctness of the parallel application (illustrated below)
- Parallel code lacks evolvability
- The MPI architecture within a code is set very early in parallelization, freezing in a specific machine, topology, MPI implementation, and the designers'/implementers' level of knowledge at that time
- Almost no fault tolerance exists in parallel codes
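The conflation point is easiest to see in code. In the hypothetical fragment below (names and decomposition invented for illustration), a 1-D block distribution is hard-wired into the kernel's loop bounds and boundary logic, so switching to a cyclic or 2-D distribution means rewriting and revalidating the numerical routine itself, not swapping out a "distribution" module.

```c
/* Hypothetical fragment: the data distribution entangled with the kernel.
 * Each rank owns a contiguous block of a global array of length GN.
 * u and unew have n_local + 2 entries; u[0] and u[n_local + 1] are ghost
 * cells filled by a halo exchange with ranks rank-1/rank+1 (omitted). */
#define GN 1000000

void smooth(const double *u, double *unew, int rank, int size)
{
    int n_local = GN / size;              /* block ownership baked in   */
    int lo = (rank == 0)        ? 2 : 1;  /* skip the global boundary   */
    int hi = (rank == size - 1) ? n_local : n_local + 1;

    /* The stencil, the ownership rule, and the neighbor topology are
     * inseparable here: changing the distribution invalidates every line. */
    for (int i = lo; i < hi; i++)
        unew[i] = 0.5 * (u[i - 1] + u[i + 1]);
}
```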
Outline for the "new approach"

- Called "Refactoring MPI"
- Demonstrate validity for clusters (multicomputers) and grids
- Design new applications based on Refactored MPI
- Study means to move legacy software forward, where appropriate (beyond the scope of the current paper)
Refactoring MPI

- Capture the semantics/features of:
  - MPI-1.2
  - MPI-2.1
  - DRI (Data Reorganization Interface)
  - MPI/RT (real-time MPI)
  - BSP
- Offer a simpler, consistent model for achieving these features in a unified, component-friendly environment
- Extract design patterns for communication and computation from these specifications (one such pattern is sketched below)
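As one illustration of the kind of pattern that recurs across these specifications, the C sketch below (every identifier is invented; this is not a proposed API) captures the setup/start/complete/teardown life cycle shared by MPI persistent requests, DRI reorganization plans, and BSP supersteps as a single component interface.

```c
/* Hypothetical component sketch: the planned-transfer pattern common to
 * MPI persistent requests, DRI reorganizations, and BSP supersteps.
 * All identifiers are illustrative, not part of any standard. */
#include <stddef.h>

typedef struct exchange exchange_t;   /* opaque handle: one planned exchange */

typedef struct {
    /* Plan the exchange once; arguments are validated and resources are
     * reserved here rather than on every iteration. */
    exchange_t *(*setup)(void *send_buf, void *recv_buf,
                         size_t bytes, int peer);
    int  (*start)(exchange_t *ex);     /* begin the transfer, nonblocking  */
    int  (*complete)(exchange_t *ex);  /* wait for local completion        */
    void (*teardown)(exchange_t *ex);  /* release the planned resources    */
} exchange_component_t;
```

The design point is that correctness-relevant decisions move to setup time, where a builder or code generator can check them once, instead of the middleware re-checking them on every call.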
Build new applications differently

- Move away from middleware library implementations plus header files, toward component libraries plus constraint specifications plus application builders (a constraint specification is sketched below)
- Look for ways to let "application builders" apply correctness semantics more optimistically than monolithic middleware currently allows
- Look for ways to build only the underlying communication fabric, and only the complexity, that a given application actually needs
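What might a "constraint specification" look like? Below is a hypothetical C sketch (all field names invented for illustration) of the properties an application could declare so that a builder generates only the communication machinery it needs.

```c
/* Hypothetical per-application constraint specification consumed by an
 * application builder. Every field name here is invented for illustration. */
#include <stddef.h>

typedef struct {
    int    max_peers_per_rank;       /* fixed neighbor count (4 for 2-D stencil) */
    int    uses_wildcard_source;     /* 0 => no ANY_SOURCE-style matching needed */
    int    needs_message_ordering;   /* 0 => builder may relax ordering          */
    int    needs_fault_notification; /* 1 => weave in fault-handling aspects     */
    size_t max_message_bytes;        /* bounds buffering decisions at build time */
} app_constraints_t;

static const app_constraints_t stencil_app = {
    .max_peers_per_rank       = 4,
    .uses_wildcard_source     = 0,
    .needs_message_ordering   = 0,
    .needs_fault_notification = 0,
    .max_message_bytes        = 1 << 20,
};
```

A builder reading this specification could omit wildcard matching and the unexpected-message queue entirely, machinery a monolithic MPI library must always carry "just in case".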
How can software engineering techniques help?

- MDA/MIC: capture the features of MPI and its relatives in a meta-model; define complete, orthogonal, minimal feature sets for applications needing middleware (minimize dead code)
- Principles from generative programming (cf. Czarnecki et al.) for generating code from (GME) models may be useful for creating actual code
- Ideally, design alternatives can then be compared in terms of how they express data transfer, including the concern of overlapping transfers with computation (the hand-written baseline is sketched below)
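The overlap concern in the last bullet is exactly the kind of design alternative a generator would have to weigh. As a baseline, this is the standard hand-written C idiom (ours, for illustration): post nonblocking halo transfers, compute on the interior while messages move, then wait and finish the halo-dependent points.

```c
/* Sketch: overlapping a 1-D halo exchange with interior computation.
 * u and unew have n_local + 2 entries; u[0] and u[n_local + 1] are ghost
 * cells. left/right are neighbor ranks (MPI_PROC_NULL at the boundary). */
#include <mpi.h>

void step(double *u, double *unew, int n_local, int left, int right)
{
    MPI_Request req[4];

    /* Post receives into the ghost cells and sends of the edge points. */
    MPI_Irecv(&u[0],           1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &req[0]);
    MPI_Irecv(&u[n_local + 1], 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &req[1]);
    MPI_Isend(&u[1],           1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &req[2]);
    MPI_Isend(&u[n_local],     1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &req[3]);

    /* Interior points need no halo data; compute them while messages move. */
    for (int i = 2; i < n_local; i++)
        unew[i] = 0.5 * (u[i - 1] + u[i + 1]);

    /* Wait, then finish the two points that depend on the ghost cells. */
    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
    unew[1]       = 0.5 * (u[0] + u[2]);
    unew[n_local] = 0.5 * (u[n_local - 1] + u[n_local + 1]);
}
```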
How can software engineering techniques help?

- AOSD is helpful with global (cross-cutting) concerns
- Today, global concerns are either implemented by the high-level programmer, and/or libraries make conservative assumptions about resources
- AOSD could help spread fault-handling behavior across the application and the automatically generated middleware
- AOSD could also help with instrumentation for performance understanding (at either the source or binary level); see the sketch below
- Specifically, co-aspecting Refactored MPI components and the application would allow high-level optimizations (e.g., less conservative assumptions about queues and message ordering)
- Aspecting the models used to generate parallel codes (two-level aspecting) is also of interest
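For the instrumentation bullet, the closest mechanism MPI offers today is the standard PMPI profiling interface: redefining an MPI call and forwarding to its PMPI_ twin intercepts every call site at link time, with no change to application source, which is the cross-cutting effect the slides want from AOSD, achieved by hand. A minimal timing wrapper:

```c
/* Sketch: an instrumentation "aspect" via the standard PMPI interface.
 * Linking this file intercepts every MPI_Send in the application.
 * (Signature shown with const as in MPI-3; older headers use void *.) */
#include <mpi.h>
#include <stdio.h>

static double total_send_time = 0.0;
static long   send_count      = 0;

int MPI_Send(const void *buf, int count, MPI_Datatype dt,
             int dest, int tag, MPI_Comm comm)
{
    double t0 = MPI_Wtime();
    int rc = PMPI_Send(buf, count, dt, dest, tag, comm); /* the real send */
    total_send_time += MPI_Wtime() - t0;
    send_count++;
    return rc;
}

int MPI_Finalize(void)
{
    int rank;
    PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
    fprintf(stderr, "[rank %d] %ld sends, %.3f s in MPI_Send\n",
            rank, send_count, total_send_time);
    return PMPI_Finalize();
}
```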
Summary

- The positions in this paper represent a new, early effort
- Explicit parallel programming needs to be improved
- New and recent software engineering methodologies help address this goal
- Goal: achieve scalability, performance, predictability, and productivity equal to or greater than MPI-1/MPI-2 programs
- Use AOSD, MDA, patterns, and componentized middleware design to support data-parallel and irregular parallel computations
- Reflect on the features of MPI's relatives (why they were designed, what verticals they addressed, why and how)
- Address dynamic resource issues (cf. grid and adaptive-application requirements)
- Achieve other benefits (flexibility, the ability to factor in fault aspects, other separations of concerns)