Slide 1: Towards a Science of Parallel Programming
Keshav Pingali, University of Texas, Austin

Slide 2: Problem Statement

The community has worked on parallel programming for more than 30 years:
– programming models
– machine models
– programming languages
– ...
However, parallel programming is still a research problem:
– matrix computations, stencil computations, FFTs, etc. are well understood
– each new application is a "new phenomenon"; there are few insights for irregular applications
Thesis: we need a science of parallel programming
– analysis: a framework for thinking about the parallelism in an application
– synthesis: a way to produce an efficient parallel implementation of the application

[Image: "The Alchemist", Cornelius Bega (1663)]

Slide 3: Analogy: the science of electromagnetism

Seemingly unrelated phenomena → unifying abstractions → specialized models that exploit structure.

Slide 4: Organization of talk

Seemingly unrelated parallel algorithms and data structures
– stencil codes
– Delaunay mesh refinement
– event-driven simulation
– graph reduction of functional languages
– ...
Unifying abstractions
– operator formulation of algorithms
– amorphous data-parallelism
– Galois programming model
– baseline parallel implementation
Specialized implementations that exploit structure
– structure of algorithms
– optimized compiler and runtime system support for different kinds of structure
Ongoing work

Slide 5: Some parallel algorithms

Slide 6: Examples

Application/domain          Algorithm
Meshing                     Generation/refinement/partitioning
Compilers                   Iterative and elimination-based dataflow algorithms
Functional interpreters     Graph reduction, static and dynamic dataflow
Maxflow                     Preflow-push, augmenting paths
Minimal spanning trees      Prim, Kruskal, Boruvka
Event-driven simulation     Chandy-Misra-Bryant, Jefferson Timewarp
AI                          Message-passing algorithms
Stencil computations        Jacobi, Gauss-Seidel, red-black ordering
Sparse linear solvers       Sparse MVM, sparse Cholesky factorization

Slide 7: Stencil computation: Jacobi iteration

Finite-difference method for solving PDEs
– discrete representation of the domain: a grid
Values at interior points are updated using values at their neighbors
– values at boundary points are fixed
Data structure:
– dense arrays
Parallelism:
– values at all interior points can be computed simultaneously
– parallelism is not dependent on input values
A compiler can find the parallelism
– the spatial loops are DO-ALL loops

    // Jacobi iteration with 5-point stencil
    // initialize array A
    for time = 1, nsteps
      for (i,j) in [2,n-1]x[2,n-1]
        temp(i,j) = 0.25*(A(i-1,j)+A(i+1,j)+A(i,j-1)+A(i,j+1))
      for (i,j) in [2,n-1]x[2,n-1]
        A(i,j) = temp(i,j)

[Figure: 5-point stencil at (i,j) with neighbors (i-1,j), (i+1,j), (i,j-1), (i,j+1)]
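To make the pseudocode above concrete, here is a minimal sequential Java sketch of the 5-point Jacobi update. The grid size, step count, and boundary values are illustrative assumptions, not taken from the slide.

    // Minimal sequential sketch of the 5-point Jacobi stencil from the slide.
    // Grid size, step count, and initialization are illustrative assumptions.
    public class JacobiSketch {
        public static void main(String[] args) {
            int n = 64, nsteps = 100;
            double[][] a = new double[n][n];
            double[][] temp = new double[n][n];
            // Fixed boundary condition (assumed): top edge held at 1.0.
            for (int j = 0; j < n; j++) a[0][j] = 1.0;

            for (int t = 0; t < nsteps; t++) {
                // Every interior point can be updated independently:
                // this loop nest is a DO-ALL loop in a parallel implementation.
                for (int i = 1; i < n - 1; i++)
                    for (int j = 1; j < n - 1; j++)
                        temp[i][j] = 0.25 * (a[i-1][j] + a[i+1][j] + a[i][j-1] + a[i][j+1]);
                // Copy back interior values; boundary stays fixed.
                for (int i = 1; i < n - 1; i++)
                    System.arraycopy(temp[i], 1, a[i], 1, n - 2);
            }
            System.out.println("center value = " + a[n/2][n/2]);
        }
    }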

Slide 8: Delaunay Mesh Refinement

Iterative refinement to remove badly shaped triangles:

    while there are bad triangles do {
      pick a bad triangle;
      find its cavity;
      retriangulate cavity;   // may create new bad triangles
    }

Don't-care non-determinism:
– the final mesh depends on the order in which bad triangles are processed
– applications do not care which mesh is produced
Data structure:
– graph in which nodes represent triangles and edges represent triangle adjacencies
Parallelism:
– bad triangles whose cavities do not overlap can be processed in parallel
– the parallelism is very input-dependent, so compilers cannot determine it
– (Miller et al.) at runtime, repeatedly build an interference graph and find maximal independent sets for parallel execution

    Mesh m = /* read in mesh */
    WorkList wl;
    wl.add(m.badTriangles());
    while (!wl.empty()) {
      Element e = wl.get();
      if (e no longer in mesh) continue;
      Cavity c = new Cavity(e);   // determine new cavity
      c.expand();
      c.retriangulate();          // re-triangulate region
      m.update(c);                // update mesh
      wl.add(c.badTriangles());
    }

Slide 9: Event-driven simulation

Stations communicate by sending messages with time-stamps on FIFO channels
Stations have internal state that is updated when a message is processed
Messages must be processed in time-order at each station
Data structure:
– messages in an event queue, sorted in time-order
Parallelism:
– conservative: Chandy-Misra-Bryant
  a station fires when it has messages on all incoming edges and processes the earliest message
  requires null messages to avoid deadlock
– optimistic: Jefferson time-warp
  a station can fire when it has an incoming message on any edge
  requires roll-back if a speculative conflict is detected

[Figure: stations A and B with time-stamped messages (2, 5, 4, 6) waiting on their input channels]
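As a point of reference, here is a minimal sequential event-driven simulation loop in Java. The event record, station states, and update rule are hypothetical stand-ins for illustration; they are not the Chandy-Misra-Bryant or time-warp schemes, which add the firing rules and roll-back described above.

    import java.util.Comparator;
    import java.util.PriorityQueue;

    // Minimal sequential event-driven simulation sketch.
    // The Event record, station states, and message-generation rule are assumptions.
    public class EventSimSketch {
        record Event(double time, int station, int payload) {}

        public static void main(String[] args) {
            // Events are processed in global time order from a priority queue.
            PriorityQueue<Event> eventQueue =
                new PriorityQueue<>(Comparator.comparingDouble(Event::time));
            int[] stationState = new int[2];   // internal state of stations A(0) and B(1)

            eventQueue.add(new Event(2.0, 0, 1));
            eventQueue.add(new Event(4.0, 1, 1));
            eventQueue.add(new Event(5.0, 0, 1));
            eventQueue.add(new Event(6.0, 1, 1));

            while (!eventQueue.isEmpty()) {
                Event e = eventQueue.poll();               // earliest message overall
                stationState[e.station()] += e.payload();  // update station state
                // Processing a message may generate new, later messages:
                if (e.time() < 10.0)
                    eventQueue.add(new Event(e.time() + 3.0, 1 - e.station(), 1));
            }
            System.out.println("final states: A=" + stationState[0] + " B=" + stationState[1]);
        }
    }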

Slide 10: Remarks on algorithms

Diverse algorithms and data structures
Exploiting parallelism in irregular algorithms is very complex
– Miller et al. DMR implementation: interference graph + maximal independent sets
– Jefferson Timewarp algorithm for event-driven simulation
Algorithms:
– parallelism can be very input-dependent (DMR, event-driven simulation, graph reduction, ...)
– don't-care non-determinism has nothing to do with concurrency (DMR, graph reduction)
– activities created dynamically may interfere with existing activities (event-driven simulation, ...)
Data structures:
– relatively few algorithms use dense arrays
– more common: graphs, trees, lists, priority queues, ...

Slide 11: Organization of talk

Seemingly unrelated parallel algorithms and data structures
– stencil codes
– Delaunay mesh refinement
– event-driven simulation
– graph reduction of functional languages
– ...
Unifying abstractions
– amorphous data-parallelism
– baseline parallel implementation for exploiting amorphous data-parallelism
Specialized implementations that exploit structure
– structure of algorithms
– optimized compiler and runtime system support for different kinds of structure
Ongoing work

Slide 12: Requirements

Provide a model of parallelism in irregular algorithms
Unified treatment of parallelism in regular and irregular algorithms
– parallelism in regular algorithms must emerge as a special case of the general model
– (cf.) correspondence principles in physics
Abstractions should be effective
– it should be possible to write an interpreter that executes algorithms in parallel

Slide 13: Traditional abstraction

Computation graph
– nodes are computations
– edges are dependences
Parallelism
– width of the computation graph
Effective parallel computation graph model
– dataflow model of Dennis, Arvind et al.
Inadequate for irregular applications
– dependences between computations are a function of the input data
– don't-care non-determinism
– conflicting work may be created dynamically
– ...
Data structures play almost no role in this abstraction
– in most programs, parallelism comes from data-parallelism (concurrent operations on data structure elements)
New abstraction
– data-centric: data structures play a central role
– we will use a graph ADT to illustrate the concepts

Slide 14: Operator formulation of algorithms

Algorithm = repeated application of an operator to a graph
– active element: node or edge where the operator is applied
  Jacobi: interior nodes of the mesh
  DMR: nodes representing bad triangles
  Event-driven simulation: station with an incoming message
– neighborhood: set of nodes and edges read/written to perform the computation
  Jacobi: nodes in the stencil
  DMR: cavity of the bad triangle
  Event-driven simulation: the station
  (usually distinct from the neighbors in the graph)
– ordering: order in which active elements must be executed in a sequential implementation
  any order (Jacobi, DMR, graph reduction)
  some problem-dependent order (event-driven simulation)

[Figure: graph with active nodes i1..i5 and their neighborhoods highlighted]
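A hedged sketch of how the operator formulation might be expressed in code: the Graph, Operator, and SequentialDriver names are hypothetical, chosen for exposition, and are not the Galois API.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;
    import java.util.Set;

    // Illustrative sketch of the operator formulation; the interface and class
    // names here are assumptions for exposition, not the actual Galois API.
    interface Graph<N> {
        Set<N> nodes();
        Set<N> neighbors(N n);
    }

    interface Operator<N> {
        // Neighborhood: the nodes read or written when the operator is applied at 'active'.
        Set<N> neighborhood(Graph<N> g, N active);
        // Apply the operator at an active element; returns newly created active elements.
        List<N> apply(Graph<N> g, N active);
    }

    class SequentialDriver<N> {
        // Algorithm = repeated application of the operator to active elements,
        // here in arbitrary (unordered) sequential order.
        void run(Graph<N> g, Operator<N> op, Iterable<N> initiallyActive) {
            Deque<N> worklist = new ArrayDeque<>();
            initiallyActive.forEach(worklist::add);
            while (!worklist.isEmpty()) {
                N active = worklist.poll();
                worklist.addAll(op.apply(g, active));
            }
        }
    }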

Slide 15: Parallelism

Amorphous data-parallelism:
– parallelism in processing active nodes, subject to
  neighborhood constraints
  ordering constraints
Computations at two active elements are independent if
– their neighborhoods do not overlap
– more generally, neither of them writes to an element in the intersection of the neighborhoods
Unordered active elements
– in principle, independent active elements can be processed in parallel
– how do we find independent active elements?
Ordered active elements
– independence is not enough, since elements can become active dynamically (see example)
– how do we determine what is safe to execute in parallel?
How do we make this model effective?

[Figure: graph with active nodes i1..i5 and their neighborhoods; event-driven simulation network with stations A, B, C]
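A minimal sketch of the independence test stated above, assuming each activity exposes the read and write sets of its neighborhood; the Activity record and the integer element ids are illustrative.

    import java.util.Collections;
    import java.util.Set;

    // Sketch of the independence condition for two activities: neither may
    // write an element that the other reads or writes. The read/write sets
    // stand in for neighborhoods and are assumptions for illustration.
    public class IndependenceCheck {
        record Activity(Set<Integer> reads, Set<Integer> writes) {}

        static boolean independent(Activity a, Activity b) {
            return Collections.disjoint(a.writes(), b.reads())
                && Collections.disjoint(a.writes(), b.writes())
                && Collections.disjoint(b.writes(), a.reads());
        }

        public static void main(String[] args) {
            Activity a1 = new Activity(Set.of(1, 2, 3), Set.of(2));
            Activity a2 = new Activity(Set.of(3, 4, 5), Set.of(5));
            // Neighborhoods overlap at node 3, but neither activity writes node 3,
            // so the two activities are still independent.
            System.out.println("independent? " + independent(a1, a2));
        }
    }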

Slide 16: Galois programming model (PLDI 2007)

Program is written in terms of the abstractions in the model
Programming model: sequential, OO
Graph class: provided by the Galois library
– specialized versions to exploit structure (see later)
Galois set iterators: for iterating over unordered and ordered sets of active elements
– for each e in Set S do B(e)
  evaluate B(e) for each element in set S
  no a priori order on iterations
  set S may get new elements during execution
– for each e in OrderedSet S do B(e)
  evaluate B(e) for each element in set S
  perform iterations in the order specified by the OrderedSet
  set S may get new elements during execution

DMR using Galois iterators:

    Mesh m = /* read in mesh */
    Set ws;
    ws.add(m.badTriangles());      // initialize ws
    for each tr in Set ws do {     // unordered Set iterator
      if (tr no longer in mesh) continue;
      Cavity c = new Cavity(tr);
      c.expand();
      c.retriangulate();
      m.update(c);
      ws.add(c.badTriangles());    // add new bad triangles
    }

Slide 17: Galois parallel execution model

Parallel execution model:
– shared memory
– optimistic execution of Galois iterators
Implementation:
– the master thread begins execution of the program
– when it encounters an iterator, worker threads help by executing iterations concurrently
– barrier synchronization at the end of the iterator
Independence of neighborhoods:
– software TM variety
– logical locks on nodes and edges
Ordering constraints for the ordered set iterator:
– execute iterations out of order but commit in order
– cf. out-of-order CPUs

[Figure: master program and worker threads operating on a shared-memory graph]
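To illustrate the flavor of optimistic execution with logical locks, here is a small sketch: each activity tries to lock every node in its neighborhood and, on conflict, releases what it holds and retries. The lock table, node ids, and back-off policy are simplified assumptions, not the Galois runtime.

    import java.util.List;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ThreadLocalRandom;

    // Simplified sketch of conflict detection with logical locks on graph nodes.
    // The lock table, abort policy, and node ids are illustrative assumptions.
    public class LogicalLocksSketch {
        // Maps a node id to the id of the activity that currently owns it.
        static final ConcurrentHashMap<Integer, Integer> lockOwner = new ConcurrentHashMap<>();

        // Try to acquire logical locks on the whole neighborhood; on conflict,
        // release everything acquired so far and report failure (abort).
        static boolean tryAcquire(int activityId, List<Integer> neighborhood) {
            for (int i = 0; i < neighborhood.size(); i++) {
                Integer prev = lockOwner.putIfAbsent(neighborhood.get(i), activityId);
                if (prev != null && prev != activityId) {
                    release(activityId, neighborhood.subList(0, i));
                    return false;
                }
            }
            return true;
        }

        static void release(int activityId, List<Integer> locked) {
            for (Integer node : locked) lockOwner.remove(node, activityId);
        }

        public static void main(String[] args) throws InterruptedException {
            Runnable worker = () -> {
                int id = (int) Thread.currentThread().getId();
                List<Integer> nbhd = List.of(1, 2, 3);  // overlapping neighborhoods force conflicts
                while (!tryAcquire(id, nbhd)) {
                    // Conflict: back off and retry (a real runtime would re-execute the iteration).
                    try { Thread.sleep(ThreadLocalRandom.current().nextInt(5)); }
                    catch (InterruptedException e) { return; }
                }
                System.out.println("activity " + id + " committed");
                release(id, nbhd);
            };
            Thread t1 = new Thread(worker), t2 = new Thread(worker);
            t1.start(); t2.start(); t1.join(); t2.join();
        }
    }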

Slide 18: Parameter tool (PPoPP 2009)

Idealized execution model:
– unbounded number of processors
– applying the operator at an active node takes one time step
– at each step, execute a maximal set of active nodes, subject to neighborhood and ordering constraints
Measures the amorphous data-parallelism in an irregular program execution
Useful as an analysis tool
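A minimal sketch of the idealized measurement, under simplifying assumptions: activities are unordered, each activity's neighborhood is given explicitly as a set of node ids, and a greedy pass picks a maximal non-conflicting set per step. This only illustrates the idea behind the tool, not its actual implementation.

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Sketch of a parallelism profile: at each idealized time step, greedily
    // select a maximal set of activities whose neighborhoods do not overlap.
    // The activities, neighborhoods, and greedy policy are assumptions.
    public class ParallelismProfileSketch {
        public static void main(String[] args) {
            // Each inner list is the neighborhood (node ids) of one pending activity.
            List<List<Integer>> pending = new ArrayList<>(List.of(
                List.of(1, 2), List.of(2, 3), List.of(4, 5),
                List.of(5, 6), List.of(7, 8), List.of(8, 1)));

            int step = 0;
            while (!pending.isEmpty()) {
                Set<Integer> touched = new HashSet<>();
                List<List<Integer>> executed = new ArrayList<>();
                for (List<Integer> nbhd : pending) {
                    // Select the activity if its neighborhood is disjoint from those already selected.
                    if (nbhd.stream().noneMatch(touched::contains)) {
                        touched.addAll(nbhd);
                        executed.add(nbhd);
                    }
                }
                pending.removeAll(executed);
                step++;
                System.out.println("step " + step + ": executed " + executed.size() + " activities");
            }
        }
    }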

Slide 19: Example: DMR

Input mesh:
– produced by Triangle (Shewchuk)
– 550K triangles
– roughly half are badly shaped
Available parallelism:
– how many non-conflicting triangles can be expanded at each time step?
Parallelism intensity:
– what fraction of the total number of bad triangles can be expanded at each step?

Slide 20: Examples

Boruvka MST algorithm
– builds the MST bottom-up
– unordered active elements
Agglomerative clustering (AC)
– data-mining algorithm
– ordered active elements
Similarity in the parallelism profiles arises from similarity in algorithmic structure
– AC: 20K random points in 2D
– Boruvka: 10K-node graph, average degree 5

Slide 21: Summary

Old abstraction: computation graphs
New abstraction: operator formulation of algorithms
– active elements
– neighborhoods
– ordering of active elements
Amorphous data-parallelism
– generalizes conventional data-parallelism
Baseline execution model
– Galois programming model: sequential, OO, uses the new abstractions
– optimistic parallel execution
Parameter tool
– provides estimates of the amorphous data-parallelism in programs written using the Galois programming model

Slide 22: Organization of talk

Seemingly unrelated parallel algorithms and data structures
– stencil codes
– Delaunay mesh refinement
– event-driven simulation
– graph reduction of functional languages
– ...
Unifying abstractions
– operator formulation of algorithms
– amorphous data-parallelism
– Galois programming model
– baseline parallel implementation
Specialized implementations that exploit structure
– structure of algorithms
– optimized compiler and runtime system support for different kinds of structure
Ongoing work

Slide 23: Key idea

The baseline implementation is general but usually inefficient
– (e.g.) dynamic scheduling of iterations is not needed for Jacobi, since the grid structure is known at compile time
– (e.g.) hand-written parallel implementations of DMR and Jacobi do not buffer updates to the neighborhood until the commit point
Efficient execution requires exploiting structure in algorithms and data structures
How do we talk about structure in algorithms?
– previous approaches are like descriptive biology:
  Mattson et al. book
  Parallel programming patterns (PPP): Snir et al.
  Berkeley dwarfs
  ...
– our approach is more like molecular biology: based on the amorphous data-parallelism framework

Slide 24: Algorithm abstractions

Iterative algorithms are classified along three dimensions:
– topology: general graph, grid, tree
– operator:
  morph: modifies the structure of the graph
  local computation: only updates values on nodes/edges
  reader: does not modify the graph in any way
– ordering: unordered, ordered

Examples:
– Jacobi: topology: grid, operator: local computation, ordering: unordered
– DMR, graph reduction: topology: graph, operator: morph, ordering: unordered
– Event-driven simulation: topology: graph, operator: local computation, ordering: ordered

Slide 25: Morphs

Morph operators can be classified further:
– coarsening
  node elimination: sparse Cholesky factorization
  edge contraction: Metis, Kruskal MST, Boruvka MST, AC
  sub-graph elimination: elimination-based dataflow analysis
– refinement: DMR, Prim MST, Barnes-Hut tree building
– general: graph reduction
– ...

[Figure: edge contraction merges the endpoints u and v of an edge into a single node uv; node elimination removes a node and connects its neighbors]
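As a concrete instance of one morph, here is a small sketch of edge contraction on an adjacency-set graph, the core step of Boruvka-style coarsening. The string node names and adjacency-set representation are illustrative assumptions.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Sketch of the edge-contraction morph: merge node v into node u, so that
    // u inherits v's edges and the (u,v) edge disappears. The adjacency-set
    // representation used here is an assumption for illustration.
    public class EdgeContractionSketch {
        static Map<String, Set<String>> adj = new HashMap<>();

        static void addEdge(String a, String b) {
            adj.computeIfAbsent(a, k -> new HashSet<>()).add(b);
            adj.computeIfAbsent(b, k -> new HashSet<>()).add(a);
        }

        // Contract edge (u,v): v's neighbors become u's neighbors, and v is removed.
        static void contract(String u, String v) {
            for (String w : adj.remove(v)) {
                adj.get(w).remove(v);
                if (!w.equals(u)) addEdge(u, w);
            }
        }

        public static void main(String[] args) {
            addEdge("u", "v"); addEdge("v", "n"); addEdge("v", "m"); addEdge("u", "a");
            contract("u", "v");
            // After contraction, u is adjacent to a, n, and m, and v is gone.
            System.out.println(adj);
        }
    }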

Slide 26: Reducing Overheads of Optimistic Parallel Execution

Slide 27: Graph partitioning (ASPLOS 2008)

Algorithm structure:
– general graph/grid + unordered active elements
Optimization I:
– partition the graph/grid and the work-set between cores
– data-centric work assignment: a core gets active elements from its own partition
Pros and cons:
– eliminates contention for the worklist
– improves locality and can dramatically reduce conflicts
– dynamic load-balancing may be needed
Optimization II:
– lock coarsening: associate logical locks with partitions, not with graph elements
– reduces the overhead of lock management
Over-decomposition may improve core utilization

[Figure: graph partitioned into regions, each mapped to a core]
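A small sketch of data-centric work assignment, under simplifying assumptions: nodes are hashed to partitions, each partition has its own worklist, and each worker thread drains only its own partition's list. A real implementation would add lock coarsening and dynamic load balancing, which are omitted here.

    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.atomic.AtomicInteger;

    // Sketch of partitioned, data-centric work assignment: each core owns one
    // partition's worklist and processes only the active elements mapped to it.
    // The hash-based partition function and the per-element "work" are assumptions.
    public class PartitionedWorklistSketch {
        static final int PARTITIONS = 4;

        @SuppressWarnings("unchecked")
        static final ConcurrentLinkedQueue<Integer>[] worklists = new ConcurrentLinkedQueue[PARTITIONS];

        static int partitionOf(int node) { return Math.floorMod(node, PARTITIONS); }

        public static void main(String[] args) throws InterruptedException {
            for (int p = 0; p < PARTITIONS; p++) worklists[p] = new ConcurrentLinkedQueue<>();
            for (int node = 0; node < 100; node++)
                worklists[partitionOf(node)].add(node);   // data-centric assignment

            AtomicInteger processed = new AtomicInteger();
            Thread[] workers = new Thread[PARTITIONS];
            for (int p = 0; p < PARTITIONS; p++) {
                final int myPartition = p;
                workers[p] = new Thread(() -> {
                    Integer node;
                    // Each worker drains only its own partition, avoiding worklist contention.
                    while ((node = worklists[myPartition].poll()) != null)
                        processed.incrementAndGet();      // stand-in for applying the operator
                });
                workers[p].start();
            }
            for (Thread t : workers) t.join();
            System.out.println("processed " + processed.get() + " active elements");
        }
    }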

Slide 28: Zero-copy implementation

Cautious operator:
– reads all the elements in its neighborhood before modifying any of them
– (e.g.) Delaunay mesh refinement
Algorithm structure:
– cautious operator + unordered active elements
Optimization: optimistic execution without buffering updates
– grab locks on elements during the read phase
  conflict: someone else has the lock, so release your locks
– once the update phase begins, no new locks will be acquired
  update in place without making copies
– note: this is not two-phase locking
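A sketch of the cautious-operator discipline: all locks are acquired while the neighborhood is read, and the write phase then updates in place with no buffering or undo log, since no further locks are needed. The node array, lock scheme, and averaging "operator" below are illustrative assumptions.

    import java.util.List;
    import java.util.concurrent.locks.ReentrantLock;

    // Sketch of zero-copy execution with a cautious operator: read (and lock) the
    // whole neighborhood first, then update in place without copies, because no
    // new locks are acquired once writing starts. Node layout is an assumption.
    public class CautiousOperatorSketch {
        static final int N = 8;
        static final double[] value = new double[N];
        static final ReentrantLock[] lock = new ReentrantLock[N];
        static { for (int i = 0; i < N; i++) lock[i] = new ReentrantLock(); }

        // Returns false (abort) if any neighborhood element is already locked by another activity.
        static boolean applyCautious(List<Integer> neighborhood) {
            int acquired = 0;
            double sum = 0.0;
            for (int node : neighborhood) {              // read phase: lock while reading
                if (!lock[node].tryLock()) {
                    for (int i = 0; i < acquired; i++) lock[neighborhood.get(i)].unlock();
                    return false;                        // conflict: release and retry later
                }
                acquired++;
                sum += value[node];
            }
            try {
                // Write phase: no new locks, update in place, no buffered copies needed.
                for (int node : neighborhood) value[node] = sum / neighborhood.size();
                return true;
            } finally {
                for (int node : neighborhood) lock[node].unlock();
            }
        }

        public static void main(String[] args) {
            for (int i = 0; i < N; i++) value[i] = i;
            System.out.println("committed: " + applyCautious(List.of(1, 2, 3)));
            System.out.println("value[2] = " + value[2]);
        }
    }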

Slide 29: Delaunay mesh refinement

Algorithm structure:
– general graph/grid + cautious operator + unordered active elements
Optimizations:
– partitioning + lock coarsening + zero-buffering
– very efficient implementations possible
Platform: Maverick@TACC
– 128-core Sun Fire E25K, 1.05 GHz
– 64 dual-core processors
– Sun Solaris
Speed-up of 20 on 32 cores for refinement
Mesh partitioning is still sequential
– time for mesh partitioning starts to dominate after 8 processors (32 partitions)
– need parallel mesh partitioning

Slide 30: Survey propagation on Maverick

SP is a heuristic for solving difficult SAT problems
Structure: general graph + cautious operator + unordered elements
Implementation:
– partitioning
– lock coarsening
– zero-buffering

[Figure: survey propagation speed-up on Maverick (roughly 1500 clauses, 250-500 variables)]

Slide 31: Eliminating the Need for Optimistic Parallel Execution

Slide 32: Scheduling

Baseline implementation
– autonomous scheduling: no coordination between the execution of different active elements
Global coordination is possible for some algorithms (see the interference-graph sketch below)
– run-time scheduling: cautious operator + unordered active elements
  execute all activities partially to determine their neighborhoods
  create an interference graph and find an independent set of activities
  execute the independent set of activities in parallel without synchronization
  used in Gary Miller's implementation of DMR
– just-in-time scheduling: local computation + structure-driven + cautious, unordered
  (e.g.) sparse MVM: inspector-executor approach
– compile-time scheduling: the previous case + the graph is known at compile time
  (e.g.) Jacobi: make all scheduling decisions at compile time
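A minimal sketch of the run-time scheduling idea: determine each activity's neighborhood, build an interference graph over activities whose neighborhoods overlap, and greedily pick an independent set to run in parallel. The activities and greedy heuristic are illustrative; this is not Miller et al.'s actual implementation.

    import java.util.Collections;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Sketch of run-time scheduling: build an interference graph over activities
    // (edges between activities with overlapping neighborhoods) and greedily
    // select an independent set that can run in parallel without synchronization.
    // The activities and neighborhoods below are illustrative assumptions.
    public class RuntimeSchedulingSketch {
        public static void main(String[] args) {
            List<Set<Integer>> neighborhoods = List.of(
                Set.of(1, 2), Set.of(2, 3), Set.of(4, 5), Set.of(5, 6), Set.of(7, 8));

            int n = neighborhoods.size();
            boolean[][] interferes = new boolean[n][n];
            for (int i = 0; i < n; i++)
                for (int j = i + 1; j < n; j++)
                    if (!Collections.disjoint(neighborhoods.get(i), neighborhoods.get(j)))
                        interferes[i][j] = interferes[j][i] = true;

            // Greedy independent set: take an activity unless it conflicts with one already taken.
            Set<Integer> independent = new HashSet<>();
            for (int i = 0; i < n; i++) {
                boolean ok = true;
                for (int chosen : independent) if (interferes[i][chosen]) { ok = false; break; }
                if (ok) independent.add(i);
            }
            System.out.println("activities safe to run in parallel this round: " + independent);
        }
    }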

Slide 33: Ongoing work

Algorithm studies:
– divide-and-conquer algorithms
– transforming ordered algorithms into unordered algorithms
– intra-operator parallelism (important for some algorithms on dense graphs)
– locality
Language/programming model:
– incorporating scheduling information into Galois programs (refinements?)
Compiler analysis:
– analyze and optimize code for operators
Runtime system:
– adaptive control system for managing threads
Application studies:
– case studies of hand-optimized codes: understand the hand optimizations and figure out how to incorporate them into the system
– Lonestar benchmark suite for irregular programs (joint work with Calin Cascaval's group at IBM Yorktown Heights)

Slide 34: Acknowledgements (in alphabetical order)

Kavita Bala (Cornell), Martin Burtscher (UT Austin), Patrick Carribault (UT Austin), Calin Cascaval (IBM), Paul Chew (Cornell), Amber Hassaan (UT Austin), Tony Ingraffea (Cornell), Rajasekhar Inkulu (UT Austin), Milind Kulkarni (UT Austin), Mario Mendez (UT Austin), Donald Nguyen (UT Austin), Dimitrios Prountzos (UT Austin), Ganesh Ramanarayanan (Microsoft), Xin Sui (UT Austin), Bruce Walter (Cornell), Zifei Zhong (UT Austin)

Slide 35: Science of Parallel Programming

Seemingly unrelated algorithms → unifying abstractions → specialized models that exploit structure.

[Figure: recap of the example algorithms, the operator-formulation abstraction, and the specialized implementations from this talk]

Slide 36: Graph reduction of functional language programs

Functional language semantics are defined by rewrite rules
– (e.g.) β-reduction in the λ-calculus: (λx.e1) e2 → e1[e2/x]
Redex:
– an expression that matches the left-hand side of a rewrite rule
Normal form:
– a program without any redexes
Data structure:
– a graph representation is more efficient than a tree: it permits sharing of sub-expressions
Parallelism:
– there may be many redexes in a program at each step
– parallelism depends on the input expression
– 1980s: parallel graph reduction machines (Burroughs NORMA)

[Figure: graph reduction of (λx.λy. x x) (λz. z), which reduces to λy. ((λz. z) (λz. z)) and then to λy. λz. z]
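A tiny sketch of the β-reduction rule on a term tree, reducing the example from the figure. For simplicity it assumes bound variable names are distinct (so no capture-avoiding renaming is needed) and uses trees rather than shared graphs; it illustrates the rewrite rule, not a parallel graph-reduction machine.

    // Sketch of β-reduction, (λx.e1) e2 → e1[e2/x], on a term tree.
    // Assumptions: distinct bound-variable names (no capture handling), tree
    // representation (no sharing). Illustrative only.
    public class BetaReductionSketch {
        interface Term {}
        record Var(String name) implements Term {}
        record Lam(String param, Term body) implements Term {}
        record App(Term fun, Term arg) implements Term {}

        // e[arg/x]: substitute arg for free occurrences of x in e.
        static Term subst(Term e, String x, Term arg) {
            if (e instanceof Var v) return v.name().equals(x) ? arg : v;
            if (e instanceof Lam l)
                return l.param().equals(x) ? l : new Lam(l.param(), subst(l.body(), x, arg));
            App a = (App) e;
            return new App(subst(a.fun(), x, arg), subst(a.arg(), x, arg));
        }

        // Reduce to normal form: normalize the function position, then contract redexes.
        static Term normalize(Term e) {
            if (e instanceof Lam l) return new Lam(l.param(), normalize(l.body()));
            if (e instanceof App a) {
                Term f = normalize(a.fun());
                if (f instanceof Lam l) return normalize(subst(l.body(), l.param(), a.arg()));  // β-step
                return new App(f, normalize(a.arg()));
            }
            return e;  // Var
        }

        public static void main(String[] args) {
            // (λx.λy. x x) (λz. z)  →  λy. ((λz.z) (λz.z))  →  λy. λz. z
            Term id = new Lam("z", new Var("z"));
            Term program = new App(new Lam("x", new Lam("y", new App(new Var("x"), new Var("x")))), id);
            System.out.println(normalize(program));
        }
    }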

Slide 37: Change in world view

The relational database community has mastered parallelism
Codd's contribution to databases
– the SQL programmer thinks of data as relations with certain operations
– relations are represented internally in the database using index structures, B-trees, etc.
– sharp separation between abstract and concrete data types
– Codd's 12 rules for relational databases; Rule 8, physical data independence: the user should not be aware of the representation of data files
The HPC/PL world view is still derived from FORTRAN/C
– no sharp separation between abstract and concrete data types
– (e.g.) 2-D arrays in FORTRAN could be accessed as 1-D vectors, reflecting how the array is stored in memory
"Nearer my Codd to thee"
– a science of parallel programming requires adopting a world view closer to Codd's than to Backus's

Slide 38: Protein Homology Networks

Graph:
– nodes are proteins (roughly 6.4 million known)
– edges connect "similar" proteins
– weights on edges are multi-dimensional measures of similarity
Cliques:
– families of similar proteins
Key problems:
– finding cliques of large cardinality / maximal weight
– often solved by formulating the problem as SAT and using SAT solvers

[Figure: protein homology network, Alex Adai, UT Austin]

Slide 39: Remark

The distinction between an abstract data type and its concrete representation is critical
– data vs. meta-data
– (e.g.) the concrete representation may have meta-data to permit iteration over nodes or edges, but this is not visible at the ADT level
– a set/multi-set may be represented using a list
A similar distinction is central to the success of relational databases
– relations are tables as far as application programmers are concerned
– the concrete representation may be very complex and may involve B-trees, index structures, etc.

[Figure: a small graph shown both as an abstract graph and as one possible concrete representation]
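A small sketch of the ADT/representation split discussed above: client code sees only a graph interface, while the concrete class hides the adjacency meta-data that supports neighbor iteration. The interface and class names are illustrative.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch of separating an abstract graph type from its concrete representation.
    // Clients program against GraphADT; AdjacencyListGraph hides the meta-data
    // (adjacency lists) that supports neighbor iteration. Names are illustrative.
    interface GraphADT {
        void addEdge(int a, int b);
        Iterable<Integer> neighbors(int node);
    }

    class AdjacencyListGraph implements GraphADT {
        // Concrete representation: per-node adjacency lists (could equally be
        // CSR arrays, B-trees, etc. without changing client code).
        private final Map<Integer, List<Integer>> adjacency = new HashMap<>();

        public void addEdge(int a, int b) {
            adjacency.computeIfAbsent(a, k -> new ArrayList<>()).add(b);
            adjacency.computeIfAbsent(b, k -> new ArrayList<>()).add(a);
        }

        public Iterable<Integer> neighbors(int node) {
            return adjacency.getOrDefault(node, List.of());
        }
    }

    public class AdtVsRepresentation {
        public static void main(String[] args) {
            GraphADT g = new AdjacencyListGraph();   // the client sees only the ADT
            g.addEdge(4, 5); g.addEdge(5, 7); g.addEdge(4, 7);
            for (int n : g.neighbors(5)) System.out.println("neighbor of 5: " + n);
        }
    }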


