1
Using a compiler-directed approach to create MPI code automatically
Paraguin Compiler Continued
ITCS4145/5145, Parallel Programming
Clayton Ferner / B. Wilkinson
March 11, ParagionSlides2.ppt
2
The Paraguin compiler is being developed by Dr. C. Ferner, UNC-Wilmington. The following is based upon his slides (and assumes you have already seen OpenMP).
3
Running a Parallel Program
When your parallel program is run, you specify how many processors you want on the command line (or in a job submission file). Processes (one per processor) are each given a unique rank in the range [0 .. NP-1], where NP is the number of processors. Process 0 is considered to be the master.
4
Parallel Region

    // Sequential code
    ...
    #pragma paraguin begin_parallel
    // Code to be executed by all processes
    #pragma paraguin end_parallel

Code outside a parallel region is executed by the master process (rank 0) only; all other processes skip it. Code inside a parallel region is executed by all processes.
5
Hello World

To deal with an incompatibility issue between the SUIF compiler and gcc, put the #ifdef block below into your program. Don't worry about why; just include it.

    #ifdef PARAGUIN
    typedef void* __builtin_va_list;
    #endif

    #include <stdio.h>
    #include <unistd.h>   /* for gethostname() */

    int __guin_rank = 0;

    int main(int argc, char *argv[])
    {
        char hostname[256];

        printf("Master process %d starting.\n", __guin_rank);

        #pragma paraguin begin_parallel
        gethostname(hostname, 255);
        printf("Hello world from process %3d on machine %s.\n",
               __guin_rank, hostname);
        #pragma paraguin end_parallel

        printf("Goodbye world from process %d.\n", __guin_rank);
        return 0;
    }

__guin_rank is a predefined Paraguin identifier that represents the ID of each MPI process. We are allowed to declare it and even initialize it, but it should not be modified.
6
Explanation of Hello World

    printf("Master process %d starting.\n", __guin_rank);

    #pragma paraguin begin_parallel
    gethostname(hostname, 255);
    printf("Hello world from process %3d on machine %s.\n",
           __guin_rank, hostname);
    #pragma paraguin end_parallel

[Diagram: PE 0 through PE 5 all executing the statements between the two pragmas]

The pragmas define a region to be executed by all processes. Outside of this region, only the master process (rank 0) executes the statements; the other processes skip them. THERE IS NO IMPLIED BARRIER AT THE BEGINNING OR END OF A PARALLEL REGION.
7
Result of Hello World

Compiling (all on one line):

    $ scc -DPARAGUIN -D__x86_64__ -I/opt/openmpi/include/ -cc mpicc helloWorld.c -o helloWorld

Running:

    $ mpiexec -n 8 ./helloWorld
    Master process 0 starting.
    Hello world from process 0 on machine compute-1-5.local.
    Goodbye world from process 0.
    Hello world from process 1 on machine compute-1-5.local.
    Hello world from process 2 on machine compute-1-5.local.
    Hello world from process 3 on machine compute-1-5.local.
    Hello world from process 4 on machine compute-1-1.local.
    Hello world from process 5 on machine compute-1-1.local.
    Hello world from process 6 on machine compute-1-1.local.
    Hello world from process 7 on machine compute-1-1.local.
8
Notes on location of pragmas
SUIF attaches the pragmas to the last instruction, which may be deeply nested. This makes it difficult for Paraguin to find the pragmas.
9
Incorrect Location of Pragma

This code:

    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            a[i][j] = 0;
    #pragma paraguin begin_parallel

actually appears to SUIF like this:

    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++) {
            a[i][j] = 0;
            #pragma paraguin begin_parallel
        }
10
Solution

Insert a semicolon on a line by itself before a block of pragma statements. This inserts a NOOP instruction into the code, to which the pragmas are then attached:

    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            a[i][j] = 0;
    ;
    #pragma paraguin begin_parallel

This is usually needed after a nesting (e.g. a for loop nest, while loop nest, etc.).
11
More on Parallel Regions

The parallel region pragmas must be at the topmost nesting level within a function:

    int f()
    {
        #pragma paraguin begin_parallel
        ...
        #pragma paraguin end_parallel
    }
12
More on Parallel Regions

All processes must reach end_parallel. The following is an error, because end_parallel is nested inside the if statement and only conditionally reached:

    int g()
    {
        #pragma paraguin begin_parallel
        ...
        if (a < b)
            #pragma paraguin end_parallel   // This is an error
    }
13
Parallel regions and sequential regions are implemented in Paraguin by using a simple if statement to check whether the rank of the process is zero. The user may create a sequential region within a parallel region simply by surrounding that code with such an if statement:

    int __guin_rank = 0;
    ...
    #pragma paraguin begin_parallel
    if (__guin_rank == 0) {
        // Sequential region within a parallel region
    }
    #pragma paraguin end_parallel
14
Parallel Regions Related to Functions

If a function is to be executed totally in parallel, it does not need its own parallel region (this has been relaxed from what was said in the user manual):

    int f() { ... }

    int main()
    {
        #pragma paraguin begin_parallel
        f();    // This call executes in parallel
        #pragma paraguin end_parallel
        f();    // This call executes sequentially (master only)
    }

However, if a function contains one or more Paraguin directives, then the calling routine also needs a parallel region.
15
Initializations

Initializations of variables are executable statements (as opposed to the declarations themselves). Therefore, they need to be within a parallel region to take effect on all processes:

    int f()
    {
        int a = 23, b;   // a is initialized on the master only,
                         // because this is outside a parallel region
        #pragma paraguin begin_parallel
        b = 46;          // b is initialized on all processes
        ...
        #pragma paraguin end_parallel
    }
16
Parallel Constructs

    #pragma paraguin barrier
    #pragma paraguin forall
    #pragma paraguin bcast
    #pragma paraguin scatter
    #pragma paraguin gather
    #pragma paraguin reduce

All of these must be within a parallel region (some would deadlock if not). Their placement is similar to that of worksharing constructs within parallel regions in OpenMP.
17
Barrier

A barrier is a point at which all processes stop until they all arrive, after which they may proceed. The pragma performs a barrier on MPI_COMM_WORLD, i.e. across all processes:

    ...
    #pragma paraguin barrier

[Diagram: PE 0 through PE 5 waiting until all reach the barrier]
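For intuition, the synchronization the pragma requests corresponds to a plain MPI_Barrier call in hand-written MPI. The sketch below is only an illustration of that behavior, not the code Paraguin actually generates:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        printf("Process %d before the barrier.\n", rank);

        /* No process proceeds past this call until every process reaches it. */
        MPI_Barrier(MPI_COMM_WORLD);

        printf("Process %d after the barrier.\n", rank);

        MPI_Finalize();
        return 0;
    }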
18
Parallel For (or forall)

To execute a for loop in parallel:

    #pragma paraguin forall [chunksize]

Each process will execute a different partition of the iterations (called the iteration space). Partitions will be no larger than chunksize iterations. The default chunksize is ⌈n/P⌉, where n is the number of iterations and P is the number of processes.
19
Parallel For (or forall)

For example, consider:

    #pragma paraguin forall
    for (i = 0; i < n; i++) {
        <body>
    }

Suppose n = 13. The iteration space is i = 0, 1, 2, ..., 12.
20
Parallel For (or forall)

Also suppose we have 4 processes. The default chunksize is ⌈13/4⌉ = 4, so the iteration space is executed by the 4 processes as:

    P0: i=0, i=1, i=2, i=3
    P1: i=4, i=5, i=6, i=7
    P2: i=8, i=9, i=10, i=11
    P3: i=12
21
Parallel For (other notes)

Note that a for loop executed as a forall must be a simple for loop:
The increment must be positive 1 (and the upper bound must be greater than the lower bound).
The loop termination must use either < or <=.

A nested for loop can be a forall:

    for (i = 0; i < n; i++) {
        #pragma paraguin forall
        for (j = 0; j < n; j++) {
            ...
        }
    }

However, foralls cannot be nested. A complete example is sketched below.
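Putting the pieces together, here is a minimal sketch of a forall in context. The array and loop body are invented for illustration; the pragmas and the semicolon trick are the ones introduced in these slides:

    #ifdef PARAGUIN
    typedef void* __builtin_va_list;
    #endif
    #include <stdio.h>

    #define N 13

    int __guin_rank = 0;

    int main(int argc, char *argv[])
    {
        int i, a[N];
        ;   /* NOOP so the pragma below attaches here */
        #pragma paraguin begin_parallel

        /* Each process executes its own partition of the iterations. */
        #pragma paraguin forall
        for (i = 0; i < N; i++) {
            a[i] = i * i;
            printf("Process %d computed a[%d].\n", __guin_rank, i);
        }

        /* The computed elements are distributed across processes; a
           gather (introduced later) would collect them on the master. */
        #pragma paraguin end_parallel
        return 0;
    }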
22
How to transform for loops to simple for loops

Count-down loop:

    for (i = n-1; i >= 0; i--) {
        ...
    }

becomes:

    #pragma paraguin forall
    for (tmp = 0; tmp < n; tmp++) {
        i = n - tmp - 1;
        ...
    }

Nested loops:

    for (i = 0; i < n; i++) {
        for (j = 0; j < n; j++) {
            ...
        }
    }

become:

    #pragma paraguin forall
    for (tmp = 0; tmp < n*n; tmp++) {
        i = tmp / n;
        j = tmp % n;
        ...
    }
23
Parallel For (other notes)

If the user provides a chunksize, then each process cycles through chunksize iterations at a time in a cyclic fashion. Specifying a chunksize of 1 gives cyclic scheduling (better load balancing). With a chunksize of 1, 13 iterations, and 4 processes, the assignment is:

    P0: i=0, i=4, i=8, i=12
    P1: i=1, i=5, i=9
    P2: i=2, i=6, i=10
    P3: i=3, i=7, i=11
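As a usage sketch, cyclic scheduling is requested by giving the chunksize after forall. The loop body is invented for illustration (do_work is a hypothetical function):

    #pragma paraguin forall 1
    for (i = 0; i < 13; i++) {
        /* Iterations are dealt out one at a time, round-robin, so
           uneven per-iteration work is balanced across processes. */
        do_work(i);   /* hypothetical */
    }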
24
Broadcast

Broadcasting sends the same data from the master to all processes:

    #pragma paraguin bcast <list of variables>

<list of variables> is a whitespace-separated list. Variables may be arrays or scalars of type byte, char, unsigned char, short, int, long, float, double, or long double. A separate broadcast is performed for each variable in the list.

If a variable is a pointer type, then only one element of the base type (the one to which the pointer points) will be broadcast. If the pointer is used to point to an array or a portion of an array, the user must provide the number of elements to send; see the next slide.
25
Broadcast example

    int a, b[N][M], n;
    char *s = "hello world";
    n = strlen(s) + 1;
    #pragma paraguin begin_parallel
    #pragma paraguin bcast a b n s( n )
    ...
    #pragma paraguin end_parallel

Variable a is a scalar and b is an array, and the correct number of bytes is broadcast for each: N*M*sizeof(int) bytes for b. Variable s is a string (a pointer), so there is no way to know how big its data actually is. Pointers require a size, such as s( n ); if the size is not given, only one character will be broadcast.
26
Broadcast example (continued)

    int a, b[N][M], n;
    char *s = "hello world";
    n = strlen(s) + 1;
    #pragma paraguin begin_parallel
    #pragma paraguin bcast a b n s( n )

Notice that s and n are initialized on the master only, since the initializations are outside the parallel region. 1 is added to strlen(s) to include the null character. Variable n must be broadcast BEFORE variable s, so that every process knows how many characters of s to receive. Put spaces between the parentheses and the size (e.g. ( n )).
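For comparison, the effect of this bcast in hand-written MPI looks roughly like the sketch below. This is an illustration of what the pragma asks for, not Paraguin's actual generated code:

    #include <mpi.h>
    #include <string.h>

    /* Assumes MPI is initialized and buf is large enough on every rank. */
    void bcast_string(char *buf)
    {
        int rank, n = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0)
            n = strlen(buf) + 1;   /* include the null character */

        /* n goes first so every process knows how many chars follow. */
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
        MPI_Bcast(buf, n, MPI_CHAR, 0, MPI_COMM_WORLD);
    }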
27
Scatter

Scatter divides data that resides on the master and distributes the pieces among the processes:

    #pragma paraguin scatter <list of variables>

[Diagram: an array on the master divided into chunks and sent to the processes; here chunksize = 2]
28
Scatter example

    void f(int *A, int n)
    {
        int B[N];
        ... // Initialize B somehow
        #pragma paraguin begin_parallel
        #pragma paraguin scatter A( n ) B
        ...
        #pragma paraguin end_parallel
    }

The same rule applies for pointers with scatter as with broadcast: the size must be given. Only arrays should be scattered (it makes no sense to scatter a scalar).
29
Scatter

The default chunksize is ⌈N/P⌉, where N is the number of rows and P is the number of processes. Notice that for a 2-D array the rows are scattered, not the columns. A user-defined chunksize is not yet implemented.
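Again for intuition only, scattering the rows of a matrix in hand-written MPI looks roughly like this sketch (it assumes N is divisible by the number of processes; the general case needs more bookkeeping):

    #include <mpi.h>

    #define N 8
    #define M 4

    void scatter_rows(double A[N][M], double myrows[N][M])
    {
        int nprocs;
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int rows_per_proc = N / nprocs;   /* assumes N % nprocs == 0 */

        /* Each process receives rows_per_proc contiguous rows of A. */
        MPI_Scatter(A,      rows_per_proc * M, MPI_DOUBLE,
                    myrows, rows_per_proc * M, MPI_DOUBLE,
                    0, MPI_COMM_WORLD);
    }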
30
Gather

Gather works just like scatter except that the data moves in the opposite direction:

    #pragma paraguin gather <list of variables>
31
Gather

Gather is the collection of partial results back to the master. The default chunksize is ⌈N/P⌉, where N is the number of rows and P is the number of processes. A user-defined chunksize is not yet implemented.
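A common overall pattern is scatter, compute, gather. The sketch below is illustrative only; the array names and the doubling computation are invented, and the Scatter/Gather template covered in the next slide deck packages this pattern properly:

    #define N 100

    int main(int argc, char *argv[])
    {
        int i;
        double A[N], B[N];
        // Master initializes A here
        ;   /* NOOP for pragma attachment */
        #pragma paraguin begin_parallel
        #pragma paraguin scatter A

        /* The default chunksizes of scatter and forall are both
           ceil(N/P), so each process works on the portion it received. */
        #pragma paraguin forall
        for (i = 0; i < N; i++)
            B[i] = 2.0 * A[i];

        #pragma paraguin gather B
        #pragma paraguin end_parallel
        return 0;
    }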
32
Reduction

A reduction is when a binary commutative operator is applied to a collection of values, producing a single value:

    #pragma paraguin reduce <op> <source> <result>

where <op> is the operator, <source> is the variable with the data to be reduced, and <result> is the variable that will hold the answer.
33
Reduction

For example, applying summation to the values 83, 40, 23, 85, 90, 2, 74, 68, 51, and 33 produces the single value 549. MPI does not specify how reduction should be implemented; however, ...
34
Reduction

A reduction could be implemented fairly efficiently on multiple processors using a tree, in which case the time complexity is O(log(P)) with P processes.
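To see where the O(log(P)) comes from, here is a hand-rolled tree reduction using MPI point-to-point calls. It is a sketch that assumes P is a power of two; real MPI libraries implement this inside MPI_Reduce and handle the general case:

    #include <mpi.h>

    /* Tree-based sum: in each round, the upper half of the surviving
       processes sends its partial sum to a partner in the lower half.
       After log2(P) rounds, rank 0 holds the total. */
    double tree_sum(double value)
    {
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        for (int step = nprocs / 2; step >= 1; step /= 2) {
            if (rank < step) {
                double partner;
                MPI_Recv(&partner, 1, MPI_DOUBLE, rank + step, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                value += partner;
            } else if (rank < 2 * step) {
                MPI_Send(&value, 1, MPI_DOUBLE, rank - step, 0,
                         MPI_COMM_WORLD);
                break;   /* this process has contributed its value */
            }
        }
        return value;    /* meaningful on rank 0 only */
    }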
35
Reduction

Available operators that can be used in a reduction (as in MPI):

    Operator   Description
    max        Maximum
    min        Minimum
    sum        Summation
    prod       Product
    land       Logical and
    band       Bitwise and
    lor        Logical or
    bor        Bitwise or
    lxor       Logical exclusive or
    bxor       Bitwise exclusive or
    maxloc     Maximum and location
    minloc     Minimum and location
36
Reduction

    double c, result_c;
    ...
    #pragma paraguin begin_parallel
    // Each process assigns some value to the variable c
    #pragma paraguin reduce sum c result_c
    // The variable result_c on the master now holds the result
    // of summing the values of the variable c on all the processes
37
Reducing an Array

When a reduction is applied to an array, the values in the same relative position in the array are reduced across processes, element by element:

    double c[N], result_c[N];
    ...
    #pragma paraguin begin_parallel
    // Each process assigns N values to array c
    #pragma paraguin reduce sum c result_c
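Tying forall and reduce together, the following sketch sums the squares of 0..N-1 in parallel. The variable names and the computation are invented for illustration; the pragmas are the ones these slides introduce:

    #ifdef PARAGUIN
    typedef void* __builtin_va_list;
    #endif
    #include <stdio.h>

    #define N 1000

    int __guin_rank = 0;

    int main(int argc, char *argv[])
    {
        int i;
        double partial = 0.0, total = 0.0;
        ;   /* NOOP for pragma attachment */
        #pragma paraguin begin_parallel

        /* Each process accumulates the iterations of its partition. */
        #pragma paraguin forall
        for (i = 0; i < N; i++)
            partial += (double) i * i;

        /* Sum the per-process partial values into total on the master. */
        #pragma paraguin reduce sum partial total

        #pragma paraguin end_parallel

        /* Master-only code: total is valid here. */
        printf("Sum of squares = %f\n", total);
        return 0;
    }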
38
Questions?

More detailed information on Paraguin is available at:
39
Next Topic

Higher-level patterns in Paraguin:
Scatter/Gather template
Stencil