Presentation on theme: "Practical techniques & Examples"— Presentation transcript:

1 Practical techniques & Examples
Parallel Computing(Intro-07): Rajeev Wankar

2 Parallel Techniques
• Embarrassingly Parallel Computations
• Partitioning and Divide-and-Conquer Strategies
• Pipelined Computations
• Synchronous Computations
• Asynchronous Computations
• Load Balancing and Termination Detection

3 Embarrassingly Parallel Computations
A computation that can be divided into a number of completely independent parts, each of which can be executed by a separate process(or). No communication, or very little communication, between processes.
(Figure: the input data is split among the processes, which compute independently and produce the results.)

4 Practical embarrassingly parallel computation with static process creation and master-slave MPI approach
(Figure: all processes are started together; the master send()s initial data to the slaves, which recv() it, compute, and send() their results back for the master to recv() and collect.)

5 Practical embarrassingly parallel computation with dynamic process creation and master-slave PVM approach
(Figure: only the master is started initially; it spawn()s the slaves, send()s them initial data, and later recv()s and collects their results.)

6 Examples
• Circuit satisfiability
• Low-level image processing (assignment)
• Mandelbrot set
• Monte Carlo calculations

7 Case Study: MPI programs for Circuit Satisfiability

8 Outline
Coding MPI programs
Compiling MPI programs
Running MPI programs
Benchmarking MPI programs

9 Processes
Number is specified at start-up time
Remains constant throughout execution of program
All execute same program
Each has unique ID number
Alternately performs computations and communicates

10 Circuit Satisfiability
(Figure: a combinational circuit with 16 inputs; the input combination shown, all 1s, does not satisfy the circuit, so the output is labeled "Not Satisfiable".)

11 Solution Method
Circuit satisfiability is NP-complete
No known algorithms to solve in polynomial time
We seek all solutions
We find them through exhaustive search
16 inputs → 65,536 combinations to test

12 Partitioning: Functional Decomposition
Embarrassingly parallel: No channels between tasks

13 Properties and Mapping
Properties of parallel algorithm:
• Fixed number of tasks
• No communications between tasks
• Time needed per task is variable
Mapping strategy decision tree:
• Map tasks to processors in a cyclic fashion

14 Cyclic (interleaved) Allocation
Assume p processes
Each process gets every pth piece of work
Example: 5 processes and 12 pieces of work
P0: 0, 5, 10
P1: 1, 6, 11
P2: 2, 7
P3: 3, 8
P4: 4, 9

15 Program Design
Program will consider all 65,536 combinations of 16 boolean inputs
Combinations allocated in cyclic fashion to processes
Each process examines each of its combinations
If it finds a satisfiable combination, it will print it

16 Include Files
#include <mpi.h>    /* MPI header file */
#include <stdio.h>

17 Local Variables
int main (int argc, char *argv[]) {
   int i;
   int id;   /* Process rank */
   int p;    /* Number of processes */
   void check_circuit (int, int);
One copy of every variable for each process running this program

18 Initialize MPI
MPI_Init(&argc, &argv);
Determine Number of Processes and Rank
MPI_Comm_size(MPI_COMM_WORLD, &p);
MPI_Comm_rank(MPI_COMM_WORLD, &id);

19 Replication of Automatic Variables
(Figure: each of the six processes holds its own copy of the automatic variables id and p; p is 6 everywhere, while id differs from process to process.)

20 What about External Variables?
int total;
int main (int argc, char *argv[]) {
   int i;
   int id;
   int p;
Where is variable total stored?

21 Cyclic Allocation of Work
for (i = id; i < 65536; i += p)
   check_circuit (id, i);
Parallelism is outside function check_circuit
It can be an ordinary, sequential function

22 Shutting Down MPI
MPI_Finalize();
Call after all other MPI library calls
Allows system to free up MPI resources

23 Put fflush() after every printf()
#include <mpi.h>
#include <stdio.h>

int main (int argc, char *argv[]) {
   int i;
   int id;
   int p;
   void check_circuit (int, int);

   MPI_Init (&argc, &argv);
   MPI_Comm_rank (MPI_COMM_WORLD, &id);
   MPI_Comm_size (MPI_COMM_WORLD, &p);
   for (i = id; i < 65536; i += p)
      check_circuit (id, i);
   printf ("Process %d is done\n", id);
   fflush (stdout);
   MPI_Finalize();
   return 0;
}

24 /* Return 1 if 'i'th bit of 'n' is 1; 0 otherwise */
#define EXTRACT_BIT(n,i) ((n&(1<<i))?1:0)

void check_circuit (int id, int z) {
   int v[16];   /* Each element is a bit of z */
   int i;
   for (i = 0; i < 16; i++)
      v[i] = EXTRACT_BIT(z,i);
   if ((v[0] || v[1]) && (!v[1] || !v[3]) && (v[2] || v[3])
        && (!v[3] || !v[4]) && (v[4] || !v[5]) && (v[5] || !v[6])
        && (v[5] || v[6]) && (v[6] || !v[15]) && (v[7] || !v[8])
        && (!v[7] || !v[13]) && (v[8] || v[9]) && (v[8] || !v[9])
        && (!v[9] || !v[10]) && (v[9] || v[11]) && (v[10] || v[11])
        && (v[12] || v[13]) && (v[13] || !v[14]) && (v[14] || v[15])) {
      printf ("%d) %d%d%d%d%d%d%d%d%d%d%d%d%d%d%d%d\n", id,
              v[0],v[1],v[2],v[3],v[4],v[5],v[6],v[7],v[8],v[9],
              v[10],v[11],v[12],v[13],v[14],v[15]);
      fflush (stdout);
   }
}

25 Compiling MPI Programs
mpicc -O -o cnf cnf.c
mpicc: script to compile and link C + MPI programs
Flags: same meaning as C compiler
-O → optimize
-o <file> → where to put executable

26 Running MPI Programs
mpirun -np <p> <exec> <arg1> …
-np <p> → number of processes
<exec> → executable
<arg1> … → command-line arguments

27 Specifying Host Processors
File .mpi-machines in home directory lists host processors in order of their use
Example .mpi-machines file contents:
hcu01.aiab.uohyd.ernet.in
hcu02.aiab.uohyd.ernet.in
hcu03.aiab.uohyd.ernet.in
hcu04.aiab.uohyd.ernet.in

28 Enabling Remote Logins
MPI needs to be able to initiate processes on other processors without supplying a password
Each processor in group must list all other processors in its .rhosts file; e.g.,
hcu01.aiab.uohyd.ernet.in
hcu02.aiab.uohyd.ernet.in
hcu03.aiab.uohyd.ernet.in
hcu04.aiab.uohyd.ernet.in

29 Execution on 1 CPU
% mpirun -np 1 sat
0) 1010111110011001
(the remaining eight solution lines, all printed by process 0, are not preserved in the transcript)
Process 0 is done

30 Execution on 2 CPUs
% mpirun -np 2 sat
0) 0110111110011001
(the remaining solution lines, two more from process 0 and six from process 1, are not preserved in the transcript)
Process 0 is done
Process 1 is done

31 Execution on 3 CPUs
% mpirun -np 3 sat
0) 0110111110011001
(the remaining solution lines, two more from process 0, three from process 1, and three from process 2, are not preserved in the transcript)
Process 1 is done
Process 2 is done
Process 0 is done

32 Interpreting Output
Output order only partially reflects order of output events inside parallel computer
If process A prints two messages, first message will appear before second
If process A calls printf before process B, there is no guarantee process A's message will appear before process B's message

33 Enhancing the Program (Phase parallel model)
We want to find total number of solutions
Incorporate sum-reduction into program
Reduction is a collective communication

34 Modifications
Modify function check_circuit:
• Return 1 if circuit satisfiable with input combination
• Return 0 otherwise
Each process keeps local count of satisfiable circuits it has found
Perform reduction after for loop

35 New Declarations and Code
int count;        /* Local sum */
int global_count; /* Global sum */
int check_circuit (int, int);

count = 0;
for (i = id; i < 65536; i += p)
   count += check_circuit (id, i);

36 Prototype of MPI_Reduce()
int MPI_Reduce (
   void         *operand,  /* addr of 1st reduction element */
   void         *result,   /* addr of 1st reduction result  */
   int           count,    /* reductions to perform         */
   MPI_Datatype  type,     /* type of elements              */
   MPI_Op        operator, /* reduction operator            */
   int           root,     /* process getting result(s)     */
   MPI_Comm      comm      /* communicator                  */
)

37 Our Call to MPI_Reduce()
MPI_Reduce (&count, &global_count, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
Only process 0 will get the result
if (!id) printf ("There are %d different solutions\n", global_count);
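
Putting the pieces from the last few slides together, a minimal sketch of the counting version of the program might look as follows (the overall layout is an assumption; check_circuit here is the modified version that returns 1 or 0 instead of printing):

#include <mpi.h>
#include <stdio.h>

int check_circuit (int, int);   /* returns 1 if the combination satisfies the circuit */

int main (int argc, char *argv[]) {
   int i;
   int id;                      /* process rank */
   int p;                       /* number of processes */
   int count = 0;               /* local count of solutions */
   int global_count;            /* total count, valid on process 0 */

   MPI_Init (&argc, &argv);
   MPI_Comm_rank (MPI_COMM_WORLD, &id);
   MPI_Comm_size (MPI_COMM_WORLD, &p);

   for (i = id; i < 65536; i += p)     /* cyclic allocation of work */
      count += check_circuit (id, i);

   MPI_Reduce (&count, &global_count, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

   if (!id)
      printf ("There are %d different solutions\n", global_count);
   MPI_Finalize();
   return 0;
}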

38 Execution of Second Program
% mpirun -np 3 seq2
(the nine solution lines, three each from processes 0, 1 and 2, are not preserved in the transcript)
Process 1 is done
Process 2 is done
Process 0 is done
There are 9 different solutions

39 Benchmarking the Program
MPI_Barrier → barrier synchronization
MPI_Wtick → timer resolution
MPI_Wtime → current time

40 Benchmarking Code
double elapsed_time;
…
MPI_Init (&argc, &argv);
MPI_Barrier (MPI_COMM_WORLD);
elapsed_time = - MPI_Wtime();
…
MPI_Reduce (…);
elapsed_time += MPI_Wtime();
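
The slide stops at measuring the time; one way to report it (an assumption, not shown in the original) is to let process 0 print the elapsed time and the timer resolution afterwards:

if (!id) {
   printf ("Total elapsed time: %10.6f s\n", elapsed_time);
   printf ("Timer resolution:   %10.6e s\n", MPI_Wtick());
   fflush (stdout);
}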

41 Partitioning into regions for individual processes
Square region for each process (can also use strips)
(Figure: a 640 × 480 image divided into 80 × 80 squares, each square mapped to a process.)

42 Some geometrical operations
Shifting
Object shifted by Δx in the x-dimension and Δy in the y-dimension:
x′ = x + Δx
y′ = y + Δy
where x and y are the original and x′ and y′ are the new coordinates.
Scaling
Object scaled by a factor Sx in the x-direction and Sy in the y-direction:
x′ = x Sx
y′ = y Sy
Rotation
Object rotated through an angle θ about the origin of the coordinate system:
x′ = x cos θ + y sin θ
y′ = -x sin θ + y cos θ
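
As a small illustration, the three operations can be coded directly from the formulas above; this sketch, with an assumed Point struct, is not part of the original slides:

#include <math.h>

typedef struct { double x, y; } Point;

Point shift (Point p, double dx, double dy) {   /* x' = x + dx, y' = y + dy */
   Point q = { p.x + dx, p.y + dy };
   return q;
}

Point scale (Point p, double sx, double sy) {   /* x' = x*Sx, y' = y*Sy */
   Point q = { p.x * sx, p.y * sy };
   return q;
}

Point rotate (Point p, double theta) {          /* rotation about the origin */
   Point q = {  p.x * cos (theta) + p.y * sin (theta),
               -p.x * sin (theta) + p.y * cos (theta) };
   return q;
}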

43 Sample assignment
Write a parallel program that reads an image file in a suitable uncompressed format and generates a file of the image shifted N pixels right, where N is an input parameter.

44 Mandelbrot Set
Set of points in a complex plane that are quasi-stable (will increase and decrease, but not exceed some limit) when computed by iterating the function
z_{k+1} = z_k^2 + c
where z_{k+1} is the (k + 1)th iteration of the complex number z = a + bi and c is a complex number giving the position of the point in the complex plane. The initial value for z is zero.
Iterations are continued until the magnitude of z is greater than 2 or the number of iterations reaches an arbitrary limit. The magnitude of z is the length of the vector, given by |z| = sqrt(a^2 + b^2).
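
The slides do not show the iteration routine itself; a minimal cal_pixel sketch along the lines described above could look like this (the complex struct and the iteration limit of 256 are assumptions):

typedef struct { float real, imag; } complex;

int cal_pixel (complex c) {
   int count = 0;                   /* number of iterations performed */
   int max_iter = 256;              /* arbitrary iteration limit */
   float z_real = 0, z_imag = 0;    /* initial value of z is zero */
   float lengthsq, temp;

   do {
      temp   = z_real * z_real - z_imag * z_imag + c.real;   /* real part of z*z + c */
      z_imag = 2 * z_real * z_imag + c.imag;                 /* imaginary part */
      z_real = temp;
      lengthsq = z_real * z_real + z_imag * z_imag;
      count++;
   } while (lengthsq < 4.0 && count < max_iter);   /* |z| > 2 means lengthsq > 4 */

   return count;                    /* used as the pixel color */
}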

47 Mandelbrot Set
If the window is to display the complex plane with minimum values (real_min, imag_min) and maximum values (real_max, imag_max), each (x, y) point needs to be scaled by the factors
c.real = real_min + x * (real_max - real_min) / disp_width;
c.imag = imag_min + y * (imag_max - imag_min) / disp_height;
or, using precomputed scale factors,
scale_real = (real_max - real_min) / disp_width;
scale_imag = (imag_max - imag_min) / disp_height;

48 Mandelbrot Set
Including scaling, the code could be of the form
for (x = 0; x < disp_width; x++)
   for (y = 0; y < disp_height; y++) {
      c.real = real_min + ((float) x * scale_real);
      c.imag = imag_min + ((float) y * scale_imag);
      color = cal_pixel(c);
      display(x, y, color);
   }

49 Sample Results

50 Parallelizing Mandelbrot Set Computation
Static Task Assignment
Simply divide the region into a fixed number of parts, each computed by a separate processor.
Not very successful because different regions require different numbers of iterations and time.
Dynamic Task Assignment
Have processors request new regions after computing their previous regions.

51 Partitioning into regions for individual processes
Square region for each process (can also use strips)
(Figure: a 640 × 480 display divided into 80 × 80 squares, each square mapped to a process.)

52 Dynamic Task Assignment: Work Pool/Processor Farms
(Figure: the master holds a pool of tasks, here image rows, and hands a new row to each slave as the slave returns a result.)

53 Master
count = 0;                                /* counter for termination */
row = 0;                                  /* row being sent */
for (k = 0; k < procno; k++) {            /* assumes procno < disp_height */
   send(&row, Pk, data_tag);              /* send initial row to each processor */
   count++;
   row++;
}
do {
   recv(&slave, &r, color, Pany, result_tag);
   count--;                               /* reduce count as row received */
   if (row < disp_height) {
      send(&row, Pslave, data_tag);       /* send next row */
      count++;
      row++;
   } else
      send(&row, Pslave, terminator_tag); /* terminate */
   rows_recv++;
   display(r, color);
} while (count > 0);

54 Slave
recv(&y, Pmaster, source_tag);               /* receive 1st row to compute */
while (source_tag == data_tag) {
   c.imag = imag_min + ((float) y * scale_imag);
   for (x = 0; x < disp_width; x++) {        /* compute row colors */
      c.real = real_min + ((float) x * scale_real);
      color[x] = cal_pixel(c);
   }
   send(&i, &y, color, Pmaster, result_tag); /* send row colors to master */
   recv(&y, Pmaster, source_tag);            /* receive next row */
};

55 Outline
Monte Carlo method
Sequential random number generators
Parallel random number generators
Generating non-uniform random numbers
Monte Carlo case studies

56 Monte Carlo Method
Basic idea: use random selections in calculations that lead to the solution of numerical and physical problems; that is, solve a problem using statistical sampling.
Name comes from Monaco's gambling resort city
First important use in development of atomic bomb during World War II
57 Applications of Monte Carlo Method
Evaluating integrals of arbitrary functions of higher dimensions
Predicting future values of stocks
Solving partial differential equations
Sharpening satellite images
Modeling cell populations
Finding approximate solutions to NP-hard problems

58 Example of Monte Carlo Method
(Figure: a circle of radius r = D/2 inscribed in a square of side D. Area of square = D²; area of circle = π(D/2)². The ratio of the two areas is π/4, so counting the fraction of random points in the square that fall inside the circle gives an estimate of π.)
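
For instance, a minimal sequential sketch of this circle-in-a-square estimate of π, using the rand_v routine introduced a few slides later (the sample count N and the variable names are assumptions):

count = 0;
for (i = 0; i < N; i++) {                 /* N random points in [-1,1] x [-1,1] */
   x = rand_v(-1.0, 1.0);
   y = rand_v(-1.0, 1.0);
   if (x * x + y * y <= 1.0)              /* point falls inside the unit circle */
      count++;
}
pi_estimate = 4.0 * (double) count / N;   /* ratio of the areas is pi/4 */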

59 Example of Monte Carlo Method
(Figure: the same idea applied to an arbitrary region enclosed in a square of area D²; the fraction of random points that fall inside the region estimates its area.)

60 Example: Computing the integral
The probabilistic method to find an integral I over [x1, x2] is to use random values of x to compute f(x) and sum the values of f(x):
I ≈ (x2 - x1) * (1/N) * Σ f(xr), summed over N random samples
Here xr is a random number generated between x1 and x2.

61 Mean Value Theorem
I = ∫ f(x) dx over [x1, x2] = (x2 - x1) · f̄, where f̄ is the mean value of f over the interval [x1, x2].

62 Estimating Mean Value
The expected value of (1/n)(f(x1) + … + f(xn)) is f̄, the mean value of f.

63 Why Monte Carlo Works

64 Example: Computing the integral
Sequential Code
sum = 0;
for (i = 0; i < N; i++) {            /* N random samples */
   xr = rand_v(x1, x2);              /* generate next random value */
   sum = sum + xr * xr - 3 * xr;     /* compute f(xr) */
}
area = (sum / N) * (x2 - x1);
Routine rand_v(x1, x2) returns a pseudorandom number between x1 and x2.
Requires a statistical analysis
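
A parallel version follows directly from the cyclic allocation and sum-reduction used for circuit satisfiability; the sketch below (variable names are assumptions) gives each process every pth sample and reduces the partial sums on process 0. Note that, as the outline indicates, each process needs an independent random number stream for the samples to be valid.

local_sum = 0;
for (i = id; i < N; i += p) {                  /* this process's share of samples */
   xr = rand_v(x1, x2);                        /* requires a per-process random stream */
   local_sum = local_sum + xr * xr - 3 * xr;   /* compute f(xr) */
}
MPI_Reduce (&local_sum, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
if (!id)
   area = (sum / N) * (x2 - x1);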

65 Partitioning and Divide-and-Conquer Strategies
Partitioning simply divides the problem into parts: domain decomposition and functional decomposition.
Divide and Conquer
Characterized by dividing a problem into sub-problems that are of the same form as the larger problem. Further divisions into still smaller sub-problems are usually done by recursion.
Recursive divide and conquer is amenable to parallelization because separate processes can be used for the divided parts. Also, the data is usually naturally localized.

66 Divide and conquer
A parent process divides its workload into several smaller pieces and assigns them to a number of child processes. The child processes then compute their workload in parallel and the results are merged by the parent. The dividing and the merging procedures are done recursively. This paradigm is very natural for computations such as quicksort. Its disadvantage is the difficulty of achieving good load balance.

67 Partitioning/Divide and Conquer Examples
Many possibilities:
• Operations on sequences of numbers, such as simply adding them together
• Several sorting algorithms can often be partitioned or constructed in a recursive fashion
• Numerical integration

68 Bucket sort
One "bucket" is assigned to hold the numbers that fall within each region. The numbers in each bucket are then sorted using a sequential algorithm (e.g., insertion sort).
Sequential sorting time complexity: O(n log(n/m)) for n numbers and m buckets.
Works well if the original numbers are uniformly distributed across a known interval, say [0, 1).
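
A minimal sequential sketch for numbers uniformly distributed in [0, 1) is given below; the bucket layout and the insertion_sort helper are assumptions, not part of the slides:

#include <stdlib.h>

void insertion_sort (double *v, int k) {         /* simple in-place insertion sort */
   int i, j;
   for (i = 1; i < k; i++) {
      double key = v[i];
      for (j = i - 1; j >= 0 && v[j] > key; j--)
         v[j + 1] = v[j];
      v[j + 1] = key;
   }
}

/* Sort n numbers from [0, 1) using m buckets. */
void bucket_sort (double a[], int n, int m) {
   double **bucket = malloc (m * sizeof (double *));
   int *count = calloc (m, sizeof (int));
   int i, b, k = 0;

   for (b = 0; b < m; b++)
      bucket[b] = malloc (n * sizeof (double));  /* worst-case bucket size */

   for (i = 0; i < n; i++) {                     /* scatter numbers into buckets */
      b = (int) (a[i] * m);                      /* bucket index for a[i] in [0,1) */
      bucket[b][count[b]++] = a[i];
   }
   for (b = 0; b < m; b++) {                     /* sort each bucket, then gather */
      insertion_sort (bucket[b], count[b]);
      for (i = 0; i < count[b]; i++)
         a[k++] = bucket[b][i];
      free (bucket[b]);
   }
   free (bucket);
   free (count);
}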

69 Parallel version of Bucket sort (simple approach)
Assign one processor for each bucket.

70 Further Parallelization
Partition the sequence into m regions, one region for each processor.
Each processor maintains p "small" buckets and separates the numbers in its region into its own small buckets.
The small buckets are then emptied into the p final buckets for sorting, which requires each processor to send one small bucket to each of the other processors (bucket i to processor i).

71 Another parallel version of bucket sort
(Figure: each processor separates its region into p small buckets, small bucket i from every processor is sent to processor i, and each processor then sorts its large bucket.)

72 The following phases are needed
1. Partition numbers
2. Sort into small buckets
3. Send to large buckets
4. Sort large buckets
This introduces a new message-passing operation: all-to-all broadcast.

73 “all-to-all” broadcast routine
Sends data from each process to every other process

74 “all-to-all” routine actually transfers rows of an array to columns
Effect of “all-to-all” on an array Parallel Computing(Intro-07): Rajeev Wankar
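
In MPI this pattern is provided by MPI_Alltoall (or MPI_Alltoallv when the per-destination counts differ, as they do for the small buckets). A minimal sketch of the fixed-size case, with assumed buffer names, is:

/* Each process packs p blocks of BLOCK doubles into sendbuf, block i destined
   for process i; after the call, recvbuf holds one block from every process
   (rows of the conceptual array become columns). */
MPI_Alltoall (sendbuf, BLOCK, MPI_DOUBLE,
              recvbuf, BLOCK, MPI_DOUBLE, MPI_COMM_WORLD);

Because the small buckets generally hold different numbers of elements, MPI_Alltoallv, which takes explicit send and receive counts and displacements, is the closer fit in practice.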

75 Data Parallel Example - Prefix Sum Problem
Given a list of numbers, x0, …, xn-1, compute all the partial summations (i.e., x0 + x1; x0 + x1 + x2; x0 + x1 + x2 + x3; …).
Can also be defined with associative operations other than addition.
Widely studied. Practical applications in areas such as processor allocation, data compaction, sorting, and polynomial evaluation.

76 Sequential code
for (i = 0; i < n; i++) {
   sum[i] = 0;
   for (j = 0; j <= i; j++)
      sum[i] = sum[i] + x[j];
}
An O(n²) algorithm.

77 Parallel Method
For obtaining the partial sums of 16 numbers, a different number of computations occurs in each step.
First, 15 (16-1) additions occur in which x[i-1] is added to x[i], 1 ≤ i < 16.
Second, 14 (16-2) additions occur in which x[i-2] is added to x[i], 2 ≤ i < 16.
Then, 12 (16-4) additions occur in which x[i-4] is added to x[i], 4 ≤ i < 16.
In general, the method requires log n steps; in step j, n - 2^j additions occur in which x[i-2^j] is added to x[i], 2^j ≤ i < n.

78 Sequential code
for (j = 0; j < log(n); j++) {
   for (i = 2^j; i < n; i++)
      x[i] = x[i] + x[i - 2^j];
}
(here 2^j denotes 2 raised to the power j, as on the previous slide)

79 Parallel code
for (j = 0; j < log(n); j++) {
   forall (i = 0; i < n; i++)
      if (i >= 2^j)
         x[i] = x[i] + x[i - 2^j];
}
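
When each process holds one element, message-passing libraries provide the prefix operation directly; for example, with one int per process an inclusive prefix sum can be obtained with MPI_Scan (a sketch, not part of the slides):

int x;      /* this process's value                  */
int psum;   /* on process id: x_0 + x_1 + ... + x_id */
MPI_Scan (&x, &psum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);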

81 Another fully synchronous computation example
Solving a General System of Linear Equations by Iteration
Suppose the equations are of a general form with n equations and n unknowns:
a_{i,0} x_0 + a_{i,1} x_1 + … + a_{i,n-1} x_{n-1} = b_i,   for i = 0, 1, …, n-1

82 One way to solve these equations for the unknowns is by iteration. By rearranging the ith equation
a_{i,0} x_0 + a_{i,1} x_1 + … + a_{i,n-1} x_{n-1} = b_i
to
x_i = (1/a_{i,i}) [b_i - (a_{i,0} x_0 + … + a_{i,i-1} x_{i-1} + a_{i,i+1} x_{i+1} + … + a_{i,n-1} x_{n-1})]
or, more compactly,
x_i = (1/a_{i,i}) [b_i - Σ_{j≠i} a_{i,j} x_j]

83 Termination
A simple, common approach: compare values computed in one iteration to values obtained from the previous iteration. Terminate the computation when all values are within a given tolerance; i.e., when
|x_i(t) - x_i(t-1)| < error tolerance, for all i
where x_i(t) is the value of x_i after the tth iteration. However, this does not guarantee the solution to that accuracy.

84 Sequential Code
for (i = 0; i < n; i++)
   x[i] = b[i];                            /* initialize unknowns */
for (iteration = 0; iteration < limit; iteration++) {
   for (i = 0; i < n; i++) {
      sum = 0;
      for (j = 0; j < n; j++)              /* compute summation */
         if (i != j) sum = sum + a[i][j] * x[j];
      new_x[i] = (b[i] - sum) / a[i][i];   /* compute unknown */
   }
   for (i = 0; i < n; i++)                 /* update values for next iteration */
      x[i] = new_x[i];
}

85 Parallel Code
Process Pi could be of the form
x[i] = b[i];                          /* initialize unknown */
for (iteration = 0; iteration < limit; iteration++) {
   sum = -a[i][i] * x[i];
   for (j = 0; j < n; j++)            /* compute summation */
      sum = sum + a[i][j] * x[j];
   new_x[i] = (b[i] - sum) / a[i][i]; /* compute unknown */
   allgather(&new_x[i]);              /* bcast or recv values */
   global_barrier();                  /* wait for all procs */
}
allgather() sends the newly computed value of x[i] from process i to every other process and collects data broadcast from the other processes.

86 Allgather
Broadcast and gather values in one composite construction.
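
In MPI, the allgather() of the previous slide corresponds to MPI_Allgather. Assuming one unknown per process and that the process rank equals the index i, one iteration step could look like this (a sketch, not the full program):

new_x = (b[i] - sum) / a[i][i];        /* this process's updated unknown */
MPI_Allgather (&new_x, 1, MPI_DOUBLE,  /* contribute one value ...          */
               x, 1, MPI_DOUBLE,       /* ... and receive all n of them,    */
               MPI_COMM_WORLD);        /* ordered by rank, into x[0..n-1]   */

Because every process must contribute before any process can complete the call, the MPI_Allgather effectively provides the synchronization that the explicit global_barrier() supplies in the pseudocode.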

87 Textbooks
Ian T. Foster, Designing and Building Parallel Programs: Concepts and Tools for Parallel Software Engineering, Addison-Wesley (1995).
Kai Hwang and Zhiwei Xu, Scalable Parallel Computing: Technology, Architecture, Programming, McGraw-Hill, New York (1997).
Michael J. Quinn, Parallel Programming in C with MPI and OpenMP, Tata McGraw-Hill Edition (2003).
Barry Wilkinson and Michael Allen, Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers, Prentice Hall, Upper Saddle River, NJ (1999).

