Embarrassingly Parallel Computations

Prepared 7/28/2011 by T. O’Neil for 3460:677, Fall 2011, The University of Akron.

Chapter 3: Embarrassingly Parallel Computations

Embarrassingly Parallel Computations
A computation that can obviously be divided into a number of completely independent parts, each of which can be executed by a separate process.
- No communication, or very little communication, between processes
- Each process can carry out its tasks without any interaction with the other processes

Practical embarrassingly parallel computation (static process creation / master-slave approach)
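
One common way to organise this master-slave structure in practice is with MPI. The sketch below is illustrative only: the array size, the squaring used as the "work", and the assumption that the data divides evenly among processes are placeholders, not part of the original slides.

#include <mpi.h>
#include <stdio.h>

#define N 1024                        /* total number of data items (assumed) */

int main(int argc, char *argv[])
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int chunk = N / nprocs;           /* assume N is divisible by nprocs */
    static double data[N], result[N];
    double part[N];

    if (rank == 0)                    /* master prepares the input data */
        for (int i = 0; i < N; i++) data[i] = (double)i;

    /* master sends an equal-sized piece to every process */
    MPI_Scatter(data, chunk, MPI_DOUBLE, part, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    for (int i = 0; i < chunk; i++)   /* independent work, no communication */
        part[i] = part[i] * part[i];

    /* master collects the partial results */
    MPI_Gather(part, chunk, MPI_DOUBLE, result, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("last result = %f\n", result[N - 1]);

    MPI_Finalize();
    return 0;
}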

Embarrassingly Parallel Computations: Examples
- Low-level image processing: many of these operations involve only local data, with very limited (if any) communication between areas of interest
- Mandelbrot set computation
- Monte Carlo calculations

Some geometrical operations

Shifting: an object is shifted by Δx in the x-dimension and Δy in the y-dimension:
   x′ = x + Δx
   y′ = y + Δy
where x and y are the original coordinates and x′ and y′ are the new coordinates.

Scaling: an object is scaled by a factor Sx in the x-direction and Sy in the y-direction:
   x′ = x Sx
   y′ = y Sy

Rotation: an object is rotated through an angle θ about the origin of the coordinate system:
   x′ = x cos θ + y sin θ
   y′ = -x sin θ + y cos θ
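
These transformations apply independently to each pixel, so they can be computed with no interaction between processes. A minimal sketch of the three operations as C functions follows; the point type and function names are illustrative, not taken from the slides.

#include <math.h>

typedef struct { float x, y; } point;

/* shift by (dx, dy):  x' = x + Dx,  y' = y + Dy */
point shift(point p, float dx, float dy) {
    point q = { p.x + dx, p.y + dy };
    return q;
}

/* scale by (sx, sy):  x' = x*Sx,  y' = y*Sy */
point scale(point p, float sx, float sy) {
    point q = { p.x * sx, p.y * sy };
    return q;
}

/* rotate through angle theta about the origin */
point rotate(point p, float theta) {
    point q = { p.x * cosf(theta) + p.y * sinf(theta),
               -p.x * sinf(theta) + p.y * cosf(theta) };
    return q;
}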

Partitioning into regions for individual processes: a square region is assigned to each process (strips can also be used).

Mandelbrot Set
What is the Mandelbrot set? It is the set of all complex numbers c for which the sequence defined by the iteration
   z(0) = c,   z(n+1) = z(n)*z(n) + c,   n = 0, 1, 2, ...
remains bounded. That is, there is a number B such that the absolute value of every iterate z(n) never gets larger than B.

Sequential routine computing the value of one point

typedef struct {
   float real;
   float imag;
} complex;

int cal_pixel(complex c)
{
   int count, max;
   complex z;
   float temp, lengthsq;

   max = 256;
   z.real = 0;
   z.imag = 0;
   count = 0;                           /* number of iterations */
   do {
      temp = z.real * z.real - z.imag * z.imag + c.real;
      z.imag = 2 * z.real * z.imag + c.imag;
      z.real = temp;
      lengthsq = z.real * z.real + z.imag * z.imag;
      count++;
   } while ((lengthsq < 4.0) && (count < max));
   return count;
}
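
To produce an image, each display pixel must be mapped to a point c in the complex plane and coloured by the returned iteration count. The driver below is a hedged sketch of that mapping; the parameter names (disp_width, real_min, and so on) and the display routine are assumptions, and it relies on the complex type and cal_pixel() defined above.

/* illustrative driver: scale pixel coordinates into the complex plane */
void compute_image(int disp_width, int disp_height,
                   float real_min, float real_max,
                   float imag_min, float imag_max)
{
    float scale_real = (real_max - real_min) / disp_width;
    float scale_imag = (imag_max - imag_min) / disp_height;

    for (int y = 0; y < disp_height; y++) {
        for (int x = 0; x < disp_width; x++) {
            complex c;
            c.real = real_min + x * scale_real;
            c.imag = imag_min + y * scale_imag;
            int colour = cal_pixel(c);
            /* display(x, y, colour);     hypothetical output routine */
            (void)colour;
        }
    }
}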

Parallelizing the Mandelbrot Set Computation

Static Task Assignment: simply divide the region into a fixed number of parts, each computed by a separate processor.
Disadvantage: different regions may require different numbers of iterations, and therefore different amounts of time.

Dynamic Task Assignment: hand out work on demand (a work-pool approach), so that a processor that finishes its region quickly is given another one; this balances the load across processors.
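
A common way to implement dynamic task assignment is a work pool in which the master hands out one row of the image at a time and sends a new row to whichever worker returns a result first. The MPI sketch below is illustrative only: the image size, message tags, row-at-a-time granularity, and the mapping of pixels to the complex plane are assumptions, and it reuses cal_pixel() from the sequential routine above. It also assumes there are fewer workers than rows.

#include <mpi.h>

#define WIDTH    640
#define HEIGHT   480
#define TAG_WORK 1
#define TAG_STOP 2

typedef struct { float real; float imag; } complex;   /* same layout as above */
extern int cal_pixel(complex c);                       /* sequential routine above */

void master(int nworkers)
{
    static int image[HEIGHT][WIDTH];
    int row = 0, active = 0, result[WIDTH + 1];        /* result[WIDTH] carries the row index */
    MPI_Status status;

    /* prime every worker with one row */
    for (int w = 1; w <= nworkers && row < HEIGHT; w++, row++, active++)
        MPI_Send(&row, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);

    while (active > 0) {
        MPI_Recv(result, WIDTH + 1, MPI_INT, MPI_ANY_SOURCE, TAG_WORK,
                 MPI_COMM_WORLD, &status);
        active--;
        for (int x = 0; x < WIDTH; x++)                /* store the finished row */
            image[result[WIDTH]][x] = result[x];
        if (row < HEIGHT) {                            /* more work: send the next row */
            MPI_Send(&row, 1, MPI_INT, status.MPI_SOURCE, TAG_WORK, MPI_COMM_WORLD);
            row++;
            active++;
        } else {                                       /* no work left: stop this worker */
            MPI_Send(&row, 1, MPI_INT, status.MPI_SOURCE, TAG_STOP, MPI_COMM_WORLD);
        }
    }
}

void worker(void)
{
    int y, result[WIDTH + 1];
    MPI_Status status;

    for (;;) {
        MPI_Recv(&y, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        if (status.MPI_TAG == TAG_STOP)
            break;
        for (int x = 0; x < WIDTH; x++) {
            complex c;
            c.real = -2.0f + x * (4.0f / WIDTH);       /* assumed bounds of the plane */
            c.imag = -2.0f + y * (4.0f / HEIGHT);
            result[x] = cal_pixel(c);
        }
        result[WIDTH] = y;                             /* return the row index with the data */
        MPI_Send(result, WIDTH + 1, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD);
    }
}

int main(int argc, char *argv[])
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    if (rank == 0) master(nprocs - 1);
    else worker();
    MPI_Finalize();
    return 0;
}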

Monte Carlo Methods
Another embarrassingly parallel computation. Example: calculate π using the ratio of the area of a circle to the area of the square that encloses it:
- Randomly choose points within the square
- Count the points that lie within the circle
- Given a sufficient number of randomly selected samples, the fraction of points within the circle approaches π/4

Example of the Monte Carlo method: a circle of diameter D inscribed in a square of side D. The square has area D^2 and the circle has area πD^2/4, so the ratio of the circle's area to the square's area is π/4.
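
A short sequential sketch of the π estimate follows; the sample count and the use of the standard rand() function are illustrative choices, not from the slides. Points are sampled in the square [-1, 1] x [-1, 1], which encloses the circle of radius 1.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const long N = 10000000;                        /* number of random samples (assumed) */
    long in_circle = 0;

    for (long i = 0; i < N; i++) {
        double x = 2.0 * rand() / RAND_MAX - 1.0;   /* uniform in [-1, 1] */
        double y = 2.0 * rand() / RAND_MAX - 1.0;
        if (x * x + y * y <= 1.0)                   /* point falls inside the circle */
            in_circle++;
    }

    /* the fraction inside the circle approaches pi/4 */
    printf("pi is approximately %f\n", 4.0 * (double)in_circle / N);
    return 0;
}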

Computing an Integral
Use random values of x to compute f(x) and sum the values of f(x):

   area = ∫ from x1 to x2 of f(x) dx  ≈  (1/N) Σ f(xi) × (x2 - x1)

where the sum runs over i = 1, ..., N and the xi are randomly generated values of x between x1 and x2. The Monte Carlo method is very useful when the function cannot be integrated numerically (perhaps because it has a large number of variables).

Example: Sequential Code

sum = 0;
for (i = 0; i < N; i++) {            /* N random samples */
   xr = rand_v(x1, x2);              /* generate next random value */
   sum = sum + xr * xr - 3 * xr;     /* compute f(xr) */
}
area = (sum / N) * (x2 - x1);

The routine rand_v(x1, x2) returns a pseudorandom number between x1 and x2.

For the linear congruential generator x(i+1) = (a x(i) + c) mod m, values for a "good" generator are a = 16807, m = 2^31 - 1 (a prime number), and c = 0. This generates a repeating sequence of 2^31 - 2 different numbers.
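
A minimal sketch of this generator in C is shown below. The function names (lcg_next, rand_v), the seed handling, and the use of a 64-bit intermediate to avoid overflow are illustrative assumptions; only the constants a = 16807 and m = 2^31 - 1 come from the text above.

#include <stdint.h>

#define LCG_A 16807ULL
#define LCG_M 2147483647ULL                  /* 2^31 - 1 */

static uint64_t lcg_state = 1;               /* seed: any value in 1 .. m-1 */

/* next value of the sequence, scaled to (0, 1) */
double lcg_next(void)
{
    lcg_state = (LCG_A * lcg_state) % LCG_M; /* x(i+1) = a x(i) mod m, with c = 0 */
    return (double)lcg_state / (double)LCG_M;
}

/* one way the rand_v() routine used earlier could be written */
double rand_v(double x1, double x2)
{
    return x1 + (x2 - x1) * lcg_next();
}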

A Parallel Formulation
The sequential generator computes
   x(i+1) = (a x(i) + c) mod m
A "leapfrog" generator that jumps k numbers at a time has the same form:
   x(i+k) = (A x(i) + C) mod m
A and C can be derived by composing x(i+1) = f(x(i)), x(i+2) = f(f(x(i))), ..., x(i+k) = f(f(...f(x(i))...)) and simplifying with the following properties of modular arithmetic:
   (A + B) mod M = [(A mod M) + (B mod M)] mod M
   [X (A mod M)] mod M = (X A) mod M
   [X (A + B)] mod M = (X A + X B) mod M = [(X A mod M) + (X B mod M)] mod M
   [X ((A + B) mod M)] mod M = (X A + X B) mod M
This gives A = a^k mod m and C = c (a^(k-1) + a^(k-2) + ... + a + 1) mod m, so each of k processes can start from a different one of the first k values and then generate every k-th number of the sequence independently.
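
The sketch below shows how the constants A and C could be computed and used; the helper names and the use of 64-bit arithmetic are illustrative assumptions.

#include <stdint.h>

/* compute A = a^k mod m and C = c(a^(k-1) + ... + a + 1) mod m by
   applying x -> (a x + c) mod m to the identity map k times */
static void leapfrog_constants(uint64_t a, uint64_t c, uint64_t m, int k,
                               uint64_t *A, uint64_t *C)
{
    uint64_t Ak = 1, Ck = 0;
    for (int i = 0; i < k; i++) {
        Ck = (a * Ck + c) % m;
        Ak = (a * Ak) % m;
    }
    *A = Ak;
    *C = Ck;
}

/* each of k processes seeds x with a different one of x(0) .. x(k-1)
   and then produces every k-th element of the original sequence */
static uint64_t leapfrog_next(uint64_t *x, uint64_t A, uint64_t C, uint64_t m)
{
    *x = (A * (*x) + C) % m;
    return *x;
}

With a = 16807, c = 0, and m = 2^31 - 1 as above, process p would seed its generator with x(p) and call leapfrog_next() with k equal to the number of processes, so the processes together reproduce the original sequence without communicating.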