

Scientific Computing Lecture 5, Dr. Guy Tel-Zur. Autumn Colors, by Bobby Mikul.

Agenda
– Administration
– MPI – continue from last lecture
– DeinoMPI short presentation
– MPI examples:
  – Parallel integration
  – Parallel Mandelbrot set
– Differential equations:
  – MHJ Chapter 13 + S. Koonin Chapter 2
  – Matlab demos
  – Scilab demos
– OpenMP mini-course
– FlexPDE mini-course

Final Projects
In two weeks we must finalize the final project topics. Please devote time to searching for a project that:
– is CPU intensive
– will use one or more of the computational tools and techniques we learned in class

Feedback
You are encouraged to send feedback about all aspects of the course: math level, CS level, Condor, MPI, …
Send an email to:

No Class Next Week. We Will Meet Again on 1/5

Join the Computational Science and Engineering group

Wolfram Alpha - Computational Knowledge Engine

MPI Tutorial Continue tutorial from: Sections 1 to 6

Collective Communications Example - Scatter

#include "mpi.h" #include #define SIZE 4 int main(argc,argv) int argc; char *argv[]; { int numtasks, rank, sendcount, recvcount, source; float sendbuf[SIZE][SIZE] = { {1.0, 2.0, 3.0, 4.0}, {5.0, 6.0, 7.0, 8.0}, {9.0, 10.0, 11.0, 12.0}, {13.0, 14.0, 15.0, 16.0} }; float recvbuf[SIZE]; MPI_Init(&argc,&argv); MPI_Comm_rank(MPI_COMM_WORLD, &rank); MPI_Comm_size(MPI_COMM_WORLD, &numtasks); Show demo in Virtual box – Ubuntu OS. Folder: ~/mpi

Continued

    if (numtasks == SIZE) {
        source = 1;
        sendcount = SIZE;
        recvcount = SIZE;
        MPI_Scatter(sendbuf, sendcount, MPI_FLOAT, recvbuf, recvcount,
                    MPI_FLOAT, source, MPI_COMM_WORLD);
        printf("rank= %d Results: %f %f %f %f\n", rank,
               recvbuf[0], recvbuf[1], recvbuf[2], recvbuf[3]);
    }
    else
        printf("Must specify %d processors. Terminating.\n", SIZE);

    MPI_Finalize();
    return 0;
}
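To build and run the scatter example (the source file name and toolchain here are assumptions; adjust for your MPI installation), something like mpicc scatter.c -o scatter followed by mpiexec -n 4 ./scatter should work. The code requires exactly SIZE (= 4) processes; since source = 1, it is rank 1's send buffer that gets scattered, one row of the 4x4 matrix to each rank.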

MPI Examples
– Parallel numerical integration – computing pi
– Embarrassingly parallel computation – Mandelbrot set

Switch to DeinoMPI slides Demo: execute the cpi.c code
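For reference, a minimal sketch of the kind of computation cpi.c performs (a generic illustration, not necessarily the exact DeinoMPI example): each rank sums a strided subset of midpoint-rule terms for the integral of 4/(1+x^2) over [0,1], and MPI_Reduce combines the partial sums on rank 0.

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, i, n = 100000;           /* number of sub-intervals */
    double h, x, local_sum = 0.0, pi = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    h = 1.0 / (double) n;
    for (i = rank; i < n; i += size) {       /* cyclic distribution of intervals */
        x = h * ((double) i + 0.5);          /* midpoint of interval i */
        local_sum += 4.0 / (1.0 + x * x);    /* 4/(1+x^2); its integral on [0,1] is pi */
    }
    local_sum *= h;

    /* combine the partial sums on rank 0 */
    MPI_Reduce(&local_sum, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("pi is approximately %.16f\n", pi);

    MPI_Finalize();
    return 0;
}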

Benoît B. Mandelbrot (1924–2010), Father of Fractal Geometry. Died on October 14, 2010.

The next slides are from online presentations by Dr. Barry Wilkinson, University of North Carolina at Charlotte, Department of Computer Science, ITCS 4145/5145 Parallel Programming, Spring 2009.

Mandelbrot Set
Set of points in the complex plane that are quasi-stable (they increase and decrease but do not exceed some limit) when computed by iterating the function

    z_{k+1} = z_k^2 + c

where z_{k+1} is the (k+1)th iteration of the complex number z = a + bi, and c is a complex number giving the position of the point in the complex plane. The initial value of z is zero.

Mandelbrot Set continued
Iterations continue until either the magnitude of z is greater than 2, or the number of iterations reaches an arbitrary limit. The magnitude of z is the length of the vector:

    |z| = sqrt(a^2 + b^2)

In practice the code tests the squared length against 4, since |z| > 2 exactly when a^2 + b^2 > 4; this avoids computing a square root in every iteration.

Sequential routine computing the value of one point, returning the number of iterations

typedef struct {
    float real;
    float imag;
} complex;

int cal_pixel(complex c)
{
    int count, max;
    complex z;
    float temp, lengthsq;

    max = 256;                       /* iteration limit */
    z.real = 0;
    z.imag = 0;
    count = 0;                       /* number of iterations */
    do {
        temp = z.real * z.real - z.imag * z.imag + c.real;   /* Re(z^2 + c) */
        z.imag = 2 * z.real * z.imag + c.imag;                /* Im(z^2 + c) */
        z.real = temp;
        lengthsq = z.real * z.real + z.imag * z.imag;
        count++;
    } while ((lengthsq < 4.0) && (count < max));
    return count;
}
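A sketch of a driver that sweeps cal_pixel() over an image, mapping each pixel to a point c in the complex plane. The display size and the region [-2, 2] x [-2, 2] are illustrative assumptions in the spirit of Wilkinson's slides.

#define DISP_WIDTH  640
#define DISP_HEIGHT 480

int colour[DISP_HEIGHT][DISP_WIDTH];

void compute_image(void)
{
    float real_min = -2.0, real_max = 2.0;
    float imag_min = -2.0, imag_max = 2.0;
    float scale_real = (real_max - real_min) / DISP_WIDTH;   /* step per pixel in x */
    float scale_imag = (imag_max - imag_min) / DISP_HEIGHT;  /* step per pixel in y */
    int x, y;
    complex c;

    for (y = 0; y < DISP_HEIGHT; y++)
        for (x = 0; x < DISP_WIDTH; x++) {
            c.real = real_min + x * scale_real;    /* map pixel to complex plane */
            c.imag = imag_min + y * scale_imag;
            colour[y][x] = cal_pixel(c);           /* iteration count used as colour */
        }
}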

Mandelbrot set. Show the class a live demo using Octave.

Demo: Guy’s Matlab program

Parallelizing the Mandelbrot Set Computation
Static Task Assignment
Simply divide the region into a fixed number of parts, each computed by a separate processor. Not very successful, because different regions require different numbers of iterations and therefore take different amounts of time.
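A minimal sketch of the static split, reusing the driver above (it assumes DISP_HEIGHT divides evenly by the number of processes; variable names are illustrative):

/* static assignment: each process computes a contiguous band of rows */
int rows_per_proc = DISP_HEIGHT / numtasks;
int first_row = rank * rows_per_proc;
int last_row  = first_row + rows_per_proc;
int x, y;
complex c;

for (y = first_row; y < last_row; y++)
    for (x = 0; x < DISP_WIDTH; x++) {
        c.real = real_min + x * scale_real;
        c.imag = imag_min + y * scale_imag;
        colour[y][x] = cal_pixel(c);
    }
/* the bands would then be collected on one process, e.g. with MPI_Gather
   or point-to-point sends, before being displayed */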

Dynamic Task Assignment
Have processors request new regions after computing their previous regions (a work-pool / master-worker approach).
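A sketch of the work-pool idea behind dynamic assignment: a master hands out one row at a time and workers return computed rows. The tags, buffer layout, and constants here are illustrative; this is not the exact protocol used by pmandel.

#define WORK_TAG   1
#define STOP_TAG   2
#define RESULT_TAG 3

if (rank == 0) {                       /* master: hand out rows on demand */
    int row = 0, active = numtasks - 1, w;
    int results[DISP_WIDTH + 1];       /* results[0] carries the row index */
    MPI_Status status;

    for (w = 1; w < numtasks; w++) {   /* seed each worker with one row       */
        MPI_Send(&row, 1, MPI_INT, w, WORK_TAG, MPI_COMM_WORLD);
        row++;                         /* (assumes at least numtasks-1 rows)  */
    }
    while (active > 0) {
        MPI_Recv(results, DISP_WIDTH + 1, MPI_INT, MPI_ANY_SOURCE,
                 RESULT_TAG, MPI_COMM_WORLD, &status);
        /* ... store results[1..DISP_WIDTH] into image row results[0] ... */
        if (row < DISP_HEIGHT) {       /* more work: send the next row */
            MPI_Send(&row, 1, MPI_INT, status.MPI_SOURCE, WORK_TAG, MPI_COMM_WORLD);
            row++;
        } else {                       /* no more work: tell the worker to stop */
            MPI_Send(&row, 1, MPI_INT, status.MPI_SOURCE, STOP_TAG, MPI_COMM_WORLD);
            active--;
        }
    }
} else {                               /* worker: compute rows until told to stop */
    int y, x, buf[DISP_WIDTH + 1];
    complex c;
    MPI_Status status;

    for (;;) {
        MPI_Recv(&y, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        if (status.MPI_TAG == STOP_TAG) break;
        buf[0] = y;                    /* first slot carries the row index back */
        for (x = 0; x < DISP_WIDTH; x++) {
            c.real = real_min + x * scale_real;
            c.imag = imag_min + y * scale_imag;
            buf[x + 1] = cal_pixel(c);
        }
        MPI_Send(buf, DISP_WIDTH + 1, MPI_INT, 0, RESULT_TAG, MPI_COMM_WORLD);
    }
}

The load balances itself: workers assigned "easy" regions finish quickly and simply ask for more rows.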

Demo: parallel Mandelbrot set + parallel visualization
– Open 2 DOS command-line windows and cd to the C:\Program Files (x86)\DeinoMPI\examples directory
– Start pmandel:
  Window 1: ..\bin\mpiexec -n 4 pmandel
  Window 2: pman_vis.exe (select the port from the "File" menu)

Mandelbrot set

FlexPDE mini-course: open the other pptx file.