Computational Physics Lecture 4 Dr. Guy Tel-Zur.

Join the Computational Science and Engineering group

Regarding the home assignment: please attach exact run instructions and state on which system the program was run (operating system, environment, compiler). Regarding the final projects: please think of a topic! The topic must be coordinated with me by lesson no. 7, 25/11/2010.

Wolfram Alpha - Computational Knowledge Engine

Agenda
- Introduction to HPC – continued from slide 57
- Linear Algebra (chapter in MHJ)
- MPI mini-course – Message Passing Interface (MPI) (LLNL tutorial)
- Message Passing Interface (MPI) – DeinoMPI short presentation – examples: cpi, Mandelbrot
- How to build a parallel cluster demo using BCCD

MPI examples
- Parallel numerical integration – computing pi
- Embarrassingly parallel computation – the Mandelbrot set

Figures in this slide are taken from: “Ch MPI: Interpretive Parallel Computing in C”, by Yu-Cheng Chou (Chung Yuan Christian University, Taiwan), Stephen S. Nestinger (Worcester Polytechnic Institute), and Harry H. Cheng (University of California, Davis), Computing in Science and Engineering, March/April 2010.

/************************************************
 * File: cpi.c
 * Purpose: Parallel computation of pi with the
 * number of intervals from standard input.
 ************************************************/
#include <stdio.h>
#include <math.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int n, myid, numprocs, i;
    double PI25DT = 3.141592653589793238462643;
    double mypi, pi, h, sum, x;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    while (1) {
        if (myid == 0) {
            printf("Enter the number of intervals: (0 quits) ");
            scanf("%d", &n);
        }

cpi.c (continued on the next slide)

        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
        if (n == 0) {            /* program quits if the number of intervals is 0 */
            break;
        } else {
            h = 1.0 / (double)n;
            sum = 0.0;
            for (i = myid + 1; i <= n; i += numprocs) {
                x = h * ((double)i - 0.5);
                sum += (4.0 / (1.0 + x * x));
            }
            mypi = h * sum;
            MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
            if (myid == 0) {
                printf("PI is %.16f, Error is %.16f\n", pi, fabs(pi - PI25DT));
            }
        } /* n != 0 */
    } /* end of while */

    MPI_Finalize();
    return 0;
} /* end of main */

cpi.c, continued from the previous slide
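As a usage note (not from the slides): with a typical Unix-style MPI installation the program can be built with mpicc cpi.c -o cpi and launched with mpiexec -n 4 ./cpi. In the class demo, DeinoMPI on Windows is used instead, and the job is started through mpiexec from the DeinoMPI bin directory.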

Switch to the DeinoMPI slides. Demo: execute the cpi.c code.

Benoît B. Mandelbrot (1924–2010), Father of Fractal Geometry. Died on Oct 15, 2010.

Next slides are from online presentations by Dr. Barry Wilkinson, University of North Carolina at Charlotte, Department of Computer Science, ITCS 4145/5145 Parallel Programming, Spring 2009.

Mandelbrot Set
Set of points in the complex plane that are quasi-stable (they increase and decrease, but do not exceed some limit) when computed by iterating the function

    z_{k+1} = z_k^2 + c

where z_{k+1} is the (k+1)th iterate of the complex number z = a + bi, and c is a complex number giving the position of the point in the complex plane. The initial value of z is zero.

Mandelbrot Set (continued)
Iterations continue until either:
- the magnitude of z is greater than 2, or
- the number of iterations reaches an arbitrary limit.
The magnitude of z is the length of the vector, |z| = sqrt(a^2 + b^2).
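As a quick check of this rule with z starting at zero: for c = -1 the iterates are 0, -1, 0, -1, ..., the magnitude never exceeds 2 and the loop stops only at the iteration limit, so c = -1 belongs to the set; for c = 1 the iterates are 0, 1, 2, 5, 26, ..., the magnitude exceeds 2 after a few steps, so c = 1 lies outside the set.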

Sequential routine computing the value of one point, returning the number of iterations:

typedef struct {
    float real;
    float imag;
} complex;

int cal_pixel(complex c)
{
    int count, max;
    complex z;
    float temp, lengthsq;

    max = 256;
    z.real = 0;
    z.imag = 0;
    count = 0;                /* number of iterations */
    do {
        temp = z.real * z.real - z.imag * z.imag + c.real;
        z.imag = 2 * z.real * z.imag + c.imag;
        z.real = temp;
        lengthsq = z.real * z.real + z.imag * z.imag;  /* |z|^2, compared with 4 to avoid sqrt */
        count++;
    } while ((lengthsq < 4.0) && (count < max));
    return count;
}
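To produce a full image, a driver loop maps each display pixel to a point c in a chosen region of the complex plane and calls cal_pixel. The sketch below is illustrative only, not from the original slides: the image size, the region [-2, 2] x [-2, 2], and the display() call are assumed names and values.

#define DISP_WIDTH  640      /* assumed image size, for illustration */
#define DISP_HEIGHT 480

void compute_image(void)
{
    /* assumed region of the complex plane: [-2, 2] x [-2, 2] */
    float real_min = -2.0, imag_min = -2.0;
    float scale_real = 4.0 / DISP_WIDTH;
    float scale_imag = 4.0 / DISP_HEIGHT;
    int x, y, color;
    complex c;

    for (y = 0; y < DISP_HEIGHT; y++) {
        for (x = 0; x < DISP_WIDTH; x++) {
            c.real = real_min + x * scale_real;   /* map pixel (x, y) to c */
            c.imag = imag_min + y * scale_imag;
            color = cal_pixel(c);                 /* iteration count decides the color */
            /* display(x, y, color);  drawing routine omitted in this sketch */
        }
    }
}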

Mandelbrot set. Show the class a live demo using Octave.

Demo: Guy’s MATLAB program

Parallelizing the Mandelbrot Set Computation
Static Task Assignment
Simply divide the region into a fixed number of parts, each computed by a separate processor.
Not very successful, because different regions require different numbers of iterations and different amounts of time.
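A minimal sketch of such a static split, reusing cal_pixel and the image constants from the earlier sketch and assuming the number of processes divides the number of rows evenly (this is an illustration, not the slides' code): each process computes one contiguous band of rows.

/* Static task assignment sketch: numprocs contiguous bands of rows,
   one band per process (illustrative only). */
void static_mandelbrot(int myid, int numprocs, int *my_rows)
{
    int rows_per_proc = DISP_HEIGHT / numprocs;   /* assumed to divide evenly */
    int first_row = myid * rows_per_proc;
    int x, y;
    complex c;

    for (y = first_row; y < first_row + rows_per_proc; y++) {
        for (x = 0; x < DISP_WIDTH; x++) {
            c.real = -2.0 + x * (4.0 / DISP_WIDTH);
            c.imag = -2.0 + y * (4.0 / DISP_HEIGHT);
            my_rows[(y - first_row) * DISP_WIDTH + x] = cal_pixel(c);
        }
    }
    /* process 0 could then collect all bands, e.g. with
       MPI_Gather(my_rows, rows_per_proc * DISP_WIDTH, MPI_INT, ...) */
}

Rows that cross the interior of the set hit the iteration limit on almost every pixel, while rows far from the set finish quickly, so some bands take much longer than others. That is exactly the load imbalance the slide points out.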

Dynamic Task Assignment
Have processors request new regions after computing their previous regions.
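A common way to implement this is a work-pool (master/worker) scheme. The sketch below reuses cal_pixel and the image constants from the earlier sketches; the function names, the message tags, and the sentinel value -1 are illustrative assumptions, not the code behind pmandel. Process 0 hands out row numbers one at a time and sends the sentinel when no rows remain.

/* Dynamic task assignment sketch (work pool), illustrative only.
   Assumes at least as many rows as worker processes. */
void mandelbrot_master(int numprocs)
{
    int row = 0, active = 0, p, src;
    int result[DISP_WIDTH + 1];               /* result[0] holds the row number */
    MPI_Status status;

    /* give every worker one row to start with */
    for (p = 1; p < numprocs && row < DISP_HEIGHT; p++, row++, active++)
        MPI_Send(&row, 1, MPI_INT, p, 0, MPI_COMM_WORLD);

    while (active > 0) {
        MPI_Recv(result, DISP_WIDTH + 1, MPI_INT, MPI_ANY_SOURCE, 0,
                 MPI_COMM_WORLD, &status);
        src = status.MPI_SOURCE;
        active--;
        /* store result[1..DISP_WIDTH] as image row result[0] here */
        if (row < DISP_HEIGHT) {              /* more work: send the next row */
            MPI_Send(&row, 1, MPI_INT, src, 0, MPI_COMM_WORLD);
            row++;
            active++;
        } else {                              /* no more work: send the sentinel */
            int stop = -1;
            MPI_Send(&stop, 1, MPI_INT, src, 0, MPI_COMM_WORLD);
        }
    }
}

void mandelbrot_worker(void)
{
    int row, x;
    int result[DISP_WIDTH + 1];
    complex c;
    MPI_Status status;

    while (1) {
        MPI_Recv(&row, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        if (row < 0)                          /* sentinel: no more rows */
            break;
        result[0] = row;
        for (x = 0; x < DISP_WIDTH; x++) {
            c.real = -2.0 + x * (4.0 / DISP_WIDTH);
            c.imag = -2.0 + row * (4.0 / DISP_HEIGHT);
            result[x + 1] = cal_pixel(c);
        }
        MPI_Send(result, DISP_WIDTH + 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }
}

Because fast workers simply come back for more rows, the expensive rows no longer pin the whole run to one process, at the cost of extra messages and a master that does no pixel computation itself.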

Demo: parallel Mandelbrot set + parallel visualization
- Open 2 DOS command-line windows and cd to the DeinoMPI\examples directory
- Start pmandel using:
  Window 1: ..\bin\mpiexec -n 4 pmandel
  Window 2: pman_vis.exe (select the port from the “File” menu)