Lecture 10 Dr. Guy Tel-Zur

Today’s Agenda
- Sorting Algorithms (Wilkinson & Allen, chapter 10)
- MatlabMPI demo
- OpenMP demo in Visual Studio
- Hybrid OpenMP and MPI programming (hpi.c, hybridpi.c, lecture11-12.pdf)
- Numerical Algorithms (Wilkinson & Allen, chapter 11)
- CilkPlus (demos in Linux and Visual Studio)
- Home assignment #3

Visual Studio 2012 Express

MatlabMPI demo
- cd to ~/matlab
- Start Matlab without the GUI: matlab -nojvm -nosplash
- If a MatMPI directory exists before executing your code, erase it, or from inside Matlab type: MatMPI_Delete_all

xbasic.m (see more examples in /usr/local/PP/MatlabMPI/examples/)

>> MatMPI_Delete_all
No MPI_COMM_WORLD, deleting anyway, files may be leftover.
>> eval(MPI_Run('xbasic',4,{}))
Launching MPI rank: 3 on: hobbit10.ee.bgu.ac.il
Launching MPI rank: 2 on: hobbit10.ee.bgu.ac.il
Launching MPI rank: 1 on: hobbit10.ee.bgu.ac.il
Launching MPI rank: 0 on: hobbit10.ee.bgu.ac.il
unix_launch = /bin/sh ./MatMPI/Unix_Commands.hobbit10.ee.bgu.ac.il.0.sh &
my_rank: 0
SUCCESS

hello.m

>> MatMPI_Delete_all
>> eval(MPI_Run('hello',4,{}))
Launching MPI rank: 3 on: hobbit10.ee.bgu.ac.il
Launching MPI rank: 2 on: hobbit10.ee.bgu.ac.il
Launching MPI rank: 1 on: hobbit10.ee.bgu.ac.il
Launching MPI rank: 0 on: hobbit10.ee.bgu.ac.il
unix_launch = /bin/sh ./MatMPI/Unix_Commands.hobbit10.ee.bgu.ac.il.0.sh &
HelloWorld from rank: 0
SUCCESS

Hybrid MPI + OpenMP

Hybrid MPI + OpenMP Demo

Machine file (each node has 8 cores):
node1
node2
node3
node4

Compile (MPI + OpenMP in one binary): mpicc -o mpi_out mpi_test.c -fopenmp
See helper script: hybrid.bash
Lecturer's note: make a demo; cd ~/mpi, program name: hybridpi.c

mpicc -o mpi_exe mpi_test.c -fopenmp
export OMP_NUM_THREADS=8   (bash)
mpirun -np 4 -machinefile ./machines mpi_exe

With 4 MPI ranks of 8 OpenMP threads each, this run uses 4 x 8 = 32 cores: one multithreaded process per node in the machine file.

Open MPI process-placement extensions (N/A at BGU):
-npersocket, --npersocket <#persocket>: on each node, launch this many processes times the number of processor sockets on the node; also turns on the -bind-to-socket option.
-npernode, --npernode <#pernode>: on each node, launch this many processes.
-pernode, --pernode: on each node, launch one process (equivalent to -npernode 1, or -ppn 1 in other launchers).
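To see the two levels at work before the Pi example, here is a minimal hybrid "hello" sketch (an illustration written for these notes, not the course's mpi_test.c): mpirun starts one process per machine-file entry, and each process then spawns an OpenMP thread team.

/* Minimal hybrid hello: every OpenMP thread of every MPI rank
   reports where it runs. Illustrative sketch, not mpi_test.c. */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv) {
  int rank, nproc, len;
  char host[MPI_MAX_PROCESSOR_NAME];

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nproc);
  MPI_Get_processor_name(host, &len);

  /* One OpenMP team per MPI process; each thread prints a line. */
  #pragma omp parallel
  {
    printf("host %s, MPI rank %d of %d, OpenMP thread %d of %d\n",
           host, rank, nproc,
           omp_get_thread_num(), omp_get_num_threads());
  }

  MPI_Finalize();
  return 0;
}

Compiled and launched with the commands above (4 ranks, OMP_NUM_THREADS=8), it should print 32 lines, 8 per node.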

Hybrid Pi (MPI+OpenMP)

#include <stdio.h>
#include <mpi.h>
#include <omp.h>
#define NBIN 100000
#define MAX_THREADS 8

int main(int argc, char **argv) {
  int nbin, myid, nproc, nthreads, tid;
  double step, sum[MAX_THREADS] = {0.0}, pi = 0.0, pig;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);
  MPI_Comm_size(MPI_COMM_WORLD, &nproc);
  nbin = NBIN / nproc;          /* bins handled by each MPI rank */
  step = 1.0 / (nbin * nproc);  /* bin width */

  /* Each thread accumulates into its own slot of sum[], taking
     this rank's bins in a cyclic distribution over the team. */
  #pragma omp parallel private(tid)
  {
    int i;
    double x;
    nthreads = omp_get_num_threads();
    tid = omp_get_thread_num();
    for (i = nbin*myid + tid; i < nbin*(myid+1); i += nthreads) {
      x = (i + 0.5) * step;
      sum[tid] += 4.0 / (1.0 + x*x);
    }
    printf("rank tid sum = %d %d %e\n", myid, tid, sum[tid]);
  }  /* end of parallel region */
  for (tid = 0; tid < nthreads; tid++) pi += sum[tid] * step;
  MPI_Allreduce(&pi, &pig, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
  if (myid == 0) printf("PI = %f\n", pig);
  MPI_Finalize();
  return 0;
}
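The per-thread sum[] array above works, but adjacent slots share cache lines, so the threads can suffer false sharing, and the team size is capped at MAX_THREADS. A common variant, sketched below under the same NBIN/step setup (names such as local_pi are introduced here for illustration), lets an OpenMP reduction clause accumulate the thread contributions and MPI_Reduce combine the ranks:

/* Sketch: the same midpoint-rule Pi integration with an OpenMP
   reduction instead of a per-thread sum[] array. */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>
#define NBIN 100000

int main(int argc, char **argv) {
  int myid, nproc, i, nbin;
  double step, x, local_pi = 0.0, pi = 0.0;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);
  MPI_Comm_size(MPI_COMM_WORLD, &nproc);
  nbin = NBIN / nproc;          /* bins handled by this rank */
  step = 1.0 / (nbin * nproc);  /* bin width */

  /* OpenMP splits this rank's bin range across the team and
     combines the per-thread partial sums safely. */
  #pragma omp parallel for private(x) reduction(+:local_pi)
  for (i = nbin * myid; i < nbin * (myid + 1); i++) {
    x = (i + 0.5) * step;
    local_pi += 4.0 / (1.0 + x * x);
  }
  local_pi *= step;

  /* Sum the per-rank contributions on rank 0 and print. */
  MPI_Reduce(&local_pi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
  if (myid == 0) printf("PI = %f\n", pi);
  MPI_Finalize();
  return 0;
}

It builds and runs with the same mpicc -fopenmp and mpirun commands as hybridpi.c.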

Hybrid MPI+OpenMP, continued
Lecturer's note: for the demo, see the hybrid.bash script.