Lab Course CFD: Parallelisation. Dr. Miriam Mehl

Architecture: MIMD (multiple instruction, multiple data), distributed address space

Architecture (figure not transcribed)

Message Passing Interface: runs on heterogeneous computers; one program for all processes (SPMD); control rests with a master process
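
The following minimal sketch (not from the original slides) illustrates this model: every process executes the same program, and the rank returned by MPI_Comm_rank gives the master process (conventionally rank 0) its special control role.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char* argv[]) {
    int myrank, nproc;
    MPI_Init( &argc, &argv );                  /* start the MPI session */
    MPI_Comm_size( MPI_COMM_WORLD, &nproc );   /* total number of processes */
    MPI_Comm_rank( MPI_COMM_WORLD, &myrank );  /* rank of this process */
    if ( 0 == myrank )
        printf("Master: controlling %d processes\n", nproc);
    else
        printf("Worker %d of %d\n", myrank, nproc);
    MPI_Finalize();
    return 0;
}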

Example: communicate.c

#include <stdlib.h>
#include <stdio.h>
#include <mpi.h>

const double PI = 3.14; /* the value to be broadcast */

int main(int argc, char* argv[]) {
    int i, myrank, nproc;
    int NeighborID_li, NeighborID_re;
    int recv_NeighborID_li, recv_NeighborID_re;
    double pi;
    MPI_Status status;

Example: communicate.c

    /* initialisation */
    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &nproc );
    MPI_Comm_rank( MPI_COMM_WORLD, &myrank );

    /* determine the neighbors */
    if ( 0 < myrank )
        NeighborID_li = myrank - 1;
    else
        NeighborID_li = MPI_PROC_NULL; /* no neighbor */
    if ( (nproc - 1) > myrank )
        NeighborID_re = myrank + 1;
    else /* missing in the original listing */
        NeighborID_re = MPI_PROC_NULL; /* no neighbor */
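
Using MPI_PROC_NULL as a neighbor ID keeps the communication code below free of special cases at the domain boundaries: a send to or receive from MPI_PROC_NULL returns immediately without transferring any data.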

Example: communicate.c

    /* broadcast of pi from process 0 to all */
    if ( 0 == myrank )
        pi = PI;
    else
        pi = 0.0;
    printf("Proc %2d : Before the broadcast pi = %e\n", myrank, pi);
    MPI_Bcast( &pi, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD );
    printf("Proc %2d : After the broadcast pi = %e\n", myrank, pi);
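
Note that MPI_Bcast is a collective operation: every process in the communicator, the root (here process 0) included, must call it with matching arguments; after the call all processes hold the root's value of pi. No matching MPI_Recv is needed on the receiving side.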

Example: communicate.c

    /* send the ID to left/right neighbors (conventional) */
    MPI_Send( &myrank, 1, MPI_INT, NeighborID_li, 1, MPI_COMM_WORLD );
    MPI_Recv( &recv_NeighborID_re, 1, MPI_INT, NeighborID_re, 1, MPI_COMM_WORLD, &status );
    if ( MPI_PROC_NULL != NeighborID_re )
        printf("Proc %2d : ID right neighbor = %2d\n", myrank, recv_NeighborID_re);
    else
        printf("Proc %2d : ID right neighbor = MPI_PROC_NULL\n", myrank);
    MPI_Send( &myrank, 1, MPI_INT, NeighborID_re, 2, MPI_COMM_WORLD );
    MPI_Recv( &recv_NeighborID_li, 1, MPI_INT, NeighborID_li, 2, MPI_COMM_WORLD, &status );
    /* the original listing is truncated here; the print-out for the left
       neighbor presumably mirrors the one for the right neighbor above */
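
A caveat not on the original slide: a blocking MPI_Send may, depending on message size and implementation buffering, wait for the matching receive to be posted. Here the MPI_PROC_NULL ends break the chain, but in a periodic (ring) setup the same pattern could deadlock. A common remedy is the combined call MPI_Sendrecv; a sketch using the same variables:

    /* send to the left, receive from the right, in one deadlock-free call */
    MPI_Sendrecv( &myrank, 1, MPI_INT, NeighborID_li, 1,
                  &recv_NeighborID_re, 1, MPI_INT, NeighborID_re, 1,
                  MPI_COMM_WORLD, &status );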

Example: communicate.c

    /* epilog */
    fflush(stdout); /* write out all buffered output */
    fflush(stderr);
    MPI_Barrier( MPI_COMM_WORLD ); /* synchronize all processes */
    MPI_Finalize(); /* end the MPI session */
    return 0;
}

Compiler and Program Call: on the local machine or spread over different machines (see the example below)
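
For example, with a typical MPICH or Open MPI installation (exact commands and hostfile formats vary between implementations; the file name "hosts" below is just an illustration):

    mpicc -o communicate communicate.c            # compile with the MPI wrapper compiler
    mpirun -np 4 ./communicate                    # 4 processes on the local machine
    mpirun -np 4 --hostfile hosts ./communicate   # processes spread over the machines listed in "hosts" (Open MPI syntax)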