Lecture 4: Distributed-memory Computing with PVM/MPI


1 Lecture 4: Distributed-memory Computing with PVM/MPI

2 Cluster Computing
What is cluster computing?
- Use a number of workstations or PCs to construct a virtual parallel computer.
- Compared with dedicated parallel computers, cluster processing is cheap.
- Performance is lower than that of dedicated parallel computers, but it can be improved considerably by using a high-performance network.

3 Global/Cloud Computing
What is global (grid) computing?
Use all the resources on the Internet to construct a very large distributed parallel virtual computer.
(Figure: a user reaches supercomputers, PC clusters, and high-performance PCs through the intranet and the Internet.)

4 Implementation of cluster processing
Standard software for cluster processing:
- PVM (Parallel Virtual Machine)
- MPI (Message Passing Interface)
History of PVM:
1989: PVM 1.0 released by Oak Ridge National Laboratory.
1991: PVM 2.0 released by the University of Tennessee.
1993: PVM 3.0 released; the free manual, PVM: A Users' Guide and Tutorial for Networked Parallel Computing, was published.
2001: PVM release supporting C and Fortran.

5 Implementation of cluster processing
History of MPI:
The MPI-1 specification was defined by experts from about 40 organizations (1994).
The MPI-2 specification followed (1997).
Implementations based on MPI-1.1 were released and are used as the default environment in many distributed parallel processing systems.

6 Features common to PVM and MPI
- Communication is based on one-to-one message passing.
- They are designed not only for workstations and PCs but also for parallel computers.
- They support many operating systems and languages (OS: UNIX, Windows 95/NT; languages: C, Fortran, Java).
- Free.
- Standard software in distributed and parallel computing.
(Figure: processor 3 executes send(5, message); processor 5 executes receive(3).)

7 Construction of PVM
PVM software components:
- Daemon pvmd, used for communication between processors.
- Console pvm, used for constructing the virtual parallel computer (xpvm is the console with a GUI).
- A library of functions such as pvm_send and pvm_recv for sending and receiving messages.
On any computer, one can use pvm or xpvm to construct a virtual parallel computer and then build and execute programs.
(Figure: a virtual parallel computer constructed by connecting four computers, optima, opty1, opty2, and opty3, each running pvmd.)

8 Start of PVM (1)
On any computer, use the command pvm or xpvm to start pvmd.
Example: on optima, run pvm or xpvm to start pvmd.

9 Start of PVM (2)
At the pvm console, add computers to construct the virtual parallel computer. (At this time, pvmd is started on those computers.)
Example: on optima, add opty1, opty2, and opty3 to the virtual parallel computer; pvmd is started on each of them.
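
As an alternative to adding hosts one by one at the console, pvm (or pvmd) can also be started with a hostfile that lists the machines of the virtual parallel computer, one per line. A minimal sketch reusing the host names from these slides (per-host options such as executable paths are omitted; see the local PVM documentation):

optima
opty1
opty2
opty3

Starting the console with, for example, pvm hostfile then adds all listed hosts at once.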

10 Execution of PVM programs (1)
When a program that contains PVM communication functions is executed, the programs it specifies are started by pvmd.
Example: when program prog1, which contains communication functions, is executed on optima, the corresponding processes (in this case prog1) are started on all processors by pvmd.

11 Execution of PVM programs (2)
Once execution of a program has started, the communication and processing inside the program proceed automatically until the end.
Processing is carried out together with communication between the processors.

12 Using PVM
Using the command pvm:

chen[11]% pvm
pvm> add opty1 opty2 opty3       (add opty1, opty2, opty3 to the virtual computer)
3 successful
                    HOST     DTID
                   opty1      ...
                   opty2    c0000
                   opty3      ...
pvm> conf                        (show the configuration of the virtual parallel computer)
4 hosts, 1 data format
                    HOST     DTID       ARCH   SPEED     DSIG
                  optima      ...    X86SOL2     ...    0x...
                   opty1      ...    X86SOL2     ...    0x...
                   opty2    c0000    X86SOL2     ...    0x...
                   opty3      ...    X86SOL2     ...    0x...
pvm> spawn edm4                  (execute program edm4 on the virtual computer)
1 successful
t80001
pvm> halt                        (halt the virtual computer)
Terminated
chen[12]%

13 XPVM Screen Shot

14 Using XPVM Using xpvm (X application)

15 PVM programming – Example 1 (Greeting) 1. Example (hello.c, hello_other.c)

16 PVM Programming – Example 1 (Greeting)
(Figure: the process hello is started and spawns hello_other on another computer with pvm_spawn; each started process has its own task ID, e.g. t4000a and t80004. A greeting ("Hello, Prof. t4000a. How are you?") and a reply ("Fine, thank you, Dr. t80004") are exchanged with pvm_send and pvm_recv, and the processes finish with pvm_exit.)
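
A minimal sketch of a hello.c / hello_other.c pair that follows the flow just described (the buffer sizes, the message tag 1, and the exact greeting strings are assumptions, not the code from the original slides):

/* hello.c -- the parent side of the greeting example (a sketch) */
#include <stdio.h>
#include "pvm3.h"

int main(void)
{
    int tid;                                   /* task id of the spawned process */
    char buf[100];

    if (pvm_spawn("hello_other", (char **)0, 0, "", 1, &tid) != 1) {
        printf("can't start hello_other\n");
        pvm_exit();
        return 1;
    }
    sprintf(buf, "Hello t%x. How are you?", tid);
    pvm_initsend(PvmDataDefault);              /* prepare the send buffer */
    pvm_pkstr(buf);                            /* pack the greeting string */
    pvm_send(tid, 1);                          /* send it to hello_other with tag 1 */

    pvm_recv(tid, 1);                          /* wait for the reply */
    pvm_upkstr(buf);                           /* unpack the reply string */
    printf("%s\n", buf);                       /* print the reply */

    pvm_exit();                                /* leave PVM */
    return 0;
}

/* hello_other.c -- the spawned side of the greeting example (a sketch) */
#include <stdio.h>
#include "pvm3.h"

int main(void)
{
    int ptid;                                  /* task id of the parent */
    char buf[100], reply[100];

    ptid = pvm_parent();                       /* who spawned me? */
    pvm_recv(ptid, 1);                         /* receive the greeting (tag 1) */
    pvm_upkstr(buf);
    printf("%s\n", buf);

    sprintf(reply, "Fine, thank you, t%x", ptid);
    pvm_initsend(PvmDataDefault);
    pvm_pkstr(reply);
    pvm_send(ptid, 1);                         /* send the reply back to the parent */

    pvm_exit();
    return 0;
}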

17 PVM programming – Example 2 (Calculate π)
Equation of the unit circle: x^2 + y^2 = 1, so y = sqrt(1 - x^2) for 0 ≤ x ≤ 1.
Area of the unit circle: π = 4 ∫_0^1 sqrt(1 - x^2) dx (the integral computed by the PVM programs below), or equivalently π = ∫_0^1 4/(1 + x^2) dx (used in the MPI example).
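
The slave program below approximates this integral with the midpoint rule: its subinterval [s, t] is divided into n strips of width h = (t - s)/n, and it accumulates

    sum = Σ_{i=0}^{n-1} 4·sqrt(1 - x_i^2)·h,   where x_i = s + (i + 0.5)·h.

The master then adds the partial sums from its two slaves to obtain the approximation of π.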

18 PVM programming – Example 2 (Calculate π): Master program (pi_dual.c)

/* Calculating PI by integration (master program) */
#include <stdio.h>      /* Standard I/O functions */
#include <stdlib.h>     /* exit() */
#include "pvm3.h"       /* PVM functions */

int main(int argc, char **argv)
{
    int intervals;            /* Total number of intervals */
    int ctid[2];              /* Task ids of the subtasks */
    int n;                    /* Number of intervals for one subtask */
    double s, t;              /* Endpoints of a subtask's subinterval */
    double result1, result2;  /* Results of the subtasks */

    if (argc <= 1) {                      /* Finish if no parameter is given */
        printf("Usage: pi_dual intervals\n");
        pvm_exit();
        exit(0);
    }
    sscanf(argv[1], "%d", &intervals);    /* Get the first parameter into intervals */

    if (pvm_spawn("pi_slave", (char **)0, 0, "", 2, ctid) != 2) {
        /* Start two subtasks executing the slave program pi_slave and
           save their task ids in the array ctid */
        printf("can't start pi_slave\n"); /* Failed to start the subtasks */
        pvm_exit();
        exit(0);
    }

    n = intervals / 2;                    /* Number of intervals for one subtask */

    s = 0; t = 0.5;                       /* First subtask handles the interval [0, 0.5] */
    pvm_initsend(0);                      /* Prepare for data sending */
    pvm_pkint(&n, 1, 1);                  /* Put the number of intervals into the send buffer */
    pvm_pkdouble(&s, 1, 1);               /* Put the value of s into the send buffer */
    pvm_pkdouble(&t, 1, 1);               /* Put the value of t into the send buffer */
    pvm_send(ctid[0], 1);                 /* Send the buffer to the first subtask */

    s = 0.5; t = 1;                       /* Second subtask handles the interval [0.5, 1] */
    pvm_initsend(0);                      /* Prepare for data sending */
    pvm_pkint(&n, 1, 1);                  /* Put the number of intervals into the send buffer */
    pvm_pkdouble(&s, 1, 1);               /* Put the value of s into the send buffer */
    pvm_pkdouble(&t, 1, 1);               /* Put the value of t into the send buffer */
    pvm_send(ctid[1], 1);                 /* Send the buffer to the second subtask */

    pvm_recv(-1, 1);                      /* Wait for data from either subtask */
    pvm_upkdouble(&result1, 1, 1);        /* Unpack the first received result into result1 */
    pvm_recv(-1, 1);                      /* Wait for data from the remaining subtask */
    pvm_upkdouble(&result2, 1, 1);        /* Unpack the second received result into result2 */

    printf("Intervals=%d, PI=%1.15f\n", intervals, result1 + result2);  /* Show the result */
    pvm_exit();                           /* Leave PVM */
    exit(0);
}

19 PVM programming – Example 2 (Calculate π): Slave program (pi_slave.c)

/* Calculating PI by integration (slave program) */
#include <stdlib.h>     /* exit() */
#include <math.h>       /* Numerical functions (sqrt, etc.) */
#include "pvm3.h"       /* PVM functions */

int main(void)
{
    int ptid;        /* Master task id */
    int n;           /* Number of intervals */
    double s, t;     /* Endpoints of the integration subinterval */
    int i;           /* Loop variable */
    double h;        /* Width of one small interval */
    double sum = 0;  /* Accumulated area; initial value 0 */
    double x;        /* Current x coordinate */

    ptid = pvm_parent();        /* Put the master task id into ptid */
    pvm_recv(ptid, 1);          /* Wait for data from the master task */
    pvm_upkint(&n, 1, 1);       /* Number of intervals assigned to this task */
    pvm_upkdouble(&s, 1, 1);    /* Left endpoint of the subinterval */
    pvm_upkdouble(&t, 1, 1);    /* Right endpoint of the subinterval */

    h = (t - s) / n;            /* Width of each small interval */
    for (i = 0; i < n; ++i) {   /* Repeat the loop n times */
        x = (i + 0.5) * h + s;            /* Midpoint of the small interval */
        sum += (4 * sqrt(1 - x * x)) * h; /* Area of one small strip, added to sum */
    }

    pvm_initsend(0);            /* Prepare for data sending */
    pvm_pkdouble(&sum, 1, 1);   /* Put sum into the send buffer */
    pvm_send(ptid, 1);          /* Send the buffer to the master process */
    pvm_exit();                 /* Leave PVM */
    exit(0);
}
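
Assuming the compiled executables pi_dual and pi_slave are installed where pvmd can find them (typically $HOME/pvm3/bin/$PVM_ARCH, though this depends on the local installation), the master can be started from the PVM console, for example:

pvm> spawn -> pi_dual 1000000

Here -> redirects the task's output to the console and 1000000 is the number of intervals passed as the command-line argument of pi_dual.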

20 MPI programming – Example 1 (Greeting): master and slave
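
A minimal sketch of an MPI greeting program written in the usual single-program style, where every process with rank greater than 0 plays the slave role and sends a message to rank 0, the master, which prints the messages (the strings and buffer size are illustrative, not the code from the original slides):

/* greetings.c -- every non-zero rank greets rank 0 */
#include <stdio.h>
#include <string.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int myid, numprocs, src;
    char msg[100];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    if (myid != 0) {                       /* "slave" processes */
        sprintf(msg, "Greetings from process %d!", myid);
        MPI_Send(msg, strlen(msg) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    } else {                               /* "master" process (rank 0) */
        for (src = 1; src < numprocs; src++) {
            MPI_Recv(msg, 100, MPI_CHAR, src, 0, MPI_COMM_WORLD, &status);
            printf("%s\n", msg);
        }
    }

    MPI_Finalize();
    return 0;
}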

21 MPI programming – Example 2 (Calculate π)

#include "mpi.h"
#include <stdio.h>
#include <math.h>

int main(int argc, char *argv[])
{
    int n, myid, numprocs, i;
    double PI25DT = 3.141592653589793238462643;
    double mypi, pi, h, sum, x;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    while (1) {
        if (myid == 0) {     /* only one task reads the number of intervals */
            printf("Enter the number of intervals: (0 quits) ");
            scanf("%d", &n);
        }
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
        if (n == 0)
            break;
        else {
            h = 1.0 / (double) n;
            sum = 0.0;
            for (i = myid + 1; i <= n; i += numprocs) {
                x = h * ((double) i - 0.5);
                sum += (4.0 / (1.0 + x * x));
            }
            mypi = h * sum;
            /* reduction function for parallel summation */
            MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
            if (myid == 0)
                printf("pi is approximately %.16f, Error is %.16f\n",
                       pi, fabs(pi - PI25DT));
        }
    }
    MPI_Finalize();
    return 0;
}
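
With an MPICH-style installation the program can be compiled and run roughly as follows (the file name cpi.c is assumed here, and the exact commands differ between MPI implementations; mpiexec is often used instead of mpirun):

mpicc cpi.c -o cpi          (compile with the MPI compiler wrapper)
mpirun -np 4 ./cpi          (run with 4 processes)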

22 MPI programming – Reduction
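
As a small illustration of what a reduction does: every process contributes a local value, MPI combines the values with the given operation (here MPI_SUM), and the root process receives the result. A minimal sketch (the choice of rank + 1 as the local value is only for illustration):

/* reduce.c -- each process contributes rank + 1; rank 0 receives the total */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int myid, numprocs, mine, total;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    mine = myid + 1;                      /* this process's local value */
    MPI_Reduce(&mine, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (myid == 0)                        /* only the root holds the result */
        printf("sum of 1..%d = %d\n", numprocs, total);

    MPI_Finalize();
    return 0;
}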

23 PVM? MPI?
- PVM is easy to use, especially on a network of workstations; its message-passing API is relatively simple.
- MPI is a standard, has a steeper learning curve, and does not define a standard way to start tasks (MPICH does provide an "mpirun" command).
- If you are building a new scalable production code, use MPI (it is widely supported now).
- If you are experimenting with message passing or are interested in dynamic process management, use PVM.
- MPI has a very rich messaging interface and is designed for efficiency.
- PVM has a simple messaging interface plus process control, interoperability, and dynamic task management.
- The two perform comparably on Ethernet; MPI outperforms PVM on MPPs.
- Both are still popular, but MPI is an accepted community standard with broad support.

24 Exercise
(1) Read the PVM programs pi_dual.c and pi_slave.c, which are used for calculating π. Compile pi_dual.c and pi_slave.c. Learn to use xpvm.
(2) Revise pi_dual.c into pi_single.c so that it uses only one pi_slave task. Find N (the number of divided intervals) such that the running time of pi_single is 30 sec, 60 sec, 90 sec, and 120 sec, respectively.
(3) Compare the running time of pi_dual and pi_single using the different values of N obtained in (2). Notice that the ideal speed-up rate is 2.
(4) Revise pi_dual.c into pi_multi.c so that it can use any number of slave tasks. For example, pvm> spawn -> pi_multi … starts 4 slave tasks that compute over the intervals [0,0.25], [0.25,0.5], [0.5,0.75], [0.75,1], respectively, where each interval is divided into smaller intervals. (A possible outline of the master is sketched below.)
(5) For each value of N obtained in (2), measure the running time of pi_multi when the number of slave tasks is 1, 2, 4, 8, and 16, respectively; find the speed-up rate (notice that the ideal rate equals the number of slave tasks), investigate the change in the speed-up rates, and discuss the reasons.
(6) Consider how to balance the work load among the host computers. (Hint: depending on the work load of each computer, the time needed for running pi_slave differs; assign more slave tasks to the computers which have a lower work load.) Design a PVM program to realize your idea, and discuss how much the running time and the speed-up rate are improved.
(7) Change the above PVM programs into MPI programs.
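
For exercise (4), one possible outline of the pi_multi master is sketched here; it reuses pi_slave.c unchanged and follows the same packing and sending pattern as pi_dual.c. The command-line convention (passing the number of slave tasks as the second argument) and the MAXTASKS limit are assumptions, not part of the original assignment.

/* pi_multi.c -- master that divides [0,1] among an arbitrary number of slave tasks (a sketch) */
#include <stdio.h>
#include <stdlib.h>
#include "pvm3.h"

#define MAXTASKS 32                         /* assumed upper bound on slave tasks */

int main(int argc, char **argv)
{
    int intervals, ntasks, i, n;
    int ctid[MAXTASKS];                     /* task ids of the slaves */
    double s, t, part, result, pi = 0;

    if (argc <= 2) {
        printf("Usage: pi_multi intervals ntasks\n");
        pvm_exit();
        exit(0);
    }
    intervals = atoi(argv[1]);
    ntasks = atoi(argv[2]);
    if (ntasks < 1 || ntasks > MAXTASKS) {
        printf("ntasks must be between 1 and %d\n", MAXTASKS);
        pvm_exit();
        exit(0);
    }

    if (pvm_spawn("pi_slave", (char **)0, 0, "", ntasks, ctid) != ntasks) {
        printf("can't start %d pi_slave tasks\n", ntasks);
        pvm_exit();
        exit(0);
    }

    n = intervals / ntasks;                 /* intervals handled by each slave */
    part = 1.0 / ntasks;                    /* width of each slave's subinterval */
    for (i = 0; i < ntasks; i++) {
        s = i * part;                       /* slave i integrates [s, t] */
        t = (i + 1) * part;
        pvm_initsend(0);
        pvm_pkint(&n, 1, 1);
        pvm_pkdouble(&s, 1, 1);
        pvm_pkdouble(&t, 1, 1);
        pvm_send(ctid[i], 1);
    }
    for (i = 0; i < ntasks; i++) {          /* collect and add the partial areas */
        pvm_recv(-1, 1);
        pvm_upkdouble(&result, 1, 1);
        pi += result;
    }
    printf("Intervals=%d, tasks=%d, PI=%1.15f\n", intervals, ntasks, pi);
    pvm_exit();
    exit(0);
}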