Timing in MPI Tarik Booker MPI Presentation May 7, 2003

What we will cover… How to time programs; sample programs (my samples); the MPI Pi program.

How to Time Programs Very easy: simply use the function MPI_Wtime().

MPI_Wtime() Takes no arguments (null input). Returns a long float (a C double). Not like the UNIX clock() function: the reference point of MPI_Wtime() is somewhat arbitrary, and where the clock starts depends on the node.
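A minimal sketch of calling MPI_Wtime(); MPI_Wtick(), which reports the timer's resolution in seconds, is included for reference even though it is not mentioned on the slide:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    double t = MPI_Wtime();     /* wall-clock time in seconds, from an arbitrary, per-node reference point */
    double res = MPI_Wtick();   /* resolution of the timer, in seconds */

    printf("MPI_Wtime() = %lf, resolution = %lf\n", t, res);

    MPI_Finalize();
    return 0;
}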

How to Time Code Must use time blocks: record a start time and an end time. The time for your code is simply end_time - start_time (see the sketch below and the Example slide that follows).
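A minimal sketch of the time-block pattern; the placeholder loop stands in for whatever code is being measured, and all variable names are illustrative:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int i;
    double dummy = 0.0;

    MPI_Init(&argc, &argv);

    double start_time = MPI_Wtime();    /* start of the timed block */

    /* ... the code being measured; here just a placeholder loop ... */
    for (i = 0; i < 1000000; i++)
        dummy += i * 0.5;

    double end_time = MPI_Wtime();      /* end of the timed block */

    printf("Elapsed: %lf seconds (dummy = %lf)\n", end_time - start_time, dummy);

    MPI_Finalize();
    return 0;
}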

Example

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int size, node;
    double start, end;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &node);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    start = MPI_Wtime();

    if (node == 0)
    {
        printf("Hello From Master. Time = %lf \n", MPI_Wtime() - start);
        //Count number of ticks
    }
    else
    {
        printf("Hello From Slave #%d %lf \n", node, (MPI_Wtime() - start));
    }

    MPI_Finalize();
    return 0;
}

Example (2)

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int size, node;
    int serial_counter = 0;
    double start, end;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &node);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Barrier(MPI_COMM_WORLD);   /* synchronize all processes before reading the clock */
    start = MPI_Wtime();
    MPI_Barrier(MPI_COMM_WORLD);

    if (node == 0)
    {
        printf("Hello From Node #%d %lf \n", node, (MPI_Wtime() - start));
    }
    else
    {
        printf("Hello From Slave #%d %lf \n", node, (MPI_Wtime() - start));
    }

    MPI_Finalize();
    return 0;
}
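A common extension of this barrier-based pattern (not on the original slides) is to reduce each rank's elapsed time to a single maximum, so the slowest process defines the reported time. A minimal sketch, assuming rank 0 does the reporting:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int node;
    double start, local_elapsed, max_elapsed;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &node);

    MPI_Barrier(MPI_COMM_WORLD);    /* line everyone up before starting the clock */
    start = MPI_Wtime();

    /* ... work being timed would go here ... */

    local_elapsed = MPI_Wtime() - start;

    /* The slowest rank determines the overall time */
    MPI_Reduce(&local_elapsed, &max_elapsed, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);

    if (node == 0)
        printf("Maximum elapsed time: %lf seconds\n", max_elapsed);

    MPI_Finalize();
    return 0;
}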

Pi Example Uses MPI to compute the value of Pi using the formula below.
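The formula appears as an image on the original slide; judging from the code on the following slides, it is presumably the standard midpoint-rule approximation used by the classic cpi example (this reconstruction is an assumption):

\[
\pi = \int_0^1 \frac{4}{1+x^2}\,dx \;\approx\; h \sum_{i=1}^{n} \frac{4}{1 + \bigl(h\,(i - 0.5)\bigr)^2}, \qquad h = \frac{1}{n}
\]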

Pi Example (2)

start_time = MPI_Wtime();
MPI_Bcast(&n, 1, MPI_INT, host_rank, MPI_COMM_WORLD);
end_time = MPI_Wtime();
communication_time = end_time - start_time;

Pi Example (3)

start_time = MPI_Wtime();
h = 1.0 / (double) n;
sum = 0.0;
for (i = my_rank + 1; i <= n; i += pool_size)
{
    x = h * ((double) i - 0.5);
    sum += f(x);
}
mypi = h * sum;
end_time = MPI_Wtime();
computation_time = end_time - start_time;

Pi Example (4)

start_time = MPI_Wtime();
MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, host_rank, MPI_COMM_WORLD);
end_time = MPI_Wtime();
communication_time = communication_time + end_time - start_time;
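For reference, the fragments above can be assembled into a complete program along the following lines. This is a sketch rather than the original source: the variable names (my_rank, pool_size, host_rank), the definition of f, and the default value of n are assumptions.

#include <stdio.h>
#include <mpi.h>

/* Integrand assumed to be 4/(1+x^2), whose integral over [0,1] is pi */
double f(double x)
{
    return 4.0 / (1.0 + x * x);
}

int main(int argc, char **argv)
{
    int my_rank, pool_size, host_rank = 0;
    int i, n = 1000000;                 /* number of intervals (assumed default) */
    double h, x, sum, mypi, pi;
    double start_time, end_time;
    double communication_time = 0.0, computation_time = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &pool_size);

    /* Time the broadcast of n */
    start_time = MPI_Wtime();
    MPI_Bcast(&n, 1, MPI_INT, host_rank, MPI_COMM_WORLD);
    end_time = MPI_Wtime();
    communication_time = end_time - start_time;

    /* Time the local computation */
    start_time = MPI_Wtime();
    h = 1.0 / (double) n;
    sum = 0.0;
    for (i = my_rank + 1; i <= n; i += pool_size)
    {
        x = h * ((double) i - 0.5);
        sum += f(x);
    }
    mypi = h * sum;
    end_time = MPI_Wtime();
    computation_time = end_time - start_time;

    /* Time the reduction */
    start_time = MPI_Wtime();
    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, host_rank, MPI_COMM_WORLD);
    end_time = MPI_Wtime();
    communication_time = communication_time + end_time - start_time;

    if (my_rank == host_rank)
    {
        printf("pi is approximately %.16f\n", pi);
        printf("communication time = %lf s, computation time = %lf s\n",
               communication_time, computation_time);
    }

    MPI_Finalize();
    return 0;
}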