Quiz Questions ITCS 4145/5145 Parallel Programming MPI


ITCS 4/5145 Parallel Programming, UNC-Charlotte, B. Wilkinson, 2012, QuizQuestions2a.ppt, June 15, 2012

What is the name of the default MPI communicator?
a) DEF_MPI_COMM_WORLD
b) It has no name.
c) DEFAULT_COMMUNICATOR
d) COMM_WORLD
e) MPI_COMM_WORLD
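For context, a minimal MPI program skeleton in C (an illustrative sketch, not taken from the slides) showing the initialization and shutdown calls that must surround any use of a communicator:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);      /* start up MPI */
    printf("Hello, world\n");    /* every started process executes this */
    MPI_Finalize();              /* shut down MPI */
    return 0;
}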

What does the MPI routine MPI_Comm_rank() do?
a) It compares the supplied process ID with that of the process and returns TRUE or FALSE.
b) It returns an integer that is the number of processes in the specified communicator. The number is returned as an argument.
c) It converts the Linux process ID to a unique integer from zero onwards.
d) It returns an integer that is the rank of the process in the specified communicator. The integer is returned as an argument.
e) It returns the priority number of the process from highest (0) downwards.
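A short sketch (illustrative, not from the slides) showing MPI_Comm_rank() alongside MPI_Comm_size(); both return their result through an output argument:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank of this process in the communicator */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes in the communicator */
    printf("I am rank %d of %d processes\n", rank, size);
    MPI_Finalize();
    return 0;
}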

Name one MPI routine that does not have a named communicator as a parameter (argument).
a) MPI_Send()
b) MPI_Bcast()
c) MPI_Init()
d) MPI_Barrier()
e) None - they all have a named communicator as a parameter.
f) None of the other answers.
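For comparison, a minimal sketch (illustrative only) contrasting calls that name a communicator with ones that do not:

#include "mpi.h"

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);          /* takes pointers to argc/argv, no communicator */
    MPI_Barrier(MPI_COMM_WORLD);     /* most other routines name a communicator */
    MPI_Finalize();                  /* also takes no communicator */
    return 0;
}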

What is the purpose of a message tag in MPI?
a) To provide a mechanism to differentiate between message-passing routines written by different programmers
b) To count the number of characters in a message
c) To indicate the type of message
d) To provide a matching mechanism differentiating between messages sent from one process to another process
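An illustrative send/receive pair (not from the slides; the tag value 10 and the integer payload are arbitrary, and the program assumes at least two processes) showing that the tag given on the send must match the tag expected by the receive:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    int rank, value = 42, received;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        MPI_Send(&value, 1, MPI_INT, 1, 10, MPI_COMM_WORLD);       /* tag 10 */
    } else if (rank == 1) {
        MPI_Recv(&received, 1, MPI_INT, 0, 10, MPI_COMM_WORLD,     /* matches only tag 10 */
                 MPI_STATUS_IGNORE);
        printf("Rank 1 got %d\n", received);
    }
    MPI_Finalize();
    return 0;
}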

When does the MPI routine MPI_Recv() return?
a) After the arrival of the message the routine is waiting for, but before the data has been collected.
b) Never
c) Immediately
d) After a time specified in the routine.
e) After the arrival of the message the routine is waiting for and after the data has been collected.
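A sketch (illustrative; assumes the program is run with at least two processes) in which the receiver blocks in MPI_Recv() until the message has arrived and its data has been copied into the buffer, and then inspects the MPI_Status:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    int rank, data[4] = {1, 2, 3, 4}, buf[4], count;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        MPI_Send(data, 4, MPI_INT, 1, 99, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Blocks here until a matching message has arrived and been copied into buf */
        MPI_Recv(buf, 4, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_INT, &count);   /* actual number of items received */
        printf("Received %d ints from rank %d, tag %d\n",
               count, status.MPI_SOURCE, status.MPI_TAG);
    }
    MPI_Finalize();
    return 0;
}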

What is meant by a blocking message-passing routine in MPI?
a) The routine returns when all the local actions are complete, but the message transfer may not have completed.
b) The routine returns immediately, but the message transfer may not have completed.
c) The routine returns when the message transfer has completed.
d) The routine blocks all actions on other processes until it has completed its actions.
e) None of the other answers.

What is meant by a non-blocking (or asynchronous) message-passing routine in MPI?
a) The routine returns when all the local actions are complete, but the message transfer may not have completed.
b) The routine returns immediately, but the message transfer may not have completed.
c) The routine returns when the message transfer has completed.
d) The routine blocks all actions on other processes until it has completed its actions.
e) None of the other answers.
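A sketch (illustrative; tag 0 and the values are arbitrary, and at least two processes are assumed) of the non-blocking routines MPI_Isend()/MPI_Irecv(), which return immediately and are completed later with MPI_Wait(), in contrast to the blocking case above:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    int rank, out = 7, in = 0;
    MPI_Request req;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        /* Returns immediately; the transfer may still be in progress */
        MPI_Isend(&out, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* ... computation could overlap the communication here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* now safe to reuse 'out' */
    } else if (rank == 1) {
        MPI_Irecv(&in, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* data is in 'in' only after the wait */
        printf("Rank 1 received %d\n", in);
    }
    MPI_Finalize();
    return 0;
}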

In the routine:
MPI_Send(message, 13, MPI_CHAR, x, 10, MPI_COMM_WORLD);
when can x be altered without affecting the message being transferred?
a) Never.
b) After the routine returns, i.e. in subsequent statements
c) Anytime
d) When the message has been received
e) None of the other answers

What does the routine MPI_Wtime() do?
a) Waits a specific time before returning, as given by an argument.
b) Returns the elapsed time from some point in the past, in seconds.
c) Returns the elapsed time from the beginning of the program execution, in seconds.
d) Returns the time of the process execution.
e) Returns the actual time of day.
f) None of the other answers.
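A typical timing sketch (illustrative, not from the slides) using MPI_Wtime(), which returns a double-precision number of seconds:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    double t1 = MPI_Wtime();        /* seconds since some point in the past */
    /* ... section of code being timed ... */
    double t2 = MPI_Wtime();
    printf("Elapsed time = %f seconds\n", t2 - t1);
    MPI_Finalize();
    return 0;
}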

Under what circumstance might an MPI_Send() operate as an MPI_Ssend()?
a) If the available message buffer space becomes exhausted.
b) If you specify more than a thousand bytes in the message.
c) If the tags do not match.
d) When the "synch" parameter is set in the parameter list of MPI_Send().
e) Never

What does the MPI routine MPI_Barrier() do?
a) Waits for all messages to be sent and received.
b) Will cause processes to wait for all processes within the specified communicator to call the routine. Then all processes send a message to the master process and continue.
c) Makes a process execute slower to allow debugging.
d) Waits for a specified amount of time.
e) Will cause each process, after calling MPI_Barrier(), to wait for all processes within the specified communicator to call the routine. Then all processes are released and allowed to continue.
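An illustrative use of MPI_Barrier() (not from the slides; the printed messages are arbitrary):

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("Rank %d before the barrier\n", rank);
    /* Each process waits here until every process in MPI_COMM_WORLD has
       called MPI_Barrier(); then all are released and continue. */
    MPI_Barrier(MPI_COMM_WORLD);
    printf("Rank %d after the barrier\n", rank);
    MPI_Finalize();
    return 0;
}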