Quiz Questions: ITCS 4145/5145 Parallel Programming, MPI
ITCS 4/5145 Parallel Programming, UNC-Charlotte, B. Wilkinson, 2013. QuizQuestions2a.ppt, Jan 21, 2013.

What is the name of the default MPI communicator?
a) DEF_MPI_COMM_WORLD
b) It has no name.
c) DEFAULT_COMMUNICATOR
d) COMM_WORLD
e) MPI_COMM_WORLD
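
For reference, a minimal sketch of a complete MPI program in C that uses the default communicator (the greeting text is arbitrary):

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char *argv[]) {
        int rank;
        MPI_Init(&argc, &argv);                 /* initialize MPI */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank within the default communicator */
        printf("Hello from process %d\n", rank);
        MPI_Finalize();
        return 0;
    }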

What does the MPI routine MPI_Comm_rank() do?
a) It compares the supplied process ID with that of the process and returns TRUE or FALSE.
b) It returns an integer that is the number of processes in the specified communicator. The number is returned as an argument.
c) It converts the Linux process ID to a unique integer from zero onwards.
d) It returns an integer that is the rank of the process in the specified communicator. The integer is returned as an argument.
e) It returns the priority number of the process from highest (0) downwards.
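
A short fragment (assumed to sit between MPI_Init() and MPI_Finalize()) contrasting MPI_Comm_rank() with MPI_Comm_size(); both return their result through an output argument:

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank, 0 .. nprocs-1 */
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);  /* number of processes in the communicator */
    if (rank == 0)
        printf("%d processes in MPI_COMM_WORLD\n", nprocs);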

Name one MPI routine that does not have a named communicator as a parameter (argument).
a) MPI_Send()
b) MPI_Bcast()
c) MPI_Init()
d) MPI_Barrier()
e) None - they all have a named communicator as a parameter.
f) None of the other answers.

What is the purpose of a message tag in MPI?
a) To provide a mechanism to differentiate between message-passing routines written by different programmers.
b) To count the number of characters in a message.
c) To indicate the type of message.
d) To provide a matching mechanism that differentiates between messages sent from one process to another process.
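
A sketch of how a tag is used to match a send with a receive; the ranks (0 and 1) and the tag value 99 are arbitrary illustrative choices:

    int x = 123;
    if (rank == 0)
        MPI_Send(&x, 1, MPI_INT, 1, 99, MPI_COMM_WORLD);    /* send with tag 99 */
    else if (rank == 1)
        MPI_Recv(&x, 1, MPI_INT, 0, 99, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                         /* matches only tag 99 (or MPI_ANY_TAG) */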

When does the MPI routine MPI_Recv() return?
a) After the arrival of the message the routine is waiting for, but before the data has been collected.
b) Never.
c) Immediately.
d) After a time specified in the routine.
e) After the arrival of the message the routine is waiting for and after the data has been collected.
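
For reference, a sketch of a typical MPI_Recv() call; the buffer size and the use of wildcards are illustrative:

    int buf[10];
    MPI_Status status;
    MPI_Recv(buf, 10, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &status);
    /* on return, buf holds the received data; status.MPI_SOURCE and
       status.MPI_TAG identify the actual sender and tag */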

What is meant by a blocking message-passing routine in MPI?
a) The routine returns when all the local actions are complete, but the message transfer may not have completed.
b) The routine returns immediately, but the message transfer may not have completed.
c) The routine returns when the message transfer has completed.
d) The routine blocks all actions on other processes until it has completed its actions.
e) None of the other answers.

What is meant by a non-blocking (or asynchronous) message-passing routine in MPI?
a) The routine returns when all the local actions are complete, but the message transfer may not have completed.
b) The routine returns immediately, but the message transfer may not have completed.
c) The routine returns when the message transfer has completed.
d) The routine blocks all actions on other processes until it has completed its actions.
e) None of the other answers.
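
A sketch of the non-blocking pattern, in which the request handle and MPI_Wait() determine when the operation has locally completed; variable names are illustrative:

    MPI_Request req;
    int data = 42;
    MPI_Isend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);  /* returns immediately */
    /* ... other computation can overlap with the transfer here ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);   /* after this, it is safe to reuse 'data' */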

In the routine MPI_Send(message, 13, MPI_CHAR, x, 10, MPI_COMM_WORLD); when can x be altered without affecting the message being transferred?
a) Never.
b) After the routine returns, i.e. in subsequent statements.
c) Anytime.
d) When the message has been received.
e) None of the other answers.
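
The call in the question written out as a sketch; here x is the destination rank and 10 is the message tag, and the message contents are arbitrary:

    char message[13] = "Hello world!";
    int x = 1;                                        /* destination rank */
    MPI_Send(message, 13, MPI_CHAR, x, 10, MPI_COMM_WORLD);
    /* MPI_Send() is blocking: when it returns, its arguments may be
       reused without affecting the message being transferred */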

What does the routine MPI_Wtime() do?
a) Waits a specific time before returning, as given by an argument.
b) Returns the elapsed time from some point in the past, in seconds.
c) Returns the elapsed time from the beginning of the program execution, in seconds.
d) Returns the execution time of the process.
e) Returns the actual time of day.
f) None of the other answers.
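
A typical timing sketch using MPI_Wtime(); do_work() is a hypothetical function being timed:

    double t1 = MPI_Wtime();          /* wall-clock time in seconds from some point in the past */
    do_work();                        /* hypothetical computation being timed */
    double t2 = MPI_Wtime();
    printf("Elapsed time = %f seconds\n", t2 - t1);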

Under what circumstance might an MPI_Send() operate as an MPI_Ssend()?
a) If the available message buffer space becomes exhausted.
b) If you specify more than a thousand bytes in the message.
c) If the tags do not match.
d) When the "synch" parameter is set in the parameter list of MPI_Send().
e) Never.
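
For comparison, the synchronous form of the send from the earlier question; MPI_Ssend() does not complete until the matching receive has started:

    MPI_Ssend(message, 13, MPI_CHAR, x, 10, MPI_COMM_WORLD);  /* same arguments as MPI_Send() */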

What does the MPI routine MPI_Barrier() do?
a) Waits for all messages to be sent and received.
b) Will cause processes to wait for all processes within the specified communicator to call the routine. Then all processes send a message to the master process and continue.
c) Makes a process execute more slowly to allow debugging.
d) Waits for a specified amount of time.
e) Will cause processes, after calling MPI_Barrier(), to wait for all processes within the specified communicator to call the routine. Then all processes are released and allowed to continue.
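
A sketch of barrier usage, assumed inside an initialized MPI program; do_local_work() is a hypothetical per-process function:

    do_local_work(rank);             /* each process does its own work */
    MPI_Barrier(MPI_COMM_WORLD);     /* no process passes this point until all have called it */
    /* every process continues from here only after all have reached the barrier */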