Published by Sybil Stephens. Modified over 9 years ago.
Slide 1: Introduction to Parallel Programming with C and MPI at MCSR, Part 2: Broadcast/Reduce
Slide 2: Collective Message Passing
- Broadcast: sends a message from one process to all processes in the group
- Scatter: distributes each element of a data array to a different process for computation
- Gather: the reverse of scatter; retrieves data elements into an array from multiple processes
Slide 3: Collective Message Passing w/MPI
- MPI_Bcast(): broadcasts from the root to all other processes
- MPI_Gather(): gathers values from a group of processes
- MPI_Scatter(): scatters a buffer in parts to a group of processes
- MPI_Alltoall(): sends data from all processes to all processes
- MPI_Reduce(): combines values from all processes into a single value
- MPI_Reduce_Scatter(): combines values and scatters the result among the processes
Slide 4: Log in to mimosa & get workshop files
A. Use secure shell to log in to mimosa with your assigned training account:
   ssh tracct1@mimosa.mcsr.olemiss.edu
   ssh tracct2@mimosa.mcsr.olemiss.edu
   See the lab instructor for the password.
B. Copy the workshop files into your home directory by running:
   /usr/local/apps/ppro/prepare_mpi_workshop
Slide 5: Examine, compile, and execute add_mpi.c
Slide 11: Examine add_mpi.pbs
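A PBS job script for this kind of run typically looks like the sketch below; the resource requests, job name, and process count are assumptions, not the contents of the workshop's add_mpi.pbs.

```shell
#!/bin/sh
#PBS -N add_mpi              # job name (assumed)
#PBS -l nodes=4              # assumed resource request; the workshop file may differ
#PBS -l walltime=00:05:00    # assumed time limit
#PBS -j oe                   # merge stdout and stderr into one output file

cd $PBS_O_WORKDIR            # run from the directory the job was submitted from
mpirun -np 4 ./add_mpi       # launch the MPI executable (process count assumed)
```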
Slide 12: Submit PBS Script: add_mpi.pbs (qsub add_mpi.pbs; check status with qstat)
Slide 13: Examine Output and Errors: add_mpi.c
Slide 14: Determine Speedup
Slide 15: Determine Parallel Efficiency
Slide 16: How Could Speedup/Efficiency Improve?
Slide 17: What Happens to the Results When MAXSIZE Is Not Evenly Divisible by n?
Slide 18: Exercise 1: Change the Code to Work When MAXSIZE Is Not Evenly Divisible by n
Slide 19: Exercise 2: Change the Code to Improve Speedup