Computations with MPI technology
Matthew Bickell, Thomas Rananga, Carmen Jacobs, John S. Nkuna, Malebo Tibane
Supervisors: Dr. Alexandr P. Sapozhnikov, Dr. Tatiana Sapozhnikova, Prof. Elena Zemlyanaya
Joint Institute for Nuclear Research

Outline
- What is MPI?
- Why MPI?
- Examples
- Results
- Discussions and Conclusions
- Recommendations

What is MPI?
- Message Passing Interface (1992)
- A tool for developing programs that use multiple, parallel processes
- MPI is a set of communication and auxiliary operations for programming in the Fortran and C languages
- The fundamental structures are processes and messages
- Processes communicate exclusively through messages

Why MPI?
- All modern computers have multiple processors
- Parallel computing obeys Amdahl's Law: A = 1 / (S + (1 - S)/P), where 0 ≤ S ≤ 1 is the fraction of operations that must be performed sequentially, P is the number of processes, and A is the acceleration (speedup)
- To compute as fast as possible
- Allows a more flexible division of work among processors
- Affords one the opportunity to develop one's own parallel programming paradigm
- Portable across different platforms and languages
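As a worked illustration of Amdahl's Law (the numbers are illustrative, not taken from the presentation): if S = 0.05 of the work is sequential and P = 10 processes are used, then

    A = 1 / (0.05 + 0.95/10) = 1 / 0.145 ≈ 6.9

so even a small sequential fraction caps the achievable acceleration well below P.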

Examples
Two processes (I and II) exchange the values a and b:

    program main
    include 'mpif.h'
    integer ierr, rank, size, a, b, stat(MPI_STATUS_SIZE)
    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierr)
    if (rank.eq.0) then
       a = 2
       call MPI_SEND(a, 1, MPI_INTEGER, 1, 5, MPI_COMM_WORLD, ierr)
       call MPI_RECV(b, 1, MPI_INTEGER, 1, 5, MPI_COMM_WORLD, stat, ierr)
    elseif (rank.eq.1) then
       b = 5
       call MPI_SEND(b, 1, MPI_INTEGER, 0, 5, MPI_COMM_WORLD, ierr)
       call MPI_RECV(a, 1, MPI_INTEGER, 0, 5, MPI_COMM_WORLD, stat, ierr)
    endif
    call MPI_FINALIZE(ierr)
    end
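A typical way to compile and run this example (assuming the source is saved as example.f90; the exact wrapper and launcher names depend on the MPI installation, e.g. Open MPI or MPICH):

    mpif90 example.f90 -o example
    mpirun -np 2 ./example

With two processes, both ranks finish holding a = 2 and b = 5.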

Examples: vector summation
[Diagram: a vector V is split among processes I, II and III; the partial sums are combined to give Total = 55]
- Split a vector of length L up amongst the N processes
- Each process sums its part of the vector
- Each process sends its result back to the master process
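Below is a minimal sketch of this scheme, not code from the presentation. It assumes the vector holds the integers 1 to L = 10 (so the expected total is 55), each process sums a strided slice, and the workers send their partial sums to the master:

    program vecsum
      include 'mpif.h'
      integer, parameter :: L = 10
      integer ierr, rank, nproc, i, src
      integer stat(MPI_STATUS_SIZE)
      real*8 v(L), partial, total

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nproc, ierr)

      ! every process builds the same vector; each sums only its own stride
      do i = 1, L
         v(i) = dble(i)
      end do
      partial = 0d0
      do i = rank + 1, L, nproc
         partial = partial + v(i)
      end do

      if (rank .ne. 0) then
         ! workers send their partial sums to the master process (rank 0)
         call MPI_SEND(partial, 1, MPI_DOUBLE_PRECISION, 0, 1, MPI_COMM_WORLD, ierr)
      else
         ! the master adds its own part and the parts received from the workers
         total = partial
         do src = 1, nproc - 1
            call MPI_RECV(partial, 1, MPI_DOUBLE_PRECISION, src, 1, MPI_COMM_WORLD, stat, ierr)
            total = total + partial
         end do
         print *, 'Total =', total
      end if

      call MPI_FINALIZE(ierr)
    end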

Examples: collective communication
- Each process has its own region of memory
- Transfer of data from one process to another can take time; we want to minimise these transfers
- Tree broadcasting
  - Parallel transfers
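Rather than having the root send to every process one by one, MPI's collective operations handle the distribution internally, and implementations commonly use a tree so that transfers proceed in parallel. A minimal sketch (not from the presentation) using MPI_BCAST:

    program bcast_demo
      include 'mpif.h'
      integer ierr, rank, n

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

      ! only the root (rank 0) holds the value initially
      n = -1
      if (rank .eq. 0) n = 42

      ! MPI_BCAST distributes n from rank 0 to all processes in the communicator;
      ! a single call replaces an explicit loop of sends from the root
      call MPI_BCAST(n, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)

      print *, 'rank', rank, 'has n =', n

      call MPI_FINALIZE(ierr)
    end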

Examples: matrix multiplication
[Diagram: the columns of the product matrix are divided among processes I, II and III]
- Broadcast the matrices to all processes
- Each process calculates a number of columns of the product matrix
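A minimal sketch of this column-wise scheme, not the presentation's code: both factor matrices are broadcast, each process computes a block of columns of the product, and the blocks are gathered on the master. For simplicity it assumes the matrix size n is divisible by the number of processes:

    program matmul_cols
      include 'mpif.h'
      integer, parameter :: n = 4            ! small size chosen for illustration
      integer ierr, rank, nproc, i, j, k, j0, j1, ncols
      real*8 a(n,n), b(n,n), c(n,n), cfull(n,n)

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nproc, ierr)

      ! the master fills A and B with arbitrary test values
      if (rank .eq. 0) then
         do j = 1, n
            do i = 1, n
               a(i,j) = dble(i + j)
               b(i,j) = dble(i - j)
            end do
         end do
      end if

      ! broadcast both factor matrices to every process
      call MPI_BCAST(a, n*n, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)
      call MPI_BCAST(b, n*n, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)

      ! each process computes its own contiguous block of columns of C = A*B
      ! (assumes n is divisible by nproc so all blocks have the same width)
      ncols = n / nproc
      j0 = rank * ncols + 1
      j1 = j0 + ncols - 1
      c = 0d0
      do j = j0, j1
         do k = 1, n
            do i = 1, n
               c(i,j) = c(i,j) + a(i,k) * b(k,j)
            end do
         end do
      end do

      ! collect the column blocks on the master in rank order
      call MPI_GATHER(c(1,j0), n*ncols, MPI_DOUBLE_PRECISION, &
                      cfull, n*ncols, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)

      if (rank .eq. 0) print *, 'C(1,1) =', cfull(1,1)

      call MPI_FINALIZE(ierr)
    end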

Results
[Plot: computation time versus number of processes N, with the number of physical processors N_phy marked on the axis]
- For large matrices, the time is inversely proportional to the number of processors.
- The time increases for N > N_phy, since the transfer times become substantial.

Discussions and Conclusions
- Learnt fundamental principles of MPI
- Experienced the power of parallel computing
- Knowledge of MPI (gained at JINR) has improved our potential for professional excellence
- MPI is more effective on distributed memory systems (DMS) than on shared memory systems (SMS)
- High performance computing can be utilised to its full potential
- MPI improves research productivity

Recommendations
- Continued correspondence with our supervisors
- Encourage researchers from South Africa to learn about MPI (CHPC, National Facility)
- Propose the introduction of MPI into undergraduate courses

Acknowledgements
- NRF (RSA)
- JINR (Russia)
- Supervisors:
  - Dr. Alexandr P. Sapozhnikov
  - Prof. Tatiana Sapozhnikova
  - Prof. Elena Zemlyanaya
- Prof. M. L. Lekala
- Dr. N. M. Jacobs