MPI: Portable Parallel Programming for Scientific Computing
William Gropp, Rusty Lusk, Debbie Swider, Rajeev Thakur


Portable Parallel Programming

Why parallel? For performance:
- Real-time needs (weather prediction)
- Large memory (throughput)

Why portable?
- Lifetime of code >> lifetime of any machine
- Performance through algorithms

The Message Passing Model
- Parallel programs consist of cooperating processes, each with its own memory
- Processes send data to one another as messages
- Messages may carry tags that the receiver can use to select among them
- Messages may be received in any order
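A minimal sketch of this model in C (not from the slides; it assumes an MPI implementation such as MPICH and a launcher like mpiexec). Every process runs the same program, learns its own rank, and operates on its own memory:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                 /* every process starts here       */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* my id within the communicator   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of cooperating processes */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }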

What is MPI?
- A message-passing model
- A standard (specification)
  - Many implementations
  - MPICH is the most widely used
- Two phases:
  - MPI-1: traditional message passing
  - MPI-2: remote memory, parallel I/O, and dynamic processes

Why was MPI needed?
- A software crisis in parallel computing
  - Each vendor provided its own, different interface
  - No "critical mass" of users
- To enable libraries (code sharing)
- To enable turnkey parallel applications
  - CFD, pharmaceutical design, etc.

Quick Tour of MPI
- Point-to-point
- Collective
- Process groups and topology
- Profiling
- Other

Point-to-point
- Send/Receive
- Datatypes
  - Basic datatypes for heterogeneity
  - Derived datatypes for non-contiguous data
- Contexts
  - Message safety for libraries
- Buffering
  - Robustness and correctness

Example: process 0 sends array A(10), process 1 receives into B(20):
  MPI_Send(A, 10, MPI_DOUBLE, 1, ...)
  MPI_Recv(B, 20, MPI_DOUBLE, 0, ...)
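A runnable C sketch of the send/receive pair above; the slide elides the trailing arguments, so the tag value 0 is an assumption added here. Run with at least two processes (e.g. mpiexec -n 2):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        double A[10], B[20];
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            for (int i = 0; i < 10; i++) A[i] = i;
            /* send 10 doubles to rank 1 with tag 0 (tag chosen arbitrarily) */
            MPI_Send(A, 10, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Status status;
            /* receive into a buffer of 20 doubles; receiving fewer is allowed */
            MPI_Recv(B, 20, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
        }

        MPI_Finalize();
        return 0;
    }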

Collective
- Process groups
  - Collections of cooperating processes
  - Hierarchical algorithms need nested collections
- Categories
  - Communication: broadcast data
  - Computation: global sum
  - Synchronization: barrier
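A short sketch showing one collective from each category (broadcast, global sum, barrier); the data values are purely illustrative:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, n = 0, sum = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) n = 100;
        /* Communication: root (rank 0) broadcasts n to every process */
        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

        /* Computation: global sum of each process's rank, result on rank 0 */
        MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        /* Synchronization: every process waits here before continuing */
        MPI_Barrier(MPI_COMM_WORLD);

        if (rank == 0) printf("n = %d, sum of ranks = %d\n", n, sum);

        MPI_Finalize();
        return 0;
    }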

New MPI-2 Features
- Remote memory
- Parallel I/O
- Dynamic processes
- Threads
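As one concrete illustration of an MPI-2 feature, a sketch of parallel I/O in which every process writes its own block of a shared file; the file name and block size are assumptions made for the example:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, buf[100];
        MPI_File fh;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        for (int i = 0; i < 100; i++) buf[i] = rank;

        /* all processes open the same file collectively */
        MPI_File_open(MPI_COMM_WORLD, "datafile",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        /* each process writes 100 ints at its own offset in the shared file */
        MPI_File_write_at(fh, (MPI_Offset)rank * 100 * sizeof(int),
                          buf, 100, MPI_INT, MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }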

MPI in the World
- Libraries
  - At ANL: PETSc, SUMMA3d, etc.
  - Others: Aztec, PESSL, ScaLAPACK, FFTW, etc.
- Applications
  - Electromagnetic scattering
  - Groundwater pollution
  - Stellar atmospheres
  - Reactive flows
  - QCD
  - Navier-Stokes
  - Convective flows
  - Multi-physics in metal forming
- Papers

MPI Research at MCS
- Portable performance
- I/O
- Programming environment
- New hardware
  - Network/memory interconnect (VIA)
- New programming models
  - Threads
  - Remote memory

Learning More MPI
- Books
  - Using MPI
  - MPI: The Complete Reference
  - Parallel Programming with MPI
- Online resources
  - MPI Forum
  - Tutorials