Copyright © 2009 Pearson Education, Inc. Publishing as Pearson Addison-Wesley Principles of Parallel Programming First Edition by Calvin Lin Lawrence Snyder.

Chapter 7: MPI and Other Local View Languages

Figure 7.1 An MPI solution to the Count 3s problem.

Figure 7.1 (cont.)

Code Spec 7.1 MPI_Init().

Code Spec 7.2 MPI_Finalize().

Code Spec 7.3 MPI_Comm_size().

Code Spec 7.4 MPI_Comm_rank().

Code Spec 7.5 MPI_Send().

Code Spec 7.6 MPI_Recv().

Code Spec 7.7 MPI_Reduce().

Code Spec 7.8 MPI_Scatter().

Code Spec 7.8 (cont.)

Figure 7.2 Replacement code (for lines 16–48 of Figure 7.1) to distribute data using a scatter operation.

Code Spec 7.9 MPI_Gather().

Figure 7.3 Each message must be copied as it moves across four address spaces, each contributing to the overall latency.

Code Spec 7.10 MPI_Scan().

Code Spec 7.11 MPI_Bcast(): the MPI routine to broadcast data from one root process to all other processes in the communicator.

Code Spec 7.12 MPI_Barrier().

Code Spec 7.13 MPI_Wtime().

Figure 7.4 Example of collective communication within a group.

Code Spec 7.14 MPI_Comm_group().

Code Spec 7.15 MPI_Group_incl().

Code Spec 7.16 MPI_Comm_create().

Figure 7.5 On each iteration, a 2D relaxation replaces all interior values with the average of their four nearest neighbors.

Figure 7.6 MPI code for the main loop of the 2D SOR computation.

Figure 7.6 (cont.)

Figure 7.6 (cont.)

Figure 7.7 Depiction of dynamic work redistribution in MPI.

Figure 7.8 A 2D SOR MPI program using non-blocking sends and receives.

Figure 7.8 (cont.)

Figure 7.8 (cont.)

Code Spec 7.17 MPI_Waitall().

Figure 7.9 Creating a derived data type.

Partitioned Global Address Space (PGAS) Languages
–Offer a higher level of abstraction than message passing
–Built on top of distributed-memory clusters, but the memory is treated as a single address space
–Allow the definition of global data structures; the programmer no longer deals with message-passing details or explicitly distributed data structures
–Still require the programmer to distinguish local from global data
–Implemented on a more efficient one-sided communication substrate

The main PGAS languages:
–Co-Array Fortran
–Unified Parallel C
–Titanium

Co-Array Fortran (CAF)
–Extends Fortran; originally called F--
–Elegant and simple
–Uses the co-array (communication array):
real, dimension (n,n)[p,*] :: a, b, c
declares a, b, and c as co-arrays
–The memory for a co-array is distributed across the processes, as determined by the dimension statement

Unified Parallel C (UPC)
–Presents a global view of the address space
–Shared arrays are distributed in a cyclic or block-cyclic arrangement, which aids load balancing
–Supports C-style pointers, in 4 types: private private, shared private, private shared, and shared shared

UPC pointers
–Private pointer pointing locally: int *p1;
–Private pointer pointing into the shared space: shared int *p2;
–Shared pointer pointing locally: int *shared p3;
–Shared pointer pointing into the shared space: shared int *shared p4;

UPC also has a forall construct, upc_forall, which distributes the iterations of a normal C for loop across all the processes. It is a global operation, whereas most other UPC operations are local.

Titanium
–Extends Java; object-oriented
–Adds regions, which support safe memory management
–Unordered iteration with foreach, which allows concurrency over the multiple indices in a block