Computer Science and Engineering: Parallel and Distributed Processing (CSE 8380), Session 11, February 17, 2005

Parallel Virtual Machine (PVM)
- Introduction
- Environment & Application Structure
- Task Creation
- Task Groups
- Communication
- Synchronization
- Reduction Operations
- Work Assignments

PVM Introduction
- Started as a research project in 1989
- Developed at Oak Ridge National Laboratory and the University of Tennessee
- Makes it possible to develop applications on a set of heterogeneous computers, connected by a network, that appears logically to the user as a single parallel computer

PVM Environment
- Virtual machine: a dynamic set of heterogeneous computer systems connected via a network and managed as a single parallel computer
- Computer nodes are called hosts
- Hosts are uniprocessors or multiprocessors running the PVM software

PVM Software
- Two components: a library of PVM routines and a daemon
- The daemon must reside on every host in the virtual machine
- Before running an application, the user must start up PVM and configure a virtual machine

PVM Application
- A number of sequential programs, each of which corresponds to one or more processes in a parallel program
- These programs are compiled individually for each host in the virtual machine
- The object files are placed in locations accessible from the other hosts

PVM Application (cont.)
- One of these sequential programs, called the initiating task, has to be started manually on one of the hosts
- Tasks on the other hosts are started automatically by the initiating task
- The tasks comprising a PVM application can be identical (SPMD), which is the most common case, or different (e.g., a pipeline: input, processing, output)

Application Structure
- Star graph: the middle node is called the supervisor (master); this is the supervisor-workers or master-slaves structure
- Tree: the root is the top supervisor and the structure forms a hierarchy

Task Creation
- A task in PVM can be started manually or can be spawned from another task
- The function pvm_spawn() is used for dynamic task creation
- The task that calls pvm_spawn() is referred to as the parent
- The newly created tasks are called children

To create a child, you must specify:
1. The machine on which the child will be started
2. A path to the executable file on the specified machine
3. The number of copies of the child to be created
4. An array of arguments to the child tasks

Task ID
- All PVM tasks are identified by an integer task identifier
- When a task is created, it is assigned a unique task identifier (TID)
- Task identifiers can be used to identify senders and receivers during communication; they can also be used to assign different functions to different tasks based on their TIDs

Task ID Retrieval
- A task's own TID: pvm_mytid()
      mytid = pvm_mytid();
- A child's TID: pvm_spawn()
      pvm_spawn(…, …, …, …, …, &tid);
- The parent's TID: pvm_parent()
      my_parent_tid = pvm_parent();
- The daemon's TID: pvm_tidtohost()
      daemon_tid = pvm_tidtohost(id);
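
As a minimal sketch (not from the slides), the following C fragment shows how these calls are typically combined so that a task can discover its own TID and decide whether it is the supervisor or a worker. The header pvm3.h and the constant PvmNoParent are standard PVM 3; the overall structure is only an illustration.

    #include <stdio.h>
    #include <pvm3.h>

    int main(void)
    {
        int mytid = pvm_mytid();    /* enroll in PVM and get my own TID */
        int parent = pvm_parent();  /* TID of the task that spawned me */

        if (parent == PvmNoParent) {
            /* no parent: this task was started manually, so it acts as supervisor */
            printf("Supervisor, tid = %x\n", mytid);
        } else {
            /* spawned task: it acts as a worker */
            printf("Worker, tid = %x, parent = %x\n", mytid, parent);
        }

        pvm_exit();                 /* leave the virtual machine */
        return 0;
    }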

pvm_spawn
num = pvm_spawn(child, arguments, flag, where, howmany, &tids)
Examples:
n1 = pvm_spawn("/user/rewini/worker", 0, 1, "homer", 2, &tid1)
n2 = pvm_spawn("/user/rewini/worker", 0, 1, "corona", 4, &tid2)
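
The sketch below is an illustration, not part of the slides: a supervisor spawning several copies of a worker executable. The executable path is borrowed from the slide's examples; the worker count and the use of PvmTaskDefault (flag = 0), which lets PVM choose the hosts, are assumptions.

    #include <stdio.h>
    #include <pvm3.h>

    #define NWORKERS 4

    int main(void)
    {
        int tids[NWORKERS];   /* TIDs of the spawned children */
        int n;

        /* spawn NWORKERS copies of the worker; with PvmTaskDefault the
           "where" argument is ignored and PVM picks the hosts */
        n = pvm_spawn("/user/rewini/worker", (char **)0, PvmTaskDefault,
                      "", NWORKERS, tids);

        if (n < NWORKERS)
            fprintf(stderr, "only %d of %d workers were started\n", n, NWORKERS);

        pvm_exit();
        return 0;
    }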

Task Groups
- Groups are useful when a collective operation is performed on only a subset of the tasks (a broadcast, for example)
- A task can join or leave a group at any time without informing the other tasks in the group
- A task may belong to multiple groups
- PVM provides several functions for tasks to join and leave a group, as well as to retrieve information about other groups

Group Functions
- i = pvm_joingroup(group_name);
  i is the task's instance number in the group (instance numbers start at 0)
- info = pvm_lvgroup(group_name);
  In case of an error, info will have a negative value
- Retrieving information about groups:
  pvm_gsize(group_name);
  pvm_gettid(group_name, instance_number);
  pvm_getinst(group_name, TID);
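
A small hedged sketch of how these group calls fit together is shown below; the group name "slave" follows the later slides, and everything else is illustrative. Note that in standard PVM 3 installations the group functions require linking against the group library (libgpvm3).

    #include <stdio.h>
    #include <pvm3.h>

    int main(void)
    {
        int me, size, tid0, info;

        me = pvm_joingroup("slave");    /* join the group; returns my instance number */
        size = pvm_gsize("slave");      /* current number of members in the group */
        tid0 = pvm_gettid("slave", 0);  /* TID of the member whose instance number is 0 */

        printf("instance %d of %d, instance 0 has tid %x\n", me, size, tid0);

        info = pvm_lvgroup("slave");    /* leave the group; negative value indicates an error */
        if (info < 0)
            fprintf(stderr, "pvm_lvgroup failed (%d)\n", info);

        pvm_exit();
        return 0;
    }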

Communication among Tasks
[Figure: a sending task and a receiving task, each composed of the user application, the PVM library, and the PVM daemon]

Standard PVM Asynchronous Communication
- A sending task issues a send command (point 1)
- The message is transferred to the daemon (point 2)
- Control is returned to the user application (points 3 and 4)
- The daemon transmits the message on the physical wire sometime after returning control to the user application (point 3)

Standard PVM Asynchronous Communication (cont.)
- The receiving task issues a receive command (point 5) at some other time
- In the case of a blocking receive, the receiving task blocks on the daemon, waiting for a message (point 6); after the message arrives, control is returned to the user application (points 7 and 8)
- In the case of a non-blocking receive, control is returned to the user application immediately (points 7 and 8)

Send (3 steps)
1. A send buffer must be initialized
2. The message is packed into the buffer
3. The completed message is sent to its destination(s)

Receive (2 steps)
1. The message is received
2. The received items are unpacked

Message Buffers
Buffer creation (before packing):
bufid = pvm_initsend(encoding_option)
bufid = pvm_mkbuf(encoding_option)

Encoding option   Meaning
0                 XDR
1                 No encoding
2                 Leave data in place

Message Buffers (cont.)
Data packing: pvm_pk*()
- pvm_pkstr() takes one argument:
  pvm_pkstr("This is my data");
- The other packing functions take three arguments:
  1. A pointer to the first item
  2. The number of items to be packed
  3. The stride
  pvm_pkint(my_array, n, 1);
- Packing functions can be called multiple times to pack data into a single message

Sending a Message
Point to point (one receiver):
info = pvm_send(tid, tag)
Broadcast (multiple receivers):
info = pvm_mcast(tids, n, tag)
info = pvm_bcast(group_name, tag)
Pack and send in one step:
info = pvm_psend(tid, tag, my_array, length, data_type)
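
Putting the three send steps together, a hedged sketch in C might look like the following; the tag value, the array contents, and the assumption that dest holds a valid worker TID (e.g., obtained from pvm_spawn) are illustrative only.

    #include <pvm3.h>

    /* send an integer array and a string to the task whose TID is dest;
       a minimal sketch, assuming dest was obtained earlier */
    void send_work(int dest)
    {
        int my_array[5] = {10, 5, 20, 8, 30};
        int tag = 99;                     /* illustrative message tag */

        pvm_initsend(PvmDataDefault);     /* step 1: initialize send buffer (XDR encoding) */
        pvm_pkint(my_array, 5, 1);        /* step 2: pack 5 ints with stride 1 */
        pvm_pkstr("This is my data");     /* packing calls can be repeated */
        pvm_send(dest, tag);              /* step 3: send the packed message */
    }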

Receiving a Message
Blocking:
bufid = pvm_recv(tid, tag)
(-1 is a wild card for either tid or tag)
Non-blocking:
bufid = pvm_nrecv(tid, tag)
(bufid = 0 if no message was received)
Timeout:
bufid = pvm_trecv(tid, tag, timeout)
(bufid = 0 if no message was received)

Different Receives in PVM
[Figure: timelines of the three receive variants. pvm_recv(): blocking; the task waits from the time the function is called until a message arrives. pvm_nrecv(): non-blocking; the task continues execution immediately. pvm_trecv(): the task waits until either a message arrives or the timeout expires, then resumes execution.]

Data Unpacking
pvm_upk*()
- pvm_upkstr() takes one argument:
  pvm_upkstr(string);
- The other unpacking functions take three arguments:
  1. A pointer to the first item
  2. The number of items to be unpacked
  3. The stride
  pvm_upkint(my_array, n, 1);
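
The receiving side of the earlier send sketch could look as follows; again this is only an illustration, with the tag value and buffer sizes chosen to match that example.

    #include <pvm3.h>

    /* receive and unpack the message produced by send_work() above;
       src is the sender's TID, or -1 to accept a message from any task */
    void recv_work(int src)
    {
        int my_array[5];
        char text[64];
        int tag = 99;                /* must match the tag used by the sender */

        pvm_recv(src, tag);          /* step 1: blocking receive into a new buffer */
        pvm_upkint(my_array, 5, 1);  /* step 2: unpack in the same order as packed */
        pvm_upkstr(text);
    }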

Task Synchronization
- Synchronization constructs are used to force a certain order of execution among the activities in a parallel program
- Synchronization constructs in PVM: blocking receive and barriers

Blocking Receive
[Figure: task T0 (TID = 100) executes f() and then pvm_send(200, tag); task T1 (TID = 200) executes pvm_recv(100, tag) and then g(). g() in T1 is not executed until f() in T0 has finished.]

Group Barrier in PVM
[Figure: tasks T0, T1, and T2, all members of group "slave", each call pvm_barrier("slave", 3); each task waits at the synchronization point until all three have made the call, and then all of them proceed.]
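
As a hedged sketch of the barrier pattern from the figure, each of the three tasks might execute code like this; the group name "slave" and the count of 3 come from the figure, while the work functions are placeholders.

    #include <pvm3.h>

    void phase_one(void);   /* placeholder: work done before the barrier */
    void phase_two(void);   /* placeholder: work done after the barrier */

    void run_phases(void)
    {
        pvm_joingroup("slave");   /* every participating task must be a group member */

        phase_one();
        pvm_barrier("slave", 3);  /* block until 3 members of "slave" reach this point */
        phase_two();              /* no task starts phase_two before all finish phase_one */
    }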

Reduction Operation
info = pvm_reduce(func, data, n, datatype, tag, group_name, root)
Example:
info = pvm_reduce(PvmSum, dataarray, 5, PVM_INT, tag, "slave", root)

Task        Before reduction     After reduction
T0          10, 5, 20, 8, 30     10, 5, 20, 8, 30
T1 (root)   2, 15, 4, 12, 6      20, 45, 30, 30, 50
T2          8, 25, 6, 10, 14     8, 25, 6, 10, 14
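
A hedged sketch of how each group member would issue the reduction call for the example above; the tag value is illustrative, and it is assumed that every member of group "slave" has already joined the group and calls the function with matching arguments.

    #include <pvm3.h>

    /* every member of group "slave" calls this with its own local values;
       after the call, only the root instance holds the element-wise sums */
    void sum_over_group(int *dataarray, int root)
    {
        int tag = 7;   /* illustrative message tag */

        pvm_reduce(PvmSum, dataarray, 5, PVM_INT, tag, "slave", root);
        /* on the root task, dataarray now holds 20, 45, 30, 30, 50 for the
           example data in the table; on the other tasks it is unchanged */
    }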

Work Assignment (Different Programs)
info1 = pvm_spawn("/user/rewini/worker1", 0, 1, "lpc01", 1, &tid1)
info2 = pvm_spawn("/user/rewini/worker2", 0, 1, "lpc02", 1, &tid2)
info3 = pvm_spawn("/user/rewini/worker3", 0, 1, "lpc03", 1, &tid3)
info4 = pvm_spawn("/user/rewini/worker4", 0, 1, "lpc04", 1, &tid4)

Work Assignment (Same Program)
If we know that the worker IDs are 1, 2, ..., n-1:
switch (my_id) {
case 1:
    /* Work assigned to the worker whose id number is 1 */
    break;
case 2:
    /* Work assigned to the worker whose id number is 2 */
    break;
…
case n-1:
    /* Work assigned to the worker whose id number is n-1 */
    break;
default: ;
} /* end switch */

Using a Task ID Array to Get my_id
- The supervisor sends an array containing the TIDs of all the tasks to all the workers
- The supervisor's TID is saved in element zero of the array, and the workers' TIDs are saved in elements 1 to n-1
- Each worker searches the array for its own TID; the index at which it is found identifies the corresponding worker
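
A minimal sketch of the worker side of this scheme; the tag used for the TID-array message and the array size n are assumptions made for the illustration.

    #include <pvm3.h>

    /* worker side: receive the TID array from the supervisor and derive my_id;
       element 0 holds the supervisor's TID, elements 1..n-1 the workers' TIDs */
    int get_my_id(int *tids, int n)
    {
        int mytid = pvm_mytid();
        int i, my_id = -1;
        int tag = 1;                  /* illustrative tag for the TID-array message */

        pvm_recv(pvm_parent(), tag);  /* blocking receive from the supervisor */
        pvm_upkint(tids, n, 1);       /* unpack the n TIDs */

        for (i = 1; i < n; i++)       /* search elements 1..n-1 for my own TID */
            if (tids[i] == mytid)
                my_id = i;

        return my_id;                 /* the index i serves as this worker's id */
    }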

Using Task Groups to Get my_id
- All the tasks join one group, and the instance numbers are used as the new task identifiers
- The supervisor is the first one to join the group and gets instance number 0
- The workers get instance numbers in the range from 1 to n-1
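
A short sketch of the group-based alternative, under the assumption stated on the slide that the supervisor joins the group before any worker does; the group name "slave" is an illustrative choice carried over from the earlier slides.

    #include <pvm3.h>

    /* each task (supervisor and workers) calls this; the returned instance
       number serves as my_id: 0 for the supervisor, 1..n-1 for the workers,
       provided the supervisor joins the group first */
    int get_my_id_from_group(void)
    {
        int my_id = pvm_joingroup("slave");  /* instance numbers follow join order */
        return my_id;
    }

In the supervisor-workers structure this ordering is easy to guarantee, because the workers are spawned only after the supervisor has joined the group.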