Parallel and Distributed Programming Kashif Bilal

Parallel and Distributed Programming. Generally, we have two types of parallel and distributed systems:
–Multiprocessors (shared memory systems): a single memory is accessible to every processor.
–Multicomputers (message passing systems): every processor has its own local memory.

Parallel Programming Architectures. Generally, we have two basic architectures for parallel programs:
–Supervisor Worker Structure (Master Slave Model)
–Hierarchy Structure (Tree Model)

Supervisor Worker Model. There is only one level of hierarchy in this structure: one supervisor and many workers. The supervisor
–normally interacts with the user,
–activates the workers,
–assigns work to the workers,
–collects results from the workers.
The workers perform the calculations and send the results back to the supervisor.

Hierarchy Structure The hierarchy structure allows the workers to create new levels of workers. The top-level supervisor is the initiating task, which creates a set of workers at the second level. These workers may create other sets of workers.

Process….. A program (task) in execution: a process is a set of executable instructions (a program) that runs on a processor. The process is the basic entity for achieving parallelism in both multiprocessors and multicomputers.

Distributed System Classification. Michael Flynn classified computers into four categories:
–SISD – Single Instruction stream / Single Data stream
–SIMD – Single Instruction stream / Multiple Data streams (or SPMD – Single Program / Multiple Data streams)
–MISD – Multiple Instruction streams / Single Data stream (no real application)
–MIMD – Multiple Instruction streams / Multiple Data streams

Message Passing The method by which data from one processor’s memory is copied to the memory of another processor. In distributed memory systems, data is generally sent as packets of information over a network from one processor to another.

Thumb Recognition Example. Suppose we have a database of 10 lakh (1,000,000) thumb impressions and their related data. A user comes, the system takes the user's impression and searches the database for that user's information. Suppose one database match takes 1/100 of a second. To search the complete database, the system will take approximately 10,000 seconds, i.e. roughly 2.8 hours.
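Worked out: 1,000,000 matches × 0.01 s per match = 10,000 s ≈ 2.8 hours on a single machine. If the same records were split evenly across 100 workers, each worker would scan only 10,000 records, i.e. roughly 100 seconds plus communication overhead, which is the motivation for the distributed version below.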

Algorithm for Thumb Recognition
Main() {
  Picture = Capture the thumb impression to be matched
  Details = Match(Picture)
}
Match(Picture) {
  Pick records from the database one by one and match each with Picture
  If (matched) return the details of that record
}

Distributed Algorithm…
Main() {
  Receive the impression, start record number and end record number from the master
  Details = Match(Picture, start_no, end_no)
  Send Details back to the master (supervisor) task
}
Match(Picture, start_no, end_no) {
  Pick records from the database one by one, from record start_no to end_no, and match each with Picture
  If (matched) return the details of that record
}

Problem…
–How do the workers receive data from the supervisor?
–How do the workers send details back to the supervisor process?
–How does the supervisor allocate and communicate with the workers?
–Etc.

Possible Solutions
–Make a new programming language for distributed and parallel programming.
–Change existing languages.
–Build libraries and APIs (sets of functions) that perform all the tasks required in distributed and parallel programming, such as remote task spawning, communication, synchronization, etc.

PVM and MPI. PVM and MPI are two well-known libraries used for parallel and distributed programming. PVM is a bit older than MPI. MPI is now considered the standard for building parallel and distributed programs. Both libraries provide functions to perform the different tasks required in parallel programs.

PVM (Parallel Virtual Machine). Started as a research project in 1989, the Parallel Virtual Machine (PVM) was originally developed at Oak Ridge National Laboratory and the University of Tennessee. PVM offers a powerful set of process control and dynamic resource management functions.

PVM….. It provides programmers with a library of routines for
–initiation and termination of tasks,
–synchronization,
–alteration of the virtual machine configuration,
–message passing via a number of simple constructs,
–interoperability among different heterogeneous computers: a program written for one architecture can be copied to another architecture, compiled and executed without modification.

PVM… PVM has two components:
–a library of PVM routines, and
–a daemon (pvmd) that must reside on all the hosts in the virtual machine.
The pvmd serves as a message router and controller. It provides a point of contact, authentication, process control, and fault detection. One task (process) is normally started manually on one machine (the supervisor), and the other tasks are instantiated automatically by the supervisor.

Task Creation… A task in PVM can be started manually or can be spawned from another task. The initiating task is always activated manually by simply running its executable code on one of the hosts. Other PVM tasks can be created dynamically from within other tasks. The function pvm_spawn() is used for dynamic task creation. The task that calls the function pvm_spawn() is referred to as the parent and the newly created tasks are called children.

To create a child, you must specify:
–the machine on which the child will be started,
–a path to the executable file on the specified machine,
–the number of copies of the child to be created,
–an array of arguments to the child tasks.

pvm_spawn()… Num = pvm_spawn(Child, Arguments, Flag, Where, HowMany, &Tids). This function has six parameters and returns the actual number of successfully created tasks in Num.
–Child: name of the task (program) to be executed.
–Arguments: same as command line arguments.
–Flag: a flag value that decides which machine(s) will run the spawned task.

Flag values
–PvmTaskDefault (0): PVM can choose any machine to start the task.
–PvmTaskHost (1): Where specifies a particular host.
–PvmTaskArch (2): Where specifies a type of architecture.
Where: depends on the value of Flag.
HowMany: number of tasks to be spawned.
Tids: array to store the TIDs of the spawned tasks.
Return: total number of tasks actually created.
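In C terms the call looks roughly like this (parameter names here are illustrative, matching the descriptions above):
int pvm_spawn(char *task,    /* name of the executable to start (Child) */
              char **argv,   /* command line arguments, NULL if none */
              int flag,      /* PvmTaskDefault, PvmTaskHost or PvmTaskArch */
              char *where,   /* host name or architecture; "" with PvmTaskDefault */
              int ntask,     /* number of copies to start (HowMany) */
              int *tids);    /* filled in with the TIDs of the children */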

n1 = pvm_spawn(“simple_pvm”, 0, 1, “mynode11”, 1, &tid1)
–will create one task running the executable named simple_pvm on the node mynode11.
numt = pvm_spawn(“simple_pvm”, 0, PvmTaskArch, “LINUX”, 5, &tids);
–will create 5 tasks running simple_pvm on a specific architecture, i.e. LINUX.
res = pvm_spawn(“simple_pvm”, NULL, PvmTaskDefault, “”, 5, children);
–will ask PVM to choose the nodes itself.

Task IDs. All PVM tasks are identified by an integer task identifier. When a task is created it is assigned a unique identifier (TID). Task identifiers can be used to identify senders and receivers during communication. They can also be used to assign functions to different tasks based on their TIDs.

Retrieval of TIDs
–Task’s own TID: pvm_mytid(), e.g. mytid = pvm_mytid();
–Child’s TID: pvm_spawn(), e.g. pvm_spawn(…, …, …, …, …, &tid);
–Parent’s TID: pvm_parent(), e.g. my_parent_tid = pvm_parent();
–Daemon’s TID: pvm_tidtohost(), e.g. daemon_tid = pvm_tidtohost(id);
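A tiny sketch that puts these calls together (the printf labels are just illustrative):
#include <stdio.h>
#include "pvm3.h"

int main() {
    int mytid  = pvm_mytid();            /* enrolls this task in PVM and returns its TID */
    int parent = pvm_parent();           /* PvmNoParent if this task was started manually */
    int daemon = pvm_tidtohost(mytid);   /* TID of the pvmd on this task's host */

    printf("my tid=%d parent=%d host daemon=%d\n", mytid, parent, daemon);
    pvm_exit();                          /* leave the virtual machine before exiting */
    return 0;
}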

Communication among Tasks. Communication among PVM tasks is performed using the message passing approach, achieved through the library of routines and the daemon. A user program communicates with the PVM daemon, and the daemon determines the destination of each message. Communication is generally asynchronous.

Figure: a message travels from the sending task through its user application, PVM library and local daemon, then across to the receiving host's daemon, library and user application (receiving task).

How to send a message. Sending a message requires 3 steps:
1. A send buffer must be initialized.
2. The message is packed into the buffer.
3. The completed message is sent to its destination(s).
Receiving a message requires 2 steps:
1. The message is received.
2. The received items are unpacked.

Message Sending… Buffer creation (before packing):
Bufid = pvm_initsend(encoding_option)
Bufid = pvm_mkbuf(encoding_option)
Encoding options: 0 = XDR, 1 = no encoding, 2 = leave data in place.
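These numeric options correspond to the symbolic constants normally used in code (a small fragment for illustration):
bufid = pvm_initsend(PvmDataDefault);   /* 0: XDR encoding, safe between heterogeneous hosts */
bufid = pvm_initsend(PvmDataRaw);       /* 1: no encoding, sender and receiver formats must match */
bufid = pvm_initsend(PvmDataInPlace);   /* 2: data left in place, copied only when the message is sent */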

Message Sending… Data packing: pvm_pk*()
–pvm_pkstr() – one argument, e.g. pvm_pkstr(“This is my data”);
–others – three arguments:
1. pointer to the first item
2. number of items to be packed
3. stride
e.g. pvm_pkint(my_array, n, 1);

Message Sending…
–Point to point (one receiver): info = pvm_send(tid, tag)
–Broadcast (multiple receivers): info = pvm_mcast(tids, n, tag) or info = pvm_bcast(group_name, tag)
–Pack and send in one step: info = pvm_psend(tid, tag, my_array, length, data_type)
Each call returns an integer status code info; a negative value of info indicates an error.
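Putting the three sending steps together, a sender fragment might look like this (dest_tid is assumed to hold the receiver's TID, e.g. obtained from pvm_spawn(); the tag value 1 is arbitrary):
#include "pvm3.h"

int my_array[4] = {10, 20, 30, 40};
int info;

pvm_initsend(PvmDataDefault);             /* step 1: initialize a send buffer (XDR) */
pvm_pkint(my_array, 4, 1);                /* step 2: pack the data into the buffer */
info = pvm_send(dest_tid, 1);             /* step 3: send the buffer with message tag 1 */

info = pvm_psend(dest_tid, 1, my_array, 4, PVM_INT);   /* or pack and send in a single call */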

Receiving a Message. PVM supports three types of message receiving functions: blocking, nonblocking, and timeout.
–Blocking: bufid = pvm_recv(tid, tag); -1 acts as a wildcard in either tid or tag.
–Nonblocking: bufid = pvm_nrecv(tid, tag); bufid = 0 means no message was received.
–Timeout: bufid = pvm_trecv(tid, tag, timeout); bufid = 0 means no message was received.
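A sketch of the three styles on the receiving side (the tag 1 and the 5-second timeout are arbitrary; -1 leaves the sender as a wildcard):
#include <sys/time.h>
#include "pvm3.h"

int bufid, value;
struct timeval tmout = {5, 0};            /* 5 seconds, 0 microseconds */

bufid = pvm_recv(-1, 1);                  /* blocking: wait until a tag-1 message arrives */
pvm_upkint(&value, 1, 1);

bufid = pvm_nrecv(-1, 1);                 /* nonblocking: returns 0 if nothing has arrived yet */
if (bufid > 0) pvm_upkint(&value, 1, 1);

bufid = pvm_trecv(-1, 1, &tmout);         /* timeout: wait at most 5 s, then give up with bufid == 0 */
if (bufid > 0) pvm_upkint(&value, 1, 1);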

Data Unpacking: pvm_upk*()
–pvm_upkstr() – one argument, e.g. pvm_upkstr(string);
–others – three arguments:
1. pointer to the first item
2. number of items to be unpacked
3. stride
e.g. pvm_upkint(my_array, n, 1);

Data Unpacking (continued). Receiving and unpacking in a single step:
info = pvm_precv(tid, tag, my_array, len, datatype, &src, &atag, &alen)
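Like pvm_psend() on the sending side, pvm_precv() collapses the receive and unpack steps; a short fragment (the buffer size 4 and tag 1 are arbitrary):
#include "pvm3.h"

int my_array[4];
int src, atag, alen, info;

/* receive up to 4 ints with tag 1 from any sender, unpacking directly into my_array;   */
/* src, atag and alen report who sent the message, with which tag, and how many items   */
info = pvm_precv(-1, 1, my_array, 4, PVM_INT, &src, &atag, &alen);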

Work Assignment (different programs) info1 = pvm_spawn(“worker1”, 0, 1, “lpc01”, 1, &tid1) info2 = pvm_spawn(“worker2”, 0, 1, “lpc02”, 1, &tid2) info3 = pvm_spawn(“worker3”, 0, 1, “lpc03”, 1, &tid3) info4 = pvm_spawn(“worker4”, 0, 1, “lpc04”, 1, &tid4)

Distributed Algorithm…
Main() {
  Receive the impression, start record number and end record number from the master
  Details = Match(Picture, start_no, end_no)
  Send Details back to the master (supervisor) task
}
Match(Picture, start_no, end_no) {
  Pick records from the database one by one, from record start_no to end_no, and match each with Picture
  If (matched) return the details of that record
}

Distributed Algorithm using PVM – Master
Main() {
  Input the thumb impression…
  int arr[2];
  int start = 1;
  pvm_spawn(“slave”, 0, 0, “”, 100, children);
  for (i = 0; i < 100; i++) {
    arr[0] = start;
    arr[1] = start + 999;
    pvm_initsend(PvmDataDefault);
    pvm_pkint(arr, 2, 1);
    pvm_send(children[i], 1);       // send the record range with tag 1
    start += 1000;
    Pack and send picture to children[i] in the same way
  }
  pvm_recv(-1, -1);                 // wait for a result from any worker
}

Distributed Algorithm using PVM – Slave
Main() {
  int arr[2];
  pvm_recv(pvm_parent(), 1);        // receive the record range
  pvm_upkint(arr, 2, 1);
  pvm_recv(pvm_parent(), -1);       // receive the picture
  Match(picture, arr[0], arr[1]);
}
Match(Picture, start_no, end_no) {
  Pick records from the database one by one, from record start_no to end_no, and match each with Picture
  If (matched) return the details of that record   // or pvm_send(pvm_parent(), 2) to report back to the master
}

Distributed Algorithm using PVM – Master and slave in a single program
Main() {
  int tid = pvm_parent();
  if (tid == PvmNoParent) {
    // supervisor code here
  }
  else {
    // worker code here
  }
}
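A minimal compilable sketch of this single-program pattern, assuming the executable is installed under the name “selfspawn” where PVM can find it (names, data and the message tag are illustrative):
#include <stdio.h>
#include "pvm3.h"

int main() {
    int parent = pvm_parent();

    if (parent == PvmNoParent) {             /* supervisor branch */
        int child, n, data = 42;
        n = pvm_spawn("selfspawn", NULL, PvmTaskDefault, "", 1, &child);
        if (n < 1) { printf("spawn failed\n"); pvm_exit(); return 1; }
        pvm_initsend(PvmDataDefault);
        pvm_pkint(&data, 1, 1);
        pvm_send(child, 1);                   /* hand work to the worker */
    } else {                                  /* worker branch */
        int data;
        pvm_recv(parent, 1);
        pvm_upkint(&data, 1, 1);
        printf("worker got %d from supervisor\n", data);
    }
    pvm_exit();
    return 0;
}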

PVM Master Program
#include <stdio.h>
#include <stdlib.h>
#include "pvm3.h"

int main() {
  int myTID;
  int x = 12;
  int children[10];
  int res;

  myTID = pvm_mytid();
  printf("Master: TID is %d\n", myTID);
  res = pvm_spawn("slave", NULL, PvmTaskDefault, "", 1, children);
  if (res < 1) {
    printf("Master: pvm_spawn error\n");
    pvm_exit();
    exit(1);
  }

  pvm_initsend(PvmDataDefault);
  pvm_pkint(&x, 1, 1);
  pvm_send(children[0], 1);

  pvm_recv(-1, -1);
  pvm_upkint(&x, 1, 1);
  printf("Master has received x=%d\n", x);

  pvm_exit();
  return 0;
}
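The deck shows only the master; a matching slave program could look roughly like this (it doubles x just to make the round trip visible; the tag values mirror the master above):
#include <stdio.h>
#include "pvm3.h"

int main() {
    int x, parent;

    parent = pvm_parent();            /* TID of the master that spawned us */
    pvm_recv(parent, 1);              /* wait for the message with tag 1 */
    pvm_upkint(&x, 1, 1);
    printf("Slave: received x=%d\n", x);

    x = x * 2;                        /* do some "work" */
    pvm_initsend(PvmDataDefault);
    pvm_pkint(&x, 1, 1);
    pvm_send(parent, 2);              /* send the result back to the master */

    pvm_exit();
    return 0;
}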

How to Compile and Execute?
gcc -I /opt/pvm3/include myprogram.c -L /opt/pvm3/lib/LINUX/ -lpvm3 -o myprogramexe
Illustration:
–gcc = C compiler
–-I = include path (/opt/pvm3/include holds the PVM header files)
–myprogram.c = name of the source code file
–-L = library search path, together with -lpvm3 to link the PVM3 library
–-o = name of the output (executable) file
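To run the result, the PVM daemons must already be running. One common way (host names here are illustrative) is to start the PVM console, add the hosts, and then launch the master executable from a shell:
$ pvm                      # start the PVM console (this also starts the local pvmd)
pvm> add mynode11          # add another host to the virtual machine
pvm> conf                  # list the hosts currently in the virtual machine
pvm> quit                  # leave the console; the daemons keep running
$ ./myprogramexe           # run the master; it spawns the slaves via pvm_spawn()
$ pvm                      # reopen the console when finished
pvm> halt                  # shut down all the daemons
Note that pvm_spawn() looks for the named executable in PVM's default path (typically $HOME/pvm3/bin/$PVM_ARCH) unless a full path is given.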