How-To: Compiling and Running MPI Programs. Prepared by Kiriti Venkat.


What is MPI?
MPI stands for Message Passing Interface. MPI is a library specification for message passing, proposed as a standard by a broadly based committee of vendors, implementers, and users. MPI was designed for high performance on both massively parallel machines and workstation clusters.

Goals of MPI
Develop a widely used standard for writing message-passing programs. As such, the interface should establish a practical, portable, efficient, and flexible standard for message passing. The design of MPI primarily reflects the perceived needs of application programmers. Allow efficient communication by:
–avoiding memory-to-memory copying,
–allowing overlap of computation and communication, and
–offloading to a communication co-processor, where available.
Allow for implementations that can be used in a heterogeneous environment.

MPI Programming Environments in a Linux Cluster
There are two basic MPI programming environments available in OSCAR: LAM/MPI and MPICH.

LAM/MPI
–LAM (Local Area Multicomputer) is an MPI programming environment and development system for heterogeneous computers on a network.
–With LAM/MPI, a dedicated cluster or an existing network computing infrastructure can act as a single parallel computer.
–LAM/MPI is considered "cluster friendly", in that it offers daemon-based process startup/control as well as fast client-to-client message-passing protocols.
–It can use TCP/IP and/or shared memory for message passing.

MPICH
An open-source, portable implementation of the MPI standard led by ANL. It implements MPI version 1.2 and also significant parts of MPI-2, particularly in the area of parallel I/O. MPMD (Multiple Program, Multiple Data) and heterogeneous clusters are supported, as are clusters of SMPs and massively parallel computers. MPICH also includes components of a parallel programming environment: tracing and logfile tools based on the MPI profiling interface, including a scalable log file format (SLOG); parallel performance visualization tools; and extensive correctness and performance tests. It supports both small and large applications.

Commands for switching between MPI programming environments in OSCAR
$ switcher mpi --show // displays the default environment
$ switcher mpi --list // lists the environments that are available
$ switcher mpi = lam // sets all future shells to use lam as the default environment
For more information on switcher, refer to man switcher.

Running MPI Programs in the LAM Environment
Before running MPI programs, the LAM daemon must be started on each node of the cluster:
$ recon -d lamhosts // recon tests whether the cluster is connected; lamhosts is a list of host names (IPs)
$ lamboot -v lamhosts // lamboot starts LAM on the specified cluster; lamhosts lists the nodes of the cluster
Compiling MPI programs:
$ mpicc -o foo foo.c // compiles the foo.c source code into the foo executable
$ mpif77 -o foo foo.f // compiles the foo.f source code into the foo executable
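As a minimal sketch of what the foo.c compiled above might contain (the file name comes from the compile line; the program body itself is illustrative, not part of the original slides), each process simply reports its rank:

```c
/* foo.c -- minimal MPI example: each process reports its rank. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id            */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of copies       */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut the MPI runtime down    */
    return 0;
}
```

Compiled with mpicc -o foo foo.c and launched with mpirun as shown below, each of the -np copies prints one line with its own rank.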

Running MPI Programs in the LAM Environment
$ mpirun -v -np 2 foo // runs the foo program on the available nodes; -np gives the number of copies of the program to run
To run multiple programs at the same time, create an application schema file that lists each program and its nodes:
$ cat appfile
n0 master
n0-1 slave
$ mpirun -v appfile // runs the master and slave programs on 2 nodes simultaneously

Monitoring MPI Applications
$ mpitask // displays the full MPI synchronization status of all processes and messages
$ mpimsg // displays message sources, destinations, and data types
$ lamclean -v // removes all user processes and messages
$ lamhalt // shuts down the LAM daemons on all nodes
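Putting the LAM commands above together, a typical session looks like the following sketch (the lamhosts file and host names are placeholders; an installed LAM/MPI environment is assumed):

```shell
#!/bin/sh
# Sketch of a complete LAM/MPI session, assuming a lamhosts file
# listing one host name per line (e.g. node0, node1).

recon -d lamhosts          # verify that every node is reachable
lamboot -v lamhosts        # start the LAM daemons on each node

mpicc -o foo foo.c         # compile the MPI program
mpirun -v -np 2 foo        # run 2 copies across the booted nodes

lamclean -v                # remove leftover processes and messages
lamhalt                    # shut down the LAM daemons on all nodes
```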

Submitting Jobs through PBS
qsub: submits a job to PBS
qdel: deletes a PBS job
qstat [-n]: displays current job status and node associations
pbsnodes [-a]: displays node status
pbsdsh: distributed process launcher
qsub command:
$ qsub ring.qsub
stdout goes into ring.qsub.o<jobid> and, on error, output goes into ring.qsub.e<jobid>.

Submitting Jobs through PBS
qsub command:
$ qsub -N my_jobname -e my_stderr.txt -o my_stdout.txt -q workq -l \
nodes=X:ppn=Y:all,walltime=1:00:00 my_script.sh
Example my_script.sh:
#!/bin/sh
echo Launch node is `hostname`
lamboot lamhost
pbsdsh /path/to/my_executable
The command above distributes the job (my_executable) to X nodes with Y processors each using pbsdsh; output is piped into my_stdout.txt and, on error, into my_stderr.txt.
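The resource requests passed on the qsub command line above can also be embedded in the script itself as #PBS directives, so the job can be submitted with a bare qsub. A sketch under the same assumptions (job name, queue, node counts, and the executable path are illustrative placeholders):

```shell
#!/bin/sh
#PBS -N my_jobname
#PBS -q workq
#PBS -l nodes=2:ppn=2,walltime=1:00:00
#PBS -o my_stdout.txt
#PBS -e my_stderr.txt

# PBS starts this script on the first allocated node.
echo "Launch node is $(hostname)"

# Boot LAM across the nodes PBS assigned to this job ($PBS_NODEFILE
# lists them, one per line), then launch the program on every
# allocated processor with pbsdsh, and shut LAM down afterwards.
lamboot $PBS_NODEFILE
pbsdsh /path/to/my_executable
lamhalt
```

Submitted with `qsub my_script.sh`, this requests the same resources as the long command-line form while keeping the job description in one file.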