HPCC Mid-Morning Break MPI on HPCC Dirk Colbry, Ph.D. Research Specialist Institute for Cyber Enabled Research


MPI on HPCC
– Two Flavors of MPI
– Switching flavors and compiling
– Running in a script
– Running on the developer nodes

Two Flavors of MPI mvapich (default) vs. openmpi. Historically, mvapich was much faster than openmpi, but the newest version of openmpi is just as fast as mvapich. I find openmpi easier to use, but either will work on HPCC.

Switching Flavors Use the "module" command to switch between the two versions of MPI. The mvapich module is loaded by default. To switch to openmpi, you first need to unload mvapich:
– module unload mvapich
Then load openmpi:
– module load openmpi
You can do both commands in one step by using swap:
– module swap mvapich openmpi
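As a quick sketch, the swap can be followed by "module list" (a standard environment-modules command) to confirm which flavor is now loaded:

    module swap mvapich openmpi   # replace the default mvapich with openmpi in one step
    module list                   # verify that openmpi now appears in the loaded modules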

Submission Scripts
openmpi:
    #!/bin/bash -login
    #PBS -l nodes=10:ppn=1
    cd ${PBS_O_WORKDIR}
    module swap mvapich openmpi
    mpirun
mvapich (the default):
    #!/bin/bash -login
    #PBS -l nodes=10:ppn=1
    cd ${PBS_O_WORKDIR}
    mpiexec
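The slides leave off the executable name and the resource requests other than nodes. A sketch of a complete openmpi submission script might look like the following; the walltime and memory values and the ./hello executable are assumptions for the example, not values from the slides:

    #!/bin/bash -login
    #PBS -l nodes=10:ppn=1        # 10 nodes, 1 process per node
    #PBS -l walltime=00:10:00     # assumed walltime request for this example
    #PBS -l mem=2gb               # assumed memory request for this example

    cd ${PBS_O_WORKDIR}           # run from the directory the job was submitted from
    module swap mvapich openmpi   # switch from the default mvapich to openmpi
    mpirun ./hello                # hypothetical MPI executable

Such a script would then be handed to the scheduler with qsub (for example, qsub hello.qsub).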

Running on the Command Line The scheduler automatically knows how many MPI processes to run and where to run them. On the command line, however, you need to specify the nodes and processors yourself, and openmpi and mvapich handle this a little differently.

Command Line Differences
openmpi – mpirun
– The default assumes one process on the current host.
– You do not even need the mpirun command to run the default.
– Optionally, you can use the -n and -hostfile options to change the default.
mvapich – mpirun
– Requires both the -np and -machinefile flags to run.
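To illustrate the openmpi default described above, both of the following start a single MPI process on the current host (hello is a hypothetical MPI binary used only for this sketch):

    ./hello          # openmpi "singleton" launch: one process on the current host
    mpirun ./hello   # per the slide, the mpirun default is the same single local process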

Creating a machinefile From the command line, type the following (repeating the second command until the node name has been written four times). This creates a file called machinefile containing the name of the developer node (for example, dev-intel09) four times:
    echo $HOST > machinefile
    echo $HOST >> machinefile
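Equivalently, a small loop builds the same file; this is just a convenience sketch (it follows the slide's use of $HOST for the node name):

    rm -f machinefile                # start with an empty file
    for i in 1 2 3 4; do
        echo $HOST >> machinefile    # append the developer node's name once per iteration
    done
    cat machinefile                  # should print the node name (e.g. dev-intel09) four times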

Command Line
mvapich:  mpirun -np 4 -machinefile machinefile
openmpi:  mpirun -n 4 -hostfile machinefile
NOTE: I did a check and either MPI implementation will work with either notation.
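Putting it together, a minimal sketch of compiling and running an MPI program on a developer node; hello.c and the four-line machinefile from the previous slide are assumptions for the example:

    mpicc hello.c -o hello                          # compile with the MPI compiler wrapper
    mpirun -n 4 -hostfile machinefile ./hello       # openmpi-style launch: 4 processes from machinefile
    mpirun -np 4 -machinefile machinefile ./hello   # mvapich-style flags (the slide notes either works)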

Which MPI command do you use?
              Command Line    Job Script
    openmpi   mpirun          mpirun
    mvapich   mpirun          mpiexec

Trying out an example
1. Log on to one of the developer nodes.
2. Load the powertools module:
   – module load use.cus powertools
3. Run the getexample program. This will create a folder called helloMPI:
   – getexample helloMPI
4. Change to the helloMPI directory and read the readme files.
5. Or just type the following on the command line:
   – ./README
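For reference, the whole sequence might look like the sketch below. It assumes you can ssh directly to dev-intel09 (the developer node named in the earlier slide); adjust the host name for whichever developer node you use:

    ssh dev-intel09                  # 1. log on to a developer node
    module load use.cus powertools   # 2. load the powertools module
    getexample helloMPI              # 3. copy the example into a new helloMPI folder
    cd helloMPI                      # 4. change into the example directory
    ./README                         # 5. run the README, which walks through the example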