Using the Argo Cluster
Paul Sexton
CS 566
February 6, 2006

Logging in
The address is argo-new.cc.uic.edu.
Use ssh to log in with your ACCC username.
Ex: ssh psexto2@argo-new.cc.uic.edu
Pay attention to the login message; this is how the admins will communicate information to you.

Environment Variables
Add the following to your .bash_profile:

# For the PGI compilers
export PGI_HOME=/usr/common/pgi-6.0-0/linux86/6.0
export LIB=$PGI_HOME/lib:$PGI_HOME/liblf:/lib:/usr/lib

# For the MPI libraries
export MPICH2_HOME=/usr/common/mpich2-1.0.1
export LD_LIBRARY_PATH=$MPICH2_HOME/lib:$LD_LIBRARY_PATH
export PATH=$MPICH2_HOME/bin:$PATH
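The new settings take effect at your next login. A quick sanity check, assuming the paths above, is to re-read the file and confirm the MPICH tools are on your PATH:

source ~/.bash_profile
echo $MPICH2_HOME    # should print /usr/common/mpich2-1.0.1
which mpicc mpiexec  # should resolve under $MPICH2_HOME/bin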

Compiling with GCC
MPICH provides wrapper scripts for C, C++, and Fortran:
Use mpicc in place of gcc
Use mpicxx in place of g++
Use mpif77 in place of g77
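The wrappers pass your flags through to the underlying GNU compilers, so a compile line looks like an ordinary gcc invocation. A minimal sketch, assuming a hypothetical source file hello_mpi.c:

mpicc  -O2 -o hello_mpi hello_mpi.c    # C
mpicxx -O2 -o hello_mpi hello_mpi.cpp  # C++
mpif77 -O2 -o hello_mpi hello_mpi.f    # Fortran 77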

Compiling with PGI
Use pgcc in place of cc/gcc
Use pgf77 in place of f77/g77
No wrapper scripts are provided, so you need to add the include and lib paths manually:

Compiling with PGI (cont.)
For C or C++, add the following to your compiler flags:
-I$MPICH2_HOME/include -L$MPICH2_HOME/lib -lmpich
For Fortran, also add:
-lfmpich
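Put together, full compile lines might look roughly like the sketch below; the source file names are hypothetical, the flags are exactly those listed above:

pgcc  -I$MPICH2_HOME/include -o hello_mpi hello_mpi.c -L$MPICH2_HOME/lib -lmpich
pgf77 -I$MPICH2_HOME/include -o hello_mpi hello_mpi.f -L$MPICH2_HOME/lib -lfmpich -lmpich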

Running an MPI program
Normally, to run an MPI program, you would use the mpirun/mpiexec command.
Ex: mpirun -np 4 myprogram
Don't do this on argo.

Running an MPI program on Argo
Edit and compile your code on the master node, and run it on the slave nodes using PBS / Torque.
The important commands:
qsub
qdel
qstat
qnodes
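A typical end-to-end session looks roughly like the sketch below, assuming your job script is named my_script; each command is covered in detail on the following slides:

qsub my_script    # submit; prints a job id such as 3208.argo-new.cc.uic.edu
qstat -u psexto2  # check on your jobs (substitute your own username)
qdel 3208         # remove the job if something goes wrong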

qsub - submit a job
qsub [options] script_file
script_file is a shell script that gives the path to your executable and the command to run it. Do not call qsub on your program directly!
qsub returns a status message giving you the job id.
Ex: "3208.argo-new.cc.uic.edu"

A qsub script
For example, imagine your program is called "foo.exe". The contents of the script file would be:
mpiexec -n 4 /path/to/foo/foo.exe
where "/path/to/foo/" is the path to the executable's directory.
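A slightly fuller script is sketched below, assuming standard PBS/Torque behavior; only the mpiexec line is required, and the #PBS directives and the job name my_job are illustrative assumptions:

#!/bin/bash
#PBS -N my_job      # job name (hypothetical)
#PBS -l nodes=4     # request four nodes (can also be given on the qsub command line)
cd $PBS_O_WORKDIR   # run from the directory the job was submitted from
mpiexec -n 4 /path/to/foo/foo.exe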

qdel - remove a job
qdel job_id
job_id is the number qsub returned.
Ex: qdel 3208

qstat - queue status
qstat job_id - check the status of a job when you know the number. Ex: qstat 3208
qstat -u userid - check the status of all of your jobs. Ex: qstat -u psexto2
qstat - with no arguments, obtain the status of all jobs running on argo.

qnodes - node status
Shows the availability of all 64 nodes, and how many jobs they are currently running.
Takes no arguments. No man page entry for some reason.

Viewing the output
Standard output and standard error are both redirected to files named [script name].{e,o}[job id].
Ex: stderr in my_script.e3208, stdout in my_script.o3208
The .e file should usually be empty.
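Continuing the running example (script my_script, job id 3208), once the job finishes you might inspect the results with:

cat my_script.o3208  # standard output from your program
cat my_script.e3208  # standard error; should usually be empty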

More on qsub
There are numerous options to qsub, but the most common and useful one is "-l nodes":
To set the number of nodes:
qsub -l nodes=4 my_script
To pick the nodes manually:
qsub -l nodes=argo9-1+argo9-2+argo9-3 my_script
To run only on a subset of nodes:
qsub -l nodes=6:cpu.xeon my_script
qsub -l nodes=8:cpu.amd my_script
qsub -l nodes=8:cpu.smp my_script
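One caveat, offered as an assumption about how the pieces fit together rather than anything stated on the slides: the node count given to "-l nodes" and the process count given to mpiexec inside the script are specified independently, so keep them consistent. A minimal matching pair might be:

qsub -l nodes=4 my_script
# ...where my_script contains:
mpiexec -n 4 /path/to/foo/foo.exe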

And that's about all there is…
Final Notes:
The Intel compilers are also available.
The PGI tools include a debugger, PGDBG, and a profiler, PGPROF. These are highly recommended for debugging and profiling parallel code.
Limit yourself to 3 jobs at a time.