Introduction to Parallel Computing. Presented by the Division of Information Technology, Computer Support Services Department, Research Support Group.

Presentation transcript:

Introduction to Parallel Computing
Presented by the Division of Information Technology, Computer Support Services Department, Research Support Group

Introduction to Parallel Computing

Outline
Introduction
How to Use the UTPA High Performance Computer
Customizations of the User Environment
Finding and Using Applications
Building Applications
Running Applications and Getting Results
Reference Files

Introduction to Parallel Computing

Introduction
Parallel computing allows multiple computations to be carried out simultaneously, which speeds up the overall computation. Large problems can be divided into smaller ones that are solved concurrently by several processors or computers, with the output of these computations gathered at a single collection point. Parallel computing is available in many forms; the most common are multiprocessor systems and multiple-computer clusters. The UTPA cluster, bambi.utpa.edu, incorporates both forms using well-known parallel computing algorithms and mechanisms.

Introduction to Parallel Computing

Introduction Continued
Two major parallel computing mechanisms:
- SMP (Symmetric Multiprocessing) works with multiprocessor computers, enabling any CPU to work on any task in memory; tasks can be moved between processors to balance the workload. SMP directives are placed in the program code and are enabled during compilation by compiler flags (a short sketch follows this slide).
- MPI (Message Passing Interface) is a communications protocol that lets computers communicate and is used to program parallel applications. MPI provides the machinery for processes mapped to various computers and servers to execute in parallel. MPI calls can be placed in source code and compiled (for example with Open MPI) into MPI-based executables.
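The slides describe SMP directives only abstractly; as a concrete illustration, here is a minimal C sketch using OpenMP, the standard directive-based mechanism for SMP programming in C (this example and the -fopenmp flag are illustrative assumptions, not taken from the slides):

/* smp_sum.c - minimal OpenMP sketch (illustrative only).
 * Compile with the OpenMP compiler flag enabled:
 *   gcc -fopenmp -o smp_sum smp_sum.c
 */
#include <stdio.h>

int main(void)
{
    long sum = 0;
    long i;

    /* This directive asks the compiler to split the loop iterations
     * across the available CPUs and combine the partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (i = 1; i <= 1000000; i++)
        sum += i;

    printf("sum = %ld\n", sum);
    return 0;
}

Compiled without -fopenmp, the directive is ignored and the program runs serially, which illustrates the point above: SMP behavior is enabled at compilation time by compiler flags.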

Introduction to Parallel Computing

bambi.utpa.edu Specs
Nodes: 2 management, 1 storage, 55 compute
Network media: Gigabit Ethernet between all nodes
Compute nodes: Dell 1950, with 2 dual-core 2.33 GHz Xeon CPUs and 4 GB memory
Management nodes: Dell 2950, with 2 dual-core 2.33 GHz Xeon CPUs and 4 GB memory
Disk array: Dell MD, ~2 TB storage
OS: Red Hat Enterprise Linux
Scheduler: PBS Pro

Introduction to Parallel Computing

How to Use the UTPA High Performance Computing Cluster (HPCC)
The following pages give a step-by-step explanation of how to use the UTPA cluster, with some examples to get users started. Feel free to use these examples as templates for your own work. All users should submit jobs through the scheduler; it handles jobs in an orderly manner and gives all users equal access to HPC resources. If you have not received a username and password for the UTPA cluster, consult the website for information on obtaining an account.

Introduction to Parallel Computing

How to Use the UTPA High Performance Computing Cluster (HPCC)
Logging into the Cluster
Access to the UTPA cluster is gained by using a secure shell (ssh) client to log in. ssh client and server programs are part of most UNIX/Linux operating systems (OS); PuTTY can be used as an ssh client on Windows machines. ssh expects a valid username and password to allow access. An X server is necessary for displaying graphics produced on a remote host locally. Linux/UNIX systems usually come with an X server program; Xming is a freeware X server that can be used with Microsoft Windows.

Introduction to Parallel Computing

How to Use the UTPA High Performance Computing Cluster (HPCC)
Logging into the Cluster
Linux/UNIX terminal login:
% ssh username@bambi.utpa.edu
Linux/UNIX terminal login with X forwarding:
$ ssh -l username -X bambi.utpa.edu
Linux/UNIX terminal login with verbose output:
> ssh -v username@bambi.utpa.edu

Introduction to Parallel Computing

How to Use the UTPA High Performance Computing Cluster (HPCC)
Logging into the Cluster
Windows login using an ssh client (putty):
1) Start putty; in the putty window, enter bambi.utpa.edu as the host and select Open.
2) In the ssh terminal, at the "login as:" prompt, type your username.
3) At the "password:" prompt, type your password.
4) The system will log you into your account. You should see a prompt ending in "~]$".

Introduction to Parallel Computing

How to Use the UTPA High Performance Computing Cluster (HPCC)
Using Xming for graphics on the cluster
Acquire Xming from its download page; the license is free to use this software. Run Xming as follows:
1) Click the Xming icon on the desktop or in the Start menu.
2) Open a putty session and enter a hostname or IP address, or load a saved session.
3) Click the '+' next to 'SSH' in the left pane of the putty menu.
4) Click X11.
5) Check the 'Enable X11 forwarding' box.
6) Click Open.
7) Log into the putty terminal with your username and password.
8) At the terminal prompt, type /usr/X11R6/bin/xclock; a clock should appear on the desktop or in the task bar. If this works, your account is ready for graphics; if not, talk to the systems administrator.

Introduction to Parallel Computing

How to Use the UTPA High Performance Computing Cluster (HPCC)
Samba login
1) On the Windows desktop, right-click 'My Computer'.
2) Select 'Map network drive'.
3) In the popup menu, select a drive letter (x, y, z).
4) For the folder, enter \\<samba-server>\username, substituting the cluster's Samba server address.
5) Enter your username and password in the popup menu.
6) Click Finish or OK.
7) Your account folder should appear on the desktop. The folder can be used just like any other Windows folder.

Introduction to Parallel Computing

Customizations of the User Environment
1) The user environment is controlled by:
Environment variables – $HOME, $PATH, $USER, etc.
User files – .bashrc, .profile, .bash_profile (bash); .cshrc, .tcshrc, .login (csh, tcsh).
System files – /etc/bashrc, /etc/profile, /etc/csh.cshrc.
2) Setting environment variables and shell variables:
set VARNAME=value (csh, tcsh); add to .cshrc or .tcshrc for permanency.
setenv VARNAME value (csh, tcsh); add to .cshrc or .tcshrc for permanency.
export VARNAME=value (bash, sh, ksh); add to .bash_profile, .profile, or .bashrc for permanency.
3) Displaying variable values:
$ echo $VARIABLE (sh, bash)
% printenv (csh)
> env (tcsh)
(A short C sketch after this slide shows how a compiled program reads these variables.)
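Item 2 above sets variables at the shell level; a compiled program sees them at run time through the C library. The following minimal sketch (an illustration, not from the slides) prints two of the variables listed above:

/* env_example.c - minimal sketch (illustrative only) showing how a
 * program reads environment variables such as $HOME and $USER.
 * Build: gcc -o env_example env_example.c
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *home = getenv("HOME");  /* NULL if the variable is unset */
    const char *user = getenv("USER");

    printf("HOME = %s\n", home ? home : "(not set)");
    printf("USER = %s\n", user ? user : "(not set)");
    return 0;
}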

Introduction to Parallel Computing

Customizations of the User Environment Continued
4) Configuring paths in a user's account:
set path=(/newpath $path) (csh, tcsh); add to .cshrc for permanency.
export PATH="/newpath:$PATH" (bash); add to .bashrc or .profile for permanency.
5) Adding aliases:
alias aliasname='newalias' (bash) # add to .bashrc or .bash_profile for permanency.
alias aliasname 'newalias' (csh, tcsh) # add to .cshrc or .tcshrc for permanency.
6) Getting online help with man pages from the command line:
$ man command
Example: % man tutorial

Introduction to Parallel Computing

Finding and Using Applications
1) Finding executables and commands with UNIX/Linux tools:
% which command (the command must be located in $path or $PATH)
$ find ./ -name "commandname" -ls
> locate command
2) Modulefiles and the module command:
% module avail ; list available modules
$ module list ; list loaded modules
% module load hpc/openmpi ; load the openmpi module
> module unload intel ; unload the intel module

Introduction to Parallel Computing

Building Applications
Two types of applications available on the HPC cluster are interpreted programs, such as Python scripts, and compiled programs, such as C source code compiled with the gcc or Intel compiler into an executable. A third type is the vendor-prepared and vendor-provided application, which can be installed and run on the cluster.
1) Build examples:
% gcc -o someprog1 someprog1.c ; compile C source code into an executable.
$ mpicc -o mpiprog1 mpiprog1.c ; compile MPI-based source code into an executable (a sample source sketch follows this slide).
2) Interpreted programs:
$ someprog.py ; a Python script; set permissions to make it executable.
% someprog.pl ; a Perl script; set permissions to make it executable.
(Java programs, by contrast, are compiled with javac and run through the java command rather than executed directly.)
3) Vendor applications:
Scilab, Matlab, NWChem
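The build example above compiles mpiprog1.c with mpicc. The slides do not show that source file, but a minimal MPI hello-world sketch of the kind it might contain looks like this (illustrative only; its output matches the sample o-file shown later in this presentation):

/* mpiprog1.c - minimal MPI sketch (illustrative only). Each process
 * prints its rank and the total number of processes.
 * Build: mpicc -o mpiprog1 mpiprog1.c
 * On the cluster, submit it through the scheduler rather than
 * running it directly.
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count */

    printf("Hello, World! I am %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down MPI cleanly */
    return 0;
}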

Introduction to Parallel Computing

Running Applications and Getting Results
Central to running applications on the cluster is getting the results of those runs. In the previous sections, applications were run on the cluster 'raw', i.e. standalone, without concern for the resource requirements of other cluster users. On a high-activity cluster, running applications without scheduler oversight is a recipe for disaster and will not be permitted. This section explains and demonstrates running jobs via the PBS Pro scheduler.

Introduction to Parallel Computing

Running Applications and Getting Results
The Scheduler
PBS Pro is the scheduler used on the UTPA High Performance Computing (HPC) cluster. Scheduler command syntax may seem foreign at first, but with practice it becomes second nature. Scheduler scripts and information can be found in /usr/local/examples/sched_scripts. If you have a scheduler script to share, put it in that directory; if you have generalized code to share, put it in the /usr/local/examples directory.

Introduction to Parallel Computing

Running Applications and Getting Results
1) Running jobs on the cluster
Application jobs must be submitted to the scheduler for execution, giving all users fair access to cluster resources.
2) Submitting jobs to the scheduler
% qsub testscript ; submit testscript to the PBS Pro scheduler.
$ qsub -V -S /bin/csh somescript ; submit somescript to the scheduler, passing along the environment and shell of the submit node.

Introduction to Parallel Computing

Running Applications and Getting Results
Scheduler Scripts
Script #1: testscript, to be used for scheduler submission (example MPI script).

#!/bin/bash ## interpreter (must be the first line of the script)
#PBS -N name_of_parallel_job
#PBS -l select=4:ncpus=4 ## use 4 compute nodes with 4 cpus on each.
#PBS -S /bin/bash ## use the /bin/bash interpreter.
#PBS -V ## pass the environment to the compute nodes.
#PBS -M your-email-address ## address for status email.
#PBS -q workq ## use the queue named workq.
#PBS -o /export/home/user-name/output ## output file location.
#PBS -e /export/home/user-name/errors ## error file location.
echo "I ran on:"
cat $PBS_NODEFILE ## send the nodes file to stdout.
## cd to your execution directory first
cd ~
## use mpirun to run the MPI binary
mpirun ./your_mpi_program

Introduction to Parallel Computing

Running Applications and Getting Results
1) Getting job status
$ qstat -s ## view job run status.
% tracejob JOBID ## get detailed status of a job.
2) Displaying output or results
Output should echo to the user terminal or be stored in an e-file (errors) and an o-file (output results).
Example o-file from the testjob run of an MPI hello-world program (like the sketch in Building Applications):
Hello, World! I am 0 of 4
Hello, World! I am 1 of 4
Hello, World! I am 3 of 4
Hello, World! I am 2 of 4

Introduction to Parallel Computing

References
There are many HPC resources, web pages, and USENET groups available. Listed here are a few that may assist you:
List of the world's fastest and most powerful computers
Distributed computing theory
MPI
USENET parallel computing newsgroups: comp.parallel, comp.parallel.mpi, comp.parallel.pvm, comp.programming.threads
UTPA HPCC website

Introduction to Parallel Computing

Files
/usr/local/examples