Hands on training session for core skills

HPC Roadshow: Hands on training session for core skills

Overview
- How to Access the System
- Software Environment: Modules
- How to Find Cluster Status and Job Summary
- Compiling and Submitting Jobs: Examples

How to access the system
Log in to the Supercomputing Training Portal (http://supercomputing.cyi.ac.cy) with the username and password that have been sent to you.

How to access the system
Open a terminal in a new tab.

How to access the system
Click on the terminal icon in the new tab and you should be logged in to Euclid.

Environment Setup with Modules
The software environment on LinkSCEEM systems is managed via Modules. Modules facilitate the task of updating applications and provide a user-controllable mechanism for accessing software revisions and controlling combinations of versions.

module avail      # lists available modules
module list       # lists currently loaded modules
module load x     # loads a specific module
module unload x   # unloads a specific module
module help x     # help on a specific module
module purge      # unloads all loaded modules
module show x     # lists the full path of the modulefile and all of the environment changes it will make if loaded
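A typical workflow might look like the following sketch; the goolf toolchain name is taken from the exercises later in this session, other module names on Euclid may differ:

module purge                # start from a clean environment
module avail                # check which modules are installed
module load goolf/1.4.10    # load the GCC/OpenMPI toolchain used in Exercise 1
module list                 # confirm the module and its dependencies are loaded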

Modules on Euclid

Hands on exercises
Copy the hands-on exercises directory into your home directory, then go into the examples directory:

cp -r /opt/examples/ .
cd examples

Hands on Exercise 1
Hello World, C code:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int size, rank;
    int length;
    char name[MPI_MAX_PROCESSOR_NAME];   /* buffer must hold at least MPI_MAX_PROCESSOR_NAME characters */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &length);
    printf("Hello MPI World! Process %d out of %d on %s\n", rank, size, name);
    MPI_Finalize();
    return 0;
}

Hands on Exercise 1
A job submission script defines all the commands required to submit a job to an HPC resource:

#!/bin/bash
#SBATCH --job-name=hello-world
#SBATCH --nodes=2                 # 2 nodes x 8 tasks per node = 16 MPI ranks
#SBATCH --ntasks-per-node=8
#SBATCH --time=00:01:00
#SBATCH --error=hello.out
#SBATCH --output=hello.out

module load goolf/1.4.10
mpirun -np 16 hello               # matches the 16 tasks requested above

Hands on Exercise 1
Load the goolf module (which includes the GCC compiler, OpenMPI, OpenBLAS, ScaLAPACK and FFTW), compile the code, submit the job to the queue, check the queue status, and check the contents of the output file:

module load goolf/1.4.10
mpicc hello.c -o hello
sbatch hello.sub
squeue
cat hello.out
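With 2 nodes and 8 tasks per node, hello.out should contain one line per MPI rank, in no particular order; the host names below are only illustrative:

Hello MPI World! Process 0 out of 16 on node01
Hello MPI World! Process 5 out of 16 on node01
Hello MPI World! Process 12 out of 16 on node02
(one line for each of the 16 ranks)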

Hands on Exercise 2
Hello World, CUDA code: the program takes the string "Hello ", prints it, then passes it to CUDA together with an array of offsets. The offsets are added to the characters in parallel to produce the string "World!" (a sketch of such a kernel is shown after the job submission script below).

Job submission script:

#!/bin/bash
#SBATCH --job-name=hello-world-cuda
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --time=00:01:00
#SBATCH --gres=gpu:2              # request 2 GPUs on the node
#SBATCH --error=hello-cuda.out
#SBATCH --output=hello-cuda.out

module load CUDA/5.5.22
./hello-cuda
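The exercise code hello.cu is not reproduced in the transcript; the following is a minimal sketch matching the description above. The kernel name, offset values and overall structure are assumptions, so the actual course code may differ:

#include <stdio.h>
#include <cuda_runtime.h>

/* Each thread adds one offset to one character, turning "Hello " into "World!" */
__global__ void build_world(char *str, const int *offsets)
{
    int i = threadIdx.x;
    str[i] += offsets[i];
}

int main(void)
{
    char str[] = "Hello ";
    /* character-by-character differences between "World!" and "Hello " */
    int offsets[] = { 'W'-'H', 'o'-'e', 'r'-'l', 'l'-'l', 'd'-'o', '!'-' ' };
    const int n = 6;

    printf("%s", str);                     /* prints "Hello " */

    char *d_str;
    int  *d_off;
    cudaMalloc((void **)&d_str, sizeof(str));
    cudaMalloc((void **)&d_off, sizeof(offsets));
    cudaMemcpy(d_str, str, sizeof(str), cudaMemcpyHostToDevice);
    cudaMemcpy(d_off, offsets, sizeof(offsets), cudaMemcpyHostToDevice);

    build_world<<<1, n>>>(d_str, d_off);   /* one thread per character */

    cudaMemcpy(str, d_str, sizeof(str), cudaMemcpyDeviceToHost);
    cudaFree(d_str);
    cudaFree(d_off);

    printf("%s\n", str);                   /* prints "World!" */
    return 0;
}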

Hands on Exercise 2
Load the CUDA module, compile the code, submit the job to the queue, check the queue status, and check the contents of the output file:

module load CUDA/5.5.22
nvcc hello.cu -o hello-cuda
sbatch hello-cuda.sub
squeue
cat hello-cuda.out
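If the job ran successfully, hello-cuda.out should simply contain the assembled greeting (assuming code along the lines of the sketch above):

Hello World!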

Thank you