HPC Roadshow
Hands-on training session for core skills
Overview
- How to Access the System
- Software Environment: Modules
- How to Find Cluster Status and Job Summary
- Compiling and Submitting Jobs: Examples
How to access the system
Log in to the Supercomputing Training Portal (http://supercomputing.cyi.ac.cy) with the username and password that have been sent to you.
How to access the system Open a terminal in a new tab
How to access the system Click on the terminal icon in the new tab and you should be logged in to Euclid
Environment Setup with Modules
The software environment on LinkSCEEM systems is managed via Modules. Modules simplify the task of updating applications and provide a user-controllable mechanism for accessing software revisions and controlling combinations of versions.

module avail       # lists available modules
module list        # lists currently loaded modules
module load x      # loads a specific module
module unload x    # unloads a specific module
module help x      # help on a specific module
module purge       # unloads all loaded modules
module show x      # shows the full path of the modulefile and all of the environment changes the modulefile will make if loaded
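Under the hood, loading a module mostly edits environment variables such as PATH so that a particular software version is found first. A minimal sketch of the effect (the installation path here is hypothetical, not the actual location on Euclid):

```shell
# Hypothetical sketch: roughly what "module load" does for a toolchain --
# it prepends the toolchain's directories to the relevant search paths.
export PATH="/opt/sw/goolf/1.4.10/bin:$PATH"
export LD_LIBRARY_PATH="/opt/sw/goolf/1.4.10/lib:${LD_LIBRARY_PATH:-}"

# The first PATH entry is now the toolchain's bin directory,
# so its compilers shadow any system-wide ones.
echo "$PATH" | cut -d: -f1
```

`module unload` reverses exactly these changes, which is why it is safer than editing PATH by hand.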
Modules on Euclid
Hands-on exercises
Copy the hands-on exercises directory into your home directory, then go into the examples directory:

cp -r /opt/examples/ .
cd examples
Hands-on Exercise 1
Hello World – C code:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int size, rank;
    int length;
    char name[80];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &length);
    printf("Hello MPI World! Process %d out of %d on %s\n", rank, size, name);
    MPI_Finalize();
    return 0;
}
Hands-on Exercise 1
A batch script defines all the commands and resources required to submit a job to an HPC resource:

#!/bin/bash
#SBATCH --job-name=hello-world
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --time=00:01:00
#SBATCH --error=hello.out
#SBATCH --output=hello.out

module load goolf/1.4.10
mpirun -np 16 ./hello
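Note how the resource request and the launch line agree: the number of ranks given to mpirun should match the product of --nodes and --ntasks-per-node, otherwise the job either oversubscribes or wastes the allocated cores. A quick check:

```shell
# 2 nodes x 8 tasks per node = 16 MPI ranks, matching "mpirun -np 16"
nodes=2
ntasks_per_node=8
echo $(( nodes * ntasks_per_node ))   # prints 16
```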
Hands-on Exercise 1
Load the goolf module, which includes the GCC compiler, OpenMPI, OpenBLAS, ScaLAPACK and FFTW. Compile the code, submit the job to the queue, check the queue status, then check the contents of the output file:

module load goolf/1.4.10
mpicc hello.c -o hello
sbatch hello.sub
squeue
cat hello.out
Hands-on Exercise 2
Hello World – CUDA code: takes the string "Hello ", prints it, then passes it to CUDA along with an array of offsets; the offsets are added in parallel to produce the string "World!".
Job submission script:

#!/bin/bash
#SBATCH --job-name=hello-world-cuda
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --time=00:01:00
#SBATCH --gres=gpu:2
#SBATCH --error=hello-cuda.out
#SBATCH --output=hello-cuda.out

module load CUDA/5.5.22
./hello-cuda
Hands-on Exercise 2
Load the CUDA module, compile the code, submit the job to the queue, check the queue status, then check the contents of the output file:

module load CUDA/5.5.22
nvcc hello.cu -o hello-cuda
sbatch hello-cuda.sub
squeue
cat hello-cuda.out
Thank you