1
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing A BRIEF INTRODUCTION TO HIGH PERFORMANCE COMPUTING
2
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING So, what is it about? Along with theory and experimentation, computational methods are considered a foundation of science Increases in computational capacity have led scientists to tackle bigger problems The problems scientists address have grown in scale and scope
3
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING Many scientific problems require powerful computers to address them computationally In fact, –many scientific problems have outgrown the capacity of normal computers
4
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING So, what do we do? Buy a faster computer! Moore’s Law – real one and the alleged one http://www.drgoulu.com/2008/06/19/moore-toujours/
5
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING There are some things chip manufacturers can do –faster clocks, bigger caches, parallel instruction pipelines, hyperthreading We needed a new strategy Instead of ramping up processor clock speed… How about throwing more processors at the computational problems?
6
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING High Performance Computing systems are (to a large extent) parallel computing systems. There are several ways to do this; two primary ones: –SMP (shared memory) –Distributed memory
7
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing SHARED MEMORY SYSTEMS Multiple processors connected to and sharing the same pool of memory (SMP) Every processor has, potentially, access to and control of every memory location
8
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing SHARED MEMORY COMPUTERS (SMP) [Diagram: several processors connected to a single shared memory]
9
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing DISTRIBUTED MEMORY SYSTEMS Multiple processors each with their own memory Interconnected to share/exchange data, processing Modern architectural approach to supercomputers Supercomputers and Clusters similar
10
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HPC – DISTRIBUTED MEMORY [Diagram: several processor/memory nodes connected by an interconnect]
11
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HPC - HYBRID DISTRIBUTED MEMORY WITH SMP [Diagram: multi-processor (SMP) nodes, each with its own memory, connected by an interconnect]
12
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HPC - HYBRID DISTRIBUTED MEMORY WITH SMP & COPROCESSORS (ACCELERATORS) [Diagram: multi-processor (SMP) nodes with attached coprocessors, connected by an interconnect]
13
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HPC - HYBRID DISTRIBUTED MEMORY WITH SMP – COPROCESSORS (ACCELERATORS) But what are coprocessors? –Special processors that take on some of the computational work –GPUs –Intel Xeon Phi
14
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing SOFTWARE MODELS FOR PARALLEL COMPUTING How do we make a bunch of processors or cores work on the same problem? How do we get better computational performance by doing this?
15
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing FLYNN’S TAXONOMY Single Instruction/Single Data - SISD Multiple Instruction/Single Data - MISD Single Instruction/Multiple Data - SIMD Multiple Instruction/Multiple Data - MIMD Single Program/Multiple Data - SPMD
16
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing SOFTWARE MODELS FOR PARALLEL COMPUTING SISD – straightforward, one step at a time (not parallel) SIMD – data parallel: vector processors, GPUs MISD – ? hard to point to real examples of this MIMD – multiple instruction streams operating on different data (typical parallel computer) …but wait
17
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing SOFTWARE MODELS FOR PARALLEL COMPUTING SPMD – Single Program, Multiple Data All processors/cores have the same code Processors/cores have their own (independent) data All processors/cores execute the same code, but not necessarily in sync.
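A minimal sketch of the SPMD idea (hypothetical C fragment; rank and nprocs stand for the process ID and process count that MPI provides later in these slides): every copy runs the identical code, but each works on its own slice of the data.
void spmd_work(double *data, int n, int rank, int nprocs)
{
    int chunk = n / nprocs;            /* split the data evenly                 */
    int start = rank * chunk;          /* each copy takes its own slice         */
    int end   = (rank == nprocs - 1) ? n : start + chunk;  /* last copy takes the remainder */
    for (int i = start; i < end; i++)
        data[i] = data[i] * 4 + 20;    /* same code, different data             */
}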
18
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing SOFTWARE FOR PARALLEL COMPUTING Where does the parallel (SPMD) code come from? –Purchase/license – Fluent, Accelrys,… –Open source, community developed – Galaxy, OpenFOAM, LAMMPS, R, Octave,… –Add-ons to other packages – Matlab,… –Write your own
19
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing SOFTWARE FOR PARALLEL COMPUTING Write your own? –C, C++, Fortran (with extensions!) –Java – NO! –Python – yes, sort of –Some special languages
20
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING- A NOTE ABOUT PERFORMANCE Speed-up –Speedup(n processors) = exectime(1 processor)/exectime(n processors) ** Culler, Singh and Gupta, Parallel Computer Architecture: A Hardware/Software Approach –Example – 1 processor exec time = 10 sec, 2 processors exec time = 5 sec. Speed-up = 2 –Linear Speed-up, Super Linear Speed-up
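Another illustrative calculation (numbers made up for this example): if the 1-processor time is 10 sec and the 16-processor time is 1 sec, then Speedup(16) = 10/1 = 10, which is less than 16, so sub-linear. A speed-up greater than the processor count (super-linear) occasionally occurs, for example when each processor's share of the data fits in cache.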
21
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING- A NOTE ABOUT PERFORMANCE Amdahl's Law –Every algorithm has a serial (sequential) portion; the remaining portions may be executed in parallel –Overall speed-up is limited by the sequential portion and the worst-performing parallel portion: adding processors cannot shrink the time spent in the serial part
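For reference, the standard form of Amdahl's Law (notation added here, not from the slides): if s is the fraction of the execution that must remain serial, then Speedup(n) <= 1 / (s + (1 - s)/n), so even with unlimited processors the speed-up approaches 1/s. For example, with s = 0.1 (10% serial) the speed-up can never exceed 10.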
22
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING So what do we need to do? What do we need to know?
23
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING Linux –A handful of commands, tools and utilities –A handy text editor An application, or Skills in programming –Using parallelization tools, libraries, etc. PBS/Torque/Moab
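For concreteness, the sort of everyday commands meant by "a handful of commands, tools and utilities" above might include (an illustrative selection, not an official list):
ls, cd, pwd         - list files, change directory, show where you are
cp, mv, rm, mkdir   - copy, move/rename, delete files; make directories
less, cat, grep     - view and search text files
nano, vi/vim        - text editors for writing code and job scripts
man <command>       - read the manual page for any command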
24
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING A very, very short course on C –c is a very efficient language, available in most computer environments. –Free and commercial compilers available –Many tools to help build software in c
25
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING A very, very short course on C –C is a procedural language: (almost) all code is organized into functions –C is case sensitive; case matters –C is a strongly typed language int myage; float currenttemp; –Statements end with ; –group statements with { }
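A tiny illustration of those rules (hypothetical variable names, wrapped in a throwaway function so it compiles on its own):
void example(void)
{                                /* { } groups statements into a block       */
    int myage;                   /* strongly typed: an integer variable      */
    float currenttemp;           /* a floating-point variable                */
    myage = 21;                  /* each statement ends with ;               */
    currenttemp = 72.5f;         /* note: Currenttemp would be a DIFFERENT   */
}                                /* variable - case matters                  */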
26
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING A very, very short course on C int main (void) { return 0; }
27
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING A very, very short course on C Lab – change the previous program to print a message on your terminal screen hint… int day = 1; printf("Hello, I am having so much fun on day %d\n", day); oh, one more thing! (#include <stdio.h>)
28
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING A very, very short course on C So, how do you run this thing? first, compile it (translate it from human-readable source to computer instructions): gcc very2.c -o very2.out Then run it: ./very2.out
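A slight variation worth knowing (standard gcc options, not specific to these slides): adding -Wall turns on the compiler's warnings, which catch many common beginner mistakes.
gcc -Wall very2.c -o very2.out
./very2.out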
29
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING A very, very short course on C –so let's walk through this code
#include <stdio.h>

int main (void)
{
    int day = 1;
    printf("Hello, I am having so much fun on day %d.\n", day);
    return 0;
}
30
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/time.h>

int main (void)
{
    /* static: 5000 x 5000 ints (~100 MB) would overflow the default stack */
    static int matrix[5000][5000];
    int i, j;
    long walltime;
    struct timeval tim;
    double starttime = 0, endtime = 0;

    printf("Start\n");
    srand(time(NULL));

    gettimeofday(&tim, NULL);
    starttime = tim.tv_sec*1. + (tim.tv_usec/1000000.);

    for (i = 0; i < 5000; i++) {
        for (j = 0; j < 5000; j++) {
            matrix[i][j] = rand() % 1000;
        }
    }

    /* more goes here */

    gettimeofday(&tim, NULL);
    endtime = tim.tv_sec*1. + (tim.tv_usec/1000000.);

    for (i = 0; i < 10; i++) { printf("%d, ", matrix[i][0]); }
    printf("\nFinished - %30.10f seconds elapsed\n", endtime - starttime);
    return 0;
}
31
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/time.h>

int main (void)
{
    /* static: keeps the large array off the (limited) stack */
    static int matrix[1500][1500];
    int i, j;
    long walltime;
    struct timeval tim;
    double starttime = 0, endtime = 0;

    printf("Start\n");
    srand(time(NULL));

    gettimeofday(&tim, NULL);
    starttime = tim.tv_sec*1. + (tim.tv_usec/1000000.);

    for (i = 0; i < 1500; i++) {
        for (j = 0; j < 1500; j++) {
            matrix[i][j] = rand() % 1000;
        }
    }

    /* do some work here */
    for (i = 0; i < 1500; i++) {
        for (j = 0; j < 1500; j++) {
            matrix[i][j] = matrix[i][j] * 4 + 20;
        }
    }

    gettimeofday(&tim, NULL);
    endtime = tim.tv_sec*1. + (tim.tv_usec/1000000.);

    for (i = 0; i < 10; i++) { printf("%d, ", matrix[i][0]); }
    printf("\nFinished - %30.10f seconds elapsed\n", endtime - starttime);
    return 0;
}
32
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING PBS/Torque/Moab –What are these? –PBS/Torque is a system resource management package: it keeps track of the cluster's compute resources and the jobs requesting them –Moab is a resource scheduling and management utility that sits on top of PBS/Torque and decides when and where jobs run
33
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING PBS/Torque Basic User Commands –qsub --- submits requests for HPC resources: "run my job" –qstat --- check on the status of your job (and everyone else's) –qdel --- removes a job from the queue –qalter --- change most attributes of a submitted job –qmove --- move a job to a different queue
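A few illustrative invocations of those commands (the job ID 15796, user name, queue name and walltime value are examples, borrowed loosely from the output shown later in these slides):
qsub simplejob.sh                      # submit the job script
qstat -u dem                           # show only user dem's jobs
qdel 15796                             # remove job 15796 from the queue
qalter -l walltime=00:20:00 15796      # ask for more wall time on a queued job
qmove training 15796                   # move the job to the training queue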
34
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING PBS/Torque Basic Job Script: simplejob.sh
#! /bin/bash
#PBS -N testjob
#PBS -l walltime=00:10:00
#PBS -m ae
#PBS -M don.mclaughlin@mail.wvu.edu
#PBS -q training
cd /home/dem
./simple.out
35
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING PBS/Torque Job Submission
qsub simplejob.sh
15796.mountaineer.mountaineer
qstat
…..
15796.mountaineer  testjob  dem  00:00:00  C  debug
36
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING PBS/Torque Job Submission …and what happened?
[dem@mountaineer ~]$ ls *15796*
testjob.e15796  testjob.o15796
[dem@mountaineer ~]$ more testjob.o15796
Start
144, 300, 423, 206, 856, 313, 265, 97, 664, 221,
Finished - 0.0144410133 seconds elapsed
37
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING PBS/Torque Job Submission …one more thing qsub -I (interactive jobs)
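An illustrative interactive request (the queue name and resource values are made up for this example):
qsub -I -q training -l nodes=1:ppn=4,walltime=00:30:00
When the resources are granted you get a shell prompt on a compute node; exiting that shell ends the job.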
38
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING PBS/Torque Job Submission Find out more man pages Tutorials –https://wikis.nyu.edu/display/NYUHPC/Tutorial+-+Submitting+a+job+using+qsub
39
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING – CREATING PARALLEL PROGRAMS MPI – Message Passing Interface –really a standard, not software, but –comes in many flavors (implementations): MPICH, MPICH2, Intel MPI, OpenMPI,… –There are other message passing protocols –Defines a standard for exchanging messages across processes working on the same problem –Accomplished in the form of function or subroutine calls in your code –MPI is not a language –MPI is a library –Can be called from and used by C, C++ and Fortran
40
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing BASIC MPI FUNCTIONS … but first include… –#include "mpi.h"
41
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing BASIC MPI FUNCTIONS int MPI_Init(int *argc, char ***argv) –Initializes MPI in the application –MPI_Init(&argc, &argv); int MPI_Finalize() –Closes/shuts down MPI for the application –MPI_Finalize();
42
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing BASIC MPI FUNCTIONS int MPI_Comm_size(MPI_Comm comm, int *size) –Returns the size of the group (number of processes) in size –MPI_Comm_size(MPI_COMM_WORLD, &nprocs); int MPI_Comm_rank(MPI_Comm comm, int *rank) –Returns the rank (ID) of this process in rank –MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
43
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing MPI SEND AND RECEIVE MPI_Send and MPI_Recv are the most elemental forms of MPI data communications Provide the core set of functions MPI_Send and MPI_Recv are blocking communications –Processing cannot proceed until the communication process is complete.
44
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing MPI_SEND – SEND A MESSAGE
int MPI_Send(
    void         *message,
    int           count,
    MPI_Datatype  datatype,
    int           dest,
    int           tag,
    MPI_Comm      comm)
45
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing MPI_SEND MPI_Send(a,1,MPI_FLOAT,myrank+1,11,MPI_COMM_WORLD) sends a single float to the next process in MPI_COMM_WORLD and attaches a tag of 11. MPI_Send(vect,100, MPI_FLOAT,5,12, MPI_COMM_WORLD); sends a vector of 100 floats to process 5 in MPI_COMM_WORLD and uses a tag of 12.
46
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing MPI_RECV
int MPI_Recv(
    void         *message,
    int           count,
    MPI_Datatype  datatype,
    int           source,
    int           tag,
    MPI_Comm      comm,
    MPI_Status   *status)
47
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing MPI_RECV MPI_Recv(x, 1, MPI_FLOAT, lsource, 11, MPI_COMM_WORLD, status); picks up a message with a tag of 11 from the source lsource in MPI_COMM_WORLD. The status of the transaction is stored in status. MPI_Recv(xarray, 100, MPI_FLOAT, xproc, 12, MPI_COMM_WORLD, status); picks up a message tagged 12 from the source xproc in MPI_COMM_WORLD. The status of the transaction is stored in status.
48
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing MPI_RECV - WILDCARDS MPI_ANY_SOURCE –lets MPI_Recv take a message from any source. Use as the source parameter MPI_ANY_TAG –lets MPI_Recv take a message regardless of its tag.
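Putting MPI_Send and MPI_Recv together, a minimal sketch of a blocking exchange between two processes (variable names and the tag value 11 are illustrative, not from the slides; run with at least two processes, e.g. mpirun -np 2):
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, nprocs;
    float a = 3.14f;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* process 0 sends one float to process 1, tagged 11 */
        MPI_Send(&a, 1, MPI_FLOAT, 1, 11, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* process 1 blocks here until the message arrives */
        MPI_Recv(&a, 1, MPI_FLOAT, 0, 11, MPI_COMM_WORLD, &status);
        printf("Process 1 received %f from process 0\n", a);
    }

    MPI_Finalize();
    return 0;
}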
49
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing OTHER MPI INFORMATION Appendix A of the Pacheco book Argonne National Laboratory –http://www-unix.mcs.anl.gov/mpi/ The MPI Forum –http://www.mpi-forum.org/ MPI Book –http://www.netlib.org/utk/papers/mpi-book/mpi-book.html
50
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING – CREATING PARALLEL PROGRAMS
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int numprocs, rank, namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(processor_name, &namelen);

    printf("Process %d on %s out of %d\n", rank, processor_name, numprocs);

    MPI_Finalize();
    return 0;
}
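To build an MPI program like this one, the usual approach (generic MPI tooling, not specific to any one cluster; the source file name hellompi.c is assumed to match the executable used in the next slide's job script) is the MPI compiler wrapper, which invokes the C compiler with the MPI include and library paths already set:
mpicc hellompi.c -o hellompi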
51
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING – CREATING PARALLEL PROGRAMS …but running in parallel on an HPC system means that you have to ask for different resources, i.e. a new PBS script
#! /bin/bash
#PBS -N testjob
#PBS -l walltime=00:01:00
#PBS -m ae
#PBS -M don.mclaughlin@mail.wvu.edu
#PBS -q hpc2013
#PBS -l nodes=1:ppn=4
cd /home/dmclaughlin/mpi
/shared/intel/impi/4.0.2.003/intel64/bin/mpirun -np 4 ./hellompi
52
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing HIGH PERFORMANCE COMPUTING – CREATING PARALLEL PROGRAMS Run this and you should get output like this
[dem@mountaineer ~]$ more testjob.o15803
Process 0 on compute-01-29 out of 4
Process 2 on compute-01-29 out of 4
Process 1 on compute-01-29 out of 4
Process 3 on compute-01-29 out of 4
53
WEST VIRGINIA UNIVERSITY High Performance and Scientific Computing FOR MORE INFORMATION PLEASE CONTACT: Don McLaughlin, Don.McLaughlin@mail.wvu.edu, (304) 293-0388 Nathan Gregg, Nathan.Gregg@mail.wvu.edu, (304) 293-0963