Introduction to High Performance Computing


1 Introduction to High Performance Computing
Using the MSU HPC
Patrick Bills, ICER Research Consultant
April 13, 2017

2 Goal of this workshop is to give you an understanding of...
- the nature of high-performance computing (HPC) and when to use it
- the overall structure of the MSU HPC Center (HPCC)
- the different ways (Xterm, ssh, RDP) of connecting to the HPCC
- the steps to run software on the MSU HPCC (the HPC workflow)
- managing files and data
- using GUI programs on the HPC for development or computation
- how to get help, training, or consultation
Use computation to "Reduce the Mean Time to Science" - Dirk Colbry, former director, MSU HPCC

3 How we will work
We will have several hands-on exercises. Use the red/green stickies to communicate:
- If you have trouble, put a red sticky on your computer.
- When you finish, put a green sticky on your computer.
- Before starting the next exercise, remove the stickies.
Bold console font indicates terminal commands for you to type in your terminal.
Shell commands are sometimes preceded by "$", representing the command prompt. Don't type the $:
$ ls /opt/software | wc -l
Script commands are sometimes preceded by "#", representing a comment. Don't type these:
# this is my script to count files
filecount=$(ls /opt/software | wc -l)
echo $filecount

4 Prerequisite to using HPC: Basic knowledge of Linux
There is a lot to cover, so we gloss over basic Linux concepts and commands, like:
- what the terminal or 'xterm' is
- what the shell (BASH) is, and what a shell script is
- using the shell to issue basic commands like ls, cd, mkdir newdir
- connecting to the HPCC with ssh
- files and folders: viewing and creating directories, dot and dot-dot ( . , .. ), file permissions
- environment variables such as $HOME
We only ask for familiarity, not expertise. Stop me if you need any explanation.
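As a quick refresher, the commands listed above can be practiced anywhere you have a shell; nothing below is HPCC-specific, and the throwaway directory and file names are invented for illustration:

```shell
#!/bin/bash
set -e                      # stop at the first error

workdir=$(mktemp -d)        # throwaway practice directory
cd "$workdir"

mkdir newdir                # create a directory
cd newdir                   # move into it
pwd                         # print the current working directory

touch notes.txt             # create an empty file
ls -l notes.txt             # long listing: permissions, size, date

cd ..                       # dot-dot: go up one level
ls -a                       # -a also shows the dot entries ( . and .. )

echo "my home is $HOME"     # an environment variable set for you by the shell

cd /                        # leave before deleting
rm -r "$workdir"            # clean up the practice directory
```

If every line runs without an error message, you have the Linux basics this workshop assumes.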

5 Requirements to use MSU's HPC:
Terminal emulation software for an ssh connection:
- Mac & Linux: terminal software included (Terminal.app in Applications/Utilities)
- Windows: install MobaXterm from USB or download it
An MSU HPCC "user account". Could be:
- an account requested on behalf of a faculty member (see wiki.hpcc.msu.edu)
- or an account just for this class
Most accounts created for class are temporary and will be disabled shortly after the class is over. We have generic "class accounts" in case there is a problem.

6 Quick run-through: how to connect
Before covering HPCC concepts, let's ensure your account is in working order.
Step 1. Log in to the HPCC with MobaXterm or Terminal and list files. Start your terminal program. If your netid is sparty, then type:
ssh sparty@hpcc.msu.edu  # connect to the hpcc 'gateway'
ls -l                    # can you list files in your home directory?
touch testfile           # can you write a new file in your home dir?
Can you complete these first steps to log in?

7 Successful connection: HPCC gateway
- HPCC welcome message
- hostname is "gateway"
- command prompt at the bottom
Note: MobaXterm lists your files along the left side, with the terminal on the right (the file panel is not present in the Mac terminal).

8 Getting Started: copy a simple example
After connecting to the hpcc, please type (don't type the comments):
ssh dev-intel           # connect to a dev node
mkdir examples          # create folder
cd examples             # go into folder
module load powertools  # hpc utilities
getexample helloworld   # copy example
cd helloworld           # go into folder
ls -l                   # list folder contents
What do you see? README hello.c hello.qsub
Extra: cat README and/or cat hello.c

9 Getting Started: Run these commands to Compile, Test, and Queue
The README file has instructions for compiling and running.
Compile the code into a new program called 'hello':
gcc hello.c -o hello
Test: run this hello program on this 'dev node'. Does it work?
./hello
It works! Submit it to the queue to run it on the cluster:
qsub hello.qsub
You should get a "job id" for this job. Wait. This 'job' was assigned a 'job id' and put into the 'queue.' Now we wait for it to be scheduled, run, and output files created...

10 Job Status
Did the job run? Let's check your 'queue':
qstat -u $USER
Copy the job id from the command above, and run this:
starttime <jobid>
Now... wait.

11 What is High Performance Computing and the MSU ICER HPCC?
Description of MSU’s HPCC and why you might use it

12 Why use the HPCC?
- "Takes too long on my computer" or "takes too long on my dept/library/college's server"
- You have large amounts of data to process (hundreds of gigabytes)
- Your program requires massive amounts of memory (64 GB - 6 TB)
- "We don't have shared computing/file resources available to us"
- "We no longer want to run it ourselves"
- You need specialized software (in a Linux environment)
- You need licensed software (Stata, Matlab, Gaussian, etc.)
- Collaboration: a computing commons for complex software configurations and large disk space
- Batch: run it and forget it. Add your program to the queue; the scheduler finds resources on the cluster and runs it for you
- Expert computer maintenance: let someone else keep the lights blinking
- Consulting: make the best use of software for your workflow; parallelism

13 Why NOT use the HPCC?
Note that some programs (that are not parallel), when accessing moderate amounts of data, are faster on your own computer than on the HPC.
- Running public-facing web sites or web-based services ('hosting')
  - HPC is a big target for hacking, so it stays behind a firewall (requires gateways)
  - This can be done, but not publicly (e.g. running web programs for your own use)
  - Other units on campus (e.g. IT Services) provide this infrastructure
- 24/7 business computing, file storage and access
  - HPCC systems and processes are optimized for research computation, not file sharing
  - We DO provide large file storage and some use that service, but uptime is not guaranteed
- We do not provide any desktop computing support or management
  - currently (Jan 2016) not compatible with Windows 10
- College/University business computing services complement HPCC research computation services
- Not all software is compatible

14 What is a compute cluster?
- a collection of (mostly) homogeneous computers
- shared file systems
- organized with networking infrastructure
- interconnected with high-speed networking
- people!
How many nodes in the cluster?
module load powertools
node_status                      # lists all nodes
node_status | grep ^lac | wc -l  # count the Laconia nodes
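The node-counting pipeline above is ordinary text filtering, so it can be tried without the cluster. This sketch runs the same grep/wc pattern against a small stand-in for node_status output (the hostnames are invented for illustration):

```shell
#!/bin/bash
set -e

# stand-in for `node_status` output: one node name per line
# (lac-### = Laconia/intel16 nodes; csn-### is a made-up older prefix)
nodes="lac-001
lac-002
lac-003
csn-001
csn-002"

# same pattern as on the slide:
# keep only lines starting with 'lac', then count them
echo "$nodes" | grep ^lac | wc -l   # -> 3
```

The `^` anchors the match to the start of the line, so a name like "my-lac-x" would not be counted.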

15 Types of Computing Services
Your Own Computer (laptop, desktop, lab)
- Services: files, computation, local servers; essentially anything. Local only (not available to others).
- How you use it: DIY. You install and control any software (compatible with your system or VM). Limited hardware and visibility.
Windows File Services / Departmental Servers
- Services: file storage, printer sharing, ID management. No computation or servers other than files.
- How you use it: limited, usually for business use; connect your local machine to shared folders.
"Cloud" (Amazon, shared hosting, virtual private servers)
- Services: computation and servers (database, web, big data), file storage, Linux or Windows.
- How you use it: DIY. Computation limited by $; public servers (web, database); file storage limited by $. Some ready-to-use configurations.
High Performance Computing
- Services: primarily parallel computation, shared and private file storage. No public servers (web, etc.), but you can run private servers with some work. Big data, huge files. LINUX ONLY.
- How you use it: system managed (not DIY), but users can install software if necessary. Extensive computation; huge files can be stored on a specialized file system.

16 HPC vs PC
                       Your Computer             2014 Cluster
Processors (sockets)   1                         2 per node
Cores                  4-8                       20, 28 or more per node; thousands per cluster
Speed (GHz)                                      2.7-3 GHz
Connection             campus ethernet,          "Infiniband",
                       1,000 Mbit/sec            50,000 Mbit/sec
Computers (nodes)      1 => 8 cores total        223 => 4,460 cores total
Users                                            ~1,200
Schedule               on demand                 24/7, queue
Each computing node in a cluster is a single computer.
High performance => running work in parallel, for long periods of time.

17 Start with a single advanced computer
Includes:
- 1-4 processors (CPUs or chips)
- N cores per processor (8, 16)
- 32 GB+ memory for variables (RAM)
- disk(s) to hold the files to read and write (1-6 TB)
- Linux operating system and software installed on disk
- a "rack mount" case for cabinet storage
This is a "compute node".

18 How to go faster? Parallelize
- many cores in a single CPU
- multiple CPUs per node
- many nodes communicating to work together
"Multiprocessing" = MP computing
- working computers = compute 'nodes' connected via a network
- each machine must be maintained, and data copied to each one
- software manages processes (workers, threads) in the grid
- applications must be written for grid computing


19 More parallel => racks of compute nodes
- connected via a network
- a special node for logging in = the head node, which controls the grid (Sun Grid Engine is a popular example)
Problems:
- machines must have the same OS and software
- data must be copied to all nodes
- susceptible to hacks
- difficult to share

20 Components of the HPCC: a cluster of homogeneous computers + high-speed networking + shared file systems + infrastructure + people

21 Glossary
A "node" is a single computer, usually with a few processors and many cores. Each node has its own host name/address, like 'dev-intel14'.
Types of nodes:
- gateway: a computer available to the outside world for you to log in to; "doorware" only, not enough to run programs. From here, one can connect to other nodes inside the HPCC network.
- dev-node: a computer you choose and log in to for setup, development, compiling, and testing your programs, with a configuration similar to the cluster nodes for the most accurate testing. Dev-nodes include dev-intel10, dev-intel14, and dev-intel14-k20 (which includes a k20 GPU add-in card).
- compute-nodes: run the actual computation; accessible via the scheduler/queue; you can't log in directly.
- accelerator nodes (gpu, phi): nodes within a cluster that have special-purpose hardware.

22 HPCC Workflow
- Get account: infrastructure
- Login: connect via ssh or other: gateway
- Transfer files: gateway(s)
- Work: data, code, compile, test: dev node
- Submit: qsub to scheduler
- "Run": your program runs on compute nodes
- Examine: collect output on dev nodes
Note: files are everywhere (the shared file systems are visible from every node)

23 HPCC Workflow Review
TRANSFER
scp mydata.dat sparty@hpcc.msu.edu:   # copy data to your home directory
CONNECT
ssh hpcc.msu.edu     # log in to gateway
ssh dev-intel        # select a development node immediately
DEVELOP
module load R        # load modules if necessary; see which modules are currently loaded first
nano myscript.R      # edit your program directly on the system
Rscript myscript.R   # test run the program; press Ctrl+C to stop long programs
QUEUE
nano myscript.qsub   # write the submission file
qsub myscript.qsub   # put your program in the queue to run
qstat -u $USER       # check your queue status
REVIEW OUTPUT
less myscript.o123455
REPEAT

24 Another Example of using software and submission scripts
We have several examples of ways to use programs, for you to copy and run yourself:
ls /opt/software/powertools/share/examples/   # lists all examples
mkdir ~/examples
cd ~/examples
getexample R_example   # won't work without loading the powertools module first
  (or: cp -r /opt/software/powertools/share/examples/R_example .)
cd ~/examples/R_example
ls -l
cat README
cat R_job.qsub

25 Exercise: Download, compile and run a program
Matrix multiplication in parallel:
cd ~
cd examples
mkdir calcdist
cd calcdist
curl -O                                   # download mxm_openmp.c
gcc -fopenmp -o mxm_c mxm_openmp.c -lm
./mxm_c                                   # run it
nano ./mxm_openmp.c                       # change 500 to 1000
gcc -fopenmp -o mxm_c mxm_openmp.c -lm    # recompile after editing
time ./mxm_c
export OMP_NUM_THREADS="8"
time ./mxm_c
# can you write a qsub file to submit this to the cluster?
# what will the nodes, ppn values be?

26 Example parallel qsub
#!/bin/bash -login
#PBS -l walltime=00:20:00
# how much memory?
#PBS -l mem=400mb
# specify resources needed
#PBS -l nodes=1:ppn=8
#PBS -m n
# change to current directory
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS="8"
# what is the program to run?
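The slide leaves the last line as a question. One completed version, sketched under the assumption that the program is the mxm_c binary from the matrix-multiplication exercise (this is a PBS submission file, interpreted by the scheduler rather than run directly):

```shell
#!/bin/bash -login
# resource requests: 20 minutes, 400 MB total memory, 8 cores on one node
#PBS -l walltime=00:20:00
#PBS -l mem=400mb
#PBS -l nodes=1:ppn=8
#PBS -m n                    # no email notifications

cd $PBS_O_WORKDIR            # start in the directory qsub was run from
export OMP_NUM_THREADS="8"   # match the ppn=8 request above

# the program to run: the OpenMP matrix-multiply built in the exercise
./mxm_c

qstat -f $PBS_JOBID          # append diagnostic info to the job's output file
```

Keeping OMP_NUM_THREADS equal to ppn means the program uses exactly the cores it reserved, no more and no fewer.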

27 Software on the HPCC
A diverse user base, and a need for historical software.
"Modules": a system that allows one to 'load' different versions and programs, e.g. module load OpenMPI
Key commands:
module avail – show available modules
module spider MPI – search modules for a keyword
module load OpenMPI – load a module
module list – list currently loaded modules
module unload OpenMPI – unload a module
module swap GNU/4.4.5 GNU/ – change an environment module
To maximize the different types of software and system configurations available to users, the HPCC uses a module system to set paths and libraries in Linux environment variables.

28 Using Modules: Example
# which version of Python do you need?
module spider Python
module spider Python/2.7.2
# after purging (no modules), find the path to Python with the Linux "which" command
module purge       # no modules
which python
python --version
module load python

29 Modules: some software requires base programs to be loaded
module purge           # remove all modules
module load R/3.3.0    # ... error: not found? Misleading; R/3.3.0 actually requires GNU/4.9
# base modules were not loaded. To find out what is needed, use the spider command:
module spider R/3.3.0
module load gnu/4.9
module load openmpi
module load R/3.3.0
module list            # which other modules did R/3.3.0 also need to load?

30 Default Modules re-load every time you connect
ssh dev-intel16
module list                  # ... what do you see? the default modules
module purge                 # remove all modules
module load Mathematica; module list
exit                         # return to gateway
ssh dev-intel14
module list                  # the default modules are back
You need to load all the modules you need for every session or job, including when you write a script to run your program.

31 Demonstration: Running Basic R code
- Download a sample R program from the Internet; save it as, e.g., arma.R
- Copy it to the HPCC with a command (scp arma.R sparty@hpcc.msu.edu:) or a GUI
- Connect to the hpcc with Remote Desktop, XQuartz, or MobaXterm
- Connect to a development node
- Load the software you need: module load R/3.2.0
- Try the script from the command line: Rscript arma.R
- Create a qsub submission: copy files from getexample R_example

32 dev-node for development = programming and interactive testing
Advantages of running interactively:
- You do not need to write a submission script; no waiting
- You can provide input to, and get feedback from, your programs as they are running
- Compile and test; needed to see if your program will work
Disadvantages of running interactively:
- All the resources on interactive nodes are shared between all users.
- Any single process is limited to 2 hours of CPU time. If a process runs longer than 2 hours, it will be killed.
- Programs that overutilize the resources on an interactive node (preventing others from using the system) can be killed without warning if they are affecting other users.
Your goal is to craft your program so it can be queued. It's OK (and common) to queue and fail.

33 The Queue
The system is busy and used by many people:
- you may want to run one program for a long time
- you may want to run many programs all at once, on multiple nodes or cores
To share, we have a 'resource manager' that takes in requests to run programs and allocates resources to them. You write a special script that tells the resource manager what you need and how to run your program, and 'submit to the queue' (get in line) with the qsub command.
We submitted our job to the queue using the command: qsub hello.qsub
And the 'qsub file' had the instructions to the scheduler on how to run it.

34 Submission of a Job to the Queue
A submission script is a mini program for the scheduler that has:
- a list of required resources
- all command-line instructions needed to run the computation, including loading all modules
- the ability for the script to identify arguments and collect diagnostic output (qstat -f)
The script tells the scheduler how to run your program on the cluster. Unless you specify otherwise, your program can run on any one of the 7,500 cores available.

35 Specifying Cluster Resources in the Submission Script
Another example of a qsub file:
cd ~/examples
getexample R_example
cd R_example
cat R_job.qsub
#PBS -l nodes=1:ppn=8       ppn = processors per node
#PBS -l walltime=04:00:00   total time reserved, HH:MM:SS
#PBS -l mem=200mb           TOTAL memory requested
#PBS -l feature=intel       optional specification of hardware
All #PBS lines must come at the top of the qsub file, before any program code. You can also combine options with commas on one line:
#PBS -l nodes=1:ppn=8,walltime=04:00:00,mem=2gb,feature=intel07

36 Unix Environment Variables Crucial For HPC Workflow
Explore the values of these variables using the 'echo' command:
$HOME $SCRATCH $PATH
$LD_LIBRARY_PATH   # important for compilers
$CPPFLAGS $LMOD_DEFAULT_MODULEPATH
The module system sets environment variables extensively to customize which software you will use. Try the following. What changes?
module purge; echo PATH=$PATH; module load GNU/4.9; echo PATH=$PATH
Do the same for the $LIBRARY_PATH variable.
There are many others; you don't need to understand lmod to work with it, other than the fact that it changes the environment.
Using env vars (the system's, or your own) helps your programs be more portable and flexible.
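A module load changes $PATH by prepending directories to it, and that mechanism can be seen with plain shell, no lmod required. In this sketch the directory /opt/software/demo/bin is made up for illustration:

```shell
#!/bin/bash
set -e

echo "PATH=$PATH"    # the search path before any change

# what a `module load` effectively does: prepend a software directory
# (/opt/software/demo/bin is an invented path, for illustration only)
export PATH="/opt/software/demo/bin:$PATH"

echo "PATH=$PATH"    # the new directory now comes first, so programs
                     # found there shadow same-named system programs
```

Because the shell searches $PATH left to right, whichever module prepended its directory last "wins" when two modules provide a program with the same name.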

37 Understanding environments, env vars, and shell will help you work successfully on the HPCC
Environment variables are set by the system, by programs, and by you.
Note: curly braces (${VAR}) are mostly optional, but are sometimes needed to delineate variable names.
echo $PATH
env               # lists all variables
env | grep HPCC   # MSU HPCC-specific vars
The shell start-up file .bashrc has some env settings... let's find them:
grep "=" ~/.bashrc   # find statements setting things with the = operator

38 Environment Variables Used by Scheduler
The scheduler/queue system creates many variables that your programs can use to adapt to their environment when they are run:
- runs happen on different kinds of computers (nodes)
- multiple runs of the same program would overwrite files unless you make custom file names
- you can set different run-time settings, say the number of loops to perform, for testing how long your program will take, e.g. short runs vs long runs
Commonly used variables set when your program runs via the queue/scheduler:
$PBS_JOBID       # your unique job number
$PBS_O_WORKDIR   # the directory you ran qsub from

39 Example submission script with Env Vars
Let's examine a submission script to see where vars are used. Go to our helloworld example directory, and "cat" the file to the screen:
cd ~/examples/helloworld
cat hello.qsub
Your file will look like the image on the right.

40 Example use of Job Environment variables
If you want to run repeated simulations that all save to a file, how can you make a unique file for each simulation?
1) Unique folder: in the job submission script, add
mkdir FOLDER-$PBS_JOBID   # make an output folder
cd FOLDER-$PBS_JOBID
# ... run program
2) Unique output file:
outfile=$PBS_JOBID-OUTPUT.dat
./myprogram > $outfile
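You can dry-run this pattern outside the scheduler by setting PBS_JOBID yourself; the scheduler normally sets it for you, and the job id and "program" below are stand-ins:

```shell
#!/bin/bash
set -e

# stand-in for the value the scheduler would set for a real job
PBS_JOBID="123456.mgr-04"

workdir=$(mktemp -d); cd "$workdir"

# 1) unique folder per job
mkdir "FOLDER-$PBS_JOBID"
cd "FOLDER-$PBS_JOBID"

# 2) unique output file per job; echo stands in for ./myprogram
outfile="$PBS_JOBID-OUTPUT.dat"
echo "simulation result" > "$outfile"

ls                    # -> 123456.mgr-04-OUTPUT.dat

cd /; rm -r "$workdir"   # clean up the dry-run directory
```

Because every job gets a distinct $PBS_JOBID, simultaneous runs of the same script can never clobber each other's folders or output files.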

41 Examine status of jobs
qsub <submission script> – submit a job to the queue. Returns the job ID, which looks like mgr04.i ; the id is the number part only.
qstat -u <NETID> – what are your jobs doing?
qdel <JOB ID> – delete a job from the queue
showq -u <USERNAME> – show the current job queue
checkjob <JOB ID> – check the status of the current job
showstart -e all <JOB ID> – show the estimated start time of the job
qstat -f <JOBID> – detail of a particular job; used at the end of all qsubs to write diagnostic info into the output file

42 Job completion: output files
By default the job will automatically generate two files when it completes (jobname is the name given in the qsub file):
- standard output file: jobname.o<jobid>
- standard error file: jobname.e<jobid>
You can combine these with the PBS join option in your qsub script:
#PBS -j oe
jobname.o<jobid> will then have both output and errors.
You can also change the output file name and/or folder:
#PBS -o /mnt/home/netid/mywork/myoutputfile.txt
#PBS -o $HOME/myjob/output-${PBS_JOBID}.txt

43 Selecting Nodes for your job
- Development nodes mimic cluster nodes: Intel14 vs Intel16, with GPU vs without GPU
- Jobs will run on any node that fits the requirements and is available, on either cluster (intel14 or Laconia) -- the entire available HPCC is at your disposal** (**special terms and conditions may apply)
- Some nodes are reserved by research groups that purchased them; only jobs with walltime <= 4 hours will run on these buy-in nodes
- Keep your walltime <= 4 hours to reduce your queue time
- Understand your program's needs (profiling, not covered here)
What is the current status of the nodes?
module load powertools
node_status
To use Intel16, add: #PBS -l feature=intel16

44 Time
Time to results = queue time + run time
- Walltime = time on the wall clock that all processes are running, not queue time
- Lower-walltime jobs are easier to schedule and will spend less time in the queue
- Faster or high-memory nodes (Intel14) spend less wall time but are in higher demand
How do you know how long your job will run?
- experience
- time a small data set
- queue it and see if it fails
- the command qstat -f <jobid> will print walltime
Walltime limit = 168 hours (1 week), but there are ways to restart work to run longer.

45 Exercise: Can you compile, run, and submit a program?
QSUB techniques: edit your qsub file and resubmit.
Challenges:
- Run on 8 nodes instead of 1? nodes=8:ppn=1
- Capture output to a file? mpirun -np 8 pi >pidata.txt
- Capture output to a unique file? outfile=pidata-$PBS_JOBID.txt ; mpirun -np 8 pi >$outfile
- Increase the number of iterations (#define DARTS), recompile, and re-submit? How much time is needed for 10X the iterations?
#!/bin/bash -login
# ALTERED QSUB
#PBS -l nodes=8:ppn=1,walltime=00:30:00,mem=16gb
#PBS -l feature=intel16
#PBS -j oe
cd $PBS_O_WORKDIR
module load openmpi
outfile=pidata-$PBS_JOBID.txt
mpirun -np 8 pi >$outfile
qstat -f $PBS_JOBID

46 Interpreting output from qstat
Job Id: mgr-04.i
Job_Name = fisibasesocialnetbuild
Job_Owner = billspat
resources_used.cput = 00:53:56
resources_used.energy_used = 0
resources_used.mem = kb
resources_used.vmem = kb
resources_used.walltime = 00:54:27
job_state = R
ctime = Thu May 19 13:14
Error_Path = dev-intel14-k20.i:/mnt/home/billspat/docs/fisibase-socialnet/db/fisibasesocialnetbuild.e
exec_host = qml-001/34
Hold_Types = n
Join_Path = oe
Keep_Files = n
Mail_Points = a
mtime = Thu May 19 13:24
Output_Path = dev-intel14-k20.i:/mnt/home/billspat/docs/fisibase-socialnet/db/fisibasesocialnetbuild.o
Priority = 0
qtime = Thu May 19 13:14
Rerunable = False
Resource_List.mem = 8gb
Resource_List.nodect = 1
Resource_List.nodes = 1:ppn=1
Resource_List.walltime = 03:00:00
session_id = 79730
Variable_List = PBS_O_QUEUE=main, PBS_O_HOME=/mnt/home/billspat, PBS_O_LOGNAME=billspat, PBS_O_PATH=/opt/software/powertools/bin:/ PBS_O_WORKDIR=/mnt/home/billspat/docs/fisibase-socialnet/db, PBS_O_HOST=dev-intel14-k20.i, PBS_O_SERVER=mgr-04
euser = billspat
egroup = staff-np
queue_type = E
etime = Thu May 19 13:14
submit_args = socialnet_build.qsub
start_time = Thu May 19 13:24
Walltime.Remaining = 7472
start_count = 1
fault_tolerant = False
job_radix = 0
submit_host = dev-intel14-k20.i
How to read it:
- Resource_List.* => what you asked for (e.g. Resource_List.mem = 8gb)
- resources_used.* => what you actually used (e.g. resources_used.mem = kb)
- exec_host => the compute node(s) that ran the job
If you know the exec_host, you can use the command: ssh $exec_host ps aux | grep $USER
While the job is running, for diagnostic info: checkjob -v <jobid>
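Picking the requested-vs-used fields out of qstat -f output is ordinary text processing. This sketch greps them from a saved copy of the output (the job id and values below are invented stand-ins):

```shell
#!/bin/bash
set -e

# stand-in for `qstat -f <jobid> > job.txt`; values are invented
cat > job.txt <<'EOF'
Job Id: 123456.mgr-04
    job_state = R
    Resource_List.mem = 8gb
    Resource_List.walltime = 03:00:00
    resources_used.mem = 1048576kb
    resources_used.walltime = 00:54:27
    exec_host = qml-001/34
EOF

# requested vs used, side by side
grep -E 'Resource_List\.(mem|walltime)' job.txt
grep -E 'resources_used\.(mem|walltime)' job.txt

# the node the job is running on (strip the cpu index after '/')
host=$(grep 'exec_host' job.txt | cut -d= -f2 | cut -d/ -f1 | tr -d ' ')
echo "running on: $host"      # -> running on: qml-001

rm job.txt
```

Comparing Resource_List.mem against resources_used.mem this way tells you whether your next submission can safely request less memory.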

48 QSUB Techniques: Interactive Jobs
"Dev-nodes" are busy, with many users, so the maximum run time is ~2 hours and you share the cores with everyone else. If you want to test-run a program for several hours interactively, or a program that uses many cores, consider an "interactive job" (see the wiki for details):
- Enter the qsub command with -I and all parameters
- Wait: you are logged into a compute node
- Change folders, load modules, and compute!
qsub -I -l nodes=1:ppn=4,mem=4gb,walltime=00:10:00
qsub: waiting for job mgr-04.i to start
qsub: job mgr-04.i ready
~]$ echo $PBS_JOBID
~]$ module load ...   # etc.

49 Computing Resources: MSU Cluster Overview
Cluster             Nodes  Processors (per node)                       Cores (per node)  Memory (per node)  Local disk
intel14             201    Two 2.5 GHz 10-core Intel Xeon E5-2670v2    20                64 GB - 256 GB     500 GB
intel14-xl          5      Four Intel Xeon CPU E- GHz                  48                1, 1.5, or 3 TB    854 GB or 1.8 TB
                    1      Eight Intel Xeon CPU E- GHz                 96                6 TB               854 GB
Laconia (intel16)   370    Two 2.4 GHz 14-core Intel Xeon E5-2680v4    28                128 GB - 512 GB    240 GB

50 HPC File Systems
Single computers have their own disks for files; cluster computers connect to shared file systems. All file systems are available on all nodes (except the gateway).
Your first logon automatically goes to your home directory, shortcut '~'. Home directories are in the path /mnt/home/<netid>.
Log in and then...
ls -al             # list your home folder files
pwd                # show/print your current working directory
ls /mnt            # Linux 'mounts' multiple file systems under one root, typically /mnt
ls /opt/software   # list all software that's available

51 MSU HPCC File Systems available to you
GLOBAL (networked; shared by all nodes):
- Home directories: your personal and private files, stored on the 'filers'. For general storage; backed up; subject to quotas. Path: /mnt/home/<userid> ($HOME)
- Research space: shared by members of a working group, designated by a faculty member. Path: /mnt/research/<groupid>
- Scratch: very fast and large disk for temporary computational workspace (45-day lifetime). Path: /mnt/scratch/<userid> ($SCRATCH)
- Software: installed software. Path: /opt/software ($PATH)
LOCAL (single node only):
- mount-local: not networked => fast but not shared. Path: /mnt/local
- temp: single node only; some programs use /tmp. Path: /tmp
Personal computers have one or two internal disks. If you have a laptop (node) and a desktop (node), they don't share a disk; you have to copy files between them (or let Dropbox do it). The HPC has hundreds of nodes shared by thousands of users. Rather than per-node disks, it uses large "file systems" shared by all and interconnected to all nodes in the cluster via the network (NFS). These appear to you as special folders that are identical on every node.

52 Methods for File Transfer
See our documentation:
- Using Linux commands for remote, secure copy: scp, sftp, rsync
- Using GUI programs that wrap these commands: FileZilla, MobaXterm, WinSCP
- Connecting to the file system with Windows file sharing (Samba/SMB)
- The Globus.org transfer service, for very large file transfers

53 GUI Computing with X11
Windows users: MobaXterm is all you need. Connect to a dev-node and you are ready to run X11 programs.
Mac users: install XQuartz if you haven't; start XQuartz, start a terminal, and connect with ssh -XY
Both:
ssh dev-intel16
matlab &   # wait for it
Fairly slow, but usable.

54 GUI Computing: Remote Desktop
Mac requirement: download "Microsoft Remote Desktop" from the Apple App Store.
Windows: click Start, then search for "remote desktop".
This session is not running on your laptop, but on the HPCC nodes in the Engineering building.

55 GUI High Performance Computing with Remote Desktop
RDP = Microsoft Remote Desktop Protocol, a common way to send desktop pixels to your computer.
The HPCC runs a Linux desktop system that you can connect to with an RDP client, via the RDP gateway, to run GUI programs.
Software needed: the RDP client built into Windows; Mac requires the Remote Desktop client from the App Store (or the Office RDP client); Linux clients also exist.
Benefit: you can run GUI programs on the cluster directly, e.g. Matlab, Java programs; a different look and feel from X11.
Downside: requires VPN when off-campus (non-MSU collaborators or MSU undergraduates require permission).
When you are finished, please log out of the RDP gateway to conserve resources.

56 HPCC Remote Desktop
Using RDP (if off-campus, connect to the VPN first):
- Start Remote Desktop and connect to rdp.hpcc.msu.edu
- Enter your NetID and password
- On the remote computer, start the terminal
- Connect to a dev node: ssh dev-intel14
- Load modules: module load matlab
- Run the software: matlab &
- Log out when you are done

57 Getting Help
HPCC wiki documentation: http://wiki.hpcc.msu.edu
Submit a form via the helpdesk, or email us, for research-related questions, problems, new accounts, etc. Provide as much detail about your problem as you can.
Speak with us: office hours Mondays 1-2pm and Thursdays 1-2pm, 1440 BPS; or by appointment.

