IT MANAGEMENT OF FME, 21ST JULY 2010

• THE HPC FACILITY
• USING PUTTY AND WINSCP TO ACCESS THE SERVER
• SENDING FILES TO THE SERVER
• RUNNING JOBS
• MONITORING JOBS
• COPY RESULTS BACK FROM THE SERVER

• Sunfire
• 8 CPUs x 6 nodes (48 cores in total): Quad-Core AMD Opteron(tm) Processor 2376 HE (2.3 GHz)
• Interconnected using InfiniBand and Ethernet
• Each node has 8 GB of memory
• Storage capacity: GB at the moment

• Queueing system: Torque (torque-server cri.slc4 and torque-mom cri.slc4)
• Scheduler: maui-server-3.2.6p21-snap slc4
• The current MPI build does not utilise InfiniBand; this will be fixed soon.
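As a quick sanity check that the batch system is up, the standard Torque client commands can be run from the login node (a minimal sketch; exact availability depends on how the cluster was installed):

qstat -q       # list the configured queues (e.g. the ‘utm’ queue used later)
pbsnodes -a    # list the compute nodes and their current state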

• THE HPC FACILITY
• USING PUTTY AND WINSCP TO ACCESS THE SERVER
• SENDING FILES TO THE SERVER
• RUNNING JOBS
• MONITORING JOBS
• COPY RESULTS BACK FROM THE SERVER

• Using putty.exe
• Using winscp.exe

• Server: fkm.utm.my
• Port: 2323
It will connect to ce.utmgrid.utm.my via MyREN.
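PuTTY can also be started from the command line with the host and port given as arguments (a sketch; ‘your_username’ is a placeholder for your actual account name):

putty.exe -ssh -P 2323 your_username@fkm.utm.my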

Press YES to accept the server's host key.

Click NEW to create a new session.

1. Click YES
2. Click Continue
3. Enter your password

• THE HPC FACILITY
• USING PUTTY AND WINSCP TO ACCESS THE SERVER
• SENDING FILES TO THE SERVER
• RUNNING JOBS
• MONITORING JOBS
• COPY RESULTS BACK FROM THE SERVER

1. Locate the files to be transferred in the left panel.
2. Create a new directory on the right panel.
3. Select the files in the left panel.
4. Copy them from left to right.
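The same transfer can be done from the command line with pscp, the copy tool that ships with PuTTY (a sketch; the file name, remote directory, and username are placeholders):

pscp -P 2323 your_input_file.cas your_username@fkm.utm.my:model/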

• THE HPC FACILITY
• USING PUTTY AND WINSCP TO ACCESS THE SERVER
• SENDING FILES TO THE SERVER
• RUNNING JOBS
• MONITORING JOBS
• COPY RESULTS BACK FROM THE SERVER

• We need to configure the necessary files before we can run our model.
• Two files have to be prepared before the job can be submitted: a Fluent journal file and a PBS script.
• Using putty.exe....

• Type ‘pico model_journal’
• Enter:

file/read-case your_input_file.cas
solve/init/initialize-flow
solve/iterate 400
file/binary-files n
file/confirm-overwrite n
file/write-data your_output_file.dat
exit y

• Press Ctrl-O to save
• Press Enter
• Press Ctrl-X to exit
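Line by line, the journal tells Fluent to read the case file, initialize the flow field, run 400 iterations, switch to text (non-binary) output, overwrite existing files without prompting, write the solution data, and exit. An annotated copy follows (a sketch; ‘;’ is conventionally treated as a comment character in Fluent journal files, so treat this as illustration rather than a verified drop-in):

file/read-case your_input_file.cas    ; read the case (mesh and setup) file
solve/init/initialize-flow            ; initialize the flow field
solve/iterate 400                     ; run 400 iterations
file/binary-files n                   ; write text rather than binary files
file/confirm-overwrite n              ; do not prompt before overwriting
file/write-data your_output_file.dat  ; write the solution data
exit y                                ; quit Fluent, confirming the prompt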

• Prepare the PBS script: type ‘pico pbs-script’
• Enter the following:

#!/bin/sh
#PBS -q utm
#PBS -N istas.model3d
#PBS -l nodes=1:ppn=8
#PBS -M
#PBS -m abe

nCPU=8
version=3d
journal=model_journal

cd $PBS_O_WORKDIR
/opt/exp_soft/share/istas/ansys_inc/v121/fluent/bin/fluent $version -t$nCPU -g -i $journal -mpi=openmpi -cnf=$PBS_NODEFILE

• Press Ctrl-O to save
• Press Enter
• Press Ctrl-X to exit
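A note on the moving parts: #PBS -m abe requests mail at abort, begin, and end (the -M line takes the recipient address), and $PBS_NODEFILE is a file PBS creates listing one line per allocated CPU. To see exactly what was allocated, lines like these could be added to the script (a sketch, not part of the original script):

# Print the allocated CPU list and total count into the job's output file
cat $PBS_NODEFILE
echo "Total CPUs: $(wc -l < $PBS_NODEFILE)"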

• Finally, we want to submit the job: just type ‘qsub pbs-script’
• Type ‘qstat’ to see the status of your job.
• An email will be sent to you to let you know the job has started.
• Another email will be sent to let you know the job has ended.
• You can open WinSCP again to copy the output back to your PC.
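Copying the results back can also be scripted with pscp, reversing the direction of the earlier transfer (a sketch; the remote path, file name, and username are placeholders):

pscp -P 2323 your_username@fkm.utm.my:model/your_output_file.dat .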

• Prepare a pbs-script: type ‘pico pbs-script’

#!/bin/bash
#PBS -q utm
#PBS -l nodes=1:ppn=8
#PBS -l walltime=1000:00:00
#PBS -N myjobname.date.runNumberX

# Go to the directory from which you submitted the job
cd $PBS_O_WORKDIR

cpus=$(wc -l $PBS_NODEFILE | awk '{print $1}')

mp_host_list="["
for n in $(sort -u $PBS_NODEFILE)
do
  mp_host_list="${mp_host_list}['$n',$(grep -c $n $PBS_NODEFILE)],"
done
mp_host_list=$(echo ${mp_host_list} | sed -e "s/,$/]/")

echo "mp_host_list=${mp_host_list}" > abaqus_v6.env
echo >> abaqus_v6.env
echo "mp_rsh_command = 'ssh -x -n -l %U %H %C'" >> abaqus_v6.env

# Run the job
/opt/exp_soft/share/apps/Abaqus/Commands/abaqus analysis mp_mode=MPI cpus=$cpus job=MYJOBNAME interactive
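The loop builds an Abaqus host list with one [hostname, cpu-count] pair per unique node in $PBS_NODEFILE and writes it into abaqus_v6.env, which Abaqus reads from the working directory. For a single 8-core node the generated file would look roughly like this (the node name ‘node2’ is hypothetical):

mp_host_list=[['node2',8]]

mp_rsh_command = 'ssh -x -n -l %U %H %C'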

• To run the job, type ‘qsub pbs-script’
• To view the status, type ‘qstat’

Job id  Name   User       Time Use  S  Queue
ce      STDIN  euasia016  0         Q  euasia
ce      STDIN  euasia006  0         Q  euasia
ce      STDIN  euasia001  0         Q  euasia
ce      STDIN  euasia001  0         Q  euasia
ce             istaz      00:00:26  R  utm

• qsub: Once a PBS job script is created, it is submitted to PBS via the qsub command. In its simplest form, qsub takes a single argument: the name of the script file you wish to submit.
• qstat: The qstat command lets you view the contents of the PBS queue.

node1:~/test> qstat
Job id     Name     User    Time Use  S  Queue
147.node1  testjob  psmith  0         R  default
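Options given on the qsub command line override the matching #PBS directives inside the script, which is convenient for one-off variations (a sketch; the job name and node request here are arbitrary examples):

node1:~/test> qsub -N testrun2 -l nodes=2:ppn=2 pbs-script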

• qdel: The qdel command takes a single argument, a job number. You can use qdel to abort execution of your job: ‘qdel 147’ would cancel execution of the job shown in the qstat example above.
• qalter: The qalter command is helpful for altering the parameters of a job after it has been submitted. qalter takes two arguments: the PBS directive that you wish to change (like -l) and the number of the job that you want to change. For example, if you forgot to set the walltime that your job requires, you can change it after submission:

node1:~> qalter -l walltime=4:00:00 147
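To check that the change took effect, qstat -f prints a job's full attribute list, including its resource limits (a sketch; the exact output format varies between PBS versions):

node1:~> qstat -f 147 | grep walltime
    Resource_List.walltime = 4:00:00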

• pbsnodes: The pbsnodes command, while a useful PBS administration command, can also be informative to the PBS user. pbsnodes -a will list all PBS nodes, their attributes, and job status. This is a useful way to get a list of valid machine properties for use in a #PBS -l directive.

node1:~> pbsnodes -a
node2
  state = free
  np = 2
  properties = gigabit,pcn,m2048,dual,p1800,athlon
  ntype = cluster