VIPBG LINUX CLUSTER By Helen Wang March 29th, 2013

Basic Beowulf Cluster Structure

A brief look at our cluster

VIPBG Beowulf Cluster
Server name: light.vipbg.vcu.edu (IP address not shown in this transcript)
2nd server as failover: group.vipbg.vcu.edu (IP address not shown; invisible on mission)
For how to access our server and how to use it, check the wiki page.

Access Cluster
What you need to do to access the server:
- Get a username and password.
- Go to webvpn.vcu.edu and install VCU WebVPN on your PC so you can access the cluster from anywhere.
- Change your password to a qualifying password: $passwd
- Set up the variables that customize your personal console in the templates ~/.cshrc and ~/.login; check with echo $PATH and add your search paths to your .cshrc file (see the sketch below).
- Make tmp and bin directories under your home directory: $mkdir tmp ; $mkdir bin
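
A minimal sketch of what the ~/.cshrc additions might look like; the exact lines are an assumption, not the lab's official template (the login shell here is csh/tcsh, so setenv is used):

# hypothetical additions to ~/.cshrc
setenv PATH ${PATH}:${HOME}/bin    # put your own bin directory on the search path
setenv TMPDIR ${HOME}/tmp          # send temporary files to your own tmp directory

After editing, start a new login session or run "source ~/.cshrc" so the changes take effect.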

Access Cluster
Server and nodes
Master node: light.vipbg.vcu.edu
- Runs CentOS (Red Hat kernel) version 5.6, x86-64; when downloading open-source or other software, choose 64-bit CentOS or RHEL 5 builds if possible.
- Purpose and policy: front-end user interface. Do not run jobs directly on the master; they will be terminated without contacting the user. Accessible from outside with permission and WebVPN.
Slave nodes (nodes): node22.cl.vcu.edu – node31.cl.vcu.edu (8-core Xeon processors with 32 GB RAM)
- Nodes 2-19: fast nodes (12-core Xeon processors with 98 GB RAM on each node)
- Purpose and policy: computation. Not intended as a user interface; accessible via the master and managed by the Portable Batch System (PBS). Fast internal network; not accessible directly from outside.
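
Once your account and WebVPN are set up, a standard SSH client reaches the master node; the username below is a placeholder:

$ssh YOURNAME@light.vipbg.vcu.edu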

Access Cluster
Nodes and queue configuration
$qstat -q    ## lists all the queues and their current status

server: light

Queue     Memory  CPU Time  Walltime  Node  Run  Que  Lm  State
--------  ------  --------  --------  ----  ---  ---  --  -----
workq       --       --        --      --    --   --   --  E R
serial      --       --        --      --    --   --   --  E R
mxq         --       --        --      --    --   --   --  E R
express     --       --        --      --    --   --   --  E R
openmx      --       --        --      --    --   --   --  E R
slowq       --       --        --      --    --   --   --  E R
(numeric limits not shown; state "E R" means the queue is enabled and running)

$pbsnodes -a | more    ## shows detailed information for every queue and node, one page at a time

Access Cluster
Nodes and queues
- serial (default): nodes assigned: node2-3 (8 cores, 24 GB RAM), node19-15 (23 cores, 64 GB RAM)
- workq (dedicated to the converge project): node14-12 (12 cores, 64 GB RAM)
- openmx (dedicated to R OpenMx and parallel jobs): node11-9 (23 cores, 64 GB RAM)
- mxq (dedicated to traditional Mx jobs or other open-source jobs, such as PLINK): node6-5
- Floating nodes: node4, node7, node8 – currently assigned to workq
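
Jobs land on the default serial queue unless you say otherwise; qsub's standard -q option selects a specific queue (the script name below is a placeholder):

$qsub -q openmx MYSCRIPT    ## run on the OpenMx/parallel nodes
$qsub -q mxq MYSCRIPT       ## run on the Mx/PLINK nodes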

Accessing Cluster
Software available on master and nodes:
- R with CRAN and Bioconductor libraries
- C++ compiler (g++), Fortran compilers (f77/f90)
- Python and Biopython
- SAS 9.3 (on all nodes)
- PLINK
- OpenMx
- IMPUTE2, SAMtools, GTOOL
- Other open-source software installed upon user request
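
A quick way to confirm a tool is available before submitting a job (these are standard Unix commands, not site-specific ones):

$which R sas plink    ## show where each program lives on the PATH
$R --version          ## report the installed R version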

Commands to be used on cluster
Submitting R jobs on the normal queue:
$qR MYSCRIPT    ## if the script file is MYSCRIPT.R, submit it with no .R extension
Each user is allowed to run 50 jobs simultaneously.
Submitting jobs on the large-memory queue:
The large-memory queue is on node1 for memory-intensive jobs (limited to 8 in total).
$qRL MYSCRIPT
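
qR and qRL are local wrapper scripts; their internals are not shown in these slides, but conceptually each one writes and submits a small PBS job that runs R in batch mode, roughly like the hand-written sketch below (the queue name, output file name, and resource handling are assumptions):

#!/bin/bash
#PBS -q serial
#PBS -N MYSCRIPT
#PBS -V
cd $PBS_O_WORKDIR                                  # run in the directory the job was submitted from
R CMD BATCH --no-save MYSCRIPT.R MYSCRIPT.Rout     # run the R script, capturing output in MYSCRIPT.Rout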

Template used on cluster
Modify this template to create your own PBS script for running programs:

#!/bin/bash
#PBS -q QUEUENAME      ## serial, sasq, workq
#PBS -N MYSCRIPT
#PBS -V

echo "******STARTING****************************"

# cd to the directory from which the job was submitted;
# otherwise it will execute in your home directory.
WORKDIR=~/YOURWORKDIR
cd $WORKDIR

echo "PBS batch job id is $PBS_JOBID"
echo "Working directory of this job is: $WORKDIR"
echo "Beginning to run job"

# The command line you need to execute the job, e.g.
# /home/huan/bin/calculate -PARAMETERS

Save it in a file MYSCRIPT and submit it:
$qsub MYSCRIPT
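
Once submitted, qsub prints a job identifier, and PBS later writes the job's captured standard output and standard error into files named after the #PBS -N value plus that identifier; the job number below is illustrative:

$qsub MYSCRIPT
12345.light
$ls MYSCRIPT.o12345 MYSCRIPT.e12345    ## stdout and stderr of the finished job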

Commands used on cluster
Submitting an interactive job (when there is no script command for submitting jobs with a new application):
$qsub -I    ## to get onto a node
NODE7$plink --script PLKSCRIPT
Checking job status: "R" Running; "E" Exiting; "H" Holding; "Q" Queued
$qstat
$qstat -n    ## shows which node your job is on
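
An interactive session sketched end to end; the queue name and node prompt are assumptions, and exiting the shell releases the node (standard PBS behaviour):

$qsub -I -q serial               ## request an interactive session
NODE7$plink --script PLKSCRIPT   ## work interactively on the node you were given
NODE7$exit                       ## release the node when finished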

Use cluster wisely
Quit or cancel a job submission:
$qstat    ## to get the job ID
$qdel YOURJOBID
To kill all of your jobs if you have too many:
$qstat -u YOURNAME | tail --lines=+6 | awk '{print "qdel ", $1}' | /bin/sh
## list your jobs, skip the 6 header lines, build a qdel command for each job ID, and run them
Limitations on the name of the SCRIPT: no more than 10 characters, no spaces, no special characters; use a temporary name if necessary and change it back when the job is done.
Maximum jobs for each user: 30; no more than 50 jobs per submission.
No ssh connection directly to the nodes.
Send a request to the admin if you need to run large jobs.

New policies
User quotas will be enabled on the cluster: each user will have 1 TB, and a special request is needed for more space.
Six months after leaving VIPBG, your account will be deactivated.
Always check ~/tmp and remove the temp files your programs generated.
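
Two standard commands for keeping an eye on your own usage (how quota reporting is configured is up to the admins, so the second command is only a guess at what will be available once quotas are enabled):

$du -sh ~/tmp    ## how much space your temp directory is using
$quota -s        ## your disk quota and current usage, in human-readable units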