CCR Advanced Seminar: Running CPLEX Computations on the ISE Cluster

Presentation transcript:

CCR Advanced Seminar: Running CPLEX Computations on the ISE Cluster Cynthia Cornelius Center for Computational Research University at Buffalo, SUNY cdc at buffalo.edu January 2016

CCR Cluster - CPLEX A cluster is a collection of individual computers connected by a network that can act as a single, more powerful machine. Each individual computer has its own CPUs, memory, and disks. It runs an operating system and is connected to a network that is usually private to the cluster. An individual computer is referred to as a compute node.

CCR Cluster - CPLEX The CCR cluster is a collection of Linux computers, private networks, a shared file system, a login computer, and a job scheduler. Resources must be requested from the job scheduler. The user must decide what resources are required for the job.

ISE Cluster The josewalt partition resides in the mae cluster. The josewalt partition has 12 compute nodes. Each compute node has 12 CPUs (cores) and 128 GB of memory. The CPUs are Intel Xeon E5-2620 processors running at 2.40 GHz. The operating system is Linux CentOS 7. The time limit for jobs is 500 hours.

ISE Cluster The storage available on the cluster is the user’s home directory and the group’s projects space. The home directory is /user/username and has a disk space quota of 5 GB. The projects space is /projects/academic/group. Both the home and projects directories are available on all compute nodes.

ISE Cluster and CPLEX You will need CPLEX installed, a module file to set the path to the CPLEX installation, a SLURM script and your data files. Both the CPLEX installation and module file are in the group’s project space on the cluster.
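Before writing the script, here is a quick sketch of how the module file in the group’s projects space might be made visible to the module command (the modulefiles directory name is an assumption; ask your group for the exact path):
# Point the module command at the group’s module files (path is an assumption)
module use /projects/academic/group/modulefiles
# Confirm that a CPLEX module is now visible
module avail cplex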

ISE Cluster and CPLEX The SLURM script slurm-CPLEX is located in the /util/academic/slurm-scripts directory on the cluster. Input and data files should be transferred from your Windows machine to the cluster using WinSCP.

Accessing the ISE Cluster Use the WinSCP client on the Windows machine to transfer, edit, and manage files. Use X-Win32 or PuTTY to log in to the cluster front-end machine. Submit jobs and check job status in the login window. See the following link for download and setup instructions: CCR helpdesk solution: login from Windows

Accessing the ISE Cluster The josewalt partition resides in the mae cluster. This is not the default; all SLURM commands reference the CCR ub-hpc cluster by default. You have two choices for accessing the josewalt partition. 1. Always specify the cluster when issuing a SLURM command.
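For example, the -M (cluster) flag can be added to any SLURM command; here slurm-CPLEX stands in for your own script name:
sbatch -M mae -p josewalt slurm-CPLEX
squeue -M mae -u username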

Accessing the ISE Cluster 2. Make the mae cluster your default by setting the SLURM_CONF variable. export SLURM_CONF=/util/academic/slurm/conf/mae/slurm.conf The advantage is that you do not have to specify the cluster for every SLURM command. The disadvantage is that this makes using the CCR cluster more difficult. Either you must specify the ub-hpc cluster or unset the variable.
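A minimal shell sketch of switching the default and back, using the path above:
# Make mae the default cluster for SLURM commands in this session
export SLURM_CONF=/util/academic/slurm/conf/mae/slurm.conf
# Return to the default ub-hpc cluster
unset SLURM_CONF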

ISE Cluster Use the squeue command to show the status of jobs in the josewalt partition. squeue -M mae -p josewalt

ISE Cluster Use “-u username” to show the status of only the jobs submitted by that user. squeue -M mae -p josewalt -u username

ISE Cluster Use the sinfo or snodes command to show the status and details of the compute nodes. sinfo -M mae -p josewalt

ISE Cluster The snodes command shows more details: snodes all mae/josewalt

Submit a Job to the ISE Cluster Use the sbatch command to submit a job. sbatch slurm_script

Submit an Interactive Job To submit an interactive job to the josewalt partition, use the fisbatch command.
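A sketch of a possible invocation (the resource flags and time limit are examples, not values from the seminar; adjust them to your computation):
fisbatch -M mae --partition=josewalt --nodes=1 --ntasks=12 --time=01:00:00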

More on Interactive Jobs Advantage: an interactive job is useful for debugging a computation. Disadvantages: you may wait for a login to the compute node; a disconnection from the cluster login machine terminates the job; and an interactive job is not constrained, so it could use resources intended for another computation when sharing a compute node.

Cancel a Job The scancel command cancels a running or pending job. scancel -M mae jobid

SLURM Script Example The #SBATCH lines are directives to the scheduler. The directives specify the resource requests, the job output file, and email preferences.
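A minimal sketch of what the directives might look like; the partition and core count follow the josewalt description above, while the job name, memory, time, output file, and email address are placeholders:
#!/bin/bash
#SBATCH --partition=josewalt
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=12
#SBATCH --mem=120000
#SBATCH --time=100:00:00
#SBATCH --job-name=cplex-test
#SBATCH --output=cplex-test.out
#SBATCH --mail-type=END
#SBATCH --mail-user=username@buffalo.edu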

SLURM Script Example The following lines print information to the job output file. The job starts in the directory from which you submitted it. This is the working directory.
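A sketch of the kind of lines the script might contain here (the exact echo statements are assumptions):
echo "SLURM job ID = $SLURM_JOB_ID"
echo "Node list = $SLURM_JOB_NODELIST"
echo "Working directory = $SLURM_SUBMIT_DIR"
cd $SLURM_SUBMIT_DIR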

SLURM Script Example Load the module file for CPLEX. List the loaded modules. The ulimit command removes the limit on the stack size. This helps large programs to run.
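A sketch of these steps (the module path and name are assumptions; use the module file in your group’s projects space):
# Make the group’s module files visible, then load CPLEX (path and name are assumptions)
module use /projects/academic/group/modulefiles
module load cplex
# Show the loaded modules in the job output
module list
# Remove the limit on the stack size so large programs can run
ulimit -s unlimited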

SLURM Script Example Run the computation from the command line. Once all the lines of the SLURM script have been executed, the job completes and exits the compute node.
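A sketch of a command-line run with the CPLEX interactive optimizer (the model and solution file names are placeholders, and your invocation may differ):
cplex -c "read model.lp" "optimize" "write solution.sol"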