Advanced Computing Facility Introduction


Advanced Computing Facility Introduction
Dr. Feng Cen, 09/16/16
Modified: Yuanwei Wu, Wenchi Ma, 09/12/18

Overview
The Advanced Computing Facility (ACF) houses High Performance Computing (HPC) resources dedicated to scientific research:
458 nodes, 8568 processing cores and 49.78 TB of memory
20 nodes have over 500 GB of memory per node
13 nodes have 64 AMD cores per node and 109 nodes have 24 Intel cores per node
Coprocessors: Nvidia TitanXp, Nvidia Tesla P100, Nvidia K80: 52, Nvidia K40C: 2, Nvidia K40m: 4, Nvidia K20m: 2, Nvidia M2070: 1
Virtual machine operating system: Linux
http://ittc.ku.edu/cluster/acf_cluster_hardware.html

Cluster Usage Website
http://ganglia.acf.ku.edu/

Useful Links
ACF Cluster computing resources: http://ittc.ku.edu/cluster/acf_cluster_hardware.html
Advanced Computing Facility (ACF) documentation main page: https://acf.ku.edu/wiki/index.php/Main_Page
Cluster Jobs Submission Guide: https://acf.ku.edu/wiki/index.php/Cluster_Jobs_Submission_Guide
Advanced guide: http://www.adaptivecomputing.com/support/documentation-index/torque-resource-manager-documentation/
ACF Portal Website: http://portal.acf.ku.edu/
Cluster Usage Website: http://ganglia.acf.ku.edu/

ACF Portal Website
http://portal.acf.ku.edu/

ACF Portal Website
Monitor jobs, view cluster loads, download files, upload files, ...

Access Cluster System via Linux Terminal
Access the cluster in Nichols Hall:
1. Log in to a login server (login1 or login2).
2. Submit cluster jobs or start an interactive session from the login server. The cluster will create a virtual machine to run your job or host your interactive session.
Access the cluster from off campus:
Use the KU Anywhere VPN first: http://technology.ku.edu/software/ku-anywhere-0

Access Cluster System via Linux Terminal
Login to a login server
Use "ssh" to connect directly to the cluster login servers, login1 or login2. Examples:
ssh login1                  # login with your default linux account
ssh -X login1               # "-X" accesses the login server with X11 forwarding
ssh <username>@login1       # login with a different linux account
ssh -X <username>@login1
The login server is an entry point to the cluster and cannot support computationally intensive tasks.
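Jobs are submitted from the login server, and the GPU walkthrough later in these slides uses slurm, so a minimal batch-script sketch may help orient first-time users. Everything in it is an assumption for illustration: the script name my_job.sh, the module name, the resource values and the python script are placeholders rather than ITTC's actual configuration; see the Cluster Jobs Submission Guide above for the authoritative settings.

#!/bin/bash
#SBATCH --job-name=example   # my_job.sh: name shown in the queue
#SBATCH -N 1                 # one node
#SBATCH -n 1                 # one task (process)
#SBATCH --mem=8G             # requested CPU memory per node
#SBATCH -t 0-01:00:00        # walltime, D-HH:MM:SS
module load python           # load whatever software the job needs (placeholder name)
python my_script.py          # the actual work

To submit it from the login server (after 'module load slurm', as in step 3.2 below), run 'sbatch my_job.sh' and check its state with 'squeue -u $USER'.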

Access Cluster System via Linux Terminal (updated by Yuanwei @ 9/12/18)
Request GPU resources on the cluster
1. Reference document: https://help.ittc.ku.edu/Cluster_Documentation
2. The GPU resources on the cluster:
g002: 4 K20 (4 GB memory per K20)
g003: 2 K20 + 2 K40 (4 GB memory per K20, 12 GB memory per K40)
g015: 4 K80 (12 GB memory per K80)
g017: 4 P100 (16 GB memory per P100)
g018: 4 P100 (16 GB memory per P100), might be reserved
g019: 2 TitanXp (12 GB memory per TitanXp) + 1 TB SSD (I saved the ImageNet12 images here for experiments)
g020: 2 TitanXp (12 GB memory per TitanXp)
g021: 2 TitanXp (12 GB memory per TitanXp)

Access Cluster System via Linux Terminal
Request GPU resources on the cluster
3. The steps for requesting a GPU from your local machine @ ITTC
3.1 Log in to the 'login1' node of the cluster
3.2 Load the slurm module
3.3 Load the versions of the CUDA, cuDNN, python, matlab modules that you want
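As a quick sketch, the three steps above correspond to the following commands. Only the slurm module is named explicitly in the slides, so the cuda/cudnn/python module names below (and the absence of version suffixes) are assumptions; run 'module avail' to see what is actually installed and pick the exact versions you need.

ssh login1                      # 3.1 log in to the login node
module avail                    # optional: list the software modules installed on the cluster
module load slurm               # 3.2 load the slurm module
module load cuda cudnn python   # 3.3 placeholder names; load the specific versions you want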

Access Cluster System via Linux Terminal
Request GPU resources on the cluster
3. The steps for requesting a GPU from your local machine @ ITTC
3.4 Check the GPU usage on the cluster
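The original slide illustrates this step with a screenshot, which the transcript does not preserve. Two standard slurm commands that give the same kind of information are sketched below; the node name g019 is just an example taken from the list above.

sinfo -N -o "%N %G %t"          # list every node with its gres (GPU type and count) and state
squeue -w g019                  # show the jobs currently running on node g019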

Access Cluster System via Linux Terminal
Request GPU resources on the cluster
3. The steps for requesting a GPU from your local machine @ ITTC
3.5 Request a GPU from the cluster
Meaning of the options (check the ITTC cluster documentation for more details):
--gres="gpu:gpu_name_u_request:gpu_num": the GPU type and the number of GPUs you request
-w: select the GPU node
--mem: the requested CPU memory per node
-t: the requested time for using this GPU resource to run your job; the format is D-HH:MM:SS
-N: the number of requested nodes for the interactive session (add it if you want)
-n: the number of tasks or processes to run on each allocated node (add it if you want)
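The request command itself appears only as a screenshot in the original slides, so the line below is a reconstruction from the options listed above. The gres string titanxp, the node g019, the memory size and the time limit are example values, and '--pty bash' is the usual way to get an interactive shell from srun rather than something the slides confirm; adjust all of these to the resources you actually need.

srun --gres="gpu:titanxp:1" -w g019 --mem=16G -t 0-06:00:00 -N 1 -n 1 --pty bash
# when the allocation is granted, you are dropped into a shell on the requested GPU node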

Access Cluster System via Linux Terminal
Request GPU resources on the cluster
3. The steps for requesting a GPU from your local machine @ ITTC
3.6 Check the usage of your requested GPU (or use 'watch -n 0.5 nvidia-smi' to watch the GPU usage update dynamically)
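Run these on the GPU node once the interactive session has started; the screenshot on the original slide presumably shows nvidia-smi output. The CUDA_VISIBLE_DEVICES check is an extra suggestion and assumes the cluster's slurm is configured to set that variable for GPU allocations.

nvidia-smi                      # one-time snapshot of GPU utilization and memory
watch -n 0.5 nvidia-smi         # refresh the snapshot every 0.5 seconds
echo $CUDA_VISIBLE_DEVICES      # which GPU index(es) were handed to this session (assumption: set by the gres plugin)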

Access Cluster System via Linux Terminal
Request GPU resources on the cluster
3. The steps for requesting a GPU from your local machine @ ITTC
3.7 Quit your GPU job
Ending by Yuanwei @ 9/12/18
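The slide does not spell out the commands, but with slurm the usual ways to release the GPU are sketched below; <jobid> is a placeholder for the number reported by squeue.

exit                            # inside the interactive session: leave the shell and release the allocation
squeue -u $USER                 # on the login node: list your jobs and note the JOBID
scancel <jobid>                 # cancel a job by its id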

Thank you!