Lab System Environment

Lab System Environment
Paul Kapinos, 2014.10.07

Lab nodes: integrated into the HPC Cluster
- OS: Scientific Linux 6.5 (RHEL 6.5 compatible)
- Batch system: LSF 9.1 (not used for this lab)
- Storage: NetApp filer ($HOME / $WORK), no backup on $WORK; Lustre ($HPCWORK) not available
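A quick way to check where these storage paths point once logged in to a lab node (a minimal sketch; it only assumes that $HOME and $WORK are set as described above):

$ echo $HOME $WORK     # print the NetApp-backed directories
$ df -h $HOME $WORK    # show the file systems behind them and the free space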

Software Environment
- Compilers: Intel 15.0 (and older), GCC 4.9 (and older), Oracle Studio, PGI
- MPI: Open MPI, Intel MPI
- No InfiniBand! 1GE only: warnings and 1/20 of the usual performance
- Default: intel/14.0 + openmpi/1.6.5

How to login
Log in to a frontend / transfer files via SCP:
$ ssh [-Y] user@cluster.rz.rwth-aachen.de
$ scp [[user@]host1:]file1 [...] [[user@]host2:]file2
then jump to the assigned lab node:
$ ssh lab5[.rz.rwth-aachen.de]

Frontends:
- cluster.rz.RWTH-Aachen.DE
- cluster2.rz.RWTH-Aachen.DE
- cluster-x.rz.RWTH-Aachen.DE (GUI)
- cluster-x2.rz.RWTH-Aachen.DE (GUI)
- cluster-linux.rz.RWTH-Aachen.DE
- cluster-linux-nehalem.rz.RWTH-Aachen.DE
- cluster-linux-xeon.rz.RWTH-Aachen.DE
- cluster-linux-tuning.rz.RWTH-Aachen.DE
- cluster-copy.rz.RWTH-Aachen.DE ('scp')
- cluster-copy2.rz.RWTH-Aachen.DE ('scp')
Note: cluster-x[2] is only for GUI-based applications, not for compiling. A complete login sequence is sketched below.
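Putting the commands together, a typical session might look like this (a sketch only; the account hpclab05 and lab node lab5 are taken from the assignment table below, the file name myprog.c is a placeholder):

$ scp myprog.c hpclab05@cluster.rz.rwth-aachen.de:   # copy a source file to the cluster
$ ssh -Y hpclab05@cluster.rz.rwth-aachen.de          # log in to a frontend (with X forwarding)
$ ssh lab5                                           # jump from the frontend to the assigned lab node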

Lab Node Assignment
Please use only your allocated node, or agree in advance with the node owner.

Node   Account    Institute
lab1   hpclab01   EONRC
lab2   hpclab02   IGPM
lab3   hpclab03   GHI (AICES)
lab4   hpclab04   ITV
lab5   hpclab05   FZJ, PGI
lab6   hpclab06   CATS
lab7   hpclab07   Physik
lab8   hpclab08   AIA
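Before working on a node that is not yours, a quick check of who is currently logged in can help (standard Linux commands, nothing specific to this setup):

$ w        # who is logged in on this node and what they are running
$ uptime   # current load of the node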

Lab nodes
- Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz, 64 GB RAM
- Packages (sockets) / cores per package / threads per core: 2 / 18 / 2
- Cores / processors (CPUs): 36 / 72
- AVX2: 256-bit registers, 2x Fused Multiply-Add (FMA) >> double the peak performance compared to previous chips
- STREAM: >100 GB/s (Triad)
- No InfiniBand connection; MPI via the 1GE network is still possible (warnings and 1/20 of the usual performance)
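The topology and memory can be verified directly on a lab node with standard Linux tools (nothing cluster-specific assumed):

$ lscpu | egrep 'Model name|Socket|Core|Thread'   # sockets, cores per socket, threads per core
$ free -g                                         # installed memory in GB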

Module System
Many compilers, MPIs and ISV software packages are installed; the module system helps to manage them all.
- List loaded / available modules:
  $ module list
  $ module avail
- Load / unload a package:
  $ module load <modulename>
  $ module unload <modulename>
- Exchange a module (some modules depend on each other):
  $ module switch <oldmodule> <newmodule>
  $ module switch intel intel/15.0
- Reload all modules (may fix your environment):
  $ module reload
- Find out in which category a module is:
  $ module apropos <modulename>
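For example, to move from the default intel/14.0 compiler to intel/15.0 and confirm the change (a minimal sketch; the module names and versions are those listed on these slides):

$ module list                      # show the currently loaded modules
$ module switch intel intel/15.0   # exchange the compiler module
$ module list                      # verify that intel/15.0 is now active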

MPI
- No InfiniBand connection: MPI runs over the 1GE network >> warnings and 1/20 of the usual performance
- Default: Open MPI 1.6.5; e.g. switch to Intel MPI:
  $ module switch openmpi intelmpi
- The wrapper in $MPIEXEC redirects the processes to 'back end nodes':
  - by default your processes run on (random) non-Haswell nodes
  - use the '-H' option to start the processes on the favoured nodes:
    $ $MPIEXEC -H lab5,lab6 -np 12 MPI_FastTest.exe
  - other options of the interactive wrapper:
    $ $MPIEXEC -help | less
A complete compile-and-run cycle is sketched below.
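With the default Open MPI toolchain, a compile-and-run cycle over the 1GE network might look like this (a sketch: mpicc is the standard Open MPI compiler wrapper and hello.c a placeholder source file; only $MPIEXEC and the -H option are taken from these slides):

$ mpicc -o hello hello.c                  # build the MPI program with the Open MPI wrapper
$ $MPIEXEC -H lab5,lab6 -np 12 ./hello    # start 12 ranks on the two lab nodes (via 1GE)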

Documentation
RWTH Compute Cluster Environment:
- HPC Users' Guide (a bit outdated): http://www.rz.rwth-aachen.de/hpc/primer
- Online documentation (including example scripts): https://doc.itc.rwth-aachen.de/
- Man pages are available for all commands
- In case of errors / problems, let us know: servicedesk@itc.rwth-aachen.de

Lab
- We provide laptops
- Log in to the laptops with the local "hpclab" account (your own PC pool accounts might also work)
- Use X-Win32 to log in to the cluster (use "hpclab0Z" or your own account)
- Log in to the labZ node (use the "hpclab0Z" account)
- Feel free to ask questions

Source: D. Both, Bull GmbH