Intel Xeon Phi Training - Introduction Rob Allan Technology Support Manager The Hartree Centre

The Hartree Centre, based at STFC's Daresbury Laboratory, was founded in 2012 principally to stimulate the uptake of HPC and to support UK industry. We work with a number of high-profile partners, including IBM, Intel, Unilever and Rolls-Royce, and operate a wide variety of state-of-the-art computation, data analysis and "cognitive" systems. All access is project based: you need to have a project set up first. This can be arranged in a number of ways:
Pay for access
Be part of a commercial project
Be part of an academic "call for proposals"
Be part of a development project
Enquiries:

The Iden Cluster All our computer systems are named after obsolete British car manufacturers. Iden comprises:
84 nodes, each with 24 Intel IvyBridge cores
64 GB memory per node (host processor)
42 nodes with an attached Xeon Phi 5110P (60 cores)
8 GB memory per Xeon Phi
Mellanox FDR InfiniBand between hosts
Iden and Napier are attached to the Calcott login and management cluster. They share common login nodes and the LSF batch queue system. For information, see:
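Once logged in (see the following slides), the standard LSF commands give a quick view of the shared batch system (a minimal sketch; the queue and host names reported are site specific):
bqueues              # list the available LSF queues, including the Xeon Phi queue used later
bhosts | grep idb1   # list the batch hosts in chassis 1a/1c, i.e. the Phi-equipped Iden nodes
bjobs                # show your own running and pending jobs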

Logging onto the Workstations
1) Your course id is of the form xxxyy-dxp75
2) Look up your password in SAFE
3) Log in using the course id and password. Tip: if you are asked to change your password, just put it in backwards
4) Open a terminal (right click in the background and select "Open in Terminal")
Now initialise the environment for the Intel software:
source ~/../shared/intel/parallel_studio_xe_ /psxevars.sh intel64
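As a quick check that the Intel environment has been picked up, the compiler commands should now be on your PATH (a minimal sketch; the exact version strings reported will depend on the installed Parallel Studio release):
# confirm the Intel C/C++ and Fortran compilers are visible
which icc ifort
icc --version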

Logging onto Iden
1) From the workstations, open a terminal (right click in the background and select "Open in Terminal")
2) ssh into one of the Phase-2 login nodes (generic name phase2.wonder.hartree.stfc.ac.uk):
phase2-login1.wonder.hartree.stfc.ac.uk
phase2-login2.wonder.hartree.stfc.ac.uk
phase2-login3.wonder.hartree.stfc.ac.uk
phase2-login4.wonder.hartree.stfc.ac.uk
Now initialise the environment for the Intel software for the Xeon Phi:
cp ~/../shared/bashconfig ~/.
source ~/bashconfig
module load intel/ _mic
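Putting these steps together from a workstation terminal (a sketch; replace xxxyy-dxp75 with your own course id, and pick the exact Intel MIC module version from the list that module avail prints):
# log in to the generic Phase-2 login alias with your course id
ssh xxxyy-dxp75@phase2.wonder.hartree.stfc.ac.uk
# once logged in, see which Intel modules are available and what is already loaded
module avail intel
module list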

Accessing a Xeon Phi
There are 4 steps to physically access a Xeon Phi on the system:
1) SSH onto a login node (as above)
2) Get interactive access to a Phi host node (see next)
3) Set up a key pair (see next)
4) SSH to the MIC co-processor
The Xeon Phis are attached to 42 of the nodes of Iden, in chassis "1A" or "1C". Host nodes have alternating numbers idb1a01 to idb1a41 or idb1c02 to idb1c42.

Interactive Shell on a Host Node
An interactive shell on an available host node can be started as follows:
bsub -q phiq -U -Is /bin/bash
source /etc/profile.d/modules.sh
After a few seconds this should return a command prompt. If it does not, the node may already be in use (access is exclusive). To check, use a command like this:
bjobs -u all | grep idb1
which will identify any users of chassis 1a or 1c using the phiq. Note that the second line sources the module environment, which will be needed later.
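For non-interactive work, the same resources can be requested through an LSF batch script instead (a minimal sketch; the job name, wall-clock limit, output file and ./my_phi_app are placeholders, and any reservation argument to -U should match the one used for the interactive session):
#!/bin/bash
#BSUB -q phiq                # submit to the Xeon Phi queue, as above
#BSUB -J phi_test            # job name (placeholder)
#BSUB -W 00:30               # wall-clock limit of 30 minutes
#BSUB -o phi_test.%J.out     # standard output file; %J expands to the job id
source /etc/profile.d/modules.sh   # module environment, as in the interactive case
./my_phi_app                 # placeholder for your executable
Submit the script with bsub < myscript.lsf and monitor it with bjobs.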

SSH onto the Phi
To SSH into a MIC co-processor you will need a key pair, if you do not already have one.
# generate a key pair (press Return at the passphrase prompts) and add it to authorized_keys
cd ~/.ssh
ssh-keygen -f mic.key -t rsa -b 1024
cat mic.key.pub >> authorized_keys
# add the IdentityFile line to ~/.ssh/config
echo "IdentityFile ~/.ssh/mic.key" >> config
# ensure that only you can read it
chmod 600 config
# now try logging onto the MIC
ssh `hostname`-mic0
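Once you can SSH to the co-processor, a quick way to exercise it is to build and run a trivial native MIC binary from the host node (a sketch; hello.c is a placeholder, and it is assumed that the Intel compiler module loaded earlier provides icc with its -mmic cross-compilation option):
# create a trivial C program on the host
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("hello from the Xeon Phi\n"); return 0; }
EOF
# cross-compile it for the MIC architecture; -static-intel avoids missing runtime libraries on the card
icc -mmic -static-intel hello.c -o hello.mic
# copy the binary onto the co-processor and run it natively there
scp hello.mic `hostname`-mic0:
ssh `hostname`-mic0 ./hello.mic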

Copying Files
The workstations are connected to a file server and have a shared file system on /gpfs/home/training. You can log onto any workstation and your home directory will be /gpfs/home/HCT00032/dxp75/xxxyy-dxp75. On the iDataPlex cluster your home file system is also /gpfs/home/HCT00032/dxp75/xxxyy-dxp75, although this is not the same GPFS system, so files have to be copied between the two. To copy files from the workstation to Iden, first open a local terminal on the workstation (right click in the background and select "Open in Terminal"), then do the following:
cd ~
scp -r * phase2.hartree.stfc.ac.uk:.
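To copy results back in the other direction, run scp from a workstation terminal again, this time pulling from Iden (a sketch; results is a placeholder directory name):
# copy a results directory from your Iden home back to the workstation
scp -r phase2.hartree.stfc.ac.uk:results ~/
# or copy a single file
scp phase2.hartree.stfc.ac.uk:results/output.dat ~/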