Introduction to Hartree Centre Resources: IBM iDataPlex Cluster and Training Workstations. Rob Allan, Scientific Computing Department, STFC Daresbury Laboratory.

Introduction to Hartree Centre Resources: IBM iDataPlex Cluster and Training Workstations. Rob Allan, Scientific Computing Department, STFC Daresbury Laboratory. With acknowledgements to Crispin Keable (IBM).

Agenda
09: :00  Coffee (in the breakout room next door to the training suite)
10: :15  Roger BARLOW - Introduction to the Workshop - Day 1
10: :30  Rob ALLAN - Basic use of the system and job submission
10: :40  Peter WILLIAMS - Introduction to running ELEGANT on Sid
10: :50  David DUNNING - Introduction to using GENESIS on Sid
10: :00  Practical session for ELEGANT and GENESIS
13: :00  Lunch (in the breakout room next door to the training suite)
14: :15  Jonny SMITH - Introduction to running VORPAL on Sid
14: :30  Practical session running VORPAL (also ELEGANT and GENESIS if desired)
Coffee is also available at 11:00 and 15:00 in the breakout room next door to the training suite for anyone who needs some refreshment during the practical sessions. Please note that no food or drink is allowed in the training suite at any time!

Hartree Centre Resources
- Modern data centre with water cooling and room for expansion
- 7 racks of IBM BlueGene/Q with 114,688 cores - 1.46 Pflop/s peak (#13 in the Top500)
- IBM iDataPlex with 8,192 cores - 170 Tflop/s peak (#114 in the Top500), including 8x 4 TB ScaleMP virtual shared-memory systems
- ~6 PB of DDN disk storage with GPFS
- ~15 PB tape library
- Visualisation systems at Daresbury and Rutherford Appleton Laboratories, complementing the Virtual Engineering Centre
- Training and conference facilities

Hartree Centre HPC Resources

How to Log On
Log on to a workstation:
1) under "other", enter your course id, e.g. xxxyy-jpf03
2) enter the login password that you have been given
Log on to the iDataPlex:
1) open the iDataPlex window, or
2) open a terminal (right click the mouse) and type: ssh idpxlogin3
From home (using your SSH keys), ssh to the external login node, as in the sketch below.
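
A minimal sketch of an external login, assuming your SSH public key has already been registered with the Hartree Centre; the key file path and course id are examples, and the hostname is the external login node mentioned in the batch-queue slide later in this deck:

# from your home machine, log in to the external login node with your key
ssh -i ~/.ssh/id_rsa xxxyy-jpf03@wonder.hartree.stfc.ac.uk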

Transferring Files
Open a terminal (right click the mouse) and type: cd ~
This should take you to, e.g., /gpfs/home/training/jpf03/xxxyy-jpf03
To copy a file from the iDataPlex do: scp idpxlogin3:<remote_file> .
Data can be copied from USB memory sticks by plugging them into the keyboard port on the workstation. They should automount as /media/???
You can copy data to the shared training file system or to the iDataPlex.
From home (using your SSH keys) you can scp to and from the external login node, e.g. as in the sketch below.
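
An illustrative sketch; the file and directory names are made up for the example:

# copy a result file from your iDataPlex home directory to the workstation
scp idpxlogin3:~/runs/results.out .
# copy an input file from the workstation to the iDataPlex
scp input.ele idpxlogin3:~/runs/
# from home, via the external login node (course id and paths are placeholders)
scp xxxyy-jpf03@wonder.hartree.stfc.ac.uk:~/runs/results.out .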

Remote Connectivity

Base & Data Intensive Systems (IBM iDataPlex): annotated chassis diagram showing the 2U chassis (front and rear views), fan pack (4x 80 mm fans), 2x CFF PSUs, simple-swap storage supporting 2x 3.5", 4x 2.5", or 8x 1.8" SSD, 2x FHHL PCIe x16 adapters per node, and an IB mezzanine card.

June MES upgrade
- 7 racks
- 512 nodes, each with 2 processors = 16 cores per node (Intel Sandy Bridge)
- 8,192 cores in total
- InfiniBand communication network
- GPFS file system

Managed Applications
We use environment modules to manage installed applications on the iDataPlex. Try:
module avail
Of interest here might be:
genesis/2.0
pelegant-gcc/
vorpal/5.2.2
vorpal/6.0
To use Genesis do:
module load genesis/2.0
module list
should show you that it is correctly loaded.
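
A short example session on the login node, assuming the module names listed above; swapping to VORPAL for the afternoon is just an illustration:

# see everything that is installed
module avail
# load Genesis and confirm it is in your environment
module load genesis/2.0
module list
# later, swap to VORPAL for the afternoon practical
module unload genesis/2.0
module load vorpal/6.0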

Using the LSF Batch Queues
Using idpxlogin3 or wonder.hartree.stfc.ac.uk will take you to a login node on the iDataPlex cluster. You can only edit files (using emacs or vi) and compile programs there. To run jobs you need to use the LSF batch queues, which submit jobs to the 512 compute nodes using a fair-share method.
"bqueues" - shows what queues are available
"bsub < testjob.bsub" - submits a job as defined in the script "testjob.bsub". Note: be careful to remember the "<" re-direction symbol
"bjobs -u all -w" - shows all jobs, their jobid and their status (PEND, RUN, etc.)
"bjobs -l <jobid>" - shows detailed information about a job
"bkill <jobid>" - kills the job if you've made a mistake or it has hung
To get more information type "man bjobs" etc.
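
A typical submit-and-monitor cycle on the login node might look like this; the job id 123456 is invented for the example:

# see which queues are available, then submit the example script
bqueues
bsub < testjob.bsub
# check the state of your job (PEND, RUN, ...)
bjobs -u all -w
bjobs -l 123456
# remove the job if something has gone wrong
bkill 123456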

BSUB - Job Description Syntax
The following tutorials will give detailed examples for each application. Here are some general notes based on the Genesis example.
#!/bin/bash                   # use the bash shell
#BSUB -o stdout.%J.txt        # stdout file; %J puts the jobid in as a string
#BSUB -e stderr.%J.txt        # stderr file
#BSUB -R "span[ptile=16]"     # tell the scheduler to put 16 processes per node
#BSUB -n 32                   # tells the scheduler a total of 32 processes are required (i.e. 2 nodes in this case)
#BSUB -J Genesis              # use "Genesis" as the display name for bjobs
#BSUB -W 30                   # job expected to run in less than 30 minutes wall time
Other commands in the script file use bash shell syntax. For more information try "man bsub".
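
Putting those directives together, a complete submission script might look like the sketch below; the MPI launcher, executable name and input file name are assumptions for illustration only, and the application-specific tutorials give the exact command lines:

#!/bin/bash
#BSUB -o stdout.%J.txt        # stdout file; %J expands to the job id
#BSUB -e stderr.%J.txt        # stderr file
#BSUB -R "span[ptile=16]"     # 16 processes per node
#BSUB -n 32                   # 32 processes in total, i.e. 2 nodes
#BSUB -J Genesis              # display name for bjobs
#BSUB -W 30                   # less than 30 minutes wall time expected

# set up the application environment
module load genesis/2.0

# launch the parallel run (launcher, executable and input names are placeholders)
mpirun genesis < my_input.in

Save this as, e.g., testjob.bsub and submit it with "bsub < testjob.bsub" as described above.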

More Information
Web site:
Community Portal:
Document Repository: "Documents"
Status and User Guide:
Training and Events: "Clearing House"
JISCMail group:
Help desk: