Getting Started on Emerald
Research Computing Group

Course Outline
- What is Emerald?
- Logging into Emerald
- File manipulation on Emerald
- Submitting jobs on Emerald
- Interactive programs on Emerald

Help Documentation
- Getting Started on Emerald: a general overview of Emerald for a range of users
- Short Course - Getting Started on Emerald: detailed notes for beginning Emerald users

What is Emerald?
- A 352-processor Linux cluster maintained by the Research Computing Group
- Appropriate for all users regardless of expertise level
- Other servers: Cedar/Cypress (128-processor SGI Altix), Topsail (4,160-processor Dell Linux cluster)
- Mass Storage account access

Advantages of Using Emerald
- High performance
- Large capacity
- Parallel processing
- Many available software packages
- Variety of compiling options

Emerald Linux Cluster
[Photo of the Emerald Linux cluster]

Distributed vs. Shared Memory
- Shared memory: a single address space; all processors have access to a pool of shared memory (examples: Yatta, Cedar/Cypress). Methods of memory access: bus and crossbar.
- Distributed memory: each processor has its own local memory; message passing must be used to exchange data between processors (examples: Emerald, Topsail).
[Diagram: a shared-memory system with CPUs attached to memory over a bus, and a distributed-memory system with per-CPU memory connected by a network]

Logging Into Emerald
- UNIX/Linux/OS X: ssh (see the sketch below)
- Windows: SSH Secure Shell
  - Setting up a profile for Emerald
  - Forwarding X11 packets
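
A minimal sketch of logging in from a UNIX/Linux/OS X terminal with X11 forwarding enabled; the hostname is a placeholder (not given on these slides), so substitute the actual Emerald login host:

  # -X forwards X11 so interactive graphical programs can display locally
  ssh -X my_onyen@<emerald-login-host>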

Home and Work Directories on Emerald
- Home directory: /afs/isis/home/m/y/my_onyen/
  - 250 MB quota
  - ~/private/
  - Files backed up daily [ ~/OldFiles ]
  - Space quota/usage in the home directory: fs lq
- Work directory: /netscr/my_onyen/
  - No space limit, but periodically cleaned
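
For example, a quick check of home-directory usage before moving to scratch space (a sketch; fs lq is the AFS "list quota" command named above):

  # Report quota and current usage for the volume holding your home directory
  fs lq ~
  # Do large or temporary work in scratch space rather than the AFS home directory
  cd /netscr/my_onyen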

File Manipulation Commands
- SSH Secure File Transfer
- Copy files: the cp command
  - cp /afs/isis/depts/atn/rcg/example_code/Gaussian/water.com /netscr/my_onyen/.
  - cp /afs/isis/depts/atn/rcg/example_code/sas/test.sas /netscr/my_onyen/.
  - cp -r for directories: cp -r ~/private/TestDirectory .
- Move files: the mv command
  - mv ~/private/testfile.txt .

File Manipulation Commands (continued)
- Tar archives
  - To create a tar file: tar -cvzf TestDirectory.tgz ./
  - To list a tar file's table of contents: tar -tvzf TestDirectory.tgz
  - To untar a tar file: tar -xvzf TestDirectory.tgz

Submitting Jobs: LSF and Packages
- LSF (Load Sharing Facility)
  - Fairly distributes compute nodes among users
  - Limit of 60 processors per user
- Packages: ipm commands
  - ipm add (ipm a)
  - ipm remove (ipm r)
  - ipm query (ipm q)
  - Available packages are listed on the Research Computing web site
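
A short sketch of typical ipm usage; the package name pgi is only an example, taken from the compiler table later in these slides:

  ipm add pgi       # or: ipm a pgi  -- add the package to your environment
  ipm query         # or: ipm q     -- query package information
  ipm remove pgi    # or: ipm r pgi -- remove the package when no longer needed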

Details of LSF
[Diagram: a job submitted with bsub travels from the submission host (LIM, batch API) through the master host's queue (MLIM, MBD) to an execution host (SBD, child SBD, LIM, RES), which runs the user job; load information flows back from the other hosts]
- LIM: Load Information Manager
- MLIM: Master LIM
- MBD: Master Batch Daemon
- SBD: Slave Batch Daemon
- RES: Remote Execution Server

Submitting Jobs: the bsub Command
- All job files must be in /netscr/my_onyen/
- Syntax: bsub [-bsub_opts] executable [-exec_opts]
- Queues: see the bqueues command (e.g., week, idle)
- bsub -o: e.g., bsub -o out.%J
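
Putting those pieces together, a minimal submission sketch (the week queue and blade resource come from these slides; my_prog is a hypothetical executable):

  cd /netscr/my_onyen
  # Submit my_prog to the week queue on a blade node, writing output to out.<JobID>
  bsub -q week -R blade -o out.%J ./my_prog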

Submitting Jobs: Following Job Progress
- bjobs: bjobs -l JobID shows the current status of a job
- bhist: bhist -l JobID shows the job's history
- bkill: bkill JobID ends a job prematurely
- bfree
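
For example, tracking a job whose ID is 12345 (a hypothetical job number; bfree is a local utility named on the slide, and the description given in the comment below is an assumption):

  bjobs -l 12345    # detailed current status of the job
  bhist -l 12345    # history of the job's state changes
  bkill 12345       # cancel the job if it is no longer needed
  bfree             # local script, presumably reporting free processors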

Submitting Jobs: Specialty Scripts
- bsas
  - bsub -q week -R blade sas program.sas
  - Example: bsas test.sas
- bmatlab
  - bsub -q week -R blade matlab -nodisplay -nojvm -nosplash program.m -logfile program.log
  - Example: bmatlab test.m

Compiling on Emerald
- Compilers
  - FORTRAN 77/90/95
  - C/C++
- Parallel computing
  - OpenMP
  - MPI (MPICH, LAM/MPI, MPICH-GM)
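
As a rough sketch of the MPI path (assumptions not taken from these slides: MPICH's mpif90 wrapper is in your environment, parallel jobs request slots with bsub -n, and ranks are launched with mpirun; the exact launch procedure on Emerald may differ):

  # Compile an MPI Fortran program with the MPICH wrapper (hypothetical file name)
  mpif90 mpi_prog.f90 -o mpi_prog
  # Request 4 processors in the week queue and start the 4 MPI ranks
  bsub -q week -n 4 -R blade mpirun -np 4 ./mpi_prog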

Compiling Details on Emerald

  Compiler          Package name               Command(s)
  Intel             intel_fortran, intel_CC    ifort, icc
  Portland Group    pgi                        pgf77, pgf90, pgcc, pgCC
  Absoft            profortran                 f77, f90
  GNU               gcc                        g77, gcc, g++

Compiling Details on Emerald (continued)
- Add a compiler to your working environment: ipm add package_name
- Compile the code: command code.f -o executable
- Run the executable on a compute node with the bsub command: bsub -q week -R blade executable
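
A minimal end-to-end sketch using the Portland Group package from the table above (hello.f is a hypothetical source file):

  ipm add pgi                      # make the pgi compilers available
  pgf77 hello.f -o hello           # compile the Fortran source
  bsub -q week -R blade ./hello    # run the executable on a compute node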

Submitting Jobs: Job Output
- By default, job output is sent to you by e-mail
- bsub -o: output saved to a file in the working directory
- bsub -u: output sent to the specified e-mail address
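
For example (a sketch; the executable and the address are placeholders):

  # Write standard output to out.<JobID> in the working directory
  # and also send the job report to the given e-mail address
  bsub -q week -R blade -o out.%J -u <address> ./my_prog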

Interactive Jobs: Setup
- X Windows
  - Linux/OS X: X11 client
  - Windows: X-Win32
    - Offered on the UNC Software Acquisition site
    - Port forwarding in SSH Secure Shell
    - Setting up a session in X-Win32

Interactive Jobs: Submission
- The -Ip option
  - bsub -q int -R blade -Ip sas
  - bsub -q int -R blade -Ip gv
- Specialty scripts
  - xsas
  - xstata

Contacting Research Computing
- For assistance with Emerald, please contact the Research Computing Group:
  - Phone: HELP
  - Submit a help ticket online