COMP7330/7336 Advanced Parallel and Distributed Computing
Setting Up Your Programming Environment
Dr. Xiao Qin
Auburn University
http://www.eng.auburn.edu/~xqin
xqin@auburn.edu

Lecture plan:
31 min: Lec07b-MPI (10 slides)
10 min: Demonstration using PuTTY
    Login: ssh aubxxq@dmc.asc.edu
    module load openmpi
    mpicc hello.c -o hello
    mpirun -np 4 hello
9 min: This lecture note (Lec07b-Alabama Supercomputer)

Login
The DMC, SGI Ultraviolet, and SGI Altix can be accessed using secure shell.
Windows machines: use PuTTY.
Secure shell is installed on many Linux and Unix machines. Command line:
    ssh user_name@dmc.asc.edu
    ssh user_name@uv.asc.edu
    ssh user_name@altix.asc.edu
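For example, from a Linux or Unix terminal a session might start like this (the username aubxxq is the placeholder used in the demonstration notes above; substitute your own ASC account):
    ssh aubxxq@dmc.asc.edu            # open a shell on the DMC login node
    scp hello.c aubxxq@dmc.asc.edu:   # run on your local machine to copy a source file to your DMC home directory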

Running MPI Interactively on DMC
Option 1) Use the login node
Work run on the login node is limited to 10 minutes of CPU time and a small amount of memory. You can run MPI jobs like this:
    mpirun -np 4 myprogram
The login node has 16 cores. If 20 students try to run 4-core jobs at the same time, it will bog down the node to the point of making it unusable for everyone.
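Before starting a job on the shared login node, it is worth checking how busy it already is. These are standard Linux commands, not anything specific to ASC:
    uptime    # load averages; compare them against the node's 16 cores
    top       # live view of what is currently running (press q to quit)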

Running MPI Interactively on DMC
Option 2) Use a queue
Open an interactive session through the queue:
    qsub -I -q small-parallel -r n -l nodes=4,mem=2gb,partition=dmc
The terminal session hangs until the job starts on the compute nodes; it may wait anywhere from a minute to a number of hours.
ASC provides a "class" queue and reserves some processors for it, so you don't have to wait behind research work (but you might wait for the other students in the same class).
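The same resources can also be requested non-interactively with a batch script. The sketch below simply mirrors the qsub options shown above as #PBS directives; the script and program names are placeholders, and the exact syntax should be checked against the ASC user manual:
    #!/bin/bash
    #PBS -q small-parallel
    #PBS -r n
    #PBS -l nodes=4,mem=2gb,partition=dmc
    cd $PBS_O_WORKDIR        # start in the directory the job was submitted from
    module load openmpi
    mpiexec myprogram
Submit it with qsub myscript.sh and check its progress with qstat.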

Running MPI Interactively on DMC
Option 2) Use a queue (cont.)
Once the interactive session has started on the compute nodes, you may have to load the module for the MPI you are using again. Then run the MPI job like this:
    mpiexec myprogram
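Putting the two slides together, the full interactive workflow is roughly the following (all commands, queue names, and resource limits are the ones shown above):
    qsub -I -q small-parallel -r n -l nodes=4,mem=2gb,partition=dmc
    # ...wait for a prompt on a compute node, then:
    module load openmpi
    mpiexec myprogram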

Compile and Run an MPI Application
Load the module for the MPI:
    module load openmpi
Compile the program:
    mpicc myprogram.cpp -o myprogram
Run the program:
    mpirun -np 4 myprogram
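The demonstration notes refer to a hello.c program but the slides do not show it. A minimal MPI hello-world along these lines (an illustrative sketch, not taken from the slides) is enough to test the compile and run steps:
    /* hello.c -- minimal MPI test program (illustrative sketch) */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        MPI_Init(&argc, &argv);                /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* id of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                        /* shut down the MPI runtime */
        return 0;
    }
Compiled with mpicc hello.c -o hello and launched with mpirun -np 4 hello, it prints one line from each of the four MPI processes.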

HPC User Manual
http://www.asc.edu/html/man.pdf