High Performance Compute Cluster
Jia Yao
Director: Vishwani D. Agrawal
April 13, 2012

Outline
- Computer Cluster
- Auburn University vSMP HPCC
- How to Access HPCC
- How to Run Programs on HPCC
- Performance

Computer Cluster
- A computer cluster is a group of linked computers
- The computers work together so closely that in many respects they can be viewed as a single computer
- The components are connected to each other through fast local area networks

Computer Cluster
- Diagram: User Terminals → Head Node → Compute Nodes

Auburn University vSMP HPCC
- vSMP HPCC: Virtual Symmetric Multiprocessing High Performance Compute Cluster
- Dell M1000E blade chassis server platform
- 4 M1000E blade chassis "fat nodes"
- Each chassis holds 16 M610 half-height Intel dual-socket blades
- Each blade: 2 quad-core Nehalem 2.80 GHz CPUs, 24 GB RAM, two 160 GB SATA drives, and a single operating system image (CentOS)

Auburn University vSMP HPCC
- Each M610 blade server is connected internally to its chassis via a Mellanox Quad Data Rate (QDR) 40 Gb/s InfiniBand switch, used to create the ScaleMP vSMP image
- The M1000E fat nodes are interconnected via 10 GbE Ethernet using M6220 blade switch stacking modules, for parallel clustering with OpenMPI/MPICH2
- Each M1000E fat node also has independent 10 GbE Ethernet connectivity to the Brocade TurboIron 24X core LAN switch
- Each fat node provides 128 x 2.80 GHz Nehalem cores; in total the cluster offers 512 x 2.80 GHz cores, 1.536 TB of shared RAM, and 20.48 TB of raw internal storage

Auburn University vSMP HPCC
- (Image-only slide)

How to Access HPCC via SecureCRT
- Connect to the cluster with an SSH client such as SecureCRT; connection details are on the access information page (access_information.html)
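
If you are not using SecureCRT, the equivalent command-line login is sketched below; the host name hpcc.auburn.edu is an assumption (use the address given on the access information page), and your_au_user_id stands for your Auburn user id.

    # SSH login to the cluster head node (host name assumed, not taken from the slide)
    ssh your_au_user_id@hpcc.auburn.edu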

How to Run Programs on HPCC
After successfully connecting to HPCC:
Step 1
- Save the .rhosts file in your H drive
- Save the .mpd.conf file in your H drive
- Edit .mpd.conf according to your user id: secretword = your_au_user_id
- chmod 700 .rhosts
- chmod 700 .mpd.conf
- The .rhosts and .mpd.conf files can be downloaded from the course web page
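
As a command-line sketch of Step 1, assuming both files have already been downloaded into your home (H) directory; the secretword value simply follows the slide's instruction:

    # Put your user id into .mpd.conf as the secret word (overwrites the downloaded template; adjust if it contains other lines)
    echo "secretword = your_au_user_id" > ~/.mpd.conf
    # Restrict permissions on both configuration files, as instructed
    chmod 700 ~/.rhosts ~/.mpd.conf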

How to Run Programs on HPCC
Step 2
- Register your username on all 4 compute nodes by logging in to each node and exiting:
  ssh compute-1, then exit
  ssh compute-2, then exit
  ssh compute-3, then exit
  ssh compute-4, then exit
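
The same registration can be done with a short loop; the node names compute-1 through compute-4 are taken from the slide:

    # Log in to each compute node once and exit immediately, so the account is registered on all 4 nodes
    for n in 1 2 3 4; do
        ssh compute-$n exit
    done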

How to Run Programs on HPCC
Step 3
- Save the pi.c file in your H drive
- Save the newmpich_compile.sh file in your H drive
- Save the mpich2_script.sh file in your H drive
- chmod 700 newmpich_compile.sh
- chmod 700 mpich2_script.sh
- All three files can be downloaded from the course web page
- Run newmpich_compile.sh to compile pi.c (a sketch of the compile step follows below)
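
The contents of newmpich_compile.sh are not shown on the slide; a minimal sketch of the compile step it performs might look like this, where the mpicc wrapper and the output name pi are assumptions:

    #!/bin/bash
    # Compile the MPI pi program with the MPICH2 compiler wrapper
    mpicc -O2 -o pi pi.c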

How to Run Programs on HPCC
Step 4
- Edit the mpich2_script.sh file as described below, then submit your job to HPCC with: qsub ./mpich2_script.sh
- Edit this line to vary the number of nodes and processes per node, e.g.
  #PBS -l nodes=4:ppn=10,walltime=00:10:00
  #PBS -l nodes=2:ppn=2,walltime=01:00:00
- Add this line, where folder_name is the folder in which you saved pi.c, newmpich_compile.sh, and mpich2_script.sh:
  #PBS -d /home/au_user_id/folder_name
- Put your user id into this line to receive e-mail when the job is done:
  #PBS -M
- At the end of the file, add this line:
  data >> out
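
Putting the Step 4 edits together, a sketch of mpich2_script.sh could look like the following. The #PBS lines follow the slide; the mpiexec invocation, the executable name pi, and the use of date for the last line (shown as "data >> out" on the slide) are assumptions:

    #!/bin/bash
    #PBS -l nodes=4:ppn=10,walltime=00:10:00   # resources: 4 nodes, 10 processes per node, 10-minute wall time
    #PBS -d /home/au_user_id/folder_name       # working directory holding pi.c and the two scripts
    #PBS -M your_au_user_id                    # where to send mail when the job is done

    # Launch the compiled pi program on the allocated processors (assumed command, not shown on the slide)
    mpiexec -n 40 ./pi
    date >> out                                # append a timestamp to the out file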

How to Run Programs on HPCC
Step 5
- After job submission you will receive a job number
- Check that your job was submitted successfully by running pbsnodes -a and confirming that your job number is listed
- Wait for the job to finish and record its execution time from the out file
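
A typical submit-and-monitor sequence then looks like the sketch below; the example job number and the qstat alternative are illustrative additions, not taken from the slide:

    qsub ./mpich2_script.sh      # prints a job number such as 1234.hpcc
    pbsnodes -a                  # slide's check: confirm your job number appears in the node listing
    qstat -u $USER               # assumed alternative: list your own queued and running jobs
    cat out                      # after the job finishes, read the recorded timing from the out file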

Performance
- Table of measured results (columns: Run, Processors, Time in Minutes)

Performance
- Run time curve (plot)

Performance
- Speedup curve (plot)
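
For reference, the speedup plotted here is conventionally computed from the run times as

    S(p) = T(1) / T(p)

where T(p) is the measured run time on p processors; this definition is added for clarity and does not appear on the original slide.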

References
- "High Performance Compute Cluster," Abdullah Al Owahid, all10/course.html