Introduction to the HPCC. Jim Leikert, System Administrator, High Performance Computing Center.

Presentation transcript:

Introduction to the HPCC. Jim Leikert, System Administrator, High Performance Computing Center

HPCC Online Resources
hpcc.msu.edu – HPCC home
wiki.hpcc.msu.edu – Public/private wiki
forums.hpcc.msu.edu – User forums
rt.hpcc.msu.edu – Help desk request tracking
mon.hpcc.msu.edu – System monitors

HPCC Cluster Overview
Linux operating system.
The primary interface is text based, through Secure Shell (ssh).
All machines in the main cluster are binary compatible (compile once, run anywhere).
Each user has 50 GB of personal hard drive space: /mnt/home/username/
Users have access to 33 TB of shared scratch space: /mnt/scratch/username/ (see the quick storage check below)
A scheduler is used to manage jobs running on the cluster.
A submission script tells the scheduler the resources required and how to run a job.
A module system is used to manage the loading and unloading of software configurations.
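A quick storage check, sketched here with generic Linux tools; the cluster may also provide its own quota-reporting command, so treat this as an illustration only:

    # Space used under your 50 GB home directory
    du -sh /mnt/home/$USER
    # Free space remaining on the shared scratch filesystem
    df -h /mnt/scratch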

Gateway
Access to HPCC is primarily through the gateway machine, reached via ssh (see the login sketch below).
Access to all HPCC services uses your MSU NetID and password.
For MSU NetID information: netid.msu.edu
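A minimal login sketch. The gateway hostname was dropped from the transcript, so hpcc.msu.edu is an assumption here, and the NetID sparty is a placeholder:

    # Connect to the HPCC gateway with your MSU NetID and password
    ssh sparty@hpcc.msu.edu
    # From the gateway, hop to a developer node for testing (described below)
    ssh dev-intel07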

HPCC System Diagram

Hardware Time Line
2005 – green: 1.6GHz Itanium2 (very old), shared-memory system, 128 cores total
Main Cluster:
2005 – amd05: dual-core 2.2GHz AMD Opterons, 4 cores and 8GB of memory per node
2007 – intel07: quad-core 2.3GHz Xeons, 8 cores and 8GB of memory per node
2008 – intel08: Sun x4450s (fat node), 16 cores and 64GB of memory per node
2009 – amd09: Sun Fire X4600 Opterons (fat node), 32 cores and 256GB of memory per node
We currently have two new hardware additions for 2010: a Graphics Processing Unit (GPU) cluster (in house) and a new general-purpose large cluster (RFP/RFQ stage).

HPCC System Diagram

Cluster Developer Nodes
Developer nodes are accessible from the gateway and are used for testing.
ssh dev-amd05 – same hardware as amd05
ssh dev-intel07 – same hardware as intel07
ssh dev-amd09 – same hardware as amd09
We periodically have some test boxes. These include:
ssh dev-intel09 – 8-core Intel Xeon with 48GB of memory
ssh gfx-000 – NVIDIA graphics processing node (a permanent dev-gfx will be available soon)
Jobs running on the developer nodes should be limited to two hours of walltime; developer nodes are shared by everyone.
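A sketch of a short interactive test on a developer node; the binary name hello is a placeholder for your own program:

    # From the gateway, hop to a developer node
    ssh dev-intel07
    # Run and time a quick test; keep interactive runs well under the
    # two-hour walltime guideline, since these nodes are shared
    time ./hello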

HPCC System Diagram

Available Software
Center-supported development software: Intel compilers, OpenMP, OpenMPI, MVAPICH, TotalView, MKL, PathScale, GNU...
Center-supported research software: MATLAB, R, Fluent, Abaqus, HEEDS, Amber, BLAST, LS-DYNA, Star-P...
Center-unsupported software (module use.cus): GROMACS, CMake, CUDA, ImageMagick, Java, OpenMM, SIESTA...
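A sketch of loading a compiler toolchain and building an MPI program on a developer node; the module names intel and openmpi and the source file hello.c are assumptions, so check "module avail" for the names actually installed:

    # Load a compiler and an MPI stack (names are illustrative)
    module load intel
    module load openmpi
    # Compile once; the binary runs anywhere on the main cluster
    mpicc hello.c -o hello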

Steps in Using the HPCC
1. Connect to HPCC
2. Transfer required input files and source code
3. Determine required software
4. Compile programs (if needed)
5. Test software/programs on a developer node
6. Write a submission script (see the sketch below)
7. Submit the job
8. Get your results and write a paper!!
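A minimal submission-script sketch for steps 6 and 7. The slides do not name the scheduler, so a TORQUE/PBS-style system (qsub) is assumed here; the resource values, the script name myjob.qsub, and the program hello are illustrative:

    #!/bin/bash
    # myjob.qsub – request 1 node, 4 cores, 2GB of memory, 2 hours of walltime
    #PBS -l nodes=1:ppn=4,mem=2gb,walltime=02:00:00
    #PBS -N hello_job
    # Start in the directory the job was submitted from
    cd $PBS_O_WORKDIR
    # Load the same modules used to compile the program
    module load intel
    module load openmpi
    # Run the program on the 4 requested cores
    mpirun -np 4 ./hello

Submit it with "qsub myjob.qsub" and check its status with "qstat -u $USER".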

Module System
To maximize the different types of software and system configurations available to users, HPCC uses a module system.
Key commands (example session below):
module avail – show available modules
module list – list currently loaded modules
module load modulename – load a module
module unload modulename – unload a module
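A short example session using the commands above; the module name MATLAB is illustrative, and the exact name on the cluster may differ:

    # See what is installed and what is already loaded
    module avail
    module list
    # Load a package, use it, then unload it
    module load MATLAB
    matlab -nodisplay -r "disp(2+2); exit"
    module unload MATLAB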

Getting Help
Documentation and user manual – wiki.hpcc.msu.edu
User forums – forums.hpcc.msu.edu
Contact HPCC and iCER staff for:
Reporting system problems
HPC program writing/debugging consultation
Help with HPC grant writing
System requests
Other general questions
Primary form of contact –
Apply for an account –
HPCC request tracking system – rt.hpcc.msu.edu
HPCC phone – (517) …, am-5pm
HPCC office – Engineering Building, am-5pm