Introduction to the HPCC
Dirk Colbry, Research Specialist, Institute for Cyber Enabled Research

HPCC Online Resources
– HPCC home
wiki.hpcc.msu.edu – Public/Private Wiki
forums.hpcc.msu.edu – User forums
rt.hpcc.msu.edu – Help desk request tracking
mon.hpcc.msu.edu – System monitors

HPCC Cluster Overview
Linux operating system
The primary interface is text based, through Secure Shell (ssh)
All machines in the main cluster are binary compatible (compile once, run anywhere)
Each user has 50 GB of personal hard drive space: /mnt/home/username/
Users have access to 33 TB of scratch space: /mnt/scratch/username/
A scheduler is used to manage jobs running on the cluster
A submission script tells the scheduler what resources a job needs and how to run it (see the sketch below)
A module system manages the loading and unloading of software configurations
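
A submission script is just a text file of scheduler directives followed by the commands to run. Below is a minimal sketch assuming a PBS/Torque-style batch system; the resource values, module name, and program name (myprogram) are placeholders rather than HPCC defaults, so check the wiki for the exact directives the scheduler expects.

    #!/bin/bash
    # Minimal sketch of a submission script (PBS/Torque-style directives assumed)
    #PBS -N example_job              # job name
    #PBS -l nodes=1:ppn=4            # request 1 node with 4 processor cores
    #PBS -l walltime=01:00:00        # request 1 hour of wall time
    #PBS -l mem=2gb                  # request 2 GB of memory

    cd ${PBS_O_WORKDIR}              # start in the directory the job was submitted from
    module load GNU                  # load whatever software the job needs (placeholder name)
    ./myprogram input.txt            # run the program against its input file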

Gateway
Access to the HPCC is primarily through the gateway machine, reached via ssh.
Access to all HPCC services uses your MSU username and password.
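
From a Linux or Mac terminal (Windows clients are listed on the Getting Connected slide), the connection looks roughly like the line below; the hostname hpc.msu.edu is taken from that slide, and username stands for your MSU NetID.

    # Log in to the HPCC gateway with your MSU username and password
    ssh username@hpc.msu.edu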

HPCC System Diagram

Hardware Time Line (year, name, description, cores and memory per node)
2005 – green – 1.6 GHz Itanium2 (very old), shared memory, 128 cores total
Main Cluster:
2005 – amd05 – Dual-core 2.2 GHz AMD Opterons, 4 cores, 8 GB per node
2007 – intel07 – Quad-core 2.3 GHz Xeons, 8 cores, 8 GB per node
2008 – intel08 – Sun x4450s (fat node), 16 cores, 64 GB per node
2009 – amd09 – Sun Fire X4600 Opterons (fat node), 32 cores, 256 GB per node
We are currently investigating two new purchases for 2009/2010: a Graphics Processing Unit (GPU) cluster and a new general-purpose large cluster.

HPCC System Diagram

Cluster Developer Nodes
Developer nodes are accessible from the gateway and are used for testing.
ssh dev-amd05 – same hardware as amd05
ssh dev-intel07 – same hardware as intel07
ssh dev-amd09 – same hardware as amd09
We periodically have some test boxes. These include:
ssh dev-intel09 – 8-core Intel Xeon with 24 GB of memory
ssh gfx-000 – Nvidia graphics processing node
Jobs running on the developer nodes should be limited to two hours of walltime. Developer nodes are shared by everyone.
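
In practice, a quick test run on a developer node looks something like the sketch below; myprogram and its input file are placeholders, and the test should stay well under the two-hour courtesy limit.

    # From the gateway, hop to a developer node that matches the target cluster hardware
    ssh dev-intel07

    # Run a short, timed test of the program you plan to submit
    time ./myprogram small_test_input.txt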

HPCC System Diagram

Available Software
Center-supported development software: Intel compilers, OpenMP, OpenMPI, MVAPICH, TotalView, MKL, PathScale, GNU, ...
Center-supported research software: MATLAB, R, Fluent, Abaqus, HEEDS, Amber, BLAST, LS-DYNA, Star-P, ...
Center-unsupported software (module use.cus): GROMACS, CMake, CUDA, ImageMagick, Java, OpenMM, SIESTA, ...

Steps in Using the HPCC
1. Connect to the HPCC
2. Transfer required input files and source code
3. Determine required software
4. Compile programs (if needed)
5. Test software/programs on a developer node
6. Write a submission script
7. Submit the job
8. Get your results and write a paper!
A rough sketch of these steps as shell commands follows the list.
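
The commands below map the steps onto a typical session. The scheduler commands (qsub, qstat) assume a PBS/Torque-style batch system, and the file, module, and script names are placeholders.

    # 1-2. Connect and transfer source code and input files (run scp from your own machine)
    scp myprogram.c input.txt username@hpc.msu.edu:/mnt/home/username/

    # 3-4. Log in, load a compiler, and compile
    ssh username@hpc.msu.edu
    module load GNU                  # placeholder compiler module
    gcc -O2 -o myprogram myprogram.c

    # 5. Test on a developer node (see the Developer Nodes slide)
    ssh dev-intel07
    ./myprogram input.txt

    # 6-7. Write a submission script (myjob.qsub) and hand it to the scheduler
    qsub myjob.qsub                  # submit the job (PBS/Torque-style command assumed)
    qstat -u username                # check the job's status in the queue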

Module System
To maximize the different types of software and system configurations available to users, the HPCC uses a module system.
Key commands (see the example below):
module avail – show available modules
module list – list currently loaded modules
module load modulename – load a module
module unload modulename – unload a module
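
For example, loading MATLAB (one of the packages from the Available Software slide) looks like this; the exact module name and capitalization on the system may differ.

    module avail                     # see which software modules exist
    module list                      # see what is currently loaded in your environment
    module load MATLAB               # load a module (name is illustrative)
    module unload MATLAB             # unload it when finished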

Getting Help
Documentation and user manual – wiki.hpcc.msu.edu
User forums – forums.hpcc.msu.edu
Contact HPCC and iCER staff for:
Reporting system problems
HPC program writing/debugging consultation
Help with HPC grant writing
System requests
Other general questions
Primary form of contact – the HPCC request tracking system: rt.hpcc.msu.edu
HPCC phone – (517) … (…am-5pm)
HPCC office – Engineering Building (…am-5pm)

Next Week - Getting Connected
Secure Shell – hpc.msu.edu (clients: PuTTY, Windows Secure Shell)
X11 server (windowing) – Xming, Cygwin
File transfers
Mapped network drives – files.hpc.msu.edu
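
As a preview of the file-transfer topic, a command-line copy from your own machine to your HPCC home directory might look like the sketch below, using the same hostname; graphical clients and the mapped network drive at files.hpc.msu.edu are alternatives.

    # Copy a local file into your home directory on the cluster
    scp mydata.txt username@hpc.msu.edu:/mnt/home/username/

    # Or browse and transfer files interactively
    sftp username@hpc.msu.edu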