NREL is a national laboratory of the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, operated by the Alliance for Sustainable Energy, LLC.

Peregrine Applications Q2
Peregrine New User Training: User Software
Chris Chang, 6/3/2015

2 Outline
o User software on Peregrine
o Configuring runtime environments
o Software stack (user perspective)
o Development tools and process
o Xeon Phi and the Future of Performance Computing
o Interactive Computing: Jupyter

3 What Do We Mean by "User Software"?
Not system software:
o NOT core OS functions
What it covers:
o Toolchains: compilers + MPI
o Libraries, etc.: mathematical, scientific, language-specific (e.g., Guile)
o Applications: source and binary installations

4 Environment Modules
Wiring up dependencies manually = pain. Modules are a system to automate:
o Adjusting paths
o Auto-loading dependencies, or detecting incompatibilities
o Choosing versions
Commands:
  module [ avail | list | purge | load Y | unload X | swap X Y ]   (X loaded; Y not loaded)
  module show X; module help X
Mind the priority when loading modules:
1. Explicit naming: impi-intel/
2. $MODULEPATH order
3. (default)
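A minimal session illustrating the commands above. This is a sketch, assuming the toolchain module names (impi-intel, openmpi-gcc) named on later slides; it only works on a system with Environment Modules installed, such as Peregrine:

```shell
module avail                          # list every module available to load
module list                           # show what is currently loaded
module load impi-intel                # load the Intel MPI + Intel compiler toolchain
module swap impi-intel openmpi-gcc    # replace one loaded toolchain with another
module show openmpi-gcc               # see what the module changes (PATH, etc.)
module purge                          # unload everything and start clean
```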

5 Maintained Modules
/nopt/nrel/apps/modules/ holds three trees: candidate → default → deprecated
o candidate: testing & verification, MAY NOT WORK
o default: production, WILL WORK
o deprecated: phasing out, MAY NOT WORK
Watch the peregrine-users list for announcements.

6 Modules Can Go Anywhere
o /projects: every allocation gets one.
o /nopt/nrel/ecom/modules: of, by, and for the Energy Community.
o $HOME: make your own!
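A sketch of the "$HOME: make your own!" option, assuming a hypothetical tool name (mytool) and an arbitrary directory layout; `module use` prepends a directory to $MODULEPATH so your personal modulefiles become visible:

```shell
mkdir -p $HOME/modules/mytool
# Write a modulefile at $HOME/modules/mytool/1.0, then:
module use $HOME/modules       # prepend your tree to $MODULEPATH
module load mytool/1.0
```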

7 Building a Software Stack
Extra Packages for Enterprise Linux (EPEL):
o "module load epel"
o Package listing via rpm; see "module help epel"
Ask for additions via HPC-Help.
(Slide diagram, stack layers from bottom to top: core OS, EPEL, toolchain, libraries/frameworks, applications.)

8 Toolchains
Dependencies exist between binary application and binary library code; communication (MPI) is particularly problematic.
"toolchain" = compiler + MPI
Three primary stacks supported:
o "Performance": impi-intel, when parallel performance is a (not the) concern. Intel Parallel Studio XE Cluster Edition.
o "Open": openmpi-gcc, good 'nuff. Open source, large community.
o "Scalable": mvapich-intel, intrepid only.
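A sketch of using one of these stacks. The source file name is a placeholder; the point is that the mpicc wrapper supplies the MPI include and link flags for whichever toolchain module is loaded:

```shell
module purge
module load openmpi-gcc                # the "Open" stack from this slide
mpicc -O2 -o hello_mpi hello_mpi.c     # wrapper adds MPI include/link flags
mpirun -np 4 ./hello_mpi               # launch 4 MPI ranks
```

Swapping the toolchain module (e.g., to impi-intel) and rebuilding is usually all that is needed to move between stacks, since the wrapper names stay the same.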

9 Libraries, Frameworks, Suites, Toolkits, Environments, ...
Between toolchains and applications:
o Modeling frameworks: OpenFOAM, FEniCS, GAMS
o Solvers/optimizers: PETSc, Dakota, Gurobi
o Math: MKL, BLAS, LAPACK, FFTW, GSL
o I/O: HDF5, netCDF, pnetCDF
o Language frameworks: Python, R, Boost, MATLAB

10 Applications
An application is either:
o a black-box transformer of inputs + configuration to outputs, or
o a self-contained environment.
(Slide figure: Input → Application → Output.)

11 Application Areas
o Development: debuggers, profilers
o Visualization & analysis: Avizo, VisIt, VirtualGL, IDL, ParaView, POV-Ray
o A wide variety of domain-specific apps, including:
  - Quantum chemistry: Gaussian, GAMESS, NWChem, VASP, Q-Chem
  - Molecular dynamics: CHARMM, AMBER, GROMACS, LAMMPS, NAMD
  - Continuum physics: ANSYS-Fluent, Star-CCM, COMSOL, MEEP, WRF
Some applications are more supported than others:
o /nopt/nrel/apps: broad anticipated or actual interest
o /nopt/nrel/ecom: narrower interest within the "Energy community" (i.e., Peregrine users)
o /projects: allocation-specific interest
o $HOME: individual interest

12 Development: Building Applications
Build tools: autotools, CMake, git, SCons
Runs are done on the compute nodes!
o What you pick up on the head nodes stays on the head nodes.
o Build software on the compute nodes.
The environment is set by modules:
o Remember to load/unload/purge toolchain & library modules before builds and runs.
See e/developer-notes for details.
(Slide diagram: library versions can differ between the head node, where you build, and the compute node, where you run; static linking vs. shared objects.)
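A sketch of the "build on the compute nodes" advice. The scheduler syntax here is an assumption (PBS-style `qsub -I`); the allocation name, walltime, module names, and source path are all placeholders:

```shell
# Request an interactive session on a compute node (PBS syntax;
# allocation and walltime are placeholders):
qsub -I -A <allocation> -l walltime=1:00:00

# On the compute node, set the environment with modules, then build:
module purge
module load impi-intel
cmake -DCMAKE_C_COMPILER=mpicc /path/to/source
make -j 8
```

Because the same modules were loaded at build time as will be loaded at run time, the shared libraries the binary picks up on the compute node match the ones it was linked against.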

13 Development: Performance
MPI
o A must if memory requirements are huge or multiple nodes are needed.
o Best if there are clear problem divisions and low communication.
o MPI is typically accessed through wrappers: mpicc, etc.
Threading (OpenMP)
o Best if separate processes almost fit into memory; a single problem of lightweight tasks; high communication.
o Threading is enabled via compiler options: -qopenmp (Intel), -fopenmp (GCC).
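The compiler-option route to threading can be sketched as follows; the source file name is a placeholder, and the flags shown are the standard Intel and GCC OpenMP switches named above:

```shell
# Intel compiler:
icc -qopenmp -O2 -o app_omp app.c
# GCC equivalent:
gcc -fopenmp -O2 -o app_omp app.c

# Control the thread count at run time:
export OMP_NUM_THREADS=8
./app_omp
```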

14 Development: Profiling & Debugging
When things go wrong (quickly or slowly):
  GDB                               Debugger
  Intel VTune                       Profiler
  Intel Trace Analyzer/Collector    Profiler
  DDT (Allinea)                     Debugger
  MAP (Allinea)                     Profiler
  perf-report (Allinea)             Profiler
  Totalview (Rogue Wave)            Debugger
  Tau                               Profiler
  PAPI                              Profiling
(Slide groups the tools by scope: primarily single-process/multiple-thread, primarily multiple-process/single-thread, and both multiple processes and multiple threads.)
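A sketch of invoking two of these tools. The application and input names are placeholders; the pattern of building with debug symbols first applies to all of them:

```shell
# Build with debug symbols and without optimization:
mpicc -g -O0 -o app app.c

# Serial debugging with GDB:
gdb --args ./app input.dat

# Allinea tools (after loading their modules):
ddt ./app input.dat        # graphical parallel debugger
map ./app input.dat        # sampling profiler
```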

15 Development: Scripting
Probably most of what you'll do; easily tuned to your domain.
Most support is available for Python:
o Batteries included
o Performance where needed
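A tiny bash example of the kind of glue scripting meant here: setting up one run directory per parameter value. The solver name is hypothetical; only the directory and input-file setup is shown:

```shell
#!/bin/bash
# Create one run directory per temperature and write its input file.
for temp in 300 400 500; do
    dir="run_T${temp}"
    mkdir -p "$dir"
    printf 'temperature = %s\n' "$temp" > "$dir/input.dat"
    # A real workflow would now run or submit the solver, e.g.:
    # my_solver "$dir/input.dat" > "$dir/output.log"
done
```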

16 Scripting Tools on Peregrine
o Shells: bash, csh, dash, ksh, ksh93, sh, tcsh, zsh, tclsh
o Languages: Perl, Octave, Lua, Lisp (Emacs, Guile), GNUplot, TclX, Java, Ruby, IDL, Python, R, .NET (Mono)
o SQL databases: Postgres, SQLite, MySQL
o Numerical/modeling: MATLAB, GAMS