Using Maali to Efficiently Recompile Software Post-CLE Upgrades on a CRAY XC System. Chris Bording, Chris Harris and David Schibeci, Pawsey Supercomputing Centre.


Using Maali to Efficiently Recompile Software Post-CLE Upgrades on a CRAY XC System Chris Bording, Chris Harris and David Schibeci Pawsey Supercomputing Centre, Perth, Western Australia

Maali Origins The Pawsey Supercomputing Centre, née iVEC, has been one of the fastest-growing petascale HPC centres (measured in FLOPS) over the last five years. It has gone from less than 10 Tflops on a single system to six systems, including three Cray XC systems, one of which, Magnus, is ranked #41 on the Top500.

In the beginning there were two (not-a-Cray) systems. EPIC: 800 nodes (9,600 cores), Intel X5660 Westmere, 24 GB memory, QDR InfiniBand, CentOS 6.3, Lustre, PBS Pro 11.3. Fornax: 96 nodes (1,152 cores), Intel X5650 Westmere, 72 GB memory, 7 TB local disk per node, one NVIDIA Tesla C2075 per node, CentOS 6.3, Lustre, PBS Pro 11.3.

Maali Origins Multiple systems with different hardware configurations. More projects to support. More researchers with more complex software applications and libraries to support. The Pawsey Supercomputing Centre is going to be a Petascale centre.

Initial Design Criteria Needs to be simple! Establish conventions, and policies that support those conventions. Reduce staff effort to maintain the software stack. Greater use of automation.

A brief primer on compiling and installing software Download the source. Configure the code. Compile the code. Install the code. Create an environment module. Document the process.
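Done by hand, those steps look roughly like the following sketch; the package name, URL and install prefix are placeholders for illustration, not taken from the slides.

# Manual build of a typical Autoconf-based package (placeholder names and paths)
wget https://example.org/src/mytool-1.0.tar.gz      # download source
tar xzf mytool-1.0.tar.gz                           # unpack
cd mytool-1.0
./configure --prefix=/apps/mytool/1.0               # configure the code for the install prefix
make                                                # compile the code
make install                                        # install the code
# ...then write an environment module file for /apps/mytool/1.0 and document the build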

Maali Core Design Maali is a set of BASH scripts, designed initially to automate the Autoconf/Automake process of "configure", "make" and "make install". It uses a system-level configuration file that defines the Maali environment.

Maali Automatically: Downloads applications with wget and maintains a repository of source packages. Runs configure (or cmake), make, and make install. Writes a new environment module file. Generates a separate build log file for each application.
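The slides do not show how the per-application build log is produced; purely as an illustration, a run-and-log helper in the spirit of maali_run could look like the minimal sketch below. The MAALI_LOG_FILE variable and the exact behaviour are assumptions, not Maali's actual code.

function maali_run {
  # sketch only: record the command, run it, and capture all output in the tool's build log
  echo "RUNNING: $1" >> "$MAALI_LOG_FILE"
  eval "$1" >> "$MAALI_LOG_FILE" 2>&1
  if [ $? -ne 0 ]; then
    echo "FAILED: $1" >> "$MAALI_LOG_FILE"
    exit 1
  fi
}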

Maali System Configuration File

Maali System Configuration Keypair Values

MAALI_OS=cle52
MAALI_ROOT="/ivec/$MAALI_OS/$MAALI_SYSTEM"
MAALI_SRC="$MAALI_ROOT/src"
MAALI_BUILD_DIR="$MAALI_ROOT/build"
MAALI_MODULE_DIR="$MAALI_ROOT/modulefiles"

Maali Build Directory MAALI_BUILD_DIR is the keypair value that defines the absolute path to the directory where the package sources are copied from the MAALI_SRC directory and then uncompressed and extracted. Ideally this should be somewhere on the scratch working directory, as builds take a lot of space. It is not necessary to maintain it.

Maali Source Directory MAALI_SRC is the keypair value that defines the absolute path to the repository directory where all the original source packages are maintained as tar-balls and other compressed formats. It needs to be on a partition of sufficient size, and should be backed up if possible.

Maali Modulefiles Directory MAALI_MODULE_DIR is the absolute path where Maali will install all new module files. MAALI_MODULE_DIR will need to be added to the default MODULEPATH.
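For example, with the standard environment-modules tooling (a generic sketch; the path comes from the configuration keypairs above):

# make Maali-built modules visible for the current session
module use "$MAALI_MODULE_DIR"
# or append the directory to the default MODULEPATH in a site-wide profile
export MODULEPATH="$MODULEPATH:$MAALI_MODULE_DIR"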

Maali build – core function: maali_build

function maali_build {
  # this is the core function for creating software
  # allows late evaluation
  MAALI_TOOL_CONFIGURE_EVAL=`eval echo "$MAALI_TOOL_CONFIGURE"`
  cd "$MAALI_TOOL_BUILD_DIR"
  maali_run "./configure --prefix=$MAALI_INSTALL_DIR $MAALI_TOOL_CONFIGURE_EVAL"
  maali_run "make"
  maali_run "make install"
}

Maali cmake build – core function: maali_cmake_build

function maali_cmake_build {
  # for tools that use cmake
  # allows late evaluation
  MAALI_TOOL_CONFIGURE_EVAL=`eval echo "$MAALI_TOOL_CONFIGURE"`
  cd "$MAALI_TOOL_BUILD_DIR"
  # cmake likes to build in a directory of its own
  mkdir "$MAALI_TOOL_NAME-build"
  cd "$MAALI_TOOL_NAME-build"
  maali_run "cmake -DCMAKE_INSTALL_PREFIX=$MAALI_INSTALL_DIR $MAALI_TOOL_CONFIGURE $MAALI_CMAKE_PATH"
  if [ $DEBUG ]; then
    maali_run "make VERBOSE=TRUE"
  else
    maali_run "make"
  fi
  maali_run "make install"
}

Maali Build Scripts Maali uses a build script for each application or library that defines: the environment (PrgEnv-*); dependent environment modules (cray-hdf5, etc.); the configuration and optimization flags; and a set of "Maali" variables used to generate a new environment module.

Szip build file

##############################################################################
# $Id: szip.maali,v /10/24 02:48:41 stapops Exp $
##############################################################################
# maali config file for szip
##############################################################################

# specify which compilers we want to build the tool with
MAALI_TOOL_COMPILERS="$MAALI_DEFAULT_COMPILERS"

# URL to download the source code from
MAALI_URL=" $MAALI_TOOL_VERSION.tar.gz"

# location we are downloading the source code to
MAALI_DST="$MAALI_SRC/$MAALI_TOOL_NAME-$MAALI_TOOL_VERSION.tar.gz"

# where the unpacked source code is located
MAALI_TOOL_BUILD_DIR="$MAALI_BUILD_DIR/$MAALI_TOOL_NAME-$MAALI_TOOL_VERSION"

# type of tool (eg. apps, devel, python, etc.)
MAALI_TOOL_TYPE="devel"

# for auto-building module files
MAALI_MODULE_SET_LD_LIBRARY_PATH=1
MAALI_MODULE_SET_CPATH=1
MAALI_MODULE_SET_FPATH=1
MAALI_MODULE_SET_FCPATH=1
MAALI_MODULE_WHATIS="SZIP compression software, providing lossless compression of scientific data."

Silo build script key elements

# specify which compilers we want to build the tool with
MAALI_TOOL_COMPILERS="$MAALI_DEFAULT_COMPILERS"

# URL to download the source code from
MAALI_URL="

# location we are downloading the source code to
MAALI_DST="$MAALI_SRC/$MAALI_TOOL_NAME-$MAALI_TOOL_VERSION.tar.gz"

# where the unpacked source code is located
MAALI_TOOL_BUILD_DIR="$MAALI_BUILD_DIR/$MAALI_TOOL_NAME-$MAALI_TOOL_VERSION"

# type of tool (eg. apps, devel, python, etc.)
MAALI_TOOL_TYPE="devel"

# tool pre-requisites
MAALI_TOOL_PREREQ="hdf5/1.8.12"

MAALI_TOOL_CONFIGURE='--enable-silex --enable-fortran --enable-shared --with-hdf5=$HDF5_DIR/include,$HDF5_DIR/lib'
MAALI_TOOL_CONFIGURE_NOHDF5='--enable-silex --enable-fortran --enable-shared'

# for auto-building module files
MAALI_MODULE_SET_PATH=1
MAALI_MODULE_SET_LD_LIBRARY_PATH=1
MAALI_MODULE_WHATIS="Silo is a library which implements an application programming interface (API) designed for reading and writing a wide variety of scientific data to binary disk files. The files that Silo produces and the data within them can be easily shared and exchanged between wholly independently developed applications running on disparate computing platforms."

Silo build file – build function

##############################################################################
function maali_build {
  cd "$MAALI_TOOL_BUILD_DIR"
  MAALI_TOOL_CONFIGURE_EVAL=`eval echo "$MAALI_TOOL_CONFIGURE_NOHDF5"`
  maali_run "./configure --prefix=$MAALI_INSTALL_DIR $MAALI_TOOL_CONFIGURE_EVAL"
  maali_run "make"
  maali_run "make install"
  maali_run "make clean"
  MAALI_TOOL_CONFIGURE_EVAL=`eval echo "$MAALI_TOOL_CONFIGURE"`
  maali_run "./configure --prefix=$MAALI_INSTALL_DIR $MAALI_TOOL_CONFIGURE_EVAL"
  maali_run "make"
  maali_run "make install"
}
##############################################################################

GMT build file using cmake

# maali config file for GMT

# specify which compilers we want to build the tool with
MAALI_TOOL_COMPILERS="$MAALI_DEFAULT_GCC_COMPILERS"

# URL to download the source code from
MAALI_URL="

# location we are downloading the source code to
MAALI_DST="$MAALI_SRC/${MAALI_TOOL_NAME}-${MAALI_TOOL_VERSION}-src.tar.bz2"

# where the unpacked source code is located
MAALI_TOOL_BUILD_DIR="$MAALI_BUILD_DIR/${MAALI_TOOL_NAME}-${MAALI_TOOL_VERSION}"

# type of tool (eg. apps, devel, python, etc.)
MAALI_TOOL_TYPE="devel"

# tool pre-requisites
MAALI_TOOL_PREREQ="cray-netcdf/4.3.2 fftw/ gdal/1.11.1"

# tool build pre-requisites - not added to the module, only needed for building (loaded after MAALI_TOOL_PREREQ)
MAALI_TOOL_BUILD_PREREQ="cmake/ "

# add additional configure options
MAALI_TOOL_CONFIGURE='-DFFTW3_INCLUDE_DIR=$FFTW_INC -DFFTW3F_LIBRARY=$FFTW_DIR/libfftw3f.a'

# for auto-building module files
MAALI_MODULE_SET_PATH=1
MAALI_MODULE_SET_MANPATH=1
MAALI_MODULE_SET_LD_LIBRARY_PATH='lib64'
MAALI_MODULE_SET_WHATIS="GMT is an open source collection of about 80 command-line tools for manipulating geographic and Cartesian data sets (including filtering, trend fitting, gridding, projecting, etc.) and producing PostScript illustrations ranging from simple x-y plots via contour maps to artificially illuminated surfaces and 3D perspective views; the GMT supplements add another 40 more specialized and discipline-specific tools."

export CRAYPE_LINK_TYPE='dynamic'

Maali build files with cmake

# tool pre-requisites
MAALI_TOOL_PREREQ="cray-netcdf/4.3.2 fftw/ gdal/1.11.1"

# tool build pre-requisites - not added to the module, only needed for building (loaded after MAALI_TOOL_PREREQ)
MAALI_TOOL_BUILD_PREREQ="cmake/ "

# add additional configure options
MAALI_TOOL_CONFIGURE='-DFFTW3_INCLUDE_DIR=$FFTW_INC -DFFTW3F_LIBRARY=$FFTW_DIR/libfftw3f.a'

# for auto-building module files
MAALI_MODULE_SET_PATH=1
MAALI_MODULE_SET_MANPATH=1
MAALI_MODULE_SET_LD_LIBRARY_PATH='lib64'

Haswell Sprint / CLE 5.2 Upgrade On Magnus, all the compute nodes were upgraded from Intel Sandy Bridge to Intel Haswell. The Cray Linux Environment was upgraded from CLE 5.0 to CLE 5.2. Magnus went into production in January 2015!

Application Support at Pawsey A set of 31 applications/libraries was identified from the early adopters' projects on Magnus. These included: Siesta, exabayes, mrbayes, Beagle-lib, Cmake, gsl, lapack, xerces-c, ephem, gamess, qbox, ncview, Hypre, Gromacs, Amber, Libffi, Glib, Udunits, gts, zlib, szip, scons, numpy, scipy, mercurial, Distribute, astropy, d2to1 and python.

Maali Command

maali -t python -v -c magnus
maali -t scons -v -c magnus
maali -t numpy -v -c magnus
maali -t scipy -v -c magnus
maali -t gsl -v 1.16 -c magnus
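Judging from the gsl line, the options appear to select the tool, its version and the system configuration to build for; this reading is inferred from the slides rather than from documented command-line help.

# -t <tool>     name of the build script / tool to build (e.g. gsl)
# -v <version>  version of the tool (e.g. 1.16)
# -c <config>   system configuration to build for (e.g. magnus)
maali -t gsl -v 1.16 -c magnus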

Haswell Sprint Outcomes The effort was spread out over 2 days with 7 members of the Supercomputing team. The majority of packages were installed within a couple of hours. The Sprint was a success!

Haswell Sprint Review Be smarter about using Maali: use the current source repository, and use the compute nodes. Improve the dependency documentation. Create verification and validation tests using the "must-build" packages. Validate build outcomes automatically: did it install everything into bin/, lib/ and man/ correctly? (See the sketch below.)
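A minimal automatic check of that kind could look like the sketch below; the MAALI_INSTALL_DIR variable follows the naming used in the build functions, and the specific directories checked are only illustrative.

# sketch: warn if an install prefix did not receive binaries, libraries or man pages
for d in bin lib man; do
  if [ ! -d "$MAALI_INSTALL_DIR/$d" ] || [ -z "$(ls -A "$MAALI_INSTALL_DIR/$d")" ]; then
    echo "WARNING: $MAALI_INSTALL_DIR/$d is missing or empty"
  fi
done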

Maali System Configuration Keypair Values

MAALI_OS=cle52
MAALI_ROOT="/ivec/$MAALI_OS/$MAALI_SYSTEM"
MAALI_BUILD_DIR="$MAALI_ROOT/build"
MAALI_MODULE_DIR="$MAALI_ROOT/modulefiles"
MAALI_SRC="$MAALI_ROOT/src"

Simple Maali bash

#!/bin/bash
module load maali/1.0b1
maali -t python -v -c magnus
maali -t scons -v -c magnus
maali -t numpy -v -c magnus
maali -t scipy -v -c magnus
maali -t gsl -v 1.16 -c magnus

Maali Future Open source in May 2015. Improve the module files' prerequisites, dependencies and conflicts. Implemented on the NeCTAR research cloud for bioinformatics pipeline construction. Regression testing of build scripts. Improve wiki page creation.

Acknowledgements David Schibeci, the lead developer for Maali. Chris Harris, for leading and organizing the Haswell sprint. Ashley Chew, system admin on Fornax. The Supercomputing team at the Pawsey Supercomputing Centre: Rebecca Hartman-Baker, Paul Ryan, Daniel Grimwood, Charlene Yang, Brian Skjerven, Kevin Stratford and Moshin Shaikh. George Beckett, who leads the Supercomputing Team, and Neil Stringfellow, the Executive Director of the Pawsey Supercomputing Centre, for their support.

2nd International HPC User Support Tools Workshop (HUST15) at SC15 in Austin