Workshop, February 5, 2010, Geert Jan Bex

What's it all good for? Work to do on the cluster:
–many things that can be done in parallel
–one thing that takes a long time, but can be done in parallel
However, the cluster is not a replacement for your desktop/laptop!
–HPC: high performance computing
–CPU-intensive work

How to get started?
1. Send an e-mail to:
2. Receive a temporary account
3. Follow the procedure at: center/k.u.leuven-vic-vic3-how-tos/howto-access-the-cluster-vic3
4. When approved, get going
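
The how-to linked above covers access in detail; in practice, logging in to the cluster is an SSH connection from a terminal. A minimal sketch, with placeholder username, login-node address and file name (all to be taken from the how-to, not from these slides):
  # replace <username> and <login-node> with the values from the access how-to
  ssh <username>@<login-node>
  # copying input files over works the same way, e.g. with scp
  scp myfile.txt <username>@<login-node>: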

What can you run?
–All open source Linux software
–All Linux software you have a license for that covers the cluster
–No Windows software... for now
  –pilot project on the way
  –however, technical & licensing issues

VIC3 at work

Running MrBayes
Run MrBayes on your own computer:
  PATH="$HOME/apps/mrbayes-3.1.2/:$PATH" mb interleave_final_Bayes
On VIC3, create PBS script 'mrBayes.pbs':
  #!/bin/bash -l
  #PBS -N mrBayes-mpi-interleave
  #PBS -l walltime=71:59:00,nodes=1:ppn=8
  module load openmpi/1.2.8_gcc
  cd $PBS_O_WORKDIR
  PATH="$HOME/apps/mrbayes-3.1.2-mpi/:$PATH"
  mpirun mb-mpi interleave_final_Bayes
Run the job:
  qsub mrBayes.pbs
MrBayes now uses 8 processors!
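
Once the script has been submitted, standard TORQUE commands can be used to follow the job; a short sketch (not on the slides, job ID hypothetical):
  qsub mrBayes.pbs    # prints the job ID, e.g. 123456.<server>
  qstat -u $USER      # list your jobs and their state (Q = queued, R = running)
  qdel 123456         # cancel the job if needed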

Running R
R is not parallelized. However, some usage scenarios can be done in parallel, e.g.,
–parameter exploration for (a, b) in {(1.3, 5.7), (2.7, 1.4), (3.4, 2.1), (4.1, 3.8), …}
my-pe.r:
  args <- commandArgs(TRUE)
  a <- as.double(args[1])
  b <- as.double(args[2])
  result <- c(a, b, soph_func(a + b))
  print(result)
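
On a single workstation this exploration would boil down to a plain serial loop; the sketch below is not from the slides, it just spells out the naive approach that the worker setup on the next slide replaces (parameter pairs taken from the list above):
  #!/bin/bash
  # run my-pe.r once per parameter pair, one after the other
  for pair in "1.3 5.7" "2.7 1.4" "3.4 2.1" "4.1 3.8"; do
      Rscript my-pe.r $pair
  done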

Running R, reloaded
Run R on your own computer:
  Rscript my-pe.r 1.3 5.7
For VIC3, create PBS script 'my-pe.pbs' and 'my-data.csv':
  #!/bin/bash -l
  #PBS -N my-pe
  #PBS -l walltime=1:00:00,nodes=1:ppn=8
  module load R
  cd $PBS_O_WORKDIR
  Rscript my-pe.r $a $b
my-data.csv:
  a, b
  1.3, 5.7
  2.7, 1.4
  3.4, 2.1
  4.1, 3.8
  …
Run the job:
  module load worker/1.0
  wsub -batch my-pe.pbs -data my-data.csv
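
For a larger parameter exploration, writing my-data.csv by hand gets tedious; a minimal sketch of generating it with a shell script (the parameter ranges below are made up for illustration, not taken from the slides):
  #!/bin/bash
  # write a header line, then one line per (a, b) combination
  echo "a, b" > my-data.csv
  for a in 1.0 2.0 3.0 4.0; do
      for b in 0.5 1.5 2.5; do
          echo "$a, $b" >> my-data.csv
      done
  done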

VIC3 is shared by many
–compute nodes
–queue system: TORQUE
–scheduler: Moab
–priority queue, goals:
  1. fairness
  2. QoS guarantees
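
Since TORQUE and Moab are standard tools, the state of the shared cluster can be inspected with their usual commands; a short sketch (not from the slides, and assuming the default VIC3 setup):
  qstat -q            # TORQUE: list the queues and their limits
  showq               # Moab: queued and running jobs, in priority order
  showstart 123456    # Moab: estimated start time of a job (ID hypothetical)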

Priorities

Numbers, numbers, numbers…
UHasselt total for 2009: … days = … year

Where to find help?
–FAQs
–How-To
–Reference manuals
–2nd line support
–3rd line support for UHasselt

Conclusions
It's not that hard, now, is it? (oh well…)
–don't get intimidated
If you need help, ask for it
UHasselt has room for growth
–currently 2.7 % of resources
–maximum 5-6 %