Desktop Introduction

MASSIVE is …
– A national facility
– $8M of investment over 3 years
– Two high performance computing facilities, located at the Australian Synchrotron and Monash University, designed for data processing and visualisation
– Specialised imaging and visualisation software and databases
– Expertise in visualisation, image processing, image analysis, HPC and GPU computing
NCI Specialised Facility for Imaging and Visualisation

MASSIVE Team
– Dr Wojtek James Goscinski, Coordinator
– Dr Paul McIntosh, Senior HPC Consultant / Technical Project Leader
– Dr Chris Hines, Senior HPC Consultant
– Dr Kai Xi, HPC Consultant
– Damien Leong, Senior HPC Consultant
– Dr Wendy Mason, eResearch Engagement Specialist
– Jupiter Hu, Software Specialist, Characterisation Virtual Laboratory
+ Monash eSolutions

Facilities
MASSIVE1 (m1)
– Real-time Computer Tomography at the Imaging Beamline at the Australian Synchrotron
MASSIVE2 (m2)
– General facility for image processing, data processing, simulation and analysis, GPU computing
– Specialised fat nodes for visualisation

MASSIVE – map of the M1, M2 and team ("us") locations (source: Google Maps)

CVL (NeCTAR)

M1
– 42 nodes, each with: 2 x 6-core X5650 CPUs, 48 GB of RAM, 2 x NVIDIA M2070 GPUs
– 58 TB GPFS file system capable of 2 GB/s+ sustained write
– 4X QDR Mellanox IS5200 InfiniBand switch (~32 Gb/s)
– Stage 2: +95 TB

M2
– 32 compute nodes, each with: 2 x 6-core X5650 CPUs, 48 GB of RAM, 2 x NVIDIA M2070 GPUs
– 10 Vis nodes, each with: 192 GB of RAM, 2 x NVIDIA M2070Q GPUs
– 250 TB GPFS file system capable of 3 GB/s+ sustained write
– 4X QDR Mellanox IS5200 InfiniBand switch (~32 Gb/s)
– Stage 2: +… TB of storage, additional 64 GB and 128 GB RAM nodes, +86 NVIDIA Tesla K20s, +20 Intel Xeon Phis

MASSIVE Resources
Total 2224 CPU cores:
– 74 nodes with 48 GB of RAM
– 56 nodes with 64 GB of RAM
– 20 nodes with 128 GB of RAM
– 10 nodes with 196 GB of RAM
244 GPUs (total ~250,000 CUDA cores):
– 76 NVIDIA K20 GPU coprocessors
– 20 NVIDIA M2070Qs (Vis)
– 148 NVIDIA M2070 GPU coprocessors
20 Intel Xeon Phis (1200 cores)
File systems:
– M2 ~350 TB
– M1 ~150 TB

MASSIVE (photo: Steve Morton)

MASSIVE

/home/researcher/ |-- myProject001 -> /home/projects/myProject001 |-- myProject001_scratch -> /scratch/myProject001

/home/researcher/ |-- myProject001 -> /home/projects/myProject001 |-- myProject001_scratch -> /scratch/myProject001 |-- Mx ->
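Project and scratch storage appear as symbolic links under the home directory, as shown above. A minimal sketch for checking where the links point and how much scratch space is available (myProject001 is just the placeholder project name from the slide):

    cd ~
    ls -l                        # each project link shows its target, e.g. myProject001 -> /home/projects/myProject001
    df -h myProject001_scratch/  # report the size and free space of the scratch file system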

Software
– Slurm scheduler
– Linux CentOS 6
– module list (shows the software modules currently loaded)
Documentation: /userguide, /software-instructions, /installed-software
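As a rough illustration of how these pieces fit together, here is a minimal Slurm batch-script sketch using environment modules; the module name, resource values and executable below are assumptions for illustration, not MASSIVE-specific values (check "module avail" and the /userguide for the real ones):

    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=6
    #SBATCH --mem=16G
    #SBATCH --time=01:00:00
    #SBATCH --gres=gpu:1        # request one GPU, if the node type provides them

    module purge                # start from a clean environment
    module load cuda            # hypothetical module name; list real ones with 'module avail'
    module list                 # record which modules this job used

    srun ./my_gpu_program       # hypothetical executable

A script like this would be submitted with "sbatch job.sh" and monitored with "squeue -u $USER".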

Strudel (Scientific Remote Desktop Launcher) – used to launch MASSIVE Desktop sessions

Science Research Outcomes
Publications accepted or published:
– 2012 – 74
– 2013 – 160
– 2014 –

Desktop Success
July–Dec 2012:
– the MASSIVE Desktop was used by 70+ MASSIVE users
– average use of 90 times
– 85% of those users have used the desktop more than 10 times
– 55% of those users have used the desktop more than 50 times

Help Desk
For any issues with using MASSIVE or the documentation on this site, please contact the Help Desk. Phone: 03
Consulting
For general enquiries and enquiries about value-added services, such as help with porting code to GPUs or using MASSIVE for Imaging and Visualisation, use the following: Phone: 03
Other
For other enquiries, please contact the MASSIVE Coordinator:

Exercises
Training Accounts
– username: train[XY] (train01 – train54)
– password: MTrain[(X+5)Y] (MTrain51 – MTrain104)
For example, train01 uses password MTrain51 and train23 uses MTrain73 (add 50 to the account number).
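A minimal sketch (assuming the train01–train54 / MTrain51–MTrain104 pattern above holds) that prints the full account-to-password mapping:

    for i in $(seq -w 1 54); do
        printf 'train%s -> MTrain%d\n' "$i" $((10#$i + 50))   # add 50 to the account number
    done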