Hartree Centre systems overview

Name mappings

Public name | Internal name | Technology                | Service type
Blue Wonder | Invicta       | x86 Sandy Bridge          | production
Blue Wonder | Napier        | x86 Ivy Bridge            | production
Blue Wonder | Iden          | x86 Ivy Bridge + Xeon Phi | production
Blue Wonder | Dawson        | x86 Ivy Bridge            | production
Blue Joule  | Bifort        | BlueGene/Q                | production
            | Delorean      | x86 Ivy Bridge + FPGA     | Development - EECR
            | Bantam        | BlueGene/Q                | Development - EECR
            | Palmerston    | POWER8 + Kepler K40       | Development - on loan from IBM
            | Ace           | ARM 64-bit                | Development - EECR
            | Neale         | x86 Ivy Bridge            | Development - EECR
            | Panther       | POWER8 + Kepler K80       | Hartree Centre phase 3 research

Invicta

Phase 1 system (2012)
IBM iDataPlex
512 nodes, Sandy Bridge processors, 16 cores per node
Range of memory sizes – 2GB per core, 8GB per core, 16GB per core
Mellanox IB interconnect
Platform LSF
GPFS (same filesystem as Joule)
Some graphical login capability
Standard x86-based HPC system. Will be “sun-setted” after June 2015 – only key components will be kept on maintenance.
Service ends 30th April 2016
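Invicta is a conventional MPI cluster fronted by Platform LSF. As a rough illustration only (not a Hartree-provided example), the sketch below is a minimal MPI program in C that reports which node each rank runs on; the compiler wrapper and any submission options are assumptions that would depend on the site's modules and queues.

/* Minimal MPI "hello" sketch for a conventional x86 cluster like Invicta.
 * Build with an MPI compiler wrapper (e.g. mpicc hello.c -o hello) and
 * submit through LSF; the wrapper and queue settings are assumptions. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, namelen;
    char node[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(node, &namelen);

    /* Each rank prints its position and host; on Invicta a full node is
     * 16 cores, so a job would normally request a multiple of 16 slots. */
    printf("rank %d of %d on %s\n", rank, size, node);

    MPI_Finalize();
    return 0;
}

A hypothetical submission might look like "bsub -n 32 mpirun ./hello", though the real queue names, resource strings, and MPI launch mechanism depend on the local LSF configuration.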

Napier

Phase 2 system (2014)
IBM NeXtScale
360 nodes, Ivy Bridge processors, 24 cores per node
2.67GB per core (64GB per node)
Mellanox IB interconnect
Platform LSF
GPFS (different filesystem to Invicta)
Standard x86-based HPC system. 60 nodes reserved for the Lenovo (IBM) Global Benchmarking Centre.

Iden

Phase 2 system (2014)
IBM iDataPlex
84 nodes, Ivy Bridge processors, 24 cores per node
2.67GB per core (64GB per node)
Mellanox IB interconnect
Platform LSF
GPFS (same filesystem as Napier but different to Invicta)
42 Xeon Phi accelerators
Standard x86-based HPC system, but with accelerators.
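Iden adds Xeon Phi cards to an otherwise standard cluster. As a hedged sketch of how a code might exploit them, the fragment below offloads a simple loop using the Intel compiler's offload pragmas; the Intel toolchain is an assumption here, and the slide does not prescribe any particular programming model.

/* Sketch of offloading a loop to a Xeon Phi coprocessor using the Intel
 * compiler's "language extensions for offload" pragmas (Intel compiler
 * assumed; not a documented Hartree example). */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static float a[N], b[N], c[N];
    int i;

    for (i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* The offload pragma copies the arrays to the coprocessor, runs the
     * loop there, and copies the result back. */
    #pragma offload target(mic) in(a, b) out(c)
    for (i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[N-1] = %f\n", c[N - 1]);
    return 0;
}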

Dawson

Data analytics Phase 2 system (2014)
Range of hardware and software, including:
IBM BigInsights (Hadoop and friends)
IBM Streams (data stream processing)
IBM SPSS Modeler and Data Engine (statistical modelling)
Cognos BI (data analysis and reporting)
IBM Content Analytics
Has a local GPFS filesystem with file placement optimisation (FPO, policy driven)
Systems are built and torn down according to the requirements of specific projects. Requires detailed technical assessment and solution planning.
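Dawson's stack is built around Hadoop-style analytics. Purely as a loose illustration (nothing here is prescribed by the slide), the sketch below is a word-count mapper in C of the kind that Hadoop Streaming can drive, reading text on stdin and emitting tab-separated key/value pairs on stdout; whether Streaming is actually used on Dawson is an assumption.

/* Word-count mapper in the Hadoop Streaming style: reads text on stdin
 * and emits "word<TAB>1" records on stdout. Illustrative only; assumes
 * the BigInsights installation exposes the standard Streaming mechanism. */
#include <stdio.h>
#include <ctype.h>

int main(void)
{
    int c, in_word = 0;
    char word[256];
    size_t len = 0;

    while ((c = getchar()) != EOF) {
        if (isalnum(c)) {
            if (len < sizeof(word) - 1)
                word[len++] = (char)tolower(c);
            in_word = 1;
        } else if (in_word) {
            word[len] = '\0';
            printf("%s\t1\n", word);   /* one record per word occurrence */
            len = 0;
            in_word = 0;
        }
    }
    if (in_word) {
        word[len] = '\0';
        printf("%s\t1\n", word);
    }
    return 0;
}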

Bifort

Phase 1 system (2012)
IBM BlueGene/Q platform
Proprietary IBM Power-based processors – 96,384 in 6 racks
Each processor can run up to 4 threads
Proprietary IBM 5-dimensional torus interconnect
IBM LoadLeveler
GPFS (same filesystem as Invicta)
Ideal for codes with very high levels of task or thread parallelism. Codes have to be recompiled with IBM tools and may need some porting effort. Clock frequency is relatively slow compared to x86 systems, so some codes may run more slowly. Sensitive to job topology.
Service ends 30th April 2016
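BlueGene/Q rewards codes that can keep the 4 hardware threads per core busy, typically via a hybrid MPI plus OpenMP approach. The sketch below shows the general shape of such a code in C; the IBM thread-safe compiler wrappers and any LoadLeveler keywords are assumptions rather than documented Hartree settings.

/* Hybrid MPI + OpenMP "hello" of the style that suits BlueGene/Q, where
 * each core supports up to 4 hardware threads. Sketch only: the IBM
 * compiler wrapper (e.g. mpixlc_r) and job settings are assumptions. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* OMP_NUM_THREADS would typically be chosen so that ranks-per-node
     * times threads-per-rank saturates the 4 hardware threads per core. */
    #pragma omp parallel
    {
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}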

Bantam

Phase 1 system (2012) with extensions in phase 2 (2014)
IBM BlueGene/Q platform
Proprietary IBM Power-based processors
32,768 processors in 2 BG/Q racks, plus 8,192 processors in 8 additional I/O drawers (2 standard racks) – total 40,960 processors
256TB flash memory
Proprietary IBM 5-dimensional torus interconnect
IBM LoadLeveler
GPFS (entirely standalone)
Research project in conjunction with IBM, for development of next-generation IBM platforms such as POWER9. Investigating different methods of moving compute to data.

Delorean

Part of the EECR programme
Maxeler FPGA system
5 compute nodes (standard x86)
5 FPGA nodes, each with 8 “Maia dataflow engines” – total 40 FPGAs
Local Panasas filestore
Open GridEngine
MaxCompiler and friends to help users port their codes to FPGA
Very much a development system. Users need a clear requirement and an understanding of what they intend to do with it.

Palmerston

On loan from IBM under the Early Ship Programme
Two POWER8 servers, each with:
2 x 12-core POWER8 CPUs
1TB system RAM
2 x 600GB 15k SAS disks
4 x 1.2TB 10k SAS disks
2 x NVIDIA Tesla K40 GPUs
Ubuntu 14.04 LTS
IBM XLC / XLF compilers
NVIDIA CUDA 7
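Palmerston pairs POWER8 hosts with two Tesla K40s and ships CUDA 7. As a small sketch (not an IBM- or Hartree-supplied tool), the C program below uses the CUDA runtime API to confirm the GPUs are visible; it assumes nvcc or an equivalent CUDA toolchain is available on the node.

/* Quick check that the Tesla K40s on a Palmerston node are visible, using
 * the CUDA runtime API from plain C. Sketch only; build with something
 * like "nvcc query.c -o query" (filename illustrative). */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0, i;
    cudaError_t err = cudaGetDeviceCount(&count);

    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount: %s\n", cudaGetErrorString(err));
        return 1;
    }

    for (i = 0; i < count; i++) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        /* Report each device's name and total global memory in GB. */
        printf("GPU %d: %s, %.1f GB\n",
               i, prop.name, prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}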

Neale

ClusterVision “deep fat fryer” – novel cooling demonstrator
Nodes are immersed in mineral oil, which removes heat and transfers it to the building water loop
1920 Ivy Bridge cores
4GB per core (64GB per node)
128GB SSD per node
BeeGFS filesystem
Used for EECR work; may also be used for private cloud assessment (OpenStack)

Ace

Low-power processor system
Lenovo NeXtScale form factor, Cavium processors with 64-bit ARM cores
Two “pass 1” boards arrived in June 2015
Twenty-four “pass 2” boards due in December 2015
Red Hat ARM distro support
GPFS storage and LSF will be deployed when compatible clients are available
Twelve temporary x86 nodes are being used as a proof of concept to test “bursting” of workloads to the IBM SoftLayer cloud; they will be removed from service once the pass 2 ARM boards arrive

Panther

IBM POWER8 system targeting the Hartree Centre phase 3 workload
Installation in progress; in service end of March
Ribbons and bows courtesy of OCF, not IBM blue!
32 nodes, each with 16 cores at 3.32GHz
2 x NVIDIA K80 GPUs per node
2 x 1TB HDD per node
28 nodes have 512GB RAM; 4 have 1TB RAM
2 x IBM ESS GS4 storage arrays, providing 96 x 800GB SSDs, IB (FDR) attached
1 x IBM FlashSystem 900, IB (QDR) attached, 57TB usable
1 x IBM FlashSystem 900, CAPI attached (to a single host), 57TB usable