1 Hartree Centre systems overview

2 Name mappings
Public name  | Internal name | Technology                | Service type
Blue Wonder  | Invicta       | x86 SandyBridge           | production
Blue Wonder  | Napier        | x86 IvyBridge             | production
Blue Wonder  | Iden          | x86 IvyBridge + Xeon Phi  | production
Blue Wonder  | Dawson        | x86 IvyBridge             | production
Blue Joule   | Bifort        | BlueGene/Q                | production
             | Delorean      | x86 IvyBridge + FPGA      | Development - EECR
             | Bantam        | BlueGene/Q                | Development - EECR
             | Palmerston    | POWER8 + Kepler K40       | Development - on loan from IBM
             | Ace           | ARM 64 bit                | Development - EECR
             | Neale         | x86 IvyBridge             | Development - EECR
             | Panther       | POWER8 + Kepler K80       | Hartree Centre phase 3 research
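For quick reference in scripts, the name mappings above can be captured as a small lookup structure. The sketch below is illustrative only (the record fields and dictionary are this editor's assumption, not part of any Hartree Centre tooling):

    from collections import namedtuple

    # Illustrative only: a minimal record of the public/internal name mappings above.
    System = namedtuple("System", ["public_name", "internal_name", "technology", "service_type"])

    SYSTEMS = {
        "Invicta":    System("Blue Wonder", "Invicta",    "x86 SandyBridge",          "production"),
        "Napier":     System("Blue Wonder", "Napier",     "x86 IvyBridge",            "production"),
        "Iden":       System("Blue Wonder", "Iden",       "x86 IvyBridge + Xeon Phi", "production"),
        "Dawson":     System("Blue Wonder", "Dawson",     "x86 IvyBridge",            "production"),
        "Bifort":     System("Blue Joule",  "Bifort",     "BlueGene/Q",               "production"),
        "Delorean":   System(None,          "Delorean",   "x86 IvyBridge + FPGA",     "Development - EECR"),
        "Bantam":     System(None,          "Bantam",     "BlueGene/Q",               "Development - EECR"),
        "Palmerston": System(None,          "Palmerston", "POWER8 + Kepler K40",      "Development - on loan from IBM"),
        "Ace":        System(None,          "Ace",        "ARM 64 bit",               "Development - EECR"),
        "Neale":      System(None,          "Neale",      "x86 IvyBridge",            "Development - EECR"),
        "Panther":    System(None,          "Panther",    "POWER8 + Kepler K80",      "Hartree Centre phase 3 research"),
    }

    # Example lookup: which public-facing name does "Iden" sit behind?
    print(SYSTEMS["Iden"].public_name)  # -> Blue Wonder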

3 Invicta
Phase 1 system (2012)
- IBM iDataplex: 512 nodes, SandyBridge processors, 16 cores per node
- Range of memory sizes: 2GB per core, 8GB per core, 16GB per core
- Mellanox IB interconnect
- Platform LSF (example submission sketch below)
- GPFS (same filesystem as Joule)
- Some graphical login capability
Standard x86-based HPC system. Will be "sun-setted" after June 2015; only key components will be kept on maintenance.
Service ends 30th April 2016
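Batch work on Invicta goes through Platform LSF. As a minimal sketch of driving LSF's bsub from Python, with the queue name, job name, wall time and executable path all hypothetical placeholders rather than Invicta-specific values:

    import subprocess

    # Minimal sketch: submit a 16-core batch job via LSF's bsub.
    # Queue name, job name and executable path are hypothetical placeholders.
    bsub_cmd = [
        "bsub",
        "-J", "example_job",      # job name
        "-q", "standard",         # queue (placeholder)
        "-n", "16",               # number of cores (one Invicta node)
        "-W", "01:00",            # wall-clock limit (hh:mm)
        "-o", "example.%J.out",   # stdout file; %J expands to the job ID
        "./my_application",       # executable to run (placeholder)
    ]

    result = subprocess.run(bsub_cmd, capture_output=True, text=True, check=True)
    print(result.stdout)  # e.g. "Job <12345> is submitted to queue <standard>."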

4 Napier
Phase 2 system (2014)
- IBM NeXtScale: 360 nodes, IvyBridge processors, 24 cores per node
- 2.67GB per core (64GB per node)
- Mellanox IB interconnect
- Platform LSF
- GPFS (different filesystem to Invicta)
Standard x86-based HPC system. 60 nodes reserved for the Lenovo (IBM) Global Benchmarking Centre.

5 Iden
Phase 2 system (2014)
- IBM iDataplex: 84 nodes, IvyBridge processors, 24 cores per node
- 2.67GB per core (64GB per node)
- Mellanox IB interconnect
- Platform LSF
- GPFS (same filesystem as Napier but different to Invicta)
- 42 Xeon Phi accelerators
Standard x86-based HPC system, but with accelerators.

6 Dawson
Data analytics Phase 2 system (2014)
Range of hardware and software, including:
- IBM Big Insights (Hadoop and friends; see the MapReduce sketch below)
- IBM Streams (data stream processing)
- IBM SPSS Modeller and Data Engine (statistical modelling)
- Cognos BI (data analysis and reporting)
- IBM Content Analytics
Has a local GPFS filesystem with file placement optimisation (FPO, policy driven).
Systems are built and torn down according to the requirements of specific projects. Requires detailed technical assessment and solution planning.
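Dawson's Big Insights stack is Hadoop-based, so analytics jobs there typically follow the MapReduce pattern. Purely as an illustrative sketch (not Dawson-specific, and assuming Hadoop Streaming-style line-oriented input on stdin; the script name is hypothetical), a word-count mapper and reducer in Python look like this:

    import sys
    from itertools import groupby

    def mapper(lines):
        # Emit "word<TAB>1" for every word; Hadoop Streaming sorts these
        # by key between the map and reduce stages.
        for line in lines:
            for word in line.split():
                print(f"{word}\t1")

    def reducer(lines):
        # Input arrives sorted by key, so counts for each word are contiguous.
        pairs = (line.rstrip("\n").split("\t", 1) for line in lines)
        for word, group in groupby(pairs, key=lambda kv: kv[0]):
            print(f"{word}\t{sum(int(count) for _, count in group)}")

    if __name__ == "__main__":
        # Run as "wordcount.py map" for the map stage or "wordcount.py reduce"
        # for the reduce stage.
        stage = sys.argv[1] if len(sys.argv) > 1 else "map"
        (mapper if stage == "map" else reducer)(sys.stdin)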

7 Bifort
Phase 1 system (2012)
- IBM BlueGene/Q platform
- Proprietary IBM Power-based processors: 96,384 in 6 racks
- Each processor can run up to 4 threads
- Proprietary IBM 5-dimensional torus interconnect
- IBM LoadLeveler
- GPFS (same filesystem as Invicta)
Ideal for codes with very high levels of task or thread parallelism (see the task-parallel sketch below). Codes have to be recompiled with IBM tools and may need some porting effort. Clock frequency is relatively slow compared to x86 systems, so some codes may run more slowly. Sensitive to job topology.
Service ends 30th April 2016
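Production BG/Q codes are normally C or Fortran built with the IBM toolchain, but the task-parallel pattern Bifort rewards can be sketched in a few lines. The example below uses Python with mpi4py purely for illustration (an assumption, not a statement about what is installed on Bifort):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()       # this task's index
    size = comm.Get_size()       # total number of MPI tasks (very large on BG/Q)

    # Each rank works on its own slice of a hypothetical problem of N items.
    N = 1_000_000
    local_items = range(rank, N, size)            # round-robin decomposition
    local_sum = sum(i * i for i in local_items)   # stand-in for real per-task work

    # Combine the per-task partial results on rank 0.
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"sum of squares over {N} items from {size} tasks: {total}")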

8 Bantam
Phase 1 system (2012) with extensions in phase 2 (2014)
- IBM BlueGene/Q platform
- Proprietary IBM Power-based processors: 32,768 processors in 2 BG/Q racks, plus 8,192 processors in 8 additional I/O drawers (2 standard racks) - total 40,960 processors
- 256TB flash memory
- Proprietary IBM 5-dimensional torus interconnect
- IBM LoadLeveler
- GPFS (entirely standalone)
Research project in conjunction with IBM, for development of next-generation IBM platforms such as POWER9. Investigating different methods of moving compute to data.

9 Delorean
Part of the EECR programme
- Maxeler FPGA system
- 5 compute nodes (standard x86)
- 5 FPGA nodes, each with 8 "Maia dataflow engines" - total 40 FPGAs
- Local Panasas filestore
- Open Grid Engine
- MaxCompiler and friends to help users port their codes to FPGA
Very much a development system. Users need a genuine use case and a clear understanding of what they intend to do with it.

10 Palmerston
On loan from IBM under the Early Ship Programme
Two POWER8 servers, each with:
- 2 x 12-core POWER8 processors @ 3.026GHz
- 1TB system RAM
- 2 x 600GB 15k SAS disks
- 4 x 1.2TB 10k SAS disks
- 2 x nVidia Tesla K40 GPUs (see the GPU query sketch below)
- Ubuntu 14.04 LTS
- IBM XLC / XLF compilers
- nVidia CUDA 7
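A quick way to confirm that the two K40s are visible on a Palmerston node is to query nvidia-smi. The sketch below is illustrative only; the query fields are standard nvidia-smi options, and the exact output depends on the installed driver:

    import subprocess

    # Minimal sketch: list the GPUs the driver can see, one CSV line per device.
    query = subprocess.run(
        ["nvidia-smi", "--query-gpu=index,name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    for line in query.stdout.strip().splitlines():
        print(line)  # e.g. "0, Tesla K40m, 11441 MiB"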

11 Neale
- ClusterVision "deep fat fryer" - novel cooling demonstrator
- Nodes are immersed in mineral oil, which removes heat and transfers it to the building water loop
- 1920 IvyBridge cores
- 4GB per core (64GB per node)
- 128GB SSD per node
- BeeGFS filesystem
- Used for EECR work; may also be used for private cloud assessment (OpenStack)

12 Ace
Low-power processor system
- Lenovo NeXtScale form factor, Cavium processors with 64-bit ARM cores
- Two "pass 1" boards arrived in June 2015
- Twenty-four "pass 2" boards in December 2015
- RedHat ARM distro support
- GPFS storage and LSF will be deployed when compatible clients are available
- Twelve temporary x86 nodes are being used as a proof of concept to test "bursting" of workloads to the IBM SoftLayer cloud (see the sketch below); they will be removed from service once the pass 2 ARM boards arrive
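The SoftLayer "bursting" proof of concept works against the IBM SoftLayer API. As an illustrative sketch only, using the public SoftLayer Python client (credentials are read from the environment, and the printed fields are assumptions about the default response, not Hartree-specific details):

    import SoftLayer

    # Illustrative only: connect using credentials from the environment
    # (SL_USERNAME / SL_API_KEY) and list current virtual server instances.
    client = SoftLayer.create_client_from_env()
    vs_manager = SoftLayer.VSManager(client)

    for instance in vs_manager.list_instances():
        print(instance["id"],
              instance.get("hostname"),
              instance.get("datacenter", {}).get("name"))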

13 Panther
IBM POWER8 system targeting Hartree Centre phase 3 workload.
Installation in progress; in service end of March. Ribbons and bows courtesy of OCF, not IBM blue!
- 32 x 16-core nodes @ 3.32GHz
- 2 x nVidia K80 GPUs per node
- 2 x 1TB HDD per node
- 28 nodes have 512GB RAM; 4 have 1TB RAM
- 2 x IBM ESS GS4 storage arrays, providing 96 x 800GB SSD; IB (FDR) attached
- 1 x IBM FlashSystem 900, IB (QDR) attached; 57TB usable
- 1 x IBM FlashSystem 900, CAPI attached (to a single host); 57TB usable

