Integration center of the cyberinfrastructure of NRC “KI”. Dubna, 16 July 2012. V.E. Velikhov, V.A. Ilyin, E.A. Ryabinkin.

1 Integration center of the cyberinfrastructure of NRC “KI”. Dubna, 16 July 2012. V.E. Velikhov, V.A. Ilyin, E.A. Ryabinkin

2 Cyberinfrastructure of NRC “KI”:
- Data center
- High-performance networks
- Remote access to megascience facilities
- Data stores
- Big Data: simulation, processing, analysis, visualization
- Computing: high-performance, Grid & cloud
- Grid systems: Russian Grid, GridNNN, WLCG
- OpenStack cloud

3 Cyberinfrastructure services:
- Problem-solving environments & applications
- Big Data processing, analysis & visualization services
- HPC, cloud & Grid services
- High-performance network services
- Data center services
- Cybersecurity

4 Big Data:
- Data: 1+ petabytes
- Computing (simulation, processing & analysis): 300+ teraflops
- Networks: 20+ gigabit/second
- Distributed computing & visualization

5 HPC, Grid, cloud & storage platforms:
- HPC 1: Intel CPUs, 34 TF
- HPC 2: Intel CPUs, 123 TF
- HPC 3: GPGPUs, 127 TF; 3 SMPs
- 1 PB Lustre data store
- Grid & cloud infrastructure (Grid & cloud farms)
- Video cluster & wall
- Experimental facilities
- Interconnect: N x 10 Gb/s

6 Data stores: high-performance data stores based on the Lustre parallel file system

7 System network: high-performance InfiniBand QDR system network for message & data exchange

8 IB network core: InfiniBand QDR network core

9 HPC 1 (thin nodes) – 34 TF: 864 Intel CPUs, 3456 cores, 2/4/8 GB RAM per core (4.7 TB in total) & 40 TB Lustre store

10 HPC 2 (thin nodes) – 123 TF: 2560 Intel CPUs, 10240 cores, 2 GB RAM per core (20.5 TB in total) & 144 TB Lustre store

11 HPC 3 (accelerators & SMPs) – 127 TF:
- HPC3: 152 Intel CPUs, 912 cores, 228 NVIDIA Tesla M2070 GPGPUs
- HPC3V: 3 nodes, 6 NVIDIA Quadro 6000
- HPC3SMP: 2 nodes, 8 Intel CPUs, 80 cores, 1 TB RAM

12 Video wall: connected by optics to the video cluster (HPC3V) running SAGE software

13 Data center:
- 1300 kW: HPC 2 (123 TF, 1 PB), HPC 3 (GPGPU, 127 TF)
- 700 kW: WLCG Tier 2; GridNNN; OpenStack cloud
- 700 kW: FAIR Tier 1 (TBD)
- 700 kW: WLCG Tier 1

14 Engineering infrastructure: electrical & mechanical subsystems, 7.5 MVA

15 WLCG infrastructure in NRC “KI”:
- WLCG Tier 2 centers in: KI, IHEP, ITEP, PNPI
- Tier 1 center project: KI (ATLAS, ALICE, LHCb), JINR (CMS)

16 Grid data processing: night-lights satellite data

17 GridNNN for HPC simulations & data processing in nano-bio: GridNNN Grid middleware installed at 10+ HPC centers (10 000+ cores); 6 VOs: Nanochem, Abinit, GAMESS, Nanospace, Moldyn, Fusion.

18 HPC applications:
- Computational materials science: ABINIT, GAMESS, GAUSSIAN, Gromacs, FDTD-II, Firefly, LAMMPS, MOLPRO, NAMD, OpenMX, VASP
- CAE: ABAQUS, ANSYS, FlowVision, OpenFOAM, SALOME

19 HPC simulations & data processing: video wall connected by optics to the video cluster with SAGE software

20 Full genome sequencing project at NRC “KI”: data flows from the GAII sequencer (GAII IPAR Windows server, 2.5 TB per run) over rsync/samba and 10 Gb/s optics from Bld 140 to the data center; genome sequence restoration feeds a 100 TB disk cache for secondary data processing and genome comparison, with results archived to the tape data store and delivered to the researcher.
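The critical step in a pipeline like this is confirming that each multi-terabyte run arrived in the disk cache intact before it is migrated to tape. A minimal verification sketch is below; the function names and directory layout are illustrative assumptions, not part of the project's actual tooling:

```python
import hashlib
import pathlib

def sha256_of(path, chunk=1 << 20):
    """Stream a file through SHA-256 in 1 MB chunks, so multi-terabyte
    sequencing runs are never read into RAM at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def verify_staged(src_dir, cache_dir):
    """Compare every file on the sequencer share (src_dir) against its
    disk-cache copy (cache_dir); return relative paths that are missing
    or mismatched, so only verified runs move on to the tape store."""
    src_root = pathlib.Path(src_dir)
    cache_root = pathlib.Path(cache_dir)
    bad = []
    for src in src_root.rglob("*"):
        if src.is_file():
            dst = cache_root / src.relative_to(src_root)
            if not dst.is_file() or sha256_of(src) != sha256_of(dst):
                bad.append(str(src.relative_to(src_root)))
    return bad
```

In practice a pass like this would run after each rsync transfer completes, gating the migration of the run into the tape archive.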

21 APOS FP7 project. Incompatibility between the requirements of existing software and the capabilities of new supercomputers is a growing problem, addressed by a pioneering new Russian-European collaboration: the Application Performance Optimisation and Scalability (APOS) project.
APOS RU: Keldysh Institute of Applied Mathematics of the Russian Academy of Sciences; Moscow Institute of Physics and Technology (State University); National Research Centre "Kurchatov Institute"; Ugra Research Institute for Information Technologies.
APOS EU: EPCC, The University of Edinburgh, UK; CAPS entreprise, FR; ICM, Uniwersytet Warszawski, PL; TOTAL S.A., FR; HLRS, University of Stuttgart, DE.

22 APOS RU project in NRC “KI”: development of software for calculating mechanical and transport properties of non-uniform nanostructures using many-body interatomic potentials (Tersoff, Brenner, REBO, ReaxFF, ...) on multiprocessor and heterogeneous computer systems. Components include a many-body (bond-order) potential for carbon nanostructures and an atomistic method for transport calculations. Molecular dynamics simulations with such potentials are computationally intensive, so heterogeneous systems with GPUs, together with a novel programming approach, are needed.
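As an illustration of why bond-order potentials are more expensive than simple pair potentials: each bond's strength depends on the local coordination of the atoms, not just the pair distance. The toy form below (a Morse-like pair term screened by a coordination-dependent factor, loosely in the spirit of Tersoff/Brenner but far simpler) is an assumption for demonstration only, not the project's actual potential, which also carries angular terms and smooth cutoffs:

```python
import numpy as np

def toy_bond_order_energy(positions, r_cut=2.0, de=1.0, lam=1.0, delta=0.5):
    """Toy bond-order energy: a Morse-like pair attraction scaled by a
    many-body factor b_ij that weakens each bond as atom i gains extra
    neighbours.  Illustrative only; real Tersoff/Brenner/REBO forms are
    far more elaborate."""
    pos = np.asarray(positions, dtype=float)
    # all pairwise distances via broadcasting
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-interaction
    bonded = d < r_cut
    coord = bonded.sum(axis=1)           # coordination number of each atom
    energy = 0.0
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            if bonded[i, j]:
                # bond-order screening: more neighbours -> weaker bond
                b_ij = (1.0 + delta * (coord[i] - 1)) ** -0.5
                r = d[i, j]
                energy += de * b_ij * (np.exp(-2 * lam * r)
                                       - 2 * np.exp(-lam * r))
    return energy
```

Because `b_ij` couples every bond to its environment, forces require extra neighbour bookkeeping, which is exactly the part that benefits from GPU offload in production codes.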

23 Thank you! "And he doesn't live in atomic Dubna at all, but in some institute near Kashira, and he lies that there he's the chief of an automatic electronic computing machine." – A. Galich

