Integration center of the cyberinfrastructure of NRC “KI”. Dubna, 16 July 2012. V.E. Velikhov, V.A. Ilyin, E.A. Ryabinkin.


1 Cyberinfrastructure of NRC “KI”
- Data center
- High-performance networks
- Remote access to megascience facilities
- Data stores
- Big Data: simulation, processing, analysis, visualization
- Computing: high-performance, Grid & cloud
- Grid systems: Russian Grid, GridNNN, WLCG
- OpenStack cloud

2 Cyberinfrastructure services
- Problem solving environments & applications
- Big Data processing, analysis & visualization services
- HPC, cloud & Grid services
- High-performance network services
- Data center services
- Cybersecurity

3 Big Data
- Data: 1+ petabytes
- Computing: 300+ teraflops
- Networking: 20+ gigabit/second
- Simulation, processing & analysis; visualization; distributed computing

4 HPC, Grid, cloud & storage platforms
- HPC 1: Intel CPUs, 34 TF
- HPC 2: Intel CPUs, 123 TF
- HPC 3: GPGPUs, 127 TF; SMPs
- 1 PB Lustre data store
- Grid & cloud infrastructure and farms
- Experimental facilities
- Video cluster & wall
- Interconnects: N x 10 Gb/s

5 Data stores
High-performance data stores based on the Lustre parallel file system.
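To illustrate how users typically work with such a store, below is a minimal sketch of controlling file striping with the standard Lustre `lfs` client tool; the mount point and stripe count are hypothetical and not taken from the slides.

    # Hedged sketch: configure Lustre striping for a results directory before
    # writing large files. The path and stripe count are illustrative only.
    import subprocess

    results_dir = "/lustre/project/results"   # hypothetical Lustre path
    # Stripe new files in this directory across 4 OSTs:
    subprocess.run(["lfs", "setstripe", "-c", "4", results_dir], check=True)
    # Show the resulting layout:
    subprocess.run(["lfs", "getstripe", results_dir], check=True)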

6 System network
High-performance system network: InfiniBand QDR for message & data exchange.
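As an illustration of the "message & data exchange" role of this fabric, here is a minimal point-to-point exchange sketch with mpi4py; an MPI library built with InfiniBand support carries such traffic over the QDR fabric transparently. mpi4py and the buffer size are our illustrative choices, not mentioned on the slide.

    # Minimal point-to-point exchange sketch (illustrative only).
    # Run with e.g.: mpirun -np 2 python exchange.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    buf = np.zeros(1024, dtype=np.float64)
    if rank == 0:
        buf[:] = 1.0
        comm.Send(buf, dest=1, tag=0)   # carried over InfiniBand when available
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)
        print("rank 1 received", buf[:3])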

7 IB network core
InfiniBand QDR network core.

8 HPC 1 – thin nodes
HPC 1 – 34 TF: 864 Intel CPUs, 3456 cores, 4.7 TB RAM in total (≈1.4 GB/core) & 40 TB Lustre store.

9 HPC 2 – thin nodes
HPC 2 – 123 TF: 2560 Intel CPUs, 2 GB/core RAM (20.5 TB in total) & 144 TB Lustre store.

10 HPC 3 – accelerators & SMPs
- HPC3 – 127 TF: 152 Intel CPUs, 912 cores, 228 NVIDIA Tesla M2070 GPGPUs
- HPC3V – 3 nodes, 6 NVIDIA Quadro 6000
- HPC3SMP – 2 nodes, 8 Intel CPUs, 80 cores, 1 TB RAM
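As an illustration of how work is offloaded to such Tesla accelerators, here is a minimal GPU sketch; numba.cuda is our illustrative choice (the slides do not name a GPU software stack, and in 2012 plain CUDA C would have been typical).

    # Hedged sketch: offload a simple vector operation to one of the GPGPUs.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def scale(x, factor):
        i = cuda.grid(1)            # global thread index
        if i < x.shape[0]:
            x[i] *= factor

    x = np.arange(1_000_000, dtype=np.float64)
    d_x = cuda.to_device(x)         # host -> device copy
    threads = 256
    blocks = (x.shape[0] + threads - 1) // threads
    scale[blocks, threads](d_x, 2.0)
    result = d_x.copy_to_host()     # device -> host copy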

11 Video wall
Video wall connected by optics to the video cluster (HPC3V), running SAGE software.

12 Data center
- 1300 kW: HPC 1 & HPC 2, 1 PB data store; HPC 3 – GPGPU, 127 TF
- 700 kW: WLCG Tier 2; GridNNN; OpenStack cloud
- 700 kW: FAIR Tier 1 (TBD)
- 700 kW: WLCG Tier 1

13 Engineering infrastructure
Electrical & mechanical subsystems, 7.5 MVA.

14 WLCG infrastructure in NRC “KI”
WLCG Tier 2 centers in:
- KI
- IHEP
- ITEP
- PNPI
Tier 1 center project: KI (ATLAS, ALICE, LHCb), JINR (CMS).

15 Grid data processing
Night-lights satellite data.

16 GridNNN for HPC simulations & data processing in nano-bio
GridNNN Grid middleware is installed at 10+ HPC centers.
6 VOs: Nanochem, Abinit, GAMESS, Nanospace, Moldyn, Fusion.

17 HPC applications
Computational materials science: ABINIT, GAMESS, GAUSSIAN, Gromacs, FDTD-II, Firefly, LAMMPS, MOLPRO, NAMD, OpenMX, VASP.
CAE: ABAQUS, ANSYS, FlowVision, OpenFOAM, SALOME.
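As an illustration of how such packages are typically launched on the clusters, here is a hedged sketch that generates and submits a batch job for LAMMPS; SLURM, the module name and the input file are our assumptions, since the slides do not name the scheduler or software environment.

    # Hedged sketch: build and submit a batch job for one of the listed codes
    # (LAMMPS). The scheduler, module name and input file are assumptions.
    import subprocess

    job = """#!/bin/bash
    #SBATCH --job-name=lammps-test
    #SBATCH --nodes=4
    #SBATCH --ntasks-per-node=8
    #SBATCH --time=02:00:00

    module load lammps            # hypothetical module name
    mpirun lmp_mpi -in in.melt    # in.melt is a stock LAMMPS example input
    """

    with open("lammps.sbatch", "w") as f:
        f.write(job)

    subprocess.run(["sbatch", "lammps.sbatch"], check=True)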

18 HPC simulations & data processing
Video wall connected by optics to the video cluster with SAGE software.

19 Full genome sequencing project at NRC “KI”
Workflow (Building 140 is linked to the data center by 10 Gb/s optics):
- GAII sequencer server (GAII IPAR, Windows, 2.5 TB), data moved via rsync/samba
- Genome sequence restoration
- Disk cache, 100 TB
- Secondary data processing & genome comparison; researcher access
- Tape data store
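To illustrate the transfer step of this workflow, here is a hedged sketch of an rsync pull from the sequencer server into the disk cache over the 10 Gb/s link; the host name and paths are hypothetical, as the slide only names rsync/samba as the transfer mechanisms.

    # Hedged sketch: pull new sequencing runs into the disk cache with rsync.
    import subprocess

    SRC = "gaii@gaii-server:/data/runs/"   # hypothetical sequencer host and path
    DST = "/cache/genomes/incoming/"       # hypothetical disk-cache path

    subprocess.run(
        ["rsync", "-av", "--partial", SRC, DST],
        check=True,
    )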

20 APOS FP7 project
APOS RU:
- Keldysh Institute of Applied Mathematics of the Russian Academy of Sciences
- Moscow Institute of Physics and Technology (State University)
- National Research Centre "Kurchatov Institute"
- Ugra Research Institute for Information Technologies
APOS EU:
- EPCC, The University of Edinburgh, UK
- CAPS entreprise, FR
- ICM, Uniwersytet Warszawski, PL
- TOTAL S.A., FR
- HLRS, University of Stuttgart, DE
Incompatibility between the requirements of existing software and the capabilities of new supercomputers is a growing problem that is addressed by a pioneering Russian-European collaboration: the Application Performance Optimisation and Scalability (APOS) project.

21 APOS RU project in NRC “KI”
Development of software for calculating the mechanical and transport properties of non-uniform nanostructures using many-body interatomic potentials (Tersoff, Brenner, REBO, ReaxFF, ...) on multiprocessor and heterogeneous computer systems.
- Many-body potential for carbon nanostructures: the bond order is the many-body term
- Atomistic method for transport calculations
Molecular dynamics simulations are computationally intensive, so heterogeneous systems with GPUs and a novel programming approach are needed.
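For reference, the generic form of such bond-order potentials is the standard Tersoff expression below; the slide indicates only the bond-order term, so this is the textbook form rather than the project's exact model.

    % Standard Tersoff-type bond-order potential (textbook form):
    E = \frac{1}{2} \sum_{i} \sum_{j \neq i} f_C(r_{ij})
        \left[ f_R(r_{ij}) + b_{ij}\, f_A(r_{ij}) \right]
    % f_R, f_A: repulsive and attractive pair terms; f_C: smooth cutoff function;
    % b_{ij}: the bond order, a many-body function of the local environment of the i-j bond.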

22 Thank you!
"And he doesn't even live in atomic Dubna, but in some research institute near Kashira, and he lies that there he is the chief of an automatic electronic computing machine." (A. Galich)