Future development at INCDTIM computing center Dr. Ing. Fărcaş Felix NATIONAL INSTITUTE FOR RESEARCH AND DEVELOPMENT OF ISOTOPIC AND MOLECULAR TECHNOLOGIES.

Future development at INCDTIM computing center
Dr. Ing. Fărcaş Felix
NATIONAL INSTITUTE FOR RESEARCH AND DEVELOPMENT OF ISOTOPIC AND MOLECULAR TECHNOLOGIES
Donath St., Cluj-Napoca, ROMANIA

Achievements in 2011 – Grid, MPI and Network
1. A 10 Gbps uplink
2. Multiple backup solutions
3. MPI cluster
4. A functional Grid site processing ATLAS jobs

Network capabilities
– Cisco Catalyst 6509E Layer 3 system
– ITIM – RoEduNet: 10 Gbps from 1 March
– Nexus system through the Grid system:
  – ~40 Gbps between the WN and the SE/CREAM
  – ~20 Gbps between the site and the switch

Network link measurements: 2008 (Farcas F., AQTR 2008) and 2009 (Farcas F., CISSE 2009); speed tested with the MGEN program in March.
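As a rough illustration of the kind of measurement described above, here is a minimal TCP throughput probe in Python. It is a sketch, not the MGEN tool actually used for the tests; the host name and port in the example call are hypothetical placeholders.

import socket
import time

PAYLOAD = b"\0" * 65536          # 64 KiB per send() call
DURATION = 10                    # seconds of traffic to generate

def send_traffic(host: str, port: int) -> None:
    """Push data to a listening endpoint for DURATION seconds and report Mbit/s."""
    with socket.create_connection((host, port)) as s:
        sent, start = 0, time.time()
        while time.time() - start < DURATION:
            s.sendall(PAYLOAD)
            sent += len(PAYLOAD)
    elapsed = time.time() - start
    print(f"{sent * 8 / elapsed / 1e6:.1f} Mbit/s over {elapsed:.1f} s")

if __name__ == "__main__":
    send_traffic("se.itim-cj.ro", 5001)   # hypothetical storage-element host and port

A listener must be running on the far end (for example a netcat process that discards incoming data), so the reported figure reflects raw end-to-end TCP throughput on the link.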

Multiple backup solutions
– Power generator: 275 kW, full start in 8 seconds
– Uninterruptible Power Source: APC Symmetra, max 160 kVA, fully redundant

MPI Cluster
– Programs installed:
  – CASTEP (NMR, Forcite, Reflex Plus, Conformers modules) (licensed)
  – MolPro, SIESTA, Quantum ESPRESSO, ProChem, ORCA, etc.
– Environment Modules package for jobs
– Monitored by Ganglia
– 25 processing stations
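A quick way to confirm that an MPI installation really spans the cluster's processing stations is a rank/host census. The sketch below assumes the mpi4py package and an MPI runtime are available through the module environment; the package choice and the launch command are assumptions, not the site's documented setup.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()               # id of this process
size = comm.Get_size()               # total number of ranks started by mpirun

# Collect every rank's host name on rank 0 and print a one-line census.
hosts = comm.gather(MPI.Get_processor_name(), root=0)
if rank == 0:
    print(f"{size} MPI ranks running on {len(set(hosts))} distinct nodes")

Launched through the batch system, e.g. "mpirun -np 25 python mpi_census.py", it should report one rank per processing station if the placement is correct.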

RO-14-ITIM Grid site
– Four main stations (CREAM, APEL, SE, UI)
– Maximum number of processing cores: 300
– Storage: 100 TB, 35 TB online (for the moment)
– Technology used: next slide…

Technology used
– Blade systems and MSA storage
– IBM and HP blade systems: a total of 46 blades in only 29 U
– Intel 1U servers
– Processing power: 432 cores
– Management through the HP and IBM web-based interfaces
– MSA storage capacity: 50 TB
– Less space than 46 standalone PCs => core count increased by 56%
– Power consumption reduced by 34%

Grid Results 2011
– CPU efficiency: ~
– jobs = 6% of the Romanian (RO) total
– In 2010: ~6000 jobs

Network configuration and logical schema of the datacenter (Farcas F. et al. – REV 2011)

Future development for 2012
– Grid site development
– MPI cluster development
– Improving monitoring of the site and the cluster
– IPv6 implementation
– What are the financial sources?
– Organizing the RO-LCG 2012 symposium/conference

Grid development
– Increasing memory to 3 GB/core
– Increasing storage capacity (150 TB online)
– Building a virtual Grid site based on EMI (European Middleware Initiative) for testing
– If the tests are successful, deploying the Grid site with EMI
– Implementing IPv6 in the virtual site (see the sketch below)
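Before enabling IPv6 on production services, a simple reachability probe against the virtual site can confirm that a service publishes an AAAA record and accepts connections over IPv6. This is a minimal sketch; the host name and port in the example call are hypothetical test values, not actual site endpoints.

import socket

def ipv6_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if at least one IPv6 address of `host` accepts a TCP connection."""
    try:
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return False                      # no AAAA record published
    for family, socktype, proto, _, sockaddr in infos:
        try:
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(timeout)
                s.connect(sockaddr)
                return True
        except OSError:
            continue                      # try the next address, if any
    return False

if __name__ == "__main__":
    print(ipv6_reachable("cream.itim-cj.ro", 8443))   # hypothetical CREAM CE endpoint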

MPI development
– Building a strong monitoring system (Nagios, Ganglia), as sketched below
– Implementing dedicated queues for specific processing programs
– Installing new software (Gaussian 2009, CRYSTAL & WIEN2k)
– Acquiring 80 processors for MPI processing, technology not yet decided
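Nagios monitoring is typically extended with small check plugins that report status through their exit code (0 OK, 1 WARNING, 2 CRITICAL). The following is a minimal sketch of such a probe for the 15-minute load average of a worker node; the thresholds are purely illustrative, not the site's actual monitoring configuration.

#!/usr/bin/env python3
import os
import sys

WARN, CRIT = 20.0, 40.0              # hypothetical thresholds for a multi-core node

def main() -> int:
    load15 = os.getloadavg()[2]      # 15-minute load average (Unix only)
    if load15 >= CRIT:
        print(f"CRITICAL - load15={load15:.1f}")
        return 2                     # Nagios CRITICAL
    if load15 >= WARN:
        print(f"WARNING - load15={load15:.1f}")
        return 1                     # Nagios WARNING
    print(f"OK - load15={load15:.1f}")
    return 0                         # Nagios OK

if __name__ == "__main__":
    sys.exit(main())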

Financial resources
Funding through the National Authority for Scientific Research (NASR):
– Continuing the 12 EU – ConDeGrid project
– Continuing the 15 EU project
– Cooperation program "Hulubei-Meshcheryakov", together with the Laboratory of Information Technologies at JINR – Dubna
– POS-CCE 192: Improving the capacity and reliability of the INCDTIM GRID center for integration in international networks (INGRID)
– POS-CCE 536, Axis 2, operation
– PN-II-RU-TE: "First-Principles Modeling of SrTiO3-Based Oxides for Thermoelectric Applications" ("Modelarea First-Principles a Oxizilor bazati pe SrTiO3 pentru Aplicatii Termoelectrice")
– PN-II-KAI2.2-O PM/2008: Molecular and Biomolecular Physics Department Upgrading – MDFMOLBIO

POS-CCE 192: Improving the capacity and reliability of the INCDTIM GRID center for integration in international networks (INGRID)
(European Union – Government of Romania)

Project 192, contract no. 42/
– Total value: lei (~ Euro) to INCDTIM
Main acquisitions:
– Power generator (~ Euro)
– HP Blade system (~ Euro)
– Cisco Catalyst 6509E system (~ Euro)
– APC UPS system (~ Euro)

Research areas applying for MPI resources
Molecular and biomolecular physics:
– Numerical modeling
– Molecular and biomolecular modeling
– Structural analysis of solids

2012 RO-LCG meeting
– We are organizing the RO-LCG meeting next year in Cluj-Napoca at INCDTIM.
– We are doing it with the support of IEEE Romania.
– We will publish your contributions in an IEEE volume.
– We look forward to interesting reports and contributions in the LCG and parallel computing domains.

Published articles
1. F. Farcas, R. Trusca, S. Albert, Izabella Szabo & G. Popeneciu, "Application of Green IT for Physics Data Processing at INCDTIM", PIM.
2. Trusca M.R.C., Farcas F., Szabo I., Albert S., Floare C.G., "Techniques for parameters monitoring in datacenter", XXIII International Symposium on Nuclear Electronics & Computing (NEC 2011), Varna.
3. Felix Farcas, "Development of INCDTIM Datacenter using 'green IT'", XXIII International Symposium on Nuclear Electronics & Computing (NEC 2011), Book of Abstracts, plenary session.
4. Felix Farcas, Radu Trusca, "Grid computing Improvement at INCDTIM Cluj", Proceedings REV 2011, Remote Engineering and Virtual Instrumentation, International Association of Online Engineering, ISBN.

Thank you for your attention.
Happy St. Nicholas Day, Merry Christmas and a Happy New Year!