Distributed High Performance Computing Environment of the Ural Branch of RAS
M.L. Goldshtein, A.V. Sozykin, Institute of Mathematics and Mechanics UrB RAS, Yekaterinburg
G.F. Masich, A.G. Masich, Institute of Continuous Media Mechanics UrB RAS, Perm
Distributed Computing and Grid-technologies in Science and Education, Dubna, July 2012

2 Introduction
The distributed high performance computing environment of UrB RAS is an infrastructure for eScience (a cyberinfrastructure).
Users:
- 40 research institutes of UrB RAS in 7 regions (Yekaterinburg, Arkhangelsk, Syktyvkar, Orenburg, Perm, Izhevsk, Chelyabinsk)
- Universities
- Industrial enterprises
Project participants:
- Institute of Mathematics and Mechanics UrB RAS (IMM UrB RAS) – computational resources and information systems
- Institute of Continuous Media Mechanics UrB RAS (ICMM UrB RAS) – backbone networks
Supported by grants of UrB RAS No 12-P and RCP UrB RAS No RCP-12-I21

3 Overview
The high performance computing environment of UrB RAS consists of four parts:
- Computational resources
- Storage
- Networks
- Cloud platform

4 Computational resources
The main computational resources are installed at the Supercomputer Center of IMM UrB RAS:
- Supercomputer "URAN", peak performance 160 TFlops, 5th position in the TOP50 list
- Computational resources in the regional scientific centers of UrB RAS
- Resources are connected using Grid technologies (Globus) and the SLURM scheduler (a job submission sketch is given below)
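As a rough illustration, a minimal sketch of how a batch job might be handed to SLURM on such a cluster is shown below. The resource limits and the application name my_mpi_application are illustrative assumptions, not the actual URAN configuration.

    import subprocess

    # A simple SLURM batch script: 2 nodes, 8 MPI ranks per node, 20-minute limit.
    JOB_SCRIPT = """#!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=8
    #SBATCH --time=00:20:00
    srun ./my_mpi_application
    """

    # sbatch accepts the batch script on standard input when no file is given.
    result = subprocess.run(["sbatch"], input=JOB_SCRIPT,
                            capture_output=True, text=True, check=True)
    print(result.stdout.strip())  # e.g. "Submitted batch job 12345"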

5 Example tasks
- Calculation of the launch of a carrier rocket (Soyuz-2 and Rus-M) into orbit ("Avtomatika", Yekaterinburg)
- Remote sensing image processing ("Uralgeoinform")
- Aero-engine computational fluid dynamics ("Aviadvigatel", Perm)

6 Example tasks
- Molecular dynamics simulation and reconstruction of UO2–PuO2 interparticle potentials for predicting the behavior of nuclear fuel during fabrication, operation and reprocessing
- Modeling of the internal dynamics of the Earth and other planets in order to study the history and prospects of their evolution
- Development of a mathematical model of the heart that differs from existing ones by using a unique model of the contractile process of muscle fibers; its use in computational experiments will make it possible to address a number of problems relevant to modern cardiology
- Synthesis of radiation patterns for antenna arrays with dual phase control for navigation of autonomous aerial vehicles
- In silico development of new antitumor, anti-inflammatory and antiarrhythmic drugs

7 Grid projects
Grid of the Russian Federation (Ministry of Communications):
- One of the resource centers
- 1 Gbit/s network, Grid gateway
- Setup and testing until the end of 2012
Distributed information and computational environment of the Ural Federal District:
- Institute of Mathematics and Mechanics UrB RAS
- South Ural State University, Chelyabinsk
- All-Russian Scientific Research Center of Technical Physics (Federal Nuclear Center), Snezhinsk
- Ugra Research Institute of Information Technology

8 Storage
Storage resources are also installed at the Supercomputer Center of IMM UrB RAS.
Main storage: EMC Celerra NS-480, 150 TB SATA, NAS (NFS, CIFS)
The storage is used by:
- Supercomputer "URAN"
- The cloud platform of UrB RAS
- Experimental facilities

9 Backbone networks
Giga UrB RAS project:
- High-speed fiber-optic network infrastructure (1-100 Gbit/s)
- 5 regional scientific centers of UrB RAS (Yekaterinburg, Perm, Izhevsk, Syktyvkar and Arkhangelsk)
- "Dark" optical fiber and DWDM technology
Current status:
- Perm-Yekaterinburg communication channel: 1 Gbit/s, 456 km
Plans for 2012:
- Increasing the speed of the Perm-Yekaterinburg channel to 2x10 Gbit/s
- Building the Izhevsk-Perm communication channel

10 Cloud platform
IaaS for computational tasks.
Technologies:
- Linux, oVirt, Apache Deltacloud (a usage sketch is given below)
- Servers from retired computational clusters
Usage:
- HPC applications integrated with the clusters
- Distributed applications (natural language processing, UrB RAS grant RCP-12-P10)
- Education in supercomputing technologies
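As a rough illustration, the sketch below starts a virtual machine through the Apache Deltacloud REST API that fronts the oVirt installation. The endpoint URL, credentials, image id and hardware profile id are illustrative placeholders, not the actual UrB RAS values.

    import requests

    DELTACLOUD_URL = "http://deltacloud.example.org:3001/api"  # assumed endpoint
    AUTH = ("user", "password")                                # assumed credentials

    # Ask Deltacloud to create a new instance from an existing image.
    response = requests.post(
        f"{DELTACLOUD_URL}/instances",
        auth=AUTH,
        data={"image_id": "img-0001",   # placeholder image id
              "hwp_id": "m1-small"},    # placeholder hardware profile
        headers={"Accept": "application/xml"},
    )
    response.raise_for_status()
    print(response.text)  # description of the newly created instance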

11 Matlab in the cloud
Running a task on the supercomputer:

    % Submits 'my_function' for execution on the URAN supercomputer and returns a job handle;
    % the numeric arguments presumably describe the requested parallel resources (IMM-specific interface).
    job = imm_sch_f(8,20,'my_function');

12 Conclusion
- High performance computing environment of UrB RAS
- Computational resources, storage, backbone network, and cloud platform
- Available to research organizations of UrB RAS free of charge