Slide 1 "SKIF-GRID" SUPERCOMPUTING PROJECT OF THE UNION STATE OF RUSSIA AND BELARUS: SHORT OVERVIEW OF CURRENT STATUS. A. A. Moskovsky, Program Systems Institute, Russian Academy of Sciences. IKI - MSR Research Workshop, Moscow, June 2009

Slide 2 Pereslavl-Zalessky
 Russian Golden Ring city, 857 years old
 Hometown of Grand Dukes of Russia
 The first building site of Peter the Great's navy
 Ancient capital of the Russian Orthodox Church
 Located 120 km from Moscow

Slide 3 "SKIF-GRID" PROJECT TIMELINE
1. 2000–2004 – SKIF project; SKIF K-1000 is #98 in Top500
2. June 2004 – first proposal filed for "SKIF-GRID" project
3. March 2007 – approved by Government
4. March 2008 – SKIF-MSU supercomputer deployed (#36 in June 2008 Top500)
5. May 2008 – "SKIF-Testbed" federation created
6. March 2009 – alliance agreement signed for SKIF Series 4 development

Slide 4 PROJECT ORGANIZATION: Project directions
1. Grid technology
2. Supercomputers (hardware and software)
3. Security
4. Pilot projects – applications of HPC and grid technology

Slide 5 «SKIF MSU»

Slide 6 SKIF MSU
 Theoretical peak performance: 60 TFlops
 Linpack performance: 47 TFlops
 Advanced clustering solutions:
 diskless computational nodes
 original blade design

Parameter          Value
CPU architecture   x86-64
CPU model          Intel Xeon E5472, 3.0 GHz (4 cores)
Nodes (dual CPU)   625
CPU cores total    5,000
Interconnect       InfiniBand DDR, fat tree
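As a quick sanity check, the quoted peak follows from cores × clock × flops per cycle: a Harpertown-class Xeon E5472 core retires 4 double-precision flops per cycle. A minimal sketch of the arithmetic:

```c
#include <stdio.h>

int main(void) {
    const double cores = 5000.0;          /* 625 nodes x 2 CPUs x 4 cores */
    const double clock_hz = 3.0e9;        /* Xeon E5472, 3.0 GHz */
    const double flops_per_cycle = 4.0;   /* SSE: 2-wide add + 2-wide mul */

    double peak_tflops = cores * clock_hz * flops_per_cycle / 1e12;
    double linpack_tflops = 47.0;         /* measured, from the slide */

    printf("theoretical peak: %.0f TFlops\n", peak_tflops);   /* 60 */
    printf("Linpack efficiency: %.0f%%\n",
           100.0 * linpack_tflops / peak_tflops);             /* ~78 */
    return 0;
}
```

The roughly 78% Linpack-to-peak ratio is typical for InfiniBand clusters of that generation.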

Slide 7 «SKIF-Testbed» (a/k/a "SKIF-Polygon")
 Federation of HPC centers, ~100 TFlops
 4 computers in the current Top500:
 MSU (#35 in Top500)
 South Ural State University
 Tomsk State University
 Ufa State Aviation Technical University

Slide 8 Middleware platform – UNICORE 6.1
 X.509 for security
 Certificate Authority at Pereslavl-Zalessky (PyCA)
 Site platform:
 UNICORE 6.1
 Java 1.5
 Linux
 Torque
 Experimental sites: UNICORE is complemented with additional services/modules
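For flavor, jobs on a UNICORE site are described declaratively and submitted through the UNICORE command-line client (UCC). Below is a minimal, illustrative sketch in the style of a UCC job description; the executable path and resource values are hypothetical, not taken from the project:

```json
{
  "Executable": "/opt/apps/my_simulation",
  "Arguments": ["--steps", "1000"],
  "Resources": {
    "Nodes": "4",
    "CPUsPerNode": "8"
  }
}
```

Such a file would be submitted with something like `ucc run job.u`; the X.509 certificate issued by the project CA authenticates the user to the target site.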

Slide 9 Applications
 HPC applications:
 Drug design (MSU Belozersky Institute, SRCC, Chelyabinsk SU)
 Inverse problems in soil remote sensing (SRCC)
 Computational chemistry (MSU Chemistry Department)
 Geophysical data services
 Mammography database prototype (N.N. Semenov Chemical Physics Institute, RAS)
 Text mining (PSI RAS)
 Engineering (South Ural University, …)
 Space Research Institute
 …

Slide 10 SKIF-Aurora: second phase of the SKIF-GRID project

Slide 11 SKIF Series 4: original R&D goals
 Highest density of performance (the largest possible number of CPUs per 1U)
 Lower latency
 Fewer cables and connectors – better reliability
 Increased heat dissipation per 1U calls for a new cooling technology… How to?
 Improved interconnect: better scalability, bandwidth, and latency than provided by the best available solutions (e.g., InfiniBand QDR)
 A new approach to monitoring and management of the supercomputer
 Combining standard CPUs and accelerators in the computational nodes of the supercomputer

Slide 12 Spring 2008: SKIF Series 4 – How To?

Slide 13 Summer 2008: SKIF Series 4 – Know How!
 Italian-Russian cooperation
 «SKIF Series 4» == «SKIF-AURORA Project»
 Designed by an alliance of Eurotech, PSI RAS and RSC SKIF, with support from Intel
 To be presented at ISC'09

Slide 14 SKIF-Aurora distinctive features
 No moving parts
 Liquid cooling – power efficiency
 x86-64 processors (Intel Nehalem)
 3-D torus interconnect
 Redundant management/monitoring subsystem
 FPGA on board (optional)
 SSD disks (optional)
 QDR InfiniBand

Slide 15 SKIF-Aurora
 32 nodes per chassis
 64 CPUs in 6U
 Up to 8 chassis per rack
 Up to 512 CPUs per rack
 Up to 2048 cores per rack
 To build 500 TFlops: 21 racks (2009)
 Scalable due to 3-D torus
 10 kW per chassis
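The density figures are internally consistent, as a short sketch of the arithmetic shows; the 2.93 GHz clock is an assumption for illustration (the slide does not state a frequency):

```c
#include <stdio.h>

int main(void) {
    int nodes_per_chassis = 32;
    int cpus_per_node = 2;              /* dual-socket nodes */
    int chassis_per_rack = 8;
    int cores_per_cpu = 4;              /* quad-core Nehalem */

    int cpus_per_chassis = nodes_per_chassis * cpus_per_node;   /* 64 */
    int cpus_per_rack = chassis_per_rack * cpus_per_chassis;    /* 512 */
    int cores_per_rack = cpus_per_rack * cores_per_cpu;         /* 2048 */

    /* Assumed 2.93 GHz clock, 4 DP flops/cycle per Nehalem core. */
    double rack_tflops = cores_per_rack * 2.93e9 * 4.0 / 1e12;  /* ~24 */
    printf("CPUs/rack: %d, cores/rack: %d\n", cpus_per_rack, cores_per_rack);
    printf("~%.0f TFlops/rack, ~%.0f TFlops in 21 racks\n",
           rack_tflops, 21.0 * rack_tflops);                    /* ~504 */
    return 0;
}
```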

Slide 16 SKIF-AURORA: designed by the alliance of Eurotech, PSI RAS and RSC SKIF
 Eurotech: PCBs, mechanics, power supply, cooling, levels 1 and 2 of the management system
 PSI RAS and RSC SKIF: level 3 of the management system, interconnect (3D-torus: firmware, routing, drivers, MPI-2, …), FPGA as accelerator

Slide 17 SKIF-AURORA Management Subsystem

Slide 18 3-D torus interconnect implementation
[Diagram: compute node with CPU (standard part) and FPGA (non-standard part); system interconnect: 3D-torus; subsidiary interconnect: InfiniBand]
 Only the QCD-specific functionality is implemented by the Italian team
 Russian teams are to upgrade the network to a general-purpose interconnect (MPI 2.0), due to appear in fall 2009
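To illustrate what a general-purpose 3D-torus interconnect offers applications: MPI codes address such a network through Cartesian communicators, letting the library map ranks onto physical torus links. A minimal sketch (standard MPI, not project-specific code):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* Ask MPI to arrange ranks in a 3-D grid with wraparound
       links in every dimension, i.e. a torus. */
    int dims[3] = {0, 0, 0};            /* let MPI choose the extents */
    int periods[3] = {1, 1, 1};         /* periodic = torus */
    int nprocs;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Dims_create(nprocs, 3, dims);

    MPI_Comm torus;
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 1, &torus);

    /* Each rank learns its torus coordinates and its neighbors along
       the X dimension; nearest-neighbor exchanges then map directly
       onto physical torus links. */
    int rank, coords[3], left, right;
    MPI_Comm_rank(torus, &rank);
    MPI_Cart_coords(torus, rank, 3, coords);
    MPI_Cart_shift(torus, 0, 1, &left, &right);

    printf("rank %d at (%d,%d,%d): X-neighbors %d/%d\n",
           rank, coords[0], coords[1], coords[2], left, right);

    MPI_Comm_free(&torus);
    MPI_Finalize();
    return 0;
}
```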

Slide 19 R&D directions using FPGA
 Collective MPI operations using FPGA
 FPGA to facilitate support of PGAS languages (UPC, Titanium, etc.)
 FPGA+CPU hybrid computing
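For context, the collectives in question are standard MPI calls such as MPI_Allreduce; offloading their reduction logic to the FPGA would be transparent to application code like this minimal sketch:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank contributes one value; the interconnect (or, in the
       proposed design, the FPGA) combines them into a global sum that
       every rank receives. The application call is the same either way. */
    double local = (double)rank;
    double global_sum = 0.0;
    MPI_Allreduce(&local, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f\n", global_sum);

    MPI_Finalize();
    return 0;
}
```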

Slide 20 Conclusions
The SKIF-Aurora project:
 is based on collaboration between international teams
 harnesses shared expertise and results
 aims to develop a family of petascale-level supercomputers using innovative techniques:
 higher density of CPUs (flops per volume)
 efficient water cooling system
 scalable, powerful 3D-torus interconnect
 etc.

Slide 21 Datacenter visualization

Slide 22 Datacenter visualization

Slide 23 THANKS SKIF-GRID web site