The Finnish Grid Infrastructure: Computing Environment and Tools – Wednesday, 21st of May 2014 – EGI Community Forum 2014, Helsinki – Luís Alves, Systems Specialist.

Presentation transcript:

The Finnish Grid Infrastructure: Computing Environment and Tools
Wednesday, 21st of May 2014 – EGI Community Forum 2014, Helsinki
Luís Alves, Systems Specialist at CSC – IT Center for Science Ltd., Finland
csc.fi

CSC – IT Center for Science, Ltd.
–Private and non-profit company owned by the Ministry of Education and Culture
–Provides IT support and resources for academia, research institutes and companies
–Part of the Finnish national research structure
–Finnish partner on:

Computing Resources for Science
Sisu – Cray XC30 supercomputer [Upgrading]
–Massive computational challenges
–> cores, > 23 TB memory
–Theoretical peak performance > 240 Tflop/s
Taito – HP cluster [Upgrading]
–Small and medium-sized tasks
–Theoretical peak performance 180 Tflop/s
Hippu – Application server
–Interactive usage, without job scheduler
–Post-processing, e.g. visualization
Pouta – Cloud Service [New]
–OpenStack
Finnish Grid Infrastructure (FGI)

About FGI

In the beginning, we had M-Grid
–Interest in Grid technology rose in Finland during 2003
–A consortium of 7 universities, HIP and CSC was formed, which successfully obtained funding for the first Finnish computing grid – M-Grid
–The effort was driven by CSC and Kai Nordlund (HU)
–M-Grid was operational from 2005 to
–Sites
–Theoretical total computing capacity ~2.5 TFlops
–The infrastructure had aged significantly by end of 2008

Then, FGI was born
Second-generation “M-Grid” planned since 2009
–Application for funding made in October 2010
–FIRI grant approved in the beginning of 2011
–Consortium of 9 universities and CSC

Finnish Grid Infrastructure (FGI)
–10 computing clusters connected through network and Grid middleware, providing a peak capacity of 154 TFLOPS
–Available to any researcher affiliated with a Finnish research institution
–Operations and coordination by CSC

FGI in EGI
FGI is the Finnish NGI; EGI sees us as NGI_FI
CSC is the Finnish Operations Center
–Uses the monitoring and service tools provided by EGI
–Follows EGI procedures for operations
–Manages the Regional Operator on Duty (ROD) team

FGI – also a Federation
Sites maintain their own clusters
–Local use is open at all sites
Site administrators are encouraged to collaborate and communicate by
–Attending weekly admin meetings
–Providing Grid software support for users
–Becoming part of the FGI community
A small team from CSC coordinates general administration and support

Hardware details
–Standard node configuration: HP SG7 scale-out, dual 6-core 2.67 GHz Xeon X, GB memory (min.)
–Big memory nodes: HP ProLiant DL580 G7 server, 1 TB memory
–GPGPU nodes: 2 NVIDIA Tesla cards in a standard compute node
–Disk servers: total storage capacity of about 1 PB
–QDR InfiniBand & Gigabit Ethernet for interconnect and network

Operating System and Scheduler
–Operating system: Scientific Linux 6
–Scheduler: Slurm
Hardware distribution
–Aalto: 112 nodes, 8 GPGPU nodes, two 1 TB big memory nodes
–Lappeenranta: 16 nodes
–Eastern Finland: 64 nodes
–Helsinki: 49 nodes, 20 GPGPU nodes, one 1 TB big memory node
–Jyväskylä: 48 nodes, 8 GPGPU nodes
–Oulu: 30 nodes
–Tampere (TUT): 37 nodes, 8 GPGPU nodes, one 1 TB big memory node
–Turku: 20 nodes
–Åbo Akademi: 8 GPGPU nodes
–CSC: 24 nodes (with 96 GB memory)
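
Local users at the member sites submit work directly to Slurm, while grid jobs arrive through the ARC middleware described later. As a reminder of what a direct submission looks like, here is a minimal Slurm batch script sketch; the partition name and the executable are illustrative assumptions, not FGI-specific values.

  #!/bin/bash
  #SBATCH --job-name=example
  #SBATCH --ntasks=12              # one full standard node: 2 CPUs x 6 cores
  #SBATCH --time=01:00:00          # one-hour wall-clock limit
  #SBATCH --mem-per-cpu=2G
  #SBATCH --partition=normal       # partition name is an assumption

  srun ./my_program                # hypothetical MPI-enabled executable

The script would be submitted with "sbatch job.sh" and monitored with "squeue".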

Finnish University and Research Network
FUNET is an advanced data communications network serving the Finnish research community. It connects about 80 research organizations and over users. Membership in FUNET is open to all Finnish university-level academies and public research institutions.

Grid Middleware
FGI uses the ARC middleware
–Developed by NorduGrid, part of the European Middleware Initiative (EMI)
–More info:
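
For orientation, the usual ARC client workflow from the command line looks roughly like the sketch below; the job description filename is a placeholder, and the exact certificate and cluster setup should follow the FGI documentation.

  # Create a short-lived proxy from the personal grid certificate
  arcproxy
  # Submit a job described in an xRSL file (filename is a placeholder)
  arcsub job.xrsl
  # Check the status of all submitted jobs
  arcstat -a
  # Retrieve the outputs of finished jobs
  arcget -a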

Software distribution – CernVM-FS
–Central repository for FGI’s software
–Makes it easy to distribute software
–Modules and Runtime Environments are shared through CVMFS
–Each cluster has a Squid proxy that caches the most-used files
–More details in Ulf Tigerstedt's presentation “Managing multidisciplinary software repositories for grid with CernVM-FS” here:
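
As an illustration of the client side, the CVMFS configuration on a compute node typically amounts to a few lines in /etc/cvmfs/default.local; the repository name and proxy address below are assumptions for the sketch, not the actual FGI values.

  # /etc/cvmfs/default.local (illustrative values only)
  CVMFS_REPOSITORIES=fgi.csc.fi                       # assumed repository name
  CVMFS_HTTP_PROXY="http://squid.site.example:3128"   # site-local Squid cache (assumed address)
  CVMFS_QUOTA_LIMIT=20000                             # local cache size in MB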

FGI Computing Environment and Tools

Scientist's User Interface (SUI)
More info at:

ARC xRSL file generator tool on SUI

arcrunner – Grid Job Submission Manager
–A “gridification” tool developed and maintained by Kimmo Mattila (CSC)
–Actively used to run large job sets on FGI, e.g. BLAST, InterProScan, Exonerate
–Selects suitable and available resources, submits and monitors jobs, and fetches their outputs

  arcrunner -xrsl average.xrsl

Runtime Environments (RTE)
Extended Resource Specification Language (xRSL) file example:

  &
  (executable=runamberMPI.sh)
  (jobname=amber-test)
  (stdout=std.out)
  (stderr=std.err)
  (gmlog=gridlog_1)
  (walltime=1h)
  (memory=200)
  (disk=1000)
  (count=6)
  (runtimeenvironment=ENV/ONENODE)
  (runtimeenvironment=APPS/CHEM/AMBER-12)
  (inputfiles=
    ( "gbin" "gbin" )
    ( "md12.x" "md12.x" )
    ( "prmtop" "prmtop" ) )
  (outputfiles=
    ( "output.tar" "output.tar" ) )

Available RTEs: –AMBER –AutoDock –BLAST –Bowtie and BWA –Cufflinks –Elmer –EMBOSS –Exonerate –FreeSurfer –GPAW –Gromacs –GSNAP –HMMER 3.0 –Interproscan5 –Matlab Compiler Runtime –MISO –MrBayes –NAMD –ORCA –R –SAMtools –SHRiMP –TopHat
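
The executable named in the xRSL above, runamberMPI.sh, is a user-supplied wrapper script that runs on the execution node once the requested runtime environments have set up the software. A minimal sketch of what such a wrapper might contain is shown below; the module name and the sander.MPI options are assumptions for illustration, not the contents of the actual FGI runtime environment.

  #!/bin/bash
  # Hypothetical wrapper for the job described in the xRSL example above.
  # The AMBER-12 runtime environment is expected to make the Amber tools
  # available on the node; the module name here is an assumption.
  module load amber
  # Run a parallel Amber simulation on the cores requested with (count=6)
  mpirun -np 6 sander.MPI -O -i gbin -p prmtop -c md12.x -o md12.out
  # Pack the results into the single file declared under (outputfiles=...)
  tar -cf output.tar md12.out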

Modules
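
Software on FGI is made available through environment modules (shared via CVMFS, as noted earlier). For reference, the basic module commands look like this; the module name and version are illustrative:

  # List the software made available through modules
  module avail
  # Load an application into the current shell session
  module load gromacs/4.6.5        # name and version are illustrative
  # Show what is currently loaded
  module list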

Results

FGI

FGI in the Clouds

Future: A Grid-Cloud Hybrid
–Cloud-Enabled FGI
–Application for funds submitted in April 2014

Thank you. Questions?
Credits and special thanks to: Jura Tarus; Ulf Tigerstedt; Kimmo Mattila; the universities’ FGI admins; CSC’s CE group and staff; FGI’s former members; FGI users.
More information about FGI and how to use it:

Conclusions
Performance comparison
–Per-core performance ~2x compared to Vuori/Louhi
–Better interconnects enhance scaling
–Larger memory
–Smarter collective communications

Sisu&Taito vs. FGI vs. Local Cluster
Columns: Sisu&Taito (Phase 1) | FGI | Merope
–Availability: Available
–CPU: Intel Sandy Bridge, 2 x 8 cores, 2.6 GHz, Xeon E (Sisu&Taito) | Intel Xeon, 2 x 6 cores, 2.7 GHz, X5650 (FGI, Merope)
–Interconnect: Aries / FDR IB (Sisu&Taito) | QDR IB (FGI, Merope)
–Cores / RAM per core: 2 / 4 GB, 16x 256 GB/node (Sisu&Taito) | 2 / 4 / 8 GB (FGI) | 4 / 8 GB (Merope)
–Tflops: 244 / (Sisu&Taito); GPU nodes in Phase 2
–GPU nodes: 88 (FGI) | 6 (Merope)
–Disc space: 2.4 PB (Sisu&Taito) | 1+ PB (FGI) | 100 TB (Merope)