www.eu-eela.org E-science grid facility for Europe and Latin America E2GRIS1 Gustavo Miranda Teixeira, Ricardo Silva Campos, Laboratório de Fisiologia Computacional


Heart Simulator: Final Report
E2GRIS1, Itacuruça (Brazil), 2-15 November 2008
Gustavo Miranda Teixeira, Ricardo Silva Campos
Laboratório de Fisiologia Computacional, UFJF

Introduction

The project comprises three applications:
- InvCell: a genetic algorithm (GA) that estimates the parameters of the system of equations that simulates the cardiac cells
- Heart Simulator: the forward problem; it solves the bidomain equations, which describe the variation of the voltage in cardiac tissue
- InvTissue: a GA that estimates parameters of the bidomain equations
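For reference, the bidomain equations mentioned here are commonly written as follows (not given on the slide; v is the transmembrane voltage, u_e the extracellular potential, σ_i and σ_e the intra- and extracellular conductivity tensors, χ the membrane surface-to-volume ratio, C_m the membrane capacitance, and I_ion the ionic current supplied by the cell model):

```latex
\nabla \cdot (\sigma_i \nabla v) + \nabla \cdot (\sigma_i \nabla u_e)
  = \chi \left( C_m \frac{\partial v}{\partial t} + I_{\mathrm{ion}}(v, \mathbf{s}) \right),
\qquad
\nabla \cdot \left( (\sigma_i + \sigma_e) \nabla u_e \right)
  = -\, \nabla \cdot (\sigma_i \nabla v).
```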

InvCell

- The application was implemented using MPI and ran on a cluster with 6 nodes
- The first step of the porting was to modify the parallelization strategy

InvCell

Old parallelization strategy:
- Synchronous: the master splits all the individuals equally among the slaves

New strategy:
- Asynchronous: individuals wait in a pool for a free slave; when a slave finishes it tells the master "I finished, send me more work!", so the more efficient slaves do more of the work

[Slide diagram: master/slave layouts for the synchronous and asynchronous schemes]

InvCell

Why asynchronous instead of synchronous?
- The synchronous method works well on a cluster of identical machines
- The grid infrastructure is very heterogeneous
- The asynchronous method therefore avoids wasting computational resources: the master and the other slaves do not have to wait for the slowest slave to finish its task

InvCell

During the porting we had problems compiling the application on the User Interface:
- It depends on many libraries
- We first compiled it statically, but the static executable is very large, and the grid takes longer to handle it
- So our tutor installed the libraries on the Worker Nodes

InvCell

We fixed some problems in the code. The next step was to put all the parameters in the JDL file.

Problems:
- Some parameters were frequently lost
- Access permissions on the parameter files

Solutions:
- A script to copy and register the parameter files on a Storage Element (SE)
- A script to copy the executable file to each machine used in the execution
- Changing some access permissions

The forward problem (the Heart Simulator) had the same issues.
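The stage-in script mentioned in the solutions could be as simple as the following sketch (the SE host, the LFN directory, and the `params/*.dat` layout are all assumptions for illustration; `lcg-cr` is the gLite copy-and-register command):

```shell
#!/bin/bash
# Sketch: copy-register every parameter file on a Storage Element.
# SE_HOST and LFN_DIR are hypothetical and must be set by the caller.

register_params () {
    local f
    for f in params/*.dat; do
        # lcg-cr copies a local file to the SE and registers it in the
        # file catalogue under the given logical file name (LFN).
        lcg-cr -d "$SE_HOST" -l "lfn:$LFN_DIR/$(basename "$f")" \
               "file://$PWD/$f" || return 1
    done
}
```

It would be run once from the User Interface, after creating a valid proxy, before submitting the jobs.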

The Simulator Porting

- We had already tried porting to GILDA before E2GRIS1: we created a JDL file with JobType "MPICH" and the number of nodes to run on
- The job was submitted successfully, but its status remained "Running" until the proxy expired
- We tried the same thing at E2GRIS1 and hit the same problem
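The JDL described here might look roughly like the following sketch (the executable name, arguments, sandbox contents, and node count are illustrative assumptions; only the MPICH job type comes from the slides):

```
Type          = "Job";
JobType       = "MPICH";
NodeNumber    = 6;                     // illustrative node count
Executable    = "heart_simulator";     // hypothetical binary name
Arguments     = "params.dat";
StdOutput     = "std.out";
StdError      = "std.err";
InputSandbox  = {"heart_simulator", "params.dat"};
OutputSandbox = {"std.out", "std.err"};
```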

The Simulator Porting

The tutors showed us where the problem could be. There were several issues:
- Too many parameters
- Parameter files too big

Solution: use the Storage Elements (SEs)
- Send the parameters to the SE
- Retrieve the files before the execution: each Worker Node copies the files to its local disk

The Simulator Porting

The shell script, run on every Worker Node:
- Copies the files from the Storage Element
- Changes the binary's permissions
- Runs the binary
- Copies the results back to the Storage Element

The JDL file was changed to run the shell script instead of the binary. As a result, each Worker Node ran a sequential version of the binary: the MPICH parameters that set up the master and the slaves were lost.
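The wrapper could have looked something like this sketch (all file names, the SE host, and the LFN directory are hypothetical; `lcg-cp` and `lcg-cr` are the gLite copy and copy-register commands):

```shell
#!/bin/bash
# Sketch of the per-node wrapper script; names are hypothetical.

stage_in () {
    # Copy the binary and the parameter file from the Storage Element.
    lcg-cp "lfn:$LFN_DIR/heart_sim" "file://$PWD/heart_sim" || return 1
    lcg-cp "lfn:$LFN_DIR/params.dat" "file://$PWD/params.dat" || return 1
    # Execute permission is not preserved by the transfer.
    chmod +x heart_sim
}

run_binary () {
    ./heart_sim params.dat > result.dat
}

stage_out () {
    # Copy the result back and register it on the Storage Element.
    lcg-cr -d "$SE_HOST" -l "lfn:$LFN_DIR/result.dat" "file://$PWD/result.dat"
}

main () { stage_in && run_binary && stage_out; }
```

Because the wrapper, not mpirun, launches `heart_sim`, each node runs its own sequential copy, which is exactly the problem described above.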

The Simulator Porting

The solution: use MPI hooks and some environment variables
- A set of environment variables captures the parameters MPICH needs to run in parallel
- The hooks make it possible to run code before the application starts and after it finishes:
  - Pre-run hooks: the parameter files are copied from the SEs
  - Post-run hooks: the result files are copied back to the SE
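In mpi-start, the mechanism gLite used to run MPI jobs at the time, the hooks are shell functions named `pre_run_hook` and `post_run_hook` in a user-supplied hooks file. A sketch, with hypothetical LFNs and SE name:

```shell
#!/bin/bash
# Sketch of an mpi-start hooks file; LFN directory and SE host are
# hypothetical and assumed to be set in the environment.

pre_run_hook () {
    # Before MPICH starts: fetch the parameter file from the SE.
    lcg-cp "lfn:$LFN_DIR/params.dat" "file://$PWD/params.dat" || return 1
    return 0
}

post_run_hook () {
    # After the application finishes: copy-register the result on the SE.
    lcg-cr -d "$SE_HOST" -l "lfn:$LFN_DIR/result.dat" \
           "file://$PWD/result.dat" || return 1
    return 0
}
```

Unlike the wrapper approach, mpi-start itself launches the binary through MPICH, so the master/slave setup is preserved.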

The Simulator Porting

The job still showed the status "Running" until the proxy expired: the executable binary (50+ MB) was too big to be shipped in the InputSandbox. We applied the same solution as for the parameter files:
- Copy-register the binary on the Storage Element
- Copy the binary to the Worker Node in the pre-run hook

Conclusion

E2GRIS1 helped us gridify two of the three applications we intended to:
- InvCell is gridified
- The Simulator is gridified
- InvTissue is not

InvTissue could not be ported because we could not contact its developer to fix some issues unrelated to the grid. We believe the same gridification process used for the other applications would work for it as well.

Questions?