Presentation transcript:

1 Towards Environment for Multiscale Applications
Katarzyna Rycerz, Eryk Ciepiela, Joanna Kocot, Marcin Nowak, Pawel Pierzchała, Marian Bubak
KUKDM, Zakopane
The Mapper project receives funding from the EC's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° RI-261507.

2 Overview
- Multiscale simulations - overview
- Tools for multiscale simulations
- MUSCLE-based multiscale application in GridSpace
- Example demo with the in-stent restenosis application
- Summary and future work

3 Multiscale Simulations
A multiscale simulation consists of modules of different scale. Examples of modelled phenomena:
- the virtual physiological human initiative
- reacting gas flows
- capillary growth
- colloidal dynamics
- stellar systems
- and many more...
- in-stent restenosis: the recurrence of stenosis, a narrowing of a blood vessel, leading to restricted blood flow

4 Tools for multiscale simulations

Model Coupling Toolkit (MCT)
- Applies a message-passing (MPI) style of communication between simulation models
- Oriented towards domain data decomposition of the simulated problem
- Provides support for advanced data transformations between different models
- J. Larson, R. Jacob, E. Ong: "The Model Coupling Toolkit: A New Fortran90 Toolkit for Building Multiphysics Parallel Coupled Models." Int. J. High Perf. Comp. App., 19(3), 2005

Astrophysical Multi-Scale Environment (AMUSE)
- A scripting approach (Python) is used to couple models together; MPI is used to distribute modules
- Astrophysical models: stellar evolution, hydrodynamics, stellar dynamics and radiative transfer
- S. Portegies Zwart, S. McMillan, et al.: A Multiphysics and Multiscale Software Environment for Modeling Astrophysical Systems, New Astronomy, volume 14, issue 4, 2009

High Level Architecture (HLA) components
- Application components run concurrently and communicate using HLA mechanisms
- Components are steerable from outside at runtime through a script interface
- Support for synchronisation between multiscale modules: time stamps, advanced time management
- K. Rycerz, M. Bubak, P. M. A. Sloot: HLA Component Based Environment For Distributed Multiscale Simulations. In: T. Priol and M. Vanneschi (Eds.), From Grids to Service and Pervasive Computing, Springer, 2008

5 Goals
- Support composition of simulation models: a scripting approach to programmatically access simulation components and build multi-disciplinary and multi-scale "in silico" experiments
- Support execution of such experiments and achieve their reusability
- Integrate solutions designed for multiscale simulations' development with the possibilities given by:
  - tools for multiscale simulations
  - environments for application composition
  - computational e-infrastructures

6 GridSpace Experiment Workbench
- Constructing experiment plans from code snippets
- Interactively running experiments
Experiment Execution Environment:
- multiple interpreters
- access to libraries, programs and services (gems)
- access to computing infrastructure: cluster, grid, cloud
Experience:
- ViroLab project
- PL-Grid NGI
E. Ciepiela, D. Harezlak, J. Kocot, T. Bartynski, M. Kasztelnik, P. Nowakowski, T. Gubała, M. Malawski, M. Bubak: Exploratory Programming in the Virtual Laboratory. In: Proceedings of the International Multiconference on Computer Science and Information Technology, pp. 621–628.

7 Multiscale Coupling Library and Environment (MUSCLE)
- Provides a software framework to build simulations according to the complex automata theory
- Introduces the concept of kernels that communicate through unidirectional pipelines, each dedicated to passing a specific kind of data from/to a kernel (asynchronous communication)
- J. Hegewald, M. Krafczyk, J. Tölke, A. G. Hoekstra, and B. Chopard: An agent-based coupling platform for complex automata. ICCS, volume 5102 of Lecture Notes in Computer Science, Springer

Sample application: a Sender module connected to a Console module through MUSCLE communication, configured by the following CxA file:

    # CxA configuration of sample application
    # configure cxa properties
    cxa = Cxa.LAST
    cxa.env["max_timesteps"] = 2
    cxa.env["cxa_path"] = File.dirname(__FILE__)

    # declare kernels
    cxa.add_kernel('w', 'examples.simplejava.Sender')
    cxa.add_kernel('r', 'examples.simplejava.ConsoleWriter')

    # configure connection scheme
    cs = cxa.cs
    cs.attach('w' => 'r') { tie('data', 'data') }
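Following the same CxA DSL, a hypothetical sketch of how a coupling with several kernels, such as the in-stent restenosis application with its BF, SMC and DD modules (slide 9), might be declared; all kernel class names and port names below are illustrative assumptions, not the actual application code:

    # Hypothetical CxA for a three-kernel coupling (class and port names assumed)
    cxa = Cxa.LAST
    cxa.env["max_timesteps"] = 100
    cxa.env["cxa_path"] = File.dirname(__FILE__)

    cxa.add_kernel('bf',  'org.example.BloodFlowKernel')      # assumed class name
    cxa.add_kernel('smc', 'org.example.SmoothMuscleKernel')   # assumed class name
    cxa.add_kernel('dd',  'org.example.DrugDiffusionKernel')  # assumed class name

    cs = cxa.cs
    cs.attach('bf' => 'smc') { tie('wall_shear_stress', 'wall_shear_stress') }   # assumed ports
    cs.attach('dd' => 'smc') { tie('drug_concentration', 'drug_concentration') } # assumed ports

Each attach/tie pair defines one unidirectional pipeline, so exchanging data in both directions between two kernels requires two ties.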

8 GridSpace for MUSCLE applications
An integrated environment for:
- configuring module connections and parameters in the CxA file
- visualizing module connections
- running the application on a chosen e-infrastructure
- interactively post-processing the output using various tools (e.g. MATLAB)
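As an illustration of the interactive post-processing step, a minimal sketch of the kind of snippet an experiment might run on the workbench, assuming the simulation wrote a plain-text file with whitespace-separated columns and the quantity of interest in the last column (the file name and layout are assumptions, not part of GridSpace or MUSCLE):

    # Hypothetical post-processing snippet; 'smc_output.dat' and its layout are assumed
    values = File.readlines('smc_output.dat').map { |line| line.split.last.to_f }
    mean = values.inject(0.0) { |sum, v| sum + v } / values.size
    puts "samples: #{values.size}, mean: #{mean.round(4)}"

In the same spirit the output can be handed to MATLAB or another interpreter supported by the workbench for richer visualisation.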

9 MUSCLE application in GridSpace
Architecture (diagram): the GridSpace Experiment Workbench offers a MUSCLE CxA graphical viewer of model connections, various script editors (general-purpose: Python, Ruby, Perl; specific: Matlab, Mathematica, CxA interpreter), an infrastructure access layer, and user file management with a view of the simulation output. It connects to a GridSpace experiment host that provides the interpreters and libraries, access to PBS, and the user files. The application modules (BF, SMC, DD and others) communicate via MUSCLE and run either on the QCG infrastructure, submitted through the QCG grid resource management system, or on a cluster through a local DRMS (PBS).

10 Tool for automatic MUSCLE application distribution
Main features:
- accessible from the GridSpace level
- automatically distributes MUSCLE applications in a grid environment
- live stdout/stderr streaming
- based on Distributed Ruby (DRb) and PBS
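To illustrate the mechanism behind the live stdout/stderr streaming, a minimal self-contained sketch using the standard Distributed Ruby (DRb) library; it runs on a single host for simplicity, whereas in the actual tool the console object would sit on the workbench host and the relaying loop inside the PBS job on the worker node (host, port and method names are assumptions):

    require 'drb/drb'

    # "Workbench" side: an object that receives streamed output lines.
    class Console
      def print_line(line)
        $stdout.print(line)
      end
    end

    uri = 'druby://localhost:9000'          # port chosen arbitrarily for this sketch
    DRb.start_service(uri, Console.new)     # expose the console over DRb

    # "Worker" side: obtain a remote reference and relay a child process's output.
    console = DRbObject.new_with_uri(uri)
    IO.popen('echo simulated kernel output') do |io|
      io.each_line { |line| console.print_line(line) }  # live streaming, line by line
    end

    DRb.stop_service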

11 Demo – In-stent restenosis in GridSpace

12 Summary and Future Work
- GridSpace can be used as a high-level tool for setting up and running MUSCLE-based multiscale applications
- We plan to extend our solution into a set of tools supporting programming and execution of multiscale applications in general
- To control and test the behaviour of such applications we plan to support the creation of their skeletons: parametrised "empty" multiscale applications with the same structure and requirements as the real ones
- We plan to support various European e-infrastructures and cloud resources
See:

13 MAPPER architecture
- Develop computational strategies, software and services for distributed multiscale simulations across disciplines, exploiting existing and evolving European e-infrastructure
- Deploy a computational science infrastructure
- Deliver high-quality components aimed at large-scale, heterogeneous, high-performance multi-disciplinary multiscale computing
- Advance the state of the art in high-performance computing on e-infrastructures: enable distributed execution of multiscale models across e-infrastructures