Simultech 2011, 29-31 July 2011, Noordwijkerhout, The Netherlands

Component Approach to Distributed Multiscale Simulations

Katarzyna Rycerz (1,2), Marian Bubak (1,3)
(1) AGH University of Science and Technology, Institute of Computer Science, Mickiewicza 30, Kraków, Poland
(2) ACC Cyfronet AGH, ul. Nawojki 11, Kraków, Poland
(3) University of Amsterdam, Institute for Informatics, Amsterdam, The Netherlands

Outline
- Requirements of multiscale simulations
- Motivation for a component model for such simulations
- HLA-based component model: idea, design challenges and solutions
- Experiment with the Multiscale Multiphysics Scientific Environment (MUSE)
- Execution in GridSpace VL (demo)
- Summary

Multiscale Simulations
- Consist of modules of different scale
- Examples:
  - virtual physiological human initiative
  - reacting gas flows
  - capillary growth
  - colloidal dynamics
  - stellar systems
  - and many more...
(Figure example: the recurrence of stenosis, a narrowing of a blood vessel, leading to restricted blood flow.)

Multiscale Simulations - Requirements
- Actual connection of two or more models together
  - obeying the laws of physics (e.g. conservation laws)
  - advanced time management: ability to connect modules with different time scales and internal time management
  - support for connecting models of different space scales
- Composability and reusability of existing models of different scale
  - finding the existing models needed and connecting them either together or to new models
  - ease of plugging models into and unplugging them from a running system
  - standardized model connections + many users sharing their models = more chances for general solutions

Motivation
- To wrap simulations into recombinant components that can be selected and assembled in various combinations to satisfy the requirements of multiscale simulations
  - mechanisms specific to distributed multiscale simulation
  - adaptation of one of the existing solutions for distributed simulations - our choice: High Level Architecture (HLA)
  - support for long-running simulations: setup and steering of components should also be possible during runtime
  - possibility to wrap legacy simulation kernels into components
- Need for an infrastructure that facilitates cross-domain exchange of components among scientists
  - need for support for the component model
  - using Grid solutions (e-infrastructures) for crossing administrative domains

Related Work
- Model Coupling Toolkit
  - message-passing (MPI) style of communication between simulation models
  - domain data decomposition of the simulated problem
  - support for advanced data transformations between different models
  - J. Larson, R. Jacob, E. Ong, "The Model Coupling Toolkit: A New Fortran90 Toolkit for Building Multiphysics Parallel Coupled Models," Int. J. High Perf. Comp. App., 19(3), 2005.
- Multiscale Multiphysics Scientific Environment (MUSE), now AMUSE, the Astrophysical Multi-Scale Environment
  - a scripting approach (Python) is used to couple models together
  - models include: stellar evolution, hydrodynamics, stellar dynamics and radiative transfer
  - S. Portegies Zwart, S. McMillan, et al., "A Multiphysics and Multiscale Software Environment for Modeling Astrophysical Systems," New Astronomy, 14(4), 2009.
- The Multiscale Coupling Library and Environment (MUSCLE)
  - a software framework to build simulations according to the complex automata theory
  - concept of kernels that communicate by unidirectional pipelines dedicated to passing a specific kind of data from/to a kernel (asynchronous communication)
  - J. Hegewald, M. Krafczyk, et al., "An agent-based coupling platform for complex automata," ICCS, LNCS vol. 5102, Springer, 2008.

Why High Level Architecture (HLA)?
- Introduces the concept of simulation systems (federations) built from distributed elements (federates)
- Supports joining models of different time scale: ability to connect simulations with different internal time management in one system
- Supports data management (publish/subscribe mechanism)
- Separates the actual simulation from the communication between federates
- Partial support for interoperability and reusability (Simulation Object Model (SOM), Federation Object Model (FOM), Base Object Model (BOM))
- Well-known IEEE standard, including the Object Model Template (OMT)
- Reference implementation: the HLA Runtime Infrastructure (HLA RTI)
- Open source implementations available, e.g. CERTI, ohla

HLA Component Model
- The model differs from common component models (e.g. CCA): no direct connections, no remote procedure calls (RPC)
- Components run concurrently and communicate using HLA mechanisms
- Components use HLA facilities (e.g. time and data management)
- Differs from the original HLA mechanism:
  - interactions can be dynamically changed at runtime by a user
  - a change of state is triggered from outside of any federate
(Diagram: CCA model vs. HLA model.)

HLA Components Design Challenges
- Transfer of control between many layers:
  - requests from the Grid layer outside the component
  - simulation code layer
  - HLA RTI layer
- The component should be able to efficiently process concurrently:
  - the actual simulation, which communicates with other simulation components via the RTI layer
  - external requests changing the state of the simulation in the HLA RTI layer: start/stop, join/resign, set time policy, publish/subscribe (see the sketch below)
(Diagram: two HLA Components, each composed of Simulation Code, the CompoHLA library and HLA RTI, hosted on the Grid platform (H2O); external requests arrive through the Grid platform.)
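The external requests listed above suggest a narrow management interface that the Grid (H2O) layer could expose for every HLA component. The Java sketch below is only an illustration under that assumption; all names are hypothetical and do not come from the CompoHLA library.

```java
// Hypothetical management interface of an HLA component as seen from the
// Grid (H2O) layer; method names are illustrative, not the CompoHLA API.
public interface HlaComponentManagement {
    void start();                        // begin executing the simulation loop
    void stop();                         // terminate the simulation loop
    void joinFederation(String federationName, String federateName);
    void resignFederation();
    void setTimeRegulating(boolean on);  // this federate regulates others' time advance
    void setTimeConstrained(boolean on); // this federate is constrained by regulating ones
    void publish(String objectClassName);
    void subscribe(String objectClassName);
}
```

Every such call arrives from outside the federate while its simulation thread may already be talking to the RTI, which is exactly the concurrency problem addressed on the next slides.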

HLA RTI Concurrent Access Control
- Uses the concurrent access exception handling available in HLA
- Transparent to the developer
- Synchronous mode: requests are processed as they come; the simulation runs in a separate thread (see the sketch below)
- Dependent on the implementation of concurrency control in the HLA RTI used
- Concurrency is difficult to handle effectively, e.g. starvation of requests causes overhead in the simulation execution
(Diagram: same structure as before, with concurrent access control handled inside the HLA RTI layer.)
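A minimal sketch of the synchronous mode, assuming a hypothetical executor class and a placeholder exception that stands in for the RTI's concurrent-access error; this is not CompoHLA or CERTI code. The retry loop makes the problem noted above visible: a request may keep bouncing off a busy RTI while the simulation thread holds it.

```java
import java.util.concurrent.Callable;

// Synchronous mode: an external request runs on the caller's thread and is
// simply retried whenever the RTI rejects it because the simulation thread
// currently holds the ambassador. Names are illustrative only.
public final class SynchronousRequestExecutor {

    /** Placeholder for the RTI's "concurrent access attempted" exception. */
    public static class ConcurrentAccessException extends Exception {}

    private static final long BACKOFF_MS = 10;

    /** Retries the RTI call until the ambassador is free; may starve under load. */
    public <T> T execute(Callable<T> rtiCall) throws Exception {
        while (true) {
            try {
                return rtiCall.call();
            } catch (ConcurrentAccessException busy) {
                // The RTI is busy serving the simulation thread: back off and retry.
                // This polling is where the starvation and overhead can appear.
                Thread.sleep(BACKOFF_MS);
            }
        }
    }
}
```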

Advanced Solution - Use the Active Object Pattern
- Requires calling a single routine in the simulation loop
- Asynchronous mode: separates invocation from execution
- Requests are processed when the scheduler is called from the simulation loop (see the sketch below)
- Independent of the behavior of the HLA implementation
- Concurrency is easy to handle
- JNI is used for communication between the Simulation Code, the Scheduler and the CompoHLA library
(Diagram: same structure as before, with a Scheduler and a request Queue added inside the component.)
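The following is a minimal Java sketch of the Active Object scheme described above, with hypothetical class names (the real CompoHLA scheduler is reached from the simulation kernel through JNI). Invocation is separated from execution: external calls only enqueue work, and the simulation loop drains the queue at one well-defined point, so RTI access is never concurrent.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

// Sketch of the scheduler used in the Active Object variant: external requests
// are queued, and the simulation thread executes them when it calls
// processPending() from its loop. Class and method names are illustrative.
public final class Scheduler {

    private final ConcurrentLinkedQueue<FutureTask<?>> queue = new ConcurrentLinkedQueue<>();

    /** Called by the Grid layer: enqueue the request and return immediately. */
    public <T> Future<T> submit(Callable<T> request) {
        FutureTask<T> task = new FutureTask<>(request);
        queue.add(task);
        return task;
    }

    /** Called once per iteration of the simulation loop, between RTI calls. */
    public void processPending() {
        FutureTask<?> task;
        while ((task = queue.poll()) != null) {
            task.run();  // executed on the simulation thread, so RTI access stays serialized
        }
    }
}
```

The wrapped kernel then needs only the single extra call mentioned on the slide, e.g. `scheduler.processPending()` once per iteration of its main loop.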

Interactions between Components
- Modules taken from the Multiscale Multiphysics Scientific Environment (MUSE): multiscale simulation of dense stellar systems
- Two modules of different time scale:
  - stellar evolution (macro scale)
  - stellar dynamics, an N-body simulation (meso scale)
- Data management:
  - masses of changed stars are sent from evolution (macro scale) to dynamics (meso scale)
  - no data is needed from dynamics to evolution
  - the data flow affects the whole dynamics simulation
- Dynamics takes more steps than evolution to reach the same point of simulation time
- Time management: the regulating federate (evolution) regulates the progress in time of the constrained federate (dynamics)
  - the maximal point in time which the constrained federate can reach (LBTS) at a given moment is calculated dynamically according to the position of the regulating federate on the time axis (see the sketch below)
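To make the LBTS rule concrete, here is a toy Java illustration of conservative time management under the setup above (evolution regulating, dynamics constrained). It is a simplified stand-in for what the HLA RTI computes internally, not an RTI API, and the lookahead value is made up.

```java
// Toy model of the conservative time-management rule: the constrained
// federate (dynamics) may only advance up to the LBTS derived from the
// regulating federate (evolution) and its lookahead.
public final class ConservativeTimeAxis {

    private double regulatingTime = 0.0;   // current time of the evolution federate
    private final double lookahead;        // promised minimum delay of its messages

    public ConservativeTimeAxis(double lookahead) { this.lookahead = lookahead; }

    /** The regulating federate (evolution) reports that it advanced its clock. */
    public void regulatingAdvancedTo(double t) { regulatingTime = t; }

    /** Lower Bound on Time Stamp of messages the constrained side may still receive. */
    public double lbts() { return regulatingTime + lookahead; }

    /** Dynamics asks to advance; it is granted only up to the LBTS. */
    public double grantAdvance(double requestedTime) {
        return Math.min(requestedTime, lbts());
    }

    public static void main(String[] args) {
        ConservativeTimeAxis axis = new ConservativeTimeAxis(0.5);
        axis.regulatingAdvancedTo(10.0);             // evolution is at t = 10
        // dynamics wants to step to t = 12 but is held back at LBTS = 10.5
        System.out.println(axis.grantAdvance(12.0)); // prints 10.5
    }
}
```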

Experiment Results
- Concurrent execution (conservative approach) of dynamics and evolution as HLA components: total time 18.3 sec
  - pure calculations of the more computationally intensive (dynamics) component: 17.6 sec
- Component architecture overhead:
  - request processing (through the Grid and component layers): 4-14 msec depending on request type
  - request realisation (scheduler): 0.6 sec
- HLA-based distribution overhead:
  - synchronization with the evolution component: 7 msec
- Setup: H2O v2.1 as the Grid platform and the open source HLA CERTI RTI
- Experiment run on DAS-3 grid nodes in:
  - Delft (MUSE sequential version and dynamics component)
  - Amsterdam UvA (evolution component)
  - Leiden (component client)
  - Amsterdam VU (RTIexec control process)
- Detailed results in the paper

HLA Components in GridSpace VL - Demo

Demo experiment - allocation of resources
- Ruby script (snippet 1): runs a PBS job that allocates nodes and starts the H2O kernels
(Diagram: the user runs snippet 1 in GridSpace; GridSpace submits the job to PBS, which starts an H2O kernel on node A and on node B.)

Demo experiment - simulation setup
- JRuby script (snippet 2):
  - asks selected components to join the simulation system
  - asks selected components to publish or subscribe to data objects (stars)
  - asks components to set their time policy
  - determines where output/error streams should go
  - (a sketch of this setup step follows below)
(Diagram: the user creates the Dynamics and Evolution HLAComponents on the H2O kernels of nodes A and B; through HLA communication the components join the federation, publish or subscribe to star objects, and become regulating or constrained; GridSpace sets up output streaming.)
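A sketch of the setup step in Java, expressed against the hypothetical HlaComponentManagement interface introduced on the design-challenges slide; the real snippet 2 is JRuby code that reaches the components through H2O, and the federation, federate and object class names used here are made up.

```java
// Setup step of the demo, sketched against the hypothetical interface above.
public final class SetupSketch {
    public static void setUp(HlaComponentManagement dynamics,
                             HlaComponentManagement evolution) {
        // both components join the same simulation system (federation)
        dynamics.joinFederation("muse", "dynamics");
        evolution.joinFederation("muse", "evolution");

        // star data flows from evolution (macro scale) to dynamics (meso scale)
        evolution.publish("Star");
        dynamics.subscribe("Star");

        // evolution regulates the time axis, dynamics is constrained by it
        evolution.setTimeRegulating(true);
        dynamics.setTimeConstrained(true);
    }
}
```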

Demo experiment - execution
- JRuby scripts (snippets 3 and 4):
  - ask the components to start
  - alter the time policy at runtime (unset regulation, unset constrained)
  - stop the components
(Diagram: the Dynamics and Evolution HLAComponents exchange Star data objects over HLA communication; their output/error streams are shown in the Dynamics and Evolution views in GridSpace.)

Demo experiment - cleaning up
- Ruby script (snippet 5): deletes the PBS job, which stops the H2O kernels on nodes A and B and releases the nodes
(Diagram: the user runs snippet 5 in GridSpace; GridSpace asks PBS to delete the job, stopping the H2O kernels.)

Recorded demo: HLA Components in GridSpace VL

Summary
- The presented HLA component model enables the user to dynamically compose/decompose distributed simulations from multiscale elements residing on the Grid
- The architecture of the HLA component supports steering of interactions with other components during simulation runtime
- The presented approach differs from that of original HLA, where all decisions about actual interactions are made by the federates themselves
- The functionality of the prototype is shown on the example of a multiscale simulation of a dense stellar system (MUSE environment)
- Experiment results show that the Grid and component layers do not introduce much overhead
- HLA components can be run and managed within the GridSpace Virtual Laboratory

For more information see:

Experiment Results
- Comparison of:
  - concurrent execution (conservative approach) of dynamics and evolution as HLA components
  - sequential execution (MUSE)
- Timing of:
  - request processing (through the Grid and component layers)
  - request realisation (scheduler)
- Setup: H2O v2.1 as the Grid platform and the open source HLA CERTI RTI
- Experiment run on DAS-3 grid nodes in:
  - Delft (MUSE sequential version and dynamics component)
  - Amsterdam UvA (evolution component)
  - Leiden (component client)
  - Amsterdam VU (RTIexec control process)