Department of Computer Science, TU München


A Modular and Efficient Simulation Environment for Fluid-Structure Interactions
Miriam Mehl, Bernhard Gatzhammer
Department of Computer Science, TU München

Outline
- FSIce motivation
- application programming interface
- modules
- data transfer
- coupling strategies
- numerical results
(no fluid solver details)

Plug-and-Play for FSI
[diagram: Fluid Solver ↔ MpCCI (data mapping, coupling strategy) ↔ Structure Solver]

Plug-and-Play for FSI
[diagram: Fluid Solver ↔ MpCCI ↔ Structure Solver]

Plug-and-Play for FSI
[diagram: Fluid Solver ↔ FSIce (central surface mesh, coupling strategy) ↔ Structure Solver]

FSIce – Application Programming Interface FSI_Init () FSI_Data_exchange () FSI_Finalize () FSI_Is_running () FSI_Is_new_interface_values () FSI_Is_implicit_converged () Fluid Solver API of FSIce 3 central communication functions 3 helper functions for querying coupled simulation state

FSIce – Application Programming Interface while (more time steps) Set time step length Compute values of next time step Store values of next time step end while Fluid Solver How to prepare a solver for coupled simulations with FSIce Basic solver

FSIce – Application Programing Interface while (more time steps) Read coupling data from com.mesh Set time step length Compute values of next time step Write coupling data to com.mesh Store values of next time step end while Fluid Solver Necessary for all partitioned approaches is the mapping of data to some data structure communicated

FSIce – Application Programing Interface FSI_Init () while (more time steps) Read coupling data from com.mesh Set time step length Compute values of next time step Write coupling data to com.mesh FSI_Data_exchange () Store values of next time step end while FSI_Finalize () Fluid Solver FSIce communication functions FSI init to initialize communication and exchange communication mesh and other initial information FSI Data exchange to send and receive coupling data and the state of the coupled simulation FSI Finalize to finish the coupled simulation, remove communication data structures

FSIce – Application Programing Interface FSI_Init () while (FSI_Is_running()) if (FSI_Is_new_interface_values()) Read coupling data from com.mesh Set time step length Compute values of next time step Write coupling data to com. mesh FSI_Data_exchange () if (FSI_Is_implicit_converged()) Store values of next time step end while FSI_Finalize () Fluid Solver Functions to query state of the coupled simulation Not all are necessary FSI_Is_running to give the control about the length of the coupled simulation to the supervisor FSI_Is_new_interface_values to avoid unnecessary mapping from the coupling mesh to the solver mesh, e.g. in a subcycle FSI_Is_implicit_converged for implicit coupling scheme, when in interface iteration process

FSIce – Modules - Modular structure of FSIce source code - FSIce is composed out of two main modules: Libraries and Supervisor - module libraries contains functionalities used by simulation programs to be coupled and functionalities for the coupling supervisor in module Supervisor - module Supervisor implements the coupling supervisor acting as client in the couple simulation - Libraries consists of the packages: FSImesh, FSIcom and FSItools - FSImesh provides the implementation of the coupling mesh and the FSIce internal communication interface (MPI or sockets)‏ - FSIcom defines and implements the API for solvers to be coupled - FSItools contains optional additional functionalites for solvers such as octree based neighbor search or geometry creation for Cartesian grids - Supervisor contains the packages CouplingUnit, GUI, Batch and TestDummies - CouplingUnit encapsulate the coupling schemes and logic of a coupled simulation - GUI and Batch are specific implementations of a coupling supervisor, using CouplingUnit - TestDummies contains solver dummy executables to enable a test of the coupling supervisor and the ability of a solver to perform coupled simulations with FSIce

FSIce – Data Transfer How to map data from solver mesh to coupling mesh ? Octree for efficient neighbour search Some projection / interpolation included Custom interpolations possible Provided to user in package FSItools A question appearing in every partitioned FSI simulation is how to map data from the native solver mesh to the communication data structure FSIce provides the functionality of an octree for this purpose, which allows to search for neighbouring points onthe two meshes Also some interpolation mehtods are included Users with specific requirements for interpolation schemes can write their own functions and plug them into FSIce This functionality is bundled in the package FSItools library

FSIce – Coupling Strategies Explicit (weak) Implicit (strong) Subcycling Pre-computations Many others possible: just extend supervisor A typical situation in partitioned FSI simulations is, that the fluid solver requires shorter time step lengths than the structure solver To save computation effort on the structure side, subcycling is employed, which allows different time step lengths on both sides of the coupled simulation Short explanation

Numerical Results – Implicit vs. Explicit

Numerical Results

Numerical Results – Data Mapping
Neighbourhood search, measured on a Pentium M 1.6 GHz processor with 2048 kB cache:
  runtime    # triang.   Cart. res.
  17.4 sec   128,000     512
  14.3 sec    32,000
  10.1 sec     8,000
   2.6 sec                256

Conclusion
- enhancements of FSIce
- modular structure → extensibility, flexibility
- integration of various components
- first examples computed
- future: more solvers, more coupling strategies

Thank you for your attention!

FSIce – Application Programing Interface Future goal: Multigrid Multilevel coupling mesh Multigrid scheme controlled from coupling supervisor Allows V, W, ... schemes without modifying solvers Fluid Solver A future goal will be to support partitioned coupled multigrid solution procedures This can be optimally supported by a coupling mesh with several grid resolutions The multigrid algorithm will be controlled from the coupling supervisor again When a solver has the necessary multigrid functionality integrated (restriction, perlongation) arbitrary multigrid schemes can be realized through the coupling supervisor, without the necessatiy to further modify the solver codes