Support for Adaptive Computations Applied to Simulation of Fluids in Biological Systems Kathy Yelick U.C. Berkeley.

Project Summary
- Provide an easy-to-use, high-performance tool for simulation of fluid flow in biological systems
  - Using the Immersed Boundary Method
- Enable simulations on large-scale parallel machines
  - Distributed-memory machines, including SMP clusters
- Using Titanium, ADR, and KeLP with AMR
- Specific demonstration problem: simulation of the heart model on Blue Horizon

Outline
- Short-term goals and plans
- Technical status of project
  - Immersed Boundary Method
  - Software tools
  - Solvers
- Next steps

Short Term Goals for October 2001
- IB Method written in Titanium (IBT)
- IBT simulation on distributed memory
- Heart model input and visualization support in IBT
- Titanium running on Blue Horizon
- IBT users on BH and other SPs
- ? Performance tuning of code to exceed T90 performance
- ? Replace solver with (adaptive) multigrid

IB Method Users
- Peskin and McQueen at NYU
  - Heart model, including valve design
- At Washington
  - Insect flight
- Fauci et al. at Tulane
  - Small animal swimming
- Peter Kramer at RPI
  - Brownian motion in the IB method
- John Stockie at Simon Fraser
  - Paper making
- Others
  - Parachutes, flags, flagellates, robot insects

Building a User Community
- Many users of the IB Method
- Lots of concern over the lack of a distributed-memory implementation
- Once IBT is more robust and efficient (May '01), advertise to users
- Identify 1 or 2 early adopters
- Longer term: workshop or short course

Long Term Software Release Model
- Titanium
  - Working with UPC and possibly others on a common runtime layer
  - Compiler is relatively stable but needs ongoing support
- IB Method
  - Release Titanium source code
  - Parameterized "black box" for the IB Method, with possible cross-language support
- Visualization software is tied to SGI

Immersed Boundary Method
- Developed at NYU by Peskin & McQueen to model biological systems in which elastic fibers are immersed in an incompressible fluid
  - Fibers (e.g., heart muscles) are modeled by a list of fiber points
  - The fluid space is modeled by a regular lattice

Immersed Boundary Method Structure
Four steps in each timestep, coupling the fiber points to the fluid lattice:
1. Fiber activation & force calculation
2. Spread force (fiber points → fluid lattice)
3. Navier-Stokes solver
4. Interpolate velocity (fluid lattice → fiber points)
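The four steps above can be sketched as a driver loop. This is a minimal 1-D toy in NumPy, not the Titanium code: the spring forces, nearest-cell spread/interpolate, and viscous update are illustrative stand-ins for the real kernels.

```python
import numpy as np

def compute_fiber_forces(pts, k=1.0):
    # Step 1 stand-in: Hookean springs between consecutive fiber
    # points on a closed loop (real code evaluates fiber activation)
    return k * (np.roll(pts, 1) + np.roll(pts, -1) - 2 * pts)

def spread_force(pts, forces, n):
    # Step 2 stand-in: nearest-cell scatter onto an n-cell periodic
    # lattice (the real method uses a smoothed delta function)
    grid = np.zeros(n)
    np.add.at(grid, np.mod(np.rint(pts * n).astype(int), n), forces)
    return grid

def navier_stokes_step(u, f, dt, nu=0.1):
    # Step 3 stand-in: toy viscous update (diffusion + forcing),
    # periodic boundaries
    return u + dt * (nu * (np.roll(u, 1) + np.roll(u, -1) - 2 * u) + f)

def interpolate_velocity(u, pts):
    # Step 4 stand-in: nearest-cell gather back to the fiber points
    n = len(u)
    return u[np.mod(np.rint(pts * n).astype(int), n)]

def ib_timestep(pts, u, dt=0.01):
    f_fib = compute_fiber_forces(pts)              # 1. fiber forces
    f_grid = spread_force(pts, f_fib, len(u))      # 2. spread force
    u = navier_stokes_step(u, f_grid, dt)          # 3. Navier-Stokes solve
    pts = pts + dt * interpolate_velocity(u, pts)  # 4. move fibers
    return np.mod(pts, 1.0), u                     # periodic unit domain
```

Note that the spring forces sum to zero, so the total force deposited on the lattice is conserved by the scatter, a property the production spread operator must also preserve.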

Challenges to Parallelization
- Irregular fiber lists need to interact with the regular fluid lattice
  - Trade-off between load balancing of fibers and minimizing communication
  - Requires an efficient "scatter-gather" across processors
- Need a scalable elliptic solver
  - Plan to use multigrid
  - Eventually add Adaptive Mesh Refinement; new algorithms under development by Colella's group
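The "scatter-gather" at the heart of the fiber/fluid interaction uses a smoothed discrete delta function; each fiber point touches a small stencil of lattice cells, which on a distributed lattice may straddle processor boundaries. A 1-D serial sketch using Peskin's 4-point kernel (the parallel Titanium version must do the same across processors):

```python
import numpy as np

def delta4(r):
    # Peskin's 4-point discrete delta function (support |r| < 2 cells)
    r = np.abs(r)
    w = np.zeros_like(r)
    m1 = r < 1
    w[m1] = (3 - 2 * r[m1] + np.sqrt(1 + 4 * r[m1] - 4 * r[m1] ** 2)) / 8
    m2 = (r >= 1) & (r < 2)
    w[m2] = (5 - 2 * r[m2] - np.sqrt(-7 + 12 * r[m2] - 4 * r[m2] ** 2)) / 8
    return w

def spread(pts, forces, n):
    """Scatter fiber forces onto an n-cell periodic lattice on [0,1)."""
    grid = np.zeros(n)
    for x, f in zip(pts, forces):
        base = int(np.floor(x * n))
        cells = np.arange(base - 1, base + 3)          # 4-cell stencil
        grid[np.mod(cells, n)] += f * delta4(x * n - cells)
    return grid

def gather(pts, u):
    """Interpolate lattice values back to fiber points (adjoint of spread)."""
    n = len(u)
    vals = np.empty(len(pts))
    for i, x in enumerate(pts):
        base = int(np.floor(x * n))
        cells = np.arange(base - 1, base + 3)
        vals[i] = np.sum(u[np.mod(cells, n)] * delta4(x * n - cells))
    return vals
```

The kernel's weights sum to one, so spreading conserves total force and interpolation reproduces constant fields exactly; the parallelization trade-off is that each fiber point's 4-wide (4x4x4 in 3-D) stencil generates communication whenever it overlaps a remote piece of the lattice.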

Tools Used for Implementation
- Titanium supports
  - Classes, linked data structures, overloading
  - Distributed data structures (global address space)
  - Useful for planned adaptive hierarchical structures
- ADR provides
  - Help with analysis and organization of output, especially for hierarchical data
- KeLP provides
  - An alternative programming model for solvers
- ADR and KeLP are not critical for the first year

Titanium Status
- Titanium runs on uniprocessors, SMPs, and distributed memory with a single programming model
- It has run on Blue Horizon
  - Issues related to communication balance
  - Revamped backends are better organized, but the BH backend is not working right now
  - Need to replace personnel

Solver Status
- Current solver is based on a 3D FFT
- Multigrid might be more scalable; multigrid with adaptive meshes might be more so
- The Balls and Colella algorithm could also be used
- KeLP implementations of solvers included
- Note: McQueen is looking into solver issues for numerical reasons unrelated to scaling
- Not critical for first-year goals
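The FFT-based approach works because on a periodic lattice the constant-coefficient elliptic solve diagonalizes in Fourier space. A minimal NumPy sketch of that idea for a 3-D periodic Poisson problem (the production solver is written in Titanium, not NumPy, and handles the full fluid equations):

```python
import numpy as np

def poisson_fft(f):
    """Solve the 7-point discrete Laplacian lap(u) = f on a periodic
    unit cube via 3-D FFT.  Assumes f has (numerically) zero mean;
    the zero mode of u is fixed to 0 to pin down the null space."""
    n = f.shape[0]
    h = 1.0 / n
    kh = np.fft.fftfreq(n, d=h) * 2 * np.pi * h   # dimensionless k*h
    # Eigenvalues of the 1-D second difference: (2cos(kh) - 2)/h^2
    lam = (2 * np.cos(kh) - 2) / h ** 2
    denom = lam[:, None, None] + lam[None, :, None] + lam[None, None, :]
    fhat = np.fft.fftn(f)
    fhat[0, 0, 0] = 0.0   # project out the mean (solvability condition)
    denom[0, 0, 0] = 1.0  # avoid divide-by-zero at the zero mode
    return np.real(np.fft.ifftn(fhat / denom))
```

The cost is O(N log N) in the lattice size, but the all-to-all transpose inside a distributed 3-D FFT is exactly the communication pattern that motivates the scalability concern; multigrid trades it for nearest-neighbor exchanges.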

IB Titanium Status
- IB (Generic) rewritten in Titanium; running since October
- Contractile torus
  - Runs on the Berkeley NOW and SGI Origin
- Needed for the heart:
  - Input file format
  - Performance tuning
    - Uniprocessor (C code used temporarily in 2 kernels)
    - Communication

Immersed Boundary on Titanium Performance Breakdown (torus simulation):

Immersed Boundary on Titanium

Next Steps
- Improve performance of IBT
- Generate heart input for IBT
- Recover Titanium on BH
- Identify early user(s) of IBT
- Improve NS solver
- Add functionality (bending angles, anchorage points, sources & sinks) to the software package

Adaptive Computations for Fluids in Biological Systems
Yelick (UCB), Peskin (NYU), Colella (LBNL), Baden (UCSD), Saltz (Maryland)
- Immersed Boundary Method applications: Human Heart (NYU), Embryo Growth (UCB), Blood Clotting (Utah), Robot Insect Flight (NYU), Pulp Fibers (Waterloo)
- Application models: Heart (Titanium), Insect Wings, Flagellate Swimming, …
- Extensible simulation: Generic Immersed Boundary Method (Titanium)
- Solvers: Spectral (Titanium), Multigrid (KeLP), AMR

General Questions
- How has your project addressed the goals of the PACI program (providing access to traditional HPC, providing early access to experimental systems, fostering interdisciplinary research, contributing to intellectual development, broadening the base)?
- What infrastructure products (e.g., software, algorithms, etc.) have you produced?
- Where have you deployed them (on NPACI systems, other systems)?
- What have you done to communicate the availability of this infrastructure?
- What training have you done?
- What kind/size of community is using your infrastructure?
- How have you integrated your work with EOT activities?
- What scientific accomplishments, or other measurable impacts not covered by answers to previous questions, have resulted from its use?
- What are the emerging trends/technologies that NPACI should build on/leverage?
- How can we increase the impact of NPACI development to date?
- How can we increase the community that uses the infrastructure you've developed?