ASC/Alliances Center for Astrophysical Thermonuclear Flashes

Presentation transcript:

FLASH Capabilities, Architecture, and Future Directions
Anshu Dubey, Elias Balaras, Sean Couch, Chris Daley, Shravan Gopal, Carlo Graziani, Don Lamb, Dongwook Lee, Marcos Vanella, Klaus Weide, Guohua Xia

Abstract: FLASH is a highly capable, fully modular, professionally managed code with a wide user base. FLASH consists of inter-operable modules that can be combined to generate different applications such as novae, supernovae, X-ray bursts, galaxy clusters, weakly compressible turbulence, and many other problems in astrophysics and other fields. With its flexibility and extensibility, FLASH provides an excellent foundation for an open software base for other research communities, such as academic HEDP and CFD/CFM. The HEDP project is underway in-house, with modeling of two-temperature (2T) plasmas and laser energy deposition, while a fluid-structure interaction CFM project is being carried out in collaboration with the University of Maryland. Additionally, there is an NSF-funded in-house project to add a fully implicit solver with AMR to enable simulations of phenomena that span a wide dynamic range of timescales. The FLASH team is working to enhance the code in several directions simultaneously. We are adding physics solvers to enable simulations in fields that are new to FLASH, such as High Energy Density Physics and Fluid-Structure Interactions. We are also working with the architecture, programming-models, and scientific-libraries communities to ensure the continued portability and scalability of the code.

FLASH Version 3
FLASH is a multi-physics Eulerian code and framework whose capabilities include AMR; solvers for hydrodynamics, MHD, gravity, nuclear burning, and several other source terms and material properties; Lagrangian tracer and active particles; and a mechanism to handle various types of EOS. It is also very portable and scales up to 160K processors. FLASH is composed of inter-operable units/modules; particular units are set up together to build applications for different physics problems (an illustrative setup example follows the Version 4 description below). FLASH is professionally managed, with regression testing, version control, coding standards, extensive documentation, user support, and an active users list. More than 600 scientists around the world have now used FLASH, and more than 340 papers have been published that report results obtained directly with it. The last release of Version 3 was on October 20, 2010.

FLASH Version 4
The first release of Version 4 is expected in April/May 2011. With this release FLASH will exist in two distinct incarnations. The first will continue to be distributed under the current licensing agreement and will contain all the traditional capabilities as well as the capabilities added to support High Energy Density Physics; these capabilities are being added with joint funding from the DOE NNSA and ASCR offices. The second incarnation will be distributed freely from a mirror site at the University of Maryland and will have capabilities for Computational Fluid Dynamics/Mechanics; this set of capabilities is being added under the NSF PetaApps and PIF programs.
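To make the modular architecture concrete, the sketch below shows how a FLASH application is typically assembled: the setup script selects and combines units, and each simulation directory carries a Config file declaring the units it requires. The specific simulation name, flags, unit paths, and parameter shown here are illustrative assumptions rather than excerpts from the FLASH distribution; the FLASH user guide documents the authoritative syntax.

    # Assemble a 2-D application from a simulation directory using the setup
    # script (hypothetical invocation following FLASH setup conventions).
    ./setup Sedov -auto -2d -objdir=object_sedov

    # Illustrative Config fragment for a simulation: the setup tool reads such
    # declarations to pull in only the units this application needs.
    REQUIRES Driver
    REQUIRES physics/Hydro
    REQUIRES physics/Eos
    PARAMETER sim_rhoAmbient REAL 1.0

Because only the requested units end up in the build, the same source tree can serve astrophysics, HEDP, and CFD/CFM applications without carrying unused solvers.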
General Purpose Solvers (all have AMR capabilities)
- High-order unsplit compressible hydro solver
- Incompressible Navier-Stokes solver
- Fully implicit solvers
- Second-order (explicit) super-time-stepping for stiff systems
- Scalable Poisson solver: a hybrid of multigrid and a parallel exact solver

Infrastructure
- Additional AMR packages such as Chombo and/or SAMRAI
- Generalization of the Lagrangian infrastructure
- Shared-memory infrastructure at the wrapper layer
- Preparing the code for multicore/heterogeneous architectures
- State-of-the-art parallel I/O

FLASH Community and External Contributions
- Plasma particle-in-cell solver: Mats Holmstrom (the most recent addition, immediately beneficial for our HEDP efforts)
- Primordial chemistry: William Gray (being imported)
- Enhancement to particle mapping: being added by Milos and Chalence
- Multigrid: Paul Ricker
- Navier-Stokes solver: Marcos Vanella / Elias Balaras
- Direct gravity solver: Tom Theuns
- Single-source ray trace: Erik-Jan Rijkhorst, adapted later by Natalie Hinkel
- Barnes-Hut tree solver: Kevin Olson
- Ionization: Salvatore Orlando

Domain Specific Capabilities
HEDP:
- Two-temperature (Te, Ti) hydro with radiation
- Multi-component EOS with ionization and radiation
- Ray tracing and laser energy deposition
CFD/CFM:
- Embedded boundaries
- Lagrangian trackers communicating to and from the fluid

This work was supported in part at U. Chicago by ASCR, Office of Science, DOE, and ASC, NNSA, DOE, and at U. Chicago and U. Maryland by OCI/NSF.