Presentation transcript:

Slide 1: Parallel Unstructured Mesh Infrastructure (PUMI)

Capabilities:
- PCU: communication, threading, and file I/O built on MPI
- APF: abstract definition of meshes, fields, and their algorithms
- GMI: interface to geometric modeling kernels
- MDS: compact but flexible array-based mesh data structure
- SPR: Superconvergent Patch Recovery error estimator
- Interfaces to collaborators: PHASTA, Zoltan, STK, MPAS, etc.

Download:
Further information
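To make the component roles above concrete, here is a minimal sketch of how they typically compose in a SCOREC/core-style program: PCU sets up communication on top of MPI, GMI supplies the geometric model, MDS stores the mesh, and APF is the abstract interface the application codes against. This sketch assumes the older global PCU calls and the public SCOREC/core header and function names; exact headers and signatures may differ between versions.

```cpp
// Minimal PUMI usage sketch (assumed SCOREC/core-style API; verify against your version).
#include <mpi.h>
#include <PCU.h>       // PCU: communication layer built on MPI
#include <gmi_mesh.h>  // GMI: native geometric model support
#include <apfMDS.h>    // MDS: array-based mesh database, loaded behind APF
#include <apfMesh2.h>  // APF: abstract mesh interface
#include <apf.h>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  PCU_Comm_Init();       // initialize PCU on top of MPI
  gmi_register_mesh();   // register GMI's native model reader

  // Load a geometric model and an MDS mesh; both are accessed through APF.
  // argv[1] = model file, argv[2] = mesh file (paths are illustrative).
  apf::Mesh2* m = apf::loadMdsMesh(argv[1], argv[2]);

  // Algorithms are written against the abstract apf::Mesh interface,
  // e.g. iterating over all vertices and reading their coordinates.
  apf::MeshIterator* it = m->begin(0);
  while (apf::MeshEntity* v = m->iterate(it)) {
    apf::Vector3 x;
    m->getPoint(v, 0, x);
  }
  m->end(it);

  m->destroyNative();    // release the underlying MDS storage
  apf::destroyMesh(m);   // destroy the APF wrapper
  PCU_Comm_Free();
  MPI_Finalize();
  return 0;
}
```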

Slide 2: Massively Parallel Unstructured Mesh Solver (PHASTA)

Implicit, adaptive-grid CFD.

Extreme-scale applications:
- Aerodynamic flow control
- Multiphase flow

Mira strong scaling: a 92-billion-tetrahedron mesh partitioned to 3,145,728 parts.

SCOREC tools used for:
- General mesh adaptation (see the MeshAdapt slides)
- Partitioning and load balancing (see the Zoltan and ParMA slides)
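The slide summarizes a solve/estimate/adapt/rebalance cycle. The sketch below outlines that loop; every function in it is a hypothetical placeholder standing in for the tools named on the slide (PHASTA for the implicit flow solve, an SPR-style estimate for the size field, MeshAdapt for adaptation, Zoltan/ParMA for load balancing) and is not the actual API of any of them.

```cpp
// Sketch of the adaptive CFD cycle described on this slide.
// All functions below are hypothetical stand-ins, not PHASTA/PUMI APIs.
#include <apf.h>
#include <apfMesh2.h>

namespace sketch {

void runFlowSolver(apf::Mesh2*) { /* stand-in for the PHASTA implicit solve */ }

apf::Field* estimateSizeField(apf::Mesh2*) {
  return nullptr; /* stand-in for an SPR-style error estimate turned into a size field */
}

void adaptMesh(apf::Mesh2*, apf::Field*) { /* stand-in for MeshAdapt */ }

void rebalance(apf::Mesh2*) { /* stand-in for Zoltan/ParMA dynamic balancing */ }

// The overall loop: solve, estimate error, adapt the mesh, restore load balance.
void adaptiveSolve(apf::Mesh2* m, int cycles) {
  for (int i = 0; i < cycles; ++i) {
    runFlowSolver(m);
    apf::Field* sizeField = estimateSizeField(m);
    adaptMesh(m, sizeField);
    rebalance(m);
  }
}

} // namespace sketch
```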

Slide 3: PUMI Architecture

Dependency inversion lets algorithms be written against abstract interfaces, with no lock-in to a particular implementation, and allows multiple implementations to run at once.

Component sizes (lines of code):
- PCU: 2K LOC
- APF: 7K LOC
- MDS: 3K LOC
- GMI: 600 LOC
- APF_SIM: 800 LOC
- GMI_SIM: 200 LOC
- APF_ZOLTAN: 600 LOC
- APF_STK: 1K LOC
- APF_MPAS: 1K LOC
- APF_PHASTA: 2K LOC
- Simmetrix (external)
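As an illustration of the dependency inversion above, an analysis routine written only against the abstract apf::Mesh interface does not know whether the mesh lives in MDS or is wrapped from the external Simmetrix database through APF_SIM; the caller picks the implementation. The sketch assumes SCOREC/core-style header and method names (apfMesh.h, count, getDimension) and may not match every version exactly.

```cpp
// A routine written only against the abstract APF mesh interface (apf::Mesh).
// The same code runs whether the apf::Mesh* came from the MDS backend
// (e.g. apf::loadMdsMesh) or from the Simmetrix wrapper (APF_SIM).
#include <mpi.h>
#include <apfMesh.h>

// Count the elements owned across all parts, using only the abstract interface.
// Elements (top-dimension entities) belong to exactly one part, so a plain sum
// over parts gives the global total.
long countGlobalElements(apf::Mesh* m) {
  long local = static_cast<long>(m->count(m->getDimension())); // elements on this part
  long global = 0;
  MPI_Allreduce(&local, &global, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);
  return global;
}
```

PCU also provides thin collective wrappers over MPI that could replace the MPI_Allreduce call; plain MPI is used here only to keep the sketch dependency-light.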