SciDAC Software Introduction
Osni Marques, Lawrence Berkeley National Laboratory
DOE Workshop for Industry Software Developers, March 31, 2011

Scientific Discovery through Advanced Computing (SciDAC)
- Create a comprehensive scientific computing software infrastructure to enable scientific discovery in the physical, biological, and environmental sciences at the petascale
- Develop a new generation of data management and knowledge discovery tools for large data sets (obtained from users and simulations)
- 35 projects: 9 centers, 4 institutes, and 19 efforts in 12 application areas (astrophysics, accelerator science, climate, biology, fusion, petabyte data, materials and chemistry, nuclear physics, high energy physics, QCD, turbulence, groundwater)

Centers for Enabling Technologies and Institutes
- Centers: interconnected multidisciplinary teams coordinated with scientific applications; development of a comprehensive, integrated, scalable, and robust high-performance software environment (e.g., algorithms, operating systems, tools for data management and visualization of petabyte-scale scientific data sets) to enable the effective use of terascale and petascale resources
- Institutes: university-led centers of excellence intended to increase the presence of the SciDAC program in the academic community and to complement the efforts of the Centers

Software Stack
[slide diagram: layered software stack, from APPLICATIONS down through GENERAL PURPOSE TOOLS and SUPPORT TOOLS AND UTILITIES to HARDWARE, with the label "SciDAC Centers and Institutes"]

Applied Mathematics
- Towards Optimal Petascale Simulation (TOPS) Center: development of scalable solver software (linear systems and eigenvalue problems)
- Applied Partial Differential Equations Center (APDEC): algorithms and software components for the solution of PDEs in complex multicomponent physical systems
- Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute: development and deployment of algorithms and software tools for tackling combinatorial problems in scientific computing
- Interoperable Technologies for Advanced Petascale Simulations (ITAPS) Center: interoperable and interchangeable mesh, geometry, and field manipulation services

Applied Mathematics: software
- PETSc: solution of PDEs that require the solution of large-scale, sparse linear and nonlinear systems of equations
- hypre: iterative solution of large sparse linear systems of equations using scalable preconditioners
- Trilinos: algorithms and enabling technologies within an object-oriented software framework for the solution of large-scale, complex multiphysics problems
- TAO: solution of large-scale nonlinear optimization problems
- SuperLU: direct solution of large, sparse, nonsymmetric systems of linear equations
- Zoltan: parallel partitioning, load balancing, and data management services
- Chombo and BoxLib: C++ class libraries (with Fortran interfaces and computational kernels) for parallel calculations over block-structured, adaptively refined grids
⁞
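As a concrete (and hedged) illustration of how such a solver library is driven, the sketch below assembles a 1-D Laplacian and solves it with PETSc's KSP interface, in the spirit of PETSc's introductory examples. The matrix, right-hand side, and options are illustrative choices, not anything prescribed by the slide, and error checking is omitted for brevity.

```c
/* Hedged sketch: assemble a 1-D Laplacian and solve with PETSc's KSP.
   Illustrative only; error checking (CHKERRQ) omitted for brevity.
   Run with, e.g.: mpiexec -n 2 ./ex -ksp_type cg -pc_type jacobi */
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat         A;
  Vec         x, b;
  KSP         ksp;
  PetscInt    i, n = 100, col[3];
  PetscScalar value[3];

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* Tridiagonal (-1, 2, -1) matrix: the standard 1-D Laplacian */
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
  MatSetFromOptions(A);
  MatSetUp(A);
  value[0] = -1.0; value[1] = 2.0; value[2] = -1.0;
  for (i = 1; i < n - 1; i++) {
    col[0] = i - 1; col[1] = i; col[2] = i + 1;
    MatSetValues(A, 1, &i, 3, col, value, INSERT_VALUES);
  }
  i = 0;     col[0] = 0;     col[1] = 1;      /* first row: (2, -1) */
  MatSetValues(A, 1, &i, 2, col, &value[1], INSERT_VALUES);
  i = n - 1; col[0] = n - 2; col[1] = n - 1;  /* last row: (-1, 2) */
  MatSetValues(A, 1, &i, 2, col, value, INSERT_VALUES);
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  /* Right-hand side b = 1; x receives the solution */
  VecCreate(PETSC_COMM_WORLD, &b);
  VecSetSizes(b, PETSC_DECIDE, n);
  VecSetFromOptions(b);
  VecDuplicate(b, &x);
  VecSet(b, 1.0);

  /* Krylov solver; method and preconditioner are runtime options */
  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPSetFromOptions(ksp);
  KSPSolve(ksp, b, x);

  KSPDestroy(&ksp);
  VecDestroy(&x); VecDestroy(&b); MatDestroy(&A);
  PetscFinalize();
  return 0;
}
```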

Example of usage: hypre
[slide figure: hypre's problem-domain-oriented interfaces]
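Only the figure caption survives above, so as a hedged illustration of the "problem domain oriented" idea, the sketch below drives hypre's linear-algebraic (IJ) interface and the BoomerAMG solver on a 1-D Laplacian. The problem, sizes, and solver settings are illustrative choices, not taken from the slide.

```c
/* Hedged sketch of hypre's IJ (linear-algebraic) interface with BoomerAMG.
   Illustrative only: single-rank row ownership and a default (32-bit
   integer) hypre build assumed; recent hypre releases may also require an
   explicit HYPRE initialization/finalization call. */
#include <mpi.h>
#include "HYPRE.h"
#include "HYPRE_IJ_mv.h"
#include "HYPRE_parcsr_ls.h"

int main(int argc, char *argv[])
{
  int n = 100, i;
  HYPRE_IJMatrix     A;
  HYPRE_IJVector     b, x;
  HYPRE_ParCSRMatrix parA;
  HYPRE_ParVector    parb, parx;
  HYPRE_Solver       amg;

  MPI_Init(&argc, &argv);

  /* Describe the matrix by global row range; here one rank owns rows 0..n-1 */
  HYPRE_IJMatrixCreate(MPI_COMM_WORLD, 0, n - 1, 0, n - 1, &A);
  HYPRE_IJMatrixSetObjectType(A, HYPRE_PARCSR);
  HYPRE_IJMatrixInitialize(A);
  for (i = 0; i < n; i++) {          /* tridiagonal (-1, 2, -1) stencil */
    int cols[3], ncols = 0;
    double vals[3];
    if (i > 0)     { cols[ncols] = i - 1; vals[ncols++] = -1.0; }
    cols[ncols] = i; vals[ncols++] = 2.0;
    if (i < n - 1) { cols[ncols] = i + 1; vals[ncols++] = -1.0; }
    HYPRE_IJMatrixSetValues(A, 1, &ncols, &i, cols, vals);
  }
  HYPRE_IJMatrixAssemble(A);
  HYPRE_IJMatrixGetObject(A, (void **) &parA);

  /* Right-hand side b = 1 and initial guess x = 0 */
  HYPRE_IJVectorCreate(MPI_COMM_WORLD, 0, n - 1, &b);
  HYPRE_IJVectorSetObjectType(b, HYPRE_PARCSR);
  HYPRE_IJVectorInitialize(b);
  HYPRE_IJVectorCreate(MPI_COMM_WORLD, 0, n - 1, &x);
  HYPRE_IJVectorSetObjectType(x, HYPRE_PARCSR);
  HYPRE_IJVectorInitialize(x);
  for (i = 0; i < n; i++) {
    double one = 1.0, zero = 0.0;
    HYPRE_IJVectorSetValues(b, 1, &i, &one);
    HYPRE_IJVectorSetValues(x, 1, &i, &zero);
  }
  HYPRE_IJVectorAssemble(b);
  HYPRE_IJVectorAssemble(x);
  HYPRE_IJVectorGetObject(b, (void **) &parb);
  HYPRE_IJVectorGetObject(x, (void **) &parx);

  /* Algebraic multigrid (a scalable preconditioner) used here as the solver */
  HYPRE_BoomerAMGCreate(&amg);
  HYPRE_BoomerAMGSetTol(amg, 1e-8);
  HYPRE_BoomerAMGSetMaxIter(amg, 50);
  HYPRE_BoomerAMGSetup(amg, parA, parb, parx);
  HYPRE_BoomerAMGSolve(amg, parA, parb, parx);

  HYPRE_BoomerAMGDestroy(amg);
  HYPRE_IJMatrixDestroy(A);
  HYPRE_IJVectorDestroy(b);
  HYPRE_IJVectorDestroy(x);
  MPI_Finalize();
  return 0;
}
```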

Example of usage: SuperLU
- SuperLU solves Ax=b through a factorization A=LU (with appropriate manipulations of A for reduced storage, performance, and stability)
- Because of its performance advantage and ease of use, it has a large user base worldwide, with nearly 10,000 downloads in FY10
- It has been used in a variety of commercial applications, including Walt Disney feature animation, airplane design (e.g., Boeing), the oil industry (e.g., Chevron), circuit simulation in the semiconductor industry, earthquake simulation and prediction, economic modeling, design of novel materials, and the study of alternative energy sources
- It has been adopted in many academic and laboratory high-performance computing libraries and simulation codes (e.g., PETSc, hypre, FEAP, M3D-C1, Omega3P, OpenSees, NIKE, NIMROD, PHOENIX, and Trilinos)
- It has been adopted in many computer vendors' mathematical libraries and commercial software (e.g., Cray's LibSci, HP's MathLib, IMSL, NAG, Optima Numerics, and SciPy)
(courtesy of Sherry Li)
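To make the Ax=b workflow concrete, here is a hedged sketch of SuperLU's simple driver dgssv applied to a small tridiagonal system in compressed sparse column format, along the lines of the library's introductory example; the matrix and right-hand side are illustrative choices, not from the slide.

```c
/* Hedged sketch of SuperLU's simple driver (dgssv) on a 5x5 tridiagonal
   system in compressed sparse column (CSC) format. Illustrative only. */
#include <stdio.h>
#include "slu_ddefs.h"

int main(void)
{
  SuperMatrix A, L, U, B;
  superlu_options_t options;
  SuperLUStat_t stat;
  int i, info, m = 5, n = 5, nnz = 13, nrhs = 1;
  int *perm_r, *perm_c;

  /* CSC arrays for the (-1, 2, -1) tridiagonal matrix */
  double aval[] = { 2,-1,  -1,2,-1,  -1,2,-1,  -1,2,-1,  -1,2 };
  int    arow[] = { 0,1,   0,1,2,    1,2,3,    2,3,4,     3,4 };
  int    acol[] = { 0, 2, 5, 8, 11, 13 };

  /* SuperLU's destroy routines free these, so use its allocators */
  double *a    = doubleMalloc(nnz);
  int    *asub = intMalloc(nnz);
  int    *xa   = intMalloc(n + 1);
  double *rhs  = doubleMalloc(m * nrhs);
  for (i = 0; i < nnz; i++) { a[i] = aval[i]; asub[i] = arow[i]; }
  for (i = 0; i <= n; i++)  xa[i] = acol[i];
  for (i = 0; i < m; i++)   rhs[i] = 1.0;   /* b = 1 */

  dCreate_CompCol_Matrix(&A, m, n, nnz, a, asub, xa, SLU_NC, SLU_D, SLU_GE);
  dCreate_Dense_Matrix(&B, m, nrhs, rhs, m, SLU_DN, SLU_D, SLU_GE);
  perm_r = intMalloc(m);
  perm_c = intMalloc(n);

  set_default_options(&options);   /* column ordering, pivoting, etc. */
  StatInit(&stat);

  /* Factor A = L*U and overwrite B with the solution */
  dgssv(&options, &A, perm_c, perm_r, &L, &U, &B, &stat, &info);
  if (info == 0)
    for (i = 0; i < m; i++)
      printf("x[%d] = %g\n", i,
             ((double *) ((DNformat *) B.Store)->nzval)[i]);

  StatFree(&stat);
  SUPERLU_FREE(rhs);
  SUPERLU_FREE(perm_r); SUPERLU_FREE(perm_c);
  Destroy_CompCol_Matrix(&A);
  Destroy_SuperMatrix_Store(&B);
  Destroy_SuperNode_Matrix(&L);
  Destroy_CompCol_Matrix(&U);
  return 0;
}
```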

Computer Science and Visualization (1/2)
- Center for Scalable Application Development Software (CScADS): development of systems software, libraries, compilers, and tools for leadership computing platforms
- Performance Engineering Research Institute (PERI): development of models to accurately predict code performance; tools for performance monitoring, modeling, and optimization; technology for automatic performance tuning
- Scientific Data Management (SDM) Center: development of advanced data management technologies for storage-efficient access, data mining and analysis, and scientific process automation (workflows)
- Petascale Data Storage Institute (PDSI): development of high-performance storage solutions (e.g., capacity, performance, concurrency, reliability) for large-scale computer simulations

Computer Science and Visualization (2/2)
- Institute for Ultrascale Visualization (IUSV): development and promotion of cutting-edge visualization technologies for the analysis and evaluation of the vast amounts of complex data produced by large scientific simulation applications
- Visualization and Analytics Center for Enabling Technology (VACET): leveraging scientific visualization and analytics software technology as an enabling technology for increasing scientific productivity and insight, focusing on the challenges posed by vast collections of complex data
- Center for Technology for Advanced Scientific Component Software (TASCS): development of a common component architecture (CCA) designed for high-performance scientific computing, with support for parallel and distributed computing, mixed-language programming, and data types

Computer Science and Visualization: software
- ZeptoOS: operating systems and run-time systems for petascale computers
- PLASMA: parallel linear algebra for multicore architectures
- DynInst: binary rewriter, interfaces for runtime code generation
- VisIt: scalable techniques for the visualization of large data sets; feature detection, analysis, and tracking
- ParaView: open-source, scalable, multiplatform visualization tool
- CUDPP and DCGN: libraries for general-purpose computing on GPUs
- FastBit: compressed bitmap indexing technology to accelerate analysis of very large datasets and to perform query-driven visualization
- VisTrails: workflow system for data exploration and visualization
- HPCToolkit: tools for node-based performance analysis
- Jumpshot: visualization tools for analysis of performance of MPI programs
- LoopTool: tool for source-to-source transformations to improve data reuse
⁞

Example of usage: IUSV technologies
[slide figure: close-up view of turbulent flame surfaces]
Following the natural coordinate system of a flame, level-set distance-function-based adaptive data reduction algorithms enable us to zoom in and see for the first time the interaction of small turbulent eddies with the preheat layer of a turbulent flame, a region that was previously obscured by the multiscale nature of turbulence. (Jackie Chen, SNL)

Example of usage: VACET technologies
[slide figures:]
- new visualization infrastructure for icosahedral grids
- infrastructure for the visualization and analysis of ensemble runs of new global cloud models
(courtesy of the VACET Team)

Example of usage: FastBit
- Cell identification: identify all cells that satisfy user-specified conditions
- Region growing: connect neighboring cells into regions
- Region tracking: track the evolution of the features through time
Searching for regions that satisfy particular criteria is a challenge, but FastBit efficiently finds regions of interest.
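FastBit's own query API is not shown on the slide, so the sketch below illustrates the cell-identification and region-growing steps generically: flag the cells that satisfy a threshold condition, then connect flagged neighbors into labeled regions with a flood fill. This is plain connected-components code over made-up data and a made-up threshold, not FastBit's API; FastBit accelerates the identification step with compressed bitmap indexes rather than the full scan used here.

```c
/* Generic illustration of "cell identification" + "region growing":
   flag cells meeting a condition, then label connected regions.
   Not FastBit code; grid size, data, and threshold are invented. */
#include <stdio.h>

#define NX 8
#define NY 8

static int flag[NY][NX];    /* 1 if the cell satisfies the condition */
static int label[NY][NX];   /* region id, 0 = unlabeled */

/* Region growing: depth-first flood fill over the 4-neighborhood */
static void grow(int y, int x, int id)
{
  if (x < 0 || x >= NX || y < 0 || y >= NY) return;
  if (!flag[y][x] || label[y][x]) return;
  label[y][x] = id;
  grow(y - 1, x, id); grow(y + 1, x, id);
  grow(y, x - 1, id); grow(y, x + 1, id);
}

int main(void)
{
  double data[NY][NX];
  int x, y, nregions = 0;

  /* Made-up scalar field with two disconnected "hot" blobs */
  for (y = 0; y < NY; y++)
    for (x = 0; x < NX; x++)
      data[y][x] = ((x < 3 && y < 3) || (x > 5 && y > 5)) ? 10.0 : 1.0;

  /* Cell identification: a user-specified condition, here data > 5 */
  for (y = 0; y < NY; y++)
    for (x = 0; x < NX; x++)
      flag[y][x] = data[y][x] > 5.0;

  /* Region growing: each unlabeled flagged cell seeds a new region */
  for (y = 0; y < NY; y++)
    for (x = 0; x < NX; x++)
      if (flag[y][x] && !label[y][x]) grow(y, x, ++nregions);

  printf("found %d regions\n", nregions);  /* expect 2 for this field */
  return 0;
}
```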

Code performance (questions for application developers)
- How does performance vary with different compilers?
- Is poor performance correlated with certain OS features?
- Has a recent change caused unanticipated performance degradation?
- How does performance vary with MPI variants?
- Why is one application version faster than another?
- What is the reason for the observed scaling behavior?
- Did two runs exhibit similar performance?
- How are performance data related to application events?
- Which machines will run my code the fastest, and why?
- Which benchmarks predict my code performance best?
⁞
(courtesy of S. Shende)

Example of usage: HPCToolkit and LoopTool
[slide figure: opportunities for performance improvement in S3D (direct numerical simulation of turbulent combustion)]
(courtesy of the CScADS Team)

Collaboratories
- Earth System Grid Center for Enabling Technologies (ESG-CET, pcmdi.llnl.gov): infrastructure to provide climate researchers with access to data, models, analysis tools, and computational resources for studying climate simulation datasets
- Center for Enabling Distributed Petascale Science (CEDPS): tools for moving large quantities of data reliably, efficiently, and transparently among institutions connected by high-speed networks

Delivering the Science
Highlighting Scientific Discovery and the Role of High-End Computing