Performance Technology for Productive, High-End Parallel Computing
Allen D. Malony
Department of Computer and Information Science
Performance Research Laboratory / NeuroInformatics Center
University of Oregon

Research Motivation
- Tools for performance problem solving
- Empirical-based performance optimization process
- Performance technology concerns
[Diagram: performance problem-solving cycle of performance tuning, diagnosis, experimentation, and observation (hypotheses, properties, characterization), supported by performance technology for instrumentation, measurement, analysis, visualization, experiment management, and performance storage]

Challenges in Performance Problem Solving
- How to make the process more effective (productive)?
  - Process may depend on scale of parallel system
- What are the important events and performance metrics?
  - Tied to application structure and computational model
  - Tied to application domain and algorithms
- Process and tools can/must be more application-aware
  - Tools have poor support for application-specific aspects
- What are the significant issues that will affect the technology used to support the process?
- Enhance application development and benchmarking
- New paradigm in performance process and technology

Large-Scale Performance Problem Solving
- How does our view of this process change when we consider very large-scale parallel systems?
- What are the significant issues that will affect the technology used to support the process?
- Parallel performance observation is clearly needed
  - In general, there is the concern for intrusion
  - Seen as a tradeoff with performance diagnosis accuracy
- Scaling complicates observation and analysis
  - Performance data size becomes a concern
  - Analysis complexity increases
- Nature of application development may change

Role of Intelligence, Automation, and Knowledge
- Scale forces the process to become more intelligent
  - Even with intelligent and application-specific tools, deciding what to analyze is difficult and can be intractable
- More automation and knowledge-based decision making
  - Build automatic/autonomic capabilities into the tools
  - Support broader experimentation methods and refinement
  - Access and correlate data from several sources
  - Automate performance data analysis / mining / learning
  - Include predictive features and experiment refinement
  - Knowledge-driven adaptation and optimization guidance
- Address scale issues through increased expertise

Outline of Talk
- Performance problem solving
  - Scalability, productivity, and performance technology
  - Application-specific and autonomic performance tools
- TAU parallel performance system and advances
- Performance data management and data mining
  - Performance Data Management Framework (PerfDMF)
  - PerfExplorer
- Multi-experiment case studies
  - Clustering analysis
  - Comparative analysis (PERC tool study)
- Future work and concluding remarks

TAU Performance System
- Tuning and Analysis Utilities (13+ year project effort)
- Performance system framework for HPC systems
  - Integrated, scalable, flexible, and parallel
- Targets a general complex system computation model
  - Entities: nodes / contexts / threads
  - Multi-level: system / software / parallelism
  - Measurement and analysis abstraction
- Integrated toolkit for performance problem solving
  - Instrumentation, measurement, analysis, and visualization
  - Portable performance profiling and tracing facility
  - Performance data management and data mining
- University of Oregon, Research Center Jülich, LANL

TAU Parallel Performance System Goals
- Multi-level performance instrumentation
  - Multi-language automatic source instrumentation
- Flexible and configurable performance measurement
- Widely-ported parallel performance profiling system
  - Computer system architectures and operating systems
  - Different programming languages and compilers
- Support for multiple parallel programming paradigms
  - Multi-threading, message passing, mixed-mode, hybrid
- Support for performance mapping
- Support for object-oriented and generic programming
- Integration in complex software, systems, applications

TAU Performance System Architecture
[Architecture diagram, highlighting event selection]

TAU Performance System Architecture
[Architecture diagram]

Advances in TAU Instrumentation
- Source instrumentation
  - Program Database Toolkit (PDT)
  - Automated Fortran 90/95 support (Cleanscape Flint parser)
  - Statement-level support in C/C++ (Fortran soon)
  - TAU_COMPILER to automate the instrumentation process
  - Automatic proxy generation for component applications (automatic CCA component instrumentation)
  - Python instrumentation and automatic instrumentation
- Continued integration with dynamic instrumentation
- Update of OpenMP instrumentation (POMP2)
- Selective instrumentation and overhead reduction
- Improvements in performance mapping instrumentation
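As a rough illustration of what PDT-driven source instrumentation produces, the sketch below shows a routine after an instrumentor pass has inserted a TAU timer macro. The routine name and body are made up for the example; only the TAU_PROFILE macro and the TAU.h header reflect TAU's C++ measurement API.

```cpp
// Hypothetical routine after automatic source instrumentation (e.g., via
// TAU_COMPILER / the PDT-based instrumentor). TAU_PROFILE registers a timer
// for this routine and starts it here; the timer stops when the scope exits.
#include <TAU.h>

void jacobi_sweep(double* grid, int n)   // illustrative routine, not from the talk
{
  TAU_PROFILE("jacobi_sweep", "void (double*, int)", TAU_USER);

  for (int i = 1; i < n - 1; ++i) {
    grid[i] = 0.5 * (grid[i - 1] + grid[i + 1]);  // stand-in for real work
  }
}
```

Selective instrumentation would typically be used to exclude small, frequently executed routines like this one, which is one of the overhead-reduction mechanisms listed above.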

Program Database Toolkit (PDT)
[Diagram: an application or library is fed through C/C++ and Fortran 77/90/95 parsers; their intermediate language (IL) output is processed by C/C++ and Fortran IL analyzers to produce program database (PDB) files; the DUCTAPE library gives tools access to the PDB, including PDBhtml (program documentation), SILOON (application component glue), CHASM (C++ / F90/95 interoperability), and TAU_instr (automatic source instrumentation)]

Advances in TAU Measurement
- Profiling (four types)
  - Memory profiling: global heap memory tracking (several options)
  - Callpath profiling and calldepth profiling: user-controllable callpath length and calling depth
  - Phase-based profiling
- Tracing
  - Generation of VTF3 / SLOG trace files (fully portable)
  - Inclusion of hardware performance counts in trace files
  - Hierarchical trace merging
- Online performance overhead compensation
- Component software proxy generation and monitoring

Profile Measurement
- Flat profiles
  - Metric (e.g., time) spent in an event (callgraph nodes)
  - Exclusive/inclusive, # of calls, child calls
- Callpath profiles (calldepth profiles)
  - Time spent along a calling path (edges in callgraph)
  - “main => f1 => f2 => MPI_Send” (event name)
  - TAU_CALLPATH_LENGTH environment variable
- Phase-based profiles
  - Flat profiles under a phase (nested phases are allowed)
  - Default “main” phase
  - Supports static or dynamic (per-iteration) phases
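A minimal sketch of dynamic, per-iteration phase profiling, assuming the TAU_PHASE_* macros of TAU's measurement API; the routine and phase names are illustrative only:

```cpp
#include <TAU.h>
#include <cstdio>

// Each outer iteration becomes its own dynamic phase, so time (including MPI
// events) is attributed per iteration rather than lumped into one flat profile.
void run_timesteps(int num_iterations)
{
  for (int it = 0; it < num_iterations; ++it) {
    char phase_name[64];
    std::snprintf(phase_name, sizeof(phase_name), "Iteration %d", it);

    TAU_PHASE_CREATE_DYNAMIC(iter_phase, phase_name, "", TAU_USER);
    TAU_PHASE_START(iter_phase);

    // ... computation and communication for this timestep ...

    TAU_PHASE_STOP(iter_phase);
  }
}
```

Callpath depth, by contrast, is a run-time setting: the TAU_CALLPATH_LENGTH environment variable mentioned above bounds how long the recorded calling paths may grow.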

Advances in TAU Performance Analysis
- Enhanced parallel profile analysis (ParaProf)
  - Callpath analysis integration in ParaProf
  - Event callgraph view
- Performance Data Management Framework (PerfDMF)
  - First release of prototype
  - In use by several groups: S. Moore (UTK), P. Teller (UTEP), P. Hovland (ANL), …
- Integration with Vampir Next Generation (VNG)
  - Online trace analysis
- 3D performance visualization prototype (ParaVis)
- Component performance modeling and QoS

Pprof – Flat Profile (NAS PB LU)
- Intel Linux cluster, F90 + MPICH
- Profile: node / context / thread
- Events: code, MPI
- Metric: time
- Text display

ParaProf – Manager Window
[Screenshot: performance database and derived performance metrics]

ParaProf – Full Profile (Miranda): 8K processors!

ParaProf – Flat Profile (Miranda)

ParaProf – Callpath Profile (Flash)

ParaProf – Callpath Profile (ESMF): 21-level callpath

ParaProf – Phase Profile (MFIX)
- Dynamic phases, one per iteration
- In the 51st iteration, time spent in MPI_Waitall was … secs
- Total time spent in MPI_Waitall across all 92 iterations was … secs

ParaProf – Histogram View (Miranda)
- Scalable 2D displays
- 8K processors / 16K processors

ParaProf – Callgraph View (MFIX)

ParaProf – Callpath Highlighting (Flash): MODULEHYDRO_1D:HYDRO_1D

Profiling of Miranda on BG/L (Miller, LLNL)
- Profile code performance (automatic instrumentation)
- Scaling studies (problem size, number of processors)
- Run on 8K and 16K processors!
[Screenshots: 128, 512, and 1024 nodes]

ParaProf – 3D Full Profile (Miranda): 16K processors

ParaProf – 3D Scatterplot (Miranda)
- Each point is a “thread” of execution
- A total of four metrics shown in relation
- ParaVis 3D profile visualization library (JOGL)

Performance Tracing on Miranda
- Use TAU to generate VTF3 traces for Vampir analysis
- MPI calls with HW counter information (not shown)
- Detailed code behavior to focus optimization efforts

S3D on Lemieux (TAU-to-VTF3, Vampir)

S3D on Lemieux (zoomed)

TAU Performance System Status
- Computing platforms (selected)
  - IBM SP/pSeries, SGI Origin, Cray T3E/SV-1/X1/XT3, HP (Compaq) SC (Tru64), Sun, Hitachi SR8000, NEC SX-5/6, Linux clusters (IA-32/64, Alpha, PPC, PA-RISC, Power, Opteron), Apple (G4/5, OS X), Windows
- Programming languages
  - C, C++, Fortran 77/90/95, HPF, Java, OpenMP, Python
- Thread libraries (selected)
  - pthreads, SGI sproc, Java, Windows, OpenMP
- Compilers (selected)
  - Intel KAI (KCC, KAP/Pro), PGI, GNU, Fujitsu, Sun, PathScale, SGI, Cray, IBM (xlc, xlf), HP, NEC, Absoft

Project Affiliations (selected)
- Center for Simulation of Accidental Fires and Explosions
  - University of Utah, ASCI ASAP Center, C-SAFE
  - Uintah Computational Framework (UCF) (C++)
- Center for Simulation of Dynamic Response of Materials
  - California Institute of Technology, ASCI ASAP Center
  - Virtual Test Facility (VTF) (Python, Fortran 90)
- Earth System Modeling Framework (ESMF)
  - NSF, NOAA, DOE, NASA, …
  - Instrumentation for ESMF framework and applications
  - C, C++, and Fortran 95 code modules
  - MPI wrapper library for MPI calls

Project Affiliations (selected, continued)
- Lawrence Livermore National Lab
  - Hydrodynamics (Miranda), radiation diffusion (KULL)
- Sandia National Lab and Los Alamos National Lab
  - DOE CCTTSS SciDAC project
  - Common Component Architecture (CCA) integration
- Argonne National Lab
  - OS / RTS for Extreme Scale Scientific Computation
  - ZeptoOS: scalable components for petascale architectures
  - KTAU: integration of TAU infrastructure in the Linux kernel
- Oak Ridge National Lab
  - Contribution to the Joule Report: S3D, AORSA3D

Important Questions for Application Developers
- How does performance vary with different compilers?
- Is poor performance correlated with certain OS features?
- Has a recent change caused unanticipated performance?
- How does performance vary with MPI variants?
- Why is one application version faster than another?
- What is the reason for the observed scaling behavior?
- Did two runs exhibit similar performance?
- How are performance data related to application events?
- Which machines will run my code the fastest and why?
- Which benchmarks predict my code performance best?

Performance Problem Solving Goals
- Answer questions at multiple levels of interest
- Data from low-level measurements and simulations
  - Use to predict application performance
- High-level performance data spanning dimensions
  - Machine, applications, code revisions, data sets
  - Examine broad performance trends
- Discover general correlations between application performance and features of the external environment
- Develop methods to predict application performance from lower-level metrics
- Discover performance correlations between a small set of benchmarks and a collection of applications that represent a typical workload for a given system

Automatic Performance Analysis Tool (Concept)
[Diagram: build application → execute application → performance database; build information and environment / performance data feed offline analysis, which returns simple analysis feedback (e.g., “105% faster!”, “72% faster!”)]

Performance Data Management Framework

ParaProf Performance Profile Analysis
[Diagram: raw profile files from HPMToolkit, MpiP, and TAU are imported into the PerfDMF-managed database, organized with metadata into Application / Experiment / Trial]

PerfExplorer (K. Huck, Ph.D. student, UO)
- Performance knowledge discovery framework
  - Uses the existing TAU infrastructure: TAU instrumentation data, PerfDMF
  - Client-server based system architecture
  - Data mining analysis applied to parallel performance data: comparative, clustering, correlation, dimension reduction, …
- Technology integration
  - Relational Database Management Systems (RDBMS)
  - Java API and toolkit
  - R-project / Omegahat statistical analysis
  - WEKA data mining package
  - Web-based client
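To make the clustering idea concrete, the toy sketch below groups threads with similar profiles using k-means. PerfExplorer itself delegates this kind of analysis to WEKA and R rather than implementing it directly, so this is only an illustration of the technique, not of PerfExplorer's code.

```cpp
#include <cstddef>
#include <limits>
#include <vector>

// Toy k-means over per-thread profile vectors: each row is one thread of
// execution, each column the exclusive time of one event. Threads that end
// up in the same cluster exhibited similar performance behavior.
using Profile = std::vector<double>;

static double dist2(const Profile& a, const Profile& b)
{
  double d = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i) d += (a[i] - b[i]) * (a[i] - b[i]);
  return d;
}

std::vector<int> kmeans(const std::vector<Profile>& threads, int k, int iters = 50)
{
  // Naive seeding: use the first k thread profiles as initial centers
  // (assumes threads.size() >= k).
  std::vector<Profile> centers(threads.begin(), threads.begin() + k);
  std::vector<int> label(threads.size(), 0);

  for (int pass = 0; pass < iters; ++pass) {
    // Assignment step: attach each thread to its nearest center.
    for (std::size_t t = 0; t < threads.size(); ++t) {
      double best = std::numeric_limits<double>::max();
      for (int c = 0; c < k; ++c) {
        double d = dist2(threads[t], centers[c]);
        if (d < best) { best = d; label[t] = c; }
      }
    }
    // Update step: recompute each center as the mean of its members.
    for (int c = 0; c < k; ++c) {
      Profile mean(threads[0].size(), 0.0);
      int count = 0;
      for (std::size_t t = 0; t < threads.size(); ++t) {
        if (label[t] != c) continue;
        for (std::size_t i = 0; i < mean.size(); ++i) mean[i] += threads[t][i];
        ++count;
      }
      if (count > 0) {
        for (double& v : mean) v /= count;
        centers[c] = mean;
      }
    }
  }
  return label;
}
```

Threads landing in the same cluster exhibited similar behavior, which is what the sPPM and Miranda cluster views on the following slides visualize.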

PerfExplorer Architecture

PerfExplorer Client GUI

Hierarchical and K-means Clustering (sPPM)

Miranda Clustering on 16K Processors

PERC Tool Requirements and Evaluation
- Performance Evaluation Research Center (PERC)
  - DOE SciDAC
  - Evaluation methods/tools for high-end parallel systems
- PERC tools study (led by ORNL, Pat Worley)
  - In-depth performance analysis of selected applications
  - Evaluate performance analysis requirements
  - Test tool functionality and ease of use
- Applications
  - Start with fusion code GYRO
  - Repeat with other PERC benchmarks
  - Continue with SciDAC codes

Primary Evaluation Machines
- Phoenix (ORNL – Cray X1): 512 multi-streaming vector processors
- Ram (ORNL – SGI Altix, 1.5 GHz Itanium2): 256 total processors
- TeraGrid: ~7,738 total processors on 15 machines at 9 sites
- Cheetah (ORNL – p690 cluster, 1.3 GHz, HPS): 864 total processors on 27 compute nodes
- Seaborg (NERSC – IBM SP3): 6,080 total processors on 380 compute nodes

GYRO Execution Parameters
- Three benchmark problems
  - B1-std: 16n processors, 500 timesteps
  - B2-cy: 16n processors, 1000 timesteps
  - B3-gtc: 64n processors, 100 timesteps (very large)
- Test different methods to evaluate nonlinear terms: direct method vs. FFT (“nl2” for B1 and B2, “nl1” for B3)
- Task affinity enabled/disabled (p690 only)
- Memory affinity enabled/disabled (p690 only)
- Filesystem location (Cray X1 only)

PerfExplorer Analysis of Self-Instrumented Data
- PerfExplorer
  - Focus on comparative analysis
  - Apply to PERC tool evaluation study
- Look at user timer data
  - Aggregate data, no per-process data, so process clustering analysis is not applicable
  - Timings output every N timesteps, so some phase analysis is possible
- Goal: recreate manually generated performance reports

PerfExplorer Interface
[Screenshot: select experiments and trials of interest; data organized in an application / experiment / trial structure (arbitrary structure to be allowed in the future); experiment metadata shown]

PerfExplorer Interface
[Screenshot: select analysis]

Timesteps per Second
- Cray X1 is the fastest to solution in all 3 tests
- FFT (nl2) improves time for B3-gtc only
- TeraGrid faster than p690 for B1-std?
- Plots generated automatically
[Plots: timesteps per second for B1-std, B2-cy, and B3-gtc across platforms, including TeraGrid]

Relative Efficiency (B1-std)
- By experiment (B1-std): total runtime (Cheetah shown in red)
- By event for one experiment: Coll_tr (blue) is significant
- By experiment for one event: shows how Coll_tr behaves for all experiments
- 16-processor base case
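For reference, the relative efficiency plotted here is presumably the usual ratio against the 16-processor base case (an assumption; the slide does not state the formula):

$$ E_{\mathrm{rel}}(p) \;=\; \frac{16\,T(16)}{p\,T(p)} $$

where T(p) is the measured runtime on p processors, so the base case has efficiency 1 and values below 1 indicate sub-linear scaling.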

Automated Parallel Performance Diagnosis

Current and Future Work
- ParaProf
  - Developing phase-based performance displays
- PerfDMF
  - Adding new database backends and distributed support
  - Building support for user-created tables
- PerfExplorer
  - Extending comparative and clustering analysis
  - Adding new data mining capabilities
  - Building in scripting support
- Performance regression testing tool (PerfRegress)
- Integration into the Eclipse Parallel Tools Platform (PTP)

Concluding Discussion
- Performance tools must be used effectively
- More intelligent performance systems for productive use
  - Evolve to application-specific performance technology
  - Deal with scale by “full range” performance exploration
  - Autonomic and integrated tools
  - Knowledge-based and knowledge-driven process
- Performance observation methods do not necessarily need to change in a fundamental sense
  - They need to be more automatically controlled and more efficiently used
- Develop next-generation tools and deliver them to the community
  - Open source with support by ParaTools, Inc.

Support Acknowledgements
- Department of Energy (DOE)
  - Office of Science contracts
  - University of Utah ASCI Level 1 sub-contract
  - ASC/NNSA Level 3 contract
- NSF
  - High-End Computing Grant
- Research Centre Juelich
  - John von Neumann Institute
  - Dr. Bernd Mohr
- Los Alamos National Laboratory
