Status of Dynamical Core C++ Rewrite Oliver Fuhrer (MeteoSwiss), Tobias Gysi (SCS), Men Muhheim (SCS), Katharina Riedinger (SCS), David Müller (SCS), Thomas Schulthess (CSCS) …and the rest of the HP2C team!

Outline
- Motivation
- Design choices
- CPU and GPU implementation status
- Outlook

Motivation
Memory bandwidth is the main performance limiter on "commodity" hardware.

Motivation
A prototype implementation of the fast-waves solver (30% of total runtime) showed considerable potential.
[Chart: runtime of the current implementation vs. the prototype]

Wishlist
- Correctness: unit-testing, verification framework
- Performance: apply the performance optimizations from the prototype (avoid pre-computation, loop merging, iterators, configurable storage order, cache-friendly buffers)
- Portability: run both on x86 and GPU; 3 levels of parallelism (vector, multi-core node, multiple nodes)
- Ease of use: readability, usability, maintainability
A sketch of one of these points, configurable storage order, follows below.
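Configurable storage order can, for instance, be realized with a compile-time axis permutation. The following is a minimal sketch under assumed names (Field3D is illustrative, not the library's actual field type):

  #include <array>
  #include <cstddef>
  #include <vector>

  // Illustrative sketch (not the actual library): a 3D field whose storage
  // order is a compile-time permutation of the (i,j,k) axes. Axis A0 varies
  // fastest in memory, axis A2 slowest.
  template <std::size_t A0, std::size_t A1, std::size_t A2>
  struct Field3D {
      std::array<std::size_t, 3> size;  // extents in i, j, k
      std::vector<double> data;

      Field3D(std::size_t ni, std::size_t nj, std::size_t nk)
          : size{{ni, nj, nk}}, data(ni * nj * nk) {}

      double& operator()(std::size_t i, std::size_t j, std::size_t k)
      {
          const std::array<std::size_t, 3> idx = {{i, j, k}};
          return data[idx[A0] + size[A0] * (idx[A1] + size[A1] * idx[A2])];
      }
  };

  using FieldIJK = Field3D<0, 1, 2>;  // i fastest-varying
  using FieldKJI = Field3D<2, 1, 0>;  // k fastest-varying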

Domain Specific Embedded Language (DSEL)
- Version 1 (current): stencil written out; inefficient
- Version 2 (optimized): more difficult to read; efficient
- Version 3 (DSEL): stencil and loop abstracted, operator notation; easy to read/modify; efficient (optimizations hidden in the library)

Example: du/dt = -1/ρ dp/dx (in terrain-following coordinates)

  ! ...precompute rhoqx_i(i,j,k) using rho(i,j,k)
  ! ...precompute sqrtg_r_u(i,j,k) using hhl(i,j,k)
  ! ...precompute hml(i,j,k) using hhl(i,j,k)

  DO k = 1, ke
    DO j = jstart-1, jend
      DO i = istart-1, iend
        dzdx(i,j,k) = 0.5 * sqrtg_r_u(i,j,k) * ( hml(i+1,j,k) - hml(i,j,k) )
      ENDDO
    ENDDO
  ENDDO

  DO k = 1, ke
    DO j = jstart-1, jend+1
      DO i = istart-1, iend+1
        dpdz(i,j,k) =   pp(i,j,k+1) * wgt(i,j,k+1)                      &
                      + pp(i,j,k  ) * (1.0 - wgt(i,j,k+1) - wgt(i,j,k)) &
                      + pp(i,j,k-1) * (wgt(i,j,k) - 1.0)
      ENDDO
    ENDDO
  ENDDO

  DO k = 1, ke
    DO j = jstartu, jendu
      DO i = ilowu, iendu
        zdzpz   = ( dpdz(i+1,j,k) + dpdz(i,j,k) ) * dzdx(i,j,k)
        zdpdx   = pp(i+1,j,k) - pp(i,j,k)
        zpgradx = ( zdpdx + zdzpz ) * rhoqx_i(i,j,k)
        u(i,j,k,nnew) = u(i,j,k,nnew) - zpgradx * dts
      ENDDO
    ENDDO
  ENDDO

Example: du/dt = -1/ρ dp/dx (in terrain-following coordinates)

Abbreviated version of the code (e.g. declarations missing)!
"Language" details of the DSEL are subject to change!

  static void Do(Context ctx, TerrainCoordinates)
  {
      ctx[dzdx] = ctx[Delta::With(i+1, hhl)];
  }

  static void Do(Context ctx, TerrainCoordinates)
  {
      ctx[ppgradcor] = ctx[Delta2::With(wgtfac, pp)];
  }

  static void Do(Context ctx, FullDomain)
  {
      T rhoi = ctx[fx] / ctx[Average::With(i+1, rho)];
      T pgrad = ctx[Gradient::With(i+1, pp, Delta::With(k+1, ppgradcor), dzdx)];
      ctx[u] = ctx[u] - pgrad * rhoi * ctx[dts];
  }
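To connect the DSEL notation back to the Fortran loops, here is a small plain-C++ sketch of what two of the operators plausibly expand to at a single grid point. This is an illustration only, not the library's implementation; the semantics are inferred from the Fortran above.

  #include <cstddef>

  // Plausible expansions (inferred, not taken from the library) of two DSEL
  // operators used above; F is any field type indexable as f(i,j,k).

  template <typename F>
  double delta_ip1(const F& f, std::size_t i, std::size_t j, std::size_t k)
  {
      // Delta::With(i+1, hhl): finite difference to the i+1 neighbour,
      // analogous to the ( hml(i+1,j,k) - hml(i,j,k) ) term in the Fortran.
      return f(i + 1, j, k) - f(i, j, k);
  }

  template <typename F>
  double average_ip1(const F& f, std::size_t i, std::size_t j, std::size_t k)
  {
      // Average::With(i+1, rho): mean of a value and its i+1 neighbour,
      // the staggered density average that 1/ρ is evaluated with.
      return 0.5 * (f(i, j, k) + f(i + 1, j, k));
  }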

Dycore Rewrite Status
Fully functional single-node CPU implementation:
- fast-waves solver
- horizontal advection (5th-order upstream, Bott)
- implicit vertical diffusion and advection
- horizontal hyper-diffusion
- Coriolis and other smaller stencils
Verified against the Fortran reference to machine precision (a minimal sketch of such a comparison follows below).
No SSE-specific optimizations done yet!
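The slides do not show the verification framework itself; the following is a minimal sketch of a machine-precision comparison against a Fortran reference field. All names and the tolerance policy are assumptions, not the project's actual code.

  #include <algorithm>
  #include <cassert>
  #include <cmath>
  #include <cstddef>
  #include <limits>
  #include <vector>

  // Assumed sketch of a verification check: compare a rewritten kernel's
  // output against the Fortran reference, element by element. A few ulps
  // of relative slack are allowed, since a different floating-point
  // operation order means bitwise identity cannot be expected.
  bool verify_field(const std::vector<double>& rewrite,
                    const std::vector<double>& reference)
  {
      assert(rewrite.size() == reference.size());
      const double tol = 8.0 * std::numeric_limits<double>::epsilon();
      for (std::size_t n = 0; n < rewrite.size(); ++n) {
          const double scale = std::max(std::abs(reference[n]), 1.0);
          if (std::abs(rewrite[n] - reference[n]) > tol * scale) {
              return false;
          }
      }
      return true;
  }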

Rewrite vs. Current COSMO
The following table compares total execution time for 100 timesteps using 6 cores on Palu (Cray XE6, AMD Opteron Magny-Cours). COSMO performance depends on the domain size (partly due to vectorization).

  Domain Size   COSMO   Rewrite   Speedup
  32x…          … s     10.25 s   …
  …             … s     10.17 s   …
  …             … s     10.13 s   1.54

Performance and scaling

Schedule
[Timeline figure (~2 years): CPU track: feasibility study → library → rewrite → test → tune; GPU track: feasibility → library → test & tune; current position marked "You Are Here".]

GPU Implementation - Design Decisions
- IJK loop order (vs. KJI for CPU)
- Iterators are replaced by pointers, indexes, and strides:
  - there is only one index and stride instance per data field type
  - strides and pointers are stored in shared memory
  - indexes are stored in registers
  - there is no range check!
- 3D fields are padded in order to improve alignment (see the padding sketch below)
- Automatic synchronization between device and host storage
- Column buffers are full 3D fields; if necessary, there is a halo around every block in order to guarantee block-private access to the buffer
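As an illustration of the padding and stride decisions, here is a small sketch; the 128-byte alignment target and all names are assumptions for illustration, not the actual allocator.

  #include <cstddef>

  // Assumed sketch: the i extent (fastest-varying in IJK order) of a padded
  // 3D field is rounded up so that every (j,k) row starts on an aligned
  // boundary, and a grid point is addressed via one set of strides shared
  // by all fields of the same type. No range check, as noted above.
  struct FieldStrides {
      std::size_t jstride;  // elements between consecutive j indices
      std::size_t kstride;  // elements between consecutive k indices
  };

  constexpr std::size_t pad_i(std::size_t ni)
  {
      // round up to a multiple of 16 doubles = 128 bytes
      return (ni + 128 / sizeof(double) - 1) / (128 / sizeof(double))
             * (128 / sizeof(double));
  }

  constexpr FieldStrides make_strides(std::size_t ni, std::size_t nj)
  {
      return FieldStrides{pad_i(ni), pad_i(ni) * nj};
  }

  constexpr std::size_t offset(FieldStrides s, std::size_t i, std::size_t j,
                               std::size_t k)
  {
      return i + j * s.jstride + k * s.kstride;  // i has unit stride
  }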

GPU Implementation - Status
The GPU backend of the library is functional. The following kernels have been adapted and tested so far:
- fast-wave UV update
- vertical advection
- horizontal advection
- horizontal diffusion
- Coriolis
But there is still a lot of work ahead:
- adapt all kernels to the framework
- implement boundary exchange and data field initialization kernels
- write more tests
- potentially a lot of performance work (e.g. merge loops and buffer intermediate values in shared memory)

Conclusions
Successful CPU DSEL implementation of the COSMO dynamical core:
- significant speedup on the CPU
- most identified risks turned out to be manageable
- team members without C++ experience were able to implement kernels
- error messages mostly pointed directly to the problem
- compilation time is reasonable
- debug information/symbols make the executable huge
There are areas where C++ is lagging behind Fortran, e.g. poor SSE support (manual effort needed).
GPU backend implementation is ongoing; the NVIDIA toolchain is capable of handling the C++ rewrite.

Next Steps
- Port the whole HP2C dycore to the GPU
- Understand GPU performance characteristics
- GPU performance results by October 2011
- Decide on how to proceed further…

For more information…