Status of Dynamical Core C++ Rewrite (Task 5) Oliver Fuhrer (MeteoSwiss), Tobias Gysi (SCS), Men Muhheim (SCS), Katharina Riedinger (SCS), David Müller (SCS), Thomas Schulthess (CSCS) …and the rest of the HP2C team!

Outline
- Motivation
- Design choices
- CPU and GPU implementation status
- User's perspective
- Outlook

Motivation
Memory bandwidth is the main performance limiter on "commodity" hardware.

Motivation
A prototype implementation of the fast-waves solver (30% of total runtime) showed considerable potential.
[Chart: runtime of the current implementation vs. the prototype]

Wishlist
- Correctness: unit testing, verification framework
- Performance: apply the performance optimizations from the prototype (avoid precomputation, loop merging, iterators, configurable storage order, cache-friendly buffers); a storage-order sketch follows below
- Portability: run both on x86 and GPU; 3 levels of parallelism (vector, multi-core node, multiple nodes)
- Ease of use: readability, usability, maintainability
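
The configurable storage order on the wishlist can be illustrated with a small sketch. This is hypothetical code, not the library's actual implementation; the point is only that the linear index of (i,j,k) is selected by a compile-time tag, so CPU and GPU builds can use different memory layouts without touching user code:

    #include <cstddef>

    struct OrderIJK {};  // i varies fastest in memory
    struct OrderKJI {};  // k varies fastest in memory

    template <class Order>
    std::size_t index(std::size_t i, std::size_t j, std::size_t k,
                      std::size_t ni, std::size_t nj, std::size_t nk);

    template <>
    std::size_t index<OrderIJK>(std::size_t i, std::size_t j, std::size_t k,
                                std::size_t ni, std::size_t nj, std::size_t) {
        return i + ni * (j + nj * k);  // i is the innermost dimension
    }

    template <>
    std::size_t index<OrderKJI>(std::size_t i, std::size_t j, std::size_t k,
                                std::size_t, std::size_t nj, std::size_t nk) {
        return k + nk * (j + nj * i);  // k is the innermost dimension
    }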

Idea: Domain-Specific Embedded Language (DSEL)
- Version 1 (current): stencil written out; inefficient
- Version 2 (optimized): more difficult to read; efficient (1.2x speedup)
- Version 3 (DSEL): stencil and loop abstracted, operator notation; easy to read and modify; efficient (optimizations hidden in the library); see the contrast sketch below
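
To make the versions concrete, here is a schematic contrast between version 1 and version 3 on a simple Laplacian stencil. All names and the "library" are illustrative and far simpler than the project's real code:

    #include <vector>

    struct Field {  // minimal 3D field, just enough for the example
        int ni, nj, nk;
        std::vector<double> d;
        double& operator()(int i, int j, int k) {
            return d[i + ni * (j + nj * k)];
        }
    };

    // Version 1 (current): the stencil is written out by hand; loop order
    // and buffering are fixed in user code.
    void laplace_v1(Field& lap, Field& u) {
        for (int k = 0; k < u.nk; ++k)
            for (int j = 1; j < u.nj - 1; ++j)
                for (int i = 1; i < u.ni - 1; ++i)
                    lap(i, j, k) = u(i + 1, j, k) + u(i - 1, j, k)
                                 + u(i, j + 1, k) + u(i, j - 1, k)
                                 - 4.0 * u(i, j, k);
    }

    // Version 3 (DSEL): only the per-point operator is expressed; the
    // library owns the loops and can optimize them behind the scenes.
    template <class Op>
    void apply(Field& out, Field& in, Op op) {
        for (int k = 0; k < in.nk; ++k)
            for (int j = 1; j < in.nj - 1; ++j)
                for (int i = 1; i < in.ni - 1; ++i)
                    out(i, j, k) = op(in, i, j, k);
    }

    void laplace_v3(Field& lap, Field& u) {
        apply(lap, u, [](Field& f, int i, int j, int k) {
            return f(i + 1, j, k) + f(i - 1, j, k)
                 + f(i, j + 1, k) + f(i, j - 1, k)
                 - 4.0 * f(i, j, k);
        });
    }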

Dycore Rewrite Status
Fully functional single-node CPU implementation:
- fast-wave solver
- horizontal advection (5th-order upstream, Bott)
- implicit vertical diffusion and advection
- horizontal hyper-diffusion
- Coriolis and other smaller stencils
Verified against the Fortran reference to machine precision (a sketch of such a check follows below).
No SSE-specific optimizations done yet!
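
As a rough illustration of the machine-precision verification, a comparison against the Fortran reference can be as simple as the following (hypothetical helper, not the project's actual verification framework):

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <limits>

    // Returns true if every value matches the reference up to a small
    // multiple of machine epsilon (relative to the local magnitude).
    bool matchesReference(const double* ref, const double* test,
                          std::size_t n, double tolFactor = 10.0) {
        const double eps = std::numeric_limits<double>::epsilon();
        for (std::size_t i = 0; i < n; ++i) {
            const double scale = std::max(std::abs(ref[i]), 1.0);
            if (std::abs(ref[i] - test[i]) > tolFactor * eps * scale)
                return false;  // difference exceeds machine precision
        }
        return true;
    }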

Rewrite vs. Current COSMO
The following table compares total execution time for 100 timesteps using 6 cores on Palu (Cray XE6, AMD Opteron Magny-Cours). COSMO performance depends on the domain size (partly due to vectorization).

    Domain Size   COSMO   Rewrite   Speedup
    32x…          … s     10.25 s   …
    …             … s     10.17 s   …
    …             … s     10.13 s   1.54

[COSMO times and the remaining entries were not preserved in the transcript]

Performance and scaling

Schedule
[Timeline: roughly 2 years overall. CPU track: feasibility study, library, rewrite, test & tune. GPU track: feasibility, library, test & tune. A "you are here" marker sits at the current point in the project.]

GPU Implementation - Design Decisions
- IJK loop order (vs. KJI for the CPU)
- Iterators replaced by pointers, indexes, and strides (see the sketch below):
  - there is only one index and stride instance per data field type
  - strides and pointers are stored in shared memory
  - indexes are stored in registers
  - there is no range check!
- 3D fields are padded in order to improve alignment (no overfetch!)
- Automatic synchronization between device and host storage
- Column buffers are full 3D fields; if necessary there is a halo around every block in order to guarantee block-private access to the buffer
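
A sketch of the pointer/stride/index scheme described above (names are illustrative, not the real backend code). On the GPU the stride structure would live in shared memory, once per data field type, while each thread keeps only its linear index in a register:

    // Strides shared by all fields of the same type; with a unit i-stride,
    // only the j- and k-strides need to be stored.
    struct FieldStrides {
        int jStride;  // distance in elements between j-neighbors
        int kStride;  // distance in elements between k-neighbors
    };

    // Per-thread position: a single linear index, moved by offsets.
    struct FieldIndex {
        int pos;
        void advance(const FieldStrides& s, int di, int dj, int dk) {
            pos += di + dj * s.jStride + dk * s.kStride;  // no range check!
        }
    };

    // Neighbor access; padding the i-dimension so that each (j,k) line
    // starts on an aligned address is what avoids overfetch on the GPU.
    inline double at(const double* base, const FieldIndex& idx,
                     const FieldStrides& s, int di, int dj, int dk) {
        return base[idx.pos + di + dj * s.jStride + dk * s.kStride];
    }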

GPU Implementation - Status
The GPU backend of the stencil library is functional. The following kernels have been adapted and tested so far:
- fast wave UV update
- vertical advection
- horizontal advection
- horizontal diffusion
- Coriolis
But there is still work ahead for the GPU:
- adapt all kernels to the framework
- implement boundary exchange and data field initialization kernels
- write more tests
- potentially a lot of performance work (e.g. merge loops and buffer intermediate values in shared memory)

An Example (1/2)
- Pressure gradient force (coordinate-free)
- x-component (Cartesian coordinates)
- x-component (transformed into spherical, terrain-following coordinates)
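
The slide's equations were not preserved in the transcript; the following is a standard-form sketch of the progression they name, ignoring the spherical metric factors for brevity. The coordinate-free pressure gradient force and its Cartesian x-component are

$$\mathbf{F}_p = -\frac{1}{\rho}\,\nabla p, \qquad F_{p,x} = -\frac{1}{\rho}\,\frac{\partial p}{\partial x},$$

and transforming to a terrain-following vertical coordinate $\zeta$ with height field $z(x,\zeta)$ turns the x-derivative at constant height into (chain rule)

$$\left.\frac{\partial p}{\partial x}\right|_{z} = \left.\frac{\partial p}{\partial x}\right|_{\zeta} - \frac{\partial z/\partial x}{\partial z/\partial \zeta}\,\frac{\partial p}{\partial \zeta},$$

which is the metric-term form that the discretization below implements.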

Computational Grid
Terrain-following coordinates, staggered grid.
[Grid diagram: u at the (i,k) and (i+1,k) cell faces; w and hhl at the (i,k) and (i,k+1) half levels; rho and t at the (i,k) cell center]

An Example (2/2)
- x-component (transformed into spherical, terrain-following coordinates)
- x-component (discretized form)
- Basic operators
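
Again the equations themselves are lost; a sketch consistent with the Fortran code on the next slide uses two basic operators, a forward difference and a two-point average in x,

$$\delta_x\phi\,\big|_{i} = \phi_{i+1} - \phi_{i}, \qquad \overline{\phi}^{\,x}\big|_{i} = \tfrac{1}{2}\left(\phi_{i+1} + \phi_{i}\right),$$

so that the discretized x-component takes roughly the form

$$\mathrm{zpgradx}_{i} = \left(\delta_x p' + 2\,\overline{\mathrm{dpdz}}^{\,x}\,\mathrm{dzdx}\right)_{i}\,\mathrm{rhoqx\_i}_{i}, \qquad u_{i}^{\mathrm{nnew}} \leftarrow u_{i}^{\mathrm{nnew}} - \mathrm{zpgradx}_{i}\,\mathrm{dts},$$

matching the zdpdx, zdzpz, and zpgradx lines in the Fortran version.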

Fortran Version

    [... precompute sqrtg_r_u(i,j,k) using hhl(i,j,k) ...]
    [... precompute rhoqx_i(i,j,k) using rho(i,j,k) ...]
    [... precompute hml(i,j,k) using hhl(i,j,k) ...]

    DO k = 1, ke
      DO j = jstart-1, jend
        DO i = istart-1, iend
          dzdx(i,j,k) = 0.5 * sqrtg_r_u(i,j,k) * ( hml(i+1,j,k) - hml(i,j,k) )
        ENDDO
      ENDDO
    ENDDO

    DO k = 1, ke
      DO j = jstart-1, jend+1
        DO i = istart-1, iend+1
          dpdz(i,j,k) = + pp(i,j,k+1) * ( wgt(i,j,k+1)                  )  &
                        + pp(i,j,k  ) * ( 1.0 - wgt(i,j,k+1) - wgt(i,j,k) )  &
                        + pp(i,j,k-1) * ( wgt(i,j,k) - 1.0              )
        ENDDO
      ENDDO
    ENDDO

    DO k = 1, ke
      DO j = jstartu, jendu
        DO i = ilowu, iendu
          zdzpz   = ( dpdz(i+1,j,k) + dpdz(i,j,k) ) * dzdx(i,j,k)
          zdpdx   = pp(i+1,j,k) - pp(i,j,k)
          zpgradx = ( zdpdx + zdzpz ) * rhoqx_i(i,j,k)
          u(i,j,k,nnew) = u(i,j,k,nnew) - zpgradx * dts
        ENDDO
      ENDDO
    ENDDO

C++ Version

FastWaveUV.h

FastWave.cpp
[Code slide: stencil stages and input / output / temporary fields]

Input / Output / Buffer Fields

Stencil stages

Stencil stages (UStage)
[Code slide: dzdx, ppgradcor]

Stencil stages (PGradCorStage)

Stencil stages (DZDXStage)
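
Since the code on these slides did not survive extraction, here is a minimal, self-contained sketch of the stage idea, mirroring the first Fortran loop nest (dzdx from sqrtg_r_u and hml). The API shown is hypothetical and far simpler than the real library:

    #include <vector>

    struct Field {  // minimal 3D field, as in the earlier sketch
        int ni, nj, nk;
        std::vector<double> d;
        double& operator()(int i, int j, int k) {
            return d[i + ni * (j + nj * k)];
        }
    };

    // Stage corresponding to the Fortran loop:
    // dzdx = 0.5 * sqrtg_r_u * ( hml(i+1,j,k) - hml(i,j,k) )
    struct DZDXStage {
        static void Do(Field& dzdx, Field& sqrtg_r_u, Field& hml,
                       int i, int j, int k) {
            dzdx(i, j, k) = 0.5 * sqrtg_r_u(i, j, k)
                          * (hml(i + 1, j, k) - hml(i, j, k));
        }
    };

    // "Library" side: owns loop structure and traversal order, so stages
    // stay free of loops and can be fused or reordered by the library.
    template <class Stage>
    void applyStage(Field& out, Field& a, Field& b) {
        for (int k = 0; k < out.nk; ++k)
            for (int j = 0; j < out.nj; ++j)
                for (int i = 0; i + 1 < out.ni; ++i)
                    Stage::Do(out, a, b, i, j, k);
    }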

Conclusions
- Successful DSEL implementation of the COSMO dynamical core
- Significant speedup on CPU (1.5x - 1.8x)
- Most identified risks turned out to be manageable:
  - team members without C++ experience were able to implement kernels (e.g. Bott advection)
  - error messages mostly pointed directly to the problem
  - compilation time is reasonable
  - debug information/symbols make the executable huge
- There are areas where C++ is lagging behind Fortran, e.g. poor SSE support (manual effort needed)
- GPU backend implementation is ongoing; the NVIDIA toolchain is capable of handling the C++ rewrite

Next Steps
- Port the whole HP2C dycore to GPU
- Understand GPU performance characteristics
- GPU performance results by October 2011
- Decide on how to proceed further…

Questions
- Is COSMO ready/willing to absorb a shift to C++ for the dycore and have a mixed-language code?

For more information…