Slide 1: Analysis and Visualization Algorithms in VMD
David Hardy
NAIS: State-of-the-Art Algorithms for Molecular Dynamics
(Presenting the work of John Stone.)
BTRC for Macromolecular Modeling and Bioinformatics, Beckman Institute, UIUC

Slide 2: VMD – “Visual Molecular Dynamics”
– Visualization and analysis of molecular dynamics simulations, sequence data, volumetric data, quantum chemistry simulations, particle systems, …
– User extensible with scripting and plugins

Slide 3: GPU Accelerated Trajectory Analysis and Visualization in VMD

GPU-Accelerated Feature              GPU Speedup
Molecular orbital display            120x
Radial distribution function         92x
Electrostatic field calculation      44x
Molecular surface display            40x
Ion placement                        26x
MDFF density map synthesis           26x
Implicit ligand sampling             25x
Root mean squared fluctuation        25x
Radius of gyration                   21x
Close contact determination          20x
Dipole moment calculation            15x

Slide 4: VMD for Demanding Analysis Tasks – Parallel VMD Analysis w/ MPI
– Analyze trajectory frames, structures, or sequences in parallel on clusters and supercomputers:
  – Compute time-averaged electrostatic fields, MDFF quality-of-fit, etc.
  – Parallel rendering, movie making
– Addresses computing requirements beyond the desktop
– User-defined parallel reduction operations and data types
– Dynamic load balancing:
  – Tested with up to 15,360 CPU cores
– Supports GPU-accelerated clusters and supercomputers
[Figure: data-parallel analysis in VMD – sequence/structure data, trajectory frames, etc. are distributed to VMD instances and the results are gathered]
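To make the gather/reduce pattern in the figure concrete, here is a minimal MPI sketch with hypothetical function and variable names (not VMD's internal code): each rank analyzes a round-robin subset of trajectory frames and the per-frame results are summed onto rank 0.

  // Hypothetical sketch: distribute trajectory frames across MPI ranks and
  // reduce a per-frame scalar result (e.g., a quality-of-fit value).
  // This is NOT VMD's internal implementation, only an illustration.
  #include <mpi.h>
  #include <stdio.h>

  double analyze_frame(int frame) {
    // Placeholder for per-frame analysis (electrostatics, MDFF fit, etc.)
    return (double) frame;
  }

  int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const int nframes = 1000;
    double local_sum = 0.0;

    // Static round-robin decomposition of frames over ranks
    for (int frame = rank; frame < nframes; frame += nranks)
      local_sum += analyze_frame(frame);

    // Gather (reduce) the per-rank partial results on rank 0
    double total = 0.0;
    MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
      printf("time-averaged result: %g\n", total / nframes);

    MPI_Finalize();
    return 0;
  }

In the real workflow the per-rank work items and the reduction operation are user-defined, and the scheduler balances frames dynamically rather than round-robin.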

Slide 5: Time-Averaged Electrostatics Analysis on Energy-Efficient GPU Cluster
– 1.5 hour job (CPUs) reduced to 3 min (CPUs+GPUs)
– Electrostatics of thousands of trajectory frames averaged
– Per-node power consumption on NCSA “AC” GPU cluster:
  – CPUs-only: 299 watts
  – CPUs+GPUs: 742 watts
– GPU speedup: 25.5x
– Power efficiency gain: 10.5x
Quantifying the Impact of GPUs on Performance and Energy Efficiency in HPC Clusters. J. Enos, C. Steffen, J. Fullop, M. Showerman, G. Shi, K. Esler, V. Kindratenko, J. Stone, J. Phillips. Work in Progress in Green Computing, 2010.

Slide 6: Time-Averaged Electrostatics Analysis on NCSA Blue Waters Early Science System

NCSA Blue Waters Node Type                                                        Seconds per trajectory frame (one compute node)
Cray XE6 compute node: 32 CPU cores (2x AMD 6200 CPUs)                            9.33
Cray XK6 GPU-accelerated compute node: 16 CPU cores + NVIDIA X2090 (Fermi) GPU    2.25

Speedup for GPU XK6 nodes vs. CPU XE6 nodes: GPU nodes are 4.15x faster overall (9.33 s / 2.25 s ≈ 4.15).
Preliminary performance for VMD time-averaged electrostatics w/ Multilevel Summation Method running on the Blue Waters Early Science System.

Slide 7: Visualizing Molecular Orbitals
– Visualization of MOs aids in understanding the chemistry of a molecular system
– Display of MOs can require tens to hundreds of seconds on multi-core CPUs, even with hand-coded SSE
– GPUs enable MOs to be computed and displayed in a fraction of a second, fully interactively
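To make the structure of this computation concrete before the GPU kernels on the following slides, here is a simplified serial sketch of evaluating the MO amplitude at a single lattice point as a sum of contracted Gaussian primitives. The flat data layout is an assumption, only s-type shells are handled, and this is not VMD's actual code.

  #include <math.h>

  /* Simplified serial sketch: MO amplitude at one lattice point as a sum of
   * contracted Gaussian-type orbitals.  Only s-type shells are shown; the
   * real VMD kernels also evaluate the angular momentum terms.  The flat
   * basis_array layout of (exponent, coefficient) pairs is an assumption. */
  float mo_value_at_point(float x, float y, float z,
                          int natoms, const float *atom_xyz,
                          const int *atom_nshells, const int *shell_nprims,
                          const float *basis_array, const float *wave_coeffs) {
    float value = 0.0f;
    int prim = 0;         /* index into (exponent, coefficient) pairs */
    int shell_index = 0;  /* global shell counter across all atoms    */
    for (int a = 0; a < natoms; a++) {
      float dx = x - atom_xyz[3*a    ];
      float dy = y - atom_xyz[3*a + 1];
      float dz = z - atom_xyz[3*a + 2];
      float dist2 = dx*dx + dy*dy + dz*dz;
      for (int s = 0; s < atom_nshells[a]; s++, shell_index++) {
        float contracted_gto = 0.0f;
        for (int p = 0; p < shell_nprims[shell_index]; p++) {
          float exponent       = basis_array[prim    ];
          float contract_coeff = basis_array[prim + 1];
          contracted_gto += contract_coeff * expf(-exponent * dist2);
          prim += 2;
        }
        value += wave_coeffs[shell_index] * contracted_gto;  /* s-shell contribution */
      }
    }
    return value;
  }

Interactive display requires evaluating this at every point of a 3-D lattice, for every frame and every MO shown, which is what makes GPU acceleration worthwhile.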

Slide 8: MO GPU Parallel Decomposition
– The MO 3-D lattice decomposes into 2-D slices (CUDA grids); each slice is covered by a grid of thread blocks
– Each thread computes one MO lattice point
– Small 8x8 thread blocks afford a large per-thread register count and shared memory
– Padding optimizes global memory performance, guaranteeing coalesced global memory accesses; threads in the padded region produce results that are discarded
– The lattice can be computed using multiple GPUs (e.g., GPU 0, GPU 1, GPU 2 each take a subset of slices)
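A minimal CUDA sketch of this decomposition, with hypothetical names and parameters and a placeholder in place of the real MO sum, showing 8x8 thread blocks over a padded 2-D slice with one thread per lattice point:

  // Sketch of the per-slice decomposition only; the kernel body is a
  // placeholder for the real MO amplitude evaluation.  Names and parameters
  // are hypothetical.
  #define BLOCKSZX 8
  #define BLOCKSZY 8

  __global__ void mo_slice_kernel(float *slice, int padded_width,
                                  float originx, float originy,
                                  float gridspacing) {
    // One thread per lattice point in this 2-D slice of the 3-D MO lattice
    int xindex  = blockIdx.x * blockDim.x + threadIdx.x;
    int yindex  = blockIdx.y * blockDim.y + threadIdx.y;
    int outaddr = yindex * padded_width + xindex;  // padded pitch keeps stores coalesced

    float gridx = originx + xindex * gridspacing;
    float gridy = originy + yindex * gridspacing;

    // Threads in the padding region still run (their results are discarded
    // later), so every warp issues fully coalesced global memory accesses.
    slice[outaddr] = gridx + gridy;                // placeholder for the MO sum
  }

  // Host-side launch for one slice; the slice buffer is allocated with both
  // dimensions rounded up to multiples of the block size:
  //   dim3 block(BLOCKSZX, BLOCKSZY);
  //   dim3 grid(padded_width / BLOCKSZX, padded_height / BLOCKSZY);
  //   mo_slice_kernel<<<grid, block>>>(d_slice, padded_width, ox, oy, spacing);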

Slide 9: VMD MO GPU Kernel Snippet: Loading Tiles Into Shared Memory On-Demand

  [… outer loop over atoms …]
  if ((prim_counter + (maxprim<<1)) - sblock_prim_counter >= SHAREDSIZE) {
    prim_counter += sblock_prim_counter;
    sblock_prim_counter = prim_counter & MEMCOAMASK;
    s_basis_array[sidx      ] = basis_array[sblock_prim_counter + sidx      ];
    s_basis_array[sidx +  64] = basis_array[sblock_prim_counter + sidx +  64];
    s_basis_array[sidx + 128] = basis_array[sblock_prim_counter + sidx + 128];
    s_basis_array[sidx + 192] = basis_array[sblock_prim_counter + sidx + 192];
    prim_counter -= sblock_prim_counter;
    __syncthreads();
  }
  for (prim = 0; prim < maxprim; prim++) {
    float exponent       = s_basis_array[prim_counter    ];
    float contract_coeff = s_basis_array[prim_counter + 1];
    contracted_gto += contract_coeff * __expf(-exponent*dist2);
    prim_counter += 2;
  }
  [… continue on to angular momenta loop …]

Shared memory tiles:
– Tiles are checked and loaded, if necessary, immediately prior to entering key arithmetic loops
– Adds additional control overhead to loops, even with an optimized implementation

Slide 10: VMD MO GPU Kernel Snippet: Fermi kernel based on L1 cache

  [… outer loop over atoms …]
  // loop over the shells belonging to this atom (or basis function)
  for (shell = 0; shell < maxshell; shell++) {
    float contracted_gto = 0.0f;
    int maxprim    = shellinfo[(shell_counter<<4)    ];
    int shell_type = shellinfo[(shell_counter<<4) + 1];
    for (prim = 0; prim < maxprim; prim++) {
      float exponent       = basis_array[prim_counter    ];
      float contract_coeff = basis_array[prim_counter + 1];
      contracted_gto += contract_coeff * __expf(-exponent*dist2);
      prim_counter += 2;
    }
    [… continue on to angular momenta loop …]

L1 cache:
– Simplifies code!
– Reduces control overhead
– Gracefully handles arbitrary-sized problems
– Matches performance of constant memory

Slide 11: VMD Single-GPU Molecular Orbital Performance Results for C60
Hardware: Intel X5550 CPU, GeForce GTX 480 GPU
Kernels compared (by Cores/GPUs, Runtime in seconds, and Speedup):
– Xeon 5550 ICC-SSE (two configurations, differing in cores used)
– CUDA shared mem
– CUDA L1-cache (16KB)
– CUDA const-cache
– CUDA const-cache, zero-copy
Observations:
– Fermi GPUs have caches: may outperform hand-coded shared memory kernels.
– Zero-copy memory transfers improve overlap of computation and host-GPU I/Os.
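The “zero-copy” variant in the last kernel listed refers to the standard CUDA mapped, page-locked host memory mechanism, which lets kernel output become visible on the host without a separate explicit copy. A generic setup sketch using the standard CUDA runtime calls (illustrative names and sizes, not VMD's code):

  #include <cuda_runtime.h>

  // Minimal zero-copy setup sketch: map pinned host memory into the device
  // address space so kernel writes land directly in host-visible memory.
  void setup_zero_copy_lattice(size_t npoints) {
    float *host_lattice = 0;   // host-visible output buffer (illustrative)
    float *dev_lattice  = 0;   // device-side alias of the same allocation

    cudaSetDeviceFlags(cudaDeviceMapHost);            // must be set before the context is created
    cudaHostAlloc((void **)&host_lattice, npoints * sizeof(float),
                  cudaHostAllocMapped);               // pinned + mapped allocation
    cudaHostGetDevicePointer((void **)&dev_lattice, host_lattice, 0);

    // ... launch kernels that write to dev_lattice; after cudaDeviceSynchronize()
    // the results are already visible through host_lattice, no cudaMemcpy needed ...

    cudaFreeHost(host_lattice);
  }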

Slide 12: VMD Multi-GPU Molecular Orbital Performance Results for C60
Hardware: Intel X5550 CPU, 4x GeForce GTX 480 GPUs
Kernels compared (by Cores/GPUs, Runtime in seconds, and Speedup):
– Intel X5550-SSE (two configurations, differing in cores used)
– GeForce GTX 480 (four configurations with increasing numbers of GPUs)
– Uses a persistent thread pool to avoid GPU initialization overhead; a dynamic scheduler distributes work to the GPUs

Slide 13: Molecular Orbital Computation and Display Process
One-time initialization:
– Read QM simulation log file, trajectory
– Preprocess MO coefficient data: eliminate duplicates, sort by type, etc.
– Initialize pool of GPU worker threads
For each trajectory frame, for each MO shown:
– For the current frame and MO index, retrieve MO wavefunction coefficients
– Compute 3-D grid of MO wavefunction amplitudes (most performance-demanding step, run on GPU)
– Extract isosurface mesh from the 3-D MO grid
– Apply user coloring/texturing and render the resulting surface

Slide 14: Multi-GPU Load Balance
– Many early CUDA codes assumed all GPUs were identical
– Host machines may contain a diversity of GPUs of varying capability (discrete, IGP, etc.)
– Different GPU on-chip and global memory capacities may need different problem “tile” sizes
– Static decomposition works poorly for a non-uniform workload or diverse GPUs
[Figure: heterogeneous GPUs, e.g. GPU 1 with 14 SMs … GPU N with 30 SMs]

Slide 15: Multi-GPU Dynamic Work Distribution

  // Each GPU worker thread loops over
  // a subset of 2-D planes in a 3-D cube…
  while (!threadpool_next_tile(&parms, tilesize, &tile)) {
    // Process one plane of work…
    // Launch one CUDA kernel for each
    // loop iteration taken…
    // Shared iterator automatically
    // balances load on GPUs
  }

[Figure: dynamic work distribution across GPU 1 … GPU N]
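The shared iterator above can be implemented with a single atomic counter that all GPU worker threads advance; a simplified standalone sketch using C++11 threads (illustrative only, not VMD's actual thread pool):

  // Simplified sketch of a dynamic shared-iterator scheduler: each worker
  // repeatedly claims the next block of planes until the work runs out.
  #include <algorithm>
  #include <atomic>
  #include <cstdio>
  #include <thread>
  #include <vector>

  static std::atomic<int> next_plane(0);   // shared iterator over 2-D planes
  static const int total_planes = 256;

  static void gpu_worker(int worker_id, int tilesize) {
    for (;;) {
      int start = next_plane.fetch_add(tilesize);   // claim the next tile of planes
      if (start >= total_planes)
        break;                                      // no work left
      int end = std::min(start + tilesize, total_planes);
      // In the real code, one CUDA kernel would be launched here per claimed tile.
      std::printf("worker %d processes planes [%d, %d)\n", worker_id, start, end);
    }
  }

  int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; i++)                     // e.g., one worker per GPU
      workers.emplace_back(gpu_worker, i, 8);
    for (auto &w : workers)
      w.join();
    return 0;
  }

Because each worker only comes back for more tiles when it finishes its current one, faster GPUs naturally claim more of the work.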

Slide 16: Example Multi-GPU Latencies Relevant to Interactive Sci-Viz, Script-Driven Analyses
(4 Tesla C2050 GPUs, Intel Xeon 5550)
– 6.3 us: CUDA empty kernel (immediate return)
– 9.0 us: sleeping barrier primitive (non-spinning barrier that uses POSIX condition variables to prevent idle CPU consumption while workers wait at the barrier)
– 14.8 us: pool wake, host function exec, sleep cycle (no CUDA)
– 30.6 us: pool wake, 1x (tile fetch, simple CUDA kernel launch), sleep
– pool wake, 100x (tile fetch, simple CUDA kernel launch), sleep

Slide 17: Multi-GPU Dynamic Scheduling Performance with Heterogeneous GPUs
Kernels compared (by Cores/GPUs, Runtime in seconds, and Speedup):
– Intel X5550-SSE
– Quadro
– Tesla C
– GeForce GTX
– GeForce GTX + Tesla C + Quadro (91% of ideal perf)
– Dynamic load balancing enables a mixture of GPU generations, SM counts, and clock rates to perform well.

Slide 18: Multi-GPU Runtime Error/Exception Handling
– Competition for resources from other applications can cause runtime failures, e.g. GPU out of memory halfway through an algorithm
– Handle exceptions, e.g. convergence failure, NaN result, insufficient compute capability/features
– Handle and/or reschedule failed tiles of work (see the sketch below)
[Figure: original workload split across GPU 1 … GPU N (differing SM counts and memory sizes), with a retry stack for failed tiles]
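A sketch of the retry-stack idea using only standard CUDA runtime error checks; the Tile type and the scheduler/retry functions named in the comments are hypothetical:

  // Sketch of handling a per-tile runtime failure by pushing the tile onto a
  // retry stack so another GPU can pick it up.  Tile is an illustrative type.
  #include <cuda_runtime.h>

  struct Tile { int first_plane, nplanes; };

  bool process_tile_on_gpu(const Tile &t, size_t plane_bytes) {
    size_t bytes_needed = (size_t) t.nplanes * plane_bytes;
    float *d_buf = 0;
    if (cudaMalloc((void **)&d_buf, bytes_needed) != cudaSuccess)
      return false;                     // e.g., another app consumed GPU memory

    // ... launch analysis kernel(s) for this tile ...

    cudaError_t err = cudaDeviceSynchronize();   // surface any launch/exec errors
    cudaFree(d_buf);
    return (err == cudaSuccess);
  }

  // Worker loop (sketch): failed tiles are not lost, they are rescheduled.
  //   while (scheduler_next_tile(&t)) {
  //     if (!process_tile_on_gpu(t, plane_bytes))
  //       retry_stack_push(t);         // another (more capable) GPU retries it
  //   }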

Slide 19: Radial Distribution Functions
– RDFs describe how atom density varies with distance
– Can be compared with experiments
– Shape indicates phase of matter: sharp peaks appear for solids, smoother curves for liquids
– Quadratic time complexity, O(N^2)
[Figure: example RDFs for a solid and a liquid]

Slide 20: Computing RDFs
– Compute distances for all pairs of atoms between two groups of atoms A and B
  – A and B may be the same, or different
– Use the nearest image convention for periodic systems
– Each pair distance is inserted into a histogram
– The histogram is normalized one of several ways depending on use, but usually according to the volume of the spherical shell associated with each histogram bin
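A straightforward serial reference implementation of the histogramming step described above, with the nearest-image (minimum image) convention for an orthorhombic periodic cell; this is a sketch for illustration, not VMD's optimized GPU code:

  #include <math.h>

  /* Serial reference sketch of RDF histogramming between atom groups A and B
   * in an orthorhombic periodic cell, using the minimum image convention.
   * When A and B are the same selection, the i == j self-distance should be
   * skipped.  Normalization is a separate step. */
  void rdf_histogram(const float *xyzA, int nA, const float *xyzB, int nB,
                     const float box[3], float binwidth,
                     unsigned long *hist, int nbins) {
    for (int i = 0; i < nA; i++) {
      for (int j = 0; j < nB; j++) {
        float dx = xyzA[3*i    ] - xyzB[3*j    ];
        float dy = xyzA[3*i + 1] - xyzB[3*j + 1];
        float dz = xyzA[3*i + 2] - xyzB[3*j + 2];
        /* nearest image convention: wrap each component into [-box/2, box/2] */
        dx -= box[0] * rintf(dx / box[0]);
        dy -= box[1] * rintf(dy / box[1]);
        dz -= box[2] * rintf(dz / box[2]);
        float r = sqrtf(dx*dx + dy*dy + dz*dz);
        int bin = (int)(r / binwidth);
        if (bin < nbins)
          hist[bin]++;               /* each pair distance lands in one bin */
      }
    }
  }

Normalization then typically divides each bin count by the number of pairs expected at the same average density, which is proportional to the volume of the spherical shell for that bin.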

Slide 21: Multi-GPU RDF Performance
– 4 NVIDIA GTX 480 GPUs: 30x to 92x faster than a 4-core Intel X5550 CPU
– Fermi GPUs ~3x faster than GT200 GPUs: larger on-chip shared memory
[Figure: computed RDFs for a solid and a liquid]
Fast Analysis of Molecular Dynamics Trajectories with Graphics Processing Units – Radial Distribution Functions. B. Levine, J. Stone, and A. Kohlmeyer (submitted).

Slide 22: Molecular Surface Display: “QuickSurf” Representation
– Displays a continuum of structural detail:
  – All-atom models
  – Coarse-grained models
  – Cellular-scale models
  – Multi-scale models: all-atom + CG, Brownian + whole cell
  – Smoothly variable between full detail and reduced-resolution representations of very large complexes
– Uses multi-core CPUs and GPU acceleration to enable smooth real-time animation of MD trajectories
– Linear-time algorithm, scales to hundreds of millions of particles, as limited by memory capacity
Fast Visualization of Gaussian Density Surfaces for Molecular Dynamics and Particle System Trajectories. M. Krone, J. Stone, T. Ertl, K. Schulten. EuroVis (submitted).
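The linear-time behavior comes from splatting each particle's Gaussian only onto grid points within a fixed cutoff of its center, so the work grows with the number of particles rather than with particle pairs. A serial sketch of that density-splat step (illustrative only; QuickSurf's actual GPU implementation uses spatial binning and shared memory):

  #include <math.h>

  /* Serial sketch of Gaussian density splatting onto a uniform grid: each
   * particle touches only grid points within `cutoff` of its center, so the
   * cost is O(N) for fixed resolution.  Illustrative, not QuickSurf itself. */
  void splat_density(const float *xyz, const float *radius, int natoms,
                     float *density, int nx, int ny, int nz,
                     float gridspacing, float cutoff) {
    for (int a = 0; a < natoms; a++) {
      float px = xyz[3*a], py = xyz[3*a+1], pz = xyz[3*a+2];
      float sigma2 = radius[a] * radius[a];

      /* bounding box of grid points affected by this particle */
      int xmin = (int) fmaxf(0.0f, floorf((px - cutoff) / gridspacing));
      int ymin = (int) fmaxf(0.0f, floorf((py - cutoff) / gridspacing));
      int zmin = (int) fmaxf(0.0f, floorf((pz - cutoff) / gridspacing));
      int xmax = (int) fminf((float)(nx - 1), ceilf((px + cutoff) / gridspacing));
      int ymax = (int) fminf((float)(ny - 1), ceilf((py + cutoff) / gridspacing));
      int zmax = (int) fminf((float)(nz - 1), ceilf((pz + cutoff) / gridspacing));

      for (int k = zmin; k <= zmax; k++)
        for (int j = ymin; j <= ymax; j++)
          for (int i = xmin; i <= xmax; i++) {
            float dx = i * gridspacing - px;
            float dy = j * gridspacing - py;
            float dz = k * gridspacing - pz;
            float d2 = dx*dx + dy*dy + dz*dz;
            if (d2 <= cutoff * cutoff)
              density[(k * ny + j) * nx + i] += expf(-d2 / (2.0f * sigma2));
          }
    }
  }

An isosurface extracted from the resulting density grid then gives the displayed surface.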

Slide 23: Recurring Algorithm Design Principles (1)
– Extensive use of on-chip shared memory and constant memory to further amplify memory bandwidth
– Pre-processing and sorting of operands to organize computation for peak efficiency on the GPU, particularly for best use of the L1 cache and shared memory
– Tiled/blocked data structures in GPU global memory for peak bandwidth utilization
– Use of the CPU to “regularize” the work done by the GPU, handle exceptions and unusual work units
– Asynchronous operation of CPU/GPU enabling overlapping of computation and I/O on both ends (sketched below)
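A minimal sketch of the asynchronous CPU/GPU overlap mentioned in the last bullet, using pinned host memory, CUDA streams, and cudaMemcpyAsync; the kernel, chunk sizes, and stream count are illustrative:

  #include <cuda_runtime.h>

  // Illustrative kernel: the point is the overlap pattern, not the math.
  __global__ void process(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
  }

  // Process nchunks chunks, overlapping H2D copy, kernel execution, and D2H
  // copy of different chunks by issuing each chunk into its own stream.
  void overlapped_pipeline(float *h_data /* pinned via cudaHostAlloc */,
                           int nchunks, int chunk_elems) {
    const int NSTREAMS = 4;
    cudaStream_t streams[NSTREAMS];
    float *d_bufs[NSTREAMS];
    for (int s = 0; s < NSTREAMS; s++) {
      cudaStreamCreate(&streams[s]);
      cudaMalloc((void **)&d_bufs[s], chunk_elems * sizeof(float));
    }

    for (int c = 0; c < nchunks; c++) {
      int s = c % NSTREAMS;
      float *h_chunk = h_data + (size_t)c * chunk_elems;
      cudaMemcpyAsync(d_bufs[s], h_chunk, chunk_elems * sizeof(float),
                      cudaMemcpyHostToDevice, streams[s]);
      process<<<(chunk_elems + 255) / 256, 256, 0, streams[s]>>>(d_bufs[s], chunk_elems);
      cudaMemcpyAsync(h_chunk, d_bufs[s], chunk_elems * sizeof(float),
                      cudaMemcpyDeviceToHost, streams[s]);
    }
    cudaDeviceSynchronize();   // wait for all streams to drain

    for (int s = 0; s < NSTREAMS; s++) {
      cudaFree(d_bufs[s]);
      cudaStreamDestroy(streams[s]);
    }
  }

Operations within one stream are ordered, so reuse of each device buffer is safe, while copies and kernels in different streams can overlap.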

Slide 24: Recurring Algorithm Design Principles (2)
– Take advantage of special features of the GPU memory systems:
  – Broadcasts, wide loads/stores (float4, double2), texture interpolation, write combining, etc.
– Avoid doing complex array indexing arithmetic within the GPU threads; pre-compute as much as possible outside of the GPU kernel so the GPU is doing what it’s best at: floating point arithmetic (illustrated below)
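As a small illustration of both points, the sketch below (not a VMD kernel) packs each atom into a float4 so one wide 16-byte load fetches the whole record, and takes its output locations from a table precomputed on the host instead of doing per-thread index arithmetic:

  // Illustrative kernel: each atom is stored as a float4 (x, y, z, charge),
  // so a single wide load fetches the whole record, and the output slot for
  // each thread comes from a host-precomputed index table.
  // Requires compute capability 2.0+ for atomicAdd on float.
  __global__ void accumulate_charge(const float4 *atoms, const int *out_index,
                                    float *out, int natoms) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= natoms) return;

    float4 a = atoms[i];                     // one coalesced 16-byte load per thread
    atomicAdd(&out[out_index[i]], a.w);      // host precomputed where this result belongs
  }

  // Host side (sketch): build out_index once on the CPU, e.g. by binning or
  // sorting atoms, so GPU threads spend their time on floating point work:
  //   for (int i = 0; i < natoms; i++) out_index[i] = bin_of_atom(i);  // hypothetical helper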