1
GPGPU use cases from the MoBrain community
João Rodrigues Postdoctoral Researcher Utrecht University, NL
2
MoBrain main activities
Task 1: User support and training
Task 2: Cryo-EM in the cloud: bringing clouds to the data
Task 3: Increasing the throughput efficiency of WeNMR portals via DIRAC4EGI
Task 4: Cloud VMs for structural biology
Task 5: GPU portals for biomolecular simulations
Task 6: Integrating the micro (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) VRCs
3
Software: our solutions
PowerFit: fitting of atomic structures in cryo-EM density maps using a full exhaustive 6D cross-correlation search based on FFT techniques.
DisVis: visualization and quantification of the accessible interaction space of distance-restrained protein-protein docking, based on FFT techniques.
GROMACS: versatile package to perform molecular dynamics simulations on systems with hundreds to millions of particles.
AMBER: package to perform molecular dynamics simulations.
4
Use case: fitting atomic structures in cryo-EM density maps
PowerFit: fitting of atomic structures in cryo-EM density maps using a full exhaustive 6D cross-correlation search based on FFT techniques.
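The FFT trick at the heart of PowerFit can be illustrated in a few lines: for one orientation of the atomic model, the cross-correlation with the density map over all translations is a single forward/inverse FFT pair instead of a brute-force scan. The sketch below uses plain NumPy on a toy 3D grid; the array names and the idea of resampling the model onto the map grid are illustrative assumptions, not PowerFit's actual code.

```python
import numpy as np

def translational_cross_correlation(em_map, model_density):
    """Score every translation of the model against the map at once,
    using the convolution theorem: corr = IFFT(FFT(map) * conj(FFT(model))).
    Both arrays must share the same 3D grid shape."""
    f_map = np.fft.rfftn(em_map)
    f_model = np.fft.rfftn(model_density)
    return np.fft.irfftn(f_map * np.conj(f_model), s=em_map.shape)

# Toy example: a small blob, and the same blob translated by (5, 2, 7).
model = np.zeros((32, 32, 32))
model[10:13, 10:13, 10:13] = 1.0
em_map = np.roll(model, shift=(5, 2, 7), axis=(0, 1, 2))

scores = translational_cross_correlation(em_map, model)
print(np.unravel_index(np.argmax(scores), scores.shape))  # -> (5, 2, 7)
```

Repeating this for every rotation on a fine angular grid gives the full exhaustive 6D search; most of the runtime goes into the FFTs, which is why the CPU and GPU FFT back-ends on the following slides matter.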
5
Software: PowerFit & DisVis
Core dependencies: NumPy, Cython, SciPy
6
Software: PowerFit & DisVis
Core dependencies: NumPy, Cython, SciPy. Accelerated CPU path: FFTW3, pyFFTW.
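Where pyFFTW is available, the NumPy FFT calls can be swapped for FFTW3-backed, multi-threaded ones with minimal code changes. A minimal sketch, assuming the pyfftw.interfaces drop-in layer; the thread count and array sizes are illustrative.

```python
import multiprocessing
import numpy as np
from pyfftw.interfaces import cache, numpy_fft

# Cache FFTW plans between calls and let FFTW use all available cores.
cache.enable()
n_threads = multiprocessing.cpu_count()

volume = np.random.rand(64, 64, 64)

# Drop-in replacements for np.fft.rfftn / irfftn, backed by FFTW3.
spectrum = numpy_fft.rfftn(volume, threads=n_threads)
roundtrip = numpy_fft.irfftn(spectrum, s=volume.shape, threads=n_threads)

print(np.allclose(volume, roundtrip))  # True: lossless round trip through FFTW
```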
7
Software: PowerFit & DisVis
Core dependencies: NumPy, Cython, SciPy. Accelerated CPU path: FFTW3, pyFFTW. GPGPU acceleration: OpenCL, pyOpenCL, clFFT, gpyFFT.
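On the GPU side the same transforms go through pyOpenCL for context and memory management and clFFT (via its gpyFFT Python wrapper) for the FFTs. The sketch below shows the intended plumbing only; the gpyFFT import path and call signatures are written from memory and may differ between versions, so treat them as assumptions rather than the actual PowerFit/DisVis GPU code.

```python
import numpy as np
import pyopencl as cl
import pyopencl.array as cl_array
from gpyfft.fft import FFT  # assumed import path for the clFFT wrapper

# Create an OpenCL context and command queue on whatever device is available.
context = cl.create_some_context()
queue = cl.CommandQueue(context)

# Upload a single-precision complex volume to the device.
volume = np.random.rand(64, 64, 64).astype(np.complex64)
volume_gpu = cl_array.to_device(queue, volume)

# Plan and enqueue an in-place 3D FFT executed by clFFT on the device.
transform = FFT(context, queue, volume_gpu, axes=(0, 1, 2))
events = transform.enqueue()   # assumed to return OpenCL event(s)
for event in events:
    event.wait()

spectrum = volume_gpu.get()    # copy the transformed volume back to the host
print(spectrum.shape)
```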
8
Software: PowerFit & DisVis
9
Use case: MD simulation of a large protein system
Ferritin is a 450 kDa protein consisting of 24 subunits. An MD simulation in explicit solvent involves:
more than 4000 amino acids
more than … water molecules
total atoms: …
Test simulations were run using AMBER 14 with OpenMPI. Performance on 2x K20m GPUs: 8.66 ns/day.
10
Software: GROMACS & AMBER
Dependencies: CUDA 4.x, MKL, CC & CMake, FFTW3
11
Software: GROMACS & AMBER
Dependencies: CUDA 4.x, MKL, CC & CMake, FFTW3. Output: GBs of data per day per simulation.
12
Queueing & middleware: resources & requirements
Example hardware: cluster of 3 worker nodes, each with 2x Xeon E v2 CPUs, 2x K20m GPUs and 64 GB RAM. Total: 36 CPU cores, 6 GPUs, 192 GB RAM.
13
Queueing & middleware: resources & requirements
Middleware requirements:
One job per GPU (AMBER)
CPUs must be powerful enough to match the GPU: the CPU still does part of the work (e.g. bonded interactions)
GPU resources must be discoverable within the e-infrastructure (e.g. via a JDL requirement), preferably including the GPU type (GTX vs K-series, AMD vs NVIDIA); see the sketch after this list
AMD GPUs are not supported by the MD codes (yet)
Double precision is only supported by Tesla cards
Batch system: Torque & Maui; MPI: OpenMPI
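As a rough illustration of the discoverability point above: a pilot job or site probe can enumerate the local GPUs through OpenCL and report exactly the attributes the middleware would need to publish, including vendor, model and double-precision support. A minimal pyOpenCL sketch; how these attributes are then exposed as JDL requirements is site-specific and not shown.

```python
import pyopencl as cl

# List every OpenCL-visible GPU with the attributes a scheduler cares about:
# vendor (AMD vs NVIDIA), model (GTX vs Tesla/K-series), memory, and whether
# double precision (cl_khr_fp64) is available.
for platform in cl.get_platforms():
    try:
        gpus = platform.get_devices(device_type=cl.device_type.GPU)
    except cl.Error:
        continue  # platform exposes no GPU devices
    for gpu in gpus:
        has_fp64 = "cl_khr_fp64" in gpu.extensions
        print(f"{gpu.vendor} | {gpu.name} | "
              f"{gpu.global_mem_size // 2**20} MiB | fp64={has_fp64}")
```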
14
Conclusions & Questions
Technologies used across the use cases: NumPy, SciPy, Cython, pyFFTW, FFTW3, MKL, OpenCL, pyOpenCL, clFFT, gpyFFT, CUDA, CC & CMake, OpenMPI, Torque & Maui.
Thank you for your attention.