Docking with AutoDock and Molecular Dynamics Analysis with GROMACS: Part of an in silico screening of Indonesian herbal pharmacological activities on a cluster.

Presentation transcript:

Docking with AutoDock and Molecular Dynamics Analysis with GROMACS: Part of an in silico screening of Indonesian herbal pharmacological activities on a cluster computing environment.
(In silico screening of the pharmacological activities of several active compounds of Indonesian medicinal plants using cluster-based High Performance Computing.)
Joint research: Arry Yanuar, Dept. of Pharmacy, and Heru Suhartanto, Faculty of Computer Science, Universitas Indonesia.
Supported by a research grant from the Indonesian Ministry of Research and Technology Office.

Gromacs
- GROMACS is a versatile package for performing molecular dynamics simulations.
- GROMACS can run on a single processor or on multiple processors in parallel, using standard MPI communication.
- Our research compares the performance (run time) of GROMACS on cluster computing resources and on a GPU (Graphics Processing Unit).
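
As a minimal sketch of what this comparison looks like in practice (binary and file names are assumptions based on GROMACS 4.x-era conventions, not the exact commands used in this study), a serial and an MPI-parallel run can be launched as follows:
# Serial run on a single core (topol.tpr is a hypothetical run input file)
mdrun -s topol.tpr -deffnm serial_run
# MPI-parallel run on 8 processes; mdrun_mpi is the MPI-enabled build of mdrun
mpirun -np 8 mdrun_mpi -s topol.tpr -deffnm parallel_run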

Hastinapura
- Hastinapura.grid.ui.ac.id is the first cluster computing resource of the Faculty of Computer Science, Universitas Indonesia.
- The cluster can run both parallel and serial applications (e.g., GROMACS).
- It consists of 16 dual-core machines that act as worker nodes.

Hardware Specification
- Head node
  - Sun Fire X2100
  - AMD Opteron 2.2 GHz (dual core)
  - 2 GB RAM
  - Debian GNU/Linux 3.1 "Sarge"
- Worker nodes (16)
  - Sun Fire X2100
  - AMD Opteron 2.2 GHz (dual core)
  - 1 GB RAM
  - Debian GNU/Linux 3.1 "Sarge"
- Storage node
  - Dual Intel Xeon 2.8 GHz (HT)
  - 2 GB RAM
  - Debian GNU/Linux 4.0-testing "Etch"
  - Hard disk: 3 x 320 GB

GPU Hardware Specification
- Host: dual-core 3.2 GHz CPU, 4 GB RAM, 80 GB hard disk, Ubuntu (…-bit)
- Software: GROMACS with OpenMM
- GPU: GeForce GTS 250
  GPU engine specs:
  - CUDA cores: 128
  - Graphics clock: 738 MHz
  - Processor clock: 1836 MHz
  - Texture fill rate: 47.2 billion/sec
  Memory specs:
  - Memory clock: 1100 MHz
  - Standard memory config: 512 MB or 1 GB GDDR3
  - Memory interface width: 256-bit
  - Memory bandwidth: 70.4 GB/sec
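
As a quick sanity check of the installed card (a sketch assuming the NVIDIA driver is present on the host; the exact output layout varies between driver versions):
# Summary table: GPU model, driver version, memory usage
nvidia-smi
# Detailed query of clocks and memory (long output)
nvidia-smi -q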

File Preparation

File CYP3A4
# Convert the PDB file into .top (topology) and .gro (coordinates)
pdb2gmx -f 1TQN.pdb -p 1TQN.top -o 1TQN.gro
# Periodic boundary condition
editconf -f 1TQN.gro -o 1TQN.gro -d 1.0
# Add solvent around the molecule
genbox -cp 1TQN.gro -cs spc216.gro -p 1TQN.top -o 1TQN-solvate.pdb
# Energy minimization / preprocessing into the run input file
grompp -np 16 -f md.mdp -c 1TQN.gro -p 1TQN.top -o 1TQN-md.tpr
1TQN-md.tpr is ready to be executed on 16 processors.
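
The md.mdp file read by grompp holds the run parameters; its contents are not shown on the slide. A minimal illustrative sketch of such a file is given below, written from the shell; every value and the file name md-example.mdp are assumptions for illustration, not parameters taken from this study:
cat > md-example.mdp <<'EOF'
; run control
integrator   = md
dt           = 0.002      ; 2 fs per step
nsteps       = 100000     ; 100000 x 0.002 ps = 200 ps of simulation
; output control
nstxout      = 1000       ; write coordinates every 2 ps
; neighbour searching and electrostatics
nstlist      = 10
coulombtype  = PME
constraints  = all-bonds
; temperature coupling
tcoupl       = berendsen
tc-grps      = Protein Non-Protein
tau_t        = 0.1 0.1
ref_t        = 300 300
EOF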

Md-job.sh:
#!/bin/sh
# CYP3A4
#$ -N gromacs
#$ -cwd
# Number of processors
#$ -pe mpich 16
#$ -l arch=lx24-x86
#$ -o /export/home/nico/cyp3a4/stdout
#$ -e /export/home/nico/cyp3a4/stderr
#$ -i /export/home/nico/cyp3a4/stdin
#
# Needs:
#   $NSLOTS          the number of tasks to be used
#   $TMPDIR/machines a valid machine file to be passed to mpirun
echo "Got $NSLOTS slots."
/usr/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines \
    /export/home/nico/gromacs/bin/mdrun_mpi \
    -s /export/home/nico/cyp3a4/1TQN-md.tpr \
    -o /export/home/nico/cyp3a4/1TQN-md.trr \
    -c /export/home/nico/cyp3a4/1TQN-after-md.gro \
    -np 16 -v

Submit with: qsub md-job.sh
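
After submission, the job can be followed with the standard Sun Grid Engine monitoring commands (a sketch assuming a default SGE installation; <job_id> is the ID printed by qsub):
qstat               # list the user's pending and running jobs
qstat -j <job_id>   # detailed information on a specific job (replace <job_id>)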

File CYP3A4 (GPU)
# Convert the PDB file into .top (topology) and .gro (coordinates)
pdb2gmx -f 1TQN.pdb -p 1TQN.top -o 1TQN.gro
# Periodic boundary condition
editconf -f 1TQN.gro -o 1TQN.gro -d 1.0
# Add solvent around the molecule
genbox -cp 1TQN.gro -cs spc216.gro -p 1TQN.top -o 1TQN-solvate.pdb
# Energy minimization / preprocessing into the run input file
grompp -f md.mdp -c 1TQN.gro -p 1TQN.top -o 1TQN-md.tpr
# Production simulation
mdrun-openmm -v -deffnm 1TQN-md
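
The -deffnm option sets a common stem for mdrun's default file names, so the GPU run above is roughly equivalent to naming the files explicitly (a sketch; the exact defaults depend on the GROMACS/OpenMM build):
mdrun-openmm -v -s 1TQN-md.tpr -o 1TQN-md.trr -c 1TQN-md.gro -g 1TQN-md.log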

File Curcumin
grompp -np 10 -f md.mdp -c lox_pr.gro -p model.top -o topol.tpr
topol.tpr is ready to be executed on 10 processors.
Simulation length: dt x nsteps = … ps x … = 200 picoseconds
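
For example, with an assumed timestep of dt = 0.002 ps, a 200 ps trajectory requires nsteps = 200 / 0.002 = 100,000 steps; the actual dt and nsteps values are not stated on the slide.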

Md-job.sh:
#!/bin/sh
# Curcumin
#$ -N gromacs
#$ -cwd
# Number of processors
#$ -pe mpich 10
#$ -l arch=lx24-x86
#$ -o /export/home/ari/simulasi/curcumin10/stdout
#$ -e /export/home/ari/simulasi/curcumin10/stderr
#$ -i /export/home/ari/simulasi/curcumin10/stdin
#
# Needs:
#   $NSLOTS          the number of tasks to be used
#   $TMPDIR/machines a valid machine file to be passed to mpirun
echo "Got $NSLOTS slots."
/usr/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines \
    /export/home/nico/gromacs/bin/mdrun_mpi \
    -s /export/home/ari/simulasi/curcumin12/topol.tpr \
    -o /export/home/ari/simulasi/curcumin12/curcumin12.trr \
    -c /export/home/ari/simulasi/curcumin12/lox_pr.gro \
    -np 10 -v

Submit with: qsub md-job.sh

File Curcumin (GPU)
grompp -f md.mdp -c lox_pr.gro -p model.top -o curcumin.tpr
Simulation length: dt x nsteps = … ps x … = 200 picoseconds
# Production simulation
mdrun-openmm -v -deffnm curcumin

Performance Results

File: Curcumin
  Configuration        Performance Time   Simulated Time
  Single processor     24 h 01 m          200 ps
  GPU (GTS 250)        17 h 01 m          200 ps
  GPU (GTS 250)         9 h 24 m          100 ps

File: CYP3A4
  Configuration        Performance Time   Simulated Time
  Single processor     22 h 32 m          200 ps
  GPU (GTS 250)        14 h 23 m          200 ps
  GPU (GTS 250)         7 h 45 m          100 ps
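
The wall-clock times in these tables correspond to the timing summary that mdrun writes at the end of its log file. A quick way to inspect it (a sketch; the log name 1TQN-md.log follows from running with -deffnm 1TQN-md and is otherwise an assumption):
# The final lines of the log contain the time accounting and performance summary
tail -n 30 1TQN-md.log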