Molecular Dynamics Analysis with GROMACS: Part of an In Silico Screening Study of Indonesian Herbal Pharmacological Activities on a Cluster Computing Environment

Presentation transcript:

Molecular Dynamics analysis with GROMACS: Part of an in silico screening study of Indonesian herbal pharmacological activities on a cluster computing environment
(In silico screening of the pharmacological activities of selected active compounds from Indonesian medicinal plants using cluster-based High Performance Computing)
Joint research: Arry Yanuar, Dept. of Pharmacy, and Heru Suhartanto, Ari Wibisono, Faculty of Computer Science, Universitas Indonesia
Supported by a research grant from the Office of the Indonesian Ministry of Research and Technology

Gromacs  GROMACS is a one of the best program to perform molecular dynamics for protein and bio-molecule simulation.  GROMACS can be run with single processor or using multiple processor (parallel using standard MPI communication)  Our Research is to study the performance (time) between, on the Cluster computing resources and on the GPU (Graphic Processor Unit)

InGrid: let's take a one-minute visit to the portal and the monitor.

Hastinapura
 hastinapura.grid.ui.ac.id is the first cluster computing resource of the Faculty of Computer Science, Universitas Indonesia.
 The cluster can run both parallel and serial applications (such as GROMACS).
 It consists of 16 dual-core machines that act as worker nodes.

Hardware Specification
 Head node: Sun Fire X2100, AMD Opteron 2.2 GHz (dual core), 2 GB RAM, Debian GNU/Linux 3.1 "Sarge"
 Worker nodes (16): Sun Fire X2100, AMD Opteron 2.2 GHz (dual core), 1 GB RAM, Debian GNU/Linux 3.1 "Sarge"
 Storage node: dual Intel Xeon 2.8 GHz (HT), 2 GB RAM, Debian GNU/Linux 4.0-testing "Etch", 3 x 320 GB hard disks

GPU PC Hardware Specification
 Dual-core CPU, 3.2 GHz
 4 GB RAM
 Ubuntu Linux
 Hard disk 80 GB
 GROMACS with OpenMM
 GPU: GeForce GTS 250

GPU engine specs:
  CUDA cores: 128
  Graphics clock: 738 MHz
  Processor clock: 1836 MHz
  Texture fill rate: 47.2 billion/sec

Memory specs:
  Memory clock: 1100 MHz
  Standard memory config: 512 MB or 1 GB GDDR3
  Memory interface width: 256-bit
  Memory bandwidth: 70.4 GB/sec

File Preparation

File CYP3A4 (PDB entry 1TQN)

pdb2gmx -f 1TQN.pdb -p 1TQN.top -o 1TQN.gro                          # convert the PDB file into .top (topology) and .gro files
editconf -f 1TQN.gro -o 1TQN.gro -d 1.0                              # set up the periodic boundary box
genbox -cp 1TQN.gro -cs spc216.gro -p 1TQN.top -o 1TQN-solvate.pdb   # add solvent around the molecule
grompp -np 16 -f md.mdp -c 1TQN.gro -p 1TQN.top -o 1TQN-md.tpr       # energy minimization: assemble the run input

1TQN-md.tpr is now ready to be executed with 16 processors.
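Since the screening applies the same preparation to more than one molecule, the steps above can be wrapped in a loop. A minimal sketch using the same tools and flags; the list of base names is a hypothetical example:

#!/bin/sh
# Sketch: apply the preparation pipeline to several structures in a row.
# The base names below are hypothetical placeholders; set -e aborts on the
# first failing step so a broken topology is not silently carried forward.
set -e
for name in 1TQN other_target; do
    pdb2gmx  -f $name.pdb -p $name.top -o $name.gro
    editconf -f $name.gro -o $name.gro -d 1.0
    genbox   -cp $name.gro -cs spc216.gro -p $name.top -o $name-solvate.pdb
    grompp   -np 16 -f md.mdp -c $name.gro -p $name.top -o $name-md.tpr
done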

Md-job.sh:

#!/bin/sh
# CYP3A4
#$ -N gromacs
#$ -cwd
# Number of processors
#$ -pe mpich 16
#$ -l arch=lx24-x86
#$ -o /export/home/nico/cyp3a4/stdout
#$ -e /export/home/nico/cyp3a4/stderr
#$ -i /export/home/nico/cyp3a4/stdin
#
# Needs:
#   $NSLOTS           the number of tasks to be used
#   $TMPDIR/machines  a valid machine file to be passed to mpirun
echo "Got $NSLOTS slots."
/usr/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines \
    /export/home/nico/gromacs/bin/mdrun_mpi \
    -s /export/home/nico/cyp3a4/1TQN-md.tpr \
    -o /export/home/nico/cyp3a4/1TQN-md.trr \
    -c /export/home/nico/cyp3a4/1TQN-after-md.gro \
    -np 16 -v

Submit with: qsub md-job.sh
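After submission, the job can be followed with the standard Sun Grid Engine commands, for example:

qsub md-job.sh                             # submit; SGE prints the assigned job ID
qstat -u $USER                             # show the state of your queued and running jobs
tail -f /export/home/nico/cyp3a4/stdout    # follow mdrun's progress output while it runs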

File CYP3A4 (GPU)

pdb2gmx -f 1TQN.pdb -p 1TQN.top -o 1TQN.gro                          # convert the PDB file into .top (topology) and .gro files
editconf -f 1TQN.gro -o 1TQN.gro -d 1.0                              # set up the periodic boundary box
genbox -cp 1TQN.gro -cs spc216.gro -p 1TQN.top -o 1TQN-solvate.pdb   # add solvent around the molecule
grompp -f md.mdp -c 1TQN.gro -p 1TQN.top -o 1TQN-md.tpr              # energy minimization: assemble the run input
mdrun-openmm -v -deffnm 1TQN-md                                      # production simulation on the GPU
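To obtain wall-clock figures comparable with the cluster runs, the GPU run can simply be wrapped in time; a minimal sketch (-deffnm 1TQN-md makes mdrun-openmm read 1TQN-md.tpr and write 1TQN-md.trr, 1TQN-md.log, and so on):

# Sketch: measure the wall-clock time of the GPU production run.
time mdrun-openmm -v -deffnm 1TQN-md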

File Curcumin

grompp -np 10 -f md.mdp -c lox_pr.gro -p model.top -o topol.tpr      # assemble the run input

topol.tpr is now ready to be executed with 10 processors.
Simulation length: dt x nsteps = … picoseconds x … steps = 200 picoseconds.
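The two elided numbers are the dt and nsteps parameters from md.mdp. A small sketch that recomputes the simulated time from the .mdp file, assuming the standard "dt = ..." and "nsteps = ..." lines (for instance, dt = 0.002 ps with nsteps = 100000 would give the 200 ps used here):

# Sketch: read dt and nsteps from md.mdp and print the simulated time.
awk -F'=' '
  /^[ \t]*dt[ \t]*=/     { dt = $2 }
  /^[ \t]*nsteps[ \t]*=/ { n  = $2 }
  END { printf "simulated time: %g ps\n", dt * n }
' md.mdp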

Md-job.sh (curcumin):

#!/bin/sh
# Curcumin
#$ -N gromacs
#$ -cwd
# Number of processors
#$ -pe mpich 10
#$ -l arch=lx24-x86
#$ -o /export/home/ari/simulasi/curcumin10/stdout
#$ -e /export/home/ari/simulasi/curcumin10/stderr
#$ -i /export/home/ari/simulasi/curcumin10/stdin
#
# Needs:
#   $NSLOTS           the number of tasks to be used
#   $TMPDIR/machines  a valid machine file to be passed to mpirun
echo "Got $NSLOTS slots."
/usr/bin/mpirun -np $NSLOTS -machinefile $TMPDIR/machines \
    /export/home/nico/gromacs/bin/mdrun_mpi \
    -s /export/home/ari/simulasi/curcumin12/topol.tpr \
    -o /export/home/ari/simulasi/curcumin12/curcumin12.trr \
    -c /export/home/ari/simulasi/curcumin12/lox_pr.gro \
    -np 10 -v

Submit with: qsub md-job.sh

File Curcumin (GPU)

grompp -f md.mdp -c lox_pr.gro -p model.top -o curcumin.tpr          # assemble the run input
mdrun-openmm -v -deffnm curcumin                                     # production simulation on the GPU

Simulation length: dt x nsteps = … picoseconds x … steps = 200 picoseconds.

Performance Results

File: Curcumin
  Run              Wall-clock time   Simulated time
  1 CPU            24h:01m           200 ps
  GPU              17h:01m           200 ps
  InGrid, 1 CPU    18h               200 ps
  InGrid, 2 CPU    14h               200 ps
  InGrid, 4 CPU    8h                200 ps
  InGrid, >4 CPU   Problems!!!

File: CYP3A4
  Run              Wall-clock time   Simulated time
  1 CPU            22h:32m           200 ps
  GPU              14h:23m           200 ps

The CYP3A4 results on InGrid are almost the same as the results above.
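Read as speedups, comparing like hardware with like, the curcumin timings give roughly the following; a quick sketch of the arithmetic:

# Sketch: speedups implied by the curcumin timings above
# (wall-clock hours from the table; each ratio compares runs on the same hardware).
awk 'BEGIN {
  printf "GPU PC: GPU vs 1 CPU    %.2fx\n", (24 + 1/60) / (17 + 1/60)   # 24h:01m vs 17h:01m
  printf "InGrid: 2 CPU vs 1 CPU  %.2fx\n", 18 / 14
  printf "InGrid: 4 CPU vs 1 CPU  %.2fx\n", 18 / 8
}'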

The next target: improve the performance further.
Thank you!