Nara Institute of Science and Technology, Nara Prefecture, Japan CONFIGURATION AND DEPLOYMENT OF A SCALABLE VIRTUAL MACHINE CLUSTER FOR MOLECULAR DOCKING.


Nara Institute of Science and Technology, Nara Prefecture, Japan CONFIGURATION AND DEPLOYMENT OF A SCALABLE VIRTUAL MACHINE CLUSTER FOR MOLECULAR DOCKING Karen Rodriguez 8/7/2013

Overview Virtual machines (VMs) have been observed to yield molecular docking results that are far more consistent than those obtained from a grid configuration. The inhomogeneous results obtained from a grid are thought to stem from physical differences among the cluster's components; creating and networking cloned VMs eliminates this source of variability. The objectives of this study are to construct a clustered VM environment that scales with job demand and yields consistent docking results. The system is to be tested, packaged, and deployed on the PRAGMA grid, providing sufficient computing resources to perform a full-scale docking project against large protein databases.
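The consistency claim above can be checked mechanically by extracting the final scores from each node's DOCK output and diffing them. A minimal sketch follows; the file names are placeholders and the two output files are fabricated here purely to illustrate the comparison (in practice they would be copied back from each VM after running the same DOCK input):

```shell
#!/bin/sh
# Hypothetical consistency check: compare docking scores from two nodes.
# The output files are fabricated for illustration; real files would be
# retrieved from masternode and slavenode1 after identical DOCK runs.
printf 'ligand_1\nGrid_Score: -42.17\n' > masternode.out
printf 'ligand_1\nGrid_Score: -42.17\n' > slavenode1.out

# Extract the score lines and compare; identical clones should agree exactly.
grep '^Grid_Score' masternode.out > scores_a.txt
grep '^Grid_Score' slavenode1.out > scores_b.txt
if diff -q scores_a.txt scores_b.txt >/dev/null; then
    echo "scores match"
else
    echo "scores differ"
fi
```

On a grid of heterogeneous physical nodes, the same extraction would surface the score discrepancies that motivated the move to cloned VMs.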

Week 6: Continued working with PRAGMA administration to upload the VM (CentOS 5.9, gcc 4.1.2, Dock 6.2). However, problems may arise because this is a 32-bit machine and OS, while most VMs on PRAGMA are 64-bit. Since we are not sure this will work, we built another VM (Sailboat) with the same configuration but a 64-bit OS and architecture. Sailboat does not work as expected: its Dock test scores do not match, and MPICH cannot be enabled because parts of the compiled library are missing. Another option was to build a VM (Master) with CentOS 5.9, gcc 4.1.2, and Dock 6.2 on a 64-bit machine (to fit PRAGMA standards) but with a 32-bit OS (to reproduce the correct test scores). Its test scores appear just as accurate as those of the machines we originally wanted to use, so it is the primary backup if those do not work. The CentOS5 scores (with Dock 6.6) seem to match the developers' scores, but we are not yet sure about the docking outputs viewed in Chimera; this will be a second alternative if it can be resolved.
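The machine-bitness versus OS-bitness distinction above can be verified from inside a running guest with standard Linux tools; a minimal sketch (no PRAGMA-specific values assumed):

```shell
#!/bin/sh
# Check hardware architecture vs. word size of the running OS.
# Useful when matching a 32-bit guest userland to 64-bit PRAGMA hosts.
arch=$(uname -m)          # hardware name, e.g. i686 or x86_64
bits=$(getconf LONG_BIT)  # word size of the running OS: 32 or 64
echo "machine: $arch, OS word size: $bits-bit"
if [ "$arch" = "x86_64" ] && [ "$bits" = "32" ]; then
    # This combination corresponds to the Master VM layout described above.
    echo "64-bit CPU running a 32-bit OS"
fi
```

Running this inside each candidate VM (Masternode, Sailboat, Master) would distinguish the three 32/64-bit combinations discussed above.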

Week 6, table of docking results: machine bitness does not seem to make as much of a difference as OS bitness. Masternode and Slavenode1 are currently being processed for upload to PRAGMA. Master and CentOS5 are backups should the 32-bit VMs not work.

Future Plans Continue working with PRAGMA administration to upload Masternode and Slavenode1 (32-bit OS on 32-bit machines). The virtual machine images have been provided; we are now waiting for them to be uploaded to the Gfarm pool. Once uploaded, the images must be retrieved at their respective sites. This will be done initially with a deployment script similar to those published on the PRAGMA wiki, and the process will be customized as needed in consultation with the administrators of the remote clusters. Masternode will be moved to the Rocks cluster so that the network can be controlled from the server in San Diego. Slavenode copies will be distributed to remote clusters that can support KVM, and the machines will be networked using the N2N peer-to-peer VPN.
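The retrieve-boot-network sequence for a remote site could be sketched as below. This is only an illustration under stated assumptions: the Gfarm paths, memory size, overlay addresses, community name, and supernode host are all placeholders, not actual PRAGMA infrastructure values, and the real deployment scripts on the PRAGMA wiki may differ.

```shell
#!/bin/sh
# Hypothetical per-site deployment sketch (all names/paths are placeholders).
set -e

# 1. Fetch the Slavenode image from the Gfarm pool to local disk.
gfexport /vm-images/slavenode1.img > /var/lib/libvirt/images/slavenode1.img

# 2. Boot the image under KVM on the remote cluster node.
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=/var/lib/libvirt/images/slavenode1.img,format=raw \
    -net nic -net user -daemonize

# 3. Inside the guest, join the N2N overlay so all nodes share one
#    virtual LAN (edge is the N2N client; address, community name, key,
#    and supernode are placeholders):
# edge -a 10.1.2.11 -c pragma-dock -k sharedkey -l supernode.example.org:7654
```

With every Slavenode joined to the same N2N community, Masternode on the Rocks cluster in San Diego can address them as if they were on one LAN.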

Acknowledgments Mentors: Dr. Jason Haga, UC San Diego Bioengineering; Dr. Kohei Ichikawa, Nara Institute of Science and Technology. UCSD PRIME Program: Teri Simas, Jim Galvin, Dr. Gabrielle Wienhausen, Dr. Peter Arzberger, Tricia Taylor. Funding: NAIST; Japanese Student Services Organization (JASSO); PRIME; PRIME alumna Haley Hunter-Zinck; National Science Foundation.