GRID Middleware for Biomolecular Science Applications: A User's Perspective
Kashif Sadiq
Centre for Computational Science, University College London

Overview
- Scientific Motivation: HIV-1 Protease
- Computational Techniques: Ensemble MD, Thermodynamic Integration
- HPC Requirements
- Middleware Requirements: MOCAS
- Conclusion

HIV-1 Protease
- Enzyme of HIV responsible for protein maturation
- Target for anti-retroviral inhibitors
- An example of structure-assisted drug design: 8 FDA-approved inhibitors of HIV-1 protease
So what's the problem?
- Drug-resistant mutations emerge in the protease and render the drugs ineffective
- Drug-resistant mutants have emerged for all FDA-approved inhibitors
[Figure: HIV-1 protease structure, with labels for Monomer A (residues 1-99), Monomer B (residues 101-199), the flaps (Gly 48/148), the catalytic aspartic acids (Asp 25/125), Leu 90/190, the P2 subsite, saquinavir, and the N- and C-termini]

Molecular Dynamics Simulations of HIV-1 Protease
Aims:
- Study the differential interactions between wild-type and mutant proteases with an inhibitor
- Gain insight at the molecular level into the dynamical cause of drug resistance
- Determine conformational differences of the drug in the active site
- Calculate drug binding affinities
Systems studied:
- Inhibitor: Saquinavir
- Mutant 1: G48V (Glycine to Valine)
- Mutant 2: L90M (Leucine to Methionine)

Computational Techniques: Ensemble MD
- Ensemble MD is well suited to an HPC Grid
- Simulate each system many times from the same starting conformation
- Each run is assigned randomized initial atomic velocities consistent with the target temperature, which allows broader conformational sampling
- Workflow: starting conformation -> equilibration protocols (eq1-eq8) -> launch simultaneous production runs (60 simulations, each 1.5 ns) -> ensemble of end conformations (C1, C2, C3, C4, ..., Cx)
- S.K. Sadiq, S. Wan and P.V. Coveney, Biophys. J. (submitted)
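As a concrete illustration of how such an ensemble might be launched, here is a minimal Python sketch. It assumes a NAMD-style template configuration file (eq_template.conf) containing {seed} and {run_id} placeholders and a local "qsub run.sh" submission command; these names are assumptions made for illustration, not part of the workflow described on the slide.

```python
#!/usr/bin/env python3
"""Minimal sketch: generate an ensemble of independent MD runs.

Assumptions (not from the slides): a template config 'eq_template.conf'
with {seed} and {run_id} placeholders, and a 'qsub run.sh' submission
command. Adjust both to the local resource or middleware.
"""
import pathlib
import random
import subprocess

N_REPLICAS = 60          # 60 simulations, each 1.5 ns
TEMPLATE = pathlib.Path("eq_template.conf").read_text()

for run_id in range(1, N_REPLICAS + 1):
    workdir = pathlib.Path(f"replica_{run_id:02d}")
    workdir.mkdir(exist_ok=True)

    # Each replica differs only in the random seed used to draw initial
    # velocities, which is what gives the ensemble its conformational spread.
    conf = TEMPLATE.format(seed=random.randint(1, 2**31 - 1), run_id=run_id)
    (workdir / "md.conf").write_text(conf)

    # Hypothetical submission step; on a grid this would go through the
    # middleware discussed later rather than a local batch scheduler.
    subprocess.run(["qsub", "run.sh"], cwd=workdir, check=True)
```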

Computational Techniques: Thermodynamic Integration
- Thermodynamic integration (TI) is ideally suited to an HPC Grid for calculating drug binding affinities
- From the starting conformation, seed successive simulations at coupling parameter values λ = 0.1, 0.2, 0.3, ..., 0.9 (10 simulations, each 2 ns)
- Run each independent job on the Grid; use steering to launch, spawn and terminate jobs
- Monitor V and ∂V/∂λ against simulation time and check each λ window for convergence
- Combine the converged averages and calculate the integral ΔG = ∫₀¹ ⟨∂V/∂λ⟩ dλ
- P.W. Fowler, S. Jha and P.V. Coveney, Phil. Trans. R. Soc. A, 363, 1999-2015 (2005)
[Figure: plots of V and ∂V/∂λ against simulation time for the successive λ windows]
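The "combine and calculate integral" step can be made concrete with a short post-processing sketch. The file names (lambda_0.1.dat and so on), the convergence threshold and the use of the trapezoidal rule are assumptions made for illustration; the cited Fowler et al. paper describes the actual protocol.

```python
#!/usr/bin/env python3
"""Sketch of the thermodynamic-integration post-processing step.

Assumes each lambda window has written a plain-text time series of
dV/dlambda values to 'lambda_0.1.dat' ... 'lambda_0.9.dat' (hypothetical
file names). The free-energy difference is the integral of the converged
averages over lambda, approximated here with the trapezoidal rule.
"""
import numpy as np

lambdas = np.round(np.arange(0.1, 1.0, 0.1), 1)   # 0.1 ... 0.9
means = []

for lam in lambdas:
    series = np.loadtxt(f"lambda_{lam}.dat")

    # Crude convergence check: compare the mean over the second half of the
    # run with the mean over the last quarter; flag windows that still drift.
    half, quarter = series[len(series) // 2:], series[3 * len(series) // 4:]
    if abs(half.mean() - quarter.mean()) > 0.1:    # threshold is arbitrary
        print(f"warning: lambda={lam} may not be converged")

    means.append(half.mean())

# Endpoints (lambda = 0 and 1) are often treated separately; here we simply
# integrate over the sampled windows.
delta_G = np.trapz(means, lambdas)
print(f"Estimated Delta G over the sampled windows: {delta_G:.3f} "
      "(units follow the input data)")
```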

HPC Requirements

Computational Technique    | Total simulations required | ns per simulation | CPUs for optimal efficiency
Ensemble MD                | 60                         | 1.5               | 2560
Thermodynamic Integration  | 20                         | 2                 | 640

- Each simulation ideally uses 32 CPUs (~30,000 atoms)
- NAMD scaled performance: currently 8 hours/ns
- Requires simultaneous use of multiple, heterogeneous supercomputing resources
- Requires improved policies: resource co-allocation; capability computing vs. task farming
- EMD and TI are techniques that can really take advantage of an HPC Grid, but they also require suitable middleware to make the simulations manageable
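To put the table in perspective, a quick back-of-the-envelope calculation, using the 32 CPUs per simulation and 8 hours/ns quoted above, gives the wall-clock and aggregate CPU-hour cost of each campaign. The script below is purely illustrative.

```python
#!/usr/bin/env python3
"""Back-of-the-envelope cost of the two campaigns in the table above,
assuming 32 CPUs per simulation and NAMD throughput of 8 wall-clock hours
per nanosecond (figures quoted on the slide)."""

CPUS_PER_SIM = 32
HOURS_PER_NS = 8

campaigns = {
    # name: (number of simulations, nanoseconds per simulation)
    "Ensemble MD": (60, 1.5),
    "Thermodynamic Integration": (20, 2.0),
}

for name, (n_sims, ns_each) in campaigns.items():
    wall_hours_each = ns_each * HOURS_PER_NS               # per simulation
    cpu_hours_total = n_sims * wall_hours_each * CPUS_PER_SIM
    print(f"{name}: {n_sims} sims x {ns_each} ns "
          f"-> {wall_hours_each:.0f} h wall-clock each if run concurrently, "
          f"{cpu_hours_total:,.0f} CPU-hours in total")
```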

Middleware Requirements (MOCAS)
- Monitoring: checking resource status and job progression
- Optimizing: determining the best resources on which to launch specific jobs
- Chaining: launching a series of jobs in a sequential chain
- Automating: launching multiple jobs automatically based on definable prerequisites, with automatic staging, retrieval and clean-up
- Steering: interacting with and changing the direction of a simulation during job execution
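A minimal sketch may help make these requirements concrete. GridClient and its submit/status/fetch_output methods are invented for this illustration (they are not the AHE or GSISSH interfaces); the sketch shows chaining of the eq1-eq8 equilibration steps into a production run, with monitoring and automated staging driven from a single script.

```python
#!/usr/bin/env python3
"""Hypothetical middleware-driven workflow illustrating the MOCAS ideas.
GridClient and its methods are invented for this sketch; they are not the
real AHE or GSISSH interfaces."""
import time


class GridClient:
    """Stand-in for a grid middleware client (hypothetical API)."""

    def submit(self, resource: str, config: str, inputs: list[str]) -> str:
        raise NotImplementedError("wire this to the real middleware")

    def status(self, job_id: str) -> str:
        raise NotImplementedError

    def fetch_output(self, job_id: str, dest: str) -> None:
        raise NotImplementedError


def run_chained(client: GridClient, resource: str) -> None:
    # Chaining: eq1..eq8 must complete in order before the production run.
    previous_output = "start.coor"
    for step in [f"eq{i}.conf" for i in range(1, 9)] + ["production.conf"]:
        job = client.submit(resource, step, inputs=[previous_output])

        # Monitoring / automation: poll until the job finishes, then stage
        # its output back so the next link in the chain can use it.
        while client.status(job) not in ("DONE", "FAILED"):
            time.sleep(60)
        if client.status(job) == "FAILED":
            raise RuntimeError(f"{step} failed on {resource}")

        client.fetch_output(job, dest=f"{step}.out")
        previous_output = f"{step}.out"
```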

GSISSH vs AHE

GSISSH:
- Globus installation is difficult, but does that affect the user?
- Resource awareness
- Manual file staging, retrieval and monitoring for each resource
- Chaining is simple
- Multiple job submission is simple but limited to a single resource

AHE:
- Client can be installed by the user; no administrator necessary
- Does some monitoring, with centralized staging and retrieval
- Single interface to multiple grid resources
- No queue/resource availability information
- Chaining is currently limited
- Multiple job submission is simple and can be implemented across multiple resources
- Potential for MOCAS
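To illustrate the contrast, here is a sketch of the manual, per-resource workflow implied by the GSISSH column; the host names, remote paths and the qsub/qstat commands are illustrative assumptions, not a prescribed procedure.

```python
#!/usr/bin/env python3
"""Sketch of manual per-resource staging and submission in the GSISSH model.
Host names, remote paths and the 'qsub'/'qstat' commands are illustrative."""
import subprocess

RESOURCES = ["hpcx.example.ac.uk", "ngs.example.ac.uk"]   # hypothetical hosts


def run(cmd: list[str]) -> None:
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)


for host in RESOURCES:
    # Manual staging: the user pushes inputs to every resource themselves...
    run(["gsiscp", "md.conf", "system.psf", "run.sh", f"{host}:job/"])
    # ...submits against that machine's own batch system...
    run(["gsissh", host, "cd job && qsub run.sh"])
    # ...and later has to come back, check the queue, and pull results home.
    run(["gsissh", host, "qstat"])
    run(["gsiscp", f"{host}:job/output.dcd", f"output_{host}.dcd"])
```

With an AHE-style single interface, the staging, monitoring and retrieval inside this loop would instead be handled centrally, as in the previous sketch.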

Conclusion
- Several techniques exist, and more are emerging, that can take advantage of HPC Grid resources
- Robust, easy-to-use middleware helps the scientist apply these techniques in a feasible manner

Acknowledgements
Collaborators:
- Department of Infection & Immunity, UCL: Paul Kellam, Deenan Pillay, Robert Gifford, Simon Watson
- MRC HIV Clinical Trials Unit, UCL: David Dunn
CCS:
- Peter Coveney, Radhika Saksena, Stefan Zasada, Shunzhou Wan, Shantenu Jha, Philip Fowler