NAMD Development Goals
L.V. (Sanjay) Kale, Professor, Dept. of Computer Science
NIH Resource for Biomolecular Modeling and Bioinformatics, Beckman Institute, UIUC

NAMD Vision

Make NAMD a widely used MD program:
– For large molecular systems
– Scaling from PCs and clusters to large parallel machines
– For interactive molecular dynamics

Specific goals for NAMD 3:
– High performance
– Ease of use: easy to configure, set up, and run
– Ease of modification (for us and for advanced users)
– Incorporation of features needed by scientists

NAMD 3 New Features

Scientific/numeric modules:
– Implicit solvent models (e.g., generalized Born)
– Replica exchange (e.g., 10 replicas on 16 processors; see the acceptance-test sketch below)
– Hybrid quantum/classical mechanics
– Self-consistent polarizability with a (sequential) CPU penalty of less than 100%
– Fast nonperiodic (and periodic) electrostatics using multiple-grid methods
– A Langevin integrator that permits larger time steps by being exact for constant forces
– An integrator module that computes shadow energy
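To make the replica-exchange item concrete, here is a minimal sketch of the standard Metropolis swap-acceptance test between two temperature replicas. This is an illustration only, not NAMD source; the struct and function names are assumptions.

// Hypothetical illustration (not NAMD code): the standard Metropolis
// acceptance test used when two temperature replicas propose a swap.
#include <cmath>
#include <random>

struct Replica {
    double temperature;  // K
    double energy;       // kcal/mol, current potential energy
};

// Accept a swap between replicas a and b with probability
// min(1, exp[(beta_a - beta_b) * (E_a - E_b)]).
bool acceptSwap(const Replica& a, const Replica& b, std::mt19937& rng) {
    const double kB = 0.001987204;             // kcal/(mol*K), Boltzmann constant
    const double betaA = 1.0 / (kB * a.temperature);
    const double betaB = 1.0 / (kB * b.temperature);
    const double delta = (betaA - betaB) * (a.energy - b.energy);
    if (delta >= 0.0) return true;              // always accept favorable swaps
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return u(rng) < std::exp(delta);
}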

Terascale Biology and Resources

Resources: PSC LeMieux, RIKEN MDGRAPE, NCSA Tungsten, TeraGrid, ASCI Purple, Red Storm, Thor's Hammer, Cray X1

NAMD on Charm++

– NAMD exploits these diverse resources through Charm++
– Active computer science collaboration (since 1992)
– Object array: a collection of objects A[0], A[1], A[2], A[3], ...
  – The user sees a single indexed array; in the system view the elements are distributed across processors
  – Mapping of objects to processors is handled by the runtime system (a minimal chare-array sketch follows)
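A minimal Charm++ chare-array sketch illustrating the object-array model described above; the module and class names (patch, Patch) are assumptions and this is not NAMD code. The interface (.ci) file declares the indexed collection, and the runtime, not the programmer, decides where each element lives.

// Minimal Charm++ chare-array sketch (illustration only).
/* patch.ci -- Charm++ interface description (assumed module name "patch")
mainmodule patch {
  mainchare Main {
    entry Main(CkArgMsg* m);
  };
  array [1D] Patch {
    entry Patch();
    entry void compute(int step);
  };
};
*/

// patch.C -- C++ side, using the charmc-generated declarations
#include "patch.decl.h"

class Main : public CBase_Main {
 public:
  Main(CkArgMsg* m) {
    delete m;
    // Create 8 array elements; the runtime maps them to processors
    // and may migrate them later for load balance.
    CProxy_Patch patches = CProxy_Patch::ckNew(8);
    patches.compute(0);   // broadcast an entry-method invocation to all elements
    // (termination via CkExit()/quiescence detection omitted for brevity)
  }
};

class Patch : public CBase_Patch {
 public:
  Patch() {}
  void compute(int step) {
    CkPrintf("Patch %d running step %d on PE %d\n", thisIndex, step, CkMyPe());
  }
};

#include "patch.def.h"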

NAMD 3 Features Based on Charm++

– Adaptive load balancing (see the AtSync sketch below)
– Optimized communication
– Flexible, tuned parallel FFT libraries
– Ability to change the number of processors
– Automatic checkpointing
– Scheduling on the grid
– Fault tolerance
  – Fully automated restart
  – Surviving loss of a node
– Scaling to large machines via fine-grained parallelism

[Figure: ATP synthase, 1.02 TeraFLOPS]
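A hedged sketch of the Charm++ hooks that measurement-based load balancing relies on; the class and helper names are assumptions, not NAMD source. An array element marks itself migratable, periodically calls AtSync(), and resumes (possibly on another processor) in ResumeFromSync().

// Sketch of a Charm++ array element opting into AtSync load balancing.
#include "patch.decl.h"   // generated from the .ci file, as in the earlier sketch

class Patch : public CBase_Patch {
 public:
  Patch() {
    usesAtSync = true;     // enable AtSync-based load balancing for this element
  }
  Patch(CkMigrateMessage* m) {}   // migration constructor, required for movable objects

  void pup(PUP::er& p) {
    // A full implementation would also serialize base-class and member state here.
  }

  void doStep(int step) {
    // ... force computation and integration for this patch ...
    if (step % 100 == 0)
      AtSync();            // hand control to the load balancer
    else
      nextStep(step + 1);  // hypothetical helper continuing the iteration
  }

  void ResumeFromSync() {
    // Called by the runtime, possibly on a new processor, after balancing.
    // nextStep(currentStep + 1);
  }

 private:
  void nextStep(int step) { /* schedule the next timestep */ }
};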

Design for Programmability

Software goal:
– Modular architecture to permit reuse and extensibility

NAMD 3 will be a major rewrite of NAMD:
– Incorporate lessons learned in the past years
– Use modern features of Charm++
– Refactor software for modularity: separate physics modules from the parallel framework (a hypothetical interface sketch follows)
– Restructure to support planned features
– Algorithms that scale to even larger machines
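A hypothetical sketch of what separating physics modules from the parallel framework could look like; ForceTerm, PatchData, and GeneralizedBornTerm are invented names for illustration, not the actual NAMD 3 design. A physics module implements a narrow interface, while the framework owns decomposition, communication, and scheduling.

// Hypothetical interface separating physics modules from the parallel framework.
#include <vector>

struct PatchData {                    // local atoms owned by one patch
  std::vector<double> x, y, z;        // positions
  std::vector<double> fx, fy, fz;     // force accumulators
};

class ForceTerm {                     // interface implemented by physics modules
 public:
  virtual ~ForceTerm() = default;
  virtual void computeForces(PatchData& patch) = 0;   // add forces for local atoms
  virtual double energy() const = 0;                   // last computed energy
};

// Example module: an implicit-solvent term could plug in here without
// knowing how patches are distributed or migrated.
class GeneralizedBornTerm : public ForceTerm {
 public:
  void computeForces(PatchData& patch) override {
    // ... evaluate GB solvation forces on patch-local atoms ...
    (void)patch;
  }
  double energy() const override { return lastEnergy_; }
 private:
  double lastEnergy_ = 0.0;
};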

[Diagram: NAMD 3 software stack]
– Platforms: clusters, LeMieux, TeraGrid
– Core Charm++
– Charm++ modules: collective communication, load balancer, FFT, fault tolerance, grid scheduling
– NAMD core: bonded force calculation, pairwise force calculation, PME, integration
– New science modules: replica exchange, QM, implicit solvents, polarizable force field, MDAPI, ...

MDAPI Modular Interface

[Diagram: a front end (input/output, user interface) drives an engine (force computation, integration, ...) through MDAPI, whether linked into one executable or connected over a LAN or grid. Components shown include VMD, NAMD 3, NAMD, Amber, NAMD 2, MINDY, and Charm.]
– Dynamic discovery of engine capabilities (a hypothetical capability-query sketch follows)
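A hypothetical sketch of dynamic capability discovery through an MDAPI-style interface; the class and method names are assumptions and the real MDAPI may differ. The front end queries the engine before configuring a run, so the same user interface can drive engines with different feature sets.

// Hypothetical MDAPI-style engine interface with capability discovery.
#include <string>
#include <vector>

class MDEngine {
 public:
  virtual ~MDEngine() = default;
  // Capability identifiers, e.g. "pme", "implicit-solvent", "replica-exchange".
  virtual std::vector<std::string> capabilities() const = 0;
  virtual bool hasCapability(const std::string& name) const {
    for (const auto& c : capabilities())
      if (c == name) return true;
    return false;
  }
  virtual void run(int steps) = 0;
};

// A front end (GUI or script driver) adapts to whatever engine it is
// connected to, whether linked locally or reached over a LAN or grid.
void configureRun(MDEngine& engine) {
  if (engine.hasCapability("replica-exchange")) {
    // enable replica-exchange options in the user interface
  }
  engine.run(1000);
}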

Efficient Parallelization for IMD

Characteristics:
– Limited parallelism on small systems
– Real-time response needed

Fine-grained parallelization:
– Improve speedups on 4K-30K atom systems
– Time/step goal:
  – Currently 0.2 s/step for BrH on a single processor (P4, 1.7 GHz)
  – Targeting 3 ms/step on 64 processors (20,000 steps per minute, about 20 ps/min at a 1 fs timestep)

Flexible use of clusters:
– Timeshare background and interactive jobs

Integration with CHARMM/Amber?

Goal: NAMD as the parallel simulation engine for CHARMM/Amber
– Generate input files in CHARMM/Amber: NAMD must read the native file formats
– Run with NAMD on a parallel computer: need to use equivalent algorithms
– Analyze the simulation in CHARMM/Amber: NAMD must generate the native file formats

NIH Resource for Biomolecular Modeling and Bioinformatics Beckman Institute, UIUC Proud to be Programmers!