The Promise of Learning Everywhere and MLforHPC


The Promise of Learning Everywhere and MLforHPC

Geoffrey Fox and Shantenu Jha, contribution to the Kobe BDEC meeting.

Using HPC to enhance ML is important, but arguably more important is the question: can ML enhance the effective performance of HPC simulations? We argue that ML can enhance HPC simulations by a factor of 10^6 if not greater. This enhancement is not measured in flops or the usual performance measures, but in the science done using the same amount of computing for a given accuracy. Many challenges must be overcome: the right hardware, the right software system (platform), and the right application architecture and formulation. For details, see the team papers:

http://dsc.soic.indiana.edu/publications/Learning_Everywhere.pdf
http://dsc.soic.indiana.edu/publications/Learning_Everywhere_Summary.pdf

MLforHPC and HPCforML

We distinguish between different interfaces between ML/DL and HPC.

HPCforML: using HPC to execute and enhance ML performance, or using HPC simulations to train ML algorithms (theory-guided machine learning), which are then used to understand experimental data or simulations.
- HPCrunsML: using HPC to execute ML with high performance.
- SimulationTrainedML: using HPC simulations to train ML algorithms, which are then used to understand experimental data or simulations.

MLforHPC: using ML to enhance HPC applications and systems.
- MLautotuning: using ML to configure (autotune) ML or HPC simulations.
- MLafterHPC: ML analyzing the results of HPC, e.g., trajectory analysis in biomolecular simulations.
- MLaroundHPC: using ML to learn from simulations and produce learned surrogates for the simulations. The same ML wrapper can also learn configurations as well as results.
- MLControl: using simulations (with HPC) in the control of experiments and in objective-driven computational campaigns, where simulation surrogates allow real-time predictions.

2/13/2019

ML-Based Simulation Configuration (MLAutotuned HPC)

Machine Learning for Parameter Auto-tuning in Molecular Dynamics Simulations: Efficient Dynamics of Ions near Polarizable Nanoparticles (JCS Kadupitiya, Geoffrey Fox, Vikram Jadhao).

This work integrates machine learning (ML) methods for parameter prediction into MD simulations, demonstrating how they are realized in MD simulations of ions near polarizable nanoparticles. Note that ML is used at the start and end of each simulation block. [Slide figure: workflow with Testing, Training, Inference I, and Inference II stages.]
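The parameter-prediction idea above, reduced to a toy sketch: a tiny nearest-neighbour regressor suggests a simulation parameter from previously completed runs. Everything here is an illustrative assumption — the `knn_predict` helper, the concentration/timestep data, and the choice of k-NN are hypothetical, not the authors' actual method.

```python
# Minimal sketch of ML-based autotuning: suggest an MD timestep for a new
# simulation from the timesteps that worked in earlier trial runs, using a
# tiny k-nearest-neighbour regressor. All numbers are placeholders.

def knn_predict(train_x, train_y, query, k=3):
    """Average the targets of the k nearest training points (1-D inputs)."""
    ranked = sorted(zip(train_x, train_y), key=lambda p: abs(p[0] - query))
    return sum(y for _, y in ranked[:k]) / k

# Hypothetical training set: (ion concentration, timestep found stable by trial runs)
concentrations = [0.1, 0.2, 0.4, 0.8, 1.6]
stable_timesteps = [2.0, 1.8, 1.5, 1.1, 0.8]   # femtoseconds, illustrative

# Before launching a new simulation block, ask the model for a timestep.
suggested = knn_predict(concentrations, stable_timesteps, query=0.5)
print(f"suggested timestep: {suggested:.2f} fs")
```

The same pattern scales up by replacing the k-NN with any regression model and the single input with the full vector of simulation parameters.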

ML-Based Simulation Prediction (ANN Model)

MLaroundHPC: machine learning for performance enhancement with surrogates of molecular dynamics simulations.

We find that an artificial-neural-network-based regression model successfully learns the desired features associated with the output ionic density profiles (the contact, mid-point, and peak densities), generating predictions for these quantities that are in excellent agreement with the results from explicit molecular dynamics simulations. The integration of an ML layer enables real-time and anytime engagement with the simulation framework, enhancing its applicability for both research and educational use. Here ML is used during the simulation: the ANN model is trained on simulation results and then queried at inference time. Deployed on nanoHUB for education.
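A minimal, self-contained illustration of the surrogate idea: a one-hidden-layer neural network (pure Python, no frameworks) fits a toy function standing in for an expensive simulation's output feature. The network size, training data, and target function are assumptions for illustration only; the actual work trains a larger ANN on real MD outputs.

```python
import math
import random

# Sketch of MLaroundHPC: train a tiny one-hidden-layer tanh network as a
# surrogate for a "simulation" (here a cheap toy function), then use the
# network for fast inference instead of re-running the simulation.

random.seed(0)

def target(x):
    # Stand-in for an expensive simulation's output feature.
    return math.sin(2 * x) + 0.5 * x

# "Run the simulation" Ntrain times to build a training set.
train_x = [i / 20 for i in range(21)]           # inputs in [0, 1]
train_y = [target(x) for x in train_x]

H = 8                                           # hidden units
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    hidden = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * hidden[j] for j in range(H)) + b2, hidden

# Plain stochastic gradient descent on squared error.
lr = 0.05
for epoch in range(3000):
    for x, y in zip(train_x, train_y):
        pred, hidden = forward(x)
        err = pred - y
        for j in range(H):
            grad_h = err * w2[j] * (1 - hidden[j] ** 2)
            w2[j] -= lr * err * hidden[j]
            w1[j] -= lr * grad_h * x
            b1[j] -= lr * grad_h
        b2 -= lr * err

# Inference is now a cheap function evaluation instead of a full simulation.
print(forward(0.37)[0], target(0.37))
```

The speedup comes from the asymmetry: training cost is paid once over Ntrain simulations, after which each lookup costs only a forward pass.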

Speedup of MLaroundHPC

- Tseq is the sequential time
- Ttrain is the time for a (parallel) simulation used in training the ML
- Tlearn is the time per point to run machine learning
- Tlookup is the time to run inference per instance
- Ntrain is the number of training samples (7K to 16K in our work)
- Nlookup is the number of results looked up

Combining these, the effective speedup consistent with the limiting cases below (see the team papers for the derivation) is

S = Nlookup * Tseq / (Ntrain * (Ttrain + Tlearn) + Nlookup * Tlookup)

This becomes Tseq/Ttrain if ML is not used, and Tseq/Tlookup (10^5 faster in our case) if inference dominates; the latter will overcome the end of Moore's law and win the race to zettascale. This application is deployed on nanoHUB for high-performance educational use.
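The quantities on this slide combine into an effective speedup whose limits match the two cases stated here (Tseq/Ttrain when ML is not used, Tseq/Tlookup when inference dominates). A sketch of that expression with illustrative, hypothetical numbers:

```python
# Effective speedup of MLaroundHPC, written so that its limits agree with
# the slide: S -> Tseq/Ttrain with no ML, S -> Tseq/Tlookup when inference
# dominates. The numbers below are illustrative, not from the paper.

def effective_speedup(t_seq, t_train, t_learn, t_lookup, n_train, n_lookup):
    """S = Nlookup * Tseq / (Ntrain*(Ttrain + Tlearn) + Nlookup*Tlookup)."""
    total_cost = n_train * (t_train + t_learn) + n_lookup * t_lookup
    return n_lookup * t_seq / total_cost

# Inference-dominated regime: S approaches Tseq / Tlookup = 1e5 here.
s = effective_speedup(t_seq=100.0, t_train=1.0, t_learn=0.01,
                      t_lookup=1e-3, n_train=10_000, n_lookup=10**9)
print(s)
```

Setting Ntrain = Nlookup and Tlearn = Tlookup = 0 recovers the no-ML case S = Tseq/Ttrain, since every result is then produced by a full simulation.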