Deep Neural Networks for Onboard Intelligence

Deep Neural Networks for Onboard Intelligence
Daniel Davila, daniel.davila@swri.org
Southwest Research Institute
This is a non-ITAR presentation, for public release and reproduction from the FSW website.

How Do We Handle All This Data?
Modern hardware is capable of pushing out very high-volume, high-velocity data
Need to prioritize observation opportunities before they are lost
LSST: 20 TB / day
MASPEX: 2.5 GB / hr

Convolutional Neural Network SparkNotes
A deep learning architecture
CNNs come in many shapes and sizes
Applications include image recognition, object detection, and signal processing
The primary operation is convolution on matrices
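As a minimal sketch of the "primary operation" named above (my own illustration, not code from the presentation): a 1D convolution is a sliding dot product between a small kernel and the input signal, implemented here in plain Python.

```python
def conv1d_valid(signal, kernel):
    """1D valid-mode convolution (cross-correlation, as deep learning
    frameworks implement it): slide the kernel along the signal and take
    the dot product at each position."""
    n, k = len(signal), len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]

# A [1, 0, -1] kernel acts as a simple difference/edge detector on a 1D signal.
signal = [0, 0, 1, 3, 1, 0, 0]
kernel = [1, 0, -1]
print(conv1d_valid(signal, kernel))  # → [-1, -3, 0, 3, 1]
```

The same sliding dot product generalizes to 2D matrices for image inputs; stacking many such kernels per layer is what gives CNNs their varied "shapes and sizes".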

Why Machine Learning on FPGA?
Massive parallel computing power = faster processing of data
Low power
Onboard processing
FPGAs are commonly used in space

In-Situ Intelligence Combines:
Best practices in compact AI algorithm design
Modern flight FPGA densities
Access to ample ground data for training

Model: Mass Spectrometer for Planetary Exploration
Targeted at planetary science missions
The majority of data collected by the instrument is noise
Can a neural network be used to automatically select the relevant signal for transmission?

Challenge: Spaceflight Hardware
Typical ground hardware vs. typical space hardware

Dataset Description
Information sparsity
Shifting baselines, noisy peaks, low-SNR peaks

Model Architecture: 1D Conv Net
Ingests segments of length 120 as input
Iterates continuously over the time-series signal
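A hedged sketch of how a network might "iterate continuously" over a long trace: slice the series into fixed-length windows (length 120 comes from the slide; the non-overlapping stride is my assumption) and feed each window to the model in turn.

```python
def segment(series, length=120, stride=120):
    """Yield consecutive fixed-length windows over a 1D series.
    Any trailing samples shorter than `length` are dropped."""
    for start in range(0, len(series) - length + 1, stride):
        yield series[start:start + length]

series = list(range(600))           # stand-in for a mass-spectrometer trace
segments = list(segment(series))
print(len(segments), len(segments[0]))  # → 5 120
```

An overlapping stride (stride < length) would trade extra compute for robustness to peaks that straddle a window boundary.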

Network Compression
Goal: minimize memory usage (determined by the size of the weights and the intermediate layer outputs)
Goal: minimize hardware footprint (determined by the width and depth of the network)
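A back-of-the-envelope memory model for the two quantities named above (my illustration, not the authors' tooling): a Conv1D layer stores kernel_size * in_channels * out_channels weights plus one bias per output channel, and its activation buffer holds output_length * out_channels values.

```python
def conv1d_weights(kernel_size, in_ch, out_ch):
    """Weight parameters of a Conv1D layer, including biases."""
    return kernel_size * in_ch * out_ch + out_ch

def conv1d_activations(out_len, out_ch):
    """Values in the layer's output buffer."""
    return out_len * out_ch

# Example: a kernel-size-3 layer mapping 16 -> 32 channels over a
# length-120 segment ('valid' convolution gives output length 118).
w = conv1d_weights(3, 16, 32)      # 1568 weights
a = conv1d_activations(118, 32)    # 3776 activation values
print((w + a) * 4, "bytes at float32")  # → 21376 bytes at float32
```

Summing these per-layer figures over the whole network gives the memory budget the FPGA implementation has to fit.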

Introduction of Fire Modules
Reduces the footprint of conv layers by implementing a squeeze mechanism
7x reduction in model size with the same performance

Model Architecture              | Number of Weight Parameters | Model File Size
Fully Convolutional Network     | 616,386                     | 1.5 MB
Fully Convolutional Squeeze-net | 264,786                     | 236 KB
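The "squeeze mechanism" refers to the SqueezeNet-style fire module: a narrow 1x1 "squeeze" layer followed by parallel 1x1 and 3-wide "expand" branches, in place of one wide conv layer. The parameter counting below shows why this shrinks the model; the channel counts are illustrative assumptions, not the presentation's actual layer sizes.

```python
def plain_conv_params(k, c_in, c_out):
    """Weights of an ordinary width-k conv layer (biases omitted)."""
    return k * c_in * c_out

def fire_params(c_in, squeeze, expand1, expand3):
    """Weights of a fire module: 1x1 squeeze, then parallel 1x1 and
    width-3 expand branches (biases omitted)."""
    return (1 * c_in * squeeze        # squeeze layer
            + 1 * squeeze * expand1   # 1x1 expand branch
            + 3 * squeeze * expand3)  # width-3 expand branch

plain = plain_conv_params(3, 64, 128)                         # 24576
fire = fire_params(64, squeeze=16, expand1=64, expand3=64)    # 5120
print(plain, fire)  # → 24576 5120, roughly a 5x reduction here
```

The squeeze layer bottlenecks the channel count before the expensive width-3 convolution, which is where most of the savings come from.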

Peak Compression Strategy
Need to define a data field that can store the information required to reconstruct peak locations
size = 2 + [extracted peak data] + [2 * n_peaks]
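One plausible reading of that layout (my assumption, since the slide does not spell out the field semantics): a 2-value header, then the extracted peak samples, then two metadata values per peak, taken here to be location and width.

```python
def pack(peak_values, peak_meta):
    """Pack extracted peak samples plus per-peak (location, width) pairs
    into one flat field: [n_peaks, n_values] header, then data, then meta."""
    header = [len(peak_meta), len(peak_values)]            # 2 values
    flat_meta = [v for pair in peak_meta for v in pair]    # 2 * n_peaks values
    return header + peak_values + flat_meta

def unpack(field):
    """Invert pack(): recover the peak samples and (location, width) pairs."""
    n_peaks, n_vals = field[0], field[1]
    values = field[2:2 + n_vals]
    meta = field[2 + n_vals:]
    return values, [(meta[2 * i], meta[2 * i + 1]) for i in range(n_peaks)]

field = pack([10, 42, 11], [(57, 3)])
assert len(field) == 2 + 3 + 2 * 1   # matches size = 2 + data + 2 * n_peaks
print(unpack(field))  # → ([10, 42, 11], [(57, 3)])
```

Transmitting only this field, rather than the full trace, is what yields the data reduction quoted on the next slide.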

Results: 10x Reduction in Data with 95% Information Retention

What's Next?
Explore anomaly detection for new observations
Explore higher-dimensional data, such as 2D EO/IR imaging and video
Compare speed and scalability vs. GPU
Improve parallelization

Technique        | Ratio | Science Information Preserved
Rice Compression | 2.1   | 100%
FLAC             | 1.9   | 100%
Neural Net       | 10    | 90%
Co-addition      | n     | (100/n)%