Pining for Data II: The Empirical Results Strike Back

Pining for Data II: The Empirical Results Strike Back Zach Schira Analytics Hub

Goals Use NEON hyperspectral and Lidar data to accurately classify plant species using machine learning Example Hyperspectral Cube

Using Neural Networks and SVM’s Results: 55-75% Accuracy Drawbacks: Don’t account for order/spatial information well Can only detect one object at a time

Using Neural Networks and SVM’s Randomized

Using a Convolutional Neural Network Advantages: Much better at accounting for order/spatial information Can perform multiple classifications at once Can locate objects within images Disadvantages: Often more computationally expensive Difficult to implement

Basics of Convolutional Neural Networks Example of a single convolutional layer output
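A small PyTorch sketch of what a single convolutional layer does to a multi-band patch; the 3-band, 40 x 40 input and the choice of 16 filters are assumptions for illustration:

```python
import torch
import torch.nn as nn

# One image patch with 3 input bands (e.g. 2 reduced hyperspectral + 1 Lidar),
# 40 x 40 pixels, batch dimension of 1. Sizes are illustrative assumptions.
patch = torch.randn(1, 3, 40, 40)

# A single convolutional layer: 16 learned 3x3 filters, each producing one feature map.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
activation = nn.ReLU()

feature_maps = activation(conv(patch))
print(feature_maps.shape)  # torch.Size([1, 16, 40, 40])
```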

Data Processing Data comes from multiple sources of varying quality Machine learning does not respond well to missing data 426 hyperspectral bands + 1 Lidar band = lots of data One hyperspectral band
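A hedged sketch of assembling the band stack and filling missing values; the array names, file names, and the -9999 no-data flag are assumptions, not the presenter's actual processing code:

```python
import numpy as np

# Assume hyperspectral reflectance as (426, H, W) and a Lidar canopy-height
# band as (H, W); file names and the -9999 no-data flag are assumptions.
hyperspectral = np.load("neon_reflectance.npy")   # (426, H, W)
lidar_chm = np.load("neon_chm.npy")               # (H, W)

stack = np.concatenate([hyperspectral, lidar_chm[np.newaxis]], axis=0)  # (427, H, W)

# Mask no-data values and fill each band with its own mean so downstream
# models never see missing entries.
stack = np.where(stack == -9999, np.nan, stack).astype(np.float32)
band_means = np.nanmean(stack, axis=(1, 2), keepdims=True)
stack = np.where(np.isnan(stack), band_means, stack)
```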

Using Principal Component Analysis (PCA) PCA can reduce dimensions of data to save memory and computing time Useful when data is highly correlated Reflectance of random points in 2 wavelengths

PCA Principles Uses linear combinations of variables to transform the data into a set of uncorrelated variables with dimension less than or equal to the original data "Summarizes" variance in fewer dimensions PCA example
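A minimal scikit-learn sketch of this reduction, treating each pixel's 426-band spectrum as one sample; keeping 2 components here is an illustrative choice:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hyperspectral reflectance as (426, H, W); the file name is an assumption.
hyperspectral = np.load("neon_reflectance.npy")
n_bands, h, w = hyperspectral.shape

# One row per pixel, one column per band, so PCA mixes bands, not pixels.
pixels = hyperspectral.reshape(n_bands, -1).T           # (H*W, 426)

# Project onto 2 uncorrelated components that summarize most of the variance.
pca = PCA(n_components=2)
pixels_reduced = pca.fit_transform(pixels)              # (H*W, 2)
print("Explained variance ratio:", pca.explained_variance_ratio_)

# Back to image layout: 2 PCA bands, ready to stack with the Lidar band.
pca_bands = pixels_reduced.T.reshape(2, h, w)
```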

Results Data reduced to 3 bands: 2 hyperspectral, 1 Lidar 40 x 40 plots created with labeled species
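A small sketch of cutting a labeled 40 x 40 plot out of the reduced band stack; the helper name, coordinates, and label source are hypothetical:

```python
import numpy as np

def extract_plot(bands, center_row, center_col, size=40):
    """Cut a size x size patch centered on a labeled field plot.

    bands is a (3, H, W) array: 2 PCA hyperspectral bands + 1 Lidar band.
    """
    half = size // 2
    return bands[:, center_row - half:center_row + half,
                 center_col - half:center_col + half]

# Hypothetical usage: pair each labeled plot center with its species id.
# training_set = [(extract_plot(bands, r, c), species) for r, c, species in plot_labels]
```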

Going forward with Faster R-CNN Faster R-CNN uses a series of convolutional and regular neural network layers to propose candidate regions and predict a classification for each proposed region
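A hedged sketch of instantiating a Faster R-CNN model with torchvision's built-in implementation; the class count and input size are assumptions, and this is not the presenter's actual setup:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Faster R-CNN: a backbone CNN feeds a Region Proposal Network, and each
# proposed region is classified and refined by fully connected heads.
# num_classes = number of species + 1 background class (the count is an assumption).
model = fasterrcnn_resnet50_fpn(num_classes=6)
model.eval()

# One 3-band image (2 PCA hyperspectral + 1 Lidar), values scaled to [0, 1].
image = torch.rand(3, 400, 400)
with torch.no_grad():
    detections = model([image])[0]
print(detections["boxes"].shape, detections["labels"].shape, detections["scores"].shape)
```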

Questions?

References
Ren, Shaoqing, Kaiming He, Ross Girshick, and Jian Sun. "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks." IEEE Transactions on Pattern Analysis and Machine Intelligence (2016). Microsoft Research. Web. 8 Mar. 2017.
Wasser, Leah. "Intro to Working with Hyperspectral Remote Sensing Data in HDF5 Format in R." NEON Data Skills. NEON. Web. 8 Mar. 2017.
Dumoulin, Vincent. "A Guide to Convolution Arithmetic for Deep Learning." 24 Mar. 2016. Web. 8 Mar. 2017.