Dr. Z. R. Ghassabi, Spring 2015. Deep Learning for Human Action Recognition

Outline: Introduction to human action recognition; Introduction to deep learning; Is deep learning useful for human action recognition?

Introduction to human action recognition: sensor-based human activity recognition and vision-based human activity recognition.

Introduction to human action recognition: a typical pipeline runs segmentation, then feature representation, then feature classification. The features are usually hand-crafted: SIFT, HOG, etc.
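For concreteness, here is a minimal sketch of such a hand-crafted pipeline using HOG descriptors and a linear SVM (the synthetic frames, labels, and parameter choices are illustrative assumptions, not values from the slides):

```python
# Minimal sketch of the hand-crafted pipeline: HOG descriptors + SVM classifier.
# `frames` and `labels` are synthetic stand-ins; a real system would use
# segmented video frames and their action labels.
import numpy as np
from skimage.feature import hog
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.RandomState(0)
frames = rng.rand(40, 64, 64)                     # stand-in grayscale frames
labels = rng.randint(0, 2, size=40)               # stand-in action labels

# Fixed, hand-crafted descriptor per frame; no learning happens at this stage.
X = np.array([hog(f, pixels_per_cell=(8, 8), cells_per_block=(2, 2)) for f in frames])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)
clf = LinearSVC().fit(X_tr, y_tr)                 # learning only at the classification stage
print("test accuracy:", clf.score(X_te, y_te))
```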

The real challenge: image features

Representation: examples of descriptors.

Unsupervised feature learning: until very recently, learning played no major role before the classification stage, by which point much of the information in the input had already been lost. Features can instead be learned directly from the data, with little feature-engineering or research effort, and the learned features are equally good, if not better.
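A minimal sketch of learning features directly from unlabeled data, here via sparse dictionary learning on image patches with scikit-learn (the data, patch size, and dictionary size are illustrative assumptions):

```python
# Learn a small feature dictionary from unlabeled image patches, then encode
# data as sparse codes over that dictionary. Data and sizes are illustrative.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.RandomState(0)
images = rng.rand(20, 32, 32)                               # stand-in unlabeled frames

patches = np.vstack([
    extract_patches_2d(img, (8, 8), max_patches=50, random_state=0).reshape(-1, 64)
    for img in images
])
patches -= patches.mean(axis=1, keepdims=True)              # simple per-patch centering

learner = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
codes = learner.fit_transform(patches)                      # learned features (sparse codes)
print(codes.shape)                                          # (1000, 32)
```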

Hierarchies in high-level vision

Deep Learning and AI

Unsupervised learning: optimizes the feature map Φ using only the unlabeled data distribution.
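One illustrative way to write this objective down, assuming an autoencoder-style reconstruction loss (a choice not spelled out on the slide), is:

```latex
% Phi is the learned feature map (encoder), g an auxiliary decoder;
% only samples x from the unlabeled data distribution are used.
\Phi^{\ast} = \arg\min_{\Phi,\, g}\;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\, \lVert x - g(\Phi(x)) \rVert_2^2 \,\right]
```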

Unsupervised feature learning: distributed representations establish a many-to-many relationship between concepts and variables. Each concept is represented by many variables, and each variable participates in the representation of many concepts, as in a distributed color-shape representation.
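A tiny sketch of the distributed color-shape representation, with illustrative attributes, makes the many-to-many relationship concrete:

```python
# Tiny distributed color-shape representation: each concept is a pattern over
# shared variables, and each variable takes part in several concepts.
import numpy as np

variables = ["is_red", "is_blue", "is_round", "is_square"]
concepts = {
    "red ball":  np.array([1, 0, 1, 0]),
    "red box":   np.array([1, 0, 0, 1]),
    "blue ball": np.array([0, 1, 1, 0]),
    "blue box":  np.array([0, 1, 0, 1]),
}

# "is_red" is shared by "red ball" and "red box"; "red ball" needs two variables.
for name, code in concepts.items():
    active = [v for v, bit in zip(variables, code) if bit]
    print(f"{name:9s} -> {active}")
```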

Hierarchical sparse coding

Unsupervised feature learning approaches: Boltzmann machines, deep neural networks, convolutional neural networks.

Deep neural networks for activity recognition (AR): a key advantage of a DNN is the representation it builds from the input features, and a DNN can model diverse activities with much less training data.

Deep neural networks for AR: earlier feature-learning approaches include the Restricted Boltzmann Machine (RBM) and shift-invariant sparse coding. RBM and sparse coding are fully connected models; therefore, they do not capture the local dependencies of time-series signals.
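A minimal sketch of the fully connected case, using scikit-learn's BernoulliRBM on flattened sensor windows (window length and sizes are illustrative assumptions), shows why temporal locality is not built in:

```python
# Fully connected RBM over flattened sensor windows: every hidden unit connects
# to every time step, so temporal locality is not built into the model.
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.RandomState(0)
windows = (rng.rand(200, 64) > 0.5).astype(float)   # 200 binarized windows, 64 time steps

rbm = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=10, random_state=0)
rbm.fit(windows)                                    # trained without labels
print(rbm.components_.shape)                        # (32, 64): one dense row per hidden unit
```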

Fully and Locally Connected NN

Convolutional NN

Advantages of applying CNNs to AR: they can capture the local dependencies and scale-invariant features of activity signals, so variations of the same activity are effectively captured by the extracted features.

CNN for sensor-based AR: the network consists of one or more pairs of convolution and pooling layers. The convolutional layers capture local dependencies, and the max-pooling layers provide scale invariance.
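A minimal PyTorch sketch of such a network (channel counts, window length, and the number of classes are illustrative assumptions, not values from the slides):

```python
# One conv/pool pair per the slide: convolution captures local dependencies along
# the time axis, max-pooling gives a degree of scale/translation invariance.
import torch
import torch.nn as nn

class SensorCNN(nn.Module):
    def __init__(self, in_channels=3, n_classes=6, window=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),  # local dependencies
            nn.ReLU(),
            nn.MaxPool1d(4),                                       # scale invariance
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(64 * (window // 16), n_classes)

    def forward(self, x):                    # x: (batch, channels, time)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = SensorCNN()
logits = model(torch.randn(8, 3, 128))       # a batch of 8 three-axis sensor windows
print(logits.shape)                          # torch.Size([8, 6])
```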

Activity Recognition

Criticisms of deep learning: it is computationally intensive, and there are a lot of parameters to tune.