Building high-level features using large-scale unsupervised learning
Anh Nguyen, Bay-yuan Hsu
CS290D – Data Mining (Spring 2014), University of California, Santa Barbara
Slides adapted from Andrew Ng (Stanford) and Nando de Freitas (UBC)

Agenda
1. Motivation
2. Approach
   1. Sparse Deep Auto-encoder
   2. Local Receptive Field
   3. L2 Pooling
   4. Local Contrast Normalization
   5. Overall Model
3. Parallelism
4. Evaluation
5. Discussion

1. MOTIVATION

Motivation
Feature learning:
- Supervised learning needs a large amount of labeled data.
- Unsupervised learning does not. Example: build a face detector without any labeled face images.
Goal: build high-level features from unlabeled data only.

Motivation
Previous works: auto-encoders, sparse coding
- Result: they learn only low-level features.
- Reason: computational constraints.
Approach of this paper: scale up the dataset, the model, and the computational resources.

2. APPROACH

Sparse Deep Auto-encoder
Auto-encoder: a neural network trained by back-propagation, without labels, to reconstruct its own input (unsupervised learning).
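
A minimal sketch of such a one-hidden-layer auto-encoder in numpy, trained by back-propagation on the mean-squared reconstruction error. All sizes, data, and names here are illustrative, not the paper's configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Placeholder unlabeled data: 500 samples of 64-dim inputs.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))
W1 = rng.standard_normal((64, 16)) * 0.1   # encoder: 64 -> 16 code units
W2 = rng.standard_normal((16, 64)) * 0.1   # decoder: 16 -> 64 reconstruction
lr = 0.01

for epoch in range(100):
    H = sigmoid(X @ W1)            # encode
    X_hat = H @ W2                 # decode (linear output layer)
    err = X_hat - X                # reconstruction error
    # Back-propagate the mean-squared reconstruction loss.
    grad_W2 = H.T @ err / len(X)
    grad_H = (err @ W2.T) * H * (1 - H)   # through the sigmoid
    grad_W1 = X.T @ grad_H / len(X)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
```

Sparsity would be added as a penalty on the hidden activations H; the next slides describe the regularizer.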

Sparse Deep Auto-encoder (cont'd)
Sparse Coding
- Input: images x^(1), x^(2), ..., x^(m)
- Learn: bases (features) f_1, f_2, ..., f_k, so that each input x can be approximately decomposed as
  x = ∑_j a_j f_j, where the coefficients a_j are mostly zero ("sparse").

Sparse Deep Auto-encoder (cont'd)

Sparse Coding Regularizer
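
The deck's figure for this slide is not reproduced here; a standard form of the sparse coding objective uses least-squares reconstruction plus an L1 penalty on the coefficients as the sparsity regularizer:

$$ \min_{f,\,a} \; \sum_{i=1}^{m} \Big\| x^{(i)} - \sum_{j=1}^{k} a_j^{(i)} f_j \Big\|_2^2 \;+\; \lambda \sum_{i=1}^{m} \sum_{j=1}^{k} \big| a_j^{(i)} \big| $$

Larger λ pushes more of the coefficients a_j^(i) to exactly zero.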

Sparse Deep Auto-encoder (cont'd)
Sparse Deep Auto-encoder: stack multiple hidden layers, each designed to give the learned features a particular characteristic (here: sparsity, pooling invariance, and contrast normalization).

Local Receptive Field
- Definition: each feature in the autoencoder connects only to a small region of the layer below.
- Goal: learn features efficiently; enables parallelism and training on small image patches (see the sketch below).
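
A sketch of what a local receptive field means computationally, in 1-D for brevity: each hidden unit has its own (untied) weights for one small window of the input rather than connecting to all of it. Sizes and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100)   # input layer activations
patch, stride = 10, 10         # receptive field size; non-overlapping windows
n_units = (len(x) - patch) // stride + 1

# One small, independent weight vector per hidden unit (no weight sharing).
W = rng.standard_normal((n_units, patch)) * 0.1

h = np.array([W[i] @ x[i * stride : i * stride + patch]
              for i in range(n_units)])
print(h.shape)   # (10,) -- each unit touches only 10 of the 100 inputs
```

Because each unit's weights and inputs are local, different regions can be trained and stored on different machines, which the Parallelism section exploits.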

L2 Pooling
- Goal: robustness to local distortion.
- Approach: group similar features together to achieve invariance (formula below).
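
Concretely, a standard L2 pooling unit outputs the square root of the sum of squares of the simple-layer units in its pooling region N(i), so nearby features that respond together yield one locally invariant response (notation is ours, not the deck's):

$$ p_i = \sqrt{\sum_{j \in N(i)} h_j^2} $$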

Local Contrast Normalization
- Goal: robustness to variation in lighting intensity.
- Approach: normalize the local contrast (one common formulation below).
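
The deck does not spell the operation out; one common formulation subtracts a Gaussian-weighted local mean and then divides by the local standard deviation wherever it exceeds a small constant c (G is a Gaussian window; notation is ours):

$$ v_{ij} = x_{ij} - \sum_{p,q} G_{pq}\, x_{i+p,\,j+q}, \qquad y_{ij} = \frac{v_{ij}}{\max(c,\ \sigma_{ij})}, \qquad \sigma_{ij} = \Big( \sum_{p,q} G_{pq}\, v_{i+p,\,j+q}^{2} \Big)^{1/2} $$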

Overall Model
3 layers, each built from three sublayers:
- Simple (local receptive fields): 18x18 px, 8 neurons per patch
- Complex (L2 pooling): 5x5 px
- LCN: 5x5 px

Overall Model

Overall Model
- Training: each layer is trained to reconstruct its own input.
- Optimization function: see below.
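
For reference, the per-layer objective in [1] combines a reconstruction term with an L2-pooling sparsity penalty (W_1: encoding weights, W_2: decoding weights, H_j: fixed pooling weights; the squaring is element-wise; see [1] for details):

$$ \min_{W_1, W_2} \; \sum_{i=1}^{m} \Big( \big\| W_2 W_1^{\top} x^{(i)} - x^{(i)} \big\|_2^2 + \lambda \sum_{j=1}^{k} \sqrt{\epsilon + H_j \big( W_1^{\top} x^{(i)} \big)^2} \Big) $$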

Overall Model: Complex model?

3. PARALLELISM

Asynchronous SGD
Two recent lines of research in speeding up large learning problems:
- Parallel/distributed computing
- Online (and mini-batch) learning algorithms: stochastic gradient descent, perceptron, MIRA, stepwise EM
How can we bring together the benefits of parallel computing and online learning?

Asynchronous SGD
SGD (Stochastic Gradient Descent):
- Choose an initial parameter vector W and a learning rate α.
- Repeat until an approximate minimum is obtained:
  - Randomly shuffle the examples in the training set.
  - For each example i: W := W − α ∇Q_i(W).
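
A minimal, self-contained sketch of that loop on a toy one-parameter least-squares problem (all names are illustrative). In the asynchronous variant, many model replicas run this same loop in parallel and exchange updates through a shared parameter server instead of holding W locally:

```python
import numpy as np

def sgd(grad_fn, W, data, lr=0.01, epochs=100, seed=0):
    """Plain SGD: shuffle the examples, then step along each example's gradient."""
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        rng.shuffle(data)                  # randomly shuffle the training set
        for x, y in data:
            W = W - lr * grad_fn(W, x, y)  # per-example gradient step
    return W

# Toy problem: fit y = 2x. Gradient of (wx - y)^2 w.r.t. w is 2x(wx - y).
data = [(x, 2.0 * x) for x in np.linspace(-1, 1, 50)]
W = sgd(lambda w, x, y: 2 * x * (w * x - y), W=0.0, data=data)
print(round(W, 2))  # 2.0
```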


Model Parallelism
The model's weights are partitioned according to image locality and stored on different machines (sketch below).
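
A toy sketch of the idea, assuming four "machines" that each own the weights for one quadrant of a 200x200 image; only the small per-quadrant outputs need to be communicated. All sizes and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((200, 200))

def quadrants(img):
    """Split an image into the four local regions owned by the four machines."""
    h, w = img.shape
    return [img[:h//2, :w//2], img[:h//2, w//2:],
            img[h//2:, :w//2], img[h//2:, w//2:]]

# Each machine holds the weights for its own quadrant (32 local features each).
weights = [rng.standard_normal((32, 100 * 100)) * 0.01 for _ in range(4)]

# Each product below can run on a separate machine; only the per-quadrant
# activation vectors are gathered and concatenated.
partial = [W @ q.ravel() for W, q in zip(weights, quadrants(image))]
h = np.concatenate(partial)
print(h.shape)   # (128,)
```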

4. EVALUATION

Evaluation
- Data: 10M unlabeled YouTube frames of size 200x200
- Model: 1B parameters
- Compute: 1,000 machines (16,000 cores)

Experiment on Faces
- Test set: 37,000 images, of which 13,026 contain faces.
- Metric: classification accuracy of the best single neuron.

Experiment on Faces (cont'd)
Visualization:
- Top stimuli (test images) for the face neuron
- Optimal stimulus for the face neuron

Experiment on Faces (cont'd)
Invariance properties

Experiment on Cat/Human Body
Test sets:
- Cat: 10,000 positive, 18,409 negative images
- Human body: 13,026 positive, 23,974 negative images
Metric: accuracy of the best neuron.

ImageNet Classification
- Task: recognizing images across a very large label set.
- Dataset: 20,000 categories, 14M images.
- Accuracy: 15.8%, versus the previous state of the art of 9.3%.

5. DISCUSSION

Discussion
- Deep learning: unsupervised feature learning across multiple layers of representation.
- Accuracy is increased by built-in invariance (pooling) and contrast normalization.
- Scalability: model parallelism and asynchronous SGD make billion-parameter training practical.

6. REFERENCES

References
1. Quoc V. Le et al., "Building High-level Features Using Large Scale Unsupervised Learning", ICML 2012.
2. Nando de Freitas, "Deep Learning", lecture slides.
3. Andrew Ng, "Sparse Autoencoder", lecture notes.
4. Andrew Ng, "Machine Learning and AI via Brain Simulations", talk slides.
5. Andrew Ng, "Deep Learning", talk slides.