Deep belief nets: experiments and some ideas.

Presentation transcript:

Deep belief nets: experiments and some ideas. Karol Gregor (NYU/Caltech)

Outline: DBN; image database experiments; temporal sequences

Deep belief network: Input → H1 → H2 → H3 → Labels, fine-tuned with backprop
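The greedy layer-wise recipe behind this picture (train each layer as an RBM on the output of the layer below, then fine-tune the whole stack with backprop on the labels) can be sketched in numpy. The layer sizes, toy data, and CD-1 training loop below are illustrative assumptions, not the settings used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RBM:
    """Restricted Boltzmann machine trained with one-step contrastive divergence (CD-1)."""
    def __init__(self, n_vis, n_hid, lr=0.1):
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
        self.b_vis = np.zeros(n_vis)
        self.b_hid = np.zeros(n_hid)
        self.lr = lr

    def hid_probs(self, v):
        return sigmoid(v @ self.W + self.b_hid)

    def vis_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_vis)

    def cd1_step(self, v0):
        # Positive phase: hidden probabilities given the data.
        h0 = self.hid_probs(v0)
        # Negative phase: one Gibbs step (sample hidden, reconstruct, re-infer).
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.vis_probs(h_sample)
        h1 = self.hid_probs(v1)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_vis += self.lr * (v0 - v1).mean(axis=0)
        self.b_hid += self.lr * (h0 - h1).mean(axis=0)
        return np.mean((v0 - v1) ** 2)   # reconstruction error

# Greedy layer-wise stacking: train one RBM per layer, then feed its
# hidden probabilities to the next layer as "data".
data = (rng.random((200, 32)) < 0.3).astype(float)   # toy binary data
layer_sizes = [32, 16, 8]                            # Input -> H1 -> H2
rbms, x = [], data
for n_vis, n_hid in zip(layer_sizes[:-1], layer_sizes[1:]):
    rbm = RBM(n_vis, n_hid)
    for _ in range(50):
        err = rbm.cd1_step(x)
    rbms.append(rbm)
    x = rbm.hid_probs(x)
```

After stacking, the learned weights would initialize a feed-forward net whose top layer predicts the labels, trained with ordinary backprop.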

Preprocessing – bag of words of SIFT (with Greg Griffin, Caltech)
Images → features (using SIFT) → group into visual words (e.g. K-means) → bag-of-words counts:

        Image1  Image2
Word1   23      11
Word2   12      55
Word3   92      33
…       …       …
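A minimal numpy sketch of this pipeline, with random vectors standing in for the 128-D SIFT descriptors and a plain k-means building the codebook of visual words (all sizes here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for SIFT descriptors: in the real pipeline each image yields
# a set of 128-D SIFT descriptors; here, 40 random 8-D vectors per image.
descriptors_per_image = [rng.standard_normal((40, 8)) for _ in range(2)]

def kmeans(X, k, iters=20):
    """Plain k-means: returns the k cluster centers (the 'visual words')."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers

# Build the codebook from all descriptors pooled together.
all_desc = np.vstack(descriptors_per_image)
codebook = kmeans(all_desc, k=5)

def bag_of_words(desc, codebook):
    """Histogram of nearest-word counts: one fixed-length vector per image."""
    d = ((desc[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d.argmin(1)
    return np.bincount(words, minlength=len(codebook))

histograms = np.array([bag_of_words(d, codebook) for d in descriptors_per_image])
```

Each row of `histograms` is one image's word-count vector, the input format shown in the table above.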

13 Scenes Database – Test error

Train error

- Pre-training on a larger dataset
- Comparison to SVM, SPM

Explicit representations?

Compatibility between databases
Pretraining: Corel database
Supervised training: 15 Scenes database

Conclusions
Bag of words is not a good input for deep architectures.
The networks can be pretrained on one database and then trained with supervision on another.
Other observations:

Temporal Sequences

Simple prediction: predict Y at time t from X at times t-1, t-2, t-3 through weights W (supervised learning)
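Under these assumptions (a linear map W from the last three inputs to Y at time t, scalar X, noise-free toy data of my own making), the supervised learning problem is ordinary least squares; a numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sequence: y_t depends linearly on the previous three inputs;
# the weights true_w are what supervised learning should recover.
T = 200
x = rng.standard_normal(T)
true_w = np.array([0.5, -0.3, 0.2])
y = np.array([true_w @ x[t - 3:t][::-1] for t in range(3, T)])

# Design matrix: each row is (x_{t-1}, x_{t-2}, x_{t-3}).
X = np.array([x[t - 3:t][::-1] for t in range(3, T)])

# Supervised learning of W by least squares.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```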

With hidden units (needed for several reasons): hidden units G and H mediate between X at times t-1, t-2, t-3 and Y at time t. Memisevic, R. and Hinton, G. E., Unsupervised Learning of Image Transformations, CVPR 2007.

Example pred_xyh_orig.m

Additions (same X, Y, G, H network as above):
- Sparsity: when inferring H the first time, keep only the largest n units on.
- Slow H change: after inferring H the first time, take H = (G + H)/2.
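Both additions are simple enough to state in code. This numpy sketch assumes H is a plain activation vector and G is the top-down prediction; `keep_top_n` and the variable names are mine, not from the `.m` scripts:

```python
import numpy as np

rng = np.random.default_rng(0)

def keep_top_n(h, n):
    """Sparsity: keep only the n largest hidden activations on, zero the rest."""
    out = np.zeros_like(h)
    idx = np.argsort(h)[-n:]
    out[idx] = h[idx]
    return out

# Hypothetical hidden activations from the first inference pass.
h_inferred = rng.random(10)
h_sparse = keep_top_n(h_inferred, n=3)

# Slow H change: blend the newly inferred H with the top-down prediction G.
g_topdown = rng.random(10)          # stand-in for the prediction from G
h_slow = (g_topdown + h_sparse) / 2.0
```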

Examples pred_xyh.m present_line.m present_cross.m

Diagram: Hippocampus ↔ Cortex+Thalamus ↔ senses and muscles, e.g. the eye (through retina, LGN) and muscles (through sub-cortical structures). See Jeff Hawkins, On Intelligence.

Cortical patch: complex structure (not a single-layer RBM). From Alex Thomson and Peter Bannister (see numenta.com).

Desired properties

1) Prediction (figure: a letter sequence A B C D E F G H J K L; given a few letters, predict the ones that follow)

2) Explicit representations for sequences (figure: the letters of "VISION RESEARCH" presented over time)

3) Invariance discovery (e.g. a complex cell, invariant over time)

4) Sequences of variable length (same "VISION RESEARCH" letter stream over time)

5) Long sequences (figure: one Layer 1 state per time step over 15 steps, with Layer 2 states updating only once every several steps, so long sequences become short ones higher up)
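One way to read the figure: the higher layer updates once per block of lower-layer steps, so the same sequence is shorter at each level. A toy numpy sketch; the stride of 5 and the max-pooling summary are illustrative assumptions, not the slide's mechanism:

```python
import numpy as np

# Layer 1 produces one state per time step; Layer 2 summarizes each
# block of `stride` Layer-1 states, so 15 steps become 3 states.
stride = 5
layer1 = np.arange(15)                      # one Layer-1 state per time step
layer2 = layer1.reshape(-1, stride).max(1)  # one Layer-2 state per block
```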

6) Multilayer - inferred only after some time (same "VISION RESEARCH" letter stream over time)

7) Smoother time steps

8) Variable speed - Can fit a knob with small speed range

9) Add a clock for actual time

In addition (same diagram: Hippocampus ↔ Cortex+Thalamus ↔ senses and muscles; eye through retina, LGN; muscles through sub-cortical structures): top-down attention, bottom-up attention, imagination, working memory, rewards.

Training data
- Videos: of the real world, or simplified, e.g. cartoons (The Simpsons)
- A robot in an environment (problem: hard to grasp objects)
- An artificial environment with 3D objects that are easy to manipulate (e.g. Grand Theft Auto IV with objects)