Comp 3503 / 5013 Dynamic Neural Networks Daniel L. Silver March, 2014

Outline
– Hopfield Networks
– Boltzmann Machines
– Mean Field Theory
– Restricted Boltzmann Machines (RBM)

Dynamic Neural Networks
See handout for image of spider, beer and dog.
The search for a model or hypothesis can be considered the relaxation of a dynamic system into a state of equilibrium. This is the nature of most physical systems:
– Pool of water
– Air in a room
The mathematics is that of thermodynamics.
– Quote from John von Neumann

Hopfield Networks
See handout

Hopfield Networks
Hopfield Network video intro
Try these applets:
– ndex.html
– pplet.html

Hopfield Networks
Basics with Geoff Hinton:
– Introduction to Hopfield Nets
– Storage capacity of Hopfield Nets

Hopfield Networks
Advanced concepts with Geoff Hinton:
– Hopfield nets with hidden units
– Necker Cube: er.html
– Adding noise to improve search
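The handout and videos above carry the details; purely as an illustration of the relaxation-into-equilibrium idea from the earlier slide, here is a minimal Hopfield sketch. The patterns, network size and random seed are made up for illustration and are not taken from the handout. It stores two bipolar patterns with a Hebbian rule, then relaxes a corrupted probe by asynchronous updates until it settles in a low-energy state.

```python
import numpy as np

# Minimal Hopfield-style relaxation sketch (illustrative only, not from the handout).
# Store two bipolar (+1/-1) patterns with the Hebbian rule, then relax a corrupted
# probe by asynchronous updates; the energy E = -1/2 * s^T W s never increases.

rng = np.random.default_rng(0)

patterns = np.array([[ 1,  1,  1, -1, -1, -1],
                     [-1, -1,  1,  1, -1,  1]])
n = patterns.shape[1]

# Hebbian storage: W is the sum of outer products of the stored patterns, zero diagonal
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def energy(s):
    return -0.5 * s @ W @ s

# Start from the first stored pattern with one bit flipped
state = patterns[0].copy()
state[1] = -state[1]

for _ in range(5):                    # a few asynchronous sweeps
    for i in rng.permutation(n):      # update units in random order
        state[i] = 1 if W[i] @ state >= 0 else -1

print("relaxed state:", state)        # should settle back into the first stored pattern
print("energy:", energy(state))
```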

Boltzmann Machine
– See Handout –
Basics with Geoff Hinton:
– Modeling binary data
– BM Learning Algorithm

Limitations of BMs
BM learning does not scale well. This is due to several factors, the most important being:
– The time the machine must be run in order to collect equilibrium statistics grows exponentially with the machine's size (number of nodes); for each example, nodes and states must be sampled repeatedly.
– Connection strengths are more plastic when the units have activation probabilities intermediate between zero and one. Noise causes the weights to follow a random walk until the activities saturate (a variance trap).

Potential Solutions
– Use a momentum term as in BP (see the sketch below): w_ij(t+1) = w_ij(t) + η·Δw_ij + α·Δw_ij(t-1)
– Add a penalty term to create sparse coding (encourage shorter encodings for different inputs)
– Use implementation tricks to do more in memory (batches of examples)
– Restrict the number of iterations in the + and – phases
– Restrict the connectivity of the network
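As a small illustration of the first bullet, here is a minimal sketch of the momentum update exactly as written on the slide. The learning rate, momentum value, weight shape and gradient step below are made-up placeholders, not values from the course.

```python
import numpy as np

eta, alpha = 0.1, 0.9          # assumed learning rate and momentum values

W = np.zeros((4, 3))           # w_ij(t); the shape is just an example
prev_delta = np.zeros_like(W)  # Δw_ij(t-1)

def momentum_update(W, delta, prev_delta):
    """Slide formula: w_ij(t+1) = w_ij(t) + η·Δw_ij + α·Δw_ij(t-1)."""
    W_next = W + eta * delta + alpha * prev_delta
    return W_next, delta       # the current Δw becomes the next step's Δw(t-1)

delta = np.random.default_rng(1).normal(size=W.shape)  # stand-in for a gradient step Δw_ij
W, prev_delta = momentum_update(W, delta, prev_delta)
```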

Restricted Boltzmann Machine
Example visible units: SF/Fantasy, Oscar Winner (movie-preference example)
Recall = Relaxation
Clamp a starting vector v_0 (or h_0), then pass activation back and forth between the layers:
– Hidden unit j: Σ_j = Σ_i w_ij·v_i, turned on with probability p_j = 1/(1 + e^(-Σ_j))
– Visible unit i: Σ_i = Σ_j w_ij·h_j, turned on with probability p_i = 1/(1 + e^(-Σ_i))
Repeated stochastic updates relax the network toward a low-energy (equilibrium) state.
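A rough sketch of this recall/relaxation pass in code. The weight matrix, unit counts and the SF/Fantasy / Oscar Winner encoding are assumed for illustration, and biases are omitted because the slide formulas omit them.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 3))   # w_ij: 2 visible x 3 hidden units (made-up sizes)

v = np.array([1.0, 0.0])                 # e.g. SF/Fantasy = 1, Oscar Winner = 0

# Up pass: Σ_j = Σ_i w_ij v_i, then p_j = 1/(1 + e^(-Σ_j))
p_h = sigmoid(v @ W)
h = (rng.random(p_h.shape) < p_h).astype(float)   # sample binary hidden states

# Down pass: Σ_i = Σ_j w_ij h_j, then p_i = 1/(1 + e^(-Σ_i))
p_v = sigmoid(W @ h)
v_recon = (rng.random(p_v.shape) < p_v).astype(float)
```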

Restricted Boltzmann Machine
Learning ≈ Gradient Descent = Contrastive Divergence
1. Update hidden units from the clamped visible vector: Σ_j = Σ_i w_ij·v_i, p_j = 1/(1 + e^(-Σ_j)); accumulate positive statistics P = P + v_i·h_j
2. Reconstruct visible units from the hidden states: Σ_i = Σ_j w_ij·h_j, p_i = 1/(1 + e^(-Σ_i))
3. Re-update hidden units from the reconstruction; accumulate negative statistics N = N + v_i·h_j
4. Update weights: Δw_ij = P − N (the data statistics minus the reconstruction statistics), w_ij = w_ij + η·Δw_ij
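Putting the four steps together, here is a hedged sketch of one CD-1 update. The sizes, toy data and learning rate are made up; using hidden probabilities rather than samples for the negative statistics is a common variant and is my choice here, not something the slide specifies.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, v0, eta=0.1, rng=np.random.default_rng(0)):
    """One CD-1 step following the slides: P accumulates v_i h_j from the data,
    N accumulates v_i h_j from the reconstruction, and Δw_ij = P - N."""
    # 1. Update hidden units from the data (positive phase)
    p_h0 = sigmoid(v0 @ W)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    P = np.outer(v0, h0)                  # P = P + v_i h_j

    # 2. Reconstruct visible units, then 3. re-update hidden units (negative phase)
    p_v1 = sigmoid(W @ h0)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W)
    N = np.outer(v1, p_h1)                # N = N + v_i h_j (probabilities used here)

    # 4. Update weights: w_ij = w_ij + η·Δw_ij with Δw_ij = P - N
    return W + eta * (P - N)

# Toy usage with made-up sizes and random binary data
rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(6, 3))    # 6 visible x 3 hidden units
data = (rng.random((20, 6)) < 0.5).astype(float)
for epoch in range(10):
    for v in data:
        W = cd1_update(W, v, rng=rng)
```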

Restricted Boltzmann Machine
RBM Overview: to-restricted-boltzmann-machines/
Wikipedia on DLA and RBM
RBM Details and Code

Restricted Boltzmann Machine
Geoff Hinton on RBMs:
– RBMs and the Contrastive Divergence Algorithm
– An example of RBM Learning
– RBMs applied to Collaborative Filtering

Additional References
Coursera course – Neural Networks for Machine Learning:
– 001/lecture
ML: Hottest Tech Trend in next 3-5 Years