
1 Deep Learning with Symbols
Daniel L. Silver, Acadia University, Wolfville, NS, Canada. HL NeSy Seminar, Dagstuhl, April 2017

2 Introduction
Shameer Iqbal, Ahmed Galilia

3 Motivation Humans are multimodal learners
We are able to associate one modality with another. Conjecture: knowing a concept has a lot to do with the fusion of sensory/motor channels.

4 Motivation Further conjecture: symbols allow us to share complex concepts quickly and concisely. A human communication tool; a coarse approximation of a noisy concept. Symbols also help us to escape local minima when learning. And their sounds: “one”, “two”, …

5 Objective 1 To develop a multimodal system
A generative deep learning architecture, trained using unsupervised algorithms, that scales linearly in the number of channels and can reconstruct missing modalities. Train and test it on digits 0-9 with four channels: Image, Audio, Motor, and Symbolic Classification.

6 Background Deep Belief Networks
Stacked auto-encoders develop a rich feature space from unlabelled examples using unsupervised algorithms [Source: Caner Hazibas – slideshare]
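As a concrete illustration of the layer-wise idea, here is a minimal sketch (not the authors' code) of greedy unsupervised pretraining of a stacked auto-encoder; the tf.keras API usage is real, but the sigmoid activations, MSE loss, and epoch count are illustrative assumptions.

```python
import tensorflow as tf

def pretrain_stack(x, layer_sizes, epochs=10):
    """Greedily train one auto-encoder per layer on unlabelled data x."""
    encoders, codes = [], x
    for size in layer_sizes:
        inp = tf.keras.Input(shape=(codes.shape[1],))
        hidden = tf.keras.layers.Dense(size, activation="sigmoid")(inp)
        recon = tf.keras.layers.Dense(codes.shape[1], activation="sigmoid")(hidden)
        auto = tf.keras.Model(inp, recon)
        auto.compile(optimizer="adam", loss="mse")
        auto.fit(codes, codes, epochs=epochs, verbose=0)  # unsupervised: target = input
        encoder = tf.keras.Model(inp, hidden)
        encoders.append(encoder)
        codes = encoder.predict(codes, verbose=0)         # codes feed the next layer
    return encoders
```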

7 Background Deep Belief Networks
RBM = Restricted Boltzmann Machine
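For reference, a minimal NumPy sketch of one contrastive-divergence (CD-1) update for a binary RBM, the building block of each stack layer; the learning rate and sampling details are assumptions, not the exact procedure used in the talk.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_h, b_v, eps=0.01, rng=None):
    """One CD-1 update on a batch v0 of binary visible vectors (n, d_v)."""
    rng = rng or np.random.default_rng()
    h0_prob = sigmoid(v0 @ W + b_h)                    # positive phase
    h0 = (rng.random(h0_prob.shape) < h0_prob) * 1.0   # sample hidden states
    v1 = sigmoid(h0 @ W.T + b_v)                       # reconstruction
    h1_prob = sigmoid(v1 @ W + b_h)                    # negative phase
    W += eps * (v0.T @ h0_prob - v1.T @ h1_prob) / len(v0)
    b_h += eps * (h0_prob - h1_prob).mean(axis=0)
    b_v += eps * (v0 - v1).mean(axis=0)
    return W, b_h, b_v
```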

8 Background Multimodal Learning
The MML approach has been adopted by several deep learning researchers: (Srivastava and Salakhutdinov 2012), (Ngiam et al. 2011), (Socher et al. 2014), (Kiros, Salakhutdinov, and Zemel 2014), (Karpathy and Fei-Fei 2015). However, these tend to associate only 2 modalities, and the association layer is fine-tuned using supervised techniques such as back-prop.

9 Background Problem Refinement

10 Background Problem Refinement

11 Background Problem Refinement
#1 Supervised fine-tuning does not scale well to three or more modalities: all possible input-output modality combinations must be fine-tuned, and their number grows exponentially as $2^n - 2$, where n is the number of channels.

12 Background Problem Refinement
Example: n=3 channels gives $(2^3 - 2) = 6$ configurations; similarly $(2^4 - 2) = 14$ and $(2^5 - 2) = 30$.
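The count can be checked directly; this small Python snippet (illustrative, not from the talk) enumerates the non-empty proper subsets of channels that can serve as inputs, with the complement to be reconstructed:

```python
from itertools import combinations

def input_configurations(channels):
    """All non-empty proper subsets of channels that could act as inputs."""
    subsets = []
    for r in range(1, len(channels)):
        subsets.extend(combinations(channels, r))
    return subsets

print(len(input_configurations(["image", "audio", "motor"])))  # 6
for n in (3, 4, 5):
    print(n, 2 ** n - 2)   # 6, 14, 30
```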

13 Background Problem Refinement
#2 Standard unsupervised learning using RBM approaches yields poor reconstruction. A channel that provides a simple, noise-free signal will dominate the other channels at the associative layer, making it difficult for another channel to generate correct features there.

14 Theory and Approach Network Architecture
We propose an MML deep belief network that scales linearly in the number of channels and provides a concise symbolic representation of the associative memory (AM).
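A structural sketch of this idea, assuming one pretrained stack per channel plus a single shared associative layer over the top-level codes; the class names, tied weights, and averaging of channel contributions are all assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ChannelStack:
    """A per-channel stack of pretrained weight matrices (biases omitted)."""
    def __init__(self, weights):
        self.weights = weights                      # list of (d_in, d_out) arrays

    def up(self, v):                                # recognition pass to top code
        for W in self.weights:
            v = sigmoid(v @ W)
        return v

    def down(self, h):                              # generative pass, tied weights
        for W in reversed(self.weights):
            h = sigmoid(h @ W.T)
        return h

class MultimodalDBN:
    """One stack per channel + one shared associative layer: adding a
    channel adds one stack, so the model grows linearly with channels."""
    def __init__(self, stacks, W_assoc):
        self.stacks = stacks                        # dict: channel -> ChannelStack
        self.W_assoc = W_assoc                      # dict: channel -> (d_top, d_assoc)

    def reconstruct(self, inputs, missing):
        # Drive the associative layer from whichever channels are present.
        acts = [sigmoid(self.stacks[c].up(x) @ self.W_assoc[c])
                for c, x in inputs.items()]
        h = sum(acts) / len(acts)
        # Generate the missing channel's top code and decode it downward.
        code = sigmoid(h @ self.W_assoc[missing].T)
        return self.stacks[missing].down(code)
```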

15 Theory and Approach RBM training of DBN Stack
[Figure: image "8" and audio "eight" examples presented to the per-channel DBN stacks during RBM training]


17 Theory and Approach Fine-tuning with Iterative Back-fitting
Split the weights into recognition weights $w_r$ and generative weights $w_g$; create and save the hidden activations $h_i$. [Figure: image "8" and audio "eight" inputs to the split stack]

18 Theory and Approach Fine-tuning with Iterative Back-fitting
Update weights: $\Delta w_r = \epsilon(\langle v_i h_j \rangle - \langle v_i' h_j' \rangle)$, minimizing $\sum_k \sum_m (v_i - v_i')^2$; create new $h_j'$. [Figure: split weights $w_r$ and $w_g$, visible units $v_i$ and reconstruction $v_i'$; image "8" and audio "eight" inputs]
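A minimal NumPy sketch of one back-fitting pass as the slide's update rule describes it: the recognition weights $w_r$ are nudged with the CD-style rule while the generative weights $w_g$ are held fixed for the pass; the sigmoid activations and step size are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backfit_step(v, W_r, W_g, eps=0.01):
    """One pass: v is a batch of visible vectors, W_r/W_g the split weights."""
    h = sigmoid(v @ W_r)                  # h_j from the recognition weights
    v_recon = sigmoid(h @ W_g.T)          # v_i' from the generative weights
    h_recon = sigmoid(v_recon @ W_r)      # new h_j'
    # Delta w_r = eps * (<v_i h_j> - <v_i' h_j'>), which drives
    # sum_k sum_m (v_i - v_i')^2 downward over iterations.
    W_r += eps * (v.T @ h - v_recon.T @ h_recon) / len(v)
    return W_r, ((v - v_recon) ** 2).sum()
```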


20 Empirical Studies Data and Method
10 repetitions x 20 male Canadian students: handwritten digits 0-9, audio recordings, a vector of noisy motor coordinates, and classifications. 2000 examples in total (100 examples per student x 20 students). Conducted 10-fold cross-validation: 18 subjects in the training set (1800 examples), 2 subjects in the test set (200 examples); see the split sketch below. DEMO
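For clarity, a sketch of the subject-wise 10-fold split just described, assuming the 2000 examples are stored grouped by student (100 per student); this layout is an assumption, not the authors' data format.

```python
import numpy as np

def subject_folds(n_examples=2000, n_subjects=20, held_out=2):
    """Yield (train_idx, test_idx); each fold holds out 2 whole students."""
    per_subject = n_examples // n_subjects            # 100 examples per student
    idx = np.arange(n_examples)
    for f in range(n_subjects // held_out):           # 10 folds
        lo = f * held_out * per_subject
        hi = lo + held_out * per_subject
        test = idx[lo:hi]                             # 200 test examples
        train = np.concatenate([idx[:lo], idx[hi:]])  # 1800 training examples
        yield train, test
```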

21 Deep Learning and LML (Lifelong Machine Learning) http://ml3cpu.acadiau.ca
[Iqbal and Silver, in press]

22 Empirical Studies Data and Method
Evaluation: examine the reconstruction error of each channel given input on another channel. The error measure differs for each channel. Class: misclassification error. Image: agreement with an ANN classifier (99% acc). Audio: the STFT signal is not reversible to recreate sound, so agreement with an RF classifier (93% acc) is used. Motor: error = distance to the target vector template (anything < 2.2 is human readable).
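The per-channel error measures could be computed along these lines; `ann_clf`, `rf_clf`, and `templates` stand in for the trained classifiers and motor templates mentioned on the slide and are assumptions, not the authors' code.

```python
import numpy as np

def channel_errors(recon, truth, ann_clf, rf_clf, templates):
    errors = {}
    # Class channel: plain misclassification rate on one-hot outputs.
    errors["class"] = np.mean(recon["class"].argmax(1) != truth["class"].argmax(1))
    # Image channel: disagreement with a ~99%-accurate ANN classifier.
    errors["image"] = np.mean(ann_clf.predict(recon["image"]) != truth["labels"])
    # Audio channel: STFT is not invertible to sound, so use RF agreement.
    errors["audio"] = np.mean(rf_clf.predict(recon["audio"]) != truth["labels"])
    # Motor channel: mean distance to the target template (< 2.2 ~ readable).
    dists = np.linalg.norm(recon["motor"] - templates[truth["labels"]], axis=1)
    errors["motor"] = dists.mean()
    return errors
```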

23 Empirical Studies Results
Reconstruction of Classification Channel

24 Empirical Studies Results
Reconstruction of Image Channel

25 Empirical Studies Results
Reconstruction of Motor Channel

26 Empirical Studies Discussion
Elimination of channel dominance is not perfect, but significantly better. Reconstruction error of the missing channel decreases as the number of available channels increases. NOTE: the classification channel is not needed; it was introduced to clarify the concept in the associative memory, a symbol for a noisy concept.

27 Objective 2 To show that learning with symbols is easier and more accurate than without. A deep supervised learning architecture: develop a model to add two MNIST digits, with and without symbolic inputs, and test it on previously unseen examples. Also examine what is happening in the network: is it learning addition or just a mapping function?

28 Challenge in Training Deep Architectures
Many tricks are used to overcome local minima; most are a form of inductive bias that favours portions of the weight space where good solutions tend to be found.

29 Two learners are better than one!
Consider you’re in the jungle, learning concepts. Then you meet another person and you share symbols: accuracy improves and the rate of learning increases.

30 Challenge in Training Deep Architectures
A single learner is hampered by the presence of local minima within its representation space, and overcoming this difficulty requires a lot of training examples. Instead, an agent’s learning effectiveness can be significantly improved with symbols, which inspires social interaction and the development of culture (Bengio 2013, Evolving culture vs local minima).

31 Empirical Studies: Learning to Add MNIST digits
Implemented in Google TensorFlow. Noisy (without symbols): Input: 2 MNIST digit images (784 x 2 values); Output: 2 MNIST digit images (784 x 2 values). With binary symbolic values for each digit: Input: 1568 (image) values plus the symbolic values; Output: image values plus symbolic values. 1-3 hidden layers of ReLU units.
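A minimal tf.keras sketch of the "with symbols" adder under assumed sizes: two 28x28 images (1568 values) plus an assumed one-hot symbolic encoding per digit; the layer widths, activations, and loss are illustrative, not the reported configuration.

```python
import tensorflow as tf

N_SYM_IN, N_SYM_OUT = 20, 20   # assumed one-hot symbol widths (10 per digit)

inputs = tf.keras.Input(shape=(1568 + N_SYM_IN,))
h = tf.keras.layers.Dense(1024, activation="relu")(inputs)
h = tf.keras.layers.Dense(1024, activation="relu")(h)    # slides use 1-3 layers
outputs = tf.keras.layers.Dense(1568 + N_SYM_OUT, activation="sigmoid")(h)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
# The "without symbols" variant simply drops the symbolic slots:
# tf.keras.Input(shape=(1568,)) -> ... -> Dense(1568, activation="sigmoid")
```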

32 DL Model – Without Symbols

33 DL Model – With Symbols

34 Most recent results [Figure: result plots, without symbols vs. with symbols]

35 Discussion: Improved results of about 10% with symbolic outputs (based on classification of the output digits by a highly accurate convolutional network). We believe we can do much better. The lab is working on: different architectures, varying the number of training examples with symbols, and interpreting hidden node features.
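The agreement-based scoring could look like this sketch, where `cnn` is an assumed pretrained, highly accurate MNIST classifier and the model's output is taken to be the two digit images of the sum (an assumption about the data layout).

```python
import numpy as np

def addition_accuracy(cnn, recon_images, true_sums):
    """recon_images: (n, 2, 28, 28) pairs of reconstructed sum-digit images."""
    tens = cnn.predict(recon_images[:, 0].reshape(-1, 28, 28, 1)).argmax(axis=1)
    units = cnn.predict(recon_images[:, 1].reshape(-1, 28, 28, 1)).argmax(axis=1)
    return np.mean(10 * tens + units == true_sums)
```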

36 Thank You! QUESTIONS? https://ml3cpu.acadiau.ca/

37 References
Bengio, Y. (2009). Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1). Now Publishers.
Bengio, Y. and LeCun, Y. (2007). Scaling learning algorithms towards AI. In L. Bottou, O. Chapelle, D. DeCoste, and J. Weston, editors, Large Scale Kernel Machines. MIT Press.
Bengio, Y. (2013). Evolving culture vs local minima. arXiv preprint.
Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep Learning. MIT Press, Cambridge, MA.

