Model-based learning: Theory and an application to sequence learning. Zoltán Somogyvári and Péter Érdi, Hungarian Academy of Sciences, Research Institute for Particle and Nuclear Physics, Department of Biophysics, P.O. Box 49, 1525 Budapest, Hungary

Model-based learning: A new framework 1. Background 2. Theory 3. Algorithm 4. Application to sequence learning: 4.1 Evaluation of convergence speed 4.2 How to avoid sequence ambiguity 4.3 Storage of multiple sequences 5. Outlook

Model-based learning: Background I. Learning algorithms (supervised and unsupervised) can be grouped into:
- Symbol linking: learning by linking neurons with existing and fixed receptive fields (Hopfield network, attractor network).
- Symbol generation: learning by receptive field generation (topographical projection generation, ocular dominance formation, Kohonen map).

Model-based learning: Background II. In many (if not all) symbol-generating learning algorithms, a built-in connection structure determines the formation of receptive fields:
- Lateral inhibition, in a wide variety of learning algorithms.
- `Mexican hat' lateral interaction, in topographic map formation algorithms and in ocular dominance generation.
- Most explicitly, in Kohonen's self-organizing map.

A symbol-generating learning algorithm: self-organizing maps. The input layer is the `external world': samples from an N-dimensional vector space. The internal layer is a 2-dimensional grid of neurons with a fixed internal connection structure, linked to the input by connections between the internal and external layers. Learning: modification of the connections between neurons of the external and internal layers, i.e. changes in the receptive fields.

Self-organizing maps II. Stages of the learning: the internal net stretches out to wrap the input space

Self-organizing maps II. The result of learning: the neural grid is fitted to the input space. The result of the learning is stored in the internal-external connections, i.e. in the locations of the receptive fields. Each neuron in the internal layer represents a part of the external world: map formation.
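As a reference point for the generalization described next, here is a minimal Kohonen-style SOM sketch in Python. It is not the exact variant on the slides; the grid size, learning-rate schedule and neighbourhood width are illustrative assumptions.

```python
import numpy as np

# Minimal Kohonen-style SOM sketch: a 2-D grid of neurons, each with a
# receptive field (weight vector) in the N-dimensional input space.
# Grid size, learning rate and neighbourhood width are illustrative.

def train_som(inputs, grid=(10, 10), epochs=50, eta0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    H, W = grid
    N = inputs.shape[1]
    weights = rng.random((H, W, N))                     # internal-external connections
    coords = np.stack(np.meshgrid(np.arange(H), np.arange(W), indexing="ij"), axis=-1)

    for t in range(epochs):
        eta = eta0 * (1.0 - t / epochs)                 # decreasing learning rate
        sigma = sigma0 * (1.0 - t / epochs) + 0.5       # shrinking neighbourhood
        for x in inputs[rng.permutation(len(inputs))]:
            # Best-matching unit: the neuron whose receptive field is closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Neighbourhood on the 2-D grid: the fixed internal connection structure.
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
            # Move receptive fields toward the input: the grid wraps the input space.
            weights += eta * g[..., None] * (x - weights)
    return weights
```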

Model-based learning principle: Encounter between internal and external structures. From this unusual point of view, it is a natural generalization to extend the set of applicable internal connection structures and to use them as built-in models or schemata. In this way, the learning procedure becomes an encounter between an internal model and the structures in the signals coming from the `external world'. The result of the learning is a correspondence between the neurons of the internal layer and the elements of the input space.

Model-based learning: Internal models. Any connection structure can be fitted to the signal, and the same input can be represented in many ways, even in parallel. The models may represent different reference frames, hierarchical structures, periodic processes...

Model-based learning: Application to sequence learning. One of the most important types of internal model structure is a linear chain of neurons connected by directed connections: a directed linear chain of neurons is able to represent a temporal sequence. The question: an instantaneous position in the state space. The answer: the prediction of the following state, or even the prediction of the whole sequence. If the system supports addressing, it can in principle access any of the following states in one step, or even the preceding states.

Model-based learning: Basic algorithm. L cells in a chain, an N-dimensional input, and L·N connections to modify. The algorithm combines internal dynamics along the chain, learning only when a cell is internally activated, and a decreasing learning rate.
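A minimal Python sketch of how such a chain-learning rule could look. The slides' equations are not reproduced here, so the specific choices below (activity stepping along the chain in lockstep with the presented sequence, a Hebbian-style pull of the active cell's receptive field toward the current input, and a 1/(1+t) learning-rate decay) are assumptions, not the authors' exact rule.

```python
import numpy as np

class ChainModel:
    """Illustrative model-based chain learner: L cells, N-dimensional input."""

    def __init__(self, L, N, seed=0):
        rng = np.random.default_rng(seed)
        self.L, self.N = L, N
        self.W = rng.normal(size=(L, N))   # receptive fields, one per chain cell (L*N weights)
        self.t = 0                          # iteration counter for the decaying rate

    def learning_rate(self):
        # Decreasing learning rate (assumed 1/(1+t) schedule).
        return 1.0 / (1.0 + self.t)

    def train_on_sequence(self, sequence, epochs=100):
        """sequence: array of shape (L, N), one input vector per chain cell."""
        for _ in range(epochs):
            eta = self.learning_rate()
            # Internal dynamics: activity steps along the chain, so cell i is
            # internally activated when the i-th element of the sequence arrives.
            for i, x in enumerate(sequence):
                # Learn only when internally activated: pull the receptive
                # field of the active cell toward the current input.
                self.W[i] += eta * (x - self.W[i])
            self.t += 1

    def predict_next(self, x):
        # Find the chain cell whose receptive field best matches x,
        # then read out the receptive field of its successor.
        i = int(np.argmin(np.linalg.norm(self.W - x, axis=1)))
        return self.W[(i + 1) % self.L]
```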

Model-based learning: Simple sequence learning task. Steps of learning without noise, starting from a random initial distribution of receptive fields. T=100 iteration steps; during one iteration the whole sequence is presented, so an iteration requires N·L weight modifications. A Lissajous curve is applied as input, with N=5 and L=12. (Figure: the final distribution of receptive fields.)
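For concreteness, a possible way to set up this experiment with the sketch above. The actual frequencies and phases of the Lissajous curve used on the slides are not given, so the ones here are hypothetical; only L=12, N=5 and T=100 come from the slide.

```python
import numpy as np

# Hypothetical reconstruction of the training input: L = 12 points sampled
# from an N = 5 dimensional Lissajous-type curve, with and without noise.
L, N = 12, 5
phase = np.linspace(0.0, 2.0 * np.pi, L, endpoint=False)
freqs = np.arange(1, N + 1)                            # one frequency per dimension
clean = np.sin(np.outer(phase, freqs) + 0.3 * freqs)   # shape (L, N)

model = ChainModel(L, N)
model.train_on_sequence(clean, epochs=100)             # T = 100 iterations, as on the slide

noisy = clean + 0.1 * np.random.default_rng(1).normal(size=clean.shape)
model_noisy = ChainModel(L, N)
model_noisy.train_on_sequence(noisy, epochs=100)       # same task with additive noise
```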

Model-based learning: Sequence with noise. The same input with additive noise; the steps of learning. The result of the learning is still very good, but (of course) less precise.

Model-based learning: Noise dependence of convergence. (Figure: the time evolution of the error, Err vs. iterations, for different noise amplitudes σ = 0, 0.05, 0.1, 0.3, 0.5.)

Noise dependence of speed. (Figure: iterations vs. σ.) The required number of iterations to reach a given precision increases only slightly with the noise amplitude.

Sequence length dependence of speed. (Figure: iterations vs. length of sequence, L.) The required number of iterations to reach a given precision does not depend on the length of the sequence.

Input dimension dependence of speed. (Figure: iterations vs. input dimension, N.) The required number of iterations to reach a given precision does not depend on the dimension of the input.

Model-based learning: Evaluation of learning speed. Since the algorithm performs L·N operations per iteration, and the required number of iterations to reach a given precision depends neither on the length of the sequence (L) nor on the dimension of the input (N), the whole learning procedure requires O(L·N) operations.

Model-based learning avoids sequence ambiguity. The task is to learn a self-crossing sequence; the sequence is noisy. (Figure: the result of the learning.) The usual way of solving this problem is to extend the state space with the recent past of the system.

Model-based learning avoids sequence ambiguity II. Two portions of the sequence overlap; the sequence is, of course, noisy. (Figure: the result of the learning.) Traditionally, this problem can be solved if the state space is extended with the derivative of the signal.

Model-based learning avoids sequence ambiguity III. Two portions of the sequence overlap and their directions are the same. (Figures: the noisy signal and the well-trained connections.) This type of problem is hard to solve with traditional methods, because the length of the overlapping part is not known in advance.
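A sketch of why the chain model can resolve such ambiguities: the prediction can be driven by which chain cell is internally active (the history carried by the chain dynamics), not only by matching the current observation. The decoding rule below, including the tie-breaking preference for the successor of the previously active cell, is an assumption for illustration, not the slides' exact method.

```python
import numpy as np

def step(model, x, prev_active, tol=1e-1):
    """Return the index of the chain cell to treat as active for input x.

    model: a ChainModel as sketched above; prev_active: previously active cell.
    """
    dists = np.linalg.norm(model.W - x, axis=1)
    # Several cells may match equally well on a self-crossing or overlapping segment.
    candidates = np.flatnonzero(dists < dists.min() + tol)
    succ = (prev_active + 1) % model.L
    # Ambiguity resolved by the internal dynamics: prefer the successor of the
    # previously active cell whenever it is among the good matches.
    return succ if succ in candidates else int(candidates[0])
```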

Model-based learning: Multiple sequences. Learning multiple sequences needs:
- A set of built-in neuron chains as models of the sequences.
- An organizer algorithm to conduct this orchestra.
Different strategies can exist, but the organizer's most important functions are: the initiation of a model's activity, the termination of it, and harmonizing the predictions of the different models with each other and with the external world. One possible sketch of such an organizer follows.
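The slides do not specify the organizer, so the following is only one hypothetical realization of the three functions listed above: it initiates a chain model whose first receptive field matches the input, keeps it active while its predictions track the observations, and terminates it otherwise. The thresholds and matching rules are assumptions.

```python
import numpy as np

class Organizer:
    """Hypothetical organizer for several ChainModel instances."""

    def __init__(self, models, start_tol=0.2, track_tol=0.5):
        self.models = models
        self.start_tol, self.track_tol = start_tol, track_tol
        self.active = None          # (model index, chain position) or None

    def observe(self, x):
        if self.active is not None:
            m_idx, pos = self.active
            model = self.models[m_idx]
            nxt = (pos + 1) % model.L
            # Harmonize prediction with the external world: keep the model
            # active only while it keeps predicting the observations.
            if np.linalg.norm(model.W[nxt] - x) < self.track_tol:
                self.active = (m_idx, nxt)
                return model.W[(nxt + 1) % model.L]    # prediction of the next state
            self.active = None                          # termination
        # Initiation: look for a model whose chain start matches the input.
        for m_idx, model in enumerate(self.models):
            if np.linalg.norm(model.W[0] - x) < self.start_tol:
                self.active = (m_idx, 0)
                return model.W[1]
        return None                                     # no model recognizes the input
```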

Model-based learning: Outlook