Unsupervised Learning & Self-Organizing Maps: Learning From Examples

Presentation transcript:

Unsupervised Learning & Self-Organizing Maps

Learning From Examples

Input:    1    3    4    6    5    2
Output:   1    9   16   36   25    4

Each output is the square of its input; the net must infer the mapping y = x² from the example pairs alone.

Supervised Learning
• When a set of targets of interest is provided by an external teacher, we say that the learning is supervised.
• The targets usually take the form of an input-output mapping that the net should learn.

Feed-Forward Nets
• Feed-forward nets learn under supervision:
  - classification: all patterns in the training set are paired with the "correct" classification
  - function approximation: the values to be learned at the training points are known (see the sketch below)
• The recurrent networks we saw also learn under supervision.
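As a concrete illustration of supervised function approximation, here is a minimal numpy sketch, not part of the original slides, that trains a tiny feed-forward net on the x → x² pairs from the title table; the architecture, scaling, and learning rate are assumptions:

```python
import numpy as np

# Training pairs from the "Learning From Examples" table: y = x^2
X = np.array([[1.0], [3.0], [4.0], [6.0], [5.0], [2.0]]) / 6.0      # scale inputs
Y = np.array([[1.0], [9.0], [16.0], [36.0], [25.0], [4.0]]) / 36.0  # scale targets

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0.0, 1.0, (1, 8)), np.zeros(8)   # input -> 8 hidden units
W2, b2 = rng.normal(0.0, 0.5, (8, 1)), np.zeros(1)   # hidden -> output

for _ in range(30000):
    H = np.tanh(X @ W1 + b1)      # hidden activations
    P = H @ W2 + b2               # network output
    E = P - Y                     # error signal supplied by the "teacher"
    dW2 = H.T @ E / len(X)        # backpropagated mean-squared-error gradients
    dH = E @ W2.T * (1.0 - H**2)
    dW1 = X.T @ dH / len(X)
    W2 -= 0.2 * dW2; b2 -= 0.2 * E.mean(axis=0)
    W1 -= 0.2 * dW1; b1 -= 0.2 * dH.mean(axis=0)

print(np.round(P * 36.0, 1).ravel())  # approaches [1, 9, 16, 36, 25, 4]
```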

Hopfield Nets
• Associative nets (Hopfield-like) store predefined memories.
• During learning, the net goes over all patterns to be stored (Hebb rule):

  $w_{ij} = \frac{1}{N} \sum_{\mu} \xi_i^{\mu} \xi_j^{\mu}$

  where the $\xi^{\mu}$ are the patterns to be stored and N is the number of units.
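A minimal numpy sketch of Hebbian storage and recall in a Hopfield-style net (the patterns, sizes, and simplified synchronous update are illustrative assumptions):

```python
import numpy as np

def hebb_store(patterns):
    """Hebb rule: W = (1/N) * sum_mu outer(p_mu, p_mu), with no self-connections."""
    N = patterns.shape[1]
    W = np.zeros((N, N))
    for p in patterns:
        W += np.outer(p, p)
    W /= N
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Simplified synchronous recall: repeatedly apply sign(W s)."""
    for _ in range(steps):
        state = np.sign(W @ state)
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = hebb_store(patterns)
noisy = patterns[0].copy(); noisy[0] *= -1   # corrupt one bit of a stored memory
print(recall(W, noisy))                      # the predefined memory is restored
```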

Hopfield, Cntd
• Hopfield nets learn patterns whose organization is defined externally.
• "Good" configurations of the system are the predefined memories.

How do we learn?
• Many times there is no "teacher" to tell us how to do things:
  - a baby learning how to walk
  - grouping of events into a meaningful scene (making sense of the world)
  - development of ocular dominance and orientation selectivity in our visual system

Self-Organization
• Network organization is fundamental to the brain:
  - functional structure
  - layered structure
• Both parallel processing and serial processing require organization of the brain.

Self-Organizing Networks
• Discover significant patterns or features in the input data.
• Discovery is done without a teacher.
• Synaptic weights are changed according to local rules.
• The changes affect a neuron's immediate environment until a final configuration develops.

Question
How can a useful configuration develop from self-organization? Can random activity produce coherent structure?

Answer: Biologically
• There are self-organized structures in the brain.
• Neuronal networks grow and evolve to be computationally efficient, both in vitro and in vivo.
• Random activation of the visual system can lead to layered and structured organization.

Answer: Physically
• Local interactions can lead to global order:
  - magnetic materials
  - electric circuits
  - synchronization of coupled oscillators (see the sketch below)
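The oscillator example can be made concrete with a standard Kuramoto-type simulation (not from the slides; the ring topology, coupling strength, and step size are assumptions): each unit interacts only with its two neighbours, yet the ring settles into an ordered, phase-locked state.

```python
import numpy as np

# Ring of oscillators, each coupled only to its two nearest neighbours:
# purely local interactions.
rng = np.random.default_rng(1)
N, K, dt = 50, 2.0, 0.05
theta = rng.uniform(0, 2 * np.pi, N)   # random initial phases: disorder
omega = np.ones(N)                     # identical natural frequencies

def neighbour_coherence(theta):
    """Mean cos of neighbouring phase differences: 1.0 when neighbours lock."""
    return np.cos(np.roll(theta, -1) - theta).mean()

print(round(neighbour_coherence(theta), 3))   # near 0: incoherent start
for _ in range(4000):
    left, right = np.roll(theta, 1), np.roll(theta, -1)
    theta += dt * (omega + K * (np.sin(left - theta) + np.sin(right - theta)))
print(round(neighbour_coherence(theta), 3))   # close to 1: a globally ordered,
                                              # phase-locked pattern emerges
```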

Mathematically
• A. Turing: global order can arise from local interactions.
• Random local interactions between neighboring neurons can coalesce into states of global order and lead to coherent spatio-temporal behavior.

Mathematically, Cntd
• Network organization takes place at two levels that interact with each other:
  - Activity: certain activity patterns are produced by a given network in response to input signals.
  - Connectivity: synaptic weights are modified in response to neuronal signals in the activity patterns.
• Self-organization is achieved if there is positive feedback between changes in synaptic weights and activity patterns.

Principles of Self-Organization
1. Modifications in synaptic weights tend to self-amplify.
2. Limitation of resources leads to competition among synapses.
3. Modifications in synaptic weights tend to cooperate.
4. Order and structure in activation patterns represent redundant information that is transformed into knowledge by the network.

Redundancy
• Unsupervised learning depends on redundancy in the data.
• Learning is based on finding patterns and extracting features from the data.

Types of Information
• Familiarity: the net learns how similar a given new input is to the typical (average) pattern it has seen before; the net finds principal components in the data.
• Clustering: the net finds the appropriate categories based on correlations in the data (see the competitive-learning sketch below).
• Encoding: the output represents the input using a smaller number of bits.
• Feature mapping: the net forms a topographic map of the input.
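For the clustering case, a minimal winner-take-all (competitive learning) sketch, with assumed toy data and learning rate; each output unit's weight vector comes to represent one category:

```python
import numpy as np

rng = np.random.default_rng(5)
# Two hidden "categories": Gaussian clouds around (0,0) and (3,3)
data = np.vstack([rng.normal([0.0, 0.0], 0.3, (200, 2)),
                  rng.normal([3.0, 3.0], 0.3, (200, 2))])
rng.shuffle(data)

W = data[:2].copy()                      # one prototype per output unit,
eta = 0.05                               # initialised at data points to avoid dead units
for x in data:
    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    W[winner] += eta * (x - W[winner])   # only the winning unit moves toward x
print(np.round(W, 2))                    # rows end up near the two cluster centres
```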

Possible Applications
• Familiarity and PCA can be used to analyze unknown data.
• PCA is used for dimension reduction (sketched below).
• Encoding is used for vector quantization.
• Clustering can be applied to any type of data.
• Feature mapping is important for dimension reduction and for functionality (as in the brain).
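A minimal numpy sketch of PCA-based dimension reduction on assumed toy data (not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic 2-D data stretched along one direction
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.5], [0.0, 0.5]])
X -= X.mean(axis=0)                      # centre the data

C = X.T @ X / len(X)                     # correlation matrix of the data
eigvals, eigvecs = np.linalg.eigh(C)     # eigenvalues in ascending order
pc1 = eigvecs[:, -1]                     # first principal component

X_1d = X @ pc1                           # 2-D -> 1-D projection
print(eigvals, np.round(pc1, 3))
```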

Simple Models
• The network has inputs and outputs.
• There is no feedback from the environment ⇒ no supervision.
• The network updates the weights following some learning rule, and finds patterns, features, or categories within the inputs presented to it.

Unsupervised Hebbian Learning
• One linear unit, y = w · x, trained with Hebbian learning: Δw = η y x.
• Problems:
  - in general, w is not bounded
  - assuming it is bounded, we can show that it is not stable (see the sketch below)
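A minimal sketch of the instability (toy data and rate are assumptions): with no bound or decay term, the plain Hebbian weight vector blows up:

```python
import numpy as np

rng = np.random.default_rng(3)
# Correlated 2-D inputs (an assumed toy distribution)
X = rng.normal(size=(1000, 2)) @ np.array([[2.0, 0.3], [0.0, 0.5]])

w, eta = rng.normal(size=2), 0.01
for x in X:
    y = w @ x                  # linear unit: y = w . x
    w += eta * y * x           # plain Hebb: Delta w = eta * y * x
print(np.linalg.norm(w))       # enormous: with no decay term, ||w|| diverges
```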

Oja's Rule
• The learning rule is Hebbian-like: Δw = η y (x - y w).
• The change in weight depends on the product of the neuron's output and input, with a term that makes the weights decrease.
• Alternatively, we could have normalized the weight vector after each update, keeping its norm equal to one.
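The same loop with Oja's decay term is stable; a minimal sketch on the same toy data (rate and seed are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 2)) @ np.array([[2.0, 0.3], [0.0, 0.5]])  # toy data

w, eta = rng.normal(size=2), 0.005
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)    # Oja: Hebbian growth term minus y^2 * w decay
print(np.round(w, 3), round(np.linalg.norm(w), 3))   # norm settles near 1
```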

Oja's Rule, Cntd
Such a net converges to a weight vector that:
• has norm = 1
• lies in the direction of the maximal eigenvector of C (the correlation matrix of the data)
• maximizes the (average) value of y² = (w · x)²

Oja's Rule, Cntd
• This means that the weight vector points at the first principal component of the data.
• The network learns a feature of the data without any prior knowledge.
• This is called feature extraction (see the check below).
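Continuing the Oja sketch above, we can check this claim directly against the eigendecomposition of the data's correlation matrix:

```python
# Uses X and w from the Oja sketch above.
C = X.T @ X / len(X)
_, V = np.linalg.eigh(C)             # eigenvalues ascending: last column is maximal
pc1 = V[:, -1]
print(np.round(pc1, 3))              # equals w up to sign
print(round(abs(w @ pc1), 3))        # ~1.0: w is aligned with the first PC
```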

Visual Model
Linsker (1986) proposed a model of self-organization in the visual system, based on unsupervised Hebbian learning:
• Input is random dots (does not need to be structured).
• Layers are arranged as in the visual cortex, with feed-forward connections only (no lateral connections).
• Each neuron receives input from a well-defined area in the previous layer (its "receptive field").
• The network developed center-surround cells in the 2nd layer of the model and orientation-selective cells in a higher layer.
• A self-organized structure evolved from (local) Hebbian updates.
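A heavily simplified toy version of the idea (an assumption-laden sketch, not Linsker's actual multi-layer model): spatially correlated activity, obtained here by smoothing random noise, feeds a single neuron trained with a local, Oja-stabilized Hebb rule; its receptive-field weights develop smooth spatial structure from random input alone.

```python
import numpy as np

rng = np.random.default_rng(4)
rf = 7                                   # 7x7 receptive field (toy size)

def correlated_patch():
    """Random activity smoothed over neighbours -> spatially correlated input,
    standing in for the activity an earlier layer would feed forward."""
    a = rng.normal(size=(rf + 2, rf + 2))
    # 3x3 box smoothing via shifted sums (edges wrap; fine for a toy)
    s = sum(np.roll(np.roll(a, di, 0), dj, 1)
            for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
    return s[1:-1, 1:-1].ravel()

w, eta = rng.normal(size=rf * rf) * 0.1, 0.01
for _ in range(20000):
    x = correlated_patch()
    y = w @ x
    w += eta * y * (x - y * w)           # local, Oja-stabilized Hebbian update
print(np.round(w.reshape(rf, rf), 2))    # a smooth, spatially structured pattern
```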