Unsupervised Learning & Self-Organizing Maps

Learning From Examples

Supervised Learning
• When a set of targets of interest is provided by an external teacher, we say that the learning is supervised
• The targets usually take the form of an input–output mapping that the net should learn

Feed-Forward Nets
• Feed-forward nets learn under supervision
• Classification: all patterns in the training set are paired with the "correct classification", e.g. classifying written digits into 10 categories (the US postal ZIP code project)
• Function approximation: the values to be learnt at the training points are known
• Time-series prediction, such as weather forecasting and stock values

Hopfield Nets
• Associative nets (Hopfield-like) store predefined memories $\xi^{\mu}$
• During learning, the net goes over all patterns to be stored (Hebb rule): $w_{ij} = \frac{1}{N}\sum_{\mu} \xi_i^{\mu} \xi_j^{\mu}$

Hopfield, Cont'd
When presented with an input pattern that is similar to one of the memories, the network restores the corresponding memory, previously stored in its weights ("synapses")
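
A minimal sketch of this store-and-recall behavior (my own toy illustration, not from the lecture; it uses synchronous updates for simplicity, where the textbook network updates asynchronously):

```python
import numpy as np

def store(patterns):
    """Hebb rule: accumulate the outer products of the +/-1 patterns."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, x, steps=20):
    """Repeatedly threshold the local fields until the state settles."""
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x

rng = np.random.default_rng(0)
memories = rng.choice([-1, 1], size=(3, 100))         # three random 100-bit patterns
W = store(memories)

probe = memories[0].copy()
probe[rng.choice(100, size=15, replace=False)] *= -1  # corrupt 15 bits
print(np.array_equal(recall(W, probe), memories[0]))  # usually True
```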

How do we learn?
• Often there is no "teacher" to tell us how to do things
• A baby learning how to walk
• Grouping events into a meaningful scene (making sense of the world)
• Development of ocular dominance and orientation selectivity in our visual system

Self-Organization
• Network organization is fundamental to the brain: functional structure, layered structure
• Both parallel processing and serial processing require organization of the brain

Self-Organizing Networks
• Discover significant patterns or features in the input data
• Discovery is done without a teacher
• Synaptic weights are changed according to local rules
• The changes affect a neuron's immediate environment until a final configuration develops

Questions
• How can a useful configuration develop from self-organization?
• Can random activity produce coherent structure?

Answer: Biologically
• There are self-organized structures in the brain
• Neuronal networks grow and evolve to be computationally efficient, both in vitro and in vivo
• Random activation of the visual system can lead to layered and structured organization

Answer: Mathematically
• Global order can arise from local interactions (A. Turing, 1952)
• Random local interactions between neighboring neurons can coalesce into states of global order and lead to coherent spatio-temporal behavior

Mathematically, Cont'd
• Network organization takes place at two levels that interact with each other:
– Activity: certain activity patterns are produced by a given network in response to input signals
– Connectivity: synaptic weights are modified in response to neuronal signals in the activity patterns
• Self-organization is achieved if there is positive feedback between changes in synaptic weights and activity patterns

Principles of Self-Organization
1. Modifications in synaptic weights tend to self-amplify
2. Limitation of resources leads to competition among synapses
3. Modifications in synaptic weights tend to cooperate
4. Order and structure in activation patterns represent redundant information that is transformed into knowledge by the network

Redundancy
• Unsupervised learning depends on redundancy in the data
• Learning is based on finding patterns and extracting features from the data

Unsupervised Hebbian Learning
• A linear unit: $y = \sum_j w_j x_j$
• The learning rule is Hebbian-like (Oja's rule): $\Delta w_j = \eta \, y \, (x_j - y w_j)$
• The change in weight depends on the product of the neuron's output and input, with a term that makes the weights decrease

Unsupervised Hebbian Learning, Cont'd
• Such a net converges to a weight vector that maximizes the average of $y^2$
• This means that the weight vector points along the first principal component of the data
• The network learns a feature of the data without any prior knowledge
• This is called feature extraction
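
As a toy illustration of this convergence (my own sketch, not from the lecture), the following trains a single linear unit with the rule above on correlated 2-D data; the learned weight vector lines up, up to sign, with the leading principal component:

```python
import numpy as np

rng = np.random.default_rng(1)
# Correlated 2-D data: variance is largest along the (1, 1) direction
C = np.array([[3.0, 2.0], [2.0, 3.0]])
X = rng.multivariate_normal(mean=[0, 0], cov=C, size=5000)

w = rng.normal(size=2) * 0.1    # small random initial weights
eta = 0.01                      # learning rate

for x in X:
    y = w @ x                   # linear unit's output
    w += eta * y * (x - y * w)  # Oja's rule: Hebbian term plus decay

# Compare with the true first principal component of C
eigvals, eigvecs = np.linalg.eigh(C)
pc1 = eigvecs[:, -1]            # eigenvector of the largest eigenvalue
print(w / np.linalg.norm(w), pc1)  # equal up to sign
```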

Visual Model
Linsker (1986) proposed a model of self-organization in the visual system, based on unsupervised Hebbian learning
– Input is random dots (does not need to be structured)
– Layers as in the visual cortex, with feed-forward connections only (no lateral connections)
– Each neuron receives inputs from a well-defined area in the previous layer ("receptive fields")
– The network developed center-surround cells in the 2nd layer of the model and orientation-selective cells in a higher layer
– A self-organized structure evolved from (local) Hebbian updates

Unsupervised Competitive Learning
• In Hebbian networks, all neurons can fire at the same time
• Competitive learning means that only a single neuron from each group fires at each time step
• Output units compete with one another: these are winner-take-all (WTA) units ("grandmother cells")

Simple Competitive Learning
[Network diagram: N input units $x_1, \dots, x_N$ fully connected to P output neurons $y_1, \dots, y_P$ through a $P \times N$ weight matrix $W$]

Network Activation
• Each output unit computes the field $h_i = \sum_j w_{ij} x_j = \mathbf{w}_i \cdot \mathbf{x}$
• The unit with the highest field fires: $i^* = \arg\max_i h_i$ is the winner unit
• Geometrically, $\mathbf{w}_{i^*}$ is the weight vector closest to the current input vector (for normalized weight vectors)
• The winning unit's weight vector is updated to be even closer to the current input vector

Learning
Starting with small random weights, at each step:
1. A new input vector $\mathbf{x}$ is presented to the network
2. All fields are calculated to find the winner $i^*$
3. $\mathbf{w}_{i^*}$ is updated to be closer to the input: $\Delta \mathbf{w}_{i^*} = \eta (\mathbf{x} - \mathbf{w}_{i^*})$

Result
Each output unit moves to the center of mass of a cluster of input vectors ⇒ clustering
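
The three steps above fit in a few lines of Python. A hypothetical sketch (mine, not from the lecture), clustering 2-D points with P winner-take-all units; picking the unit whose weight vector is closest to the input is equivalent to picking the highest field when the weights are normalized:

```python
import numpy as np

rng = np.random.default_rng(2)
# Three Gaussian clusters of 2-D input vectors
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
X = np.concatenate([c + rng.normal(scale=0.5, size=(200, 2)) for c in centers])
rng.shuffle(X)

P = 3                                               # number of output (WTA) units
W = X[rng.choice(len(X), P, replace=False)].copy()  # init from data (avoids dead units)
eta = 0.05                                          # learning rate

for x in X:
    winner = np.argmin(np.linalg.norm(W - x, axis=1))  # step 2: find the winner
    W[winner] += eta * (x - W[winner])                 # step 3: move it toward the input

print(W)  # each row ends up near the center of mass of one cluster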

Model: Horizontal & Vertical Lines
Rumelhart & Zipser, 1985
• Problem: identify vertical or horizontal signals
• Inputs are 6 × 6 arrays
• Intermediate layer with 8 WTA units
• Output layer with 2 WTA units
• The task cannot be solved with a single layer

Rumelhart & Zipser, Cont'd
[Figure: examples of horizontal (H) and vertical (V) line inputs]

Self-Organizing (Kohonen) Maps
• Competitive networks (WTA neurons)
• Output neurons are placed on a lattice, usually 2-dimensional
• Neurons become selectively tuned to various input patterns (stimuli)
• The locations of the tuned (winning) neurons become ordered in such a way that they create a meaningful coordinate system for different input features ⇒ a topographic map of the input patterns is formed

SOMs, Cont'd
The spatial locations of the neurons in the map are indicative of statistical features present in the inputs (stimuli) ⇒ self-organization

Biological Motivation
In the brain, sensory inputs are represented by topologically ordered computational maps
– Tactile inputs
– Visual inputs (center-surround, ocular dominance, orientation selectivity)
– Acoustic inputs

Biological Motivation, Cont'd
• Computational maps are a basic building block of sensory information processing
• A computational map is an array of neurons representing slightly differently tuned processors (filters) that operate in parallel on sensory signals
• These neurons transform input signals into a place-coded structure

Kohonen Maps
• Simple case: 2-d input and 2-d output layer
• No lateral connections
• The weight update is applied to the winning neuron and its surrounding neighborhood
• The output layer is a sort of elastic net that wants to come as close as possible to the inputs
• The output map conserves the topological relationships of the inputs
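
In symbols, the neighborhood update the slide describes is $\Delta \mathbf{w}_i = \eta \, h(i, i^*) \, (\mathbf{x} - \mathbf{w}_i)$, where $h(i, i^*)$ decays with the lattice distance between neuron $i$ and the winner $i^*$. A minimal sketch (hypothetical code assuming a Gaussian neighborhood and exponentially decaying learning rate and radius; not taken from the lecture):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(size=(3000, 2))        # 2-d inputs, uniform in the unit square

side = 8                               # 8 x 8 output lattice
W = rng.uniform(size=(side, side, 2))  # one 2-d weight vector per lattice node
# Lattice coordinates of every output neuron, used for neighborhood distances
grid = np.stack(np.meshgrid(np.arange(side), np.arange(side), indexing="ij"), axis=-1)

eta0, sigma0, T = 0.5, 3.0, float(len(X))
for t, x in enumerate(X):
    eta = eta0 * np.exp(-t / T)        # decaying learning rate
    sigma = sigma0 * np.exp(-t / T)    # shrinking neighborhood radius

    # Winner: the lattice node whose weight vector is closest to the input
    d = np.linalg.norm(W - x, axis=-1)
    i_star = np.unravel_index(np.argmin(d), d.shape)

    # Gaussian neighborhood on the lattice, centered on the winner
    lat_d2 = np.sum((grid - grid[i_star]) ** 2, axis=-1)
    h = np.exp(-lat_d2 / (2 * sigma**2))

    # Move every weight vector toward the input, scaled by its neighborhood value
    W += eta * h[..., None] * (x - W)

# Lattice neighbors now map to nearby inputs: the "elastic net" of the slide
```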

Feature Mapping

Kohonen Maps, Cont'd
Examples of topology-conserving mappings between input and output spaces:
– Retinotopic mapping between the retina and the cortex
– Ocular dominance
– Somatosensory mapping (the homunculus)

Models
Goodhill (1993) proposed a model for the development of retinotopy and ocular dominance, based on Kohonen maps
– Two retinas project to a single layer of cortical neurons
– Retinal inputs were modeled by random-dot patterns
– Between-eye correlations were added to the inputs
– The result is an ocular dominance map and a retinotopic map as well

Models, Cont'd
Farah (1998) proposed an explanation for the spatial ordering of the homunculus using a simple SOM
– In the womb, the fetus lies with its hands close to its face and its feet close to its genitals
– This could explain the order of the somatosensory areas in the homunculus

Other Models
• Semantic self-organizing maps to model language acquisition
• Kohonen feature mapping to model layered organization in the LGN
• Combination of unsupervised and supervised learning to model complex computations in the visual cortex

Examples of Applications
• Kohonen (1984) – speech recognition: a map of phonemes in the Finnish language
• Optical character recognition – clustering of letters of different fonts
• Angéniol et al. (1988) – travelling salesman problem (an optimization problem)
• Kohonen (1990) – learning vector quantization (a pattern classification problem)
• Ritter & Kohonen (1989) – semantic maps

Summary
• Unsupervised learning is very common
• Unsupervised learning requires redundancy in the stimuli
• Self-organization is a basic property of the brain's computational structure
• SOMs are based on competition (WTA units), cooperation, and synaptic adaptation
• SOMs conserve topological relationships between the stimuli
• Artificial SOMs have many applications in computational neuroscience