2 Unsupervised Learning & Self-Organizing Maps

3 Learning From Examples  Inputs: 1 3 4 6 5 2  Targets: 1 9 16 36 25 4 (each target is the square of its input)

4 Supervised Learning • When a set of target outputs is provided by an external teacher, the learning is called supervised • The targets usually take the form of an input-output mapping that the net should learn

5 Feed-Forward Nets • Feed-forward nets learn under supervision • Classification – all patterns in the training set are paired with the “correct classification”, e.g. classifying handwritten digits into 10 categories (the US postal ZIP-code project) • Function approximation – the values to be learned at the training points are known • Time-series prediction, such as weather forecasts and stock values

6 Hopfield Nets • Associative nets (Hopfield-like) store predefined memories. • During learning, the net goes over all patterns to be stored (Hebb rule):
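The slide's formula is not included in the transcript; a standard form of the Hebb rule for storing $P$ binary patterns $\xi^{\mu} \in \{-1,+1\}^{N}$ in a Hopfield net of $N$ units is $w_{ij} = \frac{1}{N}\sum_{\mu=1}^{P} \xi_i^{\mu}\,\xi_j^{\mu}$, with $w_{ii} = 0$.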

7 Hopfield, Cntd When presented with an input pattern that is similar to one of the memories, the network restores the right memory, previously stored in its weights (“synapses”)
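The recall dynamics are not spelled out in the transcript; a common formulation is that each unit repeatedly updates as $s_i \leftarrow \operatorname{sgn}\big(\sum_j w_{ij}\, s_j\big)$ until the network settles into the stored pattern closest to the initial input.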

8 How Do We Learn? • Often there is no “teacher” to tell us how to do things • A baby learning how to walk • Grouping of events into a meaningful scene (making sense of the world) • Development of ocular dominance and orientation selectivity in our visual system

9 Self-Organization • Network organization is fundamental to the brain • Functional structure • Layered structure • Both parallel and serial processing require organization of the brain

10 Self-Organizing Networks • Discover significant patterns or features in the input data • Discovery is done without a teacher • Synaptic weights are changed according to local rules • The changes affect a neuron’s immediate environment until a final configuration develops

11 Questions How can a useful configuration develop from self-organization? Can random activity produce coherent structure?

12 Answer: Biologically There are self-organized structures in the brain. Neuronal networks grow and evolve to be computationally efficient, both in vitro and in vivo. Random activation of the visual system can lead to layered and structured organization.

13 Answer: Mathematically • A. Turing, 1952: global order can arise from local interactions • Random local interactions between neighboring neurons can coalesce into states of global order and lead to coherent spatio-temporal behavior

14 Mathematically, Cntd • Network organization takes place at two levels that interact with each other: • Activity: certain activity patterns are produced by a given network in response to input signals • Connectivity: synaptic weights are modified in response to neuronal signals in the activity patterns • Self-organization is achieved if there is positive feedback between changes in synaptic weights and activity patterns

15 Principles of Self-Organization 1. Modifications in synaptic weights tend to self-amplify 2. Limitation of resources leads to competition among synapses 3. Modifications in synaptic weights tend to cooperate 4. Order and structure in activation patterns represent redundant information that is transformed into knowledge by the network

17 Redundancy Unsupervised learning depends on redundancy in the data. Learning is based on finding patterns and extracting features from the data.

18 Unsupervised Hebbian Learning A linear unit computes a weighted sum of its inputs. The learning rule is Hebbian-like: the change in each weight depends on the product of the neuron’s output and its input, with a term that makes the weights decrease.
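The slide's equations are missing from the transcript; the description matches a linear unit $y = \sum_i w_i x_i$ trained with an Oja-style rule, $\Delta w_i = \eta\, y\,(x_i - y\, w_i)$, where the $-\eta\, y^2 w_i$ term is what makes the weights decrease and keeps their norm bounded.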

19 US Hebbian Learning, Cntd Such a net converges to a weight vector that maximizes the average of the squared output. This means that the weight vector points along the first principal component of the data. The network learns a feature of the data without any prior knowledge. This is called feature extraction.
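A minimal numerical sketch of this claim, assuming the Oja-style rule above; the data, learning rate, and variable names are illustrative and not taken from the original slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D data whose largest variance lies along the (1, 1) direction
C = np.array([[3.0, 2.0],
              [2.0, 3.0]])                 # covariance matrix of the inputs
X = rng.multivariate_normal([0.0, 0.0], C, size=5000)

w = 0.01 * rng.normal(size=2)              # small random initial weights
eta = 0.01                                 # learning rate (illustrative)

for x in X:
    y = w @ x                              # linear unit output
    w += eta * y * (x - y * w)             # Hebbian term y*x with a decay term y^2 * w

# The learned weight vector should align (up to sign) with the first principal component
pc1 = np.linalg.eigh(C)[1][:, -1]
print("learned direction:", w / np.linalg.norm(w))
print("first PC         :", pc1)
```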

20 Visual Model Linsker (1986) proposed a model of self-organization in the visual system, based on unsupervised Hebbian learning – Input is random dot patterns (does not need to be structured) – Layers as in the visual cortex, with FF connections only (no lateral connections) – Each neuron receives inputs from a well-defined area in the previous layer (“receptive fields”) – The network developed center-surround cells in the 2nd layer of the model and orientation-selective cells in a higher layer – A self-organized structure evolved from (local) Hebbian updates

21 Unsupervised Competitive Learning In Hebbian networks, all neurons can fire at the same time. Competitive learning means that only a single neuron from each group fires at each time step. Output units compete with one another; these are winner-takes-all (WTA) units (grandmother cells).

22 Simple Competitive Learning Network diagram: N input units x1 … xN feed P output neurons y1 … yP through a P × N weight matrix W.

23 Network Activation The unit with the highest field h_i = w_i · x fires; i* is the winner unit. Geometrically, w_i* is the weight vector closest to the current input vector. The winning unit’s weight vector is updated to move even closer to the current input vector.

24 Learning Starting with small random weights, at each step: 1. a new input vector is presented to the network 2. all fields are calculated to find the winner 3. the winner’s weight vector w_i* is updated to be closer to the input
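A minimal sketch of these three steps, assuming the usual update that moves the winner's weights toward the input, Δw_i* = η (x − w_i*); the sizes, rates, and names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def competitive_learning(X, n_units=3, eta=0.1, n_epochs=20):
    """Winner-takes-all clustering: each weight vector drifts toward the
    centre of mass of the inputs it wins."""
    W = 0.1 * rng.normal(size=(n_units, X.shape[1]))       # small random weights
    for _ in range(n_epochs):
        for x in rng.permutation(X):
            # Winner = unit whose weight vector is closest to the input
            # (equivalent to the highest field h_i = w_i . x when weights are normalized)
            winner = np.argmin(np.linalg.norm(W - x, axis=1))
            W[winner] += eta * (x - W[winner])              # move the winner toward the input
    return W

# Three well-separated 2-D clusters; each unit should settle on one cluster centre
centres = [(0, 0), (4, 0), (0, 4)]
X = np.vstack([rng.normal(c, 0.3, size=(100, 2)) for c in centres])
print(np.round(competitive_learning(X), 2))
```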

25 Result Each output unit moves to the center of mass of a cluster of input vectors → clustering

26 Model: Horizontal & Vertical Lines Rumelhart & Zipser, 1985. Problem – identify vertical or horizontal signals. Inputs are 6 × 6 arrays. Intermediate layer with 8 WTA units. Output layer with 2 WTA units. Cannot work with only one layer.

27 Rumelhart & Zipser, Cntd (figure with panels labeled H and V)

28 Self-Organizing (Kohonen) Maps Competitive networks (WTA neurons). Output neurons are placed on a lattice, usually 2-dimensional. Neurons become selectively tuned to various input patterns (stimuli). The locations of the tuned (winning) neurons become ordered in such a way that a meaningful coordinate system for different input features is created → a topographic map of input patterns is formed.

29 SOMs, Cntd Spatial locations of the neurons in the map are indicative of statistical features present in the inputs (stimuli) → self-organization

30 Biological Motivation In the brain, sensory inputs are represented by topologically ordered computational maps – Tactile inputs – Visual inputs (center-surround, ocular dominance, orientation selectivity) – Acoustic inputs

31 Biological Motivation, Cntd Computational maps are a basic building block of sensory information processing. A computational map is an array of neurons representing slightly differently tuned processors (filters) that operate in parallel on sensory signals. These neurons transform input signals into a place-coded structure.

32 Kohonen Maps Simple case: 2-d input and 2-d output layer. No lateral connections. The weight update is applied to the winning neuron and its surrounding neighborhood. The output layer is a sort of elastic net that tries to come as close as possible to the inputs. The output map conserves the topological relationships of the inputs.
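A minimal sketch of such a map, assuming a Gaussian lattice neighborhood and linearly decaying learning rate and neighborhood width; the grid size, rates, and data are illustrative choices, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(2)

def train_som(X, grid=(8, 8), n_epochs=30, eta0=0.5, sigma0=3.0):
    """Kohonen map: the winner and its lattice neighbours are pulled toward
    each input, so nearby neurons end up tuned to nearby inputs."""
    rows, cols = grid
    W = rng.random((rows, cols, X.shape[1]))              # random initial weights
    # lattice coordinates of every output neuron
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    n_steps, t = n_epochs * len(X), 0
    for _ in range(n_epochs):
        for x in rng.permutation(X):
            eta = eta0 * (1 - t / n_steps)                # decaying learning rate
            sigma = sigma0 * (1 - t / n_steps) + 0.5      # shrinking neighbourhood width
            d = np.linalg.norm(W - x, axis=-1)            # distance of each neuron's weights to x
            winner = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighbourhood on the lattice, centred on the winner
            lattice_d2 = np.sum((coords - np.array(winner)) ** 2, axis=-1)
            h = np.exp(-lattice_d2 / (2 * sigma ** 2))
            W += eta * h[..., None] * (x - W)             # pull winner and neighbours toward x
            t += 1
    return W

# Uniform 2-D inputs: the trained map should unfold into an ordered grid over the unit square
X = rng.random((1000, 2))
W = train_som(X)
print(W[0, 0], W[0, -1], W[-1, 0], W[-1, -1])             # corners of the map span the square
```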

33 Feature Mapping

34 Kohonen Maps, Cntd Examples of topology-conserving mappings between input and output spaces – Retinotopic mapping between the retina and the cortex – Ocular dominance – Somatosensory mapping (the homunculus)

36 Models Goodhill (1993) proposed a model for the development of retinotopy and ocular dominance, based on Kohonen maps – Two retinas project to a single layer of cortical neurons – Retinal inputs were modeled by random dot patterns – A between-eyes correlation was added to the inputs – The result is an ocular dominance map as well as a retinotopic map

38 Models, Cntd Farah (1998) proposed an explanation for the spatial ordering of the homunculus using a simple SOM. – In the womb, the fetus lies with its hands close to its face and its feet close to its genitals – This should explain the order of the somatosensory areas in the homunculus

39 Other Models Semantic self-organizing maps to model language acquisition. Kohonen feature mapping to model layered organization in the LGN. A combination of unsupervised and supervised learning to model complex computations in the visual cortex.

40 Examples of Applications Kohonen (1984) – speech recognition: a map of phonemes in the Finnish language. Optical character recognition – clustering of letters of different fonts. Angéniol et al. (1988) – travelling salesman problem (an optimization problem). Kohonen (1990) – learning vector quantization (a pattern classification method). Ritter & Kohonen (1989) – semantic maps.

41 Summary Unsupervised learning is very common. Unsupervised learning requires redundancy in the stimuli. Self-organization is a basic property of the brain’s computational structure. SOMs are based on – competition (WTA units) – cooperation – synaptic adaptation. SOMs conserve topological relationships between the stimuli. Artificial SOMs have many applications in computational neuroscience.
