Slide 1 – Neural Networks, Chapter 9. Joost N. Kok, Universiteit Leiden.
Slide 2 – Unsupervised Competitive Learning
– Competitive learning
– Winner-take-all units
– Cluster/categorize input data
– Feature mapping
Slide 3 – Unsupervised Competitive Learning
[figure: competitive network; output units labelled 1, 2, 3]
Slide 4 – Unsupervised Competitive Learning
[figure: n-dimensional input layer feeding a layer of output units, one of which is the winner]
Slide 5 – Simple Competitive Learning
– Winner: the output unit i* whose weight vector is closest to the input x, i.e. |w_{i*} − x| ≤ |w_i − x| for all i
– Realized in a network via lateral inhibition
Slide 6 – Simple Competitive Learning
– Update weights for the winning neuron only: Δw_{i*} = η (x − w_{i*})
Slide 7 – Simple Competitive Learning
– Update rule for all neurons: Δw_i = η δ_{i,i*} (x − w_i), where δ_{i,i*} = 1 for the winner and 0 otherwise
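A minimal sketch of this procedure, assuming prototypes compete by Euclidean distance; the function and parameter names are illustrative, not from the slides:

    import numpy as np

    def competitive_learning(data, n_units=3, eta=0.1, epochs=20, seed=0):
        """Winner-take-all competitive learning: only the winner moves."""
        rng = np.random.default_rng(seed)
        # Initialize weights to samples from the input (one of the
        # dead-unit remedies listed on the next slide).
        weights = data[rng.choice(len(data), n_units, replace=False)].copy()
        for _ in range(epochs):
            for x in rng.permutation(data):
                winner = np.argmin(np.linalg.norm(weights - x, axis=1))
                weights[winner] += eta * (x - weights[winner])  # dw = eta (x - w)
        return weights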
Slide 8 – Graph Bipartitioning
– Patterns: edges, presented as dipole stimuli
– Two output units, one per partition
Slide 9 – Simple Competitive Learning: the Dead Unit Problem
Solutions (a leaky-learning variant is sketched after this list):
– Initialize weights to samples from the input
– Leaky learning: also update the weights of the losers, but with a smaller learning rate
– Arrange the neurons geometrically and also update the winner's neighbors
– Turn on input patterns gradually
– Conscience mechanism
– Add noise to the input patterns
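For instance, leaky learning replaces the winner-only update in the sketch above with one where the losers also move, at a much smaller rate (eta_leak is an illustrative name):

    def leaky_update(weights, x, winner, eta=0.1, eta_leak=0.01):
        """Leaky learning: losers also drift toward x, so no unit starves."""
        for i in range(len(weights)):
            rate = eta if i == winner else eta_leak
            weights[i] += rate * (x - weights[i])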
Slide 10 – Vector Quantization
– Classes are represented by prototype vectors
– The prototypes induce a Voronoi tessellation of the input space
Slide 11 – Learning Vector Quantization
– Labelled sample data
– The update rule depends on whether the winning prototype's class matches the sample's label (sketch below)
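A sketch of the LVQ1 form of this rule: the winning prototype is attracted to the sample when their labels agree and repelled otherwise (names are illustrative):

    import numpy as np

    def lvq1_step(prototypes, proto_labels, x, label, eta=0.05):
        """One LVQ1 update on the winning prototype."""
        winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
        sign = 1.0 if proto_labels[winner] == label else -1.0
        prototypes[winner] += sign * eta * (x - prototypes[winner])
        return winner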
Slide 12 – Adaptive Resonance Theory
– Stability-plasticity dilemma
– Keep a supply of neurons; only use them when needed
– Requires a notion of "sufficiently similar"
Slide 13 – Adaptive Resonance Theory
1. Start with all weights = 1
2. Enable all output units
3. Find the winner among the enabled units
4. Test the match; if it fails, disable the winner and return to step 3
5. Update the weights (see the sketch below)
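A simplified, ART1-style reading of these steps for nonzero binary input vectors; the overlap scores, vigilance test, and weight update follow the common textbook form, and all names and constants are illustrative rather than the slide's exact algorithm:

    import numpy as np

    def art1(data, rho=0.7, max_units=10):
        """Simplified ART1-style clustering of nonzero binary vectors."""
        dim = data.shape[1]
        templates = [np.ones(dim)]               # start with all weights = 1
        assignments = []
        for x in data:
            enabled = list(range(len(templates)))  # enable all output units
            while enabled:
                # Find the winner among enabled units (normalized overlap).
                scores = [np.minimum(templates[j], x).sum()
                          / (0.5 + templates[j].sum()) for j in enabled]
                j = enabled[int(np.argmax(scores))]
                # Test the match against the vigilance parameter rho.
                if np.minimum(templates[j], x).sum() / x.sum() >= rho:
                    templates[j] = np.minimum(templates[j], x)  # update weights
                    assignments.append(j)
                    break
                enabled.remove(j)     # not "sufficiently similar": disable
            else:
                # Every unit rejected the pattern: recruit a fresh unit.
                if len(templates) < max_units:
                    templates.append(x.astype(float).copy())
                assignments.append(len(templates) - 1)
        return templates, assignments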
Slide 14 – Feature Mapping
– Geometrical arrangement of output units
– Nearby outputs correspond to nearby input patterns
– Feature map: a topology-preserving map
Slide 15 – Self Organizing Map
– Determine the winner: the neuron whose weight vector has the smallest distance to the input vector
– Move the weight vector w of the winning neuron towards the input i
[figure: before vs. after learning, w moves towards i]
Slide 16 – Self Organizing Map
– Impose a topological order on the competitive neurons (e.g., a rectangular map)
– Let the neighbors of the winner share the "prize" (the "postcode lottery" principle)
– After learning, neurons with similar weights tend to cluster on the map
Slide 17 – Self Organizing Map [figure]
Slide 19 – Self Organizing Map (demo)
– Input: uniformly randomly distributed points
– Output: a map of 20×20 neurons
– Training: starting with a large learning rate and neighborhood size, both are gradually decreased to facilitate convergence (see the sketch below)
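A compact sketch of this training loop on a rectangular grid, with both the learning rate and the Gaussian neighborhood width decayed linearly; the schedule and constants are illustrative:

    import numpy as np

    def train_som(data, grid=(20, 20), epochs=50, eta0=0.5, sigma0=10.0, seed=0):
        """SOM on a rectangular grid; eta and neighborhood shrink over time."""
        rng = np.random.default_rng(seed)
        rows, cols = grid
        coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
        weights = rng.random((rows * cols, data.shape[1]))
        steps, t = epochs * len(data), 0
        for _ in range(epochs):
            for x in rng.permutation(data):
                frac = t / steps
                eta = eta0 * (1 - frac)              # decaying learning rate
                sigma = sigma0 * (1 - frac) + 1e-2   # decaying neighborhood size
                winner = np.argmin(np.linalg.norm(weights - x, axis=1))
                # Gaussian neighborhood on the grid: neighbors share the "prize".
                d2 = np.sum((coords - coords[winner]) ** 2, axis=1)
                h = np.exp(-d2 / (2 * sigma ** 2))
                weights += eta * h[:, None] * (x - weights)
                t += 1
        return weights.reshape(rows, cols, -1)

Running this on data = np.random.default_rng(0).random((1000, 2)) mirrors the demo: a 20×20 map gradually unfolding over the unit square.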
Slide 20 – Self Organizing Map [figure]
Slide 26 – Feature Mapping
– Retinotopic map
– Somatosensory map
– Tonotopic map
Slide 27 – Feature Mapping [figure]
Slide 31 – Kohonen's Algorithm (the standard update is given below)
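In standard notation, Kohonen's update moves every unit j toward the input x(t), weighted by a neighborhood function Λ centred on the winner i*:

    w_j(t+1) = w_j(t) + \eta(t)\,\Lambda(j, i^*)\,\bigl(x(t) - w_j(t)\bigr),
    \qquad
    \Lambda(j, i^*) = \exp\!\left(-\frac{\lVert r_j - r_{i^*} \rVert^2}{2\sigma(t)^2}\right)

where r_j is unit j's position on the grid, and both the learning rate η(t) and the width σ(t) decrease over time, as in the demo above.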
Slide 33 – Travelling Salesman Problem [figure]
Slide 34 – Hybrid Learning Schemes
[figure: two-layer network; first layer unsupervised, second layer supervised]
Slide 35 – Counterpropagation
– First layer uses standard competitive learning
– Second (output) layer is trained using the delta rule (sketch below)
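A sketch of one forward/update step under this scheme, assuming the competitive first layer has already been trained as on the earlier slides; the names are illustrative:

    import numpy as np

    def counterprop_step(w_hidden, w_out, x, target, eta=0.1):
        """One counterpropagation step on a pre-trained competitive layer."""
        winner = np.argmin(np.linalg.norm(w_hidden - x, axis=1))
        y = w_out[winner]                       # winner-take-all: output is its weights
        w_out[winner] += eta * (target - y)     # delta rule on the active unit only
        return y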
Slide 36 – Radial Basis Functions
– First layer with normalized Gaussian activation functions (sketch below)
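A sketch of such a first layer, with a least-squares linear output layer added for completeness; the choice of centers, the width sigma, and the fitting method are assumptions, not from the slide:

    import numpy as np

    def rbf_features(X, centers, sigma=1.0):
        """Normalized Gaussian activations: each row sums to 1."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        g = np.exp(-d2 / (2 * sigma ** 2))
        return g / g.sum(axis=1, keepdims=True)

    def fit_rbf(X, y, centers, sigma=1.0):
        """Fit the linear output layer by least squares on the RBF features."""
        Phi = rbf_features(X, centers, sigma)
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return w   # predict with rbf_features(X_new, centers, sigma) @ w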