Slide 1
Pattern Recognition using Hebbian Learning and Floating-Gates
Certain pattern recognition problems have been shown to be easily solved by artificial neural networks, and many neural network chips have been made and sold. Most of these are not terribly biologically realistic.
Slide 2
[Figure: a network diagram with input layer, hidden layer, and output layer neurons. A 2-dimensional example: inputs x and y each range over (-10, 10); a single unit with weights w1 = 0.3, w2 = 0.7 produces a thresholded +1/-1 output.]
Slide 3
[Figure: a second unit over the same x, y inputs in (-10, 10), with weights w1 = 0.5, w2 = 0.11 and a +1/-1 output; threshold values around -0.53, 0.53, and -0.25 are marked on the plots.] Putting the two together, we respond to a smaller region of this 2-D input space.
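As a rough sketch of what these two slides describe, the snippet below builds two +1/-1 threshold units in Python/NumPy and combines them with an output unit that responds only where both hidden units do. The zero biases and the AND-style combination are illustrative assumptions, not the exact thresholds or circuit from the slides.

```python
import numpy as np

def threshold_unit(x, y, w1, w2, bias=0.0):
    """A +1/-1 threshold unit; w1*x + w2*y + bias = 0 is its decision line."""
    return 1 if w1 * x + w2 * y + bias > 0 else -1

def output_unit(x, y):
    """Responds (+1) only where both hidden units respond, i.e. in the
    intersection of their two half-planes: a smaller region of the 2-D input."""
    h1 = threshold_unit(x, y, 0.3, 0.7)    # first hidden unit (weights from slide 2)
    h2 = threshold_unit(x, y, 0.5, 0.11)   # second hidden unit (weights from slide 3)
    return 1 if h1 == 1 and h2 == 1 else -1

# Sample the (-10, 10) x (-10, 10) input range and measure how much of it
# the combined unit responds to.
xs = np.linspace(-10, 10, 41)
responses = np.array([output_unit(x, y) for x in xs for y in xs])
print(f"fraction of sampled inputs giving +1: {np.mean(responses == 1):.2f}")
```

Each hidden unit alone fires over roughly half the plane; the combined unit fires only in the wedge where both half-planes overlap.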
Slide 4
In general, we can apply this type of operation to an N-dimensional input, with the hidden units defining hyperplanes in this input space. The individual output units combine these hyperplanes to create specific subregions of this N-dimensional space. This is what pattern recognition is about.
[Figure: an "easy task" -- two 100 x 77 pixel images (a 7700-dimensional input space) feeding two output units, unit1 and unit2.] As you might expect, these two images live very far apart from each other in this very high-dimensional space. But if we had a set of 100 faces that we wanted to recognize, this might be harder. What happens if the faces are rotated, shifted, or scaled?
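To make the dimensionality concrete, the snippet below flattens two 100 x 77 images into points in a 7700-dimensional input space and measures the distance between them; random arrays are used here purely as stand-ins for the actual face and helicopter images.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-ins for a 100 x 77 pixel face image and helicopter image:
face = rng.random((100, 77))
helicopter = rng.random((100, 77))

# Each image becomes a single point in a 7700-dimensional input space.
face_vec = face.reshape(-1)        # shape (7700,)
heli_vec = helicopter.reshape(-1)

print("input dimensionality:", face_vec.size)
print("Euclidean distance between the two points:",
      round(float(np.linalg.norm(face_vec - heli_vec)), 1))
```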
Slide 5
How do I pick the weight matrices to solve these tasks? One way is to present inputs and adjust the weights whenever the output is not what we want:

$\Delta w_{ik} = \eta\, t_i^{\mu} x_k^{\mu}$ if $o_i^{\mu} \neq t_i^{\mu}$, and $\Delta w_{ik} = 0$ otherwise,

where $o_i^{\mu}$ is the output of unit $i$ for example $\mu$, $t_i^{\mu}$ is the target output for unit $i$ on example $\mu$, $x_k^{\mu}$ is input $k$ of example $\mu$, and $\eta$ is the learning rate.

This is known as the perceptron learning rule. A training set of examples with target output values is defined and presented one by one, adjusting the weight matrix after each evaluation. The learning rule assigns large weights to the components of the inputs that allow discrimination between the classes of inputs, e.g., many faces and many helicopters.
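A minimal sketch of this training loop in Python/NumPy is shown below. The two Gaussian input clusters stand in for the "many faces and many helicopters", and the learning rate, epoch count, and data are hypothetical choices for illustration.

```python
import numpy as np

def train_perceptron(inputs, targets, eta=0.1, epochs=50):
    """Perceptron learning: present examples one by one and adjust the weights
    only when the output disagrees with the target (otherwise delta_w = 0)."""
    w = np.zeros(inputs.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(inputs, targets):      # t is the +1/-1 target
            o = 1 if w @ x + b > 0 else -1     # unit output for this example
            if o != t:                         # mismatch: apply the update
                w += eta * t * x               # delta_w_k = eta * t * x_k
                b += eta * t
    return w, b

# Toy "two classes of inputs" example with hypothetical 2-D data.
rng = np.random.default_rng(0)
class_a = rng.normal(loc=[2, 2], scale=0.5, size=(20, 2))
class_b = rng.normal(loc=[-2, -2], scale=0.5, size=(20, 2))
X = np.vstack([class_a, class_b])
y = np.array([1] * 20 + [-1] * 20)

w, b = train_perceptron(X, y)
print("learned weights:", w, "bias:", b)
```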
Slide 6
Face vs. Helicopter Example
Slide 7
The Hopfield Recurrent Network: Associative Memory and Energy Functions
The concept of an energy function of a recurrent neural network was introduced by Hopfield (1982) to describe the state of a network. By studying the dynamics, it is possible to show that the network activity always decreases the energy, evolving towards a local minimum.
[Figure: a recurrent network with weight matrix w_ij connecting units with states S_i, driven by external inputs.]
The network defines an 'energy landscape' in which the state of the network settles. By starting closer to a minimum (a stored pattern) than to other points in the landscape, the network will settle into that minimum and 'recall' the stored pattern.
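The sketch below is one common way to implement this idea: outer-product Hebbian storage, asynchronous sign updates, and the energy E = -1/2 * sum_ij w_ij S_i S_j. The pattern size, number of stored patterns, and amount of corruption are arbitrary choices for illustration.

```python
import numpy as np

def store_patterns(patterns):
    """Hebbian outer-product storage: w_ij accumulates S_i * S_j over patterns, zero diagonal."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0.0)
    return w / len(patterns)

def energy(w, s):
    """Hopfield energy E = -1/2 * sum_ij w_ij S_i S_j."""
    return -0.5 * s @ w @ s

def recall(w, s, steps=500, rng=None):
    """Asynchronous updates: each flip can only lower (or keep) the energy,
    so the state settles into a nearby minimum, i.e. a stored pattern."""
    rng = rng or np.random.default_rng(0)
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if w[i] @ s >= 0 else -1
    return s

rng = np.random.default_rng(1)
stored = rng.choice([-1, 1], size=(2, 50))   # two random +/-1 patterns
w = store_patterns(stored)

noisy = stored[0].copy()
noisy[:10] *= -1                             # corrupt 10 of the 50 bits
recovered = recall(w, noisy, rng=rng)

print("energy before:", energy(w, noisy), "after:", energy(w, recovered))
print("bits matching the stored pattern:", int(np.sum(recovered == stored[0])), "/ 50")
```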
Slide 8
This view of neural processing has its merits, provides insight into this type of computational structure, and has spawned new fields of its own, but it does not describe the current neurobiological state of knowledge very well. In particular, neurons communicate with spikes, and the backpropagation learning rule is not a good match to what has been found. So what do we know about neurobiological learning?
Hebbian learning: if both cells are active, strengthen the synapse; if only the post-synaptic cell is active, weaken the synapse.
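A toy, rate-based reading of that two-line rule might look like the following. The learning rate and the choice to leave the weight unchanged when the post-synaptic cell is silent are assumptions, since the slide only specifies the first two cases.

```python
def hebbian_update(w, pre_active, post_active, lr=0.01):
    """Sketch of the rule on this slide: potentiate when pre and post are both
    active, depress when only the post-synaptic cell is active, and (assumed)
    leave the synapse unchanged when the post-synaptic cell is silent."""
    if post_active and pre_active:
        return w + lr          # both active -> strengthen
    if post_active and not pre_active:
        return w - lr          # post only -> weaken
    return w                   # post silent -> no change (assumption)

w = 0.5
for pre, post in [(1, 1), (0, 1), (1, 0), (1, 1)]:
    w = hebbian_update(w, pre, post)
    print(f"pre={pre} post={post} -> w={w:.3f}")
```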
Slide 9
In fact, learning at some synapses seems to be even more specific: the temporal ordering of the spikes seems to play a role in determining the change in the synapse.
[Figure: the weight change Δw plotted against the time between pre-synaptic and post-synaptic spikes, with one side of the curve strengthening the synapse and the other weakening it (Abbott and Blum, 1996).]
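A commonly used exponential window for this kind of spike-timing-dependent rule is sketched below. The amplitudes and time constants are placeholder values, and this is not necessarily the exact functional form used by Abbott and Blum (1996).

```python
import numpy as np

def stdp_dw(dt, a_plus=0.05, a_minus=0.055, tau_plus=20.0, tau_minus=20.0):
    """Weight change as a function of dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) strengthens the synapse; post-before-pre
    (dt < 0) weakens it; the effect decays as the spikes move apart."""
    if dt > 0:
        return a_plus * np.exp(-dt / tau_plus)
    elif dt < 0:
        return -a_minus * np.exp(dt / tau_minus)
    return 0.0

for dt in (-40, -10, -1, 1, 10, 40):
    print(f"dt = {dt:+4d} ms -> delta_w = {stdp_dw(dt):+.4f}")
```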
Slide 10
Chip idea:
1. Design a spiking neural network that can learn, using the spike-timing rule, to solve a particular temporal pattern recognition problem.
2. Design a floating-gate modification circuit that can implement the learning rule.