Focus on Unsupervised Learning
No teacher specifying the right answer
Techniques for autonomous software or robots to learn to characterize their sensations
“Competitive” learning algorithm
Winner-take-all
Learning Rule: Iterate
Find "winner"
Delta = learning rate * (sample - prototype)
Example: learning rate = 0.05, sample = (122, 180), winner = (84, 203)
DeltaX = learning rate * (sample x - winner x) = 0.05 * (122 - 84) = 1.9
New prototype x value = 84 + 1.9 = 85.9
DeltaY = learning rate * (sample y - winner y) = 0.05 * (180 - 203) = -1.15
New prototype y value = 203 - 1.15 = 201.85
Python Demo
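A minimal sketch of the winner-take-all update, assuming 2-D prototypes; the function names, the second prototype, and the use of Euclidean distance are illustrative, not taken from the original demo:

```python
import math

def closest_prototype(sample, prototypes):
    """Return the index of the prototype nearest the sample (the 'winner')."""
    return min(range(len(prototypes)),
               key=lambda i: math.dist(sample, prototypes[i]))

def update_winner(sample, prototypes, learning_rate=0.05):
    """Move only the winning prototype toward the sample."""
    w = closest_prototype(sample, prototypes)
    prototypes[w] = [p + learning_rate * (s - p)
                     for s, p in zip(sample, prototypes[w])]
    return w

# Reproduce the worked example: winner (84, 203) moves toward sample (122, 180).
prototypes = [[84.0, 203.0], [10.0, 10.0]]
update_winner([122.0, 180.0], prototypes)
print(prototypes[0])  # -> [85.9, 201.85], matching the worked example
```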
Sound familiar?
Clustering
Dimensionality reduction
Data visualization
Yves Amu Klein's Octofungi uses a Kohonen neural network to react to its environment
Associative learning method
Biologically inspired
Behavioral conditioning and psychological models
activation = sign(input sum)
+1 and -1 inputs
2 layers
weight change = learning constant * neuron A activation * neuron B activation
weight change = learning constant * desired output * input value
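Both rules are one-line updates; a minimal sketch assuming sign activations and +1/-1 inputs (function names are illustrative):

```python
def sign(x):
    """Threshold activation: +1 for a non-negative input sum, -1 otherwise."""
    return 1 if x >= 0 else -1

def hebbian_update(weight, rate, act_a, act_b):
    """Unsupervised Hebb rule: strengthen weights between co-active neurons."""
    return weight + rate * act_a * act_b

def supervised_hebbian_update(weight, rate, desired, inp):
    """Supervised variant: the desired output stands in for the actual activation."""
    return weight + rate * desired * inp

# Example: a +1 input paired with a desired output of -1 weakens the weight.
print(supervised_hebbian_update(0.5, 0.2, -1, +1))  # -> 0.3
```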
Long-term memory
Inspired by Hebbian learning
Content-addressable memory
Feedback and convergence
Attractor – "a state or output vector in a system toward which the system consistently evolves given a specific input vector."
Attractor Basin – “the set of input vectors surrounding a learned vector which will converge to the same output vector.”
Bi-directional Associative Memory
Attractor network with 2 layers (e.g., a Smell layer and a Taste layer)
Information flows in both directions
Matrix worked out in advance
Hamming vector – a vector composed of +1 and -1 only
Ex. [1,-1,-1,1], [1,1,-1,1]
Hamming distance – the number of components by which 2 vectors differ
Ex. [1,-1,-1,1] and [1,1,-1,1] differ in only one element (index 1), so Hamming distance = 1
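The distance is a one-line count, sketched here (names illustrative):

```python
def hamming_distance(u, v):
    """Count the components in which two vectors differ."""
    return sum(1 for a, b in zip(u, v) if a != b)

print(hamming_distance([1, -1, -1, 1], [1, 1, -1, 1]))  # -> 1
```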
Weights are a matrix based on the memories we want to store
To associate X = [1,-1,-1,-1] with Y = [-1,1,1], take the outer product W = YᵀX, i.e., W[i][j] = Y[i] * X[j]
To store two pairs, [1,-1,-1,-1] -> [1,1,1] and [-1,-1,-1,1] -> [1,-1,1], add their outer-product matrices:
[[1,-1,-1,-1],     [[-1,-1,-1, 1],     [[0,-2,-2, 0],
 [1,-1,-1,-1],  +   [ 1, 1, 1,-1],  =   [2, 0, 0,-2],
 [1,-1,-1,-1]]      [-1,-1,-1, 1]]      [0,-2,-2, 0]]
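A minimal sketch of building the BAM matrix from outer products and recalling Y from X; treating sign(0) as +1 is a common but assumed convention:

```python
def outer(y, x):
    """Outer product: W[i][j] = y[i] * x[j]."""
    return [[yi * xj for xj in x] for yi in y]

def add(a, b):
    """Elementwise sum of two matrices."""
    return [[p + q for p, q in zip(ra, rb)] for ra, rb in zip(a, b)]

def sign(v):
    return 1 if v >= 0 else -1

def recall_y(w, x):
    """Forward pass X -> Y: threshold each row's weighted sum of the input."""
    return [sign(sum(wij * xj for wij, xj in zip(row, x))) for row in w]

W = add(outer([1, 1, 1], [1, -1, -1, -1]),
        outer([1, -1, 1], [-1, -1, -1, 1]))
print(recall_y(W, [1, -1, -1, -1]))   # -> [1, 1, 1]
print(recall_y(W, [-1, -1, -1, 1]))   # -> [1, -1, 1]
```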
Autoassociative
Recurrent
To remember the pattern [1,-1,1,-1,1]
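A sketch of storing that pattern autoassociatively (Hopfield-style outer product of the pattern with itself, zeroed diagonal; that convention is assumed, not stated in the slides):

```python
def store(pattern):
    """Weight matrix is the pattern's outer product with itself, zero diagonal."""
    n = len(pattern)
    return [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
            for i in range(n)]

def recall(w, state):
    """One synchronous update: each unit takes the sign of its weighted input."""
    return [1 if sum(wij * s for wij, s in zip(row, state)) >= 0 else -1
            for row in w]

W = store([1, -1, 1, -1, 1])
noisy = [1, -1, 1, -1, -1]   # last component flipped
print(recall(W, noisy))      # -> [1, -1, 1, -1, 1], the stored pattern
```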
Demo
Complements of a vector also become attractors (see the check below)
Ex. Installing [1,-1,1] means [-1,1,-1] is also "remembered"
Crosstalk
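A standalone check of the complement claim, in the same store/recall scheme as the sketch above:

```python
# Store [1,-1,1] as a zero-diagonal outer product, then present its complement.
p = [1, -1, 1]
W = [[0 if i == j else p[i] * p[j] for j in range(3)] for i in range(3)]
comp = [-x for x in p]
out = [1 if sum(w * s for w, s in zip(row, comp)) >= 0 else -1 for row in W]
print(out == comp)  # True: the complement is also an attractor
```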
George Christos, "Memory and Dreams"
Ralph E. Hoffman's models of schizophrenia
Spurious Memories