Focus on Unsupervised Learning
No teacher specifying right answer
Techniques for autonomous software or robots to learn to characterize their sensations
“Competitive” learning algorithm
Winner-take-all
Learning Rule: Iterate
- Find “winner”
- Delta = learning rate * (sample - prototype)
Example: learning rate = .05, sample = (122, 180), winner = (84, 203)
DeltaX = .05 * (122 - 84) = 1.9, so new prototype x value = 84 + 1.9 = 85.9
DeltaY = .05 * (180 - 203) = -1.15, so new prototype y value = 203 - 1.15 = 201.85
Python Demo
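The original demo is not preserved in this capture, but a minimal sketch of the winner-take-all update from the preceding slides might look like this (the function and variable names are my own assumptions; the numbers match the worked example above):

    import numpy as np

    def competitive_update(prototypes, sample, learning_rate=0.05):
        # Winner-take-all: the prototype closest to the sample wins
        winner = np.argmin(np.linalg.norm(prototypes - sample, axis=1))
        # Delta = learning rate * (sample - prototype): move winner toward sample
        prototypes[winner] += learning_rate * (sample - prototypes[winner])
        return prototypes

    prototypes = np.array([[84.0, 203.0], [10.0, 15.0]])
    sample = np.array([122.0, 180.0])
    print(competitive_update(prototypes, sample)[0])  # [ 85.9  201.85]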
Sound familiar?
- Clustering
- Dimensionality reduction
- Data visualization
Yves Amu Klein’s Octofungi uses a Kohonen neural network to react to its environment
Associative learning method
- Biologically inspired
- Behavioral conditioning and psychological models
- activation = sign(input sum)
- +1 and -1 inputs
- 2 layers
weight change = learning constant * neuron A activation * neuron B activation
weight change = learning constant * desired output * input value
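Both rules amount to a single outer-product update. A minimal NumPy sketch (the function names and array shapes are my assumptions, not from the slides):

    import numpy as np

    def hebbian_update(W, pre, post, c=0.1):
        # Unsupervised Hebb rule: change = c * (neuron A activation) * (neuron B activation)
        return W + c * np.outer(post, pre)

    def supervised_hebbian_update(W, x, desired, c=0.1):
        # Supervised variant: change = c * desired output * input value
        return W + c * np.outer(desired, x)

    x = np.array([1, -1, 1])
    d = np.array([1, -1])
    W = supervised_hebbian_update(np.zeros((2, 3)), x, d)
    print(np.sign(W @ x))  # [ 1. -1.] -- the trained association is recalled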
- Long-term memory
- Inspired by Hebbian learning
- Content-addressable memory
- Feedback and convergence
Attractor – “a state or output vector in a system toward which the system consistently evolves, given a specific input vector.”
Attractor Basin – “the set of input vectors surrounding a learned vector which will converge to the same output vector.”
Bi-directional Associative Memory
- Attractor network with 2 layers
- Information flows in both directions
- Matrix worked out in advance
[Diagram: “Smell” layer ↔ “Taste” layer]
Hamming vector – vector composed of +1 and -1 only
Ex. [1,-1,-1,1], [1,1,-1,1]
Hamming distance – number of components by which 2 vectors differ
Ex. [1,-1,-1,1] and [1,1,-1,1] differ in only one element (index 1), so the Hamming distance = 1
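As a quick sketch in Python (the helper name is mine):

    def hamming_distance(u, v):
        # Count the components where the two vectors differ
        return sum(a != b for a, b in zip(u, v))

    print(hamming_distance([1, -1, -1, 1], [1, 1, -1, 1]))  # 1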
Weights are a matrix based on the memories we want to store.
To associate X = [1,-1,-1,-1] with Y = [-1,1,1], take the outer product XᵀY:

W = [ -1  1  1 ]
    [  1 -1 -1 ]
    [  1 -1 -1 ]
    [  1 -1 -1 ]
To store [1,-1,-1,-1] -> [1,1,1] and [-1,-1,-1,1] -> [1,-1,1], add the outer-product matrices:

[  1  1  1 ]   [ -1  1 -1 ]   [  0  2  0 ]
[ -1 -1 -1 ] + [ -1  1 -1 ] = [ -2  0 -2 ]
[ -1 -1 -1 ]   [ -1  1 -1 ]   [ -2  0 -2 ]
[ -1 -1 -1 ]   [  1 -1  1 ]   [  0 -2  0 ]
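A minimal NumPy sketch of this construction, plus recall in both directions (the names are mine; recall via sign(xW) and sign(yWᵀ) is the standard BAM update):

    import numpy as np

    # Association pairs from the slide
    X = np.array([[1, -1, -1, -1], [-1, -1, -1, 1]])
    Y = np.array([[1, 1, 1], [1, -1, 1]])

    # Weight matrix: sum of outer products over all stored pairs
    W = sum(np.outer(x, y) for x, y in zip(X, Y))

    print(np.sign(X[0] @ W))    # [1 1 1]        -- recall Y from X
    print(np.sign(Y[1] @ W.T))  # [-1 -1 -1  1]  -- recall X from Y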
- Autoassociative
- Recurrent
To remember the pattern [1,-1,1,-1,1], take the outer product of the pattern with itself and zero the diagonal:

[  0 -1  1 -1  1 ]
[ -1  0 -1  1 -1 ]
[  1 -1  0 -1  1 ]
[ -1  1 -1  0 -1 ]
[  1 -1  1 -1  0 ]
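A minimal sketch of storing and recalling this pattern (the names are mine; synchronous sign updates are one common recall scheme):

    import numpy as np

    pattern = np.array([1, -1, 1, -1, 1])

    # Outer product of the pattern with itself, diagonal zeroed
    W = np.outer(pattern, pattern)
    np.fill_diagonal(W, 0)

    state = np.array([1, 1, 1, -1, 1])  # corrupted pattern (one bit flipped)
    for _ in range(5):                  # iterate until the state settles on the attractor
        state = np.sign(W @ state)
    print(state)  # [ 1 -1  1 -1  1] -- the stored pattern is recovered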
Demo
Complements of a vector also become attractors
Ex. Storing [1,-1,1] means [-1,1,-1] is also “remembered”
Crosstalk
George Christos “Memory and Dreams”
Ralph E. Hoffman’s models of schizophrenia
Spurious Memories