Focus on Unsupervised Learning

- No teacher specifying right answer

- Techniques for autonomous software or robots to learn to characterize their sensations

- "Competitive" learning algorithm

- Winner-take-all

- Learning Rule: Iterate
  - Find "winner"
  - Delta = learning rate * (sample – prototype)

- Example:
  - Learning rate = .05
  - Sample = (122, 180)
  - Winner = (84, 203)
  - DeltaX = learning rate * (sample x – winner x) = .05 * (122 – 84) = 1.9
  - New prototype x value = 84 + 1.9 = 85.9
  - DeltaY = .05 * (180 – 203) = -1.15
  - New prototype y value = 203 + (-1.15) = 201.85
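A minimal Python sketch of this winner-take-all update; the helper name and the second prototype are illustrative, but the numbers reproduce the example above:

```python
import numpy as np

def competitive_update(prototypes, sample, learning_rate=0.05):
    """Find the closest prototype (the "winner") and move it toward the sample."""
    winner = np.argmin(np.linalg.norm(prototypes - sample, axis=1))
    prototypes[winner] += learning_rate * (sample - prototypes[winner])
    return winner

prototypes = np.array([[84.0, 203.0],    # the winner from the example
                       [300.0, 40.0]])   # an assumed second prototype
competitive_update(prototypes, np.array([122.0, 180.0]))
print(prototypes[0])   # -> [ 85.9  201.85]
```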

- Python Demo

- Sound familiar?

- Clustering
- Dimensionality Reduction
- Data visualization

- Yves Amu Klein’s Octofungi uses a Kohonen neural network to react to its environment

- Associative learning method
- Biologically inspired
- Behavioral conditioning and psychological models

- activation = sign(input sum)
- +1 and -1 inputs
- 2 layers
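As a quick Python sketch of that thresholding step (my own helper, assuming the +1/-1 convention above and treating a zero sum as +1):

```python
import numpy as np

def activation(inputs, weights):
    """Threshold the weighted input sum to +1 or -1 (0 mapped to +1 here)."""
    return 1 if np.dot(inputs, weights) >= 0 else -1

print(activation(np.array([1, -1, 1]), np.array([0.5, 0.2, 0.4])))   # -> 1
```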

- Hebbian rule: weight change = learning constant * neuron A activation * neuron B activation

- Supervised form: weight change = learning constant * desired output * input value
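Both update rules as a small Python sketch; the learning constant c, the function names, and the example values are illustrative, not from the slides:

```python
import numpy as np

c = 0.1   # assumed learning constant

def hebbian_update(w, a_pre, a_post):
    """Unsupervised Hebbian: weight change = c * neuron A activation * neuron B activation."""
    return w + c * a_pre * a_post

def supervised_hebbian_update(w, x, desired):
    """Supervised form: weight change = c * desired output * input value."""
    return w + c * desired * x

x = np.array([1, -1, 1])                        # +1/-1 input pattern
w = supervised_hebbian_update(np.zeros(3), x, desired=1)
print(w)                                        # -> [ 0.1 -0.1  0.1]
print(hebbian_update(0.0, a_pre=1, a_post=-1))  # -> -0.1
```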

- Long-term memory
- Inspired by Hebbian learning
- Content-addressable memory
- Feedback and convergence

- Attractor – “a state or output vector in a system towards which the system consistently evolves, given a specific input vector.”

- Attractor Basin – “the set of input vectors surrounding a learned vector which will converge to the same output vector.”

- Bi-directional Associative Memory
- Attractor network with 2 layers
  [Figure: the two layers labeled "Smell" and "Taste"]
- Information flows in both directions
- Matrix worked out in advance

- Hamming vector – a vector composed of +1 and -1 only
  Ex. [1,-1,-1,1], [1,1,-1,1]

- Hamming distance – the number of components by which 2 vectors differ
  Ex. [1,-1,-1,1] and [1,1,-1,1] differ in only one element (index 1), so the Hamming distance = 1
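A quick Python check of this definition (the helper name is mine):

```python
import numpy as np

def hamming_distance(a, b):
    """Count the components in which two vectors differ."""
    return int(np.sum(np.array(a) != np.array(b)))

print(hamming_distance([1, -1, -1, 1], [1, 1, -1, 1]))   # -> 1
```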

- Weights are a matrix based on memories we want to store
- To associate X = [1,-1,-1,-1] with Y = [-1,1,1], form the outer product of X (as a column) with Y (as a row):

      [-1  1  1]
      [ 1 -1 -1]
      [ 1 -1 -1]
      [ 1 -1 -1]

 [1,-1,-1,-1] -> [1,1,1] and [-1,-1,-1,1] -> [1,-1,1] + =

- Autoassociative
- Recurrent

- To remember the pattern [1,-1,1,-1,1]
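One common way to do this (a sketch under my assumptions, not the slides' code) is an outer product of the pattern with itself, with the diagonal zeroed:

```python
import numpy as np

pattern = np.array([1, -1, 1, -1, 1])

W = np.outer(pattern, pattern)   # Hebbian outer-product storage
np.fill_diagonal(W, 0)           # zero self-connections (an assumed convention)

# A probe with one flipped bit is pulled back to the stored pattern
probe = np.array([1, -1, 1, -1, -1])
print(np.sign(W @ probe))        # -> [ 1 -1  1 -1  1]
```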

- Demo

- Complements of a vector also become attractors
  Ex. installing [1,-1,1] means [-1,1,-1] is also “remembered”
- Crosstalk
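A quick check of the complement effect, reusing the same outer-product storage as the sketch above:

```python
import numpy as np

stored = np.array([1, -1, 1])
W = np.outer(stored, stored)
np.fill_diagonal(W, 0)

complement = -stored
print(np.sign(W @ complement))   # -> [-1  1 -1]: the complement is also stable
```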

- George Christos, "Memory and Dreams"

- Ralph E. Hoffman's models of schizophrenia

- Spurious Memories