
What is Unsupervised Learning? Learning without a teacher: there is no feedback to indicate the desired outputs. The network must discover the relationships of interest in the input data by itself.

The Nearest Neighbor Classifier [Figure: four stored prototype vectors x(1), x(2), x(3), x(4) plotted in the input space.]

The Nearest Neighbor Classifier [Figure: the same prototypes x(1)–x(4) together with an unknown input marked "? Class"; it is assigned the class of the nearest prototype.]
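As a minimal sketch of this rule (the prototype vectors and class labels below are made-up examples, not taken from the slides), a nearest-neighbor classifier can be written as:

```python
import numpy as np

# Hypothetical stored prototypes x(1)..x(4) and their class labels.
prototypes = np.array([
    [ 1, -1,  1, -1],   # x(1)
    [ 1,  1, -1, -1],   # x(2)
    [-1, -1,  1,  1],   # x(3)
    [-1,  1, -1,  1],   # x(4)
])
labels = ["A", "B", "C", "D"]

def classify(x):
    # Assign x the label of the prototype at minimum Euclidean distance.
    distances = np.linalg.norm(prototypes - x, axis=1)
    return labels[int(np.argmin(distances))]

print(classify(np.array([0.9, -1.1, 0.8, -0.7])))   # prints "A" (closest to x(1))
```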

The Hamming Networks Store a set of classes, each represented by a binary (bipolar) prototype. Given an incomplete or noisy binary input, find the class to which it belongs. Use the Hamming distance as the distance measure. Distance vs. similarity: the fewer components that disagree, the more similar the input is to a prototype.

The Hamming Net [Figure: two-layer architecture; a similarity-measurement layer receives the inputs x1, ..., xn and feeds a MAXNET layer that performs the winner-take-all selection.]

The Hamming Distance [Slides: a worked example comparing two bipolar vectors x and y component by component; they disagree in 3 positions, so their Hamming distance is 3, and the running sum of the element-wise products is 1. In general, for n-dimensional bipolar vectors the dot product counts agreements minus disagreements, so HD(x, y) = (n − x · y) / 2.]
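A small sketch (the example vectors are made up) showing both ways of computing the Hamming distance between bipolar vectors:

```python
import numpy as np

# Two bipolar (+1/-1) vectors; illustrative values chosen to disagree in 3 positions.
x = np.array([ 1, -1,  1,  1, -1,  1, -1])
y = np.array([ 1,  1, -1,  1,  1,  1, -1])

hd = int(np.sum(x != y))                         # direct count of disagreements
hd_from_dot = (len(x) - int(np.dot(x, y))) // 2  # identity HD = (n - x.y) / 2

print(hd, hd_from_dot)   # both are 3; the dot product here is 1
```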

The Hamming Net [Figure: the network drawn in detail, with m input units x1, ..., xm feeding n similarity-measurement nodes whose outputs y1, ..., yn enter the MAXNET winner-take-all layer.]

The Hamming Net [Figure: the same architecture, now asking what the weights should be: W_S = ? for the similarity-measurement layer and W_M = ? for the MAXNET layer.]

The Stored Patterns [Figure: the same architecture with W_S = ?, W_M = ?; the stored prototype patterns determine the similarity-layer weights W_S.]

The Stored Patterns [Figure: one similarity-measurement node k with inputs x1, ..., xm and a bias of m/2; with the weights set to half of stored pattern k, the node's net input equals the number of components on which the input agrees with prototype k.]
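To make the two layers concrete, here is a minimal sketch of a Hamming net under the standard construction (similarity-layer weights equal to half of each stored pattern, bias m/2, and a MAXNET with mutual inhibition ε smaller than 1/n); the stored prototypes are invented for the example:

```python
import numpy as np

# Hypothetical stored bipolar prototypes, one per class (m = 7 components each).
prototypes = np.array([
    [ 1, -1,  1,  1, -1,  1, -1],
    [-1,  1,  1, -1,  1, -1,  1],
    [ 1,  1, -1, -1, -1,  1,  1],
])
n, m = prototypes.shape

def similarity_layer(x):
    # W_S = prototypes / 2 and bias m/2, so node k outputs the number of
    # components on which x agrees with prototype k.
    return prototypes @ x / 2 + m / 2

def maxnet(y, eps=0.1, max_iters=100):
    # Winner-take-all: each node subtracts eps times the others' activity
    # until only one node remains positive.
    y = y.astype(float).copy()
    for _ in range(max_iters):
        y = np.maximum(0.0, y - eps * (y.sum() - y))
        if np.count_nonzero(y) <= 1:
            break
    return int(np.argmax(y))

x = np.array([1, -1, 1, 1, 1, 1, -1])       # a noisy copy of prototype 0
print(maxnet(similarity_layer(x)))           # prints 0
```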

Weight update (for the winning node j, given input i_l):
– Method 1: w_j ← w_j + η (i_l − w_j)
– Method 2: w_j ← w_j + η i_l
In each method, w_j is moved closer to i_l.
– Normalize the weight vector to unit length after it is updated.
– Sample input vectors are also normalized.
[Figure: geometric illustration of both methods, showing the distance vector i_l − w_j and the step η (i_l − w_j) for Method 1, and the vector sum of w_j and η i_l for Method 2.]
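As a sketch of both rules (function and parameter names are my own; only the winner is updated):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def competitive_update(W, x, eta=0.1, method=1):
    # W holds one weight vector per row; input and weights are kept at unit length.
    x = normalize(x)
    j = int(np.argmax(W @ x))             # winner: largest dot product with x
    if method == 1:
        W[j] = W[j] + eta * (x - W[j])    # Method 1: step along (i_l - w_j)
    else:
        W[j] = W[j] + eta * x             # Method 2: add a fraction of the input
    W[j] = normalize(W[j])                # renormalize after the update
    return j
```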

The winning weight vector w_j moves toward the center of a cluster of sample vectors after repeated weight updates:
– Node j wins for three training samples: i_1, i_2 and i_3.
– Initial weight vector: w_j(0).
– After being trained successively on i_1, i_2 and i_3, the weight vector changes to w_j(1), w_j(2), and w_j(3).
[Figure: i_1, i_2, i_3 plotted together with the trajectory w_j(0) → w_j(1) → w_j(2) → w_j(3) moving into the cluster.]
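Continuing the sketch above, repeatedly presenting samples from one cluster pulls the winning weight vector toward that cluster (the sample values are made up):

```python
W = np.array([[1.0, 0.0],                 # w_0
              [0.0, 1.0]])                # w_1
cluster = [np.array([0.9, 0.1]),          # three samples near the direction [1, 0]
           np.array([0.8, 0.2]),
           np.array([0.95, 0.05])]
for x in cluster:
    competitive_update(W, x, eta=0.5)
print(W[0])                               # w_0 has rotated toward the cluster
```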

Example: one of the two weight vectors (w_1 or w_2) will always win, no matter which class the sample comes from; the other is stuck and will not participate in learning.
To unstick it: let the output nodes have some conscience, temporarily shutting off nodes that have had a very high winning rate (it is hard to determine what rate should be considered "very high").
[Figure: two weight vectors w_1 and w_2 positioned so that one of them wins for every sample.]
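One simple way to realize such a conscience (a sketch of one possible scheme, not necessarily the slide's) is to penalize each node's similarity score by how far its winning frequency exceeds its fair share:

```python
import numpy as np

def conscience_winner(W, x, win_counts, total, bias=0.5):
    # Nodes that have won much more often than 1/n of the time are handicapped,
    # so stuck nodes eventually get a chance to win and start learning.
    scores = W @ x
    fair_share = 1.0 / len(W)
    penalty = bias * (win_counts / max(total, 1) - fair_share)
    j = int(np.argmax(scores - penalty))
    win_counts[j] += 1
    return j
```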

Example: results depend on the sequence of sample presentation.
[Figure: the same samples presented in different orders lead to different final positions of w_1 and w_2.]
Solution: initialize each w_j to a randomly selected input vector i_l, choosing vectors that are far away from each other.
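A farthest-point style initialization along these lines (a sketch; the greedy selection rule is one reasonable reading of "far away from each other"):

```python
import numpy as np

def init_weights_far_apart(samples, n_nodes, rng=np.random.default_rng(0)):
    # Start from one randomly chosen training vector, then repeatedly add the
    # sample that is farthest from all weight vectors chosen so far.
    chosen = [samples[rng.integers(len(samples))]]
    while len(chosen) < n_nodes:
        dists = [min(np.linalg.norm(s - c) for c in chosen) for s in samples]
        chosen.append(samples[int(np.argmax(dists))])
    return np.array(chosen)
```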