IE 585 Competitive Network – Learning Vector Quantization & Counterpropagation


LVQ
- Supervised version of SOM
- Same architecture as SOM
- Used for pattern classification
- The weight vector for an output neuron is a "reference" vector for the class that the neuron represents

LVQ – cont.
- A set of training patterns with known classifications is provided, along with an initial distribution of reference vectors (each of which represents a known classification)
- After training, LVQ classifies an input vector by assigning it to the same class as the output neuron whose weight vector is closest to the input vector
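As a concrete illustration of this nearest-reference-vector assignment, here is a minimal Python sketch (NumPy assumed; the names `lvq_classify`, `ref_vectors`, and `ref_labels` are illustrative, and Euclidean distance is an assumption, though it is the usual choice):

```python
import numpy as np

def lvq_classify(x, ref_vectors, ref_labels):
    """Assign x to the class of the closest reference (weight) vector."""
    distances = np.linalg.norm(ref_vectors - x, axis=1)  # distance to every output neuron's weight vector
    return ref_labels[np.argmin(distances)]              # class of the winning neuron
```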

Procedure of LVQ
Same as SOM, except:
- if the winning neuron is "correct" (its class matches the training pattern's), use the same weight update: w_new = w_old + α(x − w_old)
- if the winning neuron is "incorrect", reverse the sign: w_new = w_old − α(x − w_old)
where α is the learning rate
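A minimal sketch of one training pass under these rules; the function name, the in-place updates, and a fixed learning rate α are illustrative assumptions (in practice α is usually decayed over time):

```python
import numpy as np

def lvq1_epoch(X, y, ref_vectors, ref_labels, alpha=0.1):
    """One LVQ1 pass: pull the winning reference vector toward x when its
    class is correct, push it away when it is not."""
    for x, target in zip(X, y):
        winner = np.argmin(np.linalg.norm(ref_vectors - x, axis=1))
        if ref_labels[winner] == target:
            ref_vectors[winner] += alpha * (x - ref_vectors[winner])  # w_new = w_old + alpha*(x - w_old)
        else:
            ref_vectors[winner] -= alpha * (x - ref_vectors[winner])  # w_new = w_old - alpha*(x - w_old)
    return ref_vectors
```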

Ways to Initialize the Weight Vectors
- Take the first m training vectors and use them as weight vectors; use the remaining vectors for training
- Assign the initial weights and classifications randomly
- Determine them with k-means or a SOM
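The first strategy amounts to a simple split of the labelled data; this helper is a hypothetical illustration, not code from the lecture:

```python
def init_from_first_m(X, y, m):
    """Use the first m labelled vectors as initial reference vectors;
    the remaining vectors are kept for training."""
    return X[:m].copy(), y[:m].copy(), X[m:], y[m:]
```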

Example of LVQ

Variations of LVQ
- LVQ2 & LVQ3 [Kohonen, 1990]: allow two vectors to learn (the winner and a runner-up)
- LVQ3 uses momentum on the learning rate to prevent the reference vectors from moving away from their optimal placement
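The slides do not spell out the two-vector update, but a commonly published form is LVQ2.1 [Kohonen, 1990], which adapts the winner and runner-up only when exactly one of them has the correct class and the input falls inside a relative-distance "window". The sketch below follows that form and is an assumption, not the lecture's own formulation:

```python
import numpy as np

def lvq2_step(x, target, ref_vectors, ref_labels, alpha=0.05, window=0.3):
    """One LVQ2.1-style step on the two nearest reference vectors."""
    d = np.linalg.norm(ref_vectors - x, axis=1)
    i, j = np.argsort(d)[:2]                                   # winner and runner-up
    in_window = min(d[i] / d[j], d[j] / d[i]) > (1 - window) / (1 + window)
    correct_i = ref_labels[i] == target
    correct_j = ref_labels[j] == target
    if in_window and (correct_i != correct_j):                 # exactly one is correct
        good, bad = (i, j) if correct_i else (j, i)
        ref_vectors[good] += alpha * (x - ref_vectors[good])   # pull the correct one toward x
        ref_vectors[bad] -= alpha * (x - ref_vectors[bad])     # push the wrong one away
```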

Counterpropagation
- Developed by Hecht-Nielsen, 1987
- Used for compressing data, approximating functions, or associating patterns
- Two types: full and forward-only

Architecture of Forward-Only Counterpropagation
[diagram] Input layer (x) → v weight matrix → cluster layer of Kohonen neurons (y) → w weight matrix → output layer (z)

Procedure of the Counterpropagation Net
Two phases of learning:
- Phase 1 – SOM (unsupervised learning)
- Phase 2 – Grossberg outstar learning (supervised learning)
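A compact sketch of the two phases for the forward-only net, using the slide's labels (v: input-to-cluster weights, w: cluster-to-output weights). Plain winner-take-all updates in phase 1 (rather than a full SOM neighborhood), the learning rates, and the epoch counts are simplifying assumptions:

```python
import numpy as np

def train_forward_only_cpn(X, Y, n_clusters, alpha=0.1, beta=0.1, epochs=20, seed=0):
    """Phase 1: cluster inputs with competitive learning on v.
    Phase 2: Grossberg outstar learning on w (Y is a 2-D target array)."""
    rng = np.random.default_rng(seed)
    v = X[rng.choice(len(X), n_clusters, replace=False)].copy()  # cluster-layer weights
    w = np.zeros((n_clusters, Y.shape[1]))                       # output-layer weights

    for _ in range(epochs):                                      # Phase 1 (unsupervised)
        for x in X:
            k = np.argmin(np.linalg.norm(v - x, axis=1))
            v[k] += alpha * (x - v[k])                           # move the winner toward x

    for _ in range(epochs):                                      # Phase 2 (supervised outstar)
        for x, y_t in zip(X, Y):
            k = np.argmin(np.linalg.norm(v - x, axis=1))
            w[k] += beta * (y_t - w[k])                          # w_new = w_old + beta*(y - w_old)

    return v, w

def cpn_predict(x, v, w):
    """Forward pass: the winning cluster neuron gates its outstar weights to the output."""
    return w[np.argmin(np.linalg.norm(v - x, axis=1))]
```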

Counterpropagation Net Example

Iris Data Set
- A data set of 150 random samples of flowers from the iris species setosa, versicolor, and virginica
- From each species there are 50 observations of sepal length, sepal width, petal length, and petal width, in cm
- This data set was used by Fisher (1936) in his introduction of the linear-discriminant-function technique
- http://www.stat.sc.edu/~bradley/Data.html has some cool statistical plotting routines using this data
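For experimentation, the same 150-sample data set ships with scikit-learn (assuming it is installed); the snippet below reuses the hypothetical `lvq1_epoch` and `lvq_classify` sketches from above:

```python
import numpy as np
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)                    # 150 samples x 4 features, 3 classes
seed_idx = [0, 1, 2, 50, 51, 52, 100, 101, 102]      # three references per class (classes come in blocks of 50)
ref_vectors, ref_labels = X[seed_idx].copy(), y[seed_idx]
train_idx = np.setdiff1d(np.arange(len(X)), seed_idx)
for _ in range(50):
    lvq1_epoch(X[train_idx], y[train_idx], ref_vectors, ref_labels, alpha=0.05)
print(lvq_classify(X[64], ref_vectors, ref_labels))  # likely class 1 (versicolor)
```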

Cool Web Sites
- http://gepasi.dbs.aber.ac.uk/roy/koho/kohonen.htm (nice write-up with some good figures on self-organizing maps)
- http://odur.let.rug.nl/~kleiweg/kohonen/kohonen.html#lit (brief write-up with downloadable software)
- http://www.cs.may.ie/~trenaman/nnets/SOFM/index.htm (series of slides on the nets)
- http://www.cis.hut.fi/research/refs/ (a bibliography of over 4000 papers using SOM)

More Sites
- http://www.patol.com/java/TSP/ (super cool simulation; we will cover this in the optimization lecture)
- http://rfhs8012.fh-regensburg.de/~saj39122/jfroehl/diplom/e-index.html (another cool simulation like the above, but this time in 3D!)
- http://rana.usc.edu:8376/~yuri/kohonen/kohonen.html (pretty cool simulation whose parameters you can change)