Chapter 5. Adaptive Resonance Theory (ART)

ART1: for binary patterns; ART2: for continuous patterns.

Motivations: previous methods have the following problems:
1. Training is non-incremental:
   – it works with a fixed set of samples;
   – adding new samples often requires retraining the network on the enlarged training set until a new stable state is reached.
2. The number of class nodes is pre-determined and fixed:
   – under- and over-classification may result from training;
   – there is no way to add a new class node (unless a free class node happens to be close to the new input);
   – any new input x has to be classified into one of the existing classes (causing one to win), no matter how far x is from the winner: there is no control of the degree of similarity.

Ideas of the ART model:
– Suppose the input samples have been appropriately classified into k clusters (say, by competitive learning).
– Each weight vector w_j is a representative (average) of all samples in cluster j.
– When a new input vector x arrives:
  1. Find the winner j among all k cluster nodes.
  2. Compare w_j with x: if they are sufficiently similar (x resonates with class j), then update w_j based on x; else, find/create a free class node and make x its first member.
A sketch of this match-or-create loop is given below.
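The following Python sketch illustrates the match-or-create idea in the abstract. The similarity function, the threshold rho, and the simple averaging update are placeholders, not the actual ART1 equations (those follow later in this chapter):

```python
import numpy as np

def match_or_create(x, prototypes, rho, similarity):
    """One ART-style step: resonate with the best-matching prototype
    and update it, or start a new cluster with x as its first member."""
    if prototypes:
        # 1. find the winner j among all existing cluster nodes
        j = max(range(len(prototypes)),
                key=lambda i: similarity(x, prototypes[i]))
        # 2. vigilance test: does x resonate with class j?
        if similarity(x, prototypes[j]) >= rho:
            # placeholder update: move the prototype toward x
            prototypes[j] = 0.5 * (prototypes[j] + x)
            return j
    prototypes.append(x.copy())  # x becomes the first member of a new class
    return len(prototypes) - 1
```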

To achieve this, we need:
– a mechanism for testing and determining similarity;
– a control for finding/creating new class nodes;
– all operations implemented by units of local computation.

ART1 Architecture
[Figure: ART1 network diagram, showing the reset unit R and the gain control units G1, G2.]

Cluster units (F2 layer): competitive; receive the input vector x through bottom-up weights b to determine the winner J.
Input units (F1(a) layer): placeholders for the external inputs s.
Interface units (F1(b) layer):
– pass s on as the input vector x for classification by the cluster layer;
– compare x with the winner's top-down weight vector t_J;
– controlled by gain control unit G1.
The three phases must be sequenced (by the control units G1, G2, and R).

R = 0: resonance occurs; update b_J and t_J.
R = 1: x fails the similarity test; R inhibits J from further competition.

Working of ART1
Initial state: activations of all nodes are set to zero.
Recognition phase: determine the winner cluster J for input s (see the sketch below).
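A minimal sketch of the recognition phase, assuming NumPy arrays: b is the k-by-n bottom-up weight matrix and enabled marks F2 nodes not yet disabled by the reset unit R (all names here are illustrative):

```python
import numpy as np

def recognition(s, b, enabled):
    """Recognition phase: each enabled F2 node j computes its net input
    b[j] . s; the node with the largest net input wins."""
    net = b @ s                 # bottom-up net inputs, one per cluster node
    net[~enabled] = -np.inf     # nodes inhibited by R cannot compete
    return int(np.argmax(net))  # index J of the winning cluster node
```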

Comparison phase: the interface layer computes x = s ∧ t_J (the componentwise AND of the input and the winner's top-down weights); resonance occurs iff the match ||x|| / ||s|| ≥ ρ, where ρ ∈ (0, 1] is the vigilance parameter and ||·|| counts the number of 1's.
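In code, the comparison phase and the vigilance test might look like this (binary patterns as 0/1 float arrays, continuing the illustrative names above):

```python
def comparison(s, t_J, rho):
    """Comparison phase: x = s AND t_J; resonance iff ||x||/||s|| >= rho."""
    x = np.minimum(s, t_J)     # componentwise AND for binary vectors
    match = x.sum() / s.sum()  # ||x|| / ||s||; assumes s is not all zeros
    return x, match >= rho     # True -> resonance (R = 0); False -> reset (R = 1)
```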

Weight update / adaptation phase
– Initial weights (no bias):
  bottom-up: b_ij = 1 / (1 + n), where n is the input dimension;
  top-down: t_ji = 1.
– When a resonance occurs with winner J:
  t_J(new) = x = s ∧ t_J(old);
  b_iJ(new) = L·x_i / (L − 1 + ||x||), with learning parameter L > 1 (commonly L = 2).
– If k sample patterns have been clustered to node J, then t_J = the pattern whose 1's are common to all these k samples (the componentwise AND of the k samples).
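A sketch of the initialization and the fast-learning update under these equations (again, names are illustrative):

```python
L = 2.0  # learning parameter, L > 1 (L = 2 is the usual choice)

def init_weights(k, n):
    """Initial weights: uniform small bottom-up, all-ones top-down."""
    b = np.full((k, n), 1.0 / (1.0 + n))  # b_ij = 1/(1+n)
    t = np.ones((k, n))                   # t_ji = 1
    return b, t

def update(b, t, J, x):
    """Fast-learning update when node J resonates with interface vector x."""
    t[J] = x                              # t_J(new) = x = s AND t_J(old)
    b[J] = L * x / (L - 1.0 + x.sum())    # b_iJ(new) = L x_i / (L - 1 + ||x||)
```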

– The winner may shift during search: when the current winner J fails the vigilance test, R disables it and the remaining nodes compete again.
– What to do when x fails to classify into any existing cluster?
  – report failure / treat x as an outlier, or
  – add a new cluster node.
The training step below combines the three phases into this search loop.
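Putting the pieces together, here is a hypothetical ART1 training step built from the recognition(), comparison(), and update() functions sketched above; committed marks which F2 nodes are in use. This is a simplified sketch (for instance, uncommitted nodes do not compete here):

```python
def art1_step(s, b, t, committed, rho):
    """Present one binary pattern s; return the index of its cluster."""
    enabled = committed.copy()      # reset unit R disables failed winners
    while enabled.any():
        J = recognition(s, b, enabled)
        x, resonates = comparison(s, t[J], rho)
        if resonates:               # R = 0: adapt the winner
            update(b, t, J, x)
            return J
        enabled[J] = False          # R = 1: inhibit J, continue the search
    # no committed node passed the vigilance test: commit a fresh node
    if committed.all():
        raise RuntimeError("no free F2 nodes: report failure / outlier")
    J = int(np.argmin(committed))   # first uncommitted node
    committed[J] = True
    update(b, t, J, s.copy())       # for a fresh node, x = s AND 1 = s
    return J

# tiny usage example: three patterns, vigilance 0.7
b, t = init_weights(k=5, n=4)
committed = np.zeros(5, dtype=bool)
for s in np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]], dtype=float):
    print(art1_step(s, b, t, committed, rho=0.7))  # prints 0, 1, 2
```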

Notes
1. Classification is a search process.
2. No two classes have the same b and t vectors.
3. A different ordering of sample presentations may result in a different classification.
4. Increasing the vigilance ρ increases the number of classes learned and decreases the average class size.
5. The classification of a sample may shift during search, but will eventually reach stability.
6. ART2 is the same in spirit but differs in details.