-Artificial Neural Network- Adaptive Resonance Theory (ART)
朝陽科技大學 資訊管理系 (Chaoyang University of Technology, Dept. of Information Management), Prof. 李麗華

Introduction

ART = Adaptive Resonance Theory. The ART network was proposed by Grossberg in 1976. There are two commonly used models:
- ART1: this model takes only binary input.
- ART2: this model takes continuous or binary input.

Introduction (cont.)

The ART network has the following features:
- It is a two-layer network in which the forward and backward passes repeat until the signals resonate.
- ART is a kind of unsupervised learning network.
- The input and output layers are connected by bottom-up weights for competitive learning and by top-down weights for cluster-pattern learning.
- When an unfamiliar input is fed in, the ART network dynamically generates a new output node to represent that data cluster.

ART Network Architecture

- Input layer: the input nodes $X_1, \dots, X_n$.
- Output layer: a cluster layer. The network starts from only one output node and grows until all the input patterns are learned.
- Connections: every input node has one bottom-up link (weight $W_{ij}^b$) to each output node, and each output node has one top-down link (weight $W_{ij}^t$) back to each input node.

[Figure: input nodes $X_1, X_2, \dots, X_n$ connected to output node $Y_1$ by bottom-up weights $W_{11}^b, \dots, W_{n1}^b$ and top-down weights $W_{11}^t, \dots, W_{n1}^t$.]
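As a minimal sketch of how this architecture can be held in code (the variable names are illustrative, not from the slides): the top-down weights act as binary cluster templates and the bottom-up weights as normalized matching filters, one vector of each per output node.

```python
import numpy as np

n = 6  # number of input nodes X1..Xn (6 in the worked example below)

# Start from a single output (cluster) node Y1, as the slides describe.
bottom_up = [np.full(n, 1.0 / (1 + n))]  # W^b_j: matching filter of output node j
top_down = [np.ones(n)]                  # W^t_j: binary template of output node j
```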

ART Process Steps (1/3)

1. Set up the network, i.e., the input nodes $X_1, \dots, X_n$ and one output node (at the very beginning, j = 1).
2. Set the initial weights, i.e., $W_{ij}^t = 1$ and $W_{ij}^b = \frac{1}{1+n}$.
3. Input the pattern X.
4. Calculate the "matching value" for every output node j: $net_j = \sum_i W_{ij}^b X_i$.
5. Find the winning node j* (the node with the largest matching value).
6. Calculate the "similar value" $V_{j^*} = \frac{\sum_i W_{ij^*}^t X_i}{\sum_i X_i}$.
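Steps 4 to 6 can be sketched as a small helper (a hypothetical function name, assuming the weight lists from the architecture sketch above):

```python
import numpy as np

def match_and_similarity(x, bottom_up, top_down):
    """Steps 4-6: matching values, winning node j*, and its similar value V_j*."""
    net = np.array([w_b @ x for w_b in bottom_up])  # step 4: net_j = sum_i W^b_ij x_i
    j_star = int(np.argmax(net))                    # step 5: winning node
    v = (top_down[j_star] @ x) / x.sum()            # step 6: V_j* = sum W^t x / sum x
    return net, j_star, v
```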

ART Process Steps (2/3)

7. Do the vigilance test for the winning node.
Case 1: if $V_{j^*} < \rho$ (the vigilance value), the input pattern is not similar enough to the connected weights and hence does not belong to cluster j*. Find the next winning output node and check whether it can pass the vigilance test; if no node passes, generate a new output node k whose top-down weights copy the input pattern. The new node's output is set to 1 and all other outputs to 0 (i.e., $Y_j = 1$ if j = k, else $Y_j = 0$).
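Case 1 amounts to growing the network by one node whose template is the input itself. A sketch; the bottom-up value uses the standard ART1 fast-learning constant 0.5, which is an assumption rather than something stated on the slides:

```python
def add_cluster(x, bottom_up, top_down):
    """Case 1: no existing node passes the vigilance test, so grow a new output node."""
    top_down.append(x.astype(float).copy())  # W^t_k copies the input pattern
    bottom_up.append(x / (0.5 + x.sum()))    # W^b_k under the fast-learning rule (assumption)
```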

ART Process Steps (3/3)

Case 2: if $V_{j^*} \ge \rho$, the input pattern matches output node j*. Therefore node j* is the cluster representing this pattern X, and all we have to do is update the weights: $W_{ij^*}^{t,new} = W_{ij^*}^{t,old} X_i$ and $W_{ij^*}^{b,new} = \frac{W_{ij^*}^{t,old} X_i}{0.5 + \sum_i W_{ij^*}^{t,old} X_i}$ (the standard ART1 fast-learning rule, consistent with the weights in the worked example below).

8. Repeat steps 3 to 7 for all the input patterns. The network terminates when every input pattern has been fed into the ART network.
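Putting steps 1 to 8 together gives a minimal, self-contained sketch of the whole loop; `art1_cluster` is a hypothetical name, and the 0.5 fast-learning constant is again an assumption:

```python
import numpy as np

def art1_cluster(patterns, rho=0.5):
    """Run the ART1 process (steps 1-8) over binary patterns; return cluster indices."""
    patterns = [np.asarray(p, dtype=float) for p in patterns]
    n = patterns[0].size
    bottom_up = [np.full(n, 1.0 / (1 + n))]   # step 2: W^b = 1/(1+n)
    top_down = [np.ones(n)]                   # step 2: W^t = 1
    assignment = []
    for x in patterns:                        # steps 3-8: one pass per pattern
        net = np.array([w_b @ x for w_b in bottom_up])      # step 4
        for j in np.argsort(-net):                          # step 5, then next winners
            v = (top_down[j] @ x) / x.sum()                 # step 6: similar value
            if v >= rho:                                    # step 7, Case 2: resonance
                t_new = top_down[j] * x                     # element-wise AND with input
                top_down[j] = t_new
                bottom_up[j] = t_new / (0.5 + t_new.sum())  # fast learning (assumption)
                assignment.append(int(j))
                break
        else:                                               # step 7, Case 1: new cluster
            top_down.append(x.copy())
            bottom_up.append(x / (0.5 + x.sum()))
            assignment.append(len(top_down) - 1)
    return assignment
```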

Example (1/6)

Please find the clusters for the following 12 input patterns. Each pattern is a grid of six cells, read into the inputs $X_1, \dots, X_6$; the symbol X is encoded as 1 and O is encoded as 0.

[Figure: the 12 input patterns shown as two-row grids of X/O symbols, e.g. XOX/OXO, OXO/XOX, XXX/OOO, XXX/XXO, OXX/XXO, XOO/OXO, XXX/OXO, OOX/OXO, XOO/OOX, ...; they feed the inputs $X_1$ to $X_6$ and the output starts with $Y_1$.]

Example (2/6)

1. Let ρ = 0.5. With n = 6 inputs and one output node, the initial weights are $W_1^t = \{1, 1, 1, 1, 1, 1\}$ and $W_1^b = \{\frac{1}{7}, \frac{1}{7}, \frac{1}{7}, \frac{1}{7}, \frac{1}{7}, \frac{1}{7}\}$.
2. Input the 1st pattern X = {1, 0, 1, 0, 1, 0}.
3. Calculate the matching value: $net_1 = \sum_i W_{i1}^b X_i = 3/7$, so j* = 1.
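A quick check of this arithmetic in plain Python, with the values taken from the slide:

```python
w_b = [1 / 7] * 6                 # initial bottom-up weights, 1/(1+n) with n = 6
x = [1, 0, 1, 0, 1, 0]            # 1st input pattern
net1 = sum(w * xi for w, xi in zip(w_b, x))
print(net1)                       # 3/7 ~= 0.4286, and j* = 1 (the only node)
```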

Example (3/6)

4. Calculate the "similar value": $V_{j^*} = \frac{\sum_i W_{ij^*}^t X_i}{\sum_i X_i} = \frac{3}{3} = 1$.
Since $V_{j^*} > \rho$ (= 0.5), the pattern passes the vigilance test, so Case 2 applies: update the weights, giving $W_1^t = \{1, 0, 1, 0, 1, 0\}$ and $W_1^b = \{\frac{2}{7}, 0, \frac{2}{7}, 0, \frac{2}{7}, 0\}$.
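The update can be verified numerically as well; the 0.5 in the denominator is the assumed fast-learning constant noted earlier:

```python
w_t_old = [1, 1, 1, 1, 1, 1]
x = [1, 0, 1, 0, 1, 0]
v = sum(t * xi for t, xi in zip(w_t_old, x)) / sum(x)  # similar value = 3/3 = 1.0
w_t_new = [t * xi for t, xi in zip(w_t_old, x)]        # -> [1, 0, 1, 0, 1, 0]
w_b_new = [t / (0.5 + sum(w_t_new)) for t in w_t_new]  # -> 2/7 at the active positions
```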

Example (4/6)

Input the 2nd pattern X = {0, 1, 0, 1, 0, 1}.
Matching value: $net_1 = 0$, so $net_{j^*} = net_1$ and j* = 1.
Similar value: $V_{j^*} = \frac{0}{3} = 0$.
Since $V_{j^*} < \rho$ (= 0.5), the pattern fails the vigilance test, so Case 1 applies. Since there is no other output node, generate $Y_2$ and assign the new weights $W_2^t = \{0, 1, 0, 1, 0, 1\}$ and $W_2^b = \{0, \frac{2}{7}, 0, \frac{2}{7}, 0, \frac{2}{7}\}$.
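Under the same assumed rule, the weights assigned to the newly generated node $Y_2$ follow directly from the pattern itself:

```python
x2 = [0, 1, 0, 1, 0, 1]                     # 2nd input pattern
w2_t = list(x2)                             # top-down template copies the pattern
w2_b = [xi / (0.5 + sum(x2)) for xi in x2]  # -> [0, 2/7, 0, 2/7, 0, 2/7] (assumption)
```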

Example (5/6)

Input the 3rd pattern X = {1, 1, 1, 0, 0, 0}.
Matching values: $net_1 = 4/7$ and $net_2 = 2/7$, so j* = 1.
Similar value: $V_{j^*} = \frac{2}{3} > \rho$ (= 0.5), so Case 2 applies and we do the weight updating: $W_1^t = \{1, 0, 1, 0, 0, 0\}$ and $W_1^b = \{\frac{2}{5}, 0, \frac{2}{5}, 0, 0, 0\}$.
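Running the `art1_cluster` sketch from earlier on these first three patterns reproduces this trace: pattern 1 founds the first cluster, pattern 2 founds a second one, and pattern 3 joins the first (indices below are zero-based):

```python
patterns = [
    [1, 0, 1, 0, 1, 0],   # 1st pattern (XOX / OXO)
    [0, 1, 0, 1, 0, 1],   # 2nd pattern (OXO / XOX): fails vigilance, new node Y2
    [1, 1, 1, 0, 0, 0],   # 3rd pattern (XXX / OOO): resonates with node Y1
]
print(art1_cluster(patterns, rho=0.5))   # -> [0, 1, 0]
```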

Example (6/6)

Continuing in the same way over all the inputs, the final clustering for this example groups the 12 input patterns into 8 clusters.

[Figure: the 12 input patterns arranged under their assigned groups, Cluster 1 through Cluster 8.]