-Artificial Neural Network- Counter Propagation Network 朝陽科技大學 資訊管理系 李麗華 教授

Introduction (1/4)
Counter Propagation Network (CPN)
- Defined by Robert Hecht-Nielsen in 1986, CPN is a network that learns a bidirectional mapping in a hyper-dimensional space.
- CPN learns the forward mapping (from n-space to m-space) and, if it exists, the inverse mapping (from m-space to n-space) for a set of pattern vectors.
[Figure: network with inputs X1, X2, …, Xn and outputs Y1, …, Ym]

Introduction (2/4)
- CPN is built around an unsupervised, winner-take-all competitive learning layer.
- The learning process consists of two phases: the Kohonen learning phase (unsupervised) and the Grossberg learning phase (supervised).

Introduction (3/4)
- The Architecture:
[Figure: three-layer network. Inputs X1, X2, …, Xn feed the hidden (Kohonen) nodes H1, H2, …, Hh through weights Wih; the hidden nodes feed the outputs Y1, …, Ym through weights Whj. The input-to-hidden stage performs Kohonen learning, the hidden-to-output stage Grossberg learning.]

Introduction (4/4)
Input layer: X = [X1, X2, …, Xn]
Hidden layer (also called the cluster layer): H = [H1, H2, …, Hh]
Output layer: Y = [Y1, Y2, …, Ym]
Weights: input→hidden Wih; hidden→output Whj
Transfer function: linear, f(netj) = netj
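To make the notation concrete, here is a minimal setup sketch in Python/NumPy; the variable names (W_ih, W_hj) and the concrete dimensions are illustrative, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

n, h, m = 2, 4, 2                            # inputs, hidden (cluster) nodes, outputs
W_ih = rng.uniform(-0.5, 0.5, size=(h, n))   # input  -> hidden weights, one row per Hh
W_hj = rng.uniform(-0.5, 0.5, size=(m, h))   # hidden -> output weights, one row per Yj
# The transfer function is linear, f(net_j) = net_j, so no activation is applied.
```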

The Learning Process (1/2)
Phase I (Kohonen unsupervised learning):
(1) Compute the Euclidean distance between the input vector and the weight vector of each hidden node.
(2) Find the winner node, i.e. the node with the shortest distance.
(3) Adjust the weights connected to the winner node in the hidden layer: ΔWih* = η1(Xi − Wih*).
Phase II (Grossberg supervised learning):
(1)(2) Same as steps (1) and (2) of Phase I.
(3) The output of the winner hidden node is set to 1 and the outputs of all other hidden nodes to 0.
(4) Adjust the output weights: ΔWh*j = η2·δj·Hh*.

The Learning Process (2/2)
The recall process:
1. Set up the network and read in the trained weights.
2. Input the test vector X.
3. Compute the Euclidean distances and find the winner; the winner hidden node outputs 1 and all others output 0.
4. Compute the weighted sum at the output nodes to derive the prediction (the mapped output).

The Computation of CPN (1/4)
Phase I (Kohonen unsupervised learning):
1. Set up the network.
2. Randomly assign the weights Wih.
3. Input a training vector X = [X1, X2, …, Xn].
4. Compute the Euclidean distance for each hidden node, net_h = Σi (Xi − Wih)², and find the winner node h* with the shortest distance.
5. Set the hidden-layer outputs:
Hh = 1 if h = h*, Hh = 0 otherwise.
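Steps 4 and 5 as a sketch, continuing the setup above and using the squared Euclidean distance as in the slides' worked example:

```python
def winner_hidden(x, W_ih):
    """Steps 4-5: squared Euclidean distance per hidden node, winner-take-all,
    one-hot hidden output H (H[h*] = 1, all others 0)."""
    d = np.sum((W_ih - x) ** 2, axis=1)   # net_h = sum_i (X_i - W_ih)^2
    h_star = int(np.argmin(d))            # winner = shortest distance
    H = np.zeros(len(W_ih))
    H[h_star] = 1.0
    return h_star, H
```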

The Computation of CPN (2/4)
6. Update the winner's weights: ΔWih* = η1(Xi − Wih*), then Wih* = Wih* + ΔWih*.
7. Repeat steps 3–6 until the error is small and stable, or until the set number of training cycles is reached.
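Steps 6 and 7 as a training-loop sketch, reusing winner_hidden from above; the learning rate eta1 and the fixed cycle count used as the stopping rule are illustrative assumptions:

```python
def kohonen_phase(X_train, W_ih, eta1=0.5, cycles=20):
    """Phase I: repeatedly pull the winner's weight vector toward each input."""
    for _ in range(cycles):                          # step 7: repeat steps 3-6
        for x in X_train:
            h_star, _ = winner_hidden(x, W_ih)       # steps 4-5
            W_ih[h_star] += eta1 * (x - W_ih[h_star])  # step 6: dW = eta1 (X - W)
    return W_ih
```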

The Computation of CPN (3/4)
Phase II (Grossberg supervised learning):
1. Input a training vector together with its target vector T.
2. Compute steps 4 and 5 of Phase I, then the outputs:
netj = Σh Whj·Hh, Yj = netj.

The Computation of CPN (4/4)
3. Compute the error: δj = Tj − Yj.
4. Update the weights: ΔWh*j = η2·δj·Hh*, then Wh*j = Wh*j + ΔWh*j.
5. Repeat steps 1–4 of Phase II until the error is very small and stable, or until the set number of training cycles is reached.
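Phase II as a sketch, continuing the functions above. Because H is one-hot, netj = Σh Whj·Hh reduces to reading the winner's column of W_hj; eta2 and the cycle count are again illustrative assumptions:

```python
def grossberg_phase(X_train, T_train, W_ih, W_hj, eta2=0.5, cycles=20):
    """Phase II: move the winner's output weights toward the targets."""
    for _ in range(cycles):                      # step 5: repeat steps 1-4
        for x, t in zip(X_train, T_train):
            h_star, H = winner_hidden(x, W_ih)   # steps 4-5 of Phase I
            y = W_hj @ H                         # net_j = sum_h W_hj * H_h; Y_j = net_j
            delta = t - y                        # step 3: delta_j = T_j - Y_j
            W_hj[:, h_star] += eta2 * delta      # step 4: dW_h*j = eta2 * delta_j * H_h*
    return W_hj
```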

The Recall Computation
1. Set up the network and read the trained weights Wih and Whj.
2. Input a testing vector (pattern) X = [X1, X2, …, Xn].
3. Compute the Euclidean distance to find the winner node h*; set Hh = 1 if h = h*, Hh = 0 otherwise.
4. Compute the outputs: netj = Σh Whj·Hh, Yj = netj.
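Recall in code, continuing the sketches above; with the linear transfer and a one-hot H, the prediction is simply the winner's column of the trained W_hj:

```python
def recall(x, W_ih, W_hj):
    """Winner-take-all on the hidden layer, then the linear weighted sum."""
    _, H = winner_hidden(x, W_ih)
    return W_hj @ H               # Y_j = net_j = sum_h W_hj * H_h
```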

The Example of CPN (1/2)
Ex: Use CPN to solve the XOR problem.

X1  X2 | T1  T2
-1  -1 |  0   1
-1   1 |  1   0
 1  -1 |  1   0
 1   1 |  0   1

[Figure: network with inputs X1, X2, four hidden nodes H1–H4, and outputs Y1, Y2; the initial weights take the values ±0.5.]
Randomly set up the weights Wih and Whj.

The Example of CPN (2/2)
Sol: The following shows only the Phase I computation (the Phase II computation is explained in class).
(1) Plug in X = [-1, -1], T = [0, 1]:
net1 = [-1 − (-0.5)]² + [-1 − (-0.5)]² = (-0.5)² + (-0.5)² = 0.5
net2 = [-1 − (-0.5)]² + [-1 − 0.5]² = (-0.5)² + (-1.5)² = 2.5
net3 = 2.5
net4 = 4.5
∴ net1 has the minimum distance and the winner is h* = 1.
(2) Update the weights Wih*:
ΔW11 = 0.5 × [-1 − (-0.5)] = -0.25
ΔW21 = 0.5 × [-1 − (-0.5)] = -0.25
∴ W11 = W11 + ΔW11 = -0.75, W21 = W21 + ΔW21 = -0.75
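A quick numeric check of the Phase I step above, continuing the sketches from earlier slides; the exact assignment of the ±0.5 initial weights to rows is an assumption chosen to reproduce the slide's distances of 0.5, 2.5, 2.5, 4.5:

```python
W_ih = np.array([[-0.5, -0.5],    # H1 (assumed assignment of the +/-0.5 weights)
                 [-0.5,  0.5],    # H2
                 [ 0.5, -0.5],    # H3
                 [ 0.5,  0.5]])   # H4
x = np.array([-1.0, -1.0])

d = np.sum((W_ih - x) ** 2, axis=1)
print(d)                          # [0.5 2.5 2.5 4.5] -> winner h* = 1 (index 0)

h_star = int(np.argmin(d))
W_ih[h_star] += 0.5 * (x - W_ih[h_star])
print(W_ih[h_star])               # [-0.75 -0.75], matching W11 = W21 = -0.75
```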