Neural Networks, Lecture 18: Applications of SOMs
November 18, 2010

Slide 1: Assignment #3 Question 2

Regarding your cascade correlation projects, here are a few tips to make your life easier. First of all, the book suggests that after adding a new hidden-layer unit and training its weights, we only need to train the weights of the newly added connections in the output layer (or, your instructor's suggestion, just use linear regression to determine them). While that is a very efficient solution, the original paper on cascade correlation suggests always retraining all output-layer weights after adding a hidden-layer unit.
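As an illustration of the linear-regression shortcut, here is a minimal Python/NumPy sketch. It assumes linear output units; the names H (the matrix of activations feeding the output layer) and T (the targets) are made up for illustration and do not come from the assignment.

```python
import numpy as np

def solve_output_weights(H, T):
    """Least-squares fit of all output-layer weights at once.

    H: (n_exemplars, n_units) activations feeding the output layer
       (input components, installed hidden units, and a bias column).
    T: (n_exemplars, n_outputs) matrix of target values.
    Returns W of shape (n_units, n_outputs) so that H @ W approximates T.
    """
    W, residuals, rank, sv = np.linalg.lstsq(H, T, rcond=None)
    return W
```

If your output units are sigmoidal rather than linear, one option is to regress against the inverse sigmoid of the targets instead of T itself.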

Slide 2: Assignment #3 Question 2 (continued)

This will require more training, but it may find a better (i.e., lower-error) overall solution for the weight vectors. Furthermore, it will be easier for you to reuse the same training procedure over and over again instead of writing a separate single-weight updating function or a linear regression function. For this output-weight training, you can simply use your backpropagation algorithm with the hidden-layer training removed. The cascade correlation authors suggest Quickprop for speedup, but Rprop also works.
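Here is a minimal sketch of what "backpropagation with the hidden-layer training removed" might look like, assuming sigmoidal output units, a squared-error objective, and batch updates; H and T are the same hypothetical arrays as above, and plain gradient descent stands in for Quickprop or Rprop.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_output_weights(H, T, epochs=1000, eta=0.1):
    """Gradient descent on the output layer only: the usual
    backpropagation output rule, with nothing propagated back
    to the (frozen) hidden units."""
    rng = np.random.default_rng(0)
    W = rng.uniform(-0.1, 0.1, size=(H.shape[1], T.shape[1]))
    for _ in range(epochs):
        O = sigmoid(H @ W)                # current network outputs
        delta = (T - O) * O * (1.0 - O)   # output-layer error term
        W += eta * (H.T @ delta)          # batch weight update
    return W
```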

Slide 3: Assignment #3 Question 2 (continued)

In order to train the weights of a new hidden-layer unit, you need to know the current error for each output neuron and each exemplar. You can compute these values once and store them in an array. After creating a new hidden unit with random weights, and before training it, determine the current sign S_k of the covariance between the unit's output and the error in output unit k. Do not update S_k during training; doing so can lead to convergence problems.
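A sketch of that computation, assuming you have already stored the per-exemplar errors in an array E as described above; the function and variable names are hypothetical.

```python
import numpy as np

def covariance_signs(v, E):
    """Sign S_k of the covariance between the candidate unit's
    outputs and the residual error of each output unit k.

    v: (n_exemplars,) output of the new hidden unit per exemplar.
    E: (n_exemplars, n_outputs) stored error of each output unit.
    Computed once after the unit is created and then kept fixed
    for the entire candidate training phase.
    """
    cov = (v - v.mean()) @ (E - E.mean(axis=0))   # one covariance per k
    return np.sign(cov)                           # the frozen signs S_k
```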

Slide 4: Assignment #3 Question 2 (continued)

For the hidden-layer training, you can also use Quickprop or Rprop. Once a new hidden-layer unit has been installed and trained, its weights, and thus its output for any given network input, will never change. Therefore, you can store the outputs of all hidden units in arrays and use these stored values for the remainder of the network buildup and training. No optimizations are required for this question (sorry, no prizes here), but it is interesting to try them anyway.
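Because an installed unit's output never changes, the caching described above can be as simple as appending one column per new unit to the activation array; a sketch with hypothetical names:

```python
import numpy as np

def install_hidden_unit(H, w, f=np.tanh):
    """Append the trained unit's (now permanent) per-exemplar
    outputs to the activation cache. H: (n_exemplars, n_units)
    cached activations feeding the new unit; w: its frozen weight
    vector; f: its activation function."""
    v = f(H @ w)                      # computed once, never recomputed
    return np.column_stack([H, v])    # extended cache for later units
```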

Slide 5: Self-Organizing Maps (Kohonen Maps)

Network structure:

[Figure: a single layer of output units o_1, o_2, …, o_m, each fully connected to the components x_1, x_2, …, x_n of the input vector x.]

Slide 6: Self-Organizing Maps (Kohonen Maps)

[Figure.]

Slide 7: Unsupervised Learning in SOMs

In the textbook, a different kind of neighborhood function is used. Instead of a smooth, continuous function φ(i, k) indicating connection strength, a neighborhood boundary is defined. All neurons within the neighborhood of the winner unit adapt their weights to the current input by exactly the same proportion η. The size of the neighborhood is decreased over time.

Slide 8: Unsupervised Learning in SOMs

[Figure: the neighborhood shrinks in stages: one boundary for 0 ≤ t < 10, successively smaller ones for 10 ≤ t < 20, 20 ≤ t < 30, and 30 ≤ t < 40, and the smallest for t > 39.]
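A minimal sketch of this textbook variant (hard neighborhood boundary, identical update proportion inside it, radius shrinking in stages as in the figure above). The shrink-every-10-steps schedule echoes the figure; the learning rate, grid layout, and initialization are assumptions for illustration.

```python
import numpy as np

def train_som(X, positions, steps=50, radius0=4, shrink_every=10, eta=0.1):
    """Hard-boundary SOM training: every unit whose grid position
    lies within the current radius of the winner moves toward the
    input by the same proportion eta; units outside are unchanged.

    X: (n_samples, dim) input vectors.
    positions: (n_units, grid_dim) grid coordinates of the units.
    """
    rng = np.random.default_rng(0)
    W = rng.uniform(0.0, 1.0, size=(len(positions), X.shape[1]))
    for t in range(steps):
        x = X[rng.integers(len(X))]                        # random exemplar
        winner = np.argmin(np.linalg.norm(W - x, axis=1))  # best-matching unit
        radius = max(radius0 - t // shrink_every, 0)       # shrinking n.hood
        d = np.linalg.norm(positions - positions[winner], axis=1)
        inside = d <= radius                   # hard neighborhood boundary
        W[inside] += eta * (x - W[inside])     # same proportion for all inside
    return W
```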

Slide 9: Unsupervised Learning in SOMs

Example I: Learning a one-dimensional representation of a two-dimensional (triangular) input space.

Slide 10: Unsupervised Learning in SOMs

Example II: Learning a two-dimensional representation of a two-dimensional (square) input space.
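Example II can be reproduced directly with the train_som sketch from Slide 8 above; the grid size, sample count, and schedule below are arbitrary illustrative choices, not values from the lecture.

```python
import numpy as np

# A 10x10 grid of units learning the unit square (Example II).
grid = np.array([(i, j) for i in range(10) for j in range(10)], dtype=float)
square = np.random.default_rng(1).uniform(0.0, 1.0, size=(5000, 2))
W = train_som(square, grid, steps=5000, radius0=5, shrink_every=1000)
# Plotting W with lines between grid neighbors should show the map
# unfolding into a regular mesh covering the square.
```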

Slide 11: Unsupervised Learning in SOMs

Example III: Learning a two-dimensional mapping of texture images.

Slide 12: Unsupervised Learning in SOMs

Examples IV and V: Learning two-dimensional mappings of RGB colors and NFL images: map-demos/

Example VI: Interactive SOM learning of two- and three-dimensional shapes.

Slide 13: Unsupervised Learning in SOMs

Example VII: A Self-organizing Semantic Map for Information Retrieval (Xia Lin, Dagobert Soergel, Gary Marchionini).