Presentation on theme: "Neural Networks Lecture 18: Applications of SOMs" (November 18, 2010)
Presentation transcript:

Slide 1: Assignment #3 Question 2

Regarding your cascade correlation projects, here are a few tips to make your life easier. First of all, the book suggests that after adding a new hidden-layer unit and training its weights, we only need to train the weights of the newly added connections to the output layer (or, your instructor's idea, just use linear regression to determine them). While that is a very efficient solution, the original paper on cascade correlation suggests always retraining all output-layer weights after adding a hidden-layer unit.
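[Editor's note] A minimal sketch of the linear-regression shortcut mentioned above, assuming linear output units, NumPy, and hypothetical names (H for the matrix of values feeding the output layer, T for the targets); it is an illustration, not the assignment's required method:

import numpy as np

def solve_output_weights(H, T):
    # H: (n_exemplars, n_feeding) matrix of everything feeding the output layer
    #    (network inputs, all hidden-unit outputs, and a bias column of ones)
    # T: (n_exemplars, n_outputs) matrix of target values
    # Returns W minimizing ||H @ W - T||^2, i.e., the least-squares output weights.
    W, *_ = np.linalg.lstsq(H, T, rcond=None)
    return W

For sigmoidal output units you would regress on the inverse sigmoid (logit) of the targets rather than on the targets themselves.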

Slide 2: Assignment #3 Question 2

This will require more training, but it may find a better (i.e., lower-error) overall solution for the weight vectors. Furthermore, it will be easier for you to reuse the same training procedure over and over again than to write a single-weight updating function or a linear regression function. For this output-weight training, you can simply use your backpropagation algorithm with the hidden-layer training removed. The cascade correlation authors suggest Quickprop for speedup, but Rprop also works.
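[Editor's note] A hedged sketch of what "backpropagation with the hidden-layer training removed" could look like, assuming sigmoidal output units, a sum-of-squares error, and plain batch gradient descent instead of Quickprop or Rprop; all names are illustrative:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_output_weights(W_out, H, T, eta=0.1, epochs=1000):
    # W_out: (n_feeding + 1, n_outputs) output-layer weights (incl. bias row)
    # H:     (n_exemplars, n_feeding + 1) fixed activations feeding the output layer
    # T:     (n_exemplars, n_outputs) targets; hidden-layer weights are never touched
    for _ in range(epochs):
        O = sigmoid(H @ W_out)              # forward pass through the output layer only
        delta = (O - T) * O * (1.0 - O)     # output error times sigmoid derivative
        W_out -= eta * (H.T @ delta)        # batch gradient step on the output weights
    return W_out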

Slide 3: Assignment #3 Question 2

In order to train the weights of a new hidden-layer unit, you need to know the current error of each output neuron for each exemplar. You can compute these values once and store them in an array. After creating a new hidden unit with random weights and before training it, determine the current sign S_k of the covariance between the unit's output and the error in output unit k (do not update S_k during training; doing so can lead to convergence problems).
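[Editor's note] One way to compute the signs S_k, sketched under the assumption that the per-exemplar errors are already stored in an array E (rows: exemplars, columns: output units) and that v holds the candidate unit's output for every exemplar; the names are illustrative:

import numpy as np

def covariance_signs(v, E):
    # v: (n_exemplars,) outputs of the new hidden unit (random initial weights)
    # E: (n_exemplars, n_outputs) stored errors of each output unit on each exemplar
    # Returns S_k = sign of the covariance between the unit's output and error k.
    v_centered = v - v.mean()
    E_centered = E - E.mean(axis=0)
    return np.sign(v_centered @ E_centered)   # one sign per output unit

These signs are computed once here and then held fixed throughout the candidate's training, as the slide recommends.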

Slide 4: Assignment #3 Question 2

For the hidden-layer training, you can also use Quickprop or Rprop. Once a new hidden-layer unit has been installed and trained, its weights, and thus its output for a given network input, will never change. Therefore, you can store the outputs of all hidden units in arrays and use these stored values for the remainder of the network buildup and training. No optimizations are required for this question (sorry, no prizes here), but it is interesting to try them anyway.
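[Editor's note] A sketch of the caching idea, assuming a NumPy matrix H_cache that grows by one column per installed hidden unit; the function name and layout are assumptions:

import numpy as np

def install_hidden_unit(H_cache, X_bias, w_new, activation):
    # H_cache: (n_exemplars, n_installed) stored outputs of all frozen hidden units
    # X_bias:  (n_exemplars, n_inputs + 1) network inputs plus a bias column
    # w_new:   trained weights of the new unit over inputs, bias, and earlier units
    inputs_to_unit = np.hstack([X_bias, H_cache])   # cascade: the unit also sees earlier units
    out = activation(inputs_to_unit @ w_new)        # computed once, never changes again
    return np.hstack([H_cache, out[:, None]])       # the cache grows by one column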

Slide 5: Self-Organizing Maps (Kohonen Maps)

[Figure: network structure with input vector x = (x_1, x_2, ..., x_n) and output vector o = (O_1, O_2, ..., O_m), each input connected to each output unit.]

Slide 6: Self-Organizing Maps (Kohonen Maps)

Slide 7: Unsupervised Learning in SOMs

In the textbook, a different kind of neighborhood function is used. Instead of a smooth, continuous function φ(i, k) indicating connection strength, a neighborhood boundary is defined. All neurons within the neighborhood of the winner unit adapt their weights to the current input by exactly the same proportion (the learning rate η). The size of the neighborhood is decreased over time.
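[Editor's note] A minimal sketch of this update rule, assuming a map laid out on a grid, Euclidean winner selection, and a linearly shrinking radius; parameter values and names are illustrative:

import numpy as np

def som_step(W, grid, x, t, eta=0.1, r0=5.0, t_max=10000):
    # W:    (n_units, dim) weight vectors;  grid: (n_units, 2) unit positions on the map
    # x:    current input vector;           t:    current iteration number
    winner = np.argmin(np.linalg.norm(W - x, axis=1))           # best-matching unit
    radius = max(r0 * (1.0 - t / t_max), 0.0)                   # neighborhood shrinks over time
    in_hood = np.linalg.norm(grid - grid[winner], axis=1) <= radius
    W[in_hood] += eta * (x - W[in_hood])                         # same proportion for every unit in the n.hood
    return W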

Slide 8: Unsupervised Learning in SOMs

[Figure: the neighborhood shrinks in stages: 0 ≤ t < 10, 10 ≤ t < 20, 20 ≤ t < 30, 30 ≤ t < 40, and t > 39.]

Slide 9: Unsupervised Learning in SOMs

Example I: Learning a one-dimensional representation of a two-dimensional (triangular) input space. [Figure: map snapshots after 0, 20, 100, 1000, 10000, and 25000 iterations.]

Slide 10: Unsupervised Learning in SOMs

Example II: Learning a two-dimensional representation of a two-dimensional (square) input space.
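[Editor's note] As an illustration of Example II, a self-contained toy run in the spirit of the slides, with a 10 x 10 map learning the unit square; grid size, schedules, and iteration count are assumptions, not the lecture's values:

import numpy as np

rng = np.random.default_rng(0)
grid = np.array([(i, j) for i in range(10) for j in range(10)], dtype=float)
W = rng.random((100, 2))                       # weight vectors start at random points in the square
t_max = 20000
for t in range(t_max):
    x = rng.random(2)                          # uniform sample from the square input space
    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    radius = 5.0 * (1.0 - t / t_max)           # shrinking neighborhood boundary
    eta = 0.5 * (1.0 - t / t_max) + 0.01       # decaying learning rate
    in_hood = np.linalg.norm(grid - grid[winner], axis=1) <= radius
    W[in_hood] += eta * (x - W[in_hood])       # same step for all units inside the neighborhood

After training, neighboring map units end up with neighboring weight vectors, i.e., the grid unfolds into a topology-preserving covering of the square.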

Slide 11: Unsupervised Learning in SOMs

Example III: Learning a two-dimensional mapping of texture images.

Slide 12: Unsupervised Learning in SOMs

Examples IV and V: Learning two-dimensional mappings of RGB colors and NFL images: http://www.shy.am/2005/12/kohonen-self-organizing-map-demos/

Example VI: Interactive SOM learning of two- and three-dimensional shapes: http://www.cs.umb.edu/~marc/cs672/wsom.exe

Slide 13: Unsupervised Learning in SOMs

Example VII: A Self-Organizing Semantic Map for Information Retrieval (Xia Lin, Dagobert Soergel, Gary Marchionini): http://www.cs.umb.edu/~marc/cs672/lin1991.pdf

