Evolutionary Computation: Evolving Neural Network Topologies


1 Evolutionary Computation: Evolving Neural Network Topologies

2 Project Problem
There is a class of problems that are not linearly separable; the XOR function is a member of this class.
The BACKPROPAGATION algorithm can learn a variety of these non-linear decision surfaces.

3 Project Problem
Non-linear decision surfaces can’t be “learned” with a single perceptron.
Backprop therefore uses a multi-layer approach with a number of input units, a number of “hidden” units, and the corresponding output units.
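A quick illustration of this claim (a sketch added here, not part of the original slides): with hand-picked weights, a network with two hidden threshold units reproduces XOR exactly, something no single perceptron can do. The weights below are chosen by hand, not learned.

% Two hidden threshold units (OR and NAND) feeding an AND output unit
% reproduce XOR. Uses MATLAB implicit expansion (R2016b or later).
X = [0 0; 0 1; 1 0; 1 1]';        % the four XOR inputs, one example per column
step = @(z) double(z > 0);        % threshold activation
W1 = [ 1  1;                      % hidden unit 1 computes x1 OR x2
      -1 -1];                     % hidden unit 2 computes x1 NAND x2
b1 = [-0.5; 1.5];
W2 = [1 1];  b2 = -1.5;           % output unit computes h1 AND h2
H = step(W1*X + b1);              % hidden activations
Y = step(W2*H + b2)               % prints 0 1 1 0, the XOR truth table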

4 Parametric Optimization
Parametric optimization was the goal of this project.
The parameter to be optimized was the number of hidden units in the hidden layer of a backprop network trained to learn the XOR benchmark function.

5 Tool Boxes
A neural network toolbox developed by Herve Abdi (available from Matlab Central) was used for the backprop application.
The Genetic Algorithm for Function Optimization (GAOT) toolbox was used for the GA application.

6 Graphical Illustrations
The next plot shows the randomness associated with successive runs of the backprop algorithm:
xx = mytestbpg(1, 5000, 0.25)
where 1 is the number of hidden units, 5000 is the number of training iterations, and 0.25 is the learning rate.
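The slides do not show mytestbpg itself; the sketch below (saved as its own .m file) is only a guess at what such a helper might do, assuming a 2-input, nHidden-unit, 1-output sigmoid network trained by plain gradient descent on the four XOR patterns and returning the error at each iteration. The name mytestbpg_sketch and every internal detail are illustrative, not the author's code.

% Plausible stand-in for mytestbpg(nHidden, nIter, lr): train a small
% sigmoid network on XOR and return the sum-squared error per iteration.
function err = mytestbpg_sketch(nHidden, nIter, lr)
    X = [0 0; 0 1; 1 0; 1 1]';          % inputs, one example per column
    T = [0 1 1 0];                       % XOR targets
    W1 = randn(nHidden, 2);  b1 = randn(nHidden, 1);
    W2 = randn(1, nHidden);  b2 = randn(1, 1);
    sig = @(z) 1 ./ (1 + exp(-z));
    err = zeros(1, nIter);
    for n = 1:nIter
        H = sig(W1*X + b1);              % hidden activations, one column per example
        O = sig(W2*H + b2);              % network outputs
        err(n) = sum((T - O).^2);        % sum-squared error over all four examples
        dO = (O - T) .* O .* (1 - O);    % output-layer delta
        dH = (W2' * dO) .* H .* (1 - H); % hidden-layer deltas
        W2 = W2 - lr * dO * H';  b2 = b2 - lr * sum(dO, 2);
        W1 = W1 - lr * dH * X';  b1 = b1 - lr * sum(dH, 2);
    end
    plot(err); xlabel('iteration'); ylabel('error');
end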

7 Graphical Illustrations

8 The error from the above plot (and the following) is calculated as follows:

9 Graphical Illustrations
E(n) is the error at epoch n.
T_i(n) = [0 1 1 0] is the output of the target function for the input sets (0,0), (0,1), (1,0), and (1,1) at epoch n.
O_i(n) = [o_1 o_2 o_3 o_4] is the output of the backprop network at epoch n.
Notice that the training examples cover the entire state space of this function.
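The equation itself does not survive in the transcript; given the definitions above it is presumably the sum of squared differences over the four training patterns:

E(n) = \sum_{i=1}^{4} \bigl( T_i(n) - O_i(n) \bigr)^2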

10 Graphical Illustrations
The next few plots show how the rate of error convergence varies with the number of hidden units.

11 Graphical Illustrations: 1 hidden unit

12 Graphical Illustrations: 1 and 2 hidden units

13 Graphical Illustrations: 1, 2, 3, and 5 hidden units

14 Graphical Illustrations: 1, 2, 3, 5, and 50 hidden units

15 Parametric Optimization
The GA supplied by the GAOT toolbox was used to optimize the number of hidden units needed for the XOR backprop benchmark.
A real-valued representation (floating point rather than binary) was used in conjunction with the selection, mutation, and cross-over operators.

16 Parametric Optimization
The fitness (evaluation) function is the driving factor for the GA in the GAOT toolbox.
The fitness function is specific to the problem at hand.

17 Fitness Function
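The fitness function itself is not reproduced in the transcript. Below is only a minimal sketch of what it might look like, written in what I understand to be GAOT's evaluation-function convention (a function that takes and returns the solution vector together with a fitness value to be maximized). The name xorFitness, the size penalty, and the assumption that mytestbpg returns the error history are all illustrative, not the author's actual code.

% Hypothetical GAOT-style evaluation function: small final error and a small
% network map to a large (better) fitness value.
function [sol, fitness] = xorFitness(sol, options)
    nHidden = round(sol(1));                 % candidate number of hidden units
    hist = mytestbpg(nHidden, 5000, 0.25);   % assumed to return the error history
    fitness = -(hist(end) + 0.01 * nHidden); % penalize residual error and size
end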

18 Parametric Optimization
The authors of the “NEAT” paper give the following results for their neuroevolution implementation:
Optimum number of hidden nodes (average value): 2.35
Average number of generations: 32

19 Parametric Optimization
Results of my experimentation:
Optimum number of hidden units (average value): 2.9
This value converged after approximately 17 generations

20 Parametric Optimization
GA parameters: population size of 20; maximum number of generations: 50
Backprop fixed parameters: 5000 training iterations; learning rate of 0.25
Note: approximately 20 minutes per run of the GA with these parameter values, running MATLAB on a 2.2 GHz machine with 768 MB of RAM.
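For reference, the loop below shows how the parameter values on this slide could drive a simple real-valued GA with the selection, cross-over, and mutation operators mentioned on slide 15. It is a stand-in sketch, not the GAOT driver used in the project; it assumes mytestbpg returns the error history of one training run, and the upper bound of 50 hidden units is likewise an assumption.

% Illustrative GA run: population 20, 50 generations, backprop fixed at
% 5000 iterations and a 0.25 learning rate. NOT the actual GAOT code.
popSize = 20; maxGen = 50; lo = 1; hi = 50;
pop = lo + (hi - lo) * rand(popSize, 1);               % real-valued genomes
for gen = 1:maxGen
    err = zeros(popSize, 1);
    for k = 1:popSize
        hist = mytestbpg(round(pop(k)), 5000, 0.25);   % one backprop run per candidate
        err(k) = hist(end);                            % final training error
    end
    [~, order] = sort(err);                            % lower error = fitter
    parents  = pop(order(1:popSize/2));                % truncation selection
    mates    = parents(randperm(numel(parents)));
    alpha    = rand(size(parents));
    children = alpha .* parents + (1 - alpha) .* mates;  % arithmetic cross-over
    children = children + 0.5 * randn(size(children));   % Gaussian mutation
    pop = min(max([parents; children], lo), hi);          % clip to the bounds
end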

21 Initial Population Values

22 Final Population Values

23 Paths of Best and Average Solution for a Run of the GA

24 Target and Backprop Output

25 Conclusion
Interesting experimental results, which also agree with other researchers' results.
The GA is a good tool for parameter optimization; however, the results depend on a good fitness function for the task at hand.

