A new crossover technique in Genetic Programming
Janet Clegg, Intelligent Systems Group, Electronics Department
This presentation:
- Describe basic evolutionary optimisation
- Overview of failed attempts at crossover methods
- Describe the new crossover technique
- Results from testing on two regression problems
Evolutionary optimisation
Start by choosing a set of random trial solutions (population)
Each trial solution is evaluated (fitness / cost)
Parent selection: select a mother and a father.
Perform crossover to produce child 1 and child 2.
Mutation: the probability of mutation is small (say 0.1).
This provides a new population of solutions – the next generation. Repeat generation after generation:
1. select parents
2. perform crossover
3. mutate
until converged. (This loop is sketched below.)
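As a concrete illustration, here is a minimal sketch of this loop in Python. The helpers `evaluate`, `select_parent`, `crossover` and `mutate` are placeholders standing in for whatever representation is used; this is not code from the talk.

```python
import random

def evolve(pop, evaluate, select_parent, crossover, mutate,
           p_mutation=0.1, n_generations=200):
    """One evolutionary run: evaluate, select parents, cross over,
    mutate, and repeat generation after generation."""
    for _ in range(n_generations):
        fitness = [evaluate(ind) for ind in pop]      # fitness / cost of each trial solution
        new_pop = []
        while len(new_pop) < len(pop):
            mother = select_parent(pop, fitness)      # 1. select parents
            father = select_parent(pop, fitness)
            for child in crossover(mother, father):   # 2. crossover -> child 1, child 2
                if random.random() < p_mutation:      # 3. mutate with small probability
                    child = mutate(child)
                new_pop.append(child)
        pop = new_pop[:len(pop)]                      # the next generation
    return pop
```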
Two types of evolutionary optimisation:
- Genetic Algorithms (GA) – optimise some quantity by varying parameters which have numerical values
- Genetic Programming (GP) – optimises some quantity by varying parameters which are functions / parts of computer code / circuit components
Representation
- Traditional GAs – binary representation
- Floating-point GA – performs better than binary
- Genetic Program (GP) – tree representation: nodes represent functions whose inputs are the branches attached below the node
Some crossover methods
Crossover in a binary GA: Mother = 130 (10000010), Father = 122 (01111010). Crossing over after the first bit gives Child = 250 (11111010) and Child = 2 (00000010).
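The example above can be reproduced with a small sketch of single-point binary crossover; the function below is illustrative, with the crossover point passed in explicitly.

```python
def binary_crossover(mother, father, point, bits=8):
    """Single-point crossover on the binary encodings of two integers:
    bits above the crossover point come from one parent, bits below from the other."""
    low_bits = (1 << (bits - point)) - 1              # mask for the bits after the cut
    child1 = (mother & ~low_bits) | (father & low_bits)
    child2 = (father & ~low_bits) | (mother & low_bits)
    return child1, child2

# The slide's example: 130 = 10000010, 122 = 01111010, cut after the first bit
print(binary_crossover(130, 122, point=1))            # -> (250, 2)
```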
Crossover in a floating point GA: the offspring is chosen as a random point between mother and father, within each parameter's [min, max] range.
Traditional method of crossover in a GP: swap randomly chosen sub-branches between the mother and father trees to produce child 1 and child 2.
Motivation for this work:
- Tree crossover in a GP does not always perform well.
- Angeline, and Luke and Spencer, compared (1) the performance of tree crossover with (2) simple mutation of the branches; the difference in performance was statistically insignificant.
- Consequently, some people implement GPs with no crossover – mutation only.
Motivation for this work:
- In a GP many people do not use crossover, so mutation is the more important operator.
- In a GA the crossover operator contributes a great deal to its performance – mutation is a secondary operator.
- AIM: find a crossover technique for GP which works as well as the crossover in a GA.
Cartesian Genetic Programming
Cartesian Genetic Programming (CGP):
- Julian Miller introduced CGP.
- It replaces the tree representation with directed graphs, represented by a string of integers.
- The CGP representation will be explained within the first test problem.
- CGP uses mutation only – no crossover.
First simple test problem: a simple regression problem – finding the function that best fits data sampled from a known target function. The GP should recover this exact function as the optimal fit.
The traditional GP method for this problem. Set of functions and terminals:
Functions: +, -, *
Terminals: 1, x
The initial population is created by randomly choosing functions and terminals within the tree structures, e.g. a random tree representing (1-x) * (x*x).
Crossover by randomly swapping sub-branches of the mother and father trees to produce child 1 and child 2. (A sketch of this sub-branch swap follows.)
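A minimal sketch of this sub-branch swap, assuming trees are stored as nested tuples like ('op', left, right); this is an illustration under that assumption, not the talk's implementation.

```python
import random

def subtree_paths(tree, path=()):
    """Yield the path to every node of a nested-tuple tree,
    e.g. ('*', ('-', 1, 'x'), ('*', 'x', 'x')) encodes (1-x)*(x*x)."""
    yield path
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from subtree_paths(child, path + (i,))

def get_subtree(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def set_subtree(tree, path, new):
    if not path:
        return new
    i = path[0]
    return tree[:i] + (set_subtree(tree[i], path[1:], new),) + tree[i + 1:]

def tree_crossover(mother, father):
    """Swap one randomly chosen sub-branch between the two parent trees."""
    p1 = random.choice(list(subtree_paths(mother)))
    p2 = random.choice(list(subtree_paths(father)))
    child1 = set_subtree(mother, p1, get_subtree(father, p2))
    child2 = set_subtree(father, p2, get_subtree(mother, p1))
    return child1, child2
```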
CGP representation. Each function and terminal is identified by an integer:
Functions: + → 0, - → 1, * → 2
Terminals: 1 → 0, x → 1
Creating the initial population:
- The first integer of a node is a random choice of function: 0 (+), 1 (-), or 2 (*).
- The next two integers of the first node are a random choice of terminals: 0 (1) or 1 (x).
- For later nodes, the input integers are a random choice from the terminals 0 (1) and 1 (x) or the numbers of earlier nodes (2, 3, ...).
Creating the initial population:
- Each node's inputs are a random choice from the terminals and the earlier nodes.
- The output is a random choice from the terminals and all nodes.
Example: a genotype whose nodes decode to (1*x) + (x+x) = 3x.
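A sketch of how such an integer string can be decoded and evaluated. The genotype below is a hypothetical example constructed to decode to (1*x) + (x+x) = 3x; the slide's exact string is not shown.

```python
FUNCTIONS = {0: lambda a, b: a + b,   # +
             1: lambda a, b: a - b,   # -
             2: lambda a, b: a * b}   # *

def decode_cgp(genotype, x):
    """Evaluate an integer CGP genotype at input x. Addresses 0 and 1 are the
    terminals (the constant 1 and x); addresses 2, 3, ... are the nodes in
    order; the final integer selects which address feeds the output."""
    values = [1, x]                        # terminal 0 -> 1, terminal 1 -> x
    genes, out = genotype[:-1], genotype[-1]
    for i in range(0, len(genes), 3):      # each node is (function, input, input)
        f, a, b = genes[i:i + 3]
        values.append(FUNCTIONS[f](values[a], values[b]))
    return values[out]

# Hypothetical genotype: node 2 = 1*x, node 3 = x+x, node 4 = node2 + node3
g = [2, 0, 1,  0, 1, 1,  0, 2, 3,  4]
print(decode_cgp(g, x=2.0))                # -> 6.0, i.e. 3x at x = 2
```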
Run the CGP with test data taken from the target function:
- A population of solutions, with offspring created at each generation
- Mutation only to begin with
- Fitness is the sum of squared differences between the data and the candidate function
Result: [figure – the evolved CGP graph and the expression it decodes to]
Statistical analysis of GP:
- Any two runs of a GP (or GA) will not be exactly the same.
- To analyse the convergence of the GP we need lots of runs.
- All the following graphs depict the average convergence over 4000 runs.
Introduce crossover
Introducing some crossover: pick a random crossover point along the two parent strings and swap the sections to create the two children.
GP with and without crossover
GA with and without crossover
Random crossover point again, but now it must lie on a boundary between the nodes.
GP crossover at nodes
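A sketch of the node-aligned crossover described above, assuming each node occupies three integers and the final integer selects the output (as in the representation earlier).

```python
import random

def node_boundary_crossover(mother, father, genes_per_node=3):
    """Single-point crossover on two CGP integer strings of equal length,
    with the cut restricted to boundaries between nodes (the final gene
    is the output address, so it stays with the tail section)."""
    n_nodes = (len(mother) - 1) // genes_per_node     # needs at least two nodes
    cut = genes_per_node * random.randint(1, n_nodes - 1)
    child1 = mother[:cut] + father[cut:]
    child2 = father[:cut] + mother[cut:]
    return child1, child2
```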
Pick a random node along the string and swap this single node between the two parents.
Crossover only one node
Each integer in the child randomly takes its value from either the mother or the father.
Random swap crossover
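A sketch of the random swap crossover just described (uniform crossover, gene by gene):

```python
import random

def random_swap_crossover(mother, father):
    """Each integer in the child randomly takes its value from either parent;
    the second child receives the complementary choices."""
    child1, child2 = [], []
    for m, f in zip(mother, father):
        if random.random() < 0.5:
            child1.append(m); child2.append(f)
        else:
            child1.append(f); child2.append(m)
    return child1, child2
```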
Comparison with random search:
- GP with no crossover performs better than any of the trial crossover methods here.
- How much better than a completely random search is it?
- The only means by which it can improve on a random search are parent selection and mutation.
Comparison with a random search
GP converges in 58 generations; random search takes 73 generations.
GA performance compared with a completely random search:
- The GA was tested on a large problem – a random search would have involved searching through 150,000,000 data points.
- The GA reached the solution after testing 27,000 data points (average convergence of 5000 GA runs).
- The probability of a random search reaching the solution in 27,000 trials is tiny: assuming a single solution point, roughly 27,000 / 150,000,000 ≈ 1.8 × 10⁻⁴.
Why does GP tree crossover not always work well?
Tree crossover in symbols. Parent 1 computes
f1{ f2[ f4(x1, x2), f5(x3, x4) ], f3[ f6(x5, x6), f7(x7, x8) ] }
and parent 2 computes
g1{ g2[ g4(y1, y2), g5(y3, y4) ], g3[ g6(y5, y6), g7(y7, y8) ] }.
Swapping a sub-branch gives a child such as
f1{ f2[ f4(x1, x2), f5(x3, x4) ], f3[ f6(x5, x6), g7(y7, y8) ] }.
[Figure: simple parent trees f and g with inputs x1 and x2; the swap gives g(x1) and f(x2) – Good!]
[Figure: equally, the combinations f(x1), f(x2), g(x1) and g(x2) are all reachable – Good!]
Suppose we are trying to find a function to fit a set of data looking like this
Suppose we have 2 parents which fit the data fairly well
Choose a crossover point in the first parent. [Tree figure built from /, +, ^, exp, 1, x, y]
Choose a crossover point in the second parent. [Tree figure built from /, +, ^, 1, 2, x, y]
The swapped sub-trees can be large, so the children may fit the data far worse than either parent – good parents do not guarantee good children.
Introducing the new technique:
- Based on Julian Miller's Cartesian Genetic Program (CGP).
- The CGP representation is changed: integer values are replaced by floating point values.
- Crossover is performed as in a floating point GA.
CGP representation – the new representation replaces the integers with floating point variables x1, x2, ..., x16, where the xi lie in a defined range.
Interpretation of the new representation: for the variables xi which represent the choice of function, if the set of functions is {+, -, *}, the range of xi is divided into sub-ranges, one per function, so the real value selects a function. (A sketch of one such mapping follows the figure below.)
[Figure: the string x1 x2 x3 ... x16 decoded into nodes and an output, as in the integer CGP but with floating point genes.]
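One plausible way to map a floating-point gene back to a discrete choice is to split its range into equal sub-ranges, one per option. The exact mapping used in the talk is not shown, so the sketch below is an assumption.

```python
def decode_choice(x, n_choices, lo=0.0, hi=1.0):
    """Map a floating-point gene in [lo, hi] to one of n_choices integers
    by dividing the range into equal sub-ranges."""
    i = int((x - lo) / (hi - lo) * n_choices)
    return min(i, n_choices - 1)          # guard the x == hi edge case

# With the function set {0: +, 1: -, 2: *}:
assert decode_choice(0.10, 3) == 0        # '+'
assert decode_choice(0.50, 3) == 1        # '-'
assert decode_choice(0.95, 3) == 2        # '*'
```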
The crossover operator:
- Crossover is performed as in a floating point GA.
- Two parents, p1 and p2, are chosen.
- Offspring oi are created by choosing uniformly generated random numbers 0 < ri < 1 and setting oi = p1 + ri (p2 − p1), taking p1 < p2.
Crossover in the new representation: as in a floating point GA, the offspring is chosen as a random point between mother and father, within each parameter's [min, max] range. (A sketch follows.)
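A per-gene sketch of this operator; note that o = p1 + r(p2 − p1) with 0 < r < 1 lands between the two parent values whichever of them is larger.

```python
import random

def float_crossover(p1, p2):
    """Floating-point GA crossover: each offspring gene is a uniform random
    point between the two parent genes, o = p1 + r*(p2 - p1), 0 < r < 1."""
    o1, o2 = [], []
    for a, b in zip(p1, p2):
        o1.append(a + random.random() * (b - a))
        o2.append(a + random.random() * (b - a))
    return o1, o2
```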
Why is this crossover likely to work better than tree crossover?
Mathematical interpretation of tree crossover. Parent 1 computes
f1{ f2[ f4(x1, x2), f5(x3, x4) ], f3[ f6(x5, x6), f7(x7, x8) ] }
and parent 2 computes
g1{ g2[ g4(y1, y2), g5(y3, y4) ], g3[ g6(y5, y6), g7(y7, y8) ] }.
Swapping a sub-branch gives a child such as
f1{ f2[ f4(x1, x2), f5(x3, x4) ], f3[ f6(x5, x6), g7(y7, y8) ] } – whole sub-expressions are exchanged in one discrete jump.
Mathematical interpretation of the new method: the fitness can be thought of as a function of these 16 variables, say F(x1, ..., x16), and the optimisation becomes that of finding the values of these 16 variables which give the best fitness. The new crossover guides each variable towards its optimal value in a continuous manner.
The new technique has been tested on two regression problems studied by Koza, referred to below as problem (1) and problem (2).
- Test data – 50 points in the interval [-1, 1]
- Fitness is the sum of the absolute errors over the 50 points
- Population size 50 – 48 offspring each generation
- Tournament selection used to select parents
- Various rates of crossover tested: 0%, 25%, 50%, 75%
- Number of nodes in the representation = 10
Statistical analysis of the new method:
- The following results are based on the average convergence of the new method over 1000 runs.
- A run is considered converged when the absolute error is less than 0.01 at all of the 50 data points (same as Koza).
- Results are based on (1) average convergence graphs, (2) the average number of generations to converge, and (3) Koza's computational effort figure.
(The fitness and the convergence test are sketched below.)
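A sketch of the fitness and the convergence test, assuming the 50 test points are evenly spaced in [-1, 1] (the slides only say 50 points in that interval); `candidate` and `target` are placeholder callables.

```python
def fitness(candidate, target, xs):
    """Sum of absolute errors over the test points."""
    return sum(abs(candidate(x) - target(x)) for x in xs)

def converged(candidate, target, xs, tol=0.01):
    """Converged when the absolute error is below tol at every point."""
    return all(abs(candidate(x) - target(x)) < tol for x in xs)

# 50 test points in [-1, 1] (even spacing assumed here)
xs = [-1 + 2 * i / 49 for i in range(50)]
```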
Average convergence for the first problem
Convergence in the later generations
Introduce variable crossover: at generation 1, crossover is performed 90% of the time; the rate of crossover decreases linearly until generation 180, where crossover is 0%.
Variable crossover
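A sketch of the variable crossover schedule described above; the endpoints (90% at generation 1, 0% at generation 180) are from the slide.

```python
def crossover_rate(generation, start=0.90, end_generation=180):
    """Linearly decreasing crossover rate: 90% at generation 1,
    0% from generation 180 onwards."""
    if generation >= end_generation:
        return 0.0
    return start * (end_generation - generation) / (end_generation - 1)
```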
Average number of generations to converge and computational effort:

Percentage crossover | Average generations to converge | Koza's computational effort
0%                   | 168 | 30,000
25%                  | 84  | 9,000
50%                  | 57  | 8,000
75%                  | 71  | 6,000
Variable crossover   | 47  | 10,000
Average convergence for the second problem
Variable crossover
Average number of generations to converge and computational effort:

Percentage crossover | Average generations to converge | Koza's computational effort
0%                   | 516 | 44,000
25%                  | 735 | 24,000
50%                  | 691 | 14,000
75%                  | 655 | 11,000
Variable crossover   | 278 | 13,000
Number of generations to converge for both problems over 100 runs
Average convergence ignoring runs which take over 1000 generations to converge
Conclusions:
- The new technique has reduced the average number of generations to converge:
  from 168 down to 47 for the first problem tested;
  from 516 down to 278 for the second problem.
Conclusions:
- When crossover is 0%, this new method is equivalent to the traditional CGP (mutation only).
- The computational effort figures for 0% crossover here are similar to those reported for the traditional CGP, although a larger mutation rate and population size have been used here.
Future work:
- Investigate the effects of varying the GP parameters: population size, mutation rate, selection strategies.
- Test the new technique on other problems: larger problems, other types of problems.
Thank you for listening!
Average convergence for the first problem using 50 nodes instead of 10
Average number of generations to converge and computational effort:

Percentage crossover | Average generations to converge | Koza's computational effort
0%                   | 78  | 18,000
25%                  | 85  | 13,000
50%                  | 71  | 11,000
75%                  | 104 | 13,000
Variable crossover   | 45  | 14,000
Average convergence for the second problem using 50 nodes instead of 10
Average number of generations to converge and computational effort:

Percentage crossover | Average generations to converge | Koza's computational effort
0%                   | 131 | 18,000
25%                  | 193 | 17,000
50%                  | 224 | 12,000
75%                  | 152 | 19,000
Variable crossover   | 58  | 16,000