Particle Swarm Optimisation: a mini tutorial
Three months ago, when Alexandre Caminada asked me if I could give a talk about Particle Swarm Optimisation, I said “OK, no problem. How long is this talk? And for what kind of people?”, and he said “About one hour, and all the people there will be optimisation specialists”. As you can guess, Particle Swarm Optimisation (its friends call it PSO) is precisely an optimisation method. So my first idea was not to present the basic PSO principles, but only the use of PSO for some hard problems, like Frequency Assignment (or, more precisely, the Graph Coloring Problem) or the Traveling Salesman Problem. And I began to design some slides with this idea in mind. However, one month later, when I met Alex in Belfort, he told me he had asked me precisely because PSO was in fact almost unknown in France, even by most optimisation specialists, and that it would be better to do a kind of small tutorial. Here it is: PSO in short. But as I hate to waste things, I kept some of my previous slides so that you can at least have an idea of some advanced applications. In fact, I even reused some slides from other presentations. Now, let’s begin. Yes, I know this bee is a monstrous octopod mutant, but it comes directly from the Microsoft PowerPoint Central, so it is not really surprising, is it?
The “inventors” (1) Russell Eberhart eberhart@engr.iupui.edu
PSO has been “invented” mainly by two people. The first one is Russ; he is a professor at the Purdue School of Engineering and Technology, in Indianapolis. He is co-author of Computational Intelligence PC Tools, published by Academic Press Professional in 1996, and also co-author of Swarm Intelligence, published by Morgan Kaufmann in 2001.
The “inventors” (2) James Kennedy Kennedy_Jim@bls.gov 2002-04-24
The second one is my friend Jim Kennedy. Interestingly, he is not at all a “professional” researcher, if I dare say. He graduated in psychology and is the other co-author of the book Swarm Intelligence you might be able to see on the photo. I wouldn’t say Bonnie and Jamey seem to be very interested, although they should be, for this book is in fact far from being just a PSO handbook: it is also full of philosophical and socio-psychological insights about the relationships between mind, intelligence, cooperation, etc. In particular, you should read it if, like me, you think it is possible to take advantage of social studies to design efficient optimisation methods. OK, end of the ad and back to technique.
Part 1: United we stand 2002-04-24 Maurice.Clerc@WriteMe.com
The big boss told you to explore this nice search space to, say, find the highest summit. Unfortunately your team is quite ridiculous: just one stupid guy. Moreover, he does not much like being ordered around. So you moan “I can’t do the job, give me more people”. OK, OK, no problem, here is another one. Just like the previous one, though. The question is “can you really use such a team?”. And the answer is “yes”, if you are able to define an extremely small set of extremely simple rules so that team members can follow them and work together, without any foreman. That is what PSO does. Like, say, Genetic Algorithms, PSO is a population-based method, but unlike GAs, the underlying metaphor is cooperation instead of rivalry. Nevertheless, there is, if I dare say, a kind of convergent evolution, for, on the one hand, some GAs do reduce selection pressure by using nesting and, on the other hand, some PSO variants do use selection to permanently update some parameters, like, typically, the swarm size. If we have time, we will see such variants, but, for the moment, we are just trying to understand the basic principles.
Cooperation example
To illustrate what “cooperation” means in PSO, here is a simplistic example. As usual, the big fish is difficult to catch, hidden in the deepest part of the pond. At each time step, each fisherman tells the other how deep the pond is at his place. At the very beginning, as the depths are quite similar, they both follow their own ways. Now, Fisherman 2 seems to be in a better place, so Fisherman 1 tends to go towards him quite rapidly. Now, the decision is a bit more difficult to make. On the one hand, Fisherman 2 is still in a better place, but on the other hand, Fisherman 1’s position is worse than before. So Fisherman 1 comes to a compromise: he still goes towards Fisherman 2, but more slowly than before. As we can see, by doing that, he escapes from the local minimum. Of course, this example is a caricature, but it presents the main features of a particle in basic PSO: a position, a velocity (or, more precisely, an operator which can be applied to a position in order to modify it), the ability to exchange information with its neighbours, the ability to memorize a previous position, and the ability to use information to make a decision. Remember, though, all that has to remain simple. Let’s now look at these points more precisely.
Initialization. Positions and velocities
Here you have another nice search space. First step: you put some particles on it. You can do it at random, or in a regular way, or both. How many? In practice, for most real problems with dimension between 2 and 100, a swarm size of 20 particles works quite well. There are some mathematical ways to give an estimation, but they are a bit beyond the scope of this talk. Also, as we will see, some variants use an adaptive swarm size. Second step: you define a velocity for each particle, usually at random. You can set all initial velocities to zero but, experimentally, it is usually not the best choice. Remember that what we call “velocity” is in fact a move, just because time is discretized.
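These two steps can be sketched in a few lines of plain Python. The helper name is hypothetical, and the recipes used (uniform positions; each velocity component drawn as the difference of two random points, so a single move cannot exceed the width of the search space) are common choices, not the slide's prescription:

```python
import random

def initialize_swarm(n_particles, dim, lower, upper, seed=None):
    """Random positions in [lower, upper]^dim and random initial velocities."""
    rng = random.Random(seed)
    # Positions: uniform at random over the search space.
    positions = [[rng.uniform(lower, upper) for _ in range(dim)]
                 for _ in range(n_particles)]
    # Velocities: difference of two random points (a common, assumed recipe).
    velocities = [[rng.uniform(lower, upper) - rng.uniform(lower, upper)
                   for _ in range(dim)]
                  for _ in range(n_particles)]
    return positions, velocities
```

With the "standard" sizes of the slide, `initialize_swarm(20, 2, -10.0, 10.0)` gives 20 particles on a 2D square.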
Neighbourhoods: geographical, social
Now, for each particle, we define what is called a neighbourhood. Although some variants use a “geographical” neighbourhood, that is to say they compute distances and take the nearest particles, the most widely used neighbourhood is a “social” one: just a list of neighbours, regardless of where they are. So you do not need to define a distance, and that is a great advantage, for in some cases, particularly for discrete spaces, such a definition would be quite arbitrary. Note that it can be proved (and it is intuitively quite obvious) that if the process converges, any social neighbourhood tends to become a geographical one too. Usually, in practice, social neighbourhoods are defined just once, at the very beginning, which is consistent with the principle “simple rules for simple agents”. Now, the size of the neighbourhood could be a problem. Fortunately, PSO is not very sensitive to this parameter and most users just take a value of 3 or 5 with good results. Unlike for the swarm size, there is no mathematical formula, but like for the swarm size, there are some adaptive variants.
The circular neighbourhood
[Figure: eight numbered particles on a virtual circle; Particle 1’s 3-neighbourhood is highlighted.] The most commonly used neighbourhood is the circular one. The picture is almost self-explanatory. Each particle is numbered, put on a virtual circle according to its number, and the neighbourhood of a given particle is built by taking its neighbours on this circle. An important point for rule simplicity is that each particle belongs to its own neighbourhood. For example, if a rule says “I have to check all my neighbours”, there is no need to add “and I have to check myself”. We will see that more precisely later.
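The circular neighbourhood fits in one line of code. A minimal sketch (hypothetical function name, 0-indexed particles), where the particle belongs to its own neighbourhood as the slide insists:

```python
def ring_neighbourhood(i, swarm_size, k=3):
    """Indices of particle i's k-neighbourhood on the virtual circle.

    The particle itself is included. For k=3 this gives
    {left neighbour, i, right neighbour}; the modulo closes the circle.
    """
    half = k // 2
    return [(i + offset) % swarm_size for offset in range(-half, half + 1)]
```

For example, with 8 particles, particle 0's 3-neighbourhood wraps around the circle to include particle 7.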
Psychosocial compromise
[Diagram: my position x, my velocity v, my best performance p_i with its i-proximity, and the best performance of my neighbours p_g with its g-proximity.] This may be the most important slide of this presentation, for it summarizes the core of the method. Let’s take a bit of time to comment on it. You are a particle. Sorry, I don’t mean you are quite stupid, it is just to explain how it works. (By the way, Jim Kennedy has designed a nice game in which you compete with such stupid particles. I have it here, and if we have time, you will see it is almost impossible to beat.) You can compute how good your position is (that is to say, you can compute the objective function at the place you are). You remember the best position you ever found (and the objective function value there). You can ask your neighbours for this information, which they have also memorized, and choose the best one. Now, you have three tendencies: audacious, following your own way (just using your own velocity); conservative, going back more or less towards your best previous position; sheeplike, going more or less towards your best neighbour. What PSO formalizes is how to combine these tendencies in order to be globally efficient.
The historical algorithm
Here is the pseudo-code of the algorithm:

  at each time step t
    for each particle
      for each component d
        update the velocity   (randomness inside this loop)
      then move

As you can see, the randomness is inside the loop on components. As a result, when I say the particle is going towards its best previous position, it means in fact “towards a point which is around its best previous position”, that is to say a point within a given proximity. And the same for the best previous position in the neighbourhood. Let us see more precisely what this means.
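A minimal sketch of this update for a single particle, in Python. The coefficients c1 = c2 = 2.0 are the commonly cited historical values, an assumption here; note that the random coefficients are drawn independently inside the component loop, exactly as the pseudo-code says:

```python
import random

def step(x, v, p_best, g_best, c1=2.0, c2=2.0, rng=random):
    """One time step of the historical PSO update for one particle (in place)."""
    for d in range(len(x)):
        v[d] = (v[d]                                        # audacious
                + rng.uniform(0, c1) * (p_best[d] - x[d])   # conservative
                + rng.uniform(0, c2) * (g_best[d] - x[d]))  # sheeplike
    for d in range(len(x)):
        x[d] += v[d]   # then move
    return x, v
```

Each component of the new velocity mixes the three tendencies of the previous slide, each with its own random weight.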
Random proximity: hyperparallelepiped => biased
Each random term defines a hyperparallelepiped inside the search space (an i-proximity around p_i, a g-proximity around p_g). As you can see, it means that the “proximity area” around a given point is in fact biased (the point is not at the center). Nevertheless, the algorithm is so robust that it works quite well despite this bias but, as you can guess, it can be improved by defining non-biased proximities, namely hyperspheres. We will see that if we have time, but we can already tell that the bias increases with the search space dimension. In a way, this slide could be the end of this mini tutorial, for you now have all you really need to write your own Particle Swarm optimiser. Now, of course, if you want to write a good one, some technical details are important. The same goes if you want to use PSO on non-classical search spaces or with non-classical objective functions, discrete ones in particular. Before seeing these points and some others, just have a look at a small animation.
Animated illustration
At the very beginning, particles are randomly put on the search space. There is just one global optimum and several local ones. Let us now run the player. As you can see, the particles quite rapidly form some sub-swarms around local optima. As soon as a particle is quite near the global optimum, the others also go towards it, so the probability of really finding it does increase. Of course, if the run were longer, all particles would finally go towards this global optimum.
Part 2: How to choose parameters
I am sorry, but in this part we have to do a bit more maths. What I call “parameters” here are in fact just the coefficients in the formula which indicates how to update the velocity of a given particle. Swarm size and neighbourhood size are also parameters, but less important ones. I mean you can perfectly well use a swarm size of 20 and a neighbourhood of 3 for a large range of problems with good results. On the contrary, PSO is more sensitive to the parameters we examine now.
The Type 1″ form. Global constriction coefficient. Usual values: κ=1, φ=4.1 => χ≈0.73, with swarm size=20 and hood size=3.
It can be proved that by using some constriction coefficients to modify the equations, your system won’t explode. The most general form has 6 coefficients, but I show you here just an interesting particular case I defined three years ago, among some others, and which has been used by Jim Kennedy for a lot of tests. There are in fact just two parameters, called kappa and phi, and the global constriction coefficient chi is computed so that the system is non-divergent. Of course, as you know, it is usually impossible to prove that an algorithm like PSO, with some randomness, does converge towards the global optimum. We are sure here that it converges towards something, but we can only hope that, thanks to the interactions between particles, this “something” is indeed a global optimum. The good point is that we can easily tune the constriction coefficients so that the swarm has enough time to explore the search space without “exploding”. This gives the non-divergence criterion.
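The value χ≈0.73 can be checked with the commonly cited closed form of this constriction coefficient for φ > 4 (a sketch of the usual formula, not the paper's full derivation):

```python
import math

def constriction(phi, kappa=1.0):
    """Constriction coefficient: chi = 2*kappa / |2 - phi - sqrt(phi^2 - 4*phi)|.

    This closed form is only valid for phi > 4 (for phi <= 4 the square
    root becomes imaginary, which is exactly the complex regime of the
    next slides).
    """
    assert phi > 4, "this closed form assumes phi > 4"
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```

With κ=1 and φ=4.1 this gives χ≈0.7298, the 0.73 of the slide.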
[Figure: a 3D section of the 5D complex space, showing the convergence and non-divergence regions; axes φ, Re(y), Re(v).]
I don’t give any details here, but the behaviour of a particle is easier to describe in a 5-dimensional complex search space. Just for fun, I show you here a section of such a space. As you may notice, 4 is a phi value to avoid. Non-divergence means the particle in the phase space tends towards an attractor, whose shape can be perfectly described.
Move in a 2D section (attractor)
For example, we can see here such an attractor in the “velocity” plane of the phase space, that is to say the two coordinates are respectively the imaginary and real parts of a given velocity component. The circular move in the phase space is seen as an oscillating one in the real space. If left alone, the particle does not converge. As I have said, it needs the help of the other particles. It is a bit more difficult to visualize, but convergence then means the radius of the attractor (which is here exactly a circle) tends towards zero.
Some functions: Griewank, Rastrigin, Rosenbrock
Let us see now a few examples with some very well known test functions. Of course far more tests have been done and there is now absolutely no doubt that PSO is efficient.
... and some results. Optimum=0, dimension=30, best result after 40 000 evaluations.
For each test the dimension of the search space is 30, and we stop the run after 40 000 evaluations. As you can see, PSO is not bad at all, compared to a classical evolutionary algorithm. Of course, this example is quite old, and there are now some better evolutionary or genetic algorithms, but there are also some better PSO versions. As usual, it does not make sense to simply say “PSO is better” or “GAs are better”, so please consider this slide just as an illustration of the fact that PSO is at least as good as some better-known methods.
Beat the swarm!
A) I am sorry, I am already late, so we have no time to play this nice game, written by Jim Kennedy. B) We have a few minutes, so, as promised, I show you here the nice game written by Jim Kennedy. The search space is a binary one. The dimension here is 9 (3 times 3), but you can change it. At the very beginning, there is a hidden pattern to find. Each possible pattern has a score, and you have to reach 1000 here. You try to beat a swarm which competes with you. The window shows your current position and the best one you have ever found. It also shows the best result found by the swarm. Note that the game is unfair, for you have more information than the swarm: you know its best performance, it does not know yours. --- Let’s try it. You click where you want to define a new position, and you do one step to see the result. With this example, if you are very smart, you might win. As soon as the dimension is higher, you have absolutely no chance.
Part 3: Beyond real numbers
One of the most interesting features of PSO is that you can use it on a large range of problems, without changing the formalism, assuming you are able to find a representation in terms of positions and velocities, and to define a very small set of basic algebraic operators. Let us explain these minimum requirements a bit.
Minimum requirements: comparing positions in the search space
Algebraic operators. If you look carefully at the core PSO formulas, you can see that you do not need a lot of requirements in order to use this method. First, you have to define what a position is, what a velocity is, and what a coefficient is (although, most of the time, you just use real coefficients). Second, for any pair of positions in the search space, you have to be able to compare them, according to the objective function. Note that this does not mean the function has to be a numerical one; you just need to have defined an order. Also, the objective function does not need to be differentiable, or even continuous. Third, you need 4 basic algebraic operators. In practice, particularly for discrete or hybrid problems (I mean problems mixing continuous and discrete variables), the first one, how to multiply a velocity by a coefficient, may be the most difficult to define.
Pseudo-code form => algebraic operators
  velocity = pos_minus_pos(position1, position2)
  velocity = linear_combin(a, velocity1, b, velocity2)
  position = pos_plus_vel(position, velocity)
  (position, velocity) = confinement(position(t+1), position(t))

For some problems, particularly for combinatorial ones like Graph Coloring, it is easier to directly define what a linear combination of two velocities is. So we finally have some operators like these ones. The confinement operator is not really an inherent part of PSO, but it is very useful in practice, to be sure the particle stays inside the search space. For example, if a particle tends to go outside the search space, you may have to take it back to the frontier and modify its velocity, either by setting it to zero or to its opposite, at least for the component which is indeed outside the search space.
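For the continuous case these four operators reduce to a few lines. A sketch (the confinement signature is simplified to box bounds, and the "set the offending velocity component to zero" option is the one implemented, the other being to reverse it):

```python
def pos_minus_pos(p1, p2):
    """Velocity that would move a particle from p2 to p1."""
    return [a - b for a, b in zip(p1, p2)]

def linear_combin(a, v1, b, v2):
    """Linear combination of two velocities."""
    return [a * x + b * y for x, y in zip(v1, v2)]

def pos_plus_vel(p, v):
    """Apply a velocity (a move) to a position."""
    return [a + b for a, b in zip(p, v)]

def confinement(p, v, lower, upper):
    """Bring an escaping particle back to the frontier (in place),
    zeroing the velocity component that took it outside."""
    for d in range(len(p)):
        if p[d] < lower:
            p[d], v[d] = lower, 0.0
        elif p[d] > upper:
            p[d], v[d] = upper, 0.0
    return p, v
```

The whole point of the slide is that only these signatures matter: for discrete spaces you change the bodies, not the PSO process around them.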
Fifty-fifty. Granularity=1, N=100, D=20. Search space: [1,N]^D
10^5 evaluations: the two sums are equal (= 450). With this kind of general operators, PSO can cope with a lot of very different problems. Of course, you sometimes have to seriously adapt the content of these operators, but the process itself remains unmodified. To illustrate this, have a very rapid look at some classical integer problems. In this one you have to find 20 different integer numbers between 1 and 100 so that the sum of the first 10 is equal to the sum of the last 10. Note that “different” may be taken into account either by using penalties in the objective function or by using a special confinement operator. This second method has been used here. The granularity is the smallest value which can be used to modify a component. Since we want to use only integer numbers, the granularity is equal to 1.
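A possible objective function for this problem, shown here with the penalty option for "different" (the slide's run actually used the special confinement operator instead; the penalty weight 1000 is an arbitrary assumption):

```python
def fifty_fifty_objective(x):
    """|sum of first half - sum of last half|, to minimize; 0 at a solution.

    x is a list of 20 integers in [1, 100]. Duplicates are punished by a
    penalty proportional to how many values are repeated.
    """
    half = len(x) // 2
    penalty = 1000 * (len(x) - len(set(x)))  # 0 when all values differ
    return abs(sum(x[:half]) - sum(x[half:])) + penalty
```

Any list of 20 distinct integers whose two halves have equal sums scores 0, so the swarm just has to drive this value down.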
Knapsack. Granularity=1, N=100, D=10, S=100. 870 evaluations:
run 1 => (9, 14, 18, 1, 16, 5, 6, 2, 12, 17) run 2 => (29, 3, 16, 4, 1, 2, 6, 8, 26, 5) Another classical one. Find 10 different integer numbers between 1 and 100, whose sum is equal to 100. Of course, for this case, as for the previous one, there are some better specific algorithms, but if you don’t have one at hand, PSO can do the job, although more slowly.
Graph Coloring Problem
[Small example on the slide: a 5-node graph numbering plus a velocity gives a new numbering: pos-plus-vel.] I have a whole presentation about this important topic, with some examples for Frequency Assignment, but it would be far too long to show it now. Just note the representation: a “position” is a positive integer graph numbering, and a velocity an integer graph numbering. Although it is not really obvious, it is indeed possible to define all the algebraic operations we have seen. In practice such a method is not used alone, but in combination with some more classical local search algorithms, greedy algorithms for example. I tried such a hybrid method on graphs with up to 300 nodes with quite good results, similar to what we can obtain with genetic algorithms, but apparently not significantly better.
The Tireless Traveller
Example of position: X=(5,3,4,1,2,6). Example of velocity: v=((5,3),(2,5),(3,1)). PSO can be used as a stand-alone algorithm for the Travelling Salesman Problem. If the graph has n nodes, a position is just a permutation of the integers 1 to n, and a velocity a list of transpositions of two nodes: 5 gives 3, then 2 gives 5, etc. Applying such a list to a permutation gives another permutation. Multiplying a velocity by a coefficient smaller than 1 gives another velocity which is just a smaller list of transpositions; there are several ways to do that. The objective function is a classical one. If an arc in the sequence does not really exist, a penalty is applied. On this toy example a solution is found very quickly, but on classical benchmarks such as TSPLIB, I am not sure PSO can compete with the best, highly complicated, specific algorithms like LKH. Nevertheless it is used for some real applications.
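The slide's example can be sketched directly. Here a pair (a, b) is read as "swap the values a and b", and truncating the list is one simple way, among the several the slide mentions, to multiply a velocity by a coefficient smaller than 1:

```python
def apply_velocity(perm, velocity):
    """Apply a 'velocity' (a list of transpositions) to a permutation.

    Each pair (a, b) exchanges the values a and b wherever they sit,
    matching the slide's '5 gives 3, then 2 gives 5' reading.
    """
    p = list(perm)
    for a, b in velocity:
        i, j = p.index(a), p.index(b)
        p[i], p[j] = p[j], p[i]
    return p

def scale_velocity(velocity, c):
    """Multiply a velocity by a coefficient c < 1 by keeping only the
    first part of the transposition list (one option among several)."""
    return velocity[: int(c * len(velocity))]
```

Applying the slide's velocity to the slide's position yields another permutation, as required; scaling it by 0.5 keeps a single transposition.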
Apple trees: best position found with a swarm size of 3.
This is an interesting example, for it is not so obvious to solve, and nevertheless PSO finds a solution very rapidly, after just 4 evaluations, even with a very small swarm of 3 particles. You have to divide the field so that each part has the same number of trees, or, at least, so that the differences are as small as possible.
Part 4: Some variants
As you can guess, a lot of variants have been written, and I can present here just a sample of them. At the end of this talk, I will tell you where you can find most of the others.
Unbiased random proximity
Hyperparallelepiped => biased; hypersphere vs. hypercube. As we have seen, PSO needs to define a random proximity around a given position. The historical algorithm uses a hyperparallelepiped, for it is very easy to code, but then the position is not at all at the center of its proximity, the proximity size may be too large and, more importantly, the convergence time depends on the system of coordinates you choose. To simultaneously avoid these three biases we can use hyperspheres. It is particularly useful in high dimensions, for the ratio of the hypersphere volume to the volume of the tangent hypercube decreases very quickly when the dimension increases. It is not that complicated to code, for you can compute in advance all unit hypersphere volumes for a large range of dimensions.
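A standard recipe for drawing a point uniformly inside a hypersphere is a Gaussian direction, normalized, with the radius scaled by U^(1/D). This is a sketch of one common method; whether the actual PSO variant uses exactly this recipe is an assumption:

```python
import math
import random

def sample_in_hypersphere(center, radius, rng=random):
    """Uniform random point inside the hypersphere of given center/radius."""
    d = len(center)
    # Isotropic direction: normalized Gaussian vector.
    g = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(x * x for x in g))
    # Uniform radius: U**(1/d) compensates for volume growing like r**d.
    r = radius * rng.random() ** (1.0 / d)
    return [c + r * x / norm for c, x in zip(center, g)]
```

Unlike the hyperparallelepiped, this proximity is centered on the point and is invariant under rotation of the coordinate system, which removes the three biases the slide lists.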
Clusters and queens. Each particle is weighted by its performance; the centroids (“queens”) are temporary new “particles”; the clustering is dynamic. You remember the first animation, in which particles tend to form some sub-swarms around local optima. It is then tempting to “weight” each particle by its performance, to compute a centroid for each sub-swarm, and to consider it as a quasi-particle (without velocity), in order to speed up the process. It seems to work, although it is not very clear what kind of weighting would be the best for a given problem.
Think locally, act locally (Adaptive versions)
The following slides come from another presentation; that is why they look a bit different. On the one hand, there is no selection at all in PSO, just cooperation but, on the other hand, a selection process has also been proved to be sometimes quite useful, as in Genetic Algorithms. So there have been some attempts to combine the advantages of the two approaches without, if possible, also combining their drawbacks. As far as I know, the first attempt was made by Peter Angeline four years ago. The rules were apparently a bit too simplistic, so the results were not completely convincing.
Adaptive swarm size. The rules on the slide read: “There has been enough improvement, although I'm the worst => I try to kill myself” and “I'm the best, but there has been not enough improvement => I try to generate a new particle”. This is a more recent and sophisticated attempt. I don’t give here the precise formulas, just the underlying metaphors. On this slide, “improvement” means “improvement in the particle’s neighbourhood”, and “I try” means that success depends on the current swarm size, according to a probability rule. The good point is that you no longer have to “guess” what the best swarm size could be, or to launch a lot of runs to find it, for you can perfectly well begin with a very small swarm and let it increase and decrease by itself. Note that by using the same kind of metaphor, it is also possible to adapt the neighbourhood size.
Adaptive coefficients
a·v and rand(0…b)·(p−x): the better I am, the more I follow my own way; the better my best neighbour is, the more I tend to go towards him. You can also use adaptive coefficients. The theory gives a range of admissible values, in order to have a non-divergent process. Same remark as before: you no longer have to try and try again in order to find a good set of coefficients.
Energies: classical process
Rosenbrock 2D, swarm size=20, constant coefficients. In order to see how adaptations modify the behaviour of the swarm, let us have a look at a global variable, for example the kinetic energy, the red curve in this small example. With classical PSO, it decreases quite rapidly, so the particles, even if one of them is near the optimum, spend a lot of time just slowly moving a bit nearer. It then needs about 2240 evaluations to reach the solution with an admissible error of 10^-5.
Energies: adaptive process
Rosenbrock 2D, adaptive swarm size, adaptive coefficients. This does not happen with an adaptive version, for from time to time a new particle is generated with a random velocity. In the previous example, we used a “standard” swarm size of 20. As we can guess by looking at this one, a swarm size of about 10 would have been enough (the best value is in fact 11). So, in this case, the adaptive version needs fewer evaluations. It would not have been exactly the same if the “right” swarm size had been, say, 30 but, on the other hand, to find the optimal value 11 you would have had to run a classical PSO algorithm several times, so, globally, such an adaptive version is in fact still better.
Part 5: Real applications (hybrid)
Medical diagnosis, industrial mixer, electrical generator, electrical vehicle. OK, all this is nice, but is PSO really used? The answer is “yes”. Historically, PSO was first proved to be far more efficient than back-propagation to “teach” or “train” a neural network, typically cutting the time from a few hours to a few minutes. So it is not surprising that most real applications are hybrid ones, using a neural network driven by a PSO algorithm. It could almost be a rule: “if you have a system using a neural network, you can improve it by also using PSO”.
Real applications (stand alone)
Cockshott A. R., Hartman B. E., "Improving the fermentation medium for Echinocandin B production. Part II: Particle swarm optimization", Process Biochemistry, vol. 36, 2001.
He Z., Wei C., Yang L., Gao X., Yao S., Eberhart R. C., Shi Y., "Extracting Rules from Fuzzy Neural Network by Particle Swarm Optimization", IEEE International Conference on Evolutionary Computation, Anchorage, Alaska, USA, 1998.
Secrest B. R., Traveling Salesman Problem for Surveillance Mission using Particle Swarm Optimization, AFIT/GCE/ENG/01M-03, Air Force Institute of Technology, 2001.
Yoshida H., Kawata K., Fukuyama Y., "A Particle Swarm Optimization for Reactive Power and Voltage Control considering Voltage Security Assessment", IEEE Trans. on Power Systems, vol. 15, 2001.
However, there are also some more recent applications using just PSO. I show you here four references, so that you can have an idea of the diversity of the possible uses. Don’t ask me what Echinocandin B exactly is, but the authors use PSO mainly because it very rapidly gives several solutions, just by modifying the random initialization. It is far beyond the scope of this talk to explain the process involved in the second one, but it is a classical optimisation problem: looking for the smallest set of smallest rules able to explain the behaviour of a neural network. The search space is extremely big, so PSO is used here for its speed. From my point of view, the third one is the most interesting, for I had written the first “PSO for TSP” version two years ago. It was not very satisfying, and it couldn’t reach the level of the best specific algorithms. These military guys don’t think the same, for they have found a way to seriously improve my algorithm, so that it runs about 80 times faster. For the fourth one, one of the reasons the authors selected PSO was its capability to cope with a non-linear optimisation problem using both continuous and discrete variables.
Particle Swarm Central, http://www.particleswarm.net
To know more, THE site: Particle Swarm Central. Self-advert:
Clerc M., Kennedy J., "The Particle Swarm - Explosion, Stability, and Convergence in a Multidimensional Complex Space", IEEE Transactions on Evolutionary Computation, 2002, vol. 6.
Clerc M., "L'optimisation par essaim particulaire. Principes et pratique", Hermès, Techniques et Science de l'Informatique, 2002.
If you want to find more references, some source codes, papers and presentations, you should go to the Particle Swarm Central. As today is my birthday, and to flatter my ego, I also give you here the references of my two most recent papers, although you will find them there anyway. The first one has two parts: a quite rebarbative mathematical one, my contribution, and a more practical one, with some examples, Jim’s part. If you read French, you also have the second one, not really for the general public, but nevertheless easier to understand and with a lot of references. And, of course, don’t hesitate to write to me, either in English or in French, if you need more explanation. Thank you for your attention. A) We still have a few minutes left, so, if you have any question... B) I am afraid we don’t have time for any questions right now but, as usual in this kind of seminar, we can have some fruitful discussions around a cup of tea after the other talks.
Appendix
Just in case, I have here a few other slides, with a bit more technical details and examples.
Canonical form. Matrix M, eigenvalues e1 and e2; coefficients α, β, γ, δ−ηφ.
With a few simple algebraic transformations we obtain another form of the “historical” algorithm. We note that the system is completely defined by a matrix M, whose eigenvalues are e1 and e2. Now we put some coefficients everywhere in the formulas, and we look for relations between these coefficients which ensure the system is convergent or, at least, non-divergent.
Constriction coefficients
We now define two constriction coefficients. Of course they are not arbitrary, and you can find all details in the paper I have written with Jim Kennedy. The interesting point is that the value phi = 4 is a special one. If phi is smaller than 4, the system has a complex part. In fact, it is indeed easier to study its behaviour in a complex phase space.
Convergence criterion
Now the non-divergence criterion is quite simple. The curves show the norms of the eigenvalues versus phi. We have no problem with e1, but we may have some with e2. We define a new parameter kappa, combining e2 and the second constriction coefficient, and the non-divergence condition is that it is smaller than 1.
Magic Square (1)
Just for fun, you can use PSO to very rapidly generate small magic squares. The objective function is just a direct formulation of the well known rules about the sums by columns and by rows, summarized into a single objective function to minimize.
Magic Square (2). D=3×3, N=100; 10 runs, 10 solutions, 13430 evaluations.
And here you have some different results for integers between 1 and 100. Note that although the search space size is 10^18, each run needs just about 1350 evaluations to find a solution. Of course, the problem is not as difficult as it could seem, for there are in fact really a lot of solutions.
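A possible objective function for these squares, using only the row and column sum rules as the previous slide said (hypothetical function name; diagonals are deliberately ignored):

```python
def magic_objective(square):
    """Sum of absolute deviations of every row and column sum from the
    mean line sum; 0 exactly when all rows and columns agree."""
    n = len(square)
    rows = [sum(r) for r in square]
    cols = [sum(square[i][j] for i in range(n)) for j in range(n)]
    target = sum(rows) / n  # the common value every line should reach
    return sum(abs(s - target) for s in rows + cols)
```

PSO then just has to drive this value down to 0; with N=100 there are indeed many zeros in the search space, which is why the runs are so short.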
Non-linear system. 1 run => 1 solution, 143 evaluations. Search space [0,1]^2; 3 solutions.
I don’t know if it is the same for you, but I quite often have to solve very small problems like this one. Of course, there are some powerful tools like Maple or Mathematica, but it is not so easy to obtain several solutions by using them. So, in such a case, I just add a few lines to the C source code of my PSO program, in order to define a new objective function, and then I run it several times.
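Turning such a system into an objective function takes only a few lines. The slide does not show its actual equations, so the two below are purely illustrative assumptions; the point is only that a zero of the objective is a solution of the system:

```python
import math

def system_objective(xy):
    """Sum of absolute residuals of a small nonlinear system.

    Hypothetical system (NOT the slide's): x^2 + y^2 = 1 and sin(x) = y.
    Minimizing this to 0 over [0,1]^2 solves the system; restarting PSO
    with different random initializations can reach different roots.
    """
    x, y = xy
    f1 = x * x + y * y - 1.0   # illustrative equation 1
    f2 = math.sin(x) - y       # illustrative equation 2
    return abs(f1) + abs(f2)
```

Running the optimiser several times from random initial swarms, as the slide suggests, is what lets you collect several distinct solutions.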