SWARM INTELLIGENCE Swarms A swarm is defined as a set of (mobile) agents that collectively carry out distributed problem solving. The agents are liable to communicate with each other, and this communication may be direct or indirect (by acting on their local environment).
SWARM INTELLIGENCE Swarms Swarm Intelligence (SI) is the property of a system whereby the collective behaviors of (unsophisticated) agents interacting locally with their environment cause coherent functional global patterns to emerge. Two well-known algorithms developed in the area of swarm intelligence are the particle swarm algorithm and the ant colony algorithm.
SWARM INTELLIGENCE Particle Swarm Algorithm Particle swarm optimization (PSO) is a computational technique developed by Kennedy and Eberhart in 1995, inspired by the social behavior of bird flocking. It is particularly useful for optimization problems.
SWARM INTELLIGENCE Particle Swarm Algorithm A swarm consists of several particles, and each particle keeps track of its own attributes. The attributes of any particle in the swarm are: its current position, given by an n-dimensional vector; and its current velocity, which keeps track of the speed and direction in which the particle is currently moving. Each particle also has a current fitness value, obtained by evaluating the fitness function at the particle's current position. This per-particle state is sketched in code below.
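A minimal sketch of the particle state just described (the class and attribute names are illustrative, not from any particular library):

```python
import numpy as np

class Particle:
    """Container for the per-particle state described above."""
    def __init__(self, n_dims, lower, upper, rng):
        # Current position: a point in the n-dimensional search space
        self.position = rng.uniform(lower, upper, n_dims)
        # Current velocity: speed and direction of movement
        self.velocity = np.zeros(n_dims)
        # Best position visited so far, and the fitness achieved there
        self.best_position = self.position.copy()
        self.best_fitness = float("inf")  # minimization convention assumed
```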
SWARM INTELLIGENCE Particle Swarm Algorithm: Steps The system is initialized with a population of random potential solutions (particles). Each particle is assigned a randomized 'velocity' (i.e., it is a point in the solution space and it has a velocity). These particles are then 'flown' through the (hyper)space of potential solutions. Each particle keeps track of the coordinates in the hyperspace at which it has achieved its best fitness (solution) so far, as well as that best fitness value.
SWARM INTELLIGENCE Particle Swarm Algorithm: Steps The particle having the best of these best values is the leader. At each time step the 'velocity' of each particle is changed (accelerated) as a function of its local best and the global best positions. This acceleration is weighted by a random term. A new position in the solution space is then calculated for each particle by adding the new velocity value to each component of the particle's position vector.
SWARM INTELLIGENCE Particle Swarm Algorithm: Steps Conceptually, the local best resembles autobiographical memory, as each individual remembers its own experience (though only one fact about it), and the velocity adjustment associated with the local best has been called "simple nostalgia", in that the individual tends to return to the place that most satisfied it in the past. The global best, on the other hand, is conceptually similar to publicized knowledge, or a group norm or standard, which individuals seek to attain.
SWARM INTELLIGENCE Particle Swarm Algorithm Consider a flock or swarm of p particles, with each particle's position representing a possible solution point in the design problem space D. For each particle i, the position x^i is updated as
x^i_{k+1} = x^i_k + v^i_{k+1}
with the pseudo-velocity v^i_{k+1} calculated as
v^i_{k+1} = w_k v^i_k + c_1 r_1 (p^i_k - x^i_k) + c_2 r_2 (p^g_k - x^i_k)
where the subscript k indicates a (unit) pseudo-time increment.
SWARM INTELLIGENCE Particle Swarm Algorithm The new position depends on the previous position x^i_k plus three factors:
x^i_{k+1} = x^i_k + w_k v^i_k + c_1 r_1 (p^i_k - x^i_k) + c_2 r_2 (p^g_k - x^i_k)
Here w_k v^i_k is the weighted current velocity, c_1 r_1 (p^i_k - x^i_k) is the weighted deviation from the particle's own best position, and c_2 r_2 (p^g_k - x^i_k) is the weighted deviation from the global best position. We consider the effect of these three factors one by one; a code sketch of the full update follows.
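As a sketch, the full update for a single particle translates directly into NumPy (function and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_update(x, v, p_i, p_g, w, c1=2.0, c2=2.0):
    """One velocity/position update for one particle, implementing
    x_{k+1} = x_k + w*v_k + c1*r1*(p_i - x_k) + c2*r2*(p_g - x_k)."""
    r1, r2 = rng.random(), rng.random()   # fresh random weights each step
    v_new = (w * v                        # weighted current velocity
             + c1 * r1 * (p_i - x)        # pull toward the particle's own best
             + c2 * r2 * (p_g - x))       # pull toward the swarm's global best
    return x + v_new, v_new
```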
SWARM INTELLIGENCE Particle Swarm Algorithm First consider only the first factor, taking the other two as zero:
x^i_{k+1} = x^i_k + v^i_{k+1}, where v^i_{k+1} = w_k v^i_k,
x^i_{k+2} = x^i_{k+1} + v^i_{k+2}, where v^i_{k+2} = w_{k+1} v^i_{k+1} = w_{k+1} w_k v^i_k,
x^i_{k+3} = x^i_{k+2} + v^i_{k+3}, where v^i_{k+3} = w_{k+2} v^i_{k+2} = w_{k+2} w_{k+1} w_k v^i_k,
and so on for x^i_{k+4}, etc. If w_k = w_{k+1} = 1 = constant for all k, then v^i_{k+1} = v^i_{k+2} = v^i_{k+3} = v^i_k always.
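In general (a one-line consequence of the recursion above), after m inertia-only steps the velocity is the running product of the inertia weights applied to the starting velocity:

```latex
v^i_{k+m} = \Bigl(\prod_{j=0}^{m-1} w_{k+j}\Bigr)\, v^i_k
```

So with w = 1 throughout the velocity never changes, while with w < 1 it decays geometrically toward zero.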
SWARM INTELLIGENCE Particle Swarm Algorithm First consider only the first factor at the initial step, taking the other two as zero:
x^i_1 = x^i_0 + v^i_1, where v^i_1 = v^i_0 (for w_0 = 1),
so that v^i_1 = x^i_1 - x^i_0, where x^i_0 is initialized randomly and v^i_1 will have to be user defined.
SWARM INTELLIGENCE Particle Swarm Algorithm The variable w_k, set to 1 at initialization, allows a more refined search as the optimization progresses, by reducing its value linearly or dynamically (a sketch of a linear schedule follows). The factor w_k v^i_k makes the particle move the same distance, in the same direction, at each time step (if w_k = 1). As w_k is reduced, the particle moves more and more slowly.
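A sketch of a linear schedule for w_k, starting at 1 as in these notes; the final value of 0.4 is a typical choice in the literature and is an assumption here:

```python
def inertia(k, k_max, w_start=1.0, w_end=0.4):
    """Inertia weight reduced linearly from w_start at k = 0 to w_end at k = k_max."""
    return w_start - (w_start - w_end) * k / k_max
```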
SWARM INTELLIGENCE Particle Swarm Algorithm If we consider only the second factor, taking the first and third as zero, then
x^i_{k+1} = x^i_k + w_k v^i_k + c_1 r_1 (p^i_k - x^i_k) + c_2 r_2 (p^g_k - x^i_k)
becomes
x^i_{k+1} = x^i_k + c_1 r_1 (p^i_k - x^i_k).
The variable p^i_k represents the best-ever position of particle i up to time k (the cognitive contribution to the search vector v^i_{k+1}). This is a weighted deviation from the best position occupied so far by particle i. Ignoring c_1 r_1 for the moment,
x^i_{k+1} = x^i_k + p^i_k - x^i_k = p^i_k.
This implies that the particle wishes to return to its best position.
SWARM INTELLIGENCE Particle Swarm Algorithm The factors c_1 r_1 are incorporated so that the particle does not actually return to its best position. If we set c_1 > 1 (ignoring r_1), the particle will always overshoot its best position; similarly, for c_1 < 1, it will always fall short of it. If we set c_1 = 2 and let r_1 be randomly set to a value between 0 and 1 each time, then the stochastic factor multiplied by 2 has a mean of 1, so the particle will "overfly" the target about half the time, thereby allowing a greater area to be searched around its best position.
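A quick numerical check of this claim: with c_1 = 2 and r_1 uniform on [0, 1], the factor c_1 r_1 has mean 1 and exceeds 1 about half the time.

```python
import numpy as np

rng = np.random.default_rng(0)
factor = 2.0 * rng.random(100_000)  # c1 * r1 with c1 = 2 and r1 ~ U(0, 1)
print(factor.mean())                # ~1.0: on average the particle lands on p_i
print((factor > 1.0).mean())        # ~0.5: it overshoots the target about half the time
```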
SWARM INTELLIGENCE Particle Swarm Algorithm Now if we consider only the third factor, taking the first and second as zero, then
x^i_{k+1} = x^i_k + w_k v^i_k + c_1 r_1 (p^i_k - x^i_k) + c_2 r_2 (p^g_k - x^i_k)
becomes
x^i_{k+1} = x^i_k + c_2 r_2 (p^g_k - x^i_k).
The variable p^g_k represents the global best position in the swarm up to time k (the social contribution). This is a weighted deviation from the global best position occupied so far by any of the particles. Ignoring c_2 r_2 for the moment,
x^i_{k+1} = x^i_k + p^g_k - x^i_k = p^g_k.
This implies that the particle wishes to go to the global best position.
SWARM INTELLIGENCE Particle Swarm Algorithm The factors c_2 r_2 are incorporated so that the particle does not actually arrive at this global best position. If we set c_2 > 1 (ignoring r_2), the particle will always overshoot this best position; similarly, for c_2 < 1, it will always fall short of it. If we set c_2 = 2 and let r_2 be randomly set to a value between 0 and 1 each time, then the stochastic factor multiplied by 2 has a mean of 1, so the particle will "overfly" the target about half the time, thereby maintaining separation within the group and allowing a greater area to be searched around the global best position.
SWARM INTELLIGENCE Particle Swarm Algorithm The constants c_1 and c_2 are called learning factors. In simulations, it is observed that a high value of c_1 relative to c_2 results in excessive wandering of isolated individuals through the problem space, while the reverse (c_2 higher than c_1) results in the flock rushing prematurely toward local minima. Approximately equal values of the two seem to result in the most effective search of the problem domain.
SWARM INTELLIGENCE Particle Swarm Algorithm Note that this observation has been made experimentally, with the help of simulations. It confirms the main idea of the algorithm: that the particles mainly search the space between the global best and the local best for better solutions.
SWARM INTELLIGENCE Let c_1 = 2 and c_2 = 2; then for r_1 and r_2 between 0 and 1, the stochastic weights c_1 r_1 and c_2 r_2 each vary between 0 and 2. [Figure: particle positions relative to the local best and global best.]
SWARM INTELLIGENCE Particle Swarm Algorithm: Steps With c_1 and c_2 set relatively high, the flock seems to be sucked violently into the target; in very few iterations the entire flock is seen clustered around the goal. With c_1 and c_2 set low, the flock swirls around the goal, realistically approaching it, swinging out rhythmically with subgroups synchronized, and finally "landing" on the target. It is apparent that an agent is propelled toward a weighted average of the two better points in the problem space.
SWARM INTELLIGENCE Particle Swarm Algorithm: Parameters Number of particles: the typical range is 20-40; in fact, for most problems 10 particles is enough to get good results, while for some difficult or special problems one can try 100 or 200 particles. Dimension of particles: determined by the problem to be optimized. Range of particles: also determined by the problem to be optimized; different ranges can be specified for different dimensions.
SWARM INTELLIGENCE Particle Swarm Algorithm: Parameters V_max: determines the maximum change one particle can make during one iteration. Learning factors: c_1 and c_2, usually both equal to 2. Stopping condition: the maximum number of iterations PSO executes and/or a target fitness to achieve; this condition depends on the problem to be optimized.
SWARM INTELLIGENCE Particle Swarm Algorithm Let p be the total number of particles in the swarm. The best-ever fitness value of particle i, at design coordinates p^i_k, is denoted f^i_best, and the best-ever fitness value of the overall swarm, at coordinates p^g_k, is denoted f^g_best. At the initialization time step k = 0, the particle velocities v^i_0 are initialized to random values within the limits 0 <= v^i_0 <= v^max_0. The vector v^max_0 is calculated as a fraction of the distance between the upper and lower bounds,
v^max_0 = zeta (x^UB - x^LB), with zeta = 0.5.
A code sketch of this initialization follows.
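A sketch of this initialization rule, assuming box bounds x_lb and x_ub given as arrays (names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_swarm(p, x_lb, x_ub, zeta=0.5):
    """Step k = 0: random positions in [x_lb, x_ub] and velocities in
    [0, v_max0], with v_max0 = zeta * (x_ub - x_lb) as defined above."""
    n = len(x_lb)
    v_max0 = zeta * (x_ub - x_lb)
    x0 = rng.uniform(x_lb, x_ub, size=(p, n))
    v0 = rng.uniform(0.0, v_max0, size=(p, n))
    return x0, v0, v_max0
```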
SWARM INTELLIGENCE Particle Swarm Algorithm 1. Initialize (a) Set constants k_max, c_1, c_2, w_0. (b) Randomly initialize particle positions x^i_0 in D, a subset of R^n, for i = 1, ..., p. (c) Randomly initialize particle velocities 0 <= v^i_0 <= v^max_0 for i = 1, ..., p. (d) Set k = 1.
SWARM INTELLIGENCE Particle Swarm Algorithm 2. Optimize (a) Evaluate f^i_k using design space coordinates x^i_k. (b) If f^i_k <= f^i_best, then f^i_best = f^i_k and p^i = x^i_k. (c) If f^i_k <= f^g_best, then f^g_best = f^i_k and p^g = x^i_k. (d) If the stopping condition is satisfied, go to 3. (e) Update the particle velocity vector v^i_{k+1}. (f) Update the particle position vector x^i_{k+1}. (g) Increment i; if i > p, increment k and set i = 1. (h) Go to 2(a). 3. Report results. 4. Terminate. A runnable sketch of this loop follows.
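Putting the steps together, a minimal runnable PSO for minimization (a sketch under the conventions above; the linearly decreasing inertia, the clamping of velocities to +/-v_max, and the sphere objective in the usage line are common choices assumed here, not prescribed by these notes):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(f, x_lb, x_ub, p=30, k_max=200, c1=2.0, c2=2.0,
        w_start=1.0, w_end=0.4, zeta=0.5):
    """Minimize f over the box [x_lb, x_ub] with a particle swarm."""
    n = len(x_lb)
    v_max = zeta * (x_ub - x_lb)
    x = rng.uniform(x_lb, x_ub, size=(p, n))          # step 1(b)
    v = rng.uniform(0.0, v_max, size=(p, n))          # step 1(c)
    p_best = x.copy()                                 # per-particle best positions
    f_best = np.apply_along_axis(f, 1, x)             # per-particle best fitnesses
    g = p_best[np.argmin(f_best)].copy()              # global best position
    f_g = f_best.min()
    for k in range(k_max):                            # step 2, vectorized over particles
        w = w_start - (w_start - w_end) * k / k_max   # linearly shrinking inertia (assumed)
        r1 = rng.random((p, 1))
        r2 = rng.random((p, 1))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g - x)
        v = np.clip(v, -v_max, v_max)                 # enforce V_max (common convention)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)             # step 2(a)
        improved = fx < f_best                        # step 2(b)
        p_best[improved], f_best[improved] = x[improved], fx[improved]
        if f_best.min() < f_g:                        # step 2(c)
            f_g = f_best.min()
            g = p_best[np.argmin(f_best)].copy()
    return g, f_g                                     # step 3

# Usage: minimize the sphere function on [-5, 5]^3
g, f_g = pso(lambda z: np.sum(z**2), np.full(3, -5.0), np.full(3, 5.0))
print(g, f_g)
```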
SWARM INTELLIGENCE Comparison between Genetic Algorithm and Particle Swarm Both algorithms start with a randomly generated population. Both use fitness values to evaluate the population. Both update the population and search for the optimum with random techniques, and neither system guarantees success. However, PSO does not have genetic operators such as crossover and mutation; particles update themselves with their internal velocity. Particles also have memory, which is important to the algorithm.