Particle Swarm Optimization
Fahimeh Fooladgar
Outline
- Swarm Intelligence
- Introduction to PSO
- Original PSO algorithms: Global Best PSO, Local Best PSO
- Algorithm Aspects
- Basic Variations
- PSO Parameters
- Applications
Swarm Intelligence
Example: the benefits of cooperation in a swarm (group).
A swarm is a group of agents that communicate with each other, either directly or indirectly, by acting on their local environment.
Swarm Intelligence (SI), or collective intelligence, emerges from the interaction of such agents.
Computational Swarm Intelligence (CSI) refers to algorithmic models of such behavior.
Swarm Intelligence (cont.)
Computational models of swarm intelligence are inspired by social animals and social insects: ants, termites, bees, spiders, fish schools, and bird flocks.
Individuals are relatively simple in structure, but their collective behavior is usually very complex.
The complex behavior emerges from the pattern of interactions between the individuals of the swarm over time.
Swarm Intelligence (cont.)
The objective of computational swarm intelligence models is to capture the simple behaviors of individuals and their local interactions with the environment and neighboring individuals, in order to obtain more complex behaviors that can be used to solve complex problems, such as optimization problems.
Introduction
PSO was first introduced by James Kennedy and Russell Eberhart in 1995.
It is a population-based search algorithm based on a simulation of the social behavior of birds within a flock.
Individuals are particles that follow a very simple behavior: emulate the success of neighboring particles and their own past successes.
Introduction (cont.)
A swarm of particles is a population of individuals.
Each particle has its own velocity.
xi(t) denotes the position of particle i at time step t; the position is changed by adding the velocity: xi(t + 1) = xi(t) + vi(t + 1).
Introduction (cont.)
The velocity vector drives the optimization process and reflects both experiential knowledge and socially exchanged information.
The experiential knowledge of a particle is its cognitive component: the distance of the particle from its own best position (the particle's personal best position).
The socially exchanged information is the particle's social component.
Original PSO algorithms
Two PSO algorithms, gbest PSO and lbest PSO, differ in the size of their neighborhoods.
Global Best PSO
The neighborhood of each particle is the entire swarm.
Social network: star topology.
Velocity update:
vij(t + 1) = vij(t) + c1 r1j(t) [yij(t) − xij(t)] + c2 r2j(t) [ŷj(t) − xij(t)]
Global Best PSO (cont.)
vij(t): velocity of particle i in dimension j = 1, …, nx
yij(t): personal best position of particle i in dimension j
ŷj(t): best position found by the swarm in dimension j
xij(t): position of particle i in dimension j
c1 and c2: positive acceleration constants that scale the contribution of the cognitive and social components
r1j(t), r2j(t) ∼ U(0, 1): random values that add a stochastic element to the algorithm
Global Best PSO (cont.)
Given a fitness function f to be minimized, the personal best position at the next time step is
yi(t + 1) = yi(t) if f(xi(t + 1)) ≥ f(yi(t))
yi(t + 1) = xi(t + 1) if f(xi(t + 1)) < f(yi(t))
Global Best PSO (cont.)
The global best position ŷ(t) is the best personal best position found so far:
ŷ(t) ∈ {y1(t), …, yns(t)} such that f(ŷ(t)) = min{f(y1(t)), …, f(yns(t))}
where ns is the total number of particles in the swarm.
Global Best PSO (cont.)
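The gbest PSO loop can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the slides: the function and parameter names are mine, the inertia weight w (introduced later in this deck) is included for stability, and w = 0.7298, c1 = c2 = 1.49618 are common convergent settings.

```python
import numpy as np

def gbest_pso(f, bounds, ns=20, nx=2, c1=1.49618, c2=1.49618,
              w=0.7298, nt=300, seed=0):
    """Minimize f over a box; a minimal gbest PSO sketch."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(ns, nx))   # particle positions
    v = np.zeros((ns, nx))                   # initial velocities set to zero
    y = x.copy()                             # personal best positions
    fy = np.apply_along_axis(f, 1, y)        # personal best fitness values
    g = y[np.argmin(fy)].copy()              # global best position
    for t in range(nt):
        r1 = rng.random((ns, nx))            # stochastic components
        r2 = rng.random((ns, nx))
        v = w * v + c1 * r1 * (y - x) + c2 * r2 * (g - x)
        x = x + v                            # position update
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < fy                   # update personal bests
        y[improved] = x[improved]
        fy[improved] = fx[improved]
        g = y[np.argmin(fy)].copy()          # update global best
    return g, f(g)

sphere = lambda x: float(np.sum(x ** 2))
best, best_f = gbest_pso(sphere, bounds=(-5.0, 5.0))
```

On a simple 2-D sphere function, this converges close to the origin within a few hundred iterations.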
Local Best PSO
Smaller neighborhoods are defined for each particle.
Social network: ring topology.
Velocity update:
vij(t + 1) = vij(t) + c1 r1j(t) [yij(t) − xij(t)] + c2 r2j(t) [ŷij(t) − xij(t)]
Local Best PSO (cont.)
ŷij: the best position found by the neighborhood of particle i in dimension j, i.e., the best position found in the neighborhood Ni.
Local Best PSO (cont.)
The neighborhood Ni is defined on particle indices, not on spatial information: each particle's neighborhood consists of itself and its nearest index neighbors on a ring, regardless of where those particles are in the search space.
The gbest PSO is a special case of the lbest PSO with nNi = ns.
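The index-based ring neighborhood can be made concrete with a small sketch (names are illustrative, not from the slides). Each particle looks at the personal bests of its l nearest index neighbors on either side, so nNi = 2l + 1, and l = ns // 2 recovers the gbest special case.

```python
import numpy as np

def ring_neighborhood_best(y, fy, l=1):
    """For each particle i, return the best personal best among particles
    i-l .. i+l on an index ring (lbest with neighborhood size 2*l + 1)."""
    ns = len(fy)
    yhat = np.empty_like(y)
    for i in range(ns):
        idx = [(i + k) % ns for k in range(-l, l + 1)]  # ring wraps around
        best = min(idx, key=lambda j: fy[j])            # lowest fitness wins
        yhat[i] = y[best]
    return yhat
```

With l = ns // 2 every neighborhood covers the whole swarm, so every particle is drawn toward the same global best.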
lbest PSO versus gbest PSO
Two main differences:
- gbest PSO converges faster than lbest PSO, but has less diversity
- lbest PSO is less susceptible to being trapped in local minima
Velocity Components
vi(t): the previous velocity, a memory of the previous flight direction.
It prevents the particle from drastically changing direction and biases it towards the current direction.
Referred to as the inertia component.
Velocity Components (cont.)
c1 r1 (yi − xi): the cognitive component.
It draws particles back to their own best positions, so that individuals return to the situations that satisfied them most in the past.
Referred to as the "nostalgia" of the particle.
Velocity Components (cont.)
The social component: c2 r2 (ŷ − xi) in gbest PSO, or c2 r2 (ŷi − xi) in lbest PSO.
Each particle is drawn towards the best position found by its neighborhood.
Referred to as the "envy" of the particle.
Geometric Illustration
The new velocity is the vector sum of the inertia velocity, cognitive velocity, and social velocity components.
Algorithm Aspects
Initialize the swarm:
- particle positions, typically uniformly at random within the search domain
- initial velocities, typically set to zero
- initial personal best positions, set equal to the initial positions
Stopping conditions
- A maximum number of iterations has been reached
- An acceptable solution has been found
- No improvement is observed over a number of iterations: the average change in particle positions is small, or the average particle velocity over a number of iterations is approximately zero
Stopping conditions (cont.)
The slope of the objective function is approximately zero: if f′(t) < ε, the swarm is considered to have converged.
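The slide does not show the exact formula for f′(t); one common reading is the improvement of the best fitness over a window of recent iterations, sketched here with illustrative names.

```python
def converged(best_history, eps=1e-8, window=10):
    """True when the best fitness has improved by (relatively) less than eps
    over the last `window` iterations -- an illustrative version of the
    'slope approximately zero' stopping condition, not the slides' formula."""
    if len(best_history) <= window:
        return False                          # not enough history yet
    old, new = best_history[-window - 1], best_history[-1]
    return abs(old - new) <= eps * max(abs(old), 1.0)
```

In practice this check is combined with a maximum-iteration cap, since a stalled swarm and a converged swarm look the same to it.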
Social Network Structures
Star, Ring, Wheel, Von Neumann, Four Clusters, Pyramid
Basic Variations
Modifications that improve the basic PSO's speed of convergence and quality of solutions:
- Velocity clamping
- Inertia weight
- Constriction coefficient
Velocity Clamping
The exploration–exploitation trade-off:
Exploration: the ability to explore different regions of the search space.
Exploitation: the ability to concentrate the search around a promising area.
A good optimization algorithm balances these contradictory objectives.
In PSO, both are controlled through the velocity update equation.
Velocity Clamping (cont.)
Without clamping, the velocity quickly explodes to large values; particles then take large position updates and diverge.
To control the global exploration of particles, velocities are clamped to stay within boundary constraints: if a velocity component exceeds Vmax,j, it is set to Vmax,j, where Vmax,j denotes the maximum allowed velocity in dimension j.
Velocity Clamping (cont.)
Large values of Vmax,j facilitate global exploration; smaller values encourage local exploitation.
Velocity Clamping (cont.)
If Vmax,j is too small:
- the swarm may not explore sufficiently beyond locally good regions
- the number of time steps needed to reach an optimum increases
- the swarm may become trapped in a local optimum
If Vmax,j is too large:
- there is a risk of missing a good region; particles may jump over good solutions
- but particles are moving faster
Velocity Clamping (cont.)
Balance between moving too fast and too slow, i.e., between exploration and exploitation.
Vmax,j is usually set to a fraction of the domain of each dimension: Vmax,j = δ (xmax,j − xmin,j), where the value of δ is problem-dependent.
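A clamping step following this scheme can be sketched with NumPy; the helper name and the δ default are illustrative choices, not values from the slides.

```python
import numpy as np

def clamp_velocity(v, xmin, xmax, delta=0.5):
    """Clamp each velocity component j to [-Vmax_j, Vmax_j], with
    Vmax_j = delta * (xmax_j - xmin_j); delta is problem-dependent."""
    vmax = delta * (np.asarray(xmax) - np.asarray(xmin))
    return np.clip(v, -vmax, vmax)
```

This is applied after the velocity update and before the position update, so positions can never move more than Vmax,j per step in any dimension.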
Inertia Weight
Introduced by Shi and Eberhart to control the exploration and exploitation abilities of the swarm, and to eliminate the need for velocity clamping.
The inertia weight w controls the influence of the previous flight direction on the new velocity:
vij(t + 1) = w vij(t) + c1 r1j(t) [yij(t) − xij(t)] + c2 r2j(t) [ŷj(t) − xij(t)]
Inertia Weight (cont.)
The value of w is extremely important: it must ensure convergent behavior while trading off exploration and exploitation.
For w ≥ 1: velocities increase over time, the swarm diverges, and particles fail to change direction.
For w < 1: particles decelerate until their velocities reach zero.
Inertia Weight (cont.)
The condition w > ½ (c1 + c2) − 1 guarantees convergent particle trajectories.
If this condition is not satisfied, divergent or cyclic behavior may occur.
Inertia Weight (cont.)
Dynamic inertia weight approaches:
Linear decreasing: start with an initial inertia weight w(0) = 0.9 and decrease to a final inertia weight w(nt) = 0.4, where nt is the maximum number of time steps and w(t) is the inertia weight at time step t:
w(t) = (w(0) − w(nt)) (nt − t) / nt + w(nt)
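The linear decreasing schedule can be written directly from the formula; the function name is mine.

```python
def linear_inertia(t, nt, w0=0.9, wnt=0.4):
    """Inertia weight at time step t, decreasing linearly from
    w(0) = w0 to w(nt) = wnt over nt time steps."""
    return (w0 - wnt) * (nt - t) / nt + wnt
```

Early iterations use a large w (exploration); late iterations use a small w (exploitation).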
Inertia Weight (cont.)
Other dynamic approaches: random adjustments and nonlinear decreasing.
Constriction Coefficient
Similar to the inertia weight, it balances the exploration–exploitation trade-off.
Velocities are constricted by a constant χ, referred to as the constriction coefficient.
Constriction Coefficient (cont.)
κ controls the exploration and exploitation abilities of the swarm.
For κ ≈ 0: fast convergence with local exploitation.
For κ ≈ 1: slow convergence with a high degree of exploration.
Usually, κ is set to a constant value; alternatively, κ can start close to one and be decreased towards zero over time.
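The slide's equation is not shown; using Clerc's standard formula from the PSO literature, χ = 2κ / |2 − φ − √(φ² − 4φ)| with φ = φ1 + φ2 ≥ 4 and κ ∈ [0, 1], the coefficient can be computed as:

```python
import math

def constriction(kappa=1.0, phi1=2.05, phi2=2.05):
    """Clerc's constriction coefficient chi, defined for
    phi = phi1 + phi2 >= 4 and kappa in [0, 1]."""
    phi = phi1 + phi2
    if phi < 4 or not 0 <= kappa <= 1:
        raise ValueError("requires phi >= 4 and 0 <= kappa <= 1")
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```

With the widely used values κ = 1 and φ1 = φ2 = 2.05, this gives χ ≈ 0.7298.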
Constriction Coefficient (cont.)
The constriction approach is equivalent to the inertia weight approach for appropriately chosen parameter values relating w, c1, and c2 to χ, φ1, and φ2.
PSO Parameters
Swarm size (ns): the more particles in the swarm, the larger its initial diversity. A general heuristic is ns ∈ [10, 30], but the best size is actually problem-dependent.
Neighborhood size: smaller neighborhoods are slower in convergence but converge more reliably to optimal solutions. A good strategy is to start with small neighborhoods and increase the neighborhood size over time.
Number of iterations: depends on the problem.
PSO Parameters (cont.)
Acceleration coefficients: c1 and c2, together with r1 and r2, control the stochastic influence of the cognitive and social components.
c1 expresses how much confidence a particle has in itself.
c2 expresses how much confidence a particle has in its neighbors.
PSO Applications
Percent  Papers  Application
7.6      51      Image processing
7.0      47      Control
5.8      39      Electronics and electromagnetics; antenna design; power systems and plants
5.6      38      Scheduling
4.4      30      Design; communication networks
4.3      29      Biological and medical; clustering and classification
3.8      26      Fuzzy and neuro-fuzzy; signal processing; neural networks
3.5      24      Combinatorial optimization
PSO Applications (cont.)
What makes PSO so attractive to practitioners?
Simplicity: it is easy to implement, and the required storage is small:
- an ns × nx array for particle positions
- an ns × nx array for particle velocities
- an ns × nx array for personal best positions
- a 1 × nx array for the global best position
- a 1 × nx array for Vmax
It can also be adapted to different applications.
What makes PSO so attractive to practitioners? (cont.)
All operations are simple and easy to implement.
It requires low computational resources (memory and CPU).
It can quickly converge to a reasonably good solution.
It can easily and effectively run in distributed environments.
References
A.P. Engelbrecht, Computational Intelligence, 2007.
R. Poli, "Analysis of the Publications on the Applications of Particle Swarm Optimisation", Journal of Artificial Evolution and Applications, Vol. 2008, 10 pages, 2007.
K.E. Parsopoulos and M.N. Vrahatis, "Particle Swarm Optimizer in Noisy and Continuously Changing Environments", in Proceedings of the IASTED International Conference on Artificial Intelligence and Soft Computing, pages 289–294, 2001.
Thanks for your attention