The Particle Swarm Optimization Algorithm
Nebojša Trpković
10th Dec 2010
Problem Definition
optimization of continuous nonlinear functions
↓
finding the best solution in the problem space
Example
Importance
function optimization
artificial neural network training
fuzzy system control
Existing Solutions
Ant Colony Optimization (ACO) – limited to discrete problems
Genetic Algorithms (GA) – slow convergence
Particle Swarm Optimization
A simple definition: a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality.
Particle Swarm Optimization
Facts:
developed by Russell C. Eberhart and James Kennedy in 1995
inspired by the social behavior of bird flocking and fish schooling
similar to evolutionary techniques such as Genetic Algorithms (GA)
Particle Swarm Optimization
Benefits:
faster convergence
fewer parameters to tune
↓
easier searching in very large problem spaces
Particle Swarm Optimization
Basic principle: let the particle swarm move towards the best position in the search space, with each particle remembering its own best known position and the global (swarm's) best known position.
Velocity Change
x_i – specific particle's position
p_i – particle's (personal) best known position
g – swarm's (global) best known position
v_i – particle's velocity
r_p, r_g – random numbers drawn uniformly from [0, 1]

v_i ← ω v_i  +  φ_p r_p (p_i − x_i)  +  φ_g r_g (g − x_i)
    (inertia)     (cognitive)            (social)
Position Change
x_i – specific particle's position
v_i – particle's velocity

x_i ← x_i + v_i
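The two update rules above can be sketched in Python for a single particle. The coefficient values (ω = 0.7, φ_p = φ_g = 1.5) are illustrative choices, not prescribed by these slides:

```python
import random

def update_particle(x, v, p, g, w=0.7, phi_p=1.5, phi_g=1.5):
    """One PSO update step for a single particle.

    x – current position, v – velocity,
    p – personal best position, g – global (swarm) best position.
    """
    new_v, new_x = [], []
    for i in range(len(x)):
        r_p, r_g = random.random(), random.random()   # fresh randomness per dimension
        vi = (w * v[i]                                # inertia
              + phi_p * r_p * (p[i] - x[i])           # cognitive pull toward personal best
              + phi_g * r_g * (g[i] - x[i]))          # social pull toward global best
        new_v.append(vi)
        new_x.append(x[i] + vi)                       # position change: x ← x + v
    return new_x, new_v
```

Note that when a particle sits exactly at both its personal and the global best with zero velocity, every term vanishes and the particle stays put.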
Algorithm

For each particle
    Initialize particle
End

Do
    For each particle
        Calculate fitness value
        If the fitness value is better than the particle's best fitness value in history,
            set the current value as the new personal best
    End
    Choose the particle with the best fitness value of all particles;
    if that fitness value is better than the current global best,
        set it as the new global best
    For each particle
        Calculate particle velocity according to the velocity change equation
        Update particle position according to the position change equation
    End
While maximum iterations or minimum error criterion is not attained
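The pseudocode above maps to a compact minimization routine. The swarm size, iteration count, bounds handling, and coefficient values below are assumptions for illustration, not part of the original slides:

```python
import random

def pso(fitness, dim, bounds, n_particles=30, max_iter=200,
        w=0.7, phi_p=1.5, phi_g=1.5):
    """Minimize `fitness` over `bounds` = (lo, hi) per the algorithm above."""
    lo, hi = bounds
    # Initialization: random positions and velocities for every particle
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[random.uniform(-(hi - lo), hi - lo) for _ in range(dim)]
          for _ in range(n_particles)]
    ps = [x[:] for x in xs]                 # personal best positions
    p_fit = [fitness(x) for x in xs]        # personal best fitness values
    g_fit = min(p_fit)                      # global best fitness
    g = ps[p_fit.index(g_fit)][:]           # global best position

    for _ in range(max_iter):
        for i in range(n_particles):
            f = fitness(xs[i])
            if f < p_fit[i]:                # better than personal best?
                p_fit[i], ps[i] = f, xs[i][:]
                if f < g_fit:               # better than global best?
                    g_fit, g = f, xs[i][:]
            for d in range(dim):
                r_p, r_g = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]                        # inertia
                            + phi_p * r_p * (ps[i][d] - xs[i][d])  # cognitive
                            + phi_g * r_g * (g[d] - xs[i][d]))     # social
                xs[i][d] += vs[i][d]        # position change
    return g, g_fit
```

For example, minimizing the sphere function `sum(x_i^2)` in two dimensions drives the global best fitness close to zero within a few hundred iterations.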
Single Particle
Parameter Selection
Different ways to choose parameters:
a proper balance between exploration and exploitation (avoiding premature convergence to a local optimum while still ensuring a good rate of convergence to the optimum)
putting all attention on exploitation (making searches in vast problem spaces possible)
automation by meta-optimization
Avoiding Local Optima
adding a randomization factor to the velocity calculation
adding random momentum in specific iterations
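A minimal sketch of the second idea: every so often, re-randomize ("kick") a particle's velocity so the swarm can escape the basin of a local optimum. The kick interval and magnitude below are illustrative assumptions, not values from the slides:

```python
import random

def kick_velocity(v, iteration, kick_every=50, magnitude=1.0):
    """Occasionally replace a particle's velocity with random momentum.

    Every `kick_every` iterations the velocity is re-drawn uniformly
    from [-magnitude, magnitude], letting the particle jump out of a
    local optimum it may have settled into.
    """
    if iteration > 0 and iteration % kick_every == 0:
        return [random.uniform(-magnitude, magnitude) for _ in v]
    return v                      # unchanged on ordinary iterations
```

In practice such a kick would be applied inside the main loop, either to all particles or only to those whose velocity has collapsed to near zero.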
Swarm
Conclusion
"This algorithm belongs ideologically to that philosophical school that allows wisdom to emerge rather than trying to impose it, that emulates nature rather than trying to control it, and that seeks to make things simpler rather than more complex."
— James Kennedy, Russell Eberhart
References
Wikipedia: Swarm Intelligence
Jerome Onwunalu and Louis J. Durlofsky, "Application of a particle swarm optimization algorithm for determining optimum well location and type", 2009
James Kennedy and Russell Eberhart, "Particle Swarm Optimization", 1995
thinkfluid, "Robot Swarm driven by Particle Swarm Optimization algorithm"