SWARM INTELLIGENCE Swarms


SWARM INTELLIGENCE Swarms A swarm is defined as a set of (mobile) agents that collectively carry out distributed problem solving. The agents are able to communicate with each other, and this communication may be direct or indirect (by acting on their local environment).

SWARM INTELLIGENCE Swarms Swarm Intelligence (SI) is the property of a system whereby the collective behaviors of (unsophisticated) agents interacting locally with their environment cause coherent functional global patterns to emerge. Two algorithms have been developed in the area of swarm intelligence: the particle swarm algorithm and the ant colony algorithm.

SWARM INTELLIGENCE Particle Swarm Algorithm Particle swarm optimization (PSO) is a computational technique developed by Kennedy & Eberhart in 1995, inspired by the social behavior of bird flocking. It is particularly useful for optimization problems.

SWARM INTELLIGENCE Particle Swarm Algorithm A swarm consists of several particles, where each particle keeps track of its own attributes. The attributes of any particle in the swarm are: its current position, given by an n-dimensional vector, and its current velocity, which records the speed and direction in which the particle is currently moving. Each particle also has a current fitness value, obtained by evaluating the fitness function at the particle's current position.
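These attributes map naturally onto a small record type. Below is a minimal sketch in Python, assuming NumPy arrays for positions and velocities and a minimization problem; the class and field names are illustrative, not taken from the slides.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Particle:
    position: np.ndarray                # current position x, an n-dimensional vector
    velocity: np.ndarray                # current velocity v (speed and direction)
    fitness: float                      # fitness at the current position
    best_position: np.ndarray = None    # best position found so far (local best)
    best_fitness: float = float("inf")  # best fitness found so far (minimization)
```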

SWARM INTELLIGENCE Particle Swarm Algorithm: Steps The system is initialized with a population of random potential solutions (particles). Each particle is assigned a randomized 'velocity'; that is, each particle is a point in the solution space and it has a velocity. These particles are then 'flown' through the (hyper)space of potential solutions. Each particle keeps track of the coordinates in the hyperspace for which it has achieved the best fitness (solution) so far, and also of that best fitness.

SWARM INTELLIGENCE Particle Swarm Algorithm: Steps The particle having the best of these best values is the leader. At each time step the 'velocity' of each particle is changed (accelerated) as a function of its local best and the global best positions; this acceleration is weighted by a random term. A new position in the solution space is then calculated for each particle by adding the new velocity value to each component of the particle's position vector.

SWARM INTELLIGENCE Particle Swarm Algorithm: Steps Conceptually, the local best resembles autobiographical memory, as each individual remembers its own experience (though only one fact about it), and the velocity adjustment associated with the local best has been called "simple nostalgia", in that the individual tends to return to the place that most satisfied it in the past. The global best, on the other hand, is conceptually similar to publicized knowledge, or a group norm or standard, which individuals seek to attain.

SWARM INTELLIGENCE Particle Swarm Algorithm Consider a flock or swarm of p particles, with each particle's position representing a possible solution point in the design problem space D. For each particle i, the position $x^i$ is updated as
$x^i_{k+1} = x^i_k + v^i_{k+1}$
with a pseudo-velocity $v^i_{k+1}$ calculated as
$v^i_{k+1} = w_k v^i_k + c_1 r_1 (p^i_k - x^i_k) + c_2 r_2 (p^g_k - x^i_k)$
The subscript k indicates a (unit) pseudo-time increment.

SWARM INTELLIGENCE Particle Swarm Algorithm The new position depends on the previous position $x^i_k$ plus three factors:
$x^i_{k+1} = x^i_k + w_k v^i_k + c_1 r_1 (p^i_k - x^i_k) + c_2 r_2 (p^g_k - x^i_k)$
$w_k v^i_k$ is the weighted current velocity
$c_1 r_1 (p^i_k - x^i_k)$ is the weighted deviation from the particle's own best position
$c_2 r_2 (p^g_k - x^i_k)$ is the weighted deviation from the global best position
We consider the effect of these three factors one by one.
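Taken together, the update rule translates almost line for line into code. Below is a minimal sketch in Python with NumPy; the function name pso_update and the default constants are illustrative, and $r_1$, $r_2$ are redrawn uniformly from [0, 1) for every component at every step.

```python
import numpy as np

def pso_update(x, v, p_best, g_best, w=1.0, c1=2.0, c2=2.0,
               rng=np.random.default_rng()):
    """One PSO step: return the new velocity v_{k+1} and position x_{k+1}."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    return v_new, x + v_new
```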

SWARM INTELLIGENCE Particle Swarm Algorithm First consider only the first factor, taking the other two factors to be zero:
$x^i_{k+1} = x^i_k + v^i_{k+1}$ where $v^i_{k+1} = w_k v^i_k$
$x^i_{k+2} = x^i_{k+1} + v^i_{k+2}$ where $v^i_{k+2} = w_{k+1} v^i_{k+1} = w_{k+1} w_k v^i_k$
$x^i_{k+3} = x^i_{k+2} + v^i_{k+3}$ where $v^i_{k+3} = w_{k+2} v^i_{k+2} = w_{k+2} w_{k+1} w_k v^i_k$
and so on for $x^i_{k+4}$, etc. If $w_k = w_{k+1} = 1$ = constant for all k, then $v^i_{k+1} = v^i_{k+2} = v^i_{k+3} = v^i_k$ always.
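A small numerical check of this special case, under the same NumPy conventions as the sketches above: with $w_k = 1$ and the other two terms zero, the velocity never changes and the particle drifts in a straight line at constant speed.

```python
import numpy as np

x = np.array([0.0, 0.0])   # initial position x_0
v = np.array([0.5, -0.2])  # initial velocity v_0
for k in range(3):
    v = 1.0 * v            # w_k = 1: the velocity is unchanged
    x = x + v
    print(x)               # [0.5 -0.2], [1.0 -0.4], [1.5 -0.6]: a straight line
```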

SWARM INTELLIGENCE Particle Swarm Algorithm First consider only the first factor, taking the other two factors to be zero. At the first step,
$x^i_1 = x^i_0 + v^i_1$ where $v^i_1 = v^i_0$, so that $v^i_1 = x^i_1 - x^i_0$
Here $x^i_0$ is initialized randomly, and $v^i_1$ has to be user defined.

SWARM INTELLIGENCE Particle Swarm Algorithm The variable $w_k$, set to 1 at initialization, allows a more refined search as the optimization progresses, by reducing its value linearly or dynamically. The factor $w_k v^i_k$ makes the particle move the same distance, in the same direction, at each time step (if $w_k = 1$). As $w_k$ is made smaller and smaller, the particle becomes slower and slower.
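The linear reduction of $w_k$ can be sketched as follows; the final value w_end = 0.4 is an illustrative assumption, not a value given in the slides.

```python
def inertia_weight(k, k_max, w0=1.0, w_end=0.4):
    """Linearly decay the inertia weight from w0 at k = 0 to w_end at k = k_max."""
    return w0 - (w0 - w_end) * k / k_max
```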

SWARM INTELLIGENCE Particle Swarm Algorithm If we consider only the second factor, taking the first and the third factors to be zero, then
$x^i_{k+1} = x^i_k + w_k v^i_k + c_1 r_1 (p^i_k - x^i_k) + c_2 r_2 (p^g_k - x^i_k)$
becomes
$x^i_{k+1} = x^i_k + c_1 r_1 (p^i_k - x^i_k)$
The variable $p^i_k$ represents the best-ever position of particle i up to time k (the cognitive contribution to the search vector $v^i_{k+1}$). This is a weighted deviation from the best position occupied so far by particle i. Ignoring $c_1 r_1$ for the moment,
$x^i_{k+1} = x^i_k + p^i_k - x^i_k = p^i_k$
This implies that the particle wishes to return to its best position.

SWARM INTELLIGENCE Particle Swarm Algorithm The factors $c_1 r_1$ are incorporated so that the particle does not actually return to its best position. If we set $c_1 > 1$ (still ignoring $r_1$), the particle will always overshoot its best position; similarly, for $c_1 < 1$, it will always fall short of it. If we set $c_1 = 2$ and let $r_1$ be a value drawn at random between 0 and 1 each time, then the stochastic factor multiplied by 2 has a mean of 1, so that the particle will "overfly" the target about half the time, thereby allowing a greater area to be searched around its best position.
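The "overflies about half the time" claim is easy to check empirically. The sketch below is a toy setup under stated assumptions: the best position is fixed and the particle is reset to its starting point between trials, and it counts how often the cognitive-only update with $c_1 = 2$ jumps past the best position. The social term $c_2 r_2 (p^g_k - x^i_k)$ discussed on the next slides behaves identically, so the experiment is not repeated for it.

```python
import numpy as np

rng = np.random.default_rng(0)
x, p_best, c1 = 0.0, 1.0, 2.0
trials, overshoots = 100_000, 0
for _ in range(trials):
    x_new = x + c1 * rng.random() * (p_best - x)  # cognitive-only update
    overshoots += x_new > p_best                  # did the particle fly past p_best?
print(overshoots / trials)                        # approximately 0.5
```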

SWARM INTELLIGENCE Particle Swarm Algorithm Now if we consider only the third factor, taking the first and the second factors to be zero, then
$x^i_{k+1} = x^i_k + w_k v^i_k + c_1 r_1 (p^i_k - x^i_k) + c_2 r_2 (p^g_k - x^i_k)$
becomes
$x^i_{k+1} = x^i_k + c_2 r_2 (p^g_k - x^i_k)$
The variable $p^g_k$ represents the global best position in the swarm up to time k (the social contribution). This is a weighted deviation from the global best position occupied so far by any of the particles. Ignoring $c_2 r_2$ for the moment,
$x^i_{k+1} = x^i_k + p^g_k - x^i_k = p^g_k$
This implies that the particle wishes to go to the global best position.

SWARM INTELLIGENCE Particle Swarm Algorithm The factors $c_2 r_2$ are incorporated so that the particle does not actually go to this global best position. If we set $c_2 > 1$ (again ignoring $r_2$), the particle will always overshoot this best position; similarly, for $c_2 < 1$, it will always fall short of it. If we set $c_2 = 2$ and let $r_2$ be a value drawn at random between 0 and 1 each time, then the stochastic factor multiplied by 2 has a mean of 1, so that the particle will "overfly" the target about half the time, thereby maintaining separation within the group and allowing a greater area to be searched around the global best position.

SWARM INTELLIGENCE Particle Swarm Algorithm The constants $c_1$ and $c_2$ are called learning factors. In simulations, it is observed that a high value of $c_1$ relative to $c_2$ results in excessive wandering of isolated individuals through the problem space, while the reverse ($c_2$ higher than $c_1$) results in the flock rushing prematurely toward local minima. Approximately equal values of the two increments seem to result in the most effective search of the problem domain.

SWARM INTELLIGENCE Particle Swarm Algorithm Note that this observation has been made experimentally, with the help of simulations. It confirms the main idea of the algorithm: that the particles mainly search the space between the global best and the local best for better solutions.

SWARM INTELLIGENCE Let $c_1 = 2$ and $c_2 = 2$; then for $r_1$ and $r_2$ between 0 and 1, the weights $c_1 r_1$ and $c_2 r_2$ each range between 0 and 2. [Figure omitted.]

SWARM INTELLIGENCE Particle Swarm Algorithm: Steps With $c_1$ and $c_2$ set relatively high, the flock seems to be sucked violently into the target; in a very few iterations the entire flock is seen to be clustered around the goal. With $c_1$ and $c_2$ set low, the flock swirls around the goal, realistically approaching it, swinging out rhythmically with subgroups synchronized, and finally "landing" on the target. It is apparent that an agent is propelled towards a weighted average of the two better points in the problem space.

SWARM INTELLIGENCE Particle Swarm Algorithm: Parameters
Number of particles: the typical range is 20-40. For most problems, 10 particles is enough to get good results; for some difficult or special problems, one can try 100 or 200 particles as well.
Dimension of particles: determined by the problem to be optimized.
Range of particles: also determined by the problem to be optimized; different ranges can be specified for different dimensions of the particles.

SWARM INTELLIGENCE Particle Swarm Algorithm: Parameters
Vmax: determines the maximum change one particle can make during one iteration.
Learning factors: $c_1$ and $c_2$, usually both equal to 2.
Stopping condition: the maximum number of iterations the PSO executes and/or a satisfactory fitness value being achieved; this condition depends on the problem to be optimized.
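Collected as a configuration, the typical settings quoted above might look as follows; the dictionary layout and the target_fitness key are illustrative assumptions.

```python
# Illustrative parameter settings following the typical values quoted above.
pso_params = {
    "n_particles": 30,        # typical range is 20-40
    "dimensions": 2,          # determined by the problem being optimized
    "bounds": (-10.0, 10.0),  # range of particles, per dimension
    "c1": 2.0,                # cognitive learning factor
    "c2": 2.0,                # social learning factor
    "k_max": 200,             # stopping condition: iteration budget
    "target_fitness": 1e-8,   # stopping condition: good-enough fitness
}
```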

SWARM INTELLIGENCE Particle Swarm Algorithm Let p be the total number of particles in the swarm. The best-ever fitness value of a particle at design coordinates $p^i_k$ is denoted by $f^i_{best}$, and the best-ever fitness value of the overall swarm, at coordinates $p^g_k$, by $f^g_{best}$. At the initialization time step k = 0, the particle velocities $v^i_0$ are initialized to random values within the limits $0 \le v_0 \le v^{max}_0$. The vector $v^{max}_0$ is calculated as a fraction of the distance between the upper and lower bounds, $v^{max}_0 = \zeta (x^{UB} - x^{LB})$, with $\zeta = 0.5$.
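A sketch of this velocity initialization, assuming NumPy arrays for the bounds; the function name is illustrative.

```python
import numpy as np

def init_velocities(n_particles, x_lb, x_ub, zeta=0.5, rng=np.random.default_rng()):
    """Random initial velocities in [0, v_max0], with v_max0 = zeta * (x_ub - x_lb)."""
    v_max0 = zeta * (x_ub - x_lb)
    return rng.random((n_particles, x_lb.size)) * v_max0
```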

SWARM INTELLIGENCE Particle Swarm Algorithm
1. Initialize
(a) Set constants $k_{max}$, $c_1$, $c_2$, $w_0$
(b) Randomly initialize particle positions $x^i_0 \in D$ in $R^n$ for i = 1, ..., p
(c) Randomly initialize particle velocities $0 \le v^i_0 \le v^{max}_0$ for i = 1, ..., p
(d) Set k = 1

SWARM INTELLIGENCE Particle Swarm Algorithm
2. Optimize
(a) Evaluate $f^i_k$ using design space coordinates $x^i_k$
(b) If $f^i_k \le f^i_{best}$ then $f^i_{best} = f^i_k$, $p^i = x^i_k$
(c) If $f^i_k \le f^g_{best}$ then $f^g_{best} = f^i_k$, $p^g = x^i_k$
(d) If the stopping condition is satisfied, go to 3
(e) Update the particle velocity vector $v^i_{k+1}$
(f) Update the particle position vector $x^i_{k+1}$
(g) Increment i; if i > p, increment k and set i = 1
(h) Go to 2(a)
3. Report results
4. Terminate
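Putting the pieces together, here is a compact, runnable sketch of steps 1-4 in Python. The objective (the sphere function), the bounds, and the constants are illustrative assumptions; unlike the per-particle loop in the pseudocode, this version updates the whole swarm in one vectorized step per iteration, and it uses the linearly decaying inertia weight discussed earlier.

```python
import numpy as np

def sphere(x):
    """Illustrative objective: f(x) = sum of x_j^2, minimized at the origin."""
    return np.sum(x * x, axis=-1)

def pso(f, x_lb, x_ub, n_particles=30, k_max=200, c1=2.0, c2=2.0,
        w0=1.0, w_end=0.4, zeta=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n = x_lb.size
    # 1. Initialize positions in D and velocities in [0, v_max0]
    x = x_lb + rng.random((n_particles, n)) * (x_ub - x_lb)
    v = rng.random((n_particles, n)) * (zeta * (x_ub - x_lb))
    p_best = x.copy()                                # per-particle best positions
    f_best = f(x)                                    # per-particle best fitnesses
    g = p_best[np.argmin(f_best)].copy()             # global best position
    # 2. Optimize
    for k in range(k_max):
        w = w0 - (w0 - w_end) * k / k_max            # linearly decaying inertia
        r1 = rng.random((n_particles, n))
        r2 = rng.random((n_particles, n))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g - x)
        x = x + v
        fx = f(x)
        improved = fx <= f_best                      # update personal bests
        p_best[improved] = x[improved]
        f_best[improved] = fx[improved]
        g = p_best[np.argmin(f_best)].copy()         # update global best
    # 3. Report results
    return g, f_best.min()

best_x, best_f = pso(sphere, np.full(2, -10.0), np.full(2, 10.0))
print(best_x, best_f)   # a point near the origin and a fitness near zero
```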


SWARM INTELLIGENCE Comparison between Genetic Algorithm and Particle Swarm Both algorithms start with a randomly generated population. Both use fitness values to evaluate the population. Both update the population and search for the optimum with random techniques, and neither system guarantees success. However, PSO does not have genetic operators like crossover and mutation: particles update themselves with their internal velocity. Particles also have memory, which is important to the algorithm.