Multi-objective Optimization Using Particle Swarm Optimization


1 Multi-objective Optimization Using Particle Swarm Optimization

2 Introduction Evolutionary Algorithms (EAs) are not the only search techniques that have been used to deal with multi-objective optimization problems. Since other search techniques (e.g., tabu search and simulated annealing) have shown very good performance on many combinatorial (as well as other types of) optimization problems, it is natural to consider extending such approaches to handle multiple objectives.

3 Search Techniques:
Simulated Annealing
Tabu Search and Scatter Search
Particle Swarm Optimization
Ant System
Artificial Immune Systems
...

4 Particle Swarm Optimization
Kennedy and Eberhart (1995) proposed an approach called “particle swarm optimization” (PSO), inspired by the choreography of a bird flock. The idea of this approach is to simulate the movements of a group (or population) of birds searching for food. The approach can be seen as a distributed behavioral algorithm that performs (in its more general version) multidimensional search. In the simulation, the behavior of each individual is affected either by the best local individual (i.e., within a certain neighborhood) or by the best global individual.

5 As described by its inventors, James Kennedy and Russell Eberhart, “the particle swarm algorithm imitates human (or insect) social behavior. Individuals interact with one another while learning from their own experience, and gradually the population members move into better regions of the problem space.”

7 Both PSO and EC are population based.
PSO also uses the fitness concept, but less-fit particles do not die: there is no “survival of the fittest” and there are no evolutionary operators such as crossover and mutation. Instead, each particle (candidate solution) is adjusted according to its past experience and its relationship with other particles in the population.

8 In PSO, each single solution is a "bird" in the search space.
We call it a "particle". All particles have fitness values, evaluated by the fitness function to be optimized, and velocities, which direct their flight. The particles fly through the problem space by following the current optimum particles.

9 Initialize with randomly generated particles.
Update through generations in search of optima.
Each particle has a velocity and a position.
The update for each particle uses two “best” values:
pbest (personal best): the best solution (fitness) the particle has achieved so far (the fitness value is also stored).
gbest (global best): the best value obtained so far by any particle in the population.

10 Particle: Representing the Solution
PSO is a population-based search procedure in which individuals, called particles, change their position (state) over time: each individual has a position and changes its velocity. A particle contains:
the coordinates of its position
its velocity
its fitness value
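As a concrete illustration of what each particle carries, here is a minimal Python sketch; the field and function names are illustrative assumptions, not identifiers from the presentation.

```python
# Minimal sketch of a particle's state; all names are illustrative.
import random
from dataclasses import dataclass, field

@dataclass
class Particle:
    position: list          # coordinates of the position
    velocity: list          # flying direction and step size
    fitness: float = float("inf")        # fitness of the current position
    pbest_position: list = field(default_factory=list)  # best position so far
    pbest_fitness: float = float("inf")

def random_particle(dim, lo, hi):
    """Create one particle at a random feasible position with zero velocity."""
    pos = [random.uniform(lo, hi) for _ in range(dim)]
    return Particle(position=pos, velocity=[0.0] * dim, pbest_position=list(pos))
```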

11 The particles are manipulated according to the following equations:

$$v_i \leftarrow w\,v_i + c_1\,\mathrm{rand}_1()\,(pbest_i - x_i) + c_2\,\mathrm{rand}_2()\,(gbest - x_i) \quad (a)$$
$$x_i \leftarrow x_i + v_i \quad (b)$$

where $c_1$ and $c_2$ are two positive constants, $\mathrm{rand}_1()$ and $\mathrm{rand}_2()$ are two random functions in the range $[0,1]$, and $w$ is the inertia weight.
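A direct translation of equations (a) and (b) might look like the following Python sketch. The default values for w, c1, and c2 are common choices from the literature, not values fixed by the slide, and the random numbers are drawn per dimension here, a common implementation choice.

```python
import random

def update_particle(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """Apply equation (a) to get the new velocity and equation (b) to get
    the new position. x, v, pbest, gbest are equal-length lists of floats."""
    new_v = [w * vi
             + c1 * random.random() * (pb - xi)   # pull toward the particle's pbest
             + c2 * random.random() * (gb - xi)   # pull toward the swarm's gbest
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```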

12 In General: How PSO Works
Individuals in a population learn from their own previous experience and from the experiences of those around them. The direction of movement is a function of:
the current position
the velocity
the location of the individual's “best” success
the location of the neighbors' “best” successes
Therefore, each individual in the population gradually moves towards the “better” areas of the problem space, and hence the overall population moves towards “better” areas of the problem space.

13 Particle Swarm Optimization (ctd.)
A swarm consists of N particles in a D-dimensional search space. Each particle holds a position (a candidate solution to the problem) and a velocity (the flying direction and step size of the particle). Each particle successively adjusts its position toward the global optimum based on two factors: the best position visited by itself (pbest), denoted Pi = (pi1, pi2, …, piD), and the best position visited by the whole swarm (gbest), denoted Pg = (pg1, pg2, …, pgD).

14 [Figure: the velocity update, showing its social (global) influence and personal influence components.]

15 v_{n+1} : velocity of the particle at the (n+1)-th iteration
c1 : acceleration factor related to pbest
c2 : acceleration factor related to gbest
rand1( ) : random number between 0 and 1
rand2( ) : random number between 0 and 1
gbest : gbest position of the swarm
pbest : pbest position of the particle

16 Particle Swarm Optimization (PSO) Process
1. Initialize the population in hyperspace.
2. Evaluate the fitness of individual particles.
3. Modify velocities based on the previous best and global (or neighborhood) best positions.
4. Terminate on some condition; otherwise, go to step 2.

17 For each particle
    Initialize the particle with a feasible random position
End
Do
    For each particle
        Calculate the fitness value
        If the fitness value is better than the best fitness value (pbest) in its history
            Set the current value as the new pbest
    End
    Choose the particle with the best fitness value of all particles as the gbest
    For each particle
        Calculate the particle velocity according to the velocity update equation
        Update the particle position according to the position update equation
    End
While maximum iterations or a minimum error criterion is not attained

18 Pseudocode
Initialize;
while (not terminated)
{
    t = t + 1;
    for i = 1:N   // for each particle
    {
        Vi(t) = Vi(t-1) + c1*rand()*(Pi - Xi(t-1)) + c2*rand()*(Pg - Xi(t-1));
        Xi(t) = Xi(t-1) + Vi(t);
        Fitness_i(t) = f(Xi(t));
        if needed, update Pi and Pg;
    }  // end for i
}  // end while
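The pseudocode above can be made runnable with only minor additions. Here is a Python sketch that minimizes the sphere function f(x) = Σ xᵢ² as a stand-in objective; the inertia weight w (from equation (a)) and the other parameter values are illustrative choices, not taken from the slide.

```python
import random

def pso(f, dim=2, n=30, iters=100, lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5):
    # Initialize positions randomly in [lo, hi] and velocities to zero.
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [list(x) for x in X]                 # personal bests (Pi, i.e. pbest)
    pf = [f(x) for x in X]                   # pbest fitness values
    g = min(range(n), key=lambda i: pf[i])   # index of the best particle
    G, gf = list(P[g]), pf[g]                # global best (Pg) and its fitness
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fit = f(X[i])
            if fit < pf[i]:                  # update Pi if improved
                P[i], pf[i] = list(X[i]), fit
                if fit < gf:                 # update Pg if improved
                    G, gf = list(X[i]), fit
    return G, gf

best_x, best_f = pso(lambda x: sum(xi * xi for xi in x))
print("best position:", best_x, "best fitness:", best_f)
```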

19 PSO for MOP
Three main issues must be considered when extending PSO to multi-objective optimization:
1. How to select particles (to be used as leaders) so as to give preference to non-dominated solutions over those that are dominated?
2. How to retain the non-dominated solutions found during the search, so that the reported solutions are non-dominated with respect to all past populations and not only the current one? It is also desirable that these solutions be well spread along the Pareto front.
3. How to maintain diversity in the swarm in order to avoid convergence to a single solution?
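Issues 1 and 2 both rest on a Pareto-dominance test. A minimal sketch, assuming all objectives are to be minimized (the slides do not fix a convention):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b under minimization:
    a is no worse in every objective and strictly better in at least one."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))
```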

20 Algorithm: MOPSO
INITIALIZATION of the swarm.
EVALUATE the fitness of each particle of the swarm.
EX_ARCHIVE = SELECT the non-dominated solutions from the swarm.
t = 0.
REPEAT
    FOR each particle
        SELECT the gbest
        UPDATE the velocity
        UPDATE the position
        MUTATION /* optional */
        EVALUATE the particle
        UPDATE the pbest
    END FOR
    UPDATE the EX_ARCHIVE with the gbests.
    t = t + 1.
UNTIL (t > MAXIMUM_ITERATIONS)
REPORT the results in the EX_ARCHIVE.

Dehuri, S. and Cho, S.-B., “Multi-criterion Pareto based particle swarm optimized polynomial neural network for classification: a review and state-of-the-art”, Computer Science Review, vol. 3, no. 1, 2009.
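The UPDATE of the EX_ARCHIVE can be sketched with the dominates() test from the previous sketch; the function name and the objs accessor are illustrative assumptions, not part of the cited algorithm's code.

```python
def update_archive(archive, candidate, objs):
    """Insert candidate into the external archive of non-dominated solutions:
    reject it if any member dominates it; otherwise add it and drop the
    members it dominates. objs(s) returns the objective vector of s."""
    c = objs(candidate)
    if any(dominates(objs(a), c) for a in archive):
        return archive                    # candidate is dominated: archive unchanged
    survivors = [a for a in archive if not dominates(c, objs(a))]
    survivors.append(candidate)
    return survivors
```

Capacity handling for a full archive is discussed on slide 24.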

22 Leader Selection Algorithm
The main difference between PSO and MOPSO is how each particle selects its gbest. In PSO, each particle records its own gbest, which is the best position discovered by itself and its neighbours. In MOPSO, however, each particle selects its gbest from an archive set containing all non-dominated solutions discovered so far. In MOPSO-SiD, in each generation, each particle freely selects its own leader using a similarity-distance calculation. Given two particles p1 and p2, the similarity distance (SiD) between these particles is calculated according to the SiD equation.

23 [The SiD equation itself is not reproduced in this transcript.]
Here n is the total number of decision variables (i.e., the length of each position vector), and x1i, x2i are the i-th position entries of the two particles p1, p2, respectively. In each generation, for each particle in the search swarm, the similarity distance (SiD) between the particle and all archive members is calculated. The archive member with the shortest SiD is then chosen as the leader of that particle.
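Since the SiD equation itself is not reproduced here, the sketch below assumes it is the Euclidean distance between position vectors, which is consistent with the description of n, x1i, and x2i above.

```python
import math

def sid(x1, x2):
    """Similarity distance between two position vectors, assumed here to be
    the Euclidean distance over the n decision variables."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x1, x2)))

def select_leader(position, archive_positions):
    """Choose the archive member with the shortest SiD to the particle as
    that particle's gbest leader."""
    return min(archive_positions, key=lambda a: sid(position, a))
```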

24 Archive Control Algorithm
Controlling the archive set is also an important part of any MOPSO algorithm. The control mechanism decides whether a solution is added to the archive set, and which solution should be removed from the archive set when the set is full. In general, a solution S is added to the archive set if it is not dominated by any archive member. When the archive set is full, the similarity distance between each pair of archive members is calculated according to the SiD equation, and MOPSO-SiD selects the pair of archive members with the shortest similarity distance, one of which is removed.
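A sketch of this truncation step, reusing sid() from the previous sketch. Which member of the closest pair is discarded is not specified on the slide, so this sketch arbitrarily drops the second one.

```python
from itertools import combinations

def truncate_archive(archive_positions, capacity):
    """While the archive exceeds its capacity, find the pair of members with
    the shortest similarity distance and remove one of them, so the most
    redundant (most similar) solutions are discarded first."""
    arch = list(archive_positions)
    while len(arch) > capacity:
        pair = min(combinations(range(len(arch)), 2),
                   key=lambda p: sid(arch[p[0]], arch[p[1]]))
        del arch[pair[1]]    # drop the second member of the closest pair
    return arch
```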

25 Global Best Selection
The selection of the global best guide of the particle swarm is a crucial step in a multi-objective PSO algorithm. It affects both the convergence capability of the algorithm and its ability to maintain a good spread of nondominated solutions. In MOPSO-CD, a bounded external archive stores the nondominated solutions found in previous iterations. Any of the nondominated solutions in the archive can be used as the global best guide of the particles in the swarm, but we want to ensure that the particles in the population move towards the sparse regions of the search space.
In MOPSO-CD, the global best guide of each particle is therefore selected from among the nondominated solutions with the highest crowding-distance values. Selecting different guides for each particle from a specified top part of the repository, sorted by decreasing crowding distance, lets the particles in the primary population move towards the nondominated solutions in the external repository that lie in the least crowded areas of the objective space. Whenever the archive is full, crowding distance is again used to select which solution to replace in the archive. This promotes diversity among the stored solutions, since solutions in the most crowded areas are the most likely to be replaced by a new solution.
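MOPSO-CD's crowding distance is the density measure popularized by NSGA-II. A standard sketch follows; the exact bookkeeping in MOPSO-CD may differ. Boundary solutions receive infinite distance, so leaders are drawn from the least crowded regions first.

```python
def crowding_distance(front):
    """Crowding distance of each objective vector in front (a list of
    equal-length tuples). Larger values mean a less crowded neighborhood."""
    n = len(front)
    if n < 3:
        return [float("inf")] * n            # too few points: all are boundary
    dist = [0.0] * n
    for k in range(len(front[0])):           # one pass per objective
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][k] - front[order[0]][k]
        if span == 0.0:
            continue                         # all values equal in this objective
        for r in range(1, n - 1):
            dist[order[r]] += (front[order[r + 1]][k]
                               - front[order[r - 1]][k]) / span
    return dist
```

Leaders are then sampled from the top portion of the archive sorted by decreasing crowding distance, as the slide describes.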
