Multi-objective Optimization Using Particle Swarm Optimization


Introduction
Evolutionary algorithms (EAs) are not the only search techniques that have been used to deal with multiobjective optimization problems. Since other search techniques (e.g., tabu search and simulated annealing) have shown very good performance on many combinatorial (as well as other types of) optimization problems, it is natural to extend such approaches to deal with multiple objectives.

Search Techniques:
- Simulated Annealing
- Tabu Search and Scatter Search
- Particle Swarm Optimization
- Ant System
- Artificial Immune Systems
- ...

Particle Swarm Optimization
Kennedy and Eberhart (1995) proposed an approach called "particle swarm optimization" (PSO), inspired by the choreography of a bird flock. The idea is to simulate the movements of a group (or population) of birds searching for food. The approach can be seen as a distributed behavioral algorithm that performs (in its more general version) multidimensional search. In the simulation, the behavior of each individual is affected by either the best local (i.e., within a certain neighborhood) or the best global individual.

As described by the inventors James Kennedy and Russell Eberhart, "particle swarm algorithm imitates human (or insects) social behavior. Individuals interact with one another while learning from their own experience, and gradually the population members move into better regions of the problem space".

Both PSO and evolutionary computation (EC) are population based, and PSO also uses the fitness concept, but:
- Less-fit particles do not die: there is no "survival of the fittest".
- There are no evolutionary operators such as crossover and mutation.
- Each particle (candidate solution) is varied according to its past experience and its relationship with the other particles in the population.

In PSO, each single solution is a "bird" in the search space; call it a "particle". All particles have fitness values, which are evaluated by the fitness function to be optimized, and velocities, which direct their flight. The particles fly through the problem space by following the current optimum particles.

The algorithm:
- Initialize with randomly generated particles.
- Update through generations in search of optima.
- Each particle has a velocity and a position.
- The update for each particle uses two "best" values:
  - pbest (personal best): the best solution (fitness) the particle has achieved so far (the fitness value is also stored).
  - gbest (global best): the best value obtained so far by any particle in the population.

Presenting the Solution Set
PSO is a population-based search procedure in which individuals, called particles, change their position (state) with time: each individual has a position and changes its velocity. Each particle represents a candidate solution and contains:
- the coordinates of its position
- a velocity
- a fitness value

The particles are manipulated according to the following equations:

(a)  v_{id} = w \cdot v_{id} + c_1 \cdot rand_1() \cdot (p_{id} - x_{id}) + c_2 \cdot rand_2() \cdot (p_{gd} - x_{id})
(b)  x_{id} = x_{id} + v_{id}

where c_1 and c_2 are two positive constants, rand_1() and rand_2() are two random functions in the range [0,1], and w is the inertia weight.
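
As a sketch of how equations (a) and (b) translate into code (NumPy here; pso_step is a hypothetical helper name, and the constants w = 0.7, c1 = c2 = 1.5 are illustrative choices, not values prescribed by the slides):

import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    # x, v, pbest: (N, D) arrays; gbest: a (D,) array.
    r1 = rng.random(x.shape)   # rand_1() drawn per particle and dimension
    r2 = rng.random(x.shape)   # rand_2() likewise
    # Equation (a): inertia + cognitive (pbest) + social (gbest) terms.
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    # Equation (b): move each particle along its new velocity.
    x = x + v
    return x, v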

In General: How PSO Works
Individuals in a population learn from their own previous experiences and from the experiences of those around them. The direction of movement is a function of:
- the current position
- the velocity
- the location of the individual's own "best" success
- the location of the neighbors' "best" successes

Therefore, each individual in the population gradually moves towards the "better" areas of the problem space; hence, the overall population moves towards "better" areas of the problem space.

Particle Swarm Optimization (ctd.)
A swarm consists of N particles in a D-dimensional search space. Each particle holds a position (a candidate solution to the problem) and a velocity (the flying direction and step size of the particle). Each particle successively adjusts its position toward the global optimum based on two factors: the best position visited by itself (pbest), denoted P_i = (p_{i1}, p_{i2}, ..., p_{iD}), and the best position visited by the whole swarm (gbest), denoted P_g = (p_{g1}, p_{g2}, ..., p_{gD}).

[Figure: the velocity update equation annotated term by term, with the pbest term labeled as the personal influence and the gbest term as the social (global) influence.]

In the velocity update:
- v_{n+1}: velocity of the particle at the (n+1)-th iteration
- c_1: acceleration factor for the personal-best (pbest/lbest) term
- c_2: acceleration factor for the global-best (gbest) term
- rand_1(), rand_2(): random numbers between 0 and 1
- gbest: best position found so far by the swarm
- pbest: best position found so far by the particle

Particle Swarm Optimization (PSO) Process
1. Initialize the population in hyperspace.
2. Evaluate the fitness of the individual particles.
3. Modify the velocities based on the previous best and the global (or neighborhood) best positions.
4. Terminate on some condition; otherwise, go to step 2.

For each particle
    Initialize the particle with feasible random numbers
End
Do
    For each particle
        Calculate the fitness value
        If the fitness value is better than the best fitness value (pbest) in history
            Set the current value as the new pbest
    End
    Choose the particle with the best fitness value of all particles as the gbest
    For each particle
        Calculate the particle velocity according to the velocity update equation
        Update the particle position according to the position update equation
    End
While maximum iterations or minimum error criterion is not attained

Pseudo code:

Initialize;
while (not terminated) {
    t = t + 1;
    for i = 1:N {   // for each particle
        Vi(t) = Vi(t-1) + c1*rand()*(Pi - Xi(t-1)) + c2*rand()*(Pg - Xi(t-1));
        Xi(t) = Xi(t-1) + Vi(t);
        fitness_i(t) = f(Xi(t));
        if needed, update Pi and Pg;
    }   // end for i
}   // end while
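
A runnable version of this pseudo code, minimizing the sphere function as a stand-in objective (the objective, bounds, swarm size, and constants below are illustrative assumptions, not part of the original slides):

import numpy as np

def f(X):                                  # illustrative objective: sphere function
    return np.sum(X**2, axis=1)

N, D, T = 30, 5, 200                       # particles, dimensions, iterations
w, c1, c2 = 0.7, 1.5, 1.5
rng = np.random.default_rng(1)

X = rng.uniform(-5.0, 5.0, (N, D))         # positions
V = np.zeros((N, D))                       # velocities
P, Pf = X.copy(), f(X)                     # personal bests and their fitness
g = P[np.argmin(Pf)].copy()                # global best

for t in range(T):
    r1, r2 = rng.random((N, D)), rng.random((N, D))
    V = w*V + c1*r1*(P - X) + c2*r2*(g - X)
    X = X + V
    fX = f(X)
    better = fX < Pf                       # update each pbest that improved
    P[better], Pf[better] = X[better], fX[better]
    g = P[np.argmin(Pf)].copy()            # update gbest

print("best fitness found:", Pf.min())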

PSO for MOP
Three main issues must be considered when extending PSO to multi-objective optimization:
1. How to select particles (to be used as leaders) so as to give preference to non-dominated solutions over those that are dominated?
2. How to retain the non-dominated solutions found during the search process, so that the reported solutions are non-dominated with respect to all past populations and not only the current one? It is also desirable that these solutions be well spread along the Pareto front.
3. How to maintain diversity in the swarm in order to avoid convergence to a single solution?
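
All three issues rest on the notion of Pareto dominance. A minimal check (assuming every objective is to be minimized) that the sketches below reuse:

import numpy as np

def dominates(fa, fb):
    # fa dominates fb (minimization): no worse in every objective,
    # strictly better in at least one.
    fa, fb = np.asarray(fa), np.asarray(fb)
    return bool(np.all(fa <= fb) and np.any(fa < fb))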

Algorithm: MOPSO
INITIALIZE the swarm.
EVALUATE the fitness of each particle of the swarm.
EX_ARCHIVE = SELECT the non-dominated solutions from the swarm.
t = 0.
REPEAT
    FOR each particle
        SELECT the gbest
        UPDATE the velocity
        UPDATE the position
        MUTATION   /* optional */
        EVALUATE the particle
        UPDATE the pbest
    END FOR
    UPDATE the EX_ARCHIVE with the gbests.
    t = t + 1
UNTIL (t > MAXIMUM_ITERATIONS)
REPORT the results in the EX_ARCHIVE.

Dehuri, S. and Cho, S.-B., "Multi-criterion Pareto based particle swarm optimized polynomial neural network for classification: a review and state-of-the-art", Computer Science Review, vol. 3, no. 1, pp. 19-40, 2009.
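
Stitching these steps together, a skeleton of the loop might look as follows. This is a sketch under assumptions, not the authors' code: dominates() is defined above, select_leader() and update_archive() are sketched later in this section, and the weights and the dominance-based pbest update rule are illustrative choices.

import numpy as np

def mopso(objective, N, D, T, lo, hi, max_archive):
    # objective maps a (D,) position to a (k,) vector of objective values.
    rng = np.random.default_rng(2)
    X = rng.uniform(lo, hi, (N, D))
    V = np.zeros((N, D))
    P = X.copy()                                  # pbest positions
    Pf = np.array([objective(x) for x in X])      # pbest objective vectors
    # EX_ARCHIVE: the non-dominated members of the initial swarm.
    archive = [(X[i].copy(), Pf[i].copy()) for i in range(N)
               if not any(dominates(Pf[j], Pf[i]) for j in range(N) if j != i)]
    for t in range(T):
        A = np.array([a for a, _ in archive])     # leaders come from the archive
        new_solutions = []
        for i in range(N):
            g = A[select_leader(X[i], A)]         # SELECT the gbest
            r1, r2 = rng.random(D), rng.random(D)
            V[i] = 0.7*V[i] + 1.5*r1*(P[i] - X[i]) + 1.5*r2*(g - X[i])
            X[i] = np.clip(X[i] + V[i], lo, hi)   # UPDATE the position, in bounds
            fx = objective(X[i])
            if dominates(fx, Pf[i]):              # UPDATE the pbest
                P[i], Pf[i] = X[i].copy(), fx
            new_solutions.append((X[i].copy(), fx))
        for cand in new_solutions:                # UPDATE the EX_ARCHIVE
            archive = update_archive(archive, cand, max_archive)
    return archive                                # approximation of the Pareto set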

Leader Selection Algorithm
The main difference between PSO and MOPSO is how each particle selects its gbest. In PSO, each particle records its own gbest, which is the best position discovered by it and its neighbours. In MOPSO, however, each particle selects its gbest from an archive set, which contains all non-dominated solutions discovered so far. In MOPSO-SiD, in each generation each particle freely selects its own leader using a similarity-distance calculation. Given two particles p_1 and p_2, the similarity distance (SiD) between them is calculated according to the equation below.

SiD(p_1, p_2) = \sqrt{ \sum_{i=1}^{n} (x_{1i} - x_{2i})^2 }

where n is the total number of decision variables (i.e., the length of each position vector) and x_{1i}, x_{2i} are the i-th position entries of the two particles p_1 and p_2, respectively. In each generation, for each particle in the search swarm, the similarity distance (SiD) between the particle and all archive members is calculated; the archive member with the shortest SiD is then chosen as that particle's leader.
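
Read literally, leader selection in MOPSO-SiD is a nearest-neighbour query in decision space. A sketch using the Euclidean form of SiD given above (select_leader and archive_positions, an array of the archive members' position vectors, are assumed names):

import numpy as np

def select_leader(x, archive_positions):
    # SiD between particle x and every archive member; shortest wins.
    d = np.sqrt(np.sum((archive_positions - x)**2, axis=1))
    return int(np.argmin(d))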

Archive Control Algorithm
Controlling the archive set is also an important part of a MOPSO algorithm. The controlling mechanism decides whether or not a solution is added to the archive set, and which solution should be removed from the archive set when the set is full. In general, a solution S is added to the archive set if it is not dominated by any archive member. When the archive set is full, the similarity distance between each pair of archive members is calculated according to the SiD equation above; MOPSO-SiD then selects the pair of archive members with the shortest similarity distance and removes one member of that pair.
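
A sketch of that control logic, reusing dominates() from earlier. The slides stop short of saying which member of the closest pair is dropped, so removing the second of the two is an assumption here:

import numpy as np

def update_archive(archive, candidate, max_size):
    # archive: list of (position, objectives) pairs; candidate likewise.
    x, fx = candidate
    if any(dominates(f_a, fx) for _, f_a in archive):
        return archive                         # dominated by a member: reject
    # drop members the candidate dominates, then admit it
    archive = [(a, f_a) for a, f_a in archive if not dominates(fx, f_a)]
    archive.append(candidate)
    if len(archive) > max_size:
        pos = np.array([a for a, _ in archive])
        d = np.sqrt(((pos[:, None, :] - pos[None, :, :])**2).sum(axis=-1))
        np.fill_diagonal(d, np.inf)            # ignore self-distances
        i, j = np.unravel_index(np.argmin(d), d.shape)
        archive.pop(max(i, j))                 # remove one of the closest pair (assumption)
    return archive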

Global Best Selection
The selection of the global best guide of the particle swarm is a crucial step in a multiobjective PSO algorithm. It affects both the convergence capability of the algorithm and the maintenance of a good spread of nondominated solutions. In MOPSO-CD, a bounded external archive stores the nondominated solutions found in previous iterations. Any of the nondominated solutions in the archive can be used as the global best guide of the particles in the swarm, but we want to ensure that the particles in the population move towards the sparse regions of the search space.

In MOPSO-CD, the global best guide of each particle is therefore selected from among those nondominated solutions with the highest crowding-distance values. Selecting different guides for each particle, from a specified top part of the repository sorted by decreasing crowding distance, lets the particles in the primary population move towards those nondominated solutions in the external repository which lie in the least crowded areas of the objective space. Whenever the archive is full, crowding distance is again used to select which solution to replace in the archive. This promotes diversity among the stored solutions, since solutions in the most crowded areas are the most likely to be replaced by a new solution.
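
Crowding distance here is the usual NSGA-II style measure over the archive's objective vectors. A sketch of computing it (crowding_distance is an assumed name); leaders would then be drawn from the members with the largest values, and the smallest values mark replacement candidates when the archive is full:

import numpy as np

def crowding_distance(F):
    # F: (M, k) array of objective values for the M archive members.
    M, k = F.shape
    cd = np.zeros(M)
    for j in range(k):
        order = np.argsort(F[:, j])
        cd[order[0]] = cd[order[-1]] = np.inf   # boundary solutions always kept
        span = F[order[-1], j] - F[order[0], j]
        if span == 0:
            continue
        # interior members: gap between the two neighbours, normalized
        cd[order[1:-1]] += (F[order[2:], j] - F[order[:-2], j]) / span
    return cd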
