Particle Swarm Optimization by Dr. Shubhajit Roy Chowdhury Centre for VLSI and Embedded Systems Technology, IIIT Hyderabad.



What is Optimization?

Optimization can be defined as "the art of obtaining best policies to satisfy certain objectives, at the same time satisfying fixed requirements" (Gottfried).

Unconstrained optimization example: maximize Z, where

Z = x1^2·x2 - x2^2·x1 - 2·x1·x2

Unconstrained & Constrained Optimization

Unconstrained approach: set ∂Z/∂x1 = 0 and ∂Z/∂x2 = 0.

Constrained optimization example: design a box with maximum volume and minimum surface area.
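The unconstrained approach can be checked numerically. This is a sketch; the stationary point (2/3, -2/3) of Z is worked out by hand here, not given in the slides.

```python
# Numerical check of the unconstrained approach for
# Z = x1^2*x2 - x2^2*x1 - 2*x1*x2: at a stationary point,
# both partial derivatives should vanish.

def Z(x1, x2):
    return x1**2 * x2 - x2**2 * x1 - 2 * x1 * x2

def grad_Z(x1, x2, h=1e-6):
    """Central-difference approximation of (dZ/dx1, dZ/dx2)."""
    g1 = (Z(x1 + h, x2) - Z(x1 - h, x2)) / (2 * h)
    g2 = (Z(x1, x2 + h) - Z(x1, x2 - h)) / (2 * h)
    return g1, g2

g1, g2 = grad_Z(2/3, -2/3)
print(g1, g2)  # both close to 0
```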

Constrained Optimization (Contd.)

Approach: form the Lagrangian

L(x, y, z) = xyz - λ·[2(xy + yz + zx)]

where xyz is the volume, λ the Lagrange multiplier, and 2(xy + yz + zx) the surface area. Set ∂L/∂x = 0, ∂L/∂y = 0, ∂L/∂z = 0, and ∂L/∂λ = 0.

Solution: x = y = z.

In most engineering systems, the solution returns numerical values of the variables.
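The x = y = z result can be sanity-checked by brute force. This sketch fixes the surface area at 6 (an illustrative choice, not from the slides), so the best box should be the unit cube with volume 1.

```python
# Brute-force check that, for a fixed surface area, the cube
# maximizes volume (x = y = z), as the Lagrange multiplier
# solution predicts. Surface area fixed at S = 6.

def volume_for(x, y, S=6.0):
    # Solve the constraint 2(xy + yz + zx) = S for z given x, y.
    z = (S - 2 * x * y) / (2 * (x + y))
    return x * y * z if z > 0 else float("-inf")

best = max(
    ((volume_for(x / 100, y / 100), x / 100, y / 100)
     for x in range(10, 200) for y in range(10, 200)),
    key=lambda t: t[0],
)
print(best)  # volume 1.0 at x = 1.0, y = 1.0 (and hence z = 1.0)
```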

Numerical Approach to Optimization: Steepest Descent

Problem: minimize f(x1, x2, ..., xn).

Approach (η is the step size):

x1 := x1 - η·∂f/∂x1
x2 := x2 - η·∂f/∂x2
...
xn := xn - η·∂f/∂xn

Loop until ∂f/∂xi = 0 for all i.
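The update loop above can be sketched directly in Python. The test function f(x1, x2) = x1^2 + 2·x2^2 is an illustrative convex choice (not from the slides), with its minimum at (0, 0).

```python
# Steepest-descent sketch: repeatedly step against the gradient
# until the gradient is (numerically) zero.

def steepest_descent(grad, x, eta=0.1, tol=1e-8, max_iter=10_000):
    for _ in range(max_iter):
        g = grad(x)
        if all(abs(gi) < tol for gi in g):
            break                                  # gradient ~ 0: done
        x = [xi - eta * gi for xi, gi in zip(x, g)]
    return x

grad_f = lambda x: [2 * x[0], 4 * x[1]]   # df/dx1, df/dx2
x_min = steepest_descent(grad_f, [3.0, -2.0])
print(x_min)  # converges to the minimizer (0, 0)
```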

But what about multi-modal, noisy and even discontinuous functions? Gradient-based methods get trapped in a local minimum, or the function itself may be non-differentiable. How can a single agent find the global optimum by following gradient descent?
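The trap is easy to demonstrate. The multimodal function f(x) = x^2 + 25·sin(x)^2 is an illustrative choice (not from the slides): its global minimum is f(0) = 0, but gradient descent started at x = 4 settles into the local minimum near x ≈ 3 and never escapes.

```python
import math

# Gradient descent on a multimodal function gets stuck in the
# nearest local minimum instead of the global one at x = 0.
f = lambda x: x**2 + 25 * math.sin(x)**2
df = lambda x: 2 * x + 25 * math.sin(2 * x)   # derivative of f

x = 4.0
for _ in range(500):
    x -= 0.01 * df(x)          # steepest-descent step

print(x, f(x))  # stuck near x ~ 3, far above the global minimum f(0) = 0
```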

Way Out: Multi-Agent Optimization in Continuous Space

[Figure: randomly initialized agents scattered over the search space.]

After Convergence

[Figure: most agents are near the global optimum.]

Particle Swarm Optimization (PSO) (Kennedy and Eberhart, 1995)

Principles of Particle Swarm Optimization

[Figure: a particle's resulting direction of motion combines its current direction with the directions of the local and global maxima.]

The PSO Algorithm

1. Initialize the positions and velocities of n particles randomly.
2. Evaluate the fitness of all particles.
3. Adapt the velocity of each particle, taking into account its current velocity and the global best and local best positions experienced so far.
4. Evaluate each particle's new position from its current position and velocity.
5. Update the local best and global best positions based on the fitness of the new positions.
6. Repeat from (2) until most of the particles cease motion.
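The six steps above can be sketched as a minimal global-best PSO. The sphere function and most parameter values are illustrative choices, not from the slides (w = 0.5 and c1 = c2 = 2.0 match the worked example later); the run is seeded for repeatability.

```python
import random

def pso(f, dim=2, n=30, iters=200, w=0.5, c1=2.0, c2=2.0, seed=1):
    rng = random.Random(seed)
    # Step 1: random positions and velocities.
    xs = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(n)]
    vs = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    pbest = [x[:] for x in xs]          # local best positions
    gbest = min(pbest, key=f)[:]        # global best position (step 2)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # Step 3: velocity from current velocity, local best, global best.
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                # Step 4: new position from current position and velocity.
                xs[i][d] += vs[i][d]
            # Step 5: update local and global bests on improvement.
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest                        # step 6: stop after a fixed budget

sphere = lambda x: sum(xi * xi for xi in x)
best = pso(sphere)
print(best, sphere(best))  # near the global minimum at (0, 0)
```

Step 6 is simplified here to a fixed iteration budget; a closer reading of the slide would stop when particle velocities fall below a threshold.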

PSO: Starting Situation

[Figure: particles scattered randomly over the fitness landscape, with randomly oriented velocities.]

Situation after a few iterations

[Figure: all particles in a close vicinity of the global optimum; the best particle conquering the peak.]

Definitions

[Figure: the best position found so far by a particle, and the globally best position found by the swarm.]

Particle Swarm Optimization (Kennedy & Eberhart, 1995)

Each particle i moves from its current position Xi(t) with a resultant velocity Vi(t+1) built from its current velocity Vi(t), the best position it has found so far (Plb), and the globally best position found so far (Pgb):

Vi(t+1) = φ·Vi(t) + C1·rand(0,1)·(Plb - Xi(t)) + C2·rand(0,1)·(Pgb - Xi(t))
Xi(t+1) = Xi(t) + Vi(t+1)

[Figure: Vi(t+1) is the resultant of Vi(t) and the two attraction terms C1·rand(0,1)·(Plb - Xi(t)) and C2·rand(0,1)·(Pgb - Xi(t)).]
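The two update equations translate directly into code. This is a sketch; rand1 and rand2 stand for the two independent rand(0,1) draws, and the demo values are the particle-1 numbers used in the worked example that follows.

```python
# One-dimensional PSO update: a direct translation of
# V(t+1) = phi*V(t) + C1*rand*(Plb - X(t)) + C2*rand*(Pgb - X(t))
# X(t+1) = X(t) + V(t+1)

def update(x, v, p_lb, p_gb, rand1, rand2, phi=0.5, c1=2.0, c2=2.0):
    v_next = phi * v + c1 * rand1 * (p_lb - x) + c2 * rand2 * (p_gb - x)
    return x + v_next, v_next

# Particle 1 of the worked example: x = 7, v = 3, Plb = Pgb = 7.
x1, v1 = update(x=7.0, v=3.0, p_lb=7.0, p_gb=7.0, rand1=0.6, rand2=0.4)
print(x1, v1)  # 8.5, 1.5
```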

Example: f(x) = x(x - 8)

Consider a small swarm of particles for this one-dimensional function. Positions and velocities at t = 0 are randomly initialized in the range (-10, 10):

Particle | Position x(0) | Velocity v(0) | f(x)
1        | 7             | 3             | -7
2        | -2            | 5             | 20
3        | 9             | 6             | 9
4        | -6            | -4            | 84

So the fittest particle is particle 1 (f = -7), and we set Pgb = X1 = 7; each particle's Plb is its own initial position.

[Figure: initial distribution of the particles over the fitness landscape.]

Change in position of the particles in the next iteration. For this small-scale PSO problem we set C1 = C2 = 2.0 and φ = 0.5, and apply

Vi(t+1) = φ·Vi(t) + C1·rand(0,1)·(Plb - Xi(t)) + C2·rand(0,1)·(Pgb - Xi(t))
Xi(t+1) = Xi(t) + Vi(t+1)

Particle 1: V1(1) = 0.5·3 + 2·0.6·(7 - 7) + 2·0.4·(7 - 7) = 1.5; X1(1) = 7 + 1.5 = 8.5; fitness f(X1(1)) = 4.25.

Particle 2: V2(1) = 0.5·5 + 2·0.3·(…) + 2·0.4·(7 - (-2)) = 6.5; X2(1) = 3.5; fitness f(X2(1)) = -9.75.

Particle 3: V3(1) = 0.5·6 + 2·0.8·(9 - 9) + 2·0.95·(7 - 9) = -0.8; X3(1) = 9 + (-0.8) = 8.2; fitness f(X3(1)) = 1.64.

Particle 4: V4(1) = 0.5·(-4) + 2·0.38·(-6 - (-6)) + 2·0.45·(7 - (-6)) = 9.7; X4(1) = -6 + 9.7 = 3.7; fitness f(X4(1)) = -15.91.

Here we go for the next iteration (values in parentheses are the t = 0 values):

Particle | Position at t = 1 | Velocity at t = 1 | f(x)        | Plb for t = 2
1        | 8.5 (7)           | 1.5 (3)           | 4.25 (-7)   | 7
2        | 3.5 (-2)          | 6.5 (5)           | -9.75 (20)  | 3.5
3        | 8.2 (9)           | -0.8 (6)          | 1.64 (9)    | 8.2
4        | 3.7 (-6)          | 9.7 (-4)          | -15.91 (84) | 3.7

Pgb for t = 2 is 3.7 (particle 4, now the fittest).
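The arithmetic for particle 4 can be checked mechanically, using only the numbers given above:

```python
# Re-checking particle 4's update: x = -6, v = -4, Plb = -6, Pgb = 7,
# rand draws 0.38 and 0.45, phi = 0.5, C1 = C2 = 2.
f = lambda x: x * (x - 8)

v4 = 0.5 * (-4) + 2 * 0.38 * (-6 - (-6)) + 2 * 0.45 * (7 - (-6))
x4 = -6 + v4
print(v4, x4, f(x4))  # approximately 9.7, 3.7, -15.91
```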

[Figure: distribution of the particles over the fitness landscape at t = 1 and at t = 5.]

Best particle at t = 5: Pgb = 3.95, f(Pgb) = -15.9975 (the true minimum is f(4) = -16).

Optimization by PSO: Egg Crate Function

Minimize f(x1, x2). [Figure: surface plot of f over x1 and x2.] Known global minimum at [0, 0], with optimum function value 0.

Eggcrate Function Optimization by PSO

[Figure: positions of the particles on a 2-D parameter space at different instants.]
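A run like the one pictured can be reproduced with a minimal global-best PSO. The slides don't spell out the formula; a common form of the egg crate function is f(x1, x2) = x1^2 + x2^2 + 25·(sin^2(x1) + sin^2(x2)), whose global minimum is 0 at [0, 0], matching the slide. Swarm size, bounds and seed are illustrative choices.

```python
import math, random

def eggcrate(p):
    x, y = p
    return x * x + y * y + 25 * (math.sin(x)**2 + math.sin(y)**2)

rng = random.Random(7)
n, iters, w, c1, c2 = 30, 200, 0.5, 2.0, 2.0
xs = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(n)]
vs = [[0.0, 0.0] for _ in range(n)]
pbest = [x[:] for x in xs]
gbest = min(pbest, key=eggcrate)[:]
f0 = eggcrate(gbest)                  # best fitness in the initial swarm
for _ in range(iters):
    for i in range(n):
        for d in range(2):
            vs[i][d] = (w * vs[i][d]
                        + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                        + c2 * rng.random() * (gbest[d] - xs[i][d]))
            xs[i][d] += vs[i][d]
        if eggcrate(xs[i]) < eggcrate(pbest[i]):
            pbest[i] = xs[i][:]
            if eggcrate(pbest[i]) < eggcrate(gbest):
                gbest = pbest[i][:]
print(gbest, eggcrate(gbest))  # best point found, typically near [0, 0]
```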

To know more

THE site: Particle Swarm Central.

Clerc, M. and Kennedy, J., "The Particle Swarm - Explosion, Stability, and Convergence in a Multidimensional Complex Space", IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, 2002, pp. 58-73.

Problem with PSO

If the fitness function is too wavy and irregular, the particles can get trapped in a local minimum, and the result is a suboptimal solution.

Perceptive Particle Swarm Optimization (PPSO)

- The particles fly around in an (n+1)-dimensional search space for an n-dimensional optimization problem.
- The particles fly over a physical fitness landscape, observing its crests and troughs from afar.
- Particles observe the search space within their perception range by sampling a fixed number of directions, and a finite number of points along each direction.
- A particle attempts to observe the landscape at several sampled distances from its position in each direction.
- If a sampled point is within the landscape, the particle perceives the height of the landscape at that point.
- Particles can also observe neighbouring particles within their perception range.
- A particle randomly chooses the neighbouring particles that will influence it to move towards them.
- The position of the chosen neighbour is used as the local best position of the particle.
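The perception step described above can be sketched as follows. This is only an illustration of the sampling idea, not the published PPSO scheme: the direction spacing, sample counts and the 2-D bowl landscape are all assumptions.

```python
import math

# A particle at `pos` samples n_dirs evenly spaced directions and
# n_samples points along each, out to its perception radius, and
# perceives the landscape height f at every sampled point.

def perceive(f, pos, radius=2.0, n_dirs=8, n_samples=4):
    observations = []                      # (sample point, perceived height)
    for k in range(n_dirs):
        angle = 2 * math.pi * k / n_dirs   # sampled direction
        for s in range(1, n_samples + 1):
            r = radius * s / n_samples     # sampled distance along it
            x = pos[0] + r * math.cos(angle)
            y = pos[1] + r * math.sin(angle)
            observations.append(((x, y), f((x, y))))
    return observations

bowl = lambda p: p[0]**2 + p[1]**2         # illustrative landscape
obs = perceive(bowl, (1.0, 0.0))
print(len(obs))  # 8 directions x 4 samples = 32 observations
```

A particle would then move toward the most promising of these observations (or a chosen neighbour), rather than following a gradient.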

PPSO Illustration

Adaptive Perceptive Particle Swarm Optimization (APPSO)

In the APPSO algorithm, if the local best position of a particle at the current iteration does not improve the particle's performance, one or more of the following adaptations is made:

(1) the spacing between sample points along any direction within the perception radius is reduced;
(2) the number of sampling directions is increased;
(3) the perception radius is reduced.
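The three adaptations can be sketched as a single rule applied per particle. The function name, the shrink factor and the number of extra directions are illustrative assumptions, not values from the APPSO paper.

```python
# Adapt a particle's perception parameters when its new local best
# fails to improve its fitness; leave them unchanged otherwise.

def adapt_perception(spacing, n_directions, radius, improved,
                     shrink=0.5, extra_dirs=2):
    if improved:
        return spacing, n_directions, radius   # no adaptation needed
    return (spacing * shrink,           # (1) finer spacing of sample points
            n_directions + extra_dirs,  # (2) more sampling directions
            radius * shrink)            # (3) smaller perception radius

params = (1.0, 8, 4.0)
params = adapt_perception(*params, improved=False)
print(params)  # (0.5, 10, 2.0)
```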

APPSO Illustration

Different types of APPSO

Algorithm     | Perception radius | No. of directions | No. of sample points
APPSO1 (PPSO) | Fixed             | Fixed             | Fixed
APPSO2        | Fixed             | Fixed             | Variable
APPSO3        | Fixed             | Variable          | Fixed
APPSO4        | Fixed             | Variable          | Variable
APPSO5        | Variable          | Fixed             | Fixed
APPSO6        | Variable          | Fixed             | Variable
APPSO7        | Variable          | Variable          | Fixed
APPSO8        | Variable          | Variable          | Variable

THANK YOU