Metaheuristic Methods and Their Applications

Lin-Yu Tseng (曾怜玉), Institute of Networking and Multimedia, Department of Computer Science and Engineering

Outline
- Optimization Problems
- Strategies for Solving NP-hard Optimization Problems
- What is a Metaheuristic Method?
- Trajectory Methods
- Population-Based Methods
- The Applications of Metaheuristic Methods

Optimization Problems
- Computer Science: Traveling Salesman Problem, Maximum Clique Problem
- Operational Research (OR): Flow Shop Scheduling Problem (processing orders), P-Median Problem (location selection)
Many optimization problems are NP-hard.

Example (Traveling Salesman Problem)
[Figure: a complete graph on five cities with edge weights 6, 7, 10, 15, 18, 20, 23.]
A solution is a sequence of cities: the tour 1-2-3-4-5 has length 31; the tour 1-3-4-5-2 has length 63. There are (n-1)! tours in total.

Strategies for Solving NP-hard Optimization Problems
- Branch-and-Bound: finds an exact solution.
- Approximation Algorithms: e.g., there is an approximation algorithm for the TSP that finds a tour of length ≤ 1.5 × (optimal tour length) in O(n³) time.
- Heuristic Methods: deterministic.
- Metaheuristic Methods: heuristic + randomization.

What is a Metaheuristic Method?
Meta: at an upper level. Heuristic: to find.
A metaheuristic is formally defined as an iterative generation process which guides a subordinate heuristic by combining intelligently different concepts for exploring and exploiting the search space; learning strategies are used to structure information in order to find near-optimal solutions efficiently.

Fundamental Properties of Metaheuristics
- Metaheuristics are strategies that "guide" the search process.
- The goal is to explore the search space efficiently in order to find (near-)optimal solutions.
- The techniques which constitute metaheuristic algorithms range from simple local search procedures to complex learning processes.
- Metaheuristic algorithms are approximate and usually non-deterministic.

Fundamental Properties of Metaheuristics (cont.)
- They may incorporate mechanisms to avoid getting trapped in confined areas of the search space.
- The basic concepts of metaheuristics permit an abstract-level description.
- Metaheuristics are not problem-specific, but they may make use of domain-specific knowledge in the form of heuristics that are controlled by the upper-level strategy.
- Today's more advanced metaheuristics use search experience (embodied in some form of memory) to guide the search.

Trajectory Methods
1. Basic Local Search: Iterative Improvement
Improve(N(S)) can be:
- First improvement
- Best improvement
- An intermediate option
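The pivoting options above differ only in how the improving neighbor is chosen. A minimal sketch for minimization (the toy objective, neighborhood, and parameter names are illustrative, not from the slides):

```python
def local_search(f, s, neighbors, strategy="best"):
    """Iterative improvement for minimization. 'first' moves to the first
    improving neighbor found; 'best' scans the whole neighborhood N(S) and
    takes the most improving neighbor. Stops at a local optimum."""
    while True:
        best_n, best_val = None, f(s)
        for n in neighbors(s):
            if f(n) < best_val:
                best_n, best_val = n, f(n)
                if strategy == "first":   # first improvement: stop scanning
                    break
        if best_n is None:                # no improving neighbor:
            return s                      # s is a local optimum
        s = best_n

# Toy run: minimize f(x) = x^2 over the integers, neighbors are x±1.
f = lambda x: x * x
neighbors = lambda x: [x - 1, x + 1]
```

Both strategies converge to the same point here; on rugged landscapes they can reach different local optima.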

Trajectory Methods
[Figure: a search trajectory visiting solutions x1, x2, x3, x4, x5 in turn.]

Simulated Annealing – Kirkpatrick et al. (Science, 1983)
A worse neighboring solution is accepted with probability exp(-Δ/T), where Δ is the increase in cost and the temperature T is gradually decreased during the search according to a cooling schedule.
Simulated annealing = random walk + iterative improvement: at high T the search behaves like a random walk, at low T like iterative improvement.
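A minimal sketch of the acceptance and cooling mechanism (the geometric cooling schedule and all parameter values are illustrative choices, not taken from the slides):

```python
import math
import random

def simulated_annealing(f, s, neighbor, T0=10.0, alpha=0.95, iters=2000, seed=0):
    """Minimization sketch. A worse neighbor is accepted with probability
    exp(-delta/T); T is cooled geometrically, so the search behaves like a
    random walk at high T and like iterative improvement at low T."""
    rng = random.Random(seed)
    best = s
    T = T0
    for _ in range(iters):
        cand = neighbor(s, rng)
        delta = f(cand) - f(s)
        if delta <= 0 or rng.random() < math.exp(-delta / T):
            s = cand                      # accept the move
            if f(s) < f(best):
                best = s                  # track the best solution seen
        T *= alpha                        # geometric cooling schedule
    return best

# Toy run: a 1-D function with many local minima.
f = lambda x: x * x + 10 * math.sin(3 * x)
best = simulated_annealing(f, 5.0, lambda x, rng: x + rng.uniform(-1, 1))
```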

Tabu Search – Glover, 1986
- Tabu list: recently visited solutions (or moves) that are temporarily forbidden.
- Tabu tenure: the length of the tabu list.
- Aspiration condition: allows a tabu move when it would yield a solution better than any found so far.
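The three components above can be sketched together (the integer toy problem and parameter values are illustrative):

```python
from collections import deque

def tabu_search(f, s, neighbors, tenure=5, iters=50):
    """Minimization sketch. Always moves to the best admissible neighbor,
    even a worse one; solutions visited in the last 'tenure' steps are tabu,
    unless they beat the best solution found so far (aspiration)."""
    tabu = deque(maxlen=tenure)          # tabu tenure = length of the list
    best = s
    for _ in range(iters):
        candidates = [n for n in neighbors(s)
                      if n not in tabu or f(n) < f(best)]   # aspiration
        if not candidates:
            break
        s = min(candidates, key=f)       # may be worse than the current s
        tabu.append(s)
        if f(s) < f(best):
            best = s
    return best

# Toy run: minimize (x - 3)^2 over the integers, neighbors are x±1.
f = lambda x: (x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
```

Unlike basic local search, the method keeps moving after reaching an optimum; the tabu list prevents it from immediately cycling back.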

Tabu Search

Population-Based Methods
1. Genetic Algorithm – Holland, 1975
- Coding of a solution: chromosome
- Fitness function: related to the objective function
- Initial population

An example: TSP (Traveling Salesman Problem)
[Figure: a complete graph on five cities with edge weights 6, 7, 10, 15, 18, 20, 23.]
The tour 1-2-3-4-5 has length 31; the tour 1-3-4-5-2 has length 63.

Genetic Algorithm
- Reproduction (Selection)
- Crossover
- Mutation

Example 1: Maximize f(x) = x² where x ∈ I and 0 ≤ x ≤ 31

Coding of a solution: a five-bit integer, e.g. 01101
Fitness function: F(x) = f(x) = x²
Initial population (randomly generated): 01101, 11000, 01000, 10011

Reproduction Roulette Wheel

Reproduction

Crossover

The probability of mutation is Pm = 0.001. With 4 chromosomes × 5 bits = 20 bits in the population, the expected number of mutated bits per generation is 20 × 0.001 = 0.02.
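Example 1 can be sketched end to end (Pm = 0.001 matches the slide; the crossover rate, number of generations, and seed are illustrative choices):

```python
import random

def genetic_algorithm(pop_size=4, bits=5, generations=30, pc=0.9, pm=0.001, seed=1):
    """Sketch of Example 1: maximize f(x) = x^2 for 0 <= x <= 31 with a
    5-bit chromosome, roulette-wheel selection, one-point crossover, and
    bitwise mutation."""
    rng = random.Random(seed)
    fitness = lambda c: int(c, 2) ** 2
    pop = ["".join(rng.choice("01") for _ in range(bits)) for _ in range(pop_size)]

    def roulette(pop):
        # Spin a wheel whose slots are proportional to fitness.
        total = sum(fitness(c) for c in pop)
        r = rng.uniform(0, total)
        acc = 0.0
        for c in pop:
            acc += fitness(c)
            if acc >= r:
                return c
        return pop[-1]

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = roulette(pop), roulette(pop)
            if rng.random() < pc:                 # one-point crossover
                cut = rng.randint(1, bits - 1)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            nxt += [p1, p2]
        # Bitwise mutation: flip each bit with probability pm.
        pop = ["".join(b if rng.random() >= pm else "10"[int(b)] for b in c)
               for c in nxt[:pop_size]]
    return max(pop, key=fitness)

best = genetic_algorithm()
```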

Ant Colony Optimization Dorigo 1992

Pheromone (smell)

Initialization
Loop
  Loop
    Each ant applies a state transition rule to incrementally build a solution
    and applies a local updating rule to the pheromone
  Until each ant has built a complete solution
  A global pheromone updating rule is applied
Until End_Condition

Example: TSP
[Figure: a complete graph on five cities with edge weights 6, 7, 10, 15, 18, 20, 23.]
A simple heuristic – a greedy method: always choose the shortest edge out of the current node. Solution: 1-5-4-3-2.

How does ant k choose the next city to visit?
Each ant builds a solution using a step-by-step constructive decision policy. At city r, ant k chooses the next city s among the not-yet-visited cities with probability

p_k(r,s) = [τ(r,s)]·[η(r,s)]^β / Σ_u [τ(r,u)]·[η(r,u)]^β,

where τ(r,s) is the pheromone on edge (r,s), η(r,s) = 1/δ(r,s), and δ(r,s) is a distance measure associated with edge (r,s).

Local update of pheromone: τ(r,s) ← (1−ρ)·τ(r,s) + ρ·τ0, where τ0 is the initial pheromone level.
Global update of pheromone: τ(r,s) ← (1−α)·τ(r,s) + α·Δτ(r,s), where Δτ(r,s) = 1/L_gb if edge (r,s) belongs to the globally best tour and 0 otherwise, and L_gb is the length of that tour.
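The construction and update rules can be sketched together on a small TSP (an Ant System-style simplification with a single evaporation-plus-reinforcement update; the 4-city distance matrix and all parameter values are hypothetical, not the slide's graph):

```python
import random

def aco_tsp(dist, n_ants=5, iters=50, beta=2.0, rho=0.1, seed=0):
    """Sketch: each ant builds a tour city by city with probability
    proportional to tau(r,s) * eta(r,s)^beta, eta = 1/distance; then
    pheromone evaporates and the best tour's edges are reinforced."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # initial pheromone level
    best_tour, best_len = None, float("inf")

    def tour_len(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    for _ in range(iters):
        for _ant in range(n_ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:                 # state transition rule
                r = tour[-1]
                cand = [s for s in range(n) if s not in tour]
                w = [tau[r][s] * (1.0 / dist[r][s]) ** beta for s in cand]
                tour.append(rng.choices(cand, weights=w)[0])
            L = tour_len(tour)
            if L < best_len:
                best_tour, best_len = tour, L
        # Evaporation, then reinforcement of the best tour found so far.
        for r in range(n):
            for s in range(n):
                tau[r][s] *= (1 - rho)
        for i in range(n):
            r, s = best_tour[i], best_tour[(i + 1) % n]
            tau[r][s] += rho / best_len
            tau[s][r] += rho / best_len
    return best_tour, best_len

# Toy run: a 4-city instance whose optimal cyclic tour 0-1-2-3 has length 10.
dist = [[0, 1, 5, 4],
        [1, 0, 2, 6],
        [5, 2, 0, 3],
        [4, 6, 3, 0]]
tour, length = aco_tsp(dist)
```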

Particle Swarm Optimization Kennedy and Eberhart 1995

Population initialized by assigning random positions and velocities; potential solutions are then flown through hyperspace. Each particle keeps track of its “best” (highest fitness) position in hyperspace. This is called “pBest” for an individual particle. It is called “gBest” for the best in the population. At each time step, each particle stochastically accelerates toward its pBest and gBest.

Particle Swarm Optimization Process
1. Initialize the population in hyperspace, including positions and velocities.
2. Evaluate the fitness of each individual particle.
3. Modify velocities based on the personal best (pBest) and global best (gBest).
4. Terminate on some condition; otherwise go to step 2.

Initialization: positions and velocities
- Place the particles in the search space, at random, on a regular grid, or both.
- How many? In practice, for most real problems with dimension between 2 and 100, a swarm size of about 20 particles works quite well; some variants use an adaptive swarm size.
- Assign each particle a velocity, usually at random. All initial velocities can be set to zero, but experimentally this is usually not the best choice.
- What we call "velocity" is in fact a move, because time is discretized.

Modify velocities based on the personal best and the global best.
[Figure: a particle at position x_t moves to a new position x_{t+1} under the combined pull of its velocity v, its pBest, and the swarm's gBest.]
This is the core of the method. Each particle can evaluate the objective function at its current position, remembers the best position it has found so far (pBest), and knows the best position found by its neighbors (gBest). It is then subject to three tendencies: following its own velocity, moving back toward its best previous position, and moving toward its best neighbor. PSO formalizes how to combine these tendencies so that the swarm is globally efficient.

Particle Swarm Optimization: modification of velocities and positions
v_{t+1} = v_t + c1 · rand() · (pBest − x_t) + c2 · rand() · (gBest − x_t)
x_{t+1} = x_t + v_{t+1}
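The two update equations can be sketched directly, here minimizing the sphere function (the swarm size of 20 follows the earlier slide's practical suggestion; c1, c2, bounds, and iteration count are illustrative):

```python
import random

def pso(f, dim, n_particles=20, iters=100, c1=2.0, c2=2.0,
        bounds=(-10.0, 10.0), seed=0):
    """Minimization sketch of the update rule:
    v <- v + c1*rand()*(pBest - x) + c2*rand()*(gBest - x);  x <- x + v."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [x[:] for x in X]                 # each particle's best position
    gbest = min(pbest, key=f)[:]              # best position in the swarm
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] += (c1 * rng.random() * (pbest[i][d] - X[i][d])
                            + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(pbest[i]):
                pbest[i] = X[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

# Toy run: minimize the sphere function f(x) = sum of x_d^2.
sphere = lambda x: sum(v * v for v in x)
g = pso(sphere, dim=3)
```

This basic form (no inertia weight or velocity clamping, as in the 1995 formulation) can oscillate, but gBest only ever improves, so the returned solution is at least as good as the best initial particle.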

The Applications of Metaheuristics
- Solving NP-hard optimization problems: Traveling Salesman Problem, Maximum Clique Problem, Flow Shop Scheduling Problem (processing orders), P-Median Problem (location selection)
- Search problems in many applications: feature selection in pattern recognition, automatic clustering, machine learning (e.g., neural network training)

For NP-hard optimization problems and complicated search problems, metaheuristic methods are very good choices: they are more efficient than branch-and-bound and obtain better-quality solutions than heuristic methods.
Open issues in the hybridization of metaheuristics:
- How to make the search more systematic?
- How to make the search more controllable?
- How to make the performance scalable?

References
- Osman, I. H., and Laporte, G. "Metaheuristics: A bibliography". Annals of Operations Research 63, 513–623, 1996.
- Blum, C., and Roli, A. "Metaheuristics in combinatorial optimization: Overview and conceptual comparison". ACM Computing Surveys 35(3), 268–308, 2003.
- Kirkpatrick, S., Gelatt, C. D., and Vecchi, M. P. "Optimization by simulated annealing". Science 220(4598), 671–680, 1983.
- Glover, F. "Future paths for integer programming and links to artificial intelligence". Computers & Operations Research 13, 533–549, 1986.
- Hansen, P., and Mladenović, N. "An introduction to variable neighborhood search". In Metaheuristics: Advances and Trends in Local Search Paradigms for Optimization, S. Voß, S. Martello, I. Osman, and C. Roucairol, Eds., Kluwer Academic Publishers, Chapter 30, 433–458, 1999.
- Holland, J. H. Adaptation in Natural and Artificial Systems. The University of Michigan Press, Ann Arbor, MI, 1975.
- Dorigo, M. Optimization, Learning and Natural Algorithms (in Italian). Ph.D. thesis, DEI, Politecnico di Milano, Italy, 1992.
- Kennedy, J., and Eberhart, R. "Particle swarm optimization". Proceedings of the 1995 IEEE International Conference on Neural Networks, 1942–1948, IEEE Press, 1995.

References (cont.)
- Tseng, L.-Y., and Chen, C. "Multiple trajectory search for large scale global optimization". Proceedings of the 2008 IEEE Congress on Evolutionary Computation, June 2–6, 2008, Hong Kong.
- Tseng, L.-Y., and Chen, C. "Multiple trajectory search for unconstrained/constrained multi-objective optimization". Proceedings of the 2009 IEEE Congress on Evolutionary Computation, May 18–21, 2009, Trondheim, Norway.