
Escaping Local Optima

Where are we? Optimization methods divide by how they build solutions:
- Complete solutions (search over full candidate solutions): exhaustive search, hill climbing, random restart, the general model of stochastic local search, simulated annealing, tabu search
- Partial solutions (solutions built incrementally): exhaustive search, branch and bound, greedy, best first, A*, divide and conquer, dynamic programming, constraint propagation

Escaping local optima: stochastic local search. Many important algorithms address the problem of avoiding the trap of local optima (a possible source of project topics). Michalewicz & Fogel focus on two of them: simulated annealing and tabu search.

Formal model of Stochastic Local Search (SLS): Hoos and Stützle. The goal is to abstract the simple search subroutines from the high-level control structure, so that various search methods can be experimented with systematically.

Formal model of Stochastic Local Search (SLS): Hoos and Stützle
Generalized Local Search Machine (GLSM) M:
M = (Z, z0, M, m0, Δ, σ_Z, σ_Δ, τ_Z, τ_Δ)
- Z: set of states (the basic search strategies)
- z0 ∈ Z: start state
- M: memory space
- m0 ∈ M: start memory state
- Δ ⊆ Z×Z: transition relation (when to switch to another type of search)
- σ_Z: set of state types; σ_Δ: set of transition types
- τ_Z: Z → σ_Z, associating each state with its type
- τ_Δ: Δ → σ_Δ, associating each transition with its type
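As a concrete aid, here is a minimal sketch of a GLSM held as a plain data structure; the field names are ours, since Hoos and Stützle define the machine mathematically rather than as code. The instance shown mirrors the basic hill-climbing machine on the next slide.

```python
from dataclasses import dataclass

@dataclass
class GLSM:
    states: set           # Z: names of the basic search strategies
    start_state: str      # z0
    memory: set           # M: memory space
    start_memory: object  # m0
    transitions: set      # Delta: set of (from_state, to_state) pairs
    state_type: dict      # tau_Z: state -> type, e.g. "random choice"
    trans_type: dict      # tau_Delta: (from, to) -> type, e.g. "Det"

# Basic hill climbing as a GLSM (matches the next slide).
hill_climb_glsm = GLSM(
    states={"z0", "z1"}, start_state="z0",
    memory={"m0"}, start_memory="m0",
    transitions={("z0", "z1"), ("z1", "z1")},
    state_type={"z0": "random choice", "z1": "select better neighbour"},
    trans_type={("z0", "z1"): "Det", ("z1", "z1"): "Det"},
)
```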

(0) Basic Hill climbing

determine initial solution s
while s is not a local optimum
  choose s’ in N(s) such that f(s’) > f(s)
  s = s’
return s

As a GLSM: M = (Z, z0, M, m0, Δ, σ_Z, σ_Δ, τ_Z, τ_Δ)
Z = {z0, z1}
M = {m0}  // memory not used in this model
Δ = {(z0, z1), (z1, z1)}
σ_Z = {random choice, select better neighbour}; σ_Δ = {Det}
τ_Z(z0) = random choice, τ_Z(z1) = select better neighbour
τ_Δ((z0, z1)) = Det, τ_Δ((z1, z1)) = Det
(State diagram: z0 → z1, with a self-loop on z1; both transitions are Det.)
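A minimal runnable sketch of this pseudocode, where the fitness function f and the neighbourhood function N are generic placeholders supplied by the caller:

```python
import random

def hill_climb(s, f, N):
    """Basic hill climbing: repeatedly move to a strictly better
    neighbour; stop at a local optimum. f scores a solution,
    N(s) returns the neighbours of s."""
    while True:
        better = [t for t in N(s) if f(t) > f(s)]
        if not better:                 # s is a local optimum
            return s
        s = random.choice(better)      # any improving neighbour will do
```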

(1) Randomized Hill climbing

determine initial solution s; bestS = s
while termination condition not satisfied
  with probability p
    choose neighbour s’ at random
  else  // climb if possible
    choose s’ with f(s’) > f(s)
  s = s’
  if f(s) > f(bestS) then bestS = s
return bestS
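A sketch of the same loop in Python; the parameter names p and max_iters are ours, and f and N are again caller-supplied placeholders:

```python
import random

def randomized_hill_climb(s, f, N, p, max_iters):
    """Randomized hill climbing: with probability p take a random
    neighbour (a random-walk step), otherwise climb to a better
    neighbour if one exists. Tracks the best solution seen."""
    best = s
    for _ in range(max_iters):
        neighbours = list(N(s))
        if random.random() < p:
            s = random.choice(neighbours)           # random walk step
        else:
            better = [t for t in neighbours if f(t) > f(s)]
            if better:                              # climb if possible
                s = random.choice(better)
        if f(s) > f(best):
            best = s
    return best
```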

(1) Randomized Hill climbing as a GLSM: M = (Z, z0, M, m0, Δ, σ_Z, σ_Δ, τ_Z, τ_Δ)
Z = {z0, z1, z2}
M = {m0}
Δ = {(z0,z1), (z1,z1), (z0,z2), (z1,z2), (z2,z1), (z2,z2)}
σ_Z = {random choice, select better neighbour, select any neighbour}
σ_Δ = {Prob(p), Prob(1-p)}
τ_Z(z0) = random choice, τ_Z(z1) = select better neighbour, τ_Z(z2) = select any neighbour
τ_Δ((z0,z1)) = Prob(p), τ_Δ((z1,z1)) = Prob(p), τ_Δ((z2,z1)) = Prob(p)
τ_Δ((z0,z2)) = Prob(1-p), τ_Δ((z1,z2)) = Prob(1-p), τ_Δ((z2,z2)) = Prob(1-p)
(State diagram: every transition into z1 carries Prob(p), every transition into z2 carries Prob(1-p).)

(2) Variable Neighbourhood descent

determine initial solution s
i = 1
repeat
  choose neighbour s’ in N_i(s) with maximal f(s’)
  if f(s’) > f(s)
    s = s’
    i = 1      // restart in first neighbourhood
  else
    i = i + 1  // go to next neighbourhood
until i > iMax
return s

*an example of using memory to track the neighbourhood definition; a code sketch follows
(State diagram: z0 → z1 with Prob(1), i = 1; z1 loops with NewBest(T), i = 1 and NewBest(F), i++.)
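A minimal sketch of this descent, assuming the neighbourhoods N_1..N_k are passed as a list of functions (so the slide's iMax is just the length of that list):

```python
def variable_neighbourhood_descent(s, f, neighbourhoods):
    """On improvement, restart at the first neighbourhood; otherwise
    move on to the next, typically larger, neighbourhood. Stops when
    the last neighbourhood yields no improvement."""
    i = 0
    while i < len(neighbourhoods):
        candidates = list(neighbourhoods[i](s))
        best = max(candidates, key=f) if candidates else None
        if best is not None and f(best) > f(s):
            s = best
            i = 0              # restart in the first neighbourhood
        else:
            i += 1             # go to the next neighbourhood
    return s
```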

Hoos and Stützle terminology: transition types
- Det: deterministic
- CDet(R), CDet(not R): conditional deterministic on condition R
- Prob(p), Prob(1-p): probabilistic
- CProb(not R, p), CProb(not R, 1-p): conditional probabilistic

Hoos and Stützle terminology: search subroutines (the z states)
- RP: random pick (the usual start state)
- RW: random walk (move to any neighbour)
- BI: best in neighbourhood

Some examples (state diagrams): 1. a machine over states RP and BI, with RP → BI on Det, BI looping on CDet(not R), and a restart to RP on CDet(R); 2. a machine over states RP, BI, and RW, with Prob(p)/Prob(1-p) transitions between BI and RW, CProb(not R, p)/CProb(not R, 1-p) loops, and a CDet(R) restart.

Simulated annealing. The metaphor: slow cooling of liquid metals allows the crystal structure to align properly. A “temperature” T is slowly lowered to reduce the random movement of the solution s in the solution space.

Simulated Annealing

determine initial solution s; bestS = s
T = T0
while termination condition not satisfied
  choose s’ in N(s) probabilistically
  if s’ is “acceptable”  // a function of T and f(s’)
    s = s’
    if f(s) > f(bestS) then bestS = s
  update T
return bestS

(GLSM view: RP → SA(T), entered with Det: T = T0; SA(T) loops on itself with Det: update(T).)

Accepting a new solution
- acceptance is more likely if f(s’) > f(s)
- as execution proceeds, the probability of accepting an s’ with f(s’) < f(s) decreases, so the search behaves more and more like hill climbing

The acceptance function evolves as T does. A typical choice is the Metropolis rule: accept s’ with probability p = min(1, e^((f(s’) - f(s))/T)), so p = 1 whenever f(s’) - f(s) > 0, and worsening moves are accepted ever more rarely as T shrinks.
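A minimal runnable sketch combining the loop above with Metropolis acceptance and geometric cooling; T0, alpha, and max_iters are illustrative defaults rather than values from the slides, and f and N remain caller-supplied placeholders:

```python
import math
import random

def simulated_annealing(s, f, N, T0=10.0, alpha=0.95, max_iters=10000):
    """Simulated annealing, maximizing f. N(s) returns the neighbours
    of s; T cools geometrically by the factor alpha each iteration."""
    best, T = s, T0
    for _ in range(max_iters):
        s_new = random.choice(list(N(s)))   # choose s' probabilistically
        delta = f(s_new) - f(s)
        # Metropolis acceptance: p = min(1, e^(delta / T))
        if delta > 0 or random.random() < math.exp(delta / T):
            s = s_new
            if f(s) > f(best):
                best = s
        T *= alpha                          # update T: cool slowly
    return best
```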

Simulated annealing with SAT: algorithm SA-SAT, p. 123
propositions: P1, …, Pn
expression: F = D1 ∧ D2 ∧ … ∧ Dk
recall CNF: each clause Di is a disjunction of propositions and negated propositions, e.g., Px ∨ ¬Py ∨ Pz ∨ ¬Pw
fitness function: the number of true clauses

Inner iteration: the SA(T) state of the GLSM

assign a random truth assignment, e.g., TFFT
repeat
  for i = 1 to 4
    flip the truth value of proposition i
    evaluate the result
    decide (probabilistically) whether to keep the changed value
  reduce T

(The slide traces one such pass, the assignment moving through values such as TFFT, FFFT, FTFT, FFTT, FFTF. A code sketch of this loop follows.)
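A sketch of this inner loop, assuming clauses are represented as lists of (variable index, negated?) pairs; the representation and function names are ours, not the book's:

```python
import math
import random

def sat_fitness(assignment, clauses):
    """Number of satisfied clauses. Each clause is a list of
    (var_index, negated) pairs; a literal holds iff the variable's
    value differs from its negation flag."""
    return sum(
        any(assignment[i] != neg for i, neg in clause)
        for clause in clauses
    )

def sa_sat_pass(assignment, clauses, T):
    """One inner SA-SAT pass: try flipping each proposition in turn,
    keeping worsening flips only with the Metropolis probability."""
    for i in range(len(assignment)):
        old = sat_fitness(assignment, clauses)
        assignment[i] = not assignment[i]          # flip proposition i
        delta = sat_fitness(assignment, clauses) - old
        if delta < 0 and random.random() >= math.exp(delta / T):
            assignment[i] = not assignment[i]      # reject: flip back
    return assignment
```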

Tabu search (“taboo”)
- always moves to the best available solution, but some choices (neighbours) are ineligible (tabu)
- ineligibility is based on recent moves: once a neighbour edge is used, it cannot be removed (it is tabu) for a few iterations
- the search does not stop at a local optimum

Symmetric TSP example
set of 9 cities {A, B, C, D, E, F, G, H, I}
neighbourhood defined by 2-opt* (27 neighbours)
current tour: B - E - F - H - I - A - D - C - G - B
move to the 2-opt neighbour: B - D - A - I - H - F - E - C - G - B
edges B-D and E-C are now tabu, i.e., the next 2-opt swaps cannot involve these edges
*the example in the book uses 2-swap, p. 131

TSP example, algorithm p. 133
how long will an edge be tabu? 3 iterations
how to track and restore eligibility? a data structure storing the tabu status of the 9·8/2 = 36 edges
current tour: B - D - A - I - H - F - E - C - G - B
recency-based memory (entries are the remaining tabu iterations per edge):

     A  B  C  D  E  F  G  H
  I  ·  ·  ·  ·  ·  ·  ·  ·
  H  ·  ·  ·  ·  ·  ·  ·
  G  ·  ·  ·  ·  ·  ·
  F  0  0  0  0  0
  E  0  0  3  0
  D  2  3  0
  C  0  0
  B  0

(Rows I, H, and G are not shown in the transcript; per the next slide, edges I-H, H-F, and G-B also currently hold nonzero tenures.)
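One way to code such recency-based memory; this is a sketch of a reasonable structure (a dictionary keyed by unordered city pairs rather than the literal triangular matrix pictured on the slide):

```python
class TabuList:
    """Remaining tabu iterations per edge, keyed by an unordered
    pair of cities."""
    def __init__(self, tenure=3):
        self.tenure = tenure
        self.remaining = {}       # frozenset({i, j}) -> iterations left

    def make_tabu(self, i, j):
        self.remaining[frozenset((i, j))] = self.tenure

    def is_tabu(self, i, j):
        return self.remaining.get(frozenset((i, j)), 0) > 0

    def tick(self):
        """Call once per iteration so edges regain eligibility."""
        for edge in list(self.remaining):
            self.remaining[edge] -= 1
            if self.remaining[edge] <= 0:
                del self.remaining[edge]
```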

procedure tabu search in TSP
begin
  repeat until a condition is satisfied
    generate a tour
    repeat until a (different) condition is satisfied
      identify a set T of 2-opt moves
      select the best admissible (not tabu) move from T
      make the move
      update the tabu list and other variables
      if the new tour is the best so far, update the best-tour information
end

This algorithm repeatedly starts from a fresh random tour of the cities. From each starting tour it repeatedly moves to the best admissible neighbour; it does not stop at a hilltop but keeps moving. A hedged skeleton of this loop structure appears below.
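The sketch below fills in the loop structure; the helper names (two_opt_moves, apply_move) and the loop bounds are illustrative placeholders, since the procedure leaves those choices open, and it reuses the TabuList sketch shown earlier:

```python
import random

def tabu_search_tsp(cities, f, two_opt_moves, apply_move,
                    restarts, inner_iters, tenure=3):
    """Skeleton of tabu search for the TSP. f(tour) is a fitness to
    maximize (e.g. negative tour length); two_opt_moves(tour) yields
    (move, edges_used) pairs; apply_move(tour, move) returns the new
    tour."""
    best = None
    for _ in range(restarts):
        tour = random.sample(list(cities), len(cities))  # generate a tour
        tabu = TabuList(tenure=tenure)
        for _ in range(inner_iters):
            tabu.tick()                    # restore eligibility over time
            admissible = [
                (move, edges) for move, edges in two_opt_moves(tour)
                if not any(tabu.is_tabu(i, j) for i, j in edges)
            ]
            if not admissible:
                break
            move, edges = max(
                admissible, key=lambda m: f(apply_move(tour, m[0]))
            )
            tour = apply_move(tour, move)
            for i, j in edges:
                tabu.make_tabu(i, j)       # newly used edges become tabu
            if best is None or f(tour) > f(best):
                best = tour
    return best
```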

applying 2-opt with tabu
from the table, the tabu edges of the current tour B - D - A - I - H - F - E - C - G - B leave only A-I, F-E, and C-G free, so 2-opt can only consider the swaps:
- A-I and F-E
- A-I and C-G
- F-E and C-G

importance of parameters
once an algorithm is designed, it must be “tuned” to the problem:
- selecting the fitness function and the neighbourhood definition
- setting values for its parameters
this tuning is usually done experimentally

procedure tabu search in TSP (repeated from above)
begin
  repeat until a condition is satisfied
    generate a tour
    repeat until a (different) condition is satisfied
      identify a set T of 2-opt moves
      select the best admissible (not tabu) move from T
      make the move
      update the tabu list and other variables
      if the new tour is the best so far, update the best-tour information
end

Choices in ‘tuning’ the algorithm:
- what conditions control the repeated executions: counted loops, a fitness threshold, stagnancy (no improvement)
- how to generate the first tour: random, greedy, ‘informed’
- how to define the neighbourhood
- how long to keep edges on the tabu list
- other variables: e.g., a counter for detecting stagnancy