1 Appendix C
   1. Min-Conflict Hill Climbing
   2. Min-Conflict Random Walk


2 Min-Conflict Hill Climbing

procedure GenerateLocalMoves(s, TotalMoves)
begin
    M' ← ∅; BestCost ← f(s)
    choose randomly a variable v in conflict
    choose a value d for v (d ≠ d_curr) that minimizes the number of conflicts for v
    m ← {v, d}
    if f(s ⊕ m) ≤ BestCost then        // accept improving moves and sideways moves
    begin
        if f(s ⊕ m) < BestCost then
        begin
            BestCost ← f(s ⊕ m); M' ← ∅
        end
        M' ← M' ∪ {m}
    end
    if M' = ∅ then TotalMoves ← MaxMoves
    return M'
end
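The slide leaves the CSP abstract. A minimal Python sketch of one min-conflict hill-climbing step, using n-queens as an assumed example problem (board[i] holds the row of the queen in column i, and f counts attacking pairs; these names are illustrative, not from the slides):

```python
import random

def attacks(board, c1, c2):
    """True if the queens in columns c1 and c2 attack each other."""
    return (board[c1] == board[c2] or
            abs(board[c1] - board[c2]) == abs(c1 - c2))

def f(board):
    """Evaluation function: number of attacking pairs (0 = solution)."""
    n = len(board)
    return sum(attacks(board, i, j) for i in range(n) for j in range(i + 1, n))

def mchc_move(board):
    """One min-conflict hill-climbing step: pick a random conflicting
    variable, try its best alternative value, and accept the move only
    if it is improving or sideways. Returns True if a move was applied."""
    n = len(board)
    in_conflict = [c for c in range(n)
                   if any(attacks(board, c, o) for o in range(n) if o != c)]
    if not in_conflict:
        return False                      # already a solution
    v = random.choice(in_conflict)
    d_curr = board[v]
    # choose d != d_curr that minimizes the number of conflicts for v
    def v_conflicts(row):
        return sum(board[o] == row or abs(board[o] - row) == abs(o - v)
                   for o in range(n) if o != v)
    d = min((r for r in range(n) if r != d_curr), key=v_conflicts)
    before = f(board)
    board[v] = d
    if f(board) <= before:                # improving or sideways: keep it
        return True
    board[v] = d_curr                     # worsening: undo, like M' = emptyset
    return False
```

Repeating `mchc_move` until it returns False gives the hill-climbing loop; a False return corresponds to the M' = ∅ case (solved, or only worsening moves remain), and because worsening moves are rejected, f never increases.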

3 Notes

How to create an initial solution in MCHC:
- First, randomly assign a domain value to the first variable; then assign each subsequent variable the domain value that causes the fewest conflicts with the previously assigned variables (a greedy preprocessing phase).

However, MCHC has no mechanism for escaping local minima.
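The greedy preprocessing phase can be sketched for the same assumed n-queens encoding (board[i] = row of the queen in column i):

```python
import random

def greedy_init(n):
    """Greedy initial solution: assign the first variable at random, then
    give each subsequent variable the value with the fewest conflicts
    against the already-assigned variables (ties broken at random)."""
    board = [random.randrange(n)]
    for col in range(1, n):
        def cost(row):
            # conflicts of placing `row` in `col` against columns 0..col-1
            return sum(board[c] == row or abs(board[c] - row) == col - c
                       for c in range(col))
        best = min(cost(r) for r in range(n))
        board.append(random.choice([r for r in range(n) if cost(r) == best]))
    return board
```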

4 Min-Conflict Random Walk

begin
    generate randomly an initial solution s
    n_iter := 0; n_moves := 0
    while f(s) > max_cost and n_moves < max_moves do
        if p > random number between 0 and 1 then
            choose randomly a variable V in conflict
            choose randomly a value v' for V
        else
            choose randomly a variable V in conflict
            choose a value v' for V that minimizes the number of conflicts for V
            (the current value is chosen only if all the other values
             increase the number of violated constraints)
        if v' is different from the current value of V then
            assign v' to V
            n_moves := n_moves + 1
        n_iter := n_iter + 1
    endwhile
    output(s)
end
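A direct Python transcription of this loop, again on an assumed n-queens instance (the cap on n_iter is a safety addition not in the slide's pseudocode, which bounds only n_moves):

```python
import random

def conflicts_of(board, v):
    """Number of queens attacking column v's queen (board[i] = row)."""
    return sum(board[o] == board[v] or
               abs(board[o] - board[v]) == abs(o - v)
               for o in range(len(board)) if o != v)

def f(board):
    """Number of violated constraints (attacking pairs)."""
    return sum(conflicts_of(board, v) for v in range(len(board))) // 2

def mcrw(n, p=0.35, max_moves=10_000, max_cost=0):
    board = [random.randrange(n) for _ in range(n)]  # random initial solution
    n_moves = n_iter = 0
    while (f(board) > max_cost and n_moves < max_moves
           and n_iter < 20 * max_moves):
        V = random.choice([v for v in range(n) if conflicts_of(board, v) > 0])
        if p > random.random():
            v_new = random.randrange(n)              # random-walk move
        else:
            # min-conflict move: keep the current value only if every
            # other value increases the number of violated constraints
            d_curr = board[V]
            def cost(row):
                return sum(board[o] == row or abs(board[o] - row) == abs(o - V)
                           for o in range(n) if o != V)
            alt = min((r for r in range(n) if r != d_curr), key=cost)
            v_new = alt if cost(alt) <= cost(d_curr) else d_curr
        if v_new != board[V]:                        # only changes count as moves
            board[V] = v_new
            n_moves += 1
        n_iter += 1
    return board
```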

5 Notes

The MCRW algorithm is controlled by the random-walk probability p. An iteration that leads to a new solution different from the current one is called a move; MCRW may have many iterations that do not lead to a move. One possible improvement:
- If an iteration fails to lead to a move for a conflicting variable V, V is not considered in subsequent iterations until a move is effectively carried out.
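This improvement amounts to a small piece of bookkeeping: a set of "frozen" variables that are skipped until the next effective move. A sketch of MCRW with this addition, on the same assumed n-queens encoding (the n_iter cap is again a safety addition):

```python
import random

def min_conflict_value(board, V):
    """Value for V minimizing its conflicts; the current value is kept
    only if all other values increase the number of violations."""
    n, d_curr = len(board), board[V]
    def cost(row):
        return sum(board[o] == row or abs(board[o] - row) == abs(o - V)
                   for o in range(n) if o != V)
    alt = min((r for r in range(n) if r != d_curr), key=cost)
    return alt if cost(alt) <= cost(d_curr) else d_curr

def mcrw_improved(n, p=0.35, max_moves=5_000):
    board = [random.randrange(n) for _ in range(n)]
    frozen = set()            # variables skipped after a failed iteration
    n_moves = n_iter = 0
    def in_conflict(v):
        return any(board[o] == board[v] or
                   abs(board[o] - board[v]) == abs(o - v)
                   for o in range(n) if o != v)
    while (n_moves < max_moves and n_iter < 20 * max_moves
           and any(in_conflict(v) for v in range(n))):
        n_iter += 1
        candidates = [v for v in range(n)
                      if in_conflict(v) and v not in frozen]
        if not candidates:
            frozen.clear()    # everything frozen: reset and retry
            continue
        V = random.choice(candidates)
        v_new = (random.randrange(n) if p > random.random()
                 else min_conflict_value(board, V))
        if v_new != board[V]:
            board[V] = v_new
            n_moves += 1
            frozen.clear()    # a move was carried out: all variables eligible
        else:
            frozen.add(V)     # failed iteration: skip V until the next move
    return board
```

Clearing the frozen set after every effective move keeps the scheme conservative: no variable is excluded for longer than the gap between two moves.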