Iterative Improvement Search, Including Hill Climbing, Simulated Annealing, WALKSAT, and more...

Presentation transcript:

Iterative Improvement Search, Including Hill Climbing, Simulated Annealing, WALKSAT, and more...

An overview of the form of search
Given:
- a set of states (or configurations)
- a function that evaluates each state (or configuration), Eval(State)
Locate: the global maximum, i.e. the X* such that Eval(X*) ≥ Eval(X_i) for all states X_i.

Some problems take this form
Find the shortest circuit through the cities (TSP):
- A State = ?
- Eval(State) = ?
Find assignments to variables that satisfy the clauses (Boolean satisfiability):
- A State = ?
- Eval(State) = ?

Some problems take this form
Find the shortest circuit through the cities (TSP):
- A State = an ordering of the cities to visit, e.g. {Orlando, Chicago, ...}
- Eval(State) = length of the path defined by the permutation (minimize)
Find assignments to variables that satisfy the clauses (Boolean satisfiability):
- A State = a set of assignments to {A, B, C, D, E}, e.g. {1, 1, 1, 1, 1}
- Eval(State) = number of clauses that are satisfied (maximize)
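A minimal sketch of these two evaluation functions in Python, under assumed state representations that the slides do not fix: a TSP state as a list of (x, y) city coordinates in visiting order, and a SAT state as a dict mapping variables to 0/1, with clauses given as lists of (variable, wanted_value) literals.

```python
import math

def eval_tsp(tour):
    """Length of the closed circuit through the cities (minimize)."""
    return sum(math.dist(tour[i], tour[(i + 1) % len(tour)])
               for i in range(len(tour)))

def eval_sat(assignment, clauses):
    """Number of satisfied clauses (maximize)."""
    return sum(any(assignment[var] == wanted for var, wanted in clause)
               for clause in clauses)
```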

More problems take this form
Assign jobs to machines so that run time is minimized:
- A State = ?
- Eval(State) = ?
Place components on a chip so that the layout is maximally compact and efficient, routing-wise:
- A State = ?
- Eval(State) = ?

More problems take this form
Assign n jobs to m machines so that run time is minimized:
- A State = an assignment of the n jobs to the m machines
- Eval(State) = completion time of the jobs (minimize)
Place components on a chip so that the layout is maximally compact and efficient, routing-wise:
- A State = a placement of components and a routing of interconnections
- Eval(State) = distance between components + % of unused space + routing length (minimize)
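A sketch of the job-assignment evaluation, assuming each job has a known run time and that "completion time" means the makespan (the finishing time of the most loaded machine); the chip-placement Eval is omitted here.

```python
def eval_assignment(assignment, job_times, num_machines):
    """assignment[j] = machine for job j; returns the makespan (minimize)."""
    load = [0.0] * num_machines
    for job, machine in enumerate(assignment):
        load[machine] += job_times[job]
    return max(load)
```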

These are problems in which, in general:
- A combinatorial state space is being optimized.
- There is a cost function, Eval(state): state → real number, to be optimized, or at least a reasonable solution is to be found.
- Searching all possible states is intractable.
- Depth-first search approaches are too expensive.
- There is no known algorithm for finding the optimal solution efficiently.
- Similar solutions have similar costs.

Iterative improvement search
Intuition: think of the configurations as laid out on the surface of a landscape; we want to find the highest point in the configuration space. Unlike many other AI search problems, such as the 8-puzzle, we don't care how we get there.
"Iterative improvement" methods, generally: start at a random configuration; repeatedly consider various moves; accept some and reject some; when you're stuck, restart.
These methods demand a moveset that describes which moves we will consider from any configuration. The moveset is analogous to the successor function in the search problems covered earlier. (A generic skeleton is sketched below.) Let's invent movesets for our sample problems.
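A minimal Python sketch of this generic loop with random restarts; random_state, moveset, and eval_fn are assumed problem-specific callbacks, not a real library API.

```python
def iterative_improvement(random_state, moveset, eval_fn, restarts=10):
    best, best_score = None, float("-inf")
    for _ in range(restarts):                  # restart when stuck
        x = random_state()
        score = eval_fn(x)
        while True:
            for neighbor in moveset(x):        # consider various moves
                s = eval_fn(neighbor)
                if s > score:                  # accept an improving move...
                    x, score = neighbor, s
                    break                      # ...and re-derive the moveset
            else:
                break                          # no improving move: stuck
        if score > best_score:
            best, best_score = x, score
    return best
```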

Some problems take this form
Find the shortest circuit through the cities (TSP):
- Moveset(state) = ?
Find assignments to variables that satisfy the clauses (Boolean satisfiability):
- Moveset(state) = ?
(One standard answer is sketched below.)
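The slide leaves these blanks as an exercise; one standard answer, offered here as an assumption, is: for TSP, reverse a segment of the tour (a 2-change move, in the spirit of the 3-change example later), and for SAT, flip a single variable, as WALKSAT does.

```python
def moveset_tsp(tour):
    """All 2-change neighbors: reverse the segment tour[i..j]."""
    n = len(tour)
    for i in range(n - 1):
        for j in range(i + 1, n):
            yield tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def moveset_sat(assignment):
    """All single-variable flips of a 0/1 assignment dict."""
    for var in assignment:
        neighbor = dict(assignment)
        neighbor[var] = 1 - neighbor[var]      # flip one Boolean variable
        yield neighbor
```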

More problems take this form
Assign jobs to machines so that run time is minimized:
- Moveset(state) = ?
Place components on a chip so that the layout is maximally compact and efficient, routing-wise:
- Moveset(state) = ?
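For the job-assignment blank, one plausible moveset (again our assumption, not the slides') is to reassign any single job to a different machine.

```python
def moveset_jobs(assignment, num_machines):
    """All neighbors obtained by moving one job to another machine."""
    for job, machine in enumerate(assignment):
        for other in range(num_machines):
            if other != machine:
                neighbor = list(assignment)
                neighbor[job] = other
                yield neighbor
```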

Hill climbing
Hill climbing attempts to maximize Eval(X) by moving to the highest-valued configuration in our moveset. If all of them are lower, we are stuck at a "local optimum."
1. Let X := initial state (configuration)
2. Let E := Eval(X)
3. Let N := moveset_size(X)
4. For i = 0 to N-1: let E_i := Eval(move(X, i))
5. If every E_i ≤ E, terminate and return X
6. Else let i* := argmax_i E_i
7. X := move(X, i*)
8. E := E_i*
9. Go to 3
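A runnable rendering of this steepest-ascent pseudocode, reusing the moveset-as-generator convention assumed in the earlier sketches.

```python
def hill_climb(x, moveset, eval_fn):
    e = eval_fn(x)
    while True:
        best_x, best_e = None, e
        for neighbor in moveset(x):    # steps 3-4: score every move
            e_i = eval_fn(neighbor)
            if e_i > best_e:
                best_x, best_e = neighbor, e_i
        if best_x is None:             # step 5: all E_i <= E, local optimum
            return x
        x, e = best_x, best_e          # steps 6-8: take the argmax move
```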

Hill Climbing Issues & Details
- Requires no memory (since it involves no backtracking).
- Trivial to program.
- The moveset design is critical. This is the real ingenuity, not the decision to use hill climbing.
- Evaluation function design is often critical as well.
- Problems: dense local optima or plateaux.
- If the number of moves is enormous, the algorithm may be inefficient. What to do?
- If the number of moves is tiny, the algorithm can get stuck easily. What to do?
- It is often cheaper to evaluate an incremental change to a previously evaluated state than to evaluate from scratch. Does hill climbing permit that? (See the delta-evaluation sketch after this list.)
- What if approximate evaluation is cheaper than accurate evaluation?
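To illustrate the incremental-evaluation point, here is a sketch for the TSP 2-change move: only the two replaced edges need recomputing, so the delta costs O(1) instead of re-summing the whole O(n) tour. This is our illustration, not something the slides spell out.

```python
import math

def delta_two_opt(tour, i, j):
    """Change in tour length if tour[i..j] is reversed."""
    n = len(tour)
    a, b = tour[i - 1], tour[i]          # edge a-b is removed
    c, d = tour[j], tour[(j + 1) % n]    # edge c-d is removed
    return (math.dist(a, c) + math.dist(b, d)
            - math.dist(a, b) - math.dist(c, d))
```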

Randomized Hill Climbing
1. Let X := initial state or configuration
2. Let E := Eval(X)
3. Let i := a random move from the moveset
4. Let E_i := Eval(move(X, i))
5. If E < E_i, then X := move(X, i) and E := E_i
6. Go to 3 unless bored.
What stopping criterion should we use? Any obvious pros or cons compared with our previous hill climber?
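A direct rendering of this climber; the fixed iteration budget is one plausible stopping criterion ("unless bored"), not the slides' own choice.

```python
def randomized_hill_climb(x, random_move, eval_fn, iters=10_000):
    e = eval_fn(x)
    for _ in range(iters):
        candidate = random_move(x)     # sample ONE random move
        e_i = eval_fn(candidate)
        if e_i > e:                    # step 5: accept strict improvements
            x, e = candidate, e_i
    return x
```

Compared with the previous climber, each step costs a single evaluation rather than a full moveset scan, at the price of slower, noisier progress toward a local optimum.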

Hill climbing example: WALKSAT (randomized GSAT)
- Pick a random unsatisfied clause.
- Consider 3 moves: flipping each of the clause's variables (assuming 3-CNF clauses).
- If any move improves Eval(X), accept the best.
- If none improves Eval(X): 50% of the time, pick the move that is the least bad; 50% of the time, pick a random one.
At the time, this was the best known algorithm for satisfying Boolean formulae [Selman, Kautz, and Cohen, 1994].
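A compact sketch following the description on this slide, using the clause representation assumed in the earlier examples; the helper names are ours, not the original paper's.

```python
import random

def num_satisfied(assignment, clauses):
    return sum(any(assignment[v] == w for v, w in c) for c in clauses)

def walksat(assignment, clauses, max_flips=100_000):
    for _ in range(max_flips):
        unsat = [c for c in clauses
                 if not any(assignment[v] == w for v, w in c)]
        if not unsat:
            return assignment                      # everything satisfied
        clause = random.choice(unsat)              # random unsatisfied clause
        base = num_satisfied(assignment, clauses)
        scored = []
        for var, _ in clause:                      # try flipping each variable
            assignment[var] ^= 1
            scored.append((num_satisfied(assignment, clauses), var))
            assignment[var] ^= 1                   # undo the trial flip
        best_score, best_var = max(scored, key=lambda t: t[0])
        if best_score > base:
            assignment[best_var] ^= 1              # greedy improving flip
        elif random.random() < 0.5:
            assignment[best_var] ^= 1              # least-bad flip
        else:
            assignment[random.choice(clause)[0]] ^= 1  # random flip
    return None                                    # gave up within the budget
```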

Hill Climbing Example: TSP [figure slide]

3-change example [figure slide]

Hill-climbing Example: TSP [figure slide]