Local Search Strategies: From N-Queens to Walksat


Local Search Strategies: From N-Queens to Walksat
Henry Kautz

Greedy Local Search

state = choose_start_state();
while ! GoalTest(state) do
    state := arg min { h(s) | s in Neighbors(state) }
end
return state;

Terminology:
    "neighbors" instead of "children"
    the heuristic h(s) is the "objective function"; it need not be admissible
No guarantee of finding a solution (sometimes a probabilistic guarantee)
Good for goal-finding, not path-finding
Many variations
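As a point of reference, here is a minimal Python sketch of the loop above; goal_test, neighbors, and h are problem-specific callables supplied by the caller, and the names are only illustrative.

```python
def greedy_local_search(start_state, goal_test, neighbors, h):
    """Move to the best (lowest-h) neighbor until goal_test succeeds.

    Note: with no tie-breaking or escape mechanism this can stall or cycle
    on a plateau or local minimum, matching the "no guarantee" caveat above.
    """
    state = start_state
    while not goal_test(state):
        state = min(neighbors(state), key=h)  # arg min over Neighbors(state)
    return state
```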

N-Queens Local Search, Version 1

state = choose_start_state();
while ! GoalTest(state) do
    state := arg min { h(s) | s in Neighbors(state) }
end
return state;

start = put down N queens randomly
GoalTest = board has no attacking pairs
h = number of attacking pairs
neighbors = move one queen randomly
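A possible instantiation of Version 1 in Python. For brevity this sketch keeps one queen per column (a common simplification, slightly more restrictive than the slide's "put down N queens randomly"); h counts attacking pairs and a neighbor moves one queen within its column.

```python
import random
from itertools import combinations

def attacking_pairs(rows):
    """h = number of attacking pairs (same row or same diagonal)."""
    return sum(1 for (c1, r1), (c2, r2) in combinations(enumerate(rows), 2)
               if r1 == r2 or abs(r1 - r2) == abs(c1 - c2))

def queen_neighbors(rows):
    """Neighbors: move one queen to a different row within its column."""
    for c in range(len(rows)):
        for r in range(len(rows)):
            if r != rows[c]:
                yield rows[:c] + (r,) + rows[c + 1:]

n = 8
start = tuple(random.randrange(n) for _ in range(n))  # N queens placed randomly
# May loop forever at a local minimum -- see the landscape slides below:
# solution = greedy_local_search(start, lambda s: attacking_pairs(s) == 0,
#                                queen_neighbors, attacking_pairs)
```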

N-Queens Local Search, Version 2

state = choose_start_state();
while ! GoalTest(state) do
    state := arg min { h(s) | s in Neighbors(state) }
end
return state;

start = put a queen on each square with 50% probability
GoalTest = board has N queens and no attacking pairs
h = number of attacking pairs + number of rows with no queens
neighbors = add or delete one queen

SAT Translation

At least one queen in each row:
(Q11 v Q12 v Q13 v ... v Q18)
(Q21 v Q22 v Q23 v ... v Q28)
...
O(N^2) clauses

No attacks:
(~Q11 v ~Q12)
(~Q11 v ~Q21)
(~Q11 v ~Q22)
...
O(N^3) clauses
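A rough generator for this encoding, assuming a variable Q(r, c) meaning "a queen sits on row r, column c" and representing a literal as a ((r, c), polarity) pair; these representation choices are illustrative, not from the lecture. The pairwise no-attack clauses dominate the clause count, as noted above.

```python
def queens_cnf(n):
    """Return the clauses sketched above as lists of ((row, col), polarity) literals."""
    clauses = []
    # At least one queen in each row: (Qr1 v Qr2 v ... v Qrn)
    for r in range(n):
        clauses.append([((r, c), True) for c in range(n)])
    # No attacks: for every pair of squares that attack each other,
    # at most one may hold a queen: (~Q[r1][c1] v ~Q[r2][c2])
    squares = [(r, c) for r in range(n) for c in range(n)]
    for i, (r1, c1) in enumerate(squares):
        for (r2, c2) in squares[i + 1:]:
            if r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2):
                clauses.append([((r1, c1), False), ((r2, c2), False)])
    return clauses
```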

Greedy Local Search for SAT

state = choose_start_state();
while ! GoalTest(state) do
    state := arg min { h(s) | s in Neighbors(state) }
end
return state;

start = random truth assignment
GoalTest = formula is satisfied
h = number of unsatisfied clauses
neighbors = flip one variable
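A sketch of this instantiation as a single greedy flip, assuming a formula is a list of clauses, each clause a list of (variable, polarity) literals, and a state is a dict from variables to booleans; this representation is an assumption of the sketch, not the slide's.

```python
def unsat_count(formula, state):
    """h = number of clauses with no true literal under the assignment `state`."""
    return sum(1 for clause in formula
               if not any(state[v] == pol for v, pol in clause))

def greedy_flip(formula, state):
    """One greedy step: flip whichever single variable minimizes unsat_count."""
    best = min(state, key=lambda v: unsat_count(formula, {**state, v: not state[v]}))
    return {**state, best: not state[best]}
```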

Local Search Landscape
[figure: example landscape; vertical axis = # unsat clauses]

States Where Greedy Search Must Succeed
[figure: landscape highlighting such states; vertical axis = # unsat clauses]

States Where Greedy Search Might Succeed
[figure: landscape highlighting such states; vertical axis = # unsat clauses]

Local Search Landscape
[figure: landscape showing a plateau and a local minimum; vertical axis = # unsat clauses]

Variations of Greedy Search

Where to start?
    RANDOM STATE
    PRETTY GOOD STATE
What to do when a local minimum is reached?
    STOP
    KEEP GOING
Which neighbor to move to?
    (any) BEST neighbor
    (any) BETTER neighbor
How to make greedy search more robust?

Restarts

for run = 1 to max_runs do
    state = choose_start_state();
    flip = 0;
    while ! GoalTest(state) && flip++ < max_flips do
        state := arg min { h(s) | s in Neighbors(state) }
    end
    if GoalTest(state) return state;
end
return FAIL
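A sketch of the restart wrapper, reusing the greedy step from before; max_runs and max_flips are the parameters named above, and FAIL is reported as None.

```python
def greedy_with_restarts(choose_start_state, goal_test, neighbors, h,
                         max_runs=10, max_flips=1000):
    """Run the greedy loop up to max_runs times from fresh starting states."""
    for _ in range(max_runs):
        state = choose_start_state()
        for _ in range(max_flips):
            if goal_test(state):
                return state
            state = min(neighbors(state), key=h)
        if goal_test(state):
            return state
    return None  # FAIL
```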

Uphill Moves: Random Noise

state = choose_start_state();
while ! GoalTest(state) do
    with probability noise do
        state = random member Neighbors(state)
    else
        state := arg min { h(s) | s in Neighbors(state) }
end
return state;
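As a sketch, the noisy step is a small change to the greedy step; the noise parameter and helper names are illustrative.

```python
import random

def noisy_step(state, neighbors, h, noise=0.2):
    """With probability `noise` take a random (possibly uphill) neighbor,
    otherwise take the usual greedy step."""
    nbrs = list(neighbors(state))
    if random.random() < noise:
        return random.choice(nbrs)
    return min(nbrs, key=h)
```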

Uphill Moves: Simulated Annealing (Constant Temperature)

state = start;
while ! GoalTest(state) do
    next = random member Neighbors(state);
    deltaE = h(next) - h(state);
    if deltaE <= 0 then
        state := next;
    else
        with probability e^(-deltaE / temperature) do
            state := next;
    endif
end
return state;

(The book reverses the sign of deltaE because it maximizes h rather than minimizing it.)

Uphill Moves: Simulated Annealing (Geometric Cooling Schedule)

temperature := start_temperature;
state = choose_start_state();
while ! GoalTest(state) do
    next = random member Neighbors(state);
    deltaE = h(next) - h(state);
    if deltaE <= 0 then
        state := next;
    else
        with probability e^(-deltaE / temperature) do
            state := next;
    endif
    temperature := cooling_rate * temperature;
end
return state;
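A runnable sketch of the geometrically cooled variant, written for minimizing h (so uphill means deltaE > 0); the parameter defaults are arbitrary choices, not values from the lecture.

```python
import math
import random

def simulated_annealing(start_state, goal_test, neighbors, h,
                        start_temperature=10.0, cooling_rate=0.95,
                        max_steps=10000):
    """Accept downhill moves always; accept uphill moves with prob e^(-deltaE/T)."""
    state, temperature = start_state, start_temperature
    for _ in range(max_steps):
        if goal_test(state):
            return state
        nxt = random.choice(list(neighbors(state)))
        delta_e = h(nxt) - h(state)
        if delta_e <= 0 or (temperature > 0 and
                            random.random() < math.exp(-delta_e / temperature)):
            state = nxt
        temperature *= cooling_rate  # geometric cooling schedule
    return state
```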

Simulated Annealing

For any finite problem with a fully-connected state space, simulated annealing provably converges to the optimum as the length of the schedule increases.
But: the formal bound requires exponential search time.
In many practical applications, problems can be solved with a faster, non-guaranteed schedule.

Smarter Noise Strategies

For both random noise and simulated annealing, nearly all uphill moves are useless.
Can we find uphill moves that are more likely to be helpful?
At least for SAT we can...

Random Walk for SAT

Observation: if a clause is unsatisfied, at least one variable in the clause must take a different value in any global solution.
Example clause: (A v ~B v C)
Suppose you randomly pick a variable from an unsatisfied clause to flip. What is the probability this was a good choice?

Random Walk Local Search

state = choose_start_state();
while ! GoalTest(state) do
    clause := random member { C | C is a clause of F and C is false in state };
    var := random member { x | x is a variable in clause };
    state[var] := 1 - state[var];
end
return state;
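A direct sketch of this procedure, using the same clause/assignment representation assumed in the earlier SAT snippets.

```python
import random

def random_walk_sat(formula, assignment, max_flips=100000):
    """Pick a false clause uniformly at random, flip a random variable in it."""
    state = dict(assignment)
    for _ in range(max_flips):
        false_clauses = [c for c in formula
                         if not any(state[v] == pol for v, pol in c)]
        if not false_clauses:
            return state  # satisfying assignment found
        clause = random.choice(false_clauses)
        var, _ = random.choice(clause)
        state[var] = not state[var]
    return None  # gave up
```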

Properties of Random Walk

If clause length = 2:
    50% chance of moving in the right direction
    Converges to optimal with high probability in O(n^2) time
[figure: one-dimensional random walk with a reflecting barrier]

Properties of Random Walk

If clause length = 2:
    50% chance of moving in the right direction
    Converges to optimal with high probability in O(n^2) time
    For any desired epsilon, there is a constant C such that if you run for C*n^2 steps, the probability of success is at least 1 - epsilon

Properties of Random Walk

If clause length = 3:
    1/3 chance of moving in the right direction
    Exponential convergence
Compare pure noise: 1/(n - Hamming distance) chance of moving in the right direction
    The closer you get to a solution, the more likely a noisy flip is bad
[figure: biased random walk with a reflecting barrier; probability 1/3 toward the solution, 2/3 away]

Greedy Random Walk

state = choose_start_state();
while ! GoalTest(state) do
    clause := random member { C | C is a clause of F and C is false in state };
    with probability noise do
        var := random member { x | x is a variable in clause };
    else
        var := arg min over x { #unsat(s) | x is a variable in clause, s and state differ only on x };
    end
    state[var] := 1 - state[var];
end
return state;

Refining Greedy Random Walk

Each flip
    makes some false clauses become true
    breaks some true clauses, which become false
Suppose s1 -> s2 by flipping x. Then:
    #unsat(s2) = #unsat(s1) - make(s1, x) + break(s1, x)
Idea 1: if a choice breaks nothing, it is very likely to be a good move
Idea 2: near the solution, only the break count matters; the make count is usually 1

Walksat

state = random truth assignment;
while ! GoalTest(state) do
    clause := random member { C | C is false in state };
    for each x in clause do compute break[x];
    if exists x with break[x] = 0 then
        var := x;
    else
        with probability noise do
            var := random member { x | x is in clause };
        else
            var := arg min over x { break[x] | x is in clause };
    endif
    state[var] := 1 - state[var];
end
return state;

Put everything inside of a restart loop.
Parameters: noise, max_flips, max_runs
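A compact sketch of Walksat along the lines of the pseudocode above, again with the clause representation assumed earlier; break counts are recomputed from scratch for clarity, whereas serious implementations maintain them incrementally.

```python
import random

def walksat(formula, variables, noise=0.5, max_flips=100000, max_runs=10):
    def satisfied(clause, state):
        return any(state[v] == pol for v, pol in clause)

    def break_count(x, state):
        """Number of currently-true clauses that become false if x is flipped."""
        flipped = {**state, x: not state[x]}
        return sum(1 for c in formula
                   if satisfied(c, state) and not satisfied(c, flipped))

    for _ in range(max_runs):                      # restart loop
        state = {v: random.random() < 0.5 for v in variables}
        for _ in range(max_flips):
            false_clauses = [c for c in formula if not satisfied(c, state)]
            if not false_clauses:
                return state
            clause = random.choice(false_clauses)
            cvars = [v for v, _ in clause]
            breaks = {v: break_count(v, state) for v in cvars}
            if min(breaks.values()) == 0:          # a "freebie": breaks nothing
                var = min(cvars, key=breaks.get)
            elif random.random() < noise:
                var = random.choice(cvars)         # random-walk move
            else:
                var = min(cvars, key=breaks.get)   # greedy: least-break variable
            state[var] = not state[var]
    return None  # FAIL
```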

Comparing Noise Strategies
[figures: experimental results on hard random 3-SAT and on SAT encodings of blocks-world planning]

Effect of Walksat Optimizations

Walksat Today

Hard random 3-SAT: 100,000 variables in 15 minutes
    Walksat (or slight variations of it) has won the "random formula" track of the International SAT Solver Competition every year
    Complete search methods: 700 variables
"Friendly" encoded problems (graph coloring, n-queens, ...): ~30,000 variables
    We will later see good backtracking algorithms and other interesting classes of problems
Inspired a huge body of research linking SAT testing to statistical physics (spin glasses)

Other Local Search Strategies

Tabu search (sketched below)
    Keep a history of the last K visited states
    Revisiting a state on the history list is "tabu"
Genetic algorithms
    Population = a set of K multiple search points
    Neighborhood = population U mutations U crossovers
        Mutation = a random change to a state
        Crossover = a random mix of assignments from two states
        Typically only a portion of the neighborhood is generated
    Search step: new population = the K best members of the neighborhood
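A minimal sketch of the tabu-search idea above; the history length k, the step bound, and the helper names are illustrative.

```python
from collections import deque

def tabu_search(start_state, goal_test, neighbors, h, k=100, max_steps=10000):
    """Greedy search that refuses to revisit any of the last k states."""
    state = start_state
    history = deque(maxlen=k)  # the "tabu" list of recently visited states
    for _ in range(max_steps):
        if goal_test(state):
            return state
        history.append(state)
        allowed = [s for s in neighbors(state) if s not in history]
        if not allowed:
            return None  # every neighbor is tabu
        state = min(allowed, key=h)  # best non-tabu neighbor, even if uphill
    return None
```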

Local Search in Continuous Spaces

Follow the gradient of the objective function f:
    a negative step along the gradient, x <- x - a*grad f(x), to minimize f
    a positive step along the gradient, x <- x + a*grad f(x), to maximize f
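A tiny sketch of the negative-step (descent) version; the quadratic example function, step size, and iteration count are illustrative assumptions.

```python
def gradient_descent(grad_f, x0, step_size=0.01, steps=1000):
    """Take negative gradient steps to (locally) minimize f."""
    x = list(x0)
    for _ in range(steps):
        g = grad_f(x)
        x = [xi - step_size * gi for xi, gi in zip(x, g)]  # negative step: minimize
        # for maximization, take a positive step instead: xi + step_size * gi
    return x

# Illustration on f(x, y) = x^2 + y^2, whose gradient is (2x, 2y):
print(gradient_descent(lambda p: [2 * p[0], 2 * p[1]], [3.0, -4.0]))  # near [0, 0]
```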