1
Introduction to Artificial Intelligence: Local Search (updated 4/30/2006) Henry Kautz
2
Local Search in Continuous Spaces [figure: gradient of f; take a step in the negative gradient direction to minimize f, in the positive gradient direction to maximize f]
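As a concrete illustration, here is a minimal gradient-descent sketch in Python; the objective, gradient, step size, and stopping tolerance are illustrative assumptions, not from the slide.

    import numpy as np

    def gradient_descent(grad_f, x0, step_size=0.1, max_iters=1000, tol=1e-6):
        # Repeatedly take a step in the negative gradient direction to minimize f.
        x = np.array(x0, dtype=float)
        for _ in range(max_iters):
            g = grad_f(x)
            if np.linalg.norm(g) < tol:        # gradient near zero: (local) minimum reached
                break
            x = x - step_size * g              # negative step minimizes f; a positive step would maximize it
        return x

    # Example: minimize f(x, y) = (x - 3)^2 + (y + 1)^2, whose gradient is 2*(x - 3, y + 1).
    x_min = gradient_descent(lambda v: 2 * (v - np.array([3.0, -1.0])), x0=[0.0, 0.0])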
3
Local Search in Discrete State Spaces

    state = choose_start_state();
    while ! GoalTest(state) do
        state := arg min { h(s) | s in Neighbors(state) }
    end
    return state;

Terminology:
–"neighbors" instead of "children"
–the heuristic h(s) is the "objective function"; it does not need to be admissible
No guarantee of finding a solution
–sometimes only a probabilistic guarantee
Best suited to goal-finding, not path-finding
Many variations
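A direct Python transcription of this loop, assuming the caller supplies choose_start_state, goal_test, h, and neighbors (hypothetical names); like the pseudocode, it can cycle or stall at a local minimum.

    def greedy_local_search(choose_start_state, goal_test, h, neighbors):
        # Move to the lowest-h neighbor until a goal state is reached.
        state = choose_start_state()
        while not goal_test(state):
            state = min(neighbors(state), key=h)   # arg min over the neighborhood
        return state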
4
Local Search versus Systematic Search
Systematic Search
–BFS, DFS, IDS, Best-First, A*
–Keeps some history of visited nodes
–Always complete for finite search spaces, some versions complete for infinite spaces
–Good for building up solutions incrementally
  State = partial solution
  Action = extend the partial solution
5
Local Search versus Systematic Search
Local Search
–Gradient descent, Greedy local search, Simulated Annealing, Genetic Algorithms
–Does not keep a history of visited nodes
–Not complete; at best one can argue it terminates with "high probability"
–Good for "fixing up" candidate solutions
  State = complete candidate solution that may not satisfy all constraints
  Action = make a small change in the candidate solution
6
N-Queens Problem
7
N-Queens Systematic Search

    state = choose_start_state();
    add state to Fringe;
    while ! GoalTest(state) do
        choose state from Fringe according to h(state);
        Fringe := Fringe U { Children(state) }
    end
    return state;

start = empty board
GoalTest = N queens are on the board
h = N – (number of queens on the board)
children = all ways of adding one queen without creating any attacks
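A small runnable version of this best-first construction; it places one queen per column and only generates attack-free children. The column-by-column representation is my own simplification, not part of the slide.

    import heapq

    def nqueens_systematic(n):
        # A state is a tuple of row positions, one per already-filled column.
        h = lambda state: n - len(state)               # queens still missing from the board
        fringe = [(h(()), ())]
        while fringe:
            _, state = heapq.heappop(fringe)           # choose the fringe state with the best h
            if len(state) == n:                        # GoalTest: N queens, attack-free by construction
                return state
            col = len(state)
            for row in range(n):                       # children: add one queen without creating attacks
                if all(row != r and abs(row - r) != col - c for c, r in enumerate(state)):
                    child = state + (row,)
                    heapq.heappush(fringe, (h(child), child))
        return None

    print(nqueens_systematic(8))   # prints one safe placement as a tuple of row indices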
8
N-Queens Local Search, V1

    state = choose_start_state();
    while ! GoalTest(state) do
        state := arg min { h(s) | s in Neighbors(state) }
    end
    return state;

start = put down N queens randomly
GoalTest = board has no attacking pairs
h = number of attacking pairs
neighbors = move one queen to a different square on the board
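A sketch of V1 in Python, representing the board as a list of (row, col) squares (my choice of representation); as with the pseudocode, the greedy loop can stall on a local minimum, so a step budget is added.

    import random
    from itertools import combinations

    def attacking_pairs(queens):
        # Number of queen pairs sharing a row, column, or diagonal.
        return sum(1 for (r1, c1), (r2, c2) in combinations(queens, 2)
                   if r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2))

    def nqueens_local_v1(n, max_steps=1000):
        squares = [(r, c) for r in range(n) for c in range(n)]
        queens = random.sample(squares, n)              # start: N queens placed at random
        for _ in range(max_steps):
            if attacking_pairs(queens) == 0:            # GoalTest: no attacking pairs
                return queens
            # neighbors: move one queen to a different (empty) square; take the best move
            i, sq = min(((i, sq) for i in range(n) for sq in squares if sq not in queens),
                        key=lambda m: attacking_pairs(queens[:m[0]] + [m[1]] + queens[m[0] + 1:]))
            queens[i] = sq
        return None                                     # stalled or out of steps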
9
N-Queens Local Search, V2

    state = choose_start_state();
    while ! GoalTest(state) do
        state := arg min { h(s) | s in Neighbors(state) }
    end
    return state;

start = put a queen on each square with 50% probability
GoalTest = board has N queens and no attacking pairs
h = number of attacking pairs + max(0, N – number of queens)
neighbors = add or delete one queen
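V2 reuses the same greedy loop; only the start state, objective, and neighborhood change. A sketch of those three pieces, with the board as a set of (row, col) squares (again my representation):

    import random
    from itertools import combinations

    def start_v2(n):
        # Put a queen on each square independently with probability 0.5.
        return frozenset((r, c) for r in range(n) for c in range(n) if random.random() < 0.5)

    def h_v2(queens, n):
        attacks = sum(1 for (r1, c1), (r2, c2) in combinations(queens, 2)
                      if r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2))
        return attacks + max(0, n - len(queens))        # penalize attacks and missing queens

    def neighbors_v2(queens, n):
        # Add one queen to an empty square, or delete one existing queen.
        empty = [(r, c) for r in range(n) for c in range(n) if (r, c) not in queens]
        return [queens | {sq} for sq in empty] + [queens - {q} for q in queens]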
10
N Queens Demo
11
States Where Greedy Search Must Succeed [figure: objective function landscape]
12
States Where Greedy Search Might Succeed [figure: objective function landscape]
13
Local Search Landscape [figure: objective function landscape with a local minimum and a plateau]
14
Variations of Greedy Search
Where to start?
–RANDOM STATE
–PRETTY GOOD STATE
What to do when a local minimum is reached?
–STOP
–KEEP GOING
Which neighbor to move to?
–BEST neighbor
–Any BETTER neighbor (Hill Climbing)
How to make local search more robust?
15
Restarts

    for run = 1 to max_runs do
        state = choose_start_state();
        flip = 0;
        while ! GoalTest(state) && flip++ < max_flips do
            state := arg min { h(s) | s in Neighbors(state) }
        end
        if GoalTest(state) return state;
    end
    return FAIL
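A Python version of the restart wrapper, using the same caller-supplied functions as the earlier greedy sketch (hypothetical names).

    def greedy_with_restarts(choose_start_state, goal_test, h, neighbors,
                             max_runs=100, max_flips=1000):
        for _ in range(max_runs):
            state = choose_start_state()
            for _ in range(max_flips):
                if goal_test(state):
                    return state
                state = min(neighbors(state), key=h)    # greedy flip
            if goal_test(state):                        # goal reached on the last flip?
                return state
        return None                                     # FAIL: every run exhausted its flip budget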
16
Uphill Moves: Random Noise

    state = choose_start_state();
    while ! GoalTest(state) do
        with probability noise do
            state := random member of Neighbors(state)
        else
            state := arg min { h(s) | s in Neighbors(state) }
    end
    return state;
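The same loop with random noise added, in Python; the default noise level is illustrative.

    import random

    def noisy_greedy(choose_start_state, goal_test, h, neighbors, noise=0.2):
        state = choose_start_state()
        while not goal_test(state):
            nbrs = list(neighbors(state))
            if random.random() < noise:
                state = random.choice(nbrs)             # random, possibly uphill, move
            else:
                state = min(nbrs, key=h)                # greedy move
        return state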
17
Uphill Moves: Simulated Annealing (Constant Temperature)

    state := start;
    while ! GoalTest(state) do
        next := random member of Neighbors(state);
        deltaE := h(next) – h(state);
        if deltaE < 0 then
            state := next;
        else
            with probability e^(–deltaE / temperature) do
                state := next;
        endif
    end
    return state;

Note: the book reverses the sign test because it looks for a maximum-h state rather than a minimum.
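A runnable sketch of the constant-temperature version, with a step budget added so it terminates even if no goal is found (the budget and default temperature are my assumptions).

    import math
    import random

    def simulated_annealing_const(choose_start_state, goal_test, h, neighbors,
                                  temperature=1.0, max_steps=100000):
        state = choose_start_state()
        for _ in range(max_steps):
            if goal_test(state):
                return state
            nxt = random.choice(list(neighbors(state)))
            delta_e = h(nxt) - h(state)
            # Always accept downhill moves; accept uphill moves with
            # probability e^(-deltaE / temperature).
            if delta_e < 0 or random.random() < math.exp(-delta_e / temperature):
                state = nxt
        return state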
19
Uphill Moves: Simulated Annealing (Geometric Cooling Schedule)

    temperature := start_temperature;
    state = choose_start_state();
    while ! GoalTest(state) do
        next := random member of Neighbors(state);
        deltaE := h(next) – h(state);
        if deltaE < 0 then
            state := next;
        else
            with probability e^(–deltaE / temperature) do
                state := next;
        endif
        temperature := cooling_rate * temperature;
    end
    return state;
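The same loop with a geometric cooling schedule: the temperature is multiplied by a constant factor after every step. Parameter values are illustrative.

    import math
    import random

    def simulated_annealing_geometric(choose_start_state, goal_test, h, neighbors,
                                      start_temperature=10.0, cooling_rate=0.999,
                                      min_temperature=1e-6):
        temperature = start_temperature
        state = choose_start_state()
        while not goal_test(state) and temperature > min_temperature:
            nxt = random.choice(list(neighbors(state)))
            delta_e = h(nxt) - h(state)
            if delta_e < 0 or random.random() < math.exp(-delta_e / temperature):
                state = nxt
            temperature *= cooling_rate                 # cool a little after every step
        return state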
20
Simulated Annealing
For any finite problem with a fully connected state space, simulated annealing provably converges to the optimum as the length of the cooling schedule increases: the probability of ending in an optimal state approaches 1.
But: the formal bound requires exponential search time.
In many practical applications, problems can be solved with a faster, non-guaranteed schedule.
21
Other Local Search Strategies
Tabu Search
–Keep a history of the last K visited states
–Revisiting a state on the history list is "tabu"
Genetic algorithms
–Population = a set of K search points
–Neighborhood = population U mutations U crossovers
  Mutation = random change in a state
  Crossover = random mix of assignments from two states
  Typically only a portion of the neighborhood is generated
–Search step: new population = the K best members of the neighborhood (one generation is sketched below)
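As one generation of this genetic-algorithm search step might look in Python, for states encoded as fixed-length lists (for N-Queens, one row index per column); the encoding and the way parents are paired are assumptions, not from the slide.

    import random

    def ga_generation(population, h, k):
        # Neighborhood = current population U mutations U crossovers.
        neighborhood = list(population)
        for state in population:
            mutant = list(state)
            i = random.randrange(len(mutant))           # mutation: random change at one position
            mutant[i] = random.randrange(len(mutant))
            neighborhood.append(mutant)
        for a, b in zip(population, reversed(population)):
            cut = random.randrange(1, len(a))           # crossover: mix assignments from two parents
            neighborhood.append(list(a[:cut]) + list(b[cut:]))
        # Search step: new population = the K best members of the neighborhood.
        return sorted(neighborhood, key=h)[:k]

In use, h would be the attacking-pairs count from the V1 sketch above, and ga_generation would be called repeatedly until some member of the population reaches h = 0.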