Beyond Classical Search (Instructor: Kris Hauser)


1 Beyond Classical Search. Instructor: Kris Hauser. http://cs.indiana.edu/~hauserk

2 Agenda: local search and optimization; branch and bound search; online search.

3 Local Search. Light-memory search methods: no search tree; only the current state is represented! Applicable to problems where the path is irrelevant (e.g., 8-queens). For other problems, the entire path must be encoded in the state. Many similarities with optimization techniques.

4 Idea: Minimize h(N) ... because h(G) = 0 for any goal G. An optimization problem!

5 Steepest Descent
1. S ← initial state
2. Repeat:
3.   S' ← arg min_{S' ∈ SUCCESSORS(S)} h(S')
4.   if GOAL?(S') return S'
5.   if h(S') < h(S) then S ← S' else return failure
Similar to: hill climbing with -h; gradient descent over a continuous space.
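A minimal Python sketch of the loop above, assuming problem-specific callables successors(s), h(s), and is_goal(s) (these names are illustrative, not from the slides):

    def steepest_descent(initial_state, successors, h, is_goal):
        """Greedy descent on h; stops at a goal or at a local minimum."""
        s = initial_state
        while True:
            # Best successor according to the heuristic h.
            s_next = min(successors(s), key=h)
            if is_goal(s_next):
                return s_next
            if h(s_next) < h(s):
                s = s_next      # strict improvement: keep descending
            else:
                return None     # local minimum reached: report failure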

6 Application: 8-Queens. Pick an initial state S at random with one queen in each column. Repeat k times: if GOAL?(S) then return S; pick an attacked queen Q at random; move Q within its column to minimize the number of attacking queens → new S [min-conflicts heuristic]. Return failure. [Figure: board annotated with the number of attacking queens for each candidate square in Q's column.]

7 Application: 8-Queens (continued). The whole procedure is wrapped in an outer restart loop: repeat n times.

8 Application: 8-Queens (continued). Why does it work? 1) There are many goal states that are well distributed over the state space. 2) If no solution has been found after a few steps, it is better to start all over again; building a search tree would be much less efficient because of the high branching factor. 3) The running time is almost independent of the number of queens.
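A possible Python sketch of one run of the min-conflicts procedure for n-queens. The state representation (rows[c] = row of the queen in column c) and the parameter defaults are illustrative assumptions:

    import random

    def conflicts(rows, col, row):
        """Number of queens attacking square (col, row); rows[c] is column c's queen row."""
        return sum(1 for c, r in enumerate(rows)
                   if c != col and (r == row or abs(r - row) == abs(c - col)))

    def min_conflicts(n=8, k=1000):
        """One run of the min-conflicts heuristic; returns a solution or None."""
        rows = [random.randrange(n) for _ in range(n)]   # one queen per column, random rows
        for _ in range(k):
            attacked = [c for c in range(n) if conflicts(rows, c, rows[c]) > 0]
            if not attacked:
                return rows                              # goal: no queen is attacked
            col = random.choice(attacked)                # pick an attacked queen at random
            # Move it within its column to the row with the fewest attackers.
            rows[col] = min(range(n), key=lambda r: conflicts(rows, col, r))
        return None                                      # failure after k steps

Usage: print(min_conflicts()) typically returns a valid placement within a few dozen moves; restarting on failure corresponds to the outer "repeat n times" loop on slide 7.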

9 Steepest Descent
S ← initial state
Repeat:
  S' ← arg min_{S' ∈ SUCCESSORS(S)} h(S')
  if GOAL?(S') return S'
  if h(S') < h(S) then S ← S' else return failure
May easily get stuck in local minima. Remedies: random restart (as in the n-queens example), Monte Carlo descent.
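Random restart is just an outer loop around any failure-prone local search; a tiny sketch, assuming local_search is a zero-argument callable that returns a solution or None (a hypothetical signature, not from the slides):

    def random_restart(local_search, n_restarts):
        """Rerun a local search from fresh random starts until it succeeds or the budget runs out."""
        for _ in range(n_restarts):
            result = local_search()
            if result is not None:
                return result
        return None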

10–15 Gradient Descent in Continuous Space. Minimize y = f(x). Move in the opposite direction of the derivative: −df/dx(x). [Figures: successive iterates x1, x2, x3 moving down the curve y = f(x), each step taken against the sign of the derivative at the current point.]

16 Gradient: the analogue of the derivative for multivariate functions f(x1, …, xn). It is the direction in which you would move (x1, …, xn) to make the steepest increase in f. [Figure: contours of f over x1 and x2 with the gradient direction shown.]

17 [Figures: two objective functions f, one on which gradient descent works well and one on which it works poorly.]

18 Algorithm for Gradient Descent
Input: continuous objective function f, initial point x^0 = (x_1^0, …, x_n^0)
For t = 0, …, N−1:
  Compute the gradient vector g^t = (∂f/∂x_1(x^t), …, ∂f/∂x_n(x^t))
  If the length of g^t is small enough [convergence], return x^t
  Pick a step size α_t
  Let x^(t+1) = x^t − α_t g^t
Return failure [convergence not reached]
"Industrial strength" optimization software uses more sophisticated techniques to exploit higher derivatives, handle constraints, deal with particular function classes, etc.
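A minimal numeric version of this loop, using a fixed step size and a forward-difference approximation of the gradient; the step size, tolerance, and example function are placeholders, not prescribed by the slide:

    def gradient_descent(f, x0, alpha=0.1, tol=1e-6, max_iters=1000, eps=1e-8):
        """Fixed-step gradient descent with a finite-difference gradient estimate."""
        x = list(x0)
        for _ in range(max_iters):
            fx = f(x)
            # g[i] approximates df/dx_i at x via forward differences.
            g = []
            for i in range(len(x)):
                x_eps = list(x)
                x_eps[i] += eps
                g.append((f(x_eps) - fx) / eps)
            if sum(gi * gi for gi in g) ** 0.5 < tol:   # ||g|| small: converged
                return x
            x = [xi - alpha * gi for xi, gi in zip(x, g)]
        return None                                     # convergence not reached

    # Example: minimize f(x) = (x1 - 1)^2 + (x2 + 2)^2, whose minimum is at (1, -2).
    print(gradient_descent(lambda x: (x[0] - 1)**2 + (x[1] + 2)**2, [0.0, 0.0]))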

19 Problems for Discrete Optimization… [Figures: landscapes showing a plateau and a ridge.] NP-hard problems typically have an exponential number of local minima.

20 Monte Carlo Descent
S ← initial state
Repeat k times:
  If GOAL?(S) then return S
  S' ← successor of S picked at random
  if h(S') < h(S) then S ← S'
  else: ∆h = h(S') − h(S); with probability ~ exp(−∆h/T), where T is called the "temperature", do S ← S' [Metropolis criterion]
Return failure
Simulated annealing lowers T over the k iterations: it starts with a large T and slowly decreases it.
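A sketch of Monte Carlo descent with the simulated-annealing cooling schedule mentioned above. The geometric cooling rate and the random_successor / h / is_goal callables are assumptions for illustration, not part of the slide:

    import math
    import random

    def simulated_annealing(initial_state, random_successor, h, is_goal,
                            k=10000, T0=1.0, cooling=0.999):
        """Monte Carlo descent with a slowly decreasing temperature (Metropolis criterion)."""
        s, T = initial_state, T0
        for _ in range(k):
            if is_goal(s):
                return s
            s_next = random_successor(s)
            dh = h(s_next) - h(s)
            # Always accept improvements; accept uphill moves with probability exp(-dh / T).
            if dh < 0 or random.random() < math.exp(-dh / T):
                s = s_next
            T *= cooling        # lower the temperature a little each iteration
        return None

With a large T, almost every move is accepted (random walk); as T shrinks, the behavior approaches pure descent.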

21 "Parallel" Local Search Techniques. These perform several local searches concurrently, but not independently: beam search, genetic algorithms, tabu search, ant colony / particle swarm optimization.
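As one example of the "concurrent but not independent" idea, local beam search keeps only the best few states among all successors of the current pool, so the parallel searches compete for the same slots. A rough sketch, with successors, h, and is_goal again assumed to be problem-specific callables:

    import heapq

    def local_beam_search(initial_states, successors, h, is_goal,
                          beam_width=10, max_iters=1000):
        """Keep the beam_width best states among all successors of the current pool."""
        pool = list(initial_states)
        for _ in range(max_iters):
            for s in pool:
                if is_goal(s):
                    return s
            candidates = [s2 for s in pool for s2 in successors(s)]
            if not candidates:
                return None
            # The searches interact: the whole pool competes for the same beam slots.
            pool = heapq.nsmallest(beam_width, candidates, key=h)
        return None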

22 Empirical Successes of Local Search: satisfiability (SAT), vertex cover, traveling salesman problem, planning & scheduling, and many others.

23 Relation to Numerical Optimization. Optimization techniques usually operate on a continuous state space. Example: stitching point clouds together into a global model. The same major issues, e.g., local minima, apply.

24 Dealing with Imperfect Knowledge

25 Classical search assumes that: world states are perfectly observable, so the current state is exactly known; action representations are perfect, so successor states are exactly predicted. How can an agent cope with adversaries, uncertainty, and imperfect information?

26 [Figure.] Distance, speed, acceleration? Intent? Personality?

27 On-Line Search. Sometimes uncertainty is so large that actions need to be executed for the agent to know their effects. On-line search: repeatedly observe effects and replan. A proactive approach to planning; a reactive approach to uncertainty. Example: a robot must reach a goal position. It has no prior map of the obstacles, but its vision system can detect all the obstacles visible from the robot's current position.

28–30 [Figures: the robot replanning as it discovers obstacles.] Assuming no obstacles in the unknown region and taking the shortest path to the goal is similar to searching with an admissible (optimistic) heuristic. Just as with classical search, on-line search may detect dead-ends and move to a more promising position (~ a node of the search tree).
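A toy grid-world sketch of this optimistic replanning loop: plan a shortest path that treats every unknown cell as free, try one step, record any obstacle that blocks the move, and replan. The grid representation, the bump-style sensing, and the BFS planner are all illustrative assumptions rather than the robot setup from the slides:

    from collections import deque

    def bfs_path(start, goal, blocked, width, height):
        """Shortest 4-connected path, treating every non-blocked (i.e., unknown) cell as free."""
        parents, frontier = {start: None}, deque([start])
        while frontier:
            cell = frontier.popleft()
            if cell == goal:
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = parents[cell]
                return path[::-1]
            x, y = cell
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                        and nxt not in blocked and nxt not in parents):
                    parents[nxt] = cell
                    frontier.append(nxt)
        return None

    def online_navigate(start, goal, true_obstacles, width, height):
        """Optimistic online search: plan assuming unknown cells are free, act, sense, replan."""
        pos, known_blocked = start, set()
        while pos != goal:
            path = bfs_path(pos, goal, known_blocked, width, height)
            if path is None:
                return None                 # no route even optimistically: dead end
            step = path[1]
            if step in true_obstacles:      # "sensing": the attempted move is blocked
                known_blocked.add(step)     # remember the obstacle and replan
            else:
                pos = step                  # the move succeeds
        return pos

Between obstacle discoveries the planned distance to the goal shrinks by one per move, and only finitely many obstacles can be discovered, so the loop terminates.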

31 D* Algorithm for Mobile Robots. Tony Stentz.

32 Real-Time Replanning Among Unpredictably Moving Obstacles

33 Next Class. Uncertain and partially observable environments; game playing. Read R&N 5.1–4. HW1 due at end of next class.

