State space representations and search strategies - 2 Spring 2007, Juris Vīksna.


Search strategies - A* h*(x) - minimum path weight from x to a goal state; h(x) - heuristic estimate of h*(x); g*(x) - minimum path weight from I to x; g(x) - estimate of g*(x) (i.e. the minimal weight found so far); f*(x) = g*(x) + h*(x); f(x) = g(x) + h(x)

Search strategies - A* [Adapted from J.Pearl]

Search strategies - A*
A*Search(state space Σ, h)
  Open ← {⟨I, f(I)⟩}
  Closed ← ∅
  while Open ≠ ∅ do
    ⟨x, f_x⟩ ← ExtractMin(Open)  [minimum for f_x]
    if Goal(x, Σ) then return x
    Insert(⟨x, f_x⟩, Closed)
    for y ∈ Child(x, Σ) do
      g_y = g_x + W(x, y)
      f_y = g_y + h(y)
      if there is no ⟨y, f⟩ ∈ Open ∪ Closed with f ≤ f_y then
        Insert(⟨y, f_y⟩, Open)  [replace existing entry, if present]
  return fail
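The scheme above can be sketched in Python. This is a minimal illustration, not the lecture's code: the names `a_star`, `children`, and the dictionary `best_f` (playing the role of Open ∪ Closed, keyed by state) are assumptions, and `heapq` supplies ExtractMin.

```python
import heapq

def a_star(start, is_goal, children, h):
    """Minimal A* sketch: Open is a min-heap ordered by f = g + h;
    best_f records the lowest f value known per state (Open ∪ Closed).
    children(x) yields (successor, edge weight) pairs."""
    open_heap = [(h(start), 0, start, [start])]     # (f, g, state, path)
    best_f = {start: h(start)}
    while open_heap:
        f, g, x, path = heapq.heappop(open_heap)    # ExtractMin by f
        if is_goal(x):
            return path, g
        if f > best_f.get(x, float('inf')):
            continue                                # stale heap entry, skip
        for y, w in children(x):
            gy = g + w
            fy = gy + h(y)
            if fy < best_f.get(y, float('inf')):    # no entry with f ≤ f_y
                best_f[y] = fy
                heapq.heappush(open_heap, (fy, gy, y, path + [y]))
    return None, float('inf')

# Tiny worked example (hypothetical graph): I→a→G costs 1+3=4, I→b→G costs 4+1=5.
graph = {'I': [('a', 1), ('b', 4)], 'a': [('G', 3)], 'b': [('G', 1)], 'G': []}
h_table = {'I': 2, 'a': 2, 'b': 1, 'G': 0}          # admissible estimates
path, cost = a_star('I', lambda s: s == 'G',
                    lambda s: graph[s], lambda s: h_table[s])
```

Since the heuristic is admissible, the sketch returns the optimal path I, a, G with cost 4.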

Complete search Definition An algorithm is said to be complete if it terminates with a solution when one exists.

Admissible search Definition An algorithm is admissible if it is guaranteed to return an optimal solution (with minimal possible path weight from the start state to a goal state) whenever a solution exists.

Dominant search Definition An algorithm A is said to dominate algorithm B, if every node expanded by A is also expanded by B. Similarly, A strictly dominates B if A dominates B and B does not dominate A. We will also use the phrase “more efficient than” interchangeably with dominates.

Optimal search Definition An algorithm is said to be optimal over a class of algorithms if it dominates all members of that class.

Locally finite state spaces Definition A state space is locally finite if: for every x ∈ S, there is only a finite number of y ∈ S such that (x, y) ∈ P; and there exists δ > 0 such that for all (x, y) ∈ P we have W(x, y) ≥ δ.

Completeness of A* Theorem A* algorithm is complete on locally finite state spaces.

Admissibility of A* Definition A heuristic function h is said to be admissible if 0 ≤ h(n) ≤ h*(n) for all n ∈ S.

Admissibility of A* Theorem A* using an admissible heuristic function is admissible on locally finite state spaces.

Admissibility of A* Lemma If A* uses an admissible heuristic function h, then at any time before A* terminates there exists a node n′ in Open such that f(n′) ≤ f*(I).

Admissibility of A* Theorem A* using an admissible heuristic function is admissible on locally finite state spaces.

Informedness of heuristic functions Definition A heuristic function h2 is said to be more informed than h1 if both h1 and h2 are admissible and h2(n) > h1(n) for every non-goal node n ∈ S. Similarly, an A* algorithm using h2 is said to be more informed than one using h1.

Dominance of A* Theorem If A2* is more informed than A1*, then A2* dominates A1*.

Dominance of A* Lemma Any node expanded by A* cannot have an f value exceeding f*(I), i.e. f(n) ≤ f*(I) for all expanded nodes n.

Dominance of A* Lemma Every node n on Open for which f(n) < f*(I) will eventually be expanded by A*.

C-bounded paths Definition We say that a path P is C-bounded if every node along the path satisfies gP(n) + h(n) ≤ C. Similarly, if a strict inequality holds for every n along P, we say that P is strictly C-bounded. When it becomes necessary to identify which heuristic was used, we will use the notation C(h)-bounded.

C-bounded paths Theorem A sufficient condition for A* to expand a node n is that there exists some strictly f*(I)-bounded path P from I to n.

C-bounded paths Theorem A necessary condition for A* to expand a node n is that there exists an f*(I)-bounded path P from I to n.

Dominance of A* Theorem If A2* is more informed than A1*, then A2* dominates A1*.

Consistent heuristic functions Definition A heuristic function h is said to be consistent if h(n) ≤ k(n, n′) + h(n′) for all nodes n and n′, where k(n, n′) denotes the weight of the cheapest path from n to n′.

Monotone heuristic functions Definition A heuristic function h is said to be monotone if h(n) ≤ W(n, n′) + h(n′) for all (n, n′) ∈ P.

Monotonicity and consistency Theorem Monotonicity and consistency are equivalent properties.

Monotonicity and admissibility Theorem Every monotone heuristic is also admissible.

A* with monotone heuristic Theorem An A* algorithm with a monotone heuristic finds optimal paths to all expanded nodes, i.e. g(n) = g*(n) for all n ∈ Closed.

Some terminology
A* - the algorithm just discussed
A - essentially the same as A*, but the goal test is applied already when nodes are generated, not when they are expanded
Z* - a generalization of A*: instead of f(x) = g(x) + h(x) it uses the more general function f(x′) = F(E(x), f(x), h(x′))
Z - related to Z* in the same way as A is related to A*

Implementation issues
A*Search(state space Σ, h)
  Open ← {⟨I, f(I)⟩}
  Closed ← ∅
  while Open ≠ ∅ do
    ⟨x, f_x⟩ ← ExtractMin(Open)  [minimum for f_x]
    if Goal(x, Σ) then return x
    Insert(⟨x, f_x⟩, Closed)
    for y ∈ Child(x, Σ) do
      g_y = g_x + W(x, y)
      f_y = g_y + h(y)
      if there is no ⟨y, f⟩ ∈ Open ∪ Closed with f ≤ f_y then
        Insert(⟨y, f_y⟩, Open)  [replace existing entry, if present]
  return fail

Implementation issues - Heaps Heaps are binary trees with all levels complete, except the lowest one, which may be incomplete on its right side. They satisfy the so-called heap property: for each subtree of the heap, the key at the root of the subtree is no larger than the keys of its (left and right) children (a min-heap, as required by ExtractMin).

Implementation issues - Heaps

Implementation issues - Heaps Insert: T(n) = Θ(h) = Θ(log n)

Implementation issues - Heaps Delete: T(n) = Θ(h) = Θ(log n)

Implementation issues - Heaps ExtractMin: T(n) = Θ(h) = Θ(log n)
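A quick sketch of using a binary min-heap as the Open list: Python's standard `heapq` module provides exactly the Insert and ExtractMin operations above in O(log n) time (the states and f values are illustrative).

```python
import heapq

open_list = []                              # binary min-heap ordered by f value
for f, state in [(7, 'a'), (3, 'b'), (5, 'c')]:
    heapq.heappush(open_list, (f, state))   # Insert: O(log n)

f_min, best = heapq.heappop(open_list)      # ExtractMin: O(log n)
```

After the pop, `(f_min, best)` is `(3, 'b')`, the entry with the lowest f value.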

Implementation issues - BST T is a binary search tree if: it is a binary tree with a key associated with each node; and for each node x in T, the keys at all nodes of the left subtree of x are not larger than the key at node x, and the keys at all nodes of the right subtree of x are not smaller than the key at node x.
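The BST property can be exercised with a minimal insert/search sketch (illustrative code, not the lecture's; the names `Node`, `bst_insert`, `bst_contains` are assumptions):

```python
class Node:
    """A BST node holding a key and left/right children."""
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def bst_insert(root, key):
    """Insert key, keeping the BST property:
    left-subtree keys <= node key <= right-subtree keys."""
    if root is None:
        return Node(key)
    if key <= root.key:
        root.left = bst_insert(root.left, key)
    else:
        root.right = bst_insert(root.right, key)
    return root

def bst_contains(root, key):
    """Search by descending left or right according to the key."""
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

root = None
for k in [5, 2, 8, 1, 9]:
    root = bst_insert(root, k)
```

Without rebalancing (the motivation for the AVL trees below), a sorted insertion order degenerates this into a linked list with O(n) operations.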

Implementation issues - BST

Implementation issues - BST Insert

Implementation issues - BST Delete

Implementation issues - BST Delete

Implementation issues - BST Delete

Implementation issues - AVL trees T is an AVL tree if it is a binary search tree and for each node x in T we have Height(LC(x)) – Height(RC(x)) ∈ {–1, 0, 1}.

Implementation issues - skip lists [figure: a “perfect” skip list]

How to choose a heuristic? Take the original problem P, defined by a set of constraints, and obtain a relaxed problem P′ by removing one or more constraints: P is complex, P′ becomes simpler. Use the cost of a best solution path from n in P′ as h(n) for P. Admissibility: h ≤ h*, because the solution space of P is contained in that of P′, so the cost of a best solution in P is at least the cost of a best solution in P′.

How to choose a heuristic - 8-puzzle Example: 8-puzzle
–Constraints: to move from cell A to cell B: cond1: there is a tile on A; cond2: cell B is empty; cond3: A and B are adjacent (horizontally or vertically)
–Removing cond2: h2 (sum of Manhattan distances of all misplaced tiles)
–Removing cond2 and cond3: h1 (number of misplaced tiles)
–Removing cond3: h3, a new heuristic function

How to choose a heuristic - 8-puzzle h3: repeat: if the current empty cell A is to be occupied by tile x in the goal, move x to A; otherwise, move into A any arbitrary misplaced tile; until the goal is reached. We have h2 ≥ h3 ≥ h1; for the start state shown: h1(start) = 7, h2(start) = 18, h3(start) = 7.
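The relaxation-based heuristics h1 and h2 can be sketched as follows. This is an illustrative encoding, not the lecture's: boards are 9-tuples read row by row, with 0 for the empty cell, and the state used below is made up for demonstration (the slides' own start state, with h1 = 7 and h2 = 18, is in a figure not reproduced here).

```python
def h1(state, goal):
    """Number of misplaced tiles (the empty cell is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Sum of Manhattan distances of all tiles to their goal cells (3x3 board)."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)                       # goal position of this tile
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

# Hypothetical state: tiles 7 and 8 are one cell away from their goal positions.
state = (1, 2, 3, 4, 5, 6, 0, 7, 8)
goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
```

For this state both heuristics evaluate to 2; in general h2 dominates h1, since each misplaced tile contributes at least 1 to the Manhattan sum.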

How to choose a heuristic - TSP Example: TSP. A legal tour is a (Hamiltonian) circuit.

How to choose a heuristic - TSP Example: TSP. A legal tour is a (Hamiltonian) circuit.
–It is a connected second-degree graph (each node has exactly two adjacent edges). Removing the connectivity constraint leads to h1: find the cheapest second-degree graph in the given graph (with O(n³) complexity).
[Figures: the given complete graph; a legal tour; other second-degree graphs]

How to choose a heuristic - TSP
–A legal tour is a spanning tree (once an edge is removed) with the constraint that each node has at most 2 adjacent edges. Removing this constraint leads to h2: find the cheapest minimum spanning tree of the given graph (with O(n²/log n) complexity).
[Figures: the given graph; a legal tour; other MSTs]
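The MST relaxation behind h2 can be sketched with Prim's algorithm on a distance matrix. This is the plain O(n²) version of Prim, not the O(n²/log n) algorithm cited above; `mst_cost` and the example matrix are illustrative assumptions.

```python
def mst_cost(dist):
    """Prim's algorithm on a complete graph given a symmetric distance
    matrix; returns the total MST weight, a lower bound on tour cost."""
    n = len(dist)
    in_tree = [False] * n
    best = [float('inf')] * n      # cheapest edge connecting each node to the tree
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        # pick the cheapest node not yet in the tree
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):         # relax edges out of u
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v] = dist[u][v]
    return total

# Hypothetical 4-city instance: unit-cost square with diagonals of cost 2.
dist = [[0, 1, 2, 1],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [1, 2, 1, 0]]
```

Here the MST cost is 3 while the cheapest tour costs 4, illustrating that the relaxed problem's optimum never exceeds the original's, which is exactly what makes h2 admissible.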

How complicated a heuristic to choose? [Adapted from R.Shinghal]

Relaxing optimality requirements
Is f = g + h the best choice if we want to minimize search effort, not solution cost?
Even if solution cost is important, an admissible f can lead to a non-terminating A*. Can speed be gained by decreasing solution quality?
It may be hard to find a good admissible heuristic. What happens if we do not require admissibility?

Relaxing optimality requirements Weighted evaluation function fw(n) = (1 – w)·g(n) + w·h(n)
w = 0 - uniform cost
w = 1/2 - A*
w = 1 - BestFirst

Relaxing optimality requirements Bounded decrease in solution quality?

Relaxing optimality requirements Dynamic Weighting f(n) = g(n) + h(n) + ε·(1 – d(n)/N)·h(n)
d(n) - depth of node n
N - anticipated depth of a goal node

Relaxing optimality requirements Dynamic Weighting f(n) = g(n) + h(n) + ε·(1 – d(n)/N)·h(n) Theorem If h is admissible, then the algorithm is ε-admissible, i.e. it finds a path with a cost of at most (1 + ε)C*.
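The dynamic-weighting formula is easy to sanity-check numerically (a sketch; `f_dw` and the sample values are assumptions): the heuristic is inflated near the root and the extra weight decays to zero as the depth approaches N.

```python
def f_dw(g, h, depth, N, eps):
    """Dynamic weighting: f(n) = g(n) + h(n) + eps * (1 - d(n)/N) * h(n)."""
    return g + h + eps * (1 - depth / N) * h

root_f = f_dw(0, 10, 0, 10, 0.5)    # at the root the full eps*h bonus applies
deep_f = f_dw(5, 2, 10, 10, 0.5)    # at depth N the bonus vanishes: plain g + h
```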

Relaxing optimality requirements The Aε* algorithm uses two lists, Open and Focal. Focal is the sublist of Open containing the nodes that do not deviate from the lowest-f node by a factor greater than 1 + ε. Aε* selects the node from Focal with the lowest hF value, where hF(n) is a second heuristic function estimating the computational effort to complete the search starting from n.

Relaxing optimality requirements Aε* algorithm Theorem If h is admissible, then the Aε* algorithm is ε-admissible, i.e. it finds a path with a cost of at most (1 + ε)C*. Note that hF does not need to be admissible.

Relaxing optimality requirements [Adapted from J.Pearl]

Relaxing optimality requirements Theorem If h(n) – h*(n) ≤ ε for all n, then the A* algorithm is ε-admissible, i.e. it finds a path with a cost of at most C* + ε.

Relaxing optimality requirements Example Consider a shortest-path problem with arc costs drawn uniformly from [0, 1]. Let N be the number of arcs between n and a goal; h*(n) tends to be close to N/2 for large N, yet the only admissible heuristic is h(n) = 0.

Relaxing optimality requirements [Adapted from J.Pearl]

Relaxing optimality requirements [Adapted from J.Pearl]

Relaxing optimality requirements The Rε* algorithm selects the node from Open with the lowest Cε(n) value. Three common risk measures:
R1 - the worst-case risk
R2 - the probability of suboptimal termination
R3 - the expected risk

Relaxing optimality requirements [Adapted from J.Pearl]

Relaxing optimality requirements [Adapted from J.Pearl]

Relaxing optimality requirements Rε* algorithm Theorem For the risk measures R1, R2, R3 the algorithm Rε* is ε-risk admissible, i.e. it terminates with a solution cost C such that R(C) ≤ ε for all nodes left in Open.

Some performance examples [Adapted from J.Pearl]

Some performance examples [Adapted from J.Pearl]

Some performance examples [Adapted from J.Pearl]

Performance of search strategy T - number of nodes in the search graph; D - number of nodes in the solution path. We define the penetrance P as P = D/T. We have 0 < P ≤ 1.

Performance of search strategy T - number of nodes in the search graph; D - number of nodes in the solution path. We define the branching factor B by T = B + B² + … + B^D, i.e. T = B(B^D – 1)/(B – 1). We have B ≥ 1.
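The defining equation T = B + B² + … + B^D has no closed-form solution for B, but its left side is strictly increasing in B, so B can be found numerically by bisection (a sketch; the function name is an assumption):

```python
def effective_branching_factor(T, D, tol=1e-9):
    """Solve T = B + B^2 + ... + B^D for B by bisection.
    The left-hand side is strictly increasing in B for B > 0."""
    def total(B):
        return sum(B ** i for i in range(1, D + 1))
    lo, hi = 1e-9, float(T)          # total(T) >= T, so the root lies below T
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < T:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Example: a search that expanded T = 6 nodes for a solution at depth D = 2
# satisfies B + B^2 = 6, giving B = 2.
B = effective_branching_factor(6, 2)
```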

Performance of search strategy [Adapted from R.Shinghal]

Performance of search strategy [Adapted from R.Shinghal]