Heuristic Search Viewed as Path Finding in a Graph


Heuristic Search Viewed as Path Finding in a Graph
Author: Ira Pohl
Presentation by: Allen Kanapala & Xiang Guan

Applications

Motivation
Problem spaces are locally finite directed graphs. Exhaustive methods take too much time to identify the goal node if the problem space is large. Heuristic functions are used to estimate the remaining distance to the goal node.

Heuristic Path Algorithm
Efficient directed search is made possible by treating the heuristic as a function over the nodes of the graph. The Heuristic Path Algorithm (HPA) performs an enumerative search of the graph using the values of h (the heuristic function) and g (the distance to date):
g = number of edges from the start node to the node currently being enumerated by HPA.
h = estimated number of edges on the shortest path from the node currently being enumerated by HPA to the terminal node.
The evaluation function used in HPA: f(x) = (1 - w)*g(x) + w*h(x), where w is a weight and 0 <= w <= 1.
w = 0 => uniform-cost search
w = 0.5 => A* search
w = 1 => greedy best-first search
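The HPA evaluation function f(x) = (1 - w)*g(x) + w*h(x) can be sketched as a best-first search ordered by f. A minimal sketch, assuming unit edge costs as in the slide's definition of g (function and parameter names are illustrative):

```python
import heapq

def hpa_search(start, goal, successors, h, w=0.5):
    """Heuristic Path Algorithm sketch: best-first search ordered by
    f(x) = (1 - w) * g(x) + w * h(x).
    w = 0 -> uniform-cost search, w = 0.5 -> A* (up to scaling),
    w = 1 -> greedy best-first search."""
    frontier = [(w * h(start), 0, start, [start])]
    best_g = {start: 0}  # cheapest g found so far per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in successors(node):
            g2 = g + 1  # unit edge costs, matching the slide's g
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                f2 = (1 - w) * g2 + w * h(nxt)
                heapq.heappush(frontier, (f2, g2, nxt, path + [nxt]))
    return None  # no solution path exists in the explored space
```

On a toy line graph (integers with successors n-1, n+1 and h(n) = distance to the goal), all three weightings recover the direct path.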

Shortcoming of using only g (the distance-to-date term): it may exponentially broaden the search. Shortcoming of using only h (the heuristic term): it may exclude the optimal solution path. Combining a weighted sum of the two terms can solve search problems efficiently.

Well-defined heuristic models can be designed for various problem domains. Domain-specific heuristic estimators are highly accurate, but their efficiency needs verifiable proof. Perfect estimator: hp(x) = minimum path length over all paths from node x to the terminal node. HPA cannot always provide a decision procedure.

Theorem 1 (Undecidability). For some problem domains, hp cannot be computed and/or does not exist. Finite problem spaces can be enumerated to obtain h, but this computation is impractical for large problem spaces. A travelling salesman problem can have (n-1)! nodes, and these can be recursively evaluated by HPA provided the heuristic term is weighted less than 1.
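For a finite space, the enumeration the slide mentions amounts to a shortest-path computation backward from the terminal node; this is exactly the computation that becomes impractical when the space has on the order of (n-1)! nodes. A minimal sketch (names are illustrative):

```python
from collections import deque

def perfect_estimator(goal, predecessors):
    """Compute hp(x) = shortest-path length (in edges) from x to the goal
    by breadth-first search backward from the goal. Only feasible for
    finite problem spaces small enough to enumerate fully."""
    hp = {goal: 0}
    queue = deque([goal])
    while queue:
        node = queue.popleft()
        for prev in predecessors(node):
            if prev not in hp:  # first visit gives the shortest distance
                hp[prev] = hp[node] + 1
                queue.append(prev)
    return hp
```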

Theorem 2 (Recursively Enumerable). If a solution path exists for a problem, then HPA with hp(x) weighted less than 1 will find that solution. The argument can be applied inductively to show that an exhaustive search is carried out for any n (the length of the shortest path from the start node to the terminal node).

Theorem 3 (Decidability). If there exists a total computable h of bounded error for some domain, then HPA provides a decision procedure for that domain. Conversely, any problem domain for which there exists a decision procedure has a computable h of bounded error. Some infinite domains exist for which the error of any computable heuristic term is unbounded on an infinite number of nodes. Quite often the solution path is used as a schedule or plan that will be executed repeatedly, at a cost proportional to its length; at such times it is desirable or mandatory that the shortest solution path be found.

Theorem 4. HPA using the evaluation function f(x) = g(x) + h(x) (or equivalently w = 0.5), where h <= hp, finds a shortest solution path if one exists. This is the A* algorithm.
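A tiny illustration of the theorem (the graph and h values below are made up for the example): with an admissible h (h <= hp at every node), w = 0.5 returns the shortest path, while w = 1 may return a longer feasible path.

```python
import heapq

# Toy unit-cost graph: S-A-G is the shortest path (2 edges),
# but h lures a pure-heuristic search down S-B-C-G (3 edges).
GRAPH = {"S": ["A", "B"], "A": ["G"], "B": ["C"], "C": ["G"], "G": []}
H = {"S": 2, "A": 1, "B": 0, "C": 0, "G": 0}  # admissible: h <= hp everywhere

def search(w):
    """HPA with f = (1 - w)*g + w*h on the toy graph above."""
    frontier = [(w * H["S"], 0, "S", ["S"])]
    seen = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == "G":
            return path
        if seen.get(node, float("inf")) <= g:
            continue
        seen[node] = g
        for nxt in GRAPH[node]:
            g2 = g + 1
            heapq.heappush(frontier, ((1 - w) * g2 + w * H[nxt], g2, nxt, path + [nxt]))
    return None
```

Here `search(0.5)` finds the optimal 2-edge path S-A-G, while `search(1.0)` (greedy) returns the feasible but longer S-B-C-G.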

Theorem 5. Given h2 < h1 <= hp, HPA using f1 = g + h1 will expand only a subset of the nodes expanded using f2 = g + h2: if l(μ) < g + h2, then l(μ) < g + h1, where l(μ) is the length of the shortest solution path μ. If only a feasible solution is needed rather than an optimal one, the condition of Theorem 5 may be abandoned.

Theorem 6. HPA search using hp is optimal and will find the shortest path. Along the shortest path μ, g + hp = l(μ), and (2ω - 1)·hp is monotonically decreasing; thus at any time the f value along μ is always the smallest.

Theorem 7. Let k = the distance from the root node to the goal node, and let h have bounded error ε. Then for ω = 0.5, the worst-case search will expand 2εk + 1 nodes.

Theorem 8. For ε = k + 1 and ω = 1, the worst-case search will expand: If ω < 0.5, then a problem of sufficient length k can be found such that broadening the search is more expensive than with ω >= 0.5, regardless of the error bound.

Theorem 9. If HPA is searching a graph with unique solution paths, then for any given h, ω = 0.5 will visit at most the same number of nodes as ω = 1 in the worst case. If h1 and h2 have bounded errors ε1 and ε2 with ε2 > ε1, then in the worst case, for a given ω, h1 will visit at most the same number of nodes as h2.

Performance of the search model
Empirical performance measures can be collected to decide on appropriate weightings and features.

15-Puzzle Performance
Increasing ω increases the path length of the solutions and decreases the branching rate. In the ideal case of h = hp, as ω increases the number of nodes expanded decreases, until ω reaches 0.5. As ω increases, more confidence is put on the heuristic term, and searches continue to greater depth before being abandoned.
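The heuristic terms measured on the 15-puzzle are distance-based estimators; a standard example of such a term (an illustrative choice, not necessarily the paper's exact function) is the sum of tile Manhattan distances:

```python
def manhattan(state, goal):
    """Sum of tile Manhattan distances for a 4x4 sliding puzzle.
    States are tuples of 16 ints in row-major order; 0 marks the blank
    and is not counted. This is a standard admissible h for the
    15-puzzle (illustrative; the paper's exact term may differ)."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        gidx = goal.index(tile)
        # row distance + column distance on the 4x4 board
        total += abs(idx // 4 - gidx // 4) + abs(idx % 4 - gidx % 4)
    return total
```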

ω extended to a general linear form
This would allow a learning system to measure the performance of each term as a contribution to the accuracy of the distance estimation.
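The general linear form replaces the single weight ω with one weight per feature term, so f becomes a weighted sum of arbitrary node features whose weights a learning system could tune. A minimal sketch (names are illustrative):

```python
def linear_f(weights, features):
    """Generalized linear evaluation f(x) = sum_i w_i * t_i(x),
    extending the two-term f = (1 - w)*g + w*h to any number of
    feature terms t_i with tunable weights w_i."""
    return lambda x: sum(w * t(x) for w, t in zip(weights, features))

# The two-term HPA form is recovered with features (g, h) and
# weights (1 - w, w).
```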

Conclusion
Presented a model of heuristic search as a path-finding problem in a directed graph. Gave theoretical results for the model and intuition for these results. Evaluated the efficiency and accuracy of search as a consequence of the weighting ω in the evaluation function.

Reference
Pohl, Ira. "Heuristic search viewed as path finding in a graph." Artificial Intelligence 1.3-4 (1970): 193-204.