Cost-based & Informed Search Chapter 4

Review §We’ve seen that one way of viewing search is generating a tree given a state-space graph (explicitly or implicitly). §We’ve begun exploring various brute-force or uninformed methods of examining nodes [differentiated by the order in which nodes are evaluated]. §The next wrinkle is to consider putting costs on the arcs.

Search with costs §We might not be interested in finding the goal in the fewest hops (arcs) l Instead, we might want to know the cheapest route, even if it requires us to go through more nodes (more hops) l Assumption: the cost of any arc is non-negative; usually, cost is thought of as distance from the start state

Uniform-cost search §Idea: the next node we evaluate is the node that is cheapest to reach in total cost, usually written g(n) = cost of reaching node n l the OPEN list is kept sorted by g(n) (usually thought of as distance away from the start node) l produces the shortest path in terms of total cost, provided all arc costs are non-negative [optimal] l BFS produces the shortest path in terms of arcs traversed; BFS can be thought of as uniform-cost search with all arc costs equal to one, i.e., g(n) = depth(n) l this is not a heuristic search because we aren’t estimating a value; we know the cost exactly
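
The idea above can be sketched in Python with OPEN as a priority queue ordered by g(n). The graph encoding ({node: {neighbor: arc_cost}}) and the function name are illustrative choices, not from the slides:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Uniform-cost search sketch: always expand the node with the
    smallest path cost g(n) on the OPEN list (a priority queue).
    `graph` maps each node to a dict {neighbor: arc_cost}; arc costs
    must be non-negative for the optimality result to hold."""
    frontier = [(0, start, [start])]        # (g(n), node, path so far)
    closed = set()
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path                  # cheapest path found
        if node in closed:
            continue
        closed.add(node)
        for neighbor, cost in graph.get(node, {}).items():
            if neighbor not in closed:
                heapq.heappush(frontier, (g + cost, neighbor, path + [neighbor]))
    return None                             # no path exists
```

Note that the goal test happens when a node is popped off OPEN, not when it is generated; testing at generation time can return a more expensive path.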

Informal proof §If a lower-cost path existed, the beginning of that path would already be in OPEN l But all of the paths in OPEN cost at least as much as the one just popped off l This is because OPEN is kept sorted by cost l Thus, since the cost function g(n) never decreases along a path, we know that we’ve found the cheapest path

Bi-directional search §Can search from the start node to the goal l this is what we’ve been doing up until now §Can search from the goal to the start node l PROLOG does this, for instance l requires that we can apply our operators “backwards” §Split time working from both directions l fewer nodes overall should be expanded l example

Comparison of brute-force search §Fig §Comparison factors l completeness: if a solution exists, is it guaranteed to be found? l optimality: if a solution exists, is the best solution found? l space & time complexity: how much memory does the search require, and how long (on average) will it take?

Summary of brute-force search §depth §breadth §iterative deepening §uniform cost §bidirectional

Growth of the CLOSED list §If OPEN can grow exponentially, can’t CLOSED as well? Yes l for reasonable problems, we can use hash-tables; this just postpones the problem, however l we can also omit the CLOSED list: CLOSED prevents infinite loops, but if we’re using iterative deepening we don’t have this worry; obviously, if we’re using DFS this problem does arise & we have to try to avoid it by disallowing self-loops, for example (this doesn’t solve the whole problem, of course)

Best-first search (informed search) §We use some heuristic, i.e., an evaluation (scoring) function, to determine which node to expand next §Incorporates domain-specific knowledge and hopefully reduces the number of nodes we need to expand §Just as in the brute-force searches, however, we have to worry about memory usage §Use book slides on greedy search
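
As a sketch of the greedy variant referenced above: order OPEN purely by the heuristic h(n), ignoring path cost. The graph and heuristic encodings below are illustrative assumptions:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search sketch: the OPEN list is a priority
    queue ordered purely by the heuristic value h(n), ignoring the
    path cost g(n).  `graph` maps node -> iterable of neighbors;
    `h` maps node -> estimated distance to the goal.  Often fast,
    but the path found is not guaranteed to be optimal."""
    frontier = [(h[start], start, [start])]
    closed = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)                    # CLOSED guards against loops
        for nb in graph.get(node, ()):
            if nb not in closed:
                heapq.heappush(frontier, (h[nb], nb, path + [nb]))
    return None
```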

Scoring functions §f(n) = g(n) + h(n) [general form] l g(n) = cost from the start node to node n (the current node) [we saw this in uniform-cost search]; important if we’re looking for the cheapest solution (optimality) and for completeness; can also be used to break ties among h(n) values l h(n) = estimated cost from n to the goal; the heuristic involves domain-specific knowledge and should be quick & easy to compute

Heuristically finding least cost sol’n §g(n) alone produces the least-cost solution, but it doesn’t use heuristics to focus our efforts §solution: f(n) = g(n) + h(n) l how far we’ve come plus how much farther we think we have to go l combine the cost function (keeps the search “honest”) with the heuristic function (directs the search) l given certain restrictions on the heuristic function, we will still find the least-cost solution without (hopefully) expanding as many nodes as uniform-cost search

Using g(n) + h(n) §Keep the OPEN list sorted by f(n) = g(n) + h(n) l however, if we come across some node N that is already on the OPEN list, we must check whether the new path to N is cheaper than the previous best path to N l if the new path to N is better, we delete the old entry & add the new one, because a better path has been found

Admissibility §Definition: if a search algorithm always produces an optimal solution path (if one exists), then the algorithm is called admissible. §If h(n) never over-estimates the actual distance from n to the goal, then best-first search using f(n) = g(n) + h(n) is admissible l this is A* search l of course, it has the same drawbacks in terms of space as the other full search algorithms
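
A* is the previous sketches combined: OPEN ordered by f(n) = g(n) + h(n), with the bookkeeping from the “Using g(n) + h(n)” slide done lazily by skipping stale OPEN entries. A minimal sketch, with the graph/heuristic encodings again illustrative:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* sketch: OPEN is ordered by f(n) = g(n) + h(n).
    `graph` maps node -> {neighbor: arc_cost}; `h` maps node ->
    estimated cost to the goal.  If h never overestimates
    (is admissible), the first goal popped off OPEN is optimal."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                          # cheapest g(n) found so far
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if g > best_g.get(node, float('inf')):
            continue                             # stale entry: a better path exists
        for nb, cost in graph.get(node, {}).items():
            g2 = g + cost
            if g2 < best_g.get(nb, float('inf')):
                best_g[nb] = g2                  # better path found: replace old entry
                heapq.heappush(frontier, (g2 + h[nb], g2, nb, path + [nb]))
    return None
```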

More properties of A* §Domination: if h1 and h2 are both admissible & h1(n) >= h2(n) for all n, then the nodes A* expands using h1 are a subset of those it expands using h2 l i.e., h1 leads to a more efficient search l extreme case: h2(n) = 0, i.e., no domain knowledge; any domain knowledge engineered into h1 would improve the search l we say that h1 dominates h2

Robustness §If h “rarely” overestimates the real distance by more than s, then A* will “rarely” find a solution whose cost is more than s greater than the optimal l thus, it is useful to have a “good guess” at h

Completeness §A* will terminate (with the optimal solution) even in infinite spaces, provided a solution exists & all arc costs are positive

Monotone restriction §If for all n and m, where m is a child of n, h(n) - h(m) <= cost(n, m) [the actual cost of going from n to m] l alternatively: h(m) >= h(n) - cost(n, m) l no node looks artificially distant from a goal §then whenever we visit a node, we’ve gotten there by the shortest path l no need for all of the extra bookkeeping to check if there are better paths to some node l extreme case: h(n) = 0 [then A* is just uniform-cost search]
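
The monotone restriction is easy to verify mechanically by checking the inequality on every arc. A sketch, assuming the same {node: {neighbor: arc_cost}} encoding as before:

```python
def is_consistent(graph, h):
    """Check the monotone (consistency) restriction: for every arc
    n -> m with cost cost(n, m), require h(n) - h(m) <= cost(n, m).
    `graph` maps node -> {neighbor: arc_cost}; `h` maps node ->
    heuristic value."""
    return all(h[n] - h[m] <= cost
               for n, arcs in graph.items()
               for m, cost in arcs.items())
```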

Creating heuristics [h(n)] §Domain- or task-specific “art” §A problem with fewer restrictions on the operators is called a relaxed problem §It is often the case that the cost of an exact solution to a relaxed problem is a good heuristic for the original problem §example (8-puzzle)
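
For the 8-puzzle example above, relaxing “a tile may only slide into the adjacent blank” to “a tile may move to any adjacent square” makes the relaxed problem exactly solvable: its cost is the sum of the tiles’ Manhattan distances to their goal squares. A sketch (the flat 9-tuple state encoding is an assumption):

```python
def manhattan_distance(state, goal):
    """8-puzzle heuristic from a relaxed problem: sum over all tiles
    of the Manhattan (city-block) distance from the tile's current
    square to its goal square.  States are 9-tuples read row by row
    on a 3x3 board, with 0 marking the blank."""
    total = 0
    for tile in range(1, 9):                 # the blank doesn't count
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```

This heuristic is admissible, since every real move shifts one tile one square, so the true cost can never be smaller than the relaxed cost.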

Example heuristic §Domain: map §Heuristic: Euclidean distance l note that this will probably be an underestimation of the actual distance since most roads are not “as the crow flies”

Hill-climbing §Interesting problems are going to have very large search spaces & we may not be able to keep track of all possibilities on the OPEN list §Further, sometimes we’re not interested in the single best answer (finding it might be NP-complete), but a “reasonably good” answer §One method of doing this is to follow the best arc out of a given state only if the next state is (judged to be) better than the current state & to disregard all the other child nodes

Hill-climbing algorithm (partial) §Expanding a node X l put X on CLOSED l let s = score(X) (usually smaller scores are better [estimating how far away from the goal we are], so we’re really doing “valley descending”) l consider the previously unvisited children of X: let C = best (lowest-scoring) child, let r = score(C) l if r < s, then OPEN = { C }, else OPEN = { } –i.e., continue as long as progress is being made –if no progress, simply stop (no backtracking)
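
The partial algorithm above can be sketched directly; the function name and the callable parameters (a child generator and a scoring function) are illustrative:

```python
def hill_climb(start, children, score):
    """Hill-climbing ('valley descending') sketch following the slide:
    move to the lowest-scoring child only if it improves on the current
    score; otherwise stop -- no backtracking, no OPEN list to grow."""
    current = start
    while True:
        kids = children(current)
        if not kids:
            return current                   # dead end: nowhere to go
        best = min(kids, key=score)          # C = best (lowest-scoring) child
        if score(best) < score(current):     # r < s: progress made
            current = best
        else:
            return current                   # local minimum: stop
```

Using it to minimize x^2 over the integers with moves x-1 and x+1 walks straight to 0, since every step improves the score.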

Greedy algorithms §Hill-climbing is a greedy algorithm l It does what is locally optimal, even though a “less good” move may pay higher dividends in the future.

Finding a local peak

Uses of hill-climbing §Hill-climbing seems rather naïve, but it’s actually useful & widely used in the following types of situations l evaluating or generating a node is costly & we can only do a few l locally optimal solutions are OK (satisficing vs. optimal), e.g., decision trees and neural networks l we are constrained by time (the world might change while we’re thinking, for example) l we may not have an explicit goal test

Local maxima (optimality) §Local maximum l all local moves (i.e., all single-arc traversals from the current state) lead to lower-valued states than the current state, but there is a global maximum elsewhere l this is the problem with hill-climbing: it may miss the global maximum

Beam search §Beam search also addresses the problem of large search spaces (keeping OPEN tractable) §However, instead of putting only the single best node on OPEN, and only if it is better than the current node l we put the k best successors on OPEN l we put them on OPEN regardless of whether they are better than the current node (this allows for “downhill” moves) l usually this technique is used with best-first search, but it could be used with other techniques too -- in general, simply limit the OPEN list to k nodes
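
One simple way to realize the slide’s idea: at each step expand everything on OPEN and keep only the k lowest-scoring successors, downhill or not. Tracking the best state seen so far is an added convenience, not part of the slide’s description:

```python
def beam_search(start, children, score, k, steps):
    """Beam-search sketch: OPEN holds at most k states.  Each step,
    expand every state on OPEN and keep the k lowest-scoring
    successors, whether or not they beat their parents ('downhill'
    moves are allowed).  Returns the best state seen within `steps`."""
    frontier = [start]
    best = start
    for _ in range(steps):
        successors = [c for s in frontier for c in children(s)]
        if not successors:
            break                                # nothing left to expand
        frontier = sorted(successors, key=score)[:k]   # beam of width k
        best = min([best] + frontier, key=score)
    return best
```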

Partial v. full search tradeoffs §Partial -- don’t save the whole space l hill-climbing, beam search l less storage needed (OPEN is limited in size) l faster; fewer nodes to search through l might miss (optimal) solutions since it is only considering part of the search space §Full -- ability to search the whole space l DFS, BFS, best-first search

Simulated Annealing §Idea: avoid getting stuck in a local minimum by occasionally taking a “bad” step l question: how often should we do this? Too often, and it will be just like a random walk; instead, reduce the probability of a bad step as time goes on –analogous to molecules cooling –when heated, they move about randomly –as they cool, bonds start to form between them & the randomness decreases

SA Algorithm

SA algorithm §Extreme cases l temp is hot: e^(-Δenergy / temp) goes to 1 l temp is cold: e^(-Δenergy / temp) goes to 0; the search becomes just like hill-climbing, with no randomness any more
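
The acceptance rule above can be sketched as follows; the cooling-schedule representation (an explicit list of decreasing temperatures) and the parameter names are assumptions:

```python
import math
import random

def simulated_annealing(start, neighbor, energy, temps):
    """Simulated-annealing sketch: always accept a lower-energy move;
    accept an uphill move with probability e^(-Δenergy / temp), which
    approaches 1 when temp is hot (random walk) and 0 when temp is
    cold (plain hill-climbing).  `temps` is a decreasing cooling
    schedule, one temperature per proposed move."""
    current = start
    for temp in temps:
        candidate = neighbor(current)
        delta = energy(candidate) - energy(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate              # downhill, or lucky uphill step
    return current
```

With a near-zero temperature the uphill probability underflows to 0 and the loop behaves exactly like hill-climbing, matching the cold extreme on the slide.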

Example f(n) = g(n) + h(n)

Search issues §Optimal v. any solution §Huge search spaces v. optimality §Cost of executing the search (the costs on the arcs) v. cost of finding the solution (arc cost = 1) §Problem-specific knowledge v. brute force §Implicit v. explicit goal states