The Rich/Knight Implementation


The Rich/Knight Implementation
A node consists of:
- the state
- g, h, f values
- a list of successors
- a pointer to its parent
OPEN is the list of nodes that have been generated and had h applied, but not yet expanded; it can be implemented as a priority queue.
CLOSED is the list of nodes that have already been expanded.

Rich/Knight
1) /* Initialization */
OPEN <- the start node
Initialize the start node: g = 0, h = h(start), f = g + h
CLOSED <- empty list

Rich/Knight
2) repeat until goal (or a time or space limit is reached):
- if OPEN is empty, fail
- BESTNODE <- the node on OPEN with the lowest f
- if BESTNODE is a goal, exit and succeed
- remove BESTNODE from OPEN and add it to CLOSED
- generate the successors of BESTNODE

Rich/Knight
for each successor s do:
1. set its parent field
2. compute g(s)
3. if there is a node OLD on OPEN with the same state info as s:
   - add OLD to successors(BESTNODE)
   - if g(s) < g(OLD), update OLD and throw out s

Rich/Knight/Tanimoto
4. if s is not on OPEN and there is a node OLD on CLOSED with the same state info as s:
   - add OLD to successors(BESTNODE)
   - if g(s) < g(OLD), update OLD, remove it from CLOSED, put it back on OPEN, and throw out s

Rich/Knight
5. if s was not on OPEN or CLOSED:
   - add s to OPEN
   - add s to successors(BESTNODE)
   - calculate g(s), h(s), f(s)
end of repeat loop
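
Below is a minimal Python sketch of the loop above. It is not Rich and Knight's own code: the problem interface (successors, cost, h, is_goal) and the dictionary of parent pointers are assumptions made for illustration.

import heapq
import itertools

def a_star(start, successors, cost, h, is_goal):
    """Sketch of the Rich/Knight loop: OPEN holds generated but unexpanded
    nodes ordered by f = g + h; CLOSED holds already-expanded states."""
    g = {start: 0}                          # best g found so far, per state
    parent = {start: None}                  # pointer to parent
    counter = itertools.count()             # tie-breaker for the heap
    open_heap = [(h(start), next(counter), start)]   # OPEN as a priority queue
    open_set = {start}
    closed = set()                          # CLOSED
    while open_heap:                        # step 2: repeat until goal or failure
        _, _, best = heapq.heappop(open_heap)
        if best not in open_set:            # stale heap entry, already handled
            continue
        if is_goal(best):                   # exit and succeed
            path = []
            while best is not None:
                path.append(best)
                best = parent[best]
            return list(reversed(path))
        open_set.discard(best)              # move BESTNODE from OPEN to CLOSED
        closed.add(best)
        for s in successors(best):          # steps 3-5 for each successor s
            new_g = g[best] + cost(best, s)
            if s in open_set or s in closed:
                if new_g < g[s]:            # found a cheaper path to an old state
                    g[s] = new_g
                    parent[s] = best
                    closed.discard(s)       # reopen it if it was on CLOSED
                    open_set.add(s)
                    heapq.heappush(open_heap, (new_g + h(s), next(counter), s))
            else:                           # brand-new state
                g[s] = new_g
                parent[s] = best
                open_set.add(s)
                heapq.heappush(open_heap, (new_g + h(s), next(counter), s))
    return None                             # OPEN is empty: fail

One deviation worth noting: this sketch keeps sets of states for OPEN and CLOSED instead of the explicit successor lists per node that Rich/Knight maintain.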

A* Extra Examples
To show what happens when:
- A* encounters a node whose state is already on OPEN
- A* encounters a node whose state is already on CLOSED

Thought Question
Do you have to keep the list of successors for each node through the whole search?
- Rich/Knight did (why?)
- Tanimoto did not
If you keep it, what might it be used for?

A* Example
Newly generated node s, but OLD on OPEN has the same state.
Shortest path in Romania, but the goal is now Giurgiu, not Bucharest.
Straight-line distances to Giurgiu (I made them up):
Arad 390, Sibiu 275, Fagaras 200, Rimnicu 205, Pitesti 125, Craiova 120, Bucharest 80, Drobeta 240

[Slide diagram: the search tree for this example; the values below are the f = g + h labels from the diagram, listed in the order A* expands nodes.]
- Arad: f = 390
- Expand Arad: Sibiu 415 = 140 + 275 (forget the other two successors)
- Expand Sibiu: Fagaras 439 = 239 + 200, Rimnicu 425 = 220 + 205
- Expand Rimnicu: Pitesti 442 = 317 + 125, Craiova 486 = 366 + 120
- Expand Fagaras: Bucharest 530 = 450 + 80
- Expand Pitesti: Bucharest 498 = 418 + 80 BETTER (OLD on OPEN, so OLD is updated and the new node is thrown out), Craiova 575 = 455 + 120 WORSE
- Expand Craiova: Drobeta 726 = 486 + 240, Rimnicu 717 = 512 + 205 WORSE, Pitesti 629 = 504 + 125 WORSE
- Expand Bucharest: Giurgiu 508 = 508 + 0, Urziceni etc.
- Giurgiu is the GOAL.

A* Example (abstract, pretend it's time)
Newly generated node s, but OLD on CLOSED has the same state.
[Slide diagram: an abstract search tree with edge costs and f = g + h at each node.]
- Expand A: B 9 = 5 + 4, C 16 = 6 + 10
- Expand B: D 14 = 9 + 5, E 19 = 13 + 6
- Expand D: F 24 = 19 + 5, G 24 = 19 + 5
- Expand C: it regenerates D with 13 = 8 + 5 BETTER, but OLD D is on CLOSED, so OLD is updated, removed from CLOSED, and put back on OPEN.

The Heuristic Function h
- If h is a perfect estimator of the true cost, then A* will always pick the correct successor, with no search.
- If h is admissible, A* with TREE-SEARCH is guaranteed to give the optimal solution.
- If h is also consistent, then GRAPH-SEARCH is optimal.
- If h is not admissible, there are no guarantees, but it can work well if h is not often greater than the true cost.
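
A quick way to sanity-check the two conditions above is on a toy graph. Everything in this snippet (the edges, the costs, the h-values) is made up for illustration.

edges = {('A', 'B'): 2, ('B', 'G'): 3, ('A', 'C'): 5, ('C', 'G'): 1}
true_cost = {'A': 5, 'B': 3, 'C': 1, 'G': 0}   # cheapest cost to the goal G
h = {'A': 4, 'B': 3, 'C': 1, 'G': 0}           # a candidate heuristic

# admissible: h never overestimates the true remaining cost
admissible = all(h[n] <= true_cost[n] for n in h)
# consistent: h(n) <= c(n, n') + h(n') for every edge (n, n')
consistent = all(h[n] <= c + h[m] for (n, m), c in edges.items())
print(admissible, consistent)                  # True True for these numbers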

Complexity of A*
- Time complexity is exponential in the length of the solution path unless, for the "true" distance h*, |h(n) - h*(n)| <= O(log h*(n)), which we can't guarantee. But this is AI, computers are fast, and a good heuristic helps a lot.
- Space complexity is also exponential, because A* keeps all generated nodes in memory.
- (Big-Theta notation says two functions have about the same growth rate.)

Why not always use A*?
- Pros: with an admissible (and, for graph search, consistent) h, A* is complete and returns an optimal solution.
- Cons: time and, above all, memory are exponential in the worst case, since every generated node is kept; this is what the memory-bounded variants below address.

Solving the Memory Problem
- Iterative-Deepening A*
- Recursive Best-First Search
- Depth-First Branch-and-Bound
- Simplified Memory-Bounded A*

Iterative-Deepening A*
Like iterative-deepening depth-first search, but:
- the depth bound is modified to be an f-limit
- start with f-limit = h(start)
- prune any node if f(node) > f-limit
- the next f-limit is the minimum cost (f-value) of any node pruned on the previous pass
[Slide diagram: a small tree with nodes a-f and the pruning contours for f-limit = 15 and f-limit = 21.]
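
As a sketch of how the f-limit replaces the depth bound, here is a compact iterative-deepening A* in Python, reusing the same assumed successors/cost/h/is_goal interface as the earlier A* sketch.

import math

def ida_star(start, successors, cost, h, is_goal):
    """Repeated depth-first searches, pruning any node with f > f_limit."""
    f_limit = h(start)                     # start with f-limit = h(start)
    while True:
        next_limit = math.inf              # smallest f seen beyond the limit

        def dfs(node, g, path):
            nonlocal next_limit
            f = g + h(node)
            if f > f_limit:                # prune, but remember f for next pass
                next_limit = min(next_limit, f)
                return None
            if is_goal(node):
                return path
            for s in successors(node):
                if s in path:              # avoid cycles along the current path
                    continue
                found = dfs(s, g + cost(node, s), path + [s])
                if found is not None:
                    return found
            return None

        result = dfs(start, 0, [start])
        if result is not None:
            return result
        if next_limit == math.inf:         # nothing was pruned: no solution
            return None
        f_limit = next_limit               # next f-limit = min f of pruned nodes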

Recursive Best-First Search
- Use a variable called f-limit to keep track of the best alternative path available from any ancestor of the current node.
- If f(current node) > f-limit, back up to try that alternative path.
- As the recursion unwinds, replace the f-value of each node along the path with the backed-up value: the best f-value of its children.
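
The same idea can be sketched in Python; again the successors/cost/h/is_goal interface is an assumption for illustration, not part of the slide.

import math

def rbfs(start, successors, cost, h, is_goal):
    """Recursive best-first search: recurse on the best child while its f stays
    below the best alternative f available from any ancestor (f_limit)."""

    def search(node, g, f_node, f_limit, path):
        if is_goal(node):
            return path, f_node
        children = []
        for s in successors(node):
            if s in path:                       # skip cycles on the current path
                continue
            g_s = g + cost(node, s)
            # a child's f is at least its parent's backed-up f
            children.append([max(g_s + h(s), f_node), g_s, s])
        if not children:
            return None, math.inf
        while True:
            children.sort(key=lambda c: c[0])
            best = children[0]
            if best[0] > f_limit:               # back up to the alternative path
                return None, best[0]
            alternative = children[1][0] if len(children) > 1 else math.inf
            # the recursive call returns the backed-up value, which replaces
            # this child's f before the list is re-sorted
            result, best[0] = search(best[2], best[1], best[0],
                                     min(f_limit, alternative), path + [best[2]])
            if result is not None:
                return result, best[0]

    solution, _ = search(start, 0, h(start), math.inf, [start])
    return solution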

Simplified Memory-Bounded A*
- Works like A* until memory is full.
- When memory is full, drop the leaf node with the highest f-value (the worst leaf), keeping track of that worst value in the parent.
- Complete if any solution is reachable.
- Optimal if any optimal solution is reachable; otherwise, it returns the best reachable solution.

Performance of Heuristics
How do we evaluate a heuristic function? Use the effective branching factor b*:
if A* using h finds a solution at depth d using N nodes, then the effective branching factor is the b* for which
N = 1 + b* + (b*)^2 + ... + (b*)^d
[Slide diagram: example of a uniform tree with b = 3 and d = 2, with levels labeled depth 0, depth 1, depth 2.]

Table of Effective Branching Factors

b    d     N
2    2     7
2    5     63
3    2     13
3    5     364
3    10    88,573
6    2     43
6    5     9,331
6    10    72,559,411

How might we use this idea to evaluate a heuristic?
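
One way to get b* from a measured N and d is to solve the polynomial numerically. Here is a small sketch (the function names are made up) that reproduces a row of the table by bisection.

def nodes(b, d):
    """1 + b + b^2 + ... + b^d: nodes in a uniform tree of depth d."""
    return sum(b ** i for i in range(d + 1))

def effective_branching_factor(N, d, tol=1e-6):
    """Solve N = 1 + b* + (b*)^2 + ... + (b*)^d for b* by bisection."""
    lo, hi = 1.0, float(N)                 # b* must lie between 1 and N
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if nodes(mid, d) < N:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Matches the table above: N = 63 nodes at depth d = 5 gives b* = 2.
print(round(effective_branching_factor(63, 5), 3))   # -> 2.0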

How Can Heuristics be Generated?
- From relaxed problems, which have fewer constraints but give you ideas for the heuristic function.
- From subproblems that are easier to solve and whose exact solution costs are known.
- The cost of solving a relaxed problem or subproblem is not greater than the cost of solving the full problem.
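
For instance (this example is not on the slide), two standard 8-puzzle heuristics come from relaxing its rules: if a tile could jump anywhere in one move, the cost of the relaxed problem is the number of misplaced tiles; if a tile could slide over occupied squares, it is the sum of Manhattan distances.

def misplaced_tiles(state, goal):
    """Relaxation: a tile can move to any square in one step (0 = blank)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal):
    """Relaxation: a tile can slide over occupied squares."""
    total = 0
    for tile in range(1, 9):
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)       # states are row-major 3x3 boards
state = (1, 2, 3, 4, 0, 6, 7, 5, 8)
print(misplaced_tiles(state, goal), manhattan(state, goal))   # 2 2

Both relaxed costs are never greater than the true solution cost, which is exactly the admissibility argument in the last bullet above.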

Still may not succeed
In spite of the use of heuristics and various smart search algorithms, not all problems can be solved. Some search spaces are just too big for a classical search, so we have to look at other kinds of tools.

HW 2: A* Search
A robot moves in a 2D space. It starts at a start point (x0, y0) and wants to get to a goal point (xg, yg). There are rectangular obstacles in the space. The robot cannot go THROUGH the obstacles; it can only move to corners of the obstacles, i.e., the search space is limited.

Simple Data Set
How can the robot get from (0,0) to (9,6)? What is the minimal-length path?
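
A minimal sketch of how the HW 2 state space and heuristic might be set up (the rectangle format, example obstacle, and function names are illustrative assumptions, not the assignment's required interface; the line-of-sight test between two points is left out, since that geometric check is part of the exercise):

import math

def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def candidate_nodes(start, goal, obstacles):
    """Points the robot may stop at: start, goal, and all obstacle corners.
    Obstacles are assumed to be axis-aligned (xmin, ymin, xmax, ymax) boxes."""
    pts = {start, goal}
    for xmin, ymin, xmax, ymax in obstacles:
        pts.update({(xmin, ymin), (xmin, ymax), (xmax, ymin), (xmax, ymax)})
    return pts

def h(p, goal):
    """Straight-line distance never overestimates the remaining path length,
    so it is an admissible heuristic for this problem."""
    return euclidean(p, goal)

start, goal = (0, 0), (9, 6)
obstacles = [(2, 1, 4, 3)]                 # one made-up rectangle
print(sorted(candidate_nodes(start, goal, obstacles)))
print(h(start, goal))                      # sqrt(9**2 + 6**2) ~ 10.82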

More next time.