Introduction to Artificial Intelligence A* Search Ruth Bergman Fall 2002.


Best-First Search Review
Advantages
– Takes advantage of domain information (the heuristic) to guide the search
– Advances greedily toward the goal
Disadvantages
– Considers only the estimated cost to the goal, ignoring the cost already paid from the start to the current state
– A path can continue to look good according to the heuristic function even after it has become more costly than an alternate path
(Diagram omitted: two paths with unit step costs, showing the point at which the followed path becomes more costly than the alternate path.)

The A* Algorithm
Consider the overall cost of the solution:
f(n) = g(n) + h(n)
where g(n) is the path cost from the start to node n and h(n) is the heuristic estimate of the cost from n to the goal.
Think of f(n) as an estimate of the cost of the best solution going through node n.
(Diagram omitted: example path costs.)

The A* Algorithm
A*-Search(state, cost, h)
  open <- MakePriorityQueue((state, NIL, 0, h(state), h(state)))   ;; entries: (state, parent, g, h, f), ordered by f
  closed <- empty
  while open is not empty
    node <- pop(open); push(closed, node)
    if GoalTest(node) succeeds, return node
    for each child in succ(node)
      new-cost <- g(node) + cost(node, child)
      if child in open
        if new-cost < g(child)
          update(open, (child, node, new-cost, h(child), new-cost + h(child)))
      else if child in closed
        if new-cost < g(child)
          delete(closed, child)
          insert(open, (child, node, new-cost, h(child), new-cost + h(child)))
      else
        insert(open, (child, node, new-cost, h(child), new-cost + h(child)))
  return failure
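The pseudocode above can be sketched in Python. This is an illustrative implementation, not from the slides: the names (a_star, successors, goal_test) are ours, and it uses a best-g table with lazy deletion of stale queue entries instead of the explicit open/closed bookkeeping of the pseudocode.

```python
import heapq

def a_star(start, goal_test, successors, h):
    """A* search over an implicit graph.

    successors(n) yields (child, step_cost) pairs; h(n) estimates the
    cost from n to the goal. Returns (path, cost) or (None, inf).
    """
    # Frontier entries: (f, g, state, path-so-far), ordered by f.
    open_heap = [(h(start), 0, start, [start])]
    best_g = {start: 0}                       # cheapest g found so far
    while open_heap:
        f, g, state, path = heapq.heappop(open_heap)
        if goal_test(state):
            return path, g
        if g > best_g.get(state, float("inf")):
            continue                          # stale entry, skip
        for child, cost in successors(state):
            new_g = g + cost
            if new_g < best_g.get(child, float("inf")):
                best_g[child] = new_g
                heapq.heappush(open_heap,
                               (new_g + h(child), new_g, child, path + [child]))
    return None, float("inf")
```

With h = 0 this degenerates to uniform-cost search, matching the unification slide later in the deck.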

A* Search: Example
Travel: h(n) = straight-line distance(n, goal), goal = Bucharest
Oradea (380), Zerind (374), Arad (366), Sibiu (253), Fagaras (178), Rimnicu Vilcea (193), Pitesti (98), Timisoara (329), Lugoj (244), Mehadia, Dobreta (242), Craiova (160), Neamt (234), Iasi (226), Vaslui (199), Urziceni (80), Bucharest (0), Giurgiu (77), Hirsova (151), Eforie (161)
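To see how A* uses these h values, consider the choice it faces after expanding Sibiu when searching from Arad to Bucharest. The road costs used below (Arad–Sibiu 140, Sibiu–Fagaras 99, Sibiu–Rimnicu Vilcea 80) come from AIMA's Romania map, not from this slide:

```python
# h values for the two children of Sibiu, taken from the slide.
h = {"Fagaras": 178, "Rimnicu Vilcea": 193}
# g values via Arad -> Sibiu; road costs assumed from AIMA's Romania map.
g = {"Fagaras": 140 + 99, "Rimnicu Vilcea": 140 + 80}

f = {city: g[city] + h[city] for city in h}
best = min(f, key=f.get)   # A* expands the node with the lowest f
```

Although Fagaras has the lower h (and greedy best-first would take it), A*'s f = g + h prefers Rimnicu Vilcea, which lies on the optimal route to Bucharest.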

A* Search: Example

Admissible Heuristics
This is not quite enough; we also require that h be admissible:
– a heuristic h is admissible if h(n) ≤ h*(n) for all nodes n,
– where h*(n) is the actual cost of the optimal path from n to the goal.
Examples:
– travel distance: the straight-line distance must be no longer than the actual travel path
– tiles out of place: each move can reorder at most one tile
– distance of each out-of-place tile from its correct place: each move moves a tile at most one square toward its correct place
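When the true optimal costs h* are known for a set of nodes, admissibility can be checked directly. A minimal sketch with made-up toy values (the dict-based representation is ours):

```python
def is_admissible(h, h_star):
    """True iff h never overestimates: h(n) <= h*(n) for every node n
    whose true optimal cost-to-goal h*(n) is known.
    h and h_star are dicts mapping node -> cost."""
    return all(h[n] <= h_star[n] for n in h_star)
```

In practice h* is unknown (computing it is the whole problem), so admissibility is argued structurally, as in the slide's examples, rather than checked exhaustively.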

Optimality of A*
Let us assume that f is non-decreasing along each path:
– if not, simply use the parent's f value
– in that case, we can think of A* as expanding f-contours toward the goal; better heuristics make these contours more "eccentric" (stretched toward the goal)
Let G be an optimal goal state with path cost f*. Let G2 be a suboptimal goal state with path cost g(G2) > f*.
– Suppose, for contradiction, that A* picks G2 before G (i.e., A* is not optimal).
– Let n be a leaf node on the path to G when G2 is chosen.
– If h is admissible, then f* ≥ f(n).
– Since n was not chosen, it must be the case that f(n) ≥ f(G2).
– Therefore f* ≥ f(G2); and since G2 is a goal, h(G2) = 0, so f(G2) = g(G2), giving f* ≥ g(G2).
– But this contradicts g(G2) > f*.
– Thus our supposition is false and A* is optimal.

Completeness of A*
Suppose there is a goal state G with path cost f*.
– Intuitively: since A* expands nodes in order of increasing f, it must eventually expand the node G.
Case 1: A* stops and fails.
– Prove by contradiction that this is impossible.
– There exists a path from the initial state to the goal state.
– Let n be the last node expanded along that path.
– n has at least one child on the path, and that child must be on the open list.
– A* does not stop until the open list is empty (unless it finds a goal state). Contradiction.
Case 2: A* runs forever along an infinite path.
– Recall that every step cost satisfies cost(s1, s2) > ε for some ε > 0.
– Let n be a node on the solution path.
– After at most f(n)/ε expansions along the infinite path, its cumulative cost exceeds f(n), so A* expands n instead. Contradiction.

UCS, BFS, Best-First, and A*
f = g + h  →  A* search
h = 0  →  uniform-cost search
h = 0 with unit step costs (g = depth)  →  breadth-first search
g = 0  →  greedy best-first search
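These special cases can be captured by a single evaluation function. A small illustrative sketch (the mode names are ours; breadth-first is omitted since it is just uniform-cost with unit step costs):

```python
def evaluate(g, h, mode):
    """Evaluation function f(n) for the strategies on the slide."""
    if mode == "astar":     # f = g + h
        return g + h
    if mode == "ucs":       # h = 0, so f = g
        return g
    if mode == "greedy":    # g = 0, so f = h
        return h
    raise ValueError(f"unknown mode: {mode}")
```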

Heuristic Functions
Tic-tac-toe
(Board diagrams omitted.)

Most-Win Heuristics
(Board diagrams omitted: candidate moves scored by the number of winning lines still open through them — 3 wins, 4 wins, and 2 wins.)

Road Map Problem
(Diagram omitted: start s and goal g, with nodes n and n′ annotated with heuristic values h(s), h(n), h(n′) and path cost g(n′).)

Heuristics: 8 Puzzle

8 Puzzle
Exhaustive search: 3^20 ≈ 3 × 10^9 states (branching factor ≈ 3, typical solution depth ≈ 20)
Removing repeated states: 9! = 362,880 states
Use of heuristics
– h1: number of tiles that are in the wrong position
– h2: sum of the Manhattan distances of the tiles from their goal positions
(Example board omitted: h1 = 3, h2 = 1 + 2 + 2 = 5)
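The two 8-puzzle heuristics can be written directly. A sketch assuming states are length-9 tuples in row-major order with 0 for the blank (that encoding is our choice, not the slide's):

```python
def h1(state, goal):
    """Number of tiles in the wrong position (blank excluded)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Sum of Manhattan distances of each tile from its goal square
    on a 3x3 board."""
    total = 0
    for tile in range(1, 9):
        i, j = divmod(state.index(tile), 3)    # current row, column
        gi, gj = divmod(goal.index(tile), 3)   # goal row, column
        total += abs(i - gi) + abs(j - gj)
    return total
```

Both are admissible: each move repositions exactly one tile, by exactly one square.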

Effect of Heuristic Accuracy on Performance
A well-designed heuristic has an effective branching factor close to 1.
h2 dominates h1 iff h2(n) ≥ h1(n) for all n.
It is always better to use a heuristic function with higher values, as long as it does not overestimate.
Inventing heuristic functions
– the cost of an exact solution to a relaxed problem is a good heuristic for the original problem
– given a collection of admissible heuristics, h(n) = max(h1(n), h2(n), …, hk(n)) is admissible and dominates each of them
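The max-combination rule is a one-liner. A small sketch (the function name and the toy heuristics in the usage are ours):

```python
def max_heuristic(*hs):
    """Pointwise maximum of admissible heuristics: the result is still
    admissible (it never exceeds h*) and dominates every component."""
    return lambda n: max(h(n) for h in hs)
```

For the 8-puzzle, max_heuristic(h1, h2) simply equals h2, since Manhattan distance dominates the misplaced-tile count; the combination pays off when neither heuristic dominates the other everywhere.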

A* Summary
Completeness
– provided a finite branching factor and a minimum positive cost per operator
Optimality
– provided we use an admissible heuristic
Time complexity
– worst case is still O(b^d); for some heuristics and special cases we can do better
Space complexity
– worst case is still O(b^d), since A* keeps all generated nodes in memory

Relaxing Optimality
Goals:
– minimizing search cost
– a satisficing solution, i.e., bounded error in the solution
f(s) = (1 − w)·g(s) + w·h(s)
– g can be thought of as the uniform-cost (breadth-first) component
– w = 1 → greedy best-first search
– w = 0.5 → A* search (same node ordering, f values scaled by ½)
– w = 0 → uniform-cost search
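The weighted evaluation function is straightforward to write down; a sketch (the function name is ours):

```python
def weighted_f(g, h, w):
    """f(s) = (1 - w) * g(s) + w * h(s).
    w = 1   -> pure h: greedy best-first search
    w = 0.5 -> A*'s node ordering, with every f value halved
    w = 0   -> pure g: uniform-cost search"""
    return (1 - w) * g + w * h
```

For w > 0.5 the heuristic is over-weighted, so optimality is no longer guaranteed, but the search typically expands far fewer nodes: this is the cost/quality trade-off the slide calls satisficing.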

Iterative Deepening A* (IDA*)
Goals
– a storage-efficient algorithm that we can use in practice
– still complete and optimal
Modification of A*
– use an f-cost limit as the depth bound
– on each cycle, increase the threshold to the minimum f value that exceeded the previous cycle's limit
Each iteration expands all nodes inside the contour for the current f-cost, in the same order of node expansion as A*.

IDA* Algorithm
IDA*(state, h) returns solution
  f-limit <- h(state)
  loop do
    solution, f-limit <- DFS-Contour(state, f-limit)
    if solution is non-null, return solution
    if f-limit = ∞, return failure
  end

DFS-Contour(node, f-limit) returns (solution, new-limit)
  if f(node) > f-limit, return (null, f(node))
  if GoalTest(node), return (node, f-limit)
  next-f <- ∞
  for each node s in succ(node) do
    solution, new-f <- DFS-Contour(s, f-limit)
    if solution is non-null, return (solution, f-limit)
    next-f <- Min(next-f, new-f)
  end
  return (null, next-f)
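A runnable Python version of the IDA* pseudocode. The names are ours, successors(n) yields (child, cost) pairs as in the earlier A* sketch, and a cycle check along the current path is added so the sketch terminates on graphs as well as trees:

```python
import math

def ida_star(start, goal_test, successors, h):
    """Iterative-deepening A*: depth-first contours bounded by f = g + h.
    Returns a solution path, or None if no goal is reachable."""
    def dfs_contour(node, g, f_limit, path):
        f = g + h(node)
        if f > f_limit:
            return None, f                   # prune; report the overshoot
        if goal_test(node):
            return path, f_limit
        next_f = math.inf                    # smallest f beyond the limit
        for child, cost in successors(node):
            if child in path:                # avoid cycles on current path
                continue
            solution, new_f = dfs_contour(child, g + cost, f_limit,
                                          path + [child])
            if solution is not None:
                return solution, f_limit
            next_f = min(next_f, new_f)
        return None, next_f

    f_limit = h(start)
    while True:
        solution, f_limit = dfs_contour(start, 0, f_limit, [start])
        if solution is not None:
            return solution
        if f_limit == math.inf:              # nothing left beyond the limit
            return None
```

Memory use is proportional to the current path (O(bd)), at the price of re-expanding the shallower contours on every iteration.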

IDA* Properties
Complete: if the shortest path fits into memory
Optimal: if the shortest optimal path fits into memory
Time complexity: O(b^(2d)) in the worst case
Space complexity: O(bd)