Artificial Intelligence


Artificial Intelligence: Informed search (Chapter 4, AIMA)

Romania (map of Romania with road distances; figure not shown in transcript)

Romania problem
Initial state: Arad. Find the minimum-distance path to Bucharest.

Informed search
Searching for the goal while knowing something about the direction in which it lies.
Evaluation function f(n): expand the frontier node with minimum f(n).
Heuristic function h(n): our estimate of the cost of the cheapest path from node n to the goal.

Example heuristic function: hSLD(n), the straight-line distance (km) from node n to Bucharest.

Greedy best-first search (GBFS)
Expand the node that appears to be closest to the goal: f(n) = h(n).
Incomplete (can follow infinite paths or get caught in loops).
Not optimal (unless the heuristic happens to be an exact estimate).
Space and time complexity ~ O(b^m), where b is the branching factor and m the maximum search depth.
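The expansion rule above fits in a few lines. A minimal sketch, assuming the standard AIMA Romania road map and straight-line-distance table (the slide's own map figure did not survive transcription, so the data below is a transcribed subset):

```python
import heapq

# Subset of the Romania road map (km) and straight-line distances to
# Bucharest, assumed from the standard AIMA figures.
GRAPH = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Zerind": {"Arad": 75, "Oradea": 71},
    "Oradea": {"Zerind": 71, "Sibiu": 151},
    "Timisoara": {"Arad": 118},
    "Sibiu": {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
    "Pitesti": {"Rimnicu Vilcea": 97, "Craiova": 138, "Bucharest": 101},
    "Craiova": {"Rimnicu Vilcea": 146, "Pitesti": 138},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}
H_SLD = {"Arad": 366, "Zerind": 374, "Oradea": 380, "Timisoara": 329,
         "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Craiova": 160, "Bucharest": 0}

def greedy_best_first(start, goal):
    """Always expand the frontier node with the smallest h(n)."""
    frontier = [(H_SLD[start], start, [start])]
    expanded = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in expanded:
            continue
        expanded.add(node)
        for nbr in GRAPH[node]:
            if nbr not in expanded:
                heapq.heappush(frontier, (H_SLD[nbr], nbr, path + [nbr]))
    return None

path = greedy_best_first("Arad", "Bucharest")
cost = sum(GRAPH[a][b] for a, b in zip(path, path[1:]))
print(path, cost)   # Arad -> Sibiu -> Fagaras -> Bucharest, 450 km: not optimal
```

Note that GBFS ignores g(n) entirely, which is why it commits to the Fagaras route (450 km) instead of the optimal one.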

Assignment: expand the nodes in greedy best-first order, beginning from Arad and going to Bucharest. The numbers on the map are the h(n) values.

Map (GBFS route shown): path cost = 450 km

Romania problem: GBFS. Initial state: Arad; find the minimum-distance path to Bucharest. Frontier h-values after expanding Arad: Zerind 374, Sibiu 253, Timisoara 329.

Romania problem: GBFS. After expanding Sibiu, the new frontier h-values are: Oradea 380, Arad 366, Fagaras 176, Rimnicu Vilcea 193.

Romania problem: GBFS. Expanding Fagaras (adding Sibiu, h = 253, back to the frontier) reaches Bucharest. Not the optimal solution: path cost = 450 km.

A and A* best-first search
A: improve greedy search by discouraging wandering off: f(n) = g(n) + h(n), where g(n) is the cost of the path from the start to node n. This penalizes steps that do not make real progress toward the goal.
A*: A search with an admissible heuristic, i.e. an h(n) that never overestimates the true cost of reaching the goal from node n.
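A sketch of A* under the same assumptions (the standard AIMA Romania subset, transcribed here so the snippet runs standalone); it finds the optimal 418 km route via Rimnicu Vilcea and Pitesti:

```python
import heapq

# Assumed Romania subset (km) and straight-line distances to Bucharest,
# transcribed from the standard AIMA figures.
GRAPH = {
    "Arad": {"Zerind": 75, "Sibiu": 140, "Timisoara": 118},
    "Zerind": {"Arad": 75, "Oradea": 71},
    "Oradea": {"Zerind": 71, "Sibiu": 151},
    "Timisoara": {"Arad": 118},
    "Sibiu": {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97, "Craiova": 146},
    "Pitesti": {"Rimnicu Vilcea": 97, "Craiova": 138, "Bucharest": 101},
    "Craiova": {"Rimnicu Vilcea": 146, "Pitesti": 138},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}
H_SLD = {"Arad": 366, "Zerind": 374, "Oradea": 380, "Timisoara": 329,
         "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Craiova": 160, "Bucharest": 0}

def a_star(start, goal):
    """Expand the frontier node with the smallest f(n) = g(n) + h(n)."""
    frontier = [(H_SLD[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if best_g.get(node, float("inf")) <= g:      # already reached cheaper
            continue
        best_g[node] = g
        for nbr, step in GRAPH[node].items():
            g2 = g + step
            heapq.heappush(frontier, (g2 + H_SLD[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

path, cost = a_star("Arad", "Bucharest")
print(path, cost)   # optimal 418 km route via Rimnicu Vilcea and Pitesti
```

Tracing the pops reproduces the expansion order on the slides: Arad (f = 366), Sibiu (393), Rimnicu Vilcea (413), Fagaras (415), Pitesti (417), Bucharest (418).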

Assignment: expand the nodes in A* order, beginning from Arad and going to Bucharest. The numbers on the map are the g(n) and h(n) values.

A* on the Romania problem: f(n) = g(n) + h(n)
The straight-line distance never overestimates the true road distance, so it is an admissible heuristic.
Rimnicu Vilcea is expanded before Fagaras. The gain from expanding Rimnicu Vilcea is too small, so A* backs up and expands Fagaras. None of the descendants of Fagaras is better than a path through Rimnicu Vilcea, so the algorithm returns to Rimnicu Vilcea and selects Pitesti.
The final path cost = 418 km; this is the optimal solution.

Romania problem: A*. Initial state: Arad; find the minimum-distance path to Bucharest. The optimal solution: path cost = 418 km.

Theorem: A* tree search is optimal
Let A and B be two nodes on the fringe, where A is a suboptimal goal node and B is a node on the optimal path. Let C* denote the optimal path cost.
Since A is a goal node, h(A) = 0, so f(A) = g(A) > C*.
Since h(n) is an admissible heuristic, f(B) = g(B) + h(B) ≤ C* < f(A).
Hence B is expanded before A: no suboptimal goal node will be selected before the optimal goal node.

Is A* graph search optimal?
The previous proof works only for tree search. For graph search we add the requirement of consistency (monotonicity): with c(n, m) the step cost of going from node n to its successor m, we require
h(n) ≤ c(n, m) + h(m) for every successor m of n, and h(goal) = 0.

A* graph search with a consistent heuristic is optimal
Theorem: if the consistency condition on h(n) is satisfied, then when A* expands a node n it has already found an optimal path to n.
This follows from the fact that consistency makes f(n) nondecreasing along any path in the graph: if m comes after n along a path, then
f(m) = g(m) + h(m) = g(n) + c(n, m) + h(m) ≥ g(n) + h(n) = f(n).
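The nondecreasing-f argument relies on the consistency inequality holding edge by edge, which is easy to check numerically. A minimal sketch, assuming the standard AIMA Romania road distances and straight-line table (the slide's map did not survive transcription):

```python
# Assumed subset of the Romania roads (km) and the straight-line-to-Bucharest
# heuristic, transcribed from the standard AIMA figures.
EDGES = [("Arad", "Zerind", 75), ("Arad", "Sibiu", 140), ("Arad", "Timisoara", 118),
         ("Zerind", "Oradea", 71), ("Oradea", "Sibiu", 151), ("Sibiu", "Fagaras", 99),
         ("Sibiu", "Rimnicu Vilcea", 80), ("Rimnicu Vilcea", "Pitesti", 97),
         ("Rimnicu Vilcea", "Craiova", 146), ("Pitesti", "Craiova", 138),
         ("Pitesti", "Bucharest", 101), ("Fagaras", "Bucharest", 211)]
H = {"Arad": 366, "Zerind": 374, "Oradea": 380, "Timisoara": 329, "Sibiu": 253,
     "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Craiova": 160,
     "Bucharest": 0}

def is_consistent(edges, h, goal):
    """h is consistent iff h(goal) = 0 and h(a) <= c(a, b) + h(b) for every
    road in both directions, i.e. |h(a) - h(b)| <= c(a, b) per undirected edge."""
    return h[goal] == 0 and all(abs(h[a] - h[b]) <= cost for a, b, cost in edges)

print(is_consistent(EDGES, H, "Bucharest"))   # True
```

Straight-line distance passes because driving between two cities can never be shorter than the straight line between them (triangle inequality).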

Proof
Suppose A* is about to expand node m, which it has reached along an alternative path B, while path A is the optimal path to m. Node n precedes m along A, and both n and m are on the fringe. Then:
Path A is optimal to m: gA(m) ≤ gB(m).
f is nondecreasing along the optimal path A: fA(n) ≤ fA(m).
A* is about to expand m rather than n: fB(m) ≤ fA(n).

Proof (continued)
But path A is optimal for reaching m, which is why gA(m) ≤ gB(m) and hence fA(m) ≤ fB(m). Combined with fB(m) ≤ fA(n) ≤ fA(m), every inequality must be an equality; thus either m = n or we have a contradiction.
⟹ A* graph search with a consistent heuristic always finds the optimal path.

A* properties
Optimal.
Complete.
Optimally efficient (no other optimal algorithm expands fewer nodes).
Memory requirement exponential... (bad).
A* expands all nodes with f(n) < C* and some nodes with f(n) = C*, where C* is the optimal path cost.

Romania problem: A*. Initial state: Arad; find the minimum-distance path to Bucharest. Some nodes are never tested. The optimal solution: path cost = 418 km.

Variants of A*
Iterative deepening A* (IDA*): depth-first iterative deepening, using the f-cost as the cutoff.
Recursive best-first search (RBFS): depth-first, but keeps track of the best f-value seen on paths above the current node.
Memory-bounded A* (MA*/SMA*): drops the oldest/worst nodes when memory fills up.
The best of these is SMA*.
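To illustrate the f-cost cutoff, here is a compact IDA* sketch on the same assumed Romania subset (data transcribed from the standard AIMA figures): each iteration is a depth-first search bounded by a threshold on f, and the next threshold is the smallest f-value that exceeded the current one.

```python
import math

# Assumed Romania subset (km) and straight-line distances to Bucharest.
EDGES = [("Arad", "Zerind", 75), ("Arad", "Sibiu", 140), ("Arad", "Timisoara", 118),
         ("Zerind", "Oradea", 71), ("Oradea", "Sibiu", 151), ("Sibiu", "Fagaras", 99),
         ("Sibiu", "Rimnicu Vilcea", 80), ("Rimnicu Vilcea", "Pitesti", 97),
         ("Rimnicu Vilcea", "Craiova", 146), ("Pitesti", "Craiova", 138),
         ("Pitesti", "Bucharest", 101), ("Fagaras", "Bucharest", 211)]
H = {"Arad": 366, "Zerind": 374, "Oradea": 380, "Timisoara": 329, "Sibiu": 253,
     "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Craiova": 160,
     "Bucharest": 0}
GRAPH = {}
for a, b, c in EDGES:
    GRAPH.setdefault(a, {})[b] = c
    GRAPH.setdefault(b, {})[a] = c

def ida_star(start, goal):
    def search(path, g, bound):
        node = path[-1]
        f = g + H[node]
        if f > bound:
            return f, None               # report the f that broke the bound
        if node == goal:
            return g, list(path)
        minimum = math.inf
        for nbr, cost in GRAPH[node].items():
            if nbr in path:              # avoid cycles on the current path
                continue
            t, found = search(path + [nbr], g + cost, bound)
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    bound = H[start]
    while True:
        t, found = search([start], 0, bound)
        if found is not None:
            return found, t
        if t == math.inf:
            return None, math.inf
        bound = t                        # smallest f that exceeded the old bound

path, cost = ida_star("Arad", "Bucharest")
print(path, cost)
```

Only the recursion stack is stored, so memory is linear in the path length, at the price of re-expanding nodes across iterations.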

Heuristic functions: 8-puzzle
h1 = the number of misplaced tiles.
h2 = the sum of the distances of the tiles from their respective goal positions (Manhattan distance).
Both are admissible. For the initial and goal states pictured on the slide, h1 = 5, h2 = 5.
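Both heuristics are a few lines each. The slide's board figures did not survive transcription, so the states below are an assumed example (the classic start/goal pair from Luger's textbook, with 0 denoting the blank):

```python
# Assumed example states; 0 = blank.
START = (2, 8, 3,
         1, 6, 4,
         7, 0, 5)
GOAL  = (1, 2, 3,
         8, 0, 4,
         7, 6, 5)

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Sum of Manhattan distances of each tile from its goal square."""
    total = 0
    for idx, tile in enumerate(state):
        if tile != 0:
            gidx = goal.index(tile)
            total += abs(idx // 3 - gidx // 3) + abs(idx % 3 - gidx % 3)
    return total

print(h1(START), h2(START))   # 4 5 for this particular pair of states
```

For this assumed pair the values are h1 = 4, h2 = 5; the exact numbers on the original slide depend on its (lost) board figures.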

Heuristic functions: 8-puzzle
Assignment: expand the first three levels of the search tree using A* and the heuristic h1, starting from the initial state pictured on the slide.

A* on the 8-puzzle with the h1 heuristic. Only the nodes in the shaded area are expanded; the goal is reached at node #13. Image from G. F. Luger, "Artificial Intelligence" (4th ed.), 2002.

Domination
It is obvious from the definitions that h1(n) ≤ h2(n) for all n; we say that h2 dominates h1. Every node expanded with h2 is also expanded with h1 (but not vice versa). Thus, h2 is the better heuristic.
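The domination claim can be spot-checked empirically: every misplaced tile is at Manhattan distance at least 1, so h2 ≥ h1 on every state. A sketch with self-contained definitions (goal layout assumed, 0 = blank):

```python
import random

GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)   # assumed goal layout; 0 = blank

def h1(state):
    """Misplaced-tile count (blank not counted)."""
    return sum(1 for s, g in zip(state, GOAL) if s != 0 and s != g)

def h2(state):
    """Manhattan-distance sum over tiles."""
    total = 0
    for idx, tile in enumerate(state):
        if tile != 0:
            gidx = GOAL.index(tile)
            total += abs(idx // 3 - gidx // 3) + abs(idx % 3 - gidx % 3)
    return total

# Each misplaced tile contributes >= 1 to h2, so h1 <= h2 on every board.
random.seed(0)
samples = [tuple(random.sample(range(9), 9)) for _ in range(1000)]
print(all(h1(s) <= h2(s) for s in samples))   # True
```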

Local search
In many problems one does not care about the path: only the goal state itself is of interest. Local search algorithms keep track of only the current state, which saves memory.

Example: N-queens. From an initial configuration (on an N × N chessboard), move to other configurations so that the number of conflicts is reduced.

Hill-climbing
Current node = n_i. Pick a neighbor node n_{i+1} and move there if it improves things, i.e. if Δf = f(n_i) − f(n_{i+1}) > 0 (minimizing f).

Heuristic: the number of pairs of queens that threaten each other. The best moves are marked (figure).
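A minimal hill-climbing sketch for N-queens using this pair-conflict heuristic (function names and the restart policy are my own; hill-climbing alone can get stuck in local minima, so the driver restarts from random boards):

```python
import random

def conflicts(board):
    """Pairs of queens that threaten each other; board[i] = row of the
    queen in column i (one queen per column)."""
    n = len(board)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if board[i] == board[j] or abs(board[i] - board[j]) == j - i)

def hill_climb(n=8, max_steps=100):
    """Steepest-descent: take the best single-queen move while it improves."""
    board = [random.randrange(n) for _ in range(n)]
    for _ in range(max_steps):
        current = conflicts(board)
        if current == 0:
            return board
        best, best_board = current, None
        for col in range(n):                    # try every single-queen move
            orig = board[col]
            for row in range(n):
                if row != orig:
                    board[col] = row
                    c = conflicts(board)
                    if c < best:
                        best, best_board = c, board[:]
            board[col] = orig
        if best_board is None:                  # local minimum: stuck
            return board
        board = best_board
    return board

random.seed(1)
solution = None
for _ in range(200):                            # random restarts
    board = hill_climb()
    if conflicts(board) == 0:
        solution = board
        break
print(solution)
```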

Simulated annealing
Current node = n_i. Pick a neighbor node n_{i+1} and move there if it improves things; if it does not, accept the move anyway with probability p = exp(−Δf / T), so small decreases relative to the "temperature" T are often accepted. This yields Boltzmann statistics. (Simulated annealing is a common and useful algorithm.)
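A generic sketch of the acceptance rule with a geometric cooling schedule; the toy objective and the parameter choices are my own, not from the slides:

```python
import math
import random

def simulated_annealing(f, neighbor, x0, t0=1.0, cooling=0.995, steps=10000):
    """Minimize f: always accept improvements; accept a worsening move of
    size delta with probability exp(-delta / T) while T cools down."""
    x, t = x0, t0
    for _ in range(steps):
        y = neighbor(x)
        delta = f(y) - f(x)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = y
        t *= cooling
    return x

# Toy usage: a 1-D objective with many local minima (assumed example).
random.seed(0)
f = lambda x: x * x + 3.0 * math.sin(5.0 * x)
best = simulated_annealing(f, lambda x: x + random.uniform(-0.5, 0.5), x0=5.0)
print(best, f(best))
```

Early on (large T) almost any move is accepted; as T shrinks, the rule degenerates into plain hill-climbing, which is exactly how the algorithm escapes local minima and then settles.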

Local beam search
Start with k random states. Expand all k states and evaluate their children; keep the k best children states. Repeat until a goal state is found.
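The keep-the-k-best loop can be sketched generically; the integer toy problem below is my own illustration (states must be hashable for the dedup step):

```python
def local_beam_search(score, neighbors, starts, k=3, iters=100):
    """Each round, pool the current states with all their successors and
    keep the k best (lower score is better; score 0 means goal)."""
    beam = sorted(set(starts), key=score)[:k]
    for _ in range(iters):
        pool = set(beam)
        for state in beam:
            pool.update(neighbors(state))
        beam = sorted(pool, key=score)[:k]
        if score(beam[0]) == 0:
            break
    return beam[0]

# Toy usage: walk integers toward a target value (assumed example).
score = lambda x: abs(x - 42)
neighbors = lambda x: [x - 10, x - 1, x + 1, x + 10]
best = local_beam_search(score, neighbors, starts=[0])
print(best)   # 42
```

Unlike k independent hill-climbs, the k slots compete in one shared pool, so the beam concentrates effort on the most promising region.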

Genetic algorithms
Start with k random states; do selective breeding by mating the best states (with mutation).
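A minimal genetic-algorithm sketch for N-queens: keep the best half as parents, breed children by one-point crossover, and mutate by moving one queen. The population size, mutation rate, and restart policy are my own choices, not from the slides:

```python
import random

def conflicts(board):
    """Pairs of queens attacking each other; board[i] = row of the queen in column i."""
    n = len(board)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if board[i] == board[j] or abs(board[i] - board[j]) == j - i)

def genetic(n=8, pop_size=100, generations=1500, mutation_rate=0.8):
    population = [tuple(random.randrange(n) for _ in range(n))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=conflicts)
        if conflicts(population[0]) == 0:
            return population[0]
        parents = population[:pop_size // 2]        # selective breeding: best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)            # one-point crossover
            child = list(a[:cut] + b[cut:])
            if random.random() < mutation_rate:     # mutation: move one queen
                child[random.randrange(n)] = random.randrange(n)
            children.append(tuple(child))
        population = parents + children
    population.sort(key=conflicts)
    return population[0]

random.seed(4)
best = genetic()
for _ in range(10):                                 # restart if a run stagnates
    if conflicts(best) == 0:
        break
    best = genetic()
print(best, conflicts(best))
```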