CS 416 Artificial Intelligence Lecture 5 Informed Searches
Administrivia
Visual Studio: if you've never been granted permission before, you should receive access shortly.
Assign 1: set up your submit account password.
Visit: you should see that you're enrolled; Toolkit gives access to the class gradebook.

Example (figure): a small graph with nodes A, B, C, and Goal, annotated with heuristic values h(n), step costs c(n, n'), and f-values f(n).

A* w/o Admissibility (figure: nodes A, B, C, Goal)
B = goal, f(B) = 10. Are we done? No: we must still explore C.

A* w/ Admissibility (figure: nodes A, B, C, Goal)
B = goal, f(B) = 10. Are we done? Yes: h(C) indicates a best possible path cost of 95, so no path through C can improve on B.

A* w/o Consistency (figure): node values 0 (A), 101 (B), 200 (C), 100 (D).

A* w/ Consistency (figure): node values 0 (A), 101 (B), 105 (C), 100 (D).
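To make the A* examples above concrete, here is a minimal sketch of A* with a priority queue, run on a hypothetical graph loosely modeled on the A/B/C/Goal figures (the edge costs and heuristic values below are invented for illustration, not taken from the slides):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* best-first search with f(n) = g(n) + h(n).

    neighbors(state) yields (successor, step_cost) pairs; h(state) is the
    heuristic estimate of remaining cost. With an admissible h, the first
    goal popped from the priority queue is optimal."""
    frontier = [(h(start), 0, start, [start])]     # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        if g > best_g.get(state, float("inf")):
            continue                               # stale queue entry
        for nxt, cost in neighbors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Hypothetical example graph and admissible heuristic values
GRAPH = {"A": [("B", 5), ("C", 3)], "B": [("Goal", 5)], "C": [("Goal", 4)]}
HEUR = {"A": 8, "B": 5, "C": 4, "Goal": 0}
```

With these numbers, `a_star("A", "Goal", lambda s: GRAPH.get(s, []), HEUR.get)` returns `(7, ["A", "C", "Goal"])`: the cheaper path through C is found even though B is also a goal-adjacent node.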

Pros and Cons of A*
A* is optimal and optimally efficient, yet still slow and bulky (space kills first):
- The number of nodes grows exponentially with the distance to the goal. This is actually a function of the heuristic, but all heuristics make mistakes.
- A* must search all nodes within the goal contour.
- Finding suboptimal goals is sometimes the only feasible solution.
- Sometimes better heuristics are non-admissible.

Memory-bounded Heuristic Search
Try to reduce memory needs while still taking advantage of the heuristic to improve performance:
- Iterative-deepening A* (IDA*)
- Recursive best-first search (RBFS)
- SMA*

Iterative Deepening A*
Remember, as in uninformed search, iterative deepening was a depth-first search where the maximum depth was iteratively increased. As an informed search, we again perform depth-first search, but expand only nodes with f-cost less than or equal to the bound; the next bound is the smallest f-cost that exceeded the bound on the previous iteration.
- What happens when f-cost is real-valued?

Recursive best-first search
Depth-first search combined with the best alternative:
- Keep track of options along the fringe.
- As soon as the current depth-first exploration becomes more expensive than the best fringe option, back up to the fringe, updating node costs along the way.

Recursive best-first search (figure: route-finding example through Fagaras and Pitesti)
- Each box contains the f-value of the best alternative path available from any ancestor.
- First, explore the path to Pitesti.
- Backtrack to Fagaras and update Fagaras.
- Backtrack to Pitesti and update Pitesti.
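The backtrack-and-update behavior can be sketched in a simplified RBFS (in the spirit of the AIMA algorithm, but not the lecture's code); the graph and heuristic values are the same hypothetical ones used in the earlier A* sketch:

```python
import math

def rbfs(start, goal, neighbors, h):
    """Simplified recursive best-first search: depth-first expansion that
    backs up to the best fringe alternative whenever the current path
    exceeds it, recording the backed-up f-value so abandoned subtrees
    can be re-expanded later."""
    def search(state, g, f_limit, path):
        if state == goal:
            return list(path), g
        succs = []
        for nxt, cost in neighbors(state):
            if nxt in path:
                continue
            g2 = g + cost
            # inherit the parent's f-value when larger (backed-up cost)
            succs.append([max(g2 + h(nxt), g + h(state)), g2, nxt])
        if not succs:
            return None, math.inf
        while True:
            succs.sort()                   # best (lowest f) first
            best = succs[0]
            if best[0] > f_limit or best[0] == math.inf:
                return None, best[0]       # back up; report revised f
            alternative = succs[1][0] if len(succs) > 1 else math.inf
            path.append(best[2])
            result, best[0] = search(best[2], best[1],
                                     min(f_limit, alternative), path)
            path.pop()
            if result is not None:
                return result, best[0]

    result, _ = search(start, 0, math.inf, [start])
    return result

# Hypothetical example graph and admissible heuristic values
GRAPH = {"A": [("B", 5), ("C", 3)], "B": [("Goal", 5)], "C": [("Goal", 4)]}
HEUR = {"A": 8, "B": 5, "C": 4, "Goal": 0}
```

The `best[0] = ...` assignment is the key step: when a recursion backs up, the revised f-value is stored on the abandoned successor, so the search can return to it later, just as the boxes in the figure are updated.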

Quality of Iterative Deepening A* and Recursive best-first search
RBFS:
- O(bd) space complexity [if h(n) is admissible].
- Time complexity is hard to characterize: efficiency depends heavily on the quality of h(n), and the same states may be explored many times.
IDA* and RBFS use too little memory: even if you wanted to use more than O(bd) memory, these two could not take advantage of it.

Simple Memory-bounded A*
Use all available memory:
- Follow the A* algorithm and fill memory with newly expanded nodes.
- If a new node does not fit, free() the stored node with the worst f-value and propagate the freed node's f-value to its parent.
- SMA* will regenerate a subtree only when it is needed: the path through a deleted subtree is unknown, but its cost is known.

Thrashing
Typically discussed in OS w.r.t. memory:
- The cost of repeatedly freeing and regenerating parts of the search tree can come to dominate the cost of the actual search.
- Time complexity grows significantly when thrashing: we saved space with SMA*, but if the problem is large it becomes intractable in computation time.

Meta-foo
What does meta mean in AI? Frequently it means stepping back a level from foo. Metareasoning = reasoning about reasoning. These informed search algorithms have pros and cons regarding how they choose to explore new levels; a metalevel learning algorithm may learn how to combine techniques and parameterize search.

Heuristic Functions
8-puzzle problem: average solution depth = 22, branching factor approximately 3, so an exhaustive tree search examines roughly 3^22 (about 3.1 x 10^10) states, yet only 9!/2 = 181,440 distinct states are reachable (a factor of about 170,000 repeated).

Heuristics
The number of misplaced tiles: admissible because at least n moves are required to solve n misplaced tiles.
The distance from each tile to its goal position: no diagonal moves, so use Manhattan distance, as if walking around rectilinear city blocks; also admissible.
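Both heuristics above are a few lines of code. A minimal sketch, assuming the conventional goal configuration with the blank (0) in the last square:

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)        # assumed goal; 0 marks the blank

def h_misplaced(state):
    """h1: count of tiles (excluding the blank) not on their goal square."""
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def h_manhattan(state):
    """h2: sum of each tile's Manhattan distance to its goal square
    (row offset + column offset on the 3x3 board)."""
    total = 0
    for i, t in enumerate(state):
        if t == 0:
            continue
        g = GOAL.index(t)
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total
```

For example, on the scrambled board `(8, 1, 2, 0, 4, 3, 7, 6, 5)` these give `h_misplaced = 7` and `h_manhattan = 11`; Manhattan distance is never smaller, which anticipates the dominance argument below.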

Compare these two heuristics
Effective branching factor, b*:
- If A* explores N nodes to find the goal at depth d, b* is the branching factor such that a uniform tree of depth d contains N + 1 nodes: N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d.
- b* close to 1 is ideal, because it means the heuristic guided the A* search nearly linearly. If b* were 100, the heuristic on average had to consider 100 children for each node.
- Compare heuristics based on their b*.
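Since the polynomial above has no closed-form solution for b*, it is usually solved numerically. A small sketch using bisection:

```python
def effective_branching_factor(n_nodes, depth, tol=1e-6):
    """Solve N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d for b* by bisection.

    n_nodes is the number of nodes A* generated, depth the solution
    depth; assumes n_nodes >= depth so that b* >= 1."""
    target = n_nodes + 1

    def tree_size(b):
        # nodes in a uniform tree of branching factor b and depth `depth`
        return sum(b ** i for i in range(depth + 1))

    lo, hi = 1.0, float(target)            # tree_size(target) >= target
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if tree_size(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

As a sanity check: a uniform tree with b* = 2 and depth 2 has 1 + 2 + 4 = 7 nodes, so `effective_branching_factor(6, 2)` is approximately 2.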

Compare these two heuristics

h2 is always better than h1
- For any node n, h2(n) >= h1(n): h2 dominates h1.
- Recall that all nodes with f(n) < C* will be expanded. This means all nodes with h(n) + g(n) < C*, i.e., all nodes where h(n) < C* - g(n), will be expanded.
- Every node h2 expands will also be expanded by h1, and because h1 is smaller, other nodes will be expanded as well.

Inventing admissible heuristic functions
How can you create h(n)? Simplify the problem by reducing restrictions on actions:
- Allow 8-puzzle pieces to sit atop one another.
- Call this a relaxed problem.
- The cost of the optimal solution to the relaxed problem is an admissible heuristic for the original problem, since the original problem is at least as expensive.

Examples of relaxed problems
Original rule: a tile can move from square A to square B if A is horizontally or vertically adjacent to B and B is blank.
- A tile can move from A to B if A is adjacent to B (overlap).
- A tile can move from A to B if B is blank (teleport).
- A tile can move from A to B (teleport and overlap).
Solutions to these relaxed problems can be computed without search, so the heuristic is easy to compute.

Multiple Heuristics
If multiple admissible heuristics are available, take the pointwise maximum: h(n) = max {h1(n), h2(n), ..., hm(n)}.

Use solution to subproblem as heuristic
What is the optimal cost of solving some portion of the original problem? The subproblem's solution cost is a heuristic for the original problem.

Pattern Databases
Store optimal solutions to subproblems in a database:
- Use an exhaustive search to solve every permutation of the 1,2,3,4-piece subproblem of the 8-puzzle.
- During solution of the 8-puzzle, look up the optimal cost to solve the 1,2,3,4-piece subproblem and use it as the heuristic.
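The exhaustive precomputation above can be sketched as a backward breadth-first search over an abstracted puzzle, where only the pattern tiles and the blank are distinguishable (a simplified, non-additive pattern database; the tile set {1,2,3,4} and blank-last goal are the assumptions here):

```python
from collections import deque

def build_pattern_db(pattern=(1, 2, 3, 4), size=3):
    """Build a pattern database for the 8-puzzle.

    Abstract state = (positions of the pattern tiles, position of the
    blank); all other tiles are indistinguishable. A backward BFS from
    the abstracted goal records, for every reachable abstract state, the
    minimum number of moves to put the pattern tiles (and blank) in
    place. That count is an admissible heuristic for the full puzzle."""
    goal_board = tuple(range(1, size * size)) + (0,)

    def abstract(board):
        """Project a full board onto pattern-tile + blank positions."""
        return tuple(board.index(t) for t in pattern) + (board.index(0),)

    def blank_moves(blank):
        r, c = divmod(blank, size)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < size and 0 <= nc < size:
                yield nr * size + nc

    start = abstract(goal_board)
    db = {start: 0}
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        *tiles, blank = state
        for nb in blank_moves(blank):
            # The blank swaps with cell nb; if nb holds a pattern tile,
            # that tile ends up where the blank was.
            nxt = tuple(blank if p == nb else p for p in tiles) + (nb,)
            if nxt not in db:
                db[nxt] = db[state] + 1
                frontier.append(nxt)
    return db, abstract
```

During search, `db[abstract(board)]` is the heuristic lookup. The abstract space has only 9 * 8 * 7 * 6 * 5 = 15,120 states, so the precomputation is cheap relative to the savings during repeated solves.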

Learning
Could also build the pattern database while solving cases of the 8-puzzle:
- Must keep track of intermediate states and the true final cost of the solution.
- Inductive learning builds a mapping of state -> cost.
- Because there are too many permutations of actual states, construct informative features to reduce the size of the space.