An Introduction to Artificial Intelligence


An Introduction to Artificial Intelligence Lecture 4a: Informed Search and Exploration Ramin Halavati (halavati@ce.sharif.edu) In which we see how information about the state space can prevent algorithms from blundering about in the dark.

Outline
Best-first search
Greedy best-first search
A* search
Heuristics
Local search algorithms: hill-climbing search, simulated annealing search, local beam search, genetic algorithms

UNINFORMED? Uninformed search explores the state graph or tree using only the path cost and the goal test.

INFORMED? Informed search has more data about states, such as an estimate of the distance to the goal. This gives best-first search (in practice "almost best"-first, since the estimate can be wrong). Heuristic h(n): estimated cost of the cheapest path from n to the goal, with h(goal) = 0. The estimate carries no guarantee, but it is usually good enough to guide the search.

Greedy Best First Search Compute the estimated distance to the goal for each generated node, and expand the node with the smallest estimate.
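As a concrete illustration, here is a minimal Python sketch of greedy best-first search. The graph format is an assumption, not from the slides: an adjacency dict mapping each node to a list of (neighbor, step_cost) pairs, plus a dict h of heuristic estimates, with nodes as comparable labels such as strings.

import heapq

def greedy_best_first(graph, h, start, goal):
    """Expand the node with the smallest heuristic estimate h(n)."""
    frontier = [(h[start], start, [start])]      # priority queue ordered by h(n) only
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:                      # skip stale duplicate entries
            continue
        visited.add(node)
        for nbr, _cost in graph.get(node, []):   # step costs are ignored by greedy search
            if nbr not in visited:
                heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None                                  # goal unreachable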

Greedy Best First Search Example Heuristic: straight-line distance (h_SLD)

Greedy Best First Search Example

Properties of Greedy Best First Search Complete? No, it can get stuck in loops. Time? O(b^m), where m is the maximum depth of the search space, but a good heuristic can give dramatic improvement. Space? O(b^m), keeps all nodes in memory. Optimal? No.

A* search Idea: avoid expanding paths that are already expensive. Evaluation function: f(n) = g(n) + h(n)
g(n) = cost so far to reach n
h(n) = estimated cost from n to the goal
f(n) = estimated total cost of the path through n to the goal
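The greedy skeleton above turns into A* by ordering the queue on f(n) = g(n) + h(n) instead of h(n) alone; a sketch under the same assumed graph format.

import heapq

def a_star(graph, h, start, goal):
    """A* graph search; returns (path, cost) or (None, inf)."""
    frontier = [(h[start], 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0}                          # cheapest known cost to each node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if g > best_g.get(node, float("inf")):   # stale entry, a cheaper path was found
            continue
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")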

A* search example

A* vs Greedy

Admissible Heuristics h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost from n to the goal. An admissible heuristic never overestimates: it is optimistic. Example: h_SLD(n), the straight-line distance, never overestimates the actual road distance.
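On small problems, admissibility can be verified directly by computing h*(n) exactly with Dijkstra's algorithm run backwards from the goal; a sketch, again assuming the adjacency-dict format used above.

import heapq

def true_costs(graph, goal):
    """h*(n): exact cheapest cost from each node to the goal (Dijkstra on reversed edges)."""
    rev = {}
    for n, edges in graph.items():
        for n2, cost in edges:
            rev.setdefault(n2, []).append((n, cost))
    dist = {goal: 0}
    pq = [(0, goal)]
    while pq:
        d, n = heapq.heappop(pq)
        if d > dist.get(n, float("inf")):
            continue
        for n2, cost in rev.get(n, []):
            if d + cost < dist.get(n2, float("inf")):
                dist[n2] = d + cost
                heapq.heappush(pq, (d + cost, n2))
    return dist

def is_admissible(graph, h, goal):
    """Check h(n) <= h*(n) for every node that has an estimate."""
    hstar = true_costs(graph, goal)
    return all(h[n] <= hstar.get(n, float("inf")) for n in h)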

A* is Optimal Theorem: if h(n) is admissible, A* using TREE-SEARCH is optimal. TREE-SEARCH re-expands a state every time a path reaches it; GRAPH-SEARCH remembers the states already visited and expands each one only once.

Optimality of A* (proof) Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.
f(G2) = g(G2), since h(G2) = 0
g(G2) > g(G), since G2 is suboptimal
f(G) = g(G), since h(G) = 0
f(G2) > f(G), from the above
h(n) ≤ h*(n), since h is admissible
g(n) + h(n) ≤ g(n) + h*(n), so f(n) ≤ f(G)
Hence f(G2) > f(n), and A* will never select G2 for expansion.

Consistent Heuristics h(n) is consistent if for every node n and every successor n' of n generated by any action a, h(n) ≤ c(n,a,n') + h(n'). Consistency is also called monotonicity; it is a triangle inequality between n, n', and the goal. Most admissible heuristics are consistent at no extra cost. Theorem: if h(n) is consistent, A* using GRAPH-SEARCH is optimal.
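Consistency is a purely local property, so it can be checked edge by edge; a small sketch in the same assumed adjacency-dict format.

def is_consistent(graph, h):
    """Check h(n) <= c(n, a, n') + h(n') for every edge in the graph."""
    for n, edges in graph.items():
        for n2, cost in edges:
            if h[n] > cost + h[n2]:
                return False
    return True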

Optimality of A* A* expands nodes in order of increasing f value, gradually adding "f-contours" of nodes. Contour i contains all nodes with f = f_i, where f_i < f_{i+1}.

Properties of A* Complete? Yes (unless there are infinitely many nodes with f ≤ f(G)). Time? Exponential. Space? Keeps all nodes in memory, O(b^d). Optimal? Yes. A* prunes all nodes with f(n) > f(Goal), and A* is optimally efficient.

How to Design Heuristics? E.g., for the 8-puzzle: h1(n) = number of misplaced tiles; h2(n) = total Manhattan distance (i.e., the sum over tiles of the number of squares between each tile and its desired location).

Admissible heuristics h1(n) = Number of misplaced tiles h2(n) = Total Manhattan distance
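Both heuristics are a few lines of code. A sketch, assuming (this encoding is illustrative, not from the slides) that a state is a 9-tuple read row by row with 0 for the blank, and the goal has the blank in the top-left corner.

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # assumed goal: blank (0) first, read row by row

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, tile in enumerate(state)
               if tile != 0 and tile != GOAL[i])

def h2(state):
    """Total Manhattan distance of each tile from its goal square."""
    dist = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        gi = GOAL.index(tile)                 # goal position of this tile
        dist += abs(i // 3 - gi // 3) + abs(i % 3 - gi % 3)
    return dist

# example: s = (1, 0, 2, 3, 4, 5, 6, 7, 8) is one move from GOAL; h1(s) == h2(s) == 1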

Effective Branching Factor If A* finds the answer at depth d by expanding N nodes using heuristic h(n), the effective branching factor b* is defined by: 1 + b* + (b*)^2 + … + (b*)^d = N + 1.
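There is no closed form for b*, but the defining polynomial is monotone in b*, so bisection finds it; a minimal sketch.

def effective_branching_factor(N, d, tol=1e-6):
    """Solve 1 + b + b**2 + ... + b**d = N + 1 for b by bisection."""
    def total(b):
        return sum(b ** i for i in range(d + 1))
    lo, hi = 1.0, max(2.0, float(N))          # the root lies in this bracket
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < N + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2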

Dominance If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1: h2 is closer to the true cost and therefore better for search. Given several admissible heuristics, h(n) = max(h1(n), h2(n), …, hm(n)) is admissible and dominates each of them. The heuristic itself must also be cheap to compute.
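In code, the composite heuristic is a one-liner; a sketch reusing the 8-puzzle heuristics defined above.

def combine(*heuristics):
    """Pointwise max of admissible heuristics: admissible, and dominates each input."""
    return lambda n: max(hf(n) for hf in heuristics)

# e.g., h = combine(h1, h2); here h == h2, since Manhattan distance
# dominates the misplaced-tile count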

How to Generate Heuristics? Formal methods: relaxed problems, pattern databases, disjoint pattern databases, and learning. ABSOLVER (1993) derived a new, better heuristic for the 8-puzzle and the first useful heuristic for Rubik's Cube.

“Relaxed Problem” Heuristic A problem with fewer restrictions on the actions. The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem. 8-puzzle main rule: a tile can be moved from square A to square B if A is horizontally or vertically adjacent to B and B is empty. Relaxed rules:
A tile can move from square A to square B if A is adjacent to B. (gives h2)
A tile can move from square A to square B if B is blank.
A tile can move from square A to square B. (gives h1)

“Sub Problem” Heuristic The cost of solving a subproblem (e.g., getting only some of the tiles into place) is a lower bound on the cost of the full problem, so it IS admissible.

“Pattern Database” Heuristics Store the exact solution cost of every instance of a subproblem, computed once in advance, and look it up during search.
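A sketch of building such a database for the 8-puzzle by breadth-first search backwards from the goal, reusing the 9-tuple encoding and GOAL tuple assumed earlier. Only the pattern tiles and the blank are distinguished, and every move costs 1: this is the plain, non-additive variant, so values from different databases must be combined with max, not summed (summing needs the disjoint-pattern bookkeeping on the next slide).

from collections import deque

def build_pattern_db(pattern=(1, 2, 3, 4)):
    """Map each placement of the pattern tiles to the exact cost of
    getting them to their goal squares, ignoring all other tiles."""
    goal_positions = tuple(GOAL.index(t) for t in pattern)
    start = (goal_positions, GOAL.index(0))      # (pattern tile positions, blank position)
    db = {}
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        (tiles, blank), cost = frontier.popleft()
        db.setdefault(tiles, cost)               # first visit = cheapest (BFS order)
        r, c = divmod(blank, 3)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            r2, c2 = r + dr, c + dc
            if not (0 <= r2 < 3 and 0 <= c2 < 3):
                continue
            sq = r2 * 3 + c2
            # if a pattern tile sits on the target square, it slides into the blank
            tiles2 = tuple(blank if p == sq else p for p in tiles)
            nxt = (tiles2, sq)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, cost + 1))
    return db

def h_pdb(state, db, pattern=(1, 2, 3, 4)):
    """Look up the current placement of the pattern tiles."""
    return db[tuple(state.index(t) for t in pattern)]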

“Disjoint Pattern” Databases Add the results of several pattern-database heuristics whose patterns share no tiles, counting only the moves of the pattern tiles so that the sum stays admissible. Speed-up: about 10^3 times for the 15-puzzle and 10^6 times for the 24-puzzle. Separability matters: the sliding-tile puzzles decompose into disjoint patterns, but Rubik's Cube does not, since every move affects many pieces at once.

Learning Heuristics from Experience Apply machine-learning techniques to solved instances: select informative features of a state and fit, for example, a linear combination of them to the observed solution costs.

BACK TO MAIN SEARCH METHOD What's wrong with A*? It is both optimal and optimally efficient, yet it exhausts MEMORY long before it runs out of time.

Memory Bounded Heuristic Search Iterative Deepening A* (IDA*): similar to iterative deepening depth-first search, but the cutoff is the f-cost rather than the depth; each iteration raises the bound to the smallest f-value that exceeded it. Memory: O(bd).
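An IDA* sketch, assuming a successors(state) function that yields (next_state, step_cost) pairs and a heuristic function h; both names are illustrative.

def ida_star(start, goal_test, successors, h):
    bound = h(start)
    path = [start]                               # current path, doubles as the answer

    def dfs(g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                             # report the f-value that overflowed
        if goal_test(node):
            return "FOUND"
        minimum = float("inf")
        for nxt, cost in successors(node):
            if nxt in path:                      # avoid cycles along the current path
                continue
            path.append(nxt)
            t = dfs(g + cost, bound)
            if t == "FOUND":
                return "FOUND"
            minimum = min(minimum, t)
            path.pop()
        return minimum

    while True:
        t = dfs(0, bound)
        if t == "FOUND":
            return path
        if t == float("inf"):
            return None                          # no solution
        bound = t                                # smallest f that exceeded the old bound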

Recursive Best First Search Main idea: a recursive search that keeps expanding the best node while its f-cost stays below that of the best alternative path available from any ancestor; when it backtracks, it records the best f-value of the forgotten subtree in its parent, continuously updating, so the subtree can be re-expanded later if it becomes best again.
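A sketch of RBFS following the AIMA pseudocode, under the same assumed successors/h interface as the IDA* sketch above; like the textbook version it does no repeated-state checking, so it is a tree-search variant.

INF = float("inf")

def rbfs(start, goal_test, successors, h):
    def search(node, g, f_node, f_limit):
        # f_node is the (possibly backed-up) f-value of node
        if goal_test(node):
            return [node], 0
        succs = []
        for s, cost in successors(node):
            g2 = g + cost
            # a child's f can never be lower than its parent's backed-up f
            succs.append([max(g2 + h(s), f_node), g2, s])
        if not succs:
            return None, INF
        while True:
            succs.sort(key=lambda x: x[0])
            best = succs[0]
            if best[0] == INF:
                return None, INF                 # dead end everywhere below this node
            if best[0] > f_limit:
                return None, best[0]             # fail; report backed-up f to the parent
            alternative = succs[1][0] if len(succs) > 1 else INF
            result, best[0] = search(best[2], best[1], best[0],
                                     min(f_limit, alternative))
            if result is not None:
                return [node] + result, best[0]

    path, _ = search(start, 0, h(start), INF)
    return path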

Recursive Best First Search

Recursive Best First Search, Sample

Recursive Best First Search, Sample Complete? Yes, given enough space. Space? O(bd). Optimal? Yes, if h is admissible. Time? Hard to analyze; it depends on how often forgotten subtrees must be re-expanded.

Memory, more memory… A* needs O(b^d); IDA* and RBFS need only O(bd). But what if we have, say, exactly 10 MB: can we put all of it to use?

Memory-Bounded A* (MA*) and Simplified Memory-Bounded A* (SMA*) Store as many nodes as possible (the A* trend). When memory is full, drop the worst current node (highest f) and back its value up to its parent, so the forgotten subtree can be regenerated later if it ever looks best again.

SMA* Example

SMA* Code

To be continued…