Artificial Intelligence Informed Search Algorithms


Artificial Intelligence Informed Search Algorithms Dr. Shahriar Bijani Shahed University Spring 2017

Slides’ Reference S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, Chapter 3, Prentice Hall, 2010, 3rd Edition.

Outline
Best-first search
 Greedy best-first search
 A* search
 Memory-bounded search
Heuristics
Local search algorithms
 Hill-climbing search
 Simulated annealing search
 Local beam search
 Genetic algorithms

Best-first search
Idea: use an evaluation function f(n) for each node, an estimate of "desirability", and expand the most desirable unexpanded node.
Implementation: order the nodes in the fringe in decreasing order of desirability.
Special cases: greedy best-first search, A* search
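The best-first idea can be sketched in a few lines of Python. This is a minimal sketch, not from the slides: the graph representation and the names `successors` and `f` are illustrative assumptions; plugging in different `f` functions yields the special cases (f = h gives greedy search, f = g + h gives A*).

```python
import heapq

def best_first_search(start, goal, successors, f):
    """Generic best-first search: always expand the fringe node with the
    lowest f value.  `successors(state)` yields (next_state, step_cost)
    pairs; `f(state, g)` is the evaluation function."""
    fringe = [(f(start, 0), start, [start])]   # priority queue ordered by f
    g_best = {start: 0}                        # cheapest g found per state
    while fringe:
        _, state, path = heapq.heappop(fringe)
        if state == goal:
            return path
        for nxt, cost in successors(state):
            g = g_best[state] + cost
            if nxt not in g_best or g < g_best[nxt]:
                g_best[nxt] = g
                heapq.heappush(fringe, (f(nxt, g), nxt, path + [nxt]))
    return None

# Tiny hypothetical graph; with f = g this behaves as uniform-cost search.
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1)], 'C': []}
path = best_first_search('A', 'C', lambda s: graph[s], lambda s, g: g)
# path == ['A', 'B', 'C'] (cost 2 beats the direct edge of cost 4)
```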

Romania with step costs in km

Greedy best-first search
Evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to the goal
e.g., hSLD(n) = straight-line distance from n to Bucharest
Greedy best-first search expands the node that appears to be closest to the goal

Greedy best-first search example (figure: map labelled with the straight-line distances h(n) to the goal)

Greedy best-first search example: Arad → Sibiu → Fagaras → Bucharest

Another example: Start → A → D → E → Goal

Greedy best-first infinite loop (figure: an example graph with nodes A–F labelled with h values from 10 down to 0; greedy search gets caught in an infinite loop)

Greedy best-first not optimal (figure: an example graph with nodes labelled with h values; the path found by greedy search is not the cheapest one)

Greedy best-first not optimal: the goal is to reach the largest sum (figure: a tree with node values 9, 6, 17, 8, 7, 6, 80)

Properties of greedy best-first search
Complete? No – can get stuck in loops, e.g., Tehran → Karaj → Tehran → Karaj → …
Time? O(b^m), but a good heuristic can give dramatic improvement
Space? O(b^m): keeps all nodes in memory
Optimal? No

A* search
The most widely known form of best-first search is called A* search ("A-star search").
Idea: avoid expanding paths that are already expensive.
Evaluation function f(n) = g(n) + h(n)
g(n) = cost so far to reach n
h(n) = estimated cost from n to the goal
f(n) = estimated total cost of the cheapest solution path through n to the goal
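A* is the generic best-first scheme with f = g + h. Below is a hedged sketch, not the slides' own code: the priority-queue implementation and helper names are assumptions, while the road fragment and hSLD values are the standard AIMA Romania example.

```python
import heapq

def a_star(start, goal, successors, h):
    """A* search: expand the fringe node with the lowest f = g + h."""
    fringe = [(h(start), 0, start, [start])]      # (f, g, state, path)
    best_g = {start: 0}
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if state == goal:
            return path, g
        for nxt, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(fringe, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float('inf')

# Fragment of the Romania map with straight-line distances to Bucharest.
roads = {('Arad', 'Sibiu'): 140, ('Sibiu', 'Fagaras'): 99,
         ('Sibiu', 'Rimnicu'): 80, ('Rimnicu', 'Pitesti'): 97,
         ('Pitesti', 'Bucharest'): 101, ('Fagaras', 'Bucharest'): 211}
nbrs = {}
for (a, b), c in roads.items():
    nbrs.setdefault(a, []).append((b, c))
    nbrs.setdefault(b, []).append((a, c))
h_sld = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176,
         'Rimnicu': 193, 'Pitesti': 100, 'Bucharest': 0}

path, cost = a_star('Arad', 'Bucharest', lambda s: nbrs.get(s, []), h_sld.get)
# cost == 418, via Sibiu, Rimnicu, and Pitesti
```

Note how A* first pops Fagaras (f = 415) but then finds the cheaper route through Pitesti, returning cost 418 rather than greedy's 450.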

A* search example

Comparing A* and greedy search
Greedy search path: 140 + 99 + 211 = 450
A* search path: 140 + 80 + 97 + 101 = 418

A* search: another example Assume h(n)=

A* search: another example f(n) = g(n) + h(n)

Admissible heuristics
A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.
An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic.
Example: hSLD(n) (never overestimates the actual road distance)
Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal.

Optimality of A* (proof)
Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.
f(G2) = g(G2) since h(G2) = 0
g(G2) > g(G) since G2 is suboptimal
f(G) = g(G) since h(G) = 0
f(G2) > f(G) from above

Optimality of A* (proof)
Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.
f(G2) > f(G) from above
h(n) ≤ h*(n) since h is admissible
g(n) + h(n) ≤ g(n) + h*(n)
f(n) ≤ f(G)
Hence f(G2) > f(n), and A* will never select G2 for expansion.

Consistent heuristics
A heuristic is consistent if for every node n and every successor n' of n generated by any action a,
h(n) ≤ c(n,a,n') + h(n')
If h is consistent, we have
f(n') = g(n') + h(n') = g(n) + c(n,a,n') + h(n') ≥ g(n) + h(n) = f(n)
i.e., f(n) is non-decreasing along any path.
Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal.
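The consistency inequality can be checked mechanically on a small graph. A minimal sketch under stated assumptions: the helper name `is_consistent`, the edge→cost dictionary format, and the Romania fragment below are illustrative, not from the slides.

```python
def is_consistent(h, cost):
    """Check h(n) <= c(n,a,n') + h(n') on every edge of a small
    undirected graph, in both directions."""
    for (n, n2), c in cost.items():
        if h[n] > c + h[n2] or h[n2] > c + h[n]:
            return False
    return True

# Fragment of the Romania map and straight-line distances to Bucharest.
roads = {('Arad', 'Sibiu'): 140, ('Sibiu', 'Fagaras'): 99,
         ('Sibiu', 'Rimnicu'): 80, ('Rimnicu', 'Pitesti'): 97,
         ('Pitesti', 'Bucharest'): 101, ('Fagaras', 'Bucharest'): 211}
h_sld = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176,
         'Rimnicu': 193, 'Pitesti': 100, 'Bucharest': 0}

ok = is_consistent(h_sld, roads)    # True: hSLD is consistent on this map
bad = is_consistent({**h_sld, 'Fagaras': 300}, roads)
# False: 300 > 211 + h(Bucharest) violates the triangle inequality
```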

Optimality of A*
A* expands nodes in order of increasing f value.
It gradually adds "f-contours" of nodes.
Contour i contains all nodes with f = fi, where fi < fi+1.

Properties of A*
Complete? Yes (unless there are infinitely many nodes with f ≤ f(G))
Time? Exponential
Space? Keeps all nodes in memory
Optimal? Yes

About Heuristics
Heuristics are intended to orient the search along promising paths.
The time spent computing heuristics must be recovered by a better search.
A heuristic function could consist of solving the problem itself; then it would guide the search perfectly.
Deciding which node to expand is sometimes called meta-reasoning.
Heuristics may not always look like numbers and may involve a large amount of knowledge.

Heuristic Accuracy
h(N) = 0 for all nodes is admissible and consistent; hence breadth-first and uniform-cost search are special cases of A*!
Let h1 and h2 be two admissible and consistent heuristics such that for all nodes N: h1(N) ≤ h2(N).
Then h2 is more informed (or more accurate) than h1: h2 dominates h1.
Claim: every node expanded by A* using h2 is also expanded by A* using h1.

Heuristic Function
A function h(N) that estimates the cost of the cheapest path from node N to the goal node.
Example (8-puzzle): h(N) = number of misplaced tiles = 6 (figure: a state N and the goal state)

Heuristic Function
A function h(N) that estimates the cost of the cheapest path from node N to the goal node.
Example (8-puzzle): h(N) = sum of the distances of every tile to its goal position = 2 + 3 + 0 + 1 + 3 + 0 + 3 + 1 = 13 (figure: a state N and the goal state)
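Both 8-puzzle heuristics are easy to compute. A sketch follows; the board encoding (row-major tuple of 9 ints, 0 for the blank, blank in the last goal square) and the example state are assumptions for illustration, not the boards pictured on the slides.

```python
def misplaced(state, goal):
    """h1: number of tiles out of place (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal):
    """h2: sum of the Manhattan distances of each tile to its goal square.
    Boards are 3x3, stored row-major as tuples of 9 ints, 0 = blank."""
    where = {t: divmod(i, 3) for i, t in enumerate(goal)}
    total = 0
    for i, t in enumerate(state):
        if t != 0:
            r, c = divmod(i, 3)
            gr, gc = where[t]
            total += abs(r - gr) + abs(c - gc)
    return total

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)     # assumed goal layout
state = (1, 2, 3, 4, 0, 6, 7, 5, 8)    # hypothetical state with two tiles off
h1 = misplaced(state, goal)            # 2
h2 = manhattan(state, goal)            # 2 (each misplaced tile is one step away)
```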

Robot Navigation
h(N) = straight-line distance to the goal = [(Xg − XN)^2 + (Yg − YN)^2]^(1/2)

8-Puzzle: f(N) = h(N) = number of misplaced tiles (figure: search tree with the h value at each node)

8-Puzzle: f(N) = g(N) + h(N) with h(N) = number of misplaced tiles (figure: search tree with g+h shown at each node)

8-Puzzle: f(N) = h(N) = Σ of tile distances to the goal (figure: search tree with the h value at each node)

Robot navigation
Cost of one horizontal/vertical step = 1
Cost of one diagonal step = √2
h(N) = straight-line distance to the goal is admissible

Robot Navigation

Robot Navigation: f(N) = h(N), with h(N) = distance to the goal (figures: grid cells labelled with their h values)

Robot Navigation: f(N) = g(N) + h(N), with h(N) = distance to the goal (figure: grid cells labelled with their g+h values)

Robot Navigation
f(N) = g(N) + h(N), with h(N) = straight-line distance from N to the goal
Cost of one horizontal/vertical step = 1
Cost of one diagonal step = √2

8-Puzzle (figure: a state N and the goal state)
h1(N) = number of misplaced tiles = 6 is admissible
h2(N) = sum of distances of each tile to its goal position = 13 is admissible
h3(N) = (sum of distances of each tile to goal) + 3 × (sum of score functions for each tile) = 49 is not admissible

Relaxed problems
A problem with fewer restrictions on the actions is called a relaxed problem.
The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem.
If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution.
If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution.

‘A’ Search Algorithm
If h(n) is not admissible, we have the A search algorithm (rather than A*).
A is not optimal.

Memory-Bounded Searches
IDA*
RBFS
SMA*

Iterative Deepening A* (IDA*)
Use f(N) = g(N) + h(N) with an admissible and consistent h.
Each iteration is depth-first with a cutoff on the f value of expanded nodes.
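A minimal sketch of the scheme: each pass is a depth-first search bounded by the current f cutoff, and the next cutoff is the smallest f value that exceeded it. The toy graph and heuristic values below are assumptions for illustration.

```python
def ida_star(start, goal, successors, h):
    """IDA*: repeated depth-first searches with an increasing cutoff on
    f = g + h.  Returns the solution path, or None if there is none."""
    def dfs(path, g, cutoff):
        node = path[-1]
        f = g + h(node)
        if f > cutoff:
            return f, None                 # pruned; f seeds the next cutoff
        if node == goal:
            return f, list(path)
        next_cutoff = float('inf')
        for child, cost in successors(node):
            if child in path:              # avoid cycles on the current path
                continue
            t, found = dfs(path + [child], g + cost, cutoff)
            if found is not None:
                return t, found
            next_cutoff = min(next_cutoff, t)
        return next_cutoff, None

    cutoff = h(start)
    while True:
        cutoff, found = dfs([start], 0, cutoff)
        if found is not None or cutoff == float('inf'):
            return found

# Hypothetical graph: edge costs plus an admissible h.
nbrs = {'S': [('A', 1), ('B', 2)], 'A': [('G', 5)], 'B': [('G', 2)], 'G': []}
h = {'S': 3, 'A': 4, 'B': 2, 'G': 0}
path = ida_star('S', 'G', nbrs.__getitem__, h.__getitem__)   # ['S', 'B', 'G']
```

The cutoff sequence here is 3, then 4: the second pass reaches G with f = 4, the optimal cost.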

8-Puzzle: f(N) = g(N) + h(N) with h(N) = number of misplaced tiles. First iteration, cutoff = 4 (figure: depth-first expansion with the f values 4–6 shown at the nodes)

8-Puzzle: f(N) = g(N) + h(N) with h(N) = number of misplaced tiles. Second iteration, cutoff = 5 (figure: depth-first expansion with the f values 4–7 shown at the nodes)

IDA* properties
Under the same conditions as A*, it is complete and optimal.
Saves space because it searches depth-first: space is linear in the length of the longest path.
Does not save its history → may re-expand repeated states.

RBFS
RBFS = Recursive Best-First Search
Like depth-first search, but it saves the f value of the best alternative available from an ancestor of the current node.
If f(n) exceeds that ancestor value, the recursion unwinds.
While unwinding, it replaces the f value of each node on the path with the best f value of its children.
Linear space: O(bd)

SMA*
SMA* = Simplified Memory-Bounded A*
Uses all of the available memory.
Avoids repeated states as far as memory allows.
Expands the best node, as in A*.
When memory is full, it drops the node with the worst f value to make room for the new node.
Complete and optimal if the depth of the shallowest goal fits in memory (d < memory).

RTA* (Real-Time A* Search)
We have a time limit.
If time runs out before we can find the best f(n), we choose the first f(n) in the list (the best found so far).

Local search algorithms
In many optimization problems, the path to the goal is not important; the solution is the goal state itself.
State space = set of "complete" configurations.
Find a configuration satisfying constraints, e.g., n-queens, job scheduling, optimization problems.
In such cases, we can use local search algorithms: keep a single "current" state and try to improve it.

Local search algorithms
Advantages:
Usually use only constant additional memory.
Useful in large and infinite state spaces.

Robot Navigation: local-minimum problem
f(N) = h(N) = straight-line distance to the goal

What’s the Issue? Search is an iterative local procedure Good heuristics should provide global look-ahead (at low computational cost)

Hill-climbing search
Like climbing a mountain with an altimeter but without a map.
Also called greedy local search.

Hill-climbing search Problem: depending on initial state, can get stuck in local maxima

Hill-climbing search: 8-queens problem
h = number of pairs of queens that are attacking each other, either directly or indirectly
h = 17 for the state shown

Hill-climbing search: 8-queens problem
A local minimum with h = 1, reached in 5 steps

Hill-climbing search
Hill-climbing search is fast if there are few local maxima and plateaus.
Its behaviour depends on the shape of the state-space landscape.
With a random initial state on the 8-queens problem it gets stuck 86% of the time and finds a solution 14% of the time.
Not complete.
With random restarts, it can be made complete.
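A sketch of steepest-descent hill climbing on 8-queens using the attacking-pairs heuristic from the slides above. The board encoding (one queen per column, `cols[i]` = row of the queen in column i), the helper names, and the random-restart loop are illustrative assumptions.

```python
import random

def attacking_pairs(cols):
    """h: number of pairs of queens attacking each other (same row or
    same diagonal).  cols[i] is the row of the queen in column i."""
    n = len(cols)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if cols[i] == cols[j] or abs(cols[i] - cols[j]) == j - i)

def hill_climb(cols):
    """Steepest-descent hill climbing: repeatedly move one queen within
    its column to the best neighbouring state; stop at a local minimum."""
    while True:
        h = attacking_pairs(cols)
        best, best_h = cols, h
        for col in range(len(cols)):
            for row in range(len(cols)):
                if row != cols[col]:
                    cand = cols[:col] + [row] + cols[col + 1:]
                    ch = attacking_pairs(cand)
                    if ch < best_h:
                        best, best_h = cand, ch
        if best_h >= h:                # no strict improvement: local minimum
            return cols, h             # h == 0 means the board is solved
        cols = best

random.seed(0)                         # fixed seed so the demo is repeatable
while True:                            # random restarts work around local minima
    state, h = hill_climb([random.randrange(8) for _ in range(8)])
    if h == 0:
        break
```

Most restarts end in a local minimum with h > 0, matching the failure rate quoted above; the loop simply retries until one run reaches h = 0.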

Simulated annealing search Idea: escape local maxima by allowing some "bad" moves but gradually decrease their frequency

Properties of simulated annealing search
One can prove: if T decreases slowly enough, then simulated annealing will find a global optimum with probability approaching 1.
Widely used in VLSI layout, airline scheduling, etc.
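The "allow bad moves, but less and less often" idea can be sketched as follows. The energy function, the neighbour move, and the geometric cooling schedule are all illustrative assumptions, not part of the slides.

```python
import math
import random

def simulated_annealing(state, energy, neighbour,
                        t0=1.0, cooling=0.995, steps=10000):
    """Accept a worse neighbour with probability exp(-dE / T), and lower
    T gradually so that bad moves become rarer over time."""
    t = t0
    best = state
    for _ in range(steps):
        nxt = neighbour(state)
        d_e = energy(nxt) - energy(state)
        if d_e < 0 or random.random() < math.exp(-d_e / t):
            state = nxt                # downhill moves always accepted;
        if energy(state) < energy(best):   # uphill ones only sometimes
            best = state
        t *= cooling                   # geometric cooling schedule
    return best

# Toy demo: minimise (x - 3)^2 over the integers, moving x by ±1.
random.seed(1)
best = simulated_annealing(20, lambda x: (x - 3) ** 2,
                           lambda x: x + random.choice([-1, 1]))
```

Early on (high T) the walk wanders almost freely; once T is small the acceptance test becomes effectively greedy and the state settles near the minimum.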

Local beam search
Keep track of k states rather than just one.
Start with k randomly generated states.
At each iteration, all the successors of all k states are generated.
If any one is a goal state, stop; else select the k best successors from the complete list and repeat.

Genetic algorithms
A successor state is generated by combining two parent states.
Start with k randomly generated states (the population).
A state is represented as a string over a finite alphabet (often a string of 0s and 1s).
An evaluation function (fitness function) assigns higher values to better states.
Produce the next generation of states by selection, crossover, and mutation.

Genetic algorithms
Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28)
Selection probabilities: 24/(24+23+20+11) = 31%, 23/(24+23+20+11) = 29%, etc.
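The numbers above can be reproduced directly. In the sketch below the four fitness values (24, 23, 20, 11) come from the slide, while the 8-queens board encoding (one queen per column) and the sample solved board are assumptions for illustration.

```python
def non_attacking_pairs(cols):
    """8-queens fitness: pairs of queens NOT attacking each other
    (cols[i] = row of the queen in column i; max = 8 * 7 / 2 = 28)."""
    n = len(cols)
    attacks = sum(1 for i in range(n) for j in range(i + 1, n)
                  if cols[i] == cols[j] or abs(cols[i] - cols[j]) == j - i)
    return n * (n - 1) // 2 - attacks

def selection_probs(fitness):
    """Fitness-proportionate (roulette-wheel) selection probabilities."""
    total = sum(fitness)
    return [f / total for f in fitness]

probs = selection_probs([24, 23, 20, 11])   # fitness values from the slide
# probs[0] rounds to 31% and probs[1] to 29%, matching the slide
fit = non_attacking_pairs([0, 4, 7, 5, 2, 6, 1, 3])   # a solved board: 28
```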

Genetic algorithms

When to Use Search Techniques?
The search space is small, and
 there are no other available techniques, or
 it is not worth the effort to develop a more efficient technique.
The search space is large, and
 there are no other available techniques, and
 there exist "good" heuristics.