EIE426-AICV 1 Blind and Informed Search Methods Filename: eie426-search-methods-0809.ppt

EIE426-AICV 2 Contents:  Problem types  Problem formulation  Example problems  Basic search algorithms: - breadth-first search - depth-first search  Best-first search - greedy search - A* search

EIE426-AICV 3 Example: Romania On holiday in Romania; currently in Arad. Flight leaves tomorrow for Bucharest. Formulate goal: be in Bucharest. Formulate problem: states: various cities; actions: drive between cities. Find solution: sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest.

EIE426-AICV 4 Example: Romania (cont.)

EIE426-AICV 5 Selecting a state space The real world is absurdly complex => the state space must be abstracted for problem solving. (Abstract) state = set of real states. (Abstract) action = complex combination of real actions, e.g., "Arad => Zerind" represents a complex set of possible routes, detours, rest stops, etc. For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind". (Abstract) solution = set of real paths that are solutions in the real world. Each abstract action should be "easier" than the original problem!

EIE426-AICV 6 State Space Representation of Problems Nodes (N): partial solution states. Arcs (A): the steps in a problem-solving process. Initial States (S) (one or more): the given information in a problem instance; they form the root of the graph. Goal States (GD): the solutions to a problem instance. State Space Search: finding a solution path from the start state to a goal. A state space is represented by the four-tuple [N, A, S, GD].
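As an illustration only (not part of the slides), a toy state space could be encoded in Python as the four-tuple [N, A, S, GD]; the node names and arcs below are invented.

```python
# A toy state space encoded as the four-tuple [N, A, S, GD].
# Node names and arcs are made up for illustration.
N = {"S", "A", "B", "G"}                              # nodes (partial solution states)
A = {("S", "A"), ("S", "B"), ("A", "G"), ("B", "G")}  # arcs (problem-solving steps)
S = {"S"}                                             # initial state(s)
GD = {"G"}                                            # goal state(s)

def successors(state):
    """States reachable from 'state' via one arc."""
    return [t for (s, t) in A if s == state]

print(successors("S"))  # e.g. ['A', 'B'] (order may vary)
```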

EIE426-AICV 7 Example: The 8-puzzle states: integer locations of tiles (ignore intermediate positions) actions: move blank left, right, up, down goal test: state = goal state (given) path cost: 1 per move
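A minimal sketch of this formulation in Python, assuming a state is a 9-tuple read row by row with 0 for the blank (an encoding chosen here for illustration):

```python
# Sketch of the 8-puzzle formulation: a state is a tuple of 9 tile values
# (0 marks the blank), read row by row. Step cost is 1 per move.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def actions(state):
    """Legal blank moves: left, right, up, down."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if col > 0: moves.append("left")
    if col < 2: moves.append("right")
    if row > 0: moves.append("up")
    if row < 2: moves.append("down")
    return moves

def result(state, move):
    """Return the state obtained by sliding the blank in the given direction."""
    i = state.index(0)
    j = i + {"left": -1, "right": 1, "up": -3, "down": 3}[move]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def goal_test(state):
    return state == GOAL
```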

EIE426-AICV 8 Example: The Traveling Salesperson Suppose a salesperson has N cities to visit and must return home. The goal of the problem is to find the shortest path for the salesperson to travel, visiting each city, and then returning to the starting city. Nodes: cities Arcs: labeled with a weight indicating the cost of traveling Complexity: (N-1)!
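A hedged sketch of the brute-force search this complexity refers to: fixing the start city and trying all (N-1)! orderings of the remaining cities. The 4-city distance table below is made up.

```python
# Brute-force tour search: fixing the start city leaves (N-1)! orderings to try.
from itertools import permutations

dist = {("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 10,
        ("B", "C"): 6, ("B", "D"): 4, ("C", "D"): 3}

def d(x, y):
    """Symmetric lookup in the distance table."""
    return dist.get((x, y)) or dist[(y, x)]

cities = ["A", "B", "C", "D"]
start, rest = cities[0], cities[1:]
best = min(permutations(rest),
           key=lambda p: sum(d(a, b) for a, b in zip((start,) + p, p + (start,))))
print((start,) + best + (start,))   # cheapest round trip found
```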

EIE426-AICV 9 Example: The Traveling Salesperson (cont.)

EIE426-AICV 10 Example: Tic-Tac-Toe The start state: an empty board. The goal description: three Xs or three Os in a row, column, or diagonal. The state space is a directed acyclic graph (DAG). Arc: a legal move (placing an X or O in an unused location). The complexity: 9! = 362,880 paths.

EIE426-AICV 11 Tree search algorithms Basic idea: offline, simulated exploration of state space by generating successors of already-explored states (also known as expanding states)

EIE426-AICV 12 Tree search example

EIE426-AICV 13 Implementation: states vs. nodes A state is (a representation of) a physical configuration. A node is a data structure constituting part of a search tree; it includes parent, children, depth, and path cost g(x). States do not have parents, children, depth, or path cost! The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.
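One possible sketch of such a node structure and of Expand in Python; the problem methods actions and result are assumed names for illustration, not a fixed API.

```python
# Minimal search-tree node, distinct from the state it wraps.
# 'problem' is assumed to expose actions(state) and result(state, action),
# with a unit step cost; those names are illustrative.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = path_cost                       # g(x)
        self.depth = 0 if parent is None else parent.depth + 1

def expand(problem, node):
    """Create child nodes for every legal action (the Expand function)."""
    return [Node(problem.result(node.state, a), node, a, node.path_cost + 1)
            for a in problem.actions(node.state)]
```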

EIE426-AICV 14 Implementation: general tree search

EIE426-AICV 15 Search strategies A strategy is defined by picking the order of node expansion. Strategies are evaluated along the following dimensions: Completeness -- does it always find a solution if one exists? Time complexity -- number of nodes generated/expanded Space complexity -- maximum number of nodes in memory Optimality -- does it always find a least-cost solution? Time and space complexity are measured in terms of b --- maximum branching factor of the search tree d --- depth of the least-cost solution m --- maximum depth of the state space (may be ∞)

EIE426-AICV 16 Uninformed search strategies Uninformed strategies use only the information available in the problem definition Breadth-first search Depth-first search etc.

EIE426-AICV 17 A Search Tree (figure: example search tree from start S to goal G over nodes A-F) Two costs: finding a path and traversing the path.

EIE426-AICV 18 Search Trees (cont.) (figure: the same tree) b = 2; d = 0, 1, ..., 6; m = 6. From left to right: alphabetical order.

EIE426-AICV 19 Breadth-First Search Expand the shallowest unexpanded node. Implementation: the fringe is a FIFO queue, i.e., new successors go at the end.
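A minimal sketch of breadth-first tree search built on the Node/expand helpers sketched earlier; problem.initial and problem.goal_test are assumed attribute names.

```python
# Breadth-first tree search: the fringe is a FIFO queue, so the shallowest
# node is always expanded next.
from collections import deque

def breadth_first_search(problem):
    fringe = deque([Node(problem.initial)])     # FIFO queue
    while fringe:
        node = fringe.popleft()                 # shallowest unexpanded node
        if problem.goal_test(node.state):
            return node
        fringe.extend(expand(problem, node))    # new successors go at the end
    return None                                 # no solution found
```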

EIE426-AICV 20 Breadth-First Search (cont.) (figure: breadth-first expansion of the example search tree) Expand the shallowest unexpanded node. Implementation: the fringe is a FIFO queue, i.e., new successors go at the end.

EIE426-AICV 21 Properties of breadth-first search Complete? Yes (if b is finite) Time? 1+b+b^2+b^3+ … +b^d + b(b^d-1)= O(b^(d+1)), i.e., exponential in d Space? O(b^(d+1)) (keeps every node in memory) Optimal? Yes (if cost = 1 per step); not optimal in general Space is the big problem; can easily generate nodes at 10 MB/sec so 24 hrs = 860 GB.

EIE426-AICV 22 Depth-First Search Expand deepest unexpanded node Implementation: fringe = LIFO queue, i.e., put successors at front
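The corresponding depth-first sketch differs only in the fringe discipline: a LIFO stack instead of a FIFO queue (same assumed problem interface as above).

```python
# Depth-first tree search: the fringe behaves as a LIFO stack, so the most
# recently generated (deepest) node is expanded first.
def depth_first_search(problem):
    fringe = [Node(problem.initial)]            # LIFO stack
    while fringe:
        node = fringe.pop()                     # deepest unexpanded node
        if problem.goal_test(node.state):
            return node
        fringe.extend(expand(problem, node))    # successors go on top
    return None
```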

EIE426-AICV 23 Depth-First Search (cont.) (figure: depth-first expansion of the example search tree)

EIE426-AICV 24 Properties of depth-first search Complete? No: fails in infinite-depth spaces and spaces with loops. Modify to avoid repeated states along the path => complete in finite spaces. Time? O(b^m): terrible if m is much larger than d, but if solutions are dense, may be much faster than breadth-first. Space? O(bm), i.e., linear space! Optimal? No

EIE426-AICV 25 A comparison between depth-first search and breadth-first search Advantages of Depth-First Search  It requires less memory.  By chance, it may find a solution without examining much of the search space at all. Advantages of Breadth-First Search  It will not get trapped exploring a blind alley.  If there are multiple solutions, then a minimal solution will be found.

EIE426-AICV 26 Summary Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored Variety of uninformed search strategies

EIE426-AICV 27 Best-first search Idea: use an evaluation function for each node -- an estimate of "desirability" => expand the most desirable unexpanded node. Implementation: the fringe is a queue sorted in decreasing order of desirability. Special cases: greedy search, A* search
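A generic best-first sketch, again under the same assumed problem interface: the fringe is a priority queue keyed by an evaluation function f, with smaller f treated as more desirable.

```python
# Generic best-first tree search: the fringe is a priority queue ordered by
# an evaluation function f(node).
import heapq, itertools

def best_first_search(problem, f):
    counter = itertools.count()                 # tie-breaker so nodes are never compared
    start = Node(problem.initial)
    fringe = [(f(start), next(counter), start)] # priority queue keyed by f
    while fringe:
        _, _, node = heapq.heappop(fringe)      # most desirable unexpanded node
        if problem.goal_test(node.state):
            return node
        for child in expand(problem, node):
            heapq.heappush(fringe, (f(child), next(counter), child))
    return None
```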

EIE426-AICV 28 Romania with step costs in km

EIE426-AICV 29 Greedy search Evaluation function h(n) (heuristic) = estimate of cost from n to the closest goal. E.g., h_SLD(n) = straight-line distance from n to Bucharest. Greedy search expands the node that appears to be closest to the goal.
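Greedy search then falls out of the best-first sketch above by taking f(n) = h(n); the h_sld table in the comment is hypothetical.

```python
# Greedy best-first search: order the fringe by the heuristic alone, f(n) = h(n).
def greedy_search(problem, h):
    return best_first_search(problem, f=lambda node: h(node.state))

# e.g. greedy_search(romania, h=lambda city: h_sld[city])
# where h_sld maps a city to its straight-line distance to Bucharest (assumed data).
```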

EIE426-AICV 30 Greedy search example

EIE426-AICV 31 Properties of greedy search Complete? No -- can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → ... Complete in finite space with repeated-state checking. Time? O(b^m), but a good heuristic can give dramatic improvement. Space? O(b^m) -- keeps all nodes in memory. Optimal? No

EIE426-AICV 32 A* search Idea: avoid expanding paths that are already expensive. Evaluation function f(n) = g(n) + h(n) g(n) = cost so far to reach n h(n) = estimated cost to goal from n f(n) = estimated total cost of path through n to goal A* search uses an admissible heuristic, i.e., h(n) ≤ h*(n) where h*(n) is the true cost from n. (Also require h(n) ≥ 0, so h(G) = 0 for any goal G.) E.g., h_SLD(n) never overestimates the actual road distance. Theorem: A* search is optimal.
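Likewise, A* is the best-first sketch above with f(n) = g(n) + h(n), where g is the path cost already stored on each node.

```python
# A* search: f(n) = g(n) + h(n). With an admissible h, the returned solution
# is optimal (for tree search). Reuses the best-first sketch above.
def a_star_search(problem, h):
    return best_first_search(problem,
                             f=lambda node: node.path_cost + h(node.state))
```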

EIE426-AICV 33 A* search example

EIE426-AICV 34 Optimality of A* (standard proof) Suppose some suboptimal goal G2 has been generated and is in the queue. Let n be an unexpanded node on a shortest path to an optimal goal G1. Then f(G2) = g(G2) since h(G2) = 0; g(G2) > g(G1) since G2 is suboptimal; and g(G1) ≥ f(n) since h is admissible. Since f(G2) > f(n), A* will never select G2 for expansion.

EIE426-AICV 35 Properties of A* Complete? Yes, unless there are infinitely many nodes with f ≤ f(G). Time? Depends on the heuristic; in the worst case, exponential in the length of the solution. Space? Keeps all nodes in memory. Optimal? Yes

EIE426-AICV 36 h Overestimates h* (figure: example with step cost = 1) The search may miss the optimal solution if h overestimates h* anywhere ("Miss this solution!" in the figure).

EIE426-AICV 37 Admissible Heuristics E.g., for the 8-puzzle: h_1(n) = number of misplaced tiles h_2(n) = total Manhattan distance (i.e., number of squares from the desired location of each tile) For the state S shown: h_1(S) = 7, h_2(S) = 14
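Sketches of the two heuristics for the tuple encoding used earlier (0 = blank); both ignore the blank, so they remain admissible.

```python
# h1 counts misplaced tiles; h2 sums Manhattan distances of tiles from their
# goal squares. States follow the 9-tuple encoding sketched earlier.
def h1(state, goal=GOAL):
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total
```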

EIE426-AICV 38 Dominance If h_2(n) ≥ h_1(n) for all n (both admissible), then h_2 dominates h_1 and is better for search. Typical search costs: d = 14: A*(h_1) = 539 nodes, A*(h_2) = 113 nodes; d = 24: A*(h_1) = 39,135 nodes, A*(h_2) = 1,641 nodes.

EIE426-AICV 39 Example: n-queens Put n queens on an n × n board with no two queens on the same row, column, or diagonal. Move a queen to reduce the number of conflicts.
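A small sketch of the "move a queen to reduce conflicts" step, representing a board as queens[col] = row; the helper names are illustrative.

```python
# One local-search step for n-queens: move the queen in one column to the
# row with the fewest conflicts.
def conflicts(queens, col, row):
    """Number of queens attacking square (row, col), ignoring column col."""
    return sum(1 for c, r in enumerate(queens)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def improve_column(queens, col):
    """Move the queen in 'col' to a minimum-conflict row."""
    best_row = min(range(len(queens)), key=lambda r: conflicts(queens, col, r))
    queens[col] = best_row
    return queens
```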

EIE426-AICV 40 Example: Heuristics and A* Search for Solving the 8-Puzzle Problem

EIE426-AICV 41 (figure: heuristic calculation for the 8-puzzle)

EIE426-AICV 42 Heuristic: Sum of distances out of place

EIE426-AICV 43

EIE426-AICV 44 Part of the Search Family of Procedures

EIE426-AICV 45 Search Alternatives from a Procedure Family  Depth-first search is good when unproductive partial paths are never too long.  Breadth-first search is good when the branching factor is never too large.  A* search is good when a good heuristic is available. Search Animations