Artificial Intelligence, Chapter 8: Uninformed Search
Biointelligence Lab, School of Computer Sci. & Eng., Seoul National University


1 Artificial Intelligence, Chapter 8: Uninformed Search
Biointelligence Lab, School of Computer Sci. & Eng., Seoul National University

2 Outline  (c) 2009 SNU CSE Biointelligence Lab
- Search Space Graphs
- Breadth-First Search
- Depth-First Search
- Iterative Deepening

3 Search in State Spaces: Motivating Examples
- Search for the optimal weights in training a multi-layer perceptron
- Search for an optimal solution by evolutionary computation, e.g., a wall-following robot

4 Search in State Spaces: Motivating Examples (Cont'd)
Agents that plan:
- State + action
- State-space search is used to plan action sequences
- e.g., moving a block (Ch. 7)
A state-space graph (the size of this search space is small)

5 1. Formulating the State Space
For a huge search space we need:
- Careful formulation
- Implicit representation of large search graphs
- An efficient search method

6 1. Formulating the State Space (Cont'd)
e.g., the 8-puzzle problem
- State description: a 3-by-3 array; each cell contains one of the numbers 1-8 or the blank symbol
- Two choices for describing state transitions:
  - 8 x 4 moves: one of the numbered tiles 1-8 moves up, down, right, or left
  - 4 moves: the blank symbol moves up, down, right, or left
- Start and goal configurations are shown in the figure
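The second, more economical encoding (the blank moves) can be sketched as follows. This is an illustrative sketch, not code from the slides; the tuple layout and function names are assumptions.

```python
# Sketch of the 8-puzzle with the "blank moves" encoding (4 operators).
# States are 9-tuples read row by row; 0 stands for the blank symbol.

def successors(state):
    """Yield every state reachable by sliding the blank up, down, left, or right."""
    i = state.index(0)                # position of the blank in the flat tuple
    row, col = divmod(i, 3)
    for drow, dcol in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # up, down, left, right
        r, c = row + drow, col + dcol
        if 0 <= r < 3 and 0 <= c < 3:
            j = 3 * r + c
            s = list(state)
            s[i], s[j] = s[j], s[i]   # swap the blank with the adjacent tile
            yield tuple(s)

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)   # blank in the center: 4 successors
print(len(list(successors(start))))   # → 4
```

With the blank in a corner only 2 operators apply, so this encoding never generates more than 4 successors per state, versus up to 32 state transitions in the 8 x 4 encoding.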

7 1. Formulating the State Space (Cont'd)
- The number of nodes in the state-space graph: 9! ( = 362,880 )
- The state space for the 8-puzzle is divided into two separate graphs that are not reachable from each other
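The split into two mutually unreachable components can be made concrete with the standard parity argument for odd-width sliding puzzles: no legal move changes the parity of the number of inversions among the tiles. The slides state only the split itself; the parity check below is a supporting sketch.

```python
# Sketch: the 8-puzzle state space splits into two components because the
# parity of the number of inversions (blank ignored) is invariant under moves.

def inversion_parity(state):
    """Parity of inversions among tiles 1-8, ignoring the blank (0)."""
    tiles = [t for t in state if t != 0]
    inversions = sum(1 for a in range(len(tiles))
                       for b in range(a + 1, len(tiles))
                       if tiles[a] > tiles[b])
    return inversions % 2

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
swapped = (2, 1, 3, 4, 5, 6, 7, 8, 0)   # two tiles exchanged by hand
print(inversion_parity(goal), inversion_parity(swapped))  # → 0 1
```

States with parity 0 and parity 1 lie in different components, so `swapped` can never be slid into `goal`.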

8 2. Components of Implicit State-Space Graphs
Three basic components of an implicit representation of a state-space graph:
1. A description of the start node
2. Operators (successor functions): functions of state transformation that model the effects of actions
3. A goal condition: either a true/false-valued function or a list of actual instances of goal state descriptions
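The three components can be bundled into one object, as in the minimal sketch below; the class and attribute names are illustrative, not from the slides.

```python
# Sketch of the three components of an implicit state-space graph.

class SearchProblem:
    def __init__(self, start, operators, is_goal):
        self.start = start            # 1. description of the start node
        self.operators = operators    # 2. operators: state -> iterable of successor states
        self.is_goal = is_goal        # 3. goal condition: true/false-valued function

# A toy instance: reach 5 from 0 using +1 / +2 steps.
problem = SearchProblem(
    start=0,
    operators=lambda s: [s + 1, s + 2],
    is_goal=lambda s: s == 5,
)
print(problem.is_goal(5))  # → True
```

Note the graph is never built explicitly: states are generated on demand by `operators`, which is what makes huge search spaces tractable to represent.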

9 2. Components of Implicit State-Space Graphs (Cont'd)
Two broad classes of search process:
- Uninformed search: there is no problem-specific reason to prefer one part of the search space to any other; operators are applied to nodes without using any special knowledge about the problem domain
- Heuristic search: problem-specific information exists to help focus the search, exploited through evaluation functions; e.g., the A* algorithm

10 Uninformed Search for Graphs
- Breadth-First Search
- Depth-First Search
- Iterative Deepening
State-space search tree:
- Each node represents a problem or sub-problem
- Root of the tree: the initial problem to be solved
- Children of a node are created by adding constraints

11 3. Breadth-First Search
Procedure:
1. Apply all possible operators (the successor function) to the start node.
2. Apply all possible operators to all the direct successors of the start node.
3. Apply all possible operators to their successors, and so on, until a goal node is found.
Expanding a node: applying the successor function to it
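The procedure above amounts to expanding nodes level by level with a first-in, first-out frontier. A minimal sketch, assuming a `successors` function as on the earlier slides (all names here are illustrative):

```python
# Sketch of breadth-first search over an implicitly represented graph.
from collections import deque

def breadth_first_search(start, successors, is_goal):
    """Expand nodes level by level; return a shortest path to a goal, or None."""
    frontier = deque([start])
    parent = {start: None}            # also serves as the visited set
    while frontier:
        state = frontier.popleft()
        if is_goal(state):
            path = []
            while state is not None:  # walk parent links back to the start
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parent:     # generate each state at most once
                parent[nxt] = state
                frontier.append(nxt)
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(breadth_first_search('A', graph.__getitem__, lambda s: s == 'D'))
# → ['A', 'B', 'D']
```

Because shallower nodes are always expanded before deeper ones, the first goal found lies at minimal depth, which is exactly the advantage stated on the next slide.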

12 Figure 8.2 Breadth-First Search of the Eight-Puzzle

13 3. Breadth-First Search (Cont'd)
Advantage:
- When a goal node is found, we have found a path of minimal length to the goal
Disadvantage:
- Requires the generation and storage of a tree whose size is exponential in the depth of the shallowest goal node
Uniform-cost search [Dijkstra 1959]:
- A variant of BFS that expands nodes in order of equal path cost rather than equal depth
- If the costs of all arcs are identical, it behaves the same as BFS
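The uniform-cost variant replaces the FIFO queue with a priority queue ordered by accumulated path cost g. A sketch, with an illustrative weighted-graph encoding not taken from the slides:

```python
# Sketch of uniform-cost search: the frontier is ordered by path cost g
# rather than by depth. successors(state) yields (next_state, step_cost) pairs.
import heapq

def uniform_cost_search(start, successors, is_goal):
    """Return the minimal path cost to a goal state, or None."""
    frontier = [(0, start)]           # priority queue of (g, state)
    best = {start: 0}                 # cheapest known cost per state
    while frontier:
        g, state = heapq.heappop(frontier)
        if is_goal(state):
            return g
        if g > best.get(state, float('inf')):
            continue                  # stale queue entry, skip it
        for nxt, step in successors(state):
            new_g = g + step
            if new_g < best.get(nxt, float('inf')):
                best[nxt] = new_g
                heapq.heappush(frontier, (new_g, nxt))
    return None

graph = {'A': [('B', 1), ('C', 5)], 'B': [('C', 1)], 'C': []}
print(uniform_cost_search('A', graph.__getitem__, lambda s: s == 'C'))  # → 2
```

With unit arc costs every level has equal cost, so expansion order collapses to breadth-first order, matching the last bullet above.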

14 4. Depth-First or Backtracking Search
Procedure:
- Generate the successors of a node just one at a time.
- A trace is left at each node to indicate that additional operators can be applied there if needed.
- At each node a decision must be made about which operator to apply first, which next, and so on.
- Repeat this process until the depth bound is reached.
- Backtrack chronologically when the search depth reaches the depth bound.
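In a recursive sketch, the "trace" is simply the recursion stack, and chronological backtracking is the return from a failed call. The names and the cycle check on the current path are illustrative assumptions:

```python
# Sketch of depth-first search with a depth bound and chronological backtracking.

def depth_first_search(state, successors, is_goal, bound, path=None):
    """Return a path to a goal within `bound` steps, or None after backtracking."""
    if path is None:
        path = [state]
    if is_goal(state):
        return path
    if bound == 0:
        return None                   # chronological backtrack at the depth bound
    for nxt in successors(state):     # operator order decides which child is tried first
        if nxt not in path:           # avoid cycling along the current path
            result = depth_first_search(nxt, successors, is_goal,
                                        bound - 1, path + [nxt])
            if result is not None:
                return result
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
print(depth_first_search('A', graph.__getitem__, lambda s: s == 'D', bound=5))
# → ['A', 'B', 'D']
```

Only the current path (plus the per-node trace of untried operators) is in memory at any time, which is the linear-memory advantage discussed two slides below.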

15 4. Depth-First or Backtracking Search (Cont'd)
8-puzzle example:
- Depth bound: 5
- Operator order: left, up, right, down
Figure 8.3 Generation of the First Few Nodes in a Depth-First Search

16 4. Depth-First or Backtracking Search (Cont'd)
The graph when the goal is reached in depth-first search

17 4. Depth-First or Backtracking Search (Cont'd)
Advantage:
- Low memory requirement: linear in the depth bound; only the part of the search tree consisting of the path currently being explored (plus traces) is saved
Disadvantage:
- No guarantee that the path found to the goal is of minimal length
- A large part of the search space may have to be explored

18 5. Iterative Deepening
Advantages:
- The linear memory requirement of depth-first search
- A guarantee of finding a goal node of minimal depth
Procedure:
- Successive depth-first searches are conducted, each with the depth bound increased by 1
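The procedure can be sketched as repeated depth-limited searches; `max_bound` is an illustrative safety cap, not part of the method as described on the slides:

```python
# Sketch of iterative deepening: repeated depth-limited depth-first searches
# with the bound increased by 1 each round.

def depth_limited(state, successors, is_goal, bound, path):
    if is_goal(state):
        return path
    if bound == 0:
        return None                       # backtrack at the current depth bound
    for nxt in successors(state):
        if nxt not in path:
            found = depth_limited(nxt, successors, is_goal, bound - 1, path + [nxt])
            if found is not None:
                return found
    return None

def iterative_deepening(start, successors, is_goal, max_bound=50):
    for bound in range(max_bound + 1):    # bounds 0, 1, 2, ...
        result = depth_limited(start, successors, is_goal, bound, [start])
        if result is not None:
            return result                 # first success is at minimal depth
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(iterative_deepening('A', graph.__getitem__, lambda s: s == 'D'))
# → ['A', 'B', 'D']
```

Each round keeps only the current path in memory, yet no goal at depth k can be missed once the bound reaches k, which yields both advantages listed above.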

19 5. Iterative Deepening (Cont'd)
Figure 8.5 Stages in Iterative-Deepening Search

20 5. Iterative Deepening (Cont'd)
The number of expanded nodes, for branching factor b and goal depth d:
- In breadth-first search: N_bf = 1 + b + b^2 + ... + b^d = (b^(d+1) - 1) / (b - 1)
- In iterative-deepening search (in the worst case, a node at depth j is expanded d - j + 1 times): N_id = (d+1) + d*b + (d-1)*b^2 + ... + b^d
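These counts can be checked numerically; the sketch below assumes the worst-case re-expansion counts stated above and verifies that the ratio approaches b/(b-1) as claimed on the next slide.

```python
# Sketch: worst-case node expansions of breadth-first search versus
# iterative deepening, for branching factor b and goal depth d.

def n_bfs(b, d):
    # 1 + b + b^2 + ... + b^d
    return sum(b**j for j in range(d + 1))

def n_ids(b, d):
    # Each node at depth j is re-expanded (d - j + 1) times in the worst case.
    return sum((d - j + 1) * b**j for j in range(d + 1))

b, d = 10, 10
ratio = n_ids(b, d) / n_bfs(b, d)
print(round(ratio, 3))   # close to b / (b - 1) = 10/9 ≈ 1.111
```

For b = 10 the ratio is about 1.11, i.e., roughly 11% extra work, matching the figure on slide 21.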

21 5. Iterative Deepening (Cont'd)
- For large d the ratio N_id / N_bf approaches b/(b-1)
- For a branching factor of 10 and deep goals, iterative-deepening search expands about 11% more nodes than breadth-first search
- A related technique, iterative broadening, is useful when there are many goal nodes

22 6. Additional Readings and Discussion
Various improvements on chronological backtracking:
- Dependency-directed backtracking [Stallman & Sussman 1977]
- Backjumping [Gaschnig 1979]
- Dynamic backtracking [Ginsberg 1993]

