Artificial Intelligence: Solving Problems by Searching

Artificial Intelligence: Solving Problems by Searching. Dr. Shahriar Bijani, Shahed University, Spring 2017

Slides' Reference
S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 3rd edition, Chapter 3, Prentice Hall, 2010.
Dan Klein and Pieter Abbeel, CS 188: Artificial Intelligence, University of California, Berkeley, 2014.

Outline
Problem-solving agents
Problem types
Problem formulation
Example problems
Basic search algorithms

Problem-solving agents
(Diagram: an agent connected to its environment through sensors and actuators; given its actions, an initial state, and a goal test, it decides what to do by graph searching.)

Problem-solving agents

Example: Mashhad
On holiday in Iran; currently in Tehran.
Formulate goal: be in Mashhad
Formulate problem: states: various cities; actions: drive between cities
Find solution: sequence of cities, e.g., Tehran, Semnan, Sabzevar, Neyshaboor, Mashhad

State Space and Successor Function
(Diagram: the map as a state space, annotated with actions, initial state, and goal test.)

Initial State
(Diagram: the same state space and successor function, with the initial state highlighted.)

Goal Test
(Diagram: the same state space and successor function, with the goal states highlighted.)

Outline
Problem-solving agents
Problem types
Problem formulation
Example problems
Basic search algorithms

Problem types
Deterministic, fully observable → single-state problem: the agent knows exactly which state it will be in; the solution is a sequence
Non-observable → sensorless problem (conformant problem): the agent may have no idea where it is; the solution is a sequence
Nondeterministic and/or partially observable → contingency problem: percepts provide new information about the current state; often interleaves search and execution
Unknown state space → exploration problem

Example: vacuum world Single-state, start in #5. Solution?

Example: vacuum world Single-state, start in #5. Solution? [Right, Suck] Sensorless, start in {1,2,3,4,5,6,7,8} e.g., Right goes to {2,4,6,8} Solution?

Example: vacuum world
Single-state, start in #5. Solution? [Right, Suck]
Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck]

Example: vacuum world
Single-state, start in #5. Solution? [Right, Suck]
Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck]
Contingency. Nondeterministic: Suck may dirty a clean carpet. Partially observable: location, dirt at current location. Percept: [L, Clean], i.e., start in #5 or #7. Solution? [Right, if dirt then Suck]

Outline
Problem-solving agents
Problem types
Problem formulation
Example problems
Basic search algorithms

Problem formulation: Pacman example

Search Problem State space Initial state Successor function Goal test Path cost

Search Problem
State space: each state is an abstract representation of the environment; the state space is discrete
Initial state
Successor function
Goal test
Path cost

Search Problem
What is in a state space? The world state includes every detail of the environment; a search state keeps only the details needed for planning (abstraction).
Wrong state for "eat-all-dots": (x, y, dot count): the count does not record which dots remain.

Search Problem
Initial state: usually the current state; sometimes one or several imaginary states ("what if …")

Search Problem
Successor function: [state → subset of states], an abstract representation of the possible actions and their costs, e.g., ("N", 1.0), ("E", 1.0)

Search Problem
Goal test: usually a condition; sometimes the description of a state

Search Problem
Path cost: [path → positive number]; usually path cost = sum of step costs, e.g., the number of moves of the empty tile

Single-state problem formulation
A problem is defined by four items:
initial state, e.g., "at Tehran"
actions or successor function S(x) = set of action–state pairs, e.g., S(Tehran) = {<Tehran → Semnan, Semnan>, …}
goal test, which can be explicit, e.g., x = "at Mashhad", or implicit, e.g., Checkmate(x) (chess)
path cost (additive), e.g., sum of distances, number of actions executed, etc.; c(x, a, y) is the step cost, assumed to be ≥ 0
A solution is a sequence of actions leading from the initial state to a goal state
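
These four items translate almost line for line into code. A minimal Python sketch of the Tehran-to-Mashhad formulation; the road map and distances are illustrative placeholders (not real distances), and RouteProblem is a hypothetical name:

```python
# Illustrative road map: city -> [(neighbor, distance)]; distances are
# placeholders, not real road distances.
ROADS = {
    "Tehran":     [("Semnan", 216)],
    "Semnan":     [("Tehran", 216), ("Sabzevar", 430)],
    "Sabzevar":   [("Semnan", 430), ("Neyshaboor", 110)],
    "Neyshaboor": [("Sabzevar", 110), ("Mashhad", 115)],
    "Mashhad":    [("Neyshaboor", 115)],
}

class RouteProblem:
    """The four items: initial state, successor function S(x),
    goal test, and additive path cost c(x, a, y) >= 0."""
    def __init__(self, initial, goal):
        self.initial = initial
        self.goal = goal

    def successors(self, state):
        # S(x) = set of <action, next-state> pairs, here with step costs
        return [(f"go to {city}", city, dist) for city, dist in ROADS[state]]

    def goal_test(self, state):
        return state == self.goal   # explicit goal test: x = "at Mashhad"

problem = RouteProblem("Tehran", "Mashhad")
```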

Example: Chess
Initial state, actions, goal state(s). (Diagram: a chess position.)

Selecting a state space
The real world is very complex → the state space must be abstracted for problem solving
(Abstract) state = set of real states
(Abstract) action = complex combination of real actions, e.g., "Tehran → Semnan" represents a complex set of possible routes, rest stops, etc.
For guaranteed realizability, any real state "in Tehran" must get to some real state "in Semnan"
(Abstract) solution = set of real paths that are solutions in the real world
Each abstract action should be "easier" than the original problem

State Space Graphs & Search Trees

State Space Graphs
A state space graph is a mathematical representation of a search problem:
Nodes are (abstracted) world configurations
Arcs represent successors (action results)
The goal test is a set of goal nodes (maybe only one)
In a state space graph, each state occurs only once. Although building the full graph in memory is a useful idea, we can rarely do it (it is too big).
(Figure: a tiny search graph for a tiny search problem.)
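
In code, a state space graph of this size is just an adjacency map. A sketch of the tiny graph on the slide (the edges are read off the figure, so treat them as illustrative):

```python
# Each state appears exactly once as a key, unlike nodes in a search tree.
GRAPH = {
    "S": ["d", "e", "p"],
    "d": ["b", "c", "e"], "e": ["h", "r"], "p": ["q"],
    "b": ["a"], "c": ["a"], "h": ["p", "q"], "r": ["f"],
    "f": ["c", "G"], "a": [], "q": [], "G": [],
}
GOALS = {"G"}   # the goal test is a set of goal nodes
```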

Search Trees
A search tree is a "what if" tree of plans and their outcomes:
The start state is the root node ("this is now"); children correspond to successors ("possible futures")
Nodes show states, but correspond to PLANS that achieve those states
Different plans that achieve the same state will be different nodes in the tree
For most problems, we can never actually build the whole tree

State Space Graphs vs. Search Trees
Each NODE in the search tree is an entire PATH in the state space graph.
We construct both on demand, and we construct as little as possible.
(Figure: the state space graph on the left unrolled into a search tree on the right.)

Quiz: State Space Graphs vs. Search Trees
Consider this 4-state graph (states S, a, b, G): how big is its search tree (from S)?
Important: lots of repeated structure in the search tree!

Tree Search

Outline
Problem-solving agents
Problem types
Problem formulation
Example problems
Basic search algorithms

Vacuum world state space graph
states? actions? goal test? path cost?

Vacuum world state space graph
states? integer dirt and robot location
actions? Left, Right, Suck
goal test? no dirt at all locations
path cost? 1 per action

Example: The 8-puzzle states? actions? goal test? path cost?

Example: The 8-puzzle
states? locations of tiles
actions? move blank left, right, up, down
goal test? = goal state (given)
path cost? 1 per move
[Note: optimal solution of the n-puzzle family is NP-hard]
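
A sketch of the 8-puzzle successor function, with a state as a tuple of nine entries in row-major order (0 for the blank), so each action is a swap:

```python
def successors(state):
    """Yield (action, next_state) pairs; `state` is a tuple of
    9 ints in row-major order, with 0 marking the blank."""
    i = state.index(0)                      # where the blank is
    row, col = divmod(i, 3)
    candidates = [("up", -3, row > 0), ("down", 3, row < 2),
                  ("left", -1, col > 0), ("right", 1, col < 2)]
    for action, delta, legal in candidates:
        if not legal:                       # would fall off the board
            continue
        nxt = list(state)
        j = i + delta
        nxt[i], nxt[j] = nxt[j], nxt[i]     # slide a tile into the blank
        yield action, tuple(nxt)

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)          # one common goal convention
```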

Example: robotic assembly

Example: robotic assembly
States?: collections of sub-assemblies
Initial state?: all sub-assemblies are individual parts
Goal state?: complete assembly
Successor function?: merge two subassemblies (check for collision)
Cost function?: length of the sequence of assembly operations

Example: 8-queens
Place 8 queens on a chessboard so that no two queens are in the same row, column, or diagonal.
(Figures: a solution; not a solution.)

Example: 8-queens
Formulation #1:
States: any arrangement of 0 to 8 queens on the board
Initial state: 0 queens on the board
Successor function: add a queen in any square
Goal test: 8 queens on the board, none attacked
→ 64^8 states with 8 queens

Example: 8-queens
Formulation #2:
States: any arrangement of k = 0 to 8 queens in the k leftmost columns with none attacked
Initial state: 0 queens on the board
Successor function: add a queen to any square in the leftmost empty column such that it is not attacked by any other queen
Goal test: 8 queens on the board
→ 2,057 states
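
Formulation #2 is compact in code: a state is a tuple of queen rows, one entry per filled column from the left. A sketch:

```python
def attacked(rows, new_row):
    """Would a queen at (column len(rows), new_row) be attacked by any
    queen already placed at (c, rows[c])? Columns never clash."""
    new_col = len(rows)
    return any(r == new_row or abs(r - new_row) == new_col - c
               for c, r in enumerate(rows))

def successors(rows):
    # Add a non-attacked queen to the leftmost empty column.
    return [rows + (r,) for r in range(8) if not attacked(rows, r)]

def goal_test(rows):
    return len(rows) == 8    # none attacked, by construction

# Counting the reachable states reproduces the slide's figure:
def count(rows=()):
    return 1 + sum(count(s) for s in successors(rows)) if len(rows) < 8 else 1

print(count())   # 2,057 states
```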

Example: Pacman
Problem: Find the Path. States: (x, y) location. Actions: NSEW. Successor: update location only. Goal test: is (x, y) = END?
Problem: Eating All Dots. States: {(x, y), dot booleans}. Actions: NSEW. Successor: update location and possibly a dot boolean. Goal test: all dot booleans false.
(A wrong state for "eat-all-dots": (x, y, dot count).)

The State Space Sizes of Pacman
World state: agent positions: 120; food count: 30; ghost positions: 12 (for each of 2 ghosts); agent facing: NSEW
How many world states? 120 × 2^30 × 12^2 × 4
States for "finding the path"? 120
States for "eating all dots"? 120 × 2^30
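
The counts above are quick to sanity-check in Python (values taken from the slide):

```python
positions, food, ghosts, headings = 120, 30, 2, 4   # 12 cells per ghost
world_states = positions * 2**food * 12**ghosts * headings
print(f"world states: {world_states:,}")            # 120 x 2^30 x 12^2 x 4
print(f"find-the-path states: {positions}")
print(f"eat-all-dots states: {positions * 2**food:,}")
```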

Outline
Problem-solving agents
Problem types
Problem formulation
Example problems
Basic search algorithms

Assumptions in Basic Search
The environment is static
The environment is discretizable
The environment is observable
The actions are deterministic
→ open-loop solution (control theory)

Search of State Space → search tree
(A sequence of slides shows the state space being expanded, step by step, into a search tree.)

Tree search algorithms
Basic idea: offline, simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states)

Fringe
Set of search nodes that have not been expanded yet
Implemented as a queue FRINGE: INSERT(node, FRINGE), REMOVE(FRINGE)
The ordering of the nodes in FRINGE defines the search strategy

Search Algorithm
If GOAL?(initial-state) then return initial-state
INSERT(initial-node, FRINGE)
Repeat:
  If FRINGE is empty then return failure
  n ← REMOVE(FRINGE)
  s ← STATE(n)
  For every state s' in SUCCESSORS(s):
    Create a node n' as a successor of n
    If GOAL?(s') then return path or goal state
    INSERT(n', FRINGE)

Implementation: general tree search
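
The transcript drops this slide's figure (the TREE-SEARCH pseudocode); a minimal runnable sketch of the FRINGE algorithm above, assuming successors(s) yields (action, next-state, step-cost) triples:

```python
from collections import deque

class Node:
    """Search-tree node: STATE, PARENT, ACTION, path COST, DEPTH."""
    def __init__(self, state, parent=None, action=None, cost=0.0):
        self.state, self.parent, self.action = state, parent, action
        self.cost = cost
        self.depth = 0 if parent is None else parent.depth + 1

    def path(self):
        node, actions = self, []
        while node.parent is not None:      # walk PARENT links to the root
            actions.append(node.action)
            node = node.parent
        return actions[::-1]

def tree_search(initial_state, goal_test, successors):
    if goal_test(initial_state):
        return []
    fringe = deque([Node(initial_state)])   # the ordering = the strategy
    while fringe:                           # empty fringe -> failure
        n = fringe.popleft()                # REMOVE(FRINGE)
        for action, s2, c in successors(n.state):
            n2 = Node(s2, n, action, n.cost + c)
            if goal_test(s2):               # goal test at generation time
                return n2.path()
            fringe.append(n2)               # INSERT(n', FRINGE)
    return None
```

With a FIFO queue this behaves breadth-first; substituting a stack or a priority queue changes only the fringe ordering, which is exactly the point of the strategy slides below.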

Implementation: states vs. nodes
A state is a (representation of) a physical configuration.
A node is a data structure constituting part of a search tree; it includes state, parent node, action, path cost g(x), and depth.
The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.

Implementation: Node Data Structure
Fields: STATE, PARENT, ACTION, COST, DEPTH
If a state is too large, it may be preferable to represent only the initial state and (re-)generate the other states when needed

Search Nodes ≠ States
The search tree may be infinite even when the state space is finite.
(Figure: 8-puzzle positions recurring along a branch of the search tree.)

Search strategies Search strategy: selecting the order of node expansion Strategies are evaluated by: completeness: does it always find a solution if one exists? time complexity: number of nodes generated space complexity: maximum number of nodes in memory optimality: does it always find a least-cost solution? Computing time and space complexity: b: maximum branching factor of the search tree d: depth of the least-cost solution m: maximum depth of the state space (may be ∞)

Blind vs. Heuristic Strategies
Blind (or uninformed) strategies do not use any state information; they use only the information in the problem definition.
Heuristic (or informed) strategies use state information to assess whether one node is "more promising" than another.

Uninformed search strategies Breadth-first search Uniform-cost search Bidirectional Strategy Depth-first search Depth-limited search Iterative deepening search

Blind Strategies
Step cost = 1: Breadth-first, Bidirectional, Depth-first, Depth-limited, Iterative deepening
Step cost = c(action) ≥ ε > 0: Uniform-Cost

Breadth-First Strategy

Breadth-First Search
Strategy: expand a shallowest node first.
(Figure: level-by-level expansion of the example graph.)
Source: Dan Klein and Pieter Abbeel, CS 188: Artificial Intelligence, University of California, Berkeley

Breadth-First Strategy
New nodes are inserted at the end of FRINGE; the fringe is a FIFO queue.
Expanding a 7-node binary tree (root 1, children 2 and 3, and so on):
FRINGE = (1) → (2, 3) → (3, 4, 5) → (4, 5, 6, 7)
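
The FIFO discipline can be watched directly. A tiny trace over the 7-node binary tree from the animation (tree shape assumed from the figure):

```python
from collections import deque

TREE = {1: [2, 3], 2: [4, 5], 3: [6, 7], 4: [], 5: [], 6: [], 7: []}

fringe = deque([1])
while fringe:
    print("FRINGE =", tuple(fringe))
    n = fringe.popleft()        # oldest node comes out...
    fringe.extend(TREE[n])      # ...new nodes go in at the end
# First lines printed: (1,), (2, 3), (3, 4, 5), (4, 5, 6, 7), ...
```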

Breadth-first Evaluation
(Tree: 1 node at the root, b nodes at depth 1, b^2 nodes at depth 2, …, b^d nodes at depth d.)
Complete? Yes (d must be finite if a solution exists)
Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
Space? O(b^(d+1)) (keeps every node in memory)
Optimal? Yes (if cost = 1 per step)

Time and Memory Requirements
Assumptions: b = 10; 1,000,000 nodes/sec; 100 bytes/node

d    #Nodes     Time        Memory
2    111        0.01 msec   11 Kbytes
4    11,111     1 msec      1 Mbyte
6    ~10^6      1 sec       100 Mbytes
8    ~10^8      100 sec     10 Gbytes
10   ~10^10     2.8 hours   1 Tbyte
12   ~10^12     11.6 days   100 Tbytes
14   ~10^14     3.2 years   10,000 Tbytes

Time and Memory Requirements Space is the bigger problem (more than time)

Uniform-cost search
Breadth-first is optimal only if path cost is a nondecreasing function of depth, i.e., cost(d) ≥ cost(d−1); e.g., constant step cost, as in the 8-puzzle.
Uniform-cost search expands the unexpanded node with the smallest path cost g(n).
Implementation: fringe = a priority queue ordered by path cost g(n); new successors are merged into the queue sorted by g(n).
Equivalent to breadth-first if step costs are all equal.

Uniform-cost search
Complete? Yes, if step cost ≥ ε
Time? # of nodes with g ≤ cost of optimal solution: O(b^(1+C*/ε)) ≈ O(b^(d+1)), where C* is the cost of the optimal solution
Space? # of nodes with g ≤ cost of optimal solution: O(b^⌈C*/ε⌉) ≈ O(b^(1+C*/ε)) ≈ O(b^(d+1))
Optimal? Yes: nodes are expanded in increasing order of g(n)
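
A sketch of uniform-cost search with Python's heapq as the priority queue. Per the slide, the fringe is ordered by g(n); the goal test happens on removal (see "When to do Goal-Test?" near the end), and a tie-breaking counter keeps heap entries comparable. Remembering the best g per state is an optional refinement beyond pure tree search:

```python
import heapq
from itertools import count

def uniform_cost_search(start, goal_test, successors):
    """successors(s) yields (action, next_state, step_cost), cost >= epsilon > 0.
    Goal test on removal: the first goal popped has minimal g."""
    tie = count()
    frontier = [(0.0, next(tie), start, [])]      # (g, tiebreak, state, path)
    best_g = {start: 0.0}
    while frontier:
        g, _, s, path = heapq.heappop(frontier)
        if goal_test(s):
            return path, g
        if g > best_g.get(s, float("inf")):       # stale queue entry
            continue
        for action, s2, c in successors(s):
            g2 = g + c
            if g2 < best_g.get(s2, float("inf")):
                best_g[s2] = g2
                heapq.heappush(frontier, (g2, next(tie), s2, path + [action]))
    return None
```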

Bidirectional Strategy
Two fringe queues: FRINGE1 and FRINGE2
Time and space complexity = O(b^(d/2)) << O(b^d)
The branching factor may be different in each direction
Must check whether states from the two searches are identical

Depth-First Strategy

Depth-First Strategy
Strategy: expand a deepest node first.
(Figure: depth-first expansion of the example graph.)

Depth-First Strategy
New nodes are inserted at the front of FRINGE; the fringe is a LIFO stack.
Expanding a small binary tree: FRINGE = (1) → (2, 3) → (4, 5, 3) → …
(A sequence of slides shows the search descending the leftmost branch and backtracking when a branch is exhausted.)

Evaluation
b: branching factor
d: depth of the shallowest goal node
m: maximal depth of a leaf node
Number of nodes generated: 1 + b + b^2 + … + b^m = O(b^m)
(Diagram: 1 node, b nodes, b^2 nodes, …, b^m nodes over m tiers.)

Properties of depth-first search
Complete? No: fails in infinite-depth spaces and in spaces with loops. Modified to avoid repeated states along the current path, it is complete in finite spaces (no cycles).
Time? O(b^m), where m = maximum depth of the space: terrible if m is much larger than d, but if solutions are dense it may be much faster than breadth-first
Space? Stores only the current path plus the unexpanded siblings of nodes on it: O(bm), i.e., linear space!
Optimal? No: it finds the "leftmost" solution, regardless of depth or cost
(Compare breadth-first: O(b^d) time and space.)
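
A recursive sketch of depth-first search incorporating the modification mentioned above: states already on the current path are skipped, which avoids loops while keeping O(bm) space (Python's recursion limit stands in for m being finite):

```python
def depth_first_search(state, goal_test, successors, path=()):
    """successors(s) yields (action, next_state) pairs.
    Returns a list of actions, or None on failure."""
    if goal_test(state):
        return []
    path = path + (state,)            # states along the current branch
    for action, s2 in successors(state):
        if s2 in path:                # repeated state on this path: skip
            continue
        result = depth_first_search(s2, goal_test, successors, path)
        if result is not None:
            return [action] + result
    return None
```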

Maze DFS Demo Source: Dan Klein and Pieter Abbeel, CS 188: Artificial Intelligence, University of California, Berkeley

Depth-Limited Strategy Depth-first with depth cutoff k (maximal depth k, below which nodes are not expanded) Three possible outcomes: Solution Failure (no solution) Cutoff (no solution within cutoff)

Depth-limited search = depth-first search with depth limit l.
(The slide shows R&N's recursive implementation.)
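
The recursive implementation itself did not survive into the transcript; a sketch in the spirit of R&N's RECURSIVE-DLS, returning the three outcomes listed above (solution, failure, cutoff):

```python
CUTOFF = "cutoff"    # sentinel: no solution *within the depth limit*

def depth_limited_search(state, goal_test, successors, limit):
    """Returns a list of actions, None (failure), or CUTOFF."""
    if goal_test(state):
        return []
    if limit == 0:
        return CUTOFF                 # depth cutoff reached
    cutoff_occurred = False
    for action, s2 in successors(state):
        result = depth_limited_search(s2, goal_test, successors, limit - 1)
        if result is CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return [action] + result
    return CUTOFF if cutoff_occurred else None
```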

Iterative deepening search

Iterative deepening search with successive depth limits l = 0, 1, 2, 3 (one slide per limit)

Iterative deepening search
Number of nodes generated in a depth-limited search to depth d with branching factor b:
N_DLS = b^0 + b^1 + b^2 + … + b^(d−2) + b^(d−1) + b^d
Number of nodes generated in an iterative deepening search to depth d with branching factor b:
N_IDS = (d+1)b^0 + d·b^1 + (d−1)·b^2 + … + 3b^(d−2) + 2b^(d−1) + 1·b^d
For b = 10, d = 5:
N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
Overhead = (123,456 − 111,111)/111,111 = 11%

Properties of iterative deepening search
Complete? Yes
Time? (d+1)b^0 + d·b^1 + (d−1)·b^2 + … + b^d = O(b^d)
Space? O(bd)
Optimal? Yes, if step cost = 1
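
Iterative deepening is then a one-loop wrapper around the depth-limited sketch above (reusing depth_limited_search and CUTOFF from it):

```python
from itertools import count

def iterative_deepening_search(state, goal_test, successors):
    for limit in count():             # l = 0, 1, 2, ...
        result = depth_limited_search(state, goal_test, successors, limit)
        if result is not CUTOFF:      # either a solution or definite failure
            return result
```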

Comparison of Strategies Breadth-first is complete and optimal, but has high space complexity Depth-first is space efficient, but neither complete nor optimal Iterative deepening is asymptotically optimal

Repeated States
No repeated states → the search tree is finite (e.g., 8-queens)
Few repeated states (e.g., assembly planning)
Many repeated states → the search tree is infinite even when the state space is finite (e.g., the 8-puzzle and robot navigation)

Repeated States Failure to detect repeated states can turn a linear problem into an exponential one!

Avoiding Repeated States
Requires comparing state descriptions
Breadth-first strategy: keep track of all generated states; if the state of a new node already exists → discard the node

Avoiding Repeated States
Depth-first strategy:
Solution 1: keep track of all states associated with nodes in the current path; if the state of a new node already exists → discard the node → avoids loops
Solution 2: keep track of all states generated so far; if the state of a new node has already been generated → discard the node → space complexity of breadth-first

Graph search
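
Graph search is tree search plus the bookkeeping of "Solution 2" above: remember every state generated so far and discard nodes that revisit one. A breadth-first sketch:

```python
from collections import deque

def breadth_first_graph_search(start, goal_test, successors):
    """successors(s) yields (action, next_state) pairs."""
    if goal_test(start):
        return []
    frontier = deque([(start, [])])
    generated = {start}                   # all states generated so far
    while frontier:
        s, path = frontier.popleft()
        for action, s2 in successors(s):
            if s2 in generated:           # repeated state: discard the node
                continue
            if goal_test(s2):
                return path + [action]
            generated.add(s2)
            frontier.append((s2, path + [action]))
    return None
```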

Example: Travelling salesman problem
A salesman must visit 5 cities (Abadan, Babol, Cemirom, Damavand, Eilam). What is the shortest route?
(Diagram: the cities as a graph with inter-city distances 594, 524, 619, 127, 184, 78, 467, 233, 395, 493.)

Example: Travelling salesman problem 594 619 524 B C D 184 233 78 184 78 233 C D B D C B 233 78 184 184 233 78 D C D B B C 1011 905 786 835 1036 881 No of paths = (n-1)! n=4, p=6 n=5, p=24 n=10, p=362,880

When to do Goal-Test?
For DFS, BFS, DLS, and IDS, the goal test is done when the child node is generated. These are not optimal searches in the general case. BFS and IDS are optimal if cost is a function of depth only; then optimal goals are also shallowest goals and so will be found first.
For GBFS the behavior is the same whether the goal test is done when the node is generated or when it is removed: h(goal) = 0, so any goal will be at the front of the queue anyway.
For UCS and A* the goal test is done when the node is removed from the queue. This precaution avoids finding a short expensive path before a long cheap path.

Summary
Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored
Search trees vs. state space graphs
Uninformed search strategies: breadth-first, depth-first, and variants
Evaluation of strategies: completeness, optimality, time and space complexity
Iterative deepening search uses only linear space and not much more time than other uninformed algorithms
Avoiding repeated states
Optimal search with variable step costs

Summary of algorithms
Criterion    Breadth-First    Uniform-Cost     Depth-First   Depth-Limited   Iterative Deepening
Complete?    Yes              Yes (cost ≥ ε)   No            No              Yes
Time         O(b^(d+1))       O(b^(1+C*/ε))    O(b^m)        O(b^l)          O(b^d)
Space        O(b^(d+1))       O(b^(1+C*/ε))    O(bm)         O(bl)           O(bd)
Optimal?     Yes (unit cost)  Yes              No            No              Yes (unit cost)

Example: The Water Jugs Problem
Given a 4-gallon jug and a 3-gallon jug, how can you get exactly 2 gallons into the 4-gallon jug?
Possible operators: empty a jug; fill a jug from the tap; pour the contents of one jug into another
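
A sketch of the water jugs formulation, with a state (x, y) giving the gallons in the 4- and 3-gallon jugs; fed to the breadth-first graph search sketched earlier, it finds a shortest operator sequence reaching 2 gallons in the 4-gallon jug:

```python
CAP = (4, 3)    # jug capacities: (4-gallon, 3-gallon)

def successors(state):
    """Yield (action, next_state) pairs for the two-jug puzzle."""
    for i in (0, 1):
        j = 1 - i
        if state[i] < CAP[i]:             # fill jug i from the tap
            s = list(state); s[i] = CAP[i]
            yield f"fill the {CAP[i]}-gallon jug", tuple(s)
        if state[i] > 0:                  # empty jug i
            s = list(state); s[i] = 0
            yield f"empty the {CAP[i]}-gallon jug", tuple(s)
        pour = min(state[i], CAP[j] - state[j])
        if pour > 0:                      # pour i into j until full or empty
            s = list(state); s[i] -= pour; s[j] += pour
            yield f"pour the {CAP[i]}-gallon jug into the {CAP[j]}-gallon jug", tuple(s)

# Using the graph search sketch from earlier:
# print(breadth_first_graph_search((0, 0), lambda s: s[0] == 2, successors))
```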

The Water Jugs Problem – Search Tree
(Diagram: tree of states (x, y) = gallons in the 4- and 3-gallon jugs, rooted at (0, 0), branching through states such as (4, 0), (0, 3), (1, 3), and (4, 3), and reaching states such as (0, 2) and the goal (2, 0).)

Blind Search – Breadth First
(Diagram: BFS visits the states in the order (0,0); (4,0), (0,3); (4,3), (1,3), (3,0), (1,0); (3,3), (0,1), (4,2), (4,1), (0,2), (2,0).)

Blind Search – Depth First
(Diagram: DFS follows the branch (0,0), (0,3), (4,3), (3,0), (4,0), (3,3), (4,2), (0,2), (2,0).)