
Artificial Intelligence Solving problems by searching

Presentation on theme: "Artificial Intelligence Solving problems by searching"— Presentation transcript:

1 Artificial Intelligence Solving problems by searching
Dr. Shahriar Bijani Shahed University Spring 2017

2 Slides’ References
S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 3rd ed., Prentice Hall, 2010, Chapter 3.
D. Klein and P. Abbeel, CS 188: Artificial Intelligence, University of California, Berkeley.

3 Outline
Problem-solving agents
Problem types
Problem formulation
Example problems
Basic search algorithms

4 Problem-solving agents
[Figure: an agent connected to its environment through sensors and actuators; the agent must choose actions given an initial state and a goal test — graph searching]

5 Problem-solving agents

6 Example: Mashhad On holiday in Mashhad; currently in Tehran.
Formulate goal: be in Mashhad
Formulate problem:
  states: various cities
  actions: drive between cities
Find solution: sequence of cities, e.g., Tehran, Semnan, Sabzevar, Neyshaboor, Mashhad

7 State Space and Successor Function
[Figure: a state space defined by an initial state, actions (successor function), and a goal test]

8 Initial State
[Figure: the same state space diagram, highlighting the initial state and the successor function]

9 Goal Test
[Figure: the same state space diagram, highlighting the goal test]

10 Outline
Problem-solving agents
Problem types
Problem formulation
Example problems
Basic search algorithms

11 Problem types
Deterministic, fully observable → single-state problem
  Agent knows exactly which state it will be in; solution is a sequence
Non-observable → sensorless problem (conformant problem)
  Agent may have no idea where it is; solution is a sequence
Nondeterministic and/or partially observable → contingency problem
  Percepts provide new information about the current state; often interleave search and execution
Unknown state space → exploration problem

12 Example: vacuum world Single-state, start in #5. Solution?

13 Example: vacuum world
Single-state, start in #5. Solution? [Right, Suck]
Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}. Solution?

14 Example: vacuum world
Single-state, start in #5. Solution? [Right, Suck]
Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck]

15 Example: vacuum world
Single-state, start in #5. Solution? [Right, Suck]
Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck]
Contingency
  Nondeterministic: Suck may dirty a clean carpet
  Partially observable: location, dirt at current location
  Percept: [L, Clean], i.e., start in #5 or #7. Solution? [Right, if dirt then Suck]

16 Outline
Problem-solving agents
Problem types
Problem formulation
Example problems
Basic search algorithms

17 Problem formulation: Pacman Example

18 Search Problem State space Initial state Successor function Goal test
Path cost

19 Search Problem State space Initial state Successor function Goal test
each state is an abstract representation of the environment the state space is discrete Initial state Successor function Goal test Path cost

20 Search Problem State space: What is in a state space?
The world state includes every detail of the environment
A search state keeps only the details needed for planning (abstraction)
Wrong state for “eat-all-dots”: (x, y, dot count) — a dot count does not record which dots remain

21 Search Problem State space Initial state: Successor function Goal test
usually the current state sometimes one or several imaginary states (“what if …”) Successor function Goal test Path cost

22 Search Problem State space Initial state Successor function: Goal test
[state → subset of states]
an abstract representation of the possible actions: actions and their costs, e.g., (“N”, 1.0), (“E”, 1.0)
Goal test Path cost

23 Search Problem State space Initial state Successor function Goal test:
usually a condition sometimes the description of a state Path cost

24 Search Problem State space Initial state Successor function Goal test
Path cost: [path → positive number]
usually, path cost = sum of step costs, e.g., number of moves of the empty tile

25 Single-state problem formulation
A problem is defined by four items:
initial state, e.g., "at Tehran"
actions or successor function S(x) = set of action–state pairs, e.g., S(Tehran) = {<Tehran → Semnan, Semnan>, …}
goal test, which can be explicit, e.g., x = "at Mashhad", or implicit, e.g., Checkmate(x) (chess)
path cost (additive), e.g., sum of distances, number of actions executed, etc.; c(x,a,y) is the step cost, assumed to be ≥ 0
A solution is a sequence of actions leading from the initial state to a goal state
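The four-item definition above maps directly onto code. The sketch below is illustrative, not from the slides: the successor table and unit step costs are placeholders, not real road data.

```python
# Hypothetical sketch of the four-item problem definition.
# The road table and costs are placeholders, not real distances.
class Problem:
    def __init__(self, initial, successors, goal):
        self.initial = initial        # initial state, e.g., "Tehran"
        self.successors = successors  # state -> list of (action, next_state, cost)
        self.goal = goal              # explicit goal test: state == goal

    def goal_test(self, state):
        return state == self.goal

    def path_cost(self, states):
        # additive: sum of step costs c(x, a, y), each assumed >= 0
        total = 0
        for x, y in zip(states, states[1:]):
            total += next(c for a, s, c in self.successors[x] if s == y)
        return total

roads = {  # placeholder successor function
    "Tehran":     [("go-Semnan", "Semnan", 1)],
    "Semnan":     [("go-Sabzevar", "Sabzevar", 1)],
    "Sabzevar":   [("go-Neyshaboor", "Neyshaboor", 1)],
    "Neyshaboor": [("go-Mashhad", "Mashhad", 1)],
}
trip = Problem("Tehran", roads, "Mashhad")
```

The solution from slide 6 — Tehran, Semnan, Sabzevar, Neyshaboor, Mashhad — is then a state sequence whose path cost is the sum of its four step costs.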

26 Example: Chess
Initial state, actions, goal state(s)
[Figure: chess position with pieces]

27 Selecting a state space
The real world is very complex → the state space must be abstracted for problem solving
(Abstract) state = set of real states
(Abstract) action = complex combination of real actions, e.g., "Tehran → Semnan" represents a complex set of possible routes, rest stops, etc.
For guaranteed realizability, any real state "in Tehran" must get to some real state "in Semnan"
(Abstract) solution = set of real paths that are solutions in the real world
Each abstract action should be "easier" than the original problem

28 State Space Graphs & Search Trees

29 State Space Graphs
State space graph: a mathematical representation of a search problem
Nodes are (abstracted) world configurations
Arcs represent successors (action results)
The goal test is a set of goal nodes (maybe only one)
In a state space graph, each state occurs only once.
Although building the full graph in memory is a useful idea, we can rarely do it (it’s too big)
[Figure: tiny search graph for a tiny search problem, with nodes S, a, b, c, d, e, f, h, p, q, r, G]
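A state space graph like this is often stored as an adjacency dict. The edges below are made up for illustration and are not the exact graph from the slide’s figure.

```python
# Adjacency-dict representation of a small state space graph.
# Edges are illustrative, NOT the exact graph in the figure.
graph = {
    "S": ["d", "e", "p"],
    "d": ["b", "c"],
    "e": ["h", "r"],
    "p": ["q"],
    "b": ["a"], "c": ["a"],
    "h": ["p", "q"],
    "r": ["f"], "f": ["G"],
    "a": [], "q": [], "G": [],
}
goal_nodes = {"G"}   # the goal test is a set of goal nodes

# each state occurs exactly once as a key, mirroring the slide's point
assert len(graph) == 12
```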

30 Search Trees
A search tree: the start state is "this is now"; below it hang the possible futures
A "what if" tree of plans and their outcomes
The start state is the root node
Children correspond to successors
Nodes show states, but correspond to PLANS that achieve those states
For most problems, we can never actually build the whole tree
Different plans that achieve the same state will be different nodes in the tree

31 State Space Graphs vs. Search Trees
Each NODE in the search tree is an entire PATH in the state space graph.
We construct both on demand, and we construct as little as possible.
[Figure: the tiny state space graph (S … G) alongside the search tree it unrolls into]

32 Quiz: State Space Graphs vs. Search Trees
Consider this 4-state graph (S, a, b, G): how big is its search tree (from S)?
Important: lots of repeated structure in the search tree!

33 Tree Search

34 Outline
Problem-solving agents
Problem types
Problem formulation
Example problems
Basic search algorithms

35 Vacuum world state space graph
actions? goal test? path cost?

36 Vacuum world state space graph
states? dirt and robot locations (integer encoding)
actions? Left, Right, Suck
goal test? no dirt at all locations
path cost? 1 per action

37 Example: The 8-puzzle states? actions? goal test? path cost?

38 Example: The 8-puzzle
states? locations of tiles
actions? move blank left, right, up, down
goal test? = goal state (given)
path cost? 1 per move
[Note: finding an optimal solution of the n-puzzle family is NP-hard]

39 Example: robotic assembly

40 Example: robotic assembly
States?: collection of sub-assemblies
Initial state?: all sub-assemblies are individual parts
Goal state?: complete assembly
Successor function?: merge two subassemblies (check for collision)
Cost function?: longest sequence of assembly operations

41 Example: robotic assembly

42 Example: 8-queens Place 8 queens in a chessboard so that no two queens are in the same row, column, or diagonal. A solution Not a solution

43 Example: 8-queens Formulation #1:
States: any arrangement of 0 to 8 queens on the board
Initial state: 0 queens on the board
Successor function: add a queen in any square
Goal test: 8 queens on the board, none attacked
→ 64^8 ≈ 2.8 × 10^14 states with 8 queens

44 Example: 8-queens Formulation #2:
States: any arrangement of k = 0 to 8 queens in the k leftmost columns with none attacked
Initial state: 0 queens on the board
Successor function: add a queen to any square in the leftmost empty column such that it is not attacked by any other queen
Goal test: 8 queens on the board
→ 2,057 states
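Formulation #2 is small enough to enumerate exhaustively. The sketch below builds exactly these states — k non-attacking queens in the k leftmost columns — and recovers the slide’s 2,057 figure (92 of them are complete solutions):

```python
def count_states(n=8):
    """Count states of formulation #2: k = 0..n non-attacking queens
    in the k leftmost columns (state = tuple of row indices)."""
    total, solutions = 0, 0
    frontier = [()]                    # start from the empty board
    while frontier:
        placement = frontier.pop()
        total += 1
        k = len(placement)
        if k == n:
            solutions += 1
            continue
        for row in range(n):           # add a queen in the next column
            if all(row != r and abs(row - r) != k - c
                   for c, r in enumerate(placement)):
                frontier.append(placement + (row,))
    return total, solutions

print(count_states())  # (2057, 92)
```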

45 Example: Pacman
Problem: Find the Path
  States: (x,y) location
  Actions: N, S, E, W
  Successor: update location only
  Goal test: is (x,y) = END?
Problem: Eating All Dots
  States: {(x,y), dot booleans}
  Actions: N, S, E, W
  Successor: update location and possibly a dot boolean
  Goal test: dots all false
Wrong state for “eat-all-dots”: (x, y, dot count)

46 The State Space Sizes of Pacman
World state: agent positions: 120; food count: 30; ghost positions: 12 (for each of 2 ghosts); agent facing: N, S, E, W
How many?
World states: 120 × 2^30 × 12^2 × 4
States for “finding the path”: 120
States for “eating all dots”: 120 × 2^30
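The slide’s arithmetic, reconstructed in code (the counts — 120 positions, 30 dots, 12 positions for each of 2 ghosts, 4 facings — come from the slide):

```python
positions = 120         # agent (x, y) positions
dots = 2 ** 30          # 30 food dots, each eaten or not
ghosts = 12 ** 2        # two ghosts, 12 positions each
facing = 4              # N, S, E, W

world_states = positions * dots * ghosts * facing
path_states = positions            # "find the path": location only
eat_all_states = positions * dots  # location plus one boolean per dot

print(world_states)    # 74,217,034,874,880  (~7.4e13)
print(eat_all_states)  # 128,849,018,880     (~1.3e11)
```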

47 Outline
Problem-solving agents
Problem types
Problem formulation
Example problems
Basic search algorithms

48 Assumptions in Basic Search
The environment is static
The environment is discretizable
The environment is observable
The actions are deterministic
→ open-loop solution (control theory)

49 Search of State Space

50 Search of State Space

51 Search of State Space

52 Search of State Space

53 Search of State Space

54 Search of State Space → search tree

55 Tree search algorithms
Basic idea: offline, simulated exploration of state space by generating successors of already-explored states (a.k.a. expanding states)

56 Fringe
Set of search nodes that have not been expanded yet
Implemented as a queue FRINGE:
  INSERT(node, FRINGE)
  REMOVE(FRINGE)
The ordering of the nodes in FRINGE defines the search strategy

57 Search Algorithm
If GOAL?(initial-state) then return initial-state
INSERT(initial-node, FRINGE)
Repeat:
  If FRINGE is empty then return failure
  n ← REMOVE(FRINGE)
  s ← STATE(n)
  For every state s' in SUCCESSORS(s):
    Create a node n' as a successor of n
    If GOAL?(s') then return path or goal state
    INSERT(n', FRINGE)
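The pseudocode above maps almost line-for-line onto Python. This sketch stores whole paths as nodes and does the goal test at node generation, as in the slide. Note it is plain tree search: with cycles in the state space it can loop forever.

```python
from collections import deque

def tree_search(initial, successors, is_goal, lifo=False):
    if is_goal(initial):
        return [initial]
    fringe = deque([[initial]])          # INSERT(initial-node, FRINGE)
    while fringe:                        # empty FRINGE -> failure
        path = fringe.pop() if lifo else fringe.popleft()  # n <- REMOVE(FRINGE)
        for s2 in successors(path[-1]):  # for every successor state s'
            if is_goal(s2):
                return path + [s2]       # return the path to the goal
            fringe.append(path + [s2])   # INSERT(n', FRINGE)
    return None                          # failure

# FIFO ordering gives breadth-first; lifo=True gives depth-first.
succ = {"S": ["a", "d"], "a": ["G"], "d": [], "G": []}  # toy example
assert tree_search("S", lambda s: succ[s], lambda s: s == "G") == ["S", "a", "G"]
```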

58 Implementation: general tree search

59 Implementation: states vs. nodes
A state is a (representation of) a physical configuration A node is a data structure constituting part of a search tree includes state, parent node, action, path cost g(x), depth The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.

60 Implementation: Node Data Structure
Fields: STATE, PARENT, ACTION, COST, DEPTH
If a state is too large, it may be preferable to represent only the initial state and (re-)generate the other states when needed

61 Search Nodes  States The search tree may be infinite even
1 2 3 4 5 6 7 8 1 2 3 4 5 6 7 8 1 3 5 6 8 4 7 2 The search tree may be infinite even when the state space is finite

62 Search strategies
Search strategy: selecting the order of node expansion
Strategies are evaluated by:
  completeness: does it always find a solution if one exists?
  time complexity: number of nodes generated
  space complexity: maximum number of nodes in memory
  optimality: does it always find a least-cost solution?
Computing time and space complexity:
  b: maximum branching factor of the search tree
  d: depth of the least-cost solution
  m: maximum depth of the state space (may be ∞)

63 Blind vs. Heuristic Strategies
Blind (or uninformed) strategies
  Do not use any state information
  Use only the information in the problem definition
Heuristic (or informed) strategies
  Use state information to assess whether one node is "more promising" than another

64 Uninformed search strategies
Breadth-first search Uniform-cost search Bidirectional Strategy Depth-first search Depth-limited search Iterative deepening search

65 Blind Strategies
Step cost = 1: breadth-first, bidirectional, depth-first, depth-limited, iterative deepening
Step cost = c(action) ≥ ε > 0: uniform-cost

66 Breadth-First Strategy

67 Breadth-First Search
Strategy: expand a shallowest node first, processing the tree level by level
[Figure: the tiny state space graph (S … G) and its breadth-first search tree]
Source: Dan Klein and Pieter Abbeel, CS 188: Artificial Intelligence, University of California, Berkeley

68 Breadth-First Strategy
New nodes are inserted at the end of FRINGE; the fringe is a FIFO queue
[Figure: tree with nodes 1–7] FRINGE = (1)

69 Breadth-First Strategy
New nodes are inserted at the end of FRINGE. FRINGE = (2, 3)

70 Breadth-First Strategy
New nodes are inserted at the end of FRINGE. FRINGE = (3, 4, 5)

71 Breadth-First Strategy
New nodes are inserted at the end of FRINGE. FRINGE = (4, 5, 6, 7)

72 Breadth-first Evaluation
[Figure: tree with 1 node at the root, b nodes at level 1, b^2 at level 2, …, b^d at level d]
Complete? Yes (d must be finite if a solution exists)
Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
Space? O(b^(d+1)) (keeps every node in memory)
Optimal? Yes (if cost = 1 per step)

73 Time and Memory Requirements
d    #Nodes    Time       Memory
2    111       0.01 msec  11 KB
4    11,111    1 msec     1 MB
6    ~10^6     1 sec      100 MB
8    ~10^8     100 sec    10 GB
10   ~10^10    2.8 hours  1 TB
12   ~10^12    11.6 days  100 TB
14   ~10^14    3.2 years  10,000 TB
Assumptions: b = 10; 1,000,000 nodes/sec; 100 bytes/node
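The table’s entries follow from its stated assumptions (b = 10; 10^6 nodes/sec; 100 bytes/node); a quick check:

```python
def bfs_nodes(d, b=10):
    # 1 + b + b^2 + ... + b^d nodes generated down to depth d
    return sum(b ** i for i in range(d + 1))

assert bfs_nodes(2) == 111
assert bfs_nodes(4) == 11_111

# d = 6: ~10^6 nodes -> about 1 second and about 100 MB
seconds = bfs_nodes(6) / 1_000_000
megabytes = bfs_nodes(6) * 100 / 1e6
print(round(seconds, 1), round(megabytes))  # 1.1 111
```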


75 Time and Memory Requirements
Space is the bigger problem (more than time)

76 Uniform-cost search
Breadth-first is only optimal if path cost is a non-decreasing function of depth, i.e., cost(d) ≥ cost(d−1); e.g., constant step cost, as in the 8-puzzle
Uniform-cost search expands the unexpanded node with the smallest path cost g(n)
Implementation: fringe = a priority queue ordered by path cost g(n); new successors are merged into the queue sorted by g(n)
Equivalent to breadth-first if step costs are all equal

77 Uniform-cost search
Complete? Yes, if step cost ≥ ε
Time? # of nodes with g ≤ cost of optimal solution: O(b^(1+⌊C*/ε⌋)), where C* is the cost of the optimal solution
Space? # of nodes with g ≤ cost of optimal solution: O(b^(1+⌊C*/ε⌋))
Optimal? Yes: nodes are expanded in increasing order of g(n)
(When all step costs are equal, O(b^(1+⌊C*/ε⌋)) ≈ O(b^(d+1)))
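A minimal sketch of uniform-cost search with a binary-heap priority queue; note that the goal test happens when a node is removed from the queue, not when it is generated:

```python
import heapq

def uniform_cost_search(start, successors, is_goal):
    fringe = [(0, [start])]               # priority queue ordered by g(n)
    expanded = set()
    while fringe:
        g, path = heapq.heappop(fringe)   # cheapest node first
        state = path[-1]
        if is_goal(state):
            return g, path                # goal test on REMOVAL: optimal
        if state in expanded:
            continue                      # skip stale duplicate entries
        expanded.add(state)
        for nxt, cost in successors(state):
            heapq.heappush(fringe, (g + cost, path + [nxt]))
    return None

# Toy weighted graph (illustrative): the cheap long path beats the short one.
succ = {"S": [("A", 1), ("B", 5)], "A": [("B", 1), ("G", 10)],
        "B": [("G", 1)], "G": []}
assert uniform_cost_search("S", lambda s: succ[s], lambda s: s == "G") == \
    (3, ["S", "A", "B", "G"])
```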

78 Bidirectional Strategy
2 fringe queues: FRINGE1 and FRINGE2
Time and space complexity = O(b^(d/2)) << O(b^d)
Branching factor may be different in each direction
Must check whether states in the two fringes are identical (the searches meet)

79 Depth-First Strategy

80 Depth-First Strategy
Strategy: expand a deepest node first
[Figure: the tiny state space graph (S … G) and its depth-first search tree]

81 Depth-First Strategy
New nodes are inserted at the front of FRINGE; the fringe is a LIFO stack
[Figure: tree with nodes 1–5] FRINGE = (1)

82 Depth-First Strategy
New nodes are inserted at the front of FRINGE. FRINGE = (2, 3)

83 Depth-First Strategy
New nodes are inserted at the front of FRINGE. FRINGE = (4, 5, 3)

84–91 Depth-First Strategy
(Animation frames repeating the same diagram: expansion continues at the deepest unexpanded node and backtracks when a branch is exhausted)

92 Evaluation
b: branching factor
d: depth of shallowest goal node
m: maximal depth of a leaf node
Number of nodes generated: b + b^2 + … + b^m
[Figure: tree with 1 node, b nodes, b^2 nodes, …, b^m nodes over m tiers]

93 Properties of depth-first search
Complete? No: fails in infinite-depth spaces and spaces with loops
  Modify to avoid repeated states along the current path → complete in finite spaces (no cycles)
Time? O(b^m), where m = maximum depth of the space
  terrible if m is much larger than d, but if solutions are dense, may be much faster than breadth-first
Space? Stores only the current path plus unexpanded siblings, so O(bm), i.e., linear space!
Optimal? No: it finds the "leftmost" solution, regardless of depth or cost
(Compare breadth-first: O(b^d) time and space)
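A recursive sketch of depth-first search incorporating the modification mentioned above (skip states already on the current path), which makes it complete in finite spaces; it still returns the leftmost solution, not necessarily the cheapest:

```python
def depth_first_search(state, successors, is_goal, path=()):
    path = path + (state,)
    if is_goal(state):
        return list(path)
    for nxt in successors(state):
        if nxt in path:                  # avoid repeated states on this path
            continue
        found = depth_first_search(nxt, successors, is_goal, path)
        if found is not None:
            return found
    return None

# A graph with a loop (S <-> a) no longer traps the search:
succ = {"S": ["a"], "a": ["S", "G"], "G": []}
assert depth_first_search("S", lambda s: succ[s], lambda s: s == "G") \
    == ["S", "a", "G"]
```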

94 Maze DFS Demo Source: Dan Klein and Pieter Abbeel, CS 188: Artificial Intelligence, University of California, Berkeley

95 Depth-Limited Strategy
Depth-first with depth cutoff k (maximal depth k, below which nodes are not expanded)
Three possible outcomes:
  Solution
  Failure (no solution)
  Cutoff (no solution within cutoff)

96 Depth-limited search
Depth-first search with depth limit l
Recursive implementation:
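One way the recursive implementation might look (a sketch, not the book’s exact pseudocode); it distinguishes the three outcomes of the previous slide — solution, failure, and cutoff:

```python
CUTOFF = "cutoff"   # sentinel for "no solution within the depth limit"

def depth_limited_search(state, successors, is_goal, limit):
    if is_goal(state):
        return [state]
    if limit == 0:
        return CUTOFF
    cutoff_occurred = False
    for nxt in successors(state):
        result = depth_limited_search(nxt, successors, is_goal, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return [state] + result      # solution found below this node
    return CUTOFF if cutoff_occurred else None   # cutoff vs. true failure

succ = {"S": ["a"], "a": ["G"], "G": []}
f, g = (lambda s: succ[s]), (lambda s: s == "G")
assert depth_limited_search("S", f, g, 1) == CUTOFF   # goal lies deeper
assert depth_limited_search("S", f, g, 2) == ["S", "a", "G"]
```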

97 Iterative deepening search

98 Iterative deepening search l =0

99 Iterative deepening search l =1

100 Iterative deepening search l =2

101 Iterative deepening search l =3

102 Iterative deepening search
Number of nodes generated in a depth-limited search to depth d with branching factor b:
  N_DLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d
Number of nodes generated in an iterative deepening search to depth d with branching factor b:
  N_IDS = (d+1)b^0 + d·b^1 + (d−1)b^2 + … + 3b^(d-2) + 2b^(d-1) + 1b^d
For b = 10, d = 5:
  N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
  N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
Overhead = (123,456 − 111,111)/111,111 = 11%
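These counts can be checked mechanically:

```python
b, d = 10, 5
n_dls = sum(b ** i for i in range(d + 1))                # b^0 + ... + b^d
n_ids = sum((d + 1 - i) * b ** i for i in range(d + 1))  # depth i generated d+1-i times

assert n_dls == 111_111
assert n_ids == 123_456
overhead = (n_ids - n_dls) / n_dls
print(f"{overhead:.0%}")  # 11%
```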

103 Properties of iterative deepening search
Complete? Yes
Time? (d+1)b^0 + d·b^1 + (d−1)b^2 + … + b^d = O(b^d)
Space? O(bd)
Optimal? Yes, if step cost = 1
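A self-contained sketch of iterative deepening: depth-limited search re-run with l = 0, 1, 2, …, so the shallowest goal is found first while memory stays linear.

```python
def iterative_deepening_search(start, successors, is_goal, max_limit=100):
    def dls(state, limit):               # plain depth-limited search
        if is_goal(state):
            return [state]
        if limit == 0:
            return None
        for nxt in successors(state):
            found = dls(nxt, limit - 1)
            if found is not None:
                return [state] + found
        return None

    for limit in range(max_limit + 1):   # l = 0, 1, 2, ...
        found = dls(start, limit)
        if found is not None:
            return found
    return None

# Finds the shallowest goal even though depth-first order would go deep first:
succ = {"S": ["deep", "G"], "deep": ["deeper"], "deeper": [], "G": []}
assert iterative_deepening_search("S", lambda s: succ[s],
                                  lambda s: s == "G") == ["S", "G"]
```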

104 Comparison of Strategies
Breadth-first is complete and optimal, but has high space complexity
Depth-first is space-efficient, but neither complete nor optimal
Iterative deepening combines the two: complete, optimal for unit step costs, linear space, and asymptotically no more time than breadth-first

105 Repeated States
No repeated states → search tree is finite: 8-queens
Few repeated states: 8-puzzle and robot navigation
Many repeated states → search tree is infinite: assembly planning

106 Repeated States Failure to detect repeated states can turn a linear problem into an exponential one!

107 Avoiding Repeated States
Requires comparing state descriptions
Breadth-first strategy: keep track of all generated states; if the state of a new node already exists → discard the node

108 Avoiding Repeated States
Depth-first strategy:
Solution 1: keep track of all states on the current path; if the state of a new node is already on the path → discard the node → avoids loops
Solution 2: keep track of all states generated so far; if the state of a new node has already been generated → discard the node → space complexity of breadth-first
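Solution 2 above, sketched as breadth-first graph search: every generated state is remembered, so each state enters the fringe at most once and the tree stays as small as the graph.

```python
from collections import deque

def breadth_first_graph_search(start, successors, is_goal):
    if is_goal(start):
        return [start]
    generated = {start}                  # all states generated so far
    fringe = deque([[start]])
    while fringe:
        path = fringe.popleft()
        for nxt in successors(path[-1]):
            if nxt in generated:
                continue                 # repeated state: discard the node
            if is_goal(nxt):
                return path + [nxt]
            generated.add(nxt)
            fringe.append(path + [nxt])
    return None

# Works on a graph with cycles, where plain tree search would loop:
succ = {"S": ["a", "b"], "a": ["S", "G"], "b": ["a"], "G": []}
assert breadth_first_graph_search("S", lambda s: succ[s],
                                  lambda s: s == "G") == ["S", "a", "G"]
```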

109 Graph search

110 Example: Travelling salesman problem
A salesman must visit 5 cities. What is the shortest route?
[Figure: 5 cities — Abadan, Babol, Cemirom, Damavand, Eilam — with pairwise distances 594, 524, 619, 127, 184, 78, 467, 233, 395, 493]

111 Example: Travelling salesman problem
[Figure: search tree over cities B, C, D with complete-tour costs 786–1036]
Number of paths = (n−1)!: n = 4 → 6; n = 5 → 24; n = 10 → 362,880

112 When to do the Goal Test?
For DFS, BFS, DLS, and IDS, the goal test is done when the child node is generated. These are not optimal searches in the general case.
  BFS and IDS are optimal if cost is a function of depth only; then optimal goals are also shallowest goals and so will be found first.
For GBFS (greedy best-first search), the behavior is the same whether the goal test is done when the node is generated or when it is removed: h(goal) = 0, so any goal will be at the front of the queue anyway.
For UCS and A*, the goal test is done when the node is removed from the queue. This precaution avoids finding a short expensive path before a long cheap path.

113 Summary Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored Search tree  state space Uninformed search strategies: breadth-first, depth- first, and variants Evaluation of strategies: completeness, optimality, time and space complexity Iterative deepening search uses only linear space and not much more time than other uninformed algorithms Avoiding repeated states Optimal search with variable step costs

114 Summary of algorithms

116 Example: The Water Jugs Problem
A 4-gallon jug and a 3-gallon jug. How can you get exactly 2 gallons into the 4-gallon jug?
Possible operators:
  Empty a jug
  Fill a jug from the tap
  Pour the contents of one jug into another
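The operators above, sketched as a successor function and solved by breadth-first search over (4-gallon, 3-gallon) states; the jug capacities and the goal (2 gallons in the 4-gallon jug) are from the slide.

```python
from collections import deque

def jug_successors(state, cap=(4, 3)):
    big, small = state
    moves = {
        (0, small), (big, 0),            # empty a jug
        (cap[0], small), (big, cap[1]),  # fill a jug from the tap
    }
    pour = min(big, cap[1] - small)      # pour big -> small
    moves.add((big - pour, small + pour))
    pour = min(small, cap[0] - big)      # pour small -> big
    moves.add((big + pour, small - pour))
    moves.discard(state)                 # drop no-op moves
    return moves

def solve(start=(0, 0)):
    fringe, seen = deque([[start]]), {start}
    while fringe:
        path = fringe.popleft()
        if path[-1][0] == 2:             # 2 gallons in the 4-gallon jug
            return path
        for nxt in jug_successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                fringe.append(path + [nxt])
    return None

print(solve())  # a shortest plan: 6 fills/empties/pours
```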

117 The Water Jugs Problem – Search Tree
[Figure: search tree over states (0,0), (4,0), (4,3), (1,3), (0,3), (1,0), (0,1), (4,1), (3,3), (3,0), (4,2), (0,2), (2,0)]

118 Blind Search – Breadth First
[Figure: breadth-first expansion order: (0,0), (4,0), (0,3), (4,3), (1,3), (3,0), (1,0), (3,3), (0,1), (4,2), (4,1), (0,2), (2,0)]

119 Blind Search – Depth First
[Figure: depth-first expansion order: (0,0), (0,3), (4,3), (3,0), (4,0), (3,3), (4,2), (0,2), (2,0)]

