1
Problem Solving
2
Topics: Problem Solving, Searching Methods, Game Playing
3
Introduction Problem solving is mostly based on searching.
Every search process can be viewed as a traversal of a directed graph in which each node represents a problem state and each arc represents a relationship between the states represented by the nodes it connects. The search process must find a path through the graph, starting at an initial state and ending in one or more final states. The graph is constructed from the rules that define the allowable moves in the search space. Most search programs represent the graph implicitly in the rules to avoid combinatorial explosion and generate explicitly only those parts that they decide to explore.
4
Introduction Goal: a description of a desired solution (may be a state, as in the 8-puzzle, or a path, as in the traveling salesman problem). Search space: the set of possible steps leading from the initial conditions to a goal. State: a snapshot of the problem at one stage of the solution. The idea is to find a sequence of operators that can be applied to a starting state until a goal state is reached. State space: the directed graph whose nodes are states and whose arcs are the operators that lead from one state to another. Problem solving is carried out by searching through the space of possible solutions for ones that satisfy a goal.
5
Example: Water jug problem
Given two jugs, one holding 4 gallons and the other 3 gallons, the goal is to get exactly 2 gallons into the 4-gallon jug. Assumptions: you can fill a jug from the pump, you can pour water out of a jug onto the ground, and you can pour water from one jug to the other.
6
Example: Water jug problem
State space representation
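As an illustration (a sketch, not part of the original slides), two of the water-jug operators might be written in Lisp as functions from a state to a successor state, with a state represented as a two-element list (x y): x gallons in the 4-gallon jug and y in the 3-gallon jug. The names fill-4 and pour-3-into-4 are illustrative assumptions.

(defun fill-4 (state)
  ;; Fill the 4-gallon jug from the pump; nil if it is already full.
  (let ((x (first state)) (y (second state)))
    (when (< x 4) (list 4 y))))

(defun pour-3-into-4 (state)
  ;; Pour from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug
  ;; is full or the 3-gallon jug is empty; nil if nothing can be poured.
  (let* ((x (first state)) (y (second state))
         (amount (min y (- 4 x))))
    (when (> amount 0)
      (list (+ x amount) (- y amount)))))

For example, (pour-3-into-4 '(0 3)) returns (3 0), and a goal test would simply check whether the first element of the state equals 2.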
7
Search Five important issues that arise in search techniques are:
The direction of the search; the topology of the search process; representation of the nodes; selecting applicable rules; using a heuristic function to guide the search.
8
The Direction of the Search
Forward: Data-directed search. Start the search from the initial state. To reason forward, the left sides (the preconditions) are matched against the current state and the right sides (the results) are used to generate new nodes until the goal is reached. Backward: Goal-directed search. Start the search from the goal state. To reason backward, the right sides are matched against the current node and the left sides are used to generate new nodes representing new goal states to be achieved.
9
The Direction of the Search
Factors influencing the choice between forward and backward chaining are: the relative number of goal states to start states (move from the smaller set of states to the larger); the branching factor (move in the direction with the lower branching factor); explanation of reasoning (proceed in the direction that corresponds more closely with the way the user thinks).
10
The Direction of the Search Examples
Branching factor: In theorem proving, the goal state is the theorem to be proved and the initial state is the set of axioms. From a small set of axioms a large number of theorems can be proved, and this large set of theorems must trace back to the small set of axioms. The branching factor is greater going forward from axioms to theorems, so backward reasoning is more appropriate. If the branching factor is the same in both directions, then the relative number of start states to goal states determines the direction of search. In bi-directional search, searches start from both ends and meet somewhere in between. The disadvantage of this technique is that the two searches may bypass each other.
11
Explanation of reasoning
MYCIN, a program that diagnoses infectious diseases, uses backward reasoning to determine the cause of a patient's illness. A doctor may reason as follows: if an organism has a set of properties (lab results), then it is likely that the organism is X. The evidence, however, is most likely documented in the reverse direction: (IF (ORGANISM X) (PROPERTIES Y)) CF, where CF is a certainty factor. The rules justify why certain tests should be performed.
12
The Topology of the Search: Trees
13
The Topology of the Search: Graphs
14
The Topology of the Search
Check if the generated node already exists. If it does not, add the node. If it exists, then: 1) set the node that is being expanded to point to the already existing node corresponding to its successor, rather than to the new one; the new one can be thrown away. 2) If looking for the best path, check whether the new path is better. If it is worse, do nothing. If it is better, record the new path as the correct path to use to get to the node, and propagate the corresponding change in cost down through the successor nodes as necessary. A disadvantage of this topology is that cycles may occur and there is no guarantee of termination.
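As a minimal sketch (not from the slides) of the first part of this bookkeeping, a hash table keyed on the state description can be used so that a newly generated successor is replaced by the already existing node when one exists; the name intern-node and the plist node format are illustrative assumptions.

(defvar *known-nodes* (make-hash-table :test #'equal))   ; state -> node table

(defun intern-node (state)
  ;; Return the existing node for STATE if one was already generated;
  ;; otherwise create, record, and return a fresh node (here just a plist).
  (or (gethash state *known-nodes*)
      (setf (gethash state *known-nodes*) (list :state state :cost nil))))

The cost-update and propagation steps described above would then operate on the node returned by intern-node rather than on a fresh copy.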
15
Representation of the nodes
Arrays, ordered pairs, predicates.
16
Representation of the nodes
State: the locations of the 8 numbered tiles (and the blank). Operators: the blank moves left, right, up, or down. Goal test: the state matches the goal configuration (shown on the right). Path cost: each step costs 1, i.e. the path cost is the path length (search tree depth).
17
Representation of the nodes
Possible state representations in LISP (0 is the blank): (0 2 3 1 8 4 7 6 5), ((0 2 3) (1 8 4) (7 6 5)), ((0 1 7) (2 8 6) (3 4 5)). The choice of representation depends on how easy it is to compare, operate on, and store (size).
18
Goal Test
>(defvar *goal-state* '(1 2 3 8 0 4 7 6 5))
>(equal *goal-state* '(1 2 3 8 0 4 7 6 5))
t
19
Operators: functions from a state to a subset of states.
Examples: drive to a neighboring city; place a piece on a chess board; add a person to a meeting schedule; slide a tile in the 8-puzzle. Matching. Conflict resolution: order (priority), recency. Indexing.
20
Using a heuristic function to guide the search
It is frequently possible to find rules that will increase the chance of success. Such rules are termed heuristics, and a search involving them is termed a heuristic search. A heuristic function is a function that maps from a problem state description to a measure of desirability. Heuristics for the 8-puzzle problem could be: the number of displaced tiles; the total distance of the displaced tiles from their goal positions.
21
Implementing heuristic evaluation functions
Example: the 8-puzzle, where a simple measure can fail to distinguish between different states. [Figure with start and goal configurations omitted.] More accurate measures: (1) tiles out of place, (2) sum of distances out of place, (3) 2 × number of direct tile reversals.
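A minimal sketch (not from the slides) of measure (1), assuming the flat nine-element state representation used earlier with 0 as the blank:

(defun tiles-out-of-place (state goal)
  ;; Heuristic (1): count the non-blank tiles that are not in their goal position.
  (loop for tile in state
        for goal-tile in goal
        count (and (not (zerop tile)) (not (eql tile goal-tile)))))

Measure (2) could be computed the same way by summing, for each misplaced tile, the number of rows plus columns it lies away from its goal square.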
22
Evaluation of Search Strategies
Time complexity: how many nodes expanded so far? Space complexity: how many nodes must be stored in node-list at any given time? Completeness: if solution exists, guaranteed to be found? Optimality: guaranteed to find the best solution?
23
Components of Implicit State-Space Graphs
There are three basic components to an implicit representation of a state-space graph. A description with which to label the start node. This description is some data structure modeling the initial state of the environment. Functions that transform a state description representing one state of the environment into one that represents the state resulting after an action. These functions are usually called operators. When an operator is applied to a node, it generates one of that node's successors. A goal condition, which can be either a True-False valued function on state descriptions or a list of actual instances of state descriptions that correspond to goal states.
24
Types of Search There are three broad classes of search processes:
1) Uninformed (blind) search: there is no specific reason to prefer one part of the search space to any other in finding a path from the initial state to the goal state. Systematic, exhaustive search: depth-first search, breadth-first search.
25
Types of Search 2) Informed (heuristic) search: there is specific information to focus the search. Hill climbing, branch and bound, best-first, A*. 3) Game playing: there are at least two partners opposing each other. Minimax (alpha-beta pruning), means-ends analysis.
26
Search Algorithms Task: find a solution path through the problem space.
Keep track of paths from start to goal nodes; define the optimal path if there is more than one solution (depending on the circumstances); avoid loops (which may prevent reaching the goal).
27
Depth-first search Uses a generate-and-test strategy: nodes are generated by applying the applicable rules, and each generated node is then tested to see whether it is the goal. Nodes are generated in a systematic fashion; it is an exhaustive search of the problem space.
1. Form a one-element queue consisting of the root node.
2. Until the queue is empty or the goal has been reached, determine if the first element in the queue is the goal node.
a) If the first element is the goal, do nothing.
b) If the first element is not the goal node, remove the first element from the queue and add the first element's children, if any, to the front of the queue.
3. If the goal node has been found, announce success; otherwise announce failure.
28
Depth-first search - lists: keep track of progress through state space
- open: states generated but whose children have not yet been examined
- closed: states already examined
begin
  open := [Start];    /initialise
  closed := [];
  while open <> [] do
    begin
      remove leftmost state from open, call it X;
      if X is a goal then return (success)
      else begin
        generate children of X;
        put X on closed;
        eliminate children of X already on open or closed;    /loop check
        put remaining children on the left end of open    /stack
      end
    end;
  return (failure)    /no states left
end.
29
Depth-first search Node visit order: Queuing function: enqueue at left
30
Depth-first search Evolution of the open and closed lists:
[1] – [ ]
[2 3] – [1]
[4 5 3] – [1 2]
[ ] – [1 2 4]
……………….
31
Depth-first Evaluation
Branching factor b, depth of solutions d, maximum depth m: Incomplete: it may wander down the wrong path, so it is bad for deep or infinite-depth state spaces. Time: b^m nodes expanded (worst case). Space: b·m (only the nodes along the current path are stored). It does not guarantee the shortest path. It is good when there are many shallow goals.
32
Breadth-first search It will first explore all paths of length one, then length two, and if a solution exists it will find it while exploring the paths of length N. It is guaranteed to find a solution if one exists, and it will find the shortest path to the solution (in number of steps), though this may not be the least-cost one.
33
Breadth-first search The algorithm:
1. Form a one-element queue consisting of the root node.
2. Until the queue is empty or the goal has been reached, determine if the first element in the queue is the goal node.
a) If the first element is the goal, do nothing.
b) If the first element is not the goal node, remove the first element from the queue and add the first element's children, if any, to the back of the queue.
3. If the goal node has been found, announce success; otherwise announce failure.
34
Breadth-first search procedure:
begin
  open := [Start];    /initialise
  closed := [];
  while open <> [] do
    begin
      remove leftmost state from open, call it X;
      if X is a goal then return (success)
      else begin
        generate children of X;
        put X on closed;
        eliminate children of X already on open or closed;    /loop check
        put remaining children on the right end of open    /queue
      end
    end;
  return (failure)    /no states left
end.
35
Breadth-first search Node visit order (goal test): Queuing function: enqueue at end (add the expanded node's children at the end of the list)
36
Breadth-first search Evolution of the open and closed lists:
[1] – [ ]
[2 3] – [1]
[3 4 5] – [1 2]
[ ] – [1 2 3]
…………..
37
Implementing Breadth-First and Depth-First Search
The Lisp implementation of breadth-first search maintains the open list as a first-in first-out (FIFO) structure.
(defun breadth-first ()
  (cond ((null *open*) nil)
        (t (let ((state (car *open*)))
             (cond ((equal state *goal*) 'success)
                   (t (setq *closed* (cons state *closed*))
                      (setq *open* (append (cdr *open*)
                                           (generate-descendants state *moves*)))
                      ;; *moves*: list of the functions that generate the moves
                      (breadth-first)))))))
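The slides only list the breadth-first version; a depth-first variant (a sketch, not from the slides, assuming the same globals and the generate-descendants function defined here) would differ only in treating *open* as a stack, placing new descendants at the front of the list instead of the back:

(defun depth-first ()
  ;; Identical to breadth-first except that descendants go on the FRONT of *open*.
  (cond ((null *open*) nil)
        (t (let ((state (car *open*)))
             (cond ((equal state *goal*) 'success)
                   (t (setq *closed* (cons state *closed*))
                      (setq *open* (append (generate-descendants state *moves*)
                                           (cdr *open*)))
                      (depth-first)))))))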
38
Implementing Breadth-First and Depth-First Search
(defun run-breadth (start goal)
  (setq *open* (list start))
  (setq *closed* nil)
  (setq *goal* goal)
  (breadth-first))
39
Implementing Breadth-First and Depth-First Search
generate-descendants takes a state and returns a list of its children.
(defun generate-descendants (state moves)
  (cond ((null moves) nil)
        (t (let ((child (funcall (car moves) state))
                 (rest (generate-descendants state (cdr moves))))
             (cond ((null child) rest)
                   ((member child rest :test #'equal) rest)
                   ((member child *open* :test #'equal) rest)
                   ((member child *closed* :test #'equal) rest)
                   (t (cons child rest)))))))
By binding the global variable *moves* to an appropriate list of move functions, this search algorithm may be used to search any state space in breadth-first fashion.
40
Breadth-first Evaluation
Branching factor b, depth of solution d: Complete: it will find the solution if one exists. Time: 1 + b + b^2 + … + b^d. Space: b^k, where k is the current depth. Space is more of a problem than time in most cases, but time is also a major problem nonetheless.
41
Heuristic Search Reasons for heuristics
- an exact solution may be impossible to find, but heuristics lead to a promising path
- there may be no exact solution, but an acceptable one exists
- heuristics are fallible, because they use limited information
Intelligence for a system with limited processing resources consists in making wise choices of what to do next. Heuristics = search algorithm + measure.
42
Hill climbing Hill climbing is depth-first search with a heuristic measurement that orders choices as nodes are expanded. The algorithm is the same; only step 2b differs slightly.
1. Form a one-element queue consisting of the root node.
2. Until the queue is empty or the goal has been reached, determine if the first element in the queue is the goal node.
a) If the first element is the goal, do nothing.
b) If the first element is not the goal node, remove the first element from the queue, sort the first element's children, if any, by estimated remaining distance, and add them to the front of the queue.
3. If the goal node has been found, announce success; otherwise announce failure.
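As a small sketch (not from the slides) of the modified step 2b, the children of the expanded node could be ordered by any heuristic function before being pushed onto the front of the queue; the name sort-by-heuristic is illustrative only:

(defun sort-by-heuristic (children heuristic)
  ;; Order CHILDREN so the state with the smallest heuristic value
  ;; (estimated remaining distance) comes first.
  (sort (copy-list children) #'< :key heuristic))

For example, (sort-by-heuristic children (lambda (s) (tiles-out-of-place s *goal-state*))) would order 8-puzzle successors by the tiles-out-of-place measure sketched earlier.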
43
Hill climbing
44
Hill climbing Problems that may arise:
A local maximum is a state that is better than all its neighbors but is not better than some other states farther away; at a local maximum, all moves appear to make things worse. A plateau is a whole set of neighboring states with the same value; it is not possible to determine the best direction. A ridge is an area higher than the surrounding area, but one that cannot be traversed by a single move in any one direction.
45
Hill climbing Some ways of dealing with these:
Backtrack to an earlier node and try a different direction (local maximum). Make a big jump in one direction to try to get to a new section of the search space (plateau). Apply two or more rules before doing the test; this corresponds to moving in several directions at once (ridge).
46
Best-first Search Best-first search is a combination of the depth-first and breadth-first search algorithms. Forward motion is from the best (most promising) open node so far, no matter where it is in the partially developed tree. The second step of the algorithm changes to:
2. Until the queue is empty or the goal has been reached, determine if the first element in the queue is the goal node.
a) If the first element is the goal, do nothing.
b) If the first element is not the goal node, remove the first element from the queue, add the first element's children, if any, to the queue, and sort the entire queue by estimated remaining distance.
47
Best-First Search
procedure best_first_search;
begin
  open := [Start];
  closed := [];
  while open <> [] do
    begin
      remove leftmost state from open, call it X;
      if X = goal then return path from Start to X
      else begin
        generate children of X;
        for each child of X do
          case
            the child is not on open or closed:
              begin
                assign the child a heuristic value;
                add the child to open
              end;
            the child is already on open:
              if the child was reached by a shorter path
              then give the state on open the shorter path;
            the child is already on closed:
              if the child was reached by a shorter path
              then remove the state from closed and add the child to open
          end case;
        put X on closed;
        re-order states on open by heuristic merit (best leftmost)
      end
    end;
  return failure    /no states left
end.
48
Example of best-first search
1. open = [A5]; closed = []
2. eval A5; open = [B4, C4, D6]; closed = [A5]
3. eval B4; open = [C4, E5, F5, D6]; closed = [B4, A5]
4. eval C4; open = [H3, G4, E5, F5, D6]; closed = [C4, B4, A5]
5. eval H3; open = [O2, P3, G4, E5, F5, D6]; closed = [H3, C4, B4, A5]
6. eval O2; open = [P3, G4, E5, F5, D6]; closed = [O2, H3, C4, B4, A5]
7. eval P3; the solution is found!
[Figure: search tree with nodes A5; B4, C4, D6; E5, F5, G4, H3, I, J; K, L, M, N, O2, P3, Q, R, S, T]
49
Branch and Bound Search
The shortest path so far is always chosen for expansion, and the path that first reaches the destination in this way is optimal. However, in order to be certain that the supposed solution is not longer than one or more incomplete paths, instead of terminating as soon as a path is found, terminate when the shortest incomplete path is longer than the shortest complete path.
50
Branch and Bound Search
To conduct a branch and bound search:
1. Form a queue of partial paths. Let the initial queue consist of the zero-length, zero-step path from the root node to nowhere.
2. Until the queue is empty or the goal has been reached, determine if the first path in the queue reaches the goal node.
a) If the first path reaches the goal, do nothing.
b) If the first path does not reach the goal node:
i) remove the first path from the queue;
ii) form new paths from the removed path by extending it one step;
iii) add the new paths to the queue;
iv) sort the queue by cost accumulated so far, with least-cost paths in front.
3. If the goal node has been found, announce success; otherwise announce failure.
51
Branch and Bound Search
52
Branch and Bound Search
53
Branch and Bound Search
Adding underestimates improves efficiency. c(total path length) = d(distance already traveled) + e(estimate of distance remaining). If the guesses are not perfect, a bad overestimate somewhere along the true optimal path may cause us to wander off that optimal path permanently. But underestimates cannot cause the right path to be overlooked. An underestimate of the distance remaining yields an underestimate of the total path length, u(total path length): u(total path length) = d(distance already traveled) + u(distance remaining).
54
Branch and Bound Search
If a total path is found by repeatedly extending the path with the smallest underestimate, no further work need be done once all incomplete path distance estimates are longer than some complete path distance. This is true because a real distance along a completed path cannot be shorter than an underestimate of that distance. To conduct a branch and bound search with underestimates: 2b4) sort the queue by the sum of the cost accumulated so far and a lower-bound estimate of the cost remaining, with the least-cost paths in front.
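A minimal sketch (an assumption for illustration, not from the slides) of step 2b4), where a partial path is represented as a list (cost-so-far estimated-remaining node ...), with the current node as the third element:

(defun total-underestimate (path)
  ;; Underestimate of the total path length:
  ;; cost accumulated so far + lower-bound estimate of the cost remaining.
  (+ (first path) (second path)))

(defun order-paths (queue)
  ;; Step 2b4): sort the queue so the path with the smallest underestimate is first.
  (sort (copy-list queue) #'< :key #'total-underestimate))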
55
A* Search The dynamic-programming principle holds that when looking for the best path from S to G, all paths from S to any intermediate node I, other than the minimum-length path from S to I, can be ignored. The A* procedure is branch and bound search in a graph space with an estimate of remaining distance, combined with the dynamic-programming principle. If one can show that h(n) never overestimates the cost to reach the goal, then it can be shown that the A* algorithm is both complete and optimal.
56
A* Search To do A* search with lower bound estimates:
2b4) sort the queue by the sum of the cost accumulated so far and a lower bound estimate of the cost remaining, with least cost paths in front. 2b5) If two or more paths reach a common node, delete all those paths except for one that reaches the common node with the minimum cost.
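And a sketch (again an assumption, not from the slides) of step 2b5) under the same path representation, keeping only the cheapest path to each common node:

(defun prune-duplicate-paths (queue)
  ;; Step 2b5): if several paths reach the same current node (the third element
  ;; of a path under the representation above), keep only the one with the
  ;; smallest cost accumulated so far.
  (let ((best (make-hash-table :test #'equal)))
    (dolist (path queue)
      (let ((node (third path))
            (cost (first path)))
        (when (or (not (gethash node best))
                  (< cost (first (gethash node best))))
          (setf (gethash node best) path))))
    (loop for path being the hash-values of best collect path)))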
57
Recursive Search in Prolog
3×3 knight's tour problem:
move(1, 6).    move(3, 4).    move(6, 7).
move(1, 8).    move(3, 8).    move(6, 1).
move(2, 7).    move(4, 3).    move(7, 6).
move(2, 9).    move(4, 9).    move(7, 2).
move(8, 3).    move(9, 4).
move(8, 1).    move(9, 2).
58
Recursive Search in Prolog
predicates
  path(integer, integer, integer*)
  member(integer, integer*)
clauses
  path(Z, Z, _).
  path(X, Y, L) :-
    move(X, Z),
    not(member(Z, L)),
    path(Z, Y, [Z|L]).
  /* X is a member of the list if X is the head of the list
     or X is a member of the tail */
  member(X, [X|_]).
  member(X, [_|T]) :- member(X, T).
goal
  path(1, 3, [1]).
59
Farmer-Wolf-Cabbage Problem
A farmer with his wolf, goat, and cabbage comes to the edge of a river they wish to cross. There is a boat at the river's edge, but of course only the farmer can row it, and the boat can carry only two things (including the farmer). If the wolf is ever left alone with the goat, the wolf will eat the goat. If the goat is ever left alone with the cabbage, the goat will eat the cabbage. Devise a sequence of crossings of the river so that all four characters arrive safely on the other side. The problem implementation follows.
60
Search Algorithms in LISP
Example: the farmer, wolf, goat, and cabbage problem, using depth-first search. States are represented as a list of four elements; e.g., (w e w e) represents the farmer and the goat on the west bank, and the wolf and the cabbage on the east bank. make-state takes as arguments the locations of the farmer, wolf, goat, and cabbage and returns a state. Four access functions, farmer-side, wolf-side, goat-side, and cabbage-side, take a state and return the location of an individual.
61
(defun make-state (f w g c) (list f w g c))
(defun farmer-side (state) (nth 0 state))
(defun wolf-side (state) (nth 1 state))
(defun goat-side (state) (nth 2 state))
(defun cabbage-side (state) (nth 3 state))
62
(defun farmer-takes-self (state)
  (make-state (opposite (farmer-side state))
              (wolf-side state)
              (goat-side state)
              (cabbage-side state)))
In the above procedure a new state is returned regardless of whether it is safe or not.
63
A safe function should be defined so that it returns nil if a state is not safe, and the state unchanged if it is safe.
>(safe '(w w w w))    ; safe state, returned unchanged
(w w w w)
>(safe '(e w w e))    ; wolf eats goat, returns nil
nil
(defun safe (state)
  (cond ((and (equal (goat-side state) (wolf-side state))
              (not (equal (farmer-side state) (wolf-side state))))
         nil)    ; wolf eats goat
        ((and (equal (goat-side state) (cabbage-side state))
              (not (equal (farmer-side state) (goat-side state))))
         nil)    ; goat eats cabbage
        (t state)))
64
;return nil for unsafe states
;filter the unsafe states
(defun farmer-takes-self (state)
  (safe (make-state (opposite (farmer-side state))
                    (wolf-side state)
                    (goat-side state)
                    (cabbage-side state))))

(defun opposite (side)
  (cond ((equal side 'e) 'w)
        ((equal side 'w) 'e)))
65
(defun farmer-takes-wolf (state)
  (cond ((equal (farmer-side state) (wolf-side state))
         (safe (make-state (opposite (farmer-side state))
                           (opposite (wolf-side state))
                           (goat-side state)
                           (cabbage-side state))))
        (t nil)))
66
(defun farmer-takes-goat (state)
  (cond ((equal (farmer-side state) (goat-side state))    ; farmer and goat on the same side
         (safe (make-state (opposite (farmer-side state))
                           (wolf-side state)
                           (opposite (goat-side state))
                           (cabbage-side state))))
        (t nil)))
67
(defun farmer-takes-cabbage (state)
  (cond ((equal (farmer-side state) (cabbage-side state))
         (safe (make-state (opposite (farmer-side state))
                           (wolf-side state)
                           (goat-side state)
                           (opposite (cabbage-side state)))))
        (t nil)))
68
(defun path (state goal)
  (cond ((equal state goal) 'success)
        (t (or (path (farmer-takes-self state) goal)
               (path (farmer-takes-wolf state) goal)
               (path (farmer-takes-goat state) goal)
               (path (farmer-takes-cabbage state) goal)))))
To prevent path from attempting to generate the children of a nil state, it must first check whether the state is nil; if it is, path should return nil. This definition also has the possibility of going into a loop, repeating the same states over and over again. A third parameter, been-list, which keeps track of the visited states, is therefore passed to path, and the member predicate is used to make sure that the current state is not already a member of been-list.
69
(defun path (state goal been-list)
  (cond ((null state) nil)
        ((equal state goal) (reverse (cons state been-list)))
        ((not (member state been-list :test #'equal))
         (or (path (farmer-takes-self state) goal (cons state been-list))
             (path (farmer-takes-wolf state) goal (cons state been-list))
             (path (farmer-takes-goat state) goal (cons state been-list))
             (path (farmer-takes-cabbage state) goal (cons state been-list))))))
70
*moves* is a list of functions that generate the moves.
In the farmer, wolf, goat, and cabbage problem, *moves* would be defined by:
(setq *moves* '(farmer-takes-self farmer-takes-wolf farmer-takes-goat farmer-takes-cabbage))
(defun run-breadth (start goal)
  (setq *open* (list start))
  (setq *closed* nil)
  (setq *goal* goal)
  (breadth-first))
71
generate-descendants takes a state and returns a list of its children. It also disallows duplicates in the list of children and eliminates any children that are already on the open or closed list.
(defun generate-descendants (state moves)
  (cond ((null moves) nil)
        (t (let ((child (funcall (car moves) state))
                 (rest (generate-descendants state (cdr moves))))
             (cond ((null child) rest)
                   ((member child rest :test #'equal) rest)
                   ((member child *open* :test #'equal) rest)
                   ((member child *closed* :test #'equal) rest)
                   (t (cons child rest)))))))
Here rest is the list of children generated from the remaining moves. By binding the global variable *moves* to an appropriate list of move functions, this search algorithm may be used to search any state space in breadth-first fashion.
72
Breadth-First and Depth-First Search
The Lisp implementation of breadth-first search maintains the open list as a first-in first-out (FIFO) structure. *open*, *closed*, and *goal* are defined as global variables.
(defun breadth-first ()
  (cond ((null *open*) nil)
        (t (let ((state (car *open*)))
             (cond ((equal state *goal*) 'success)
                   (t (setq *closed* (cons state *closed*))
                      (setq *open* (append (cdr *open*)
                                           (generate-descendants state *moves*)))
                      (breadth-first)))))))
73