1 States and Search: Core of intelligent behaviour

2 D Goforth - COSC 4117, fall 2006
The simple problem solver
Restricted form of general agent (Figure 3.1, p. 61) – creating a solution sequence by graph search:

function Simple-Problem-Solving-Agent(percept) returns an action
  persistent: seq, an action sequence, initially empty
              state, some description of the current world state
              goal, a goal, initially null
              problem, a problem formulation

  state = Update-State(state, percept)
  if seq is empty then                      // do search first time only
    goal = Formulate-Goal(state)
    if state == goal then return nil
    problem = Formulate-Problem(state, goal)
    seq = Search(problem)                   // (performance)
  action = First(seq)
  seq = Rest(seq)
  return action
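As a rough illustration only (not part of the original slides), the control loop above could be rendered in Python as sketched below; the helper names update_state, formulate_goal, formulate_problem and search are assumptions to be supplied by the problem domain:

    # Sketch of the simple problem-solving agent loop (illustrative assumption,
    # not the course's reference implementation).
    class SimpleProblemSolvingAgent:
        def __init__(self, update_state, formulate_goal, formulate_problem, search):
            self.update_state = update_state
            self.formulate_goal = formulate_goal
            self.formulate_problem = formulate_problem
            self.search = search
            self.seq = []        # action sequence, initially empty
            self.state = None    # description of the current world state

        def __call__(self, percept):
            self.state = self.update_state(self.state, percept)
            if not self.seq:                              # search on the first call only
                goal = self.formulate_goal(self.state)
                if self.state == goal:
                    return None
                problem = self.formulate_problem(self.state, goal)
                self.seq = self.search(problem) or []
            if not self.seq:
                return None
            action, self.seq = self.seq[0], self.seq[1:]  # First(seq), Rest(seq)
            return action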

3 D Goforth - COSC 4117, fall 2006
The simple problem solver
 - works by simulating the problem in an internal representation and trying plans until a good one is discovered
 - works in deterministic, static, single-agent environments: the plan is made once and never changed
 - works if the plan is perfect – actions do what the plan assumes, so no corrections to the path are required
 - works efficiently if the state space is not too large

4 D Goforth - COSC 4117, fall 2006
Representation of environment – abstractions of the real world (a sketch follows below)
 - states and state space: only relevant information in the state representation
 - actions: successor function
 - costs and path cost (e.g. touring problem, TSP)
 - start state
 - goal state, or a criterion function of state(s)
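A sketch of what such a problem formulation might look like as a Python class; the names initial, actions, result, goal_test and step_cost follow a common convention and are assumptions, not taken from the slides:

    # Sketch of a problem formulation: the pieces listed above bundled
    # into one object, to be subclassed per problem domain.
    class Problem:
        def __init__(self, initial, goal):
            self.initial = initial      # start state
            self.goal = goal            # goal state (or override goal_test)

        def actions(self, state):
            """Actions applicable in `state` (successor function)."""
            raise NotImplementedError

        def result(self, state, action):
            """State reached by doing `action` in `state`."""
            raise NotImplementedError

        def goal_test(self, state):
            """Goal criterion: here, equality with a single goal state."""
            return state == self.goal

        def step_cost(self, state, action, next_state):
            """Cost of one edge; path cost is the sum along the path."""
            return 1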

5 D Goforth - COSC 4117, fall 2006
Representation of environment – state space efficiency
Faster processing:
 - minimize the number of states
 - minimize the degree of branching of the successor function (actions)
Smaller memory allocation:
 - large state spaces are generated/explored, not stored/traversed

6 D Goforth - COSC 4117, fall 2006
Searching state space
The fundamental method for creating a plan is SEARCH.

7 D Goforth - COSC 4117, fall 2006
Searching – graph traversals – TREE-SEARCH (p. 70)
Expand the start node with all possible actions, then pick another node to expand, and so on:

8 D Goforth - COSC 4117, fall 2006
Design of a search space – the spanning tree over a state space
Node in the search space:
 - current state
 - reference to parent node on the path
 - action from parent to node
 - path cost from start node (may be just path length)
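A minimal Python sketch of such a search-space node, with a helper that walks the parent links back to the start to recover the action sequence (names are illustrative assumptions):

    # Sketch of a search-tree node holding the four fields listed above.
    class Node:
        def __init__(self, state, parent=None, action=None, path_cost=0):
            self.state = state          # current state
            self.parent = parent        # reference to parent node on the path
            self.action = action        # action from parent to this node
            self.path_cost = path_cost  # path cost (or length) from the start node

        def solution(self):
            """Follow parent links back to the start; return the action sequence."""
            actions = []
            node = self
            while node.parent is not None:
                actions.append(node.action)
                node = node.parent
            return list(reversed(actions))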

9 D Goforth - COSC 4117, fall 2006
Problem-solving agent – example
[Diagram: state space of 16 states labelled LLLL … RRRR; example search-space nodes LLRL (path length 2, action F) and RRRL (path length 3, action Ff), each showing its state, parent link, action and path length.]

10 D Goforth - COSC 4117, fall 2006
Problem-solving agent – example
[Diagram: the same 16-state space (LLLL … RRRR) with a SPANNING TREE overlaid.]
Note why some state-space edges are not traversed in the spanning tree.

11 D Goforth - COSC 4117, fall 2006
General search algorithm (p. 72)
EXAMPLE: breadth-first search in a binary tree
[Diagram: start state, already-visited states (visitedList), the fringe (openList), and the current state.]

12 General search algorithm – variation: data and functions

startState                    initial state of the environment
adjacentNode, node            nodes of the search tree: contain state, parent, action, path cost
openList                      collection of nodes generated but not tested yet (the fringe)
visitedList                   collection of nodes already tested and not the goal
action[n]                     list of actions that can be taken by the agent
goalStateFound(state)         returns boolean: evaluate a state as the goal
precondition(state, action)   returns boolean: test whether an action applies in a state
apply(node, action)           returns node: apply the action to get the next state's node
makeSequence(node)            returns sequence of actions: generate the plan

13 General search algorithm – variation

algorithm search(startState, goalStateFound()) returns action sequence
  openList = new NodeCollection()        // stack or queue or ...
  visitedList = new NodeCollection()
  node = new Node(startState, null, null, 0)
  openList.insert(node)
  while ( notEmpty(openList) )
    node = openList.get()
    if ( goalStateFound(node.state) )    // successful search
      return makeSequence(node)
    for k = 0..n-1
      if ( precondition(node.state, action[k]) == TRUE )
        adjacentNode = apply(node, action[k])
        if NOT (adjacentNode in openList OR adjacentNode in visitedList)
          openList.insert(adjacentNode)
    visitedList.insert(node)
  return null
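A runnable Python rendering of this algorithm is sketched below, assuming the Problem and Node sketches from the earlier slides (and hashable states); the openList here is a FIFO queue, so this particular version behaves as breadth-first search:

    from collections import deque

    # Sketch of the general search algorithm above, breadth-first by default.
    # `problem` is assumed to expose actions(state), result(state, action),
    # step_cost(state, action, next_state) and goal_test(state).
    def search(problem):
        node = Node(problem.initial)
        open_list = deque([node])        # fringe: generated, not tested yet
        visited = set()                  # states already tested and not the goal
        while open_list:
            node = open_list.popleft()
            if problem.goal_test(node.state):
                return node.solution()   # successful search: plan as action sequence
            visited.add(node.state)
            for action in problem.actions(node.state):
                child_state = problem.result(node.state, action)
                if child_state in visited or any(n.state == child_state for n in open_list):
                    continue             # already tested or already on the fringe
                cost = node.path_cost + problem.step_cost(node.state, action, child_state)
                open_list.append(Node(child_state, node, action, cost))
        return None                      # no plan found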

14 D Goforth - COSC 4117, fall 2006
Algorithms of general search:
 - breadth-first search
 - depth-first search
 - iterative deepening search
 - uniform cost search

15 D Goforth - COSC 4117, fall 2006
Variations on the search algorithm (see the small sketch below)
1. breadth-first search: openList is a queue
2. depth-first search: openList is a stack (recursive depth-first is equivalent)
Tradeoff for bfs: shortest path vs. resources required
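Only the openList structure changes between the two; a tiny self-contained illustration (the node labels are arbitrary):

    from collections import deque

    # The openList structure alone decides the search order (illustrative sketch).
    frontier = deque(["A", "B", "C"])   # nodes generated in this order
    print(frontier.popleft())           # queue (FIFO): returns "A"  -> breadth-first
    print(frontier.pop())               # stack (LIFO): returns "C"  -> depth-first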

16 D Goforth - COSC 4117, fall 2006
Comparison of bfs and dfs
Nodes on the openList while the search is at level k (n is the branching factor):
 - bfs: O(n^k)
 - dfs: O(nk)
 - recursive dfs: O(k)
Quality of solution path:
 - bfs always finds a path with the fewest actions
 - dfs may find a longer path before a shorter one

17 D Goforth - COSC 4117, fall 2006
Depth-limited dfs
Use depth-first search with a limited path length,
e.g. dfs(startNode, goalStateFound(), 3) uses dfs but only goes to level 3.
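A recursive depth-limited DFS could be sketched as below, again assuming the Node and Problem sketches from the earlier slides (path cost is counted as path length here):

    # Sketch of depth-limited depth-first search.
    def depth_limited_dfs(problem, limit):
        def recurse(node, depth):
            if problem.goal_test(node.state):
                return node.solution()
            if depth == limit:
                return None                  # cut off: do not expand deeper
            for action in problem.actions(node.state):
                child = Node(problem.result(node.state, action), node, action,
                             node.path_cost + 1)
                result = recurse(child, depth + 1)
                if result is not None:
                    return result
            return None
        return recurse(Node(problem.initial), 0)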

18 D Goforth - COSC 4117, fall 2006
Iterated (depth-limited) dfs
A variation on dfs to get the best of both:
 - the small openList of dfs
 - finds a path with the fewest actions, like bfs
Repeated searching is not a big problem!

19 D Goforth - COSC 4117, fall 2006
Iterative deepening dfs
The search algorithm puts depth-limited dfs in a loop:

algorithm search(startState, goalStateFound())
  node = null
  depth = 0
  while (node == null)
    depth++
    node = dfs(startState, goalStateFound(), depth)
  return node
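The same loop in Python, assuming the depth_limited_dfs sketch from the previous slide; like the pseudocode above, it loops forever if no solution exists:

    # Sketch of iterative deepening: depth-limited DFS with a growing limit.
    def iterative_deepening(problem):
        depth = 0
        while True:
            plan = depth_limited_dfs(problem, depth)
            if plan is not None:
                return plan
            depth += 1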

20 D Goforth - COSC 4117, fall 2006
Uniform cost search
Find the best path when there is an action cost on each edge; a path of more edges may be better than a path of fewer edges:
 - 12+8+9+4+10 = 43 (5 edges) is preferred to 35+18 = 53 (2 edges)
Variation on bfs: openList is a priority queue ordered on path cost from the start state.
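A sketch of uniform cost search using Python's heapq as the priority queue, assuming the Node and Problem sketches from the earlier slides (the counter is only a tie-breaker so the heap never has to compare Node objects):

    import heapq
    import itertools

    # Sketch of uniform cost search: openList is a priority queue ordered
    # on path cost from the start state.
    def uniform_cost_search(problem):
        counter = itertools.count()
        frontier = [(0, next(counter), Node(problem.initial))]
        best_cost = {problem.initial: 0}
        while frontier:
            cost, _, node = heapq.heappop(frontier)
            if cost > best_cost.get(node.state, float("inf")):
                continue                         # stale entry: a cheaper path was found
            if problem.goal_test(node.state):    # test when popped, not when generated
                return node.solution()
            for action in problem.actions(node.state):
                child_state = problem.result(node.state, action)
                new_cost = cost + problem.step_cost(node.state, action, child_state)
                if new_cost < best_cost.get(child_state, float("inf")):
                    best_cost[child_state] = new_cost
                    heapq.heappush(frontier, (new_cost, next(counter),
                                              Node(child_state, node, action, new_cost)))
        return None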

21 D Goforth - COSC 4117, fall 2006
Uniform cost search – example
openList is a priority queue ordered on path cost from the start state.
[Diagram: graph with start node A and edges to C (cost 2), B (cost 4) and D (cost 8). First step: current A(0), visited {}, open C(2), B(4), D(8).]

22 Uniform cost search – example (continued)
[Diagram: the example graph grows as the search proceeds. Trace of the priority queue:
 1. current A(0);  visited {};                 open C(2), B(4), D(8)
 2. current C(2);  visited A(0);               open B(4), E(5), D(8)
 3. current B(4);  visited A(0), C(2);         open E(5), F(7), D(8), G(9)
 4. current E(5);  visited A(0), C(2), B(4);   open G(6), F(7), D(8), H(10)]

23 D Goforth - COSC 4117, fall 2006
Variations of the general algorithm:
 - structure of the openList
 - time of testing for the goal state

24 D Goforth - COSC 4117, fall 2006
(Some) problems that complicate search:
 - perceptions are an incomplete representation of the state
 - dynamic environment: the path of actions is not the only cause of state change (e.g. games)

25 D Goforth - COSC 4117, fall 2006
What kind of problem? (reprise)
 - fully / partly observable: is the state known?
 - deterministic / stochastic: is the effect of an action uncertain?
 - sequential / episodic: is a plan required/useful?
 - static / dynamic: does the state change between action & perception and/or between perception & action?
 - discrete / continuous: concurrent or sequential actions on the state?
 - single- / multi-agent: dynamic environment; possible communication, distributed AI

