Problem Spaces & Search
Dan Weld, CSE 573

A handful of general search techniques lie at the heart of practically all work in AI. We will encounter the same principles again and again in this course, whether we are talking about games, logical reasoning, or machine learning. These are principles for searching through a space of possible solutions to a problem.
Weak Methods
“In the knowledge lies the power…” [Feigenbaum]
But what if you have very little knowledge?
© Daniel S. Weld
Generate & Test
As weak as it gets…
Works on semi-decidable problems!
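Generate & test can be sketched as a single loop (a minimal illustration; the generator and tester here are hypothetical stand-ins, not part of the lecture):

```python
def generate_and_test(generator, test):
    """Weakest possible method: enumerate candidates, return the first that passes.

    If `generator` enumerates a (possibly infinite) candidate stream and `test`
    is decidable, this halts whenever a solution exists, which is why
    generate & test works on semi-decidable problems.
    """
    for candidate in generator:
        if test(candidate):
            return candidate
    return None  # only reached if the generator is finite and nothing passes

# Toy usage: find the first integer whose square exceeds 50.
print(generate_and_test(iter(range(100)), lambda n: n * n > 50))  # -> 8
```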
Mass Spectrometry
Example problem for generate & test: identify a molecule (e.g. C12O4N2H20) from its mass spectrum, a plot of relative abundance vs. m/z value.
Improving G&T
Use knowledge of the problem domain: add ‘smarts’ to the generator.
Problem Space / State Space

Search through a Problem Space / State Space.
Input:
- Set of states
- Operators [and costs]
- Start state
- Goal state [test]
Output:
- Path: start → a state satisfying the goal test
- [May require shortest path]
- [Sometimes just need a state passing the test]

State-space search is basically a graph search problem. States are whatever you like! Operators are a compact description of possible state transitions (the edges of the graph); they may have different costs or all be unit cost. The output may be specified either as any path (reachability) or as a shortest path.
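The input contract above can be written down directly (a sketch; the type and field names are my own, not from the lecture):

```python
from dataclasses import dataclass
from typing import Callable, Hashable, Iterable

@dataclass
class SearchProblem:
    """A problem space: states, operators (as a successor function), start, goal test."""
    start: Hashable
    is_goal: Callable[[Hashable], bool]
    # Operators: map a state to (successor, cost) pairs -- the edges of the graph.
    successors: Callable[[Hashable], Iterable[tuple[Hashable, float]]]

# Toy instance: states are integers, operators add 1 or double (unit cost).
toy = SearchProblem(
    start=1,
    is_goal=lambda s: s == 10,
    successors=lambda s: [(s + 1, 1), (2 * s, 1)],
)
```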
Example: Route Planning

Input:
- States = cities
- Start state = starting city
- Goal state test = is this state the destination city?
- Operators = move to an adjacent city; cost = distance
Output: a shortest path from the start state to the goal state.

Suppose we reverse the start and goal states?
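As a concrete sketch (the city names and distances below are made up for illustration):

```python
# Hypothetical road map: states are cities, operators move to an adjacent
# city, and the cost of an operator is the road distance.
ROADS = {
    "A": {"B": 75, "C": 118},
    "B": {"A": 75, "D": 71},
    "C": {"A": 118, "D": 111},
    "D": {"B": 71, "C": 111},
}

def route_successors(city):
    """Operators for route planning: (adjacent city, distance) pairs."""
    return ROADS[city].items()

# Reversing start and goal states is easy here because every road is
# two-way, so the state graph is undirected.
print(dict(route_successors("A")))  # -> {'B': 75, 'C': 118}
```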
Example: N Queens

Input:
- Set of states
- Operators [and costs]
- Start state
- Goal state (test)
Output
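One common formulation (a sketch; this is just one of several reasonable state spaces, and is my own choice rather than the slide's): a state is a partial placement with one queen per column, represented as a tuple of row indices; operators add a queen to the next column; the goal test checks that all N columns are filled and no two queens attack each other.

```python
def safe(rows):
    """Goal-test helper: do the queens in `rows` (one per column) not attack each other?"""
    for c1 in range(len(rows)):
        for c2 in range(c1 + 1, len(rows)):
            same_row = rows[c1] == rows[c2]
            same_diag = abs(rows[c1] - rows[c2]) == abs(c1 - c2)
            if same_row or same_diag:
                return False
    return True

def n_queens_successors(rows, n):
    """Operators: place a queen in the next empty column, in any row."""
    return [rows + (r,) for r in range(n)]

# A known 4-queens solution: queens at rows 1, 3, 0, 2 of columns 0..3.
print(safe((1, 3, 0, 2)))  # -> True
print(safe((0, 1, 2, 3)))  # -> False (all on one diagonal)
```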
Example: Blocks World

- State = configuration of blocks, e.g. ON(A,B), ON(B, TABLE), ON(C, TABLE)
- Start state = starting configuration
- Goal state test = does the state satisfy some test, e.g. “ON(A,B)”?
- Operators: MOVE x FROM y TO z, or PICKUP(x,y) and PUTDOWN(x,y)
- Output: sequence of actions that transforms the start state into the goal.

How hard is this if we don’t care about the number of actions used? Polynomial. How hard is it to find the smallest number of actions? NP-complete.

Suppose we want to search backwards from the goal? That is simple if the goal is completely specified, but we need a different notion of state if goals are partially specified: states become partially specified plans, operators become plan modification operators, the start state is the null plan (no actions), and the output is a plan which provably achieves the desired world configuration.
Multiple Problem Spaces

Real world: states of the world (e.g. block configurations) and actions (which take one world state to another).
In the robot’s head:
- Problem Space 1: PS states = models of world states; operators = models of actions.
- Problem Space 2: PS states = partially specified plans; operators = plan modification operators.
Classifying Search

Here are the three basic principles:
1. Guessing (“Tree Search”): guess how to extend a partial solution to a problem, one step at a time. This generates a tree of (partial) solutions; the leaves of the tree are either failures or represent complete solutions.
2. Simplifying (“Inference”): infer new, stronger constraints by combining one or more constraints, without any guessing. Example: X + 2Y = 3 and X + Y = 1, therefore Y = 2.
3. Wandering (“Markov chain”): perform a (biased) random walk through the space of (partial or total) solutions.
Search Strategies
- Blind Search: depth first search, breadth first search, iterative deepening search, iterative broadening search
- Informed Search
- Constraint Satisfaction
- Adversary Search
Depth First Search

Maintain a stack of nodes to visit.
Evaluation:
- Complete? Not for infinite spaces.
- Time complexity? O(b^d)
- Space complexity? O(bd)
where b = branching factor, d = max depth of the tree.
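A minimal sketch of the stack-based version (tree search, so no visited set; the toy problem at the end is my own, not the slide's):

```python
def dfs(start, is_goal, successors):
    """Depth-first tree search: maintain a stack of (state, path) entries.

    Space is O(bd): the stack holds at most b siblings at each of d levels.
    Incomplete on infinite spaces -- it can chase one branch forever.
    """
    stack = [(start, [start])]
    while stack:
        state, path = stack.pop()  # LIFO: most recently generated node first
        if is_goal(state):
            return path
        for child in successors(state):
            stack.append((child, path + [child]))
    return None

# Toy binary tree: children of n are 2n and 2n+1; search for node 5.
path = dfs(1, lambda s: s == 5, lambda s: [2 * s, 2 * s + 1] if s < 8 else [])
print(path)  # -> [1, 2, 5]
```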
Breadth First Search

Maintain a queue of nodes to visit.
Evaluation:
- Complete? Yes (if the branching factor is finite).
- Time complexity? O(b^(d+1))
- Space complexity? O(b^(d+1))
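The queue-based counterpart (same hypothetical toy tree as in the DFS sketch):

```python
from collections import deque

def bfs(start, is_goal, successors):
    """Breadth-first search: maintain a FIFO queue of (state, path) entries.

    Complete for finite branching factors; both time and space are O(b^(d+1))
    in the worst case, since the whole frontier is kept in memory.
    """
    queue = deque([(start, [start])])
    while queue:
        state, path = queue.popleft()  # FIFO: shallowest node first
        if is_goal(state):
            return path
        for child in successors(state):
            queue.append((child, path + [child]))
    return None

# BFS visits nodes level by level, so it finds the shallowest occurrence.
print(bfs(1, lambda s: s == 5, lambda s: [2 * s, 2 * s + 1]))  # -> [1, 2, 5]
```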
Memory a Limitation?

Suppose:
- 2 GHz CPU
- 4 GB main memory
- 100 instructions / expansion
- 5 bytes / node
- 200,000 expansions / sec
Memory filled in 400 sec … < 7 min
Iterative Deepening Search

DFS with a depth limit; incrementally grow the limit.
Evaluation:
- Complete? Yes.
- Time complexity? O(b^d)
- Space complexity? O(d)
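The "DFS with a growing limit" recipe, sketched (the `max_depth` cap and the toy problem are my additions to keep the example finite):

```python
def depth_limited(state, is_goal, successors, limit, path):
    """DFS that refuses to descend below `limit`; returns a path or None."""
    if is_goal(state):
        return path
    if limit == 0:
        return None
    for child in successors(state):
        found = depth_limited(child, is_goal, successors, limit - 1, path + [child])
        if found is not None:
            return found
    return None

def ids(start, is_goal, successors, max_depth=50):
    """Iterative deepening: run depth-limited DFS with limits 0, 1, 2, ...

    Complete like BFS, O(b^d) time, but only O(d) space for the recursion.
    """
    for limit in range(max_depth + 1):
        found = depth_limited(start, is_goal, successors, limit, [start])
        if found is not None:
            return found
    return None

print(ids(1, lambda s: s == 5, lambda s: [2 * s, 2 * s + 1]))  # -> [1, 2, 5]
```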
Cost of Iterative Deepening

b     ratio of ID to DFS
2     3
5     1.5
10    1.2
25    1.08
100   1.02
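The tabulated ratios match the closed form (b + 1) / (b - 1); that formula is my observation from the table, assuming a uniform tree with branching factor b. A quick check:

```python
def id_to_dfs_ratio(b):
    """Asymptotic ratio of nodes expanded by iterative deepening vs. a single
    DFS to the same depth, matching the lecture's table: (b + 1) / (b - 1)."""
    return (b + 1) / (b - 1)

for b in (2, 5, 10, 25, 100):
    print(b, round(id_to_dfs_ratio(b), 2))
```

So the overhead of re-expanding shallow levels shrinks quickly as the branching factor grows: at b = 100 iterative deepening costs only about 2% more than a single DFS.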
Speed

Assuming 10M nodes/sec and sufficient memory, the slide compares BFS and iterative deepening (node counts and times) on the 8 Puzzle, 2x2x2 Rubik’s Cube, 15 Puzzle, 3x3x3 Rubik’s Cube, and 24 Puzzle; times range from seconds on the small puzzles to thousands or billions of years on the large ones.
Why the difference? Rubik’s Cube has a higher branching factor; the sliding-tile puzzles have greater depth and more duplicate nodes.
Adapted from a Richard Korf presentation.
© Daniel S. Weld
When to Use Iterative Deepening

N Queens?
Search Space with Uniform Structure
Search Space with Clustered Structure
Iterative Broadening Search

What if we know solutions lie at depth N? Then there is no sense in doing iterative deepening. Instead, increment the sum of sibling indices allowed on a path.
Forwards vs. Backwards
vs. Bidirectional
Problem
All these methods are slow (blind).
Solution: add guidance (a “heuristic estimate”), giving “informed search” … coming next …
Problem Space / State Space

Recap: search through a Problem Space / State Space.
Input:
- Set of states
- Operators [and costs]
- Start state
- Goal state [test]
Output:
- Path: start → a state satisfying the goal test
- [May require shortest path]

State-space search is basically a graph search problem. States are whatever you like! Operators are a compact description of possible state transitions (the edges of the graph); they may have different costs or all be unit cost. The output may be specified either as any path (reachability) or as a shortest path.
Symbolic Integration
E.g. ∫ x^2 e^x dx = e^x (x^2 - 2x + 2) + C
States? Formulae.
Operators? Integration by parts, integration by substitution, …
Concept Learning

Labeled training examples:
<p1,blond,32,mc,ok> <p2,red,47,visa,ok> <p3,blond,23,cash,ter> <p4,…
Output: f: <pn…> → {ok, ter}
Input: set of states, operators [and costs], start state, goal state (test), output.

State: partial assignment of letters to numbers. Goal state: a total assignment that is a correct sum. Operator: assign a number to a letter. We don’t care about path length; in fact, all solution paths are exactly 8 letters long!
Search Strategies
- Blind Search
- Informed Search: best-first search, A* search, iterative deepening A* search, local search
- Constraint Satisfaction
- Adversary Search
Best-first Search

Generalization of breadth first search: a priority queue of nodes to be explored, with a cost function f(n) applied to each node.

- Add the initial state to the priority queue
- While the queue is not empty:
  - node = head(queue)
  - If goal?(node) then return node
  - Add the children of node to the queue
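The pseudocode above maps directly onto a binary heap (a minimal sketch; duplicate states are not pruned here, and the cost function f is passed in as a parameter):

```python
import heapq

def best_first(start, is_goal, successors, f):
    """Best-first search: always expand the node with the smallest f value."""
    counter = 0  # tie-breaker so heapq never has to compare states directly
    frontier = [(f(start), counter, start)]
    while frontier:
        _, _, node = heapq.heappop(frontier)  # head(queue): lowest f first
        if is_goal(node):
            return node
        for child in successors(node):
            counter += 1
            heapq.heappush(frontier, (f(child), counter, child))
    return None

# Toy usage: with f = the state's own value, small numbers are expanded first.
print(best_first(1, lambda s: s == 5,
                 lambda s: [2 * s, 2 * s + 1] if s < 16 else [],
                 lambda s: s))  # -> 5
```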
Old Friends

Breadth first = best first with f(n) = depth(n).
Dijkstra’s Algorithm = best first with f(n) = g(n), i.e. the sum of edge costs from start to n. Space bound (stores all generated nodes).
A* Search [Hart, Nilsson & Raphael 1968]

Best first search with f(n) = g(n) + h(n), where
- g(n) = sum of costs from start to n
- h(n) = estimate of the lowest cost path n → goal, with h(goal) = 0

If h(n) is admissible (and monotonic) then A* is optimal.
- Admissible: h underestimates the cost of any solution which can be reached from the node.
- Monotonic: f values increase from a node to its descendants (triangle inequality).
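A compact A* sketch with g tracked per node (the toy graph and the cheapest-route pruning via `best_g` are my additions; with h = 0 this reduces to Dijkstra's algorithm, as the previous slide notes):

```python
import heapq

def astar(start, is_goal, successors, h):
    """A*: best-first with f(n) = g(n) + h(n). Optimal when h is admissible."""
    counter = 0                      # tie-breaker for the heap
    frontier = [(h(start), counter, 0, start, [start])]  # (f, tie, g, state, path)
    best_g = {start: 0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path, g
        for child, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(child, float("inf")):  # keep only the cheapest route
                best_g[child] = g2
                counter += 1
                heapq.heappush(frontier, (g2 + h(child), counter, g2, child, path + [child]))
    return None

# Toy graph: A->B->C costs 1+1, the direct edge A->C costs 5; h = 0 (trivially admissible).
edges = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": []}
print(astar("A", lambda s: s == "C", lambda s: edges[s], lambda s: 0))
# -> (['A', 'B', 'C'], 2)
```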
Optimality of A*
Optimality Continued
A* Summary
Pros
Cons
Iterative-Deepening A*

Like iterative-deepening depth-first, but the depth bound is modified to be an f-limit:
- Start with f-limit = h(start)
- Prune any node if f(node) > f-limit
- Next f-limit = min cost of any node pruned
(In the slide’s figure, successive f-limits are FL = 15, then FL = 21.)
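The three-step recipe above can be sketched in a few lines (my own rendering, not Korf's original; on cyclic graphs it would revisit states, matching the duplicate-node caveat on the next slide, so the toy graph here is acyclic):

```python
def ida_star(start, is_goal, successors, h):
    """Iterative-deepening A*: DFS bounded by an f-limit instead of a depth limit."""

    def dfs(state, g, limit, path):
        f = g + h(state)
        if f > limit:
            return None, f           # pruned: report this node's f value
        if is_goal(state):
            return path, f
        next_limit = float("inf")
        for child, cost in successors(state):
            found, pruned_f = dfs(child, g + cost, limit, path + [child])
            if found is not None:
                return found, pruned_f
            next_limit = min(next_limit, pruned_f)
        return None, next_limit

    limit = h(start)                  # start with f-limit = h(start)
    while True:
        found, next_limit = dfs(start, 0, limit, [start])
        if found is not None:
            return found
        if next_limit == float("inf"):
            return None               # nothing was pruned: the space is exhausted
        limit = next_limit            # next f-limit = min f of any pruned node

# Same toy graph as in the A* sketch; h = 0 gives cost-bounded iterative deepening.
edges = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": []}
print(ida_star("A", lambda s: s == "C", lambda s: edges[s], lambda s: 0))  # -> ['A', 'B', 'C']
```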
IDA* Analysis

Complete & optimal (à la A*). Space usage is proportional to the depth of the solution, and each iteration is DFS: no priority queue!

The number of nodes expanded relative to A* depends on the number of unique values of the heuristic function:
- In the 8 puzzle: few distinct values, so the count is close to the number A* expands.
- In traveling salesman: each f value is unique, so the total is 1 + 2 + … + n = O(n^2), where n = the number of nodes A* expands. If n is too big for main memory, n^2 is too long to wait!

IDA* also generates duplicate nodes in cyclic graphs.