Problem Spaces & Search CSE 473
© Daniel S. Weld

473 Topics
- Agents & Environments
- Problem Spaces
- Search & Constraint Satisfaction
- Knowledge Representation & Logical Reasoning
- Machine Learning
- Uncertainty: Representation & Reasoning
- Dynamic Bayesian Networks
- Markov Decision Processes
Weak Methods
"In the knowledge lies the power…" [Feigenbaum]
But what if there is no knowledge?
Generate & Test
As weak as it gets…
Works on semi-decidable problems!
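The deck gives no code, but generate & test can be sketched in a few lines of Python. The function and argument names here are illustrative, not from the slides: a candidate generator and a goal test are all the method needs.

```python
from itertools import count

def generate_and_test(generate, test):
    """Blindly enumerate candidates until one passes the test.

    `generate` and `test` are problem-specific placeholders:
    a candidate enumerator and a goal check.
    """
    for candidate in generate():
        if test(candidate):
            return candidate
    return None  # generator exhausted without success

# Toy use: find the first positive integer whose square exceeds 50.
result = generate_and_test(lambda: count(1), lambda n: n * n > 50)
# → 8
```

Note why this only handles semi-decidable problems: with an infinite generator and no solution, the loop never halts — it can confirm a "yes" but never a "no".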
Example: Fragment of 8-Puzzle Problem Space
Search through a Problem Space / State Space
Input:
- Set of states
- Operators [and costs]
- Start state
- Goal state [test]
Output:
- Path: start → a state satisfying the goal test
  [May require shortest path]
Example: Route Planning
Input:
- Set of states
- Operators [and costs]
- Start state
- Goal state (test)
Output:
Example: N Queens
(Diagram: a 4-queens board)
Input:
- Set of states
- Operators [and costs]
- Start state
- Goal state (test)
Output:
Example: Blocks World
Input:
- Set of states: partially specified plans
- Operators [and costs]: plan-modification operators
- Start state: the null plan (no actions)
- Goal state (test): a plan which provably achieves the desired world configuration
Multiple Problem Spaces

Real World
- States of the world (e.g. block configurations)
- Actions (take one world-state to another)

Robot's Head
- Problem Space 1: PS states = models of world states; operators = models of actions
- Problem Space 2: PS states = partially specified plans; operators = plan-modification ops
Classifying Search

GUESSING ("Tree Search")
- Guess how to extend a partial solution to a problem.
- Generates a tree of (partial) solutions.
- The leaves of the tree are either "failures" or represent complete solutions.

SIMPLIFYING ("Inference")
- Infer new, stronger constraints by combining one or more constraints (without any "guessing").
- Example: X + 2Y = 3 and X + Y = 1, therefore Y = 2.

WANDERING ("Markov chain")
- Perform a (biased) random walk through the space of (partial or total) solutions.
Roadmap

Guessing – State Space Search
1. BFS
2. DFS
3. Iterative Deepening
4. Bidirectional
5. Best-first search
6. A*
7. Game tree
8. Davis-Putnam (logic)
9. Cutset conditioning (probability)

Simplification – Constraint Propagation
1. Forward Checking
2. Path Consistency (Waltz labeling, temporal algebra)
3. Resolution
4. "Bucket Algorithm"

Wandering – Randomized Search
1. Hillclimbing
2. Simulated annealing
3. Walksat
4. Monte-Carlo Methods

Constraint Satisfaction
Search Strategies v2
- Blind Search
  - Depth-first search
  - Breadth-first search
  - Iterative deepening search
  - Iterative broadening search
- Informed Search
- Constraint Satisfaction
- Adversary Search
Depth First Search
(Diagram: search tree with nodes a–h)
Maintain a stack of nodes to visit.
Evaluation:
- Complete? Not for infinite spaces
- Time Complexity? O(b^d)
- Space Complexity? O(d)
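A minimal Python sketch of the stack-based DFS described above. The graph, successor function, and goal test are illustrative stand-ins, not from the deck; the edges merely reuse the slide's node labels.

```python
def dfs(start, successors, is_goal):
    """Depth-first search: maintain a stack (LIFO) of paths to extend."""
    stack = [[start]]                      # each entry is a path from start
    while stack:
        path = stack.pop()                 # expand the most recently added node
        node = path[-1]
        if is_goal(node):
            return path
        for child in successors(node):
            if child not in path:          # avoid cycles along this path
                stack.append(path + [child])
    return None

# Illustrative graph (edges invented for the example)
graph = {'a': ['b', 'e'], 'b': ['c', 'd'], 'e': ['f', 'g'], 'g': ['h']}
path = dfs('a', lambda n: graph.get(n, []), lambda n: n == 'h')
# → ['a', 'e', 'g', 'h']
```

The O(d) space bound shows up directly: the stack only ever holds the siblings along one root-to-leaf path.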
Breadth First Search
(Diagram: search tree with nodes a–h)
Maintain a queue of nodes to visit.
Evaluation:
- Complete? Yes
- Time Complexity? O(b^d)
- Space Complexity? O(b^d)
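The same sketch with the stack swapped for a FIFO queue turns DFS into BFS — the one-line data-structure change is the whole difference. Again the graph and names are illustrative:

```python
from collections import deque

def bfs(start, successors, is_goal):
    """Breadth-first search: a FIFO queue expands shallowest nodes first."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()             # shallowest unexpanded node
        node = path[-1]
        if is_goal(node):
            return path
        for child in successors(node):
            if child not in seen:
                seen.add(child)
                queue.append(path + [child])
    return None

graph = {'a': ['b', 'e'], 'b': ['c', 'd'], 'e': ['f', 'g'], 'g': ['h']}
path = bfs('a', lambda n: graph.get(n, []), lambda n: n == 'h')
# → ['a', 'e', 'g', 'h']
```

The O(b^d) space cost is visible here too: the queue can hold an entire frontier level at once, which is what the memory-limit slide below quantifies.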
Memory a Limitation?
Suppose:
- 2 GHz CPU
- 1 GB main memory
- 100 instructions / expansion
- 5 bytes / node
- 200,000 expansions / sec
Memory filled in 100 sec … < 2 minutes
Iterative Deepening Search
(Diagram: search tree with nodes a–L)
DFS with a depth limit; incrementally grow the limit.
Evaluation:
- Complete? Yes
- Time Complexity? O(b^d)
- Space Complexity? O(d)
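A sketch of the scheme in Python: a depth-limited DFS wrapped in a loop that grows the limit. Names and the toy graph are illustrative; the point is that each iteration uses only O(d) memory while still finding a shallowest goal.

```python
def depth_limited(node, successors, is_goal, limit, path=None):
    """DFS that refuses to descend below `limit`; returns a path or None."""
    path = (path or []) + [node]
    if is_goal(node):
        return path
    if limit == 0:
        return None
    for child in successors(node):
        found = depth_limited(child, successors, is_goal, limit - 1, path)
        if found:
            return found
    return None

def iterative_deepening(start, successors, is_goal, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, …: complete, O(d) space."""
    for limit in range(max_depth + 1):
        found = depth_limited(start, successors, is_goal, limit)
        if found:
            return found
    return None

graph = {'a': ['b', 'e'], 'b': ['c', 'd'], 'e': ['f', 'g'], 'g': ['h']}
path = iterative_deepening('a', lambda n: graph.get(n, []), lambda n: n == 'h')
# → ['a', 'e', 'g', 'h']  (a shallowest path, like BFS, but in DFS memory)
```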
Cost of Iterative Deepening

b     ratio of ID to DFS
2     3
3     2
5     1.5
10    1.2
25    1.08
100   1.02
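The tabulated ratios all agree with (b + 1)/(b − 1), which makes the trend easy to check in Python. Treat the closed form as an observation fitted to the table's numbers, not a derivation from the deck:

```python
def id_overhead(b):
    """(b + 1) / (b - 1): reproduces the slide's ID-to-DFS cost ratios.

    Observation fitted to the table above, not derived here. The
    qualitative point: overhead shrinks fast as branching factor grows.
    """
    return (b + 1) / (b - 1)

ratios = {b: round(id_overhead(b), 2) for b in (2, 3, 5, 10, 25, 100)}
# → {2: 3.0, 3: 2.0, 5: 1.5, 10: 1.22, 25: 1.08, 100: 1.02}
```

At b = 100 the repeated shallow iterations cost only 2% extra work — the last iteration dominates.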
Speed (assuming 10M nodes/sec and sufficient memory)

                BFS                   Iterative Deepening
                Nodes   Time          Nodes   Time
8 Puzzle        10^5    .01 sec       10^5    .01 sec
2x2x2 Rubik's   10^6    .2 sec        10^6    .2 sec
15 Puzzle       10^13   6 days        10^17   20k yrs
3x3x3 Rubik's   10^19   68k yrs       10^20   574k yrs
24 Puzzle       10^25   12B yrs       10^37   10^23 yrs

Why the difference?
- Rubik has higher branching factor
- 15 puzzle has greater depth
- # of duplicates (8x … 1M x)

Adapted from a Richard Korf presentation
When to Use Iterative Deepening
N Queens?
(Diagram: a 4-queens board)
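N queens is the slide's counterexample: every solution sits at depth exactly N (one queen per row), so a plain depth-limited DFS already suffices and iterative deepening would only repeat work. A minimal backtracking sketch in Python (names and structure are illustrative, not from the deck):

```python
def queens(n):
    """Plain DFS for N queens, placing one queen per row.

    All solutions lie at depth exactly n, so there is nothing for
    iterative deepening to gain over straight DFS.
    """
    def extend(cols):                      # cols[r] = column of queen in row r
        row = len(cols)
        if row == n:
            return cols                    # all rows filled: a solution
        for col in range(n):
            # safe iff no shared column and no shared diagonal
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                sol = extend(cols + [col])
                if sol:
                    return sol
        return None                        # dead end: backtrack
    return extend([])

# → [1, 3, 0, 2]: queens at (row 0, col 1), (1, 3), (2, 0), (3, 2)
solution = queens(4)
```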
Search Space with Uniform Structure
Search Space with Clustered Structure
Iterative Broadening Search
What if we know solutions lie at depth N?
No sense in doing iterative deepening.
(Diagram: search tree with nodes a–j)
Forwards vs. Backwards
(Diagram: search from start toward end)
vs. Bidirectional
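The bidirectional idea can be sketched as two BFS frontiers that grow toward each other and stop when they touch, exploring roughly 2·b^(d/2) nodes instead of b^d. This assumes edges can be followed from both ends (an undirected graph); all names are illustrative:

```python
from collections import deque

def bidirectional(start, goal, neighbors):
    """Meet-in-the-middle BFS: alternate expanding a frontier from each end."""
    if start == goal:
        return [start]
    fwd = {start: [start]}                 # node -> path from start
    bwd = {goal: [goal]}                   # node -> path from goal
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        for q, this, other, forward in ((qf, fwd, bwd, True),
                                        (qb, bwd, fwd, False)):
            node = q.popleft()
            for nxt in neighbors(node):
                if nxt in other:           # frontiers meet: splice the paths
                    a, b = this[node], other[nxt]
                    return a + b[::-1] if forward else b + a[::-1]
                if nxt not in this:
                    this[nxt] = this[node] + [nxt]
                    q.append(nxt)
    return None

# Toy undirected line graph a—b—c—d—e
graph = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'],
         'd': ['c', 'e'], 'e': ['d']}
route = bidirectional('a', 'e', lambda n: graph[n])
# → ['a', 'b', 'c', 'd', 'e']
```

The catch, as the slides' ordering suggests, is that searching backwards requires invertible operators and a concrete goal state, not just a goal test.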
Problem
All these methods are slow (blind).
Solution
Add guidance (a "heuristic estimate") → "informed search" … next time …
Recap: Search through a Problem Space / State Space
Input:
- Set of states
- Operators [and costs]
- Start state
- Goal state [test]
Output:
- Path: start → a state satisfying the goal test
  [May require shortest path]
Cryptarithmetic
    SEND
  + MORE
  ------
   MONEY
Input:
- Set of states
- Operators [and costs]
- Start state
- Goal state (test)
Output:
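This puzzle is small enough that pure generate & test from earlier in the deck works: enumerate every assignment of distinct digits to the eight letters and test the column sum. A Python sketch (brute force is one possible formulation of the problem space, not the deck's prescribed one):

```python
from itertools import permutations

def solve_send_more_money():
    """Generate & test: try every injective digit assignment to
    S, E, N, D, M, O, R, Y and check SEND + MORE == MONEY."""
    for s, e, n, d, m, o, r, y in permutations(range(10), 8):
        if s == 0 or m == 0:               # no leading zeros
            continue
        send = 1000 * s + 100 * e + 10 * n + d
        more = 1000 * m + 100 * o + 10 * r + e
        money = 10000 * m + 1000 * o + 100 * n + 10 * e + y
        if send + more == money:
            return send, more, money

send, more, money = solve_send_more_money()
# → (9567, 1085, 10652), the puzzle's unique solution
```

Scanning up to 10·9·8·…·3 ≈ 1.8M candidates is feasible here, but the blindness shows: constraint propagation (e.g. M must be 1 from the carry) prunes almost all of that work, which is the deck's "simplifying" theme.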
Concept Learning
Labeled Training Examples <p4,…
Output: f: {ok, ter}
Input:
- Set of states
- Operators [and costs]
- Start state
- Goal state (test)
Output:
Symbolic Integration
E.g. ∫ x^2 e^x dx = e^x (x^2 − 2x + 2) + C
Operators:
- Integration by parts
- Integration by substitution
- …