Problem Spaces & Search


Problem Spaces & Search Dan Weld CSE 573
A handful of general search techniques lie at the heart of practically all work in AI. We will encounter the same principles again and again in this course, whether we are talking about games, logical reasoning, or machine learning. These are principles for searching through a space of possible solutions to a problem.

Weak Methods
“In the knowledge lies the power…” [Feigenbaum]
But what if you have very little knowledge?
© Daniel S. Weld

Generate & Test
As weak as it gets… Works on semi-decidable problems!

Mass Spectrometry
[Figure: mass spectrum of C12O4N2H20; axes: relative abundance vs. m/z value]

Improving G&T
Use knowledge of the problem domain: add ‘smarts’ to the generator.

Problem Space / State Space
Search through a Problem Space / State Space.
Input:
- Set of states
- Operators [and costs]
- Start state
- Goal state [test]
Output:
- Path: start → a state satisfying the goal test
- [May require a shortest path; sometimes we just need a state passing the test]
Notes: State-space search is basically a graph search problem. States are whatever you like! Operators are a compact description of possible state transitions (the edges of the graph); they may have different costs or all be unit cost. The output may be specified either as any path (reachability) or as a shortest path.
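The input/output contract above can be sketched as a tiny interface in code. This is an illustrative sketch only; the class name, the example graph, and the route-planning instance below are made up for demonstration and do not come from the slides.

```python
# A minimal sketch of the state-space formulation above, using a toy
# route-planning instance. All names and the example graph are illustrative.

GRAPH = {  # operators as weighted edges between states (here, cities)
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2)],
    "C": [],
}

class RoutePlanning:
    """A search problem: states, operators [and costs], start state, goal test."""

    def __init__(self, graph, start, goal):
        self.graph, self.start, self.goal = graph, start, goal

    def start_state(self):
        return self.start

    def is_goal(self, state):
        return state == self.goal

    def successors(self, state):
        # each operator moves to an adjacent city, with a distance cost
        return self.graph.get(state, [])

problem = RoutePlanning(GRAPH, "A", "C")
print(problem.is_goal("C"))     # True
print(problem.successors("A"))  # [('B', 1), ('C', 4)]
```

Any of the search algorithms on the following slides can be written against this small interface, which is why the slides keep repeating the same input/output template.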

Example: Route Planning
- States = cities
- Start state = starting city
- Goal state test = is this state the destination city?
- Operators = move to an adjacent city; cost = distance
- Output: a shortest path from the start state to the goal state
Suppose we reverse the start and goal states?

Example: N Queens
Input: set of states; operators [and costs]; start state; goal state (test). Output?

Ex: Blocks World
- State = configuration of blocks, e.g. ON(A,B), ON(B, TABLE), ON(C, TABLE)
- Start state = starting configuration
- Goal state test = does the state satisfy some test, e.g. “ON(A,B)”?
- Operators: MOVE x FROM y TO z, or PICKUP(x,y) and PUTDOWN(x,y)
- Output: a sequence of actions that transforms the start state into the goal
How hard if we don't care about the number of actions used? Polynomial.
How hard to find the smallest number of actions? NP-complete.
Suppose we want to search backwards from the goal? Simple if the goal is completely specified; we need a different notion of state if goals are partially specified: partially specified plans, plan-modification operators, and the null plan (no actions) as the start state, yielding a plan which provably achieves the desired world configuration.

Multiple Problem Spaces
Real world: states of the world (e.g. block configurations); actions (take one world state to another).
Robot's head:
- Problem space 1: PS states = models of world states; operators = models of actions
- Problem space 2: PS states = partially specified plans; operators = plan-modification operators

Classifying Search
Here are the three basic principles:
1. Guessing (“tree search”): guess how to extend a partial solution to a problem, one step at a time. This generates a tree of (partial) solutions; the leaves of the tree are either failures or represent complete solutions.
2. Simplifying (“inference”): infer new, stronger constraints by combining one or more constraints (without any guessing). Example: X + 2Y = 3 and X + Y = 1, therefore Y = 2.
3. Wandering (“Markov chain”): perform a (biased) random walk through the space of (partial or total) solutions.

Search Strategies
- Blind search: depth-first search, breadth-first search, iterative deepening search, iterative broadening search
- Informed search
- Constraint satisfaction
- Adversary search

Depth First Search
Maintain a stack of nodes to visit.
Evaluation:
- Complete? Not for infinite spaces
- Time complexity? O(b^d)
- Space complexity? O(bd)
(d = max depth of tree; figure: an example search tree with nodes a through h)
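The stack-based procedure described above can be sketched as follows. The tree is a small made-up example, not the one pictured on the slide.

```python
def dfs(successors, start, is_goal):
    """Depth-first search: maintain a stack (LIFO) of paths to extend."""
    stack = [[start]]
    while stack:
        path = stack.pop()          # take the most recently added path
        node = path[-1]
        if is_goal(node):
            return path
        for child in successors(node):
            if child not in path:   # avoid cycles along the current path
                stack.append(path + [child])
    return None                     # search space exhausted, no solution

# illustrative tree: node -> children
TREE = {"a": ["b", "c"], "b": ["d", "e"], "c": ["f", "g"]}
print(dfs(lambda n: TREE.get(n, []), "a", lambda n: n == "g"))
# ['a', 'c', 'g']
```

Because the stack holds at most the siblings along one root-to-leaf path, the space usage matches the O(bd) on the slide; completeness fails when a branch is infinitely deep.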

Breadth First Search
Maintain a queue of nodes to visit.
Evaluation:
- Complete? Yes (if the branching factor is finite)
- Time complexity? O(b^(d+1))
- Space complexity? O(b^(d+1))
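Swapping the stack for a FIFO queue turns the previous sketch into breadth-first search. Again, the tree is an illustrative toy example.

```python
from collections import deque

def bfs(successors, start, is_goal):
    """Breadth-first search: maintain a FIFO queue of paths to extend."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()      # take the oldest (shallowest) path
        node = path[-1]
        if is_goal(node):
            return path             # first goal found is a shallowest one
        for child in successors(node):
            if child not in path:
                queue.append(path + [child])
    return None

TREE = {"a": ["b", "c"], "b": ["d", "e"], "c": ["f", "g"]}
print(bfs(lambda n: TREE.get(n, []), "a", lambda n: n == "e"))
# ['a', 'b', 'e']
```

The queue can hold an entire level of the tree at once, which is exactly the exponential space cost the slide flags.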

Memory a Limitation?
Suppose:
- 2 GHz CPU
- 4 GB main memory
- 100 instructions / expansion
- 5 bytes / node
- 200,000 expansions / sec
Memory filled in 400 sec … < 7 min

Iterative Deepening Search
DFS with a depth limit; incrementally grow the limit.
Evaluation:
- Complete? Yes
- Time complexity? O(b^d)
- Space complexity? O(d)
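The "DFS with a limit, incrementally grown" idea can be sketched directly; function names and the toy tree are illustrative.

```python
def depth_limited(successors, path, is_goal, limit):
    """DFS that refuses to extend paths beyond the given depth limit."""
    node = path[-1]
    if is_goal(node):
        return path
    if limit == 0:
        return None
    for child in successors(node):
        result = depth_limited(successors, path + [child], is_goal, limit - 1)
        if result is not None:
            return result
    return None

def iterative_deepening(successors, start, is_goal, max_depth=25):
    # rerun depth-limited DFS with an incrementally growing limit
    for limit in range(max_depth + 1):
        result = depth_limited(successors, [start], is_goal, limit)
        if result is not None:
            return result
    return None

TREE = {"a": ["b", "c"], "b": ["d", "e"], "c": ["f", "g"]}
print(iterative_deepening(lambda n: TREE.get(n, []), "a", lambda n: n == "g"))
# ['a', 'c', 'g']
```

Each iteration re-expands the shallow levels, but as the next slide's table shows, that overhead is small because most nodes in a tree live at the deepest level.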

Cost of Iterative Deepening
b     ratio of ID to DFS
2     3
5     1.5
10    1.2
25    1.08
100   1.02

Speed
Assuming 10M nodes/sec and sufficient memory:

Problem         BFS nodes   BFS time   Iter. deep. nodes   Iter. deep. time
8 Puzzle        10^5        .01 sec    10^5                .01 sec
2x2x2 Rubik's   10^6        .2 sec     10^6                .2 sec
15 Puzzle       10^13       6 days     10^17               20k yrs
3x3x3 Rubik's   10^19       68k yrs    10^20               574k yrs
24 Puzzle       10^25       12B yrs    10^37               10^23 yrs

Why the difference (1Mx vs. 8x)? Rubik's has a higher branching factor; the 15 puzzle has greater depth and more duplicates.
Adapted from a Richard Korf presentation.

When to Use Iterative Deepening
N Queens? [Figure: queens placed on a chessboard]

Search Space with Uniform Structure

Search Space with Clustered Structure

Iterative Broadening Search
What if we know solutions lie at depth N? Then there is no sense in doing iterative deepening. Instead, increment a bound on the sum of sibling indices.

Forwards vs. Backwards
[Figure: searching from the start state vs. from the end state]

vs. Bidirectional

Problem: all these methods are slow (blind).
Solution: add guidance (a “heuristic estimate”) → “informed search” … coming next …

Problem Space / State Space (Recap)
Search through a Problem Space / State Space.
Input: set of states; operators [and costs]; start state; goal state [test].
Output: a path from the start to a state satisfying the goal test [may require a shortest path].
Notes: State-space search is basically a graph search problem. States are whatever you like! Operators are a compact description of possible state transitions (the edges of the graph); they may have different costs or all be unit cost. The output may be specified either as any path (reachability) or as a shortest path.

Symbolic Integration
E.g. ∫ x^2 e^x dx = e^x (x^2 − 2x + 2) + C
States? Formulae.
Operators? Integration by parts, integration by substitution, …

Concept Learning
Labeled training examples: <p1,blond,32,mc,ok>, <p2,red,47,visa,ok>, <p3,blond,23,cash,ter>, <p4,…
Output: f: <pn…> → {ok, ter}
As a search problem: what are the states, operators [and costs], start state, goal state (test), and output?
(Speaker notes, apparently from a cryptarithmetic example: state = partial assignment of letters to numbers; goal state = a total assignment that is a correct sum; operator = assign a number to a letter. We don't care about path length; in fact, all solution paths are exactly 8 letters long!)

Search Strategies
- Blind search
- Informed search: best-first search, A* search, iterative deepening A* search, local search
- Constraint satisfaction
- Adversary search

Best-first Search
A generalization of breadth-first search: a priority queue of nodes to be explored, with a cost function f(n) applied to each node.
- Add the initial state to the priority queue
- While the queue is not empty:
  - node = head(queue)
  - if goal?(node) then return node
  - add the children of node to the queue
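The pseudocode above can be sketched with Python's heap as the priority queue. The graph and the f-value table are made-up illustrations.

```python
import heapq

def best_first(successors, start, is_goal, f):
    """Best-first search: a priority queue of paths, ordered by f(node)."""
    frontier = [(f(start), [start])]
    while frontier:
        _, path = heapq.heappop(frontier)   # node with the smallest f value
        node = path[-1]
        if is_goal(node):
            return path
        for child in successors(node):
            heapq.heappush(frontier, (f(child), path + [child]))
    return None

# toy graph and f values (here, read from a table)
GRAPH = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
F = {"a": 3, "b": 2, "c": 1, "d": 0}
print(best_first(lambda n: GRAPH[n], "a", lambda n: n == "d", lambda n: F[n]))
# ['a', 'c', 'd']
```

The next slides specialize this one function: plugging in different f recovers breadth-first, Dijkstra, and A*.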

Old Friends
- Breadth-first = best-first with f(n) = depth(n)
- Dijkstra's algorithm = best-first with f(n) = g(n), i.e. the sum of edge costs from start to n; space bound (stores all generated nodes)

A* Search [Hart, Nilsson & Raphael 1968]
Best-first search with f(n) = g(n) + h(n), where:
- g(n) = sum of costs from start to n
- h(n) = estimate of the lowest-cost path from n to a goal, with h(goal) = 0
If h(n) is admissible (it underestimates the cost of any solution reachable from the node) and monotonic (f values increase from a node to its descendants: the triangle inequality), then A* is optimal.

Optimality of A*

Optimality Continued

A* Summary
Pros? Cons?

Iterative-Deepening A*
Like iterative-deepening depth-first, but the depth bound is modified to be an f-limit:
- Start with f-limit = h(start)
- Prune any node if f(node) > f-limit
- Next f-limit = minimum cost of any node pruned
(Figure: search tree with f-limit contours FL=15 and FL=21)
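The three bullets above can be sketched directly in code. This reuses the same toy weighted graph and heuristic table as the A* example; both are invented for illustration.

```python
def ida_star(graph, start, goal, h):
    """IDA*: depth-first search pruned at an f-limit; the next f-limit is
    the minimum f value among the nodes pruned in the previous iteration."""
    def search(path, g, limit):
        node = path[-1]
        f = g + h[node]
        if f > limit:
            return f                    # pruned: report the exceeding f value
        if node == goal:
            return path
        smallest = float("inf")
        for child, cost in graph.get(node, []):
            if child in path:           # avoid cycles
                continue
            result = search(path + [child], g + cost, limit)
            if isinstance(result, list):
                return result           # solution path found
            smallest = min(smallest, result)
        return smallest

    limit = h[start]                    # start with f-limit = h(start)
    while True:
        result = search([start], 0, limit)
        if isinstance(result, list):
            return result
        if result == float("inf"):
            return None                 # nothing left to prune: no solution
        limit = result                  # next f-limit = min pruned f value

GRAPH = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)], "B": [("G", 3)]}
H = {"S": 6, "A": 5, "B": 3, "G": 0}
print(ida_star(GRAPH, "S", "G", H))
# ['S', 'A', 'B', 'G']
```

Notice there is no priority queue anywhere: each iteration is a plain depth-first search, which is exactly the point made on the analysis slide that follows.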

IDA* Analysis
- Complete and optimal (à la A*)
- Space usage proportional to the depth of the solution
- Each iteration is DFS: no priority queue!
- Number of nodes expanded relative to A* depends on the number of unique values of the heuristic function:
  - In the 8 puzzle: few values, so close to the number A* expands
  - In traveling salesman: each f value is unique, so 1 + 2 + … + n = O(n^2), where n = the number of nodes A* expands; if n is too big for main memory, n^2 is too long to wait!
- Generates duplicate nodes in cyclic graphs