Problem Spaces & Search CSE 573

© Daniel S. Weld 2 Logistics Mailing list Reading Ch 4.2, Ch 6 Mini-Project I Partners Game Playing?

© Daniel S. Weld Topics: Agency, Problem Spaces, Search, Knowledge Representation, Reinforcement Learning, Inference, Planning, Supervised Learning, Logic-Based, Probabilistic, Multi-agent, NLP, Softbotics, Robotics

© Daniel S. Weld 4 Weak Methods “In the knowledge lies the power…” [Feigenbaum] But what if you have very little knowledge????

© Daniel S. Weld 5 Generate & Test As weak as it gets… Works on semi-decidable problems!

© Daniel S. Weld 6 Mass Spectrometry: relative abundance vs. m/z value; C12O4N2H20+

© Daniel S. Weld 7 Example: Fragment of 8-Puzzle Problem Space

© Daniel S. Weld 8 Problem Space / State Space. Input: a set of states, operators [and costs], a start state, a goal state [test]. Output: a path from the start state to a state satisfying the goal test [may require the shortest path].

© Daniel S. Weld 9 Example: Route Planning Input: Set of states Operators [and costs] Start state Goal state (test) Output:

© Daniel S. Weld 10 Example: N Queens Input: Set of states Operators [and costs] Start state Goal state (test) Output Q Q Q Q

© Daniel S. Weld 11 Ex: Blocks World. Input: set of states = partially specified plans; operators [and costs] = plan modification operators; start state = the null plan (no actions); goal state (test) = a plan which provably achieves the desired world configuration.

© Daniel S. Weld 12 Multiple Problem Spaces. Real World: states of the world (e.g. block configurations); actions (take one world-state to another). Robot's Head: Problem Space 1: PS states = models of world states, operators = models of actions. Problem Space 2: PS states = partially specified plans, operators = plan modification operators.

© Daniel S. Weld 13 Classifying Search. GUESSING ("Tree Search"): guess how to extend a partial solution to a problem; generates a tree of (partial) solutions; the leaves of the tree are either "failures" or represent complete solutions. SIMPLIFYING ("Inference"): infer new, stronger constraints by combining one or more constraints (without any "guessing"); example: X + 2Y = 3 and X + Y = 1, therefore Y = 2. WANDERING ("Markov chain"): perform a (biased) random walk through the space of (partial or total) solutions.

© Daniel S. Weld 14 Roadmap. Guessing – State Space Search: 1. BFS, 2. DFS, 3. Iterative Deepening, 4. Bidirectional, 5. Best-first search, 6. A*, 7. Game tree, 8. Davis-Putnam (logic), 9. Cutset conditioning (probability). Simplification – Constraint Propagation: 1. Forward Checking, 2. Path Consistency (Waltz labeling, temporal algebra), 3. Resolution, 4. "Bucket Algorithm". Wandering – Randomized Search: 1. Hillclimbing, 2. Simulated annealing, 3. Walksat, 4. Monte-Carlo Methods.

© Daniel S. Weld 15 Search Strategies v2: Blind Search (depth-first search, breadth-first search, iterative deepening search, iterative broadening search), Informed Search, Constraint Satisfaction, Adversary Search.

© Daniel S. Weld 16 Depth First Search (example tree with nodes a-h). Maintain a stack of nodes to visit. Evaluation: Complete? Not for infinite spaces. Time complexity? O(b^d). Space complexity? O(bd), where d = max depth of the tree.
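A minimal sketch of this stack-based loop in Python (the tree, node names, and successor function are invented for illustration, echoing the a-h tree on the slide):

# Depth-first tree search with an explicit stack.
def dfs(start, goal_test, successors):
    stack = [start]                      # nodes waiting to be visited
    while stack:
        node = stack.pop()               # most recently pushed node first
        if goal_test(node):
            return node
        stack.extend(successors(node))   # children get explored before siblings
    return None                          # space exhausted without reaching a goal

# Hypothetical tree over the slide's node names a-h.
tree = {'a': ['b', 'c'], 'b': ['d', 'e'], 'c': ['f', 'g'], 'e': ['h']}
print(dfs('a', lambda n: n == 'h', lambda n: tree.get(n, [])))  # -> 'h'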

© Daniel S. Weld 17 Breadth First Search (example tree with nodes a-h). Maintain a queue of nodes to visit. Evaluation: Complete? Yes (if the branching factor is finite). Time complexity? O(b^(d+1)). Space complexity? Also O(b^(d+1)), since the whole frontier is stored (see the next slide).
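The same loop with a FIFO queue in place of the stack gives breadth-first search; a sketch using the same invented example tree:

from collections import deque

# Breadth-first tree search: identical to DFS except the frontier is a FIFO queue.
def bfs(start, goal_test, successors):
    queue = deque([start])
    while queue:
        node = queue.popleft()           # oldest node first, so shallow nodes come out first
        if goal_test(node):
            return node
        queue.extend(successors(node))
    return None

tree = {'a': ['b', 'c'], 'b': ['d', 'e'], 'c': ['f', 'g'], 'e': ['h']}
print(bfs('a', lambda n: n == 'h', lambda n: tree.get(n, [])))  # -> 'h'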

© Daniel S. Weld 18 Memory a Limitation? Suppose: 2 GHz CPU; 1 GB main memory; 100 instructions / expansion; 5 bytes / node; 200,000 expansions / sec. Memory filled in 100 sec … < 2 minutes.

© Daniel S. Weld 19 Iterative Deepening Search: DFS with a depth limit; incrementally grow the limit (example tree with nodes a-h). Evaluation: Complete? Yes. Time complexity? O(b^d). Space complexity? O(d).
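A sketch of the limit-growing loop, again on the invented example tree:

# Iterative deepening: depth-limited DFS restarted with a growing limit.
def depth_limited(node, goal_test, successors, limit):
    if goal_test(node):
        return node
    if limit == 0:
        return None
    for child in successors(node):
        found = depth_limited(child, goal_test, successors, limit - 1)
        if found is not None:
            return found
    return None

def iterative_deepening(start, goal_test, successors, max_depth=50):
    for limit in range(max_depth + 1):   # grow the limit: 0, 1, 2, ...
        found = depth_limited(start, goal_test, successors, limit)
        if found is not None:
            return found
    return None

tree = {'a': ['b', 'c'], 'b': ['d', 'e'], 'c': ['f', 'g'], 'e': ['h']}
print(iterative_deepening('a', lambda n: n == 'h', lambda n: tree.get(n, [])))  # -> 'h'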

© Daniel S. Weld 20 Cost of Iterative Deepening. [Table lost in transcript: branching factor b vs. the ratio of nodes expanded by iterative deepening to a single depth-d DFS.]
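Since the values are missing, the standard back-of-the-envelope estimate conveys the point: counting each node at depth j as regenerated d - j + 1 times,

N_{ID} = \sum_{j=1}^{d} (d-j+1)\, b^{j} \approx b^{d}\,\frac{b^{2}}{(b-1)^{2}},
\qquad
N_{DFS} = \sum_{j=1}^{d} b^{j} \approx b^{d}\,\frac{b}{b-1},
\qquad
\frac{N_{ID}}{N_{DFS}} \approx \frac{b}{b-1},

so the overhead is roughly a factor of 2 for b = 2 and only about 11% for b = 10.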

© Daniel S. Weld 21 Speed (adapted from a Richard Korf presentation). [Table lost in transcript: for the 8 Puzzle, 2x2x2 Rubik's cube, 15 Puzzle, 3x3x3 Rubik's cube, and 24 Puzzle, it compares nodes and time for BFS vs. iterative deepening, assuming 10M nodes/sec and sufficient memory; times range from fractions of a second to thousands or billions of years.] Why the difference? Rubik's cube has a higher branching factor; the 15 puzzle has greater depth; # of duplicates (8x vs. 1Mx).

© Daniel S. Weld 22 When to Use Iterative Deepening N Queens? Q Q Q Q

© Daniel S. Weld 23 Search Space with Uniform Structure

© Daniel S. Weld 24 Search Space with Clustered Structure

© Daniel S. Weld 25 Iterative Broadening Search. What if we know the solutions lie at depth N? Then there is no point in iterative deepening; instead, increment a bound on the sum of sibling indices (example tree with nodes a-i); see the sketch below.
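One way to read "increment the sum of sibling indices" is as a discrepancy-style budget: a path is allowed only while the indices of the children chosen along it sum to at most some bound B, and B grows each round. A speculative sketch under that reading (function names and the example tree are invented):

# Bounded DFS: each child costs its index among its siblings; a path may spend at most `bound`.
def bounded_dfs(node, goal_test, successors, bound):
    if goal_test(node):
        return node
    for index, child in enumerate(successors(node)):
        if index > bound:                 # this child and all later siblings exceed the budget
            break
        found = bounded_dfs(child, goal_test, successors, bound - index)
        if found is not None:
            return found
    return None

def iterative_broadening(start, goal_test, successors, max_bound=20):
    for bound in range(max_bound + 1):    # widen the search each round: 0, 1, 2, ...
        found = bounded_dfs(start, goal_test, successors, bound)
        if found is not None:
            return found
    return None

tree = {'a': ['b', 'c'], 'b': ['d', 'e'], 'c': ['f', 'g'], 'e': ['h']}
print(iterative_broadening('a', lambda n: n == 'h', lambda n: tree.get(n, [])))  # -> 'h'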

© Daniel S. Weld 26 Forwards vs. Backwards start end

© Daniel S. Weld 27 vs. Bidirectional

© Daniel S. Weld 28 Problem: all these methods are slow (blind). Solution: add guidance (a "heuristic estimate") → "informed search" … coming next …

© Daniel S. Weld 29 Recap: Problem Space / State Space. Input: a set of states, operators [and costs], a start state, a goal state [test]. Output: a path from the start state to a state satisfying the goal test [may require the shortest path].

© Daniel S. Weld 30 Cryptarithmetic: SEND + MORE = MONEY. Input: set of states, operators [and costs], start state, goal state (test). Output:
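As a concrete (if naive) illustration of searching this space, a brute-force sketch that tries complete digit assignments; this is blind generate-and-test rather than a constraint formulation:

from itertools import permutations

# Blind search over digit assignments for SEND + MORE = MONEY.
def solve_send_more_money():
    letters = 'SENDMORY'                            # the 8 distinct letters
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a['S'] == 0 or a['M'] == 0:              # no leading zeros
            continue
        send  = int(''.join(str(a[c]) for c in 'SEND'))
        more  = int(''.join(str(a[c]) for c in 'MORE'))
        money = int(''.join(str(a[c]) for c in 'MONEY'))
        if send + more == money:
            return send, more, money
    return None

print(solve_send_more_money())                      # -> (9567, 1085, 10652)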

© Daniel S. Weld 31 Concept Learning. Labeled training examples <p4,… Output: f: … → {ok, ter}. Input: set of states, operators [and costs], start state, goal state (test). Output:

© Daniel S. Weld 32 Symbolic Integration. E.g. ∫ x^2 e^x dx = e^x (x^2 − 2x + 2) + C. Operators: integration by parts, integration by substitution, …
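For reference, the example follows from two applications of integration by parts:

\int x^{2} e^{x}\,dx
  = x^{2} e^{x} - 2\!\int x\,e^{x}\,dx
  = x^{2} e^{x} - 2\left(x\,e^{x} - \int e^{x}\,dx\right)
  = e^{x}\left(x^{2} - 2x + 2\right) + C.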

© Daniel S. Weld 33 Search Strategies v2: Blind Search, Informed Search (best-first search, A* search, iterative deepening A* search, local search), Constraint Satisfaction, Adversary Search.

© Daniel S. Weld 34 Best-first Search: a generalization of breadth-first search; a priority queue of nodes to be explored; a cost function f(n) applied to each node. Add the initial state to the priority queue; while the queue is not empty: node = head(queue); if goal?(node) then return node; add the children of node to the queue. A sketch follows.
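A sketch of that loop with an explicit priority queue; the cost function f is passed in, and the names are illustrative rather than from the slides:

import heapq, itertools

# Best-first search: always expand the frontier node with the smallest f value.
def best_first(start, goal_test, successors, f):
    counter = itertools.count()                # tie-breaker so the heap never compares states
    frontier = [(f(start), next(counter), start)]
    while frontier:
        _, _, node = heapq.heappop(frontier)   # node with the lowest f
        if goal_test(node):
            return node
        for child in successors(node):
            heapq.heappush(frontier, (f(child), next(counter), child))
    return None

With f(n) = depth(n) this behaves like breadth-first search, and with f(n) = path cost so far it behaves like Dijkstra's algorithm, matching the next slide.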

© Daniel S. Weld 35 Old Friends. Breadth-first = best-first with f(n) = depth(n). Dijkstra's Algorithm = best-first with f(n) = g(n), i.e. the sum of edge costs from start to n; space bound (stores all generated nodes).

© Daniel S. Weld 36 A* Search [Hart, Nilsson & Raphael 1968]: best-first search with f(n) = g(n) + h(n), where g(n) = sum of costs from start to n, h(n) = estimate of the lowest-cost path from n to the goal, and h(goal) = 0. If h(n) is admissible (and monotonic) then A* is optimal. Admissible: h underestimates the cost of any solution reachable from the node. Monotonic: f values increase from a node to its descendants (triangle inequality).
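A compact sketch on a toy weighted graph; the graph and the h values are invented, and the heuristic is admissible and monotonic for this graph:

import heapq

# A* search: expand by f(n) = g(n) + h(n); h must never overestimate the remaining cost.
def a_star(start, goal, neighbors, h):
    frontier = [(h(start), 0, start, [start])]        # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):    # keep only the cheapest known path to nxt
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Invented example: optimal route S -> A -> B -> G with cost 5.
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)], 'B': [('G', 2)], 'G': []}
h_vals = {'S': 4, 'A': 3, 'B': 2, 'G': 0}
print(a_star('S', 'G', lambda n: graph[n], lambda n: h_vals[n]))  # -> (5, ['S', 'A', 'B', 'G'])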

© Daniel S. Weld 37 European Example start end

© Daniel S. Weld 38 A* Example

© Daniel S. Weld 39 A* Example

© Daniel S. Weld 40 A* Example

© Daniel S. Weld 41 A* Example

© Daniel S. Weld 42 A* Example

© Daniel S. Weld 43 A* Example

© Daniel S. Weld 44 Optimality of A*

© Daniel S. Weld 45 Optimality Continued

© Daniel S. Weld 46 A* Summary Pros Cons

© Daniel S. Weld 47 Iterative-Deepening A*: like iterative-deepening depth-first, but the depth bound is modified to be an f-limit. Start with limit = h(start); prune any node if f(node) > f-limit; the next f-limit = the minimum cost of any node pruned. (Figure: example tree with nodes a-f, FL=15, FL=21.)
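A recursive sketch of that f-limited deepening, reusing the invented graph and heuristic from the A* sketch above:

import math

# IDA*: depth-first search pruned at an f-limit; the next limit is the smallest f that was pruned.
def ida_star(start, goal, neighbors, h):
    def search(node, g, limit, path):
        f = g + h(node)
        if f > limit:
            return f, None                       # pruned: report the f value that broke the limit
        if node == goal:
            return f, path
        next_limit = math.inf
        for nxt, cost in neighbors(node):
            if nxt in path:                      # avoid cycling in a graph
                continue
            t, found = search(nxt, g + cost, limit, path + [nxt])
            if found is not None:
                return t, found
            next_limit = min(next_limit, t)
        return next_limit, None

    limit = h(start)                             # start with f-limit = h(start)
    while True:
        limit, found = search(start, 0, limit, [start])
        if found is not None:
            return found
        if limit == math.inf:
            return None                          # everything pruned: no solution exists

graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)], 'B': [('G', 2)], 'G': []}
h_vals = {'S': 4, 'A': 3, 'B': 2, 'G': 0}
print(ida_star('S', 'G', lambda n: graph[n], lambda n: h_vals[n]))  # -> ['S', 'A', 'B', 'G']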

© Daniel S. Weld 48 IDA* Analysis. Complete & optimal (à la A*). Space usage is linear in the depth of the solution. Each iteration is DFS, so no priority queue! # nodes expanded relative to A* depends on the # of unique values of the heuristic function: in the 8 puzzle there are few values → close to the # A* expands; in the traveling salesman problem each f value is unique → 1 + 2 + … + n = O(n^2), where n = # nodes A* expands (and if n is too big for main memory, n^2 is too long to wait!). Generates duplicate nodes in cyclic graphs.