Problem Solving and Searching

CS 171/271 (Chapter 3)
Some text and images in these slides were drawn from Russell & Norvig's published material.

Problem Solving Agent Function

Problem Solving Agent
- The agent finds an action sequence to achieve a goal.
- This requires problem formulation:
  - determine the goal
  - formulate the problem based on that goal
- The agent then searches for an action sequence that solves the problem.
- The actions are carried out, ignoring percepts during that period (see the sketch below).
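
A minimal sketch of this agent loop, assuming the caller supplies a formulate_problem function (percept to problem) and a search function (problem to action list or None); both names are illustrative, not part of the slides.

```python
# Sketch of a problem-solving agent: formulate a problem, search for a plan,
# then execute the plan one action at a time while ignoring new percepts.
# formulate_problem and search are caller-supplied (illustrative) functions.

def make_problem_solving_agent(formulate_problem, search):
    plan = []                                    # remaining action sequence

    def agent(percept):
        nonlocal plan
        if not plan:                             # no plan left: formulate and search
            problem = formulate_problem(percept)
            plan = search(problem) or []         # empty list if search fails
        return plan.pop(0) if plan else None     # next action, or None if no solution

    return agent
```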

Problem
- Initial state
- Possible actions / successor function
- Goal test
- Path cost function
* The state space can be derived from the initial state and the successor function (see the sketch below).
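
As a rough sketch, the four components above can be captured in a small Python interface; the class and method names are illustrative, not from the slides.

```python
# A problem bundles: initial state, successor function, goal test, path cost.

class Problem:
    def __init__(self, initial_state):
        self.initial_state = initial_state

    def successors(self, state):
        """Yield (action, next_state, step_cost) triples for every valid action."""
        raise NotImplementedError

    def goal_test(self, state):
        """Return True if state satisfies the goal."""
        raise NotImplementedError

    def path_cost(self, cost_so_far, state, action, next_state):
        """Default path cost: every action costs 1."""
        return cost_so_far + 1
```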

Example: Vacuum World
- The environment consists of two squares, A (left) and B (right).
- Each square may or may not be dirty.
- An agent may be in A or B.
- An agent can perceive whether a square is dirty or not.
- An agent may move left, move right, suck dirt (or do nothing).
Question: is this a complete PEAS description?

Vacuum World Problem
- Initial state: a configuration describing the agent's location and the dirt status of A and B
- Successor function: R, L, or S leads to a different configuration
- Goal test: check whether A and B are both clean
- Path cost: number of actions
A sketch of this formulation follows.
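
A sketch of the formulation above, reusing the hypothetical Problem class; a state is a (location, dirt_A, dirt_B) tuple.

```python
# Vacuum world: two squares A and B, agent in one of them, each possibly dirty.

class VacuumProblem(Problem):
    def successors(self, state):
        loc, dirt_a, dirt_b = state
        yield ("L", ("A", dirt_a, dirt_b), 1)        # moving left always ends in A
        yield ("R", ("B", dirt_a, dirt_b), 1)        # moving right always ends in B
        if loc == "A":
            yield ("S", ("A", False, dirt_b), 1)     # suck dirt in A
        else:
            yield ("S", ("B", dirt_a, False), 1)     # suck dirt in B

    def goal_test(self, state):
        _, dirt_a, dirt_b = state
        return not dirt_a and not dirt_b             # both squares clean

# Example: agent starts in A with both squares dirty.
problem = VacuumProblem(("A", True, True))
```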

State Space
2 possible locations x 2 x 2 dirt combinations (A clean/dirty, B clean/dirty) = 8 states

Sample Problem and Solution
Initial state: state 2
Action sequence: Suck, Left, Suck (brings us to which state?)

States and Successors

Example: 8-Puzzle
Initial state: as shown
Actions / successor function? Goal test? Path cost?

Example: 8-Queens Problem
Position 8 queens on a chessboard so that no queen attacks any other queen.
Initial state? Successor function? Goal test? Path cost?

Example: Route Finding
Given a set of locations, links (with values) between locations, an initial location, and a destination, find the best route.
Initial state? Successor function? Goal test? Path cost?

Some Considerations
- The environment ought to be static, deterministic, and observable. Why?
- If some of the above properties are relaxed, what happens?
- Toy problems versus real-world problems

Searching for Solutions
- Search through the state space.
- Build a search tree rooted at the initial state.
- A node in the tree is expanded by applying the successor function for each valid action.
- Child nodes are generated with a different path cost and depth.
- Return a solution once a node with a goal state is reached.

Tree-Search Algorithm

fringe: an initially-empty container holding the nodes waiting to be expanded
What is returned? A sketch of the loop follows.
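
A minimal sketch of that loop, reusing the hypothetical Problem interface above. The fringe's append/pop discipline determines the strategy (a plain Python list behaves like a stack, i.e., depth-first); the function returns the goal node, from which the action sequence can be read off through parent links, or None on failure.

```python
from collections import namedtuple

# A search-tree node: state, parent node, generating action, path cost, depth.
Node = namedtuple("Node", "state parent action path_cost depth")

def tree_search(problem, fringe):
    fringe.append(Node(problem.initial_state, None, None, 0, 0))
    while fringe:
        node = fringe.pop()                          # strategy chooses which node
        if problem.goal_test(node.state):
            return node                              # goal reached
        for action, next_state, cost in problem.successors(node.state):
            fringe.append(Node(next_state, node, action,
                               node.path_cost + cost, node.depth + 1))
    return None                                      # fringe exhausted: failure
```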

Search Strategy
- A strategy specifies the order of node expansion.
- Uninformed search strategies: no additional information beyond states and successors.
- Informed or heuristic search: expands "more promising" states.

Evaluating Strategies
- Completeness: does it always find a solution if one exists?
- Time complexity: number of nodes generated
- Space complexity: maximum number of nodes in memory
- Optimality: does it always find a least-cost solution?

Time and Space Complexity
Expressed in terms of:
- b: branching factor (maximum number of successors of a node; depends on the possible actions)
- d: depth of the shallowest goal node
- m: maximum path length in the state space

Uninformed Search Strategies
- Breadth-First Search
- Uniform-Cost Search
- Depth-First Search
- Depth-Limited Search
- Iterative Deepening Search

Breadth-First Search
- The fringe is a regular first-in-first-out queue.
- Start with the initial state; then process the successors of the initial state, followed by their successors, and so on.
- Shallow nodes are expanded before deeper nodes.
- Complete
- Optimal (if path cost = node depth)
- Time complexity: O(b + b^2 + b^3 + ... + b^(d+1))
- Space complexity: same
A sketch follows.
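
A sketch of BFS written directly with a FIFO queue; it tracks the action sequence alongside each state instead of using parent links, and (like the slides so far) does no repeated-state checking.

```python
from collections import deque

def breadth_first_search(problem):
    frontier = deque([(problem.initial_state, [])])   # (state, actions so far)
    while frontier:
        state, actions = frontier.popleft()           # FIFO: shallowest node first
        if problem.goal_test(state):
            return actions
        for action, next_state, _ in problem.successors(state):
            frontier.append((next_state, actions + [action]))
    return None
```

Applied to the vacuum-world sketch above, breadth_first_search(VacuumProblem(("A", True, True))) returns a shortest plan, here ['S', 'R', 'S'].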

Uniform-Cost Search
- Prioritize nodes with the least path cost (the fringe is a priority queue).
- If path cost = number of steps, this degenerates to BFS.
- Complete and optimal, as long as zero step costs are handled properly.
- The route-finding problem, for example, has varying step costs.
- Dijkstra's shortest-path algorithm <-> UCS
- Time and space complexity?
A sketch follows.
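
A sketch of UCS with a heapq-based priority queue ordered by path cost g(n); a running counter breaks ties so states never get compared directly.

```python
import heapq
import itertools

def uniform_cost_search(problem):
    counter = itertools.count()                       # tie-breaker for equal costs
    frontier = [(0, next(counter), problem.initial_state, [])]
    while frontier:
        cost, _, state, actions = heapq.heappop(frontier)   # cheapest path first
        if problem.goal_test(state):
            return actions
        for action, next_state, step_cost in problem.successors(state):
            heapq.heappush(frontier, (cost + step_cost, next(counter),
                                      next_state, actions + [action]))
    return None
```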

Depth-First Search
- The fringe is a stack (last-in-first-out container).
- Go as deep as possible and then backtrack.
- Often implemented using recursion.
- Not complete and might not terminate.
- Time complexity: O(b^m)
- Space complexity: O(bm)
A sketch follows.
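
A recursive sketch of DFS; as noted above, it is not complete and may fail to terminate when the state space contains cycles or infinite paths (the vacuum world's reversible moves are exactly such a case).

```python
def depth_first_search(problem, state=None, actions=None):
    state = problem.initial_state if state is None else state
    actions = [] if actions is None else actions
    if problem.goal_test(state):
        return actions
    for action, next_state, _ in problem.successors(state):
        result = depth_first_search(problem, next_state, actions + [action])
        if result is not None:
            return result                             # first solution found below
    return None                                       # dead end: backtrack
```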

Depth-Limited Search
- DFS with a pre-determined depth limit l.
- Guaranteed to terminate.
- Still incomplete; worse, we might choose l < d (the depth of the shallowest goal node).
- The depth limit l can be based on the problem definition, e.g., the graph diameter in the route-finding problem.
- Time and space complexity depend on l (O(b^l) and O(bl), respectively).
A sketch follows.
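
A sketch of depth-limited search as recursive DFS with a cutoff; for simplicity it does not distinguish a cutoff from genuine failure, which the textbook version does.

```python
def depth_limited_search(problem, limit, state=None, actions=None):
    state = problem.initial_state if state is None else state
    actions = [] if actions is None else actions
    if problem.goal_test(state):
        return actions
    if len(actions) >= limit:
        return None                                   # depth limit l reached: cut off
    for action, next_state, _ in problem.successors(state):
        result = depth_limited_search(problem, limit, next_state, actions + [action])
        if result is not None:
            return result
    return None
```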

Iterative Deepening Search
- Depth-limited search for l = 0, 1, 2, ...
- Stops when the goal is found (when l reaches d).
- Complete and optimal (if path cost = node depth).
- Time and space complexity?
A sketch follows.
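
A sketch of IDS built on the depth-limited sketch above; because that sketch cannot tell a cutoff from a failure, an arbitrary max_limit guards against looping forever on unsolvable problems.

```python
def iterative_deepening_search(problem, max_limit=50):
    for limit in range(max_limit + 1):                # l = 0, 1, 2, ...
        result = depth_limited_search(problem, limit)
        if result is not None:
            return result                             # found at depth <= limit
    return None
```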

Comparing Search Strategies
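
For reference, the standard comparison from Russell & Norvig can be summarized roughly as follows (b: branching factor, d: depth of the shallowest goal, m: maximum depth, l: depth limit, C*: optimal solution cost, e: minimum step cost):
- BFS: complete (for finite b), optimal for equal step costs, time and space O(b^(d+1))
- UCS: complete and optimal for step costs >= e > 0, time and space O(b^(1 + floor(C*/e)))
- DFS: not complete, not optimal, time O(b^m), space O(bm)
- DLS: not complete, not optimal, time O(b^l), space O(bl)
- IDS: complete (for finite b), optimal for equal step costs, time O(b^d), space O(bd)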

Bi-directional Search
- Run two simultaneous searches: one forward from the initial state and one backward from the goal state.
- Stops when a node in one search appears in the fringe of the other search.
- Rationale: two "half-searches" are quicker than a full search (b^(d/2) + b^(d/2) is much smaller than b^d).
- Caveat: it is not always easy to search backwards.

Caution: Repeated States
- Can occur especially in environments where actions are reversible.
- Introduce the possibility of infinite search trees.
- Time complexity blows up even with fixed depth limits.
- Solution: detect repeated states by storing node history (see the sketch below).
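
A sketch of a graph-search variant of BFS that avoids repeated states with an explored set (states must be hashable, as the tuples in the vacuum-world sketch are).

```python
from collections import deque

def graph_breadth_first_search(problem):
    frontier = deque([(problem.initial_state, [])])
    explored = {problem.initial_state}                # states seen so far
    while frontier:
        state, actions = frontier.popleft()
        if problem.goal_test(state):
            return actions
        for action, next_state, _ in problem.successors(state):
            if next_state not in explored:            # skip repeated states
                explored.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None
```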

Summary
- A problem is defined by its initial state, a successor function, a goal test, and a path cost function.
- The problem's environment <-> state space.
- Different strategies drive different tree-search algorithms that return a solution (an action sequence) to the problem.
- Coming up: informed search strategies.