Presentation transcript:

CS 4700: Foundations of Artificial Intelligence. Prof. Carla P. Gomes. Module: Search I (Reading: R&N, Chapter 3)

Outline
– Problem-solving agents
– Problem types
– Problem formulation
– Example problems
– Basic search algorithms

Problem-solving agents
Problem-solving agents are goal-based agents.
1. Goal formulation → set of (desirable) world states.
2. Problem formulation → what actions and states to consider, given a goal.
3. Search for a solution → given the problem, search for a solution: a sequence of actions that achieves the goal.
4. Execution of the solution.

Example: Romania
On holiday in Romania; currently in Arad. Flight leaves tomorrow from Bucharest.
– Formulate goal: be in Bucharest
– Formulate problem: states: various cities; actions: drive between cities
– Find solution: sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
– Execution: drive from Arad to Bucharest according to the solution
Initial state: Arad. Goal state: Bucharest.
Environment: fully observable (the map), deterministic, and the agent knows the effect of each action. Is this really the case?
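A minimal Python sketch of the route-finding pieces of this example (the dictionary below covers only a few roads around the Arad-to-Bucharest routes, with distances taken from the usual R&N map; it is illustrative rather than a complete formulation):

# Partial Romania road map: road distances in km.
roads = {
    ("Arad", "Zerind"): 75, ("Arad", "Sibiu"): 140, ("Arad", "Timisoara"): 118,
    ("Sibiu", "Fagaras"): 99, ("Sibiu", "Rimnicu Vilcea"): 80,
    ("Fagaras", "Bucharest"): 211, ("Rimnicu Vilcea", "Pitesti"): 97,
    ("Pitesti", "Bucharest"): 101,
}

def path_cost(path):
    # Sum the step costs along a sequence of cities; roads are two-way.
    return sum(roads.get((a, b), roads.get((b, a), float("inf")))
               for a, b in zip(path, path[1:]))

print(path_cost(["Arad", "Sibiu", "Fagaras", "Bucharest"]))  # 450

The solution found above (Arad, Sibiu, Fagaras, Bucharest) costs 450 km; a cheaper path via Rimnicu Vilcea and Pitesti costs 418 km, which is why path cost matters later.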

Problem-solving agents (Goal-based Agents)

Problem types
– Deterministic, fully observable → single-state problem. The agent knows exactly which state it will be in; the solution is a sequence of actions.
– Non-observable → sensorless problem (conformant problem). The agent may have no idea where it is (no sensors); it reasons in terms of belief states; the solution is a sequence of actions (the effects of actions are certain).
– Nondeterministic and/or partially observable → contingency problem. Actions are uncertain; percepts provide new information about the current state (an adversarial problem if the uncertainty comes from other agents).
– Unknown state space and uncertain action effects → exploration problem.
(Listed in order of increasing complexity.)

Example: vacuum world
Single-state, start in #5. Solution?

Example: vacuum world
Single-state, start in #5. Solution? [Right, Suck]
Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}. Solution? (Goal: state 7 or 8.)

Example: vacuum world
Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck]
Contingency:
– Nondeterministic: Suck may dirty a clean carpet.
– Partially observable: location, dirt at current location.
– Percept: [L, Clean], i.e., start in #5 or #7. Solution?

Example: vacuum world
Sensorless, start in {1,2,3,4,5,6,7,8}; e.g., Right goes to {2,4,6,8}. Solution? [Right, Suck, Left, Suck]
Contingency:
– Nondeterministic: Suck may dirty a clean carpet.
– Partially observable: location, dirt at current location.
– Percept: [L, Clean], i.e., start in #5 or #7. Solution? [Right, if dirt then Suck]

Single-state problem formulation
A problem is defined by four items:
1. Initial state, e.g., "at Arad".
2. Actions available to the agent. Common description: a successor function S(x) = set of (action, successor state) pairs, e.g., S(Arad) = {⟨Arad → Zerind, Zerind⟩, ⟨Arad → Sibiu, Sibiu⟩, …}.
Note: the state space is a graph in which the nodes are states and the arcs between nodes are actions. The initial state plus the successor function implicitly define the state space. A path in a state space is a sequence of states connected by a sequence of actions.

Single-state problem formulation (continued)
3. Goal test, which determines whether a given state is a goal state. It can be explicit (e.g., x = "at Bucharest") or implicit (e.g., Checkmate(x)).
4. Path cost, which assigns a numeric cost to each path, reflecting the agent's performance measure, e.g., sum of distances, number of actions executed, etc. The step cost of taking action a to go from state x to state y is c(x,a,y); step costs are assumed to be ≥ 0 and additive.
A solution is a path (sequence of actions) leading from the initial state to a goal state. An optimal solution has the lowest path cost among all solutions.
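These four items map directly onto a small problem interface. A minimal sketch, assuming a route-finding problem on the Romania map (the class and method names are illustrative, not from the slides, and the road_map shown lists only Arad's neighbours):

class RouteProblem:
    # A single-state problem: initial state, successor function, goal test, step cost.
    def __init__(self, initial, goal, road_map):
        self.initial, self.goal, self.road_map = initial, goal, road_map

    def successors(self, city):
        # S(x): set of (action, resulting state) pairs.
        return {("go(%s)" % nbr, nbr) for nbr, _ in self.road_map.get(city, [])}

    def goal_test(self, city):
        # Explicit goal test; an implicit test would be an arbitrary predicate.
        return city == self.goal

    def step_cost(self, city, action, nbr):
        # c(x, a, y): assumed non-negative and additive along a path.
        return dict(self.road_map[city])[nbr]

romania = RouteProblem("Arad", "Bucharest",
                       {"Arad": [("Zerind", 75), ("Sibiu", 140), ("Timisoara", 118)]})

The initial state plus the successor function is all a search algorithm needs; the full state-space graph is never built explicitly.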

Selecting a state space
The real world is absurdly complex → the state space must be abstracted for problem solving.
– (Abstract) state → set of real states.
– (Abstract) action → complex combination of real actions. E.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc. For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind".
– (Abstract) solution → set of real paths that are solutions in the real world.
Each abstract action should be "easier" than the original problem.

Vacuum-cleaner world
Percepts: location and contents, e.g., [A, Dirty]. Actions: Left, Right, Suck.
states? actions? goal test? path cost?

Vacuum world state space graph
states? The agent is in one of n (= 2) locations, each of which may or may not contain dirt: n × 2^n states.
actions? Left, Right, Suck
goal test? no dirt at any location
path cost? 1 per action
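A quick way to see the n × 2^n count is to enumerate the states explicitly; a small sketch for the two-location world (the tuple representation is an assumption made here for illustration):

from itertools import product

locations = ("A", "B")                      # n = 2 locations
# A state records where the agent is and which locations are dirty.
states = [(agent_at, dirt)                  # dirt = (dirt in A?, dirt in B?)
          for agent_at in locations
          for dirt in product((True, False), repeat=len(locations))]
print(len(states))                          # n * 2**n = 2 * 4 = 8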

Example: The 8-puzzle
states? actions? goal test? path cost?

Example: The 8-puzzle
states? locations of the tiles
actions? move the blank left, right, up, down
goal test? = goal state (given)
path cost? 1 per move
[Note: finding an optimal solution for the n-puzzle family is NP-hard]
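A sketch of the corresponding successor function, representing a state as a tuple of 9 entries with 0 for the blank (this particular encoding and goal layout are assumptions, not from the slides):

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)          # 0 marks the blank

def successors(state):
    # Yield (action, new_state) pairs obtained by sliding the blank.
    i = state.index(0)
    row, col = divmod(i, 3)
    candidates = {"up": (row > 0, -3), "down": (row < 2, +3),
                  "left": (col > 0, -1), "right": (col < 2, +1)}
    for action, (legal, delta) in candidates.items():
        if legal:
            new = list(state)
            new[i], new[i + delta] = new[i + delta], new[i]
            yield action, tuple(new)

def goal_test(state):
    return state == GOAL                    # path cost: 1 per move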

Example: robotic assembly
states? real-valued coordinates of the robot joint angles and of the parts of the object to be assembled
actions? continuous motions of the robot joints
goal test? complete assembly
path cost? time to execute

Real World Problems
– Route-finding problems; touring problems (end up in the same city) → Traveling Salesman Problem
– VLSI layout → positioning millions of components and connections on a chip to minimize area, circuit delays, etc.
– Robot navigation
– Automatic assembly of complex objects
– Protein design: a sequence of amino acids that will fold into the 3-dimensional protein with the right properties.

Now that we have formulated the problem, how do we find a solution? → Search through the state space. We will consider search techniques that use an explicit search tree that is generated by the initial state + successor function:
initialize (initial node)
loop:
  choose a node for expansion according to the strategy
  goal node? → done
  expand the node with the successor function

Search is a central topic in AI. It originated with Newell and Simon's work on problem solving (Human Problem Solving, 1972). Automated reasoning is a natural search task. More recently: given that almost all AI formalisms (planning, learning, etc.) are NP-complete or worse, some form of search is generally unavoidable (i.e., no smarter algorithm is available).

Tree search algorithms
Basic idea: offline, simulated exploration of the state space by generating successors of already-explored states (a.k.a. expanding states).

Tree search example (successive expansions of the search tree, shown over three slides)

Implementation: states vs. nodes
A state is a (representation of a) physical configuration. A node is a data structure constituting part of a search tree; it includes a state, the tree parent node, the action (applied to the parent), the path cost g(x) from the initial state to the node, and the depth.
The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.
The fringe is the collection of nodes that have been generated but not yet expanded (a queue; different types of queues insert elements in different orders). Each node of the fringe is a leaf node.
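A node can be sketched as a small record carrying exactly the fields listed above (field and helper names are illustrative):

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # the problem state this node wraps
    parent: Optional["Node"] = None  # tree parent node
    action: Any = None               # action applied to the parent
    g: float = 0.0                   # path cost from the initial state
    depth: int = 0

def expand(node, successors, step_cost):
    # Generate child nodes using the problem's successor function.
    return [Node(s, node, a, node.g + step_cost(node.state, a, s), node.depth + 1)
            for a, s in successors(node.state)]

def solution(node):
    # Walk parent links back to the root to recover the action sequence.
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))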

Implementation: general tree search
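The general tree-search figure from this slide is not reproduced in the transcript; a minimal sketch of the same loop, using plain (state, action-path) pairs rather than full Node objects for brevity (the flag name depth_first is an assumption):

from collections import deque

def tree_search(initial, successors, goal_test, depth_first=False):
    # Generic tree search; the fringe is a queue, and the pop discipline
    # (FIFO vs. LIFO) is what determines the search strategy.
    fringe = deque([(initial, [])])
    while fringe:
        state, path = fringe.pop() if depth_first else fringe.popleft()
        if goal_test(state):
            return path                          # sequence of actions to the goal
        for action, nxt in successors(state):    # expand with the successor function
            fringe.append((nxt, path + [action]))
    return None                                  # fringe exhausted: failure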

Search strategies
A search strategy is defined by picking the order of node expansion. Strategies are evaluated along the following dimensions:
– completeness: does it always find a solution if one exists?
– time complexity: number of nodes generated
– space complexity: maximum number of nodes in memory
– optimality: does it always find a least-cost solution?
Time and space complexity are measured in terms of:
– b: maximum branching factor of the search tree
– d: depth of the least-cost solution
– m: maximum depth of the state space (may be ∞)

Uninformed search strategies
Uninformed (blind) search strategies use only the information available in the problem definition:
– Breadth-first search
– Uniform-cost search
– Depth-first search
– Depth-limited search
– Iterative deepening search
– Bidirectional search

Breadth-first search
Expand the shallowest unexpanded node. Implementation: the fringe is a FIFO queue, i.e., new successors go at the end.

Properties of breadth-first search
Complete? Yes (if b is finite).
Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d - 1) = O(b^(d+1))
Space? O(b^(d+1)) (keeps every node in memory)
Optimal? Yes (if all step costs are identical).
Space is the bigger problem (more so than time).
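A minimal standalone sketch of breadth-first search with a FIFO fringe; like the slides, it is a tree search with no repeated-state check, and successors(state) is assumed to yield plain successor states:

from collections import deque

def breadth_first_search(initial, goal_test, successors):
    # Expand the shallowest unexpanded node first (FIFO fringe).
    fringe = deque([(initial, [initial])])
    while fringe:
        state, path = fringe.popleft()          # shallowest node
        if goal_test(state):
            return path
        for nxt in successors(state):
            fringe.append((nxt, path + [nxt]))  # new successors go at the end
    return None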

Uniform-cost search
Expand the least-cost unexpanded node (instead of the shallowest). Implementation: fringe = queue ordered by path cost. (Equivalent to breadth-first search if all step costs are equal.)
Complete? Yes, if b is finite and every step cost ≥ ε (> 0).
Time? # of nodes with g ≤ cost of the optimal solution C*: O(b^(1+⌊C*/ε⌋))
Space? # of nodes with g ≤ cost of the optimal solution: O(b^(1+⌊C*/ε⌋))
Optimal? Yes: nodes are expanded in increasing order of g(n).
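A sketch of uniform-cost search with the fringe as a priority queue ordered by g; here successors(state) is assumed to yield (step_cost, next_state) pairs, and a counter breaks ties so heap entries never compare states directly:

import heapq
from itertools import count

def uniform_cost_search(initial, goal_test, successors):
    # Expand the node with the lowest path cost g first.
    tie = count()
    fringe = [(0.0, next(tie), initial, [initial])]   # (g, tiebreak, state, path)
    while fringe:
        g, _, state, path = heapq.heappop(fringe)
        if goal_test(state):
            return g, path            # nodes are expanded in increasing order of g
        for cost, nxt in successors(state):
            heapq.heappush(fringe, (g + cost, next(tie), nxt, path + [nxt]))
    return None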

Depth-first search
Expand the deepest unexpanded node. Implementation: the fringe is a LIFO queue (a stack), i.e., new successors go at the front.

Properties of depth-first search
Complete? No: fails in infinite-depth spaces and in spaces with loops. Modify it to avoid repeated states along the path → complete in finite spaces.
Time? O(b^m): terrible if m is much larger than d; but if solutions are dense, it may be much faster than breadth-first.
Space? O(bm), i.e., linear space!
Optimal? No.
(m = maximum depth of the state space; d = depth of the shallowest (least-cost) solution.)
Note: in backtracking search only one successor is generated at a time → only O(m) memory is needed; also, the successor is a modification of the current state, so we must be able to undo each modification.
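A sketch of depth-first search with the fringe as a LIFO stack (a Python list). It includes the modification mentioned above, checking for repeated states along the current path, so it is complete in finite spaces; successors(state) is again assumed to yield plain successor states:

def depth_first_search(initial, goal_test, successors):
    # Expand the deepest unexpanded node first (LIFO fringe).
    fringe = [(initial, [initial])]            # a list used as a stack
    while fringe:
        state, path = fringe.pop()             # deepest node
        if goal_test(state):
            return path
        for nxt in successors(state):
            if nxt not in path:                # avoid repeated states along the path
                fringe.append((nxt, path + [nxt]))
    return None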

Depth-limited search
Depth-limited search = depth-first search with depth limit l, i.e., nodes at depth l have no successors. Recursive implementation (see the sketch below). Clearly incomplete; why? How can we make it complete?
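The recursive implementation referred to above is not reproduced in the transcript; a minimal sketch in the spirit of R&N's RECURSIVE-DLS, returning the path on success, None on failure, and the string "cutoff" when the depth limit (rather than a dead end) stopped the search:

def depth_limited_search(state, goal_test, successors, limit):
    # Depth-first search that treats nodes at depth `limit` as having no successors.
    if goal_test(state):
        return [state]
    if limit == 0:
        return "cutoff"                        # stopped by the limit, not a dead end
    cutoff_occurred = False
    for nxt in successors(state):
        result = depth_limited_search(nxt, goal_test, successors, limit - 1)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return [state] + result
    return "cutoff" if cutoff_occurred else None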

Iterative deepening search
Combines the good memory requirements of depth-first search with the completeness of breadth-first search when the branching factor is finite, and its optimality when the path cost is a non-decreasing function of the depth of the node. Why would one do that?

Iterative deepening search: successive runs of depth-limited search with limits l = 0, 1, 2, 3 (one slide per limit).

Iterative deepening search
Number of nodes generated in an iterative deepening search to depth d with branching factor b:
N_IDS = (d+1)·1 + d·b + (d-1)·b^2 + … + 3·b^(d-2) + 2·b^(d-1) + 1·b^d
Number of nodes generated in a breadth-first search with branching factor b:
N_BFS = b + b^2 + … + b^(d-1) + b^d + (b^(d+1) - b)
For b = 10, d = 5:
N_BFS = 10 + 100 + 1,000 + 10,000 + 100,000 + 999,990 = 1,111,100
N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
It looks quite wasteful; is it? (Note: BFS generates some nodes at depth d+1.)
Iterative deepening is the preferred uninformed search method when there is a large search space and the depth of the solution is not known.

Properties of iterative deepening search
Complete? Yes (if b is finite).
Time? d·b + (d-1)·b^2 + … + b^d = O(b^d)
Space? O(bd)
Optimal? Yes, if step costs are identical.
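A sketch of iterative deepening: keep increasing the depth limit until a depth-limited search succeeds rather than being cut off. The depth-limited helper is inlined here (in compact form) so the snippet stands alone; successors(state) is again assumed to yield plain successor states:

from itertools import count

def iterative_deepening_search(initial, goal_test, successors):
    # Run depth-limited search with limits 0, 1, 2, ... until a solution appears.
    def dls(state, limit, path):
        if goal_test(state):
            return path
        if limit == 0:
            return "cutoff"
        cut = False
        for nxt in successors(state):
            result = dls(nxt, limit - 1, path + [nxt])
            if result == "cutoff":
                cut = True
            elif result is not None:
                return result
        return "cutoff" if cut else None
    for limit in count():
        result = dls(initial, limit, [initial])
        if result != "cutoff":
            return result      # a solution path, or None if a finite space has no goal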

Bidirectional Search
Simultaneously search forward from the start and backward from the goal; stop when the two searches meet.
If the branching factor is b in each direction, with the solution at depth d → only 2·b^(d/2) = O(b^(d/2)) nodes, much less than b^d.
Checking a node for membership in the other search tree can be done in constant time (hash table).
Key limitations:
– Space: O(b^(d/2)).
– How do we search backwards (e.g., in Chess)? → lots of states satisfy the goal.
– The predecessor of a node must be easily computable (i.e., actions are easily reversible).
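A sketch of bidirectional breadth-first search on an undirected graph (so predecessors are just neighbours and actions are trivially reversible); membership in the other frontier is checked with dictionary (hash-table) lookups, as noted above. The neighbors(state) function and the layer-by-layer alternation are assumptions of this sketch:

from collections import deque

def bidirectional_search(start, goal, neighbors):
    # Alternate BFS layers from start and from goal; stop when the frontiers meet.
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def join(meet):
        # Stitch the two half-paths together at the meeting state.
        fwd, node = [], meet
        while node is not None:
            fwd.append(node)
            node = parents_f[node]
        fwd.reverse()
        node = parents_b[meet]
        while node is not None:
            fwd.append(node)
            node = parents_b[node]
        return fwd

    while frontier_f and frontier_b:
        for frontier, parents, others in ((frontier_f, parents_f, parents_b),
                                          (frontier_b, parents_b, parents_f)):
            for _ in range(len(frontier)):        # expand one BFS layer
                state = frontier.popleft()
                for nxt in neighbors(state):
                    if nxt not in parents:
                        parents[nxt] = state
                        if nxt in others:         # the two searches have met
                            return join(nxt)
                        frontier.append(nxt)
    return None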

Summary of Complexity Results (see the table on page 81 of R&N).

Repeated states
Failure to detect repeated states can turn a linear problem into an exponential one! This arises in problems where actions are reversible (e.g., route-finding problems or sliding-block puzzles).
– Don't return to the parent node: never generate a successor equal to the node's parent.
– Don't allow cycles: never generate a successor equal to any of the node's ancestors.
– Don't revisit any state: keep every visited state in memory! O(b^d) space.

Summary
Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored.
There is a variety of uninformed search strategies.
Iterative deepening search uses only linear space and not much more time than the other uninformed algorithms.
Avoid repeated states.