Artificial Intelligence Search Problem

Search is a problem-solving technique that explores successive stages in the problem-solving process.

Search Space We need to define a space to search in to find a solution to the problem. To successfully design and implement a search algorithm, we must be able to analyze and predict its behavior.

State Space Search One tool for analyzing the search space is to represent it as a state-space graph; graph theory then lets us analyze both the problem and its solution.

Graph Theory A graph consists of a set of nodes and a set of arcs (links) connecting pairs of nodes. [Figure: two islands and two rivers illustrating nodes and the arcs that connect them.]

Graph structure Nodes = {a, b, c, d, e}, Arcs = {(a,b), (a,d), (b,c), …}. [Figure: the graph drawn with nodes a, b, c, d, e.]

Tree A tree is a graph in which any two nodes have at most one path between them. A tree has a root. [Figure: a tree rooted at a, with children b, c, d and leaves e through j.]

Space representation In the state-space representation of a problem, the nodes of a graph correspond to partial problem-solution states and the arcs correspond to steps in the problem-solving process.

Example Consider the game of Tic-Tac-Toe.

A simple example: traveling on a graph. [Figure: a graph with nodes A, B, C, D, E, F; A is the start state and F is the goal state.]

Search tree [Figure: a partial search tree rooted at (state = A, cost = 0), which expands to (state = B, cost = 3) and (state = D, cost = 3); B expands to (state = C, cost = 5), which expands to the goal state (state = F, cost = 12); D expands to (state = A, cost = 7).] Note: search tree nodes and states are not the same thing!

Full search tree [Figure: the full search tree from (state = A, cost = 0): A expands to (B, cost 3) and (D, cost 3); B expands to (C, cost 5), which expands to the goal state (F, cost 12); D expands to (A, cost 7) and (E, cost 7); E expands to the goal state (F, cost 11); the repeated A (cost 7) expands again to (B, cost 10) and D.]

Problem types
– Deterministic, fully observable → single-state problem. The solution is a sequence of states.
– Non-observable → sensorless problem. The problem solver may have no idea where it is; the solution is a sequence.
– Nondeterministic and/or partially observable → contingency problem.
– Unknown state space → exploration problem.

Algorithm types There are two kinds of search algorithm:
– Complete: guaranteed to find a solution or prove there is none.
– Incomplete: may not find a solution even when one exists, but often more efficient (or there would be no point).

Comparing Searching Algorithms: will it find a solution? the best one?
Def.: A search algorithm is complete if, whenever there is at least one solution, the algorithm is guaranteed to find it within a finite amount of time.
Def.: A search algorithm is optimal if, when it finds a solution, it is the best one.

Comparing Searching Algorithms: complexity
Def.: The time complexity of a search algorithm is the worst-case amount of time it will take to run, expressed in terms of the maximum path length m and the maximum branching factor b.
Def.: The space complexity of a search algorithm is the worst-case amount of memory that the algorithm will use (i.e., the maximum number of nodes on the frontier), also expressed in terms of m and b.
The branching factor b of a node is the number of arcs going out of the node.

Example: the 8-puzzle. Given a board situation for the 8-puzzle, the problem is to find a sequence of moves that transforms this board situation into a desired goal situation. [Figures: the given board and the goal board.]

State Space representation In the state-space representation of a problem, the nodes of a graph correspond to partial problem-solution states and the arcs correspond to steps (actions) in the problem-solving process.

Key concepts in search
– A set of states that we can be in, including an initial state and goal states (equivalently, a goal test).
– For every state, a set of actions that we can take; each action results in a new state, and given a state this produces all states that can be reached from it.
– A cost function that determines the cost of each action (or of a path = a sequence of actions).
– Solution: a path from the initial state to a goal state. Optimal solution: a solution with minimal cost.
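As an illustration of these ingredients (not part of the original slides), here is a minimal Python sketch of a search problem for the A–F graph used earlier; the class name and the edge costs are my assumptions, chosen so that the path A–B–C–F costs 12 as in the slide's search tree.

```python
# Minimal sketch of a search problem: states, initial state, successor
# function and goal test. The graph and its edge costs are assumptions
# made for illustration (chosen so that the path A-B-C-F costs 12).

GRAPH = {            # state -> list of (next_state, step_cost) pairs
    'A': [('B', 3), ('D', 3)],
    'B': [('C', 2)],
    'C': [('F', 7)],
    'D': [('A', 4), ('E', 4)],
    'E': [('F', 4)],
    'F': [],
}

class RouteProblem:
    def __init__(self, initial, goal):
        self.initial, self.goal = initial, goal

    def successors(self, state):
        """All (next_state, step_cost) pairs reachable with one action."""
        return GRAPH[state]

    def is_goal(self, state):
        return state == self.goal

    def path_cost(self, path):
        """Cost of a path given as a list of states."""
        return sum(dict(GRAPH[s])[t] for s, t in zip(path, path[1:]))

problem = RouteProblem('A', 'F')
print(problem.successors('A'))                  # [('B', 3), ('D', 3)]
print(problem.path_cost(['A', 'B', 'C', 'F']))  # 12
```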

[Figure: a search tree of partial tours. (NewYork) expands to (NewYork, Boston), (NewYork, Miami), (NewYork, Dallas) and (NewYork, Frisco); deeper nodes include (NewYork, Boston, Miami) and (NewYork, Frisco, Miami).] Keep track of accumulated costs in each state if you want to be sure to get the best path.

Example: Route Finding
– Initial state: the city the journey starts in.
– Operators: driving from city to city.
– Goal test: is the current location the destination city?
[Figure: a map with the cities Liverpool, London, Nottingham, Leeds, Birmingham and Manchester.]

State space representation (salesman)
– State: the list of cities that have already been visited, e.g. (NewYork, Boston).
– Initial state: e.g. (NewYork).
– Rules: add one city that is not yet a member of the list; add the first city again once the list already has 5 members.
– Goal criterion: the first and last city are equal.
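A rough Python sketch of this state space (my code; the five-city set and the function names are assumptions): a state is the list of visited cities, the rules either add an unvisited city or close the tour, and the goal test checks that the first and last cities coincide.

```python
# Sketch of the salesman state space: a state is the list of visited cities.
CITIES = ['NewYork', 'Boston', 'Miami', 'Dallas', 'Frisco']   # assumed set

def successors(state):
    """Rule 1: add one city that is not yet in the list.
    Rule 2: once all 5 cities are listed, add the first city again."""
    if len(state) < len(CITIES):
        return [state + [city] for city in CITIES if city not in state]
    return [state + [state[0]]]

def is_goal(state):
    """Goal criterion: the tour is complete and first and last city are equal."""
    return len(state) == len(CITIES) + 1 and state[0] == state[-1]

print(successors(['NewYork']))   # add Boston, Miami, Dallas or Frisco
print(is_goal(['NewYork', 'Boston', 'Miami', 'Dallas', 'Frisco', 'NewYork']))  # True
```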

Example: The 8-puzzle
– States? locations of the tiles.
– Actions? move the blank left, right, up or down.
– Goal? the given goal state.
– Path cost? 1 per move.
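A hedged sketch of this formulation in Python: a state is a tuple of nine tile values with 0 standing for the blank, the actions slide the blank one cell, and every move costs 1. The tuple encoding and the goal layout below are my assumptions, not taken from the slides.

```python
# 8-puzzle formulation sketch: state = tuple of 9 tiles (0 = the blank),
# actions move the blank up/down/left/right, path cost = 1 per move.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)                  # assumed goal layout

MOVES = {'up': (-1, 0), 'down': (1, 0), 'left': (0, -1), 'right': (0, 1)}

def successors(state):
    """Return (action, new_state) pairs for every legal blank move."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    result = []
    for action, (dr, dc) in MOVES.items():
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:               # stay on the board
            target = r * 3 + c
            new = list(state)
            new[blank], new[target] = new[target], new[blank]
            result.append((action, tuple(new)))
    return result

def is_goal(state):
    return state == GOAL

start = (1, 2, 3, 4, 5, 6, 7, 0, 8)
print(successors(start))    # the blank can move up, left or right here
```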

Example: robotic assembly
– States? real-valued coordinates of the robot joint angles and of the parts of the object to be assembled.
– Actions? continuous motions of the robot joints.
– Goal test? complete assembly.
– Path cost? time to execute.

Example: Chess. Problem: develop a program that plays chess.
1. A way to represent board situations, e.g. as a list:
((king_black, 8, C), (knight_black, 7, B), (pawn_black, 7, G), (pawn_black, 5, F), (pawn_white, 2, H), (king_white, 1, E))
[Figure: a chessboard with files A–H showing the pieces listed above.]

Chess: the search tree over Move 1, Move 2, Move 3 has a branching factor of about 15, so there are ~15 positions after one move, ~15^2 after two and ~15^3 after three. We need very efficient search techniques to find good paths in such combinatorial trees.
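To see how fast such a tree grows, a quick back-of-the-envelope calculation using the slide's branching factor of roughly 15 (the specific totals are mine): 15^1 = 15 positions after one move, 15^2 = 225 after two, 15^3 = 3,375 after three, and already about 15^10 ≈ 5.8 × 10^11 after ten moves.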

Independence of states. Ex.: the blocks-world problem. Initially C is on A and B is on the table. Rules: move any free block onto another block or onto the table. Goal: A is on B and B is on C. [Figure: the goal "A on B and B on C" decomposed into the subgoals "A on B" AND "B on C", forming an AND-OR tree.]

Search in State Spaces Effects of moving a block (illustration and list-structure iconic model notation)

Avoiding Repeated States. In increasing order of effectiveness in reducing the size of the state space, and with increasing computational cost:
1. Do not return to the state you just came from.
2. Do not create paths with cycles in them.
3. Do not generate any state that was ever created before.
The net effect depends on the frequency of loops in the state space.
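A small Python sketch (mine, not the slides') of how these three pruning rules can be expressed as a single check applied before a successor is placed on the frontier; `path` is the path being extended and `explored` is the set of every state generated so far.

```python
def allowed(candidate, path, explored, strategy):
    """Return True if `candidate` may extend `path`, according to the
    chosen repeated-state strategy (1, 2 or 3 from the slide above)."""
    if strategy == 1:
        # 1. Do not return to the state you just came from (the parent).
        return len(path) < 2 or candidate != path[-2]
    if strategy == 2:
        # 2. Do not create paths with cycles in them.
        return candidate not in path
    if strategy == 3:
        # 3. Do not generate any state that was ever created before.
        return candidate not in explored
    raise ValueError("strategy must be 1, 2 or 3")
```

In a search loop, a successor is appended to the frontier only when allowed(...) returns True; for strategy 3 every generated state is also added to the explored set.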

Forward versus backward reasoning: Forward reasoning (or data-driven): from initial states to goal states.

Forward versus backward reasoning: Backward reasoning (or backward chaining / goal-driven): from goal states to initial states.

Data-Driven Search It is also called forward chaining. The problem solver begins with the given facts and a set of legal moves or rules for changing state, and applies them to arrive at the goal.

Goal-Driven Search Take the goal that we want to achieve and see what rules or legal moves could be used to generate this goal; we thus move backward from the goal.

Search Implementation In both directions of search, we must find a path from the start state to a goal. We use goal-driven search if:
– the goal is given in the problem,
– there exist a large number of rules,
– the problem data are not given.

Search Implementation Data-driven search is used if:
– all or most of the data are given,
– there are a large number of potential goals,
– it is difficult to form a goal.

Criteria: Sometimes there is no way to start from the goal states, either because there are too many of them (e.g. chess) or because you cannot (easily) formulate the rules in both directions. Sometimes the two directions are equivalent and may even use the same rules.

General Search Considerations Given the initial state, operators and goal test, can you give the agent additional information?
– Uninformed search strategies have no additional information.
– Informed search strategies use problem-specific information: a heuristic measure (a guess at how far a state is from the goal).

Classical Search Strategies
– Breadth-first search
– Depth-first search
– Bidirectional search
– Depth-bounded depth-first search: like depth-first search, but with a limit on the depth of the search in the tree
– Iterative deepening search: use depth-bounded search, but iteratively increase the limit
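The last two strategies in this list can be sketched compactly (this is my illustration in Python, not code from the slides): a depth-bounded depth-first search that cuts off branches beyond a given depth, and an iterative-deepening loop that retries it with limits 0, 1, 2, ...

```python
def depth_limited_search(successors, start, is_goal, limit):
    """Depth-bounded DFS: like depth-first search, but branches deeper
    than `limit` edges are cut off instead of being expanded."""
    stack = [[start]]
    while stack:
        path = stack.pop()
        if is_goal(path[-1]):
            return path
        if len(path) - 1 < limit:                     # depth = number of edges
            for nxt in reversed(successors(path[-1])):
                stack.append(path + [nxt])
    return None

def iterative_deepening_search(successors, start, is_goal, max_limit=50):
    """Iterative deepening: run depth-bounded search with limit 0, 1, 2, ...
    until a solution is found (max_limit is an assumed safety cap)."""
    for limit in range(max_limit + 1):
        result = depth_limited_search(successors, start, is_goal, limit)
        if result is not None:
            return result
    return None
```

Each iteration re-expands the shallow levels, but because the number of nodes grows geometrically with depth, that repeated work is small compared with the cost of the deepest level.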

Breadth-first search: move downwards, level by level, until the goal is reached. It explores the space in a level-by-level fashion. [Figure: breadth-first expansion of a tree rooted at S with nodes A–G.]

Breadth-first search BFS is complete: if a solution exists, one will be found. Expand the shallowest unexpanded node. Implementation: the fringe is a FIFO queue, i.e., new successors go at the end.
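A minimal sketch of this scheme in Python (mine, not the slides'): the fringe is a FIFO queue of paths, the shallowest unexpanded node is always popped from the front, and its successors are appended at the end. The small example tree is assumed for illustration.

```python
from collections import deque

def breadth_first_search(successors, start, is_goal):
    """Expand the shallowest unexpanded node: the fringe is a FIFO queue,
    and the successors of an expanded node are appended at the end."""
    fringe = deque([[start]])                 # queue of paths
    while fringe:
        path = fringe.popleft()               # shallowest unexpanded node
        if is_goal(path[-1]):
            return path
        for nxt in successors(path[-1]):
            fringe.append(path + [nxt])       # new successors go at the end
    return None                               # fringe exhausted: no solution

# Small example tree (levels: S; A, B; C, D, E, F) -- illustrative only.
TREE = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['E', 'F'],
        'C': [], 'D': [], 'E': [], 'F': []}
print(breadth_first_search(lambda s: TREE[s], 'S', lambda s: s == 'E'))
# ['S', 'B', 'E'] -- found at the shallowest level that contains a goal
```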


Analysis of BFS
Def.: A search algorithm is complete if, whenever there is at least one solution, the algorithm is guaranteed to find it within a finite amount of time.
Is BFS complete? Yes: if a solution exists at level l, the path to it will be explored before any other path of length l + 1, so it is impossible to fall into an infinite cycle. (See this in AISpace by loading "Cyclic Graph Examples" or by adding a cycle to "Simple Tree".)

Analysis of BFS
Def.: A search algorithm is optimal if, when it finds a solution, it is the best one.
Is BFS optimal? Yes. E.g., with two goal nodes (the red boxes in the AISpace example), any goal at level l (e.g. the red box N7) will be reached before goals at deeper levels.

Analysis of BFS
Def.: The time complexity of a search algorithm is the worst-case amount of time it will take to run, expressed in terms of the maximum path length m and the maximum forward branching factor b.
What is BFS's time complexity, in terms of m and b? O(b^m): like DFS, in the worst case BFS must examine every node in the tree (e.g., with a single goal node, the red box in the AISpace example).

Analysis of BFS
Def.: The space complexity of a search algorithm is the worst-case amount of memory that the algorithm will use (i.e., the maximum number of nodes on the frontier), expressed in terms of the maximum path length m and the maximum forward branching factor b.
What is BFS's space complexity, in terms of m and b? O(b^m): BFS must keep paths to all the nodes at level m.

Using Breadth-first Search
When is BFS appropriate?
– space is not a problem
– it is necessary to find the solution with the fewest arcs
– although there are some shallow solutions, there may also be infinite paths
When is BFS inappropriate?
– space is limited
– all solutions tend to be located deep in the tree
– the branching factor is very large

Depth-First Order When a state is examined, all of its children and their descendants are examined before any of its siblings. Depth-first order goes deeper whenever this is possible. It is not complete (it might cycle forever through non-goal states).

Depth-first search = chronological backtracking. Select a child (convention: left-to-right) and repeatedly go to the next child, as long as possible; return to the left-over alternatives (higher up) only when needed. [Figure: a tree rooted at S with nodes A–G.]

Depth-first search Expand the deepest unexpanded node. Implementation: the fringe is a LIFO queue (a stack), i.e., put successors at the front.
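The same skeleton with a LIFO fringe gives depth-first search; this is my sketch, using the same assumed example tree, with successors pushed so that the left-most child is expanded first.

```python
def depth_first_search(successors, start, is_goal):
    """Expand the deepest unexpanded node: the fringe is a LIFO stack,
    so newly generated successors are explored before older nodes."""
    fringe = [[start]]                        # stack of paths
    while fringe:
        path = fringe.pop()                   # deepest unexpanded node
        if is_goal(path[-1]):
            return path
        # push in reverse so the left-most successor is expanded first
        for nxt in reversed(successors(path[-1])):
            fringe.append(path + [nxt])
    return None

TREE = {'S': ['A', 'B'], 'A': ['C', 'D'], 'B': ['E', 'F'],
        'C': [], 'D': [], 'E': [], 'F': []}
print(depth_first_search(lambda s: TREE[s], 'S', lambda s: s == 'E'))
# ['S', 'B', 'E'] -- reached only after the whole A subtree was explored
```

On a graph that contains cycles this version can run forever, which is exactly the incompleteness discussed in the analysis slides below.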


Analysis of DFS Is DFS complete? Is DFS optimal? What is the time complexity, if the maximum path length is m and the maximum branching factor is b? What is the space complexity? We will look at the answers in AISpace (but see the next few slides for a summary).

Analysis of DFS
Def.: A search algorithm is complete if, whenever there is at least one solution, the algorithm is guaranteed to find it within a finite amount of time.
Is DFS complete? No: if there are cycles in the graph, DFS may get "stuck" in one of them. (See this in AISpace by loading "Cyclic Graph Examples" or by adding a cycle to "Simple Tree": click on the "Create" tab, create a new edge from N7 to N1, go back to "Solve" and see what happens.)

Analysis of DFS
Def.: A search algorithm is optimal if, when it finds a solution, it is the best one (e.g., the shortest).
Is DFS optimal? No: it can "stumble" onto longer solution paths before it gets to shorter ones. (E.g., goal nodes: the red boxes; see this in AISpace by loading "Extended Tree Graph" and setting N6 as a goal: click on the "Create" tab, right-click on N6 and select "set as a goal node".)

Analysis of DFS
Def.: The time complexity of a search algorithm is the worst-case amount of time it will take to run, expressed in terms of the maximum path length m and the maximum forward branching factor b.
What is DFS's time complexity, in terms of m and b? O(b^m): in the worst case DFS must examine every node in the tree (e.g., with a single goal node, the red box).

Analysis of DFS
Def.: The space complexity of a search algorithm is the worst-case amount of memory that the algorithm will use (i.e., the maximum number of nodes on the frontier), expressed in terms of the maximum path length m and the maximum forward branching factor b.
What is DFS's space complexity, in terms of m and b? O(bm): for every node on the path currently being explored, DFS maintains the paths to its unexplored siblings in the search tree (the alternative paths it may still need to explore). The longest possible path is m, with a maximum of b - 1 alternative paths per node. See how this works in AISpace.

Analysis of DFS (cont.)
DFS is appropriate when:
– space is restricted
– there are many solutions, with long paths
It is a poor method when:
– there are cycles in the graph
– there are sparse solutions at shallow depth
– there is heuristic knowledge indicating when one path is better than another

The example node set. [Figure: a tree with root A (the initial state); level 1 holds nodes B–F, level 2 holds G–P, level 3 holds Q–Z; the goal state is L.] A breadth-first search of this example node set is traced below.

BREADTH-FIRST SEARCH PATTERN. [Animation: a breadth-first search of the example node set. The search begins with the initial state, node A; when a node is expanded it is removed from the queue and its revealed children are added to the END of the queue. The nodes are expanded in the order A, B, C, D, E, F, G, H, I, J, K (11 nodes expanded); expanding B through F adds G–P to the queue, and expanding G through K adds Q–U. At that point node L is located and the search returns a solution.]

Aside: Internet Search Typically human search is "incomplete". E.g., when finding information on the internet (before Google, etc.): look at a few web pages, and if there is no success, give up.

Example Determine whether data-driven or goal-driven search, and depth-first or breadth-first search, would be preferable for solving each of the following:
– diagnosing mechanical problems in an automobile;
– you have met a person who claims to be your distant cousin, with a common ancestor named John, and you would like to verify her claim;
– a theorem prover for plane geometry;

Example (cont.)
– a program for examining sonar readings and interpreting them;
– an expert system that will help a human classify plants by species, genus, etc.

Any path, versus shortest path, versus best path. Ex.: the traveling-salesperson problem: find a sequence of cities ABCDEA such that the total distance is MINIMAL. [Figure: map with the cities Boston, Miami, NewYork, SanFrancisco and Dallas.]

Bi-directional search IF you are able to EXPLICITLY describe the GOAL state, AND you have BOTH rules for FORWARD reasoning AND rules for BACKWARD reasoning: compute the tree both from the start node and from a goal node, until the two trees meet. [Figure: two search trees growing toward each other from Start and Goal.]
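A rough Python sketch of the idea (my code, under the assumption that both a forward successor function and a backward predecessor function are available): it grows the two breadth-first frontiers layer by layer and stops when they meet, returning only the meeting state rather than the stitched-together path, which would additionally require parent pointers in both trees.

```python
from collections import deque

def bidirectional_search(forward, backward, start, goal):
    """Grow one breadth-first frontier from the start (using forward rules)
    and one from the goal (using backward rules), one layer at a time,
    until the two frontiers meet. Returns the meeting state only."""
    if start == goal:
        return start
    front, back = deque([start]), deque([goal])
    seen_front, seen_back = {start}, {goal}
    while front and back:
        for _ in range(len(front)):               # expand one forward layer
            state = front.popleft()
            for nxt in forward(state):
                if nxt in seen_back:              # the two trees have met
                    return nxt
                if nxt not in seen_front:
                    seen_front.add(nxt)
                    front.append(nxt)
        for _ in range(len(back)):                # expand one backward layer
            state = back.popleft()
            for prev in backward(state):
                if prev in seen_front:
                    return prev
                if prev not in seen_back:
                    seen_back.add(prev)
                    back.append(prev)
    return None
```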

Example Search Problem A genetics professor:
– wants to name her new baby boy
– using only the letters D, N and A
Search through possible strings (states): D, DN, DNNA, NA, AND, DNAN, etc.
– 3 operators: add D, N or A onto the end of the string
– the initial state is an empty string
Goal test: look up the state in a book of boys' names, e.g. DAN.
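A sketch of this toy problem as a breadth-first search in Python; the "book of boys' names" is stood in for by a small assumed set, and the length cap is my addition to keep the sketch finite even if the book contained no reachable name.

```python
from collections import deque

# Sketch of the baby-name problem: states are strings over the letters
# D, N and A; the three operators append one letter; the goal test looks
# the string up in a "book of boys' names" (here a small stand-in set).
LETTERS = ['D', 'N', 'A']
BOYS_NAMES = {'DAN'}                     # stand-in for the book of names

def name_search(max_length=6):           # assumed cap to keep the sketch finite
    fringe = deque([''])                 # initial state: the empty string
    while fringe:
        state = fringe.popleft()
        if state in BOYS_NAMES:
            return state
        if len(state) < max_length:
            for letter in LETTERS:
                fringe.append(state + letter)
    return None

print(name_search())                     # 'DAN'
```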

G(n) = the cost of each move, given as the distance between the towns. H(n) = the straight-line distance between any town and town M. Straight-line distances to M: A = 45, B = 20, C = 34, D = 25, E = 32, F = 23, G = 15, H = 10, I = 12, J = 5, K = 40, L = ?, M = 0. [Figure: a map of the towns A–M with road distances.]

Consider the following search problem. Assume a state is represented as an integer, that the initial state is the number 1, and that the two successors of a state n are the states 2n and 2n+1. For example, the successors of 1 are 2 and 3, the successors of 2 are 4 and 5, the successors of 3 are 6 and 7, etc. Assume the goal state is the number 12. Consider the following heuristics for evaluating the state n, where the goal state is g:
h1(n) = |n - g|, and h2(n) = (g - n) if n ≤ g, h2(n) = ∞ if n > g.
Show the search trees generated by each of the following strategies for the initial state 1 and the goal state 12, numbering the nodes in the order expanded:
a) depth-first search, b) breadth-first search, c) best-first search with heuristic h1, d) A* with heuristic (h1 + h2).
If any of these strategies gets lost and never finds the goal, then show the first few steps and say "FAILS".
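For experimenting with this exercise, here is a small Python sketch (mine) of the successor function and of best-first search ordered by h1; swapping the priority queue for a FIFO or LIFO discipline gives breadth-first or depth-first search. The expansion cap is an added safeguard, not part of the exercise.

```python
import heapq

GOAL = 12

def successors(n):
    """The two successors of state n, as specified in the exercise."""
    return [2 * n, 2 * n + 1]

def h1(n):
    """First heuristic from the exercise: h1(n) = |n - g|."""
    return abs(n - GOAL)

def best_first(start=1, max_expansions=100):
    """Greedy best-first search ordered by h1; prints the expansion order
    so it can be compared with a hand-drawn search tree."""
    frontier = [(h1(start), start)]
    expanded = 0
    while frontier and expanded < max_expansions:
        _, n = heapq.heappop(frontier)
        expanded += 1
        print(f"node {expanded}: state {n}, h1 = {h1(n)}")
        if n == GOAL:
            return n
        for s in successors(n):
            heapq.heappush(frontier, (h1(s), s))
    return None

best_first()    # reaches the goal state 12 after a handful of expansions
```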