Artificial Intelligence

Artificial Intelligence: Uninformed Search (Chapter 3, AIMA)

A goal-based agent

A "problem" consists of:
- An initial state, q(0)
- A list of possible actions, a, for the agent
- A goal test (there can be many goal states)
- A path cost

One way to solve this is to search for a path q(0) → q(1) → q(2) → ... → q(N) such that q(N) is a goal state. This requires that the environment is observable, deterministic, static, and discrete.
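As a rough illustration (not from the slides), the four components could be collected into a small Python record; the names SearchProblem, actions, result, goal_test, and step_cost are my own choices:

```python
from dataclasses import dataclass
from typing import Any, Callable, Iterable

# A minimal sketch of the four problem components listed above.
# All names here are illustrative, not from the lecture.
@dataclass
class SearchProblem:
    initial_state: Any                                    # q(0)
    actions: Callable[[Any], Iterable[Any]]               # actions a available in a state
    result: Callable[[Any, Any], Any]                     # state reached by applying an action
    goal_test: Callable[[Any], bool]                      # there can be many goal states
    step_cost: Callable[[Any, Any], float] = lambda s, a: 1.0  # path cost = sum of step costs
```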

Example: 8-puzzle

- State: specification of each of the eight tiles in the nine squares (the blank is in the remaining square). Example encodings (0 = blank): {7, 2, 4, 5, 0, 6, 8, 3, 1} and {2, 8, 3, 1, 6, 4, 7, 0, 5}.
- Initial state: any state.
- Successor function (actions): blank moves Left, Right, Up, or Down.
- Goal test: check whether the goal state has been reached.
- Path cost: each move costs 1, so the path cost equals the number of moves.

[Figure: start state and goal state]

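A hedged sketch (my own, not the lecture's code) of the successor function for this formulation, using the flat 9-element encoding above with 0 for the blank:

```python
# 8-puzzle state as a tuple of 9 entries, read row by row; 0 marks the blank.
MOVES = {'Left': -1, 'Right': +1, 'Up': -3, 'Down': +3}   # index offset of the blank

def successors(state):
    """Yield (action, new_state) pairs for every legal move of the blank."""
    b = state.index(0)                       # position of the blank
    row, col = divmod(b, 3)
    for action, delta in MOVES.items():
        if action == 'Left' and col == 0:
            continue
        if action == 'Right' and col == 2:
            continue
        if action == 'Up' and row == 0:
            continue
        if action == 'Down' and row == 2:
            continue
        s = list(state)
        t = b + delta
        s[b], s[t] = s[t], s[b]              # slide the neighbouring tile into the blank
        yield action, tuple(s)

# The state {2, 8, 3, 1, 6, 4, 7, 0, 5} from the slide has three successors
# (blank moves Left, Right, or Up), matching the expansion shown on the next slide.
for a, s in successors((2, 8, 3, 1, 6, 4, 7, 0, 5)):
    print(a, s)
```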

Expanding the 8-puzzle: from the start state, the blank can move right, left, or up, giving three successor states. [Figure: the start state and its three successors]

Uninformed search: searching for the goal without knowing in which direction it is.

- Breadth-first
- Depth-first
- Iterative deepening

(Depth and breadth refer to the search tree.)

We evaluate the algorithms by their:
- Completeness (do they explore all possibilities?)
- Optimality (do they find the solution with minimum path cost?)
- Time complexity (number of operations)
- Space complexity (memory requirement)

Breadth-first

Breadth-first finds the solution that is closest (in the graph) to the start node: it always expands the shallowest node. Nodes marked with open circles form the fringe, i.e. the nodes kept in memory.

- Complete (finds a solution if there is one)
- Not necessarily optimal (optimal if the cost is the same for each step)
- Exponential space complexity: keeps O(b^d) nodes in memory (very bad)
- Exponential time complexity

(b = branching factor, d = depth. Image from Russell & Norvig, AIMA, 2003.)
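A minimal sketch of breadth-first graph search over such a successor function (bfs and its arguments are my own names; this is not the AIMA code):

```python
from collections import deque

def bfs(start, goal_test, successors):
    """Breadth-first graph search: always expands the shallowest node,
    and remembers visited states so repeated states are not re-expanded."""
    frontier = deque([(start, [])])          # FIFO queue of (state, list of actions so far)
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):                 # goal test when the node is taken off the queue
            return path
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [action]))
    return None                              # the whole space was searched: no solution
```

With unit step costs the first solution found is also a minimum-cost one, which is the sense in which the slide calls BFS optimal when every step costs the same.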

Breadth-first search for the 8-puzzle. The path marked by bold arrows is the solution, found in node #46. Note: this assumes that the goal test is applied immediately after expansion (not the case for the AIMA implementation). If we keep track of visited states, we get graph search rather than tree search. (Image from N. J. Nilsson, Artificial Intelligence: A New Synthesis, 1998.)

Depth-first

Depth-first keeps only O(b·d) nodes in memory (the black nodes in the figure have been removed from memory). It requires a depth limit to avoid following infinite paths (the limit is 4 in the figure).

- Incomplete (not guaranteed to find a solution)
- Not optimal
- Linear space complexity (good)
- Exponential time complexity

(b = branching factor, d = depth. Image from Russell & Norvig, AIMA, 2003.)
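A corresponding sketch of depth-limited depth-first search (recursive, so the memory in use is essentially one path of length d plus the siblings along it); the function name is mine:

```python
def depth_limited_search(state, goal_test, successors, limit, path=None):
    """Depth-first search that refuses to go deeper than `limit` moves,
    which is what keeps it from following infinite paths.
    Returns a list of actions, or None if nothing was found within the limit."""
    if path is None:
        path = []
    if goal_test(state):
        return path
    if limit == 0:                           # cut off this branch
        return None
    for action, nxt in successors(state):
        found = depth_limited_search(nxt, goal_test, successors, limit - 1, path + [action])
        if found is not None:
            return found
    return None
```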

Depth-first search (limit = 5) for the 8-puzzle. (Image from N. J. Nilsson, Artificial Intelligence: A New Synthesis, 1998.)

Depth-first (depth limit = 5) on the 8-puzzle example: the solution is found in node #31. (Image from N. J. Nilsson, Artificial Intelligence: A New Synthesis, 1998.)

Iterative deepening

Iterative deepening runs depth-limited depth-first search repeatedly, increasing the depth limit each time. It keeps only O(b·d) nodes in memory (the black nodes in the figure have been removed from memory).

- Complete (like BFS)
- Not optimal in general (like BFS, optimal if the cost is the same for each step)
- Linear space complexity (like DFS)
- Exponential time complexity

It is the preferred search method for large search spaces with unknown depth.

(b = branching factor, d = depth. Image from Russell & Norvig, AIMA, 2003.)
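Iterative deepening can then be sketched as repeated depth-limited search with limits 0, 1, 2, ...; again the names are mine and this is only an illustration:

```python
from itertools import count

def iterative_deepening_search(start, goal_test, successors):
    """Run depth-limited DFS with an increasing limit until a solution appears.
    Memory stays linear in the current limit, as with plain depth-first search.
    Note: if no solution exists at any depth, this loops forever."""
    def dls(state, limit, path):
        if goal_test(state):
            return path
        if limit == 0:
            return None
        for action, nxt in successors(state):
            found = dls(nxt, limit - 1, path + [action])
            if found is not None:
                return found
        return None

    for limit in count(0):                   # iteratively increase the depth limit
        result = dls(start, limit, [])
        if result is not None:
            return result
```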

Iterative deepening on the 8-puzzle example: the solution is found in node #46. (Image from N. J. Nilsson, Artificial Intelligence: A New Synthesis, 1998.)

Exercise 3.4: Show that the 8-puzzle states are divided into two disjoint sets, such that no state in one set can be transformed into a state in the other set by any number of moves. Devise a procedure that will tell you which class a given state is in, and explain why this is useful when generating random states.

Proof for exercise 3.4:

Definition: count the squares in order from the upper left corner to the lower right corner. Let N denote the number of inversions, i.e. the number of times a lower-numbered tile follows a higher-numbered tile in this counting order. For the start state in the figure, N = 1 + 6 + 1 + 2 + 1 = 11; for example, the yellow tiles are the ones inverted relative to the tile 8 in the top row. [Figure: the start state with the inverted pairs highlighted]

Proof for exercise 3.4:

Proposition: N is either always even or always odd, i.e. N mod 2 is conserved under legal moves.

Proof: (1) Sliding the blank along a row changes neither the tiles' row numbers nor their internal order, so N (and thus also N mod 2) is conserved. (2) Sliding the blank between rows does not change N mod 2 either, as shown on the following slide.

Proof for exercise 3.4:

When the blank moves between rows, a tile B jumps past two tiles C and D in the counting order; we only need to consider B, C, and D, since the relative order of all other tiles stays the same.

- If B > C and B > D, the move removes two inversions.
- If B > C and B < D (or B < C and B > D), the move adds one inversion and removes one (net change 0).
- If B < C and B < D, the move adds two inversions.

So the number of inversions changes by 0 or ±2, and N mod 2 is conserved. [Figure: an example board before and after a between-row move]

Observation

The upper goal state has N = 0, while the lower goal state has N = 7. Since N mod 2 is conserved, we cannot go from one to the other. [Figure: the two goal states]
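The procedure that exercise 3.4 asks for can be implemented by counting inversions directly; a sketch (my own code), using the flat encoding with 0 for the blank:

```python
def inversion_parity(state):
    """Count the inversions N in a flat 8-puzzle state (0 = blank, ignored)
    and return N mod 2, which the proof above shows is conserved by legal moves."""
    tiles = [t for t in state if t != 0]
    n = sum(1
            for i in range(len(tiles))
            for j in range(i + 1, len(tiles))
            if tiles[i] > tiles[j])
    return n % 2

# The start state from the slides has N = 11, i.e. odd parity:
print(inversion_parity((2, 8, 3, 1, 6, 4, 7, 0, 5)))    # -> 1
# The goal state with the tiles in order 1..8 and the blank last has N = 0, i.e. even parity:
print(inversion_parity((1, 2, 3, 4, 5, 6, 7, 8, 0)))    # -> 0
```

Two states with different parity can never reach each other, so a random-state generator should only hand out states with the same parity as the intended goal.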

Exercise 3.9: Missionaries and cannibals. Three missionaries and three cannibals are on one side of a river, along with a boat that can hold one or two people (at least one is needed for rowing). Find a way to get everyone to the other side, without ever leaving a group of missionaries in one place outnumbered by the cannibals in that place (otherwise the cannibals eat the missionaries).

- Formulate the problem precisely, making only those distinctions necessary to ensure a valid solution. Draw a diagram of the complete state space.
- Implement and solve the problem optimally using an appropriate search algorithm. Is it a good idea to check for repeated states?
- Why do you think people have a hard time solving this puzzle, given that the state space is so simple?

(Image from http://www.cse.msu.edu/~michmer3/440/Lab1/cannibal.html)

Missionaries & Cannibals

State: q = (M, C, B), the number of missionaries, cannibals, and boats on the left bank. The start state is (3,3,1) and the goal state is (0,0,0).

Actions (successor function): 10 possible, but only 5 are available on each move because of where the boat is. A sketch of this successor function is given below.
- One cannibal/missionary crossing L → R: subtract (0,1,1) or (1,0,1)
- Two cannibals/missionaries crossing L → R: subtract (0,2,1) or (2,0,1)
- One cannibal/missionary crossing R → L: add (1,0,1) or (0,1,1)
- Two cannibals/missionaries crossing R → L: add (2,0,1) or (0,2,1)
- One cannibal and one missionary crossing: add/subtract (1,1,1)

(Image from http://www.cse.msu.edu/~michmer3/440/Lab1/cannibal.html)
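A minimal sketch of this successor function (the names and the safe-bank helper are mine), which also filters out states where missionaries get eaten:

```python
# Five possible boat loads (missionaries, cannibals); the direction is fixed by
# which bank the boat is on, so only five of the ten signed actions apply each move.
BOAT_LOADS = [(1, 0), (0, 1), (2, 0), (0, 2), (1, 1)]

def safe(m, c):
    """A bank is safe if it has no missionaries, or at least as many missionaries as cannibals."""
    return m == 0 or m >= c

def successors(state):
    """Yield (load, new_state) pairs from state (M, C, B) = people and boat on the left bank."""
    m, c, b = state
    sign = -1 if b == 1 else +1              # the boat leaves the bank it is currently on
    for dm, dc in BOAT_LOADS:
        m2, c2, b2 = m + sign * dm, c + sign * dc, 1 - b
        if 0 <= m2 <= 3 and 0 <= c2 <= 3 and safe(m2, c2) and safe(3 - m2, 3 - c2):
            yield (dm, dc), (m2, c2, b2)

# From the start state (3,3,1) only three crossings are legal:
print(list(successors((3, 3, 1))))
```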

Missionaries & Cannibals state space (this assumes that passengers have to get out of the boat after each trip). Red states = missionaries get eaten. [Figure: the complete state-space diagram]

Breadth-first search on Missionaries & Cannibals

States are generated by applying +/- (1,0,1), +/- (0,1,1), +/- (2,0,1), +/- (0,2,1), +/- (1,1,1), in that order (left to right). Red states = missionaries get eaten; yellow states = repeated states. [Figure: the breadth-first search tree, expanded level by level over the following animation frames]


Breadth-first search on Missionaries & Cannibals: the solution found

-(0,2,1)  [2 cannibals cross L → R]
+(0,1,1)  [1 cannibal crosses R → L]
-(0,2,1)  [2 cannibals cross L → R]
+(0,1,1)  [1 cannibal crosses R → L]
-(2,0,1)  [2 missionaries cross L → R]
+(1,1,1)  [1 cannibal & 1 missionary cross R → L]
-(2,0,1)  [2 missionaries cross L → R]
+(0,1,1)  [1 cannibal crosses R → L]
-(0,2,1)  [2 cannibals cross L → R]
+(1,0,1)  [1 missionary crosses R → L]
-(1,1,1)  [1 cannibal & 1 missionary cross L → R]

This is an optimal solution (minimum number of crossings). [Why?] Would depth-first work? A small replay of this move sequence is sketched below.
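As a quick sanity check (my own sketch, not part of the lecture), the eleven crossings above can be replayed from (3,3,1) and each intermediate state tested for safety:

```python
# The eleven boat trips from the slide as changes to (M, C, B) on the left bank;
# negative entries are crossings L -> R, positive entries are crossings R -> L.
moves = [(0, -2, -1), (0, +1, +1), (0, -2, -1), (0, +1, +1),
         (-2, 0, -1), (+1, +1, +1), (-2, 0, -1), (0, +1, +1),
         (0, -2, -1), (+1, 0, +1), (-1, -1, -1)]

def safe(m, c):
    return m == 0 or m >= c                  # missionaries are never outnumbered on a bank

state = (3, 3, 1)
for dm, dc, db in moves:
    state = (state[0] + dm, state[1] + dc, state[2] + db)
    assert 0 <= state[0] <= 3 and 0 <= state[1] <= 3
    assert safe(state[0], state[1]) and safe(3 - state[0], 3 - state[1])
print(state)                                 # -> (0, 0, 0): everyone has crossed in 11 trips
```

Since every crossing has the same cost, breadth-first search guarantees that the first solution found uses the minimum number of crossings.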

Breadth-first search on Missionaries & Cannibals expanded 48 nodes. Depth-first search on Missionaries & Cannibals expanded 30 nodes (if repeated states are checked; otherwise we end up in an endless loop).

An example of a real search application: finding interesting web pages (expanding from links). Breadth-first works very nicely and quickly finds pages with high PageRank R(p), the scoring measure used by Google:

R(p) = (1 - d)/T + d · Σ_k R(k)/C(k)

where k indexes all pages that link to page p; C(k) is the total number of links out of page k; R(k) is the PageRank of page k; T is the total number of web pages on the internet; and d is a number with 0 < d < 1.
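A sketch of computing these scores by fixed-point iteration on a tiny link graph; the damping value 0.85 and all names are my assumptions, and T here is just the number of pages in the toy graph rather than the whole web:

```python
def pagerank(links, d=0.85, iterations=50):
    """Iterate R(p) = (1 - d)/T + d * sum over pages k linking to p of R(k)/C(k).
    `links[p]` is the set of pages that page p links out to."""
    pages = list(links)
    T = len(pages)
    R = {p: 1.0 / T for p in pages}          # start from a uniform score
    for _ in range(iterations):
        R = {p: (1 - d) / T
                + d * sum(R[k] / len(links[k]) for k in pages if p in links[k])
             for p in pages}
    return R

# Tiny illustrative web of three pages: a -> b, b -> a and c, c -> a.
print(pagerank({'a': {'b'}, 'b': {'a', 'c'}, 'c': {'a'}}))
```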