Classic AI Search Problems

Classic AI Search Problems
Sliding-tile puzzles
- 8-puzzle (3 by 3): 9!/2, about 1.8 * 10^5 states
- 15-puzzle (4 by 4): 16!/2, about 1.0 * 10^13 states
- 24-puzzle (5 by 5): 25!/2, about 7.8 * 10^24 states
Rubik's Cube (and variants)
- 3 by 3 by 3: about 4.3 * 10^19 states
Navigation (map searching)
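As a quick sanity check on these state counts, a minimal Python sketch (the script and its names are illustrative, not part of the slides):

```python
import math

# In a sliding-tile puzzle only half of the tile permutations are reachable
# (a single transposition flips parity), hence the division by 2.
for name, squares in [("8-puzzle", 9), ("15-puzzle", 16), ("24-puzzle", 25)]:
    states = math.factorial(squares) // 2
    print(f"{name}: {squares}!/2 = {states:.2e} states")
```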

Classic AI Search Problems
[figure: a scrambled sliding-tile puzzle]
- Invented by Sam Loyd in 1878
- 16!/2, about 10^13 states
- Average of 53 moves to solve optimally
- Known diameter (maximum length of an optimal path) of 87
- Branching factor of 2.13

3 x 3 x 3 Rubik's Cube
- Invented by Ernő Rubik in 1974
- About 4.3 * 10^19 states
- Average of 18 moves to solve optimally
- Conjectured diameter of 20
- Branching factor of 13.35

Navigation
Arad to Bucharest
[figure: Romania road map, with Arad marked as the start and Bucharest as the end]

Representing Search
[figure: search tree rooted at Arad; Arad expands to Zerind, Sibiu, and Timisoara; Sibiu expands to Arad, Oradea, Fagaras, and Rimnicu Vilcea; the branch continues through Fagaras toward Bucharest]

General (Generic) Search Algorithm

function general-search(problem, QUEUEING-FUNCTION)
    nodes = MAKE-QUEUE(MAKE-NODE(problem.INITIAL-STATE))
    loop do
        if EMPTY(nodes) then return "failure"
        node = REMOVE-FRONT(nodes)
        if problem.GOAL-TEST(node.STATE) succeeds then return node
        nodes = QUEUEING-FUNCTION(nodes, EXPAND(node, problem.OPERATORS))
    end

A nice fact about this algorithm is that a single procedure can perform many kinds of search. The only difference is in how new nodes are placed in the queue.
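A minimal runnable sketch of the same idea in Python; the dictionary-based problem representation, the `general_search` / `romania` names, and the skip-states-already-on-the-path test are illustrative assumptions, not the slide's code:

```python
from collections import deque

def general_search(problem, queuing_fn):
    """Generic search: the queuing function alone determines the strategy."""
    start = problem["initial"]
    frontier = deque([(start, [start])])              # MAKE-QUEUE(MAKE-NODE(initial))
    while frontier:
        state, path = frontier.popleft()              # REMOVE-FRONT
        if problem["goal_test"](state):               # GOAL-TEST
            return path
        # EXPAND: generate successor nodes, skipping states already on this path
        # (a small addition to the slide's pseudocode so cycles do not recur forever).
        successors = [(s, path + [s])
                      for s in problem["graph"][state] if s not in path]
        frontier = queuing_fn(frontier, successors)   # QUEUEING-FUNCTION
    return "failure"

# Illustrative toy problem: a fragment of the Romania map.
romania = {
    "initial": "Arad",
    "goal_test": lambda s: s == "Bucharest",
    "graph": {
        "Arad": ["Zerind", "Sibiu", "Timisoara"],
        "Zerind": ["Arad", "Oradea"],
        "Oradea": ["Zerind", "Sibiu"],
        "Timisoara": ["Arad", "Lugoj"],
        "Lugoj": ["Timisoara"],
        "Sibiu": ["Arad", "Oradea", "Fagaras", "Rimnicu Vilcea"],
        "Fagaras": ["Sibiu", "Bucharest"],
        "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
        "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
        "Bucharest": [],   # edges out of the goal omitted
    },
}
```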

Search Terminology
- Completeness: a solution will be found, if one exists
- Time complexity: number of nodes expanded
- Space complexity: maximum number of nodes held in memory
- Optimality: the least-cost solution will be found

Uninformed (Blind) Search
- Breadth-first
- Uniform-cost
- Depth-first
- Depth-limited
- Iterative deepening
- Bidirectional

Breadth-first
QUEUING-FN: successors are added to the end of the queue (FIFO)
[figure: breadth-first expansion of the Arad search tree, level by level: Zerind, Sibiu, Timisoara, then Oradea, Fagaras, Rimnicu Vilcea, Lugoj, ...]
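Continuing the hypothetical `general_search` / `romania` sketch above, breadth-first search is obtained purely by choosing a FIFO queuing function:

```python
def breadth_first(frontier, successors):
    frontier.extend(successors)   # FIFO: new nodes go to the end of the queue
    return frontier

print(general_search(romania, breadth_first))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']  (a shallowest path to the goal)
```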

Properties of Breadth-first
- Complete? Yes, if the branching factor b is finite
- Time? 1 + b + b^2 + b^3 + ... + b^d = O(b^d), i.e. exponential in d
- Space? O(b^d); all nodes are kept in memory
- Optimal? Yes if cost = 1 per step; not optimal in general

Properties of Breadth-first (cont.)
Assuming b = 10, 1 node expanded per millisecond, and 100 bytes per node
[table of resulting time and memory requirements omitted]
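Under exactly these assumptions, the kind of figures the (omitted) table reported can be regenerated with a few lines of Python (the chosen depths are arbitrary):

```python
# Assumptions from the slide: b = 10, 1 node per millisecond, 100 bytes per node.
b, seconds_per_node, bytes_per_node = 10, 1e-3, 100

for d in (2, 4, 6, 8):
    nodes = sum(b**i for i in range(d + 1))        # 1 + b + b^2 + ... + b^d
    print(f"d={d}: {nodes:,} nodes, "
          f"{nodes * seconds_per_node:,.1f} s, "
          f"{nodes * bytes_per_node / 1e6:,.1f} MB")
```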

Uniform-cost

Uniform-cost
QUEUING-FN: insert nodes in order of increasing path cost
[figure: Arad expanded with step costs; Zerind 75, Timisoara 118, Sibiu 140; Sibiu expands to Arad 140, Oradea 151, Fagaras 99, Rimnicu Vilcea 80; Zerind expands to Arad 75, Oradea 71; Timisoara expands to Arad 118, Lugoj 111]
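A standalone uniform-cost sketch using a priority queue ordered by path cost g. The `roads` fragment below uses the usual Romania road distances, which go beyond what this slide shows and are an assumption here:

```python
import heapq

# Step costs (road distances) for a fragment of the Romania map, directed toward Bucharest.
roads = {
    "Arad": [("Zerind", 75), ("Sibiu", 140), ("Timisoara", 118)],
    "Zerind": [("Oradea", 71)],
    "Oradea": [("Sibiu", 151)],
    "Timisoara": [("Lugoj", 111)],
    "Lugoj": [],
    "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Bucharest", 211)],
    "Rimnicu Vilcea": [("Pitesti", 97)],
    "Pitesti": [("Bucharest", 101)],
    "Bucharest": [],
}

def uniform_cost(graph, start, goal):
    frontier = [(0, start, [start])]               # priority queue ordered by path cost g
    best_g = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        for nxt, step in graph[state]:
            new_g = g + step
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g, nxt, path + [nxt]))
    return None

print(uniform_cost(roads, "Arad", "Bucharest"))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```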

Properties of Uniform-cost
- Complete? Yes, if every step cost >= some epsilon > 0
- Time? The number of nodes with path cost <= the cost of the optimal solution
- Space? Likewise, the number of nodes with path cost <= the cost of the optimal solution
- Optimal? Yes

Depth-first
QUEUING-FN: insert successors at the front of the queue (LIFO)
[figure: depth-first expansion of the Arad search tree: Arad's children Zerind, Sibiu, Timisoara, with the Zerind branch expanded first (Oradea)]
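Again continuing the hypothetical `general_search` / `romania` sketch, depth-first search only swaps in a LIFO queuing function:

```python
def depth_first(frontier, successors):
    frontier.extendleft(reversed(successors))   # LIFO: new nodes go to the front
    return frontier

print(general_search(romania, depth_first))
# ['Arad', 'Zerind', 'Oradea', 'Sibiu', 'Fagaras', 'Bucharest']  (deep before broad)
```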

Properties of Depth-first
- Complete? No: fails in infinite-depth spaces and in spaces with loops; complete in finite spaces
- Time? O(b^m); bad if the maximum depth m is much larger than the solution depth d
- Space? O(bm), i.e. linear space
- Optimal? No

Depth-limited
- Apply depth-first search with a depth limit, e.g. 19 for the Romania cities
- Works well if we know the depth of the solution
- Otherwise, use iterative deepening search (IDS)

Properties of Depth-limited
- Complete? Yes, if the limit l >= the depth of the solution d
- Time? O(b^l)
- Space? O(bl), i.e. linear space
- Optimal? No

Iterative Deepening Search (IDS)

function ITERATIVE-DEEPENING-SEARCH():
    for depth = 0 to infinity do
        if DEPTH-LIMITED-SEARCH(depth) succeeds then return its result
    end
    return failure
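A minimal recursive sketch of depth-limited search plus the iterative-deepening wrapper, reusing the `romania` adjacency dictionary assumed in the general-search sketch above:

```python
def depth_limited(graph, state, goal, limit, path=None):
    path = path or [state]
    if state == goal:
        return path
    if limit == 0:
        return None                               # cutoff reached
    for nxt in graph[state]:
        if nxt not in path:                       # do not loop back along this path
            result = depth_limited(graph, nxt, goal, limit - 1, path + [nxt])
            if result is not None:
                return result
    return None

def iterative_deepening(graph, start, goal, max_depth=50):
    for depth in range(max_depth + 1):            # depth limit 0, 1, 2, ...
        result = depth_limited(graph, start, goal, depth)
        if result is not None:
            return result
    return "failure"

print(iterative_deepening(romania["graph"], "Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```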

Properties of IDS
- Complete? Yes
- Time? (d + 1)b^0 + d b^1 + (d - 1)b^2 + ... + b^d = O(b^d)
- Space? O(bd)
- Optimal? Yes, if step cost = 1

Comparisons

Summary
- Various uninformed search strategies
- Iterative deepening is linear in space and uses not much more time than the others
- Use bidirectional iterative deepening where possible

Island Search
Suppose that you happen to know that the optimal solution goes through Rimnicu Vilcea…

Island Search
Suppose that you happen to know that the optimal solution goes through Rimnicu Vilcea…
[figure: Romania map with Rimnicu Vilcea highlighted as an intermediate "island"]

A* Search
Uses the evaluation function f = g + h
- g is the cost function: total cost incurred so far from the initial state (used by uniform-cost search)
- h is an admissible heuristic: an estimate of the remaining cost to the goal state (used by greedy search); never overestimating is what makes h admissible

A*: Our Heuristic
[table of heuristic values h(n) omitted; the values used on the next slide are straight-line distances to Bucharest]

A*
QUEUING-FN: insert nodes in order of f(n) = g(n) + h(n)
[figure: Arad expanded to Zerind, Sibiu, and Timisoara]
g(Zerind) = 75, g(Timisoara) = 118, g(Sibiu) = 140
h(Zerind) = 374, h(Sibiu) = 253, h(Timisoara) = 329
f(Zerind) = 75 + 374
f(Sibiu) = …
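A sketch of A* reusing the `roads` dictionary from the uniform-cost sketch; the `h_sld` table of straight-line distances to Bucharest follows the standard Russell & Norvig values and is an assumption beyond what the slide lists:

```python
import heapq

h_sld = {   # straight-line distance to Bucharest: admissible, never overestimates
    "Arad": 366, "Zerind": 374, "Oradea": 380, "Timisoara": 329, "Lugoj": 244,
    "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100,
    "Bucharest": 0,
}

def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]    # priority queue ordered by f = g + h
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        for nxt, step in graph[state]:
            new_g = g + step
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h[nxt], new_g, nxt, path + [nxt]))
    return None

print(a_star(roads, h_sld, "Arad", "Bucharest"))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```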

Properties of A*
- Optimal and complete; admissibility of h guarantees optimality of A*
- Becomes uniform-cost search if h = 0
- Reduces the time bound from O(b^d) to O(b^(d - e)), where b is the asymptotic branching factor of the tree, d is the average depth of the search, and e is the expected value of the heuristic h
- Exponential memory usage of O(b^d), the same as BFS and uniform-cost; but an iterative deepening version is possible … IDA*

IDA*
- Solves A*'s memory problem: reduces memory usage from O(b^d) to O(bd)
- Many more problems become feasible
- Easier to implement than A*: no need to store previously visited nodes
- The AI search problem is transformed: it is now a problem of developing a good admissible heuristic
- Like The Price is Right, the closer a heuristic comes without going over, the better it is
- Heuristics with even slightly higher expected values can yield significant performance gains
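A compact IDA* sketch, continuing the `roads` / `h_sld` assumptions from the A* sketch above; only the current path is kept in memory, which is where the O(bd) space bound comes from:

```python
def ida_star(graph, h, start, goal):
    def search(path, g, bound):
        state = path[-1]
        f = g + h[state]
        if f > bound:
            return f, None                        # smallest f that exceeded the bound
        if state == goal:
            return f, list(path)
        minimum = float("inf")
        for nxt, step in graph[state]:
            if nxt not in path:                   # only the current path is stored
                t, found = search(path + [nxt], g + step, bound)
                if found is not None:
                    return t, found
                minimum = min(minimum, t)
        return minimum, None

    bound = h[start]
    while True:                                   # deepen the f-bound, like IDS deepens depth
        bound, found = search([start], 0, bound)
        if found is not None:
            return found
        if bound == float("inf"):
            return "failure"

print(ida_star(roads, h_sld, "Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest']
```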

A* "trick"
Suppose you have two admissible heuristics…
- If h1(n) > h2(n) everywhere, you may as well forget h2(n)
- But if sometimes h1(n) > h2(n) and sometimes h1(n) < h2(n), we can define a better heuristic h3:
  h3(n) = max( h1(n), h2(n) )

What difference does the heuristic make?
Suppose you have three admissible heuristics for the 8-puzzle…
- h1(n) = 0 (same as uniform cost)
- h2(n) = number of misplaced tiles
- h3(n) = total Manhattan distance
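Illustrative Python versions of these three heuristics for the 8-puzzle; representing a state as a 9-tuple with 0 for the blank is an assumption, not the slide's notation:

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)     # 0 marks the blank square

def h1(state):
    """No heuristic information at all (uniform cost)."""
    return 0

def h2(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != GOAL[i])

def h3(state):
    """Sum of Manhattan distances of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        g = GOAL.index(tile)
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total

s = (7, 2, 4, 5, 0, 6, 8, 3, 1)        # an arbitrary scrambled state
print(h1(s), h2(s), h3(s))             # h3 >= h2 >= h1 on every state, and all are admissible
```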

Effective Branching Factor
Search cost (nodes generated) and effective branching factor for A* with h1, h2, h3, as a function of solution depth d:

 d | A*(h1)       A*(h2)   A*(h3)  | EBF(h1)  EBF(h2)  EBF(h3)
 2 | 10           6        6       | 2.45     1.79     1.79
 4 | 112          13       12      | 2.87     1.48     1.45
 6 | 680          20       18      | 2.73     1.34     1.30
 8 | 6384         39       25      | 2.80     1.33     1.24
10 | 47127        93       39      | 2.79     1.38     1.22
12 | 364404       227      73      | 2.78     1.42     1.24
14 | 3473941      539      113     | 2.83     1.44     1.23
16 | Big number   1301     211     | -        1.45     1.25
18 | Real big num 3056     363     | -        1.46     1.26

Game Search (Adversarial Search)
The study of games is called game theory, a branch of economics.
We'll consider a special kind of game:
- Deterministic
- Two-player
- Zero-sum
- Perfect information

Games
A zero-sum game means that the players' utility values at the end of the game sum to 0, e.g. +1 for winning, -1 for losing, 0 for a tie.
Examples: chess, checkers, tic-tac-toe, etc.

Problem Formulation
- Initial state: initial board position, player to move
- Operators: return a list of (move, state) pairs, one per legal move
- Terminal test: determines when the game is over
- Utility function: numeric value for terminal states, e.g. chess: +1, -1, 0

Game Tree

Game Trees
- Each level is labeled with the player to move: MAX if the player wants to maximize utility, MIN if the player wants to minimize utility
- Each level represents a ply (half a turn)

Optimal Decisions
- MAX wants to maximize utility, but knows MIN is trying to prevent that
- MAX wants a strategy for maximizing utility, assuming MIN will do its best to minimize MAX's utility
- Consider the minimax value of each node: the utility of the node assuming both players play optimally

Minimax Algorithm
- Calculate the minimax value of each node recursively
- Depth-first exploration of the game tree (aka minimax tree)
[figure legend: MAX node and MIN node symbols]
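A minimal recursive minimax sketch on a small hand-built tree (the tree and its leaf utilities are made up for illustration):

```python
def minimax(node, maximizing):
    """Minimax value of a node; leaves are plain numbers, inner nodes are lists."""
    if isinstance(node, (int, float)):            # terminal node: utility value
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A depth-2 game tree: MAX to move at the root, MIN at the next level.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, maximizing=True))             # 3: MAX chooses the leftmost branch
```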

Example
[figure: minimax example tree; MAX at the root, MIN at the next level, leaf utility values shown; the root's minimax value is 5]

Minimax Algorithm
- Time complexity? O(b^m)
- Space complexity? O(bm), or O(m) if successors are generated one at a time
- Is this practical? For chess, b ≈ 35 and m ≈ 100 (50 moves per player), so about 35^100 ≈ 10^154 nodes to visit

Alpha-Beta Pruning
- An improvement on the minimax algorithm: effectively cuts the exponent in half
- Prunes (cuts out) large parts of the tree
- Basic idea: once you know that a subtree is worse than another option, don't waste time figuring out exactly how much worse
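A sketch of alpha-beta pruning on the same kind of hand-built tree; it returns the same value as plain minimax while skipping branches that cannot affect the result:

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):            # terminal node: utility value
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                             # MIN would never allow this branch
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break                             # MAX would never allow this branch
        return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, maximizing=True))           # 3, same as minimax, with fewer leaves examined
```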

Alpha-Beta Pruning Example
[figure: alpha-beta example tree with leaf values 3, 5, 2, etc.; branches marked "α pruned" and "β pruned" are cut off without being fully evaluated]