Searching for Solutions


Searching for Solutions Artificial Intelligence CMSC 25000 January 17, 2008

Agenda
- Search review: problem-solving agents, rigorous problem definitions
- Heuristic search: hill-climbing, beam, best-first search
- Efficient, optimal search: A*, admissible heuristics
- Search analysis: computational cost, limitations

Problem-Solving Agents
- Goal-based agents: identify a goal and a sequence of actions that satisfies it
- Goal: a set of satisfying world states; a precise specification of what to achieve
- Problem formulation: identify the states and actions to consider in achieving the goal; given a set of actions, consider the action sequences leading to states with some value
- Search: the process of looking for such a sequence; problem -> action-sequence solution

Formal Problem Definitions
Key components:
- Initial state: e.g., the first location
- Available actions, as a successor function: the states reachable in one step
- Goal test: the conditions for goal satisfaction
- Path cost: the cost of a sequence from the initial state to a reachable state
A solution is a path from the initial state to the goal; it is optimal if it has the lowest cost.

Heuristic Search
- A little knowledge is a powerful thing: order choices to explore better options first
- More knowledge => less search: not a better search algorithm, but a better search space
- Heuristic: a measure of the remaining cost to the goal, e.g., actual road distance estimated by straight-line distance
[Figure: example search graph with straight-line-distance estimates: S 11.0, A 10.4, B 6.7, C 4.0, D 8.9, E 6.9, F 3.0; G is the goal]

Hill-climbing Search
Select the child to expand that is closest to the goal.
[Figure: hill-climbing trace on the example graph: from S, expand D (8.9) rather than A (10.4); from D, expand E (6.9); from E, expand F (3.0); from F, reach G]

Hill-climbing Search Algorithm
1. Form a one-element queue containing the zero-cost root node
2. Until the first path in the queue ends at the goal, or no paths remain:
   - Remove the first path from the queue; extend it one step
   - Reject all paths with loops
   - Sort the new paths by estimated distance to the goal
   - Add the new paths to the FRONT of the queue
3. If the goal is found => success; else, failure
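The steps above can be sketched in Python. The graph below is reconstructed from the example slides; the edge costs are an assumption chosen to match the f-values in the A* example slide, and the straight-line estimates are those shown in the figures.

```python
# Example graph; edge costs reconstructed from the slides' example (assumed).
GRAPH = {
    "S": {"A": 3, "D": 4}, "A": {"S": 3, "B": 4, "D": 5},
    "B": {"A": 4, "C": 4, "E": 5}, "C": {"B": 4},
    "D": {"S": 4, "A": 5, "E": 2}, "E": {"B": 5, "D": 2, "F": 4},
    "F": {"E": 4, "G": 3}, "G": {"F": 3},
}
# Straight-line-distance heuristic values from the slides.
H = {"S": 11.0, "A": 10.4, "B": 6.7, "C": 4.0,
     "D": 8.9, "E": 6.9, "F": 3.0, "G": 0.0}

def hill_climb(graph, h, start, goal):
    queue = [[start]]                      # one-element queue: the root path
    while queue:
        path = queue.pop(0)                # remove the first path
        if path[-1] == goal:               # first path ends at goal => success
            return path
        # extend one step, rejecting paths with loops
        new_paths = [path + [n] for n in graph[path[-1]] if n not in path]
        new_paths.sort(key=lambda p: h[p[-1]])  # sort by estimated distance
        queue = new_paths + queue          # add new paths to the FRONT
    return None                            # no paths left => failure
```

On this graph the search follows S -> D -> E -> F -> G, matching the trace on the hill-climbing slide.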

Beam Search
- Breadth-first search of fixed width: keep only the top w paths at each level
- Guarantees a limited branching factor, e.g., w = 2
[Figure: beam-search trace with w = 2 on the example graph; at each level, only the two paths with the best estimates survive]

Beam Search Algorithm
1. Form a one-element queue containing the zero-cost root node
2. Until the first path in the queue ends at the goal, or no paths remain:
   - Extend all paths one step
   - Reject all paths with loops
   - Sort all paths in the queue by estimated distance to the goal
   - Keep only the top w paths in the queue
3. If the goal is found => success; else, failure
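A sketch of the loop above, reusing the reconstructed example graph (edge costs are an assumption; the heuristic values are from the slides):

```python
# Example graph; edge costs reconstructed from the slides' example (assumed).
GRAPH = {
    "S": {"A": 3, "D": 4}, "A": {"S": 3, "B": 4, "D": 5},
    "B": {"A": 4, "C": 4, "E": 5}, "C": {"B": 4},
    "D": {"S": 4, "A": 5, "E": 2}, "E": {"B": 5, "D": 2, "F": 4},
    "F": {"E": 4, "G": 3}, "G": {"F": 3},
}
H = {"S": 11.0, "A": 10.4, "B": 6.7, "C": 4.0,
     "D": 8.9, "E": 6.9, "F": 3.0, "G": 0.0}

def beam_search(graph, h, start, goal, w=2):
    queue = [[start]]
    while queue:
        if queue[0][-1] == goal:           # first path ends at goal => success
            return queue[0]
        # extend ALL paths one step, rejecting loops
        new_paths = [p + [n] for p in queue
                     for n in graph[p[-1]] if n not in p]
        new_paths.sort(key=lambda p: h[p[-1]])  # sort by estimated distance
        queue = new_paths[:w]              # keep only the top w paths
    return None
```

Because only w paths survive each level, memory stays bounded, but the search is incomplete: with too small a beam, the only paths to the goal can be pruned away.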

Best-first Search
Expand the best open node ANYWHERE in the tree:
1. Form a one-element queue containing the zero-cost root node
2. Until the first path in the queue ends at the goal, or no paths remain:
   - Remove the first path from the queue; extend it one step
   - Reject all paths with loops
   - Put the new paths in the queue
   - Sort all paths by estimated distance to the goal
3. If the goal is found => success; else, failure
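The only change from hill-climbing is that the whole queue is re-sorted, so the globally best open node is expanded next. A sketch on the same reconstructed example graph (edge costs assumed):

```python
# Example graph; edge costs reconstructed from the slides' example (assumed).
GRAPH = {
    "S": {"A": 3, "D": 4}, "A": {"S": 3, "B": 4, "D": 5},
    "B": {"A": 4, "C": 4, "E": 5}, "C": {"B": 4},
    "D": {"S": 4, "A": 5, "E": 2}, "E": {"B": 5, "D": 2, "F": 4},
    "F": {"E": 4, "G": 3}, "G": {"F": 3},
}
H = {"S": 11.0, "A": 10.4, "B": 6.7, "C": 4.0,
     "D": 8.9, "E": 6.9, "F": 3.0, "G": 0.0}

def best_first(graph, h, start, goal):
    queue = [[start]]
    while queue:
        path = queue.pop(0)                # best open node anywhere in tree
        if path[-1] == goal:
            return path
        # extend one step, reject loops, put new paths in the queue
        queue.extend(path + [n] for n in graph[path[-1]] if n not in path)
        queue.sort(key=lambda p: h[p[-1]])  # sort ALL paths by estimate
    return None
```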

Heuristic Search Issues
Parameter-oriented hill climbing: make one-step adjustments to all parameters (e.g., tuning brightness, contrast, r, g, b on a TV) and test the effect on the performance measure.
Problems:
- Foothill problem (aka local maximum): all one-step changes make things worse, but this is not the global maximum
- Plateau problem: one-step changes produce no change in the figure of merit
- Ridge problem: all one-step moves go downhill, yet the point is not even a local maximum
Solution (for local maxima): randomize and restart!

Optimal Search
Find the BEST path to the goal, and find it EFFICIENTLY.
- Exhaustive search: try all paths, return the best
- Optimal paths with less work:
  - Expand the shortest paths first
  - Expand the shortest expected paths first
  - Eliminate repeated work: dynamic programming

Efficient Optimal Search
Find the best path without exploring all paths, using knowledge about path lengths:
- Maintain each path and its length
- Expand the shortest paths first
- Halt when every partial path's length exceeds the best complete path's length
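"Expand the shortest paths first" can be sketched as a branch-and-bound loop: because the cheapest open path is always expanded next, the first complete path removed from the queue is optimal. The example graph is the reconstruction used earlier (edge costs assumed to match the slides' f-values).

```python
import heapq

# Example graph; edge costs reconstructed from the slides' example (assumed).
GRAPH = {
    "S": {"A": 3, "D": 4}, "A": {"S": 3, "B": 4, "D": 5},
    "B": {"A": 4, "C": 4, "E": 5}, "C": {"B": 4},
    "D": {"S": 4, "A": 5, "E": 2}, "E": {"B": 5, "D": 2, "F": 4},
    "F": {"E": 4, "G": 3}, "G": {"F": 3},
}

def branch_and_bound(graph, start, goal):
    # priority queue keyed on path cost g so far
    frontier = [(0, [start])]
    while frontier:
        g, path = heapq.heappop(frontier)  # cheapest open path
        if path[-1] == goal:
            return g, path                 # shortest open path is complete => optimal
        for n, cost in graph[path[-1]].items():
            if n not in path:              # reject loops
                heapq.heappush(frontier, (g + cost, path + [n]))
    return None
```

On this graph the cheapest route is S -> D -> E -> F -> G with cost 4 + 2 + 4 + 3 = 13.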

Underestimates
Improve the estimate of the complete path length by adding an (under)estimate of the remaining distance:
f(total path) = g(partial path) + h(remaining)
- Underestimates must ultimately yield the shortest path: stop when all f(total path) exceed the g of the best complete path
- Straight-line distance => an underestimate of road distance
- A better estimate => a better search, with fewer missteps

Search with Dynamic Programming
Avoid duplicating work. The dynamic programming principle: the shortest path from S to G through I is the shortest path from S to I plus the shortest path from I to G. There is no need to consider other routes to or from I.

Optimality of A* (proof)
Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.
- f(G2) = g(G2), since h(G2) = 0
- g(G2) > g(G), since G2 is suboptimal
- f(G) = g(G), since h(G) = 0
- f(G2) > f(G), from the above

Optimality of A* (proof, continued)
Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.
- f(G2) > f(G), from above
- h(n) ≤ h*(n), since h is admissible
- g(n) + h(n) ≤ g(n) + h*(n), so f(n) ≤ f(G)
- Hence f(G2) > f(n), and A* will never select G2 for expansion

Consistent Heuristics
A heuristic is consistent if, for every node n and every successor n' of n generated by any action a,
h(n) ≤ c(n, a, n') + h(n')
If h is consistent, we have
f(n') = g(n') + h(n') = g(n) + c(n, a, n') + h(n') ≥ g(n) + h(n) = f(n)
i.e., f(n) is non-decreasing along any path.
Theorem: if h(n) is consistent, A* using GRAPH-SEARCH is optimal.
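The edge-by-edge condition above is mechanically checkable. A small sketch that verifies consistency of a heuristic table over a graph (the example graph and heuristic are the reconstruction used earlier; the tolerance handles floating-point rounding):

```python
# Example graph; edge costs reconstructed from the slides' example (assumed).
GRAPH = {
    "S": {"A": 3, "D": 4}, "A": {"S": 3, "B": 4, "D": 5},
    "B": {"A": 4, "C": 4, "E": 5}, "C": {"B": 4},
    "D": {"S": 4, "A": 5, "E": 2}, "E": {"B": 5, "D": 2, "F": 4},
    "F": {"E": 4, "G": 3}, "G": {"F": 3},
}
H = {"S": 11.0, "A": 10.4, "B": 6.7, "C": 4.0,
     "D": 8.9, "E": 6.9, "F": 3.0, "G": 0.0}

def is_consistent(graph, h, goal, eps=1e-9):
    # h must be 0 at the goal and satisfy h(n) <= c(n, n') + h(n') on every edge
    if h[goal] != 0:
        return False
    return all(h[n] <= cost + h[nprime] + eps
               for n, neighbors in graph.items()
               for nprime, cost in neighbors.items())
```

The straight-line-distance table from the slides passes this check; inflating any single value (say h(F)) far enough breaks the triangle inequality on its outgoing edges and the check fails.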

A* Search Algorithm
Combines the good ideas from optimal search: dynamic programming and underestimates.
1. Form a one-element queue containing the zero-cost root node
2. Until the first path in the queue ends at the goal, or no paths remain:
   - Remove the first path from the queue; extend it one step
   - Reject all paths with loops
   - For all paths with the same terminal node, keep only the shortest
   - Add the new paths to the queue
   - Sort all paths by total length underestimate, shortest first: g(partial path) + h(remaining)
3. If the goal is found => success; else, failure
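A direct, if inefficient, sketch of these steps (the example graph is the reconstruction used earlier; edge costs are an assumption, and recomputing g for every path each round is for clarity, not speed):

```python
# Example graph; edge costs reconstructed from the slides' example (assumed).
GRAPH = {
    "S": {"A": 3, "D": 4}, "A": {"S": 3, "B": 4, "D": 5},
    "B": {"A": 4, "C": 4, "E": 5}, "C": {"B": 4},
    "D": {"S": 4, "A": 5, "E": 2}, "E": {"B": 5, "D": 2, "F": 4},
    "F": {"E": 4, "G": 3}, "G": {"F": 3},
}
H = {"S": 11.0, "A": 10.4, "B": 6.7, "C": 4.0,
     "D": 8.9, "E": 6.9, "F": 3.0, "G": 0.0}

def a_star(graph, h, start, goal):
    def g(path):  # actual cost of a partial path
        return sum(graph[a][b] for a, b in zip(path, path[1:]))

    queue = [[start]]
    while queue:
        path = queue.pop(0)
        if path[-1] == goal:
            return path
        # extend one step, rejecting loops
        queue.extend(path + [n] for n in graph[path[-1]] if n not in path)
        # dynamic programming: keep only the cheapest path to each terminal node
        best = {}
        for p in queue:
            if p[-1] not in best or g(p) < g(best[p[-1]]):
                best[p[-1]] = p
        # sort by total length underestimate f = g + h, shortest first
        queue = sorted(best.values(), key=lambda p: g(p) + h[p[-1]])
    return None
```

With the admissible straight-line heuristic this returns the optimal route S -> D -> E -> F -> G, and its expansion order matches the A* example slide.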

A* Search Example
[Figure: A* trace on the example graph, labeled with f = g + h: S expands to A (f = 13.4) and D (f = 12.9); expanding D yields A (19.4) and E (12.9); expanding E yields B (17.7) and F (13); expanding F yields G (f = 13), which is selected]

Heuristics
An A* search is only as good as its heuristic. Heuristic requirements:
- Admissible: an UNDERESTIMATE of the true remaining cost to the goal
- Consistent: h(n) ≤ c(n, a, n') + h(n')

Constructing Heuristics
Relaxation: state the problem, remove one or more constraints, and ask what the cost becomes.
Example, the 8-puzzle: move tile A to cell B if 1) A and B are horizontally or vertically adjacent, and 2) B is empty.
- Ignore 2) -> Manhattan distance
- Ignore 1) and 2) -> number of misplaced tiles
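Both relaxed-problem heuristics are a few lines each. A sketch, assuming the standard 3x3 layout with tiles 1-8 and 0 for the blank (the goal arrangement below is an assumption; the slides do not fix one):

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # assumed goal layout; 0 is the blank

def misplaced(state, goal=GOAL):
    # relaxation: a tile may move anywhere in one step =>
    # count tiles (excluding the blank) not in their goal position
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal=GOAL, width=3):
    # relaxation: a tile may move to any adjacent cell, occupied or not =>
    # sum over tiles of |row difference| + |column difference| to goal cell
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        j = goal.index(tile)
        total += abs(i // width - j // width) + abs(i % width - j % width)
    return total
```

Manhattan distance dominates the misplaced-tile count (every misplaced tile contributes at least 1), so it prunes more while remaining admissible.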