AI – Week 5 Implementing your own AI Planner in Prolog – part II : HEURISTICS Lee McCluskey, room 2/09

Last Week: Building your own forward searching planner IN PROLOG
[Diagram: search tree grown forward from the INITIAL STATE]
– Each node is modelled as a “state” + the sequence of operators that lead to the state + a heuristic value
– A search algorithm is COMPLETE if it can always find a solution if there is one
– A search algorithm is OPTIMAL if it always finds the shortest (minimal cost) solution

RECAP: Planning Algorithm in Prolog – breadth-first forward search through the state space:
1. Store the first node (initial state + empty solution)
Repeat
2. pick a node(State, Soln)
3. pick an operator and parameter grounding – 'O' – that can be applied to State
4. apply O to State to get State'
5. assert(node(State', Soln++[O]))
6. if possible, backtrack to 3. and make a different choice
until a node has been asserted that contains a solution
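The same loop written as a small self-contained Prolog program is sketched below. It is a queue-based rendering rather than the assert-based version used in the practical, and it assumes illustrative domain predicates initial_state/1, goal/1 and action/4 (STRIPS-style operators with preconditions, add and delete lists) – adapt the names to your own domain files.

% A queue-based sketch of the breadth-first forward planner.
% Assumed (illustrative) domain predicates:
%   initial_state(S)              - S is a list of literals
%   goal(Goals)                   - Goals is a list of literals to achieve
%   action(Name, Pre, Adds, Dels) - a STRIPS-style operator
% There is no repeated-state check yet - see Heuristic 1 below.

plan(Solution) :-
    initial_state(S0),
    bfs([node(S0, [])], Solution).

bfs([node(State, Soln) | _], Solution) :-
    goal(Goals),
    subset(Goals, State),              % every goal literal holds in State
    reverse(Soln, Solution).           % ops were collected most-recent-first
bfs([node(State, Soln) | Rest], Solution) :-
    findall(node(State1, [Op | Soln]),
            apply_action(State, Op, State1),
            Children),
    append(Rest, Children, Open1),     % FIFO queue => breadth-first
    bfs(Open1, Solution).

% apply_action(+State, -Name, -NewState): one applicable ground operator
apply_action(State, Name, NewState) :-
    action(Name, Pre, Adds, Dels),
    subset(Pre, State),
    subtract(State, Dels, S1),
    union(S1, Adds, NewState).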

Heuristic Search – Definitions
– Optimal – if a solution is found, it is the best solution (that is, it minimises some metric such as the length of the solution, or the amount of resource consumed)
– Complete – guaranteed to find a solution if there is one
– Efficiency – the amount of time / space required to find a solution

Heuristic Search – Definitions
BEST-FIRST search – repeat the following:
– collect the set of un-expanded (i.e. OPEN) nodes
– pick the node from the set that a heuristic function gives the best value
– mark it as closed, and expand it, adding the new open nodes to the set
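In the assert-based planner, that selection step might look like the following sketch, assuming open nodes are stored as node(State, Soln, Cost) facts (an illustrative representation, not the only possible one).

% Best-first selection: pick the open node with the lowest heuristic cost,
% move it to the closed set, and return it for expansion.

:- dynamic node/3, closed/3.

pick_best(node(State, Soln, Cost)) :-
    findall(C-node(S, P, C), node(S, P, C), Pairs),
    keysort(Pairs, [Cost-node(State, Soln, Cost) | _]),   % lowest cost first
    retract(node(State, Soln, Cost)),                     % remove from OPEN
    assertz(closed(State, Soln, Cost)).                   % record as CLOSED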

Heuristic Search – Definitions
Variation: “GREEDY” search – pick the node to expand that appears to be nearest the goal.
– That is, pick a node which appears to have the minimum “distance” between the node and a goal node.
– GREEDY search takes no account of the effort already spent in getting to that node in the first place – hence “greedy”.

Heuristics – Definitions
NON-GREEDY variation of best-first search: factor in the cost of getting to the current state.
Let the heuristic value of a node n be given by COST(n) = g(n) + h(n), where
– g(n) is the ACTUAL COST of the path to the current node
– h(n) is the ESTIMATED COST of reaching the goal from n

Heuristics – Example: Breadth-First Search
Breadth-first search, as carried out by the planner in last week's practical, is “best-first” in the sense that COST(n) = g(n) + h(n), where
– g(n) = the count of action applications, i.e. the COST of the path to get to node n
– h(n) = 0
This makes breadth-first search OPTIMAL and COMPLETE, but often hopelessly inefficient.

Admissible Heuristics
An ADMISSIBLE heuristic evaluation function is one that supplies an estimate of the cost of reaching a goal state from the current state, and never overestimates that cost.
Example: Goal – get from the current position P to a goal position G via the road network.
An ADMISSIBLE estimated cost is the straight-line distance between P and G.
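For the route-finding example, that straight-line estimate could be coded like this, with coord/3 standing in for (made-up) map data – purely illustrative.

% Straight-line ("as the crow flies") distance as an admissible estimate.
% coord(Place, X, Y) - assumed map data with invented coordinates.
coord(huddersfield, 0,  0).
coord(leeds,       15, 10).

% h(+Place, +GoalPlace, -Estimate): Euclidean distance; a road route can
% never be shorter than this, so the estimate never overestimates.
h(Place, GoalPlace, Estimate) :-
    coord(Place, X1, Y1),
    coord(GoalPlace, X2, Y2),
    Estimate is sqrt((X2 - X1)**2 + (Y2 - Y1)**2).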

The FAMOUS A* Property
An A* search algorithm is one that expands a node with the lowest cost, where
1. COST(n) = g(n) + h(n)
2. g(n) is the ACTUAL COST of the path to the current node (usually the number of operators/actions required, or the amount of resources consumed)
3. h(n) is an ADMISSIBLE estimated cost of reaching the goal from n
A* algorithms are OPTIMAL and COMPLETE.

Adding Heuristics to the Prolog Code
Heuristic 1: Prune the tree – don't visit/expand the same state twice, e.g. change step 5 to:
5. IF State' NOT EXPANDED BEFORE THEN assert(node(State', Soln++[O]))
+ve: In some domains this cuts down the search considerably. Does not affect the completeness of the search. Does not affect optimality in a breadth-first search.
-ve: Overhead in storing and searching through all previous states. Might not find ALL solutions.
See the website for an implementation of this (= don’t expand a node(S,Soln1) if a state with node(S,Soln2) is already in the open nodes…)
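A minimal sketch of the “not expanded before” test, using the node/2 representation from the recap (the website version differs in detail):

:- dynamic node/2, seen/1.

% Heuristic 1 sketch: only keep a successor whose state has not been
% generated before.  States are sorted into a canonical form so that the
% same set of literals always compares equal.

add_node(State1, Soln1) :-
    sort(State1, Canonical),
    (   seen(Canonical)
    ->  true                               % duplicate state: drop it
    ;   assertz(seen(Canonical)),
        assertz(node(State1, Soln1))
    ).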

Adding Heuristics to the Prolog Code
Heuristic 2: greedy search – COST = the estimated ‘effort’ to reach a solution from the current state, e.g. the number of goals still to be achieved:
5. assert(node(State', Soln++[O], COST))
2. pick a node(State, Soln, COST) WHERE COST HAS THE LOWEST VALUE (ignoring the cost/size of Soln)
i.e. evaluate the nodes BEFORE ASSERTING them, and store them with the cost attached.
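A sketch of how that evaluation might be coded, assuming goal/1 lists the goal literals and nodes are stored as node(State, Soln, Cost) facts as above (the predicate names are illustrative):

% Heuristic 2 sketch: the cost of a node is the number of goal literals
% not yet true in its state.

greedy_cost(State, Cost) :-
    goal(Goals),
    findall(G, ( member(G, Goals), \+ memberchk(G, State) ), Unsolved),
    length(Unsolved, Cost).

% Evaluate BEFORE asserting, and store the cost with the node:
add_node_greedy(State1, Soln1) :-
    greedy_cost(State1, Cost),
    assertz(node(State1, Soln1, Cost)).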

Heuristic 2 – Example
[Diagram: search tree from the INITIAL STATE. Goal = set of subgoals A&B&C&D&E. Nodes are labelled by the subgoals solved so far (A&B&C&D, A&B, A&B&C, C&D, A&B, E, C&D), with the greedy scores shown in red. GREEDY SEARCH: pick the node nearest the goal – the “A&B&C&D solved” node – to expand.]

Adding Heuristics to the Planner
Recall: node n = (state, [ops]), COST(n) = g(n) + h(n)
Heuristic 3: COST(n) = how many ops it took to get there (g(n) = length of [ops]), and h(n) = 0. Minimising COST(n) gives breadth-first search. Is h(n) admissible? Is this A*?
Heuristic 4: COST(n) = how many ops it took to get there (g(n) = length of [ops]), plus h(n) = how many subgoals are still to be solved.
-ve: crude – may work in some domains but not in others
+ve: negligible overhead
Is h(n) admissible? Is this A*?
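Heuristic 4 expressed as a cost function, reusing greedy_cost/2 from the sketch above (illustrative only):

% Heuristic 4 sketch: COST(n) = g(n) + h(n)
%   g(n) = number of operators applied so far (length of the plan)
%   h(n) = number of subgoals still unsolved (greedy_cost/2 above)

astar_cost(State, Soln, Cost) :-
    length(Soln, G),          % g(n): actual cost of the path so far
    greedy_cost(State, H),    % h(n): crude estimate of the remaining work
    Cost is G + H.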

Heuristic 4 – Example
[Diagram: the same search tree from the INITIAL STATE, with goal condition A&B&C&D&E and nodes labelled by the subgoals solved so far; the scores are shown in red and the node chosen for expansion is marked “PICK THIS NODE TO EXPAND”.]

Adding Heuristics – the PlanGraph
Heuristic 5: Relax the problem – take away some of the constraints.
To calculate the cost of a node n = (state, [ops]): calculate a solution plan from state while ignoring the delete lists of the planning operators (i.e. ignoring undoing effects and interference).
Let h(n) = the length of the shortest relaxed plan, so COST(n) = length of [ops] + h(n).
This is admissible: the shortest relaxed plan can never be longer than the real distance to the goal, so h(n) never overestimates.
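A possible sketch of a delete-relaxation estimate, using the illustrative action/4 facts from the first sketch. Instead of extracting a full relaxed plan it counts how many relaxed “layers” are needed before every goal becomes reachable – a coarser estimate than the shortest relaxed plan length, but still an underestimate and hence admissible.

:- use_module(library(ordsets)).

% Heuristic 5 sketch: relaxed reachability.  Ignoring delete lists, keep
% adding the add-effects of every applicable action until all goals are
% reachable; the number of layers needed is a lower bound on the length
% of any relaxed (and hence any real) plan, so it never overestimates.
% Fails if the goals are unreachable even in the relaxed problem.

relaxed_h(State, Goals, H) :-
    sort(State, S0),
    sort(Goals, G0),
    relaxed_layers(S0, G0, 0, H).

relaxed_layers(State, Goals, N, N) :-
    ord_subset(Goals, State), !.          % all goals reachable
relaxed_layers(State, Goals, N, H) :-
    findall(A, ( action(_, Pre, Adds, _),
                 subset(Pre, State),
                 member(A, Adds) ),
            NewFacts),
    sort(NewFacts, NewSet),
    ord_union(State, NewSet, State1),
    State1 \== State,                     % something new became reachable
    N1 is N + 1,
    relaxed_layers(State1, Goals, N1, H).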

Other Improvements to the Planner
See the website – another world + planner: the WEB SERVICE COMPOSITION world (simulates a web agent that needs to plan to achieve goals).
Improvement: I have added the ability to put EVALUABLE predicates in states, e.g. maths operators and assignment.
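One way such evaluable predicates might be handled is sketched below; this is only a guess at the mechanism (the holds/2 and eval/1 names are invented), so check the website code for the real implementation.

% Sketch: literals wrapped in eval/1 are executed as Prolog goals
% (e.g. arithmetic tests) instead of being looked up in the state.

holds(eval(Test), _State) :- !,
    call(Test).                       % e.g. eval(Balance >= 100)
holds(Literal, State) :-
    memberchk(Literal, State).

preconds_hold(Pre, State) :-
    forall(member(P, Pre), holds(P, State)).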

Summary
– It is easy to add heuristics to the breadth-first state-space planner to make it best-first.
– Relaxed problem solving can provide a very good admissible measure of h(n).