Fast Strong Planning for FOND Problems with Multi-Root DAGs
Jicheng Fu, Andres Calderon Jaramillo - University of Central Oklahoma; Vincent Ng, Farokh B. Bastani, and I-Ling Yen - University of Texas at Dallas

ABSTRACT

We present a planner for a difficult yet under-investigated class of planning problems: Fully Observable Non-Deterministic (FOND) planning problems with strong solutions. Our strong planner employs a new data structure, the MRDAG (multi-root directed acyclic graph), to define how the solution space should be expanded. We further equip an MRDAG with heuristics to keep the search focused in the relevant direction. We performed extensive experiments to evaluate MRDAGs and the heuristics. Results show that our strong-planning algorithm achieves impressive performance on a variety of benchmark problems: on average it runs more than three orders of magnitude faster than the state-of-the-art planners MBP and Gamer, and it demonstrates significantly better scalability.

BACKGROUND

In its broadest terms, artificial intelligence planning deals with designing algorithms that find a plan to achieve a goal under given constraints. In this context, a domain is a structure that describes the possible actions that can be used in building a plan. A planning problem for a given domain specifies the initial state of a system and a set of goals to achieve. A planner is an algorithm that solves a planning problem by finding a suitable set of actions in the domain that takes the system from the initial state to at least one goal state. FOND problems assume that each state of the system can be fully observed and that some actions in the domain may have more than one possible outcome (non-determinism). Solutions can be classified into three categories [Cimatti et al., 2003]: weak plans, strong cyclic plans, and strong plans. See Figure 1 and Figure 2.

OUR PLANNER

Our planner finds a strong plan if one exists. At each stage, states with a single applicable action are expanded until states with more than one applicable action are encountered. A set of actions is then selected and applied to those states. The procedure continues until the only non-expanded states are goal states, in which case a strong plan is returned. If dead ends are encountered, the algorithm backtracks to a previous stage; if the algorithm has to backtrack from the initial state, no strong plan exists. At each expansion, the planner checks that no cycle is produced. Each stage produces a multi-root directed acyclic graph (MRDAG), where the roots of the graph are the states with more than one applicable action. See Figure 3.

We use two heuristics to inform our planner (a simplified sketch of how they order the search follows below):
- Most Constrained State (MCS): expand states with fewer applicable actions first.
- Least Heuristic Distance (LHD): try applicable actions with the least estimated distance to the goal first.
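The following is a much-simplified sketch, in Python, of the search strategy described above; it is not the authors' implementation. It collapses the stage-wise MRDAG expansion into a plain depth-first AND-OR search, but it keeps the elements the text describes: cycle checking along each path, backtracking on dead ends, and the MCS and LHD orderings. The problem interface (is_goal, actions, results, h) and all names are hypothetical.

def strong_plan(problem, init):
    # Depth-first AND-OR search for a strong plan, returned as a policy:
    # a dict mapping each covered non-goal state to the action to take there.

    def solve(state, ancestors, plan):
        if problem.is_goal(state):
            return plan                    # goal states need no action
        if state in ancestors:
            return None                    # would close a cycle: not a strong plan
        if state in plan:
            return plan                    # already solved on another branch

        # LHD: try actions whose worst outcome looks closest to the goal first.
        acts = sorted(problem.actions(state),
                      key=lambda a: max(problem.h(t) for t in problem.results(state, a)))
        for a in acts:
            # MCS: expand the most constrained outcomes (fewest applicable
            # actions) first, so dead ends are discovered early.
            outcomes = sorted(problem.results(state, a),
                              key=lambda t: len(problem.actions(t)))
            candidate = dict(plan)
            candidate[state] = a
            for t in outcomes:             # every possible outcome must be covered
                candidate = solve(t, ancestors | {state}, candidate)
                if candidate is None:
                    break                  # dead end below: backtrack, try next action
            if candidate is not None:
                return candidate
        return None                        # no action works: no strong plan from here

    return solve(init, frozenset(), {})

In the authors' planner, expansion instead proceeds stage by stage: the states with more than one applicable action become the roots of a new MRDAG, and backtracking undoes a whole stage rather than a single action choice; the sketch only illustrates the ordering induced by MCS and LHD and the cycle and dead-end handling.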
Figure 1(a). A weak plan: there is at least one successful path to the goal.
Figure 1(b). A strong cyclic plan: the plan may use actions that can cause cycles but will likely succeed eventually.
Figure 1(c). A strong plan: the goal is achieved from any reachable state without using actions that cause cycles.
Figure 2. Example of a simple strong plan. The action pick-up(x, y) is non-deterministic, as it can succeed or fail (block x may fall on the table); the action put-down(x) is deterministic.
Figure 3. Expansion of the solution space. This graph illustrates how MRDAGs are structured and expanded: dark green nodes are the roots of an MRDAG; light green nodes are states with exactly one applicable action.

EVALUATION

Among the planners capable of solving strong FOND problems, the two best known are arguably MBP [Cimatti et al., 2003] and Gamer [Kissmann and Edelkamp, 2009]. We used domains derived from the FOND track of the 2008 International Planning Competition [Bryce and Buffet, 2008]. Gamer outperformed MBP in all domains. Nevertheless, our planner ran two to four orders of magnitude faster than Gamer, with comparable plan sizes in most cases.

REFERENCES

[Bryce and Buffet, 2008] Daniel Bryce and Olivier Buffet. International Planning Competition Uncertainty Part: Benchmarks and Results. In Proceedings of the International Planning Competition, 2008.
[Cimatti et al., 2003] Alessandro Cimatti, Marco Pistore, Marco Roveri, and Paolo Traverso. Weak, strong, and strong cyclic planning via symbolic model checking. Artificial Intelligence, 147(1-2):35-84, 2003.
[Kissmann and Edelkamp, 2009] Peter Kissmann and Stefan Edelkamp. Solving Fully-Observable Non-Deterministic Planning Problems via Translation into a General Game. In Proceedings of the 32nd Annual German Conference on Advances in Artificial Intelligence (KI'09), pages 1-8, Berlin, Heidelberg: Springer-Verlag, 2009.