Anifuddin Azis PLANNING

Artificial Intelligence A Modern Approach The subtitle of this book is "A Modern Approach." The intended meaning of this rather empty phrase is that we have tried to synthesize what is now known into a common framework, rather than trying to explain each subfield of AI in its own historical context. We apologize to those whose subfields are, as a result, less recognizable than they might otherwise have been. The main unifying theme is the idea of an intelligent agent. We define AI as the study of agents that receive percepts from the environment and perform actions. Each such agent implements a function that maps percept sequences to actions, and we cover different ways to represent these functions, such as production systems, reactive agents, real-time conditional planners, neural networks, and decision-theoretic systems. We explain the role of learning as extending the reach of the designer into unknown environments, and we show how that role constrains agent design, favoring explicit knowledge representation and reasoning. We treat robotics and vision not as independently defined problems, but as occurring in the service of achieving goals. We stress the importance of the task environment in determining the appropriate agent design.

Overview of the book The book is divided into eight parts. Part I, Artificial Intelligence, offers a view of the AI enterprise based around the idea of intelligent agents: systems that can decide what to do and then do it. Part II, Problem Solving, concentrates on methods for deciding what to do when one needs to think ahead several steps, for example in navigating across a country or playing chess. Part III, Knowledge and Reasoning, discusses ways to represent knowledge about the world (how it works, what it is currently like, and what one's actions might do) and how to reason logically with that knowledge.

Part IV, Planning, then discusses how to use these reasoning methods to decide what to do, particularly by constructing plans. Part V, Uncertain Knowledge and Reasoning, is analogous to Parts III and IV, but it concentrates on reasoning and decision making in the presence of uncertainty about the world, as might be faced, for example, by a system for medical diagnosis and treatment. Together, Parts II-V describe that part of the intelligent agent responsible for reaching decisions. Part VI, Learning, describes methods for generating the knowledge required by these decision-making components. Part VII, Communicating, Perceiving, and Acting, describes ways in which an intelligent agent can perceive its environment so as to know what is going on, whether by vision, touch, hearing, or understanding language, and ways in which it can turn its plans into real actions, either as robot motion or as natural language utterances. Finally, Part VIII, Conclusions, analyzes the past and future of AI and the philosophical and ethical implications of artificial intelligence.

Areas of AI and their inter-dependencies: Search, Vision, Planning, Machine Learning, Knowledge Representation, Logic, Expert Systems, Robotics, NLP.

Introduction to Planning. The purpose of planning is to find a sequence of actions that achieves a given goal when performed starting in a given state. What is a plan? A sequence of operator instances such that "executing" them in the initial state will change the world to a state satisfying the goal state description. Goals are usually specified as a conjunction of subgoals to be achieved.

Problem-solving agents are able to plan ahead, to consider the consequences of sequences of actions, before acting. Knowledge-based agents can select actions based on explicit, logical representations of the current state and the effects of actions. Problem-Solving Agents + Knowledge-Based Agents = Planning Agents.

Algorithm of a simple planning agent: 1. Generate a goal to achieve. 2. Construct a plan to achieve the goal from the current state. 3. Execute the plan until finished. 4. Begin again with a new goal.
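As a concrete illustration, here is a minimal Python sketch of that four-step loop. The callables current_state, generate_goal, make_plan, and execute are hypothetical placeholders for whatever goal formulation, planner, and actuator the agent uses; nothing here comes from a specific library.

```python
from typing import Any, Callable, Iterable

def planning_agent(current_state: Callable[[], Any],
                   generate_goal: Callable[[Any], Any],
                   make_plan: Callable[[Any, Any], Iterable[Any]],
                   execute: Callable[[Any], None]) -> None:
    """Simple planning-agent loop: generate a goal, plan, execute, repeat."""
    while True:
        goal = generate_goal(current_state())     # 1. generate a goal to achieve
        plan = make_plan(current_state(), goal)   # 2. construct a plan from the current state
        for action in plan:                       # 3. execute the plan until finished
            execute(action)
        # 4. begin again with a new goal on the next iteration
```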

Planning. The task of coming up with a sequence of actions that will achieve a goal is called planning. We studied how to take actions in the world (search). We studied how to represent objects, relations, etc. (logic). Now we will combine the two!

Planning Problem. Any problem that requires a sequence of decisions; for a single decision, look instead to machine learning, e.g. classification (given a picture, "is this a cat or a dog?"). Examples of planning problems: FreeCell, Sokoban, Micro-mouse, Bridge, football.

Space Exploration. Autonomous planning, scheduling, and control. NASA (JPL and Ames): the Remote Agent Experiment (RAX) on Deep Space 1, and the Mars Exploration Rover (MER).

Manufacturing. Sheet-metal bending machines (Amada Corporation): software to plan the sequence of bends [Gupta and Bourne, J. Manufacturing Sci. and Engr., 1999].

Games. Bridge Baron (Great Game Products), 1997 world champion of computer bridge [Smith, Nau, and Throop, AI Magazine, 1998]; 2nd place in 2004. [Slide figure omitted: an HTN decomposition of a bridge finesse into Finesse, FinesseTwo, StandardFinesse, EasyFinesse, BustedFinesse, LeadLow, and PlayCard steps, with East as declarer, West as dummy, and a 3NT contract.]

Planning Languages. Languages must represent: states, goals, actions. Languages must be expressive, for ease of representation, and flexible, for manipulation by algorithms.

State Representation. A state is represented by a conjunction of positive literals. Using logical propositions: Poor ∧ Unknown. Using FOL literals: At(Plane1,OMA) ∧ At(Plane2,JFK). FOL literals must be ground and function-free; not allowed: At(x,y) or At(Father(Fred),Sydney). Closed World Assumption: what is not stated is assumed false.
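A minimal Python sketch of this representation, assuming ground literals are stored as strings in a set; the specific literals are illustrative only.

```python
# A state is a set of ground, function-free positive literals; under the
# closed-world assumption, any literal not in the set is taken to be false.

initial_state = frozenset({
    "At(Plane1,OMA)", "At(Plane2,JFK)",
    "Plane(Plane1)", "Plane(Plane2)",
    "Airport(OMA)", "Airport(JFK)",
})

def holds(literal: str, state: frozenset) -> bool:
    """Closed-world assumption: a positive literal is true iff it is listed."""
    return literal in state

print(holds("At(Plane1,OMA)", initial_state))   # True
print(holds("At(Plane1,JFK)", initial_state))   # False: not stated, so assumed false
```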

Goal Representation. A goal is a partially specified state. A state satisfies a goal if it contains all the atoms of the goal (and possibly others). Example: Rich ∧ Famous ∧ Miserable satisfies the goal Rich ∧ Famous.

Action Representation. An action schema consists of an action name, preconditions, and effects. Example: Action(Fly(p,from,to), PRECOND: At(p,from) ∧ Plane(p) ∧ Airport(from) ∧ Airport(to), EFFECT: ¬At(p,from) ∧ At(p,to)). Sometimes the effects are split into an ADD list and a DELETE list. Instantiated example: Fly(WHI,LNK,OHA) has preconditions At(WHI,LNK), Plane(WHI), Airport(LNK), Airport(OHA) and effects At(WHI,OHA), ¬At(WHI,LNK).
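The following is a hedged Python sketch of an action schema with its effects split into ADD and DELETE lists, mirroring the Fly example above; the ActionSchema class and its field names are invented for illustration, not part of any standard planner API.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class ActionSchema:
    name: str
    parameters: Tuple[str, ...]
    preconditions: Tuple[str, ...]   # positive literals that must hold
    add_list: Tuple[str, ...]        # literals the action makes true
    delete_list: Tuple[str, ...]     # literals the action makes false

# Fly(p, from, to) from the slide, with EFFECT split into ADD and DELETE lists
fly = ActionSchema(
    name="Fly",
    parameters=("p", "from", "to"),
    preconditions=("At(p,from)", "Plane(p)", "Airport(from)", "Airport(to)"),
    add_list=("At(p,to)",),
    delete_list=("At(p,from)",),
)
```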

Applying an Action. Find a substitution list θ that matches the variables of all the precondition literals against (a subset of) the literals in the current state description. Apply the substitution to the propositions in the effect list, and add the result to the current state description to generate the new state. Example: current state At(P1,JFK) ∧ At(P2,SFO) ∧ Plane(P1) ∧ Plane(P2) ∧ Airport(JFK) ∧ Airport(SFO). It satisfies the precondition with θ = {p/P1, from/JFK, to/SFO}, so the action Fly(P1,JFK,SFO) is applicable. The new current state is: At(P1,SFO) ∧ At(P2,SFO) ∧ Plane(P1) ∧ Plane(P2) ∧ Airport(JFK) ∧ Airport(SFO).
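A minimal sketch of this step in Python, assuming the substitution θ = {p/P1, from/JFK, to/SFO} has already been applied so that the preconditions and effects are ground literals; the state update then amounts to set operations.

```python
def is_applicable(state: frozenset, preconditions: frozenset) -> bool:
    """The action is applicable if every (ground) precondition holds in the state."""
    return preconditions <= state

def apply_action(state: frozenset, add: frozenset, delete: frozenset) -> frozenset:
    """Remove the DELETE-list literals, then add the ADD-list literals."""
    return (state - delete) | add

state = frozenset({"At(P1,JFK)", "At(P2,SFO)", "Plane(P1)", "Plane(P2)",
                   "Airport(JFK)", "Airport(SFO)"})

# Fly(P1,JFK,SFO) after substituting {p/P1, from/JFK, to/SFO}
pre    = frozenset({"At(P1,JFK)", "Plane(P1)", "Airport(JFK)", "Airport(SFO)"})
add    = frozenset({"At(P1,SFO)"})
delete = frozenset({"At(P1,JFK)"})

if is_applicable(state, pre):
    new_state = apply_action(state, add, delete)   # contains At(P1,SFO), not At(P1,JFK)
```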

Example: Air Cargo (see Figure 11.2). Initial state; goal state; actions: Load, Unload, Fly.

Languages for Planning Problems. STRIPS (Stanford Research Institute Problem Solver): historically important. ADL (Action Description Language): see Table 11.1 for STRIPS versus ADL. PDDL (Planning Domain Definition Language): revised and enhanced for the needs of the International Planning Competition; currently version 3.1.

Example: Spare Tire Problem (see Figure 11.3). Initial state; goal state; actions: Remove(Spare,Trunk), Remove(Flat,Axle), PutOn(Spare,Axle), LeaveOvernight. Note that the negated precondition ¬At(Flat,Axle) is not allowed in STRIPS; it could easily be replaced with Clear(Axle), adding one more predicate to the language.

Example: Blocks World (see Figure 11.4). Initial state; goal; actions: Move(b,x,y), MoveToTable(b,x).

Outline. Background: situation calculus; the frame, qualification, and ramification problems; representation language. Planning algorithms: state-space search, partial-order planning (POP), planning graphs (GRAPHPLAN), SAT planners.

State-Space Search (1). Search the space of states, as in the first chapters: initial state, goal test, step cost, etc. Actions are the transitions between states. Actions are invertible (why?). Moving forward from the initial state: forward state-space search, or progression planning. Moving backward from the goal state: backward state-space search, or regression planning.

State-Space Search (2). [Slide figure omitted.]

State-Space Search (3). Remember that the language has no function symbols, so the number of states is finite, and we can use any complete search algorithm (e.g., A*); we need an admissible heuristic. The solution is a path, a sequence of actions: total-order planning. Problem: space and time complexity. STRIPS-style planning is PSPACE-complete unless actions have only positive preconditions and only one literal effect.
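To make progression planning concrete, here is a hedged sketch of a forward state-space planner using breadth-first search over ground actions; a real planner would ground the action schemas first and would typically use A* with an admissible heuristic instead of plain BFS.

```python
from collections import deque
from typing import Iterable, List, Optional, Tuple

# A ground action: (name, preconditions, add_list, delete_list)
GroundAction = Tuple[str, frozenset, frozenset, frozenset]

def progression_plan(initial: frozenset, goal: frozenset,
                     actions: Iterable[GroundAction]) -> Optional[List[str]]:
    """Breadth-first forward search from the initial state to a goal-satisfying state."""
    actions = list(actions)
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                        # all goal literals satisfied
            return plan
        for name, pre, add, delete in actions:
            if pre <= state:                     # action applicable in this state
                successor = (state - delete) | add
                if successor not in visited:
                    visited.add(successor)
                    frontier.append((successor, plan + [name]))
    return None                                  # no plan exists
```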

STRIPS in State-Space Search. The STRIPS representation makes it easy to focus on 'relevant' propositions, to work backward from the goal (using EFFECTS), and to work forward from the initial state (using PRECONDITIONS), facilitating bidirectional search.

Relevant Action. An action is relevant: in progression planning, when its preconditions match a subset of the current state; in regression planning, when its effects match a subset of the current goal state.

Consistent Action. The purpose of applying an action is to achieve a desired literal. We should be careful that the action does not undo a desired literal as a side effect. A consistent action is an action that does not undo a desired literal.

Backward State-Space Search. Given a goal description G and an action A that is relevant and consistent, generate a predecessor state in which the positive effects (literals) of A that appear in G are deleted and the precondition literals of A are added unless they already appear, substituting any variables in A's effects to match literals in G, and any variables in A's preconditions to match the substitutions in A's effects. Repeat until the predecessor description matches the initial state.
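A minimal Python sketch of a single regression step over ground actions (variable substitution is left out); the relevance and consistency tests match the definitions on the previous slides.

```python
from typing import Optional

def regress(goal: frozenset, pre: frozenset,
            add: frozenset, delete: frozenset) -> Optional[frozenset]:
    """Regress a goal through one ground action; return None if the action
    is not relevant (achieves no goal literal) or not consistent (undoes one)."""
    if not (add & goal):
        return None            # not relevant
    if delete & goal:
        return None            # not consistent
    # Delete the goal literals achieved by the action, then add its preconditions.
    return (goal - add) | pre

# Example: regress At(P1,SFO) through Fly(P1,JFK,SFO)
predecessor_goal = regress(
    goal=frozenset({"At(P1,SFO)"}),
    pre=frozenset({"At(P1,JFK)", "Plane(P1)", "Airport(JFK)", "Airport(SFO)"}),
    add=frozenset({"At(P1,SFO)"}),
    delete=frozenset({"At(P1,JFK)"}),
)
# Search continues from predecessor_goal until it is satisfied by the initial state.
```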

Heuristics to Speed Up Search. We can use A*, but we need an admissible heuristic: 1. divide-and-conquer (the subgoal independence assumption); problem relaxation by removing 2. all preconditions, 3. all preconditions and negative effects, or 4. negative effects only (the Empty-Delete-List heuristic).

1. Subgoal Independence Assumption. The cost of solving a conjunction of subgoals is estimated as the sum of the costs of solving each subgoal independently. This is optimistic where subplans interact negatively, for example when one action in a subplan deletes a goal achieved by an action in another subplan. It is pessimistic (and therefore not admissible) when redundant actions in the subplans could be replaced by a single action in the merged plan.
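As a rough sketch of how this assumption turns into a heuristic, the following assumes a hypothetical per-literal cost table (for example, the cost of achieving each subgoal on its own) and simply sums the costs of the unsatisfied subgoals.

```python
from typing import Dict

def subgoal_independence_h(goal: frozenset, state: frozenset,
                           literal_cost: Dict[str, int]) -> int:
    """Sum of independent subgoal costs: optimistic under negative interactions,
    possibly pessimistic (inadmissible) when subplans can share actions."""
    unsatisfied = goal - state
    # Literals with no recorded cost default to 0, keeping the estimate optimistic.
    return sum(literal_cost.get(lit, 0) for lit in unsatisfied)
```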

2. Problem Relaxation: Removing Preconditions. Remove the preconditions from the action descriptions: every action becomes applicable, and every literal in the goal is achievable in one step. The number of steps needed to achieve the conjunction of literals in the goal is then equal to the number of unsatisfied literals. Alert: some actions may achieve several literals, and some actions may remove the effects of other actions.

3. Remove Preconditions and Negative Effects. This relaxation considers only the positive interactions among actions in achieving multiple subgoals. The heuristic value is the minimum number of actions whose combined positive effects cover the goal. Computing this exactly is a set-cover problem, which is NP-hard, and approximating it with a greedy algorithm cannot guarantee an admissible heuristic.

4. Removing Negative Effects (Only). Remove all negative effects of actions, so that no action can destroy the effects of another. This is known as the Empty-Delete-List heuristic. It requires running a simple planning algorithm, but it is quick and effective, and usable in either progression or regression planning.
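A hedged sketch of the Empty-Delete-List idea: strip every delete list and hand the relaxed problem to a simple planner (for instance, the progression_plan sketch shown earlier), using the relaxed plan length as the heuristic value.

```python
from typing import Callable, Iterable, List, Optional, Tuple

GroundAction = Tuple[str, frozenset, frozenset, frozenset]
Planner = Callable[[frozenset, frozenset, Iterable[GroundAction]], Optional[List[str]]]

def empty_delete_list_h(state: frozenset, goal: frozenset,
                        actions: Iterable[GroundAction], planner: Planner) -> float:
    """Solve the relaxed problem (all delete lists removed) and return its plan length."""
    relaxed = [(name, pre, add, frozenset())       # no action destroys another's effects
               for name, pre, add, delete in actions]
    plan = planner(state, goal, relaxed)
    return len(plan) if plan is not None else float("inf")
```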

Discussion 1. The monkey-and-bananas problem is faced by a monkey in a laboratory with some bananas hanging out of reach from the ceiling. A box is available that will enable the monkey to reach the bananas if he climbs on it. Initially, the monkey is at A, the bananas at B, and the box at C. The monkey and box have height Low, but if the monkey climbs onto the box he will have height High, the same as the bananas. The actions available to the monkey include Go from one place to another, Push an object from one place to another, ClimbUp onto or ClimbDown from an object, and Grasp or Ungrasp an object. Grasping results in holding the object if the monkey and object are in the same place at the same height. a. Write down the initial state description. b. Write down STRIPS-style definitions of the six actions.
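As a hint for part (a) and for the format of part (b), here is one possible partial formulation in the set-of-literals style used above; the predicate names (At, Height), the constants (A, B, C, Low, High), and the single Go schema shown are illustrative choices, and the remaining actions are left as the exercise asks.

```python
# One possible initial-state description (part a), as ground positive literals.
initial_state = frozenset({
    "At(Monkey,A)", "At(Bananas,B)", "At(Box,C)",
    "Height(Monkey,Low)", "Height(Box,Low)", "Height(Bananas,High)",
})

# STRIPS-style sketch of one of the six actions (part b): Go(x, y).
# x and y are variables to be bound to places when the action is instantiated.
go_preconditions = frozenset({"At(Monkey,x)", "Height(Monkey,Low)"})
go_add_list      = frozenset({"At(Monkey,y)"})
go_delete_list   = frozenset({"At(Monkey,x)"})
```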

2. You are given two jugs, a 4-litre one and a 3-litre one. Neither has any measuring markers on it. There is a pump that can be used to fill the jugs with water. How can you get exactly 2 litres of water into the 4-litre jug? Write a description of this problem in STRIPS notation.