Machine Intelligence Cairo University Faculty of Engineering Computer Engineering Department Course Instructor: Prof. Dr. Nevin Darwish

Team members: Sylvia Boshra, Lydia Wahid, Madonna Samuel, Wessam Wagdy

Artificial Intelligence: A Modern Approach, 3rd Edition, Chapter 11

Planning and Acting in the Real World

Agenda
1. Recall classical planning
2. Types of the environment
3. Methods to deal with different types of the environment:
   I. Sensorless (Conformant) Planning
   II. Contingent Planning
   III. Online Replanning
4. Multiagent planning:
   I. Planning with multiple simultaneous actions
   II. Planning with multiple agents
5. Summary

1. Recall Classical Planning. Example: the spare tire problem.
Init(Tire(Flat) ∧ Tire(Spare) ∧ At(Flat, Axle) ∧ At(Spare, Trunk))
Goal(At(Spare, Axle))
Action(Remove(obj, loc), PRECOND: At(obj, loc), EFFECT: ¬At(obj, loc) ∧ At(obj, Ground))
Action(PutOn(t, Axle), PRECOND: Tire(t) ∧ At(t, Ground) ∧ ¬At(Flat, Axle), EFFECT: ¬At(t, Ground) ∧ At(t, Axle))
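As a side illustration (not part of the lecture), such STRIPS-style schemas can be encoded as add/delete lists over sets of literals. The Python below is a minimal sketch; the ground action instances and the separate negated-precondition field are assumptions of this particular encoding.

from typing import NamedTuple

class Action(NamedTuple):
    name: str
    precond: frozenset      # positive literals that must hold
    neg_precond: frozenset  # literals that must NOT hold
    add: frozenset          # literals the action makes true
    delete: frozenset       # literals the action makes false

def applicable(state, a):
    return a.precond <= state and not (a.neg_precond & state)

def result(state, a):
    return (state - a.delete) | a.add

init = frozenset({"Tire(Flat)", "Tire(Spare)", "At(Flat,Axle)", "At(Spare,Trunk)"})
goal = frozenset({"At(Spare,Axle)"})

remove_flat = Action("Remove(Flat,Axle)",
                     frozenset({"At(Flat,Axle)"}), frozenset(),
                     frozenset({"At(Flat,Ground)"}), frozenset({"At(Flat,Axle)"}))
remove_spare = Action("Remove(Spare,Trunk)",
                      frozenset({"At(Spare,Trunk)"}), frozenset(),
                      frozenset({"At(Spare,Ground)"}), frozenset({"At(Spare,Trunk)"}))
puton_spare = Action("PutOn(Spare,Axle)",
                     frozenset({"Tire(Spare)", "At(Spare,Ground)"}),
                     frozenset({"At(Flat,Axle)"}),   # the ¬At(Flat,Axle) precondition
                     frozenset({"At(Spare,Axle)"}), frozenset({"At(Spare,Ground)"}))

state = init
for a in (remove_flat, remove_spare, puton_spare):
    assert applicable(state, a), a.name
    state = result(state, a)
assert goal <= state   # At(Spare,Axle) now holds: the plan achieves the goal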

2. Types of the Environment: Fully Observable, Partially Observable, Non-Observable.

Example: Painting a chair and a table
Init(Object(Table) ∧ Object(Chair) ∧ Can(C1) ∧ Can(C2))
Goal(Color(Chair, c) ∧ Color(Table, c))
Action(RemoveLid(can), PRECOND: Can(can), EFFECT: Open(can))
Action(Paint(x, can), PRECOND: Object(x) ∧ Can(can) ∧ Color(can, c) ∧ Open(can), EFFECT: Color(x, c))
Percept(Color(x, c), PRECOND: Object(x) ∧ InView(x))
Percept(Color(can, c), PRECOND: Can(can) ∧ InView(can) ∧ Open(can))

Example (continued): the same problem with the agent initially looking at the table
Init(Object(Table) ∧ Object(Chair) ∧ Can(C1) ∧ Can(C2) ∧ InView(Table))
Goal(Color(Chair, c) ∧ Color(Table, c))
Action(RemoveLid(can), PRECOND: Can(can), EFFECT: Open(can))
Action(Paint(x, can), PRECOND: Object(x) ∧ Can(can) ∧ Color(can, c) ∧ Open(can), EFFECT: Color(x, c))
Percept(Color(x, c), PRECOND: Object(x) ∧ InView(x))
Percept(Color(can, c), PRECOND: Can(can) ∧ InView(can) ∧ Open(can))


Example (continued): adding an action to direct the agent's gaze
Init(Object(Table) ∧ Object(Chair) ∧ Can(C1) ∧ Can(C2) ∧ InView(Table))
Goal(Color(Chair, c) ∧ Color(Table, c))
Action(RemoveLid(can), PRECOND: Can(can), EFFECT: Open(can))
Action(Paint(x, can), PRECOND: Object(x) ∧ Can(can) ∧ Color(can, c) ∧ Open(can), EFFECT: Color(x, c))
Action(LookAt(x), PRECOND: InView(y) ∧ (x ≠ y), EFFECT: InView(x) ∧ ¬InView(y))

3. Methods to deal with different types of the environment:
I. Sensorless (Conformant) Planning
II. Contingent Planning
III. Online Replanning

I. Sensorless Planning. Belief state for the coloring problem: Object(Table) ∧ Object(Chair) ∧ Can(C1) ∧ Can(C2) ∧ Color(x, C(x)). Since the fixed facts are known to hold in every state, the initial belief state can be written b0 = Color(x, C(x)), under the open-world assumption (unmentioned facts are unknown, not false).

Using the belief state to reach the goal. Belief-state update: b' = RESULT(b, a) = (b - DEL(a)) ∪ ADD(a).


Update: after applying the action RemoveLid(Can1), b1 = Color(x, C(x)) ∧ Open(Can1).


Continuing the sequence:
After RemoveLid(Can1): b1 = Color(x, C(x)) ∧ Open(Can1)
After Paint(Chair, Can1): b2 = Color(x, C(x)) ∧ Open(Can1) ∧ Color(Chair, C(Can1))
After Paint(Table, Can1): b3 = Color(x, C(x)) ∧ Open(Can1) ∧ Color(Chair, C(Can1)) ∧ Color(Table, C(Can1))
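The same update can be written in a few lines of Python. This is a sketch for illustration, not lecture code; it treats the 1-CNF belief state as a set of literal strings.

def update(belief, add, delete):
    """b' = RESULT(b, a) = (b - DEL(a)) UNION ADD(a)."""
    return (belief - delete) | add

b0 = {"Color(x,C(x))"}                             # open-world initial belief state
b1 = update(b0, {"Open(Can1)"}, set())             # after RemoveLid(Can1)
b2 = update(b1, {"Color(Chair,C(Can1))"}, set())   # after Paint(Chair, Can1)
b3 = update(b2, {"Color(Table,C(Can1))"}, set())   # after Paint(Table, Can1)
# b3 contains Color(Chair,C(Can1)) and Color(Table,C(Can1)).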

Goal(Color(Chair, c) ∧ Color(Table, c)): the belief state b3 satisfies the goal, with c bound to C(Can1).

Problem: the vacuum world. Belief state = AtL ∨ AtR. If we apply the action Suck, there is a problem: two different effects! If AtL, the effect is CleanL; if AtR, the effect is CleanR.

Solutions:
1. Conditional effects: Action(Suck, EFFECT: when AtL: CleanL ∧ when AtR: CleanR). The resulting belief state is b = (AtL ∧ CleanL) ∨ (AtR ∧ CleanR).
2. Split the action: Action(SuckL, PRECOND: AtL; EFFECT: CleanL) and Action(SuckR, PRECOND: AtR; EFFECT: CleanR).

3. Conservative approach: look for action sequences that keep the belief state as simple as possible, i.e., retain a 1-CNF belief state (a conjunction of literals); some action sequences take the belief state outside 1-CNF, as the sketch below shows.
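To see why the conditional-effect Suck leaves 1-CNF, here is a toy illustration (my own construction, not lecture code) that represents the disjunctive belief state explicitly as a set of possible worlds:

def suck(world):
    """Apply Suck with conditional effects to one possible world."""
    w = set(world)
    if "AtL" in w:
        w.add("CleanL")   # when AtL: CleanL
    if "AtR" in w:
        w.add("CleanR")   # when AtR: CleanR
    return frozenset(w)

belief = {frozenset({"AtL"}), frozenset({"AtR"})}   # AtL OR AtR
belief = {suck(w) for w in belief}
# Now belief = (AtL AND CleanL) OR (AtR AND CleanR): a disjunction of
# conjunctions, no longer a single conjunction of literals (not 1-CNF).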


II. Contingent Planning. What is contingent planning? Contingent planning is the generation of plans with conditional branching based on percepts. It is appropriate for environments with partial observability and/or nondeterminism.

After an action and a subsequent percept, the new belief state is calculated in two stages: 1. Calculate the belief state after the action. 2. Update the belief state after perceiving the environment. If a percept P has more than one percept axiom, we must add the disjunction (OR) of their preconditions, which can take the belief state out of 1-CNF. We can generate contingent plans with an extension of the AND-OR forward search over belief states (see the sketch below).
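A compact sketch of such an AND-OR search, for illustration only; goal_test, actions, and results (the latter mapping each possible percept to the successor belief state) are assumed callback interfaces, not code from the lecture.

def and_or_search(belief, goal_test, actions, results):
    """Return a contingent plan [action, {percept: subplan, ...}] or None."""
    def or_search(b, path):
        if goal_test(b):
            return []                      # empty plan: goal already satisfied
        if b in path:
            return None                    # cycle: fail on this branch
        for a in actions(b):               # OR node: try each action
            subplans = {}
            for percept, b2 in results(b, a).items():  # AND node: every outcome
                sub = or_search(b2, path + [b])
                if sub is None:
                    break                  # one outcome unsolvable: drop action a
                subplans[percept] = sub
            else:
                return [a, subplans]       # all outcomes handled
        return None
    return or_search(belief, [])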

III. Online Replanning. Example: the spot-welding agent in a car plant. The robot welds the same precise position on every car that passes down the line. If a door falls off a car as the robot is trying to apply a spot weld, the robot replaces its welding actuator with a gripper, picks up the door, checks it for scratches, reattaches it to the car, notifies the floor supervisor, switches back to its welding actuator, and continues its work. The robot's behavior seems purposive: the robot knows what it is trying to do.

Conditions for replanning:
◦ Execution monitoring determines when a new plan is needed.
◦ The need for replanning arises when a contingent-planning agent gets tired of planning in advance for every contingency, such as the possibility that the sky might fall on its head.

Needs for replanning:
1. The agent's model of the world is incorrect.
2. The agent's model of an action has a missing precondition. Example: opening a can of paint may involve using a screwdriver to remove the lid.
3. The agent's model has a missing effect. Example: painting a chair may get paint on the floor.
4. The agent's model is missing a state variable. Example: the amount of paint in the can affects the agent's actions.
5. The agent's model lacks provision for exogenous events: events outside the agent's control, like someone knocking over the can of paint.

Without monitoring and replanning, the agent's behavior is fragile: it relies on the absolute correctness of its model. Levels of monitoring the environment:
◦ Action monitoring: before executing an action, the agent verifies that all its preconditions still hold (see the sketch below).
◦ Plan monitoring: before executing an action, the agent verifies that the remaining plan will still succeed.
◦ Goal monitoring: before executing an action, the agent checks whether there is a better set of goals it could be trying to achieve.
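Action monitoring, sketched in Python for concreteness; observe, execute, preconds, and replan are hypothetical callbacks standing in for the agent's sensors, effectors, and planner.

def run_with_action_monitoring(plan, observe, execute, preconds, replan):
    while plan:
        state = observe()                    # fresh estimate of the current state
        action = plan[0]
        if not preconds(action) <= state:    # some precondition no longer holds
            plan = replan(state)             # construct a repair plan from here
            continue
        execute(action)
        plan = plan[1:]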

Action monitoring vs. plan monitoring: action monitoring is a simple method of execution monitoring, but it can sometimes lead to less-than-intelligent behavior. Example: the agent constructs a plan to solve the painting problem by painting both the chair and the table red. Suppose there is only enough red paint for the chair. With action monitoring, the agent would go ahead and paint the chair red, then notice that it is out of paint and cannot paint the table, at which point it would replan a repair: painting both the chair and the table green. A plan-monitoring agent can detect failure whenever the current state is such that the remaining plan no longer works; thus, it would not waste time painting the chair red.

Does it work? We cannot guarantee that the agent always reaches the goal, because it could arrive at a dead-end state from which there is no repair. For example, the vacuum-cleaner agent may have a faulty model of itself and not know that its batteries may run out. For the agent always to reach the goal, we must assume that: 1. There are no dead ends: the goal is reachable from every state in the environment. 2. The environment is really nondeterministic: if an action sometimes fails, retrying it can eventually succeed.

When actions are not really nondeterministic:
◦ Trouble occurs when an action's failure depends on some precondition the agent doesn't know about.
◦ Example: the painting agent may not know that the paint can is empty, so no amount of retrying to paint will reach the goal.
◦ Two approaches to this problem: choose randomly among the set of possible repair plans, rather than trying the same repair each time; or learning, where the agent modifies its model of the world to accord with its percepts.


4. Multiagent Planning. When there are multiple agents in the environment, each agent faces a multiagent planning problem. Between the purely single-agent and the truly multiagent cases lies a wide spectrum of problems. Examples: a human who can type and speak at the same time; a fleet of delivery robots in a factory.

Multiple bodies act as a single body as long as the relevant sensor information collected by each body can be pooled to form a common estimate of the world state, which then informs the execution of the overall plan. When communication constraints make this impossible, we have a decentralized planning problem. Example: multiple reconnaissance robots covering a wide area.

When a single entity does the planning, there is really only one goal, which all the bodies necessarily share. When the bodies are distinct agents that do their own planning, they may still share identical goals. Example: two human tennis players who form a doubles team share the goal of winning the match. Even so, the multibody and multiagent cases are quite different: in a multibody robotic doubles team, a single central plan dictates the actions of both bodies, whereas distinct agents must each construct their own plan and cannot assume their partner has chosen the same one.

The clearest case of a multiagent problem, of course, is when the agents have different goals. Example: in tennis, the goals of two opposing teams are in direct conflict. Some systems are a mixture of centralized and multiagent planning. Example: a delivery company.

The issues involved in multiagent planning can be divided roughly into two sets. The first involves representing and planning for multiple simultaneous actions; these issues arise in all settings from multieffector to multiagent planning. The second involves cooperation, coordination, and competition, which arise in true multiagent settings.

I. Planning with multiple simultaneous actions. For the time being, we treat the multieffector, multibody, and multiagent settings in the same way. A correct plan is one that, if executed by the actors, achieves the goal. We assume perfect synchronization: each action takes the same amount of time, and the actions at each point in the joint plan are simultaneous.


If the actors have no interaction with one another, we can simply solve n separate problems (example: n actors each playing a game of solitaire). The standard approach to loosely coupled problems is to pretend the problems are completely decoupled and then fix up the interactions. For the transition model, this means writing action schemas as if the actors acted independently.

Problems arise, however, when a plan has both agents hitting the ball at the same time. Technically, the difficulty is that preconditions constrain the state in which an action can be executed successfully, but do not constrain other actions that might mess it up.

The fix is a concurrent action list stating which actions must or must not be executed concurrently. For example, the Hit action has its stated effect only if no other Hit action by another agent occurs at the same time (see the sketch below).
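A toy sketch of checking a concurrent action list when computing the effects of a joint action; the data structures and names here are illustrative assumptions, not lecture code.

def joint_effects(joint_action, effects, forbidden_concurrent):
    """joint_action maps each agent to its action name."""
    produced = set()
    for agent, act in joint_action.items():
        others = {a for ag, a in joint_action.items() if ag != agent}
        if others & forbidden_concurrent.get(act, set()):
            continue                     # concurrency constraint violated: no effect
        produced |= effects[act]
    return produced

effects = {"Hit": {"Returned(Ball)"}, "NoOp": set()}
forbidden = {"Hit": {"Hit"}}             # Hit fails if another Hit is simultaneous
print(joint_effects({"A": "Hit", "B": "NoOp"}, effects, forbidden))  # {'Returned(Ball)'}
print(joint_effects({"A": "Hit", "B": "Hit"}, effects, forbidden))   # set()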

For some actions, the desired effect is achieved only when another action occurs concurrently. Example: two agents are needed to carry a cooler full of beverages to the tennis court.


II. Planning with multiple agents: Co-operation and Co-ordination
Actors(A, B)
Init(At(A, LeftBaseline) ∧ At(B, RightNet) ∧ Approaching(Ball, RightBaseline) ∧ Partner(A, B) ∧ Partner(B, A))
Goal(Returned(Ball) ∧ (At(a, RightNet) ∨ At(a, LeftNet)))
Action(Hit(actor, Ball), PRECOND: Approaching(Ball, loc) ∧ At(actor, loc), EFFECT: Returned(Ball))
Action(Go(actor, to), PRECOND: At(actor, loc) ∧ to ≠ loc, EFFECT: At(actor, to) ∧ ¬At(actor, loc))
Plan 1: A: [Go(A, RightBaseline), Hit(A, Ball)]; B: [NoOp(B), NoOp(B)]
Plan 2: A: [Go(A, LeftNet), NoOp(A)]; B: [Go(B, RightBaseline), Hit(B, Ball)]
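For concreteness, here is a toy simulation of the two joint plans under the perfect-synchronization assumption; the state abstraction (just agent locations and a returned flag) is my own, not the lecture's.

def execute_joint(plan_a, plan_b, pos, ball_at="RightBaseline"):
    returned = False
    for step_a, step_b in zip(plan_a, plan_b):       # lock-step execution
        for agent, (act, arg) in (("A", step_a), ("B", step_b)):
            if act == "Go":
                pos[agent] = arg
            elif act == "Hit" and pos[agent] == ball_at:
                returned = True
    return returned, pos

plan1 = ([("Go", "RightBaseline"), ("Hit", "Ball")],   # A's part
         [("NoOp", None), ("NoOp", None)])             # B's part
plan2 = ([("Go", "LeftNet"), ("NoOp", None)],
         [("Go", "RightBaseline"), ("Hit", "Ball")])
for a_part, b_part in (plan1, plan2):
    print(execute_joint(a_part, b_part, {"A": "LeftBaseline", "B": "RightNet"}))
# Both joint plans return the ball; in each, one agent is left covering a net position.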

II. Planning with multiple agents: Co-operation and Co-ordination. Initially: Init(At(A, LeftBaseline) ∧ At(B, RightNet) ∧ Approaching(Ball, RightBaseline) ∧ Partner(A, B) ∧ Partner(B, A)); Goal(Returned(Ball) ∧ (At(a, RightNet) ∨ At(a, LeftNet))).

II. Planning with multiple agents: Co-operation and Co-ordination. Plan 1: A: [Go(A, RightBaseline), Hit(A, Ball)]; B: [NoOp(B), NoOp(B)]. Executed step by step, A runs to the right baseline and returns the ball while B holds position at the net: the GOAL is reached.

II. Planning with multiple agents: Co-operation and Co-ordination. Plan 2: A: [Go(A, LeftNet), NoOp(A)]; B: [Go(B, RightBaseline), Hit(B, Ball)]. Executed step by step, A crosses to the left net while B runs to the right baseline and returns the ball: the GOAL is reached.

II. Planning with multiple agents: Co-operation and Co-ordination. Again, from the initial state: Init(At(A, LeftBaseline) ∧ At(B, RightNet) ∧ Approaching(Ball, RightBaseline) ∧ Partner(A, B) ∧ Partner(B, A)); Goal(Returned(Ball) ∧ (At(a, RightNet) ∨ At(a, LeftNet))). What happens if each agent plans on its own?

II. Planning with multiple agents: Co-operation and Co-ordination. If the agents choose different plans, coordination fails. Suppose A adopts Plan 1 while B adopts Plan 2: A runs to the right baseline to hit the ball, B does the same, and both agents hit the ball at the same time, so the Hit has no effect. Conversely, if A adopts Plan 2 while B adopts Plan 1, A moves to the left net and NoOps, B also NoOps, and nobody returns the ball. Either way, the GOAL is missed.

II. Planning with multiple agents: Co-operation and Co-ordination. How can the agents coordinate to make sure they agree on the plan?
1. Adopt a convention
2. Use communication: by verbal exchange, or by plan recognition

II. Planning with multiple agents: Co-operation and Co-ordination. 1. Adopt a convention:
◦ A convention is any constraint on the selection of joint plans. Convention: "Stick to your side of the court" (Plan 1 is ruled out; both agents select Plan 2), illustrated in the sketch below.
◦ Any alternative convention works equally well, as long as all agents in the environment agree.
◦ When conventions are widespread, they are called social laws.
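A convention can be implemented as a deterministic filter plus a tie-break over the feasible joint plans, so that independent planners converge on the same choice. The plan attributes in this tiny sketch are invented for illustration.

joint_plans = {
    "Plan1": {"crosses_partner_side": True},    # A runs to B's side of the court
    "Plan2": {"crosses_partner_side": False},
}

def stick_to_your_side(name):
    return not joint_plans[name]["crosses_partner_side"]

feasible = sorted(n for n in joint_plans if stick_to_your_side(n))
print(feasible[0] if feasible else None)   # every agent independently picks 'Plan2'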

II. Planning with multiple agents: Co-operation and Co-ordination. 2. Use communication:
◦ To achieve common knowledge of a feasible joint plan.
◦ Chapter 22 presents more mechanisms for communication.
◦ Communication can work as well with competitive agents as with cooperative ones.

II. Planning with multiple agents: Co-operation and Co-ordination. 2a. Communication by verbal exchange: one player calls "Yours!" and the other answers "Mine!", so both know who takes the ball. Then it's Plan 1: A: [Go(A, RightBaseline), Hit(A, Ball)]; B: [NoOp(B), NoOp(B)].

II. Planning with multiple agents: Co-operation and Co-ordination. 2b. Communication by plan recognition: an agent executes the first part of its plan, and its partner infers the joint plan from what it observes. Here, when one agent's opening move matches Plan 2, the other concludes "It's Plan 2" and plays its own part of that plan.

II. Planning with multiple agents: Co-operation and Co-ordination. Other examples:
1. Seed-eating harvester ants:
◦ The queen's job is to reproduce, not to do centralized planning.
◦ Some learning mechanism enables successful collective action over the colony's decades-long life, even though individual ants live only about a year.
2. Flocking behavior of birds: each bird observes the positions of its nearest neighbors and chooses the heading and acceleration that maximize a weighted sum of: 1. Cohesion, 2. Separation, 3. Alignment (see the sketch below).
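A minimal flocking step in Python, for illustration; the neighbor radius and the weights on cohesion, separation, and alignment are arbitrary assumptions.

import math

def flock_step(birds, radius=5.0, w_coh=1.0, w_sep=1.5, w_ali=0.5):
    """birds: list of dicts with 'pos' and 'vel' as (x, y) tuples."""
    new_vels = []
    for b in birds:
        nbrs = [o for o in birds
                if o is not b and math.dist(o["pos"], b["pos"]) < radius]
        if not nbrs:
            new_vels.append(b["vel"])
            continue
        n = len(nbrs)
        coh = (sum(o["pos"][0] for o in nbrs) / n - b["pos"][0],   # toward centroid
               sum(o["pos"][1] for o in nbrs) / n - b["pos"][1])
        sep = (sum(b["pos"][0] - o["pos"][0] for o in nbrs),       # away from crowding
               sum(b["pos"][1] - o["pos"][1] for o in nbrs))
        ali = (sum(o["vel"][0] for o in nbrs) / n - b["vel"][0],   # match headings
               sum(o["vel"][1] for o in nbrs) / n - b["vel"][1])
        new_vels.append((b["vel"][0] + w_coh*coh[0] + w_sep*sep[0] + w_ali*ali[0],
                         b["vel"][1] + w_coh*coh[1] + w_sep*sep[1] + w_ali*ali[1]))
    for b, v in zip(birds, new_vels):    # update all birds simultaneously
        b["vel"] = v
        b["pos"] = (b["pos"][0] + v[0], b["pos"][1] + v[1])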

5. Summary:
◦ Resources can be treated as numeric measures.
◦ Time is handled by specialized scheduling algorithms or integrated with planning.
◦ HTN planning uses high-level actions (HLAs).
◦ The effects of HLAs can be defined with angelic semantics.
◦ Contingent plans allow the agent to sense the world; sensorless and contingent plans can be constructed by search in the space of belief states.
◦ An online planning agent can replan when nondeterministic actions or an incorrect model of the environment take execution off track.
◦ Multiple agents may cooperate or compete, and must agree on which joint plan is to be executed.
◦ This chapter extends classical planning to cover nondeterministic environments.
