Design of Multi-Agent Systems
Teacher: Bart Verheij
Student assistants: Albert Hankel, Elske van der Vaart
Web site: (Nestor contains a link)

Overview Deductive reasoning agents –Planning –Agent-oriented programming –Concurrent MetateM Practical reasoning agents –Practical reasoning & intentions –Implementation: deliberation –Implementation: commitment strategies –Implementation: intention reconsideration

Deductive Reasoning Agents
Decide what to do on the basis of a theory stating the best action to perform in any given situation:
ρ, Δ ⊢ Do(a) with a ∈ Ac
where
–ρ is such a theory (typically a set of rules)
–Δ is a logical database that describes the current state of the world
–Ac is the set of actions the agent can perform
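
A minimal sketch (not from the slides) of how this scheme can be read operationally in Python: the deduction rules ρ are modelled as functions that try to derive a Do(a) conclusion from the database Δ, and the agent performs the first action it can prove. All names (rule_vacuum, select_action, the dictionary layout of delta) are illustrative assumptions.

   def rule_vacuum(delta):
       # rho-style rule: if there is dirt at the agent's position, conclude Do(suck)
       if ("dirt", delta["pos"]) in delta["facts"]:
           return "suck"
       return None

   def rule_move(delta):
       # rho-style rule: otherwise conclude Do(forward)
       return "forward"

   def select_action(delta, rules, actions):
       # try to derive Do(a) for some a in Ac, taking the rules in order
       for rule in rules:
           a = rule(delta)
           if a is not None and a in actions:
               return a
       return None        # nothing derivable

   delta = {"pos": (0, 0), "facts": {("dirt", (0, 0))}}
   print(select_action(delta, [rule_vacuum, rule_move], {"suck", "forward", "turn"}))
   # -> 'suck'

Real theorem proving is of course much more than trying rules in a fixed order; the point of the sketch is only the "prove Do(a), then act" control structure.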

Deductive Reasoning Agents But: - Theorem proving is in general neither fast nor efficient - Calculative rationality (rationality with respect to the moment calculation started) requires a static environment - Encoding of perception & environment into logical symbols isn’t straightforward So: - Use a weaker logic - Use a symbolic, not logic-based representation

Overview Deductive reasoning agents –Planning –Agent-oriented programming –Concurrent MetateM Practical reasoning agents –Practical reasoning & intentions –Implementation: deliberation –Implementation: commitment strategies –Implementation: intention reconsideration

Planning: STRIPS
- Only atoms and their negation
- Only represent changes
Blocks world (blocks + a robot arm):
-Stack(x,y)
   -Pre {Clear(y), Holding(x)}
   -Del {Clear(y), Holding(x)}
   -Add {ArmEmpty, On(x,y)}
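
A hedged sketch of how the Stack(x,y) operator can be applied to a blocks-world state represented as a set of ground atoms; the apply_op helper and the string encoding of atoms are assumptions made for illustration, not part of STRIPS itself.

   def stack(x, y):
       # the STRIPS operator from the slide: precondition, delete and add lists
       return {
           "pre": {f"Clear({y})", f"Holding({x})"},
           "del": {f"Clear({y})", f"Holding({x})"},
           "add": {"ArmEmpty", f"On({x},{y})"},
       }

   def apply_op(op, state):
       if not op["pre"] <= state:               # preconditions must hold in the state
           raise ValueError("preconditions not satisfied")
       return (state - op["del"]) | op["add"]   # only the changes are represented

   state = {"Clear(B)", "Holding(A)", "On(B,Table)"}
   print(apply_op(stack("A", "B"), state))
   # -> {'ArmEmpty', 'On(A,B)', 'On(B,Table)'}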

Problems with planning Frame problem Describe what does not change by an action Qualification problem Describe all preconditions of an action Ramification problem Describe all consequences of an action Prediction problem Describe the duration that something remains true

Overview Deductive reasoning agents –Planning –Agent-oriented programming –Concurrent MetateM Practical reasoning agents –Practical reasoning & intentions –Implementation: deliberation –Implementation: commitment strategies –Implementation: intention reconsideration

Agent-oriented programming Agent0 (Shoham) Key idea: directly programming agents in terms of intentional notions like belief, commitment, and intention In other words, the intentional stance is used as an abstraction tool for programming!

Agent-oriented programming Shoham suggested that a complete AOP system will have 3 components: –a logic for specifying agents and describing their mental states –an interpreted programming language for programming agents (example: Agent0) –an ‘agentification’ process, for converting ‘neutral applications’ (e.g., databases) into agents

Agent-oriented programming Agents in Agent0 have four components: –a set of capabilities (things the agent can do) –a set of initial beliefs –a set of initial commitments (things the agent will do) –a set of commitment rules

Agent-oriented programming Each commitment rule contains –a message condition –a mental condition –an action On each ‘agent cycle’… –The message condition is matched against the messages the agent has received –The mental condition is matched against the beliefs of the agent –If the rule fires, then the agent becomes committed to the action (the action gets added to the agent’s commitment set)
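
The cycle can be pictured with a small Python sketch; this is an illustration of the idea, not Shoham's actual Agent0 syntax, and the rule shown is a simplified version of the friend-request rule on the next slide. All identifiers are assumptions.

   def friend_request_rule(msg, beliefs):
       # message condition: a REQUEST to DO some action
       # mental condition: the sender is a friend and the agent can do the action
       if msg["type"] == "REQUEST" and msg["sender"] in beliefs["friends"] \
               and msg["action"] in beliefs["capabilities"]:
           return msg["action"]
       return None

   def agent_cycle(messages, beliefs, rules, commitments):
       for msg in messages:
           for rule in rules:
               action = rule(msg, beliefs)
               if action is not None:
                   commitments.add(action)    # the agent becomes committed to the action
       return commitments

   beliefs = {"friends": {"alice"}, "capabilities": {"send_report"}}
   msgs = [{"type": "REQUEST", "sender": "alice", "action": "send_report"}]
   print(agent_cycle(msgs, beliefs, [friend_request_rule], set()))
   # -> {'send_report'}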

A commitment rule in Agent0

COMMIT(
   ( agent, REQUEST, DO(time, action)
   ),                                     ;;; msg condition
   ( B,
     [now, Friend agent] AND
     CAN(self, action) AND
     NOT [time, CMT(self, anyaction)]
   ),                                     ;;; mental condition
   self,
   DO(time, action)
)

A commitment rule in Agent0 Meaning: If I receive a message from agent which requests me to do action at time, and I believe that: –agent is currently a friend –I can do the action –At time, I am not committed to doing any other action then I commit to doing action at time

Overview Deductive reasoning agents –Planning –Agent-oriented programming –Concurrent MetateM Practical reasoning agents –Practical reasoning & intentions –Implementation: deliberation –Implementation: commitment strategies –Implementation: intention reconsideration

Concurrent METATEM Concurrent METATEM is a multi-agent language in which each agent is programmed by giving it a temporal logic specification of the behavior it should exhibit These specifications are executed directly in order to generate the behavior of the agent Temporal logic is classical logic augmented by modal operators for describing how the truth of propositions changes over time

Concurrent METATEM
□important(agents)
   it is now, and will always be true that agents are important
◊important(ConcurrentMetateM)
   sometime in the future, ConcurrentMetateM will be important
⧫important(Prolog)
   sometime in the past it was true that Prolog was important
(¬friends(us)) U apologize(you)
   we are not friends until you apologize
○apologize(you)
   tomorrow (in the next state), you apologize
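
These operators can be evaluated over a finite trace of states; the toy Python functions below (an assumption-laden sketch, not MetateM itself) read each state as the set of atoms true at that moment, with 'always' and 'sometime' looking forward from position i, 'once' looking backward, and 'until' and 'next_' following the usual readings.

   def always(trace, i, p):   return all(p in s for s in trace[i:])
   def sometime(trace, i, p): return any(p in s for s in trace[i:])
   def once(trace, i, p):     return any(p in s for s in trace[:i + 1])
   def next_(trace, i, p):    return i + 1 < len(trace) and p in trace[i + 1]

   def until(trace, i, p, q):
       # p U q: q eventually holds, and p holds at every point before that
       for j in range(i, len(trace)):
           if q in trace[j]:
               return True
           if p not in trace[j]:
               return False
       return False

   trace = [{"not_friends"}, {"not_friends"}, {"apologize", "friends"}]
   print(until(trace, 0, "not_friends", "apologize"))   # -> True
   print(next_(trace, 1, "apologize"))                  # -> True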

Concurrent METATEM MetateM is a framework for directly executing temporal logic specifications The root of the MetateM concept is Gabbay’s separation theorem: Any arbitrary temporal logic formula can be rewritten in a logically equivalent past ⇒ future form. This past ⇒ future form can be used as execution rules

Concurrent METATEM A MetateM program is a set of such rules Execution proceeds by a process of continually matching rules against a “history”, and firing those rules whose antecedents are satisfied The instantiated future-time consequents become commitments which must subsequently be satisfied

Concurrent METATEM
Execution is thus a process of iteratively generating a model for the formula made up of the program rules
The future-time parts of instantiated rules represent constraints on this model
Example: all ‘asks’ at some time in the past are followed by a ‘give’ at some time in the future

ConcurrentMetateM provides an operational framework through which societies of MetateM processes can operate and communicate

Overview Deductive reasoning agents –Planning –Agent-oriented programming –Concurrent MetateM Practical reasoning agents –Practical reasoning & intentions –Implementation: deliberation –Implementation: commitment strategies –Implementation: intention reconsideration

Practical reasoning Practical reasoning is reasoning directed towards actions — the process of figuring out what to do: “Practical reasoning is a matter of weighing conflicting considerations for and against competing options, where the relevant considerations are provided by what the agent desires/values/cares about and what the agent believes.” (Bratman) Practical reasoning is distinguished from theoretical reasoning – theoretical reasoning is directed towards beliefs

Practical reasoning Human practical reasoning consists of two activities: –deliberation deciding what state of affairs we want to achieve –means-ends reasoning deciding how to achieve these states of affairs The outputs of deliberation are intentions

Intentions in practical reasoning
1. Intentions pose problems for agents, who need to determine ways of achieving them. If I have an intention to φ, you would expect me to devote resources to deciding how to bring about φ.
2. Intentions provide a “filter” for adopting other intentions, which must not conflict. If I have an intention to φ, you would not expect me to adopt an intention ψ such that φ and ψ are mutually exclusive.
3. Agents track the success of their intentions, and are inclined to try again if their attempts fail. If an agent’s first attempt to achieve φ fails, then all other things being equal, it will try an alternative plan to achieve φ.

Intentions in practical reasoning
4. Agents believe their intentions are possible. That is, they believe there is at least some way that the intentions could be brought about. Otherwise: intention-belief inconsistency.
5. Agents do not believe they will not bring about their intentions. It would not be rational of me to adopt an intention to φ if I believed φ was not possible. Otherwise: intention-belief incompleteness.
6. Under certain circumstances, agents believe they will bring about their intentions. It would not normally be rational of me to believe that I would definitely bring my intentions about; intentions can fail. Moreover, it does not make sense that if I believe φ is inevitable that I would adopt it as an intention.

Intentions in practical reasoning
7. Agents need not intend all the expected side effects of their intentions. If I believe φ ⇒ ψ and I intend that φ, I do not necessarily intend ψ also. (Intentions are not closed under implication.) This last problem is known as the side effect or package deal problem.

Intentions in practical reasoning Intentions are stronger than mere desires: –“My desire to play basketball this afternoon is merely a potential influencer of my conduct this afternoon. It must vie with my other relevant desires [... ] before it is settled what I will do. In contrast, once I intend to play basketball this afternoon, the matter is settled: I normally need not continue to weigh the pros and cons. When the afternoon arrives, I will normally just proceed to execute my intentions.” (Bratman, 1990)

Practical reasoning (abstract)
–Current beliefs and perception determine next beliefs: B := brf(B, p)
–Current beliefs and intentions determine next desires: D := option(B, I)
–Current beliefs, desires and intentions determine next intentions: I := filter(B, D, I)
–Current beliefs, desires and available actions determine a plan: π := plan(B, I)
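
As a type-level sketch (illustrative Python, with sets of formulas standing in for beliefs, desires and intentions and a list of actions for a plan; the toy function bodies are assumptions), the four functions and their data flow look roughly like this:

   from typing import List, Set

   Belief = Desire = Intention = Percept = str
   Plan = List[str]

   def brf(B: Set[Belief], p: Percept) -> Set[Belief]:
       return B | {p}                      # belief revision: fold in the percept

   def options(B: Set[Belief], I: Set[Intention]) -> Set[Desire]:
       return {"clean_room"} if "dirt" in B else set()

   def filter_(B: Set[Belief], D: Set[Desire], I: Set[Intention]) -> Set[Intention]:
       return I | D                        # naive: commit to every option
                                           # (trailing underscore avoids the built-in filter)

   def plan(B: Set[Belief], I: Set[Intention]) -> Plan:
       return ["goto_dirt", "suck"] if "clean_room" in I else []

   B = brf(set(), "dirt")
   I = filter_(B, options(B, set()), set())
   print(plan(B, I))                       # -> ['goto_dirt', 'suck']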

Overview Deductive reasoning agents –Planning –Agent-oriented programming –Concurrent MetateM Practical reasoning agents –Practical reasoning & intentions –Implementation: deliberation –Implementation: commitment strategies –Implementation: intention reconsideration

Implementing practical reasoning agents

B := B_initial; I := I_initial;
loop
   p := see;
   B := brf(B, p);        // update world model
   I := deliberate(B);
   π := plan(B, I);       // use means-end reasoning
   execute(π);
end;

Interaction between deliberation and planning Both deliberation and planning take time, perhaps too much time. Even if deliberation is optimal (maximizes expected utility), the resulting intention may no longer be optimal when deliberation has finished. (Calculative rationality)

Deliberation How does an agent deliberate? –Option generation in which the agent generates a set of possible alternatives –Filtering in which the agent chooses between competing alternatives, and commits to achieving them.

Implementing practical reasoning agents

B := B_initial; I := I_initial;
loop
   p := see;
   B := brf(B, p);
   D := option(B, I);     // deliberate (1)
   I := filter(B, D, I);  // deliberate (2)
   π := plan(B, I);
   execute(π);
end;

Overview Deductive reasoning agents –Planning –Agent-oriented programming –Concurrent MetateM Practical reasoning agents –Practical reasoning & intentions –Implementation: deliberation –Implementation: commitment strategies –Implementation: intention reconsideration

Commitment Strategies The following commitment strategies are commonly discussed in the literature of rational agents: –Blind commitment A blindly committed agent will continue to maintain an intention until it believes the intention has actually been achieved. Blind commitment is also sometimes referred to as fanatical commitment. –Single-minded commitment A single-minded agent will continue to maintain an intention until it believes that either the intention has been achieved, or else that it is no longer possible to achieve the intention. –Open-minded commitment An open-minded agent will maintain an intention as long as it is still believed possible.
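
Read operationally, the three strategies differ only in the condition under which an intention is dropped; a small illustrative Python sketch (the belief encoding and helper names are assumptions) makes the contrast explicit:

   def achieved(beliefs, intention):   return intention in beliefs
   def impossible(beliefs, intention): return ("impossible", intention) in beliefs
   def possible(beliefs, intention):   return not impossible(beliefs, intention)

   def drop_blind(beliefs, intention):
       # blind/fanatical: drop only once the intention is believed achieved
       return achieved(beliefs, intention)

   def drop_single_minded(beliefs, intention):
       # single-minded: drop when achieved or believed impossible
       return achieved(beliefs, intention) or impossible(beliefs, intention)

   def drop_open_minded(beliefs, intention):
       # open-minded: keep the intention only while it is still believed possible
       return not possible(beliefs, intention)

   beliefs = {("impossible", "fly_to_mars")}
   print(drop_blind(beliefs, "fly_to_mars"))          # -> False
   print(drop_single_minded(beliefs, "fly_to_mars"))  # -> True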

Commitment Strategies An agent has commitment both to ends (i.e., the state of affairs it wishes to bring about), and means (i.e., the mechanism via which the agent wishes to achieve the state of affairs) Currently, our agent control loop is overcommitted, both to means and ends Modification: replan if ever a plan goes wrong

B := B_initial; I := I_initial;
loop
   p := see;
   B := brf(B, p);
   D := option(B, I);
   I := filter(B, D, I);
   π := plan(B, I);
   while not empty(π) do
      a := head(π);
      execute(a);            // start plan execution
      π := tail(π);
      p := see;
      B := brf(B, p);        // update world model
      if not sound(π, B, I) then
         π := plan(B, I);    // replan if necessary
   end;
end;

Commitment Strategies Still overcommitted to intentions: Never stops to consider whether or not its intentions are appropriate Modification: stop to determine whether intentions have succeeded or whether they are impossible: (Single-minded commitment)

B := B_initial; I := I_initial;
loop
   p := see;
   B := brf(B, p);
   D := option(B, I);
   I := filter(B, D, I);
   π := plan(B, I);
   while not (empty(π) or succeeded(B, I) or impossible(B, I)) do
      // check whether intentions have succeeded or are no longer possible
      a := head(π);
      execute(a);
      π := tail(π);
      p := see;
      B := brf(B, p);
      if not sound(π, B, I) then
         π := plan(B, I);
   end;
end;

Overview Deductive reasoning agents –Planning –Agent-oriented programming –Concurrent MetateM Practical reasoning agents –Practical reasoning & intentions –Implementation: deliberation –Implementation: commitment strategies –Implementation: intention reconsideration

Intention Reconsideration Our agent gets to reconsider its intentions once every time around the outer control loop, i.e., when: –it has completely executed a plan to achieve its current intentions; or –it believes it has achieved its current intentions; or –it believes its current intentions are no longer possible. This is limited in the way that it permits an agent to reconsider its intentions Modification: Reconsider intentions after executing every action

B := B_initial; I := I_initial;
loop
   p := see;
   B := brf(B, p);
   D := option(B, I);
   I := filter(B, D, I);
   π := plan(B, I);
   while not (empty(π) or succeeded(B, I) or impossible(B, I)) do
      a := head(π);
      execute(a);
      π := tail(π);
      p := see;
      B := brf(B, p);
      D := option(B, I);     // reconsider (1)
      I := filter(B, D, I);  // reconsider (2)
      if not sound(π, B, I) then
         π := plan(B, I);
   end;
end;

Intention Reconsideration But intention reconsideration is costly! A dilemma: –an agent that does not stop to reconsider its intentions sufficiently often will continue attempting to achieve its intentions even after it is clear that they cannot be achieved, or that there is no longer any reason for achieving them –an agent that constantly reconsiders its intentions may spend insufficient time actually working to achieve them, and hence runs the risk of never actually achieving them Solution: incorporate an explicit meta-level control component, that decides whether or not to reconsider

B := B_initial; I := I_initial;
loop
   p := see;
   B := brf(B, p);
   D := option(B, I);
   I := filter(B, D, I);
   π := plan(B, I);
   while not (empty(π) or succeeded(B, I) or impossible(B, I)) do
      a := head(π);
      execute(a);
      π := tail(π);
      p := see;
      B := brf(B, p);
      if reconsider(B, I) then   // decide whether or not to reconsider
         D := option(B, I);
         I := filter(B, D, I);
      end;
      if not sound(π, B, I) then
         π := plan(B, I);
   end;
end;
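
For concreteness, here is a runnable Python sketch of this final loop; every helper is a toy stand-in (an assumption, not part of any published agent), chosen only so that the control structure above can be executed end to end.

   def brf(B, p):        return B | {p} if p else B
   def options(B, I):    return {"clean"} if "dirt" in B else set()
   def filter_(B, D, I): return (I | D) - {i for i in I if ("done", i) in B}
   def plan(B, I):       return ["goto", "suck"] if "clean" in I else []
   def succeeded(B, I):  return all(("done", i) in B for i in I)
   def impossible(B, I): return any(("impossible", i) in B for i in I)
   def sound(pi, B, I):  return bool(pi) or succeeded(B, I)
   def reconsider(B, I): return "surprise" in B        # cheap meta-level test
   def execute(a):       print("executing", a)

   def run(percepts, B=frozenset(), I=frozenset()):
       B, I = set(B), set(I)
       it = iter(percepts)
       see = lambda: next(it, None)
       p = see(); B = brf(B, p)
       D = options(B, I); I = filter_(B, D, I)
       pi = plan(B, I)
       while pi and not (succeeded(B, I) or impossible(B, I)):
           a, pi = pi[0], pi[1:]
           execute(a)
           p = see(); B = brf(B, p)
           if reconsider(B, I):                 # decide whether to deliberate again
               D = options(B, I); I = filter_(B, D, I)
           if not sound(pi, B, I):              # replan if necessary
               pi = plan(B, I)
       return I

   run(["dirt", None, ("done", "clean")])       # prints 'executing goto', 'executing suck'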

Overview Deductive reasoning agents –Planning –Agent-oriented programming –Concurrent MetateM Practical reasoning agents –Practical reasoning & intentions –Implementation: deliberation –Implementation: commitment strategies –Implementation: intention reconsideration