CMSC 691M Agent Architectures & Multi-Agent Systems

CMSC 691M Agent Architectures & Multi-Agent Systems Spring 2002 – February 26 Class #9 – Formal Methods for MAS Prof. Marie desJardins

Today’s overview Reading: Weiss Chap. 8, “Formal Methods in DAI” (Munindar P. Singh, Anand S. Rao, and Michael P. Georgeff) Why use formal methods? Classes of logics Beliefs, desires, and intentions Implementing BDI models Coordinating BDI agents Communicating BDI agents Societies of BDI agents

Why use formal methods? Formal methods let us specify properties of agents declaratively and provide reasoning mechanisms for agents. Disadvantages: intractable in the general case; limiting, precisely because of the formalism and abstract representation. Benefits: specify complex behavior at an abstract level; validate agent behaviors.

What’s to be modeled? To design and implement intelligent agents, we may need to reason about the truth of propositions and the relations between objects in the world, about what may or must be true, about how the agent’s actions affect the state of the world, and about how other agents and external events change the world over time. The logics that address these needs: propositional logic, first-order logic, modal logic, dynamic logic, and temporal logic.

Classes of logics – each characterized by its wffs (syntax), proof theory (inference), and model theory (semantics):
Propositional – wffs: atoms closed under ∧ and ¬; inference: implication and rules for ∧ and ¬; semantics: “meaning” of propositions.
Predicate (first-order) – wffs: adds quantifiers ∀ and ∃, relations, variables; inference: rules for quantifiers and variable binding; semantics: “meaning” of relations.
Modal – wffs: adds possibility ◇ and necessity □; inference: rules for ◇ and □; semantics: possible worlds.
Dynamic – wffs: adds sequencing, branching, testing; inference: rules for outcomes; semantics: possible worlds with transitions.
Temporal – wffs: adds a notion of time points or intervals and their ordering; inference: rules for propositions with temporal extent; semantics: possible worlds with multiple transitions.

Propositional logic Let L be the set of true atomic propositions. P is entailed iff P ∈ L. P ∧ Q is entailed iff both P and Q are entailed. ¬P is entailed iff P is not entailed.
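The following minimal Python sketch (not from the reading; the formula encoding and names are illustrative) applies these entailment rules recursively against a set L of true atoms:

def entails(L, f):
    """Return True iff formula f is entailed by the atom set L."""
    if isinstance(f, str):                 # atomic proposition P
        return f in L
    op = f[0]
    if op == "and":                        # P ∧ Q: entailed iff both conjuncts are
        return entails(L, f[1]) and entails(L, f[2])
    if op == "not":                        # ¬P: entailed iff P is not entailed
        return not entails(L, f[1])
    raise ValueError("unknown connective: %r" % (op,))

# Example: with L = {"p"}, the formula p ∧ ¬q is entailed.
print(entails({"p"}, ("and", "p", ("not", "q"))))   # True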

Predicate (first-order) logic ∀x (Q(x)) is entailed iff Q(l) holds for every object l. ∃x (Q(x)) is entailed iff Q(l) holds for some object l.
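A hedged illustration of the quantifier semantics over a finite domain (the domain and the predicate Q below are made up for the example):

domain = ["a", "b", "c"]
Q = {"a", "b"}                            # objects for which Q holds

forall_Q = all(x in Q for x in domain)    # ∀x Q(x): Q must hold of every object
exists_Q = any(x in Q for x in domain)    # ∃x Q(x): Q must hold of some object
print(forall_Q, exists_Q)                 # False True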

Modal logic Possible worlds semantics, with an accessibility relation R(w1, w2). Possibility: ◇P is entailed in world w iff P is true in some possible world (∃w': R(w,w') ∧ P is entailed in w'). Necessity: □P is entailed in w iff P is true in every possible world (∀w': R(w,w') → P is entailed in w').
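A minimal possible-worlds sketch of these two operators (the worlds, the relation R, and the valuation below are invented for illustration):

R = {"w0": {"w1", "w2"}, "w1": set(), "w2": set()}     # accessibility relation
truth = {"w0": {"q"}, "w1": {"p"}, "w2": {"q"}}        # atoms true in each world

def possibly(w, p):
    # ◇p holds at w iff p holds in some world accessible from w
    return any(p in truth[w2] for w2 in R[w])

def necessarily(w, p):
    # □p holds at w iff p holds in every world accessible from w
    return all(p in truth[w2] for w2 in R[w])

print(possibly("w0", "p"))      # True: p holds in w1
print(necessarily("w0", "q"))   # False: q fails in w1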

Dynamic logic (“modal logic of action”) Sequencing: a;b – do a, then do b. Choice: a+b – do either a or b nondeterministically. Testing: p? – TRUE if p, FALSE if ¬p. (q?;a) + ((¬q)?;b) ≡ if q then a else b. Accessibility relation RA – reachability of worlds via (composite) action A.

Dynamic logic – modeling outcomes Possible outcomes: <A>P is entailed in w iff P is entailed in some world reachable by applying action A; <A>P ≡ ∃w': RA(w,w') ∧ P entailed in w'. Necessary outcomes: [A]P is entailed in w iff P is entailed in all worlds reachable by applying action A; [A]P ≡ ∀w': RA(w,w') → P entailed in w'.
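A rough sketch of the two outcome operators (the transition relation RA, worlds, and valuation below are illustrative only):

# RA maps (world, action) to the set of worlds reachable by doing that action
RA = {("w0", "a"): {"w1", "w2"}, ("w0", "b"): {"w2"}}
truth = {"w0": set(), "w1": {"p"}, "w2": {"p", "q"}}

def possible_outcome(w, a, p):
    # <a>p: p is entailed in some world reachable by applying a
    return any(p in truth[w2] for w2 in RA.get((w, a), set()))

def necessary_outcome(w, a, p):
    # [a]p: p is entailed in all worlds reachable by applying a
    return all(p in truth[w2] for w2 in RA.get((w, a), set()))

print(possible_outcome("w0", "a", "q"))   # True: q holds in w2
print(necessary_outcome("w0", "a", "p"))  # True: p holds in both w1 and w2
print(necessary_outcome("w0", "a", "q"))  # False: q fails in w1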

Temporal logic – variations Linear vs. branching: modeling a single sequence of events/outcomes vs. modeling a branching series of alternative possible worlds Discrete vs. dense (continuous): time treated as discrete intervals vs. continuously flowing Moment-based (point) vs. period-based (interval): units of time treated as points or intervals

Discrete moment-based branching temporal logic Moments in time are partially ordered. Each moment is associated with a possible world. The actions of multiple agents can influence which moment (possible world) occurs next.

Linear temporal logic p U q at moment t means that p holds from t until some moment t' at which q holds. X p at moment t means that p holds in the moment immediately following t. P p at moment t means that p was true at some moment t' before t. F p at moment t means that p is true at some moment t' after t. G p at moment t means that p is true at every moment t' after t.
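A hedged sketch of these operators evaluated over a finite trace (a list of sets of atoms); this is only an illustration, not a full LTL checker, and all data below are invented:

trace = [{"p"}, {"p"}, {"p", "q"}, {"r"}]     # atoms true at moments 0..3

def X(trace, t, p):                  # p holds at the moment immediately after t
    return t + 1 < len(trace) and p in trace[t + 1]

def F(trace, t, p):                  # p holds at some moment after t
    return any(p in s for s in trace[t + 1:])

def G(trace, t, p):                  # p holds at every moment after t
    return all(p in s for s in trace[t + 1:])

def P(trace, t, p):                  # p held at some moment before t
    return any(p in s for s in trace[:t])

def U(trace, t, p, q):               # p holds from t until some t' where q holds
    for t2 in range(t, len(trace)):
        if q in trace[t2]:
            return all(p in trace[i] for i in range(t, t2))
        if p not in trace[t2]:       # p failed before q ever arrived
            return False
    return False

print(U(trace, 0, "p", "q"))   # True: p holds at moments 0-1 and q holds at moment 2
print(G(trace, 0, "p"))        # False: p fails at moment 3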

Branching temporal logic [slide diagram labels: “the present moment”, “reality”] A p means that p is true on all paths at the present moment (i.e., no matter what may have gone before or will happen in the future, p is true now) – the temporal equivalent of the necessity operator of modal logic. E p means that p is true on some path at the present moment – the temporal equivalent of the possibility operator.
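The following sketch reads A and E as quantifiers over the alternative paths branching from the present moment, applied here to a path property (“eventually q”); the data and the treatment of p as a path formula are illustrative assumptions, not taken from the chapter:

paths = [[{"p"}, {"q"}], [{"p"}, {"r"}]]       # alternative futures from "now"

def A(paths, holds):    # the property holds on all paths from the present moment
    return all(holds(path) for path in paths)

def E(paths, holds):    # the property holds on some path from the present moment
    return any(holds(path) for path in paths)

eventually_q = lambda path: any("q" in state for state in path)
print(A(paths, eventually_q))   # False: only the first path reaches q
print(E(paths, eventually_q))   # True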

Branching temporal logic II x<a>p is true (at a particular moment, on a particular path) iff p is a possible outcome of agent x performing action a. x[a]p is true (at a particular moment, on a particular path) iff p is a necessary outcome of agent x performing action a. ∃a: p is true (at a moment and on a path) iff there is some action a under which p becomes true.

Brief commentary on logics Branching temporal logic is very powerful, and has been used to develop planners and other agent architectures Many researchers use ideas from some or all of these logics in their agent designs and representations Very few researchers use the full formal specification of these logics in building systems (though it isn’t uncommon to see them in conference and journal papers)

Beliefs, desires, and intentions Use modal logic to model an agent’s cognitive attitudes: beliefs, desires, goals, know-how, and intentions.

Beliefs x Bel p iff p is entailed in every possible world the agent believes it can be in (modeled by the B accessibility relation). Interestingly, although a proposition q may be believed by this definition, an agent may not believe that it believes q. Limited rationality (limited computational resources) means that the agent can’t derive everything that it “believes”.

Desires x Des p iff p holds in all possible worlds reachable by the D accessibility relation. The agent might not know how to reach the states it desires to be in. An agent can desire to be in conflicting states. Goals are the subset of the agent’s desires that are achievable and consistent.

Intentions x Int p iff p is true along all paths that are reachable by the I accessibility relation According to this definition, an agent can “intend” something it doesn’t desire An agent can also have an unsatisfiable intention (if the set of reachable paths is empty) An agent can intend something, and yet fail to make it come true (if it proceeds along a path that isn’t in its set of intended paths) Know-how models when an agent can guarantee the success of its actions More useful might be to model when an agent might be able to guarantee the success of its actions
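Tying these definitions back to the modal machinery: each attitude can be read as a necessity operator over its own accessibility relation. A toy sketch (the worlds, the relations, and the simplification of intention paths to worlds are all illustrative assumptions):

truth = {"w0": set(), "w1": {"p"}, "w2": {"p"}}
B = {"w0": {"w1", "w2"}}   # belief-accessible worlds
D = {"w0": {"w2"}}         # desire-accessible worlds
I = {"w0": {"w1"}}         # intention-accessible worlds (paths simplified to worlds)

def holds_in_all(rel, w, p):
    # the attitude toward p holds at w iff p holds in every accessible world
    return all(p in truth[w2] for w2 in rel.get(w, set()))

print(holds_in_all(B, "w0", "p"))   # True: the agent believes p
print(holds_in_all(D, "w0", "p"))   # True: the agent desires p
print(holds_in_all(I, "w0", "p"))   # True: the agent intends p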

Commitments Agents that persist with their intentions (as long as they are satisfiable) are said to be committed to those intentions The concept of a commitment is very useful in modeling societies of agents

Basic interpreter
    basic-interpreter
        initialize-state();
        do
            options := option-generator (event-queue, S);
            selected-options := deliberate (options, S);
            update-state (selected-options, S);
            execute (S);
            event-queue := get-new-events();
        until quit.
(Slide callouts label the agent’s internal state (S), its percepts (the event queue), and its “intentions” (the selected options).)

BDI interpreter
    BDI-interpreter
        initialize-state();
        do
            options := option-gen (event-queue, B, G, I);
            selected-options := deliberate (options, B, G, I);
            update-intentions (selected-options, I);
            execute (I);
            event-queue := get-new-events();
            drop-successful-attitudes (B, G, I);
            drop-impossible-attitudes (B, G, I);
        until quit.
(Slide callouts: B, G, and I are the agent’s beliefs, desires (goals), and intentions; the drop- steps discard satisfied or unrealizable beliefs, goals, and intentions.)
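A rough, runnable Python rendering of this loop, with the option generator, deliberation, and execution steps stubbed out (all names, the example goal, and the stub behavior are invented for illustration):

from collections import deque

beliefs, goals, intentions = set(), {"have-coffee"}, []
event_queue = deque(["coffee-machine-free"])

def option_gen(events, B, G, I):
    # propose a plan for each goal whenever there are pending events (stub)
    return [("brew-coffee", g) for g in G if events]

def deliberate(options, B, G, I):
    return options[:1]                     # pick the first option (stub)

def execute(I):
    if I:
        print("executing", I[0])

def get_new_events():
    return deque()                         # no new percepts in this sketch

for _ in range(3):                         # bounded loop instead of "until quit"
    options = option_gen(event_queue, beliefs, goals, intentions)
    selected = deliberate(options, beliefs, goals, intentions)
    intentions.extend(selected)            # update-intentions
    execute(intentions)
    event_queue = get_new_events()
    # dropping satisfied or unrealizable attitudes would go here (stubbed out)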

Issues in implementation Updating the BDI structures is intractable in the general case. Practical systems therefore use only explicit beliefs and goals, and represent beliefs, goals, and intentions as plan structures that the agent follows; these plan structures support means-ends reasoning and are hierarchically structured.

Coordinating BDI agents Model actions of the agents in terms of how they can be affected by other agents’ preferences Flexible actions can be delayed or omitted Inevitable actions can be delayed but not omitted Immediate actions can be neither delayed nor omitted Triggerable actions can be performed at the request of another agent Use a finite state automaton (skeleton) to model the state transitions of the agent
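A hedged sketch of such a skeleton: a finite state automaton whose transitions carry actions tagged with how they may be coordinated (all states, actions, and tags below are made up for illustration):

FLEXIBLE, INEVITABLE, IMMEDIATE, TRIGGERABLE = range(4)

skeleton = {
    # state: [(action, coordination tag, next state), ...]
    "idle":    [("start-task", TRIGGERABLE, "working")],   # runs at another agent's request
    "working": [("report", FLEXIBLE, "working"),           # may be delayed or omitted
                ("finish", INEVITABLE, "done")],            # may be delayed, not omitted
    "done":    [("notify", IMMEDIATE, "idle")],             # neither delayed nor omitted
}

def successors(state):
    # states reachable in one transition of the skeleton
    return [nxt for (_action, _tag, nxt) in skeleton[state]]

print(successors("working"))   # ['working', 'done']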

Coordination relationships Model the relationships between two agents’ events Is-required-by Disables Enables Conditionally enables (guaranteeing enablement) Initiates Jointly-required-by Compensates-for-failure

Communicating BDI agents Performative: speech act that is itself an action Informing Requesting Promising Permitting Prohibiting Declaring Expressing
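A hedged, KQML-flavored illustration of a performative carried as a structured message (the field values, agent names, and handler behavior are invented for this example):

message = {
    "performative": "request",           # the speech act itself
    "sender": "agent-A",
    "receiver": "agent-B",
    "content": "achieve(have-coffee)",   # what is being requested
    "ontology": "office-domain",         # vocabulary the content is drawn from
}

def handle(msg):
    if msg["performative"] == "request":
        print(msg["receiver"], "considers doing:", msg["content"])
    elif msg["performative"] == "inform":
        print(msg["receiver"], "updates its beliefs with:", msg["content"])

handle(message)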

Communicating: Ontologies Ontology – representation of objects and relationships in the world Not quite the same as a knowledge base An ontology is typically the “representational” part of a knowledge base… …but sometimes axioms and rules are in an ontology

Societies of BDI agents Groups of agents interact in some way Agents may have different roles within the group The agents may be heterogeneous or homogeneous Teams of agents share (some) common goals

Mutual BDI Mutual beliefs: everyone believes p, believes that the others believe p, believes that the others believe that …; it is impossible to achieve perfect mutual information in environments where communication can fail (“We attack at dawn”). Joint intentions: everyone intends p, and everyone will persist with p until it is achieved or impossible. Shared plans: intending-to and intending-that. Social commitments: promises and persistence.

A few notes on grammar “Punctuation always goes inside a quote.” “That” is used to define; “which” is used to clarify or extend. The system that Weiss describes is … (“that” tells which system I’m talking about). The PRS system, which Georgeff et al. developed, … (“which” tells more about the only system in question). Useful references: Strunk and White, The Elements of Style; Dupré, BUGS in Writing; The Chicago Manual of Style.