Luís Moniz Pereira
Centro de Inteligência Artificial - CENTRIA, Universidade Nova de Lisboa
lmp@di.fct.unl.pt

Pierangelo Dell’Acqua
Dept. of Science and Technology, Linköping University
pier@itn.liu.se
Our agents
We propose a LP approach to agents that can:
- Reason and react to other agents
- Prefer among possible choices
- Intend to reason and to act
- Update their own knowledge, reactions, and goals
- Interact by updating the theory of another agent
- Decide whether to accept an update depending on the requesting agent
Framework
This framework builds on the works:
- Updating Agents - P. Dell’Acqua & L. M. Pereira, MAS’99
- Updates plus Preferences - J. J. Alferes & L. M. Pereira, JELIA’00
Enabling agents to update their KB
Updating agent: a rational, reactive agent that can dynamically change its own knowledge and goals. It:
- makes observations
- reciprocally updates other agents with goals and rules
- thinks (rational)
- selects and executes an action (reactive)
Agent’s language
Atomic formulae:
- objective atoms: A
- default atoms: not A
- projects: i:C
- updates: i÷C
Formulae (each Li is an atom, an update or a negated update; each Zj is a project):
- generalized rules: A ← L1 ∧ ... ∧ Ln and not A ← L1 ∧ ... ∧ Ln
- active rules: L1 ∧ ... ∧ Ln ⇒ Z
- integrity constraints: false ← L1 ∧ ... ∧ Ln ∧ Z1 ∧ ... ∧ Zm
Projects and updates
A project j:C denotes the intention of some agent i of proposing to update the theory of agent j with C; for example, fred may have the project wilma:C.
An update i÷C denotes an update proposed by agent i of the current theory of some agent j with C; for example, fred÷C.
Example: active rules
Consider the following active rules in the theory of Maria:
money ⇒ maria:(not work)
beach ⇒ maria:goToBeach
travelling ⇒ pedro:bookTravel
Agent’s language
A project i:C can take one of the forms:
- i : (A ← L1 ∧ ... ∧ Ln)
- i : (L1 ∧ ... ∧ Ln ⇒ Z)
- i : (?- L1 ∧ ... ∧ Ln)
- i : (not A ← L1 ∧ ... ∧ Ln)
- i : (false ← L1 ∧ ... ∧ Ln ∧ Z1 ∧ ... ∧ Zm)
Note that a program can be updated with another program, i.e., any rule can be updated.
Agents’ knowledge states
Knowledge states represent dynamically evolving states of agents’ knowledge. They undergo change due to updates. Given the current knowledge state Ps, its successor knowledge state Ps+1 is produced as a result of the occurrence of a set of parallel updates.
Update actions do not modify the current or any of the previous knowledge states. They only affect the successor state: the precondition of the action is evaluated in the current state and the postcondition updates the successor state.
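As a rough sketch of this state discipline (plain Python, with hypothetical atom-level updates standing in for full rule updates), each set of parallel updates produces a fresh successor state and never mutates the current or any earlier state:

```python
def successor(state, parallel_updates):
    """Produce P_{s+1} from P_s; P_s itself is never mutated."""
    new_state = set(state)  # copy: earlier states stay intact
    for u in parallel_updates:
        if u.startswith("not "):          # a default atom retracts its objective atom
            new_state.discard(u[len("not "):])
        else:
            new_state.add(u)
    return new_state

states = [{"work"}]                                   # P_0
states.append(successor(states[-1], {"money"}))       # P_1 = {work, money}
states.append(successor(states[-1], {"not work"}))    # P_2 = {money}
```

Note that `states[0]` is still `{"work"}` after both updates: the postconditions only ever touch the successor.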
Enabling agents to prefer
Let the underlying theory of Maria be:
city ← not mountain ∧ not beach ∧ not travelling
work
vacation ← not work
mountain ← not city ∧ not beach ∧ not travelling ∧ money
beach ← not city ∧ not mountain ∧ not travelling ∧ money
travelling ← not city ∧ not mountain ∧ not beach ∧ money
Since the theory has a unique two-valued model M = {city, work}, Maria decides to live in the city.
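To see why M = {city, work} is the unique model, one can brute-force the stable models of this small program. The sketch below (plain Python; the rule encoding is our own, not from the paper) guesses every candidate set of atoms and keeps those that equal the least model of their Gelfond-Lifschitz reduct:

```python
from itertools import combinations

# Each rule: (head, positive body, negative body), reading "not x" as default negation
rules = [
    ("city",       [],        ["mountain", "beach", "travelling"]),
    ("work",       [],        []),
    ("vacation",   [],        ["work"]),
    ("mountain",   ["money"], ["city", "beach", "travelling"]),
    ("beach",      ["money"], ["city", "mountain", "travelling"]),
    ("travelling", ["money"], ["city", "mountain", "beach"]),
]
atoms = sorted({a for h, p, n in rules for a in [h, *p, *n]})

def least_model(definite_rules):
    """Least model of a negation-free program, by naive fixpoint iteration."""
    m, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if head not in m and all(a in m for a in pos):
                m.add(head)
                changed = True
    return m

def stable_models(rules):
    for k in range(len(atoms) + 1):
        for guess in combinations(atoms, k):
            guess = set(guess)
            # Gelfond-Lifschitz reduct: drop rules whose negative body meets the guess
            reduct = [(h, p) for h, p, n in rules if not (set(n) & guess)]
            if least_model(reduct) == guess:
                yield guess

print(list(stable_models(rules)))  # the only stable model is {'city', 'work'}
```

Since money has no rule, no guess containing it can be stable, and {city, work} is the single survivor.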
Enabling agents to prefer
If we add the fact money to the theory of Maria, then the theory has 4 models:
M1 = {city, money, work}      M2 = {mountain, money, work}
M3 = {beach, money, work}     M4 = {travelling, money, work}
Therefore, Maria is unable to decide where to live. To select among alternative choices, Maria needs the ability to prefer.
Updates plus preferences
A logic programming framework that combines two distinct forms of reasoning: preferring and updating.
- Updates create new models, while preferences allow us to select among pre-existing models.
- The priority relation can itself be updated.
- The language considers sequences of logic programs that result from the consecutive updates of an initial program, where it is possible to define a priority relation among the rules of all successive programs.
Preferring agents
Preferring agent: an agent that is able to prefer beliefs and reactions when several alternatives are possible.
- Agents can express preferences about their own rules.
- Preferences are expressed via priority rules.
- Preferences can be updated, possibly on advice from others.
Priority rules
Let < be a binary predicate symbol whose set of constants includes names for all the generalized rules: r1 < r2 means that rule r1 is preferred to rule r2. A priority rule is a generalized rule defining <.
A prioritized LP is a set of generalized rules (possibly including priority rules) and integrity constraints.
Example: a prioritized LP
(1) city ← not mountain ∧ not beach ∧ not travelling
(2) work
(3) vacation ← not work
(4) mountain ← not city ∧ not beach ∧ not travelling ∧ money
(5) beach ← not city ∧ not mountain ∧ not travelling ∧ money
(6) travelling ← not city ∧ not mountain ∧ not beach ∧ money
1<4 ← work      4<6 ← vacation
1<5 ← work      5<6 ← vacation
1<6 ← work      6<1 ← vacation
If we add money to the theory, then there is a unique model: M = {city, money, work}.
If work is false, then vacation holds and there are two models:
M1 = {mountain, money, vacation}      M2 = {beach, money, vacation}
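The effect of the priority rules can be illustrated with a toy filter (our own drastic simplification, not the paper's preference semantics): each candidate model is generated by exactly one of the mutually exclusive rules (1), (4), (5), (6), and a candidate is discarded when its generating rule is beaten by a priority fact r1 < r2 that currently holds:

```python
# Candidate models keyed by the rule that generates them (the fact money is assumed)
candidates = {
    1: {"city", "money"},
    4: {"mountain", "money"},
    5: {"beach", "money"},
    6: {"travelling", "money"},
}

def preferred(priorities, candidates):
    """Keep candidates whose generating rule is not the 'worse' side of any r1 < r2."""
    beaten = {worse for _better, worse in priorities}
    return {r: m for r, m in candidates.items() if r not in beaten}

# When work holds, the priorities 1<4, 1<5, 1<6 are in force:
print(sorted(preferred([(1, 4), (1, 5), (1, 6)], candidates)))  # [1]: city wins
# When vacation holds, the priorities 4<6, 5<6, 6<1 are in force:
print(sorted(preferred([(4, 6), (5, 6), (6, 1)], candidates)))  # [4, 5]: mountain or beach
```

This reproduces the slide's outcome: under work only the city model survives; under vacation the mountain and beach models remain.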
Agent theory
The initial theory of an agent α is a pair (P,R) where:
- P is a prioritized LP.
- R is a set of active rules.
An updating program is a finite set of updates.
Let S be a set of natural numbers. We call the elements s ∈ S states.
An agent α at state s, written αs, is a pair (T,U) where:
- T is the initial theory of α.
- U = {U1, …, Us} is a sequence of updating programs.
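A minimal data-structure sketch of this definition (Python; the field and method names are our own, not from the paper):

```python
from dataclasses import dataclass, field

@dataclass
class InitialTheory:
    prioritized_lp: list   # P: generalized rules, priority rules, integrity constraints
    active_rules: list     # R

@dataclass
class Agent:
    theory: InitialTheory                         # T: fixed once given
    updates: list = field(default_factory=list)   # U = [U_1, ..., U_s]

    @property
    def state(self):
        # The agent's state s is the number of updating programs received so far
        return len(self.updates)

    def receive(self, updating_program):
        # An updating program is a finite set of updates; receiving one
        # produces the successor state s+1 without touching earlier U_i
        self.updates.append(set(updating_program))

maria = Agent(InitialTheory(prioritized_lp=[], active_rules=[]))
maria.receive({"money"})
maria.receive({"maria÷not work"})
print(maria.state)  # 2
```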
Multi-agent system
A multi-agent system M = {α1s, …, αns} at state s is a set of agents α1, …, αn at state s.
- M characterizes a fixed society of evolving agents.
- The declarative semantics of M characterizes the relationship among the agents in M and how the system evolves.
- The declarative semantics is stable models based.
Example: happy story
Let the initial theory (P,R) of Maria be:
(1) city ← not mountain ∧ not beach ∧ not travelling
(2) work
(3) vacation ← not work
(4) mountain ← not city ∧ not beach ∧ not travelling ∧ money
(5) beach ← not city ∧ not mountain ∧ not travelling ∧ money
(6) travelling ← not city ∧ not mountain ∧ not beach ∧ money
1<4 ← work      4<6 ← vacation
1<5 ← work      5<6 ← vacation
1<6 ← work      6<1 ← vacation
money ⇒ maria:(not work)
beach ⇒ maria:goToBeach
travelling ⇒ pedro:bookTravel
State: 0, U = {}
Example: happy story
At state 0 Maria receives the update money (the theory (P,R) is as before).
State: 1, U = {U1}, with U1 = {money}
Example: happy story
Then, Maria receives the update maria÷(not work).
State: 2, U = {U1, U2}, with U1 = {money}, U2 = {maria÷(not work)}
Example: happy story
Then, Maria receives the update (5<4 ← vacation).
State: 3, U = {U1, U2, U3}, with U1 = {money}, U2 = {maria÷(not work)}, U3 = {5<4 ← vacation}
Future work
The approach can be extended in several ways:
- Non-synchronous, dynamic multi-agent systems.
- Other rational abilities can be incorporated, e.g., learning.
- Development of a proof procedure for updating and preferring reasoning.