EEL 5937 Multi Agent Systems
Models of Agents Based on Intentional Logic
EEL 5937 Agents as Intentional Systems

When explaining human activity, we find it useful to make statements such as:
– Janine took her umbrella because she believed it would rain.
– Michael worked hard because he wanted a PhD.
These statements make use of a folk psychology, by which human behavior is predicted and explained through the attribution of attitudes such as believing and wanting (and also hoping, fearing, and so on). The attitudes employed in such folk-psychological descriptions are called the intentional notions.
EEL 5937 Agents as Intentional Systems (cont'd)

The philosopher Daniel Dennett coined the term intentional system to describe entities "whose behavior can be predicted by the method of attributing beliefs, desires and rational acumen". Dennett identifies different "grades" of intentional system (illustrated in the sketch below):
– "A first-order intentional system has beliefs and desires (etc.) but no beliefs and desires about beliefs and desires."
– "… A second-order intentional system is more sophisticated; it has beliefs and desires (and no doubt other intentional states) about beliefs and desires (and other intentional states) – both those of others and its own."
Is it legitimate or useful to attribute beliefs, desires, and so on, to computer systems?
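To make the distinction between grades concrete, here is a minimal Python sketch. It is purely illustrative (the Belief class and order function are hypothetical, not part of Dennett's account): a first-order system holds beliefs about the world, while a second-order system also holds beliefs about beliefs.

```python
# Hypothetical illustration of Dennett's "grades" of intentional system.
# A belief is about either a plain fact or another belief.

from dataclasses import dataclass

@dataclass
class Belief:
    holder: str           # who holds the belief
    proposition: object   # a plain fact, or another Belief

# First-order: beliefs about the world only.
first_order = Belief("robot", "it is raining")

# Second-order: a belief about someone else's belief.
second_order = Belief("robot", Belief("Janine", "it is raining"))

def order(belief: Belief) -> int:
    """Count the levels of belief nesting."""
    if isinstance(belief.proposition, Belief):
        return 1 + order(belief.proposition)
    return 1

print(order(first_order))   # 1
print(order(second_order))  # 2
```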
EEL 5937 Legitimacy of the intentional stance

McCarthy argued that there are occasions when the intentional stance is appropriate: "To ascribe beliefs, free will, intentions, consciousness, abilities, or wants to a machine is legitimate when such an ascription expresses the same information about the machine that it expresses about a person. It is useful when the ascription helps us understand the structure of the machine, its past or future behaviour, or how to repair or improve it. It is perhaps never logically required even for humans, but expressing reasonably briefly what is actually known about the state of the machine in a particular situation may require mental qualities or qualities isomorphic to them. Theories of belief, knowledge and wanting can be constructed for machines in a simpler setting than for humans, and later applied to humans. Ascription of mental qualities is most straightforward for machines of known structure such as thermostats and computer operating systems, but is most useful when applied to entities whose structure is incompletely known." [McCarthy, 1978] (quoted in [Shoham, 1990])
EEL 5937 What can be described by an intentional stance?

It turns out that almost anything can:
– "It is perfectly coherent to treat a light switch as a (very cooperative) agent with the capability of transmitting current at will, who invariably transmits current when it believes that we want it transmitted and not otherwise; flicking the switch is simply our way of communicating our desires." (Shoham, 1990)
But the description buys us nothing, which is why it sounds ridiculous. Put crudely, the more we know about a system, the less we need to rely on animistic, intentional explanations of its behavior. However, with very complex systems, even if a complete and accurate picture of the system's architecture and workings is available, a mechanistic, design-stance explanation of its behavior may not be practicable.
EEL 5937 So, how do we design our agents?

There are a number of intentional notions we can consider: beliefs, desires, intentions, fears, wishes, preferences, …, and emotions such as love, hate, anger, and faith. Which ones are we going to choose? Various approaches have been proposed:
– Cohen and Levesque: beliefs and goals
– Rao and Georgeff: beliefs, desires and intentions in a branching-time framework
– Singh: a family of logics for representing intentions, beliefs, knowledge, know-how, and communication in a branching-time framework
– Kinny et al.: BDI + social plans, teamwork
– … many others
EEL 5937 Intentions

Cohen and Levesque identify seven properties that must be satisfied by a reasonable theory of intention (see the sketch after this list):
1. Intentions pose problems for agents, who need to determine ways of achieving them.
2. Intentions provide a "filter" for adopting other intentions, which must not conflict.
3. Agents track the success of their intentions, and are inclined to try again if their attempts fail.
4. Agents believe their intentions are possible.
5. Agents do not believe they will not bring about their intentions.
6. Under certain circumstances, agents believe they will bring about their intentions.
7. Agents need not intend all the expected side effects of their intentions.
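The sketch below illustrates two of these properties in code, under assumed, simplified representations (the IntentionalAgent class and its whole interface are hypothetical, not Cohen and Levesque's logical formalism): property 2, filtering out conflicting intentions, and property 3, retrying failed attempts.

```python
# Hypothetical sketch of properties 2 and 3 above; an informal
# illustration, not Cohen and Levesque's formal theory.

class IntentionalAgent:
    def __init__(self, conflicts):
        self.intentions = []
        self.conflicts = conflicts  # list of sets of mutually exclusive goals

    def adopt(self, goal):
        """Property 2: reject intentions that conflict with current ones."""
        if any({goal, held} in self.conflicts for held in self.intentions):
            return False
        self.intentions.append(goal)
        return True

    def pursue(self, goal, attempt, max_tries=3):
        """Property 3: track success and retry failed attempts."""
        for _ in range(max_tries):
            if attempt(goal):
                self.intentions.remove(goal)
                return True
        return False
```

For example, an agent constructed with conflicts=[{"go_left", "go_right"}] that has adopted "go_left" will refuse to adopt "go_right".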
EEL 5937 BDI

The Belief–Desire–Intention (BDI) model:
Beliefs:
– What the agent believes about the world, as information from different sources.
– Also, beliefs about the beliefs of other agents.
Desires:
– The high-level goals of the agent.
Intentions:
– Low-level goals, which can be immediately transformed into action.
In the Rao and Georgeff formulation, these notions are extended to reasoning in a branching-time framework.
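A minimal sketch of a BDI control loop follows, assuming a much-simplified setting (the BDIAgent class, the Desire interface with achievable() and step(), and all names are hypothetical; the branching-time semantics of Rao and Georgeff is omitted entirely):

```python
# A minimal, hypothetical BDI control loop; a simplification, not the
# Rao-Georgeff logic. Desire objects are assumed to offer achievable()
# and step() methods.

class BDIAgent:
    def __init__(self, desires):
        self.beliefs = {}        # what the agent believes about the world
        self.desires = desires   # high-level goals
        self.intentions = []     # goals committed to, ready to act on

    def perceive(self, percepts):
        """Update beliefs with new information about the world."""
        self.beliefs.update(percepts)

    def deliberate(self):
        """Promote desires that look achievable under current beliefs."""
        self.intentions = [d for d in self.desires
                           if d.achievable(self.beliefs)]

    def act(self):
        """Execute the next step of the first pending intention."""
        if self.intentions:
            self.intentions[0].step(self.beliefs)

    def run(self, percepts):
        self.perceive(percepts)
        self.deliberate()
        self.act()
```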
EEL 5937 Intentional notions as abstraction tools

The intentional notions are abstraction tools which provide us with a convenient and familiar way of describing, explaining, and predicting the behavior of complex systems. Remember: the most important developments in computing are based on new abstractions:
– Procedural abstraction
– Abstract data types
– Objects
Agents, and intentional systems, represent a similar abstraction. So agent theorists start from the strong view of agents as intentional systems: systems whose simplest consistent description requires the intentional stance.
EEL 5937 Intentional models as post-declarative systems

Procedural programming: we say exactly what the system should do.
Declarative programming: we state what we want to achieve, give the system general information about the relationships between objects, and let a built-in control mechanism figure out what to do (e.g. SQL, goal-directed theorem proving).
Intentional models: we give a very abstract specification of the system ("desires") and let the control mechanism figure out what to do, knowing that it will act in accordance with some built-in theory of agency (e.g. the Cohen–Levesque model of intention, or BDI logic).
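A hypothetical sketch of the contrast (the robot and agent objects and every method name here are invented for illustration): the procedural version spells out each step, the declarative version states what is wanted, and the intentional version hands the agent a desire and defers to its built-in theory of agency.

```python
# Hypothetical contrast between the three styles; all objects and
# methods below are invented for illustration.

# Procedural: say exactly what to do, step by step.
def clean_room_procedural(robot):
    robot.go_to("closet")
    robot.pick_up("vacuum")
    robot.vacuum("living room")

# Declarative (SQL-style): state what we want; the engine works out how.
QUERY = "SELECT name FROM rooms WHERE dirty = TRUE"

# Intentional: hand the agent a desire; its built-in theory of agency
# (e.g. a BDI interpreter) decides how and when to act on it.
def clean_room_intentional(agent):
    agent.adopt_desire("living room is clean")
```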
EEL 5937 A critique of intentional models

Intentional logic is very complicated, and it is very difficult to program. (*)
The resulting programming models are computationally complex, and usually intractable.
It is an open question whether they are in fact biologically accurate.
(*) This might be merely a result of insufficiently developed tools and methodologies.
EEL 5937 Practice and theory

We will use the notions of beliefs, desires, and intentions in our explanations and implementations. We will not strive for conceptual purity in our implementations; we will use theoretical models only as far as they can serve as a basis for implementation.