EEL 5937 Models of agents based on intentional logic EEL 5937 Multi Agent Systems.

Agents as Intentional Systems

When explaining human activity, we find it useful to make statements such as:
– Janine took her umbrella because she believed it would rain.
– Michael worked hard because he wanted a PhD.
These statements make use of a folk psychology, by which human behavior is predicted and explained through the attribution of attitudes such as believing and wanting, as well as hoping, fearing, and so on. The attitudes employed in such folk-psychological descriptions are called the intentional notions.

Agents as intentional systems (cont’d)

The philosopher Daniel Dennett coined the term intentional system to describe entities “whose behavior can be predicted by the method of attributing belief, desires and rational acumen”. Dennett identifies different “grades” of intentional system:
– “A first-order intentional system has beliefs and desires (etc.) but no beliefs and desires about beliefs and desires.”
– “A second-order intentional system is more sophisticated; it has beliefs and desires (and no doubt other intentional states) about beliefs and desires (and other intentional states) – both those of others and its own.”
Is it legitimate or useful to attribute beliefs, desires, and so on, to computer systems?

Legitimacy of the intentional stance

McCarthy argued that there are occasions when the intentional stance is appropriate:
“To ascribe beliefs, free will, intentions, consciousness, abilities, or wants to a machine is legitimate when such an ascription expresses the same information about the machine that it expresses about a person. It is useful when the ascription helps us understand the structure of the machine, its past or future behaviour, or how to repair or improve it. It is perhaps never logically required even for humans, but expressing reasonably briefly what is actually known about the state of the machine in a particular situation may require mental qualities or qualities isomorphic to them. Theories of belief, knowledge and wanting can be constructed for machines in a simpler setting than for humans, and later applied to humans. Ascription of mental qualities is most straightforward for machines of known structure such as thermostats and computer operating systems, but is most useful when applied to entities whose structure is incompletely known.” [McCarthy, 1978], quoted in [Shoham, 1990]

What can be described by an intentional stance?

It turns out that almost everything can:
– “It is perfectly coherent to treat a light switch as a (very cooperative) agent with the capability of transmitting current at will, who invariably transmits current when it believes that we want it transmitted and not otherwise; flicking the switch is simply our way of communicating our desires.” [Shoham, 1990]
But doing so buys us nothing, which is why it sounds ridiculous. Put crudely, the more we know about a system, the less we need to rely on animistic, intentional explanations of its behavior. However, with very complex systems, even if a complete, accurate picture of the system’s architecture and workings is available, a mechanistic, design-stance explanation of its behavior may not be practicable.

So, how do we design our agents?

There are a number of intentional notions we can consider: beliefs, desires, intentions, fears, wishes, preferences, …, and even emotions: love, hate, anger, faith. Which ones are we going to choose? Various approaches have been proposed:
– Cohen and Levesque: beliefs and goals.
– Rao and Georgeff: beliefs, desires, and intentions in a branching-time framework.
– Singh: a family of logics for representing intentions, beliefs, knowledge, know-how, and communication in a branching-time framework.
– Kinny et al.: BDI + social plans, teamwork.
– … and many others.

Intentions

Cohen and Levesque identify seven properties that must be satisfied by a reasonable theory of intention:
1. Intentions pose problems for agents, who need to determine ways of achieving them.
2. Intentions provide a “filter” for adopting other intentions, which must not conflict.
3. Agents track the success of their intentions, and are inclined to try again if their attempts fail.
4. Agents believe their intentions are possible.
5. Agents do not believe they will not bring about their intentions.
6. Under certain circumstances, agents believe they will bring about their intentions.
7. Agents need not intend all the expected side effects of their intentions.
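Two of these properties lend themselves to a small sketch: property 2 (intentions filter the adoption of conflicting intentions) and property 3 (agents track success and retry). The following minimal Python illustration is an assumption-laden toy, not code from any agent framework; all class and function names are invented for this example.

```python
from dataclasses import dataclass


@dataclass
class Intention:
    goal: str
    attempts: int = 0
    achieved: bool = False


class IntentionalAgent:
    """Toy agent illustrating two Cohen-Levesque properties:
    intentions filter new intentions (2), and failed intentions
    are retried up to a bound (3)."""

    def __init__(self, max_retries: int = 3):
        self.intentions: list[Intention] = []
        self.max_retries = max_retries

    def adopt(self, goal: str, conflicts_with) -> bool:
        # Property 2: refuse goals that conflict with a live intention.
        if any(conflicts_with(goal, i.goal)
               for i in self.intentions if not i.achieved):
            return False
        self.intentions.append(Intention(goal))
        return True

    def pursue(self, attempt_goal) -> None:
        # Property 3: track success, retrying failed attempts.
        for intention in self.intentions:
            while (not intention.achieved
                   and intention.attempts < self.max_retries):
                intention.attempts += 1
                intention.achieved = attempt_goal(intention.goal)
```

Here `conflicts_with` and `attempt_goal` are caller-supplied callables, standing in for whatever conflict test and action execution a real agent would have.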

BDI

The belief–desire–intention model:
– Belief: what the agent believes about the world, as information from different sources; also, beliefs about the beliefs of other agents.
– Desire: the high-level goals of the agent.
– Intention: low-level goals, which can be immediately transformed into action.
In the Rao and Georgeff formulation, these notions are extended to reasoning in a branching-time framework.
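A single BDI deliberation cycle can be sketched as: given current beliefs, pick a desire whose plan is applicable and commit to that plan as an intention. This is a minimal sketch under invented names (`bdi_step`, the `plans` table), not the Rao-Georgeff formalism or any BDI platform’s API.

```python
def bdi_step(beliefs: dict, desires: list, plans: dict):
    """One toy deliberation cycle: return the plan body (the adopted
    intention) of the first desire whose precondition holds under
    the current beliefs."""
    for desire in desires:
        precondition, actions = plans[desire]
        if precondition(beliefs):
            return actions      # the intention: actions to execute now
    return []                   # no applicable plan: stay idle


# Example plan library, echoing the slide's umbrella/PhD examples.
plans = {
    "stay_dry": (lambda b: b.get("raining", False), ["take_umbrella"]),
    "get_phd":  (lambda b: b.get("enrolled", False), ["work_hard"]),
}
beliefs = {"raining": True, "enrolled": False}
intention = bdi_step(beliefs, ["stay_dry", "get_phd"], plans)
```

A real BDI interpreter would loop this cycle, revising beliefs from percepts and reconsidering intentions; the sketch shows only the belief-conditioned commitment step.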

Intentional notions as abstraction tools

The intentional notions are abstraction tools, which provide us with a convenient and familiar way of describing, explaining, and predicting the behavior of complex systems. Remember: the most important developments in computing are based on new abstractions:
– Procedural abstraction
– Abstract data types
– Objects
Agents, and intentional systems, represent a similar abstraction. So agent theorists start from the strong view of agents as intentional systems: those whose simplest consistent description requires the intentional stance.

Intentional models as post-declarative systems

– Procedural programming: we say exactly what the system should do.
– Declarative programming: we state something we want to achieve, give the system general information about the relationships between objects, and let a built-in control mechanism figure out what to do (e.g., SQL, goal-directed theorem proving).
– Intentional models: we give a very abstract specification of the system (“desires”) and let the control mechanism figure out what to do, knowing that it will act in accordance with some built-in theory of agency (e.g., the Cohen-Levesque model of intention, or BDI logic).
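The first two levels can be contrasted on a toy thermostat task. This is an illustrative sketch: the `GOAL`/`ACTIONS` names and the trivial one-step “solver” are assumptions made for the example, not any real planning machinery.

```python
# Procedural: say exactly what to do.
def procedural(temp: float) -> str:
    if temp < 18.0:
        return "heater_on"
    return "heater_off"


# Declarative: state the goal and the available actions' effects;
# a (here trivial) control mechanism figures out what to do.
GOAL = lambda temp: temp >= 18.0
ACTIONS = {
    "heater_on":  lambda t: t + 1.0,   # heating raises temperature
    "heater_off": lambda t: t,         # no-op
}

def declarative(temp: float) -> str:
    if GOAL(temp):
        return "heater_off"            # goal already holds, do nothing
    # Trivial one-step solver: pick any action that moves toward the goal.
    for name, effect in ACTIONS.items():
        if GOAL(effect(temp)) or effect(temp) > temp:
            return name
    return "heater_off"
```

At the intentional level, even `GOAL` would be abstracted away: the designer states a desire (“keep the occupant comfortable”) and relies on the agent’s theory of agency to derive goals and plans.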

A critique of intentional models

– Intentional logic is very complicated.
– It is very difficult to program. (*)
– The resulting programming models are computationally complex, usually intractable.
– It is an open question whether they are in fact biologically accurate.
(*) This might be just a result of insufficiently developed tools and methodologies.

Practice and theory

We will use the notions of beliefs, desires, and intentions in our explanations and implementations. We will not strive for conceptual purity in our implementation. We will use theoretical models as long as they can serve as a basis for implementation.