Do software agents know what they talk about? Agents and Ontology dr. Patrick De Causmaecker, Nottingham, March 7-11, 2005.

Definition revisited
Autonomy (generally accepted)
Learning (not necessarily, maybe even undesirable)
An agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives.

[Diagram: the agent receives sensor input from the environment and produces action output into it.]

Definition
An agent:
has an impact on its environment,
has only partial control over it,
and its actions may have nondeterministic effects.
The agent has a set of possible actions, which may or may not make sense depending on environment parameters.

The fundamental problem
The agent must decide which of its actions are best suited to meet its objectives.
An agent architecture is a software structure for a decision system that functions in an environment.

Example: a control system
A thermostat works according to the rules:
too cold => heating on
temperature OK => heating off
Distinguish environment, action, impact.
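
The thermostat rules above can be sketched as a minimal rule-based agent; the setpoint value and the function name are illustrative assumptions, not from the slides:

```python
# Minimal sketch of the thermostat as an agent: a fixed mapping from a
# percept (the temperature) to an action. SETPOINT is an assumed value.
SETPOINT = 20.0  # degrees Celsius

def thermostat(temperature):
    # too cold => heating on; temperature OK => heating off
    if temperature < SETPOINT:
        return "heating on"
    return "heating off"
```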

Example: a control system, the xbiff mail notifier under X Windows
xbiff lives in a software environment.
It executes Linux software functions to obtain its information (ls to check the mailbox).
It uses Linux software functions to change its environment (adapting the icon on the desktop).
As an agent it is no more complicated than the thermostat.

Environments
Access
Deterministic or not
Static or dynamic
Discrete or continuous

Access
The temperature at the north pole of Mars?
Uncertainty, incompleteness of information.
But the agent must decide.
Better access allows for simpler agents.

Deterministic or not
Sometimes the result of an action is not deterministic. This is caused by:
the limited impact of the agent,
the limited capabilities of the agent,
the complexity of the environment.
The agent must check the consequences of its actions.

Static or dynamic
Is the agent the only actor? E.g. software systems, large civil constructions, visitors in an exhibition.
Most systems are dynamic:
the agent must keep collecting data, since the state may change during the action or the decision process;
synchronisation and co-ordination between processes and agents are necessary.

Discrete or continuous
Classify: chess, taxi driving, navigating, word processing, understanding natural language.
Which is more difficult?

Interaction with the environment
Originally: functional systems, e.g. compilers.
Given a precondition, they realise a postcondition.
Top-down design is possible: f: I -> O

Interaction: reactivity
Most programs are reactive:
they maintain a relationship with modules and environment, and respond to signals;
they can react quickly;
they react and think afterwards (or not).
Reactive agents take local decisions with a global impact.

Intelligent agents
Intelligence consists of:
responsiveness,
proactiveness,
social ability.
E.g. pure proactiveness: a C program, which assumes a constant environment.
E.g. pure responsiveness: reacting to signals as they arrive.
The agent is in the middle, and this is what makes it complicated.

Agents and objects
"Objects are actors. They respond in a human-like way to messages…"
Agents are AUTONOMOUS.
Objects implement methods that can be CALLED by other objects.
Agents DECIDE what to do in response to messages.

Objects do it for free; agents do it because they want to.

Agents and expert systems
E.g.: MYCIN, …
Expert systems are consultants, they do not act.
They are in general not proactive.
They have no social abilities.

Agents as intentional systems
Belief, Desire, Intention (BDI).
First order: beliefs, … about objects, NOT about beliefs, …
Higher order: the agent may model its own beliefs, … or those of other agents.

A simple example
A light switch is an agent that can allow current to pass or not. It will do so if it believes that we want the current to pass, and not if it believes that we do not. We convey our intentions by flipping the switch.
There are simpler models of a switch…

Abstract architecture
The environment is a set of states: E = {e, e', …}
An agent has a set of actions: Ac = {α, α', …}
A run is a sequence state-action-state-…:
r = e0 --α0--> e1 --α1--> e2 --α2--> … --α(u-1)--> eu

Abstract architecture: symbols
R is the set of runs.
R^Ac is the set of runs ending in an action.
R^E is the set of runs ending in a state.
r, r' range over R.

Abstract architecture
The state transformer function: τ: R^Ac -> P(E)
An action may lead to a set of possible states.
The result depends on the entire run.
τ(r) may be empty: then the run has ended.

Abstract architecture
An environment is a triple Env = <E, e0, τ> with E a set of states, e0 an initial state and τ a state transformer function.
An agent is a function Ag: R^E -> Ac, which is deterministic!
R(Ag, Env) is the set of all terminated runs of Ag in Env.

Abstract architecture
A sequence (e0, α0, e1, α1, e2, α2, …) is a run of agent Ag in Env = <E, e0, τ> iff:
e0 is the initial state of Env;
for u > 0: eu ∈ τ((e0, α0, …, α(u-1))) and αu = Ag((e0, α0, …, α(u-1), eu))
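
The abstract architecture can be sketched in code. The two-state world, the concrete actions and all the names below (tau, agent, generate_run) are illustrative assumptions, not part of the formal model:

```python
import random

def tau(run):
    """State transformer tau: R^Ac -> P(E): maps a run ending in an
    action to the set of possible successor states."""
    last_action = run[-1]
    if last_action == "heat":
        return {"ok"}                # heating reliably warms the room
    return {"cold", "ok"}            # waiting has a nondeterministic outcome

def agent(run):
    """Ag: R^E -> Ac, a deterministic function of the run so far.
    Here it only looks at the last state (purely reactive)."""
    return "heat" if run[-1] == "cold" else "wait"

def generate_run(e0, steps=4):
    """Build a run e0 -a0-> e1 -a1-> ... by alternating Ag and tau."""
    run = [e0]
    for _ in range(steps):
        run.append(agent(run))       # alpha_u = Ag((e0, a0, ..., e_u))
        successors = tau(run)
        if not successors:           # tau(r) empty: the run has ended
            break
        run.append(random.choice(sorted(successors)))  # e_{u+1} in tau(r)
    return run
```

Note how nondeterminism lives entirely in tau, while the agent itself stays a deterministic function of the run, as the definition requires.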

Perception
The agent function can be split into:
perception,
action selection.
We now call see the function that allows the agent to observe its environment, and action the function modelling the decision process.

[Diagram: as before, but within the agent the sensor input feeds the see function, whose output feeds the action function that produces the action output.]

Perception
We have:
see: E -> Per
action: Per* -> Ac
action works on sequences of percepts.
An agent is a pair: Ag = <see, action>
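
The pair Ag = <see, action> can be sketched as two plain functions; the chosen percept (only the temperature component of the state) is an illustrative assumption:

```python
def see(state):
    """see: E -> Per. The agent perceives only part of the state."""
    return state["temperature_ok"]

def action(percepts):
    """action: Per* -> Ac, defined on the whole percept sequence;
    here it happens to use only the latest percept."""
    return "heating off" if percepts[-1] else "heating on"

# The agent is the pair Ag = <see, action>:
Ag = (see, action)

percepts = [see({"temperature_ok": False})]
chosen = action(percepts)
```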

Perception: an example
Beliefs:
x = 'The temperature is OK'
y = 'Gerhard Schröder is chancellor'
Environment: E = {e1 = {¬x, ¬y}, e2 = {¬x, y}, e3 = {x, ¬y}, e4 = {x, y}}
What does the thermostat see?

Perception
Equivalence of states: e1 ~ e2 iff see(e1) = see(e2)
|~| = |E| for a strong agent (it distinguishes all states).
|~| = 1 for an agent with weak perception (it distinguishes none).
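
Using the x/y example from the previous slide, the relation ~ induced by a thermostat's see function can be checked directly; the encoding of states as sets of the propositions that hold is my own illustrative choice:

```python
x = "the temperature is OK"
y = "Gerhard Schroeder is chancellor"

def see(e):
    """The thermostat perceives only whether x holds in the state."""
    return x in e

# The four states of the example environment (a proposition is in the
# set iff it holds, so e1 = {not x, not y} is the empty set):
e1, e2, e3, e4 = set(), {y}, {x}, {x, y}

# e1 ~ e2 and e3 ~ e4: the thermostat cannot tell them apart.
```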

Agents with a state
The past is taken into account through an internal state i of the agent:
see: E -> Per
action: I -> Ac
next: I x Per -> I
Action selection is action(next(i, see(e))).
The new internal state is i' = next(i, see(e)).
Environmental impact: e' ∈ τ(r)
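
A state-based agent can be sketched with an explicit next function; the internal state used here (a counter of 'cold' percepts) is an illustrative assumption:

```python
def see(e):
    """see: E -> Per; the percept is the raw state here, for simplicity."""
    return e

def next_state(i, per):
    """next: I x Per -> I. Internal state = number of 'cold' percepts."""
    return i + 1 if per == "cold" else i

def action(i):
    """action: I -> Ac, working on the internal state only."""
    return "heat harder" if i >= 2 else "heat"

i = 0                                  # initial internal state
for e in ["cold", "ok", "cold"]:
    i = next_state(i, see(e))          # i' = next(i, see(e))
chosen = action(i)                     # action(next(i, see(e)))
```

Unlike the purely reactive agent, this one reacts differently to the same percept depending on its history.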

How to tell the agent what to do
Two approaches:
utility,
predicates.
Utility is a performance measure for states.
Predicates contain a specification of the states.

Utility
Let it work purely on states: u: E -> R
The fitness of an action is then judged on:
the minimum of the available u-values,
the average of the available u-values,
…
This approach is local; agents become myopic.
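
The minimum and average criteria can be made concrete; the u-values and the outcome sets per action below are made up for illustration:

```python
def u(e):
    """u: E -> R, a utility on states (values are assumed)."""
    return {"ok": 1.0, "cold": 0.0, "hot": 0.5}[e]

# Possible outcome states per action (illustrative):
outcomes = {"heat": {"ok", "hot"}, "wait": {"cold", "ok"}}

def worth_min(a):
    """Pessimistic: judge an action by its worst possible outcome."""
    return min(u(e) for e in outcomes[a])

def worth_avg(a):
    """Judge an action by the average over its possible outcomes."""
    return sum(u(e) for e in outcomes[a]) / len(outcomes[a])
```

Both criteria look only one step ahead, which is exactly the myopia the slide warns about.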

Utility
Let it work on runs instead: u: R -> R
Agents can then look forward.
E.g.: Tileworld (Pollack 1990)
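
Moving the utility from states to runs lets the agent weigh whole futures; the run encoding (states at even positions) and the summing rule are my own illustrative choices:

```python
def u_state(e):
    return {"ok": 1.0, "cold": 0.0}[e]

def u_run(run):
    """u: R -> R: here, the sum of state utilities along the run."""
    return sum(u_state(e) for e in run[0::2])  # states at even positions

run = ["cold", "heat", "ok", "wait", "ok"]   # e0 -a0-> e1 -a1-> e2
value = u_run(run)
```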

Utilities
May be defined probabilistically, by adding a probability to the state transformation.
A problem is computability within specific time limits: in most cases the optimum cannot be found, and one can use heuristics here.

Predicates
Utilities are not the most natural way to specify a desirable state. What does it mean that the temperature is OK?
Humans think in objectives. Those are statements, or predicates.

Task environments
A pair <Env, Ψ> is called a task environment iff Env is an environment and Ψ: R -> {0,1} is a predicate over the runs R.
The set of runs satisfying the predicate is R_Ψ.
An agent Ag is successful iff R_Ψ(Ag, Env) = R(Ag, Env), i.e. ∀r ∈ R(Ag, Env): Ψ(r)
Alternatively (a weaker notion): ∃r ∈ R(Ag, Env): Ψ(r)
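
The two success notions translate directly into all/any over the set of runs; the toy runs and the predicate below are illustrative assumptions:

```python
def psi(run):
    """Psi: R -> {0,1}; here: success means the run ends in state 'ok'."""
    return run[-1] == "ok"

# Assume these are all the runs of Ag in Env:
runs = [["cold", "heat", "ok"], ["cold", "heat", "ok", "wait", "ok"]]

strong_success = all(psi(r) for r in runs)   # every run satisfies Psi
weak_success = any(psi(r) for r in runs)     # some run satisfies Psi
```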

Task environments
One distinguishes:
achievement tasks, which aim at bringing about a certain condition of the environment;
maintenance tasks, which try to avoid a certain condition of the environment.