
1 Do software agents know what they talk about? Agents and Ontology dr. Patrick De Causmaecker, Nottingham, March 7-11, 2005

2 Nottingham, March 2005. Agents and Ontology. Patrick.DeCausmaecker@kahosl.be. Definition revisited: Autonomy (generally accepted); Learning (not necessarily, maybe undesirable). "An agent is a computer system that is situated in some environment and that is capable of autonomous action in this environment in order to meet its design objectives."

3 [Diagram: an agent embedded in its environment; sensor input flows into the agent, action output flows back out]

4 Definition. An agent has impact on its environment, but only partial control; actions may have non-deterministic effects. The agent has a set of possible actions, which may or may not make sense depending on environment parameters.

5 The fundamental problem. The agent must decide which of its actions best fit its objectives. An agent architecture is a software structure for a decision system that functions in an environment.

6 Example: a control system. A thermostat works according to the rules: Too cold => heating on; Temperature OK => heating off. Distinguish environment, action, impact.
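The two thermostat rules above can be written down as a minimal rule-based agent. This is a sketch for illustration only; the function name and the setpoint are invented, not part of the slides.

```python
def thermostat(temperature, setpoint=20.0):
    """Map a sensed temperature to an action.
    Rule 1: too cold       -> heating on
    Rule 2: temperature OK -> heating off
    (setpoint is an assumed threshold, not given on the slide)"""
    if temperature < setpoint:
        return "heating on"
    return "heating off"

print(thermostat(15.0))  # too cold
print(thermostat(22.0))  # temperature OK
```

The environment is the room temperature, the actions are "heating on"/"heating off", and the impact is that heating eventually raises the sensed temperature.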

7 Example: a control system in X Windows. xbiff handles email notification. Xbiff lives in a software environment: it executes LINUX software functions to arrive at its information (ls to check the mailbox) and uses LINUX software functions to change its environment (adapting the icon on the desktop). As an agent it is no more complicated than the thermostat.

8 Environments: Access; Deterministic or not; Static or dynamic; Discrete or continuous.

9 Access. The temperature at the north pole of Mars? Uncertainty and incompleteness of information, but the agent must decide nonetheless. Better access makes for simpler agents.

10 Deterministic or not. Sometimes the result of an action is not deterministic. This is caused by: the limited impact of the agent; the limited capabilities of the agent; the complexity of the environment. The agent must check the consequences of its actions.

11 Static/Dynamic. Is the agent the only actor? E.g. software systems, large civil constructions, visitors in an exhibition. Most systems are dynamic: the agent must keep collecting data, and the state may change during the action or the decision process. Synchronisation and co-ordination between processes and agents are necessary.

12 Discrete or continuous. Classify: chess, taxi driving, navigating, word processing, understanding natural language. Which is more difficult?

13 Interaction with environment. Originally: functional systems, e.g. compilers. Given a precondition, they realise a postcondition; top-down design is possible. f: I -> O

14 Interaction: reactivity. Most programs are reactive: they maintain a relationship with modules and the environment, and respond to signals. They can react quickly: react and think afterwards (or not). Reactive agents take local decisions with a global impact.

15 Intelligent agents. Intelligence is: responsivity, proactivity, social ability. E.g. pure proactivity: a C program in a constant environment. E.g. pure responsivity: a purely reactive system. The agent sits in the middle, and this is what makes it complicated.

16 Agents and Objects. "Objects are actors. They respond in a human-like way to messages..." Agents are AUTONOMOUS. Objects implement methods that can be CALLED by other objects; agents DECIDE what to do in response to messages.

17 Objects do it for free. Agents do it because they want to.

18 Agents and expert systems. E.g. Mycin, ... Expert systems are consultants: they do not act, they are in general not proactive, and they have no social abilities.

19 Agents as intentional systems: Belief, Desire, Intention (BDI). First order: beliefs, ... about objects, NOT about beliefs. Higher order: the agent may model its own beliefs, ... or those of other agents.

20 A simple example. A light switch is an agent that can allow current to pass or not. It will do so if it believes that we want the current to pass, and not if it believes that we do not. We communicate our intentions by switching. There are simpler models of a switch...

21 Abstract architecture. The environment is a set of states: E = {e, e', ...}. The agent has a set of actions: Ac = {α, α', ...}. A run is a sequence state-action-state-...: r = e_0 -α_0-> e_1 -α_1-> e_2 -α_2-> ... -α_{u-1}-> e_u
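A run can be sketched concretely as an alternating sequence starting with a state. This representation is invented for illustration; the slides only give the abstract notation.

```python
# A run e_0 -a_0-> e_1 -a_1-> ... stored as an alternating list
# of states and actions, starting with a state.
run = ["e0", "a0", "e1", "a1", "e2"]

def ends_in_state(r):
    """Odd length: the run ends in a state (r is in R^E).
    Even length: the run ends in an action (r is in R^Ac)."""
    return len(r) % 2 == 1

print(ends_in_state(run))        # the full run ends in a state
print(ends_in_state(run[:-1]))   # dropping e2 leaves a run ending in an action
```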

22 Abstract architecture. Symbols: R is the set of runs; R^Ac is the set of runs ending in an action; R^E is the set of runs ending in a state; r, r' are elements of R.

23 Abstract architecture. The state transformer: τ: R^Ac -> P(E). An action may lead to a set of states; the result depends on the run; τ(r) may be empty.

24 Abstract architecture. An environment is a triple Env = <E, e_0, τ>, with E a set of states, e_0 an initial state, and τ the state transformer. An agent is a function Ag: R^E -> Ac, which is deterministic! R(Ag, Env) is the set of all ended runs.

25 Abstract architecture. A sequence (e_0, α_0, e_1, α_1, e_2, α_2, ...) is a run of agent Ag in Env = <E, e_0, τ> iff: e_0 is the initial state of Env; for u > 0, e_u ∈ τ((e_0, α_0, ..., α_{u-1})) and α_u = Ag((e_0, α_0, ..., α_{u-1}, e_u)).
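The run definition above can be sketched as a generator that alternately asks the agent for an action and the environment for a successor state. The toy environment and all names are invented for illustration; only the shapes of the functions (Ag: R^E -> Ac, τ: R^Ac -> P(E)) come from the slides.

```python
import random

def generate_run(e0, tau, agent, steps=5, rng=random.Random(0)):
    """Build a run (e_0, a_0, e_1, a_1, ...) of agent Ag in Env = <E, e0, tau>.
    tau maps a run ending in an action to a set of possible next states;
    an empty set means the run has ended."""
    run = [e0]
    for _ in range(steps):
        action = agent(tuple(run))      # Ag: R^E -> Ac (deterministic)
        run.append(action)
        successors = tau(tuple(run))    # tau: R^Ac -> P(E)
        if not successors:              # tau(r) may be empty
            break
        run.append(rng.choice(sorted(successors)))
    return run

# Toy environment: heating makes the room ok, idling lets it go cold.
def tau(run):
    return {"ok"} if run[-1] == "heat" else {"cold"}

def agent(run):
    return "heat" if run[-1] == "cold" else "idle"

print(generate_run("cold", tau, agent, steps=3))
```

Because τ here returns singleton sets, the run is deterministic; with larger successor sets the same loop samples one of the non-deterministic outcomes.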

26 Perception. The agent function can be split into perception and action selection. We now call see the function that allows the agent to observe, and action the function modelling the decision process.

27 [Diagram: the agent within its environment; sensor input now passes through see, and action produces the action output]

28 Perception. We have see: E -> Per and action: Per* -> Ac. action works on sequences of percepts. An agent is now a pair: Ag = <see, action>.
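The pair Ag = <see, action> can be sketched as two plain functions, with action consuming the whole percept sequence. The state representation and thresholds are invented for illustration.

```python
def see(state):
    """see: E -> Per. Observe only the temperature component of a state."""
    return state["temp"]

def action(percepts):
    """action: Per* -> Ac. Decides on the full sequence of percepts;
    this toy agent only looks at the most recent one."""
    return "heat" if percepts[-1] < 20 else "off"

# Run Ag = <see, action> over a stream of environment states.
states = [{"temp": 15}, {"temp": 19}, {"temp": 22}]
percepts = []
for e in states:
    percepts.append(see(e))
    print(action(percepts))
```

A less trivial action could use the history, e.g. heat only if several consecutive percepts were cold.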

29 Perception: an example. Beliefs: x = 'The temperature is OK', y = 'Gerhard Schröder is chancellor'. Environment: E = {e_1 = {¬x, ¬y}, e_2 = {¬x, y}, e_3 = {x, ¬y}, e_4 = {x, y}}. What does a thermostat see?

30 Perception. Equivalence of states: e_1 ~ e_2 iff see(e_1) = see(e_2). |~| = |E| for a perceptually strong agent; |~| = 1 for an agent with weak perception.
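The thermostat from the example above perceives only x, so its see collapses the four states into two equivalence classes (|~| = 2, between the two extremes on the slide). A sketch, with states encoded as truth assignments:

```python
from itertools import product

# States as (x, y) truth pairs: x = "temperature OK",
# y = "Gerhard Schröder is chancellor".
E = list(product([False, True], repeat=2))

def see(state):
    """A thermostat perceives only x; y is invisible to it."""
    x, _y = state
    return "temp_ok" if x else "too_cold"

# Group states by percept: e1 ~ e2 iff see(e1) == see(e2).
classes = {}
for e in E:
    classes.setdefault(see(e), []).append(e)

print(len(classes))  # number of distinguishable situations
```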

31 Agents with a state. The past is taken into account through an internal state of the agent: see: E -> Per; action: I -> Ac; next: I × Per -> I. Action selection is action(next(i, see(e))); the new internal state is i' = next(i, see(e)); the environmental impact is e' ∈ τ(r).
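The see / next / action split can be sketched as follows. The internal state here (a running average of temperatures) is an invented example of how the past can influence decisions; only the function signatures come from the slide.

```python
def see(e):
    """see: E -> Per. The percept is the raw temperature reading."""
    return e

def next_state(i, per):
    """next: I x Per -> I. Internal state = (count, total) of readings."""
    count, total = i
    return (count + 1, total + per)

def action(i):
    """action: I -> Ac. Decide on the internal state (the average so far),
    not on the latest percept alone."""
    count, total = i
    return "heat" if total / count < 20 else "off"

i = (0, 0.0)
for e in [18.0, 19.0, 25.0]:
    i = next_state(i, see(e))   # i' = next(i, see(e))
    print(action(i))            # selection is action(next(i, see(e)))
```

Note how the final reading of 25 does not immediately switch the heating off in a colder history with a different prefix; the state, not the percept, decides.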

32 How to tell the agent what to do. Two approaches: utility and predicates. Utility is a performance measure for states; predicates contain a specification of the states.

33 Utility. Define it purely on states: u: E -> ℝ. The fitness of an action is then judged on: the minimum of the available u-values; the average of the available u-values; ... The approach is local, and agents become myopic.
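Myopic, state-based action selection can be sketched as follows: each action leads to a set of possible states, and its fitness is e.g. the minimum (pessimistic) u-value among them. The toy states, actions, and u-values are invented for illustration.

```python
def choose_action(actions, successors, u, judge=min):
    """Pick the action whose set of reachable states scores best,
    judging each action by `judge` (min = pessimistic, could also be
    an average) over the u-values of its possible outcome states."""
    return max(actions, key=lambda a: judge(u(e) for e in successors[a]))

# Toy example: u: E -> R scores states; each action may lead to several states.
u = {"cold": 0, "ok": 10, "hot": 2}.get
successors = {"heat": {"ok", "hot"}, "idle": {"cold"}}

print(choose_action(["heat", "idle"], successors, u))
```

The myopia is visible in the code: only the immediate successor states are scored, never what becomes reachable afterwards.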

34 Utility. Define it on runs instead: u: R -> ℝ. Agents can then look forward. E.g. Tileworld (Pollack 1990).

35 Utilities may be defined probabilistically, by adding a probability to the state transformer. A problem is computability within specific time limits: in most cases the optimum cannot be found, and one can use heuristics here.

36 Predicates. Utilities are not the most natural way to specify a state. What does it mean that the temperature is OK? Humans think in objectives, and those are statements, or predicates.

37 Task environments. A pair <Env, Ψ> is called a task environment iff Env is an environment and Ψ: R -> {0,1} is a predicate over the runs R. The set of runs satisfying the predicate is R_Ψ. An agent Ag is successful iff R_Ψ(Ag, Env) = R(Ag, Env), i.e. ∀r ∈ R(Ag, Env): Ψ(r). A weaker alternative: ∃r ∈ R(Ag, Env): Ψ(r).
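The two success criteria (every run satisfies Ψ versus some run does) can be sketched directly. The example predicate and runs are invented; the runs reuse the alternating state-action representation.

```python
def successful(runs, psi):
    """Ag is successful iff every run satisfies Psi,
    i.e. R_Psi(Ag, Env) = R(Ag, Env)."""
    return all(psi(r) for r in runs)

def weakly_successful(runs, psi):
    """The weaker alternative: some run satisfies Psi."""
    return any(psi(r) for r in runs)

# Achievement-style predicate: the run must reach the state "ok".
psi = lambda r: "ok" in r
runs = [["cold", "heat", "ok"], ["cold", "idle", "cold"]]

print(successful(runs, psi), weakly_successful(runs, psi))
```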

38 Task environments. One distinguishes: achievement tasks, which aim at reaching a certain condition on the environment; and maintenance tasks, which try to avoid a certain condition on the environment.

