Intelligent Agents Franco GUIDI POLANCO Politecnico di Torino / CIM Group http://www.cim.polito.it franco.guidi@polito.it 09-APR-2003 Franco Guidi P.
Agenda Introduction Abstract Architectures for Autonomous Agents Concrete Architectures for Intelligent Agents Multi-Agent Systems Summary Franco Guidi P.
Introduction Franco Guidi P.
What agents are “One who is authorised to act for or in place of another as a : a representative, emissary, or official of a government <crown agent> <federal agent> b : one engaged in undercover activities (as espionage) : SPY <secret agent> c : a business representative (as of an athlete or entertainer) <a theatrical agent>” Franco Guidi P.
What agents are "An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors." Russell & Norvig Franco Guidi P.
What agents are "Autonomous agents are computational systems that inhabit some complex dynamic environment, sense and act autonomously in this environment, and by doing so realize a set of goals or tasks for which they are designed." Pattie Maes Franco Guidi P.
What agents are “Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions.” Barbara Hayes-Roth Franco Guidi P.
What agents are "Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user's goals or desires." IBM's Intelligent Agent Strategy white paper Franco Guidi P.
What agents are Definition that refers to “agents” (and not “intelligent agents”): “An agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives.” Wooldridge & Jennings Franco Guidi P.
What agents are Franco Guidi P.
Agents & Environments The agent takes sensory input from its environment, and produces as output actions that affect it. [Diagram: the agent receives sensor input from the environment and produces action output upon it.] Franco Guidi P.
Agents & Environments (cont.) In complex environments: An agent does not have complete control over its environment; it has only partial control. Partial control means that an agent can influence the environment with its actions. An action performed by an agent may fail to have the desired effect. Conclusion: environments are non-deterministic, and agents must be prepared for the possibility of failure. Franco Guidi P.
Agents & Environments (cont.) Effectoric capability: agent’s ability to modify its environment. Actions have pre-conditions Key problem for an agent: deciding which of its actions it should perform in order to best satisfy its design objectives. Franco Guidi P.
Examples of agents Control systems (e.g. thermostat) Software daemons (e.g. mail client) But… are they known as Intelligent Agents? Franco Guidi P.
What is “intelligence”? Franco Guidi P.
What intelligent agents are “An intelligent agent is one that is capable of flexible autonomous action in order to meet its design objectives, where by flexible I mean three things: reactivity: agents are able to perceive their environment, and respond in a timely fashion to changes that occur in it in order to satisfy their design objectives; pro-activeness: intelligent agents are able to exhibit goal-directed behaviour by taking the initiative in order to satisfy their design objectives; social ability: intelligent agents are capable of interacting with other agents (and possibly humans) in order to satisfy their design objectives.” Wooldridge & Jennings Franco Guidi P.
Agent characteristics Weak notion of agent Autonomy Proactiveness (Goal oriented) Reactivity Socially able (a.k.a. communicative) Strong notion of agent Weak notion + Mobility Veracity Benevolence Rationality An Agent has the weak agent characteristics. It may have the strong agent characteristics. (Amund Tveit) Franco Guidi P.
Objects & Agents [Diagram: an object invoked with sayHelloToThePeople() replies “Hello People!”.] Objects control their state; agents control both their state and their behaviour. “Objects do it for free; agents do it for money” Franco Guidi P.
Objects & Agents (cont.) Distinctions: Agents embody a stronger notion of autonomy than objects Agents are capable of flexible (reactive, pro-active, social) behaviour A multi-agent system is inherently multi-threaded Franco Guidi P.
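To make the distinction concrete, here is a minimal Python sketch (class names and the busy flag are illustrative, not from the original slides): an object executes a requested method unconditionally, while an agent decides whether the request fits its own state and goals.

    # Sketch: an object obeys every invocation; an agent may refuse (illustrative names).
    class GreeterObject:
        def say_hello(self) -> str:
            return "Hello People!"              # runs whenever it is invoked

    class GreeterAgent:
        def __init__(self, busy: bool):
            self.busy = busy                    # internal state the caller cannot override

        def request_hello(self):
            if self.busy:                       # the agent decides whether to comply
                return None
            return "Hello People!"

    print(GreeterObject().say_hello())              # Hello People!
    print(GreeterAgent(busy=True).request_hello())  # None: the agent declined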
Abstract Architectures for Autonomous Agents Franco Guidi P.
Formalization Agents Environments History Perception Standard agents Purely reactive agents Agents with state Franco Guidi P.
Agents & Environments Agent’s environment states are characterised by a set: S = {s1, s2, …} The effectoric capability of the agent is characterised by a set of actions: A = {a1, a2, …} Franco Guidi P.
Standard agents A standard agent decides what action to perform on the basis of its history (experiences). A standard agent can be viewed as a function action: S* → A where S* is the set of sequences of elements of S. Franco Guidi P.
Environments Environments can be modelled as a function env: S × A → P(S) where P(S) is the powerset of S. This function takes the current state of the environment s ∈ S and an action a ∈ A (performed by the agent), and maps them to a set of environment states env(s, a). Deterministic environment: all the sets in the range of env are singletons. Non-deterministic environment: otherwise. Franco Guidi P.
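A minimal Python sketch of this model, using a hypothetical thermostat-style state space; it only illustrates the shape of env: S × A → P(S) and the determinism test, not any particular agent.

    # Sketch of the abstract environment model (hypothetical states and actions).
    from itertools import product
    from typing import Set

    S = {"temperature OK", "too cold"}      # environment states
    A = {"heater on", "heater off"}         # agent actions

    def env(s: str, a: str) -> Set[str]:
        # env : S x A -> P(S): the action may fail to have the desired effect.
        if a == "heater on":
            return {"temperature OK", "too cold"}
        return {"too cold"}

    # Deterministic iff every set in the range of env is a singleton.
    deterministic = all(len(env(s, a)) == 1 for s, a in product(S, A))
    print("deterministic:", deterministic)   # False: this environment is non-deterministic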
History History represents the interaction between an agent and its environment. A history is a sequence: h: s0 -a0-> s1 -a1-> s2 -a2-> … -a(u-1)-> su where: s0 is the initial state of the environment, au is the u’th action that the agent chose to perform, and su is the u’th environment state. Franco Guidi P.
Purely reactive agents A purely reactive agent decides what to do without reference to its history (no references to the past). It can be represented by a function action: S → A Example: thermostat. Environment states: temperature OK; too cold. action(s) = heater off, if s = temperature OK; heater on, otherwise. Franco Guidi P.
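The contrast between a standard agent (action: S* → A) and a purely reactive one (action: S → A) can be sketched as below; the thermostat only looks at the current state, while the history-based variant is free to use the whole sequence (its extra rule is an invented example).

    # Purely reactive agent: decides from the current state only.
    def reactive_action(s: str) -> str:
        return "heater off" if s == "temperature OK" else "heater on"

    # Standard agent: decides from the whole history of states (here: a trivial policy
    # that also switches the heater on if the room has been cold for the last two steps).
    def standard_action(history) -> str:
        if len(history) >= 2 and history[-2:] == ["too cold", "too cold"]:
            return "heater on"
        return reactive_action(history[-1])

    print(reactive_action("too cold"))                                  # heater on
    print(standard_action(["temperature OK", "too cold", "too cold"]))  # heater on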
Perception The agent is decomposed into see and action functions. [Diagram: see observes the environment; action acts upon it.] Franco Guidi P.
Perception (cont.) Perception is the result of the function see: S → P where P is a (non-empty) set of percepts (perceptual inputs). Then, the action function becomes: action: P* → A which maps sequences of percepts to actions. Franco Guidi P.
Perception ability Perception ability ranges from non-existent (MIN: |E| = 1) to omniscient (MAX: |E| = |S|), where E is the set of different perceived states. Two different states s1 ∈ S and s2 ∈ S (with s1 ≠ s2) are indistinguishable if see(s1) = see(s2). Franco Guidi P.
Perception ability (cont.) Example: x = “The room temperature is OK” y = “There is no war at this moment” then: S = {(x, y), (x, ¬y), (¬x, y), (¬x, ¬y)} = {s1, s2, s3, s4} but for the thermostat: see(s) = p1 if s = s1 or s = s2; p2 if s = s3 or s = s4. Franco Guidi P.
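A small Python rendering of this example (the encoding of states as Boolean pairs is an assumption of the sketch): the thermostat’s see function collapses the four states into two percepts, so s1 and s2 are indistinguishable to it.

    # Sketch of see : S -> P for the thermostat example.
    from itertools import product

    # A state is a pair (x, y): x = "room temperature is OK", y = "there is no war".
    S = list(product([True, False], repeat=2))   # s1=(T,T), s2=(T,F), s3=(F,T), s4=(F,F)

    def see(s):
        x, _y = s
        return "p1" if x else "p2"               # y is invisible to the thermostat

    assert see((True, True)) == see((True, False))   # s1 and s2 are indistinguishable
    E = {see(s) for s in S}                          # set of distinct percepts
    print(len(E), len(S))                            # 2 4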
Agents with state see, next and action functions. [Diagram: see observes the environment; next updates the agent’s internal state; action selects an action from the internal state.] Franco Guidi P.
Agents with state (cont.) The same perception function: see: S → P The action-selection function is now: action: I → A where I is the set of all internal states of the agent. An additional function is introduced: next: I × P → I Franco Guidi P.
Agents with state (cont.) Behaviour: The agent starts in some internal initial state i0 Then observes its environment state s The internal state of the agent is updated with next(i0,see(s)) The action selected by the agent becomes action(next(i0,see(s))), and it is performed The agent repeats the cycle observing the environment Franco Guidi P.
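The cycle above can be written down directly; the sketch below is a schematic state-based agent (the temperature threshold and dictionary-based internal state are illustrative choices, not part of the formal model).

    # Schematic control loop of an agent with internal state.
    def see(s):                           # see : S -> P
        return "hot" if s >= 21 else "cold"

    def next_state(i, p):                 # next : I x P -> I
        return {**i, "last_percept": p}

    def action(i):                        # action : I -> A
        return "heater off" if i["last_percept"] == "hot" else "heater on"

    def run(environment_states):
        i = {"last_percept": None}        # internal initial state i0
        for s in environment_states:      # observe the environment state s
            i = next_state(i, see(s))     # update the internal state with next(i, see(s))
            yield action(i)               # select and perform action(next(i, see(s)))

    print(list(run([18, 20, 22, 19])))    # ['heater on', 'heater on', 'heater off', 'heater on']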
Concrete Architectures for Intelligent Agents Franco Guidi P.
Classes of agents Logic-based agents Reactive agents Belief-desire-intention agents Layered architectures Franco Guidi P.
Logic-based architectures “Traditional” approach to building artificial intelligent systems: Logical formulas: symbolic representation of its environment and desired behaviour. Logical deduction or theorem proving: syntactical manipulation of this representation. Example formulas: grasp(x), Kill(Marco, Caesar), Pressure(tank1, 220). Franco Guidi P.
Logic-based architectures: example A cleaning robot In(x,y): the agent is at (x,y) Dirt(x,y): there is dirt at (x,y) Facing(d): the agent is facing direction d Franco Guidi P.
Logic-based architectures: abstraction Let L be the set of sentences of classical first-order logic. Let D = P(L) be the set of L databases (the internal state of the agent is an element of D), and Δ, Δ1, Δ2, … members of D. The agent’s decision-making rules are modelled through a set of deduction rules ρ. Δ ⊢ρ φ means that the formula φ can be proved from the database Δ using only the deduction rules ρ. Franco Guidi P.
Logic-based architectures: abstraction (cont.) The perception function remains unchanged: see: S → P The next function is now: next: D × P → D The action function becomes: action: D → A Franco Guidi P.
Logic-based architectures: abstraction (cont.) Pseudo-code of the function action:

    function action(Δ : D) : A
    begin
        for each a ∈ A do
            if Δ ⊢ρ Do(a) then return a
        end-for
        for each a ∈ A do
            if Δ ⊬ρ ¬Do(a) then return a
        end-for
        return null
    end function action

Franco Guidi P.
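A toy Python version of this selection rule is sketched below; the proves() function is a deliberately naive stand-in (membership test) for real theorem proving, and the action names are invented for the cleaning-robot example.

    # Toy sketch of deduction-based action selection (naive stand-in for theorem proving).
    ACTIONS = ["clean", "move_forward", "turn"]

    def proves(database, formula):
        # Stand-in for "database |-_rho formula": provable == explicitly present.
        return formula in database

    def select_action(database):
        # First, return an action the database explicitly prescribes.
        for a in ACTIONS:
            if proves(database, f"Do({a})"):
                return a
        # Otherwise, return an action that is at least not explicitly forbidden.
        for a in ACTIONS:
            if not proves(database, f"!Do({a})"):
                return a
        return None

    print(select_action({"Do(clean)"}))                        # clean
    print(select_action({"!Do(clean)", "!Do(move_forward)"}))  # turn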
Reactive architectures Forces: Rejection of symbolic representations Rational behaviour is seen as innately linked to the environment Intelligent behaviour emerges from the interaction of various simpler (situation → action) behaviours Franco Guidi P.
Reactive architectures: example A mobile robot that avoids obstacles ActionGoTo(x,y): moves to position (x,y) ActionAvoidFront(z): turns left or right if there is an obstacle within a distance of z units. Franco Guidi P.
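One plausible way to combine behaviours of this kind, in the spirit of reactive (subsumption-style) control, is a fixed priority over situation → action rules; the function names, the threshold and the returned action strings below are all illustrative.

    # Sketch of prioritised situation -> action rules (illustrative names and values).
    def make_controller(z: float, goal):
        def avoid_front(percept):
            # Fires only when an obstacle is closer than z units.
            return "turn_left" if percept["front_distance"] < z else None

        def go_to(percept):
            return f"move_towards{goal}"

        behaviours = [avoid_front, go_to]   # earlier in the list = higher priority

        def control(percept):
            for behaviour in behaviours:
                a = behaviour(percept)
                if a is not None:
                    return a
            return "do_nothing"
        return control

    control = make_controller(z=1.0, goal=(5, 3))
    print(control({"front_distance": 0.4}))   # turn_left
    print(control({"front_distance": 3.0}))   # move_towards(5, 3)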
Belief-Desire-Intention (BDI) architectures They have their roots in understanding practical reasoning, which involves two processes: Deliberation: deciding what goals we want to achieve. Means-ends reasoning: deciding how we are going to achieve these goals. Franco Guidi P.
BDI architectures (cont.) First: try to understand what options are available. Then: choose between them, and commit to some. These chosen options become intentions, which then determine the agent’s actions. Franco Guidi P.
BDI architectures (cont.) Intentions are important in practical reasoning: Intentions drive means-end reasoning Intentions constrain future deliberation Intentions persist Intentions influence beliefs upon which future reasoning is based Franco Guidi P.
BDI architectures: reconsideration of intentions Example (taken from Cisneros et al.): Time t = 0 Desire: Kill the alien Intention: Reach point P Belief: The alien is at P Franco Guidi P.
BDI architectures: reconsideration of intentions Time t = 1 Desire: Kill the alien Intention: Reach point P Belief: The alien is at P. Wrong: the alien has moved to point Q. Franco Guidi P.
BDI architectures: reconsideration of intentions Dilemma: If intentions are not reconsidered sufficiently often, the agent can continue to pursue an unreachable or no longer valid goal (bold agents) If intentions are constantly reconsidered, the agent can fail to dedicate sufficient effort to achieving any goal (cautious agents) Some experiments: Environments with a low rate of change: bold agents do better than cautious ones. Environments with a high rate of change: the opposite. Franco Guidi P.
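The bold/cautious trade-off can be illustrated with a schematic sketch (not a real BDI interpreter): a reconsideration interval controls how often the intention is re-derived from the current belief, and acting on an outdated intention counts as wasted effort.

    # Schematic sketch of intention reconsideration (bold vs cautious agents).
    import random

    def bdi_run(steps: int, reconsider_every: int, seed: int = 0) -> int:
        rng = random.Random(seed)
        def deliberate(belief):                    # choose an intention from the belief
            return f"reach {belief}"
        belief = "P"                               # initial belief: the alien is at P
        intention = deliberate(belief)
        wasted = 0
        for t in range(steps):
            belief = rng.choice(["P", "Q"])        # the world changes; perception is perfect
            if t % reconsider_every == 0:          # cautious: small interval; bold: large
                intention = deliberate(belief)
            if intention != deliberate(belief):    # acting on an outdated intention
                wasted += 1
        return wasted

    print(bdi_run(100, reconsider_every=1))    # cautious agent: 0 wasted steps
    print(bdi_run(100, reconsider_every=20))   # bold agent: many wasted steps

The cost of deliberation itself is not modelled here, which is precisely what makes cautious agents lose out in slowly changing environments.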
Layered architectures To satisfy the requirement of integrating reactive and proactive behaviour. Two types of control flow: Horizontal layering: software layers are each directly connected to the sensory input and action output. Vertical layering: sensory input and action output are each dealt with by at most one layer. Franco Guidi P.
Layered architectures: horizontal layering Advantage: conceptual simplicity (to implement n behaviours we implement n layers) Problem: a mediator function is required to ensure the coherence of the overall behaviour [Diagram: layers 1…n each receive the perceptual input and each produce action output.] Franco Guidi P.
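A minimal sketch of horizontal layering, assuming every layer proposes an action from the same percept and a simple priority-based mediator resolves conflicts (layer contents and action names are illustrative).

    # Sketch of horizontal layering with a priority-based mediator.
    def reactive_layer(percept):
        return "dodge" if percept.get("obstacle") else None

    def planning_layer(percept):
        return "follow_plan"

    LAYERS = [reactive_layer, planning_layer]   # priority order used by the mediator

    def mediator(proposals):
        # Keep the highest-priority non-None proposal to preserve coherent behaviour.
        return next((a for a in proposals if a is not None), "idle")

    def act(percept):
        return mediator([layer(percept) for layer in LAYERS])

    print(act({"obstacle": True}))    # dodge
    print(act({"obstacle": False}))   # follow_plan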
Layered architectures: vertical layering Subdivided into: One-pass architectures: control flows sequentially through the layers, from perceptual input at the lowest layer to action output at the highest. Two-pass architectures: information flows up through the layers and control flows back down to produce the action output. Franco Guidi P.
Layered architectures: TOURINGMACHINES Proposed by Innes Ferguson. Horizontally layered: a perception subsystem feeds sensor input to three layers (reactive layer, planning layer, modelling layer), whose outputs are mediated by a control system before the action subsystem produces the action output. Franco Guidi P.
Layered architectures: INTERRAP Proposed by Jörg Müller. Vertically layered: a world interface handles sensor input and action output; above it sit a behaviour layer (with a world model), a plan layer (with planning knowledge) and a cooperation layer (with social knowledge). Franco Guidi P.
Multi-Agent Systems (MAS) Franco Guidi P.
Main idea A cooperative working environment comprising synergistic software components can cope with complex problems. Franco Guidi P.
Cooperation Three main approaches: Cooperative interaction Contract-based co-operation Negotiated cooperation Franco Guidi P.
Rationality Principle of social rationality by Hogg et al.: “Within an agent-based society, if a socially rational agent can perform an action so that the agents’ joint benefit is greater than their joint loss, then it may select that action.” EU(a) = f( IU(a), SU(a) ) where: EU(a): expected utility of action a IU(a): individual utility SU(a): social utility Franco Guidi P.
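As a toy illustration, expected utility might be taken as a weighted sum of individual and social utility; the actions, numbers and weighting below are made up purely to show the selection rule.

    # Toy sketch: select an action whose joint benefit exceeds its joint loss.
    ACTIONS = {
        # action: (individual utility IU(a), social utility SU(a))
        "share_resource": (-1.0, 4.0),
        "hoard_resource": (2.0, -5.0),
        "do_nothing":     (0.0, 0.0),
    }

    def expected_utility(a: str, w_social: float = 1.0) -> float:
        iu, su = ACTIONS[a]
        return iu + w_social * su            # EU(a) = f(IU(a), SU(a))

    socially_rational = [a for a in ACTIONS if expected_utility(a) > 0]
    print(max(socially_rational, key=expected_utility))   # share_resource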
Communication Agent Communication Languages (ACL) Different ACLs: FIPA (Foundation for Intelligent Physical Agents) ACL etc. Ontology Franco Guidi P.
MAS Tools and Techniques Some products identified by AgentLink: ADK AgentSheets AgentTool Bee-gent CABLE Cornet Way JAK CORMAS Cougaar DECAF Excalibur Agent FIPA-OS Grasshopper IDOL IMPACT JACK JADE JADE / LEAP JAFMAS /JIVE JATLiteBean JESS Kaarlboga LEE Living Markets MAML MAP /CSM Massyve Kit NARVAL RePast RESTINA SEMOA SIM_AGENT StarLogo TuCSon VOYAGER Xraptor ZEUZ Franco Guidi P.
Summary Franco Guidi P.
Summary Agents exhibit autonomy, responsiveness, proactiveness and social ability. They may also exhibit mobility, veracity, benevolence, rationality and cooperation. Frameworks for agent development see agents as intentional systems. Some invoke the semantics of possible worlds, others distinguish between explicit and implicit belief. Franco Guidi P.
Summary (cont.) Agents’ architectures may be fundamentally deliberative or reactive, or may combine both approaches in a hybrid architecture. Rationality in MAS involves considering both the social and the individual utility of an action. Effective communication between agents requires a common language and a shared ontology. Franco Guidi P.
References Cisneros J., Huerta D. and Mandujano S. “Arquitectura BDI - Sistemas multiagente”. Franklin S. et al. “Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents”, in Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages. Springer-Verlag, 1996. Maes P. “Software Agents”. Available at http://www.media.mit.edu Mangina E. “Review of software products for multi-agent systems”. Available at http://www.agentlink.com Wooldridge M. “An Introduction to MultiAgent Systems”. John Wiley & Sons, Chichester, February 2002. Franco Guidi P.