1 Intelligent Agents Russell and Norvig: AI: A Modern Approach
Mike Wooldridge: An Introduction to MAS

2 Outline
Agents and environments
Rationality
PEAS (Performance measure, Environment, Actuators, Sensors)
Environment types
Agent types

3 Agents An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. The agent function maps from percept histories to actions: f: P* → A
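A minimal sketch of this mapping in Python (not from the slides; the percept and action types and the run helper are illustrative assumptions):

    from typing import Callable, List, Sequence

    Percept = str
    Action = str
    # The agent function f: P* -> A, mapping the whole percept history to an action.
    AgentFunction = Callable[[Sequence[Percept]], Action]

    def run(agent: AgentFunction, percepts: Sequence[Percept]) -> List[Action]:
        """Feed the agent its growing percept history, one element at a time."""
        return [agent(percepts[: t + 1]) for t in range(len(percepts))]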

4 A Semantic Framework

5 Vacuum-cleaner world Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck
Function table (table look-up agent):
Percept        Action
[A, Clean]     Right
[A, Dirty]     Suck
[B, Clean]     Left
[B, Dirty]     Suck
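A small Python sketch of this table look-up agent (a hedged illustration assuming the standard two-square world; names are made up):

    # The percept (location, contents) indexes directly into the table above.
    TABLE = {
        ("A", "Clean"): "Right",
        ("A", "Dirty"): "Suck",
        ("B", "Clean"): "Left",
        ("B", "Dirty"): "Suck",
    }

    def table_lookup_agent(percept):
        location, contents = percept
        return TABLE[(location, contents)]

    # Example: table_lookup_agent(("A", "Dirty")) returns "Suck".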

6 Agency
Autonomy
Reactivity
Proactivity
Social ability

7 Reactivity If a program’s environment is guaranteed to be fixed, the program need never worry about its own success or failure — it just executes blindly. Example of a fixed environment: a compiler. The real world is not like that: things change, and information is incomplete. Many (most?) interesting environments are dynamic. A reactive system is one that maintains an ongoing interaction with its environment and responds to changes that occur in it.

8 Proactiveness Reacting to an environment is easy (e.g., stimulus → response rules). But we generally want agents to do things for us; hence goal-directed behavior. Pro-activeness = generating and attempting to achieve goals; not driven solely by events; taking the initiative.

9 Social Ability The real world is a multi-agent environment: we cannot go around attempting to achieve goals without taking others into account Some goals can only be achieved with the cooperation of others Social ability in agents is the ability to interact with other agents (and possibly humans) via some kind of agent-communication language.

10 Rational agents An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful. Performance measure: An objective criterion for success of an agent's behavior.

11 Rational agents Rational Agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has. Rationality at any moment thus depends on:
The performance measure that defines the criterion of success.
The agent’s prior knowledge of the environment.
The actions that the agent can perform.
The agent’s percept sequence to date.
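Written as a formula (a standard formalization, not stated on the slide), the action chosen at each step is

    a^{*} = \arg\max_{a \in A} \ \mathbb{E}\big[\,\text{performance} \mid \text{percept sequence to date},\ \text{prior knowledge},\ a\,\big]

i.e., the action with the highest expected value of the performance measure, given the evidence so far.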

12 Rational agents Rationality is distinct from omniscience (all-knowing with infinite knowledge) Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, planning) An agent is autonomous if its behavior is determined by its own experience (with ability to learn and adapt).

13 Specify the setting for intelligent agent design: PEAS Description
Performance measure
Environment
Actuators
Sensors
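As an illustration (not on the slide), a PEAS description for the vacuum-cleaner world of slide 5, written as a small Python dictionary; the performance-measure wording is an assumption:

    # Hypothetical PEAS description for the two-square vacuum world.
    peas_vacuum = {
        "performance_measure": "dirt cleaned, time taken, energy consumed",
        "environment": "two squares A and B, each either Clean or Dirty",
        "actuators": ["Left", "Right", "Suck"],
        "sensors": ["location", "contents (Clean or Dirty)"],
    }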

15 Environment Types Fully Observable vs. Partially Observable
A fully observable environment is one in which the agent can obtain complete, accurate, up-to-date information about the environment’s state. Most moderately complex environments (including, for example, the everyday physical world and the Internet) are only partially observable. The more accessible an environment is, the simpler it is to build agents to operate in it: in a fully observable environment the agent need not maintain internal state to keep track of the world.

16 Environment Types Deterministic vs. non-deterministic
A deterministic environment is one in which any action has a single guaranteed effect — there is no uncertainty about the state that will result from performing an action. In deterministic environments, agents do not worry about uncertainty. Non-deterministic environments present greater problems for the agent designer.

17 Environment Types Episodic vs. sequential
In an episodic environment, the performance of the agent depends on a number of discrete episodes, with no link between its performance in different episodes. In an episodic environment, agents do not need to think ahead.

18 Environment Types Static vs. dynamic
A static environment is one that can be assumed to remain unchanged except through the agent’s own actions, so the agent need not keep re-observing the environment while deciding. A dynamic environment is one that has other processes operating on it, and which hence changes in ways beyond the agent’s control.

19 Environment Types Discrete vs. continuous
An environment is discrete if there is a fixed, finite number of actions and percepts in it. Continuous environments have a certain level of mismatch with (discrete) computer systems.

20 Agent types (in increasing order of generality)
Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents

21 Simple reflex agents
[Diagram: ENV → Percepts → Inference Engine with Reflex rules → Actions]
Use condition-action rules to map the agent’s percepts directly to actions: decisions are made from the current input only.
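A hedged Python sketch of such an agent; the rules below are illustrative, not from the slides:

    # Condition-action rules: each rule pairs a condition on the current percept with an action.
    RULES = [
        (lambda p: p["contents"] == "Dirty", "Suck"),
        (lambda p: p["location"] == "A", "Right"),
        (lambda p: p["location"] == "B", "Left"),
    ]

    def simple_reflex_agent(percept):
        """Decide using only the current percept -- no memory of past percepts."""
        for condition, action in RULES:
            if condition(percept):
                return action
        return "NoOp"

    # Example: simple_reflex_agent({"location": "A", "contents": "Dirty"}) returns "Suck".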

22 Model-based reflex agents
[Diagram: ENV → Percepts → Update World Model → World Model → Decision Rules → Actions]
Maintain an internal model (state) of the external environment.
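A sketch under the same illustrative assumptions: the internal world model is updated from each percept before the decision rules fire.

    class ModelBasedReflexAgent:
        def __init__(self):
            # Internal model of the environment, kept up to date from percepts.
            self.world_model = {"A": "Unknown", "B": "Unknown"}

        def decide(self, percept):
            location, contents = percept
            self.world_model[location] = contents          # Update World Model
            if contents == "Dirty":                        # Decision rules consult the model
                return "Suck"
            other = "B" if location == "A" else "A"
            if self.world_model[other] != "Clean":
                return "Right" if location == "A" else "Left"
            return "NoOp"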

23 Goal-based agents
[Diagram: ENV → Percepts → Update World Model → World Model; Goals/Tasks are triggered and prioritized → Select goals/tasks → Problem-solving methods → Select methods/actions → Actions]
Select actions by reasoning about explicit goals rather than reacting directly to percepts.
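A compressed Python sketch of this control loop; the world model, goal test, and action model below are toy placeholders:

    def goal_based_step(percept, model, goal_test, actions, predict):
        """One cycle: update the model, then pick an action predicted to reach the goal."""
        model.update(percept)                               # Update World Model
        for action in actions:                              # Select methods/actions
            if goal_test(predict(model, action)):           # would this reach a goal state?
                return action
        return "NoOp"

    # Toy usage: the goal is to reach the state where x equals 3.
    model = {"x": 0}
    step = goal_based_step({"x": 2}, model,
                           goal_test=lambda m: m["x"] == 3,
                           actions=["inc", "dec"],
                           predict=lambda m, a: {"x": m["x"] + (1 if a == "inc" else -1)})
    # step == "inc"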

24 Utility-based agents
[Diagram: the same architecture as the goal-based agent, with a Utility component guiding the selection of goals/tasks and methods/actions]
Use a utility function to rank alternative goals and actions when more than one is available.
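The same toy loop with a utility function replacing the simple goal test; again only a hedged sketch:

    def utility_based_step(percept, model, utility, actions, predict):
        """Pick the action whose predicted outcome has the highest utility."""
        model.update(percept)
        return max(actions, key=lambda a: utility(predict(model, a)))

    # Toy usage: prefer states whose x is close to 3.
    model = {"x": 0}
    best = utility_based_step({"x": 2}, model,
                              utility=lambda m: -abs(m["x"] - 3),
                              actions=["inc", "dec"],
                              predict=lambda m, a: {"x": m["x"] + (1 if a == "inc" else -1)})
    # best == "inc"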

25 Task of Software Agents
Interacting with human users: personal assistants ( processing), information/product search, sales, chat room host, computer-generated characters in games
Interacting with other agents: facilitators, brokers

26 Intelligent Behavior of Agents
Learning about users
Learning about information sources
Learning about categorizing information
Learning about similarity
Constraint satisfaction algorithms
Reasoning using domain-specific knowledge
Planning

27 Technologies of Software Agents
Machine learning
Information retrieval
Agent communication
Agent coordination
Agent negotiation
Natural language understanding
Distributed objects

28 Multi-Agent Systems
What are MAS
Objections to MAS
Agents and objects
Agents and expert systems
Agent communication languages
Application areas

29 What are Multi-Agent Systems?

30 MultiAgent Systems: A Definition
A multiagent system is one that consists of a number of agents, which have different goals and interact with one another. To interact successfully, they will require the ability to cooperate, compete, and negotiate with each other, much as people do.

31 MultiAgent Systems: A Definition
Two key problems:
How do we build agents capable of independent, autonomous action, so that they can successfully carry out tasks we delegate to them? (agent design)
How do we build agents that are capable of interacting (cooperating, coordinating, negotiating) with other agents in order to successfully carry out those delegated tasks, especially when the other agents cannot be assumed to share the same interests/goals? (society design)

32 Multi-Agent Systems The field addresses questions such as:
How can cooperation emerge in societies of self-interested agents?
What kinds of communication languages can agents use?
How can self-interested agents recognize conflict, and how can they (nevertheless) reach agreement?
How can autonomous agents coordinate their activities so as to cooperatively achieve goals?
These questions are all addressed in part by other disciplines (notably economics and the social sciences), but here the agents are computational, information-processing entities.

33 Objections to MAS Isn’t it all just Distributed/Concurrent Systems? There is much to learn from this community, but: agents are assumed to be autonomous, capable of making independent decisions, so they need mechanisms to synchronize and coordinate their activities at run time.

34 Objections to MAS Isn’t it all just AI?
We don’t need to solve all the problems of artificial intelligence (i.e., all the components of intelligence). Classical AI ignored social aspects of agency. These are important parts of intelligent activity in real-world settings.

35 Objections to MAS Isn’t it all just Economics/Game Theory? These fields also have a lot to teach us in multiagent systems (like rationality), but: Insofar as game theory provides descriptive concepts, it doesn’t always tell us how to compute solutions; we’re concerned with computational, resource-bounded agents.

36 Objections to MAS Isn’t it all just Social Science?
We can draw insights from the study of human societies, but again agents are computational, resource-bounded entities.

37 Agents and Objects Are agents just objects by another name? Object:
encapsulates some state
communicates via message passing
has methods, corresponding to operations that may be performed on this state

38 Agents and Objects Main differences:
agents are autonomous: they decide for themselves whether or not to perform an action on request from another agent
agents are smart: capable of flexible (reactive, pro-active, social) behavior, and the standard object model has nothing to say about such types of behavior
agents are active: a multi-agent system is inherently multi-threaded, in that each agent is assumed to have at least one thread of active control

39 Objects do it for free… agents do it because they want to… agents do it for money.

40 Agents and Expert Systems
Expert systems are typically disembodied ‘expertise’ about some (abstract) domain of discourse (e.g., blood diseases). Example: MYCIN knows about blood diseases in humans. It has a wealth of knowledge about blood diseases, in the form of rules. A doctor can obtain expert advice about blood diseases by giving MYCIN facts, answering questions, and posing queries.

41 Agents and Expert Systems
Main differences:
agents are distributed
agents are situated in an environment: MYCIN is not aware of the world — the only information it obtains is by asking the user questions
agents act: MYCIN does not operate on patients

42 Agent Communication
Speech acts
KQML

43 Speech Acts Searle (1969) identified several types of speech act:
representatives: such as informing, e.g., ‘It is raining’
directives: attempts to get the hearer to do something, e.g., ‘please make the tea’
commissives: which commit the speaker to doing something, e.g., ‘I promise to…’
expressives: whereby a speaker expresses a mental state, e.g., ‘thank you!’
declarations: such as declaring war or christening

44 KQML Knowledge Query and Manipulation Language
A language for the “message structure” of agent communication. It describes the “speech act” of the message using a set of performatives (communicative verbs). Each performative has required and optional arguments. The content language of the message is not part of KQML, but can be named in the message (e.g., the :language parameter in the example below).

45 An Example
(stream-all
  :content "(PRICE ?my-portfolio ?price)"
  :receiver stock-server
  :language PROLOG
  :ontology NYSE)
The stream-all performative asks for a set of answers to be returned as a stream of replies.
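A hedged Python sketch of how such a message could be composed as an s-expression string, following the structure shown above (no real KQML library is assumed):

    def kqml_message(performative, **parameters):
        """Build a KQML-style message: (performative :key value :key value ...)."""
        fields = " ".join(f":{key} {value}" for key, value in parameters.items())
        return f"({performative} {fields})"

    msg = kqml_message("stream-all",
                       content='"(PRICE ?my-portfolio ?price)"',
                       receiver="stock-server",
                       language="PROLOG",
                       ontology="NYSE")
    # msg == '(stream-all :content "(PRICE ?my-portfolio ?price)" :receiver stock-server :language PROLOG :ontology NYSE)'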

46 KQML Performatives A performative describes the speech act of the message and specifies the communication protocol to be used. Performatives are classified into 7 categories.

47 Categories of Performatives
Basic query: evaluate, ask-if, ask-about, ask-one, ask-all
Multiple-response query: stream-about, stream-all
Response: reply, sorry
Generic information: tell, achieve (ask other agents to create a goal), cancel, untell (undo tell), unachieve (forget the previous goal)
Generator: standby, ready, next, rest, discard
Capability definition: advertise, recommend, subscribe, monitor, import, export
Networking: register, unregister, forward, broadcast, route

48 Application Areas Agents are usefully applied in domains where autonomous action is required. Main application areas:
Distributed systems
Networks
Human-computer interfaces

49 Domain 1: Distributed Systems
In this area, the idea of an agent is seen as a natural metaphor, and a development of the idea of concurrent object programming. Example domains:
air traffic control (Sydney airport)
business process management
power systems management
distributed sensing
factory process control

50 Domain 2: Networks There is currently a lot of interest in mobile agents that can move themselves around a network (e.g., the Internet), operating on a user’s behalf. This kind of functionality is achieved in the TELESCRIPT language developed by General Magic for remote programming. Applications include:
hand-held PDAs with limited bandwidth
information gathering

51 Domain 3: HCI One area of much current interest is the use of agents in interfaces. The idea is to move away from the direct-manipulation paradigm that has dominated for so long. Agents sit ‘over’ applications, watching, learning, and eventually doing things without being told — taking the initiative. Pioneering work at the MIT Media Lab (Pattie Maes):
news readers
web browsers
mail readers

