Intelligent Agents Russell and Norvig: AI: A Modern Approach

Intelligent Agents Russell and Norvig: AI: A Modern Approach; Mike Wooldridge: An Introduction to MAS

Outline Agents and environments Rationality PEAS (Performance measure, Environment, Actuators, Sensors) Environment types Agent types

Agents An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. The agent function maps from percept histories to actions: f: P* → A
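
The agent function f: P* → A can be sketched directly in code. This is a minimal illustration, not from the slides; the class and names are assumptions made for the example.

```python
# Minimal sketch of the agent abstraction: the agent function maps the
# full percept history (an element of P*) to an action (an element of A).
class Agent:
    def __init__(self, program):
        self.percepts = []      # percept history so far
        self.program = program  # the agent function f: P* -> A

    def step(self, percept):
        self.percepts.append(percept)
        return self.program(self.percepts)

# An agent whose chosen action is simply its latest percept:
echo = Agent(lambda history: history[-1])
```

Any concrete agent differs only in the program passed in; the percept-history bookkeeping stays the same.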

A Semantic Framework

Vacuum-cleaner world
Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck
Function table (table look-up agent):
  Percept      Action
  [A, Clean]   Right
  [A, Dirty]   Suck
  [B, Clean]   Left
  [B, Dirty]   Suck
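
The table above translates directly into a look-up agent; this sketch just encodes the table as a dictionary (the representation is an illustrative choice, not prescribed by the slides).

```python
# Table-driven (look-up) agent for the two-square vacuum world.
# Percepts are (location, status) pairs; the table *is* the agent function.
TABLE = {
    ("A", "Clean"): "Right",
    ("A", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
    ("B", "Dirty"): "Suck",
}

def table_lookup_agent(percept):
    # One entry per possible percept; no state, no inference.
    return TABLE[percept]
```

The approach only works because this world has four possible percepts; table size explodes with percept histories, which motivates the agent programs later in the deck.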

Agency Autonomy Reactivity Proactivity Social ability

Reactivity If a program’s environment is guaranteed to be fixed, the program need never worry about its own success or failure: it just executes blindly. Example of a fixed environment: a compiler. The real world is not like that: things change and information is incomplete; many (most?) interesting environments are dynamic. A reactive system is one that maintains an ongoing interaction with its environment and responds to changes that occur in it.

Proactiveness Reacting to an environment is easy (e.g., stimulus → response rules), but we generally want agents to do things for us; hence goal-directed behavior. Pro-activeness = generating and attempting to achieve goals; not driven solely by events; taking the initiative.

Social Ability The real world is a multi-agent environment: we cannot go around attempting to achieve goals without taking others into account Some goals can only be achieved with the cooperation of others Social ability in agents is the ability to interact with other agents (and possibly humans) via some kind of agent-communication language.

Rational agents An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful. Performance measure: An objective criterion for success of an agent's behavior.

Rational agents Rational Agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has. Rationality thus depends on: the performance measure that defines the criterion of success; the agent’s prior knowledge of the environment; the actions that the agent can perform; and the agent’s percept sequence to date.

Rational agents Rationality is distinct from omniscience (all-knowing with infinite knowledge) Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, planning) An agent is autonomous if its behavior is determined by its own experience (with ability to learn and adapt).

Specify the setting for intelligent agent design: PEAS Description Performance measure Environment Actuators Sensors

Environment Types Fully Observable vs. Partially Observable A fully observable environment is one in which the agent can obtain complete, accurate, up-to-date information about the environment’s state. Most moderately complex environments (including, for example, the everyday physical world and the Internet) are only partially observable. The more accessible an environment is, the simpler it is to build agents to operate in it: in a fully observable environment, the agent need not maintain internal state to keep track of the world.

Environment Types Deterministic vs. non-deterministic A deterministic environment is one in which any action has a single guaranteed effect — there is no uncertainty about the state that will result from performing an action. In deterministic environments, agents do not worry about uncertainty. Non-deterministic environments present greater problems for the agent designer.

Environment Types Episodic vs. sequential In an episodic environment, the agent’s performance depends on a number of discrete episodes, with no link between its performance in different episodes. In an episodic environment, the agent need not think ahead beyond the current episode.

Environment Types Static vs. dynamic A static environment is one that can be assumed to remain unchanged except by the performance of actions by the agent; the agent need not keep observing the environment while it deliberates. A dynamic environment is one that has other processes operating on it, and which hence changes in ways beyond the agent’s control.

Environment Types Discrete vs. continuous An environment is discrete if there are a fixed, finite number of actions and percepts in it. Continuous environments have a certain level of mismatch with computer systems

Agent types Simple reflex agents Model-based reflex agents Goal-based agents Utility-based agents (listed in increasing order of generality)

Simple reflex agents [Diagram: percepts from the environment feed an inference engine of reflex rules, which outputs actions.] Simple reflex agents use condition-action rules to map the agent’s current percept directly to an action; decisions are made from the current input alone.
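
A condition-action rule set can be sketched as an ordered list of predicate/action pairs; the rules and the percept shape below are illustrative assumptions for the vacuum world, not from the slides.

```python
# Simple reflex agent: condition-action rules map the current percept
# directly to an action, with no internal state. Rules are tried in order.
RULES = [
    (lambda p: p["status"] == "Dirty", "Suck"),
    (lambda p: p["location"] == "A",   "Right"),
    (lambda p: p["location"] == "B",   "Left"),
]

def simple_reflex_agent(percept):
    for condition, action in RULES:
        if condition(percept):      # first matching rule fires
            return action
    return "NoOp"                   # no rule matched
```

Because the agent sees only the current percept, it cannot, for example, remember that the other square is already clean.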

Model-based reflex agents [Diagram: percepts update an internal world model; decision rules map the model to actions in the environment.] Model-based reflex agents maintain an internal model (state) of the external environment.
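
The difference from the simple reflex agent is the internal model updated from percepts. This is a minimal sketch for the vacuum world; the stopping rule and model representation are illustrative assumptions.

```python
# Model-based reflex agent: keeps a world model, updates it from each
# percept, and decides on the model rather than on the raw percept alone.
class ModelBasedVacuumAgent:
    def __init__(self):
        self.model = {"A": None, "B": None}  # believed status of each square

    def step(self, percept):
        location, status = percept
        self.model[location] = status        # update the world model
        if status == "Dirty":
            return "Suck"
        # Thanks to the model, the agent can stop once both squares
        # are known to be clean -- something a simple reflex agent cannot do.
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"
        return "Right" if location == "A" else "Left"
```
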

Goal-based agents [Diagram: percepts update the world model; goals and tasks are triggered and prioritized; problem-solving methods then select goals/tasks and the methods/actions to pursue them.]
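
A goal-based agent uses its world model to predict the outcome of each action and picks one that moves it toward the goal. The one-step look-ahead, the one-dimensional world, and all names below are illustrative assumptions, not from the slides.

```python
# Goal-based selection sketch: predict each action's outcome with the
# world model `result`, then choose the action nearest the goal.
def goal_based_agent(state, goal, actions, result):
    """One-step look-ahead: pick the action whose predicted outcome
    is nearest the goal. `result(state, action)` is the agent's model."""
    return min(actions, key=lambda a: abs(result(state, a) - goal))

# A toy 1-D world: the state is a position on a line, the goal is position 5.
chosen = goal_based_agent(
    state=2,
    goal=5,
    actions=["Left", "Right"],
    result=lambda s, a: s + (1 if a == "Right" else -1),
)
```

A full goal-based agent would search over action sequences (planning) rather than a single step, but the structure (model + goal + selection) is the same.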

Utility-based agents [Diagram: as in the goal-based agent, but a utility measure guides the selection of goals/tasks and of methods/actions.]
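
Where several actions reach a goal, a utility function over predicted outcomes breaks the tie by preferring the better one. Everything below (the route names, the minutes, the utility) is an illustrative assumption.

```python
# Utility-based selection sketch: pick the action whose predicted
# outcome has the highest utility, not merely one that reaches the goal.
def utility_based_agent(state, actions, result, utility):
    return max(actions, key=lambda a: utility(result(state, a)))

# Both routes reach the goal; the utility prefers the quicker one.
OUTCOMES = {
    "toll_road": {"at_goal": True, "minutes": 20},
    "free_road": {"at_goal": True, "minutes": 50},
}
chosen = utility_based_agent(
    state=None,
    actions=sorted(OUTCOMES),
    result=lambda s, a: OUTCOMES[a],
    utility=lambda o: -o["minutes"] if o["at_goal"] else float("-inf"),
)
```

A plain goal-based agent would regard both routes as equally acceptable; the utility measure is what distinguishes them.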

Tasks of Software Agents Interacting with human users: personal assistants (email processing), information/product search, sales, chat room host, computer-generated characters in games. Interacting with other agents: facilitators, brokers.

Intelligent Behavior of Agents Learning about users Learning about information sources Learning about categorizing information Learning about similarity Constraint satisfaction algorithms Reasoning using domain-specific knowledge Planning

Technologies of Software Agents Machine learning Information retrieval Agent communication Agent coordination Agent negotiation Natural language understanding Distributed objects

Multi-Agent Systems What are MAS Objections to MAS Agents and objects Agents and expert systems Agent communication languages Application areas

What are Multi-Agent Systems?

MultiAgent Systems: A Definition A multiagent system is one that consists of a number of agents, which have different goals and interact with one another. To successfully interact, they will require the ability to cooperate, compete, and negotiate with each other, much as people do.

MultiAgent Systems: A Definition Two key problems: How do we build agents capable of independent, autonomous action, so that they can successfully carry out tasks we delegate to them? (agent design) How do we build agents that are capable of interacting (cooperating, coordinating, negotiating) with other agents in order to successfully carry out those delegated tasks, especially when the other agents cannot be assumed to share the same interests/goals? (society design)

Multi-Agent Systems The field addresses questions such as: How can cooperation emerge in societies of self-interested agents? What kinds of communication languages can agents use? How can self-interested agents recognize conflict, and how can they (nevertheless) reach agreement? How can autonomous agents coordinate their activities so as to cooperatively achieve goals? These questions are all addressed in part by other disciplines (notably economics and the social sciences), but here agents are computational, information-processing entities.

Objections to MAS Isn’t it all just Distributed/Concurrent Systems? There is much to learn from this community, but: agents are assumed to be autonomous, capable of making independent decisions, so they need mechanisms to synchronize and coordinate their activities at run time.

Objections to MAS Isn’t it all just AI? We don’t need to solve all the problems of artificial intelligence (i.e., all the components of intelligence). Classical AI ignored social aspects of agency. These are important parts of intelligent activity in real-world settings.

Objections to MAS Isn’t it all just Economics/Game Theory? These fields also have a lot to teach us in multiagent systems (like rationality), but: Insofar as game theory provides descriptive concepts, it doesn’t always tell us how to compute solutions; we’re concerned with computational, resource-bounded agents.

Objections to MAS Isn’t it all just Social Science? We can draw insights from the study of human societies, but again agents are computational, resource-bounded entities.

Agents and Objects Are agents just objects by another name? Object: encapsulates some state communicates via message passing has methods, corresponding to operations that may be performed on this state

Agents and Objects Main differences: agents are autonomous: they decide for themselves whether or not to perform an action on request from another agent agents are smart: capable of flexible (reactive, pro-active, social) behavior, and the standard object model has nothing to say about such types of behavior agents are active: a multi-agent system is inherently multi-threaded, in that each agent is assumed to have at least one thread of active control

Objects do it for free; agents do it because they want to; agents do it for money.

Agents and Expert Systems Expert systems are typically disembodied ‘expertise’ about some (abstract) domain of discourse (e.g., blood diseases). Example: MYCIN knows about blood diseases in humans. It has a wealth of knowledge about blood diseases, in the form of rules. A doctor can obtain expert advice about blood diseases by giving MYCIN facts, answering questions, and posing queries.

Agents and Expert Systems Main differences: agents are situated in an environment (MYCIN is not aware of the world; the only information it obtains is by asking the user questions); agents act (MYCIN does not operate on patients); agents are typically distributed.

Agent Communication speech acts KQML

Speech Acts Searle (1969) identified several different types of speech act: representatives: such as informing, e.g., ‘It is raining’ directives: attempts to get the hearer to do something, e.g., ‘please make the tea’ commissives: which commit the speaker to doing something, e.g., ‘I promise to… ’ expressives: whereby a speaker expresses a mental state, e.g., ‘thank you!’ declarations: such as declaring war or christening

KQML Knowledge Query and Manipulation Language A language for the “message structure” of agent communication. It describes the “speech act” of the message using a set of performatives (communicative verbs); each performative has required and optional arguments. The content language of the message is not part of KQML, but can be named in the message’s :language argument.

An Example
(stream-all
  :content "(PRICE ?my-portfolio ?price)"
  :receiver stock-server
  :language PROLOG
  :ontology NYSE)
The stream-all performative asks for a set of answers to be returned as a stream of replies.

KQML Performatives A performative describes the speech act of the message and specifies the communication protocol to be used. Performatives are classified into seven categories.

Categories of Performatives Basic query: evaluate, ask-if, ask-about, ask-one, ask-all Multiple-response query: stream-about, stream-all Response: reply, sorry Generic information: tell, achieve (ask other agents to create a goal), cancel, untell (undo tell), unachieve (forget the previous goal) Generator: standby, ready, next, rest, discard Capability-definition: advertise, recommend, subscribe, monitor, import, export Networking: register, unregister, forward, broadcast, route.
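
Since a KQML message is just a performative followed by :keyword value pairs in an s-expression, building one can be sketched in a few lines. The helper function and the sample query below are illustrative assumptions, not part of KQML itself.

```python
# Illustrative helper that renders a KQML message in its s-expression
# wire form: (performative :key1 value1 :key2 value2 ...).
def kqml(performative, **params):
    parts = [performative]
    for key, value in params.items():
        # Python identifiers use '_', KQML keywords use '-'.
        parts.append(f":{key.replace('_', '-')} {value}")
    return "(" + " ".join(parts) + ")"

# A hypothetical one-answer price query, mirroring the stream-all example:
msg = kqml(
    "ask-one",
    content='"(PRICE IBM ?price)"',
    receiver="stock-server",
    language="PROLOG",
    ontology="NYSE",
)
```

Note how the content language (PROLOG here) travels in the :language argument, separate from KQML's own message structure.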

Application Areas Agents are usefully applied in domains where autonomous action is required. Main application areas: Distributed Systems Networks Human-Computer Interfaces

Domain 1: Distributed Systems In this area, the idea of an agent is seen as a natural metaphor, and a development of the idea of concurrent object programming. Example domains: air traffic control (Sydney airport) business process management power systems management distributed sensing factory process control

Domain 2: Networks There is currently a lot of interest in mobile agents, that can move themselves around a network (e.g., the Internet) operating on a user’s behalf This kind of functionality is achieved in the TELESCRIPT language developed by General Magic for remote programming Applications include: hand-held PDAs with limited bandwidth information gathering

Domain 3: HCI One area of much current interest is the use of agents in interfaces. The idea is to move away from the direct manipulation paradigm that has dominated for so long. Agents sit ‘over’ applications, watching, learning, and eventually doing things without being told, taking the initiative. Pioneering work at MIT Media Lab (Pattie Maes): news readers, web browsers, mail readers.