Intelligent Agents Marco Loog.

Some Unnecessary Remark The book is rather formal and strict; we probably won’t be [in a lot of cases]

From Chapter 1 What is AI? Two main considerations Thinking vs. behavior Modeling humans vs. an ideal standard The book is concerned with acting & ideals : acting rationally [Good for games? Possible objections?]

More From Chapter 1 Acting rationally : rational agent Agent : something that acts Computer agent : operating under autonomous control, perceiving environment, adapting to change, capable of taking on another’s goal Rational agent : acts so as to achieve the best [expected] outcome

And More... Book deals with Intelligence that is concerned with rational action General principles of rational agents and their construction Ideally, intelligent agent takes best possible action However, the issue of limited rationality is also discussed

Some More Unnecessary Remarks We will regularly talk about AI without necessarily referring to games and / or exemplifying everything based on games Academia <> practice This may ask a lot from the student’s creativity, autonomy, imagination, etc.

Intelligent Agents We have determined what an intelligent agent is In Chapter 2, agents, environments, and the interactions / relations between them are examined We start out with a kind of taxonomy of possible agents and environments

Outline Agents and environments Rationality PEAS Environment types Agent types

Agents and Environments Agent : anything that can be viewed as perceiving its environment through sensors acting upon environment through actuators Human agent : eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators Robotic agent : cameras and infrared range finders for sensors; various motors for actuators

Agents and Environments Agent : anything that can be viewed as perceiving its environment through sensors acting upon environment through actuators Percepts : agent’s perceptual inputs from the environment Percept sequence : complete history of percepts Generally, choice of action can depend on entire percept sequence

Agents and Environments If possible to specify agent’s action for every percept sequence, then agent is fully defined Agent’s complete behavior is described by this specification, which is called the agent function

Agents and Environments If possible to specify agent’s action for every percept sequence, then agent is fully defined Agent’s complete behavior is described by this specification, which is called the agent function Mathematically, the agent function f maps percept histories P to actions A; f:P -> A Note that P’s cardinality is often infinite and therefore tabulating it is impossible

Agents and Environments The agent function is actually implemented via an agent program, which runs on the agent architecture The agent program runs on the physical architecture to produce f Agent := agent program + agent architecture
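The split between agent function and agent program can be sketched in a few lines of Python. The Agent class and the one-line example program below are illustrative assumptions, not from the book; note how the percept history P accumulates so that, in general, the chosen action may depend on the entire percept sequence.

```python
# Sketch: an agent wraps an agent program; the percept history is the
# argument of the agent function f : P -> A.

class Agent:
    def __init__(self, program):
        self.program = program          # the agent program implementing f
        self.percept_sequence = []      # complete history of percepts

    def act(self, percept):
        self.percept_sequence.append(percept)
        # f applied to the full history (here the program may ignore most of it)
        return self.program(self.percept_sequence)

# Example program: act on the most recent percept only
agent = Agent(lambda percepts: "Suck" if percepts[-1][1] == "Dirty" else "Right")
print(agent.act(("A", "Dirty")))   # -> Suck
```

Tabulating f itself is usually impossible (infinitely many percept sequences); the program is a finite object that produces f.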

Schematically...

Vacuum-Cleaner World Percepts : location and contents, e.g., [A,Dirty] Actions: Left, Right, Suck, NoOp [do nothing]

A Vacuum-Cleaner Agent

? What is the right way to fill out the previous percept sequence / action table? What makes an agent good? What makes it bad? How about intelligent?

Rational Agents Agent should strive to “do the right thing” using What it can perceive The actions it can perform Right action : one that will cause the agent to be most successful Performance measure : objective criterion for measuring agent’s success

“General Rule” It is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave True? And for games? [FSM?] One way or the other : this might be very difficult

For Our Vacuum-Cleaner Agent Possible performance measures Amount of dirt cleaned up? Amount of time taken? Amount of electricity consumed? Amount of noise generated? Etc.?

At the Risk of Somehow Repeating Myself Too Often... What is rational for an agent depends on the following Performance measure that defines success Prior knowledge of the environment Possible actions Percept sequence to date

Definition of Rational Agent For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in, a priori knowledge the agent has

Some Necessary Remarks Rationality is distinct from omniscience Agents can perform actions to modify future percepts to obtain useful information [gathering / exploration] An agent is autonomous if its behavior is determined by its own experience [with the ability to learn and adapt] Initial agent configuration could reflect prior knowledge, but may be modified and augmented based on its experience Let game agent adapt to new environment...

More : Agent Function Successful agents split up the task of determining the agent function into three periods Design : certain perception / action relations are directly available; f is partially pre-defined ‘In action’, while functioning : need for additional calculations before deciding on an action; the right way to apply f Learning from experience : mapping between percept sequences and actions is reconsidered; f is altered

And More : Autonomy The more agent relies on prior knowledge of its designers, and the less on its percepts, the less autonomous it is “A rational agent should be autonomous—it should learn what it can to compensate for partial or incorrect prior knowledge” Do we agree?

Autonomy in Practice[?] In practice, an agent may start with some prior knowledge + ability to learn “Hence, the incorporation of learning allows one to design a single rational agent that will succeed in a vast variety of environments”

Task Environments Before actually building rational agents, we should consider their environment Task environment : the ‘problems’ to which rational agents are the ‘solutions’

Task Environment & PEAS PEAS : Performance measure, Environment, Actuators, Sensors Must first specify these settings for intelligent agent design

PEAS Consider the task of designing an automated taxi driver Performance measure : safe, fast, legal, comfortable trip, maximize profits Environment : roads, other traffic, pedestrians, customers Actuators : steering wheel, accelerator, brake, signal, horn Sensors : cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard

Mo’ PEAS Agent : medical diagnosis system Performance measure : healthy patient, minimize costs, lawsuits Environment : patient, hospital, staff Actuators: screen display [questions, tests, diagnoses, treatments, referrals] Sensors : keyboard [entry of symptoms, findings, patient’s answers]

Mo’ PEAS Agent : interactive English tutor Performance measure : maximize student’s score on test Environment : set of students Actuators : screen display [exercises, suggestions, corrections] Sensors : keyboard
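A PEAS specification such as the taxi example above can be recorded as a small data structure; the class and the exact field contents below are an illustrative sketch, with the entries taken from the taxi slide.

```python
# Sketch: a PEAS specification as a plain data structure.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list   # performance measure
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer"],
)
print(taxi.performance[0])  # -> safe
```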

Environment Types Fully observable [vs. partially observable] Agent's sensors give access to complete state of the environment at each point in time Fully observable leads to cheating... Deterministic [vs. stochastic] Next state of the environment is completely determined by the current state and the action executed by the agent. Strategic : deterministic except actions of other agents Episodic [vs. sequential] Agent's experience is divided into atomic ‘episodes’ [perceiving and performing a single action]; choice of action in each episode depends only on the episode itself

Deterministic [vs. Stochastic] In reality, situations are often so complex that they are better treated as stochastic [even if they are deterministic] [Luckily, we all like probability theory, statistics, etc., i.e., the appropriate tools for describing these environments]

Environment Types Static [vs. dynamic] Environment is unchanged while an agent is deliberating Semidynamic : environment itself does not change over time but agent’s performance score does Anyone playing chess? Clock -> semidynamic? Discrete [vs. continuous] Limited number of distinct, clearly defined percepts and actions Not clear-cut in practice Single agent [vs. multi-agent] Agent operating by itself in an environment

Single [vs. Multi-Agent] Some subtle issues Which entities must be viewed as agents? The ones maximizing a performance measure that depends on other agents Competitive [vs. cooperative] environment Pursuing the goal of maximizing one’s performance measure implies minimizing some other agent’s measure, e.g. as in chess

Environment Types [E.g.]

                   Chess with   Chess without   Taxi
                   a clock      a clock         driving
Fully observable   Yes          Yes             No
Deterministic      Strategic    Strategic      No
Episodic           No           No              No
Static             Semi         Yes             No
Discrete           Yes          Yes             No
Single agent       No           No              No

Environment type largely determines agent design Real world is [of course] partially observable, stochastic, sequential, dynamic, continuous, multi-agent Not always cut and dried / definition dependent Chess again... : castling? En passant capture? Draws by repetition?

How About the Inside? Earlier : mainly only gave a description of the agent’s behavior, i.e., the action that is performed following any percept sequence, i.e., the agent function Agent = architecture + program Job of AI : design the agent program that implements the agent function The architecture is, more or less, assumed to be given; in games [and other applications] design may take place simultaneously

Agent Functions and Programs An agent is completely specified by the agent function mapping percept sequences to actions One agent function [or a small equivalence class] is rational [= optimal] Aim : find a way to implement the rational agent function concisely

Table-Lookup Agent Lookup table that relates every percept sequence to the appropriate action Drawbacks Huge table [really huge in many cases] Takes a long time to build the table [virtually infinite in many cases] No autonomy Even with learning, need a long time to learn the table entries However, it does what it should do...
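The table-lookup idea can be sketched as follows. The table here is a hypothetical three-entry fragment for the vacuum world; the key is the whole percept sequence so far, which is exactly why realistic tables are astronomically large.

```python
# Sketch: a table-driven agent indexed by the full percept sequence.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

percepts = []   # accumulated percept sequence

def table_driven_agent(percept):
    percepts.append(percept)
    # returns None if the sequence is missing from the (incomplete) table
    return table.get(tuple(percepts))

print(table_driven_agent(("A", "Clean")))  # -> Right
print(table_driven_agent(("B", "Dirty")))  # -> Suck
```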

Key Challenge for AI Find out how to write programs that, to the extent possible, produce rational behavior from a small amount of code [‘rules’] rather than from a large number of table entries

Four Basic Agent Types Four basic types of agent programs embodying principles of almost every intelligent system In order of increasing generality Simple reflex Model-based reflex Goal-based Utility-based

Simple Reflex Agent selects action on the basis of current percept [if... then...]

Program for a Vacuum-Cleaner
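The reflex vacuum-agent program from the book can be sketched in Python as follows; the action depends only on the current percept [location, status], not on the history.

```python
# Sketch of the book's reflex vacuum-agent program.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("A", "Clean")))  # -> Right
```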

From Simple Reflex to... Simple reflex agents work only if the environment is fully observable, because the decision must be based on the current percept alone

Model-Based Reflex An effective way to handle partial observability is to keep track of the part of the environment the agent cannot see now Some internal state should be maintained, which is based on the percept sequence, and which reflects some of the unobserved aspects of the current state

Internal State Updating requires the encoding of two types of knowledge in the program Knowledge about evolution of the world independent of the agent How do own actions influence the environment? Knowledge about world = model of world Model-based agent
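A minimal model-based sketch for the vacuum world, assuming the agent cannot sense the square it is not on. The internal state, the "world evolves" rule (a square stays clean), and the "Suck cleans the square" effect rule below are illustrative assumptions, not the book's pseudocode.

```python
# Sketch: internal state remembers which squares are known to be clean.
state = {"A": "Unknown", "B": "Unknown"}  # internal model of the world

def model_based_vacuum_agent(percept):
    location, status = percept
    state[location] = status              # update model from the percept
    if status == "Dirty":
        state[location] = "Clean"         # model of own action: Suck cleans
        return "Suck"
    if all(v == "Clean" for v in state.values()):
        return "NoOp"                     # model says everything is clean
    return "Right" if location == "A" else "Left"

print(model_based_vacuum_agent(("A", "Dirty")))  # -> Suck
print(model_based_vacuum_agent(("A", "Clean")))  # -> Right
print(model_based_vacuum_agent(("B", "Clean")))  # -> NoOp
```

Unlike the simple reflex agent, this one can stop moving once its model says both squares are clean, even though it never perceives both at once.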

Model-Based Reflex

From Model-Based Reflex to... Knowledge about current state of the world not always enough In addition, it may be necessary to define a goal / situation that is desirable

Goal-Based

From Goal-Based to... For most environments, goals alone are not really enough to generate high-quality behavior There are many ways in which one can achieve a goal... Instead, measure utility [‘happiness’] A utility function maps states onto real numbers describing the degree of happiness Can perform tradeoffs if there are several ways to a goal, or if there are multiple goals [using likelihood]
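The tradeoff "using likelihood" amounts to maximizing expected utility. A sketch with made-up numbers: both actions below can reach the goal, but the faster route carries a small crash risk; the utilities and outcome probabilities are purely illustrative.

```python
# Sketch: pick the action with maximum expected utility.
utility = {"goal_fast": 10.0, "goal_slow": 6.0, "crash": -100.0}

outcomes = {  # action -> list of (resulting state, probability)
    "highway":  [("goal_fast", 0.9), ("crash", 0.1)],
    "backroad": [("goal_slow", 1.0)],
}

def expected_utility(action):
    return sum(p * utility[s] for s, p in outcomes[action])

best = max(outcomes, key=expected_utility)
print(best)  # -> backroad
```

Here EU(highway) = 0.9 * 10 + 0.1 * (-100) = -1, while EU(backroad) = 6, so the small crash probability outweighs the time saved.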

Utility-Based

Two Remarks on Utility Rather theoretical result : any rational agent must behave as if it possesses a utility function whose expected value it tries to maximize Utility-based agent programs are used to let an agent handle the inherent uncertainty in partially observable worlds

But How... ...do these agent programs come into existence? The particular method suggested by Turing [the one from that machine] is to build learning agents, which should subsequently be taught One of the advantages of these agents is that they can [learn to] deal with unknown environments

Learning Agent Four conceptual components Learning element : responsible for making improvements Performance element : responsible for external actions [previously considered equal to entire agent] Critic : provides feedback on how agent is doing and determines how performance element should be modified Problem generator : responsible for suggesting actions leading to new and informative experience
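The four components can be wired together in a toy sketch; everything below, including the trivial "reinforce positive feedback" learning rule, is an illustrative assumption rather than a real learning algorithm.

```python
# Sketch: the four conceptual components of a learning agent.
class LearningAgent:
    def __init__(self):
        self.rules = {}                          # performance element's knowledge

    def performance_element(self, percept):
        # chooses external actions; falls back to exploring unknown percepts
        return self.rules.get(percept, "explore")

    def critic(self, percept, action, reward):
        # provides feedback on how well the agent is doing
        return reward

    def learning_element(self, percept, action, feedback):
        # modifies the performance element based on the critic's feedback
        if feedback > 0:
            self.rules[percept] = action

    def problem_generator(self):
        # suggests actions leading to new, informative experience
        return "try_something_new"

agent = LearningAgent()
agent.learning_element("dirty", "suck", feedback=1)
print(agent.performance_element("dirty"))  # -> suck
```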

Learning Agent

Next Week More...