Intelligent Agents Marco Loog
Some Unnecessary Remark
The book is rather formal and strict; we probably won't be [in a lot of cases]
From Chapter 1: What is AI?
Two main considerations
Thinking <> behavior
Modeling humans <> an ideal standard
The book is concerned with acting & ideals: acting rationally [Good for games? Possible objections?]
More From Chapter 1
Acting rationally: the rational agent
Agent: something that acts
Computer agent: operates under autonomous control, perceives its environment, adapts to change, and is capable of taking on another's goals
Rational agent: acts so as to achieve the best [expected] outcome
And More...
The book deals with
Intelligence that is concerned with rational action
General principles of rational agents and their construction
Ideally, an intelligent agent takes the best possible action; however, the issue of limited rationality is also discussed
Some More Unnecessary Remarks
We will regularly talk about AI without necessarily referring to games and/or exemplifying everything with games
Academia <> practice
This may ask a lot of the student's creativity, autonomy, imagination, etc.
Intelligent Agents
We have determined what an intelligent agent is
In Chapter 2, agents, environments, and the interactions/relations between them are examined
We start out with a kind of taxonomy of possible agents and environments
Outline
Agents and environments
Rationality
PEAS
Environment types
Agent types
Agents and Environments
Agent: anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators
Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators
Robotic agent: cameras and infrared range finders for sensors; various motors for actuators
Agents and Environments
Percepts: the agent's perceptual inputs from the environment
Percept sequence: the complete history of percepts
Generally, the choice of action can depend on the entire percept sequence
Agents and Environments
If it is possible to specify the agent's action for every percept sequence, then the agent is fully defined
The agent's complete behavior is described by this specification, which is called the agent function
Mathematically, the agent function f maps percept histories P to actions A: f : P -> A
Note that P is typically infinite, and therefore tabulating f explicitly is impossible
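To make this concrete, here is a minimal Python sketch [not from the slides; the percepts and actions are made up] of an agent function as a mapping from percept sequences to actions:

# Minimal sketch: an agent function is any mapping from percept
# sequences to actions. Percepts and actions here are illustrative.
def agent_function(percept_sequence):
    # In general the choice may depend on the entire sequence;
    # here it happens to depend only on the most recent percept.
    if percept_sequence and percept_sequence[-1] == 'obstacle':
        return 'turn'
    return 'forward'

print(agent_function(['clear', 'clear', 'obstacle']))   # -> 'turn'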
Agents and Environments
The agent function is implemented by an agent program, which runs on the agent architecture
The agent program runs on the physical architecture to produce f
Agent := agent program + agent architecture
Schematically...
[Figure: the agent receives percepts from the environment through its sensors and acts on the environment through its actuators]
Vacuum-Cleaner World
Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp [do nothing]
A Vacuum-Cleaner Agent
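The table on this slide is an image in the original; a partial tabulation of the vacuum agent's function, along the lines of the textbook's, reads:

Percept sequence                      Action
[A, Clean]                            Right
[A, Dirty]                            Suck
[B, Clean]                            Left
[B, Dirty]                            Suck
[A, Clean], [A, Clean]                Right
[A, Clean], [A, Dirty]                Suck
...                                   ...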
?
What is the right way to fill out the previous percept sequence / action table?
What makes an agent good? What makes it bad? How about intelligent?
Rational Agents
An agent should strive to "do the right thing", using
What it can perceive
The actions it can perform
Right action: the one that will cause the agent to be most successful
Performance measure: an objective criterion for measuring the agent's success
"General Rule"
It is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave
True? And for games? [FSM?]
One way or the other: this might be very difficult
For Our Vacuum-Cleaner Agent
Possible performance measures
Amount of dirt cleaned up?
Amount of time taken?
Amount of electricity consumed?
Amount of noise generated?
Etc.?
At the Risk of Somehow Repeating Myself Too Often...
What is rational for an agent depends on the following
The performance measure that defines success
The agent's prior knowledge of the environment
The actions the agent can perform
The percept sequence to date
Definition of Rational Agent
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in, a priori knowledge the agent has
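In symbols [my paraphrase, not notation from the slides]: writing p_{1:t} for the percept sequence so far, K for the built-in knowledge, and U for the performance measure, a rational agent selects

    a^* = \arg\max_{a \in A} \mathbb{E}[\, U \mid p_{1:t}, K, a \,]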
Some Necessary Remarks
Rationality is distinct from omniscience
Agents can perform actions to modify future percepts so as to obtain useful information [information gathering / exploration]
An agent is autonomous if its behavior is determined by its own experience [with the ability to learn and adapt]
The initial agent configuration could reflect prior knowledge, but may be modified and augmented based on experience
Let a game agent adapt to a new environment...
More: Agent Function
Successful agents split up the task of determining the agent function into three periods
Design: certain perception/action relations are directly available; f is partially pre-defined
'In action', while functioning: additional calculations are needed before deciding on an action; the right way to apply f
Learning from experience: the mapping between percept sequences and actions is reconsidered; f is altered
And More: Autonomy
The more an agent relies on the prior knowledge of its designer, and the less on its own percepts, the less autonomous it is
"A rational agent should be autonomous—it should learn what it can to compensate for partial or incorrect prior knowledge"
Do we agree?
Autonomy in Practice[?]
In practice, an agent may start with some prior knowledge + the ability to learn
"Hence, the incorporation of learning allows one to design a single rational agent that will succeed in a vast variety of environments"
Task Environments
Before actually building rational agents, we should consider their environment
Task environment: the 'problems' to which rational agents are the 'solutions'
Task Environment & PEAS
PEAS: Performance measure, Environment, Actuators, Sensors
We must first specify these settings for intelligent agent design
PEAS
Consider the task of designing an automated taxi driver
Performance measure: safe, fast, legal, comfortable trip, maximize profits
Environment: roads, other traffic, pedestrians, customers
Actuators: steering wheel, accelerator, brake, signal, horn
Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
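As a code sketch [hypothetical, not from the slides], a PEAS description is just a small data structure; the field values below simply restate this slide:

from dataclasses import dataclass

# Hypothetical sketch: a PEAS description as a plain data structure.
@dataclass
class PEAS:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance_measure=['safe', 'fast', 'legal', 'comfortable trip',
                         'maximize profits'],
    environment=['roads', 'other traffic', 'pedestrians', 'customers'],
    actuators=['steering wheel', 'accelerator', 'brake', 'signal', 'horn'],
    sensors=['cameras', 'sonar', 'speedometer', 'GPS', 'odometer',
             'engine sensors', 'keyboard'],
)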
Mo' PEAS
Agent: medical diagnosis system
Performance measure: healthy patient, minimize costs and lawsuits
Environment: patient, hospital, staff
Actuators: screen display [questions, tests, diagnoses, treatments, referrals]
Sensors: keyboard [entry of symptoms, findings, patient's answers]
Mo' PEAS
Agent: interactive English tutor
Performance measure: maximize student's score on test
Environment: set of students
Actuators: screen display [exercises, suggestions, corrections]
Sensors: keyboard
Environment Types
Fully observable [vs. partially observable]
The agent's sensors give it access to the complete state of the environment at each point in time
Fully observable leads to cheating...
Deterministic [vs. stochastic]
The next state of the environment is completely determined by the current state and the action executed by the agent
Strategic: deterministic except for the actions of other agents
Episodic [vs. sequential]
The agent's experience is divided into atomic 'episodes' [perceiving and performing a single action]; the choice of action in each episode depends only on the episode itself
Deterministic [vs. Stochastic]
In reality, situations are often so complex that they may be better treated as stochastic [even if they are deterministic]
[Luckily, we all like probability theory, statistics, etc., i.e., the appropriate tools for describing these environments]
Environment Types
Static [vs. dynamic]
The environment is unchanged while the agent is deliberating
Semidynamic: the environment itself does not change over time, but the agent's performance score does
Anyone playing chess? Clock -> semidynamic?
Discrete [vs. continuous]
A limited number of distinct, clearly defined percepts and actions
Not clear-cut in practice
Single agent [vs. multi-agent]
An agent operating by itself in an environment
Single [vs. Multi-Agent]
A subtle issue
Which entities must be viewed as agents? The ones maximizing a performance measure that depends on other agents
Competitive [vs. cooperative] environments
Pursuing the goal of maximizing one's own performance measure implies minimizing some other agent's measure, e.g., as in chess
Environment Types [E.g.]

                   Chess with a clock   Chess without a clock   Taxi driving
Fully observable   Yes                  Yes                     No
Deterministic      Strategic            Strategic               No
Episodic           No                   No                      No
Static             Semi                 Yes                     No
Discrete           Yes                  Yes                     No
Single agent       No                   No                      No

The environment type largely determines the agent design
The real world is [of course] partially observable, stochastic, sequential, dynamic, continuous, multi-agent
Not always cut and dried / definition dependent
Chess again... : castling? En passant capture? Draws by repetition?
How About the Inside?
Earlier, we mainly gave a description of the agent's behavior, i.e., the action that is performed following any percept sequence, i.e., the agent function
Agent = architecture + program
Job of AI: design the agent program that implements the agent function
The architecture is, more or less, assumed to be given; in games [and other applications] the design may take place simultaneously
Agent Functions and Programs
An agent is completely specified by the agent function mapping percept sequences to actions
One agent function [or a small equivalence class] is rational [= optimal]
Aim: find a way to implement the rational agent function concisely
Table-Lookup Agent
A lookup table that relates every percept sequence to the appropriate action
Drawbacks
Huge table [really huge in many cases]
It takes a long time to build the table [virtually infinite in many cases]
No autonomy
Even with learning, it would take a long time to learn the table entries
However, it does what it should do...
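A minimal Python sketch of such a table-lookup agent program [the table entries are illustrative and, of course, nowhere near complete]:

# Minimal sketch: keep the whole percept history and look the
# sequence up in a (huge) table. Entries here are illustrative.
percepts = []

table = {
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'),): 'Right',
    (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
}

def table_driven_agent(percept):
    percepts.append(percept)
    return table.get(tuple(percepts), 'NoOp')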
Key Challenge for AI Find out how to write programs that, to the extent possible, produce rational behavior from a small amount of code [‘rules’] rather than from a large number of table entries
Four Basic Agent Types
Four basic types of agent programs embody the principles of almost every intelligent system
In order of increasing generality:
Simple reflex
Model-based reflex
Goal-based
Utility-based
Simple Reflex
The agent selects an action on the basis of the current percept alone [if... then...]
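A minimal sketch of the general shape of such a program [the rules are made up]:

# Minimal sketch of a simple reflex agent: condition-action rules
# applied to the current percept only. The rules are illustrative.
rules = {
    'obstacle ahead': 'turn',
    'clear ahead': 'forward',
}

def simple_reflex_agent(percept):
    # No memory: the decision uses only the current percept.
    return rules.get(percept, 'NoOp')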
Program for a Vacuum-Cleaner
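The program on this slide is an image in the original; presumably it is the textbook's reflex vacuum agent, which in Python reads:

def reflex_vacuum_agent(percept):
    location, status = percept        # e.g. ('A', 'Dirty')
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:                             # location == 'B'
        return 'Left'

print(reflex_vacuum_agent(('A', 'Dirty')))   # -> 'Suck'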
From Simple Reflex to...
Simple reflex agents work only if the environment is fully observable, because the decision must be based on the current percept alone
Model-Based Reflex
An effective way to handle partial observability is to keep track of the part of the environment the agent cannot see now
Some internal state should be maintained, which is based on the percept sequence and reflects some of the unobserved aspects of the current state
Internal State
Updating the internal state requires encoding two types of knowledge in the program
Knowledge about how the world evolves independently of the agent
Knowledge about how the agent's own actions influence the environment
Knowledge about the world = model of the world
Hence: model-based agent
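A minimal sketch of a model-based reflex agent for the vacuum world [the state representation and rules are illustrative]:

# Minimal sketch of a model-based reflex agent: maintain an internal
# state, updated from the latest percept (and, in general, the last
# action), and let the rules fire on that state rather than on the
# raw percept. All representation choices here are illustrative.
class ModelBasedReflexAgent:
    def __init__(self):
        self.state = {'visited': set()}   # internal estimate of the world
        self.last_action = None

    def update_state(self, percept):
        location, status = percept
        self.state['visited'].add(location)   # remember where we have been
        self.state['here'] = (location, status)

    def __call__(self, percept):
        self.update_state(percept)
        location, status = self.state['here']
        if status == 'Dirty':
            action = 'Suck'
        elif 'B' not in self.state['visited']:
            action = 'Right'                  # the unseen square might be dirty
        else:
            action = 'NoOp'
        self.last_action = action
        return action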
Model-Based Reflex
From Model-Based Reflex to...
Knowledge about the current state of the world is not always enough
In addition, it may be necessary to define a goal: a situation that is desirable
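A minimal sketch of the goal-based idea [illustrative; real goal-based agents use search and planning]:

# Minimal sketch of a goal-based agent: use a model to predict the
# outcome of each action and pick one that reaches the goal.
def goal_based_agent(state, actions, predict, is_goal):
    for action in actions:
        if is_goal(predict(state, action)):
            return action
    return actions[0]   # fallback; real agents search more than one step

# Tiny usage example on a number line: reach position 3 from 2.
actions = ['left', 'right']
predict = lambda s, a: s + (1 if a == 'right' else -1)
is_goal = lambda s: s == 3
print(goal_based_agent(2, actions, predict, is_goal))   # -> 'right'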
Goal-Based
From Goal-Based to...
For most environments, goals alone are not really enough to generate high-quality behavior
There are many ways in which one can achieve a goal...
Instead, measure utility ['happiness']
A utility function maps states onto real numbers describing the degree of happiness
It allows trade-offs when there are several ways to a goal, or when there are multiple goals [weighing them by likelihood]
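A minimal sketch of the utility-based choice [illustrative; the expectation over uncertain outcomes is the essential point]:

# Minimal sketch of a utility-based agent: choose the action with the
# highest expected utility. 'outcomes' maps an action to a list of
# (probability, resulting state) pairs; all values are illustrative.
def utility_based_agent(actions, outcomes, utility):
    def expected_utility(action):
        return sum(p * utility(s) for p, s in outcomes(action))
    return max(actions, key=expected_utility)

# Tiny usage example: a fair gamble vs. a sure small gain.
outcomes = lambda a: [(0.5, 10), (0.5, -10)] if a == 'risky' else [(1.0, 1)]
utility = lambda s: s
print(utility_based_agent(['risky', 'safe'], outcomes, utility))  # -> 'safe'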
Utility-Based
Two Remarks on Utility
A rather theoretical result: any rational agent must behave as if it possesses a utility function whose expected value it tries to maximize
Utility-based agent programs are used to let an agent handle the inherent uncertainty of partially observable worlds
But How...
...do these agent programs come into existence?
The particular method suggested by Turing [the one from that machine] is to build learning agents, which should subsequently be taught
One of the advantages of these agents is that they can [learn to] deal with unknown environments
Learning Agent
Four conceptual components [see the sketch below]
Learning element: responsible for making improvements
Performance element: responsible for selecting external actions [previously considered to be the entire agent]
Critic: provides feedback on how the agent is doing and determines how the performance element should be modified
Problem generator: responsible for suggesting actions that lead to new and informative experiences
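A skeletal sketch of how the four components fit together [all names and the exact wiring are illustrative]:

# Skeletal sketch of a learning agent: the four components and the
# flow of information between them. Everything here is illustrative.
class LearningAgent:
    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance = performance_element   # selects external actions
        self.learner = learning_element          # improves the performance element
        self.critic = critic                     # feedback vs. a fixed standard
        self.generator = problem_generator       # suggests exploratory actions

    def __call__(self, percept):
        feedback = self.critic(percept)              # how are we doing?
        self.learner(self.performance, feedback)     # make improvements
        exploratory = self.generator(percept)        # informative experiment?
        return exploratory or self.performance(percept)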
Learning Agent
Next Week More...