1
How R&N define AI: thinking vs. acting, humanly vs. rationally
            Humanly                           Rationally
Thinking    Systems that think like humans    Systems that think rationally
Acting      Systems that act like humans      Systems that act rationally
Rational agents: systems that act rationally.
2
Acting Rationally “Doing the right thing” … “that which is expected to maximize goal achievement, given the available information.” Unlike the previous approach (thinking rationally), acting rationally doesn't necessarily require thinking; a reflex such as blinking fits in here. In AI we define things that act rationally as “agents.”
3
Agents “An agent is simply something that acts.”
An agent is an entity that is capable of perceiving its environment (through sensors) and responding appropriately to it (through actuators).
4
Agents If the agent is intelligent, it should be able to weigh alternatives. “A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.”
5
Agents An agent should be able to derive new information from data by applying sound logical rules. It should possess extensive knowledge in the domain where it is expected to solve problems.
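For instance, deriving new facts by applying a sound rule such as modus ponens can be sketched in a few lines of Python; the facts and rules below are made-up examples, not from the slides.

```python
# Known facts and implication rules (premises -> conclusion), made up for illustration.
facts = {"dirt_detected", "at_square_A"}
rules = [({"dirt_detected"}, "should_clean"),
         ({"should_clean", "at_square_A"}, "clean_square_A")]

# Forward chaining with modus ponens: keep applying any rule whose premises
# are all known facts until nothing new can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now also contains "should_clean" and "clean_square_A"
```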
6
Agents We will consider truly intelligent, rational agents to be entities that display: perception, persistence, adaptability, and autonomous control.
7
Agents and Environments
Agents include humans, robots, softbots, thermostats, etc. The agent function maps from percept histories to actions: f : P* → A. The agent program runs on the physical architecture to produce f.
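As a rough sketch of this split in Python (the `percept()`/`execute()` environment interface below is a hypothetical assumption, not from the slides): the agent function f : P* → A is the abstract mapping, while the agent program is the code the architecture runs to compute it one percept at a time.

```python
from typing import Callable, List

Percept = str
Action = str

# The agent function f : P* -> A, viewed abstractly: percept history in, action out.
AgentFunction = Callable[[List[Percept]], Action]

def run(agent_program: AgentFunction, environment, steps: int = 10) -> None:
    """Drive the agent loop: the architecture feeds percepts to the agent
    program and passes the chosen actions to the actuators."""
    history: List[Percept] = []
    for _ in range(steps):
        history.append(environment.percept())  # sensors
        action = agent_program(history)        # the agent program computes f
        environment.execute(action)            # actuators
```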
8
Agents and Environments
Vacuum-Cleaner World (Figure 2.2)
9
Agents and Environments
Vacuum-Cleaner World (Figure 2.3)
11
Agents and Environments
Vacuum-Cleaner World
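The partial tabulation that Figure 2.3 refers to can be written directly as a lookup table; a sketch along those lines, using the textbook's two-square vacuum world with locations A and B:

```python
# Partial tabulation of the vacuum-world agent function (after Figure 2.3):
# keys are percept sequences seen so far, values are the actions the function assigns.
AGENT_TABLE = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
    # ... and so on for longer percept sequences
}

def table_driven_agent(percept_history):
    """Look up the whole percept history in the table."""
    return AGENT_TABLE.get(tuple(percept_history))
```

Tabulating the whole function quickly becomes infeasible for anything larger than this toy world, which is why practical agent programs compute the function with compact rules instead.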
12
Rationality A rational agent does the right thing. What is the right thing? One possibility: the action that will maximize success. But what is success? Achieving the agent's goals. We use a performance measure to evaluate the agent's success. So what would be a good performance measure for the vacuum agent?
13
Rationality A fixed performance measure evaluates the environment sequence: one point per square cleaned up in time T? One point per clean square per time step, minus one per move? Penalize for more than k dirty squares? A rational agent chooses whichever action maximizes the expected value of the performance measure given the percept sequence to date.
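As an illustration, the "one point per clean square per time step, minus one per move" measure could be scored over a recorded environment sequence roughly like this; the (clean_square_count, action) data layout is an assumption made for the sketch.

```python
def performance(environment_sequence):
    """Score an environment sequence under 'one point per clean square per
    time step, minus one per move'. Each entry is (clean_square_count, action)."""
    score = 0
    for clean_squares, action in environment_sequence:
        score += clean_squares
        if action in ("Left", "Right"):  # movement costs one point
            score -= 1
    return score

# Example: performance([(1, "Suck"), (2, "Right"), (2, "NoOp")]) == 4
```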
14
Rationality Rational agent definition: “For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.”
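Read as a decision rule, this definition amounts to choosing the action with the highest expected performance under the agent's beliefs; a minimal sketch, assuming hypothetical `outcome_model` and `performance` callables:

```python
def rational_choice(actions, outcome_model, performance):
    """Pick the action with the highest expected performance.
    outcome_model(action) yields (probability, outcome) pairs representing the
    agent's beliefs given its percept sequence and built-in knowledge."""
    def expected_value(action):
        return sum(p * performance(outcome) for p, outcome in outcome_model(action))
    return max(actions, key=expected_value)
```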
15
Rationality Rationality is not omniscience, clairvoyance, or guaranteed success.
Rationality implies exploration, learning, and autonomy.
16
PEAS To design a rational agent, we must specify the task environment (the “problems” to which rational agents are the “solutions”): Performance measure, Environment, Actuators, Sensors. Example: the task of designing an automated taxi.
17
PEAS for the automated taxi:
Performance measure? Safety, destination, profits, legality, comfort, …
Environment? US streets/freeways, traffic, pedestrians, weather, …
Actuators? Steering, accelerator, brake, horn, speaker/display, …
Sensors? Video, accelerometers, gauges, engine sensors, keyboard, GPS, …
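A PEAS description is just four lists, so the taxi example can be written down as a small record; a sketch (the dataclass itself is not from the slides, only the values are):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    performance_measure: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

automated_taxi = PEAS(
    performance_measure=["safety", "reach destination", "profits", "legality", "comfort"],
    environment=["US streets/freeways", "traffic", "pedestrians", "weather"],
    actuators=["steering", "accelerator", "brake", "horn", "speaker/display"],
    sensors=["video", "accelerometers", "gauges", "engine sensors", "keyboard", "GPS"],
)
```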
18
PEAS - Internet news gathering agent
Scans Internet news sources to pick interesting items for its customers. Performance measure? Environment? Actuators? Sensors?
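Reusing the PEAS record sketched above for the taxi, one plausible answer for the news-gathering agent might look like the following; the specific entries are illustrative assumptions, not taken from the slides.

```python
news_agent = PEAS(
    performance_measure=["relevance of items to each customer", "timeliness", "breadth of coverage"],
    environment=["Internet news sites and feeds", "customers and their feedback"],
    actuators=["display or send selected items", "follow links", "issue queries"],
    sensors=["downloaded pages and feed entries", "customer clicks and ratings"],
)
```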
20
Environment Types We often describe the environment based on six attributes: fully/partially observable, deterministic/stochastic, episodic/sequential, static/dynamic, discrete/continuous, single-agent/multiagent.
21
Environment Types Categorization of task environments:
Fully/partially observable: the extent to which the agent's sensors give it access to the complete state of the environment. Deterministic/stochastic: the extent to which the next state of the environment is determined by the current state and the agent's current action.
22
Environment Types Categorization of task environments:
Episodic/sequential: the extent to which the agent's experience is divided into atomic episodes. Static/dynamic: the extent to which the environment can change while the agent is deliberating.
23
Environment Types Categorization of task environments:
Discrete/continuous: the extent to which the state of the environment, time, and the agent's percepts and actions are expressed as a set of discrete values. Single-agent/multiagent: the extent to which the agent operates alone rather than alongside other agents whose behavior affects its performance.
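The six attributes fit naturally into a small record; as a sketch, here is taxi driving classified the way the textbook usually does (the hardest case on every attribute):

```python
from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    fully_observable: bool
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool
    single_agent: bool

taxi_driving = TaskEnvironment(
    fully_observable=False,  # partially observable
    deterministic=False,     # stochastic
    episodic=False,          # sequential
    static=False,            # dynamic
    discrete=False,          # continuous
    single_agent=False,      # multiagent
)
```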
24
This is as far as we got
25–30
Environment Types
31
Environment Types The environment type largely determines the agent design. The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.
32
RoboCup “By the year 2050, develop a team of fully autonomous humanoid robots that can win against the human world soccer champion team.” Develop a PEAS description of the task environment for a RoboCup participant. Include a thorough classification of the environment using R&N's six properties of task environments.
33
Agent Types Agent = architecture + program. The basic designs:
Simple reflex agent; reflex agent with state; goal-based agent; utility-based agent; learning agent (arguably not a fifth kind of agent but a different model of any of the previous agents).
34
Agent Types Simple reflex agent
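A simple reflex agent selects an action from condition-action rules applied to the current percept only; a minimal sketch for the two-square vacuum world (square names and actions as in the textbook's example):

```python
def reflex_vacuum_agent(percept):
    """Condition-action rules applied to the current percept (location, status);
    no memory of past percepts."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"
```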
35
Agent Types Reflex agent with state
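A reflex agent with state additionally keeps an internal model that is updated from each percept before the rules fire; a sketch for the vacuum world, where the bookkeeping details (such as stopping once both squares are known clean) are illustrative assumptions:

```python
class ModelBasedVacuumAgent:
    """Reflex agent with internal state: remembers the status of squares it
    has already visited and stops once both are known to be clean."""

    def __init__(self):
        self.model = {"A": None, "B": None}  # last known status of each square

    def program(self, percept):
        location, status = percept
        self.model[location] = status        # update internal state from the percept
        if self.model["A"] == self.model["B"] == "Clean":
            return "NoOp"                    # everything known clean: do nothing
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"
```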
36
Agent Types Goal-based agent
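A goal-based agent chooses actions by considering what states they lead to and whether those states satisfy its goal; a minimal sketch that searches a transition model for a plan, assuming hashable states and a deterministic `result(state, action)` function:

```python
from collections import deque

def goal_based_agent(state, actions, result, is_goal):
    """Breadth-first search for an action sequence whose resulting state
    satisfies the goal; return the first action of that sequence."""
    frontier = deque([(state, [])])
    explored = {state}
    while frontier:
        current, plan = frontier.popleft()
        if is_goal(current):
            return plan[0] if plan else "NoOp"
        for action in actions:
            nxt = result(current, action)
            if nxt not in explored:
                explored.add(nxt)
                frontier.append((nxt, plan + [action]))
    return "NoOp"
```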
37
Agent Types Utility-based agent
38
Agent Types Learning agent
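A learning agent couples a performance element with a critic, a learning element, and a problem generator; a toy sketch in which the learned component is an action-value table updated by a running average of rewards (this specific scheme is an illustrative assumption, not from the slides):

```python
import random
from collections import defaultdict

class LearningAgent:
    """Toy learning agent: the performance element picks the action with the
    best learned value; the learning element updates values from the critic's
    reward signal; occasional random actions play the role of the problem
    generator (exploration)."""

    def __init__(self, actions, explore=0.1):
        self.actions = actions
        self.explore = explore
        self.value = defaultdict(float)   # learned estimate per (state, action)
        self.count = defaultdict(int)

    def choose(self, state):
        if random.random() < self.explore:                    # problem generator
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value[(state, a)])

    def learn(self, state, action, reward):                   # learning element
        key = (state, action)
        self.count[key] += 1
        # running average of the rewards observed by the critic
        self.value[key] += (reward - self.value[key]) / self.count[key]
```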