Slide 1: Agents
CPSC 386 Artificial Intelligence
Ellen Walker, Hiram College
Slide 2: Agents
- An agent perceives its environment through sensors and acts upon it through actuators.
- The agent’s percepts are its impressions of the sensor input. (The agent doesn’t necessarily know everything in its environment.)
- Agents may have knowledge and/or memory.
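A minimal sketch of this sensor/actuator view in Python (the Agent class and its act method are illustrative names, not from the slides):

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """An agent maps each percept (its impression of sensor input) to an action."""

    @abstractmethod
    def act(self, percept):
        """Choose an action given the latest percept."""
```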
Slide 3: A Simple Vacuum-Cleaner Agent
- Two locations, A and B
- Dirt sensor (current location only)
- The agent knows where it is
- Actions: left, right, suck
- “Knowledge” represented by (percept, action) pairs, e.g. [A, dirty] -> suck
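The “knowledge” can be stored literally as a lookup table from percepts to actions. A sketch holding just the pair from this slide (slide 7 fills in the remaining entries):

```python
# Percepts are (location, status) pairs; the table is the agent's "knowledge".
KNOWLEDGE = {
    ('A', 'dirty'): 'suck',
}

def act(percept):
    return KNOWLEDGE[percept]
```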
Slide 4: Agent Function vs. Agent Program
- Agent function:
  – Mathematical abstraction: f(percept sequence) = action
  – Externally observable (behavior)
- Agent program:
  – Concrete implementation of an algorithm that decides what the agent will do
  – Runs within a “physical system”
  – Not externally observable (thought)
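One way to see the distinction: the same externally observable agent function can be realized by different agent programs. A sketch, simplifying the percept to just the dirt status (both programs are illustrative):

```python
TABLE = {'dirty': 'suck', 'clean': 'right'}

def program_lookup(percept):
    # One program: table lookup.
    return TABLE[percept]

def program_rule(percept):
    # A different program, but the same agent function: identical behavior.
    return 'suck' if percept == 'dirty' else 'right'
```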
Slide 5: Rational Agents
- Rational agents “do the right thing” based on:
  – The performance measure that defines the criterion of success
  – The agent’s prior knowledge of the environment
  – The actions the agent can perform
  – The agent’s percept sequence to date
- Rationality is not omniscience; it optimizes expected performance based on (necessarily) incomplete information.
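For the vacuum world, one textbook-style performance measure (an assumption; the slides do not fix one) awards one point per clean square per time step:

```python
def performance(history):
    """history is a list of per-time-step status lists, e.g. ['clean', 'dirty']."""
    return sum(statuses.count('clean') for statuses in history)

performance([['dirty', 'clean'], ['clean', 'clean'], ['clean', 'clean']])  # -> 5
```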
Slide 6: A Program for an Agent
Repeat forever:
1. Record the latest percept from the sensors into memory
2. Choose the best action based on memory
3. Record the action in memory
4. Perform the action (and observe the results)
Almost all of AI elaborates on this loop!
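The loop transcribes directly into code, assuming sense, choose_action, and act are supplied callables (illustrative names):

```python
def run_agent(sense, choose_action, act):
    memory = []
    while True:
        memory.append(sense())           # 1. record latest percept into memory
        action = choose_action(memory)   # 2. choose best action based on memory
        memory.append(action)            # 3. record action in memory
        act(action)                      # 4. perform action
```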
Slide 7: A Reasonable Vacuum Program
[A, dirty] -> suck
[B, dirty] -> suck
[A, clean] -> right
[B, clean] -> left
- What goals will this program satisfy?
- What are the pitfalls, if any?
- Does a longer history of percepts help?
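The four rules as a simple reflex program, with one commonly noted pitfall marked in a comment:

```python
def reflex_vacuum(location, status):
    """The four condition-action rules from the slide."""
    if status == 'dirty':
        return 'suck'
    # Pitfall: once both squares are clean, the agent shuttles between A and B
    # forever. If the performance measure penalizes movement, remembering past
    # percepts (a longer history) would let it stop.
    return 'right' if location == 'A' else 'left'
```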
Slide 8: Aspects of Agent Behavior
- Information gathering: taking actions that modify future percepts
- Learning: modifying the program based on actions and perceived results
- Autonomy: the agent’s behavior depends on its own percepts rather than on the designer’s programming (a priori knowledge)
Slide 9: Specifying the Task Environment
- Performance measure
- Environment (real world or “artificial”)
- Actuators
- Sensors
Examples:
– Pilot
– Rat in a maze
– Surgeon
– Search engine
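These four items form the standard PEAS description. One way to record it, with an illustrative (not authoritative) entry for the pilot example:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: str
    environment: str
    actuators: str
    sensors: str

pilot = PEAS(
    performance='safe, on-time, fuel-efficient flight',
    environment='aircraft, airspace, weather, air traffic control',
    actuators='yoke, throttle, flaps, radio',
    sensors='altimeter, airspeed indicator, GPS, radar',
)
```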
Slide 10: Properties of Environments
- Fully vs. partially observable (e.g., is the whole map visible?)
- Single-agent vs. multi-agent
  – Adversaries (competitive)
  – Teammates (cooperative)
- Deterministic vs. stochastic
  – May appear stochastic if only partially observable (e.g., a card game)
  – Strategic: deterministic except for the actions of other agents
- (Uncertain = not fully observable, or nondeterministic)
Slide 11: Properties of Environments (cont.)
- Episodic vs. sequential
  – Do we need to know the history?
- Static vs. dynamic
  – Does the environment change while the agent is thinking?
- Discrete vs. continuous
  – Time, space, and actions
- Known vs. unknown
  – Does the agent know the “rules” or “laws of physics”?
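These dimensions can be recorded per environment; the chess and driving classifications below follow the usual textbook analysis and are offered as an illustration:

```python
from dataclasses import dataclass

@dataclass
class EnvProps:
    observable: str   # 'fully' or 'partially'
    agents: str       # 'single' or 'multi'
    dynamics: str     # 'deterministic', 'stochastic', or 'strategic'
    episodes: str     # 'episodic' or 'sequential'
    change: str       # 'static' or 'dynamic'
    state: str        # 'discrete' or 'continuous'

chess = EnvProps('fully', 'multi', 'strategic', 'sequential', 'static', 'discrete')
driving = EnvProps('partially', 'multi', 'stochastic', 'sequential', 'dynamic', 'continuous')
```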
Slide 12: Examples
- Solitaire
- Driving
- Conversation
- Chess
- Internet search
- Lawn mowing
Slide 13: Agent Types
- Reflex
- Model-based reflex
- Goal-based
- Utility-based
Slide 14: Reflex Agent (diagram)
Slide 15: Model-Based Reflex Agent (diagram)
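A model-based reflex agent keeps internal state that summarizes the percept history. A sketch in the shape of the standard pseudocode; update_state and rules are designer-supplied functions (illustrative names):

```python
def model_based_reflex_agent(update_state, rules):
    state, last_action = None, None

    def program(percept):
        nonlocal state, last_action
        state = update_state(state, last_action, percept)  # maintain a world model
        last_action = rules(state)                         # condition-action rules on state
        return last_action

    return program
```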
Slide 16: Goal-Based Agent (diagram)
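A goal-based agent chooses actions by predicting their outcomes and testing them against a goal. A minimal one-step-lookahead sketch (real agents would search deeper; all names are illustrative):

```python
def goal_based_agent(actions, predict, goal_test):
    def program(state):
        for action in actions:
            if goal_test(predict(state, action)):
                return action
        return None  # no single action reaches the goal
    return program
```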
Slide 17: Utility-Based Agent (diagram)
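A utility-based agent refines this by ranking predicted outcomes with a utility function instead of a binary goal test (again a sketch with illustrative names):

```python
def utility_based_agent(actions, predict, utility):
    def program(state):
        return max(actions, key=lambda a: utility(predict(state, a)))
    return program
```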
Slide 18: Learning Agent (diagram)
Components and connections:
- Performance element (the agent as described so far): maps percepts from the sensors to actions on the effectors
- Critic: evaluates how the agent is doing and sends feedback to the learning element
- Learning element: uses the feedback to make changes to the performance element’s knowledge
- Problem generator: given learning goals, suggests exploratory actions
The agent interacts with the environment through its sensors and effectors.
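A sketch of how the four components might be wired together (all components are caller-supplied callables; the wiring is illustrative, not from the slides):

```python
class LearningAgent:
    def __init__(self, performance, critic, learner, problem_generator):
        self.performance = performance              # percept -> action (the former "agent")
        self.critic = critic                        # percept -> feedback
        self.learner = learner                      # (feedback, performance) -> improved performance
        self.problem_generator = problem_generator  # percept -> exploratory action, or None

    def step(self, percept):
        feedback = self.critic(percept)
        self.performance = self.learner(feedback, self.performance)
        exploratory = self.problem_generator(percept)
        return exploratory if exploratory is not None else self.performance(percept)
```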
Slide 19: Classes of Representations
- Atomic: the state is indivisible
- Factored: the state consists of attributes and values
- Structured: the state consists of objects, which have attributes and relate to other objects
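The vacuum world can illustrate all three (the encodings below are one possible choice):

```python
from dataclasses import dataclass

# Atomic: the state is an indivisible label.
atomic = 'state7'

# Factored: the state is a set of attribute/value pairs.
factored = {'location': 'A', 'A': 'dirty', 'B': 'clean'}

# Structured: the state is objects with attributes and relations to each other.
@dataclass
class Square:
    name: str
    status: str

@dataclass
class Vacuum:
    at: Square  # relation: the vacuum is at this square

structured = Vacuum(at=Square('A', 'dirty'))
```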