©2001-2004 James D. Skrentny, from notes by C. Dyer et al. (11/8/2018)

Intelligent Agents (Chapter 2)
Concepts:
- Properties of Agents
- Classes of Agents
- Characteristics of Environments
What is an Agent?
An agent perceives its environment with sensors and acts upon that environment with its effectors. An agent receives percepts one at a time, and maps this percept sequence to actions.
Why agents? The agent metaphor provides a useful framework for thinking about and designing AI systems.
What is an Agent?
Agents usually carry out a task on behalf of a user. Agents can be biological, robotic, or computational. We'll focus on computational agents, more commonly called software agents.
Simple Agent Model
1. Perceive
2. Reason/Decide
3. Act
4. Repeat
(Diagram: the agent's sensors receive input from the environment; its effectors act back on the environment.)
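The four-step cycle above can be sketched as a simple loop. Everything here (the toy thermostat environment, the names `ToyEnvironment` and `run_agent`) is an illustrative assumption, not part of the slides:

```python
# A minimal sketch of the perceive-reason-act loop, using a toy
# environment with a temperature reading and a thermostat-style agent.

class ToyEnvironment:
    def __init__(self, temp):
        self.temp = temp

    def perceive(self):          # sensors: return the current percept
        return self.temp

    def act(self, action):       # effectors: change the environment
        if action == "heat":
            self.temp += 1
        elif action == "cool":
            self.temp -= 1

def decide(percept):
    """Reason/decide: map the current percept to an action."""
    if percept < 20:
        return "heat"
    if percept > 22:
        return "cool"
    return "wait"

def run_agent(env, steps):
    """Repeat the perceive-decide-act cycle a fixed number of times."""
    for _ in range(steps):
        percept = env.perceive()   # 1. perceive
        action = decide(percept)   # 2. reason/decide
        env.act(action)            # 3. act
    return env.temp                # 4. repeat (loop continues)
```

Running `run_agent(ToyEnvironment(15), 10)` drives the temperature up into the 20-22 comfort band and then holds it there.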
How should an Agent Act?
A rational agent acts intelligently to achieve some goal. A performance measure is used to determine how successfully a goal has been achieved, e.g.:
- time taken
- resources required
- false alarm rate
A rational agent tries to maximize its success by increasing its performance.
How should an Agent Act?
An ideal rational agent, for each possible percept sequence, does whatever action(s) maximize its performance measure, given the percept sequence and its knowledge, both built-in and acquired. These agents gather information so they aren't "rationally" ignorant.
Examples of Rational Agents
- ALVINN: percepts: images, signals, position; actions: steer, speed control, sensor control; goal: drive from A to B; environment: roads, vehicles, hazards.
- Price Grabber: percepts: web pages; actions: navigate web, gather information; goal: find best price; environment: Internet.
- Chess program: percepts: current board state; actions: next move; goal: win game; environment: opponent, game board.
- Medical diagnosis: percepts: symptoms, test results; actions: tests, treatments; goal: healthy patient; environment: patient, hospital.
Properties of Agents
Some main properties of software agents are:
- situatedness
- autonomy
- adaptivity
- sociability
Properties of Agents: Situatedness
The agent has a direct connection to its environment:
- receives some form of sensory input from its environment
- performs some action that changes its environment in some way
Properties of Agents: Autonomy
The agent can act without direct intervention by humans or other agents, and has control over its own actions and internal state. Decisions must be made independently of the programmer; some aspect of the current situation must trigger a response.
Properties of Agents: Adaptivity
The agent is capable of:
- reacting flexibly to changes in its environment
- taking goal-directed initiative (being proactive)
- learning from its own experience, its environment, and interactions with others
Properties of Agents: Sociability
The agent is capable of interacting in a peer-to-peer manner with other agents or humans:
- communicating, sharing information
- cooperating and/or competing
Classes of Agents: Simple Table-Based Reflex Agents
Use a table lookup where each percept is matched to an action.
Examples?
Problems/Limitations:
- table may be too big to generate and store
- not adaptive to changes in the environment; the table must instead be updated
- can't make actions conditional
- reacts only to the current percept; no history kept
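A table-based reflex agent is essentially a direct lookup. The vacuum-world percepts and actions below are illustrative assumptions, a common textbook toy domain rather than anything specified in the slides:

```python
# Sketch of a table-based reflex agent: every possible percept must
# appear as a key, and the table encodes all behavior.

PERCEPT_TABLE = {
    ("A", "dirty"): "suck",
    ("A", "clean"): "move_right",
    ("B", "dirty"): "suck",
    ("B", "clean"): "move_left",
}

def table_agent(percept):
    # Limitation made concrete: a percept with no table entry simply
    # cannot be handled, and adapting means rewriting the table.
    return PERCEPT_TABLE[percept]
```

With n distinct percepts the table needs n entries, which is exactly the "too big to generate and store" problem from the slide once percepts get rich.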
Classes of Agents: Simple Rule-Based Reflex Agents
Use if-then rules to match percepts to actions:
- no need to consider all percepts
- can generalize by mapping many percepts to the same action
- can adapt to changes in the environment by adding rules
Examples?
Problems/Limitations:
- reacts only to the current percept; no knowledge of non-perceptual parts of the current state
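The rule-based variant can be sketched as an ordered list of condition-action pairs; the smoke-alarm/thermostat percepts here are illustrative assumptions:

```python
# Sketch of a rule-based reflex agent: an ordered list of
# (condition, action) rules replaces the exhaustive table.

RULES = [
    (lambda p: p["smoke"], "sound_alarm"),    # one rule generalizes over
    (lambda p: p["temp"] < 18, "heat"),       # all percepts with smoke,
    (lambda p: p["temp"] > 24, "cool"),       # regardless of temperature
]

def rule_agent(percept):
    for condition, action in RULES:
        if condition(percept):
            return action          # first matching rule wins
    return "do_nothing"            # default when no rule fires
```

Adding a rule to the list adapts the agent without enumerating every percept, which is the advantage the slide names over the table-based agent.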
Classes of Agents: Reflex Agents with an Internal State
- encode the state of the world from past percepts (and knowledge of the world)
- actions can be based on a sequence of percepts and knowledge of the non-perceptual world state
- representing only the current state limits reasoning ability; better to also represent changes in the world
Examples?
Problems/Limitations:
- not deliberative; the agent classes so far are reactive
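Internal state lets the decision depend on the percept history, not just the current percept. The two-room vacuum world below is again an illustrative assumption:

```python
# Sketch of a reflex agent with internal state: it remembers which
# rooms it has seen clean, so it can stop once both are done, which
# a purely reactive agent (no history) could never decide.

class StatefulAgent:
    def __init__(self):
        self.cleaned = set()       # internal state built from past percepts

    def step(self, percept):
        room, status = percept
        if status == "dirty":
            self.cleaned.discard(room)
            return "suck"
        self.cleaned.add(room)     # update state from the current percept
        if self.cleaned >= {"A", "B"}:
            return "stop"          # decision uses history, not just this percept
        return "move_right" if room == "A" else "move_left"
```

The "stop" decision is impossible for the earlier agent classes: no single percept says that both rooms are clean.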
Classes of Agents: Goal-Based Agents
- choose actions to achieve a desired goal
- search or planning often used
- deliberative/purposeful rather than reactive
Examples?
Problems/Limitations:
- may have to consider long sequences of possible actions before the goal is achieved
- involves consideration of the future: "What will happen if I do...?"
- How are competing goals treated? What about degrees of success?
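Search is the classic mechanism behind goal-based agents. As a hedged sketch, here is breadth-first search over a tiny 3x3 grid world (the grid, and the names `plan` and `grid_neighbors`, are illustrative assumptions):

```python
# Sketch of the deliberation in a goal-based agent: instead of reacting,
# it searches for a sequence of actions that reaches a goal state,
# repeatedly asking "what will happen if I do X?" via a successor function.

from collections import deque

def plan(start, goal, neighbors):
    """Breadth-first search: return a list of states from start to goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

def grid_neighbors(pos):
    """Successor function on a 3x3 grid: the states one move can reach."""
    x, y = pos
    steps = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in steps if 0 <= a <= 2 and 0 <= b <= 2]
```

Even on this tiny grid the agent considers many action sequences before committing, which is the cost the slide flags for deliberative agents.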
Classes of Agents: Utility-Based Agents
- achieve goals while trying to maximize some utility value
- the utility value gives a measure of success or "happiness" for a given situation
- allows decisions that compare choices between conflicting goals, weighing likelihood of success against importance of the goal
Examples?
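The trade-off between likelihood of success and importance of the goal can be sketched as an expected-utility comparison; the routes and numbers below are illustrative assumptions:

```python
# Sketch of a utility-based agent: rather than a binary goal test, each
# candidate action gets an expected utility (probability of success
# times the value of the outcome), and the agent picks the maximum.

def expected_utility(action):
    """Weigh likelihood of success against importance of the goal."""
    return action["p_success"] * action["value"]

def choose_action(actions):
    return max(actions, key=expected_utility)

candidates = [
    {"name": "safe_route", "p_success": 0.95, "value": 10},  # 9.5 expected
    {"name": "fast_route", "p_success": 0.60, "value": 20},  # 12.0 expected
]
```

Here the riskier action wins because its outcome is valuable enough, exactly the kind of graded comparison between conflicting goals that a pure goal test cannot express.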
Intelligent Agent Model
(Diagram: sensors feed a model of the world, which is updated using prior knowledge about the world; reasoning and decision making use goals/utility to select from a list of possible actions; effectors carry out the chosen action in the environment.)
Characteristics of Environments: Fully Observable vs. Partially Observable
An environment is fully observable if the agent's sensors give it access to the complete state of the environment at any point in time. If all aspects relevant to the choice of action can be detected, the environment is effectively fully observable. Noisy and inaccurate sensors can result in partially observable environments.
Characteristics of Environments: Deterministic vs. Stochastic
An environment is deterministic if the next state of the world is completely determined by the current state and the agent's actions. Often it is better to consider this from the point of view of the agent. Randomness and chance are common causes of non-deterministic environments.
Characteristics of Environments: Episodic vs. Sequential
An environment is episodic if each percept-action episode does not depend on the actions taken in prior episodes. Games are often sequential, requiring one to think ahead.
Characteristics of Environments: Static vs. Dynamic
An environment is static if it doesn't change between the time of perceiving and acting. An environment is semidynamic if the environment itself doesn't change but the agent does (e.g. its performance score). Time is an important factor in dynamic environments, since perceptions can become "stale".
Characteristics of Environments: Discrete vs. Continuous
An environment is discrete if there are a limited number of distinct, clearly defined states of the world, which limits the range of possible percepts and actions.
Characteristics of Environments: Single Agent vs. Multiagent
An environment is multiagent if more than one agent affects the others' performance. Multiagent environments can be competitive and/or cooperative.
Characteristics of Environments
In "real world" environments there is often uncertainty because they are partially observable, stochastic, and dynamic. In AI research it is common to use "toy worlds", which approximate the "real world" but are simplified, and often oversimplified in a manner that diminishes the value of the results.
Summary
An agent perceives and acts in an environment to achieve specific goals.
Characteristics of Agents:
- situatedness
- autonomy
- adaptivity
- sociability
Summary
Agent Types:
- simple reflex: lookup table, if-then rules
- reflex with state
- goal-based
- utility-based
Summary
Characteristics of Environments:
- fully observable vs. partially observable
- deterministic vs. stochastic
- episodic vs. sequential
- static vs. dynamic
- discrete vs. continuous
- single agent vs. multiagent