1 Lecture 1: Intelligent Agents
Yaohang Li, Department of Computer Science, Old Dominion University
Reading for next class: Chapter 3, Russell and Norvig
Syllabus

2 Review of Intelligent Agents
- Motivation
- Objectives
- Introduction
- Agents and Environments
- Rationality
- Agent Structure
- Agent Types
  - Simple reflex agent
  - Model-based reflex agent
  - Goal-based agent
  - Utility-based agent
  - Learning agent

3 Introduction to Intelligent Agents
- Motivations
  - agents provide a consistent viewpoint on various topics in the field
  - AI agents require essential skills to perform tasks that require intelligence
  - intelligent agents use methods and techniques from the field of AI
- What is an agent?
  - in general, an entity that interacts with its environment
    - perception through sensors
    - actions through effectors or actuators
- Agent and its environment
  - an agent perceives its environment through sensors
    - the complete set of inputs at a given time is called a percept
    - the current percept, or a sequence of percepts, may influence the actions of an agent
  - it can change the environment through actuators
    - an operation involving an actuator is called an action
    - actions can be grouped into action sequences

4 Intelligent Agents

5 Example of Agents
- human agent
  - eyes, ears, skin, taste buds, etc. for sensors
  - hands, fingers, legs, mouth, etc. for actuators
  - powered by muscles
- robot
  - camera, infrared, bumper, etc. for sensors
  - grippers, wheels, lights, speakers, etc. for actuators
  - often powered by motors
- software agent
  - functions as sensors: information provided as input to functions in the form of encoded bit strings or symbols
  - functions as actuators: results deliver the output

6 Vacuum-Cleaner World
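A minimal sketch of the two-square vacuum-cleaner world, assuming two squares A and B that can each be dirty or clean. The class and method names (VacuumWorld, percept, step) are illustrative, not from the textbook.

```python
class VacuumWorld:
    def __init__(self):
        # Two squares, A and B, both initially dirty.
        self.dirt = {"A": True, "B": True}
        self.location = "A"

    def percept(self):
        # The agent senses only its own location and the local dirt status.
        return (self.location, self.dirt[self.location])

    def step(self, action):
        # Apply one of the three actions to the environment.
        if action == "Suck":
            self.dirt[self.location] = False
        elif action == "Right":
            self.location = "B"
        elif action == "Left":
            self.location = "A"

world = VacuumWorld()
print(world.percept())   # ('A', True)
world.step("Suck")
world.step("Right")
print(world.percept())   # ('B', True)
```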

7 Agents and Their Environment
- a rational agent does “the right thing”
  - the action that leads to the best outcome under the given circumstances
- an agent function maps percept sequences to actions
  - an abstract mathematical description
- an agent program is a concrete implementation of the respective function
  - it runs on a specific agent architecture (“platform”)
- problems:
  - what is “the right thing”?
  - how do you measure the “best outcome”?

8 The Concept of Rationality
- Rational ≠ omniscient
  - percepts may not supply all relevant information
- Rational ≠ clairvoyant
  - action outcomes may not be as expected
- Rational ≠ successful
- Rationality vs. Perfection
  - rationality maximizes expected performance
  - perfection maximizes actual performance
- Rational ⇒ exploration, learning, autonomy

9 Performance Measure
- Performance of Agents
  - criteria for measuring the outcome and the expenses of the agent
  - often subjective, but should be objective
  - task dependent
  - time may be important
- Performance measure example: vacuum agent
  - number of tiles cleaned during a certain period
    - based on the agent’s report, or validated by an objective authority
  - doesn’t consider expenses of the agent or side effects
    - energy, noise, loss of useful objects, damaged furniture, scratched floor
  - might lead to unwanted activities
    - agent re-cleans clean tiles, covers only part of the room, drops dirt on tiles to have more tiles to clean, etc.
- Alternative performance measures
  - one point per square cleaned up in time T?
  - one point per clean square per time step, minus one per move
  - penalize for > k dirty squares?
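The alternative measure “one point per clean square per time step, minus one per move” can be sketched as a scoring function. The run-history format here (a list of per-step tuples) is an assumption for illustration.

```python
def score(history):
    """Score a run: history is a list of (clean_count, moved) per time step.

    clean_count: number of clean squares during that step.
    moved: whether the agent moved during that step.
    """
    total = 0
    for clean_count, moved in history:
        total += clean_count   # one point per clean square per time step
        if moved:
            total -= 1         # minus one point per move
    return total

# Three steps: one square clean, then a move, then both squares clean.
print(score([(1, False), (1, True), (2, False)]))  # 3
```

A measure like this rewards finishing quickly and discourages pointless motion, unlike the raw “tiles cleaned” count criticized above.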

10 Rational Agents
- Rationality: selects the action that is expected to maximize its performance
  - based on a performance measure
  - depends on the percept sequence, background knowledge, and feasible actions
- performance measure
  - for the successful completion of a task
- complete perceptual history (percept sequence)
- background knowledge
  - especially about the environment: dimensions, structure, basic “laws”
  - task, user, other agents
- feasible actions
  - capabilities of the agent
- Definition of a Rational Agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

11 The Nature of Environments
- environments determine to a large degree the interaction between the “outside world” and the agent
  - the “outside world” is not necessarily the “real world” as we perceive it
  - in many cases, environments are implemented within computers; they may or may not have a close correspondence to the “real world”
- Environment Properties
  - fully observable vs. partially observable: sensors capture all relevant information from the environment
  - deterministic vs. stochastic (non-deterministic): changes in the environment are predictable
  - episodic vs. sequential (non-episodic): independent perceiving-acting episodes
  - static vs. dynamic: no changes while the agent is “thinking”
  - discrete vs. continuous: limited number of distinct percepts/actions
  - single vs. multiple agents: interaction and collaboration among agents; competitive, cooperative

12 Specifying the Task Environment (I)
PEAS Description
- Performance Measures: used to evaluate how well an agent solves the task at hand
- Environment: surroundings beyond the control of the agent
- Actuators: determine the actions the agent can perform
- Sensors: provide information about the current state of the environment

13 VacBot PEAS Description
- Performance Measures: cleanliness of the floor, time needed, energy consumed
- Environment: grid of tiles, dirt on tiles, possibly obstacles, varying amounts of dirt
- Actuators: movement (wheels, tracks, legs, ...), dirt removal (nozzle, gripper, ...)
- Sensors: position (tile ID reader, camera, GPS, ...), dirtiness (camera, sniffer, touch, ...), possibly movement (camera, wheel movement)

14 SearchBot PEAS Description
- Performance Measures: number of “hits” (relevant retrieved items), recall (hits / all relevant items), precision (hits / all retrieved items), quality of hits
- Environment: document repository (database, files, WWW, ...), computer system (hardware, OS, software, ...), network (protocol, interconnection, ...)
- Actuators: query functions, retrieval functions, display functions
- Sensors: input parameters

15 Chess Player PEAS Description
- Performance Measures: winning the game, time spent in the game
- Environment: chessboard, positions of every piece
- Actuators: move a piece
- Sensors: input from the keyboard

16 Specifying the Environment (II)
PAGE Description: used for high-level characterization of agents
- Percepts: information acquired through the agent’s sensory system
- Actions: operations performed by the agent on the environment through its actuators
- Goals: desired outcome of the task with a measurable performance
- Environment: surroundings beyond the control of the agent

17 VacBot PAGE Description
- Percepts: tile properties like clean/dirty, empty/occupied; movement and orientation
- Actions: pick up dirt, move
- Goals: desired outcome of the task with a measurable performance
- Environment: surroundings beyond the control of the agent

18 StudentBot PAGE Description
- Percepts: images (text, pictures, instructor, classmates), sound (language)
- Actions: comments, questions, gestures, note-taking (?)
- Goals: mastery of the material; performance measure: grade
- Environment: classroom

19 Agent Program
- the emphasis in this course is on programs that specify the agent’s behavior through mappings from percepts to actions; less on environment and goals
- agents receive one percept at a time
  - they may or may not keep track of the percept sequence
- performance evaluation is often done by an outside authority, not the agent
  - more objective, less complicated
  - can be integrated with the environment program
Basic Framework
  function SKELETON-AGENT(percept) returns action
    static: memory
    memory := UPDATE-MEMORY(memory, percept)
    action := CHOOSE-BEST-ACTION(memory)
    memory := UPDATE-MEMORY(memory, action)
    return action

20 Agent Program Types Classification of Agent Programs
- different ways of achieving the mapping from percepts to actions
- different levels of complexity
Agent Program Types
- simple reflex agents
- model-based reflex agents (agents that keep track of the world)
- goal-based agents
- utility-based agents
- learning agents

21 Simple Reflex Agents
- application of simple rules to situations
  - requires processing of percepts to achieve some abstraction
- select actions on the basis of the current percept, ignoring the rest of the percept history
- frequent method of specification is through condition-action rules: if percept then action
  - similar to innate reflexes or learned responses in humans
- efficient implementation, but limited power
  - environment must be fully observable
  - easily runs into infinite loops
  function SIMPLE-REFLEX-AGENT(percept) returns action
    static: rules // set of condition-action rules
    condition := INTERPRET-INPUT(percept)
    rule := RULE-MATCH(condition, rules)
    action := RULE-ACTION(rule)
    return action
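A Python sketch of a simple reflex agent for the vacuum world. The rule table below is an illustrative set of condition-action rules; the percept format (location, dirty) is an assumption.

```python
# Condition-action rules: (location, dirty) -> action.
RULES = {
    ("A", True):  "Suck",
    ("B", True):  "Suck",
    ("A", False): "Right",
    ("B", False): "Left",
}

def simple_reflex_agent(percept):
    location, dirty = percept           # INTERPRET-INPUT
    return RULES[(location, dirty)]     # RULE-MATCH + RULE-ACTION

print(simple_reflex_agent(("A", True)))   # Suck
print(simple_reflex_agent(("A", False)))  # Right
```

Note how the agent ignores all history: with both squares clean it shuttles Left/Right forever, illustrating the infinite-loop weakness mentioned above.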

22 Simple Reflex Agent
[Diagram: sensors tell the agent “what the world is like now”; condition-action rules determine “what should I do now”; actuators act on the environment.]

23 Model-based Reflex Agents
- an internal state maintains important information from previous percepts
  - sensors only provide a partial picture of the environment
  - helps with some partially observable environments
- keep track of the percept history
  - the internal state reflects the agent’s knowledge about the world; this knowledge is called a model
  - may contain information about changes in the world
    - caused by actions of the agent
    - independent of the agent’s behavior
  function REFLEX-AGENT-WITH-STATE(percept) returns action
    static: rules // set of condition-action rules
            state // description of the current world state
            action // most recent action, initially none
    state := UPDATE-STATE(state, action, percept)
    rule := RULE-MATCH(state, rules)
    action := RULE-ACTION(rule)
    return action
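A Python sketch of a model-based reflex agent for the vacuum world. The internal model here (remembered dirt status per square) is an illustrative assumption; it lets the agent stop once it knows both squares are clean, avoiding the simple reflex agent’s infinite loop.

```python
class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal model: known dirt status per square (None = unknown).
        self.model = {"A": None, "B": None}

    def act(self, percept):
        location, dirty = percept
        self.model[location] = dirty          # UPDATE-STATE from percept
        if dirty:
            self.model[location] = False      # sucking will clean it
            return "Suck"
        # If the other square is known to be clean, there is nothing to do.
        other = "B" if location == "A" else "A"
        if self.model[other] is False:
            return "NoOp"
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", True)))   # Suck
print(agent.act(("A", False)))  # Right
print(agent.act(("B", False)))  # NoOp
```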

24 Model-based Reflex Agents
Sensors Actuators What the world is like now What should I do now States How the world evolves What my actions do Agent Environment Condition-action rules

25 Goal-based Agent
- the agent tries to reach a desirable state, the goal
  - may be provided from the outside (user, designer, environment), or inherent to the agent itself
- results of possible actions are considered with respect to the goal
  - easy when the results can be related to the goal after each action
  - in general, it can be difficult to attribute goal satisfaction to individual actions
- may require consideration of the future
  - what-if scenarios
  - search, reasoning, or planning
- very flexible, but not very efficient
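A goal-based sketch for the vacuum world: the agent considers “what if I do this action” and searches for an action sequence that reaches the goal (no dirt anywhere). The state encoding and breadth-first search are illustrative choices, not the only way to do this.

```python
from collections import deque

def result(state, action):
    # Predict the successor state: state = (location, dirt_A, dirt_B).
    loc, a, b = state
    if action == "Suck":
        return (loc, False, b) if loc == "A" else (loc, a, False)
    return ("B" if action == "Right" else "A", a, b)

def plan(start):
    # Breadth-first search for a shortest action sequence to the goal.
    goal = lambda s: not s[1] and not s[2]   # goal: both squares clean
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal(state):
            return actions
        for act in ("Suck", "Left", "Right"):
            nxt = result(state, act)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [act]))
    return None

print(plan(("A", True, True)))  # ['Suck', 'Right', 'Suck']
```

This is exactly the “consideration of the future” the slide mentions: the agent simulates outcomes before acting instead of reacting to the current percept alone.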

26 Goal-based Agent
[Diagram: the internal state plus models of “how the world evolves” and “what my actions do” let the agent predict “what happens if I do an action”; the goals then determine “what should I do now”.]

27 A Utility-based Agent
- more sophisticated distinction between different world states
  - a utility function maps states onto a real number
  - may be interpreted as “degree of happiness”
- permits rational actions for more complex tasks
  - resolution of conflicts between goals (tradeoff)
  - multiple goals (likelihood of success, importance)
- a utility function is necessary for rational behavior, but sometimes it is not made explicit
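A utility-based sketch: instead of a binary goal test, a utility function maps each predicted state to a real number, and the agent chooses the action whose outcome it values most. The states and utility values below are illustrative assumptions.

```python
def utility(state):
    # "Degree of happiness": each clean square is worth 10 points.
    loc, dirt_a, dirt_b = state
    return (0 if dirt_a else 10) + (0 if dirt_b else 10)

def result(state, action):
    # Predict the successor state: state = (location, dirt_A, dirt_B).
    loc, a, b = state
    if action == "Suck":
        return (loc, False, b) if loc == "A" else (loc, a, False)
    return ("B" if action == "Right" else "A", a, b)

def best_action(state, actions=("Suck", "Left", "Right")):
    # Choose the action leading to the highest-utility predicted state.
    return max(actions, key=lambda act: utility(result(state, act)))

print(best_action(("A", True, True)))  # Suck
```

Unlike a goal test, the utility function can also encode tradeoffs (e.g., subtracting a cost per move), which is how conflicting goals get resolved.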

28 A Utility-based Agent
[Diagram: like the goal-based agent, but a utility measure answers “how happy will I be then” for each predicted outcome, and the agent chooses the action it values most.]

29 Learning Agent
- performance element
  - selects actions based on percepts, internal state, background knowledge
  - can be one of the previously described agents
- learning element
  - identifies improvements
- critic
  - provides feedback about the performance of the agent
  - can be external; sometimes part of the environment
- problem generator
  - suggests actions required for novel solutions (creativity)
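The interplay of performance element, critic, and learning element can be sketched minimally: the critic scores each action, and the learning element nudges the performance element’s action preferences. All numbers, names, and the weight-update rule here are illustrative assumptions, not a real learning algorithm from the text.

```python
# Action preferences maintained by the performance element.
weights = {"Suck": 0.0, "Right": 0.0, "Left": 0.0}

def performance_element(percept):
    # Select the currently highest-weighted action.
    return max(weights, key=weights.get)

def critic(percept, action):
    # Feedback: sucking on a dirty square is good, anything else is mildly bad.
    loc, dirty = percept
    return 1.0 if (dirty and action == "Suck") else -0.1

def learning_element(action, feedback, lr=0.5):
    # Adjust the preference for the action that was just taken.
    weights[action] += lr * feedback

# Training loop: the critic's feedback gradually shapes the weights.
for _ in range(10):
    percept = ("A", True)
    action = performance_element(percept)
    learning_element(action, critic(percept, action))

print(performance_element(("A", True)))  # Suck
```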

30 A Model of a Learning Agent
[Diagram: a critic compares the agent’s behavior against a performance standard and feeds the learning element, which improves the performance element; a problem generator proposes exploratory actions.]

31 Summary
- agents perceive and act in an environment
  - definition of percept
  - description of the environment (PEAS)
- ideal agents maximize their performance measure
  - rationality, performance measure, rational agent
- basic agent types
  - simple reflex
  - reflex with state
  - goal-based
  - utility-based
  - learning
- some environments may make life harder for agents
  - inaccessible, non-deterministic, non-episodic, dynamic, continuous

32 What I want you to do
- Review Chapter 2
- Review class notes
- Build a project team

