Lecture 1 Intelligent Agents Yaohang Li Department of Computer Science


Lecture 1: Intelligent Agents
Yaohang Li, Department of Computer Science, Old Dominion University
Reading for next class: Chapter 3, Russell and Norvig
Syllabus

Review of Intelligent Agents
  Motivation
  Objectives
  Introduction
  Agents and environments
  Rationality
  Agent structure
  Agent types
    simple reflex agent
    model-based reflex agent
    goal-based agent
    utility-based agent
    learning agent

Introduction to Intelligent Agents

Motivation
  agents provide a consistent viewpoint on various topics in the field of AI
  agents require essential skills to perform tasks that demand intelligence
  intelligent agents use methods and techniques from the field of AI

What is an agent?
  in general, an entity that interacts with its environment
    perception through sensors
    actions through effectors or actuators

An agent and its environment
  an agent perceives its environment through sensors
    the complete set of inputs at a given time is called a percept
    the current percept, or a sequence of percepts, may influence the actions of the agent
  an agent changes its environment through actuators
    an operation involving an actuator is called an action
    actions can be grouped into action sequences

Intelligent Agents

Examples of Agents
human agent
  eyes, ears, skin, taste buds, etc. for sensors
  hands, fingers, legs, mouth, etc. for actuators
  powered by muscles
robot
  camera, infrared, bumper, etc. for sensors
  grippers, wheels, lights, speakers, etc. for actuators
  often powered by motors
software agent
  functions as sensors: information provided as input to functions, in the form of encoded bit strings or symbols
  functions as actuators: results delivered as output

Vacuum-Cleaner World

Agents and Their Environment
a rational agent does "the right thing"
  the action that leads to the best outcome under the given circumstances
an agent function maps percept sequences to actions
  an abstract mathematical description
an agent program is a concrete implementation of the respective function
  it runs on a specific agent architecture ("platform")
problems:
  what is "the right thing"?
  how do you measure the "best outcome"?
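The agent-function / agent-program distinction can be sketched in a few lines of Python. This is a minimal illustration using the two-square vacuum world; the percept format and rule table are our own assumptions, not from the lecture. The agent function is an abstract mapping from percept sequences to actions; a table-driven agent program implements a finite slice of it.

```python
# Table-driven agent program: the table is a finite, explicit slice of
# the abstract agent function (percept sequence -> action).
table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "right",
    (("B", "dirty"),): "suck",
    (("B", "clean"),): "left",
}

percepts = []  # the percept sequence seen so far

def table_driven_agent(percept):
    """Agent program: look the whole percept sequence up in the table."""
    percepts.append(percept)
    return table.get(tuple(percepts), "no-op")
```

Note that the table is indexed by the entire percept sequence, which is why table-driven agents are impractical beyond toy worlds: the table grows exponentially with the sequence length.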

The Concept of Rationality
rational ≠ omniscient
  percepts may not supply all relevant information
rational ≠ clairvoyant
  action outcomes may not be as expected
rational ≠ successful
rationality vs. perfection
  rationality maximizes expected performance
  perfection maximizes actual performance
rational ⇒ exploration, learning, autonomy

Performance Measure
performance of agents
  criteria for measuring the outcome and the expenses of the agent
  often subjective, but should be objective
  task dependent
  time may be important
example: vacuum agent
  performance measure: number of tiles cleaned during a certain period
    based on the agent's report, or validated by an objective authority
  doesn't consider expenses of the agent, or side effects
    energy, noise, loss of useful objects, damaged furniture, scratched floor
  might lead to unwanted activities
    agent re-cleans clean tiles, covers only part of the room, drops dirt on tiles to have more tiles to clean, etc.
alternative performance measures
  one point per square cleaned up in time T?
  one point per clean square per time step, minus one per move
  penalize for > k dirty squares?
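The "one point per clean square per time step, minus one per move" measure above can be made concrete with a toy simulation. This is our own illustration (not from the slides), again on the two-square vacuum world:

```python
def score_run(actions, dirty={"A", "B"}, start="A"):
    """Replay a fixed action sequence in the two-square vacuum world
    and accumulate the performance measure."""
    dirty = set(dirty)
    loc, score = start, 0
    for action in actions:
        if action == "suck":
            dirty.discard(loc)
        elif action == "right":
            loc = "B"
            score -= 1           # penalty of one point per move
        elif action == "left":
            loc = "A"
            score -= 1
        score += 2 - len(dirty)  # one point per clean square per time step
    return score
```

Because clean squares score on every time step, cleaning early is rewarded, and the move penalty discourages aimless wandering; this is exactly the kind of incentive shaping the slide's "unwanted activities" warning is about.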

Rational Agents
rationality: select the action that is expected to maximize the performance measure
  depends on the percept sequence, background knowledge, and feasible actions
ingredients
  performance measure for the successful completion of a task
  complete perceptual history (percept sequence)
  background knowledge
    especially about the environment: dimensions, structure, basic "laws"
    task, user, other agents
  feasible actions
    capabilities of the agent
definition of a rational agent
  for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has

The Nature of Environments
environments determine to a large degree the interaction between the "outside world" and the agent
  the "outside world" is not necessarily the "real world" as we perceive it
  in many cases, environments are implemented within computers
  they may or may not have a close correspondence to the "real world"
environment properties
  fully observable vs. partially observable: sensors capture all relevant information from the environment
  deterministic vs. stochastic (non-deterministic): changes in the environment are predictable
  episodic vs. sequential (non-episodic): independent perceiving-acting episodes
  static vs. dynamic: no changes while the agent is "thinking"
  discrete vs. continuous: limited number of distinct percepts/actions
  single vs. multiple agents: interaction and collaboration among agents; competitive, cooperative

Specifying the Task Environment (I): PEAS Description
Performance measures: used to evaluate how well an agent solves the task at hand
Environment: surroundings beyond the control of the agent
Actuators: determine the actions the agent can perform
Sensors: provide information about the current state of the environment

VacBot PEAS Description
Performance measures: cleanliness of the floor, time needed, energy consumed
Environment: grid of tiles, dirt on tiles, possibly obstacles, varying amounts of dirt
Actuators: movement (wheels, tracks, legs, ...), dirt removal (nozzle, gripper, ...)
Sensors: position (tile ID reader, camera, GPS, ...), dirtiness (camera, sniffer, touch, ...), possibly movement (camera, wheel movement)

SearchBot PEAS Description
Performance measures: number of "hits" (relevant retrieved items), recall (hits / all relevant items), precision (hits / all retrieved items), quality of hits
Environment: document repository (database, files, WWW, ...), computer system (hardware, OS, software, ...), network (protocol, interconnection, ...)
Actuators: query functions, retrieval functions, display functions
Sensors: input parameters

Chess Player PEAS Description
Performance measures: winning the game, time spent in the game
Environment: chessboard, positions of every piece
Actuators: move a piece
Sensors: input from the keyboard

Specifying the Environment (II): PAGE Description
used for high-level characterization of agents
Percepts: information acquired through the agent's sensory system
Actions: operations performed by the agent on the environment through its actuators
Goals: desired outcome of the task, with a measurable performance
Environment: surroundings beyond the control of the agent

VacBot PAGE Description
Percepts: tile properties like clean/dirty, empty/occupied; movement and orientation
Actions: pick up dirt, move
Goals: desired outcome of the task with a measurable performance
Environment: surroundings beyond the control of the agent

StudentBot PAGE Description
Percepts: images (text, pictures, instructor, classmates), sound (language)
Actions: comments, questions, gestures, note-taking (?)
Goals: mastery of the material; performance measure: grade
Environment: classroom

Agent Program
the emphasis in this course is on programs that specify the agent's behavior through mappings from percepts to actions
  less on environment and goals
agents receive one percept at a time
  they may or may not keep track of the percept sequence
performance evaluation is often done by an outside authority, not the agent
  more objective, less complicated
  can be integrated with the environment program

Basic framework:
function SKELETON-AGENT(percept) returns action
  static: memory
  memory := UPDATE-MEMORY(memory, percept)
  action := CHOOSE-BEST-ACTION(memory)
  memory := UPDATE-MEMORY(memory, action)
  return action
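The SKELETON-AGENT pseudocode above can be rendered as a small, runnable Python class. This is a sketch only: `update_memory` and `choose_best_action` are placeholders that a concrete agent would override, and the `"no-op"` default is our own choice.

```python
class SkeletonAgent:
    """Runnable rendering of the SKELETON-AGENT framework."""

    def __init__(self):
        self.memory = []          # plays the role of 'static: memory'

    def update_memory(self, item):
        self.memory.append(item)  # remember percepts and chosen actions

    def choose_best_action(self):
        # placeholder policy: a concrete agent ranks feasible actions here
        return "no-op"

    def __call__(self, percept):
        self.update_memory(percept)           # memory := UPDATE-MEMORY(memory, percept)
        action = self.choose_best_action()    # action := CHOOSE-BEST-ACTION(memory)
        self.update_memory(action)            # memory := UPDATE-MEMORY(memory, action)
        return action
```

Recording the chosen action back into memory, as the pseudocode does, lets later decisions depend on what the agent already did, not only on what it perceived.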

Agent Program Types
classification of agent programs
  different ways of achieving the mapping from percepts to actions
  different levels of complexity
agent program types
  simple reflex agents
  model-based reflex agents (agents that keep track of the world)
  goal-based agents
  utility-based agents
  learning agents

Simple Reflex Agents
application of simple rules to situations
  requires processing of percepts to achieve some abstraction
select actions on the basis of the current percept, ignoring the rest of the percept history
frequent method of specification is through condition-action rules
  if percept then action
similar to innate reflexes or learned responses in humans
efficient implementation, but limited power
  environment must be fully observable
  easily runs into infinite loops

function SIMPLE-REFLEX-AGENT(percept) returns action
  static: rules // set of condition-action rules
  condition := INTERPRET-INPUT(percept)
  rule := RULE-MATCH(condition, rules)
  action := RULE-ACTION(rule)
  return action
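A hedged Python sketch of a simple reflex agent for the vacuum world, following the pseudocode above; the rule set and `(location, status)` percept format are our own illustration. Note that, unlike the table-driven agent, this one consults only the current percept.

```python
# Condition-action rules: "if percept then action"
rules = {
    ("A", "dirty"): "suck",
    ("B", "dirty"): "suck",
    ("A", "clean"): "right",
    ("B", "clean"): "left",
}

def simple_reflex_agent(percept):
    """Act on the current percept only; no memory of earlier percepts."""
    location, status = percept                 # INTERPRET-INPUT
    action = rules.get((location, status))     # RULE-MATCH + RULE-ACTION
    return action if action else "no-op"
```

The fully-observable requirement is visible in the code: the rules can only condition on what is in the current percept, so any relevant fact not sensed right now is simply invisible to the agent.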

[Diagram: Simple Reflex Agent — sensors tell the agent "what the world is like now"; condition-action rules determine "what should I do now"; actuators act on the environment.]

Model-based Reflex Agents
an internal state maintains important information from previous percepts
  sensors only provide a partial picture of the environment
  helps with some partially observable environments
keep track of the percept history
  the internal state reflects the agent's knowledge about the world
  this knowledge is called a model
  it may contain information about changes in the world
    caused by actions of the agent
    or occurring independently of the agent's behavior

function REFLEX-AGENT-WITH-STATE(percept) returns action
  static: rules  // set of condition-action rules
          state  // description of the current world state
          action // most recent action, initially none
  state := UPDATE-STATE(state, action, percept)
  rule := RULE-MATCH(state, rules)
  action := RULE-ACTION(rule)
  return action

[Diagram: Model-based Reflex Agent — the internal state, updated using "how the world evolves" and "what my actions do", is combined with sensor input to establish "what the world is like now"; condition-action rules then determine "what should I do now", executed through the actuators.]

Goal-based Agent
the agent tries to reach a desirable state, the goal
  may be provided from the outside (user, designer, environment), or be inherent to the agent itself
results of possible actions are considered with respect to the goal
  easy when the results can be related to the goal after each action
  in general, it can be difficult to attribute goal satisfaction to individual actions
may require consideration of the future
  what-if scenarios
  search, reasoning, or planning
very flexible, but not very efficient
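The "what-if scenarios" idea can be illustrated with a tiny search: instead of reacting to the current percept, a goal-based agent looks ahead for an action sequence that reaches a goal state. This is our own toy example (breadth-first search over vacuum-world states), not the lecture's code.

```python
from collections import deque

def plan_to_goal(start, goal_test, successors):
    """Breadth-first search over (state, plan) pairs; returns the
    shortest action sequence reaching a goal state, or None."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for action, nxt in successors(state):   # "what happens if I do an action"
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

# Vacuum world: state = (location, frozenset of dirty squares)
def successors(state):
    loc, dirty = state
    yield "suck", (loc, dirty - {loc})
    yield "right", ("B", dirty)
    yield "left", ("A", dirty)

plan = plan_to_goal(("A", frozenset({"A", "B"})),
                    lambda s: not s[1],          # goal: no dirty squares
                    successors)
```

The flexibility/efficiency trade-off from the slide shows up directly: the same search code works for any goal test, but it explores many states that a hand-written reflex rule would never consider.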

[Diagram: Goal-based Agent — the state model ("how the world evolves", "what my actions do") predicts "what the world is like now" and "what happens if I do an action"; the goals then determine "what should I do now".]

A Utility-based Agent
more sophisticated distinction between different world states
  a utility function maps states onto a real number
  may be interpreted as "degree of happiness"
permits rational actions for more complex tasks
  resolution of conflicts between goals (tradeoffs)
  multiple goals (likelihood of success, importance)
a utility function is necessary for rational behavior, but sometimes it is not made explicit
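Utility-based action selection can be sketched in a few lines: each action leads to outcomes with known probabilities, and the agent picks the action with the highest expected utility. The action names and numbers below are invented purely for illustration.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Hypothetical trade-off between a sure goal and a risky, higher-payoff one
actions = {
    "safe-route":  [(1.0, 5.0)],                # certain, modest payoff
    "risky-route": [(0.6, 10.0), (0.4, -5.0)],  # likely better, possibly worse
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
```

This is where the "likelihood of success, importance" weighing from the slide becomes literal arithmetic: the risky route's expected utility (0.6·10 − 0.4·5 = 4.0) falls short of the safe route's 5.0, so the rational choice is the safe route.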

[Diagram: Utility-based Agent — the state model predicts "what the world is like now" and "what happens if I do an action"; the utility function evaluates "how happy will I be then"; the result determines "what should I do now".]

Learning Agent
performance element
  selects actions based on percepts, internal state, background knowledge
  can be one of the previously described agents
learning element
  identifies improvements
critic
  provides feedback about the performance of the agent
  can be external; sometimes part of the environment
problem generator
  suggests actions
  required for novel solutions (creativity)
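A minimal sketch of the learning-agent loop: the performance element picks actions from learned value estimates, the critic scores the outcome against a performance standard, and the learning element updates the estimates. The incremental-averaging rule and the action names are our own simple choices, not from the lecture.

```python
# Value estimates maintained by the learning element
values = {"suck": 0.0, "right": 0.0, "left": 0.0}
counts = {a: 0 for a in values}

def performance_element():
    """Select the action with the highest current value estimate."""
    return max(values, key=values.get)

def learning_element(action, feedback):
    """Fold critic feedback into an incremental average for the action."""
    counts[action] += 1
    values[action] += (feedback - values[action]) / counts[action]

# one turn of the loop, with the critic's feedback supplied externally
action = performance_element()
learning_element(action, feedback=1.0)
```

A real learning agent would also include a problem generator that occasionally overrides `performance_element` with an exploratory action; otherwise the agent only ever refines estimates for actions it already prefers.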

[Diagram: A Model of a Learning Agent — the critic compares sensor information against a performance standard and feeds back to the learning element, which modifies the performance element (state model, utility, action selection); the problem generator suggests exploratory actions, executed through the actuators.]

Summary
agents perceive and act in an environment
  definition of percept
  description of the environment (PEAS)
ideal agents maximize their performance measure
  rationality, performance measure, rational agent
basic agent types
  simple reflex
  reflex with state (model-based)
  goal-based
  utility-based
  learning
some environments may make life harder for agents
  inaccessible, non-deterministic, non-episodic, dynamic, continuous

What I want you to do
  Review Chapter 2
  Review class notes
  Build a project team