Chapter 2: Intelligent Agents

Agents and environments Agent: perceives its environment through sensors and acts on the environment with actuators Agent examples: robots, softbots, thermostats… Percept: the agent's perceptual inputs at any given instant Historically, AI has focused on isolated components of agents; now we look at the agent as a whole

…agents Sensors receive: camera and video images, keyboard input, file contents, … Actuators act on the environment by: a robotic arm moving things, a softbot displaying on screen, writing files, or sending network packets… General assumption: every agent can perceive its own actions, but possibly not their effects

…agents Agent function: maps any given percept sequence to an action (an abstract mathematical description) The agent's choice of action depends on the percept sequence observed to date Imagine tabulating the agent function: the table is an external characterization of the agent Internally, the agent function is implemented by an agent program (a concrete implementation of the agent function)
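
A minimal sketch of the distinction, in Python (the percept and action names are ours, borrowed from the vacuum world introduced on the next slide): the agent function is just a table over percept sequences, while the agent program is a procedure invoked once per percept that must remember the history itself.

```python
# Agent function: an abstract mapping from percept *sequences* to actions,
# tabulated here for a few short sequences (illustrative entries only).
agent_function = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

percept_history = []                 # the program must remember the sequence

def agent_program(percept):
    """Concrete agent program: one percept in, one action out."""
    percept_history.append(percept)
    return agent_function.get(tuple(percept_history), "NoOp")

print(agent_program(("A", "Clean")))   # -> Right
print(agent_program(("B", "Dirty")))   # -> Suck
```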

Vacuum cleaner world 2 locations: square A, square B Agent perceives its location and contents (dirty/clean) Actions: left, right, suck, no_op

A vacuum cleaner agent What's the ‘right’ way to fill out the table? The ‘right’ way is the one that makes the agent good/intelligent
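
One plausible way to fill out the table, in the spirit of AIMA's reflex vacuum agent: suck if the current square is dirty, otherwise move to the other square.

```python
def reflex_vacuum_agent(percept):
    """Suck if dirty; otherwise move toward the other square."""
    location, status = percept              # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))   # -> Left
```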

Rationality “Do the right thing”, or more formally: “A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.” Need to ask questions: What do we mean by ‘best’? What's the outcome? What does it cost to get it? What's involved in computing an ‘expected’ outcome?

Rationality What is rational depends on: The performance measure (criterion for success) The percept sequence The agent's prior knowledge of the environment The actions that the agent can perform Rational agent: selects an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and its a priori knowledge

Performance measure Be careful in choosing! Vacuum cleaner agent: measure performance by ‘amount of dirt cleaned in an 8-hour shift’ Commercial management agent: ‘minimize expenditures in the present quarter’ Performance measures should be designed according to what you want in the environment, not how you think the agent should behave

Is the vacuum cleaner agent rational? Rational under the following assumptions: Performance measure: 1 point for each clean square at each step, over a ‘lifetime’ of 1000 steps The ‘geography’ is known, but the dirt distribution and the agent's initial position are not Clean squares stay clean, sucking cleans squares Left and Right don't take the agent outside the environment Available actions: Left, Right, Suck, NoOp The agent knows where it is and whether that location contains dirt
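
A toy simulation under exactly these assumptions makes the claim concrete (the starting configuration, both squares dirty with the agent at A, is our choice; the agent is the reflex sketch from above, repeated here for completeness):

```python
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

def run(steps=1000):
    """Score 1 point per clean square per step over the agent's lifetime."""
    status = {"A": "Dirty", "B": "Dirty"}
    loc, score = "A", 0
    for _ in range(steps):
        action = reflex_vacuum_agent((loc, status[loc]))
        if action == "Suck":
            status[loc] = "Clean"
        elif action in ("Right", "Left"):
            loc = "B" if action == "Right" else "A"
        score += sum(1 for s in status.values() if s == "Clean")
    return score

print(run())   # 1998 of a possible 2000: only the first cleaning steps cost points
```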

…rationality in vacuum But notice that under different assumptions, this vacuum cleaner agent would not be rational: If the performance measure penalizes unnecessary movement If clean squares can become dirty again If the environment is unknown, or contains more than A and B …

More on rationality Rationality is not omniscience Rationality is not clairvoyance Rationality is not (necessarily) success! Rational behavior often requires Info gathering: exploring an unknown environment Learning: finding out which action is likely to produce a desired outcome (and getting feedback from the environment on success/failure) …so a rational agent should be autonomous (it does not rely completely on the a priori knowledge of its designer; it learns from its own percepts)

Task environments: PEAS description TE: the ‘problem’ to which a rational agent will provide a ‘solution’ Example: designing an automated taxi Performance measure: safe, fast, legal, comfortable, maximizes profits Environment: roads (highway, alley, 1 lane, …), other traffic, pedestrians, customers… Actuators: steering, accelerator, display (for customers), horn (communicate with other vehicles), … Sensors: cameras, sonar, speedometer, GPS, engine sensors, keyboard, … In short, the taxi's sensors must tell it where it is, what else is on the road, and how fast it's going
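
A PEAS description is just structured data, so it can be written down directly; here is a hypothetical encoding (the class and field names are ours, not a standard API) filled in for the taxi:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Performance measure, Environment, Actuators, Sensors."""
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable", "maximizes profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "display", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "engine sensors"],
)
print(taxi.performance)
```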

…PEAS example: internet shopping agent Performance measures: price, quality, appropriateness, efficiency, … Environment: web pages, vendors, shippers Actuators: display to user, follow URL, fill in form “Sensors” (input?): HTML pages (text, graphics, scripts)

More on environments Environment can be real or artificial Environment can be simple (ex: conveyor belt for an inspection robot) or complex/rich (ex: flight simulator environment) The key point is the complexity of the relationships among the behavior of the agent, the percept sequence generated by the environment, and the performance measure

Properties of task environments Fully observable vs partially observable Fully: the agent's sensors give access to the complete state of the environment at each point in time Effectively fully observable if the sensors detect all aspects relevant to the choice of action (as determined by the performance measure) Fully: the agent doesn't need internal state to keep track of the world

…task environments Deterministic vs stochastic Deterministic: the next state of the environment is completely determined by the current state and the action executed by the agent A partially observable environment could appear to be stochastic Strategic environment: deterministic except for the actions of other agents

…task environments Episodic vs sequential Episodic environment: the agent's experience is divided into ‘atomic episodes’; each episode consists of the agent perceiving and then performing a single action Episodes are independent: the next episode doesn't depend on actions taken in previous episodes Ex: classification tasks, such as spotting defective parts on an assembly line Sequential: the current decision could affect all future decisions (ex: chess playing)

…task environments Static vs dynamic Dynamic: the environment can change while the agent is deliberating Semidynamic: the performance score can change with the passage of time, but the environment doesn't (ex: playing chess with a clock) Discrete vs continuous The distinction can be applied to the state of the environment, the way time is handled, and the percepts and actions of the agent

…task environments Single agent vs multiagent How do you decide whether another entity must be viewed as an agent? Is it an agent, or just a stochastically behaving object (ex: a wave on a beach)? Key question: can its behavior be described as maximizing a performance measure that depends on the actions of ‘our’ agent? Classify multiagent environments as (partially) competitive and/or (partially) cooperative Ex: taxis are partially competitive and partially cooperative

Environment summary Solitaire: observable, deterministic, sequential, static, discrete, single-agent Backgammon: observable, stochastic (dice), sequential, semidynamic, discrete, multi-agent Internet shopping: partially observable, partially deterministic, sequential, semidynamic, discrete, single-agent (except auctions) Taxi driving (“the real world”): partially observable, stochastic, sequential, dynamic, continuous, multi-agent
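
The same summary encoded as data (the short labels and dictionary layout are our own convention), which makes it easy to query or extend:

```python
DIMENSIONS = ("observable", "deterministic", "episodic", "static",
              "discrete", "single_agent")

ENVIRONMENTS = {
    "solitaire":         ("fully",     "yes",    "no", "yes",  "yes", "yes"),
    "backgammon":        ("fully",     "no",     "no", "semi", "yes", "no"),
    "internet shopping": ("partially", "partly", "no", "semi", "yes", "yes"),
    "taxi driving":      ("partially", "no",     "no", "no",   "no",  "no"),
}

for name, values in ENVIRONMENTS.items():
    print(f"{name:18}", dict(zip(DIMENSIONS, values)))
```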

Agent structure Agent = architecture + program Architecture: computing device, sensors, actuators Program: what you design to implement the agent function, mapping percepts to actions Inputs: the agent function takes the entire percept history; the agent program takes only the current percept; if the function needs the percept history, the agent must ‘remember’ it

Naïve structure: table driven The table represents the agent function explicitly; it contains the appropriate action for every possible percept sequence Infeasible size of the lookup table: for chess, at least 10^150 entries The challenge: produce rational behavior from a small amount of code
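
The infeasibility is easy to check with arithmetic: with |P| distinct percepts and a lifetime of T steps, the table needs the sum of |P|^t for t = 1..T rows. Even the toy vacuum world blows up (the numbers below are illustrative assumptions):

```python
# 4 percepts (2 locations x {Clean, Dirty}), a mere 20-step lifetime.
P, T = 4, 20
rows = sum(P**t for t in range(1, T + 1))
print(f"{rows:,} rows")   # ~1.5 trillion rows even for this toy world
```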

Agent types Four basic types, in order of increasing generality Simple reflex agents Model-based reflex agents Goal-based agents Utility-based agents All can be implemented as learning agents

Simple reflex agent Selects actions on the basis of the current percept, ignoring the rest of the percept history Ex: vacuum cleaner agent

Agent programs Specified by rules: known as condition-action rules, situation-action rules, productions, or if-then rules Usual format: If condition then action The challenge is to find the right way to specify conditions/actions (if such a thing exists), and the order in which the rules should be applied
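
A sketch of this rule format (the rules themselves are made up for the vacuum world): conditions as predicates over a state description, with rule order encoding priority.

```python
rules = [
    (lambda s: s["status"] == "Dirty", "Suck"),
    (lambda s: s["location"] == "A",   "Right"),
    (lambda s: s["location"] == "B",   "Left"),
]

def rule_match(state, rules):
    """Return the action of the first rule whose condition holds."""
    for condition, action in rules:     # first match wins: order = priority
        if condition(state):
            return action
    return "NoOp"

print(rule_match({"location": "A", "status": "Dirty"}, rules))  # -> Suck
```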

Model-based reflex agent Problem: the simple reflex agent works only if the environment is fully observable, i.e., the correct decision can be made on the basis of the current percept alone So partial observability is handled by having the agent keep track of the part of the world it can't see now: an internal state that depends on the percept history Ex: driving task: you can't always see other cars (blind spot), so you need a model of where they've been and where they're likely to be now A model-based agent includes a model of how the world works (where ‘world’ means the part pertaining to the agent) Note: the model combines the current percept with the old internal state to update the description of the current state (‘what the world is like now’)
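
A skeleton along these lines (the interface, i.e., the update_state and rule_match callables, is our assumption, not a fixed API):

```python
class ModelBasedReflexAgent:
    def __init__(self, model, rules, update_state, rule_match):
        self.state = None            # internal picture of the world
        self.model = model           # how the world evolves; effects of actions
        self.rules = rules
        self.update_state = update_state
        self.rule_match = rule_match
        self.last_action = None

    def __call__(self, percept):
        # Fold the new percept and the last action into the internal state,
        # using the model to fill in what the sensors can't see right now.
        self.state = self.update_state(self.state, self.last_action,
                                       percept, self.model)
        self.last_action = self.rule_match(self.state, self.rules)
        return self.last_action
```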

Goal-based agents The current state of the environment may not be enough to decide on an action: if the taxi is at an intersection, which way does it go? The agent has goal information describing situations that are desirable: for example, the destination of a passenger in the taxi More complex: the agent needs to model ‘what will happen if I do A’ More flexible, because model and goal knowledge are explicitly represented and can be modified as situations in the ‘world’ change, and as the agent learns more about its world
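
A minimal goal-based sketch (all names are illustrative): predict the result of each candidate action with the model, and pick one whose result satisfies the goal.

```python
def goal_based_action(state, actions, result, goal_test):
    for action in actions:
        if goal_test(result(state, action)):   # "what happens if I do A?"
            return action
    return None    # no single action reaches the goal; would need search

# Toy usage: taxi at an intersection choosing the turn toward the airport.
result = lambda s, a: {"turn_left": "airport", "turn_right": "downtown"}[a]
print(goal_based_action("intersection", ["turn_left", "turn_right"],
                        result, lambda s: s == "airport"))  # -> turn_left
```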

Model-based, utility-based agents Problem with goals alone: usually there are many action sequences that will satisfy the goal (ex: the many routes a taxi can take). How to choose between them? Taxi: quicker, safer, cheaper We usually talk about the ‘utility’ of an outcome for an agent, not how ‘happy’ the agent will be A utility function maps a state or a sequence of states (a plan) onto a real number describing the degree of ‘happiness’ It can be used to choose when conflicting goals are present (ex: taxi, speed vs safety); the utility function specifies the tradeoff
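
The same idea with a utility function instead of a binary goal test; the outcome numbers below are made up to show a speed/safety tradeoff resolved by a single real-valued function.

```python
def best_action(state, actions, result, utility):
    """Pick the action whose predicted outcome has the highest utility."""
    return max(actions, key=lambda a: utility(result(state, a)))

outcomes = {"highway":  {"time": 20, "risk": 0.03},
            "backroad": {"time": 35, "risk": 0.005}}
result = lambda s, a: outcomes[a]
utility = lambda o: -o["time"] - 1000 * o["risk"]   # the tradeoff, made explicit

print(best_action("start", ["highway", "backroad"], result, utility))
# -> backroad: slower but safe enough to win under this weighting
```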

Learning agents Learning element: responsible for making improvements in action choice Performance element: selects external actions Critic: provides feedback on how the agent is performing and determines how the performance element should be modified to do better in the future Note that the performance standard is outside the agent: we don't want the agent to be able to modify its own standards! Otherwise, it could choose to make the standard easy to meet Problem generator: suggests exploratory actions, perhaps even something that's suboptimal in the short run but better in the long run
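
A skeleton of the four components (the interfaces are our assumptions, not from the slides); note that the critic embodies the fixed performance standard, outside the agent's control.

```python
class LearningAgent:
    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.perform = performance_element   # selects external actions
        self.learn = learning_element        # improves action selection
        self.critic = critic                 # feedback vs a fixed standard
        self.suggest = problem_generator     # proposes exploratory actions

    def __call__(self, percept):
        feedback = self.critic(percept)       # how well are we doing?
        self.learn(feedback, self.perform)    # adjust the performance element
        exploratory = self.suggest()          # None unless exploring
        return exploratory or self.perform(percept)
```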

Summary Agents interact with environments through actuators and sensors The agent function defines the behavior The performance measure evaluates the environment sequence A perfectly rational agent maximizes expected performance PEAS descriptions define task environments Dimensions: observable? deterministic? episodic? static? discrete? single-agent? Architectures: reflex, reflex with state (model-based), goal-based, utility-based