Chapter 2 – AIMA Dr. Aarij Mahmood Hussaan


Intelligent Agents
Chapter 2 – AIMA
Dr. Aarij Mahmood Hussaan

What is an (Intelligent) Agent?
An over-used, over-loaded, and often misused term.
Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through its actuators so as to maximize progress towards its goals.

What is an (Intelligent) Agent?
Two common ways to characterize an agent's task:
  PAGE: Percepts, Actions, Goals, Environment
  PEAS [AIMA]: Performance measure, Environment, Actuators, Sensors
Agents are typically task-specific and specialized: well-defined goals and environment.
The notion of an agent is meant to be a tool for analyzing systems; it is not a new kind of hardware or a new programming language.

Percepts and percept sequences
Percept: the agent's perceptual inputs at any given instant.
Percept sequence: the complete history of everything the agent has ever perceived.
In general, an agent's choice of action can depend on the entire percept sequence observed to date.

Agents and environments
Agents include humans, robots, softbots, thermostats, etc.
The agent function maps percept histories to actions: f : P* → A
The agent program runs on the physical architecture to produce f.
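To make the distinction between the agent function f : P* → A and the agent program concrete, here is a minimal Python sketch. The names Agent, program, run_episode, and the Environment object with percept() and execute() methods are illustrative assumptions, not part of the slides.

# A minimal sketch of the agent/environment loop, assuming a simple
# Environment object with percept() and execute(action) methods
# (hypothetical names, for illustration only).

class Agent:
    """The agent program: implements f : P* -> A by keeping the percept history."""
    def __init__(self):
        self.percept_history = []          # P*, the percept sequence to date

    def program(self, percept):
        self.percept_history.append(percept)
        return self.choose_action(self.percept_history)

    def choose_action(self, history):
        # Placeholder policy: a real agent would map the history to an action.
        return "NoOp"

def run_episode(agent, environment, steps):
    """Drive the perceive-act loop for a fixed number of steps."""
    for _ in range(steps):
        percept = environment.percept()    # sense
        action = agent.program(percept)    # decide
        environment.execute(action)        # act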

Vacuum-cleaner world
Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp

A vacuum-cleaner agent
function Reflex-Vacuum-Agent([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
What is the right function? Can it be implemented in a small agent program?
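It can: as a rough illustration, the pseudocode above translates directly into a few lines of Python. The constant names A and B and the function name reflex_vacuum_agent are illustrative, not part of the original slides.

# A minimal sketch of the reflex vacuum agent from the pseudocode above.
A, B = "A", "B"

def reflex_vacuum_agent(percept):
    """percept is a (location, status) pair, e.g. (A, "Dirty")."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == A:
        return "Right"
    elif location == B:
        return "Left"

# Example: reflex_vacuum_agent((A, "Dirty")) returns "Suck".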

Rationality
A fixed performance measure evaluates the environment sequence:
  one point per square cleaned up in time T?
  one point per clean square per time step, minus one per move?
  penalize for > k dirty squares?
A rational agent chooses whichever action maximizes the expected value of the performance measure, given the percept sequence to date.
Rational ≠ omniscient: percepts may not supply all relevant information.
Rational ≠ clairvoyant: action outcomes may not be as expected.
Hence, rational ≠ successful.
Rationality ⇒ exploration, learning, autonomy.
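To make the second candidate measure concrete, here is a small sketch that scores a recorded run of the two-square vacuum world under "one point per clean square per time step, minus one per move". The helper name score_run and the state/action encoding are assumptions for illustration.

# Score a recorded run of the vacuum world under the measure:
# +1 per clean square per time step, -1 per move (Left/Right).
def score_run(states, actions):
    """states[t] maps square name -> "Clean"/"Dirty" at step t;
    actions[t] is the action taken at step t."""
    score = 0
    for state, action in zip(states, actions):
        score += sum(1 for status in state.values() if status == "Clean")
        if action in ("Left", "Right"):
            score -= 1
    return score

# Example: two steps, agent sucks then moves right.
states = [{"A": "Dirty", "B": "Clean"}, {"A": "Clean", "B": "Clean"}]
actions = ["Suck", "Right"]
print(score_run(states, actions))   # 1 + (2 - 1) = 2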

Agent
To design a rational agent, we must specify the task environment.
Consider, e.g., the task of designing an automated taxi:
  Performance measure??
  Environment??
  Actuators??
  Sensors??

Agent
To design a rational agent, we must specify the task environment.
Consider, e.g., the task of designing an automated taxi:
  Performance measure?? safety, destination, profits, legality, comfort, …
  Environment?? streets/highways, tracks, pedestrians, weather, …
  Actuators?? steering, accelerator, brake, horn, speaker/display, …
  Sensors?? video, accelerometers, gauges, engine sensors, keyboard, GPS, …

Some more examples

A Windshield Wiper Agent
  Goals: ?
  Percepts: ?
  Sensors: ?
  Actuators: ?
  Actions: ?
  Environment: ?

A Windshield Wiper Agent
  Goals: keep the windshield clean & maintain visibility
  Percepts: Raining, Dirty
  Sensors: camera (moisture sensor)
  Actuators: wipers (left, right, back)
  Actions: Off, Slow, Medium, Fast
  Environment: inner city, freeways, highways, weather …

Internet shopping agent
  Performance measure?? price, quality, appropriateness, efficiency
  Environment?? current and future WWW sites, vendors, shippers
  Actuators?? display to user, follow URL, fill in form
  Sensors?? HTML pages (text, graphics, scripts)

Interacting Agents
Collision Avoidance Agent (CAA)
  Goals: avoid running into obstacles
  Percepts: ?
  Sensors: ?
  Actuators: ?
  Actions: ?
  Environment: freeway
Lane Keeping Agent (LKA)
  Goals: stay in current lane
  Percepts: ?
  Sensors: ?
  Actuators: ?
  Actions: ?
  Environment: freeway

Interacting Agents
Collision Avoidance Agent (CAA)
  Goals: avoid running into obstacles
  Percepts: obstacle distance, velocity, trajectory
  Sensors: vision, proximity sensing
  Actuators: steering wheel, accelerator, brakes, horn, headlights
  Actions: steer, speed up, brake, blow horn, signal (headlights)
  Environment: freeway
Lane Keeping Agent (LKA)
  Goals: stay in current lane
  Percepts: lane center, lane boundaries
  Sensors: vision
  Actuators: steering wheel, accelerator, brakes
  Actions: steer, speed up, brake
  Environment: freeway

Conflict Resolution by Action Selection Agents
  Override: CAA overrides LKA
  Arbitrate: if obstacle is close then CAA else LKA
  Compromise: choose an action that satisfies both agents
  Any combination of the above
Challenge: doing the right thing.
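As a rough illustration of the "Arbitrate" strategy above, the following sketch hands control to the collision-avoidance agent only when an obstacle is close. The threshold value and the names CLOSE_THRESHOLD, arbitrate, caa_action, and lka_action are assumptions for illustration.

# Arbitration between the collision-avoidance agent (CAA) and the
# lane-keeping agent (LKA): CAA takes over only when an obstacle is close.
CLOSE_THRESHOLD = 10.0   # metres; assumed value for illustration

def arbitrate(percept, caa_action, lka_action):
    """percept includes the distance to the nearest obstacle."""
    if percept["obstacle_distance"] < CLOSE_THRESHOLD:
        return caa_action(percept)   # safety-critical agent wins
    return lka_action(percept)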

Behavior and performance of IAs
Perception (sequence) to action mapping: f : P* → A
  ◦ Ideal mapping: specifies which actions an agent ought to take at any point in time
  ◦ Description: look-up table, closed form, etc.
Performance measure: a subjective measure to characterize how successful an agent is (e.g., speed, power usage, accuracy, money, etc.)
(Degree of) autonomy: to what extent is the agent able to make decisions and take actions on its own?

Look up table

Closed form
Output (degree of rotation) = F(distance)
E.g., F(d) = 10/d (distance cannot be less than 1/10)
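A minimal sketch of such a closed-form mapping; the function name steering_angle and the clamping behaviour are assumptions, not from the slides.

# Closed-form percept-to-action mapping: F(d) = 10 / d,
# with the distance clamped so it cannot be less than 1/10.
def steering_angle(distance):
    d = max(distance, 0.1)    # enforce the lower bound on distance
    return 10.0 / d           # degrees of rotation

# Example: steering_angle(2.0) -> 5.0 degrees; steering_angle(0.01) -> 100.0 degrees.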

How is an agent different from other software?
  Agents are autonomous, that is, they act on behalf of the user.
  Agents contain some level of intelligence, from fixed rules to learning engines that allow them to adapt to changes in the environment.
  Agents don't only act reactively, but sometimes also proactively.
  Agents have social ability, that is, they communicate with the user, the system, and other agents as required.
  Agents may also cooperate with other agents to carry out more complex tasks than they themselves can handle.
  Agents may migrate from one system to another to access remote resources or even to meet other agents.

Environment types
  Fully observable vs. partially observable
  Single-agent vs. multi-agent
  Deterministic vs. stochastic
  Episodic vs. sequential
  Static vs. dynamic
  Discrete vs. continuous
  (Known vs. unknown)

Environment types

                   Solitaire   Chess   Internet shopping       Taxi
  Observable?      Yes         Yes     No                      No
  Deterministic?   Yes         Yes     Partly                  No
  Episodic?        No          No      No                      No
  Static?          Yes         Semi    Semi                    No
  Discrete?        Yes         Yes     Yes                     No
  Single-agent?    Yes         No      Yes (except auctions)   No

The environment type largely determines the agent design.
The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.

Structure of Intelligent Agents
Agent = architecture + program
Agent program: the implementation of f : P* → A, the agent's perception-action mapping
Architecture: a device that can execute the agent program (e.g., general-purpose computer, specialized device, beobot, etc.)

function Skeleton-Agent(percept) returns action
  memory ← UpdateMemory(memory, percept)
  action ← ChooseBestAction(memory)
  memory ← UpdateMemory(memory, action)
  return action
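A rough Python rendering of the Skeleton-Agent above, assuming placeholder update_memory and choose_best_action helpers; these names are illustrative, not part of the slides.

# Python rendering of Skeleton-Agent: memory persists across calls,
# and both the percept and the chosen action are folded into it.
class SkeletonAgent:
    def __init__(self, update_memory, choose_best_action):
        self.memory = None                 # internal state, initially empty
        self.update_memory = update_memory
        self.choose_best_action = choose_best_action

    def __call__(self, percept):
        self.memory = self.update_memory(self.memory, percept)
        action = self.choose_best_action(self.memory)
        self.memory = self.update_memory(self.memory, action)
        return action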

Using a look-up table to encode f : P* → A
Example: collision avoidance
  ◦ Sensors: 3 proximity sensors
  ◦ Effectors: steering wheel, brakes
How to generate? How large? How to select an action?

Using a look-up table to encode f : P* → A
Example: collision avoidance
  ◦ Sensors: 3 proximity sensors
  ◦ Effectors: steering wheel, brakes
How to generate: for each percept p ∈ Pl × Pm × Pr, generate an appropriate action a ∈ S × B
How large: size of table = #possible percepts × #possible actions = |Pl| · |Pm| · |Pr| · |S| · |B|
E.g., P = {close, medium, far}³, A = {left, straight, right} × {on, off}, so size of table = 27 · 3 · 2 = 162
How to select an action? Search.
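A rough sketch of how such a table could be generated and queried; the specific value sets and the pick_action policy below are illustrative assumptions, not from the slides.

# Enumerate every percept combination and store an action for each one.
from itertools import product

DISTANCES = ("close", "medium", "far")           # values for each proximity sensor
STEERING  = ("left", "straight", "right")
BRAKES    = ("on", "off")

def pick_action(left, mid, right):
    """Placeholder policy used to fill the table; a real designer
    would choose each entry by hand or by search."""
    if mid == "close":
        return ("left", "on")                    # swerve and brake
    return ("straight", "off")

# Build the look-up table: 3*3*3 = 27 percepts, each mapped to one of 3*2 = 6 actions.
table = {p: pick_action(*p) for p in product(DISTANCES, repeat=3)}

# Selecting an action is then a single dictionary look-up.
print(table[("far", "close", "far")])            # -> ('left', 'on')
print(len(table))                                # -> 27 entries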

Agent types
Four basic types, in order of increasing generality:
  simple reflex agents
  reflex agents with state
  goal-based agents
  utility-based agents
All of these can be turned into learning agents.

Simple reflex agents

Simple reflex agents
function Reflex-Vacuum-Agent([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left

Reflex agents with state
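For the architecture on this slide, a minimal code sketch of a reflex agent with internal state might look as follows; the constructor arguments update_state and rules are assumptions for illustration, not from the slides.

# Sketch of a reflex agent with state: the internal state summarizes the
# percept history, and condition-action rules are matched against it.
class ReflexAgentWithState:
    def __init__(self, update_state, rules):
        self.state = {}              # internal model of the world
        self.update_state = update_state
        self.rules = rules           # list of (condition, action) pairs

    def __call__(self, percept):
        # Fold the new percept (and knowledge of how the world evolves)
        # into the internal state.
        self.state = self.update_state(self.state, percept)
        # Fire the first rule whose condition matches the state.
        for condition, action in self.rules:
            if condition(self.state):
                return action
        return "NoOp"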


Goal-based agents

Utility-based agents
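Where a goal-based agent asks "will this action achieve the goal?", a utility-based agent ranks outcomes by how desirable they are. A rough sketch, assuming deterministic outcomes for simplicity; the names predict and utility are illustrative helpers, not from the slides.

# Sketch of utility-based action selection: predict the outcome of each
# action from the current state and pick the one with the highest utility.
def utility_based_action(state, actions, predict, utility):
    """predict(state, action) -> predicted next state (assumed helper);
    utility(state) -> a number measuring how desirable that state is."""
    return max(actions, key=lambda a: utility(predict(state, a)))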

Learning agents

Summary
  Agents interact with environments through actuators and sensors.
  The agent function describes what the agent does in all circumstances.
  The performance measure evaluates the environment sequence.
  A perfectly rational agent maximizes expected performance.
  Agent programs implement (some) agent functions.
  PEAS descriptions define task environments.
  Environments are categorized along several dimensions: observable? deterministic? episodic? static? discrete? single-agent?
  Several basic agent architectures exist: reflex, reflex with state, goal-based, utility-based.