Artificial Intelligence Intelligent Agents


Artificial Intelligence: Intelligent Agents. Dr. Shahriar Bijani, Shahed University, Spring 2017

Slides' References: Michael Rovatsos, Agent-Based Systems, Edinburgh University, 2016; S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 3rd ed., Prentice Hall, 2010, Chapter 2.

Outline Introduction Agent & Multi-agent Systems Agents and environments Rationality PEAS Environment types Agent types

New challenges for computer systems. Traditional design problem: how can I build a system that produces the correct output given some input? Modern-day design problem: how can I build a system that can operate independently on my behalf in a networked, distributed, large-scale environment, in which it will need to interact with many other components? Distributed systems in which components have different goals and need to cooperate had received little study until recently.

What is an Agent? (1) An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators (effectors). Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators. Robotic agent: cameras and infrared range finders for sensors; various motors for actuators. The agent can only influence the environment but not fully control it (sensor/actuator failure, non-determinism). An actuator is a component of a machine that is responsible for moving or controlling a mechanism or system.

What is an Agent? (2) Definition from the agents/MAS community (Wooldridge & Jennings): an agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives. This adds a second dimension to the agent definition: the relationship between agent and designer/user. The agent is capable of independent action, and that action is purposeful.

Agent Autonomy
Autonomy is a prerequisite for:
- delegating (assigning) complex tasks to agents
- ensuring flexible action in unpredictable environments
A system is autonomous:
- if it can learn and adapt
- if its behavior is determined by its own experience
- if it requires little help from the human user
- if we don't have to tell it what to do step by step
- if it can choose its own goals and the ways to achieve them
- if we don't understand its internal workings
Autonomy dilemma: how to make the agent smart without losing control over it.
In short: an agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt).

Agent Autonomy Fundamental aspect of autonomy: we want to tell the agent what to do, but not how to do it.

Agents and environments The agent function maps percept histories to actions: f: P* → A. The agent program runs on the physical architecture to produce f: agent = architecture + program.

Vacuum-cleaner world Percepts: location and contents, e.g., [A,Dirty] Actions: Left, Right, Suck, NoOp
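The vacuum-cleaner world above can be sketched in a few lines of Python; this is an illustrative toy, not the textbook's code, and the class and method names are assumptions:

```python
# Minimal sketch of the two-square vacuum world (squares A and B).
# Names (VacuumWorld, percept, execute) are illustrative, not standard.
class VacuumWorld:
    def __init__(self, dirt=("Dirty", "Dirty"), location="A"):
        self.status = {"A": dirt[0], "B": dirt[1]}
        self.location = location

    def percept(self):
        # The agent perceives its location and that square's contents,
        # e.g. ["A", "Dirty"] as on the slide.
        return [self.location, self.status[self.location]]

    def execute(self, action):
        # The four actions from the slide: Left, Right, Suck, NoOp.
        if action == "Suck":
            self.status[self.location] = "Clean"
        elif action == "Right":
            self.location = "B"
        elif action == "Left":
            self.location = "A"
        # "NoOp" leaves the world unchanged.
```

An agent program can then be run against this environment by repeatedly reading `percept()` and calling `execute(action)`.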

Multiagent Systems (MAS) Two fundamental ideas: individual agents are capable of autonomous action to a certain extent (they don't need to be told exactly what to do), and these agents interact with each other in multiagent systems (and may represent users with different goals). Foundational problems of multiagent systems research: the agent design problem (how should agents act to do their tasks?) and the society design problem (how should agents interact to do their tasks?). These are known as the micro and macro perspectives of MAS.

Applications of Multi-agent Systems Two types of agent applications: Distributed systems (processing nodes) Personal software assistants (aiding a user) Agents have been applied to various application areas: Workflow/business process management Distributed sensing Information retrieval and management Electronic commerce Human-computer interfaces Virtual environments Social simulation

Rational agents An agent should try to "do the right thing", based on what it can perceive and the actions it can perform. The "right action" is the one that will cause the agent to be most successful. Performance measure: an objective criterion for success of an agent's behavior. E.g., for a vacuum-cleaner agent: amount of dirt cleaned up, amount of time taken, amount of electricity consumed, amount of noise generated, etc.

Rational agents For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
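The definition above boils down to "pick the action with the highest expected performance". A toy illustration, where the action names and all the probability/score numbers are hypothetical:

```python
# Toy illustration of rational choice: pick the action whose expected
# performance over its possible outcomes is highest. Numbers are made up.
def expected_value(outcomes):
    # outcomes: list of (probability, performance score) pairs
    return sum(p * score for p, score in outcomes)

def rational_action(action_models):
    # action_models: dict mapping action -> list of (probability, score)
    return max(action_models, key=lambda a: expected_value(action_models[a]))

# Hypothetical beliefs of a vacuum agent standing on a dirty square:
models = {
    "Suck":  [(1.0, 10)],            # certainly cleans this square
    "Right": [(0.5, 4), (0.5, 0)],   # might find dirt next door
    "NoOp":  [(1.0, 0)],             # does nothing
}
```

Under these (made-up) beliefs, `rational_action(models)` returns `"Suck"`: rationality is relative to the performance measure and the evidence, not to hindsight.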

Rational agents Rationality is distinct from omniscience (knowing everything). Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration). Omniscience: all-knowing with infinite knowledge.

What is agent technology?
Agents as a software engineering paradigm: interaction is the most important aspect of complex software systems; agents are ideal for loosely coupled "black-box" components.
Agents as a tool for understanding human societies: human society is very complex, and computer simulation can be useful (the field of agent-based social simulation).
Agents vs. distributed systems: there is a long tradition of distributed systems research, but MAS are not simply distributed systems, because their components have different goals.
Agents vs. economics/game theory: distributed rational decision making is extensively studied in economics, and game theory is very popular; many strengths, but also objections.

Agents vs. AI Agents grew out of "distributed" AI; there is much debate over whether MAS is a sub-field of AI or vice versa. AI is mostly concerned with the building blocks of intelligence: reasoning and problem-solving, planning and learning, perception and action. The agents field is more concerned with combining these components (agents can also be built without any AI) and with social interaction, which has mostly been ignored by standard AI.

PEAS PEAS: Performance measure, Environment, Actuators, Sensors Must first specify the setting for intelligent agent design Consider, e.g., the task of designing an automated taxi driver: Performance measure: Safe, fast, legal, comfortable trip, maximize profits Environment: Roads, other traffic, pedestrians, customers Actuators: Steering wheel, accelerator, brake, signal, horn Sensors: Cameras, speedometer, GPS, odometer, engine sensors, keyboard
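A PEAS description is just a structured record; the taxi-driver specification from the slide can be captured like this (the `PEAS` class itself is an illustrative convenience, not a standard API):

```python
# A PEAS task-environment description as a plain record.
# The class is illustrative; the field contents are the slide's taxi example.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "speedometer", "GPS", "odometer",
             "engine sensors", "keyboard"],
)
```

Writing the four components out explicitly like this is the first step of any agent design: the rest of the chapter's agent-program choices depend on them.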


PEAS Example Agent: Part-picking robot Performance measure: Percentage of parts in correct bins Environment: Conveyor belt with parts, bins Actuators: Jointed arm and hand Sensors: Camera, joint angle sensors

PEAS Example Agent: Interactive English Teacher (tutor) Performance measure: Maximize student's score on test Environment: Set of students Actuators: Screen display (exercises, suggestions, corrections) Sensors: Keyboard

Environment types
Fully observable (vs. partially observable): the agent's sensors give it access to the complete state of the environment at each point in time.
Deterministic (vs. stochastic): the next state of the environment is completely determined by the current state and the agent's action. If the environment is deterministic except for the actions of other agents, the environment is strategic.
Episodic (vs. sequential): the agent's experience is divided into atomic "episodes"; each episode consists of the agent perceiving and then performing a single action, and the choice of action in each episode depends only on the episode itself.

Environment types
Static (vs. dynamic): the environment is unchanged while the agent is thinking. The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.
Discrete (vs. continuous): a limited number of distinct, clearly defined percepts and actions.
Single agent (vs. multiagent): an agent operating by itself in an environment.
Alternative terminology:
Accessible vs. inaccessible: can agents obtain complete and correct information about the state of the world?
Deterministic vs. non-deterministic: do actions have guaranteed and uniquely defined effects?
Static vs. dynamic: does the environment change by processes beyond agent control?
Episodic vs. non-episodic: can agent decisions be made for different, independent episodes?
Discrete vs. continuous: is the number of actions and percepts fixed and finite?
Open environments = inaccessible, non-deterministic, dynamic, continuous environments.

Environment types

                   Chess with a clock   Chess without a clock   Taxi driving
Fully observable   Yes                  Yes                     No
Deterministic      Strategic            Strategic               No
Episodic           No                   No                      No
Static             Semi                 Yes                     No
Discrete           Yes                  Yes                     No
Single agent       No                   No                      No

The environment type largely determines the agent design. The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.

Agent functions and programs An agent is completely specified by the agent function, mapping percept sequences (histories) to actions. Agent program: input: one percept; output: an action; aim: find a concise way to implement the rational agent function.

Table-lookup agent
Agent program example (see the vacuum table):

function Table-Driven-Agent(percept) returns an action
    static: percepts, a sequence, initially empty
            table, a table of actions, indexed by percept sequences
    append percept to the end of percepts
    action ← Lookup(percepts, table)
    return action

Weak points: a huge table, which takes a long time to build; no autonomy; even with learning, it needs a long time to learn the table entries.
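The pseudocode above can be made runnable in a few lines of Python; the function names and the table fragment below are illustrative, not the textbook's code:

```python
# Runnable sketch of the table-driven agent: the entire behaviour is a
# lookup table indexed by the complete percept sequence seen so far.
def make_table_driven_agent(table):
    percepts = []                      # the growing percept history
    def program(percept):
        percepts.append(percept)
        # Look up the whole history; fall back to NoOp if unspecified.
        return table.get(tuple(percepts), "NoOp")
    return program

# A tiny fragment of the vacuum-world table. Note the keys are percept
# *sequences*, not single percepts: that is why the table explodes
# combinatorially, the "huge table" weak point above.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
```

Running `make_table_driven_agent(table)` and feeding it percepts one by one reproduces the table's behaviour, and makes the scaling problem concrete: with P possible percepts and a lifetime of T steps, the table needs on the order of P^T entries.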

Agent types Four basic types in order of increasing generality: Simple reflex agents Model-based reflex agents Goal-based agents Utility-based agents

Simple reflex agents

Simple reflex agents

function Simple-Reflex-Agent(percept) returns an action
    static: rules, a set of condition-action rules
    state ← Interpret-Input(percept)
    rule ← Rule-Match(state, rules)
    action ← Rule-Action(rule)
    return action
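For the vacuum world, the pseudocode above collapses to a few condition-action rules; this Python sketch is illustrative (the rule set is an assumption, not the textbook's code):

```python
# Runnable sketch of a simple reflex agent for the vacuum world:
# the action depends only on the CURRENT percept (no history, no model).
def interpret_input(percept):
    # In this tiny world the "state" is just the percept itself.
    location, status = percept
    return (location, status)

def simple_reflex_vacuum_agent(percept):
    state = interpret_input(percept)
    if state[1] == "Dirty":       # rule: dirty square -> Suck
        return "Suck"
    elif state[0] == "A":         # rule: clean and at A -> move Right
        return "Right"
    else:                         # rule: clean and at B -> move Left
        return "Left"
```

Because the agent ignores percept history, it works only when the correct action is decidable from the current percept alone, which is exactly the limitation the model-based agent below addresses.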

Model-based reflex agents

Model-based reflex agents

function Reflex-Agent-With-State(percept) returns an action
    static: state, a description of the current world state
            rules, a set of condition-action rules
            action, the most recent action, initially none
    state ← Update-State(state, action, percept)
    rule ← Rule-Match(state, rules)
    action ← Rule-Action(rule)
    return action
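A minimal sketch of the model-based idea in the vacuum world, with the internal state being the last known status of each square (the rule set and names are assumptions for illustration):

```python
# Sketch of a model-based reflex agent: it keeps an internal model of
# the squares it has seen, so it can stop once both are known clean,
# something a simple reflex agent (which sees only one square) cannot do.
def make_model_based_vacuum_agent():
    model = {"A": None, "B": None}     # internal state: last known status
    def program(percept):
        location, status = percept
        model[location] = status       # Update-State with the new percept
        if status == "Dirty":
            return "Suck"
        if model["A"] == "Clean" and model["B"] == "Clean":
            return "NoOp"              # model says: everything is clean
        return "Right" if location == "A" else "Left"
    return program
```

The closure's `model` dict plays the role of the `state` variable in the pseudocode: it persists across calls and is updated from each new percept.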

Goal-based agents

Utility-based agents Utility function: maps a state s to a number.
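A utility-based agent compares states by the number its utility function assigns them and picks the action whose predicted successor state scores highest. A minimal sketch, where the utility function, the transition model, and the action names are all hypothetical:

```python
# Sketch of utility-based action selection: utility maps a state to a
# number; the agent picks the action with the best predicted successor.
def utility(state):
    # Hypothetical utility: the number of clean squares in the state.
    return sum(1 for status in state.values() if status == "Clean")

def transition(state, action):
    # Hypothetical one-step model of what each action achieves.
    new_state = dict(state)
    if action == "Suck-A":
        new_state["A"] = "Clean"
    elif action == "Suck-B":
        new_state["B"] = "Clean"
    return new_state                   # "NoOp" changes nothing

def best_action(state, actions):
    return max(actions, key=lambda a: utility(transition(state, a)))
```

Unlike a goal-based agent, which only distinguishes goal states from non-goal states, the numeric utility lets the agent trade off between conflicting objectives and between degrees of success.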

Learning agents