Presentation on theme: "Artificial Intelligence Intelligent Agents"— Presentation transcript:

1 Artificial Intelligence Intelligent Agents
Dr. Shahriar Bijani, Shahed University, Spring 2017

2 Slides’ References
Michael Rovatsos, Agent-Based Systems, University of Edinburgh, 2016.
S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 3rd Edition, Chapter 2, Prentice Hall, 2010.

3 Outline
Introduction
Agents & Multi-agent Systems
Agents and environments
Rationality
PEAS
Environment types
Agent types

4 New challenges for computer systems
Traditional design problem: How can I build a system that produces the correct output given some input?
Modern-day design problem: How can I build a system that can operate independently on my behalf in a networked, distributed, large-scale environment, in which it will need to interact with various other components?
Distributed systems in which different components have different goals and need to cooperate have only recently become a subject of study.

5 What is an Agent? (1)
An agent is anything that can perceive its environment through sensors and act upon that environment through actuators (effectors).
Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators.
Robotic agent: cameras and infrared range finders for sensors; various motors for actuators.
The agent can only influence the environment, not fully control it (sensor/actuator failure, non-determinism).
An actuator is a component of a machine that is responsible for moving or controlling a mechanism or system.

6 What is an Agent? (2)
Definition from the agents/MAS community (Wooldridge & Jennings): An agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives.
This adds a second dimension to the agent definition: the relationship between agent and designer/user.
The agent is capable of independent action.
Agent action is purposeful.

7 Agent Autonomy
Autonomy is a prerequisite for:
delegating (assigning) complex tasks to agents
ensuring flexible action in unpredictable environments
A system is autonomous:
if it can learn and adapt
if its behavior is determined by its own experience
if it requires little help from the human user
if we don’t have to tell it what to do step by step
if it can choose its own goal and the way to achieve it
if we don’t understand its internal workings
Autonomy dilemma: how to make the agent smart without losing control over it.
An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt).

8 Agent Autonomy
Fundamental aspect of autonomy: we want to tell the agent what to do, but not how to do it.

9 Agents and environments
The agent function maps from percept histories to actions: f : P* → A.
The agent program runs on the physical architecture to produce f.
agent = architecture + program
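As an illustration (not from the original slides), here is a minimal Python sketch of this separation; the class and method names are assumptions made for the example:

    # A minimal sketch: the "architecture" stores the percept history P*,
    # and the "program" is a function from that history to an action.
    class Agent:
        def __init__(self, program):
            self.percepts = []       # percept history
            self.program = program   # the agent program implementing f

        def step(self, percept):
            self.percepts.append(percept)
            return self.program(self.percepts)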

10 Vacuum-cleaner world Percepts: location and contents, e.g., [A,Dirty]
Actions: Left, Right, Suck, NoOp
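As a concrete illustration (a sketch, not from the slides; percepts are assumed to be (location, status) pairs), a hard-coded reflex agent for this world could look like:

    # Hard-coded reflex behaviour for the two-square vacuum world.
    def reflex_vacuum_agent(percept):
        location, status = percept          # e.g. ('A', 'Dirty')
        if status == 'Dirty':
            return 'Suck'
        if location == 'A':
            return 'Right'
        if location == 'B':
            return 'Left'
        return 'NoOp'

    print(reflex_vacuum_agent(('A', 'Dirty')))   # -> Suck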

11 Multiagent Systems (MAS)
Two fundamental ideas:
Individual agents are capable of autonomous action to a certain extent (they don’t need to be told exactly what to do).
These agents interact with each other in multiagent systems (and may represent users with different goals).
Foundational problems of multiagent systems research:
The agent design problem: how should agents act to carry out their tasks?
The society design problem: how should agents interact to carry out their tasks?
These are known as the micro and macro perspectives of MAS.

12 Applications of Multi-agent Systems
Two types of agent applications: distributed systems (processing nodes) and personal software assistants (aiding a user).
Agents have been applied to various application areas:
Workflow/business process management
Distributed sensing
Information retrieval and management
Electronic commerce
Human-computer interfaces
Virtual environments
Social simulation

13 Rational agents
An agent should try to "do the right thing", based on what it can perceive and the actions it can perform. The "right action" is the one that will cause the agent to be most successful.
Performance measure: an objective criterion for the success of an agent's behavior.
E.g., for a vacuum-cleaner agent: amount of dirt cleaned up, amount of time taken, amount of electricity consumed, amount of noise generated, etc.
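To make the idea concrete, here is a sketch of one possible performance measure for the vacuum-cleaner agent (the weights below are purely illustrative assumptions, not from the slides):

    # Rewards cleaning, penalizes time and energy use; the weights are arbitrary.
    def vacuum_performance(dirt_cleaned, time_steps, energy_used):
        return 10 * dirt_cleaned - 1 * time_steps - 0.5 * energy_used

    print(vacuum_performance(dirt_cleaned=2, time_steps=5, energy_used=4))   # -> 13.0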

14 Rational agents
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

15 Rational agents
Rationality is distinct from omniscience (knowing everything); an omniscient agent would be all-knowing, with infinite knowledge.
Agents can perform actions in order to modify future percepts so as to obtain useful information (information gathering, exploration).

16 What is agent technology?
Agents as a software engineering paradigm: interaction is the most important aspect of complex software systems, and agents are ideal for loosely coupled "black-box" components.
Agents as a tool for understanding human societies: human society is very complex, and computer simulation can be useful (the field of agent-based social simulation).
Agents vs. distributed systems: there is a long tradition of distributed systems research, but MAS are not simply distributed systems, because their components have different goals.
Agents vs. economics/game theory: distributed rational decision making has been studied extensively in economics, and game theory is very popular; it has many strengths but also attracts objections.

17 Agents vs. AI
Agents grew out of "distributed" AI.
There is much debate about whether MAS is a sub-field of AI or vice versa.
AI is mostly concerned with the building blocks of intelligence: reasoning and problem-solving, planning and learning, perception and action.
The agents field is more concerned with combining these components (agents can also be built without any AI) and with social interaction, which has mostly been ignored by standard AI.

18 PEAS
PEAS: Performance measure, Environment, Actuators, Sensors
We must first specify the setting for intelligent agent design. Consider, e.g., the task of designing an automated taxi driver:
Performance measure: safe, fast, legal, comfortable trip, maximize profits
Environment: roads, other traffic, pedestrians, customers
Actuators: steering wheel, accelerator, brake, signal, horn
Sensors: cameras, speedometer, GPS, odometer, engine sensors, keyboard
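As an aside (a sketch not found in the slides; the field names are assumptions), a PEAS description is just structured data and can be recorded as such:

    from dataclasses import dataclass

    @dataclass
    class PEAS:
        performance: list   # criteria for success
        environment: list   # what the agent operates in
        actuators: list     # how it acts
        sensors: list       # how it perceives

    taxi = PEAS(
        performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
        environment=["roads", "other traffic", "pedestrians", "customers"],
        actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
        sensors=["cameras", "speedometer", "GPS", "odometer", "engine sensors"],
    )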

20 PEAS Example
Agent: part-picking robot
Performance measure: percentage of parts in correct bins
Environment: conveyor belt with parts, bins
Actuators: jointed arm and hand
Sensors: camera, joint angle sensors

21 PEAS Example
Agent: interactive English teacher (tutor)
Performance measure: maximize the student's score on a test
Environment: set of students
Actuators: screen display (exercises, suggestions, corrections)
Sensors: keyboard

22 Environment types
Fully observable (vs. partially observable): the agent's sensors give it access to the complete state of the environment at each point in time.
Deterministic (vs. stochastic): the next state of the environment is completely determined by the current state and the agent's action. If the environment is deterministic except for the actions of other agents, then the environment is strategic.
Episodic (vs. sequential): the agent's experience is divided into atomic "episodes"; each episode consists of the agent perceiving and then performing a single action, and the choice of action in each episode depends only on the episode itself.

23 Environment types
Static (vs. dynamic): the environment is unchanged while the agent is thinking. The environment is semidynamic if the environment itself does not change over time but the agent's performance score does.
Discrete (vs. continuous): a limited number of distinct, clearly defined percepts and actions.
Single agent (vs. multiagent): an agent operating by itself in an environment.
In other terms:
Accessible vs. inaccessible: can agents obtain complete and correct information about the state of the world?
Deterministic vs. non-deterministic: do actions have guaranteed and uniquely defined effects?
Static vs. dynamic: does the environment change through processes beyond the agent's control?
Episodic vs. non-episodic: can the agent's decisions be made for different, independent episodes?
Discrete vs. continuous: is the number of actions and percepts fixed and finite?
Open environments are inaccessible, non-deterministic, dynamic, and continuous.

24 Environment types
                   Chess with a clock   Chess without a clock   Taxi driving
Fully observable   Yes                  Yes                     No
Deterministic      Strategic            Strategic               No
Episodic           No                   No                      No
Static             Semi                 Yes                     No
Discrete           Yes                  Yes                     No
Single agent       No                   No                      No
The environment type largely determines the agent design.
The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.

25 Agent functions and programs
An agent is completely specified by the agent function, which maps percept sequences (histories) to actions.
Agent program:
Input: one percept
Output: an action
Aim: find a way to implement the rational agent function concisely.

26 Table-lookup agent
Agent program example (see the vacuum table):

function Table-Driven-Agent(percept) returns an action
    static: percepts (the percept sequence so far), table (of actions, indexed by percept sequences)
    append percept to the end of percepts
    action ← Lookup(percepts, table)
    return action

Weak points:
Huge table
It takes a long time to build the table
No autonomy
It needs a long time to learn the table entries
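A runnable sketch of the same idea in Python (the percept encoding and the example table entries are assumptions made for illustration, not part of the slides):

    # Table-driven agent: look up the entire percept history in a table.
    def make_table_driven_agent(table):
        percepts = []                          # grows with every percept
        def agent(percept):
            percepts.append(percept)
            return table.get(tuple(percepts), 'NoOp')
        return agent

    table = {
        (('A', 'Dirty'),): 'Suck',
        (('A', 'Clean'),): 'Right',
        (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
    }
    agent = make_table_driven_agent(table)
    print(agent(('A', 'Clean')))   # -> Right
    print(agent(('B', 'Dirty')))   # -> Suck

The weak points listed above show up directly: the table is indexed by whole percept histories, so it explodes in size for anything beyond a toy world.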

27 Agent types
Four basic types, in order of increasing generality:
Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents

28 Simple reflex agents

29 Simple reflex agents

function Simple-Reflex-Agent(percept) returns an action
    static: rules
    state ← Interpret-Input(percept)
    rule ← Rule-Match(state, rules)
    action ← Rule-Action(rule)
    return action
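The same structure in Python, as a minimal sketch (representing rules as (condition, action) pairs is an assumption made for illustration):

    # Rules are (condition, action) pairs; the first matching rule fires.
    def simple_reflex_agent(percept, rules):
        state = interpret_input(percept)
        for condition, action in rules:
            if condition(state):
                return action
        return 'NoOp'

    def interpret_input(percept):
        return percept                      # trivial interpretation here

    rules = [
        (lambda s: s[1] == 'Dirty', 'Suck'),
        (lambda s: s[0] == 'A', 'Right'),
        (lambda s: s[0] == 'B', 'Left'),
    ]
    print(simple_reflex_agent(('B', 'Clean'), rules))   # -> Left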

30 Model-based reflex agents

31 Model-based reflex agents

function Reflex-Agent-With-State(percept) returns an action
    static: rules
            state, a description of the current world state
            action, the most recent action
    state ← Update-State(state, action, percept)
    rule ← Rule-Match(state, rules)
    action ← Rule-Action(rule)
    return action
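A Python sketch of the same idea (the internal state representation and the update function are assumptions made for the example):

    # Keeps internal state across percepts, so the agent can act on aspects
    # of the world it cannot currently observe.
    class ModelBasedReflexAgent:
        def __init__(self, rules, update_state):
            self.rules = rules
            self.update_state = update_state   # model of how the world evolves
            self.state = {}
            self.action = None

        def __call__(self, percept):
            self.state = self.update_state(self.state, self.action, percept)
            for condition, act in self.rules:
                if condition(self.state):
                    self.action = act
                    return act
            self.action = 'NoOp'
            return 'NoOp'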

32 Goal-based agents

33 Utility-based agents
Utility function: maps a state s to a real number, indicating how desirable the state is.
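As a sketch (not from the slides; the helper names predict and utility are assumptions), a utility-based agent chooses the action whose predicted outcome has the highest utility:

    # Pick the action that leads to the predicted state with the highest utility.
    def utility_based_choice(state, actions, predict, utility):
        return max(actions, key=lambda a: utility(predict(state, a)))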

34 Learning agents

