The VSK logic for Intelligent Agents


The VSK logic for Intelligent Agents Michael Wooldridge and Alessio Lomuscio “Multi-Agent VSK Logic” Proceedings of the 17th European Workshop on Logics in AI, 2000

Outline: Introduction; A semantic framework; Visibility; Perception; Knowledge; A case study: the vacuum world.

Agents An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

Agency Autonomy: acting independently. Reactivity: reacting to changes that occur in the environment. Proactivity: taking the initiative. Social ability: interacting with other agents via communication.

Environments Accessible vs. inaccessible An accessible environment is one in which the agent can obtain complete, accurate, up-to-date information about the environment’s state. Most moderately complex environments (including, for example, the everyday physical world and the Internet) are inaccessible. The more accessible an environment is, the simpler it is to build agents to operate in it.

Environments Deterministic vs. non-deterministic A deterministic environment is one in which any action has a single guaranteed effect — there is no uncertainty about the state that will result from performing an action. Non-deterministic environments present greater problems for the agent designer.

Environments Episodic vs. sequential In an episodic environment, the agent's performance depends on a number of discrete episodes, with no link between its performance in different episodes.

Environments Static vs. dynamic A static environment is one that can be assumed to remain unchanged except by the performance of actions by the agent. A dynamic environment is one that has other processes operating on it, and which hence changes in ways beyond the agent’s control.

Environments Discrete vs. continuous An environment is discrete if there is a fixed, finite number of actions and percepts in it. Continuous environments are harder to map onto discrete computer systems.

Multi-Agent Systems

Goals When designing a multi-agent system to carry out a task in an environment, it is necessary to reason about the information properties of the agent and its environment: The information should be accessible, otherwise agents will not be able to carry out the desired task. The agents' sensors must be capable of perceiving the information. The agents must be able to store and reason about the information they obtain from the environment.

VSK Logic VSK logic is a multi-modal logic, containing modalities "V", "S", and "K": Vφ means that the information φ is accessible in the current environment state. Sφ means that the agent perceives (senses) φ. Kφ means that the agent knows φ.

A Semantic Framework

Environments An environment Env is a tuple Env = <E, vis1, …, visn, τe, e0>, where:
E = {e1, e2, …} is a set of instantaneous local states for the environment.
visi : E → 2^E is the visibility function of agent i. The idea is that if the environment is actually in state e, then it is impossible for agent i to distinguish between e and any member of visi(e). visi is transparent if visi(e) = {e}.
τe : E × Act1 × … × Actn → E is a total state transformer function for the environment.
e0 ∈ E is the initial state of Env.

Agent An agent Agi is a tuple Agi = <Li, Acti, seei, doi, τi, łi>, where:
Li = {li1, li2, …} is a set of instantaneous local states for agent i.
Acti = {αi1, αi2, …} is a set of actions for agent i.
seei : 2^E → Perci is the perception function for agent i.
doi : Li → Acti is the action-selection function for agent i, mapping local states to actions available to agent i.
τi : Li × Perci → Li is the state transformer function for agent i.
łi ∈ Li is the initial state for agent i.
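As a concrete (and purely illustrative) reading of these definitions, the environment and agent tuples could be rendered in Python as plain records; all names below are ours, not the paper's:

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, Tuple

# Illustrative sketch only: states, percepts, actions, and local states
# are represented by strings; the function-valued fields mirror the
# components vis_i, tau_e, see_i, do_i, and tau_i defined above.

@dataclass(frozen=True)
class Env:
    states: FrozenSet[str]                      # E
    vis: Callable[[str], FrozenSet[str]]        # vis_i : E -> 2^E
    tau: Callable[[str, Tuple[str, ...]], str]  # tau_e : E x Act_1 x ... x Act_n -> E
    e0: str                                     # initial environment state

@dataclass(frozen=True)
class Agent:
    see: Callable[[FrozenSet[str]], str]        # see_i : 2^E -> Perc_i
    do: Callable[[str], str]                    # do_i : L_i -> Act_i
    next_state: Callable[[str, str], str]       # tau_i : L_i x Perc_i -> L_i
    l0: str                                     # initial local state
```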

A VSK system A multi-agent VSK system is a structure M = <Env, Ag1, …, Agn>. The global states G = {g, g′, …} of a multi-agent VSK system are a subset of E × L1 × … × Ln.

A run A run of a multi-agent VSK system is a sequence of global states. A sequence (g0, g1, …) over G represents a run of a system <Env, Ag1, …, Agn> iff:
g0 = (e0, τ1(ł1, see1(vis1(e0))), …, τn(łn, seen(visn(e0)))), and
for all u, if gu = (e, l1, …, ln) and gu+1 = (e′, l′1, …, l′n), then:
e′ = τe(e, α1, …, αn), where αi = doi(li) and l′i = τi(li, seei(visi(e′))).
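The run construction can be sketched in Python for a single-agent system; the function and parameter names here are illustrative, not from the paper:

```python
# Generate the first `steps`+1 global states (e, l) of a one-agent VSK
# system. The environment is given by its visibility function `vis`,
# state transformer `tau_e`, and initial state `e0`; the agent by `see`,
# `do`, its state transformer `next_state`, and initial state `l0`.
def generate_run(e0, vis, tau_e, l0, see, do, next_state, steps):
    # g0: the agent first perceives the visible part of e0
    e, l = e0, next_state(l0, see(vis(e0)))
    run = [(e, l)]
    for _ in range(steps):
        e = tau_e(e, do(l))             # e' = tau_e(e, alpha), alpha = do(l)
        l = next_state(l, see(vis(e)))  # l' = tau_i(l, see(vis(e')))
        run.append((e, l))
    return run
```

For instance, with a fully visible two-state "light" environment whose only action toggles the state, the environment component of the run alternates between the two states.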

Visibility Interpretation of the formula Vφ, true in some state g ∈ G: the property φ is visible of the environment when it is in state g; not only is φ true of the environment, but any agent equipped with suitable sensors would be able to perceive the information φ. If ¬Vφ were true in some state, then no agent, no matter how good its sensors were, would be able to perceive φ.

Sense Interpretation of the formula Sφ, true in some state g ∈ G: the fact that something is visible does not mean that an agent actually sees it. Sφ represents the information that the agent sees. Sφ ⇒ Vφ: whatever the agent perceives must be visible.

Knowledge Interpretation of the formula Kφ, true in some state g ∈ G: Kφ represents the fact that the agent knows φ. Sφ ⇒ Kφ holds if the agent's next-state function is complete. Kφ ⇒ Sφ means the agent is local.

A case study: vacuum world A robot agent occupies an environment with two rooms, room 1 and room 2. The rooms are connected by a single door, which may be open or closed. Initially, the robot is in room 1. There may be dirt on the floor in either or both of these rooms. The robot can detect: Whether the door is open or closed Whether it is in the same room as some dirt

A case study: vacuum world (cont.) It has a vacuum cleaner, which it can use to suck up dirt. It is also capable of opening the door and moving from one room to the other. When the door is closed, it is impossible to tell whether there is dirt in the other room. However, when the door is open, the agent can detect dirt in the other room.

Possible states of the vacuum world

Visibility function of the vacuum world
vis(ei) =
{e0, e1} if ei = e0 or ei = e1;
{e4, e5} if ei = e4 or ei = e5;
{e8, e12} if ei = e8 or ei = e12;
{e9, e13} if ei = e9 or ei = e13;
{ei} otherwise.
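The visibility function is a small case analysis, and can be transcribed directly into Python (state names are the strings "e0" to "e15"):

```python
# vis: E -> 2^E for the vacuum world, following the case table above.
def vis(e):
    # Each of these pairs of states is indistinguishable to the agent.
    for group in ({"e0", "e1"}, {"e4", "e5"}, {"e8", "e12"}, {"e9", "e13"}):
        if e in group:
            return frozenset(group)
    return frozenset({e})  # every other state is fully visible
```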

Perception of the vacuum world
p0: ¬door ∧ ¬((ag1 ∧ d1) ∨ (ag2 ∧ d2))
p1: ¬door ∧ ((ag1 ∧ d1) ∨ (ag2 ∧ d2))
p2: door ∧ ¬((ag1 ∧ d1) ∨ (ag2 ∧ d2))
p3: door ∧ ((ag1 ∧ d1) ∨ (ag2 ∧ d2))
Here door means the door is open, agi means the robot is in room i, and di means there is dirt in room i.

“see” function of the vacuum world
see(X) =
p0 if X = {e0, e1} or X = {e8, e12};
p1 if X = {e4, e5} or X = {e9, e13};
p2 if X ⊆ {e2, e3, e10, e14};
p3 if X ⊆ {e6, e7, e11, e15}.
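Likewise, the “see” function is a direct Python transcription of the case table (percepts are string labels; the comments restate what each percept encodes, as given by the next-state function on the next slide):

```python
# see: 2^E -> Perc for the vacuum world.
def see(X):
    X = frozenset(X)
    if X in ({"e0", "e1"}, {"e8", "e12"}):
        return "p0"  # door closed, no dirt in the robot's room
    if X in ({"e4", "e5"}, {"e9", "e13"}):
        return "p1"  # door closed, dirt in the robot's room
    if X <= {"e2", "e3", "e10", "e14"}:
        return "p2"  # door open, no dirt in the robot's room
    if X <= {"e6", "e7", "e11", "e15"}:
        return "p3"  # door open, dirt in the robot's room
    raise ValueError("unrecognised visibility set")
```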

“next state” function of the vacuum world
L = {ł, l0, l1, l2, l3}, where:
ł is the initial state;
l0 is the state coding the door being closed, and no dirt being present;
l1 is the state coding the door being closed, and dirt being present;
l2 is the state coding the door being open, and no dirt being present;
l3 is the state coding the door being open, and dirt being present.
τ(li, pj) = l0 if pj = p0; l1 if pj = p1; l2 if pj = p2; l3 if pj = p3.

do function of the vacuum world
do(l) =
null if l = ł;
open if l = l0;
move if l = l2;
suck if l = l1 or l = l3.
The history of the vacuum world:
<e5, l1> -suck-> <e1, l0> -open-> <e3, l2> -move-> <e11, l3> -suck-> <e10, l2> -move-> <e2, l2> -move-> …
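The tables on these slides are enough to replay the agent's side of this history in Python. The environment's own transformer τe is not given on the slides, so the sequence of environment states is taken from the history as data, and the code checks that the agent's local states and actions follow from vis, see, τ, and do (a sketch; the string "linit" stands in for the initial state ł):

```python
# Vacuum-world tables, transcribed from the slides above.
def vis(e):
    for group in ({"e0", "e1"}, {"e4", "e5"}, {"e8", "e12"}, {"e9", "e13"}):
        if e in group:
            return frozenset(group)
    return frozenset({e})

def see(X):
    if X in ({"e0", "e1"}, {"e8", "e12"}):
        return "p0"
    if X in ({"e4", "e5"}, {"e9", "e13"}):
        return "p1"
    if X <= {"e2", "e3", "e10", "e14"}:
        return "p2"
    return "p3"

# tau(l, p) depends only on the percept, so a dict over percepts suffices.
tau = {"p0": "l0", "p1": "l1", "p2": "l2", "p3": "l3"}
do = {"linit": "null", "l0": "open", "l1": "suck", "l2": "move", "l3": "suck"}

def actions_along(env_states):
    """Given the environment states of a run, return the action the
    agent selects in each of them."""
    l, actions = "linit", []
    for e in env_states:
        l = tau[see(vis(e))]   # l' = tau(l, see(vis(e')))
        actions.append(do[l])
    return actions
```

Replaying the environment states e5, e1, e3, e11, e10, e2 from the history yields the action sequence suck, open, move, suck, move, move, matching the slide.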