
1 Peer-to-peer and agent-based computing: Basic Theory of Agency

2 Plan of next two lectures
–Motivation
–States and actions
–Runs
–State transformer functions
–Agents and systems
–Purely reactive agents
–Perception
–Agents with state
–Utilities
–Achievement and maintenance tasks
–Agent synthesis

3 An Abstract Agent Architecture
We need a way to tie down the concept of an agent. We present an abstract architecture to formalise the concepts of:
–environmental state
–actions and state transformations
–agent decision making

4 Why not look at Java code?
Answer:
–A program is not the best way to communicate with humans about computations
–Code is verbose (i.e., lots of it!) and may contain a lot of unnecessary housekeeping: open sockets, parse XML, set variables/flags, loops, …
–Abstractions help us understand the essential features

5 States and actions (1)
Let us assume that:
–The environment may be in any of a finite set E of discrete, instantaneous states: E = {e1, e2, …}
–Agents have a repertoire of actions available to them, which transform the state of the environment: Ac = {α1, α2, …}

6 States and actions (2)
Sample environments:
–Readings from a thermostat: E = {-10, -9, …, 0, 1, …, 39, 40}
Sample actions:
–Turning heating on/off (or leaving it alone): Ac = {on, off, nil}
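
As an illustration (not part of the original slides), here is a minimal Python sketch of the thermostat example: the names E and Ac simply mirror the notation above.

# Minimal sketch of the thermostat example: a finite set of discrete
# environment states and a finite repertoire of actions.
E = set(range(-10, 41))      # temperatures -10 .. 40
Ac = {"on", "off", "nil"}    # turn heating on, turn it off, or do nothing

if __name__ == "__main__":
    print(len(E), "states; actions:", sorted(Ac))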

7 Runs (1)
A run r of an agent in an environment is a sequence of interleaved states and actions:
r : e0 --α0--> e1 --α1--> e2 --α2--> e3 --α3--> … --α(n-1)--> en
Let:
–R be the set of all possible finite runs (over E and Ac)
–R^Ac be the subset of finite runs that end with an action
–R^E be the subset of finite runs that end with a state

8 Runs (2)
A sample run, using the previous environment and actions:
r : 10 --on--> 20 --nil--> 30 --off--> 25 --nil--> … --nil--> -1
Sets:
–R = {(10,off), (30,off,20), (-1,nil,10,on,12), …}
–R^Ac = {(10,off), (10,off,5,on), …}
–R^E = {(30,off,20), (35,off,10,nil,-2), …}
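
A hedged sketch of how a run could be represented in Python: an alternating tuple of states and actions, always starting with a state. The helper names ends_with_state and ends_with_action are illustrative, not from the slides.

# A run is a finite tuple alternating states and actions: (e0, a0, e1, a1, ...).
run = (10, "on", 20, "nil", 30, "off", 25)   # sample run from the slide

def ends_with_state(r):
    # runs of odd length end with a state (members of R^E)
    return len(r) % 2 == 1

def ends_with_action(r):
    # runs of even length end with an action (members of R^Ac)
    return len(r) % 2 == 0

print(ends_with_state(run))           # True: this run is in R^E
print(ends_with_action((10, "off")))  # True: this run is in R^Ac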

9 State transformer functions (1)
A state transformer function represents the behaviour of the environment:
τ : R^Ac → ℘(E)   (where ℘(E) is the power set of E)
–Environments are history-dependent and non-deterministic
–If τ(r) = ∅, then there are no possible successor states to r; i.e., the system has ended its run
Formally, an environment consists of:
–a set of environment states E
–the initial state e0
–a transformer function τ
Env = ⟨E, e0, τ⟩

10 State transformer functions (2)
Given R^Ac = {(10,off), (10,off,5,on), …} and E = {-10, -9, …, 0, 1, …, 39, 40}, we can define the following state transformer function:
–τ((10,off)) = {-10, …, 10}
–τ((10,off,5,on)) = {6, …, 40}
–…
A sample environment: Env = ⟨{-10, …, 0, …, 40}, 0, τ⟩
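
One possible way to model τ in Python is sketched below. It is an assumption consistent with the sample values on the slide, not the slides' own code: heating "on" makes the temperature strictly rise, while "off" and "nil" let it fall or stay the same (and an empty result means the run has ended). Note that this particular τ only inspects the last state and action, although in general τ may depend on the whole history.

# Sketch of a non-deterministic state transformer tau : R^Ac -> powerset(E).
E = set(range(-10, 41))

def tau(run):
    # run is a tuple ending with an action, e.g. (10, "off") or (10, "off", 5, "on")
    last_state, last_action = run[-2], run[-1]
    if last_action == "on":
        # heating on: the temperature strictly rises
        return {e for e in E if e > last_state}
    # heating off or left alone: the temperature falls or stays the same
    return {e for e in E if e <= last_state}

print(tau((10, "off")) == set(range(-10, 11)))       # True: {-10, ..., 10}
print(tau((10, "off", 5, "on")) == set(range(6, 41)))  # True: {6, ..., 40}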

11 Agents (1)
An agent is a function mapping runs (ending in a state) to actions:
Ag : R^E → Ac
–An agent decides which action to perform based on the history it has witnessed so far…
Let AG = {Ag1, Ag2, …, Agn} be the set of all agents in a multi-agent system.

12 Agents (2)
Given R^E = {(30,off,20), (35,off,10,nil,-2), …}, we can define the following agent function:
–Ag((30,off,20)) = off
–Ag((35,off,10,nil,-2)) = on
–…
N.B.: there are compact ways to describe such functions:
–Ag((…,on,x)) = off, if x ≥ 20
–Ag((…,on,x)) = nil, if x < 20
–Ag((…,off,x)) = on, if x < 20
–Ag((…,nil,x)) = on, if x < 20
–Ag((…,nil,x)) = nil, if x ≥ 20
–…
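
The compact rules above translate almost directly into a Python function over runs ending in a state. This is a sketch under the assumption that the most recent action and the current temperature are all the agent needs to inspect; the rule for the uncovered case (last action off, temperature at or above 20) is filled in so as to match the slide's example Ag((30,off,20)) = off.

# Sketch of the agent function Ag : R^E -> Ac, following the compact rules.
def Ag(run):
    x = run[-1]                                  # current temperature
    last = run[-2] if len(run) > 1 else "nil"    # most recent action, if any
    if last == "on":
        return "off" if x >= 20 else "nil"       # warm enough: switch off, else keep heating
    if last == "off":
        return "on" if x < 20 else "off"         # too cold: switch on
    return "on" if x < 20 else "nil"             # last action was nil (or initial state)

print(Ag((30, "off", 20)))            # "off", as on the slide
print(Ag((35, "off", 10, "nil", -2))) # "on",  as on the slide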

13 Systems (1)
A system comprises an agent/environment pair: ⟨Ag, Env⟩
–Any system has a set of possible runs associated with it
The set of runs of an agent in an environment is R(Ag, Env)
–Although the set of runs can be infinite, each run is finite
–I.e., we do not consider (for the time being) infinite runs…

14 Systems (2)
A sample system: ⟨Ag, ⟨{-10, …, 0, …, 40}, 0, τ⟩⟩, where
–Ag(r) ∈ Ac (the agent, defined as a function)
–τ(r) ∈ ℘({-10, …, 0, …, 40}) (the state transformer function)
A sample set of runs of an agent in an environment:
R(Ag, ⟨{-10, …, 40}, 0, τ⟩) = {(0,on,20,nil,15), …}

15 Systems (3)
A sequence (e0, α0, e1, α1, e2, …) represents a run of an agent Ag in an environment Env = ⟨E, e0, τ⟩ if:
1. e0 is the initial state of Env;
2. α0 = Ag(e0); and
3. for i > 0, ei ∈ τ((e0, α0, …, α(i-1))), where αi = Ag((e0, α0, …, ei))
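
Putting the pieces together, here is a hedged, self-contained sketch of how one run of the system ⟨Ag, Env⟩ could be generated: start from e0, ask the agent for an action, let τ pick (here, at random) one of the allowed successor states, and stop when τ returns the empty set or a step bound is reached. The bodies of tau and Ag are assumptions consistent with the earlier examples, not code from the slides.

import random

E = set(range(-10, 41))
e0 = 0

def tau(run):
    # "on" raises the temperature; "off"/"nil" lets it fall or stay (empty set = run ended)
    last_state, last_action = run[-2], run[-1]
    if last_action == "on":
        return {e for e in E if e > last_state}
    return {e for e in E if e <= last_state}

def Ag(run):
    # simple thermostat policy over the run witnessed so far
    x = run[-1]
    last = run[-2] if len(run) > 1 else "nil"
    if last == "on":
        return "off" if x >= 20 else "nil"
    return "on" if x < 20 else "nil"

def one_run(Ag, e0, tau, max_steps=5):
    run = (e0,)
    for _ in range(max_steps):
        run = run + (Ag(run),)                   # conditions 2/3: the agent chooses the action
        successors = tau(run)
        if not successors:                       # tau(r) empty: the system has ended its run
            break
        run = run + (random.choice(sorted(successors)),)  # condition 3: a state allowed by tau
    return run

print(one_run(Ag, e0, tau))   # one possible run, e.g. (0, 'on', 31, 'off', 12, ...)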

16 Behavioural equivalence of agents
Two agents Ag1 and Ag2 are behaviourally equivalent with respect to environment Env if, and only if, R(Ag1, Env) = R(Ag2, Env)
Two agents Ag1 and Ag2 are behaviourally equivalent if, and only if, they are behaviourally equivalent with respect to all environments Env.
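
For small finite examples, behavioural equivalence with respect to one environment can be illustrated by enumerating runs up to a length bound and comparing the two sets. The sketch below uses a tiny two-state environment invented purely for illustration, and (for brevity) compares only the runs ending in a state; a genuine check would cover runs of all lengths and, for equivalence proper, all environments.

# Sketch: enumerate bounded-length runs of each agent and compare the sets.
E = {"cold", "warm"}
e0 = "cold"

def tau(run):
    # illustrative environment: "heat" always leads to warm, "wait" to cold
    return {"warm"} if run[-1] == "heat" else {"cold"}

def Ag1(run):
    return "heat" if run[-1] == "cold" else "wait"

def Ag2(run):
    # written differently, but behaves the same as Ag1 in this environment
    return "wait" if run[-1] == "warm" else "heat"

def runs(Ag, e0, tau, max_states=3):
    # all runs ending in a state, with at most max_states states
    result = {(e0,)}
    frontier = {(e0,)}
    for _ in range(max_states - 1):
        nxt = set()
        for r in frontier:
            extended = r + (Ag(r),)
            for e in tau(extended):
                nxt.add(extended + (e,))
        result |= nxt
        frontier = nxt
    return result

print(runs(Ag1, e0, tau) == runs(Ag2, e0, tau))  # True: equivalent w.r.t. this Env (up to the bound)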

17 Purely Reactive Agents
Some agents decide what to do without reference to their history:
–their decision-making is based entirely on the present
–i.e., there is no reference whatsoever to the past!
Such agents are called purely reactive: Ag : E → Ac
A thermostat is a purely reactive agent:
–Ag(e) = off, if e ≥ 20
–Ag(e) = on, if e < 20
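
The thermostat rule translates directly; the point of the short sketch below is simply that a purely reactive agent takes a single state, never a run.

# Purely reactive thermostat: Ag : E -> Ac, looking only at the current state.
def Ag(e):
    return "off" if e >= 20 else "on"

print(Ag(25))  # "off"
print(Ag(12))  # "on"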

18 Perception (1)
We can now introduce a perception system:
–see : the agent's ability to observe the environment
–action : the agent's decision-making function

19 Perception (2)
The output of the see function is a percept:
see : E → Per
which maps environment states to percepts.
action is now a function
action : Per* → Ac
which maps sequences of percepts to actions.
Agents are considered from now on as the pair Ag = ⟨see, action⟩
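
A small sketch, assumed rather than taken from the slides, of the ⟨see, action⟩ decomposition for the thermostat: see abstracts the exact temperature into one of two percepts, and action maps the sequence of percepts witnessed so far to an action.

# Sketch of an agent as the pair <see, action> for the thermostat.
def see(e):
    # see : E -> Per, abstracting the temperature into a coarse percept
    return "too_cold" if e < 20 else "ok"

def action(percepts):
    # action : Per* -> Ac, deciding from the sequence of percepts so far
    return "on" if percepts[-1] == "too_cold" else "off"

percepts = [see(e) for e in (25, 18, 15)]  # the history of percepts
print(action(percepts))                     # "on": the latest percept is "too_cold"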

20 Perception (3)
Sample see functions:
–a robot with an infrared sensor
–a software agent performing commands such as ls or finger, or retrieving a Web page
–the output is stored in some data structure
Sample action functions:
–move towards the direction of a source of heat
–delete all files with extension .jpg obtained via ls
–submit a Web form using the retrieved page

21 Suggested Reading
An Introduction to Multi-Agent Systems, M. Wooldridge, John Wiley & Sons, 2002. Chapter 2.

