
1 Introduction Agent Programming
Koen Hindriks, Delft University of Technology, The Netherlands
"Learning to program teaches you how to think. Computer science is a liberal art." (Steve Jobs)

2 Outline
Previous lecture, the last lecture on Prolog:
– Input & output
– Negation as failure
– Search
Coming lectures:
– Agents that use Prolog
This lecture:
– Agent introduction
– A "Hello World" example in the GOAL agent programming language

3 Agents: Act in environments
(Diagram: the agent receives percepts from the environment and chooses an action to perform in it.)

4 Agents: Act to achieve goals
(Diagram: percepts arrive as events; the agent selects actions to achieve its goals in the environment.)

5 Agents: Represent environment
(Diagram: the agent now also maintains beliefs and plans, alongside events, goals, and actions.)

6 Agent Oriented Programming
Agents provide a very effective way of building applications for dynamic and complex environments. Develop agents based on the Belief-Desire-Intention (BDI) agent metaphor, i.e. develop software components as if they have beliefs and goals, act to achieve these goals, and are able to interact with their environment and other agents.

7 A Brief History of AOP
1990: AGENT-0 (Shoham)
1993: PLACA (Thomas; AGENT-0 extension with plans)
1996: AgentSpeak(L) (Rao; inspired by PRS)
1996: Golog (Reiter, Levesque, Lesperance)
1997: 3APL (Hindriks et al.)
1998: ConGolog (De Giacomo, Levesque, Lesperance)
2000: JACK (Busetta, Howden, Ronnquist, Hodgson)
2000: GOAL (Hindriks et al.)
2000: CLAIM (Amal El Fallah Seghrouchni)
2002: Jason (Bordini, Hubner; implementation of AgentSpeak)
2003: Jadex (Braubach, Pokahr, Lamersdorf)
2008: 2APL (successor of 3APL)
This overview is far from complete!

8 A Brief History of AOP
AGENT-0: speech acts
PLACA: plans
AgentSpeak(L): events/intentions
Golog: action theories, logical specification
3APL: practical reasoning rules
JACK: capabilities, Java-based
GOAL: declarative goals
CLAIM: mobile agents (within an agent community)
Jason: AgentSpeak + communication
Jadex: JADE + BDI
2APL: modules, PG-rules, …

9 Outline
Some of the more actively developed APLs:
– 2APL (Utrecht, Netherlands)
– Agent Factory (Dublin, Ireland)
– GOAL (Delft, Netherlands)
– Jason (Porto Alegre, Brazil)
– Jadex (Hamburg, Germany)
– JACK (Melbourne, Australia)
– JIAC (Berlin, Germany)
References

10 2APL – Features
2APL is a rule-based language for programming BDI agents:
– actions: belief updates, send, adopt, drop, external actions
– beliefs: represent the agent's beliefs
– goals: represent what the agent wants
– plans: sequence, while, if-then
– PG-rules: goal handling rules
– PC-rules: event handling rules
– PR-rules: plan repair rules

11 2APL – Code Snippet
Beliefs: worker(w1), worker(w2), worker(w3)
Goals: findGold() and haveGold()
Plans: { send( w3, play(explorer) ); }
Rules = {
  // goal handling rule (PG-rule)
  G( findGold() ) <- B( -gold(_) && worker(A) && -assigned(_, A) ) |
    { send( A, play(explorer) ); ModOwnBel( assigned(_, A) ); },
  // event handling rules (PC-rules); E(…) is the explicit operator for events
  E( receive( A, gold(POS) ) ) | B( worker(A) ) ->
    { ModOwnBel( gold(POS) ); },
  E( receive( A, done(POS) ) ) | B( worker(A) ) ->
    { ModOwnBel( -assigned(POS, A), -gold(POS) ); },
  …
}
Modules can be used to combine and structure rules.

12 JACK – Features
The JACK Agent Language is built on top of and extends Java, and provides the following features:
– agents: define the overall behaviour of a multi-agent system
– beliefset: represents an agent's beliefs
– view: allows queries to be performed on belief sets
– capability: reusable functional component made up of plans, events, belief sets and other capabilities
– plan: instructions the agent follows to try to achieve its goals and handle events
– event: an occurrence to which the agent should respond

13 JACK – Agent Template
agent AgentType extends Agent {
    // Knowledge bases used by the agent are declared here.
    #private data BeliefType belief_name(arg_list);
    // Events handled, posted and sent by the agent are declared here.
    #handles event EventType;
    #posts event EventType reference;   // used to create internal events
    #sends event EventType reference;   // used to send messages to other agents
    // Plans used by the agent are declared here. Order is important.
    #uses plan PlanType;
    // Capabilities that the agent has are declared here.
    #has capability CapabilityType reference;
    // other data member and method definitions
}

14 Jason – Features
– beliefs: weak and strong negation to support both closed-world and open-world reasoning
– belief annotations: label the information source, e.g. self, percept
– events: internal, messages, percepts
– a library of "internal actions", e.g. send
– user-defined internal actions: programmed in Java
– automatic handling of plan failures
– annotations on plan labels: used to select a plan
– speech-act based inter-agent communication
– Java-based customization: (plan) selection functions, trust functions, perception, belief revision, agent communication

15 Jason – Plans
(Diagram: a Jason plan consists of a triggering event, a context (a test on beliefs), and a plan body.)

16 Summary
Key language elements of APLs:
– beliefs and goals to represent the environment
– events received from the environment (& internal events)
– actions to update beliefs, adopt goals, send messages, and act in the environment
– plans, capabilities & modules to structure actions
– rules to select actions/plans/modules/capabilities
– support for multi-agent systems

17 How are these APLs related?
A comparison from a high-level, conceptual point of view, not taking into account practical aspects (IDE, available docs, speed, applications, etc.).
Prolog-based family of languages (basic concepts: beliefs, actions, plans, goals-to-do): AGENT-0 (and PLACA), AgentSpeak(L), Jason, Golog, 3APL.
– AGENT-0 and PLACA are mainly interesting from a historical point of view.
– From a conceptual point of view, we identify AgentSpeak(L) and Jason.
– 3APL: without practical reasoning rules.
Main addition of GOAL: declarative goals; 2APL ≈ 3APL + GOAL.
Java-based BDI languages: Agent Factory, JACK (commercial), Jadex, JIAC.
Mobile agents: CLAIM, AgentScape.
All of these languages (except AGENT-0, PLACA, and JACK) have versions implemented "on top of" JADE.

18 References
Websites
2APL: http://www.cs.uu.nl/2apl/
Agent Factory: http://www.agentfactory.com
GOAL: http://mmi.tudelft.nl/trac/goal
JACK: http://www.agent-software.com.au/products/jack/
Jadex: http://jadex.informatik.uni-hamburg.de/
Jason: http://jason.sourceforge.net/
JIAC: http://www.jiac.de/
Books
Bordini, R.H.; Dastani, M.; Dix, J.; El Fallah Seghrouchni, A. (Eds.), 2005. Multi-Agent Programming: Languages, Platforms and Applications. Presents 3APL, CLAIM, Jadex, Jason.
Bordini, R.H.; Dastani, M.; Dix, J.; El Fallah Seghrouchni, A. (Eds.), 2009. Multi-Agent Programming: Languages, Tools and Applications. Presents, among others: Brahms, CArtAgO, GOAL, JIAC Agent Platform.

19 The GOAL Agent Programming Language

20 THE BLOCKS WORLD
The "Hello World" example of agent programming

21 The Blocks World
The positioning of blocks on the table is not relevant. A block can be moved only if there is no other block on top of it. Objective: move blocks in the initial state such that the result is the goal state. A classic AI planning problem.

22 The Blocks World (Cont'd)
Key concepts:
– A block is in position if "it is in the right place"; otherwise it is misplaced.
– A constructive move puts a block in position.
– A self-deadlock is a misplaced block above a block it should be above.

23 MENTAL STATES

24 Representing the Blocks World
Prolog is the knowledge representation language used in GOAL.
Basic predicate: on(X,Y).
Defined predicates:
block(X) :- on(X, _).
clear(X) :- block(X), not(on(Y,X)).
clear(table).
tower([X]) :- on(X,table).
tower([X,Y|T]) :- on(X,Y), tower([Y|T]).

25 Representing the Initial State
Using the on(X,Y) predicate we can represent the initial state.
beliefs{
  on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table).
}
This is the initial belief base of the agent.

26 Representing the Blocks World
What about the rules we defined before? Add the clauses that do not change to the knowledge base.
knowledge{
  block(X) :- on(X, _).
  clear(X) :- block(X), not(on(Y,X)).
  clear(table).
  tower([X]) :- on(X,table).
  tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
}
This is the static knowledge base of the agent.

27 Why a Separate Knowledge Base?
Concepts defined in the knowledge base can be used in combination with both the belief base and the goal base.
Example:
– Since the agent believes on(e,table), on(d,e), infer: the agent believes tower([d,e]).
– If the agent wants on(a,table), on(b,a), infer: the agent wants tower([b,a]).
The knowledge base is introduced to avoid duplicating clauses in the belief and goal base.
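How the knowledge rules derive new facts from on/2 beliefs can be sketched outside GOAL. The following Python rendering (my own helper names; GOAL itself evaluates these as Prolog clauses) mirrors the block, clear, and tower definitions over the example belief base:

```python
# Beliefs are a set of on(X,Y) facts, using the block names from the slides.
beliefs = {("a", "b"), ("b", "c"), ("c", "table"),
           ("d", "e"), ("e", "table"), ("f", "g"), ("g", "table")}

def blocks(on):
    # block(X) :- on(X, _).
    return {x for (x, _) in on}

def clear(x, on):
    # clear(X) :- block(X), not(on(Y,X)).  clear(table).
    return x == "table" or (x in blocks(on) and all(y != x for (_, y) in on))

def tower(stack, on):
    # tower([X]) :- on(X,table).
    # tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
    if len(stack) == 1:
        return (stack[0], "table") in on
    return (stack[0], stack[1]) in on and tower(stack[1:], on)

print(tower(["a", "b", "c"], beliefs))  # → True
print(clear("a", beliefs))              # → True
print(clear("b", beliefs))              # → False: a is on b
```

The same definitions apply unchanged to the goal base, which is exactly why GOAL keeps them in a separate knowledge section.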

28 Representing the Goal State
Using the on(X,Y) predicate we can represent the goal state.
goals{
  on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table).
}
This is the initial goal base of the agent.

29 One or Many Goals
In the goal base, using the comma- or the period-separator makes a difference!
goals{ on(a,table). on(b,a). on(c,b). }   versus   goals{ on(a,table), on(b,a), on(c,b). }
The first goal base has three goals; the second has a single conjunctive goal. Moving c on top of b (third goal), then c to the table, then a to the table (first goal), and finally b on top of a (second goal) achieves all three separate goals at some moment, but never the single goal of the second goal base. The reason is that the three-goal base does not require block c to be on b, b to be on a, and a to be on the table at the same time.
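The difference can be made concrete with a small Python sketch (an illustrative simplification: states and goals are sets of on/2 facts, and the intermediate states are my own reconstruction of the move sequence):

```python
# Three snapshots from a possible execution (hypothetical intermediate states).
s1 = {("c", "b"), ("b", "table"), ("a", "table")}    # c has been moved onto b
s2 = {("c", "table"), ("b", "table"), ("a", "table")}  # c moved back to the table
s3 = {("c", "table"), ("b", "a"), ("a", "table")}    # b moved on top of a

history = [s1, s2, s3]
separate_goals = [{("a", "table")}, {("b", "a")}, {("c", "b")}]
single_goal = {("a", "table"), ("b", "a"), ("c", "b")}

# Each separate goal holds in *some* state of the history:
print(all(any(g <= s for s in history) for g in separate_goals))  # → True
# But the conjunctive goal never holds in any single state:
print(any(single_goal <= s for s in history))                     # → False
```

So the three-goal base is satisfied by this run, while the single conjunctive goal is not.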

30 Mental State of a GOAL Agent
knowledge{
  block(X) :- on(X, _).
  clear(X) :- block(X), not(on(Y,X)).
  clear(table).
  tower([X]) :- on(X,table).
  tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
}
beliefs{
  on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table).
}
goals{
  on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table).
}
The knowledge, beliefs, and goals sections together constitute the specification of the mental state of a GOAL agent; this is the agent's initial mental state.

31 Inspecting the Belief & Goal Base
The operator bel(φ) inspects the belief base; the operator goal(φ) inspects the goal base, where φ is a Prolog conjunction of literals.
Examples:
– bel(clear(a), not(on(a,c))).
– goal(tower([a,b])).

32 Inspecting the Belief Base
bel(φ) succeeds if φ follows from the belief base in combination with the knowledge base. The condition φ is evaluated as a Prolog query.
Example: given the knowledge and belief base of the previous slides,
bel(clear(a), not(on(a,c))) succeeds.

33 Inspecting the Belief Base
EXERCISE: Given the same knowledge and belief base, which of the following succeed?
1. bel(on(b,c), not(on(a,c))).
2. bel(on(X,table), on(Y,X), not(clear(Y))).   [answer: X=c, Y=b]
3. bel(tower([X,b,d])).

34 Inspecting the Goal Base
goal(φ) succeeds if φ follows from one of the goals in the goal base, in combination with the knowledge base. Use the goal(…) operator to inspect the goal base.
Example, for the goal base of the previous slides:
– goal(clear(a)) succeeds,
– but goal(clear(a), clear(c)) does not.

35 Inspecting the Goal Base
EXERCISE: Given the same knowledge base and goal base, which of the following succeed?
1. goal(on(b,table), not(on(d,c))).
2. goal(on(X,table), on(Y,X), clear(Y)).
3. goal(tower([d,X])).

36 Negation and Beliefs
not(bel(on(a,c))) = bel(not(on(a,c)))?
Answer: yes.
– Prolog implements negation as failure: if φ cannot be derived, then not(φ) can be derived.
– We always have: not(bel(φ)) = bel(not(φ)).

37 Negation and Goals
not(goal(φ)) = goal(not(φ))?
Answer: no. With the goal base
goals{
  on(a,b), on(b,table).
  on(a,c), on(c,table).
}
we have, for example, both goal(on(a,b)) and goal(not(on(a,b))).
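The asymmetry between bel and goal can be sketched in Python (a simplification: queries are single, possibly negated, ground facts, and negation as failure is a membership test; my own helper names):

```python
# Two goals, as in the goal base above, and the beliefs from earlier slides.
goals = [{("a", "b"), ("b", "table")},
         {("a", "c"), ("c", "table")}]
beliefs = {("a", "b"), ("b", "c")}

def holds(query, facts):
    # query = (positive?, fact); negation as failure inside one fact set
    positive, fact = query
    return (fact in facts) if positive else (fact not in facts)

def bel(query):
    # bel checks the single belief base
    return holds(query, beliefs)

def goal(query):
    # goal checks each goal *separately*
    return any(holds(query, g) for g in goals)

print(goal((True, ("a", "b"))))   # → True: the first goal contains on(a,b)
print(goal((False, ("a", "b"))))  # → True: the second goal does not contain it
# For beliefs the equivalence does hold:
print(bel((False, ("a", "b"))) == (not bel((True, ("a", "b")))))  # → True
```

Because goal quantifies over individual goals, goal(φ) and goal(not(φ)) can succeed simultaneously, while bel cannot behave that way.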

38 Combining Beliefs and Goals
It is useful to combine the bel(…) and goal(…) operators.
Achievement goals:
– a-goal(φ) = goal(φ), not(bel(φ))
An agent only has an achievement goal if it does not believe the goal has been reached already.
Goal achieved:
– goal-a(φ) = goal(φ), bel(φ)
A (sub)goal φ has been achieved if the agent believes φ.
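The two derived operators are direct compositions of bel and goal; a minimal Python sketch (bel/goal reduced to membership tests over ground facts, which is a hypothetical simplification of GOAL's Prolog evaluation):

```python
beliefs = {("a", "table"), ("b", "table")}
goals = [{("a", "table"), ("b", "a")}]   # one conjunctive goal

def bel(fact):
    return fact in beliefs

def goal(fact):
    return any(fact in g for g in goals)

def a_goal(fact):
    # a-goal(phi) = goal(phi), not(bel(phi))
    return goal(fact) and not bel(fact)

def goal_a(fact):
    # goal-a(phi) = goal(phi), bel(phi)
    return goal(fact) and bel(fact)

print(a_goal(("b", "a")))      # → True: wanted but not yet believed
print(goal_a(("a", "table")))  # → True: wanted and already achieved
```

Note that a fact cannot be both an achievement goal and achieved: the two operators partition what the agent wants.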

39 Expressing Blocks World Concepts
It is possible to express key Blocks World concepts by means of the basic mental state operators.
EXERCISE: Define: block X is misplaced.
Solution: goal(tower([X|T])), not(bel(tower([X|T]))).
But this means that saying that a block is misplaced is saying that there is an achievement goal: a-goal(tower([X|T])).

40 ACTION SPECIFICATIONS
Changing Blocks World Configurations

41 Actions Change the Environment…
(Example: move(a,d).)

42 …and Require Updating Mental States
To ensure adequate beliefs after performing an action, the belief base needs to be updated (and possibly the goal base):
– Add effects to the belief base: insert on(a,d) after move(a,d).
– Delete old beliefs: delete on(a,b) after move(a,d).

43 …and Require Updating Mental States
If a goal is believed to be completely achieved, it is removed from the goal base: it is not rational to have a goal you believe to be achieved. This default update implements a blind commitment strategy.
Example: performing move(a,b) in
beliefs{ on(a,table), on(b,table). }  goals{ on(a,b), on(b,table). }
yields
beliefs{ on(a,b), on(b,table). }  goals{ }
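The default commitment strategy can be sketched as a single update function in Python (my own reconstruction: beliefs are a fact set, goals a list of fact sets, and a goal is dropped once it is a subset of the beliefs):

```python
def update(beliefs, goals, add, delete):
    # Apply the action's add/delete effects, then drop achieved goals.
    new_beliefs = (beliefs - delete) | add
    remaining = [g for g in goals if not g <= new_beliefs]
    return new_beliefs, remaining

beliefs = {("a", "table"), ("b", "table")}
goals = [{("a", "b"), ("b", "table")}]

# Effect of move(a,b): delete on(a,table), add on(a,b).
beliefs, goals = update(beliefs, goals,
                        add={("a", "b")}, delete={("a", "table")})
print(sorted(beliefs))  # → [('a', 'b'), ('b', 'table')]
print(goals)            # → []: the goal is believed achieved, so it is removed
```

This is "blind commitment": a goal is never dropped for any other reason than being believed achieved.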

44 Action Specifications
Actions in GOAL have preconditions and postconditions (a STRIPS-style specification). Executing an action in GOAL means:
– Preconditions are conditions that need to be true: check the preconditions on the belief base.
– Postconditions (effects) are add/delete lists (STRIPS): add the positive literals in the postcondition, delete the negative literals in the postcondition.
move(X,Y){
  pre { clear(X), clear(Y), on(X,Z), not( on(X,Y) ) }
  post { not(on(X,Z)), on(X,Y) }
}

45 Action Specifications
move(X,Y){
  pre { clear(X), clear(Y), on(X,Z), not( on(X,Y) ) }
  post { not(on(X,Z)), on(X,Y) }
}
Example: move(a,b)
– Check: clear(a), clear(b), on(a,Z), not( on(a,b) )
– Remove: on(a,Z)
– Add: on(a,b)
Note: first remove, then add.
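The check/remove/add sequence above can be sketched as a Python function (an illustrative reconstruction of the STRIPS-style update, not GOAL's implementation; clear is simplified to "nothing on top"):

```python
def clear(x, on):
    return x == "table" or all(y != x for (_, y) in on)

def move(x, y, on):
    # pre: clear(X), clear(Y), on(X,Z), not(on(X,Y))
    z = next((below for (b, below) in on if b == x), None)
    if not (clear(x, on) and clear(y, on)
            and z is not None and (x, y) not in on):
        return None  # precondition fails: action not enabled
    # post: not(on(X,Z)), on(X,Y) -- first remove, then add
    return (on - {(x, z)}) | {(x, y)}

beliefs = {("a", "table"), ("b", "table")}
print(sorted(move("a", "b", beliefs)))  # → [('a', 'b'), ('b', 'table')]
print(move("a", "b", {("b", "a")}))     # → None: a is not clear
```

Returning None models the case where the precondition query on the belief base fails, so the action cannot be selected.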

46 Action Specifications
move(X,Y){
  pre { clear(X), clear(Y), on(X,Z) }
  post { not(on(X,Z)), on(X,Y) }
}
Example: performing move(a,b) in
beliefs{ on(a,table), on(b,table). }
yields
beliefs{ on(b,table). on(a,b). }

47 Action Specifications
EXERCISE: Given
knowledge{
  block(a), block(b), block(c), block(d), block(e), block(f), block(g), block(h), block(i).
  clear(X) :- block(X), not(on(Y,X)).
  clear(table).
  tower([X]) :- on(X,table).
  tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
}
beliefs{
  on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table).
}
and move(X,Y) with precondition clear(X), clear(Y), on(X,Z), not( on(X,Y) ):
1. Is it possible to perform move(a,b)? No: not( on(a,b) ) fails.
2. Is it possible to perform move(a,d)? Yes.

48 ACTION RULES
Selecting actions to perform

49 Agent-Oriented Programming
How do humans choose and/or explain actions? Examples:
– I believe it rains; so, I will take an umbrella with me.
– I go to the video store because I want to rent I, Robot.
– I don't believe buses run today, so I take the train.
Use intuitive common-sense concepts: beliefs + goals => action.
See Chapter 1 of the Programming Guide.

50 Selecting Actions: Action Rules
Action rules are used to define a strategy for action selection. A strategy for the Blocks World:
– If a constructive move can be made, make it.
– If a block is misplaced, move it to the table.
program{
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).
}
What happens:
– Check the condition, e.g. can a-goal(tower([X|T])) be derived given the current mental state of the agent?
– If yes, then (potentially) select move(X,table).

51 Order of Action Rules
By default, action rules are evaluated in linear order: the first rule that fires is executed.
program{
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).
}
The default order can be changed to random: an arbitrary rule that is able to fire may then be selected.
program[order=random]{
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).
}
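The two selection policies can be sketched in Python (a schematic model, not GOAL's interpreter: rules are condition/action pairs and the conditions are placeholder flags):

```python
import random

def applicable(rules, state):
    # Return the actions of all rules whose condition holds in the state.
    return [act for cond, act in rules if cond(state)]

def select_linear(rules, state):
    # Default order: the first rule that fires is executed.
    opts = applicable(rules, state)
    return opts[0] if opts else None

def select_random(rules, state, rng=random):
    # order=random: an arbitrary rule that is able to fire may be selected.
    opts = applicable(rules, state)
    return rng.choice(opts) if opts else None

rules = [
    (lambda s: s["constructive_move"], "move(X,Y)"),    # constructive move
    (lambda s: s["misplaced_block"], "move(X,table)"),  # misplaced block to table
]
state = {"constructive_move": True, "misplaced_block": True}
print(select_linear(rules, state))  # → move(X,Y): first applicable rule
print(select_random(rules, state) in {"move(X,Y)", "move(X,table)"})  # → True
```

Linear order thus encodes a priority (prefer constructive moves), while random order only commits to picking some enabled rule.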

52 Example Program: Action Rules
An agent program may allow for multiple action choices; with order=random, an arbitrary enabled choice is made (e.g. moving block d to the table).
program[order=random]{
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).
}

53 The Sussman Anomaly (1/5)
Non-interleaved planners typically separate the main goal, on(a,b), on(b,c), into two sub-goals: on(a,b) and on(b,c). Planning for these two sub-goals separately and combining the plans found does not work in this case, however.
(Initial state: c on a, with a and b on the table. Goal state: a on b, b on c, c on the table.)

54 The Sussman Anomaly (2/5)
Initially, all blocks are misplaced, and one constructive move can be made (c to the table). Note: move(b,c) is not enabled; the only enabled action is moving c to the table.
Check the conditions of the action rules:
if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
if a-goal(tower([X|T])) then move(X,table).
We have bel(tower([c,a])) and a-goal(tower([c])).

55 The Sussman Anomaly (3/5)
The only constructive move enabled is moving b onto c.
Check the conditions of the action rules:
if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
if a-goal(tower([X|T])) then move(X,table).
Note that we have a-goal(on(a,b), on(b,c), on(c,table)), but not a-goal(tower([c])).

56 The Sussman Anomaly (4/5)
Again, the only constructive move enabled is moving a onto b.
Check the conditions of the action rules:
if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
if a-goal(tower([X|T])) then move(X,table).
Note that we have a-goal(on(a,b), on(b,c), on(c,table)), but not a-goal(tower([b,c])).

57 The Sussman Anomaly (5/5)
Upon achieving a goal completely, that goal is automatically removed; the idea is that no resources should be wasted on achieving it. In our case, goal(on(a,b), on(b,c), on(c,table)) has been achieved and is dropped. The agent has no other goals and is done.
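The full five-step walkthrough can be replayed as a runnable Python sketch (my own reconstruction of the two action rules, not GOAL itself; the single goal a-on-b-on-c-on-table is represented by the subtowers it entails):

```python
GOAL_TOWERS = [["a", "b", "c"], ["b", "c"], ["c"]]  # tower([X|T]) entailed by the goal

def tower(stack, on):
    # Each block sits on the next one; the last block sits on the table.
    below = stack[1:] + ["table"]
    return all((x, y) in on for x, y in zip(stack, below))

def clear(x, on):
    return x == "table" or all(y != x for (_, y) in on)

def move(x, y, on):
    z = next(below for (b, below) in on if b == x)
    return (on - {(x, z)}) | {(x, y)}

def step(on):
    # Rule 1: if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
    for stack in GOAL_TOWERS:
        if len(stack) >= 2 and tower(stack[1:], on) and not tower(stack, on):
            x, y = stack[0], stack[1]
            if clear(x, on) and clear(y, on):
                return (x, y), move(x, y, on)
    # Rule 2: if a-goal(tower([X|T])) then move(X,table).
    for stack in GOAL_TOWERS:
        x = stack[0]
        if not tower(stack, on) and clear(x, on) and (x, "table") not in on:
            return (x, "table"), move(x, "table", on)
    return None, on  # no rule fires: the goal has been achieved

on = {("c", "a"), ("a", "table"), ("b", "table")}  # initial Sussman state
moves = []
while True:
    m, on = step(on)
    if m is None:
        break
    moves.append(m)
print(moves)  # → [('c', 'table'), ('b', 'c'), ('a', 'b')]
print(tower(["a", "b", "c"], on))  # → True
```

The trace matches the slides: first the misplaced block c goes to the table (rule 2), then the two constructive moves b onto c and a onto b (rule 1) complete the single conjunctive goal.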

58 Organisation
Read Programming Guide Ch. 1-3 (+ User Manual).
Tutorial:
– Download GOAL (v4537): see http://ii.tudelft.nl/trac/goal
– Practice exercises from the Programming Guide
– BW4T assignments 3 and 4 available
Next lecture:
– Sensing, perception, environments
– Other types of rules & macros
– Agent architectures

