Fariba Sadri - ICCL 08 Agent Examples 1 Agents: Some Examples. Fariba Sadri, Imperial College London. ICCL Summer School, Dresden, August 2008.

Fariba Sadri - ICCL 08 Agent Examples 2 Contents: Teleo-Reactive agents; Agent-0; BDI/AgentSpeak(L).

Fariba Sadri - ICCL 08 Agent Examples 3 Teleo-Reactive (TR) Programs Some references: Nilsson, Teleo-Reactive Programs for Agent Control, Journal of AI Research, 1994; Nilsson, Teleo-reactive programs and the triple-tower architecture, October 2001.

Fariba Sadri - ICCL 08 Agent Examples 4 TR-Programs They are named sequences of condition-action rules. Program for Goal G:
G => nil % i.e. do nothing
C1 => A1
C2 => A2
...
Cn => An

Fariba Sadri - ICCL 08 Agent Examples 5 TR-Programs They are intended to direct the agent towards a goal, while continuously taking into account changing perceptions of the environment. They have no declarative semantics, only a procedural semantics.

Fariba Sadri - ICCL 08 Agent Examples 6 Demo nilsson/trweb/TRTower/TRTower_links.html nilsson/trweb/tr.html

Fariba Sadri - ICCL 08 Agent Examples 7 TR-Programs The Ci are tests to be evaluated on the world model. The Ai are actions the agent can do. At each cycle:
– observations are made,
– the rules are checked from the top,
– the first rule with a true test fires, i.e. determines the action to be done next,
– the action is executed.
Typically, the actions of later rules are intended eventually to make the test of an earlier rule become true (the Regression Property). There is always a rule that will fire.
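The first-true-rule-fires cycle above can be sketched as follows. The rule encoding (condition/action functions over a set of ground facts) and the `unpile` helper are illustrative assumptions, not Nilsson's implementation:

```python
# A minimal sketch of a TR-program interpreter, assuming rules are
# (condition, action) pairs of functions over a set of ground facts.
# This encoding is an illustrative assumption, not Nilsson's code.

def tr_step(program, state):
    """Scan the ordered rules from the top; the first rule whose
    condition holds in the current state determines the next action."""
    for condition, action in program:
        if condition(state):
            return action(state)
    raise RuntimeError("a well-formed TR program always has a firing rule")

# The unpile(x) program from the next slide, over facts like ('On', 'B', 'A').
def unpile(x):
    def block_on(s):  # the block y with On(y, x), if any
        return next((f[1] for f in s if f[0] == 'On' and f[2] == x), None)
    return [
        (lambda s: ('Clear', x) in s,       lambda s: ('nil',)),
        (lambda s: block_on(s) is not None, lambda s: ('move-to-table', block_on(s))),
    ]

print(tr_step(unpile('A'), {('On', 'B', 'A')}))  # → ('move-to-table', 'B')
```

Because the rules are rescanned every cycle against fresh observations, the agent reacts immediately if the world changes under it.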

Fariba Sadri - ICCL 08 Agent Examples 8 TR-Program Examples Example (from Nilsson 2001):
unpile(x); x is a block
Clear(x) => nil
On(y,x) => move-to-table(y)
move-to-table(x); x is a block
On(x,Ta) => nil
Holding(y) => putdown(y,Ta)
Clear(x) => pickup(x)
T => unpile(x)
Putdown and pickup are primitive actions.

Fariba Sadri - ICCL 08 Agent Examples 9 TR-Programs Example Example (from Nilsson 2001):
move(x,y); x and y are blocks
On(x,y) => nil
Holding(x) ∧ Clear(y) => putdown(x,y)
Holding(z) => putdown(z,Ta)
Clear(x) ∧ Clear(y) => pickup(x)
Clear(y) => unpile(x)
T => unpile(y)

Fariba Sadri - ICCL 08 Agent Examples 10 TR Triple Tower Architecture [Diagram: a Perception Tower (rules), a Model Tower (predicates + TMS) and an Action Tower (action routines), connected via sensors to the environment.]

Fariba Sadri - ICCL 08 Agent Examples 11 TR Triple Tower Architecture Example [Diagram: a perception rule ¬∃x On(x,Y) ∧ ¬Holding(Y) → Clear(Y) feeds model facts Clear(A), On(A,B), Holding(C), which drive the TR-Program; sensors connect to the environment.]

Fariba Sadri - ICCL 08 Agent Examples 12 TR-Programs The actions Ai may be:
– primitive,
– sets of actions that can be executed simultaneously, or
– calls to other TR programs.
A called TR program continues to execute for as long as the condition that led to it being called remains the highest true condition in the calling program.

Fariba Sadri - ICCL 08 Agent Examples 13 TR-Programs New information from the environment deletes old, contradicted information (via the TMS). Forward reasoning is also applied to derive all provable facts.

Fariba Sadri - ICCL 08 Agent Examples 14 TR-Programs Where do TR-programs fit within the agent classification given in the introduction?

Fariba Sadri - ICCL 08 Agent Examples 15 Agent-0 Reference: Yoav Shoham, Agent0: A simple agent language and its interpreter, Proceedings of AAAI-91, 1991. One of the early multi-agent models and programming languages. Fairly simple. Motivation: partly to gain experience from implementing an agent model.

Fariba Sadri - ICCL 08 Agent Examples 16 Agent-0 Agents send messages to each other: Inform, Request, Unrequest. [Diagram: two agents A1 and A2 exchanging messages.]

Fariba Sadri - ICCL 08 Agent Examples 17 Agent-0 Mental State Mental state is made up of:
– Capabilities (fixed)
– Commitment rules (fixed)
– Beliefs (get updated)
– Commitments (get updated)

Fariba Sadri - ICCL 08 Agent Examples 18 Agent-0 Capabilities cap(time, private action, mental condition) e.g.
cap(T, rotate(Degree1),
    not (cmtd(_, do(T, rotate(Degree2))) and Degree1 \= Degree2))
where Degree1, Degree2, T are variables. This says: the agent is able to rotate (something) by Degree1 degrees at some future time T if it does not already have a commitment to any agent to rotate (it) by a different number of degrees at the same time.

Fariba Sadri - ICCL 08 Agent Examples 19 Agent-0 Commitment Rules commit(messpattern, mentalcond, agent, action) The action can be a single action or a sequence of actions. e.g.
commit((Ag, REQUEST(Act)), myfriend(Ag), Ag, Act)
This says: the agent can (perhaps) commit to do Act for agent Ag if Ag has just requested Act and the agent believes Ag is a friend. No declarative semantics, just an operational semantics.

Fariba Sadri - ICCL 08 Agent Examples 20 Agent-0 Beliefs bel(Ag, F), where Ag is the agent who believes fact F. AGENT-0 agents trust one another. They believe anything they are told, incorporate it into their beliefs, and retract any older contradictory beliefs. Only atomic propositions or their negations are held as beliefs. This simplifies knowledge assimilation and makes consistency checking trivial.
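The assimilation policy just described — believe what you are told, retracting the older contradictory literal — can be sketched as follows; the `BeliefStore` class and its (sign, atom) literal representation are illustrative assumptions:

```python
# Sketch of AGENT-0-style belief assimilation. A belief is a literal,
# represented here as (sign, atom), e.g. (True, ('open', 'door')).

class BeliefStore:
    def __init__(self):
        self.beliefs = set()

    def tell(self, sign, atom):
        """Believe what we are told, retracting any older contradictory
        belief. Since beliefs are only literals, consistency checking is
        just a lookup for the opposite literal."""
        self.beliefs.discard((not sign, atom))  # drop older contradiction
        self.beliefs.add((sign, atom))

    def holds(self, sign, atom):
        return (sign, atom) in self.beliefs

bs = BeliefStore()
bs.tell(True, ('open', 'door'))    # told: the door is open
bs.tell(False, ('open', 'door'))   # told otherwise: old belief retracted
```

Restricting beliefs to literals is what keeps `tell` constant-time; richer belief languages would need genuine consistency checking here.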

Fariba Sadri - ICCL 08 Agent Examples 21 Agent-0 Commitments cmtd(agent, action), where the commitment is to agent. The set of commitments implicitly defines the future actions of the agent. Commitments are acted upon by executing the action when its time comes.

Fariba Sadri - ICCL 08 Agent Examples 22 Agent-0 Time Agents measure time as cycle number (number of cycle executions) and synchronise their cycle executions using a global clock. So the time of a committed-to action comes when the agent’s cycle number equals the cycle time embedded in the action description.

Fariba Sadri - ICCL 08 Agent Examples 23 Agent0 Cycle

Fariba Sadri - ICCL 08 Agent Examples 24 Agent0 Cycle Initialisation: initialises the Capabilities, Commitment rules, Beliefs, and Commitments. After that the agent is continually involved in:
– updating its beliefs,
– updating its commitments,
– honouring commitments whose time has come.
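One pass of the cycle above might be sketched as follows; the message and agent data structures, and the toy commitment rule, are assumptions for illustration, not Shoham's interpreter:

```python
# Sketch of one AGENT-0 cycle pass: update beliefs, update commitments,
# honour commitments whose time (a cycle number) has come.

def agent0_cycle(agent, messages, now):
    # 1. Update beliefs: agents trust INFORM messages.
    for msg in messages:
        if msg['type'] == 'inform':
            agent['beliefs'].add(msg['fact'])

    # 2. Update commitments: commitment rules fire on REQUEST messages
    #    whose mental condition holds against the current beliefs.
    for msg in messages:
        if msg['type'] == 'request':
            for rule in agent['commit_rules']:
                if rule(msg, agent['beliefs']):
                    agent['commitments'].append((msg['sender'], msg['action']))

    # 3. Honour commitments whose time equals the current cycle number.
    due = [c for c in agent['commitments'] if c[1]['time'] == now]
    agent['commitments'] = [c for c in agent['commitments'] if c not in due]
    return [action for _, action in due]

agent = {'beliefs': set(), 'commitments': [],
         'commit_rules': [lambda msg, beliefs: True]}  # toy rule: always commit
request = {'type': 'request', 'sender': 'a2',
           'action': {'time': 3, 'do': 'rotate(90)'}}
print(agent0_cycle(agent, [request], now=3))  # → [{'time': 3, 'do': 'rotate(90)'}]
```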

Fariba Sadri - ICCL 08 Agent Examples 25 Agent-0 Commitments Commitments are only to primitive actions. So the agent cannot commit to bringing about a state that requires any element of planning.

Fariba Sadri - ICCL 08 Agent Examples 26 Agent-0 Actions Private:
– can be anything
Communicative:
– Inform(t, a, fact)
– Request(t, a, action)
– Unrequest(t, a, action)
– Refrain(action)

Fariba Sadri - ICCL 08 Agent Examples 27 Agent-0 Actions Actions can be:
– Conditional: if mntlcond then action, e.g. if at time t you believe F holds at time t', then at time t inform a that F holds at t'.
– Unconditional.

Fariba Sadri - ICCL 08 Agent Examples 28 BDI/AgentSpeak(L) References:
A. Rao, AgentSpeak(L): BDI agents speak out in a logical language, Springer LNCS 1038, 1996.
A. Rao, M. Georgeff, An abstract architecture for rational agents, Proceedings of the 3rd International Conference on Principles of Knowledge Representation and Reasoning, KR'92, Boston, 1992.
R. Bordini et al., Programming MAS in AgentSpeak using Jason, Wiley, 2007.

Fariba Sadri - ICCL 08 Agent Examples 29 BDI/AgentSpeak(L) Motivations: BDI agents are “traditionally” specified in a modal logic with modal operators representing Beliefs, Desires and Intentions. Their implementations (e.g. PRS, dMARS), however, have typically simplified those specifications and used non-logical, procedural approaches. AgentSpeak is a programming language based on a restricted FOL. It attempts to provide operational and proof-theoretic semantics for PRS and dMARS (and thus, by a roundabout way, for BDI agents).

Fariba Sadri - ICCL 08 Agent Examples 30 BDI/AgentSpeak(L) Further motivations: to incorporate some practical reasoning:
– means-ends reasoning: deciding how to achieve goals,
– reaction to events: for example, when something unexpected happens,
– choice deliberation: deciding what we want to achieve (our intentions) from amongst our desires.

Fariba Sadri - ICCL 08 Agent Examples 31 BDI/AgentSpeak(L) Internal (Mental) State
– A set of beliefs (similar to Agent0 beliefs).
– A set of current desires (or goals), typically of the form !b, where b is a belief, interpreted as a desire for a state of the world in which b holds.
– A set of pending events, typically perceptions or messages, interpreted as belief updates (+b, -b) or as goals to be achieved (+!b); these include request messages from other agents, usually recorded as new belief events, perhaps as a new belief that the request has been made.
– A set of intentions (similar to Agent0 commitments).
– A plan library. A plan has a triggering condition (an event), a mental-state applicability condition, and a collection of sub-goals and actions (similar to ECA rules).

Fariba Sadri - ICCL 08 Agent Examples 32 AgentSpeak(L) Beliefs and Event Terms No modal operators.
Beliefs: a conjunction of ground literals, e.g. adjacent(room1, room2) & loc(room1) & ¬empty(room1)
Events: if b is an atomic belief then the following are event terms:
– !b represents an achievement goal, e.g. !loc(room2)
– ?b represents a test goal, e.g. ?empty(room1)
– +b, -b represent events of adding or deleting beliefs (events generated by messages)
– +!b, -!b
– +?b, -?b
An agent can have explicit goals, given by events.

Fariba Sadri - ICCL 08 Agent Examples 33 AgentSpeak Agent Cycle [Diagram: the agent sees the environment, generating events; events together with beliefs, desires and plans generate new intentions; executing the next step of some intention produces an action.]

Fariba Sadri - ICCL 08 Agent Examples 34 AgentSpeak Agent Cycle
– Notice external/internal changes.
– Update beliefs and record changes as events in the event store, e.g. +!location(robot, b), +location(waste, a).
– Choose an event (from the event store) or a desire (from the desire store) for which there is at least one plan.
– Select a plan; this becomes a new intention.
– Drop intentions that are no longer believed viable.
– Resume an intention: execute an action, or post a subgoal as a new goal event.
– Repeat the cycle.

Fariba Sadri - ICCL 08 Agent Examples 35 AgentSpeak Plans Each agent has its own repertoire of (primitive) actions and its own plan library. Plans are ECA rules of the form:
e : b1, …, bm <- h1; …; hk
where e is an event term, the bi are belief terms (b1, …, bm is called the context), and the hi are goals or (primitive) actions. Plans are used to respond to belief-update events and new goal events.
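Selecting a plan for an event splits into finding relevant plans (the triggering event matches) and applicable plans (the context holds in the beliefs). The sketch below uses ground string matching rather than unification, a deliberate simplification; all names are illustrative:

```python
# Sketch of AgentSpeak plan selection: relevant plans match the event;
# applicable plans additionally have their context hold in the beliefs.
# Real AgentSpeak unifies terms containing variables; ground strings
# are an illustrative simplification.

def relevant(plans, event):
    return [p for p in plans if p['trigger'] == event]

def applicable(plans, beliefs):
    return [p for p in plans if all(b in beliefs for b in p['context'])]

plans = [{
    'trigger': '+location(waste, a)',
    'context': ['location(robot, a)', 'location(bin, b)'],
    'body':    ['pick(waste)', '!location(robot, b)', 'drop(waste)'],
}]
beliefs = {'location(robot, a)', 'location(bin, b)'}
options = applicable(relevant(plans, '+location(waste, a)'), beliefs)
# The body of a selected option becomes a new intention.
print(options[0]['body'][0])  # → pick(waste)
```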

Fariba Sadri - ICCL 08 Agent Examples 36 AgentSpeak Plans Examples
+location(waste, X) : location(robot, X) & location(bin, Y)
<- pick(waste); !location(robot, Y); drop(waste).

Fariba Sadri - ICCL 08 Agent Examples 37 AgentSpeak Plans Examples
+location(waste, X) : location(robot, X) & location(bin, Y)
<- pick(waste); !location(robot, Y); drop(waste).
[Annotations: +location(waste, X) is the triggering event (the addition of a fact); location(robot, X) & location(bin, Y) is the context; pick(waste); !location(robot, Y); drop(waste) is the body of the plan.]

Fariba Sadri - ICCL 08 Agent Examples 38 AgentSpeak Plans Examples
+location(waste, X) : location(robot, X) & location(bin, Y)
<- pick(waste); !location(robot, Y); drop(waste).
The intended reading of this is very similar to event-condition-action rules (except that the action part is more sophisticated): on the event of noticing waste at X, if the robot is at X and the bin is at Y, then (the robot should) pick up the waste, make its location Y, and drop the waste.

Fariba Sadri - ICCL 08 Agent Examples 39 AgentSpeak Plans Examples
+!location(robot, X) : location(robot, X) <- true.
+!location(robot, X) : location(robot, Y) & not X=Y & adjacent(Y, Z) & not location(car, Z)
<- move(Y, Z); +!location(robot, X).

Fariba Sadri - ICCL 08 Agent Examples 40 AgentSpeak Plans Examples
+!location(robot, X) : location(robot, Y) & not X=Y & adjacent(Y, Z) & not location(car, Z)
<- move(Y, Z); +!location(robot, X).
The intended reading of this is similar to goal-reduction rules: to achieve a goal location(robot, X), …

Fariba Sadri - ICCL 08 Agent Examples 41 AgentSpeak Plan Example
+!quench_thirst : have_glass <- !have_soft_drink; fill_glass; drink
+!have_soft_drink : soft_drink_in_fridge <- open_fridge; get_soft_drink
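The thirst plans above show how achieving a goal reduces to sub-goals and primitive actions. Below is a sketch of that reduction, eagerly flattened for clarity (real AgentSpeak interleaves execution with the agent cycle); the plan encoding is an illustrative assumption:

```python
# Sketch of goal reduction with the thirst plans: achieving a goal posts
# its plan body, where !-prefixed steps are sub-goals reduced in turn.

plans = {
    '!quench_thirst':   ('have_glass',           ['!have_soft_drink', 'fill_glass', 'drink']),
    '!have_soft_drink': ('soft_drink_in_fridge', ['open_fridge', 'get_soft_drink']),
}

def reduce_goal(goal, beliefs):
    """Return the flat sequence of primitive actions achieving `goal`."""
    context, body = plans[goal]
    assert context in beliefs, f"no applicable plan for {goal}"
    actions = []
    for step in body:
        if step.startswith('!'):   # sub-goal: reduce recursively
            actions += reduce_goal(step, beliefs)
        else:                      # primitive action
            actions.append(step)
    return actions

beliefs = {'have_glass', 'soft_drink_in_fridge'}
print(reduce_goal('!quench_thirst', beliefs))
# → ['open_fridge', 'get_soft_drink', 'fill_glass', 'drink']
```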

Fariba Sadri - ICCL 08 Agent Examples 42 AgentSpeak Plans Some statements from Anand Rao: “Rules in a pure logic program are not context-sensitive as plans.” ????
– Situation calculus and its many descendants: state context
– Event calculus: temporal context
– Conditions/preconditions of a plan provide context

Fariba Sadri - ICCL 08 Agent Examples 43 AgentSpeak Plans Some statements from Anand Rao: “Rules execute successfully, returning a binding for unbound variables; however, execution of plans generates a sequence of ground actions that affect the environment.” Compare with abductive logic programs:

Fariba Sadri - ICCL 08 Agent Examples 44 location(robot, X) ← current_location(robot, Y) & ¬X=Y & adjacent(Y, Z) & ¬current_location(car, Z) & move(Y, Z) & location(robot, X).

Fariba Sadri - ICCL 08 Agent Examples 45 AgentSpeak Plans Some statements from Anand Rao: “In a pure logic program there is no difference between a goal in the body of a rule and the head of a rule. In an agent program the head consists of a triggering event, rather than a goal... allows both goal-directed and data-directed invocation of plans.” Compare with abductive logic programs:

Fariba Sadri - ICCL 08 Agent Examples 46 location(robot, X) ← current_location(robot, Y) & not X=Y & adjacent(Y, Z) & not current_location(car, Z) & move(Y, Z) & location(robot, X).
location(waste, X) & ¬X=bin → pick(waste) & drop(waste, bin)

Fariba Sadri - ICCL 08 Agent Examples 47 AgentSpeak Plans Some statements from Anand Rao: “While a goal is being queried, the execution of that query cannot be interrupted in a logic program. However, the plans in an agent program can be interrupted.” Compare with abductive logic programs run within an agent cycle.