Modeling Belief Reasoning in Multi-Agent Systems* Thomas R. Ioerger Department of Computer Science Texas A&M University *funding provided by a MURI grant through DoD/AFOSR

Motivation

Many interactions in collaborative MAS require reasoning about the beliefs of others
There are no efficient, complete inference procedures for modal logics of belief
Need a practical way of maintaining models of other agents' beliefs
– want it to work much like JARE
– correct representation of "unknown" is a must
New issue: how do you know what others believe?
– various reasons of different strength

Approach

Define a hierarchy of justification types
– including: rules, defaults, persistence
Also incorporate observability
– a major source of info about others' beliefs
Represent truth-values explicitly (T, F, ?)
Update cycle (given current beliefs, senses...)
BOA - Beliefs of Other Agents
– like JARE: syntax, binding envs, API, query, assert
– forward-chaining
– single level of nesting: (bel joe (open door))
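The core data structure above can be sketched in a few lines. This is a hypothetical illustration, not the actual BOA API: a belief base with explicit truth-values T/F/? and a single level of nesting, where an absent entry is explicitly unknown rather than false.

```python
UNKNOWN = "?"

class BeliefBase:
    def __init__(self):
        # maps (believer, proposition) -> "T", "F", or "?"
        self.beliefs = {}

    def assert_belief(self, agent, prop, value):
        assert value in ("T", "F", UNKNOWN)
        self.beliefs[(agent, prop)] = value

    def query(self, agent, prop):
        # absent entries are explicitly unknown, not false
        return self.beliefs.get((agent, prop), UNKNOWN)

kb = BeliefBase()
kb.assert_belief("joe", ("open", "door"), "T")   # like (bel joe (open door))
print(kb.query("joe", ("open", "door")))         # T
print(kb.query("sam", ("open", "door")))         # ?
```

The explicit "?" is what distinguishes this from a closed-world Prolog-style store: not believing (open door) is different from believing (not (open door)).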

Justifications

Rule types (with strengths for resolving conflicts):
8 - assertions (e.g. from shell, perceptions, messages)
7 - facts (static, never change truth value)
6 - direct observation (self)
5 - effects of actions (for whoever is aware of it...)
4 - inferences (by any agent)
3 - observability (of others)
2 - persistence ("memory": certain beliefs persist)
1 - default assumptions (given no other evidence)
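Conflict resolution by strength can be sketched as follows (illustrative code, not BOA itself): each candidate conclusion carries the strength of the rule type that produced it, and the strongest one determines the truth-value.

```python
# Strengths from the justification hierarchy above.
STRENGTH = {
    "assertion": 8, "fact": 7, "observation": 6, "effect": 5,
    "inference": 4, "observability": 3, "persistence": 2, "default": 1,
}

def resolve(candidates):
    """candidates: list of (rule_type, truth_value) pairs for one
    proposition; returns the truth-value backed by the strongest
    justification."""
    rule_type, value = max(candidates, key=lambda c: STRENGTH[c[0]])
    return value

# A default leaves the wumpus's status unknown, but an inference
# concludes it is alive: inference (strength 4) beats default (1).
print(resolve([("default", "?"), ("inference", "T")]))  # T
```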

BOA Syntax

(infer (bel ?a (not (tank empty ?car)))          ; consequent
       (bel ?a (running ?car)))                  ; antecedent
(persist (bel ?a (light-on ?room)))
(obs ?a (light-on ?r) (in ?a ?r) (light-on ?r))
(effect (running ?c) (do ?a (start ?c)) (have ?a (keys ?c)))   ; context
(default (unknown (wumpus alive)))
(init (val (num-arrows) 3))                      ; (val ...) is a function
(infer (can-shoot) (val (num-arrows) ?x) (> ?x 0))
  ; believer assumed to be 'self'; (> ?x 0) is a procedural attachment
(obs (bel ?a (whether (light-on ?rm))) (in ?a ?rm))
(fact (bel archer (value-of (num-arrows))))
(fact (bel archer (whether (can-shoot))))

Prioritized Inference

What conclusions can be drawn from the KB?
Update cycle (hence forward chaining):
– KB' = update(KB, senses, action?, justification rules)
If multiple rules relevant to a predicate can fire, want the strongest to determine the truth-value
Must control order of firing to avoid premature conclusions
Semantics based on prioritized logic programs
– Brewka & Eiter; Sakama & Inoue; Delgrande & Schaub
Sort predicates by antecedent dependencies
– fire all rules for the least-dependent predicate first
– no circularities allowed
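The stratification step above amounts to a topological sort of the predicate dependency graph (which must be acyclic, per the no-circularities restriction). A minimal sketch with hypothetical predicate names, using Python's standard library:

```python
from graphlib import TopologicalSorter

# predicate -> set of predicates appearing in the antecedents of its rules
# (illustrative dependencies, following the num-arrows example)
deps = {
    "num-arrows": set(),
    "can-shoot": {"num-arrows"},
    "light-on": set(),
    "bel-light-on": {"light-on"},   # observability rule depends on light-on
}

# least-dependent predicates come first in the firing order
order = list(TopologicalSorter(deps).static_order())

# Firing loop: for each predicate in this order, fire all of its rules,
# then resolve any conflicts by justification strength before moving on.
for pred in order:
    pass  # fire_rules_for(pred) in a real system

print(order)
```

With this ordering, num-arrows is always settled before can-shoot fires, so the strongest justification for each predicate is known before anything downstream depends on it.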

Queries in BOA

You get back a Vector of JareEnv's with variable bindings for alternative solutions (as usual):
– (query (threat enemy-unit-17))
– (query (val (target enemy-unit-17) ?target))
– (query (bel sam (light-on room-1)))
– (query (bel joe (light-on ?r)))
– (query (bel joe (not (light-on room-1))))
– (query (bel joe (whether (light-on room-1))))
Can't use a variable for the agent name:
– (query (bel ?a (has-weapon ?a))) does not work!
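The query behavior, including the restriction that the believer must be ground, can be sketched like this (hypothetical helper, not the JARE/BOA API): variables may appear inside the believed proposition, as in (bel joe (light-on ?r)), and each solution comes back as a binding environment.

```python
def query_bel(kb, agent, pattern):
    """kb: dict mapping (agent, (pred, arg)) -> truth value.
    pattern: (pred, arg) where arg may be a '?var'.
    Returns one binding env per solution, like a Vector of JareEnv's."""
    if agent.startswith("?"):
        # mirrors BOA: (query (bel ?a ...)) does not work
        raise ValueError("believer must be a constant, not a variable")
    envs = []
    for (a, (pred, arg)), tv in kb.items():
        if a == agent and pred == pattern[0] and tv == "T":
            if pattern[1].startswith("?"):
                envs.append({pattern[1]: arg})   # bind the variable
            elif pattern[1] == arg:
                envs.append({})                  # ground match, empty env
    return envs

kb = {("joe", ("light-on", "room-1")): "T",
      ("joe", ("light-on", "room-2")): "T"}
print(query_bel(kb, "joe", ("light-on", "?r")))
# one env per alternative solution: [{'?r': 'room-1'}, {'?r': 'room-2'}]
```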

Integration into CAST (?)

Useful for writing plans that depend on what others believe
Challenges:
– interaction with JARE and variable binding in conditions
Preliminary experiments:
– method 1: separate JARE and BOA KBs
  - beliefs can't be used as conditions in IF, WHILE...; use bq-if
– method 2: replace JARE completely
  - efficiency? assert/retract is used for many things in CAST

(task inform-others-of-loc (?enemy)
  (seq (bupdate)
       (foreach ((agent ?ag))
         (if (cond (gunner ?ag) (not (radio-silence)))
             (bq-if (bel ?ag (unknown (val (loc ?enemy))))
                    (bq-if (val (loc ?enemy) ?loc)
                           (seq (send ?ag (val (loc ?enemy) ?loc))
                                (bassert (bel ?ag (val (loc ?enemy) ?loc))))))))))

Concluding Remarks

Ryan's experience:
– useful for implementing Proactive Information Exchange in CAST-PM (Master's thesis online)
– awkward to have to say that everything persists!
Reflections on belief reasoning:
– not as expressive as modal logic, but efficient
– no nested beliefs
– the real issue is managing the various reasons for beliefs about others' beliefs (observability, actions, inference, defaults, persistence...)