Adaptive Reasoning for Cooperative Agents
Luís Moniz Pereira, Alexandre Pinto
Centre for Artificial Intelligence – CENTRIA, Universidade Nova de Lisboa
INAP’09, 5-7 November, Évora, Portugal

Summary - 1 Explicit affirmation and negation, plus a 3rd logic value of undefined, are useful in situations where decisions must be taken based on scarce, ambiguous, or contradictory information. In a 3-valued setting, we consider agents that learn definitions for a target concept and its opposite, taking both positive and negative examples as instances of the two classes.

Summary - 2 A single agent exploring an environment can gather only so much information, which may not suffice to find good explanations. A cooperative multi-agent strategy, where each agent explores part of the environment and shares its findings, provides better results. We employ a distributed genetic algorithm framework, enhanced by a Lamarckian belief revision operator. Agents communicate explanations, coded as belief chromosomes, by sharing them in a common pool.

Summary - 3 Another way of interpreting this communication is via agent argumentation. When taking in all arguments to find a common ground or consensus, we may have to revise assumptions in each argument. A collaborative viewpoint results: arguments are put together to find a 2-valued consensus on conflicting learnt concepts, within an evolving genetic pool, so as to identify the “best” joint explanations for the observations.

Learning Positive and Negative Knowledge
– Autonomous agent: acquisition of information by means of experiments
– Experiment: execution of an action; evaluation of the results with respect to the achievement of a goal; positive and negative results
– Learning general rules on actions: distinction among actions with a positive, negative, or unknown/undefined outcome

2-valued vs. 3-valued Learning
– Two values: bottom-up generalization from instances; top-down refinement from general classes
– Three values: learning a definition both for the concept and its opposite

Learning in a 3-valued Setting
– Extended Logic Programs (ELP): with explicit negation “¬A”
– Clauses of the form L0 ← L1, ..., Lm, not Lm+1, ..., not Lm+n, where each Li can be either A or ¬A
– Explicit representation of negative information: ¬fly(X) ← penguin(X).
– Three logical values: true, false, unknown or undefined

Problem definition
Given:
– a set 𝒫 of possible ELP programs (bias)
– a set E+ of positive examples
– a set E− of negative examples
– a consistent extended logic program B (background knowledge)
Use learning to find an ELP P ∈ 𝒫 such that:
– P ∪ B ⊨ E+ and P ∪ B ⊨ ¬E− (completeness)
– P ∪ B ⊭ ¬L for every L ∈ E+, and P ∪ B ⊭ L for every L ∈ E− (consistency)

Intersection of definitions
[Venn diagram: the learned definitions of p and ¬p over E+ and E−, with their intersection and the unseen atoms]
– Exceptions to the positive concept: negative examples
– Exceptions to the negative concept: positive examples

– Unseen atoms which are both true and false are classified as unknown or undefined:
p(X) ← p⁺(X), not ¬p(X).
¬p(X) ← p⁻(X), not p(X).
– If the concept is true and its opposite undefined, then it is classified as true:
p(X) ← p⁺(X), undefined(¬p(X)).
¬p(X) ← p⁻(X), undefined(p(X)).

Training set atoms
– They must be classified according to the training set
– Default literals, representing non-abnormality conditions, are added to the rules:
p(X) ← p⁺(X), not ab_p(X), not ¬p(X).
¬p(X) ← p⁻(X), not ab_¬p(X), not p(X).

Example: knowledge
B: bird(a).    has_wings(a).
   jet(b).     has_wings(b).
   angel(c).   has_wings(c).  has_limbs(c).
   penguin(d). has_wings(d).  has_limbs(d).
   dog(e).     has_limbs(e).
   cat(f).     has_limbs(f).
E+ = { flies(a) }    E− = { flies(d), flies(e) }
[Diagram: coverage of flies⁺ and flies⁻ over the individuals a–f, relative to E+ and E−]

Example: learned theory
flies⁺(X) ← has_wings(X).
flies⁻(X) ← has_limbs(X).
flies(X) ← flies⁺(X), not ab_flies⁺(X), not ¬flies(X).
¬flies(X) ← flies⁻(X), not flies(X).
ab_flies⁺(d).
flies(X) ← flies⁺(X), undefined(¬flies(X)).
¬flies(X) ← flies⁻(X), undefined(flies(X)).
Generalizing exceptions we obtain: ab_flies⁺(X) ← penguin(X).

Least General vs. Most General Solutions
– Bottom-up methods: search from specific to general, yielding the Least General Solution (LGS); e.g. GOLEM (RLGG), CIGOL (Inverse Resolution)
– Top-down methods: search from general to specific, yielding the Most General Solution (MGS); e.g. FOIL, Progol

Criteria for choosing the generality
– Risk that can derive from a classification error: high risk → LGS; low risk → MGS
– Confidence in the set of negative examples: high confidence → MGS; low confidence → LGS

Generality of Solutions
B: bird(X) ← sparrow(X).
   mammal(X) ← cat(X).
   sparrow(a). cat(b). bird(c). mammal(d).
E+ = { flies(a) }    E− = { flies(b) }
flies_MGS(X) ← bird(X).        flies_LGS(X) ← sparrow(X).
¬flies_MGS(X) ← cat(X).        ¬flies_LGS(X) ← mammal(X).

Of Beggars and Attackers
[Diagram: generality relations among the learned concepts attacker1, attacker2, beggar1, beggar2]

Example: Mixing LGS and MGS (1)
– Concept of attacker ⇒ maximize the concept and minimize its opposite:
attacker1(X) ← attacker⁺_MGS(X), not ¬attacker1(X).
¬attacker1(X) ← attacker⁻_LGS(X), not attacker1(X).
– Concept of beggar (give only to those appearing to need it) ⇒ minimize the concept and maximize its opposite:
beggar1(X) ← beggar⁺_LGS(X), not ¬beggar1(X).
¬beggar1(X) ← beggar⁻_MGS(X), not beggar1(X).

Example: Mixing LGS and MGS (2)
– However, rejected beggars may turn into attackers ⇒ maximize the concept and minimize its opposite:
beggar2(X) ← beggar⁺_MGS(X), not ¬beggar2(X).
¬beggar2(X) ← beggar⁻_LGS(X), not beggar2(X).
– These concepts can be used to minimize the risk when carrying a lot of money:
run ← lot_of_money, attacker1(Y), not beggar2(Y).
¬run ← give_money.
give_money ← beggar1(Y).
give_money ← attacker1(Y), beggar2(Y).

Example: Mixing LGS and MGS (3)
– When carrying little money, one may prefer to risk being beaten up. Therefore one wants to relax attacker1, but not so much as to use attacker⁻_MGS:
¬run ← little_money, attacker2(Y).
attacker2(X) ← attacker⁺_LGS(X), not ¬attacker2(X).
¬attacker2(X) ← attacker⁻_LGS(X), not attacker2(X).

Abduction
– Consider the rule: a => b
– Deduction: from a, conclude b
– Abduction: knowing or observing b, assume a as its hypothetical explanation
– From theory + observations, find abductive models: the explanations for the observations
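As an illustration only (not from the slides): a minimal Python sketch of propositional abduction over the rule a => b, where explanations for an observation are the subsets of assumed abducibles that make it derivable. All names here are hypothetical.

  from itertools import chain, combinations

  # Propositional rules: head derivable from any listed body (a conjunction of atoms).
  rules = {"b": [["a"]]}     # the slide's rule: a => b
  abducibles = {"a"}

  def derivable(atom, assumed):
      # Forward chaining from the assumed abducibles.
      known = set(assumed)
      changed = True
      while changed:
          changed = False
          for head, bodies in rules.items():
              if head not in known and any(all(l in known for l in body) for body in bodies):
                  known.add(head)
                  changed = True
      return atom in known

  def explanations(observation):
      # Every subset of abducibles that, if assumed, entails the observation.
      subsets = chain.from_iterable(combinations(abducibles, r) for r in range(len(abducibles) + 1))
      return [set(s) for s in subsets if derivable(observation, s)]

  print(explanations("b"))   # [{'a'}]: observing b, we abduce a as its explanation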

Distributing Observations
– Code observations as Integrity Constraints (ICs): <-- not some_observation
– Find abductive explanations for the observations
– Create several agents, giving each the same base theory and a subset of the observations
– Each agent comes up with alternative abductive explanations for its own ICs; these need not be minimal sets of hypotheses
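A small Python sketch, only illustrating the distribution step described above (the observation names and helper functions are hypothetical): observations are wrapped as ICs of the form "<-- not obs" and split round-robin among agents, each of which would then abduce explanations for its own slice.

  observations = ["wet_grass", "cold_air", "dark_sky"]   # example observations, not from the slides

  def as_integrity_constraint(obs):
      # The IC is violated unless the observation ends up explained.
      return f"<-- not {obs}"

  def distribute(obs_list, n_agents):
      # Round-robin split of the observations among the agents.
      slices = [[] for _ in range(n_agents)]
      for i, obs in enumerate(obs_list):
          slices[i % n_agents].append(as_integrity_constraint(obs))
      return slices

  for agent_id, ics in enumerate(distribute(observations, 2)):
      print(f"agent {agent_id}: {ics}")
  # agent 0: ['<-- not wet_grass', '<-- not dark_sky']
  # agent 1: ['<-- not cold_air']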

Choosing the Best Explanation
– “Brainstorming” is used for solving complex problems
– Each participant contributes by adding ideas (abducibles) to a common idea pool, shared by all
– Ideas are mixed, crossed, mutated, and selected
– Solutions arise from the pool by iterating this evolutionary process
– Our work is inspired by the evolution of alternative ideas and arguments to find collaborative solutions
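A generic Python sketch of the shared-pool iteration just described; the fitness, crossover and mutate operators are passed in as parameters, and all names are illustrative assumptions rather than the authors' implementation.

  import random

  def evolve_pool(pool, fitness, crossover, mutate, generations=50, keep=10):
      # One shared idea pool: mix, cross, mutate and select candidate explanations.
      for _ in range(generations):
          offspring = [mutate(crossover(random.choice(pool), random.choice(pool)))
                       for _ in range(len(pool))]
          pool = sorted(pool + offspring, key=fitness, reverse=True)[:keep]
      return pool[0]   # the current "best" joint explanation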

Lamarckian Evolution - 1
– Lamarckian evolution = meme evolution; a “meme” is the cognitive equivalent of a gene
– In genetic programming, Lamarckian evolution has proven a powerful concept
– There are GAs that additionally include a logic-based Lamarckian belief revision operator, where assumptions are coded as memes

Lamarckian Evolution - 2
– Lamarckian operator (L-op) vs. Darwinian ones (D-ops)
– The L-op modifies chromosomes coding beliefs to improve fitness with experience, rather than randomly
– L-op and D-ops play distinct roles:
– The L-op is employed to bring chromosomes closer to a solution, by belief revision
– The D-ops randomly produce alternative belief chromosomes to deal with unencountered situations, by interchanging memes
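To make the contrast concrete, a hedged Python sketch follows (hypothetical representation: a belief chromosome as a dict of boolean memes). The Darwinian operators recombine and flip memes at random; the Lamarckian operator flips exactly the assumptions blamed for violating integrity constraints.

  import random

  ABDUCIBLES = ["a1", "a2", "a3", "a4"]    # illustrative assumption names

  def crossover(c1, c2):
      # Darwinian: exchange memes of two chromosomes at a random point.
      point = random.randrange(1, len(ABDUCIBLES))
      return {**{k: c1[k] for k in ABDUCIBLES[:point]},
              **{k: c2[k] for k in ABDUCIBLES[point:]}}

  def mutate(c, rate=0.1):
      # Darwinian: flip memes at random, to cover unencountered situations.
      return {k: (not v if random.random() < rate else v) for k, v in c.items()}

  def lamarckian_revise(c, blamed):
      # Lamarckian: flip exactly the assumptions found responsible for IC violations,
      # moving the chromosome towards a solution by belief revision rather than chance.
      return {k: (not v if k in blamed else v) for k, v in c.items()}

  parent1 = {"a1": True, "a2": False, "a3": True, "a4": False}
  parent2 = {"a1": False, "a2": True, "a3": False, "a4": True}
  child = mutate(crossover(parent1, parent2))
  revised = lamarckian_revise(child, blamed={"a3"})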

Specific Belief Evolution Method
– In traditional multi-agent problem solving, agents benefit from others’ knowledge & experience by message-passing
– In our new method, knowledge & experience are coded as memes and exchanged by crossover
– Crucially, logic-based belief revision is used to modify belief assumptions (memes) based on individual agent experience

Fitness Functions
Various fitness functions can be used in belief revision. The simplest is:
Fitness(c_i) = (n_i / n) / (1 + NC)
where
– n_i is the number of ICs satisfied by chromosome c_i
– n is the total number of ICs
– NC is the number of contradictions depending on chromosome c_i
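The slide's formula translated directly into Python; how the IC-satisfaction and contradiction counts are computed for a chromosome is left abstract here, so the argument names are illustrative.

  def fitness(n_satisfied, n_total_ics, n_contradictions):
      # Fitness(c_i) = (n_i / n) / (1 + NC)
      return (n_satisfied / n_total_ics) / (1 + n_contradictions)

  # Example: a chromosome satisfying 3 of 4 ICs while inducing 1 contradiction.
  print(fitness(3, 4, 1))   # 0.375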

Assumptions & Argumentation - 1
– Assumptions are coded as abducible literals in LPs
– Abducibles are packed together in chromosomes
– Evolutionary operators (crossover, mutation, revision, selection) are applied to chromosomes
– This setting provides the means to search for an evolutionary consensus from the initial assumptions

Assumptions & Argumentation - 2
– The 3-valued contradiction removal presented before (with the undefined value) is superficial
– It removes the contradiction p(X) & ¬p(X) by forcing a 3-valued semantics, not by looking into the reasons why both hold
– The improvement relies on principles of argumentation: find the arguments supporting p(X), or ¬p(X), and change some of their assumptions

Collaborative Opposition
– The challenging environment of the Semantic Web is a ‘place’ for future intelligent systems to float in
– Learning in 2 or in 3 values are both open possibilities
– Knowledge & Reasoning are shared and distributed
– Opposing arguments will surface from agents
– We need to know how to reconcile opposing arguments
– Find a 2-valued consensus as much as possible
– Least-commitment 3-valued consensuses are not enough

Argumentation & Consensus - 1
– The non-trivial problem we addressed was that of defining 2-valued complete models, consistent with the 3-valued preferred maximal scenarios of Dung
– The resulting semantics, Approved Models, is a conservative extension of the well-known Stable Models semantics, in that every SM is an Approved Model

Argumentation & Consensus - 2
– Approved Models are guaranteed to exist for every Normal Logic Program (NLP), whereas SMs are not
– Examples show that NLPs with no SMs can usefully model knowledge
– The guarantee is crucial when composing programs with knowledge from diverse sources
– The warrant of model existence is also crucial after externally- or self-updating NLPs

Argumentation & Consensus - 3
– Start by merging all opposing abductive arguments
– Draw conclusions from the program + the single merged argument
– If contradictions arise: non-deterministically choose one assumption of the single argument and revise its truth value
– Iterating finds the non-contradictory arguments
– The evolutionary method presented implements yet another mechanism to find consensual non-contradictory arguments
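A procedural Python sketch of the iteration just outlined, under the assumption that conclusions are literals where a "-" prefix marks explicit negation; the consequences function standing for "program + merged argument" is supplied by the caller and is purely illustrative.

  import random

  def contradictory(conclusions):
      # A set of literals is contradictory if it contains both L and -L.
      return any(("-" + c) in conclusions for c in conclusions if not c.startswith("-"))

  def find_consensus(merged_argument, consequences, max_steps=100):
      # merged_argument: dict assumption -> truth value (the union of the agents' arguments).
      # While the conclusions are contradictory, non-deterministically flip one assumption.
      arg = dict(merged_argument)
      for _ in range(max_steps):
          if not contradictory(consequences(arg)):
              return arg                      # a non-contradictory consensual argument
          flipped = random.choice(list(arg))
          arg[flipped] = not arg[flipped]
      return None                             # no consensus within the step budget

  # Toy usage: assuming 'a' yields both p and -p, so the procedure revises 'a'.
  def toy_consequences(arg):
      return {"p", "-p"} if arg["a"] else {"q"}

  print(find_consensus({"a": True}, toy_consequences))   # {'a': False}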

Thank you for your attention!