Adaptive Reasoning for Cooperative Agents
Luís Moniz Pereira, Alexandre Pinto
Centre for Artificial Intelligence – CENTRIA, Universidade Nova de Lisboa
INAP'09, 5–7 November, Évora, Portugal
Summary - 1
– Explicit affirmation and negation, plus a third logic value of undefined, are useful in situations where decisions must be taken based on scarce, ambiguous, or contradictory information
– In a 3-valued setting, we consider agents that learn definitions for a target concept and its opposite, taking both positive and negative examples as instances of the two classes
Summary - 2
– A single agent exploring an environment can gather only so much information, which may not suffice to find good explanations
– A cooperative multi-agent strategy, where each agent explores part of the environment and shares its findings, provides better results
– We employ a distributed genetic algorithm framework, enhanced by a Lamarckian belief revision operator
– Agents communicate explanations, coded as belief chromosomes, by sharing them in a common pool
Summary - 3
– Another way of interpreting this communication is via agent argumentation
– When taking in all arguments to find a common ground or consensus, we may have to revise assumptions in each argument
– A collaborative viewpoint results: arguments are put together to find a 2-valued consensus on conflicting learnt concepts, within an evolving genetic pool, so as to identify "best" joint explanations to observations
Learning Positive and Negative Knowledge
– Autonomous agent: acquisition of information by means of experiments
– Experiment: execution of an action; evaluation of the results with respect to the achievement of a goal; positive and negative results
– Learning general rules on actions: distinction among actions with a positive, negative, or unknown/undefined outcome
2-valued vs. 3-valued Learning
– Two values: bottom-up generalization from instances, or top-down refinement from general classes
– Three values: learning a definition both for the concept and its opposite
[Figure: three diagrams (a, b, c) of regions of positive (+) and negative (-) examples]
Learning in a 3-valued Setting
– Extended Logic Programs (ELP): with explicit negation "¬A"
– Clauses of the form
    L0 ← L1, ..., Lm, not Lm+1, ..., not Lm+n
  where each Li can be either A or ¬A
– Explicit representation of negative information:
    ¬fly(X) ← penguin(X).
– Three logical values: true, false, unknown/undefined
Problem Definition
Given:
– a set P of possible ELP programs (the bias)
– a set E+ of positive examples
– a set E- of negative examples
– a consistent extended logic program B (background knowledge)
find an ELP P ∈ P such that:
– P ⊨ E+ and P ⊭ E- (completeness)
– P does not entail both L and ¬L, for any L ∈ E+ ∪ E- (consistency)
Intersection of Definitions
[Figure: overlapping extensions of the learned definitions p and ¬p over E+ and E-]
– Exceptions to the positive concept: negative examples
– Exceptions to the negative concept: positive examples
– Unseen atoms
– Unseen atoms which are both true and false are classified as unknown/undefined:
    p(X) ← p+(X), not ¬p(X).
    ¬p(X) ← p-(X), not p(X).
– If the concept is true and its opposite undefined, then it is classified as true:
    p(X) ← p+(X), undefined(¬p(X)).
    ¬p(X) ← p-(X), undefined(p(X)).
Training Set Atoms
– They must be classified according to the training set
– Default literals, representing non-abnormality conditions, are added to the rules:
    p(X) ← p+(X), not ab_p+(X), not ¬p(X).
    ¬p(X) ← p-(X), not ab_p-(X), not p(X).
Example: Knowledge
B:
  bird(a).     has_wings(a).
  jet(b).      has_wings(b).
  angel(c).    has_wings(c).    has_limbs(c).
  penguin(d).  has_wings(d).    has_limbs(d).
  dog(e).      has_limbs(e).
  cat(f).      has_limbs(f).
E+ = { flies(a) }    E- = { flies(d), flies(e) }
[Figure: extensions of flies+ (covering E+) and flies- (covering E-) over the individuals a–f]
Example: Learned Theory
  flies+(X) ← has_wings(X).
  flies-(X) ← has_limbs(X).
  flies(X) ← flies+(X), not ab_flies+(X), not ¬flies(X).
  ¬flies(X) ← flies-(X), not flies(X).
  ab_flies+(d).
  flies(X) ← flies+(X), undefined(¬flies(X)).
  ¬flies(X) ← flies-(X), undefined(flies(X)).
Generalizing the exceptions we obtain:
  ab_flies+(X) ← penguin(X).
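The learned theory can be checked with a small sketch. The following Python encoding (a minimal sketch assuming the background facts of the example, not the authors' implementation) reproduces the 3-valued classification of flies/1:

```python
# Background facts from the example, as sets of individuals.
has_wings = {"a", "b", "c", "d"}
has_limbs = {"c", "d", "e", "f"}
penguin = {"d"}  # generalized exception: ab_flies+(X) <- penguin(X)

def flies(x):
    """3-valued classification of flies(x): 'true', 'false', or 'undefined'."""
    pos = x in has_wings and x not in penguin  # flies+ rule, blocked by ab_flies+
    neg = x in has_limbs                       # flies- rule
    if pos and neg:
        return "undefined"  # concept and its opposite both derivable
    if pos:
        return "true"
    if neg:
        return "false"
    return "undefined"      # unseen atom: neither derivable
```

On the example individuals this yields flies(a) = true, flies(d) = false (the penguin exception), and flies(c) = undefined (the angel has both wings and limbs).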
Least General vs. Most General Solutions
– Bottom-up methods search from specific to general: Least General Solution (LGS)
  Examples: GOLEM (RLGG), CIGOL (Inverse Resolution)
– Top-down methods search from general to specific: Most General Solution (MGS)
  Examples: FOIL, Progol
Criteria for Choosing the Generality
– Risk that can derive from a classification error:
  high risk → LGS; low risk → MGS
– Confidence in the set of negative examples:
  high confidence → MGS; low confidence → LGS
Generality of Solutions
B:
  bird(X) ← sparrow(X).    mammal(X) ← cat(X).
  sparrow(a).  cat(b).  bird(c).  mammal(d).
E+ = { flies(a) }    E- = { flies(b) }
  flies_MGS(X) ← bird(X).       flies_LGS(X) ← sparrow(X).
  ¬flies_MGS(X) ← mammal(X).    ¬flies_LGS(X) ← cat(X).
17
beggar2beggar1 attacker1 attacker2 Of Beggars and Attackers
Example: Mixing LGS and MGS (1)
– Concept of attacker: maximize the concept and minimize its opposite:
    attacker1(X) ← attacker+_MGS(X), not ¬attacker1(X).
    ¬attacker1(X) ← attacker-_LGS(X), not attacker1(X).
– Concept of beggar (give only to those appearing to need it): minimize the concept and maximize its opposite:
    beggar1(X) ← beggar+_LGS(X), not ¬beggar1(X).
    ¬beggar1(X) ← beggar-_MGS(X), not beggar1(X).
Example: Mixing LGS and MGS (2)
– However, rejected beggars may turn into attackers; maximize the concept and minimize its opposite:
    beggar2(X) ← beggar+_MGS(X), not ¬beggar2(X).
    ¬beggar2(X) ← beggar-_LGS(X), not beggar2(X).
– The concepts can be used to minimize the risk when carrying a lot of money:
    run ← lot_of_money, attacker1(Y), not beggar2(Y).
    run ← give_money.
    give_money ← beggar1(Y).
    give_money ← attacker1(Y), beggar2(Y).
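Read operationally, the four decision rules can be sketched in Python. Here `attacker1`, `beggar1`, and `beggar2` stand for the learned concepts and are passed in as plain predicates; the function name and signature are illustrative, not the authors' code:

```python
def decide(y, attacker1, beggar1, beggar2, lot_of_money):
    """Return the set of conclusions ('run', 'give_money') for person y."""
    conclusions = set()
    # give_money <- beggar1(Y).   give_money <- attacker1(Y), beggar2(Y).
    if beggar1(y) or (attacker1(y) and beggar2(y)):
        conclusions.add("give_money")
    # run <- lot_of_money, attacker1(Y), not beggar2(Y).   run <- give_money.
    if (lot_of_money and attacker1(y) and not beggar2(y)) \
            or "give_money" in conclusions:
        conclusions.add("run")
    return conclusions
```

For instance, a person classified as attacker1 but not beggar2, met while carrying a lot of money, yields {"run"}; a beggar1 yields both "give_money" and, via run ← give_money, "run".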
Example: Mixing LGS and MGS (3)
– When carrying little money, one may prefer to risk being beaten up. Therefore one wants to relax attacker1, but not so much as to use attacker-_MGS:
    run ← little_money, attacker2(Y).
    attacker2(X) ← attacker+_LGS(X), not ¬attacker2(X).
    ¬attacker2(X) ← attacker-_LGS(X), not attacker2(X).
Abduction
– Consider the rule: a => b
– Deduction: from a, conclude b
– Abduction: knowing or observing b, assume a as its hypothetical explanation
– From theory + observations, find abductive models: the explanations for the observations
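The contrast between the two inference directions can be sketched propositionally; the rule encoding and the set of abducibles below are illustrative:

```python
# Rules as (head, body) pairs; the rule 'a => b' becomes ("b", ["a"]).
RULES = [("b", ["a"])]
ABDUCIBLES = {"a"}

def deduce(assumed, rules=RULES):
    """Deduction: conclude every head whose body holds under the assumptions."""
    return assumed | {h for h, body in rules if all(l in assumed for l in body)}

def abduce(observation, rules=RULES, abducibles=ABDUCIBLES):
    """Abduction: candidate sets of hypotheses that would explain the observation."""
    return [set(body) for h, body in rules
            if h == observation and all(l in abducibles for l in body)]
```

Deducing from {a} yields b; abducing the observation b yields the explanation {a}.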
Distributing Observations
– Code observations as Integrity Constraints (ICs): ← not some_observation
– Find abductive explanations for the observations
– Create several agents; give each the same base theory and a subset of the observations
– Each agent comes up with alternative abductive explanations for its own ICs; these need not be minimal sets of hypotheses
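The distribution step itself is simple to sketch; the round-robin split and the string encoding of ICs here are illustrative choices, not prescribed by the slides:

```python
def distribute_ics(ics, n_agents):
    """Round-robin split of the observation ICs among the agents.
    Each agent additionally receives the same base theory."""
    return [ics[i::n_agents] for i in range(n_agents)]
```

For example, three ICs split over two agents gives one agent the first and third constraints and the other agent the second.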
Choosing the Best Explanation
– "Brainstorming" is used for solving complex problems
– Each participant contributes by adding ideas (abducibles) to a common idea pool, shared by all
– Ideas are mixed, crossed, mutated, and selected
– Solutions arise from the pool by iterating this evolutionary process
– Our work is inspired by the evolution of alternative ideas and arguments to find collaborative solutions
Lamarckian Evolution - 1
– Lamarckian evolution = meme evolution; a "meme" is the cognitive equivalent of a gene
– In genetic programming, Lamarckian evolution has proven a powerful concept
– There are GAs that additionally include a logic-based Lamarckian belief revision operator, where assumptions are coded as memes
Lamarckian Evolution - 2
– Lamarckian operator (L-op) vs. Darwinian ones (D-ops)
– The L-op modifies chromosomes coding beliefs to improve fitness with experience, rather than randomly
– The L-op and D-ops play distinct roles:
  – the L-op is employed to bring chromosomes closer to a solution, by belief revision
  – D-ops randomly produce alternative belief chromosomes to deal with unencountered situations, by interchanging memes
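The contrast between the two kinds of operators can be sketched on belief chromosomes represented as dicts from meme to truth value. All names are illustrative, and the L-op here is a crude stand-in for logic-based belief revision:

```python
import random

def crossover(c1, c2):
    """D-op: each meme is inherited from either parent at random."""
    return {m: random.choice((c1[m], c2[m])) for m in c1}

def mutate(c, rate=0.1):
    """D-op: randomly flip memes with the given probability."""
    return {m: (not v) if random.random() < rate else v for m, v in c.items()}

def lamarck_revise(c, ics):
    """L-op: if some IC is violated, deterministically flip one meme
    that repairs it, guided by experience rather than by chance."""
    for ic in ics:
        if not ic(c):
            for m in c:
                revised = {**c, m: not c[m]}
                if ic(revised):
                    return revised
    return c
```

The D-ops explore the space blindly, while the L-op inspects which constraints fail and revises a belief accordingly, which is the division of labour described above.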
Specific Belief Evolution Method
– In traditional multi-agent problem solving, agents benefit from others' knowledge and experience by message-passing
– In our new method, knowledge and experience are coded as memes and exchanged by crossover
– Crucially, logic-based belief revision is used to modify belief assumptions (memes) based on individual agent experience
Fitness Functions
Various fitness functions can be used in belief revision. The simplest is:
    Fitness(c_i) = (n_i / n) / (1 + NC)
where:
– n_i is the number of ICs satisfied by chromosome c_i
– n is the total number of ICs
– NC is the number of contradictions depending on chromosome c_i
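In code, the formula is a one-liner (argument names are illustrative):

```python
def fitness(n_i, n, nc):
    """Fitness(c_i) = (n_i / n) / (1 + NC): the fraction of satisfied ICs,
    penalized by the number NC of contradictions the chromosome incurs."""
    return (n_i / n) / (1 + nc)
```

A chromosome satisfying 3 of 4 ICs with no contradictions scores 0.75, while one satisfying all 4 ICs but incurring one contradiction scores 0.5.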
Assumptions & Argumentation - 1
– Assumptions are coded as abducible literals in logic programs (LPs)
– Abducibles are packed together in chromosomes
– Evolutionary operators (crossover, mutation, revision, selection) are applied to the chromosomes
– This setting provides the means to search for an evolutionary consensus from the initial assumptions
Assumptions & Argumentation - 2
– The 3-valued contradiction removal presented before (with the undefined value) is superficial
– It removes the contradiction p(X) ∧ ¬p(X) by forcing a 3-valued semantics, without looking into the reasons why both hold
– The improvement relies on principles of argumentation: find the arguments supporting p(X), or ¬p(X), and change some of their assumptions
Collaborative Opposition
– The challenging environment of the Semantic Web is a 'place' for future intelligent systems to float in
– Learning in 2 values or in 3 values are both open possibilities
– Knowledge and reasoning will be shared and distributed
– Opposing arguments will surface from the agents
– We need to know how to reconcile opposing arguments
– Find a 2-valued consensus as much as possible; least-commitment 3-valued consensuses are not enough
Argumentation & Consensus - 1
– The non-trivial problem we addressed was that of defining 2-valued complete models consistent with Dung's 3-valued preferred maximal scenarios
– The resulting semantics, Approved Models, is a conservative extension of the well-known Stable Models semantics, in that every Stable Model is an Approved Model
Argumentation & Consensus - 2
– Approved Models are guaranteed to exist for every Normal Logic Program (NLP), whereas Stable Models are not
– Examples show that NLPs with no Stable Models can usefully model knowledge
– This guarantee is crucial when composing programs from knowledge of diverse sources
– The warrant of model existence is also crucial after external updating or self-updating of NLPs
Argumentation & Consensus - 3
– Start by merging all the opposing abductive arguments
– Draw conclusions from the program plus the single merged argument
– If contradictions arise: non-deterministically choose one assumption of the single argument and revise its truth value
– Iterating this finds the non-contradictory arguments
– The evolutionary method presented implements yet another mechanism to find consensual non-contradictory arguments
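The merge-and-revise iteration can be sketched as follows. Here `contradictory` stands in for the (unspecified) contradiction check over the program plus the merged argument, and the non-deterministic choice is replaced by a deterministic scan for the first revision that helps:

```python
def find_consensus(arguments, contradictory, max_rounds=100):
    """Merge all opposing abductive arguments (dicts of assumptions), then
    repeatedly revise the truth value of one assumption until no
    contradiction remains. Returns None if no single revision helps."""
    merged = {}
    for arg in arguments:              # step 1: merge into a single argument
        merged.update(arg)
    for _ in range(max_rounds):
        if not contradictory(merged):  # step 2: check the merged argument
            return merged
        for assumption in merged:      # step 3: revise one assumption
            revised = {**merged, assumption: not merged[assumption]}
            if not contradictory(revised):
                merged = revised
                break
        else:
            return None                # no single revision removes the clash
    return merged
```

For example, merging the arguments {p} and {q} under a constraint forbidding p ∧ q yields a consensus in which one of the two assumptions has been revised to false.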
Thank you for your attention!