Agents that reason logically Tuomas Sandholm Carnegie Mellon University Computer Science Department

Agents that reason logically
Logic:
- a formal language in which knowledge can be expressed
- a means of carrying out reasoning in such a language
Knowledge base (KB): a set of sentences
- background knowledge
- TELL'ed sentences

function KB-AGENT(percept) returns an action
  static: KB, a knowledge base
          t, a counter, initially 0, indicating time
  TELL(KB, MAKE-PERCEPT-SENTENCE(percept, t))
  action ← ASK(KB, MAKE-ACTION-QUERY(t))
  TELL(KB, MAKE-ACTION-SENTENCE(action, t))
  t ← t + 1
  return action
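Below is a minimal Python sketch of the same agent loop, assuming a hypothetical KnowledgeBase class and make_* helper constructors (TELL/ASK on the slide are abstract KB operations, not a specific library); the ask method is stubbed.

```python
# Minimal knowledge-based agent skeleton mirroring the KB-AGENT pseudocode above.
# `KnowledgeBase` and the make_* helpers are illustrative stand-ins, not a real library.

class KnowledgeBase:
    def __init__(self, background_knowledge=()):
        self.sentences = list(background_knowledge)  # background knowledge + TELL'ed sentences

    def tell(self, sentence):
        """Add a sentence to the KB."""
        self.sentences.append(sentence)

    def ask(self, query):
        """Return an action the KB entails for this query (stubbed here)."""
        return "NoOp"  # a real agent would run an inference procedure over self.sentences

def make_percept_sentence(percept, t):
    return ("percept", percept, t)

def make_action_query(t):
    return ("what-action?", t)

def make_action_sentence(action, t):
    return ("did-action", action, t)

def kb_agent_step(kb, percept, t):
    """One iteration of KB-AGENT: TELL the percept, ASK for an action, TELL the action."""
    kb.tell(make_percept_sentence(percept, t))
    action = kb.ask(make_action_query(t))
    kb.tell(make_action_sentence(action, t))
    return action, t + 1

kb = KnowledgeBase(background_knowledge=["initial facts go here"])
action, t = kb_agent_step(kb, percept="stench", t=0)
```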

Syntax and semantics
[Diagram: in the representation, sentences entail sentences (syntax); in the world, facts follow from facts; semantics connects sentences to the facts they represent.]
"KB entails α": KB ⊨ α
"α is derived from KB by inference procedure i": KB ⊢i α
An inference procedure that generates only entailed sentences is called sound (truth-preserving).
Proof = a record of the operation of a sound inference procedure.
Proof theory specifies the sound inference steps for a logic.
An inference procedure is complete if it can find a proof for any entailed sentence.

Inference
Interpretations: "The pope is in Denver" could be interpreted with Pope = a microfilm and Denver = the pumpkin on the porch.
A sentence is true under a particular interpretation if the state of affairs it represents is the case.
A sentence is valid (a tautology, necessarily true) if it is true under all possible worlds, i.e. regardless of what it is supposed to mean and regardless of the state of affairs in the universe being described. E.g. A ∨ ¬A
A sentence is satisfiable if there is some interpretation of some world for which it is true. E.g. A ∧ B (satisfiable by setting A = True, B = True)
Unsatisfiable: e.g. A ∧ ¬A

Language, ontological commitment (what exists in the world), and epistemological commitment (what an agent believes about facts):

Language            | Ontological commitment           | Epistemological commitment
Propositional logic | Facts                            | True/false/unknown
First-order logic   | Facts, objects, relations        | True/false/unknown
Temporal logic      | Facts, objects, relations, times | True/false/unknown
Probability theory  | Facts                            | Degree of belief 0…1
Fuzzy logic         | Degree of truth                  | Degree of belief 0…1

Propositional Logic (PL): Syntax
Sentence → AtomicSentence | ComplexSentence
AtomicSentence → True | False | P | Q | R | …
ComplexSentence → ( Sentence ) | Sentence Connective Sentence | ¬Sentence
Connective → ∧ | ∨ | ⇒ | ⇔
True, False are logic constants; P, Q, R, … are propositional symbols.
∧ is conjunction (and'ed together); ∨ is disjunction (or'ed together).
Precedence: ¬, ∧, ∨, ⇒, ⇔
E.g. ¬P ∨ Q ∧ R ⇒ S is equivalent to ((¬P) ∨ (Q ∧ R)) ⇒ S
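As an illustration (not part of the slides), the grammar can be mirrored in code by representing atomic sentences as strings and complex sentences as nested tuples whose first element names the connective; the constructor names below are hypothetical.

```python
# Hypothetical encoding of the propositional-logic grammar:
# AtomicSentence -> a string ("P", "Q", ...) or the booleans True/False;
# ComplexSentence -> a tuple whose first element names the connective.

def Not(s):           return ("not", s)
def And(a, b):        return ("and", a, b)
def Or(a, b):         return ("or", a, b)
def Implies(a, b):    return ("implies", a, b)
def Iff(a, b):        return ("iff", a, b)

# The precedence example from the slide, fully parenthesized:
# ((not P) or (Q and R)) implies S
example = Implies(Or(Not("P"), And("Q", "R")), "S")
print(example)
```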

Propositional Logic: Semantics Truth table defines the semantics
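The truth table itself is a figure on the original slide. Here is a small Python sketch (illustrative, using the nested-tuple encoding from the previous sketch) that evaluates sentences in a model and prints the defining table for the binary connectives.

```python
import itertools

# Illustrative semantics: evaluate a sentence (nested-tuple encoding) in a model,
# and print the defining truth table of the binary connectives.

def holds(sentence, model):
    """Truth value of `sentence` in `model` (a dict mapping atoms to booleans)."""
    if isinstance(sentence, bool):
        return sentence
    if isinstance(sentence, str):
        return model[sentence]
    op, *args = sentence
    v = [holds(a, model) for a in args]
    if op == "not":     return not v[0]
    if op == "and":     return v[0] and v[1]
    if op == "or":      return v[0] or v[1]
    if op == "implies": return (not v[0]) or v[1]
    if op == "iff":     return v[0] == v[1]
    raise ValueError(f"unknown connective: {op!r}")

print("P      Q      P∧Q    P∨Q    P⇒Q    P⇔Q")
for p, q in itertools.product([True, False], repeat=2):
    m = {"P": p, "Q": q}
    cells = [holds((op, "P", "Q"), m) for op in ("and", "or", "implies", "iff")]
    print("  ".join(f"{str(x):5}" for x in [p, q, *cells]))
```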

Validity and inference
Truth tables can be used for inference: if the sentence is true in every row, then the sentence is valid.
This can be used for machine inference by building a truth table for Premises ⇒ Conclusions and checking all rows.
E.g. ((P ∨ H) ∧ ¬H) ⇒ P
Slow, so we need more powerful inference rules…
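A brute-force validity check of the slide's example by enumerating all rows of the truth table (a sketch; the helper names are illustrative):

```python
import itertools

# Truth-table check of the slide's example: ((P or H) and not H) implies P.
# A sentence is valid iff it is true in every row of its truth table.

def implies(a, b):
    return (not a) or b

def example_is_valid():
    for p, h in itertools.product([True, False], repeat=2):
        premises = (p or h) and (not h)
        if not implies(premises, p):
            return False          # found a row where the sentence is false
    return True

print(example_is_valid())  # True, so P is entailed by (P or H) and (not H)
```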

Inference rules in propositional logic
Resolution: from α ∨ β and ¬β ∨ γ, infer α ∨ γ.
E.g. to prove that P follows from (P ∨ H) and ¬H, we require only one application of the resolution rule, with α as P, β as H, and γ empty.

Proving soundness of inference rules for propositional logic: a truth table can demonstrate the soundness of the resolution inference rule. An inference rule is sound if the conclusion is true in all cases where the premises are true.
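The same soundness check can be done mechanically: the sketch below enumerates all truth assignments to α, β, γ and verifies that whenever both resolution premises hold, the conclusion holds.

```python
import itertools

# Soundness of resolution, checked by enumeration: whenever (a or b) and
# ((not b) or c) are both true, the resolvent (a or c) is also true.

def resolution_is_sound():
    for a, b, c in itertools.product([True, False], repeat=3):
        premises_hold = (a or b) and ((not b) or c)
        if premises_hold and not (a or c):
            return False
    return True

print(resolution_is_sound())  # True
```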

Complexity of propositional inference
The truth table method needs to check 2^n rows for any proof involving n propositional symbols.
Satisfiability is NP-complete [Cook 1971].
3SAT: does there exist an assignment s.t. (x1 ∨ x5 ∨ x6) ∧ (x2 ∨ x5 ∨ x6) ∧ … ? (each literal may also be negated)
Most instances may be easy.
Monotonicity: when we add new sentences to the KB, all the sentences entailed by the original KB are still entailed. Propositional logic (and first-order logic) are monotonic. Monotonicity allows local inference rules. Probability theory is not monotonic.

Complexity of propositional inference: a tractable special case
A class of sentences that allows polynomial-time inference in propositional logic:
Horn sentence: P1 ∧ P2 ∧ … ∧ Pn ⇒ Q, where the Pi's and Q are non-negated.
Inference procedure: apply Modus Ponens whenever possible, until no more inferences are possible.
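A sketch of that inference procedure as forward chaining over Horn rules (the rule representation, pairs of premises and conclusion, is an illustrative assumption, not from the slides):

```python
# Forward chaining for Horn sentences: apply Modus Ponens until nothing new
# can be derived. Rules are (premises, conclusion) pairs; facts have no premises.

def forward_chain(rules, query):
    """Return True if `query` follows from the Horn rules by repeated Modus Ponens."""
    known = {concl for prems, concl in rules if not prems}
    changed = True
    while changed:
        changed = False
        for prems, concl in rules:
            if concl not in known and all(p in known for p in prems):
                known.add(concl)          # Modus Ponens: all premises known, add conclusion
                changed = True
    return query in known

rules = [
    ((), "P"),             # fact P
    ((), "H"),             # fact H
    (("P", "H"), "Q"),     # P and H => Q
    (("Q",), "R"),         # Q => R
]
print(forward_chain(rules, "R"))  # True
```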

Models (the dark regions in the Venn diagrams on the original slide) are those parts of the world where the sentence is true, i.e. those assignments of {True, False} to the propositions under which it holds. A sentence α is entailed by a KB if the models of the KB are all models of α.

Another method for inference in propositional logic: model finding
Postulate ¬(Premises ⇒ Conclusions) and try to find a model; if there is none, the implication is valid and the conclusions follow from the premises.

Applications of model finding
- Logic, theorem proving (e.g. the Robbins algebra problem)
- Planning (e.g. SATPLAN)
- Boolean circuits
- Satisfiability checking
- Constraint satisfaction
- Vision interpretation [e.g. Reiter & Mackworth 89]

Model finding algorithms

Davis-Putnam procedure [1960]
E.g. for 3SAT: is there an assignment s.t. (p1 ∨ p3 ∨ p4) ∧ (¬p1 ∨ p2 ∨ p3) ∧ … ?
Backtrack when some clause becomes empty.
Unit propagation (for variable & value ordering): if some clause has only one literal left, assign that variable the value that satisfies the clause (we never need to check the other branch).
[Figure: backtracking search tree branching on p1, p2, p3, p4, with True/False labels on the branches.]
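A compact sketch of a Davis-Putnam / DPLL-style search with unit propagation (not the original 1960 procedure verbatim); encoding clauses DIMACS-style as lists of signed integers is an illustrative choice.

```python
# Davis-Putnam / DPLL-style model finding with unit propagation (sketch).
# A formula is a list of clauses; a clause is a list of nonzero ints,
# where +i means variable i and -i means its negation.

def simplify(clauses, literal):
    """Assign `literal` True: drop satisfied clauses, shorten the rest."""
    out = []
    for clause in clauses:
        if literal in clause:
            continue                      # clause satisfied
        reduced = [l for l in clause if l != -literal]
        out.append(reduced)               # may become empty => conflict
    return out

def dpll(clauses, assignment=()):
    # Unit propagation: a one-literal clause forces that literal's value.
    while True:
        if any(len(c) == 0 for c in clauses):
            return None                   # empty clause: backtrack
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        clauses = simplify(clauses, lit)
        assignment = assignment + (lit,)
    if not clauses:
        return assignment                 # all clauses satisfied: a model
    lit = clauses[0][0]                   # naive branching choice
    for choice in (lit, -lit):
        result = dpll(simplify(clauses, choice), assignment + (choice,))
        if result is not None:
            return result
    return None

# (p1 or p3 or p4) and (not p1 or p2 or p3)
print(dpll([[1, 3, 4], [-1, 2, 3]]))
```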

A helpful observation for the Davis-Putnam procedure
P1 ∧ P2 ∧ … ∧ Pn ⇒ Q (Horn)
is equivalent to ¬(P1 ∧ P2 ∧ … ∧ Pn) ∨ Q (Horn)
is equivalent to ¬P1 ∨ ¬P2 ∨ … ∨ ¬Pn ∨ Q (Horn clause)
Theorem: If a propositional theory consists only of Horn clauses (i.e., clauses that have at most one non-negated variable) and unit propagation does not result in an explicit contradiction (i.e., Pi and ¬Pi for some Pi), then the theory is satisfiable.
Proof: on the next slide.
…so the Davis-Putnam algorithm does not need to branch on variables that occur only in Horn clauses.

Proof of the theorem
Assume the theory is Horn and that unit propagation has completed (without contradiction). We can remove all the clauses that were satisfied by the assignments that unit propagation made. From the unsatisfied clauses, we remove the variables that were assigned values by unit propagation. The remaining theory has the following two types of clauses, which contain unassigned variables only:
¬P1 ∨ ¬P2 ∨ … ∨ ¬Pn ∨ Q   and   ¬P1 ∨ ¬P2 ∨ … ∨ ¬Pn
Each remaining clause has at least two variables (otherwise unit propagation would have applied to the clause). Therefore, each remaining clause has at least one negated variable. Therefore, we can satisfy all remaining clauses by assigning each remaining variable to False.

Variable ordering heuristic for the Davis-Putnam procedure [Crawford & Auton AAAI-93]
Heuristic: pick a non-negated variable that occurs in a non-Horn clause (more than one non-negated variable) with a minimal number of non-negated variables.
Motivation: this is effectively a "most constrained first" heuristic if we view each non-Horn clause as a "variable" that has to be satisfied by setting one of its non-negated variables to True. In that view, the branching factor is the number of non-negated variables the clause contains.
Q: Why is branching constrained to non-negated variables?
A: We can ignore any negated variables in the non-Horn clauses because
– whenever any one of the non-negated variables is set to True, the clause becomes redundant (satisfied), and
– whenever all but one of the non-negated variables are set to False, the clause becomes Horn.
Variable ordering heuristics can make several orders of magnitude difference in speed.
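A sketch of this branching heuristic on the same signed-integer clause encoding (tie-breaking and other details the slide leaves open are chosen arbitrarily here):

```python
# Sketch of the Crawford & Auton-style branching heuristic: among clauses that
# are non-Horn (more than one non-negated literal), take one with the fewest
# non-negated literals and branch on one of its non-negated variables.
# Clause representation: list of nonzero ints, +i non-negated, -i negated.

def pick_branch_variable(clauses):
    non_horn = [c for c in clauses if sum(1 for l in c if l > 0) > 1]
    if not non_horn:
        return None                      # only Horn clauses left: no branching needed
    most_constrained = min(non_horn, key=lambda c: sum(1 for l in c if l > 0))
    return next(l for l in most_constrained if l > 0)

print(pick_branch_variable([[1, -2, 3], [-1, 4], [2, 5, -6, 7]]))  # -> 1
```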

“Order parameter” for 3SAT [Mitchell, Selman, Levesque AAAI-92]
Order parameter = #clauses / #variables
This predicts:
– satisfiability
– hardness of finding a model
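For illustration only, a hypothetical generator of random 3SAT instances at a given clauses-to-variables ratio (the cited AAAI-92 paper fixes its own generation model; the sampling details below are assumptions):

```python
import random

# Hypothetical random 3SAT generator: n_vars variables, round(ratio * n_vars)
# clauses, each over three distinct variables, each negated with probability 1/2.

def random_3sat(n_vars, ratio, rng=random):
    clauses = []
    for _ in range(round(ratio * n_vars)):
        variables = rng.sample(range(1, n_vars + 1), 3)
        clauses.append([v if rng.random() < 0.5 else -v for v in variables])
    return clauses

# Instances become harder or easier as the clauses/variables ratio is varied.
print(random_3sat(n_vars=20, ratio=4.3))
```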

Generality of the order parameter
The results seem quite general across model-finding algorithms.
Other constraint satisfaction problems have order parameters as well.

…but the complexity peak does not occur under all ways of generating the 3SAT instances

GSAT [Selman, Levesque, Mitchell AAAI-92] (a local search algorithm for model finding)
Incomplete (unless restarted a lot).
[Plot: average total number of flips vs. max-climbs for 3SAT instances with 215 clauses.]
Greediness is not essential, as long as climbs and sideways moves are preferred over downward moves.
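A sketch of the GSAT idea: start from a random assignment, greedily flip the variable whose flip satisfies the most clauses (sideways moves allowed), and restart when stuck. Parameter defaults and tie-breaking are illustrative, not taken from the paper.

```python
import random

# GSAT sketch. Clauses are lists of signed ints (+v means v, -v means not v).

def num_satisfied(clauses, assignment):
    return sum(any((l > 0) == assignment[abs(l)] for l in c) for c in clauses)

def gsat(clauses, n_vars, max_restarts=10, max_flips=1000, rng=random):
    for _ in range(max_restarts):
        assignment = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if num_satisfied(clauses, assignment) == len(clauses):
                return assignment                      # model found
            def score_after_flip(v):
                assignment[v] = not assignment[v]
                s = num_satisfied(clauses, assignment)
                assignment[v] = not assignment[v]
                return s
            best = max(range(1, n_vars + 1), key=score_after_flip)
            assignment[best] = not assignment[best]    # greedy (or sideways) move
    return None                                        # incomplete: may miss a model

print(gsat([[1, 2], [-1, 2], [-2, 3]], n_vars=3))
```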

Restarting vs. Escaping

BREAKOUT algorithm [Morris AAAI-93]
Initialize all variables Pi randomly
UNTIL the current state is a solution
  IF the current state is not a local minimum
  THEN make any local change that reduces the total cost (i.e. flip one Pi)
  ELSE increase the weights of all unsatisfied clauses by one
Incomplete, but very efficient on large (easy) satisfiable problems.
Reason for incompleteness: the cost increase at the current local optimum spills over to other candidate solutions because they share unsatisfied clauses.
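A sketch of the BREAKOUT idea on the same clause encoding (the choice among improving flips and the step limit are illustrative assumptions):

```python
import random

# BREAKOUT sketch: hill-climbing on the weighted count of unsatisfied clauses;
# at a local minimum, raise the weight of every unsatisfied clause by one.
# Clauses are lists of nonzero ints (+v means v, -v means not v); weights start at 1.

def unsatisfied(clauses, assignment):
    return [i for i, c in enumerate(clauses)
            if not any((l > 0) == assignment[abs(l)] for l in c)]

def breakout(clauses, n_vars, max_steps=10000, rng=random):
    weights = [1] * len(clauses)
    assignment = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}

    def cost():
        return sum(weights[i] for i in unsatisfied(clauses, assignment))

    for _ in range(max_steps):
        current = cost()
        if current == 0:
            return assignment                          # all clauses satisfied
        improving = []
        for v in range(1, n_vars + 1):                 # try flipping each variable
            assignment[v] = not assignment[v]
            if cost() < current:
                improving.append(v)
            assignment[v] = not assignment[v]
        if improving:                                  # not a local minimum
            v = rng.choice(improving)
            assignment[v] = not assignment[v]
        else:                                          # local minimum: reweight
            for i in unsatisfied(clauses, assignment):
                weights[i] += 1
    return None                                        # incomplete: may give up

print(breakout([[1, 2], [-1, 2], [-2, 3]], n_vars=3))
```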

Summary of the algorithms we covered for inference in propositional logic
- Truth table method
- Inference rules
- Model finding algorithms
  – Davis-Putnam (systematic backtracking)
    - early backtracking when a clause becomes empty
    - unit propagation
    - variable (& value?) ordering heuristics
  – GSAT
  – BREAKOUT

Propositional logic is too weak a representational language
- Too many propositions to handle, and the truth table has 2^n rows. E.g. in the wumpus world, the simple rule "don't go forward if the wumpus is in front of you" requires 64 rules (16 squares x 4 orientations for the agent).
- Hard to deal with change: propositions might be true at some times but not at others. We need a proposition Pi^t for each time step, because one should not always forget what held in the past (e.g. where the agent came from); but we don't know the number of time steps, and we need time-dependent versions of the rules.
- Hard to identify "individuals", e.g. Mary, 3.
- Cannot directly talk about properties of individuals or relations between individuals, e.g. Tall(bill).
- Generalizations and patterns cannot easily be represented, e.g. "all triangles have 3 sides."