Pattern-directed inference systems


Pattern-directed inference systems
We can describe any problem domain in terms of two types of knowledge:
Declarative knowledge: facts about the domain, which can be expressed as assertions (statements in some language). Examples:
  (Today is a beautiful day)      -- assertion in natural language
  (= (Today) (beautiful-day))     -- assertion in a FOL language
  (Today beautiful-day)           -- assertion in the OPS language
Procedural knowledge: represents dependencies among facts, and can be expressed as "if-then" rules. Example:
  (> temperature 65°F) and (no-rain) => beautiful-day
In an AI program, the declarative component is called a "knowledge base", and the procedural component is called a "rule base".
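As a minimal sketch (not part of the slides), the fact base and the beautiful-day rule above could be encoded in Python roughly as follows; the tuple encoding and the helper name are assumptions made purely for illustration.

```python
# Hypothetical encoding: facts are tuples in a set; a rule is a function that
# adds its conclusion whenever its conditions hold in the fact base.
facts = {("temperature", 70), ("no-rain",)}

def beautiful_day_rule(facts):
    """(> temperature 65) and (no-rain) => beautiful-day"""
    warm_enough = any(f[0] == "temperature" and f[1] > 65 for f in facts)
    if warm_enough and ("no-rain",) in facts:
        facts.add(("beautiful-day",))

beautiful_day_rule(facts)
print(("beautiful-day",) in facts)   # True
```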

Pattern-directed inference systems: basic architecture
[Diagram] The inference engine sits between two stores: an assertion/fact base (Fact 1, Fact 2, ..., Fact n, represented as sentences in some KR language; also called the data base) and a rule base (Rule 1, Rule 2, ..., Rule k, which define what follows from the facts in the KB). New assertions are added to the fact base and new rules to the rule base.

Knowledge representation: expressing knowledge in a form understandable by a computer.
Choosing an appropriate language to represent knowledge is the first and most important step in building an intelligent system. Each language has two sides:
Syntax: defines how to build sentences (formulas).
Semantics: defines the meaning and the truth value of sentences by connecting them to facts in the outside world.
If the syntax and the semantics of a language are precisely defined, we call that language a logic. Logic = Syntax + Semantics.

Connection between sentences in a KR language and facts in the outside world
[Diagram] Sentences in the internal representation entail other sentences; facts in the outside world follow from other facts; each sentence corresponds to a fact.
There must exist an exact correspondence between the sentences entailed by the logic and the facts that follow from other facts in the outside world. If this requirement does not hold, the logic will be unpredictable and irrational.

Entailment and inference
Entailment defines whether sentence A is true with respect to a given KB (we denote it as KB |== A). Inference defines whether sentence A can be derived from the KB (we denote it as KB |-- A).
Assume the KB contains only true sentences representing explicit knowledge about some domain. To find all consequences (or derived knowledge) that follow from that KB, the system must "run" an inference procedure. If this inference procedure generates only entailed sentences, it is called sound; if it generates all entailed sentences, it is called complete. Ideally, we want an inference procedure to be both sound and complete. In many cases, however, this may not be possible (for example, if the KB is infinite), and we are willing to drop the requirement for completeness.
If A is derivable from a KB by a sound inference procedure i, i.e. KB |--i A, then the derivation process is called a proof of A.

Knowledge Representation Languages
To formally express knowledge we need a language which is expressive and concise, unambiguous and context independent, and computationally efficient. Among the languages that at least partially fulfill these requirements are:
Propositional Logic (PL). It can represent only facts, which are true or false.
First-Order Logic (FOL). It can represent objects, facts and relations between objects and facts, which are true or false.
Temporal Logic. This is an extension of FOL which takes time into account.
Probabilistic Logic. Limits the representation to facts only, but these facts can be uncertain as well as true or false. To express uncertainty, it attaches a degree of belief (0..1) to each fact.
Truth Maintenance Logic. Represents facts only, but these can be unknown and uncertain as well as true and false.
Fuzzy Logic. Represents facts whose degree of truth can be explicitly defined.

Interpretation and model of a representation
Interpretation establishes a connection between sentences of a selected KR language and facts from the outside world.
Example: Assume that A, B and C are sentences of our logic. If we refer to the "Moon world", A may have the interpretation "The moon is green", B -- "There are people on the moon", and C -- "It is sunny and nice on the moon, and people there eat a lot of green cheese".
Given an interpretation, a sentence can be assigned a truth value. In PL, for example, it can be true or false, where true sentences represent facts that hold in the outside world, and false sentences represent facts that do not hold.
Any world in which a sentence is true under a particular interpretation is called a model of that sentence under that interpretation.

Sentences may have different interpretations depending on the meaning given to them.
Example: Consider the English language. Let the word "Pope" be understood as "microfilm", and the word "Denver" as "the pumpkin on the left side of the porch". Under this interpretation, the sentence "The Pope is in Denver" means "the microfilm is in the pumpkin".
Assume that we can enumerate all possible interpretations in all possible worlds that can be given to the sentences of our representation. Then:
A sentence is called valid (or a tautology) if it is true in all these interpretations. Example: (A v not A) is always true, even if we refer to the "Moon world" ("There are people on the moon or there are no people on the moon").
A sentence is called satisfiable if it is true in some interpretation. Example: "The snow is red and the day is hot" is satisfiable if this is the case on Mars.
A sentence is called unsatisfiable if it is not true in any interpretation.

Propositional logic
To define any logic, we must address the following three questions:
1. How to make sentences (i.e. define the syntax).
2. How to relate sentences to facts (i.e. define the semantics).
3. How to generate implicit consequences (i.e. define the proof theory).
From the syntactic point of view, sentences are finite sequences of primitive symbols. Therefore, we must first define the alphabet of PL. It consists of the following classes of symbols:
propositional variables A, B, C, ...
logical constants true and false
parentheses (, )
logical connectives &, v, <=>, =>, not

Well-formed formulas (wff)
Given the alphabet of PL, a wff (or sentence, or proposition) is inductively defined as:
a propositional variable;
A v B, where A, B are sentences;
A & B, where A, B are sentences;
A => B, where A, B are sentences;
A <=> B, where A, B are sentences;
not A, where A is a sentence;
true is a sentence;
false is a sentence.
The following precedence hierarchy is imposed on the logical operators (highest first): not, &, v, =>, <=>. Composite statements are evaluated with respect to this hierarchy, unless parentheses are used to alter it.
Example: ((A & B) => C) is equivalent to A & B => C, while (A & (B => C)) is a different sentence.
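For illustration only (not from the slides), sentences can be encoded as nested tuples in Python; the constructor names below are assumptions, chosen to mirror the inductive definition and the precedence example above.

```python
# Hypothetical encoding of wffs as nested tuples: ("connective", subformula, ...)
Not = lambda a: ("not", a)
And = lambda a, b: ("&", a, b)
Or  = lambda a, b: ("v", a, b)
Imp = lambda a, b: ("=>", a, b)
Iff = lambda a, b: ("<=>", a, b)

# A & B => C parses as ((A & B) => C), since & binds tighter than =>
s1 = Imp(And("A", "B"), "C")
# (A & (B => C)) is a different sentence
s2 = And("A", Imp("B", "C"))
print(s1)   # ('=>', ('&', 'A', 'B'), 'C')
print(s2)   # ('&', 'A', ('=>', 'B', 'C'))
```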

The semantics of PL is defined by specifying the interpretation of wffs and the meaning of the logical connectives.
If a sentence consists of only one propositional symbol, then it may have any possible interpretation. Depending on the interpretation, the sentence can be either true or false (i.e. it is satisfiable).
If a sentence is a logical constant (true or false), then its interpretation is fixed: true has as its interpretation a true fact; false has as its interpretation a false fact.
If a sentence is composite (complex), then its meaning is derived from the meaning of its parts as follows (such semantics is called compositional, and this way of specifying it is known as the truth-table method):

P  Q  |  not P  |  P & Q  |  P v Q  |  P => Q  |  P <=> Q
F  F  |    T    |    F    |    F    |    T     |     T
F  T  |    T    |    F    |    T    |    T     |     F
T  F  |    F    |    F    |    T    |    F     |     F
T  T  |    F    |    T    |    T    |    T     |     T
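A minimal sketch of this compositional evaluation (assuming the tuple encoding of sentences sketched earlier; the function name evaluate is an assumption, not part of the slides):

```python
# Compute the truth value of a sentence from the truth values of its parts,
# given an interpretation assigning True/False to each propositional variable.
def evaluate(sentence, interp):
    if isinstance(sentence, str):                 # propositional variable
        return interp[sentence]
    op = sentence[0]
    if op == "not":
        return not evaluate(sentence[1], interp)
    a, b = (evaluate(s, interp) for s in sentence[1:])
    if op == "&":   return a and b
    if op == "v":   return a or b
    if op == "=>":  return (not a) or b
    if op == "<=>": return a == b
    raise ValueError(f"unknown connective: {op}")

print(evaluate(("=>", "P", "Q"), {"P": True, "Q": False}))   # False, as in the table
```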

Example: using a truth table, determine the validity of P & (Q & R) <=> (P & Q) & R

P  Q  R  |  Q & R  |  P & (Q & R)  |  (P & Q) & R  |  P & (Q & R) <=> (P & Q) & R
F  F  F  |    F    |       F       |       F       |              T
T  F  F  |    F    |       F       |       F       |              T
F  T  F  |    F    |       F       |       F       |              T
T  T  F  |    F    |       F       |       F       |              T
F  F  T  |    F    |       F       |       F       |              T
T  F  T  |    F    |       F       |       F       |              T
F  T  T  |    T    |       F       |       F       |              T
T  T  T  |    T    |       T       |       T       |              T

This formula is valid, because it is true in all possible interpretations of its propositional variables. It is known as the "associativity of conjunction" law.
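The same check can be done by brute-force enumeration of all 2^3 interpretations; this is a minimal sketch (not from the slides), with the helper name valid chosen for illustration:

```python
from itertools import product

def valid(formula, num_vars):
    """A formula is valid if it is true under every assignment to its variables."""
    return all(formula(*vals) for vals in product([False, True], repeat=num_vars))

# P & (Q & R) <=> (P & Q) & R, with <=> expressed as equality of truth values
assoc = lambda P, Q, R: (P and (Q and R)) == ((P and Q) and R)
print(valid(assoc, 3))   # True: associativity of conjunction
```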

Inference: this is the process of building a proof of a sentence. Inference is carried out by inference rules, which allow one formula to be inferred from a set of other formulas. For example, A |-- B means that B can be derived from A.
An inference procedure is sound if and only if (iff) its inference rules are sound. In turn, an inference rule is sound iff its conclusion is true whenever the rule's premises are true.
PL inference rules are sound (this can be proven by means of truth tables); therefore they can be used for building proofs of other formulas.

PL inference rules
Modus ponens: if sentence A and implication A => B hold, then B also holds, i.e. (A, A => B) |-- B. Example: let A mean "lights are off", A => B mean "if lights are off, then there is no one in the office", and B mean "there is no one in the office".
AND-elimination: if conjunction A1 & A2 & ... & An holds, then any of its conjuncts also holds, i.e. A1 & A2 & ... & An |-- Ai.
AND-introduction: if a list of sentences holds, then their conjunction also holds, i.e. A1, A2, ..., An |-- (A1 & A2 & ... & An).
OR-introduction: if Ai holds, then any disjunction containing Ai also holds, i.e. Ai |-- (A1 v ... v Ai v ... v An).
Double-negation elimination: a doubly negated formula entails the formula itself, i.e. (not (not A)) |-- A.
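As noted on the previous slide, the soundness of such a rule can be verified with a truth table: the conclusion must be true in every interpretation in which all premises are true. A minimal sketch (not from the slides) checking modus ponens this way:

```python
from itertools import product

# Modus ponens is sound iff B is true whenever both A and A => B are true.
sound = all(B
            for A, B in product([False, True], repeat=2)
            if A and ((not A) or B))       # keep only rows where both premises hold
print(sound)   # True
```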

PL inference rules (cont.)
Unit resolution: (A v B), not B |-- A. Note that (A v B) is equivalent to (not B => A), i.e. unit resolution is a modification of modus ponens.
Resolution: (A v B), (not B v C) |-- (A v C). Note that (A v B) is equivalent to (not A => B) and (not B v C) is equivalent to (B => C); by chaining these and eliminating the intermediate conclusion B, we get (not A => C), i.e. (A v C).
The soundness of each of these rules can be checked by means of the truth-table method. Once the soundness of a rule has been established, it can be used for building proofs. Proofs are sequences of applications of inference rules, starting with the sentences initially contained in the KB.
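A minimal sketch of the resolution rule itself (not from the slides), with clauses represented as frozensets of literals; the encoding ("A", True) for A and ("A", False) for not A is an assumption made for illustration:

```python
def resolve(clause1, clause2):
    """Return all resolvents obtainable from two clauses."""
    resolvents = []
    for name, sign in clause1:
        if (name, not sign) in clause2:                      # complementary pair
            rest = (clause1 - {(name, sign)}) | (clause2 - {(name, not sign)})
            resolvents.append(frozenset(rest))
    return resolvents

c1 = frozenset({("A", True), ("B", True)})     # A v B
c2 = frozenset({("B", False), ("C", True)})    # not B v C
print(resolve(c1, c2))     # one resolvent: {('A', True), ('C', True)}, i.e. A v C
```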

Complexity of propositional inference
1. Propositional inference is complete, i.e. any valid formula can be proved by means of the truth-table method.
2. However, the truth table may have as many as 2^N rows, where N is the number of propositional variables in the KB, so building such a table takes time proportional to 2^N. Deciding the satisfiability of a PL formula is NP-complete (and deciding validity is co-NP-complete).
3. Inference rules in PL are monotonic. That is, if KB1 |-- A, then KB1 U KB2 |-- A. Here, KB2 can even be contradictory to KB1, which makes the monotonicity of PL rules a major representation problem.
4. Inference rules in PL are local, i.e. they depend only on their premises (this is a consequence of the monotonicity property). This, in turn, can make inference much more efficient in practice, because only a small number of propositions are involved in each inference step.

Horn formulas
Although inference in PL takes exponential time in the worst case, for one special class of PL formulas there exists a polynomial-time inference procedure. These formulas are called Horn formulas, and they have the following form:
A1 & A2 & ... & An => B, where A1, A2, ..., An, B are positive literals.
A literal is a propositional variable or its negation; a positive literal is an unnegated one.
If the KB can be represented as a collection of Horn formulas, then by repeatedly applying modus ponens we can infer all conclusions that follow from it.
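A minimal sketch of this forward-chaining procedure (not from the slides; the rule encoding and function name are assumptions for illustration):

```python
# Horn rules are (premises, conclusion) pairs; facts are plain strings.
# Repeatedly apply modus ponens until no new conclusion can be added.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

rules = [(["A", "B"], "C"),     # A & B => C
         (["C"], "D")]          # C => D
print(forward_chain({"A", "B"}, rules))   # {'A', 'B', 'C', 'D'}
```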

Pattern-matching and unification
Assume that the KB consists of Horn formulas whose literals are not simple propositional variables or constants, but assertions such as:
(Robot Robbie)
(Robot ?x) => (Can_reason ?x), where ?x is called a pattern variable.
We can still apply modus ponens in this case if our inference procedure is augmented with a pattern-matching facility. It allows the pattern (Robot ?x) to be "matched" against the datum (Robot Robbie). Note that the match must be propagated to the right-hand side of the implication, making (Can_reason Robbie) the conclusion of this inference step.
In some cases, there is a need to match two patterns (rather than a pattern and a datum). This requires a special procedure, called unification, which finds all values of the pattern variables that make the two patterns identical.
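A minimal sketch of one-way pattern matching (not from the slides; representing assertions as tuples and naming the function match are assumptions for illustration):

```python
# Match a pattern containing "?x"-style variables against ground data,
# returning a dictionary of variable bindings, or None if matching fails.
def match(pattern, datum, bindings=None):
    bindings = dict(bindings or {})
    if isinstance(pattern, str) and pattern.startswith("?"):   # pattern variable
        if pattern in bindings and bindings[pattern] != datum:
            return None
        bindings[pattern] = datum
        return bindings
    if isinstance(pattern, tuple):
        if not isinstance(datum, tuple) or len(pattern) != len(datum):
            return None
        for p, d in zip(pattern, datum):
            bindings = match(p, d, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == datum else None

b = match(("Robot", "?x"), ("Robot", "Robbie"))
print(b)                            # {'?x': 'Robbie'}
print(("Can_reason", b["?x"]))      # the propagated conclusion: ('Can_reason', 'Robbie')
```

Full unification, which matches two patterns that may both contain variables, extends this idea but is not sketched here.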