Knowledge-Based Systems
Artificial Intelligence, Chapter 17 (Part 2/2)
Biointelligence Lab, School of Computer Sci. & Eng., Seoul National University

Page 2 === Rule-Based Expert Systems
- Expert systems: one of the most successful applications of AI, reasoning with facts and rules. "AI programs that achieve expert-level competence in solving problems by bringing to bear a body of knowledge" [Feigenbaum, McCorduck & Nii 1988].
- Expert systems vs. knowledge-based systems.
- Rule-based expert systems are often based on reasoning with propositional-logic Horn clauses.

Page 3 === Rule-Based Expert Systems (2)
- Structure of expert systems:
  - Knowledge base: consists of predicate-calculus facts and rules about the subject at hand.
  - Inference engine: consists of all the processes that manipulate the knowledge base to deduce information requested by the user.
  - Explanation subsystem: analyzes the structure of the reasoning performed by the system and explains it to the user.

Page 4 === Rule-Based Expert Systems (3)
- Knowledge acquisition subsystem: checks the growing knowledge base for possible inconsistencies and incomplete information.
- User interface: consists of some kind of natural-language processing system or a graphical user interface with menus.
- "Knowledge engineer": usually a computer scientist with AI training, who works with an expert in the field of application to represent the relevant knowledge of the expert in a form that can be entered into the knowledge base.

Page 5 === Rule-Based Expert Systems (4)
- Example: a loan officer in a bank must "decide whether or not to grant a personal loan to an individual," given a set of facts and rules.

Page 6 === Rule-Based Expert Systems (5)
- To prove OK, the inference engine searches for an AND/OR proof tree, using either backward or forward chaining.
- AND/OR proof tree: the root node is OK, the leaf nodes are facts, and the root and leaves are connected through the rules.
- Using the preceding rules in backward chaining, the user's goal, to establish OK, can be achieved either by proving both BAL and REP or by proving each of COLLAT, PYMT, and REP. Applying the other rules, as shown, yields other sets of nodes to be proved.

Page 7 === Rule-Based Expert Systems (6) n By backward-chaining
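The backward-chaining search over the AND/OR tree can be sketched in Python. The two OK rules are the ones stated on the preceding slide; the subsidiary rules deriving COLLAT, PYMT, and REP from APP, INC, and RATING are illustrative assumptions, not taken from the slides.

```python
# Minimal backward chainer for the loan example (a sketch).
# Each rule is (head, body): the head holds if every body atom can be proved.
RULES = [
    ("OK", ("BAL", "REP")),              # from the slide
    ("OK", ("COLLAT", "PYMT", "REP")),   # from the slide
    ("COLLAT", ("APP",)),                # assumed subsidiary rules
    ("PYMT", ("INC",)),
    ("REP", ("RATING",)),
]

def prove(goal, facts):
    """True if `goal` follows from `facts` via the rules (AND/OR search)."""
    if goal in facts:                     # leaf of the AND/OR tree: a known fact
        return True
    for head, body in RULES:              # OR over rules whose head matches the goal
        if head == goal and all(prove(sub, facts) for sub in body):  # AND over the body
            return True
    return False

print(prove("OK", {"APP", "INC", "RATING"}))  # the collateral route
print(prove("OK", {"BAL"}))                   # BAL alone is not enough
```

The recursion mirrors the proof tree: each goal is an OR node over matching rules, and each rule body is an AND node over its antecedents.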

Page 8 === Rule-Based Expert Systems (7)
- In many applications, the system has access only to uncertain rules, and the user may not be able to answer questions with certainty.
- MYCIN [Shortliffe 1976]: diagnoses bacterial infections.

Page 9 === Rule-Based Expert Systems (8)
- Consulting system: attempts to answer a user's query by asking questions about the truth of propositions that the user might know about. Backward chaining through the rules is used to reach askable questions.
- If a user were to "volunteer" information, bottom-up forward chaining through the rules could be used in an attempt to connect to the proof tree already built.
- The ability to give explanations for a conclusion is very important for acceptance of expert-system advice; the proof tree is used to guide the explanation-generation process.
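The bottom-up propagation of volunteered facts can be sketched as a forward chainer that runs the rules to a fixed point. The rule set reuses the loan example; as before, the subsidiary rules are assumptions added for illustration.

```python
# Forward chaining sketch: propagate volunteered facts bottom-up until
# nothing new can be concluded. Rules are (body, head) pairs.
RULES = [
    ({"BAL", "REP"}, "OK"),
    ({"COLLAT", "PYMT", "REP"}, "OK"),
    ({"APP"}, "COLLAT"),     # assumed subsidiary rules, not from the slide
    ({"RATING"}, "REP"),
    ({"INC"}, "PYMT"),
]

def forward_chain(facts):
    known = set(facts)
    changed = True
    while changed:                         # iterate to a fixed point
        changed = False
        for body, head in RULES:
            if head not in known and body <= known:
                known.add(head)            # fire the rule
                changed = True
    return known

print(sorted(forward_chain({"APP", "INC", "RATING"})))
```

If the derived set comes to include a node of the partially built proof tree, the volunteered information has connected to it.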

Page 10 === Rule-Based Expert Systems (9)
- PROSPECTOR [Duda, Gaschnig & Hart 1979; Campbell et al. 1982]: reasons about ore deposits. Example rule: "If there is a pre-intrusive, through-going fault system, then there is (5, 0.7) a regional environment favorable for a porphyry copper deposit."
- The numbers (0.75 and 0.5 in MYCIN; 5 and 0.7 in PROSPECTOR) are ways to represent the certainty or strength of a rule.
- The numbers are used by these systems in computing the certainty of conclusions.

Page 11 === Rule Learning
- Inductive rule learning: creates new rules about a domain, not derivable from any previous rules. Example: neural networks.
- Deductive rule learning: enhances the efficiency of a system's performance by deducing additional rules from previously known domain rules and facts. Example: EBG (explanation-based generalization).
- Propositions in the loan example:
  OK – the loan should be approved
  COLLAT – the collateral for the loan is satisfactory
  PYMT – the applicant is able to make the loan payments
  REP – the applicant has a good financial reputation
  APP – the appraisal on the collateral is sufficiently greater than the loan amount
  RATING – the applicant has a good credit rating
  INC – the applicant's income exceeds his expenses
  BAL – the applicant has an excellent balance sheet

Page 12 === Learning Propositional Calculus Rules
- Train rules from a given training set: seek a set of rules that covers only positive instances (positive instance: OK = 1; negative instance: OK = 0). From the training set, we want to induce rules whose consequent is OK.
- We can make a rule more specific by adding an atom to its antecedent, so that it covers fewer instances. Cover: if the antecedent of a rule has value True for an instance in the training set, we say that the rule covers that instance.
- Adding a rule makes the system using these rules more general.
- Searching for a set of rules can be computationally difficult, so here we use a "greedy" method called separate-and-conquer.
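The "cover" relation defined above is easy to make concrete: a rule covers a 0/1 instance exactly when every atom in its antecedent is 1 in that instance. The row below is a hypothetical instance invented for illustration, since the slides' training table is not reproduced here.

```python
# "Cover" as defined on the slide: a rule covers a training instance when
# every atom in its antecedent has value 1 (True) in that instance.
def covers(antecedent, instance):
    return all(instance[a] == 1 for a in antecedent)

# Hypothetical training row: attribute -> 0/1, with OK as the class label.
instance = {"BAL": 1, "RATING": 1, "APP": 0, "INC": 1, "OK": 1}

print(covers(["BAL", "RATING"], instance))  # both atoms are 1
print(covers(["APP"], instance))            # APP is 0
```

An empty antecedent covers every instance, which is exactly why the learner starts from it and specializes.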

Page 13 === Learning Propositional Calculus Rules (2)
- Separate and conquer:
  - First attempt to find a single rule that covers only positive instances: start with a rule that covers all instances, and gradually make it more specific by adding atoms to its antecedent.
  - Gradually add rules until the entire set of rules covers all and only the positive instances.
- Trained rules can be simplified using pruning operations, and noise-tolerant modifications help minimize the risk of overfitting.

Page 14 === Learning Propositional Calculus Rules (3)
- Example: the loan officer in a bank.
  - Start with a provisional rule with an empty antecedent, which covers all instances.
  - Add an atom so that the rule covers fewer negative instances, working toward covering only positive ones.
  - Decide which atom should be added.

Page 15 === Learning Propositional Calculus Rules (4)
- Select the atom yielding the largest fraction of covered instances that are positive. So we select BAL, yielding the provisional rule with BAL as the first atom of the antecedent.

Page 16 === Learning Propositional Calculus Rules (5)
- The rule covers the positive instances 3, 4, and 7, but also covers the negative instance 1, so we select another atom to make the rule more specific.
- We have already decided that the first component in the antecedent is BAL, so we keep it and consider which atom to add next. We select RATING because its score is based on a larger sample.

Page 17 === Learning Propositional Calculus Rules (6)
- We need more rules to cover positive instance 6. To learn the next rule, eliminate from the table all of the positive instances already covered by the first rule.

Page 18 === Learning Propositional Calculus Rules (7)
- Begin the process all over again with the reduced table, starting from a rule with an empty antecedent.
- APP and INC both score 0.25, so we arbitrarily select APP. The resulting rule covers negative instances 1, 8, and 9, so we add another atom to the antecedent.
- We select RATING, giving a rule that still covers negative instance 9.
- Finally, adding one more atom gives a rule that covers only positive instances; together with the first rule, the rule set covers all of the positive instances, so we are finished.

Page 19 === Learning Propositional Calculus Rules (8)
- Pseudocode of this rule-learning process: the Generic Separate-and-Conquer Algorithm (GSCA).

Page 20 === Generic Separate-and-Conquer Algorithm (GSCA)
1. Initialize Ξ := the training set.
2. Initialize R := the empty set of rules.
3. repeat  // the outer loop adds rules until R covers all (or most) of the positive instances
4.   Initialize r := a rule with an empty antecedent, which covers all instances.
5.   Initialize C := the instances in Ξ covered by r.
6.   repeat  // the inner loop adds atoms to r until r covers only (or mainly) positive instances
7.     Select an atom a to add to the antecedent of r.  // a nondeterministic choice point that can be used for backtracking
8.     Add a to the antecedent of r; C := the instances in Ξ covered by r.
9.   until r covers only (or mainly) positive instances in Ξ.
10.  Add the rule r to the set of rules R.
11.  Ξ := Ξ − (the positive instances in Ξ covered by r).
12. until R covers all (or most) of the positive instances in the training set.
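The GSCA loop can be sketched as a runnable greedy learner. The slides' training table is not reproduced here, so the rows below are an invented stand-in; the atom-selection heuristic (highest fraction of positives among still-covered instances) is one reasonable choice for the nondeterministic step, not the only one, and the sketch has no safeguards for pathological data.

```python
# Greedy separate-and-conquer (GSCA) sketch on a tiny hypothetical table.
# Each row maps attribute -> 0/1; "OK" is the class label.
DATA = [
    {"BAL": 1, "RATING": 1, "APP": 0, "INC": 0, "OK": 1},
    {"BAL": 1, "RATING": 0, "APP": 1, "INC": 1, "OK": 1},
    {"BAL": 0, "RATING": 1, "APP": 1, "INC": 1, "OK": 1},
    {"BAL": 1, "RATING": 0, "APP": 0, "INC": 0, "OK": 0},
    {"BAL": 0, "RATING": 1, "APP": 0, "INC": 1, "OK": 0},
    {"BAL": 0, "RATING": 0, "APP": 1, "INC": 0, "OK": 0},
]
ATOMS = ["BAL", "RATING", "APP", "INC"]

def covers(rule, row):
    return all(row[a] == 1 for a in rule)

def gsca(data):
    remaining = list(data)                 # Xi: shrinks as positives get covered
    rules = []                             # R
    while any(r["OK"] == 1 for r in remaining):    # outer loop: add rules
        rule = []                          # start from an empty antecedent
        covered = remaining
        while any(r["OK"] == 0 for r in covered):  # inner loop: add atoms
            # greedy choice: atom whose addition leaves the highest
            # fraction of positives among the instances it still covers
            def score(a):
                c = [r for r in covered if r[a] == 1]
                return (sum(r["OK"] for r in c) / len(c)) if c else 0.0
            best = max((a for a in ATOMS if a not in rule), key=score)
            rule.append(best)
            covered = [r for r in covered if r[best] == 1]
        rules.append(rule)
        # separate: drop the positives this rule covers, keep the rest
        remaining = [r for r in remaining
                     if not (r["OK"] == 1 and covers(rule, r))]
    return rules

print(gsca(DATA))
```

On this toy table the learner emits a handful of two-atom rules whose union covers every positive row and no negative row, mirroring the worked example on the preceding slides.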

Page 21 === Learning First-Order Logic Rules
- Inductive logic programming (ILP): concentrates on methods for inductive learning of Horn clauses in first-order predicate calculus (FOPC), and thus of PROLOG programs. Example: FOIL [Quinlan 1990].
- The objective of ILP: to learn a program π consisting of Horn clauses, each of the form α :- L1, ..., Ln, where the Li are atomic formulas that unify with ground atomic facts.
  - π should evaluate to True when its variables are bound to a set of values known to be in the relation we are trying to learn (a positive instance in the training set).
  - π should evaluate to False when its variables are bound to a set of values known not to be in the relation (a negative instance).

Page 22 === Learning First-Order Logic Rules (2)
- We want to cover the positive instances and not cover the negative ones.
- Background knowledge: the ground atomic facts with which the Li are to unify. They are given either as subsidiary PROLOG programs, which can be run and evaluated, or explicitly in the form of a list of facts.
- Example: a delivery robot navigating around a building finds, through experience, that it is easy to go between certain pairs of locations and not so easy to go between certain other pairs.

Page 23 === Learning First-Order Logic Rules (3)
- A, B, C: junctions; all of the other locations are shops.
- Junction(x): whether x is a junction.
- Shop(x, y): whether y is a shop connected to junction x.

Page 24 === Learning First-Order Logic Rules (4)
- We want a learning program to learn a program Easy(x, y) that covers the positive instances in the training set Ξ but not the negative ones.
- Easy(x, y) can use the background subexpressions Junction(x) and Shop(x, y).

Page 25 === Learning First-Order Logic Rules (5)
- For all of the locations named in Ξ, only the following pairs give a value of True for Shop.
- The following PROLOG program covers all of the positive instances of the training set and none of the negative ones:
  Easy(x, y) :- Junction(x), Junction(y)
            :- Shop(x, y)
            :- Shop(y, x)
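The learned Prolog program can be transcribed into Python over explicit ground relations to check coverage. The map from the slide's figure is not reproduced here, so the junction set and shop pairs below are invented stand-ins.

```python
# The learned Easy(x, y) program over hypothetical ground relations.
JUNCTIONS = {"A", "B", "C"}
# Shop(x, y): y is a shop hanging off junction x (invented pairs).
SHOP = {("A", "S1"), ("B", "S2"), ("C", "S3")}

def easy(x, y):
    # Easy(x, y) :- Junction(x), Junction(y)
    #           :- Shop(x, y)
    #           :- Shop(y, x)
    return ((x in JUNCTIONS and y in JUNCTIONS)
            or (x, y) in SHOP
            or (y, x) in SHOP)

print(easy("A", "B"))    # two junctions
print(easy("S1", "A"))   # shop to its own junction, via the Shop(y, x) clause
print(easy("S1", "S2"))  # two shops: not easy
```

Each Prolog clause becomes one disjunct, which makes the "set of clauses = OR of bodies" reading of the program explicit.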

Page 26 === Learning First-Order Logic Rules (6)
- Learning process: generalized separate-and-conquer algorithm (GSCA).
  - Start with a program having a single rule with no body.
  - Add literals to the body until the rule covers only (or mainly) positive instances.
  - Add rules in the same way until the program covers all (or most) and only (with few exceptions) positive instances.

Page 27 === Learning First-Order Logic Rules (7)
- Practical ILP systems restrict the candidate literals in various ways. Typical allowed additions are:
  - literals used in the background knowledge;
  - literals whose arguments are a subset of those in the head of the clause;
  - literals that introduce a new distinct variable different from those in the head of the clause;
  - a literal that equates a variable in the head of the clause with another such variable or with a term mentioned in the background knowledge;
  - a literal that is the same (except for its arguments) as that in the head of the clause.

Page 28 === Learning First-Order Logic Rules (8)
- The literals that we might consider adding to a clause are: Junction(x), Junction(y), Junction(z), Shop(x, y), Shop(y, x), Shop(x, z), Shop(z, y), and (x = y).
- ILP version of GSCA:
  - First, initialize the first clause as Easy(x, y) :- .
  - Add Junction(x), so that Easy(x, y) :- Junction(x) covers some positive and some negative instances.
  - Include one more literal, Junction(y), giving Easy(x, y) :- Junction(x), Junction(y).

Page 29 === Learning First-Order Logic Rules (9)
- But the program does not yet cover all of the positive instances.
- Remove the positive instances covered by Easy(x, y) :- Junction(x), Junction(y) from Ξ to form the Ξ to be used in the next pass through the inner loop: all negative instances in Ξ, plus the positive instances that are not covered yet.
- The inner loop creates another initial clause, Easy(x, y) :- . Adding the literal Shop(x, y) gives Easy(x, y) :- Shop(x, y), which covers no negative instances, so we are finished with another pass through the inner loop. The positive instances covered by this rule are then removed from Ξ.

Page 30 === Learning First-Order Logic Rules (10)
- Now we have:
  Easy(x, y) :- Junction(x), Junction(y)
            :- Shop(x, y)
- To cover the remaining positive instances, add Shop(y, x). Then we have:
  Easy(x, y) :- Junction(x), Junction(y)
            :- Shop(x, y)
            :- Shop(y, x)
- This covers only positive instances.

Page 31 === Explanation-Based Generalization (1)
- Example: the blocks world.
- We are given general knowledge of the blocks world (rules) and a fact, and we want to prove a goal about a particular block. The proof is very simple.

Page 32 === Explanation-Based Generalization (2)
- Explanation: the set of facts used in the proof. For example, the explanation for the goal about block A is the fact Green(A). From this explanation we can form a general rule: replacing the constant A by a variable x gives Green(x), and we can then prove the corresponding goal for any x, just as in the case of A. This is explanation-based generalization.
- Explanation-based generalization (EBG): generalizing the explanation by replacing constants with variables.
- More rules might slow down the reasoning process, so EBG must be used with care, possibly by keeping information about the utility of the learned rules.
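The variablization step at the heart of EBG can be sketched directly: take the explanation (the facts used in the proof, represented here as predicate/argument tuples) and replace each constant with a fresh variable. The representation and variable-naming scheme below are deliberately simple and hypothetical.

```python
# EBG's generalization step as a sketch: replace constants in the
# explanation with fresh variables (x0, x1, ...).
def generalize(explanation, constants):
    """Map each named constant in the explanation to a fresh variable."""
    mapping = {c: f"x{i}" for i, c in enumerate(constants)}
    return [(pred, tuple(mapping.get(arg, arg) for arg in args))
            for pred, args in explanation]

# Explanation for the blocks-world goal about A: the fact Green(A).
explanation = [("Green", ("A",))]
print(generalize(explanation, ["A"]))
```

A full EBG system would also check that the generalized explanation still supports the proof (i.e., that the generalization is justified by the domain rules, not just by renaming); this sketch performs only the renaming.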