
Extensions to FOL

When the form of the statements provides useful information:
- Rule-based systems
- Frame systems

When FOL isn't enough:
- Default reasoning and circumscription
- Reasoning with uncertainty
- Degrees of membership (fuzzy logic)
- Reasoning about belief

What Does → Really Mean?

  HasChildren → Mother    A → B    definition of B
  Raining → Wet           A → B    A causes B
  Fever → Infection       A → B    A is a symptom of B (B causes A)
  LikeBig → GetHummer     A → B    whenever A occurs, B usually does too

So how should we reason with these very different things?

Rule-Based Systems

The logic: a → b is equivalent to ¬a ∨ b.

So, given:
  fever → infection   (¬fever ∨ infection)
  fever
Conclude: infection

Given:
  fever → infection   (¬fever ∨ infection)
  ¬infection
Conclude: ¬fever

But are these two inferences equally useful?
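Both inferences fall out of a single resolution step on the clause form of the rule. A minimal sketch, with clauses encoded as frozensets of string literals and "~" marking negation (a toy encoding, not a full prover):

```python
# One propositional resolution step: find a complementary pair of
# literals across the two clauses and merge what remains.
def resolve(c1, c2):
    for lit in c1:
        neg = lit[1:] if lit.startswith("~") else "~" + lit
        if neg in c2:  # complementary pair found: resolve on it
            return (c1 - {lit}) | (c2 - {neg})
    return None

rule = frozenset({"~fever", "infection"})        # fever -> infection
print(resolve(rule, frozenset({"fever"})))       # {'infection'}  (modus ponens)
print(resolve(rule, frozenset({"~infection"})))  # {'~fever'}     (modus tollens)
```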

An Example for a Design Task: XCON (1982)

If: the most current active context is distributing massbus devices, and
    there is a single-port disk drive that has not been assigned to a massbus, and
    there are no unassigned dual-port disk drives, and
    the number of devices that each massbus should support is known, and
    there is a massbus that has been assigned at least one disk drive that should support additional disk drives, and
    the type of cable needed to connect the disk drive to the previous device on the massbus is known
Then: assign the disk drive to the massbus.

An Example for a Diagnosis Task: Mycin (1975)

If: (1) the stain of the organism is gram-positive, and
    (2) the morphology of the organism is coccus, and
    (3) the growth conformation of the organism is clumps
Then: there is suggestive evidence (0.7) that the identity of the organism is staphylococcus.

Simple Examples Today
- eXpertise2Go
- AcquiredIntelligence (whales, graduate school)
- DecisionScript

Implementation of Rule-Based Systems

Prolog:
  The KB:   reply(sampcor) :- a, b.
  A query:  ?- reply(X).
  Use backward chaining to answer the question.

Expert system shells typically combine methods:

If: (1) the suggested technique category is correlation and regression analysis, and
    (2) one of the values of the desired correlation/regression result is a measure of the degree to which 2 variables move together
Then: the suggested analysis approach is to calculate a sample correlation coefficient

i.e., a ∧ b → reply(sampcor).
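A minimal backward-chaining sketch, mirroring how Prolog answers the query above. The propositional KB (a, b, reply(sampcor)) follows the slide; everything else is illustrative:

```python
# Backward chaining over propositional Horn rules: to prove a goal,
# find a rule whose head matches and prove every subgoal in its body.
RULES = {
    "reply(sampcor)": [["a", "b"]],  # head: list of alternative bodies
}
FACTS = {"a", "b"}

def prove(goal):
    if goal in FACTS:
        return True
    return any(all(prove(sub) for sub in body)
               for body in RULES.get(goal, []))

print(prove("reply(sampcor)"))  # True, since a and b both hold
```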

Expert System Shells

Some rules are best used in forward chaining mode, e.g., data collection and reasoning from symptoms. Other rules (e.g., how to achieve goals) are best used in backward chaining mode. All these rules may also want to exploit other kinds of knowledge, like the default information associated with classes of objects that the inheritance slides below take up.
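A forward-chaining counterpart to the backward-chaining sketch above, again a toy; the symptom-style rule content is invented for illustration:

```python
# Naive forward chaining: fire any rule whose body is satisfied,
# repeating until no new facts appear (a fixpoint).
RULES = [
    (["fever", "clumps"], "suspect_staph"),  # (body, head)
    (["suspect_staph"], "order_culture"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in RULES:
            if head not in facts and all(b in facts for b in body):
                facts.add(head)
                changed = True
    return facts

print(sorted(forward_chain({"fever", "clumps"})))
# ['clumps', 'fever', 'order_culture', 'suspect_staph']
```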

Inheritance, Again

[Semantic network: robins ISA birds, ostriches ISA birds; birds have canfly = T, ostriches have canfly = F; Tweety is an instance of robins and Richy an instance of ostriches.]

Scooter is a bird. Can Scooter fly?

Inheritance

Objects inherit from their parents. Scooter inherits from Bird the facts that:
- its birthmode is eggs, and
- it has two wings.

Should Scooter inherit from Bird the fact that it can fly?

Default Reasoning
- The importance of default reasoning
- Default reasoning is nonmonotonic
- Techniques for default reasoning:
  - Inheritance
  - The closed world assumption
  - Circumscription
- Maintaining consistency in nonmonotonic reasoning systems

Default Reasoning - Examples

Inheritance from superclasses:
  ∀x bird(x) → canfly(x) UNLESS ostrich(x)

The "normal" case:
  ∀x bird(x) → canfly(x) UNLESS (broken-wing(x) ∨ sick(x) ∨ in(oil-slick, x))

The closed world assumption: can cats fly?

Abduction: infection → fever. Given fever, can we conclude infection?

Default Reasoning Is Nonmonotonic

Inference in FOL systems is monotonic: the addition of any new assertion that is consistent with the KB will never cause a formula that was previously true to become false.

Default reasoning may be nonmonotonic:
  Birds can fly. Tweety is a bird. → Tweety can fly.
But what if we now learn: Tweety is an ostrich. Or: Tweety has a broken wing.

Implementing Inheritance

[Same semantic network as before: robins ISA birds, ostriches ISA birds; birds have canfly = T, ostriches have canfly = F; instances Tweety and Richy.]

If we implement inheritance procedurally, we don't have to write the UNLESS clauses. We assume Tweety isn't an ostrich.
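A sketch of that procedural lookup, assuming the slide's network: the most specific value found while climbing ISA links wins, which gives default-with-exceptions behavior without any UNLESS clauses:

```python
# Procedural inheritance: check the individual's class, then climb ISA
# links; the first (most specific) value found wins.
ISA = {"robin": "bird", "ostrich": "bird"}
INSTANCE_OF = {"Tweety": "robin", "Richy": "ostrich", "Scooter": "bird"}
PROPS = {"bird": {"canfly": True}, "ostrich": {"canfly": False}}

def lookup(individual, prop):
    cls = INSTANCE_OF.get(individual)
    while cls is not None:
        if prop in PROPS.get(cls, {}):
            return PROPS[cls][prop]
        cls = ISA.get(cls)  # climb one ISA link
    return None             # property unknown

print(lookup("Scooter", "canfly"))  # True: inherited from bird
print(lookup("Richy", "canfly"))    # False: ostrich overrides bird
```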

The Closed World Assumption

The CWA: any ground atomic sentence that is not asserted to be true in the KB can be assumed to be false.

We make the closed world assumption for two reasons:
1. We have to. In any complex domain, there may be a huge number of possible facts and there isn't time to mention each of them explicitly:
   - A database of classes mentions the ones that are offered.
   - An inventory database mentions all the objects on hand.
   - An airline scheduling system assumes that it will be told if the power is out or the terminal has burned down or is held by terrorists or there is a storm.
2. It is consistent with felicitous human communication.

Implementing the CWA: Negation as Failure

A common way to implement the CWA: interpret failure to prove p as a proof of ¬p.

Example:
  ¬hasonhand(x) ∧ uses(x) → mustorder(x)

How do we prove ¬hasonhand(x)?
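By trying to prove hasonhand(x) and failing, which is exactly what Prolog's \+ operator does. A minimal sketch with invented item names:

```python
# Negation as failure: "not provable" is treated as false
# (the CWA in procedural form).
FACTS = {("uses", "widget"), ("uses", "gadget"), ("hasonhand", "gadget")}

def must_order(item):
    # the "not in FACTS" test is the failure-to-prove step
    return ("uses", item) in FACTS and ("hasonhand", item) not in FACTS

print(must_order("widget"))  # True: no hasonhand fact, so assume none on hand
print(must_order("gadget"))  # False
```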

Circumscription

∀x bird(x) → canfly(x) UNLESS (broken-wing(x) ∨ sick(x) ∨ in(oil-slick, x))

is different from:

∀x bird(x) ∧ ¬broken-wing(x) ∧ ¬sick(x) ∧ ¬in(oil-slick, x) → canfly(x)

or:

∀x bird(x) ∧ ¬adult(x) → withmother(x)

One way to implement this is to create the predicate Abnormal:

∀x bird(x) → canfly(x) UNLESS Abnormal(x)
(broken-wing(x) ∨ sick(x) ∨ in(oil-slick, x)) → Abnormal(x)
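A toy rendering of what minimizing Abnormal buys us, with invented facts: Abnormal holds only where the KB forces it, and every other bird flies:

```python
# Circumscribing Abnormal, rendered procedurally: the extension of
# Abnormal is exactly the forced cases, nothing more.
BIRDS = {"Tweety", "Opus"}
BROKEN_WING, SICK, IN_OIL_SLICK = {"Opus"}, set(), set()

def abnormal(x):
    # minimal model: Abnormal(x) only when some rule forces it
    return x in BROKEN_WING or x in SICK or x in IN_OIL_SLICK

def canfly(x):
    return x in BIRDS and not abnormal(x)

print(canfly("Tweety"))  # True: nothing forces Abnormal(Tweety)
print(canfly("Opus"))    # False: broken-wing(Opus) forces Abnormal(Opus)
```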

Circumscription

Then we circumscribe Abnormal, i.e., we prefer models in which Abnormal is true of the smallest possible number of individuals consistent with the rest of the KB.

But what happens if we are told just bird(Tweety) and conclude canfly(Tweety), and then we are told broken-wing(Tweety)? How do we undo the conclusion canfly(Tweety)?

Abbott, Babbitt, and Cabot

Truth Maintenance Systems

The basic idea:
- Associate with each assertion one or more justifications.
- Believe any assertion with at least one valid justification.
- Each justification is composed of two parts: an IN-list and an OUT-list.

We will define the operation of a TMS that operates as a service to a separate reasoning system. The TMS doesn't make choices. It is just a bookkeeper.
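A justification-based TMS in miniature, and one answer to the "how do we undo canfly(Tweety)?" question above. This sketch assumes an acyclic justification graph; node names follow the Tweety example:

```python
# A node is IN (believed) iff some justification has every IN-list node
# IN and every OUT-list node OUT. ([], []) marks a premise.
JUSTIFICATIONS = {
    "bird(Tweety)":   [([], [])],
    "canfly(Tweety)": [(["bird(Tweety)"], ["Abnormal(Tweety)"])],
}

def is_in(node):
    return any(all(is_in(i) for i in in_list) and
               not any(is_in(o) for o in out_list)
               for in_list, out_list in JUSTIFICATIONS.get(node, []))

print(is_in("canfly(Tweety)"))  # True: Abnormal(Tweety) has no support

# New facts come in: broken-wing(Tweety) now justifies Abnormal(Tweety),
# which invalidates the justification for canfly(Tweety).
JUSTIFICATIONS["broken-wing(Tweety)"] = [([], [])]
JUSTIFICATIONS["Abnormal(Tweety)"] = [(["broken-wing(Tweety)"], [])]
print(is_in("canfly(Tweety)"))  # False: the belief is withdrawn
```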

The Structure of a Justification

Before Alibis

Abbott’s Situation, with Alibi

Babbitt’s Situation

Cabot’s Situation

The Big Picture

New Facts Come In

Deciding How to Resolve the Conflict

Abduction

Examples:
  infection → fever
  measles → spots
  raining → wetsidewalks

If given fever, can we conclude infection? Given spots, can we conclude measles? Given wetsidewalks, can we conclude raining?
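A minimal abductive step over the slide's rules: propose every hypothesis that would explain the observation, remembering that an explanation is plausible, not entailed:

```python
# Abduction as hypothesis generation: from rules cause -> effect,
# collect every cause whose effect matches the observation.
RULES = [("infection", "fever"), ("measles", "spots"),
         ("raining", "wetsidewalks")]

def explanations(observation):
    return [cause for cause, effect in RULES if effect == observation]

print(explanations("fever"))  # ['infection'] -- a candidate, not a proof
```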

Uncertainty and Fuzziness

Degrees of truth:
  John is tall. John is very tall.

Probability of truth:
  John is in Austin (p = .6)
  Coin is heads (p = .5)

Certainty of belief:
  John is in Austin (c = .2)   (a wild guess)
  Coin is heads (c = 1)        (sure it's 50/50)

When Must We Deal with Uncertainty?

Diagnosis:
- Observe: spots, fever, headache. What's wrong with the patient?
- Observe: clothes are wrinkled and hot. What's wrong with the dryer?

Interpretation:
- Speech understanding
- Language understanding
- Image understanding
- Data interpretation

Planning:
- If I turn the steering wheel, where will the car go?

Probabilistic Reasoning

P(strep) = x  (the probability that a random person has strep right now)
P(staph) = y  (similar)

Suppose that we can use the same drug in either case, so we want to know:

P(strep ∨ staph) = ?

Probabilistic Reasoning

P(strep) = x  (the probability that a random person has strep right now)
P(staph) = y  (similar)

Suppose that we can use the same drug in either case, so we want to know:

P(strep ∨ staph) = P(strep) + P(staph) - P(strep ∧ staph)

Probabilistic Reasoning: Suppose There Are Three Factors

P(a ∨ b ∨ c) = P(a) + P(b) + P(c) - P(a ∧ b) - P(a ∧ c) - P(b ∧ c) + P(a ∧ b ∧ c)

P(a ∧ b ∧ c) = P(a ∨ b ∨ c) - P(a) - P(b) - P(c) + P(a ∧ b) + P(a ∧ c) + P(b ∧ c)
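A brute-force check of the three-event identity over an assumed joint distribution (uniform over the eight outcomes, purely for illustration):

```python
# Verify inclusion-exclusion by summing an explicit joint distribution.
from itertools import product

joint = {outcome: 0.125 for outcome in product([0, 1], repeat=3)}

def p(event):  # probability that event(a, b, c) holds
    return sum(pr for (a, b, c), pr in joint.items() if event(a, b, c))

lhs = p(lambda a, b, c: a or b or c)
rhs = (p(lambda a, b, c: a) + p(lambda a, b, c: b) + p(lambda a, b, c: c)
       - p(lambda a, b, c: a and b) - p(lambda a, b, c: a and c)
       - p(lambda a, b, c: b and c) + p(lambda a, b, c: a and b and c))
print(lhs, rhs, abs(lhs - rhs) < 1e-12)  # 0.875 0.875 True
```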

Conditional Probability

P(measles ∧ spots) = P(measles | spots) P(spots)        (definition)
P(measles ∧ spots) = P(spots | measles) P(measles)

P(measles | spots) = P(measles ∧ spots) / P(spots)      (definition)

P(measles | spots) = P(spots | measles) P(measles) / P(spots)    (Bayes' rule)
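A worked instance of Bayes' rule with assumed numbers (invented for illustration, not from the slides):

```python
# Bayes' rule: P(measles | spots) = P(spots | measles) P(measles) / P(spots)
p_measles = 0.01             # prior
p_spots_given_measles = 0.9  # likelihood
p_spots = 0.08               # evidence

print(p_spots_given_measles * p_measles / p_spots)
# 0.1125: observing spots lifts a 1% prior to about 11%
```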

Examples from Diagnosis and Interpretation

P(measles | spots) = P(spots | measles) P(measles) / P(spots)

P(word x | sound y) = P(sound y | word x) P(word x) / P(sound y)

Word = argmax over x of P(sound y | word x) P(word x)

Naïve Bayes Classifier: What if Multiple Observations Are Available?

P(measles | spots ∧ fever) = P(spots ∧ fever | measles) P(measles) / P(spots ∧ fever)

Assume spots and fever are independent:

= P(spots | measles) P(fever | measles) P(measles) / (P(spots) P(fever))

Disease = argmax over x of P(spots | x) P(fever | x) P(x)
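A sketch of that argmax with assumed probabilities (the numbers are invented); the denominator is dropped because it is the same for every candidate disease:

```python
# Naive Bayes: score each disease by prior times per-symptom likelihoods.
PRIOR = {"measles": 0.01, "pox": 0.02}
LIKELIHOOD = {  # P(symptom | disease)
    "measles": {"spots": 0.9, "fever": 0.8},
    "pox":     {"spots": 0.8, "fever": 0.4},
}

def classify(symptoms):
    def score(disease):
        s = PRIOR[disease]
        for symptom in symptoms:
            s *= LIKELIHOOD[disease][symptom]
        return s
    return max(PRIOR, key=score)

print(classify({"spots", "fever"}))  # 'measles' under these numbers
```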

Not Quite So Naïve Bayes Classifier: Comparing Chicken Pox (pox) to Measles

P(measles | spots ∧ fever) = P(spots ∧ fever | measles) P(measles) / P(spots ∧ fever)

Assume spots and fever are independent given measles or pox, and that measles and pox are mutually exclusive and exhaust the possibilities:

= P(spots | measles) P(fever | measles) P(measles) /
  (P(spots | measles) P(fever | measles) P(measles) + P(spots | pox) P(fever | pox) P(pox))

Disease = argmax over x of P(spots | x) P(fever | x) P(x)
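Normalizing the scores from the previous sketch gives actual posteriors (same assumed numbers):

```python
# The denominator is now the sum of the scores, so posteriors sum to 1.
scores = {"measles": 0.9 * 0.8 * 0.01, "pox": 0.8 * 0.4 * 0.02}
total = sum(scores.values())
print({d: round(s / total, 3) for d, s in scores.items()})
# {'measles': 0.529, 'pox': 0.471}
```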

Learning Naïve Bayes Classification

An important aspect of naïve Bayes classification is that a classifier can be learned. Where do numbers like P(spots | measles) come from?

Answer: from a table of patient records:

patient  diagnosis   fever  cough  spots  sore throat  sneezes  achy
1        Measles     Yes    No     Yes
2        Measles     Yes    No     Yes    No                    Yes
3        Measles     Yes    No
4        Chickenpox  Yes    No     Yes    No
5        Chickenpox  Yes    No     Yes    No
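A sketch of the estimation step on stand-in records (the rows below are illustrative, with blank cells treated as symptom absent): each conditional probability is just a count ratio.

```python
# Estimate P(symptom | disease) as
# count(symptom present, disease) / count(disease).
records = [
    ("Measles",    {"fever", "spots"}),
    ("Measles",    {"fever", "spots", "achy"}),
    ("Measles",    {"fever"}),
    ("Chickenpox", {"fever", "spots"}),
    ("Chickenpox", {"fever", "spots"}),
]

def p(symptom, disease):
    rows = [s for d, s in records if d == disease]
    return sum(symptom in s for s in rows) / len(rows)

print(p("spots", "Measles"))     # 0.667: 2 of 3 measles patients had spots
print(p("fever", "Chickenpox"))  # 1.0
```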

Various Ad Hoc Approaches

Unfortunately, it often happens that we don't have all the joint probabilities required to compute true probabilities for our conclusions. So a variety of approximate methods are used.