Handling Uncertainty

Uncertain knowledge Typical example: Diagnosis. Consider: ∀x Symptom(x, Toothache) ⇒ Disease(x, Cavity). The problem is that this rule is wrong. Not all patients with toothache have cavities; some of them have gum disease, an abscess, …: ∀x Symptom(x, Toothache) ⇒ Disease(x, Cavity) ∨ Disease(x, GumDisease) ∨ Disease(x, Abscess) ∨ … Unfortunately, in order to make the rule true, we have to add an almost unlimited list of possible causes. We could try turning the rule into a causal rule: ∀x Disease(x, Cavity) ⇒ Symptom(x, Toothache). But this rule isn’t right either; not all cavities cause pain. We need to make it logically exhaustive: to augment the left side with all the qualifications required for a cavity to cause a toothache.

So, using FOL fails… In a domain like medical diagnosis, using FOL fails because of: Laziness: –Too much work to list the complete set of antecedents or consequents needed to ensure an exceptionless rule. –Too hard to use such rules. Theoretical Ignorance: –Medical science has no complete theory for the domain. Practical Ignorance: –Even if we knew all the rules, we might be uncertain about a particular patient because not all the necessary tests have been done.

Belief and Probability The connection between toothaches and cavities is not a logical consequence in either direction. However, we can attach a degree of belief to such sentences. Our main tool for this is probability theory. E.g. we might not know for sure what afflicts a particular patient, but we believe that there is, say, an 80% chance – that is, probability 0.8 – that the patient has a cavity if he has a toothache. –We usually get this belief from statistical data. Assigning probability 0 to a sentence corresponds to an unequivocal belief that the sentence is false. Assigning probability 1 to a sentence corresponds to an unequivocal belief that the sentence is true.

Syntax Basic element: random variable Similar to propositional logic: possible worlds defined by assignment of values to random variables. Boolean random variables e.g., Cavity (do I have a cavity?) Discrete random variables e.g., Weather is one of ⟨sunny, rain, cloudy, snow⟩ Domain values must be exhaustive and mutually exclusive Elementary proposition constructed by assignment of a value to a random variable: e.g., Weather = sunny, Cavity = false (abbreviated as ¬cavity) Complex propositions formed from elementary propositions and standard logical connectives e.g., Weather = sunny ∨ Cavity = false

Atomic events Atomic event: A complete specification of the state of the world about which the agent is uncertain E.g., if the world consists of only two Boolean variables Cavity and Toothache, then there are 4 distinct atomic events: Cavity = false ∧ Toothache = false Cavity = false ∧ Toothache = true Cavity = true ∧ Toothache = false Cavity = true ∧ Toothache = true Atomic events are mutually exclusive and exhaustive
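
As a concrete illustration, a minimal Python sketch (our own helper code, not from any library) that enumerates these atomic events:

```python
from itertools import product

# Each atomic event assigns a value to every random variable.
variables = {"Cavity": [True, False], "Toothache": [True, False]}

names = list(variables)
for values in product(*variables.values()):
    event = dict(zip(names, values))
    print(event)
# Prints 4 mutually exclusive, exhaustive assignments, from
# {'Cavity': True, 'Toothache': True} to {'Cavity': False, 'Toothache': False}
```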

Axioms of probability For any propositions A, B: –0 ≤ P(A) ≤ 1 –Necessarily true (i.e. valid) propositions have probability 1, and necessarily false (i.e. unsatisfiable) propositions have probability 0: P(true) = 1 and P(false) = 0 –P(A ∨ B) = P(A) + P(B) − P(A ∧ B)

Using the axioms of probability We can derive a variety of useful facts from the basic axioms. E.g. P(a ∨ ¬a) = P(a) + P(¬a) − P(a ∧ ¬a) (by axiom 3) P(true) = P(a) + P(¬a) − P(a ∧ ¬a) (by logical equivalence) 1 = P(a) + P(¬a) (by axiom 2) P(¬a) = 1 − P(a) (by algebra) Also, we can prove that for a discrete variable D with domain ⟨d_1, …, d_n⟩ we have Σ_{i=1}^n P(D = d_i) = 1.

Prior probability and distribution Prior or unconditional probability associated with a proposition is the degree of belief accorded to it in the absence of any other information. e.g., P(Cavity = true) = 0.1 (or P(cavity) = 0.1) P(Weather = sunny) = 0.7 (or P(sunny) = 0.7) Probability distribution gives values for all possible assignments: P(Weather = sunny) = 0.7 P(Weather = rain) = 0.2 P(Weather = cloudy) = 0.08 P(Weather = snow) = 0.02 As a shorthand we can use vector notation: P(Weather) = ⟨0.7, 0.2, 0.08, 0.02⟩ (the values sum to 1)

Joint probability Joint probability distribution for a set of random variables gives the probability of every atomic event on those random variables. E.g. for the two random variables Weather and Cavity we have P(Weather, Cavity), which is a 4 × 2 table of values, one entry for each combination of Weather ∈ {sunny, rain, cloudy, snow} and Cavity ∈ {true, false}. We can consider the joint probability distribution of all the variables we use to describe the world. Such a joint probability distribution is called the full joint probability distribution. A full joint distribution specifies the probability of every atomic event. Any probabilistic question about a domain can be answered by the full joint distribution.

Conditional probability Conditional or posterior probabilities e.g., P(cavity | toothache) = 0.8 i.e., given that toothache is all I know Notation for conditional distributions: P(Cavity | Toothache) is a 2-element vector of 2-element vectors

Conditional probability Definition of conditional probability: P(a | b) = P(a ∧ b) / P(b) if P(b) > 0 Product rule gives an alternative formulation: P(a ∧ b) = P(a | b) P(b) = P(b | a) P(a) A general version holds for whole distributions, e.g., P(Weather, Cavity) = P(Weather | Cavity) P(Cavity) i.e. shorthand for P(sunny ∧ cavity) = P(sunny | cavity) P(cavity) P(rain ∧ cavity) = P(rain | cavity) P(cavity) … (View it as a set of 4 × 2 equations, not matrix multiplication)

Chain rule Chain rule is derived by successive application of the product rule: P(X_1, …, X_n) = P(X_n | X_1, …, X_{n−1}) P(X_1, …, X_{n−1}) = P(X_n | X_1, …, X_{n−1}) P(X_{n−1} | X_1, …, X_{n−2}) P(X_1, …, X_{n−2}) = … = Π_{i=1}^n P(X_i | X_1, …, X_{i−1})

Inference by enumeration Start with the joint probability distribution, e.g. the standard dentist-domain full joint P(Toothache, Catch, Cavity):
toothache ∧ catch: cavity 0.108, ¬cavity 0.016
toothache ∧ ¬catch: cavity 0.012, ¬cavity 0.064
¬toothache ∧ catch: cavity 0.072, ¬cavity 0.144
¬toothache ∧ ¬catch: cavity 0.008, ¬cavity 0.576
For any proposition φ, sum up the probabilities of the atomic events where it is true: P(φ) = Σ_{ω : ω ⊨ φ} P(ω)

Inference by enumeration E.g. P(toothache) = 0.108 + 0.012 + 0.016 + 0.064 = 0.2 and P(cavity ∨ toothache) = 0.108 + 0.012 + 0.072 + 0.008 + 0.016 + 0.064 = 0.28

Can also compute conditional probabilities: P(¬cavity | toothache) = P(¬cavity ∧ toothache) / P(toothache) = (0.016 + 0.064) / 0.2 = 0.4

Normalization The denominator can be viewed as a normalization constant α, and we can write in vector notation: P(Cavity | toothache) = α P(Cavity, toothache) = α [P(Cavity, toothache, catch) + P(Cavity, toothache, ¬catch)] = α [⟨0.108, 0.016⟩ + ⟨0.012, 0.064⟩] = α ⟨0.12, 0.08⟩ = ⟨0.6, 0.4⟩ General idea: compute the distribution on the query variable by fixing the evidence variables and summing over the hidden variables.

Inference by enumeration, cont’d Typically, we are interested in the posterior joint distribution of the query variables Y given specific values e for the evidence variables E Let the hidden variables be H = X − Y − E Then the required summation of joint entries is done by summing out the hidden variables: P(Y | E = e) = α P(Y, E = e) = α Σ_h P(Y, E = e, H = h) The terms in the summation are joint entries because Y, E and H together exhaust the set of random variables Obvious problems: 1. Worst-case time complexity O(dⁿ) where d is the largest arity 2. Space complexity O(dⁿ) to store the joint distribution 3. How to find the numbers for O(dⁿ) entries?
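
As a concrete illustration, here is a minimal Python sketch of inference by enumeration over the dentist-domain table above; the helpers prob and cond are our own names, not a library API:

```python
# Full joint P(Toothache, Catch, Cavity) from the table above:
# key = (toothache, catch, cavity) -> probability.
joint = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.016,
    (True,  False, True):  0.012, (True,  False, False): 0.064,
    (False, True,  True):  0.072, (False, True,  False): 0.144,
    (False, False, True):  0.008, (False, False, False): 0.576,
}

def prob(phi):
    """P(phi): sum the joint entries of the worlds where phi holds."""
    return sum(p for world, p in joint.items() if phi(*world))

def cond(phi, given):
    """P(phi | given) = P(phi and given) / P(given)."""
    return prob(lambda *w: phi(*w) and given(*w)) / prob(given)

print(prob(lambda t, c, cav: t))             # P(toothache) = 0.2
print(prob(lambda t, c, cav: cav or t))      # P(cavity or toothache) = 0.28
print(cond(lambda t, c, cav: not cav,
           lambda t, c, cav: t))             # P(not cavity | toothache) = 0.4
```

Note how the worst-case cost shows up directly: every query scans all dⁿ entries of the table.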

Independence Let’s add a fourth variable, Weather. The full joint probability distribution P(Toothache, Catch, Cavity, Weather) has 32 entries (because Weather has 4 values)!! It contains 4 editions of the previous table, one for each kind of weather. Naturally, we ask: what relationship do these editions have to each other and to the original table? E.g. how are P(toothache, catch, cavity, cloudy) and P(toothache, catch, cavity) related? Let’s use the product rule: P(toothache, catch, cavity, cloudy) = P(cloudy | toothache, catch, cavity) P(toothache, catch, cavity) Of course, one’s dental problems don’t influence the weather, so P(cloudy | toothache, catch, cavity) = P(cloudy)

Independence (cont’d) So, we can write: P(toothache, catch, cavity, cloudy) = P(cloudy | toothache, catch, cavity) P(toothache, catch, cavity) = P(cloudy) P(toothache, catch, cavity) Thus, the 32-element table for four variables can be constructed from one 8-element table and one 4-element table!! This property is called independence. A and B are independent iff P(A|B) = P(A) or P(B|A) = P(B) or P(A, B) = P(A) P(B) Absolute independence is powerful but rare. Dentistry is a large field with hundreds of variables, none of which are independent.
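
To see the size reduction concretely, a small Python sketch, assuming the weather distribution from the prior-probability slide and the 8-entry dentist table used above; independence lets us build the 32-entry joint as a product of the two smaller tables:

```python
# P(Weather): 4-entry table (values from the prior-probability slide).
p_weather = {"sunny": 0.7, "rain": 0.2, "cloudy": 0.08, "snow": 0.02}

# P(Toothache, Catch, Cavity): 8-entry dentist table from the enumeration slide.
p_dental = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.016,
    (True,  False, True):  0.012, (True,  False, False): 0.064,
    (False, True,  True):  0.072, (False, True,  False): 0.144,
    (False, False, True):  0.008, (False, False, False): 0.576,
}

# Independence: the full 32-entry joint is just the product of the two tables.
joint = {(w, d): pw * pd
         for w, pw in p_weather.items()
         for d, pd in p_dental.items()}
print(len(joint), sum(joint.values()))   # 32 entries, summing to 1 (up to rounding)
```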

Bayes' Rule Product rule: P(a ∧ b) = P(a | b) P(b) = P(b | a) P(a) Bayes' rule: P(a | b) = P(b | a) P(a) / P(b) or in vector form P(Y|X) = P(X|Y) P(Y) / P(X) = α P(X|Y) P(Y) Useful for assessing diagnostic probability from causal probability: –P(Cause|Effect) = P(Effect|Cause) P(Cause) / P(Effect)

Applying Bayes’ rule Bayes’ rule is useful in practice because there are many cases where we have good estimates for three of the four probabilities involved and need to compute the fourth. For example: –A doctor knows that meningitis causes the patient to have a stiff neck 50% of the time. –The doctor also knows some unconditional facts: the prior probability that a patient has meningitis is 1/50,000, and the prior probability that any patient has a stiff neck is 1/20.

Bayes’ rule (cont’d) Let s be the proposition that the patient has a stiff neck and m be the proposition that the patient has meningitis. P(s|m) = 0.5 P(m) = 1/50000 P(s) = 1/20 P(m|s) = P(s|m) P(m) / P(s) = (0.5 × 1/50000) / (1/20) = 0.0002 That is, we expect only 1 in 5000 patients with a stiff neck to have meningitis.
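
The arithmetic on this slide as a quick Python check (the variable names are ours):

```python
# Bayes' rule for the meningitis example: P(m|s) = P(s|m) P(m) / P(s).
p_s_given_m = 0.5        # P(s | m): stiff neck given meningitis
p_m = 1 / 50000          # P(m): prior probability of meningitis
p_s = 1 / 20             # P(s): prior probability of a stiff neck

p_m_given_s = p_s_given_m * p_m / p_s
print(p_m_given_s)       # 0.0002, i.e. 1 in 5000
```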

Bayes’ rule (cont’d) Well, we might say that doctors know that a stiff neck implies meningitis in 1 out of 5000 cases; –That is, the doctor has quantitative information in the diagnostic direction, from symptoms (effects) to causes. –Such a doctor has no need for Bayes’ rule?! Unfortunately, diagnostic knowledge is more fragile than causal knowledge. Imagine there is a sudden epidemic of meningitis. The prior probability, P(m), will go up. –The doctor who derived the diagnostic probability P(m|s) from statistical observations of patients before the epidemic will have no idea how to update the value. –The doctor who derives P(m|s) from the other three values will see that P(m|s) goes up proportionally with P(m). Clearly, P(s|m) is unaffected by the epidemic; it simply reflects the way meningitis works.

Difficulty with more than two vars

Conditional independence P(Toothache, Cavity, Catch) has 2³ = 8 entries If I have a cavity, the probability that the probe catches in it doesn't depend on whether I have a toothache: (1) P(catch | toothache, cavity) = P(catch | cavity) The same independence holds if I haven't got a cavity: (2) P(catch | toothache, ¬cavity) = P(catch | ¬cavity) Catch is conditionally independent of Toothache given Cavity: P(Catch | Toothache, Cavity) = P(Catch | Cavity) Equivalent statements: P(Toothache | Catch, Cavity) = P(Toothache | Cavity) P(Toothache, Catch | Cavity) = P(Toothache | Cavity) P(Catch | Cavity)

Conditional independence (cont’d) Write out the full joint distribution using the chain rule: P(Toothache, Catch, Cavity) = P(Toothache | Catch, Cavity) P(Catch, Cavity) = P(Toothache | Catch, Cavity) P(Catch | Cavity) P(Cavity) = P(Toothache | Cavity) P(Catch | Cavity) P(Cavity) In most cases, the use of conditional independence reduces the size of the representation of the joint distribution from exponential in n to linear in n. Conditional independence is our most basic and robust form of knowledge about uncertain environments.
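
A small Python check, using the 8-entry dentist joint from the enumeration slides, that this factorization holds for one atomic event (the helper marg is our own):

```python
# Check P(toothache, catch, cavity) = P(toothache|cavity) P(catch|cavity) P(cavity)
# on the full joint; key = (toothache, catch, cavity).
joint = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.016,
    (True,  False, True):  0.012, (True,  False, False): 0.064,
    (False, True,  True):  0.072, (False, True,  False): 0.144,
    (False, False, True):  0.008, (False, False, False): 0.576,
}

def marg(pred):
    """Marginal probability of the worlds satisfying pred."""
    return sum(p for w, p in joint.items() if pred(*w))

p_cavity = marg(lambda t, c, cav: cav)                         # P(cavity) = 0.2
p_t_given_cav = marg(lambda t, c, cav: t and cav) / p_cavity   # P(toothache|cavity) = 0.6
p_c_given_cav = marg(lambda t, c, cav: c and cav) / p_cavity   # P(catch|cavity) = 0.9
print(p_t_given_cav * p_c_given_cav * p_cavity)   # 0.108 (up to rounding)
print(joint[(True, True, True)])                  # 0.108 -- the factorization holds
```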

In general…

Bayes' Rule and conditional independence P(Cavity | toothache ∧ catch) = α P(toothache ∧ catch | Cavity) P(Cavity) = α P(toothache | Cavity) P(catch | Cavity) P(Cavity) This is an example of a naïve Bayes model: P(Cause, Effect_1, …, Effect_n) = P(Cause) Π_i P(Effect_i | Cause) Total number of parameters is linear in n

Athens Example Suppose you are a witness to a nighttime hit-and-run accident involving a taxi in Athens. All taxis in Athens are blue or green. You swear, under oath, that the taxi was blue. Extensive testing shows that, under the dim lighting conditions, discrimination between blue and green is 75% reliable. 9 out of 10 Athenian taxis are green. What is the most likely color for the taxi? Hint: distinguish carefully between the proposition that the taxi is blue and the proposition that the taxi appears blue.

Athens Example (cont’d) Two random variables: B – taxi was blue, with domain {b, ¬b} LB – taxi looked blue, with domain {lb, ¬lb} The information on the reliability of color identification can be written as P(lb | b) = 0.75 P(¬lb | ¬b) = 0.75 We need to know the probability that the taxi was blue, given that it looked blue, and the probability that the taxi wasn’t blue, given that it looked blue. Let’s use Bayes’ rule: P(b | lb) = α P(lb | b) P(b) = α × 0.75 × 0.1 = α × 0.075 P(¬b | lb) = α P(lb | ¬b) P(¬b) = α (1 − P(¬lb | ¬b)) (1 − P(b)) = α (1 − 0.75) (1 − 0.1) = α × 0.25 × 0.9 = α × 0.225 where α = 1/P(lb) = 1/(P(lb | b) P(b) + P(lb | ¬b) P(¬b)) = 1/(0.075 + 0.225) = 1/0.3. Hence, P(B | lb) = ⟨0.25, 0.75⟩. So, even though the witness “has seen a blue taxi”, it is more probable that the taxi was green.
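
The same update in a few lines of Python (a sketch with our own variable names):

```python
# Normalized Bayes update for the taxi example: P(B | lb) = alpha P(lb | B) P(B).
p_lb_given_b = 0.75          # P(lb | b): looked blue given it was blue
p_lb_given_not_b = 0.25      # P(lb | ~b) = 1 - P(~lb | ~b)
p_b = 0.1                    # P(b): prior that a taxi is blue

unnorm = [p_lb_given_b * p_b, p_lb_given_not_b * (1 - p_b)]   # [0.075, 0.225]
alpha = 1 / sum(unnorm)                                        # 1 / P(lb)
print([alpha * u for u in unnorm])                             # [0.25, 0.75]
```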

Text Categorization Text categorization is the task of assigning a given document to one of a fixed set of categories, on the basis of the text it contains. Naïve Bayes models are often used for this task. In these models, the query variable is the document category, and the effect variables are the presence or absence of each word in the language. How can such a model be constructed, given as training data a set of documents that have been assigned to categories? The model consists of the prior probability P(Category) and the conditional probabilities P(Word_i | Category). For each category c, P(Category = c) is estimated as the fraction of all the “training” documents that are of that category. Similarly, P(Word_i = true | Category = c) is estimated as the fraction of documents of category c that contain the word. Also, P(Word_i = true | Category = ¬c) is estimated as the fraction of documents not of category c that contain the word.

Text Categorization (cont’d) Now we can use naïve Bayes for each c: P(Category = c | Word_1 = true, …, Word_n = true) = α P(Category = c) Π_{i=1}^n P(Word_i = true | Category = c) P(Category = ¬c | Word_1 = true, …, Word_n = true) = α P(Category = ¬c) Π_{i=1}^n P(Word_i = true | Category = ¬c) where α is the normalization constant.
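
A minimal Python sketch of the estimation and prediction steps above; the tiny corpus and word list are invented for illustration, the add-one (Laplace) smoothing is our addition to avoid zero counts, and for brevity the classifier conditions only on the words present in the query, not on absences:

```python
from collections import Counter

# Toy training data: (set of words in the document, category). Corpus is invented.
docs = [
    ({"ball", "goal", "team"}, "sports"),
    ({"goal", "match"}, "sports"),
    ({"election", "vote"}, "politics"),
    ({"vote", "team"}, "politics"),
]
vocab = {"ball", "goal", "team", "match", "election", "vote"}

n = len(docs)
cat_counts = Counter(c for _, c in docs)
prior = {c: cat_counts[c] / n for c in cat_counts}      # P(Category = c)

# P(Word_i = true | Category = c), estimated as a document fraction,
# with add-one smoothing so unseen word/category pairs never get probability 0.
likelihood = {
    (w, c): (sum(1 for ws, cc in docs if cc == c and w in ws) + 1)
            / (cat_counts[c] + 2)
    for w in vocab for c in cat_counts
}

def score(words, c):
    """Unnormalized P(c | words): prior times per-word likelihoods."""
    s = prior[c]
    for w in words:
        s *= likelihood[(w, c)]
    return s

query = {"goal", "team"}
print(max(cat_counts, key=lambda c: score(query, c)))   # 'sports'
```

Since the normalization constant α is the same for every category, comparing the unnormalized scores is enough to pick the most likely category.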