
Introduction to Uncertainty


Intelligent user interfaces
Communication codes
Protein sequence alignment
Object tracking

[Figure: stopping distance with 95% confidence interval, from braking initiated to gradual stop]

Success Stories…

Sources of Uncertainty
Imperfect representations of the world
Imperfect observation of the world
Laziness, efficiency

First Source of Uncertainty: Imperfect Predictions
There are many more states of the real world than can be expressed in the representation language
So, any state represented in the language may correspond to many different states of the real world, which the agent cannot distinguish
The language may lead to incorrect predictions about future states
[Figure: three different block configurations, all described by the same formula]
On(A,B) ∧ On(B,Table) ∧ On(C,Table) ∧ Clear(A) ∧ Clear(C)

Observation of the Real World
Real world in some state → percepts (e.g., On(A,B), On(B,Table), Handempty) → interpretation of the percepts in the representation language
Percepts can be the user's inputs, sensory data (e.g., image pixels), information received from other agents, ...

Second Source of Uncertainty: Imperfect Observation of the World
Observation of the world can be:
Partial, e.g., a vision sensor can't see through obstacles (lack of percepts)
[Figure: two rooms, R1 and R2; the robot may not know whether there is dust in room R2]

Second Source of Uncertainty: Imperfect Observation of the World
Observation of the world can be:
Partial, e.g., a vision sensor can't see through obstacles
Ambiguous, e.g., percepts have multiple possible interpretations
[Figure: blocks A, B, C] On(A,B) ∨ On(A,C)

Second Source of Uncertainty: Imperfect Observation of the World
Observation of the world can be:
Partial, e.g., a vision sensor can't see through obstacles
Ambiguous, e.g., percepts have multiple possible interpretations
Incorrect

Third Source of Uncertainty: Laziness, Efficiency
An action may have a long list of preconditions, e.g.:
Drive-Car: P = Have-Keys ∧ ¬Empty-Gas-Tank ∧ Battery-Ok ∧ Ignition-Ok ∧ ¬Flat-Tires ∧ ¬Stolen-Car ∧ ...
The agent's designer may ignore some preconditions, or, out of laziness or for efficiency, may not want to include all of them in the action representation
The result is a representation that is either incorrect – executing the action may not have the described effects – or that describes several alternative effects

Representation of Uncertainty
There are many models of uncertainty; we will consider two important ones:
Non-deterministic model: uncertainty is represented by a set of possible values, e.g., a set of possible worlds, a set of possible effects, ...
Probabilistic (stochastic) model: uncertainty is represented by a probability distribution over a set of possible values

Example: Belief State
In the non-deterministic model of sensory uncertainty, an agent's belief state represents all the states of the world that it thinks are possible at a given time or at a given stage of reasoning
In the probabilistic model of uncertainty, a probability is associated with each state to measure its likelihood of being the actual state
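To make the contrast concrete, here is a minimal Python sketch of the two belief-state representations. It is illustrative code, not from the slides; the states "A", "B", "C" and their probabilities are made up.

```python
# Non-deterministic model: a belief state is just the set of world
# states the agent considers possible (hypothetical states "A", "B", "C").
nondet_belief = {"A", "B", "C"}

# Probabilistic model: the same states, each weighted by a probability.
prob_belief = {"A": 0.2, "B": 0.5, "C": 0.3}

def is_possible(state, belief):
    """Non-deterministic model: a state is either possible or not."""
    return state in belief

def likelihood(state, belief):
    """Probabilistic model: each state carries a degree of belief."""
    return belief.get(state, 0.0)

# A probabilistic belief state must sum to 1 over all states.
assert abs(sum(prob_belief.values()) - 1.0) < 1e-9
```

The non-deterministic belief answers only yes/no questions about possibility; the probabilistic one additionally ranks the states by likelihood.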

What Do Probabilities Mean?
Probabilities have a natural frequency interpretation
The agent believes that if it were able to return many times to a situation where it has the same belief state, then the actual states would occur at the relative frequencies defined by the probability distribution
E.g., a state with probability 0.2 would occur 20% of the time

Example
Consider a world where a dentist agent D meets a new patient P
D is interested in only one thing: whether P has a cavity, which D models using the proposition Cavity
Before making any observation, D's belief state is:

Cavity    ¬Cavity
p         1 − p

This means that D believes that a fraction p of patients have cavities

Example
Probabilities summarize the amount of uncertainty (arising from our incomplete representations, ignorance, and laziness)

Cavity    ¬Cavity
p         1 − p

Non-Deterministic vs. Probabilistic
Non-deterministic uncertainty must always consider the worst case, no matter how low its probability
Reasoning with sets of possible worlds: "The patient may have a cavity, or may not"
Probabilistic uncertainty considers the average-case outcome, so outcomes with very low probability should not affect decisions (as much)
Reasoning with distributions over possible worlds: "The patient has a cavity with probability p"

Non-Deterministic vs. Probabilistic
If the world is adversarial and the agent uses probabilistic methods, it is likely to fail consistently (unless the agent has a good idea of how the adversary thinks; see Texas Hold'em)
If the world is non-adversarial and failure must be absolutely avoided, then non-deterministic techniques are likely to be more efficient computationally
In other cases, probabilistic methods may be a better option, especially if there are several "goal" states providing different rewards and life does not end when one is reached

Other Approaches to Uncertainty
Fuzzy logic: truth values of continuous quantities, interpolated between 0 and 1 (e.g., "X is tall"); has problems handling correlations
Dempster-Shafer theory: Bel(X) is the probability that the observed evidence supports X; Bel(X) ≤ 1 − Bel(¬X); optimal decision making is not clearly defined under D-S theory

Probabilities in Detail

Probabilistic Belief
Consider a world where a dentist agent D meets with a new patient P
D is interested only in whether P has a cavity; so, a state is described with a single proposition, Cavity
Before observing P, D does not know if P has a cavity, but from years of practice he believes Cavity with some probability p and ¬Cavity with probability 1 − p
The proposition is now a boolean random variable and (Cavity, p) is a probabilistic belief

An Aside
The patient either has a cavity or does not; there is no uncertainty in the world. What gives?
Probabilities are assessed relative to the agent's state of knowledge
Probability provides a way of summarizing the uncertainty that comes from ignorance or laziness
"Given all that I know, the patient has a cavity with probability p"
This assessment might be erroneous (over an infinite number of patients, the true fraction may be q ≠ p)
The assessment may change over time as new knowledge is acquired (e.g., by looking in the patient's mouth)

Where Do Probabilities Come From?
Frequencies observed in the past, e.g., by the agent, its designer, or others
Symmetries, e.g.: if I roll a die, each of the 6 outcomes has probability 1/6
Subjectivism, e.g.: if I drive on Highway 37 at 75 mph, I will get a speeding ticket with probability 0.6
Principle of indifference: if there is no knowledge to consider one possibility more probable than another, give them the same probability

Multivariate Belief State
We now represent the world of the dentist D using three propositions: Cavity, Toothache, and PCatch
D's belief state consists of 2^3 = 8 states, each with some probability:
{Cavity ∧ Toothache ∧ PCatch, ¬Cavity ∧ Toothache ∧ PCatch, Cavity ∧ ¬Toothache ∧ PCatch, ...}

The Belief State Is Defined by the Full Joint Probability of the Propositions

State         P(state)
C, T, P       0.108
C, T, ¬P      0.012
C, ¬T, P      0.072
C, ¬T, ¬P     0.008
¬C, T, P      0.016
¬C, T, ¬P     0.064
¬C, ¬T, P     0.144
¬C, ¬T, ¬P    0.576

Probability table representation (C = Cavity, T = Toothache, P = PCatch)
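The joint table can be encoded directly, e.g. as a Python dict keyed by truth assignments. This is an illustrative sketch, not part of the slides; the values are those of the dentist table.

```python
# Full joint distribution over (Cavity, Toothache, PCatch),
# keyed by truth assignments, values from the table above.
joint = {
    (True,  True,  True):  0.108,
    (True,  True,  False): 0.012,
    (True,  False, True):  0.072,
    (True,  False, False): 0.008,
    (False, True,  True):  0.016,
    (False, True,  False): 0.064,
    (False, False, True):  0.144,
    (False, False, False): 0.576,
}

# Sanity check: a probability distribution must sum to 1.
total = sum(joint.values())
```

Every inference in the following slides is just a sum over rows of this table.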

Probabilistic Inference
P(Cavity ∨ Toothache) = 0.108 + 0.012 + 0.072 + 0.008 + 0.016 + 0.064 = 0.28
(summing the rows of the joint table in which Cavity or Toothache holds)

Probabilistic Inference
P(Cavity) = 0.108 + 0.012 + 0.072 + 0.008 = 0.2
(summing the rows of the joint table in which Cavity holds)

Probabilistic Inference
Marginalization: P(C) = Σ_t Σ_p P(C ∧ t ∧ p)
using the conventions that C stands for Cavity or ¬Cavity, and that Σ_t is the sum over t ∈ {Toothache, ¬Toothache} (similarly Σ_p over p ∈ {PCatch, ¬PCatch})


Probabilistic Inference
P(¬Cavity ∧ PCatch) = 0.016 + 0.144 = 0.16
(summing the rows of the joint table in which ¬Cavity and PCatch hold)

Probabilistic Inference
Marginalization: P(C ∧ P) = Σ_t P(C ∧ t ∧ P)
using the conventions that C stands for Cavity or ¬Cavity, P for PCatch or ¬PCatch, and that Σ_t is the sum over t ∈ {Toothache, ¬Toothache}
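Marginalization as described above can be sketched in Python. This is illustrative code; `joint` re-encodes the dentist table from the earlier slide, and the keyword names are arbitrary.

```python
# Full joint over (Cavity, Toothache, PCatch), as in the earlier table.
joint = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

def marginal(joint, cavity=None, toothache=None, pcatch=None):
    """Sum P(state) over all states consistent with the fixed values;
    variables left as None are summed out (marginalized)."""
    total = 0.0
    for (c, t, p), prob in joint.items():
        if cavity is not None and c != cavity:
            continue
        if toothache is not None and t != toothache:
            continue
        if pcatch is not None and p != pcatch:
            continue
        total += prob
    return total

p_cavity = marginal(joint, cavity=True)                      # sums 4 rows
p_cavity_pcatch = marginal(joint, cavity=True, pcatch=True)  # sums 2 rows
```

Summing out Toothache and PCatch reproduces P(Cavity) = 0.2 from the earlier slide.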

Possible Worlds Interpretation
A probability distribution associates a number with each possible world
If Ω is the set of possible worlds and ω is a possible world, then a probability model P(ω) satisfies:
0 ≤ P(ω) ≤ 1
Σ_ω P(ω) = 1
Worlds may specify all past and future events

Events (Propositions)
An event is something possibly true of a world (e.g., the patient has a cavity, the die will roll a 6, etc.), expressed as a logical statement
Each event e is true in a subset of Ω
The probability of an event is defined as P(e) = Σ_ω P(ω) I[e is true in ω]
where I[x] is the indicator function that is 1 if x is true and 0 otherwise
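The definition P(e) = Σ_ω P(ω) I[e is true in ω] translates directly into code. This is an illustrative sketch in which the worlds are the eight truth assignments of the dentist example.

```python
# Worlds: truth assignments (Cavity, Toothache, PCatch) with P(world),
# re-encoding the dentist joint table.
worlds = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

def prob(event):
    """P(e) = sum of P(world) over the worlds where the event holds.
    'event' is a predicate on a world, playing the role of I[...]."""
    return sum(p for w, p in worlds.items() if event(w))

# Event: "the patient has a cavity" (first component of the tuple).
p_cavity = prob(lambda w: w[0])
```

Representing an event as a predicate makes "true in a subset of Ω" literal: the predicate picks out the subset.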

Kolmogorov's Probability Axioms
0 ≤ P(a) ≤ 1
P(true) = 1, P(false) = 0
P(a ∨ b) = P(a) + P(b) − P(a ∧ b)
These hold for all events a, b
Hence P(¬a) = 1 − P(a)
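The inclusion-exclusion axiom and the complement rule can be checked numerically on the dentist joint distribution. Illustrative sketch; `worlds` re-encodes the table from the earlier slide.

```python
# Dentist joint over (Cavity, Toothache, PCatch).
worlds = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

def prob(event):
    return sum(p for w, p in worlds.items() if event(w))

cavity = lambda w: w[0]
toothache = lambda w: w[1]

# Inclusion-exclusion: P(a ∨ b) = P(a) + P(b) − P(a ∧ b)
lhs = prob(lambda w: cavity(w) or toothache(w))
rhs = (prob(cavity) + prob(toothache)
       - prob(lambda w: cavity(w) and toothache(w)))

# Complement rule follows from the axioms: P(¬a) = 1 − P(a)
p_not_cavity = prob(lambda w: not cavity(w))
```

Here lhs matches the P(Cavity ∨ Toothache) = 0.28 computed on the earlier inference slide.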

Conditional Probability
P(a|b) is the posterior probability of a given knowledge that event b is true
"Given that I know b, what do I believe about a?"
P(a|b) = Σ_{ω ∈ Ω/b} P(ω|b) I[a is true in ω]
where Ω/b is the set of worlds in which b is true
P(ω|b) is a probability distribution over a restricted set of worlds:
P(ω|b) = P(ω)/P(b)
If a new piece of information c arrives, the agent's new belief should be P(a|b ∧ c)

Conditional Probability
P(a ∧ b) = P(a|b) P(b) = P(b|a) P(a)
P(a|b) is the posterior probability of a given knowledge of b
Axiomatic definition: P(a|b) = P(a ∧ b)/P(b)

Conditional Probability
P(a ∧ b) = P(a|b) P(b) = P(b|a) P(a)
P(a ∧ b ∧ c) = P(a|b ∧ c) P(b ∧ c) = P(a|b ∧ c) P(b|c) P(c)
P(Cavity) = Σ_t Σ_p P(Cavity ∧ t ∧ p)
          = Σ_t Σ_p P(Cavity|t ∧ p) P(t ∧ p)
          = Σ_t Σ_p P(Cavity|t ∧ p) P(t|p) P(p)
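The chain-rule factorization above can be verified numerically on the dentist joint. Illustrative sketch; `worlds` re-encodes the earlier table, and the variable names are arbitrary.

```python
# Dentist joint over (Cavity, Toothache, PCatch).
worlds = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

def prob(event):
    return sum(p for w, p in worlds.items() if event(w))

def cond(a, b):
    """P(a|b) = P(a ∧ b) / P(b)."""
    return prob(lambda w: a(w) and b(w)) / prob(b)

a = lambda w: w[0]   # Cavity
b = lambda w: w[1]   # Toothache
c = lambda w: w[2]   # PCatch

# Chain rule: P(a ∧ b ∧ c) = P(a|b ∧ c) P(b|c) P(c)
lhs = prob(lambda w: a(w) and b(w) and c(w))
rhs = cond(a, lambda w: b(w) and c(w)) * cond(b, c) * prob(c)
```

The intermediate conditionals cancel telescopically, so both sides equal the single joint entry 0.108.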

Probabilistic Inference
P(Cavity|Toothache) = P(Cavity ∧ Toothache)/P(Toothache) = (0.108 + 0.012)/(0.108 + 0.012 + 0.016 + 0.064) = 0.6
Interpretation: after observing Toothache, the patient is no longer an "average" one, and the prior probability (0.2) of Cavity is no longer valid
P(Cavity|Toothache) is calculated by keeping the ratios of the probabilities of the 4 Toothache cases unchanged, and normalizing their sum to 1
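The "keep the ratios, normalize the sum to 1" computation can be sketched in Python. Illustrative code; `joint` re-encodes the dentist table.

```python
# Dentist joint over (Cavity, Toothache, PCatch).
joint = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

# Condition on Toothache: keep only the rows where Toothache is true...
toothache_rows = {w: p for w, p in joint.items() if w[1]}

# ...and renormalize so they sum to 1 (the normalizer is P(Toothache)).
z = sum(toothache_rows.values())
posterior = {w: p / z for w, p in toothache_rows.items()}

# P(Cavity | Toothache): posterior mass of the rows with Cavity true.
p_cavity_given_toothache = sum(p for w, p in posterior.items() if w[0])
```

The normalizer z is P(Toothache) = 0.2, and the result 0.6 matches the slide's calculation.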

Independence
Two events a and b are independent if P(a ∧ b) = P(a) P(b), hence P(a|b) = P(a)
Knowing b doesn't give you any information about a

Conditional Independence
Two events a and b are conditionally independent given c if P(a ∧ b|c) = P(a|c) P(b|c), hence P(a|b ∧ c) = P(a|c)
Once you know c, learning b doesn't give you any information about a

Example of Conditional Independence
Consider Rainy, Thunder, and RoadsSlippery
Ostensibly, thunder doesn't have anything directly to do with slippery roads…
But they happen together more often when it rains, so they are not independent…
So it is reasonable to believe that Thunder and RoadsSlippery are conditionally independent given Rainy
Thus, if I want to estimate whether I will hear thunder, I don't need to think about the state of the roads if I know that it's raining
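A small numeric sketch of this example: the probability numbers below are hypothetical, chosen only to exhibit the structure (Thunder and RoadsSlippery conditionally independent given Rainy, yet correlated marginally).

```python
# Build a joint over (Rainy, Thunder, RoadsSlippery) in which Thunder
# and RoadsSlippery are conditionally independent given Rainy.
# All numbers are made up for illustration.
p_rainy = {True: 0.3, False: 0.7}
p_thunder_given_rainy = {True: 0.5, False: 0.05}
p_slippery_given_rainy = {True: 0.8, False: 0.1}

joint = {}
for r in (True, False):
    for t in (True, False):
        for s in (True, False):
            pt = p_thunder_given_rainy[r] if t else 1 - p_thunder_given_rainy[r]
            ps = p_slippery_given_rainy[r] if s else 1 - p_slippery_given_rainy[r]
            joint[(r, t, s)] = p_rainy[r] * pt * ps

def prob(event):
    return sum(p for w, p in joint.items() if event(w))

# Conditional independence given Rainy:
# P(Thunder ∧ Slippery | Rainy) = P(Thunder | Rainy) P(Slippery | Rainy)
pr = prob(lambda w: w[0])
lhs = prob(lambda w: w[0] and w[1] and w[2]) / pr
rhs = (prob(lambda w: w[0] and w[1]) / pr) * (prob(lambda w: w[0] and w[2]) / pr)

# But Thunder and Slippery are NOT marginally independent:
p_ts = prob(lambda w: w[1] and w[2])
p_t_times_p_s = prob(lambda w: w[1]) * prob(lambda w: w[2])
```

Because Thunder and RoadsSlippery share the common cause Rainy, they correlate marginally but decouple once Rainy is known.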

The Most Important Tip…
The only ways that probability expressions can be transformed are via:
Kolmogorov's axioms
Marginalization
Conditioning
Explicitly stated conditional independence assumptions
Every time you write an equals sign, indicate which rule you're using
Memorize and practice these rules