Probability Topics: Random Variables, Joint and Marginal Distributions

Presentation transcript:

Probability Topics: Random Variables; Joint and Marginal Distributions; Conditional Distributions; Product Rule, Chain Rule, Bayes' Rule; Inference; Independence. You'll need all this stuff A LOT for the next few weeks, so make sure you go over it now! This slide deck is courtesy of Dan Klein at UC Berkeley.

Inference in Ghostbusters. A ghost is in the grid somewhere. Sensor readings tell how close a square is to the ghost: on the ghost: red; 1 or 2 away: orange; 3 or 4 away: yellow; 5+ away: green. Sensors are noisy, but we know P(Color | Distance), e.g. P(red | 3) = 0.05, P(orange | 3) = 0.15, P(yellow | 3) = 0.5, P(green | 3) = 0.3. [Demo]

Uncertainty. General situation: Evidence: the agent knows certain things about the state of the world (e.g., sensor readings or symptoms). Hidden variables: the agent needs to reason about other aspects (e.g., where an object is or what disease is present). Model: the agent knows something about how the known variables relate to the unknown variables. Probabilistic reasoning gives us a framework for managing our beliefs and knowledge.

Random Variables. A random variable is some aspect of the world about which we (may) have uncertainty: R = Is it raining? D = How long will it take to drive to work? L = Where am I? We denote random variables with capital letters. Random variables have domains: R in {true, false} (sometimes written as {+r, ¬r}); D in [0, ∞); L in the set of possible locations, maybe {(0,0), (0,1), …}.

Probability Distributions. Unobserved random variables have distributions. A distribution is a TABLE of probabilities of values; a probability (lower-case value) is a single number, e.g. P(W = rain) = 0.1. Must have: P(x) ≥ 0 for every value x, and the entries must sum to one, Σ_x P(x) = 1. Example tables: P(T): warm 0.5, cold 0.5. P(W): sun 0.6, rain 0.1, fog 0.3, meteor 0.0.
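As a concrete illustration (not part of the original deck), here is a minimal Python sketch of a distribution stored as a table, with a check that it is valid; the name P_W is just illustrative.

```python
# A distribution over a single variable W, stored as a table (dict).
# Values are taken from the slide above; the name P_W is illustrative.
P_W = {"sun": 0.6, "rain": 0.1, "fog": 0.3, "meteor": 0.0}

# A valid distribution has non-negative entries that sum to one.
assert all(p >= 0 for p in P_W.values())
assert abs(sum(P_W.values()) - 1.0) < 1e-9
```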

Joint Distributions. A joint distribution over a set of random variables X_1, …, X_n specifies a real number for each assignment (or outcome): P(X_1 = x_1, …, X_n = x_n). Size of the distribution if there are n variables with domain size d? d^n entries. Must obey: P(x_1, …, x_n) ≥ 0 and the entries sum to one over all assignments. For all but the smallest distributions, it is impractical to write out. Example P(T, W): hot sun 0.4, hot rain 0.1, cold sun 0.2, cold rain 0.3.
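A hedged sketch (not from the deck) of how such a joint table can be stored, keyed by full assignments; it also checks the d^n size claim for this tiny example.

```python
from itertools import product

# Joint distribution P(T, W) from the slide, keyed by full assignments.
P_TW = {
    ("hot", "sun"): 0.4,
    ("hot", "rain"): 0.1,
    ("cold", "sun"): 0.2,
    ("cold", "rain"): 0.3,
}

# n = 2 variables, each with domain size d = 2, so the table has d**n = 4 entries.
domains = {"T": ["hot", "cold"], "W": ["sun", "rain"]}
assert len(P_TW) == len(list(product(*domains.values())))  # 2**2 = 4
assert abs(sum(P_TW.values()) - 1.0) < 1e-9                # normalized
```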

Probabilistic Models. A probabilistic model is a joint distribution over a set of random variables. Probabilistic models: (random) variables with domains; assignments are called outcomes; joint distributions say how likely assignments (outcomes) are; normalized: sums to 1.0; ideally, only certain variables directly interact. Compare constraint satisfaction problems: variables with domains; constraints state whether assignments are possible. Distribution over T, W: hot sun 0.4, hot rain 0.1, cold sun 0.2, cold rain 0.3. Constraint over T, W: hot sun T, hot rain F, cold sun F, cold rain T.

Events. An event is a set E of outcomes. From a joint distribution, we can calculate the probability of any event: P(E) = Σ_{(x_1, …, x_n) ∈ E} P(x_1, …, x_n). Probability that it's hot AND sunny? Probability that it's hot? Probability that it's hot OR sunny? Typically, the events we care about are partial assignments, like P(T = hot). Example P(T, W): hot sun 0.4, hot rain 0.1, cold sun 0.2, cold rain 0.3.
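A small sketch (my own illustration, not from the deck) that answers the three questions above by summing the matching rows of the joint table.

```python
# P(event) = sum of the joint probabilities of the outcomes in the event.
P_TW = {("hot", "sun"): 0.4, ("hot", "rain"): 0.1,
        ("cold", "sun"): 0.2, ("cold", "rain"): 0.3}

def prob(event):
    """event is a predicate over (t, w) outcomes."""
    return sum(p for (t, w), p in P_TW.items() if event(t, w))

print(prob(lambda t, w: t == "hot" and w == "sun"))  # hot AND sunny: 0.4
print(prob(lambda t, w: t == "hot"))                 # hot: 0.5
print(prob(lambda t, w: t == "hot" or w == "sun"))   # hot OR sunny: ~0.7
```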

Marginal Distributions. Marginal distributions are sub-tables that eliminate variables. Marginalization (summing out): combine collapsed rows by adding, e.g. P(t) = Σ_w P(t, w) and P(w) = Σ_t P(t, w). From the joint P(T, W) (hot sun 0.4, hot rain 0.1, cold sun 0.2, cold rain 0.3) we get P(T): hot 0.5, cold 0.5, and P(W): sun 0.6, rain 0.4.
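A minimal sketch (illustrative code, not from the deck) of marginalization: sum out one variable of the joint by collapsing rows.

```python
from collections import defaultdict

P_TW = {("hot", "sun"): 0.4, ("hot", "rain"): 0.1,
        ("cold", "sun"): 0.2, ("cold", "rain"): 0.3}

def marginal(joint, keep):
    """Sum out all variables except the one at position `keep`."""
    table = defaultdict(float)
    for assignment, p in joint.items():
        table[assignment[keep]] += p
    return dict(table)

print(marginal(P_TW, 0))  # P(T): hot 0.5, cold 0.5
print(marginal(P_TW, 1))  # P(W): sun ~0.6, rain 0.4
```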

Conditional Probabilities. A simple relation between joint and conditional probabilities: P(a | b) = P(a, b) / P(b). In fact, this is taken as the definition of a conditional probability. Example joint P(T, W): hot sun 0.4, hot rain 0.1, cold sun 0.2, cold rain 0.3.
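A tiny sketch (my illustration) computing one conditional probability directly from the definition, using the joint table from the earlier slides.

```python
# P(W = sun | T = cold) = P(T = cold, W = sun) / P(T = cold)
P_TW = {("hot", "sun"): 0.4, ("hot", "rain"): 0.1,
        ("cold", "sun"): 0.2, ("cold", "rain"): 0.3}

p_cold = sum(p for (t, w), p in P_TW.items() if t == "cold")  # marginal P(cold) = 0.5
print(P_TW[("cold", "sun")] / p_cold)                         # 0.2 / 0.5 = 0.4
```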

Conditional Distributions. Conditional distributions are probability distributions over some variables given fixed values of others. From the joint P(T, W) (hot sun 0.4, hot rain 0.1, cold sun 0.2, cold rain 0.3) we get the conditional distributions P(W | T = hot): sun 0.8, rain 0.2, and P(W | T = cold): sun 0.4, rain 0.6.

Normalization Trick. A trick to get a whole conditional distribution at once: select the joint probabilities matching the evidence, then normalize the selection (make it sum to one). Why does this work? The sum of the selection is P(evidence) (here, P(r)), which is exactly what the definition of conditional probability divides by. Example: from the joint P(T, W) (hot sun 0.4, hot rain 0.1, cold sun 0.2, cold rain 0.3), selecting the rows with W = rain gives hot rain 0.1, cold rain 0.3; normalizing gives P(T | rain): hot 0.25, cold 0.75.
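A hedged sketch (illustrative, not from the deck) of the normalization trick: select the rows matching the evidence, then divide by their total.

```python
P_TW = {("hot", "sun"): 0.4, ("hot", "rain"): 0.1,
        ("cold", "sun"): 0.2, ("cold", "rain"): 0.3}

def condition(joint, w_value):
    """Return P(T | W = w_value) via select-then-normalize."""
    # Select the joint entries consistent with the evidence W = w_value...
    selected = {t: p for (t, w), p in joint.items() if w == w_value}
    # ...whose total is exactly P(W = w_value), so dividing normalizes.
    z = sum(selected.values())
    return {t: p / z for t, p in selected.items()}

print(condition(P_TW, "rain"))  # hot 0.25, cold 0.75 (up to float rounding)
```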

Inference by Enumeration. P(sun)? P(sun | winter)? P(sun | winter, hot)? Joint distribution P(S, T, W): summer hot sun 0.30, summer hot rain 0.05, summer cold sun 0.10, summer cold rain 0.05, winter hot sun 0.10, winter hot rain 0.05, winter cold sun 0.15, winter cold rain 0.20.

Inference by Enumeration. General case: evidence variables E_1 = e_1, …, E_k = e_k; query variable Q; hidden variables H_1, …, H_r (together, all the variables). We want P(Q | e_1, …, e_k). First, select the entries consistent with the evidence. Second, sum out H to get the joint of the query and the evidence: P(Q, e_1, …, e_k) = Σ_{h_1, …, h_r} P(Q, h_1, …, h_r, e_1, …, e_k). Finally, normalize the remaining entries to conditionalize. Obvious problems: worst-case time complexity O(d^n); space complexity O(d^n) to store the joint distribution. (* Works fine with multiple query variables, too.)
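A sketch (my own, not from the deck) of inference by enumeration over the small joint above; the helper name enumerate_query is illustrative, and the joint values follow the table as reconstructed on the previous slide.

```python
from collections import defaultdict

VARS = ("S", "T", "W")
# Joint P(S, T, W), values as in the season/temperature/weather table above.
P_STW = {
    ("summer", "hot", "sun"): 0.30, ("summer", "hot", "rain"): 0.05,
    ("summer", "cold", "sun"): 0.10, ("summer", "cold", "rain"): 0.05,
    ("winter", "hot", "sun"): 0.10, ("winter", "hot", "rain"): 0.05,
    ("winter", "cold", "sun"): 0.15, ("winter", "cold", "rain"): 0.20,
}

def enumerate_query(joint, query_var, evidence):
    """P(query_var | evidence): select, sum out hidden variables, normalize."""
    q = VARS.index(query_var)
    table = defaultdict(float)
    for assignment, p in joint.items():
        # Keep only entries consistent with the evidence; the hidden
        # variables are summed out implicitly by the accumulation.
        if all(assignment[VARS.index(v)] == val for v, val in evidence.items()):
            table[assignment[q]] += p
    z = sum(table.values())  # = P(evidence)
    return {val: p / z for val, p in table.items()}

print(enumerate_query(P_STW, "W", {}))                           # P(W)
print(enumerate_query(P_STW, "W", {"S": "winter"}))               # P(W | winter)
print(enumerate_query(P_STW, "W", {"S": "winter", "T": "hot"}))   # P(W | winter, hot)
```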

The Product Rule. Sometimes we have conditional distributions but want the joint: P(x, y) = P(x | y) P(y). Example: from P(W) (sun 0.8, rain 0.2) and P(D | W) (wet | sun 0.1, dry | sun 0.9, wet | rain 0.7, dry | rain 0.3) we get the joint P(D, W): wet sun 0.08, dry sun 0.72, wet rain 0.14, dry rain 0.06.
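As an illustration (not from the deck), the joint above can be recomputed from the two given tables with one line of Python; names are illustrative.

```python
# Product rule: P(d, w) = P(d | w) * P(w), using the tables from the slide.
P_W = {"sun": 0.8, "rain": 0.2}
P_D_given_W = {("wet", "sun"): 0.1, ("dry", "sun"): 0.9,
               ("wet", "rain"): 0.7, ("dry", "rain"): 0.3}

P_DW = {(d, w): P_D_given_W[(d, w)] * P_W[w] for (d, w) in P_D_given_W}
print(P_DW)  # wet sun 0.08, dry sun 0.72, wet rain 0.14, dry rain 0.06 (up to rounding)
```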

Probabilistic Inference. Probabilistic inference: compute a desired probability from other known probabilities (e.g., a conditional from the joint). We generally compute conditional probabilities, e.g. P(on time | no reported accidents) = 0.90. These represent the agent's beliefs given the evidence. Probabilities change with new evidence: P(on time | no accidents, 5 a.m.) = 0.95; P(on time | no accidents, 5 a.m., raining) = 0.80. Observing new evidence causes beliefs to be updated.

The Chain Rule. More generally, we can always write any joint distribution as an incremental product of conditional distributions: P(x_1, x_2, …, x_n) = Π_i P(x_i | x_1, …, x_{i-1}); for example, P(x_1, x_2, x_3) = P(x_1) P(x_2 | x_1) P(x_3 | x_1, x_2). Why is this always true? It is just the product rule (the definition of conditional probability) applied repeatedly.
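A tiny check (illustrative, not from the deck) on the two-variable joint used throughout: because the conditional factors are defined from the joint itself, multiplying them back always recovers it, which is why the chain rule holds.

```python
# Chain rule on two variables: P(t, w) = P(t) * P(w | t).
P_TW = {("hot", "sun"): 0.4, ("hot", "rain"): 0.1,
        ("cold", "sun"): 0.2, ("cold", "rain"): 0.3}
P_T = {"hot": 0.5, "cold": 0.5}                           # marginal of T
P_W_given_T = {(w, t): P_TW[(t, w)] / P_T[t]              # conditionals P(w | t)
               for (t, w) in P_TW}

for (t, w), p in P_TW.items():
    assert abs(P_T[t] * P_W_given_T[(w, t)] - p) < 1e-9   # product recovers the joint
```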

Bayes' Rule. Two ways to factor a joint distribution over two variables: P(x, y) = P(x | y) P(y) = P(y | x) P(x). Dividing, we get Bayes' rule: P(x | y) = P(y | x) P(x) / P(y). Why is this at all helpful? It lets us build one conditional from its reverse; often one conditional is tricky but the other one is simple. It is the foundation of many systems we'll see later, and in the running for most important AI equation! That's my rule!
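A small sketch (my illustration, reusing the product-rule numbers above) of Bayes' rule in code: invert P(wet | W) into P(W | wet), obtaining P(wet) by summing over W.

```python
# Bayes' rule: P(w | wet) = P(wet | w) * P(w) / P(wet),
# with P(wet) = sum over w of P(wet | w) * P(w).
P_W = {"sun": 0.8, "rain": 0.2}              # prior over weather
P_wet_given_W = {"sun": 0.1, "rain": 0.7}    # likelihood of the evidence "wet"

p_wet = sum(P_wet_given_W[w] * P_W[w] for w in P_W)                 # P(wet) = 0.22
posterior = {w: P_wet_given_W[w] * P_W[w] / p_wet for w in P_W}
print(posterior)  # sun ~0.36, rain ~0.64
```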