1
CS 2750: Machine Learning Bayesian Networks Prof. Adriana Kovashka University of Pittsburgh March 14, 2016
2
Plan for today and next week
Today and next time:
– Bayesian networks (Bishop Sec. 8.1)
– Conditional independence (Bishop Sec. 8.2)
Next week:
– Markov random fields (Bishop Sec. 8.3.1-2)
– Hidden Markov models (Bishop Sec. 13.1-2)
– Expectation maximization (Bishop Ch. 9)
3
Graphical Models
If no assumption of independence is made, then an exponential number of parameters must be estimated for sound probabilistic inference. No realistic amount of training data is sufficient to estimate so many parameters.
If a blanket assumption of conditional independence is made, efficient training and inference is possible, but such a strong assumption is rarely warranted.
Graphical models use directed or undirected graphs over a set of random variables to explicitly specify variable dependencies and allow for less restrictive independence assumptions while limiting the number of parameters that must be estimated.
– Bayesian networks: Directed acyclic graphs indicate causal structure.
– Markov networks: Undirected graphs capture general dependencies.
Slide credit: Ray Mooney
4
Learning Graphical Models
Structure Learning: Learn the graphical structure of the network.
Parameter Learning: Learn the real-valued parameters of the network:
– CPTs for Bayes nets
– Potential functions for Markov nets
Slide credit: Ray Mooney
5
Parameter Learning If values for all variables are available during training, then parameter estimates can be directly estimated using frequency counts over the training data. If there are hidden variables, some form of gradient descent or Expectation Maximization (EM) must be used to estimate distributions for hidden variables. Adapted from Ray Mooney
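As a minimal sketch of the fully observed case (not from the slides; the data and function names are hypothetical), CPT entries can be estimated by frequency counts:

```python
from collections import Counter

def estimate_cpt(data, child, parents):
    """Estimate P(child | parents) by frequency counts over fully observed data.
    `data` is a list of dicts mapping variable names to values."""
    joint = Counter()   # counts of (parent values, child value)
    margin = Counter()  # counts of parent values alone
    for row in data:
        pa = tuple(row[p] for p in parents)
        joint[(pa, row[child])] += 1
        margin[pa] += 1
    return {(pa, val): cnt / margin[pa] for (pa, val), cnt in joint.items()}

# Hypothetical fully observed training tuples for the Burglary network below
data = [
    {"B": False, "E": False, "A": False},
    {"B": True,  "E": False, "A": True},
    {"B": False, "E": True,  "A": False},
    {"B": False, "E": False, "A": False},
]
print(estimate_cpt(data, child="A", parents=["B", "E"]))
```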
6
Bayesian Networks Directed Acyclic Graph (DAG) Slide from Bishop
7
Bayesian Networks General Factorization Slide from Bishop
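The factorization equation did not survive the transcript; the standard form it refers to is:

```latex
% General factorization of a Bayesian network over x = (x_1, ..., x_K):
% each variable is conditioned only on its parents pa_k in the DAG.
p(\mathbf{x}) = \prod_{k=1}^{K} p(x_k \mid \mathrm{pa}_k)
```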
8
Bayesian Networks
Directed Acyclic Graph (DAG): nodes are random variables, edges indicate causal influences.
Example network nodes: Burglary, Earthquake, Alarm, JohnCalls, MaryCalls.
Slide credit: Ray Mooney
9
Conditional Probability Tables
Each node has a conditional probability table (CPT) that gives the probability of each of its values given every possible combination of values for its parents (the conditioning case). Roots (sources) of the DAG that have no parents are given prior probabilities.
Burglary, Earthquake, Alarm, JohnCalls, MaryCalls network:
P(B) = .001    P(E) = .002
P(A | B, E):  B=T, E=T: .95;  B=T, E=F: .94;  B=F, E=T: .29;  B=F, E=F: .001
P(J | A):  A=T: .90;  A=F: .05
P(M | A):  A=T: .70;  A=F: .01
Slide credit: Ray Mooney
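Written out for this network, the factorization that these CPTs define is:

```latex
P(B, E, A, J, M) = P(B)\, P(E)\, P(A \mid B, E)\, P(J \mid A)\, P(M \mid A)
```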
10
CPT Comments
The probability of false is not given since each row must sum to 1.
The example requires 10 parameters (1 + 1 + 4 + 2 + 2 across the five CPTs) rather than the 2^5 − 1 = 31 needed to specify the full joint distribution.
The number of parameters in the CPT for a node is exponential in the number of parents.
Slide credit: Ray Mooney
11
Bayes Net Inference
Given known values for some evidence variables, determine the posterior probability of some query variables.
Example: Given that John calls, what is the probability that there is a Burglary?
John calls 90% of the time there is an Alarm, and the Alarm detects 94% of Burglaries, so people generally think the answer should be fairly high. However, this ignores the prior probability of John calling.
Slide credit: Ray Mooney
12
Bayes Net Inference
Example: Given that John calls, what is the probability that there is a Burglary?
John also calls 5% of the time when there is no Alarm. So over 1,000 days we expect 1 Burglary, and John will probably call. However, he will also call with a false report 50 times on average, so a call is about 50 times more likely to be a false report: P(Burglary | JohnCalls) ≈ 0.02.
(Relevant CPT entries: P(B) = .001; P(J | A): A=T: .90, A=F: .05.)
Slide credit: Ray Mooney
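A small sketch of exact inference by enumeration on this network, using the CPT values from the earlier slide (the code and helper names are my own illustration, not from the slides):

```python
from itertools import product

# CPTs from the slides (keys are True/False variable values)
P_B = {True: 0.001, False: 0.999}
P_E = {True: 0.002, False: 0.998}
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # P(A=True | B, E)
P_J = {True: 0.90, False: 0.05}                       # P(J=True | A)
P_M = {True: 0.70, False: 0.01}                       # P(M=True | A)

def joint(b, e, a, j, m):
    """Joint probability of one complete assignment, via the BN factorization."""
    pa = P_A[(b, e)] if a else 1 - P_A[(b, e)]
    pj = P_J[a] if j else 1 - P_J[a]
    pm = P_M[a] if m else 1 - P_M[a]
    return P_B[b] * P_E[e] * pa * pj * pm

def posterior_burglary_given_john():
    """P(Burglary=True | JohnCalls=True) by summing out E, A, M."""
    num = sum(joint(True, e, a, True, m)
              for e, a, m in product([True, False], repeat=3))
    den = sum(joint(b, e, a, True, m)
              for b, e, a, m in product([True, False], repeat=4))
    return num / den

print(posterior_burglary_given_john())  # roughly 0.016, consistent with the ~0.02 on the slide
```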
13
Bayesian Curve Fitting (1) Polynomial Slide from Bishop
14
Bayesian Curve Fitting (2) Plate Slide from Bishop
15
Bayesian Curve Fitting (3) Input variables and explicit hyperparameters Slide from Bishop
16
Bayesian Curve Fitting—Learning Condition on data Slide from Bishop
17
Bayesian Curve Fitting—Prediction
Predictive distribution (given below).
Slide from Bishop
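The equation shown on the slide is not in the transcript; as a reconstruction in Bishop's notation, the prediction marginalizes the polynomial weights w under their posterior:

```latex
p(\hat{t} \mid \hat{x}, \mathbf{x}, \mathbf{t})
  = \int p(\hat{t} \mid \hat{x}, \mathbf{w})\, p(\mathbf{w} \mid \mathbf{x}, \mathbf{t})\, \mathrm{d}\mathbf{w}
```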
18
Generative vs Discriminative Models
Generative approach: model p(x | C_k) and the prior p(C_k), then use Bayes' theorem to obtain the posterior p(C_k | x).
Discriminative approach: model the posterior p(C_k | x) directly.
Slide from Bishop
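Written out, Bayes' theorem converts the generative model into the posterior that the discriminative approach models directly:

```latex
p(C_k \mid \mathbf{x})
  = \frac{p(\mathbf{x} \mid C_k)\, p(C_k)}{\sum_j p(\mathbf{x} \mid C_j)\, p(C_j)}
```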
19
Generative Models Causal process for generating images Slide from Bishop
20
Discrete Variables (1)
General joint distribution (two K-state variables): K^2 − 1 parameters
Independent joint distribution: 2(K − 1) parameters
Slide from Bishop
21
Discrete Variables (2)
General joint distribution over M variables: K^M − 1 parameters
M-node Markov chain: K − 1 + (M − 1) K(K − 1) parameters
Slide from Bishop
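A quick check of these counts with concrete numbers (my own example, K = 2 states and M = 5 variables):

```latex
% General joint over M = 5 binary (K = 2) variables:
K^M - 1 = 2^5 - 1 = 31 \text{ parameters}
% First-order Markov chain over the same variables:
(K - 1) + (M - 1)K(K - 1) = 1 + 4 \cdot 2 \cdot 1 = 9 \text{ parameters}
```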
22
Discrete Variables: Bayesian Parameters (1) Slide from Bishop
23
Discrete Variables: Bayesian Parameters (2) Shared prior Slide from Bishop
24
Parameterized Conditional Distributions
If x_1, …, x_M are discrete, K-state variables, then in general p(y | x_1, …, x_M) has O(K^M) parameters.
The parameterized form requires only M + 1 parameters.
Slide from Bishop
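The parametric form is not shown in the transcript; Bishop's example (Sec. 8.1.4) uses a logistic sigmoid of a linear combination of the parent variables, which is where the M + 1 parameter count comes from:

```latex
p(y = 1 \mid x_1, \ldots, x_M)
  = \sigma\!\left( w_0 + \sum_{i=1}^{M} w_i x_i \right),
\qquad \sigma(a) = \frac{1}{1 + e^{-a}}
```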
25
Conditional Independence
a is independent of b given c; the equivalent formulations and notation are given below.
Slide from Bishop
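The defining equations were shown as images on the slide; the standard statements (Bishop Sec. 8.2) are:

```latex
% a is conditionally independent of b given c:
p(a \mid b, c) = p(a \mid c)
% Equivalently:
p(a, b \mid c) = p(a \mid c)\, p(b \mid c)
% Notation:
a \perp\!\!\!\perp b \mid c
```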
26
Conditional Independence: Example 1 Slide from Bishop Node c is “tail to tail” for path from a to b: path makes a and b dependent
27
Conditional Independence: Example 1 Slide from Bishop Node c is “tail to tail” for path from a to b: c blocks the path thus making a and b conditionally independent
28
Conditional Independence: Example 2 Slide from Bishop Node c is “head to tail” for path from a to b: path makes a and b dependent
29
Node c is “head to tail” for path from a to b: c blocks the path thus making a and b conditionally independent Conditional Independence: Example 2 Slide from Bishop
30
Conditional Independence: Example 3 Note: this is the opposite of Example 1, with c unobserved. Slide from Bishop Node c is “head to head” for path from a to b: c blocks the path thus making a and b independent
31
Conditional Independence: Example 3 Note: this is the opposite of Example 1, with c observed. Slide from Bishop Node c is “head to head” for path from a to b: c unblocks the path thus making a and b conditionally dependent
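For reference, since the factorizations for Examples 1-3 were shown as figures, the three canonical three-node structures are (standard results, Bishop Sec. 8.2):

```latex
% Tail-to-tail (Example 1):  a <- c -> b
p(a, b, c) = p(a \mid c)\, p(b \mid c)\, p(c)
  \;\Rightarrow\; a \perp\!\!\!\perp b \mid c \text{ (but not marginally)}
% Head-to-tail (Example 2):  a -> c -> b
p(a, b, c) = p(a)\, p(c \mid a)\, p(b \mid c)
  \;\Rightarrow\; a \perp\!\!\!\perp b \mid c \text{ (but not marginally)}
% Head-to-head (Example 3):  a -> c <- b
p(a, b, c) = p(a)\, p(b)\, p(c \mid a, b)
  \;\Rightarrow\; a \perp\!\!\!\perp b \text{ marginally, but not given } c
```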
32
“Am I out of fuel?”
B = Battery (0 = flat, 1 = fully charged)
F = Fuel Tank (0 = empty, 1 = full)
G = Fuel Gauge Reading (0 = empty, 1 = full)
Slide from Bishop
33
“Am I out of fuel?” Probability of an empty tank increased by observing G = 0. Slide from Bishop
34
“Am I out of fuel?” Probability of an empty tank reduced by observing B = 0. This is referred to as “explaining away”. Slide from Bishop
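A small sketch of the explaining-away computation. The slide's actual CPT values did not survive the transcript, so the numbers below are illustrative assumptions (chosen in the spirit of Bishop's fuel example):

```python
from itertools import product

# Illustrative CPT values (assumptions; the slide's numbers are not in the transcript)
P_B1 = 0.9                      # P(Battery charged)
P_F1 = 0.9                      # P(Tank full)
P_G1 = {(1, 1): 0.8, (1, 0): 0.2, (0, 1): 0.2, (0, 0): 0.1}  # P(G=1 | B, F)

def p_g(g, b, f):
    return P_G1[(b, f)] if g == 1 else 1 - P_G1[(b, f)]

def p_b(b):
    return P_B1 if b == 1 else 1 - P_B1

def p_f(f):
    return P_F1 if f == 1 else 1 - P_F1

# Prior probability of an empty tank
print("P(F=0)            =", 1 - P_F1)

# Observing an empty gauge raises the probability of an empty tank
num = sum(p_g(0, b, 0) * p_b(b) * p_f(0) for b in (0, 1))
den = sum(p_g(0, b, f) * p_b(b) * p_f(f) for b, f in product((0, 1), repeat=2))
print("P(F=0 | G=0)      =", num / den)          # ~0.257 with these numbers

# Also observing a flat battery "explains away" the empty gauge reading,
# lowering the probability of an empty tank again
num = p_g(0, 0, 0) * p_f(0)
den = sum(p_g(0, 0, f) * p_f(f) for f in (0, 1))
print("P(F=0 | G=0, B=0) =", num / den)          # ~0.111 with these numbers
```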
35
D-separation
A, B, and C are non-intersecting subsets of nodes in a directed graph. A path from A to B is blocked if it contains a node such that either
a) the arrows on the path meet either head-to-tail or tail-to-tail at the node, and the node is in the set C, or
b) the arrows meet head-to-head at the node, and neither the node nor any of its descendants are in the set C.
If all paths from A to B are blocked, A is said to be d-separated from B by C. If A is d-separated from B by C, the joint distribution over all variables in the graph satisfies A ⊥⊥ B | C.
Slide from Bishop
36
D-separation: Example Slide from Bishop
37
D-separation: I.I.D. Data Slide from Bishop Are the x_i's marginally independent? The x_i's are conditionally independent given the shared parameter (but not marginally independent).
38
Naïve Bayes Conditioned on the class z, the distributions of the input variables x_1, …, x_D are independent. Are the x_1, …, x_D marginally independent?
39
Naïve Bayes as a Bayes Net
Naïve Bayes is a simple Bayes Net: a class node Y with children X_1, X_2, …, X_n.
Priors P(Y) and conditionals P(X_i | Y) for Naïve Bayes provide CPTs for the network.
Slide credit: Ray Mooney
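The corresponding factorization and posterior, written out:

```latex
P(Y, X_1, \ldots, X_n) = P(Y) \prod_{i=1}^{n} P(X_i \mid Y),
\qquad
P(Y \mid X_1, \ldots, X_n) \;\propto\; P(Y) \prod_{i=1}^{n} P(X_i \mid Y)
```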
40
The Markov Blanket
Factors independent of x_i cancel between numerator and denominator.
Slide from Bishop
The parents, children and co-parents of x_i form its Markov blanket, the minimal set of nodes that isolates x_i from the rest of the graph.
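A sketch of that cancellation in equations (my reconstruction of the standard derivation; ch(i) denotes the children of x_i):

```latex
% Factors not containing x_i cancel between numerator and denominator,
% leaving the factor of x_i itself and those of its children,
% which bring in the co-parents.
p(x_i \mid \mathbf{x}_{\setminus i})
  = \frac{\prod_k p(x_k \mid \mathrm{pa}_k)}{\sum_{x_i} \prod_k p(x_k \mid \mathrm{pa}_k)}
  \;\propto\; p(x_i \mid \mathrm{pa}_i) \prod_{j \in \mathrm{ch}(i)} p(x_j \mid \mathrm{pa}_j)
```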
41
Bayes Nets vs. Markov Nets Bayes nets represent a subclass of joint distributions that capture non-cyclic causal dependencies between variables. A Markov net can represent any joint distribution. Slide credit: Ray Mooney
42
Markov Chains In general: First-order Markov chain:
43
Markov Chains: Second-order Markov chain:
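The chain factorizations referred to on these two slides were shown as images; the standard forms are:

```latex
% General (no independence assumptions):
p(x_1, \ldots, x_N) = p(x_1) \prod_{n=2}^{N} p(x_n \mid x_1, \ldots, x_{n-1})
% First-order Markov chain:
p(x_1, \ldots, x_N) = p(x_1) \prod_{n=2}^{N} p(x_n \mid x_{n-1})
% Second-order Markov chain:
p(x_1, \ldots, x_N) = p(x_1)\, p(x_2 \mid x_1) \prod_{n=3}^{N} p(x_n \mid x_{n-1}, x_{n-2})
```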
44
Markov Random Fields Undirected graph over a set of random variables, where an edge represents a dependency. The Markov blanket of a node, X, in a Markov Net is the set of its neighbors in the graph (nodes that have an edge connecting to X). Every node in a Markov Net is conditionally independent of every other node given its Markov blanket. Slide credit: Ray Mooney
45
Markov Random Fields Markov Blanket Slide from Bishop A node is conditionally independent of all other nodes conditioned only on the neighboring nodes.
46
Cliques and Maximal Cliques Clique Maximal Clique Slide from Bishop
47
Distribution for a Markov Network
The distribution of a Markov net is most compactly described in terms of a set of potential functions, φ_k, one for each clique, k, in the graph. For each joint assignment of values to the variables in clique k, φ_k assigns a non-negative real value that represents the compatibility of these values.
The joint distribution of a Markov net is then defined by the formula below, where x_{k} represents the joint assignment of the variables in clique k, and Z is a normalizing constant that makes the joint distribution sum to 1.
Slide credit: Ray Mooney
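The defining formula, reconstructed from the description above:

```latex
P(x_1, \ldots, x_n) = \frac{1}{Z} \prod_{k} \phi_k\!\left(x_{\{k\}}\right),
\qquad
Z = \sum_{x_1, \ldots, x_n} \prod_{k} \phi_k\!\left(x_{\{k\}}\right)
```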
48
Illustration: Image De-Noising (1) Original Image Noisy Image Slide from Bishop
49
Illustration: Image De-Noising (2)
Slide from Bishop
y_i ∈ {+1, −1}: labels in the observed noisy image; x_i ∈ {+1, −1}: labels in the noise-free image; i is the index over pixels
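The energy function itself is not in the transcript; Bishop's formulation (Sec. 8.3.3) couples neighboring pixels and pixel-observation pairs, with the joint distribution given by a Boltzmann form:

```latex
E(\mathbf{x}, \mathbf{y})
  = h \sum_i x_i \;-\; \beta \sum_{\{i,j\}} x_i x_j \;-\; \eta \sum_i x_i y_i,
\qquad
p(\mathbf{x}, \mathbf{y}) = \frac{1}{Z} \exp\{-E(\mathbf{x}, \mathbf{y})\}
```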
50
Illustration: Image De-Noising (3) Noisy Image, Restored Image (ICM) Slide from Bishop