Crash Course on Machine Learning, Part VI
Several slides from Derek Hoiem, Ben Taskar, Christopher Bishop, and Lise Getoor
Graphical Models
Representations
– Bayes Nets
– Markov Random Fields
– Factor Graphs
Inference
– Exact: Variable Elimination, BP for trees
– Approximate: sampling, loopy BP, optimization
Learning (today)
Summary of inference methods (slide credit: Kevin Murphy)

|          | Chain (online)                                   | Low treewidth                           | High treewidth                                                      |
|----------|--------------------------------------------------|-----------------------------------------|---------------------------------------------------------------------|
| Discrete | BP = forwards; Boyen-Koller (ADF), beam search   | VarElim, Jtree, recursive conditioning  | Loopy BP, mean field, structured variational, EP, graphcuts; Gibbs  |
| Gaussian | BP = Kalman filter                               | Jtree = sparse linear algebra           | Loopy BP; Gibbs                                                     |
| Other    | EKF, UKF, moment matching (ADF); particle filter | EP, EM, VB, NBP, Gibbs                  | EP, variational EM, VB, NBP, Gibbs                                  |

The legend distinguishes exact methods (e.g. BP = forwards, VarElim, Jtree, Kalman filter), deterministic approximations (e.g. loopy BP, mean field, EP, EKF/UKF), and stochastic approximations (e.g. Gibbs sampling, particle filters).

Abbreviations: BP = belief propagation, EP = expectation propagation, ADF = assumed density filtering, EKF = extended Kalman filter, UKF = unscented Kalman filter, VarElim = variable elimination, Jtree = junction tree, EM = expectation maximization, VB = variational Bayes, NBP = non-parametric BP.
Variable Elimination
[Figure: example network with nodes B, A, E, M, J (the classic burglary/earthquake/alarm network), used to illustrate eliminating variables one at a time.]
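As a concrete illustration of variable elimination (my own sketch, not from the original slides), the snippet below computes P(B | j, m) on a small burglary-style network by summing out E and then A; the CPT numbers are made up purely for illustration.

```python
import itertools

# Illustrative (made-up) CPTs; B, E, A, J, M are binary (1 = true).
P_B = {1: 0.01, 0: 0.99}
P_E = {1: 0.02, 0: 0.98}
P_A = {(1, 1): 0.95, (1, 0): 0.94, (0, 1): 0.29, (0, 0): 0.01}   # P(A=1 | B, E)
P_J = {1: 0.90, 0: 0.05}                                         # P(J=1 | A)
P_M = {1: 0.70, 0: 0.01}                                         # P(M=1 | A)

def bernoulli(p_true, value):
    return p_true if value == 1 else 1.0 - p_true

def posterior_burglary(j=1, m=1):
    """P(B | J=j, M=m) obtained by summing out the hidden variables E and A."""
    unnorm = {}
    for b in (0, 1):
        total = 0.0
        for a in (0, 1):                       # eliminate A
            evidence = bernoulli(P_J[a], j) * bernoulli(P_M[a], m)
            for e in (0, 1):                   # eliminate E
                total += bernoulli(P_E[1], e) * bernoulli(P_A[(b, e)], a) * evidence
        unnorm[b] = bernoulli(P_B[1], b) * total
    z = sum(unnorm.values())
    return {b: v / z for b, v in unnorm.items()}

print(posterior_burglary())
```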
The Sum-Product Algorithm
Objective:
i. to obtain an efficient, exact inference algorithm for finding marginals;
ii. in situations where several marginals are required, to allow computations to be shared efficiently.
Key idea: the distributive law, ab + ac = a(b + c), which lets us push sums inside products of factors.
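A small numerical sketch (my own, not from the slides) of how the distributive law pays off: for a chain p(a, b, c) = p(a) p(b|a) p(c|b), the marginal p(c) can be computed either by summing the full joint or by pushing the sums inside, one variable at a time.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4  # number of states per variable

# Random chain factors: p(a), p(b|a), p(c|b).
p_a = rng.dirichlet(np.ones(K))
p_b_given_a = rng.dirichlet(np.ones(K), size=K)   # indexed [a, b]
p_c_given_b = rng.dirichlet(np.ones(K), size=K)   # indexed [b, c]

# Naive: build the full joint (K^3 terms), then marginalize.
joint = p_a[:, None, None] * p_b_given_a[:, :, None] * p_c_given_b[None, :, :]
p_c_naive = joint.sum(axis=(0, 1))

# Distributive law: sum over a first, then over b (K^2 work per step).
p_b = p_a @ p_b_given_a            # sum_a p(a) p(b|a)
p_c_factored = p_b @ p_c_given_b   # sum_b p(b) p(c|b)

assert np.allclose(p_c_naive, p_c_factored)
print(p_c_factored)
```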
The Sum-Product Algorithm
Initialization
The Sum-Product Algorithm
To compute local marginals:
– Pick an arbitrary node as root.
– Compute and propagate messages from the leaf nodes to the root, storing received messages at every node.
– Compute and propagate messages from the root to the leaf nodes, storing received messages at every node.
– Compute the product of received messages at each node for which the marginal is required, and normalize if necessary.
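A minimal sketch (my own, assuming a discrete chain x1 - x2 - ... - xN, which is a simple tree) of the two message-passing sweeps described above; the node and edge potentials are arbitrary positive tables, and the result is checked against brute-force enumeration.

```python
from itertools import product
import numpy as np

rng = np.random.default_rng(1)
N, K = 5, 3                                  # 5 nodes in a chain, 3 states each
unary = rng.random((N, K)) + 0.1             # node potentials psi_i(x_i)
pair = rng.random((N - 1, K, K)) + 0.1       # edge potentials psi_{i,i+1}(x_i, x_{i+1})

# Leaves-to-root sweep (root = last node): mu_f[i] = message into node i from the left.
mu_f = np.zeros((N, K)); mu_f[0] = 1.0
for i in range(1, N):
    mu_f[i] = (unary[i - 1] * mu_f[i - 1]) @ pair[i - 1]

# Root-to-leaves sweep: mu_b[i] = message into node i from the right.
mu_b = np.zeros((N, K)); mu_b[N - 1] = 1.0
for i in range(N - 2, -1, -1):
    mu_b[i] = pair[i] @ (unary[i + 1] * mu_b[i + 1])

# Marginal at each node = product of incoming messages and its own potential, normalized.
marginals = unary * mu_f * mu_b
marginals /= marginals.sum(axis=1, keepdims=True)

# Brute-force check: enumerate the joint distribution.
joint = np.zeros((K,) * N)
for x in product(range(K), repeat=N):
    p = np.prod([unary[i, x[i]] for i in range(N)])
    p *= np.prod([pair[i, x[i], x[i + 1]] for i in range(N - 1)])
    joint[x] = p
joint /= joint.sum()
brute = [joint.sum(axis=tuple(j for j in range(N) if j != i)) for i in range(N)]
assert np.allclose(marginals, np.array(brute))
print(marginals)
```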
Loopy Belief Propagation
Sum-product applied to general graphs (with cycles).
– Initial unit messages are passed across all links, after which messages are passed around until convergence (not guaranteed!).
– Approximate but tractable for large graphs.
– Sometimes works well, sometimes not at all.
Learning Bayesian networks
An Inducer takes Data + Prior information and outputs a Bayesian network: a structure over the variables (here E, R, B, A, C) together with its CPTs, e.g. P(A | E, B).
[Figure: example network and a CPT for P(A | E, B).]
Known Structure, Complete Data
Input: complete records over E, B, A (no missing values) and a fixed network structure.
– Network structure is specified; the Inducer only needs to estimate the parameters (the '?' entries of the CPTs, e.g. P(A | E, B)).
– Data does not contain missing values.
Unknown Structure, Complete Data
Input: complete records over E, B, A (no missing values).
– Network structure is not specified; the Inducer needs to select the arcs and estimate the parameters.
– Data does not contain missing values.
Known Structure, Incomplete Data
Input: records over E, B, A with some entries unobserved.
– Network structure is specified.
– Data contains missing values; we must consider assignments to the missing values.
Known Structure / Complete Data
Given a network structure G and a choice of parametric family for P(X_i | Pa_i), learn the parameters of the network.
Goal: construct a network that is "closest" to the distribution that generated the data.
Learning Parameters for a Bayesian Network
Example network over E, B, A, C: E and B are parents of A, and A is the parent of C.
Training data has the form D = {⟨e[1], b[1], a[1], c[1]⟩, …, ⟨e[M], b[M], a[M], c[M]⟩}.
Learning Parameters for a Bayesian Network
Since we assume i.i.d. samples, the likelihood function is
L(θ : D) = ∏_m P(e[m], b[m], a[m], c[m] : θ).
Learning Parameters for a Bayesian Network
By the definition of the network (its factorization), we get
L(θ : D) = ∏_m P(e[m] : θ) · P(b[m] : θ) · P(a[m] | e[m], b[m] : θ) · P(c[m] | a[m] : θ).
Learning Parameters for a Bayesian Network
Rewriting (grouping) terms, we get
L(θ : D) = [∏_m P(e[m] : θ)] · [∏_m P(b[m] : θ)] · [∏_m P(a[m] | e[m], b[m] : θ)] · [∏_m P(c[m] | a[m] : θ)],
i.e. a separate local likelihood for each CPT.
General Bayesian Networks
Generalizing to any Bayesian network:
L(θ : D) = ∏_m P(x[m] : θ)                              (i.i.d. samples)
         = ∏_m ∏_i P(x_i[m] | pa_i[m] : θ_i)            (network factorization)
         = ∏_i L_i(θ_i : D).
The likelihood decomposes according to the structure of the network.
General Bayesian Networks (cont.)
Decomposition ⇒ independent estimation problems: if the parameters for each family are not related, then they can be estimated independently of each other.
From Binomial to Multinomial
Suppose X can take the values 1, 2, …, K; we want to learn the parameters θ_1, θ_2, …, θ_K with P(X = k) = θ_k.
Sufficient statistics: N_1, N_2, …, N_K, the number of times each outcome is observed.
Likelihood function: L(θ : D) = ∏_k θ_k^{N_k}.
MLE: θ̂_k = N_k / Σ_j N_j.
Likelihood for Multinomial Networks
When we assume that P(X_i | Pa_i) is multinomial, we get a further decomposition:
L_i(θ_i : D) = ∏_{pa_i} ∏_{x_i} θ_{x_i | pa_i}^{N(x_i, pa_i)}.
Likelihood for Multinomial Networks
For each value pa_i of the parents of X_i we get an independent multinomial estimation problem, and the MLE is
θ̂_{x_i | pa_i} = N(x_i, pa_i) / N(pa_i).
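As a small illustration (my own, not from the slides), the snippet below computes these count-based MLEs for a CPT P(A | E, B) from complete data; the variables and records are made up.

```python
from collections import Counter
from itertools import product

# Made-up complete data: records of (E, B, A), all binary.
data = [(0, 0, 0), (0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1),
        (0, 1, 0), (1, 0, 1), (0, 0, 0), (1, 1, 1), (0, 1, 1)]

# Sufficient statistics: N(a, e, b) and N(e, b).
n_child = Counter((a, e, b) for e, b, a in data)
n_parent = Counter((e, b) for e, b, _ in data)

# MLE: theta_{a | e, b} = N(a, e, b) / N(e, b); undefined if a parent configuration never occurs.
cpt = {}
for e, b in product((0, 1), repeat=2):
    if n_parent[(e, b)] == 0:
        continue
    for a in (0, 1):
        cpt[(a, e, b)] = n_child[(a, e, b)] / n_parent[(e, b)]

for (a, e, b), p in sorted(cpt.items()):
    print(f"P(A={a} | E={e}, B={b}) = {p:.2f}")
```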
Bayesian Approach: Dirichlet Priors
Recall that the likelihood function is L(θ : D) = ∏_k θ_k^{N_k}.
A Dirichlet prior with hyperparameters α_1, …, α_K is defined as
P(θ) ∝ ∏_k θ_k^{α_k − 1}   for legal θ_1, …, θ_K (θ_k ≥ 0, Σ_k θ_k = 1).
Then the posterior has the same form, with hyperparameters α_1 + N_1, …, α_K + N_K.
Dirichlet Priors (cont.)
We can compute the prediction on a new event in closed form: if P(θ) is Dirichlet with hyperparameters α_1, …, α_K, then
P(X[1] = k) = ∫ θ_k P(θ) dθ = α_k / Σ_j α_j.
Since the posterior is also Dirichlet, we get
P(X[M+1] = k | D) = (α_k + N_k) / Σ_j (α_j + N_j).
Bayesian Prediction (cont.)
Given these observations, we can compute the posterior for each multinomial X_i | pa_i independently:
– The posterior is Dirichlet with parameters α(X_i = 1 | pa_i) + N(X_i = 1 | pa_i), …, α(X_i = k | pa_i) + N(X_i = k | pa_i).
– The predictive distribution is then represented by the parameters
θ̃_{x_i | pa_i} = (α(x_i, pa_i) + N(x_i, pa_i)) / (α(pa_i) + N(pa_i)).
Learning Parameters: Summary
Estimation relies on sufficient statistics:
– For multinomials these are the counts N(x_i, pa_i).
– Parameter estimation:
  MLE: θ̂_{x_i | pa_i} = N(x_i, pa_i) / N(pa_i)
  Bayesian (Dirichlet): θ̃_{x_i | pa_i} = (α(x_i, pa_i) + N(x_i, pa_i)) / (α(pa_i) + N(pa_i))
Bayesian methods also require a choice of priors.
MLE and Bayesian estimates are asymptotically equivalent and consistent.
Both can be implemented in an on-line manner by accumulating sufficient statistics.
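Continuing the previous sketch (again my own illustration), the Bayesian estimate simply adds the Dirichlet pseudo-counts to the observed counts before normalizing; with a uniform prior α(x_i, pa_i) = 1 this is Laplace smoothing.

```python
def bayes_cpt(n_child, n_parent, values=(0, 1), alpha=1.0):
    """Dirichlet-smoothed CPT entries: (alpha + N(x, pa)) / (K * alpha + N(pa))."""
    K = len(values)
    return {
        (x, *pa): (alpha + n_child[(x, *pa)]) / (K * alpha + n_parent[pa])
        for pa in n_parent
        for x in values
    }

# Compared with the MLE, every estimate is pulled toward 1/K and no entry is exactly 0 or 1.
print(bayes_cpt(n_child, n_parent))
```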
Learning Structure from Complete Data
Benefits of Learning Structure
– Efficient learning: more accurate models with less data (compare estimating P(A) and P(B) separately vs. the joint P(A,B)).
– Discover structural properties of the domain: ordering of events, relevance.
– Identifying independencies ⇒ faster inference.
– Predict the effect of actions: involves learning causal relationships among variables.
Why Struggle for Accurate Structure?
Adding an arc:
– Increases the number of parameters to be fitted.
– Encodes wrong assumptions about causality and domain structure.
Missing an arc:
– Cannot be compensated for by accurate fitting of parameters.
– Also misses causality and domain structure.
[Figure: the Earthquake / Burglary / Alarm Set / Sound network shown with the correct structure, with an extra arc added, and with an arc missing.]
Approaches to Learning Structure
Constraint based:
– Perform tests of conditional independence.
– Search for a network that is consistent with the observed dependencies and independencies.
Pros & cons:
– Intuitive; follows closely the construction of BNs.
– Separates structure learning from the form of the independence tests.
– Sensitive to errors in individual tests.
Approaches to Learning Structure
Score based:
– Define a score that evaluates how well the (in)dependencies in a structure match the observations.
– Search for a structure that maximizes the score.
Pros & cons:
– Statistically motivated.
– Can make compromises.
– Takes the structure of conditional probabilities into account.
– Computationally hard.
Likelihood Score for Structures
First-cut approach: use the likelihood function.
Since we already know how to maximize the parameters, from now on we assume
Score_L(G : D) = ℓ(θ̂_G : D) = max_θ ℓ(θ, G : D),
the log-likelihood of the data at the MLE parameters for structure G.
Likelihood Score for Structure (cont.)
Rearranging terms:
Score_L(G : D) = M Σ_i I(X_i ; Pa_i^G) − M Σ_i H(X_i),
where H(X) is the entropy of X and I(X;Y) is the (empirical) mutual information between X and Y.
– I(X;Y) measures how much "information" each variable provides about the other.
– I(X;Y) ≥ 0.
– I(X;Y) = 0 iff X and Y are independent.
– I(X;Y) = H(X) iff X is totally predictable given Y.
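A quick sketch (my own) of the empirical mutual information that this score sums over, computed from a table of joint counts:

```python
import numpy as np

def mutual_information(counts):
    """Empirical I(X;Y) in nats from a 2-D array of joint counts N(x, y)."""
    p_xy = counts / counts.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0                                  # 0 * log 0 = 0 by convention
    return float((p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])).sum())

counts = np.array([[30.0, 10.0],
                   [ 5.0, 55.0]])                  # made-up joint counts for binary X, Y
print(mutual_information(counts))                  # > 0: X and Y look dependent
```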
Likelihood Score for Structure (cont.)
Good news: the likelihood score has an intuitive explanation:
– The larger the dependency of each variable on its parents, the higher the score.
– The likelihood acts as a compromise among dependencies, based on their strength.
Likelihood Score for Structure (cont.)
Bad news: adding arcs always helps.
– I(X;Y) ≤ I(X; {Y,Z}), so extra parents can never decrease the score.
– The maximal score is attained by fully connected networks.
– Such networks overfit the data: the parameters capture the noise in the data.
Avoiding Overfitting
A "classic" issue in learning. Approaches:
– Restricting the hypothesis space: limits the overfitting capability of the learner (e.g. restrict the number of parents or the number of parameters).
– Minimum description length: description length measures complexity; prefer models that compactly describe the training data.
– Bayesian methods: average over all possible parameter values; use prior knowledge.
Bayesian Inference
Bayesian reasoning: compute the expectation over the unknown structure G,
P(x[M+1] | D) = Σ_G P(G | D) P(x[M+1] | G, D).
Assumption: the structures G are mutually exclusive and exhaustive.
We know how to compute P(x[M+1] | G, D): it is the same as prediction with a fixed structure.
How do we compute P(G | D)?
Using Bayes' rule, the posterior score is
P(G | D) = P(D | G) P(G) / P(D),
where P(D | G) is the marginal likelihood, P(G) is the prior over structures, and P(D) is the probability of the data.
P(D) is the same for all structures G, so it can be ignored when comparing structures.
Marginal Likelihood
By introducing the parameters as latent variables, we have
P(D | G) = ∫ P(D | G, θ) P(θ | G) dθ,
i.e. the likelihood averaged over the prior over parameters. This integral measures sensitivity to the choice of parameters.
Marginal Likelihood for General Networks
For a general network the marginal likelihood factorizes into one Dirichlet marginal likelihood per family: for each variable X_i and each particular value of X_i's parents, we get a term for the sequence of values of X_i observed with that parent configuration. Here N(·) are the counts from the data and α(·) are the hyperparameters for each family given G.
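Written out, the standard closed form of this factorized Dirichlet marginal likelihood (a textbook expression, stated here for reference) is

```latex
P(D \mid G) = \prod_i \prod_{pa_i}
  \frac{\Gamma\big(\alpha(pa_i)\big)}{\Gamma\big(\alpha(pa_i) + N(pa_i)\big)}
  \prod_{x_i}
  \frac{\Gamma\big(\alpha(x_i, pa_i) + N(x_i, pa_i)\big)}{\Gamma\big(\alpha(x_i, pa_i)\big)},
\qquad \alpha(pa_i) = \sum_{x_i} \alpha(x_i, pa_i).
```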
Bayesian Score
Theorem: if the prior P(θ | G) is "well behaved", then for large M
log P(D | G) = ℓ(θ̂_G : D) − (log M / 2) · dim(G) + O(1),
where dim(G) is the number of independent parameters in G (the BIC approximation).
Asymptotic Behavior: Consequences
The Bayesian score is consistent:
– As M → ∞, the "true" structure G* maximizes the score (almost surely).
– For sufficiently large M, the maximal-scoring structures are equivalent to G*.
Observed data eventually overrides prior information, assuming the prior assigns positive probability to all cases.
Asymptotic Behavior
This score can also be justified by the Minimum Description Length (MDL) principle.
The equation explicitly shows the tradeoff between:
– fit to the data (the likelihood term), and
– a penalty for complexity (the regularization term).
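A small sketch (my own) of this score as code: the BIC/MDL score of a candidate structure is the maximized log-likelihood minus a complexity penalty. The dimension count below assumes multinomial CPTs with every variable having `k` states, and `loglik` is assumed to come from the count-based MLEs as in the earlier snippets; the usage values are hypothetical.

```python
import math

def dim_of_structure(parents, k=2):
    """Number of independent CPT parameters, assuming every variable has k states."""
    return sum((k - 1) * k ** len(pa) for pa in parents.values())

def bic_score(loglik, parents, num_samples, k=2):
    """BIC/MDL score: maximized log-likelihood minus (log M / 2) * dim(G)."""
    return loglik - 0.5 * math.log(num_samples) * dim_of_structure(parents, k)

# Hypothetical usage: structure A <- {E, B}, C <- {A} over binary variables.
parents = {"E": [], "B": [], "A": ["E", "B"], "C": ["A"]}
print(dim_of_structure(parents))                                   # 1 + 1 + 4 + 2 = 8
print(bic_score(loglik=-5230.4, parents=parents, num_samples=1000))
```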
Priors
Over network structure (minor role):
– Uniform, …
Over parameters:
– A separate prior for each structure is not feasible.
– Fixed Dirichlet distributions.
– The BDe prior.
BDe Score
Possible solution: the BDe prior. Represent the prior using two elements, M_0 and B_0:
– M_0: the equivalent sample size.
– B_0: a network representing the prior probability of events.
BDe Score
Intuition: M_0 prior examples distributed according to B_0.
– Set α(x_i, pa_i^G) = M_0 · P(x_i, pa_i^G | B_0).
– Compute P(x_i, pa_i^G | B_0) using standard inference procedures.
Such priors have desirable theoretical properties: equivalent networks are assigned the same score.
Scores: Summary
Likelihood, MDL, and (log) BDe scores all have the same decomposable form: a sum of local terms, one per family (X_i, Pa_i), computed from the counts N(X_i, Pa_i).
BDe requires assessing a prior network; it can naturally incorporate prior knowledge and previous experience.
BDe is consistent and asymptotically equivalent (up to a constant) to MDL.
All are score-equivalent: if G is equivalent to G', then Score(G) = Score(G').
Optimization Problem
Input:
– Training data
– Scoring function (including priors, if needed)
– Set of possible structures (including prior knowledge about structure)
Output:
– A network (or networks) that maximizes the score
Key property:
– Decomposability: the score of a network is a sum of terms.
Heuristic Search
The general problem is hard; tractable special cases include:
– Tree structures
– Known (fixed) variable orders
Otherwise we resort to heuristic search. Define a search space where:
– nodes are possible structures,
– edges denote adjacency of structures,
and traverse this space looking for high-scoring structures. Search techniques:
– Greedy hill-climbing
– Best-first search
– Simulated annealing
– …
Heuristic Search (cont.)
Typical operations on the current structure (over nodes S, C, E, D):
– Add an arc (e.g. add C → D)
– Delete an arc (e.g. delete C → E)
– Reverse an arc (e.g. reverse C → E)
Exploiting Decomposability in Local Search
Caching: to update the score after a local change, we only need to re-score the families that were changed in the last move.
Greedy Hill-Climbing
The simplest heuristic local search:
– Start with a given network: the empty network, the best tree, or a random network.
– At each iteration: evaluate all possible changes, apply the change that leads to the best improvement in score, and reiterate.
– Stop when no modification improves the score.
Each step requires evaluating approximately n new changes.
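A compact sketch (my own, not the slides' code) of greedy hill-climbing over arc additions, deletions, and reversals. Here `parents` maps each node to its parent set, `score` is assumed to be any decomposable structure score (e.g. the BIC function above), and the acyclicity check is kept deliberately naive.

```python
import itertools

def is_acyclic(parents):
    """Naive cycle check: repeatedly remove nodes with no remaining parents."""
    remaining = {v: set(ps) for v, ps in parents.items()}
    while remaining:
        roots = [v for v, ps in remaining.items() if not ps]
        if not roots:
            return False
        for r in roots:
            del remaining[r]
        for ps in remaining.values():
            ps.difference_update(roots)
    return True

def neighbors(parents):
    """All structures reachable by adding, deleting, or reversing one arc."""
    nodes = list(parents)
    for x, y in itertools.permutations(nodes, 2):
        g = {v: set(ps) for v, ps in parents.items()}
        if x in g[y]:
            g[y].discard(x)                      # delete x -> y
            yield g
            g2 = {v: set(ps) for v, ps in g.items()}
            g2[x].add(y)                         # reverse x -> y
            if is_acyclic(g2):
                yield g2
        else:
            g[y].add(x)                          # add x -> y
            if is_acyclic(g):
                yield g

def hill_climb(initial_parents, score):
    current, current_score = initial_parents, score(initial_parents)
    while True:
        best, best_score = None, current_score
        for cand in neighbors(current):          # with caching, only changed families need re-scoring
            s = score(cand)
            if s > best_score:
                best, best_score = cand, s
        if best is None:                         # local maximum (or plateau)
            return current, current_score
        current, current_score = best, best_score
```

With random restarts or a tabu list added, this is essentially the search procedure whose pitfalls are discussed on the next slide.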
Greedy Hill-Climbing: Possible Pitfalls
Greedy hill-climbing can get stuck in:
– Local maxima: all one-edge changes reduce the score.
– Plateaus: some one-edge changes leave the score unchanged. This happens because equivalent networks receive the same score and are neighbors in the search space.
Both occur during structure search. Standard heuristics can escape both:
– Random restarts
– TABU search
Search: Summary
A discrete optimization problem; in general NP-hard, so we need to resort to heuristic search.
In practice, search is relatively fast (~100 variables in ~10 minutes), thanks to:
– Decomposability
– Sufficient statistics
In some cases we can reduce the search problem to an easy optimization problem (example: learning trees).
Learning in Graphical Models

| Structure | Observability | Method                              |
|-----------|---------------|-------------------------------------|
| Known     | Full          | ML or MAP estimation                |
| Known     | Partial       | EM algorithm                        |
| Unknown   | Full          | Model selection or model averaging  |
| Unknown   | Partial       | EM + model selection or averaging   |
Graphical Models Summary
Representations:
– Graphs are a cool way to put constraints on distributions, so that you can say lots of stuff without even looking at the numbers!
Inference:
– Graphical models let you compute all kinds of different probabilities efficiently.
Learning:
– You can even learn them auto-magically!
Graphical Models
Pros and cons:
– Very powerful if the dependency structure is sparse and known.
– Modular (especially Bayesian networks).
– Flexible representation (but not as flexible as "feature passing").
– Many inference methods.
– Recent developments in learning Markov network parameters, but still tricky.