Learning Markov Networks

1 Learning Markov Networks

2 Learning Markov Networks
Learning parameters (weights)
    Generatively
    Discriminatively
Learning with incomplete data
Learning structure (features)

3 Generative Weight Learning
Maximize likelihood or posterior probability
Numerical optimization (gradient or 2nd-order methods)
No local maxima
Requires inference at each step (slow!)

Gradient of the log-likelihood with respect to weight i:
    ∂/∂wi log Pw(x) = ni(x) − Ew[ni(x)]
where ni(x) is the no. of times feature i is true in the data, and Ew[ni(x)] is the expected no. of times feature i is true according to the model.

4 Pseudo-Likelihood
Likelihood of each variable given its neighbors in the data:
    PL(x) = ∏i P(xi | neighbors(xi))
Does not require inference at each step
Consistent estimator
Widely used in vision, spatial statistics, etc.
But PL parameters may not work well for long inference chains
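A minimal sketch of why pseudo-likelihood avoids inference, using a toy model with features invented for illustration: each conditional P(xi | rest) compares the current state only against the state with xi flipped, so every term needs a two-way local normalization rather than the global partition function:

```python
import math

# Toy feature set (invented for illustration).
features = [
    lambda x: 1.0 if x[0] == 1 else 0.0,
    lambda x: 1.0 if x[0] == x[1] else 0.0,
]

def score(w, x):
    return sum(wi * f(x) for wi, f in zip(w, features))

def log_pseudo_likelihood(w, x):
    total = 0.0
    for i in range(len(x)):
        flipped = list(x)
        flipped[i] = 1 - flipped[i]
        s_x = score(w, x)
        s_alt = score(w, tuple(flipped))
        # log P(xi | x_-i): only a 2-term local normalization is needed,
        # never a sum over all joint states.
        total += s_x - math.log(math.exp(s_x) + math.exp(s_alt))
    return total
```

With all weights zero every conditional is uniform, so the log pseudo-likelihood of any state is just len(x) · log(1/2).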

5 Discriminative Weight Learning (a.k.a. Conditional Random Fields)
Maximize conditional likelihood of query (y) given evidence (x):
    ∂/∂wi log Pw(y|x) = ni(x, y) − Ew[ni(x, y)]
where ni(x, y) is the no. of true groundings of clause i in the data, and Ew[ni(x, y)] is the expected no. of true groundings according to the model.
Inference is easier, but still hard

6 Voted Perceptron
Approximate the expected counts by the counts in the MAP state of y given x
Originally proposed for training HMMs discriminatively

    wi ← 0
    for t ← 1 to T do
        yMAP ← InferMAP(x)    (MAP state under the current weights)
        wi ← wi + η [counti(yData) − counti(yMAP)]
    return ∑t wi / T    (average of the weights over the T iterations)
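A runnable sketch of the pseudocode above, reusing a toy CRF setup with invented features; brute-force enumeration stands in for a real MAP solver (e.g. max-product or graph cuts):

```python
import itertools

# Toy CRF features over (evidence x, query y) -- invented for illustration.
features = [
    lambda x, y: 1.0 if y[0] == x else 0.0,
    lambda x, y: 1.0 if y[0] == y[1] else 0.0,
]

def infer_map(w, x, n_vars=2):
    # Brute-force MAP inference over the query variables;
    # a stand-in for a real MAP solver.
    ys = itertools.product([0, 1], repeat=n_vars)
    return max(ys, key=lambda y: sum(wi * f(x, y) for wi, f in zip(w, features)))

def voted_perceptron(x, y_data, T=10, eta=1.0):
    w = [0.0] * len(features)
    w_sum = [0.0] * len(features)
    for _ in range(T):
        y_map = infer_map(w, x)
        # Move weights toward features true in the data and away from
        # features true in the current MAP state.
        w = [wi + eta * (f(x, y_data) - f(x, y_map))
             for wi, f in zip(w, features)]
        w_sum = [s + wi for s, wi in zip(w_sum, w)]
    # Averaging the weights over the T iterations is the "voting".
    return [s / T for s in w_sum]
```

Once the MAP state equals the observed state the update is zero, so the weights stop moving and averaging simply damps the early, noisier iterates.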

7 Other Weight Learning Approaches
Generative: iterative scaling
Discriminative: max-margin

8 Learning with Missing Data
Gradient of the likelihood is now a difference of expectations
Can use gradient descent or EM

    ∂/∂wi log Pw(x) = Ey|x[ni(x, y)] − Ex,y[ni(x, y)]

where x is observed and y is missing: the first term is the expected no. of true groundings given the observed data x, and the second is the expected no. of true groundings given no data.

9 Structure Learning
Start with atomic features
Greedily conjoin features to improve the score
Problem: need to re-estimate the weights for each new candidate
Approximation: keep the weights of previous features constant
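The greedy loop can be sketched over two binary variables; the data, feature set, and scoring choices below are invented for illustration. Each candidate is a conjunction of existing features, and when a candidate is scored only its own weight is fitted while the earlier weights stay fixed, matching the approximation above:

```python
import itertools
import math

STATES = list(itertools.product([0, 1], repeat=2))

def avg_log_likelihood(feats, w, data):
    # Exact average log-likelihood, used here as the structure score.
    scores = [sum(wi * f(s) for wi, f in zip(w, feats)) for s in STATES]
    log_z = math.log(sum(math.exp(v) for v in scores))
    return sum(sum(wi * f(x) for wi, f in zip(w, feats))
               for x in data) / len(data) - log_z

def fit_last_weight(feats, w_fixed, data, iters=300, eta=0.5):
    # 1-D gradient ascent on the newest feature's weight only;
    # the earlier weights in w_fixed are held constant.
    f = feats[-1]
    emp = sum(f(x) for x in data) / len(data)
    v = 0.0
    for _ in range(iters):
        w = w_fixed + [v]
        scores = [sum(wi * g(s) for wi, g in zip(w, feats)) for s in STATES]
        z = sum(math.exp(t) for t in scores)
        model = sum(math.exp(t) / z * f(s) for t, s in zip(scores, STATES))
        v += eta * (emp - model)
    return v

atoms = [lambda x: float(x[0]), lambda x: float(x[1])]
data = [(1, 1), (1, 1), (0, 0), (1, 0)]

# Start with atomic features, fitting each weight as it is added.
feats, w = [], []
for a in atoms:
    feats.append(a)
    w.append(fit_last_weight(feats, w, data))

# One greedy step: conjoin pairs of atoms, keep the best-scoring candidate.
best = None
for f, g in itertools.combinations(atoms, 2):
    cand = lambda s, f=f, g=g: f(s) * g(s)
    v = fit_last_weight(feats + [cand], w, data)
    score = avg_log_likelihood(feats + [cand], w + [v], data)
    if best is None or score > best[0]:
        best = (score, cand, v)
feats.append(best[1])
w.append(best[2])
```

In a real system the same loop would run over many candidates per step, with pseudo-likelihood or a penalized score replacing the exact likelihood.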

