1
Review: Hidden Markov Models
Efficient dynamic programming algorithms exist for
– Finding Pr(S)
– The highest probability path P that maximizes Pr(S,P) (Viterbi)
Training the model
– State seq known: MLE + smoothing
– Otherwise: Baum-Welch algorithm
[Diagram: example four-state HMM (S1–S4) with transition probabilities and per-state emission distributions over the alphabet {A, C}]
2
HMM for Segmentation Simplest Model: One state per entity type
3
HMM Learning
Manually pick the HMM's graph (e.g., simple model, fully connected)
Learn transition probabilities: Pr(s_i | s_j)
Learn emission probabilities: Pr(w | s_i)
4
Learning model parameters
When training data defines a unique path through the HMM:
– Transition probabilities: Pr(state j | state i) = (number of transitions from i to j) / (total transitions out of state i)
– Emission probabilities: Pr(symbol k | state i) = (number of times k is generated from i) / (number of transitions out of state i)
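To make the counting concrete, here is a minimal sketch in Python (not from the slides; the (symbol, state) sequence format and the optional add-k smoothing constant are assumptions):

```python
from collections import defaultdict

def mle_hmm_params(labeled_seqs, smoothing=0.0):
    """Estimate transition and emission probabilities by counting,
    given sequences of (symbol, state) pairs (i.e., a known state path)."""
    trans_counts = defaultdict(lambda: defaultdict(float))
    emit_counts = defaultdict(lambda: defaultdict(float))
    for seq in labeled_seqs:
        for sym, state in seq:
            emit_counts[state][sym] += 1
        for (_, s_prev), (_, s_next) in zip(seq, seq[1:]):
            trans_counts[s_prev][s_next] += 1

    def normalize(counts):
        # add-k smoothing over observed outcomes only (a simplification)
        probs = {}
        for ctx, row in counts.items():
            total = sum(row.values()) + smoothing * len(row)
            probs[ctx] = {k: (v + smoothing) / total for k, v in row.items()}
        return probs

    return normalize(trans_counts), normalize(emit_counts)

# Toy address-like corpus in the spirit of the segmentation example used later
data = [[("15213", "House"), ("Butler", "Road"), ("Highway", "Road"),
         ("Greenville", "City"), ("21578", "Pin")]]
trans_p, emit_p = mle_hmm_params(data, smoothing=0.1)
```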
5
What is a “symbol” ??? Cohen => “Cohen”, “cohen”, “Xxxxx”, “Xx”, … ? 4601 => “4601”, “9999”, “9+”, “number”, … ? Datamold: choose best abstraction level using holdout set
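An illustrative sketch (my own, not Datamold's actual procedure) of mapping a token to progressively coarser abstraction levels like those above:

```python
import re

def abstraction_levels(token):
    """Return symbol representations for a token, from most specific
    to most abstract (illustrative choices of abstraction levels)."""
    levels = [token]                                   # "Cohen", "4601"
    if token.isdigit():
        levels.append("9" * len(token))                # "4601" -> "9999"
        levels.append("9+")                            # any digit string
        levels.append("number")
    else:
        shape = re.sub(r"[A-Z]", "X", re.sub(r"[a-z]", "x", token))
        levels.append(shape)                           # "Cohen" -> "Xxxxx"
        levels.append(re.sub(r"(.)\1+", r"\1", shape)) # "Xxxxx" -> "Xx"
    return levels

print(abstraction_levels("Cohen"))  # ['Cohen', 'Xxxxx', 'Xx']
print(abstraction_levels("4601"))   # ['4601', '9999', '9+', 'number']
```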
6
What is a symbol?
Ideally we would like to use many, arbitrary, overlapping features of words.
[Diagram: HMM lattice with states S_{t-1}, S_t, S_{t+1} and observations O_{t-1}, O_t, O_{t+1}; example features of the word "Wisniewski": identity of word, ends in "-ski", is capitalized, is part of a noun phrase, is in a list of city names, is under node X in WordNet, is in bold font, is indented, is in hyperlink anchor, ...]
We can extend the HMM model so that each state generates multiple "features" – but they should be independent.
7
Borthwick et al.'s solution
We could use YFCL: an SVM, logistic regression, a decision tree, ...
We'll be talking about logistic regression.
[Same lattice-and-features diagram as above]
Instead of an HMM, classify each token. Don't learn transition probabilities; instead, constrain them at test time.
8
Stupid HMM tricks
[Diagram: a start state transitions to a "red" state with probability Pr(red) or a "green" state with probability Pr(green); each state then self-loops: Pr(green|green) = 1, Pr(red|red) = 1]
9
Stupid HMM tricks
[Same diagram: start state goes to "red" with Pr(red) or "green" with Pr(green); Pr(green|green) = 1, Pr(red|red) = 1]
Pr(y|x) = Pr(x|y) * Pr(y) / Pr(x)
argmax_y Pr(y|x) = argmax_y Pr(x|y) * Pr(y)
                 = argmax_y Pr(y) * Pr(x_1|y) * Pr(x_2|y) * ... * Pr(x_m|y)
Pr("I voted for Ralph Nader" | ggggg) = Pr(g) * Pr(I|g) * Pr(voted|g) * Pr(for|g) * Pr(Ralph|g) * Pr(Nader|g)
10
HMMs = sequential NB
11
From NB to Maxent
13
Or: keep the same functional form as naïve Bayes, but pick the parameters to optimize performance on training data. One possible definition of performance is the conditional log likelihood of the data:
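One standard way to write this objective (the slide's exact notation may differ), maximized over the model parameters θ on labeled training pairs (x_i, y_i):

```latex
\mathrm{LCL}(\theta) \;=\; \sum_{i} \log \Pr(y_i \mid x_i; \theta)
```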
14
MaxEnt Comments
– Implementation: all methods are iterative. For NLP-like problems with many features, modern gradient-like or Newton-like methods work well. Thursday I'll derive the gradient for CRFs.
– Smoothing: typically maxent will overfit the data if there are many infrequent features.
  Old-school solutions: discard low-count features; early stopping with a holdout set; ...
  Modern solutions: penalize large parameter values with a prior centered on zero to limit the size of the alphas (i.e., optimize log likelihood minus a sum over the alphas); other regularization techniques.
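As a sketch of the "modern solution" (my own illustration, not the lecture's code; the penalty weight lam, the learning rate, and the toy data are assumptions), here is binary logistic regression trained by gradient ascent on log likelihood minus an L2 penalty on the parameters:

```python
import numpy as np

def train_logreg_l2(X, y, lam=0.1, lr=0.5, iters=1000):
    """Gradient ascent on  sum_i log Pr(y_i | x_i)  -  lam * sum_j w_j^2
    (binary labels y in {0, 1}; the penalty acts as a prior centered on zero)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted Pr(y = 1 | x)
        grad = X.T @ (y - p) - 2.0 * lam * w    # likelihood gradient minus penalty gradient
        w += lr * grad / n
    return w

# Tiny synthetic example with two features
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.1, 1.0], [0.0, 0.9]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w = train_logreg_l2(X, y)
```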
15
What is a symbol?
Ideally we would like to use many, arbitrary, overlapping features of words.
[Same lattice-and-features diagram as before]
16
Borthwick et al.'s idea
[Same lattice-and-features diagram]
Idea: replace the generative model in the HMM with a maxent model, where the state depends on the observations.
17
Another idea....
[Same lattice-and-features diagram]
Idea: replace generative model in HMM with a maxent model, where state depends on observations and previous state.
18
MaxEnt taggers and MEMMs
[Same lattice-and-features diagram]
Idea: replace generative model in HMM with a maxent model, where state depends on observations and previous state history.
Learning does not change – you've just added a few additional features that are the previous labels.
Classification is trickier – we don't know the previous-label features at test time – so we will need to search for the best sequence of labels (like for an HMM).
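A minimal sketch of that idea in Python (my own illustration: the feature names, the toy classifier, and the greedy left-to-right search are assumptions; the systems discussed in this lecture use Viterbi-style or beam search, shown on later slides):

```python
def token_features(tokens, i, prev_label):
    """Features for position i; the previous label is just another feature."""
    w = tokens[i]
    return {
        "word=" + w.lower(): 1.0,
        "is_cap": float(w[0].isupper()),
        "suffix=" + w[-3:]: 1.0,
        "prev_label=" + prev_label: 1.0,
    }

def greedy_decode(tokens, classifier, labels=("B", "I", "O")):
    """At test time the previous label is unknown, so plug in our own previous
    prediction (greedy search; Viterbi or beam search explore more of the space)."""
    prev, out = "START", []
    for i in range(len(tokens)):
        feats = token_features(tokens, i, prev)
        scores = {y: classifier(feats, y) for y in labels}
        prev = max(scores, key=scores.get)
        out.append(prev)
    return out

# classifier(feats, y) is any learned scoring function, e.g. a maxent model's
# log Pr(y | feats); here a toy stand-in that likes B on capitalized words:
toy = lambda feats, y: feats.get("is_cap", 0.0) if y == "B" else 0.1
print(greedy_decode("When will prof Cohen post the notes".split(), toy))
```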
19
Partial history of the idea
Sliding window classifiers
– Sejnowski's NETtalk, mid 1980's
Recurrent neural networks and other "recurrent" sliding-window classifiers
– Late 1980's and 1990's
Ratnaparkhi's thesis
– Mid-late 1990's
Freitag, McCallum & Pereira, ICML 2000
– Formalize notion of MEMM
OpenNLP
– Based largely on MaxEnt taggers, Apache Open Source
20
Ratnaparkhi’s MXPOST Sequential learning problem: predict POS tags of words. Uses MaxEnt model described above. Rich feature set. To smooth, discard features occurring < 10 times.
21
MXPOST
22
MXPOST: learning & inference
– GIS (Generalized Iterative Scaling)
– Feature selection
23
Using the HMM to segment
Find highest probability path through the HMM.
Viterbi: quadratic dynamic programming algorithm.
[Diagram: lattice with one column of states (House, Road, City, Pin) per observed token o_t of "15213 Butler Highway Greenville 21578"]
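For concreteness, a minimal log-space Viterbi sketch in Python (the dictionary-based model interface is an assumption, not the lecture's code; the parameters would come from the counting estimates sketched earlier):

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Highest-probability state path; O(len(obs) * |states|^2) dynamic program."""
    logp = lambda x: math.log(x) if x > 0 else float("-inf")
    V = [{s: logp(start_p.get(s, 0)) + logp(emit_p[s].get(obs[0], 0)) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({}); back.append({})
        for s in states:
            prev = max(states, key=lambda r: V[t-1][r] + logp(trans_p[r].get(s, 0)))
            V[t][s] = V[t-1][prev] + logp(trans_p[prev].get(s, 0)) + logp(emit_p[s].get(obs[t], 0))
            back[t][s] = prev
    best = max(states, key=lambda s: V[-1][s])
    path = [best]
    for t in range(len(obs) - 1, 0, -1):   # follow back-pointers to recover the path
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Example usage: states = ["House", "Road", "City", "Pin"]; start_p, trans_p, emit_p
# are nested dicts of probabilities, e.g. trans_p["House"]["Road"] = 0.9.
```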
24
Alternative inference schemes
25
MXPOST inference
26
Inference for MENE (Borthwick et al.'s system)
[Diagram: a B/I/O lattice with one column per token of "When will prof Cohen post the notes ..."]
Goal: best legal path through the lattice (i.e., the path that runs through the most black ink; like Viterbi, but the costs of possible transitions are ignored).
27
Inference for MXPOST
[Same B/I/O lattice diagram]
(Approximate view): find the best path; weights are now on arcs from state to state, with a window of k tags (here k = 1).
28
Inference for MXPOST
[Same B/I/O lattice diagram]
More accurately: find total flow to each node, weights are now on arcs from state to state.
29
Inference for MXPOST
[Same B/I/O lattice diagram]
Find best path? tree? Weights are on hyperedges.
30
Inference for MXPOST
[Diagram: lattice over "When will prof Cohen post the notes ...", where each node encodes the current and previous tags (iI, iO, oI, oO, ...)]
Beam search is an alternative to Viterbi: at each stage, find all children, score them, and discard all but the top n states.
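A minimal sketch of that beam search in Python (the score(history, token, label) interface and the toy scorer are assumptions standing in for the tagger's conditional model):

```python
def beam_search(tokens, labels, score, beam_size=3):
    """Keep only the top-n partial label sequences at each position.
    score(history, token, label) returns the log-probability contribution of
    assigning `label` given the labels chosen so far (assumed interface)."""
    beam = [((), 0.0)]                       # (label history, cumulative log score)
    for tok in tokens:
        candidates = []
        for hist, s in beam:                 # expand every child of every beam entry
            for y in labels:
                candidates.append((hist + (y,), s + score(hist, tok, y)))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beam = candidates[:beam_size]        # discard all but the top n
    return beam[0][0]                        # best surviving label sequence

# Toy scorer: prefer B on capitalized tokens, I after B/I, otherwise O
def toy_score(hist, tok, y):
    if tok[0].isupper():
        return 0.0 if y == "B" else -2.0
    if y == "I" and hist and hist[-1] in ("B", "I"):
        return -0.5
    return -1.0 if y == "O" else -3.0

print(beam_search("When will prof Cohen post the notes".split(), ("B", "I", "O"), toy_score))
```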
31
MXPOST results
State-of-the-art accuracy (for 1996).
Same approach used successfully for several other sequential classification steps of a stochastic parser (also state of the art).
Same (or similar) approaches used for NER by Borthwick, Malouf, Manning, and others.
32
MEMMs
Basic difference from ME tagging:
– ME tagging: previous state is a feature of the MaxEnt classifier
– MEMM: build a separate MaxEnt classifier for each state. Can build any HMM architecture you want, e.g. parallel nested HMMs, etc. Data is fragmented: examples where the previous tag is "proper noun" give no information about learning tags when the previous tag is "noun".
– Mostly a difference in viewpoint
– MEMM does allow the possibility of "hidden" states and Baum-Welch-like training
33
MEMM task: FAQ parsing
34
MEMM features
35
MEMMs
36
Looking forward
HMMs
– Easy to train generative model
– Features for a state must be independent (-)
MaxEnt tagger/MEMM
– Multiple cascaded classifiers
– Features can be arbitrary (+)
– Have we given anything up?
37
HMM inference
[Diagram: the House/Road/City/Pin lattice over "15213 Butler ... 21578"]
Total probability of transitions out of a state must sum to 1.
But... they can all lead to "unlikely" states.
So... a state can be a (probable) "dead end" in the lattice.
38
Inference for MXPOST
[Same B/I/O lattice diagram]
More accurately: find total flow to each node, weights are now on arcs from state to state.
Flow out of each node is always fixed: the outgoing probabilities Pr(y' | y, x) sum to 1.
39
Label Bias Problem (Lafferty, McCallum & Pereira, ICML 2001)
Consider this MEMM, and enough training data to perfectly model it:
[Diagram: an MEMM with start state 0; one path 0→1→2→3 reading r, o, b and another path 0→4→5→3 reading r, i, b]
Pr(0123|rob) = Pr(1|0,r)/Z1 * Pr(2|1,o)/Z2 * Pr(3|2,b)/Z3 = 0.5 * 1 * 1
Pr(0453|rib) = Pr(4|0,r)/Z1' * Pr(5|4,i)/Z2' * Pr(3|5,b)/Z3' = 0.5 * 1 * 1
Pr(0123|rib) = 1   Pr(0453|rob) = 1
The point: states 1 and 4 each have only one outgoing transition, so per-state normalization gives that transition probability 1 regardless of the observation – the middle letter (i vs. o) is effectively ignored.
40
Another max-flow scheme
[Same B/I/O lattice diagram]
More accurately: find total flow to each node, weights are now on arcs from state to state.
Flow out of a node is always fixed:
41
Another max-flow scheme: MRFs
[Same B/I/O lattice diagram]
Goal is to learn how to weight edges in the graph:
weight(y_i, y_{i+1}) = 2*[(y_i = B or I) and isCap(x_i)]
                     + 1*[y_i = B and isFirstName(x_i)]
                     - 5*[y_{i+1} ≠ B and isLower(x_i) and isUpper(x_{i+1})]
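A direct transcription of that example weighting into Python, just to make the bracket notation concrete (the weights 2, 1, -5 are the slide's illustrative values; the predicate implementations and the toy gazetteer are assumptions, and [..] is an indicator that is 1 when true, 0 otherwise):

```python
def edge_weight(y_i, y_next, x_i, x_next, is_first_name):
    """weight(y_i, y_{i+1}) from the slide; ind(..) plays the role of [..]."""
    def ind(cond):
        return 1 if cond else 0
    return (2 * ind(y_i in ("B", "I") and x_i[0].isupper())       # isCap(x_i)
            + 1 * ind(y_i == "B" and is_first_name(x_i))
            - 5 * ind(y_next != "B" and x_i.islower() and x_next[0].isupper()))

first_names = {"Ralph"}  # toy gazetteer, purely illustrative
w = edge_weight("B", "I", "Ralph", "Nader", lambda tok: tok in first_names)  # -> 3
```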
42
Another max-flow scheme: MRFs
[Same B/I/O lattice diagram]
Find total flow to each node, weights are now on edges from state to state.
Goal is to learn how to weight edges in the graph, given features from the examples.
43
Another view of label bias [Sha & Pereira] So what’s the alternative?