
1 CRFs for ASR: Extending to Word Recognition Jeremy Morris 05/16/2008

2 Outline Review of Background and Previous Work Word Recognition Pilot experiments

3 Background Conditional Random Fields (CRFs)  Discriminative probabilistic sequence model  Directly defines a posterior probability of a label sequence Y given an input observation sequence X - P(Y|X)  An extension of Maximum Entropy (MaxEnt) models to sequences

4 Conditional Random Fields CRF extends maximum entropy models by adding weighted transition functions  Both types of functions can be defined to incorporate observed inputs
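Written out, the linear-chain CRF the slides describe takes the standard form below (conventional notation, reconstructed rather than copied from the slide: s_j are state functions with weights λ_j, t_k are transition functions with weights μ_k, and Z(X) normalizes over all label sequences):

P(Y \mid X) = \frac{1}{Z(X)} \exp\left( \sum_{i} \left[ \sum_{j} \lambda_j\, s_j(y_i, X, i) + \sum_{k} \mu_k\, t_k(y_{i-1}, y_i, X, i) \right] \right)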

5 Conditional Random Fields [Figure: chain of label nodes Y linked to observation nodes X] State functions help determine the identity of the state  Transition functions add associations between transitions from one label to another

6 Background: Previous Experiments Goal: Integrate outputs of speech attribute detectors together for recognition  e.g. Phone classifiers, phonological feature classifiers Attribute detector outputs highly correlated  Stop detector vs. phone classifier for /t/ or /d/ Build a CRF model and compare to a Tandem HMM built using the same features

7 Background: Previous Experiments Feature functions built using the neural net output  Each attribute/label combination gives one feature function  Phone class: s_{/t/,/t/} or s_{/t/,/s/}  Feature class: s_{/t/,stop} or s_{/t/,dental}
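As an illustration of the form these take (a sketch consistent with the description above, not a verbatim copy of the slide), a state feature function pairing the label /t/ with the /t/ phone-detector output can be written so that it returns the network's posterior when the label matches and zero otherwise:

s_{/t/,/t/}(y_i, X, i) = \begin{cases} P_{\mathrm{MLP}}(/t/ \mid x_i) & \text{if } y_i = /t/ \\ 0 & \text{otherwise} \end{cases}

The attribute-based functions (e.g., s_{/t/,stop}) have the same shape, with the stop-detector posterior in place of the phone posterior.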

8 Background: Previous Results

Model                                   Accuracy
CRF (phone posteriors)                  67.32%
CRF (phone posteriors – realigned)      69.92%***
Tandem[3] 4mix (phones)                 68.07%
Tandem[3] 16mix (phones)                69.34%
CRF (phono. fea. linear KL)             66.37%
CRF (phono. fea. lin-KL – realigned)    68.99%**
Tandem[3] 4mix (phono. fea.)            68.30%
Tandem[3] 16mix (phono. fea.)           69.13%
CRF (phones+feas)                       68.43%
CRF (phones+feas – realigned)           70.63%***
Tandem[3] 16mix (phones+feas)           69.40%

* Significantly (p<0.05) better than comparable CRF monophone system
** Significantly (p<0.05) better than comparable Tandem 4mix triphone system
*** Significantly (p<0.05) better than comparable Tandem 16mix triphone system

Background: Previous Results We now have CRF models that perform as well as or better than HMM models for the task of phone recognition Problem: How do we extend this to word recognition? 9

Word Recognition Problem: For a given input signal X, find the word string W that maximizes P(W|X) The CRF gives us an assignment over phone labels, not over word labels 10
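In conventional notation, the decomposition this slide builds toward sums over phone sequences Φ (a reconstruction, not the slide's exact equation):

W^{*} = \arg\max_{W} P(W \mid X) = \arg\max_{W} \sum_{\Phi} P(W \mid \Phi, X)\, P(\Phi \mid X)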


Word Recognition Assume that the word sequence is independent of the signal given the phone sequence (dictionary assumption) 12
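Under the dictionary assumption the conditioning on the signal drops out of the first term (again a sketch in standard notation):

P(W \mid \Phi, X) \approx P(W \mid \Phi) \quad\Longrightarrow\quad P(W \mid X) \approx \sum_{\Phi} P(W \mid \Phi)\, P(\Phi \mid X)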

Word Recognition Another problem: CRF does not give P(Φ|X)  Φ here is a phone segment level assignment of phone labels  CRF gives related quantity – P(Q|X) where Q is the frame level assignment of phone labels 13

Word Recognition Frame level vs. Phone segment level  Mapping from frame level to phone level may not be deterministic  Example: The number “OH” with pronunciation /ow/  Consider this sequence of frame labels: ow ow ow ow ow ow ow  How many separate utterances of the word “OH” does that sequence represent? 14

Word Recognition Frame level vs. Phone segment level  This problem occurs because we’re using a single state to represent the phone /ow/ Phone either transitions to itself or transitions out to another phone  What if we change this representation to a multi-state model? This brings us closer to the HMM topology: ow1 ow2 ow2 ow2 ow2 ow3 ow3  Now we can see a single “OH” in this utterance 15

Word Recognition Another problem: CRF does not give P(Φ|X)  Multi-state model gives us a deterministic mapping of Q -> Φ Each frame-level assignment Q has exactly one segment level assignment associated with it Potential problem – what if the multi-state model is inappropriate for the features we’ve chosen? 16
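With a deterministic frame-to-segment mapping, the segment-level posterior can be obtained by summing the frame-level CRF posteriors over all frame sequences that collapse to the same segment sequence (notation assumed, not taken from the slide):

P(\Phi \mid X) = \sum_{Q \,:\, Q \mapsto \Phi} P(Q \mid X)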

Word Recognition What about P(W|Φ)?  Non-deterministic across sequences of words Φ = / ah f eh r / W = ? “a fair”? “affair”? The more words in the string, the more possible combinations can arise Not easy to see how this could be computed directly or broken into smaller pieces for computation 17

Word Recognition Dumb thing first: Bayes Rule  P(W) – language model  P(Φ|W) – dictionary model  P(Φ) – prior probability of phone sequences  All of these can be built from data 18
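The Bayes-rule expansion that the three terms above come from (reconstructed in standard form):

P(W \mid \Phi) = \frac{P(\Phi \mid W)\, P(W)}{P(\Phi)}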

Proposed Implementation CRF code produces a finite-state lattice of phone transitions Implement the first term as composition of finite state machines As an approximation, take the highest scoring word sequence (argmax) instead of performing the summation 19
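Putting the pieces together, the decoding rule being implemented is roughly the following, with the sum over Φ replaced by the highest-scoring sequence and each term realized as a weighted finite-state composition (a reconstruction of the intent, not the slide's exact equation):

W^{*} \approx \arg\max_{W}\, \max_{\Phi}\; \frac{P(\Phi \mid W)\, P(W)}{P(\Phi)}\; P(\Phi \mid X)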

Pilot Experiment: TIDIGITS First word recognition experiment – TIDIGITS recognition  Both isolated and strings of spoken digits, ZERO (or OH) to NINE  Male and female speakers Training set – 112 speakers total  Random selection of 11 speakers held out as development set  Remaining 101 speakers used for training as needed 20

Pilot Experiment: TIDIGITS Important characteristic of the DIGITS problem – a given phone sequence maps to a single word sequence  P(W|Φ) is easy to implement as FSTs in this problem 21

Pilot Experiment: TIDIGITS Implementation  Created a composed dictionary and language model FST No probabilistic weights applied to these FSTs – assumption of uniform probability of any digit sequence  Modified CRF code to allow composition of above FST with phone lattice Results written to MLF file and scored using standard HTK tools Results compared to HMM system trained on same features 22

23 Pilot Experiment: TIDIGITS Features  Choice of multi-state model for CRF may not be best fit with neural network posterior outputs The neural network abstracts away distinctions among different parts of the phone across time (by design)  Phone Classification (Gunawardana et al., 2005) Feature functions designed to take MFCCs, PLP or other traditional ASR inputs and use them in CRFs Gives the equivalent of a single Gaussian per state model Fairly easy to adapt to our CRFs
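A sketch of how such feature functions can be written (my notation, not the slide's; see Gunawardana et al., 2005 for the actual formulation): for each label q and input dimension d, include the first- and second-order sufficient statistics of the input, plus a per-label bias,

f^{(1)}_{q,d}(y_i, x_i) = \delta(y_i = q)\, x_{i,d}, \qquad f^{(2)}_{q,d}(y_i, x_i) = \delta(y_i = q)\, x_{i,d}^{2}, \qquad f^{(0)}_{q}(y_i) = \delta(y_i = q)

With trained weights these terms recover the log of a diagonal-covariance Gaussian per state, which is why the model behaves like a single-Gaussian-per-state system.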

Pilot Experiment: TIDIGITS Labels  Unlike TIMIT, TIDIGITS files do not come with phone-level labels  To generate these labels for CRF training, weights derived from TIMIT were used to force align a state-level transcript  This label file was then used for training the CRF 24

25 Pilot Experiment: Results

Model                            Accuracy
HMM (monophone, 1 Gaussian)      98.74%
HMM (triphone, 1 Gaussian)       98.45%
HMM (monophone, 32 Gaussians)    99.93%
HMM (triphone, 32 Gaussians)     99.82%
CRF                              98.81%

CRF performance falls in line with the single-Gaussian models. The CRF with these features achieves ~63% accuracy on the TIMIT phone task, compared to ~69% accuracy for the 32-mixture triphone HMM. These results may not be the best we can get for the CRF – still working on tuning the learning rate and trying various realignments.

26 Pilot Experiment: TIDIGITS Features – Part II  Tandem systems often concatenate phone posteriors with MFCCs or PLPs for recognition We can incorporate those features here as well This is closer to our original experiments, though we did not use the PLPs directly in the CRF before These results show phone posteriors trained on TIMIT and applied to TIDIGITS – MLPs were not retrained on TIDIGITS Experiments are still running, but I have preliminary results

27 Pilot Experiment: Results

Model                            Accuracy
HMM (monophone, 1 Gaussian)      XX
HMM (triphone, 1 Gaussian)       XX
HMM (monophone, 32 Gaussians)    XX
HMM (triphone, 32 Gaussians)     99.75%
CRF                              98.99%

CRF performance increases over just using raw PLPs, but not by much. HMM performance shows a slight but insignificant degradation from using PLPs alone. As a comparison, for phone recognition with these features the HMM achieves 71.5% accuracy and the CRF achieves 72% accuracy. Again, results have not had full tuning; I strongly suspect that in this case the learning rate for the CRF is not well tuned, but these are preliminary numbers.

Pilot Experiment: What Next? Continue gathering results on TIDIGITS trials  Experiments currently running examining different features, as well as the use of transition feature functions  Consider ways of getting that missing information to bring the results closer to parity with 32 Gaussian HMMs (e.g. more features) Work on the P(W|Φ) model  Computing probabilities – best way to get P(Φ)?  Building and applying LM FSTs to an interesting test Move to a more interesting data set  WSJ 5K words is my current thought in this regard 28

Discussion 29

References
J. Lafferty et al., “Conditional Random Fields: Probabilistic models for segmenting and labeling sequence data”, Proc. ICML, 2001.
A. Berger, “A Brief MaxEnt Tutorial”.
R. Rosenfeld, “Adaptive statistical language modeling: a maximum entropy approach”, PhD thesis, CMU, 1994.
A. Gunawardana et al., “Hidden Conditional Random Fields for phone classification”, Proc. Interspeech, 2005.

Background – Discriminative Models Directly model the association between the observed features and labels for those features  e.g. neural networks, maximum entropy models  Attempt to model boundaries between competing classes Probabilistic discriminative models  Give conditional probabilities instead of hard class decisions  Find the class y that maximizes P(y|x) for observed features x 31

Background – Sequential Models Used to classify sequences of data  HMMs the most common example  Find the most probable sequence of class labels Class labels depend not only on observed features, but on surrounding labels as well  Must determine transitions as well as state labels 32

Background – Sequential Models Sample Sequence Model - HMM 33

Conditional Random Fields A probabilistic, discriminative classification model for sequences  Based on the idea of Maximum Entropy Models (Logistic Regression models) expanded to sequences 34

35 Conditional Random Fields Probabilistic sequence model [Figure: chain of label nodes Y]

36 Conditional Random Fields Probabilistic sequence model  Label sequence Y has a Markov structure  Observed sequence X may have any structure

37 Conditional Random Fields Extends the idea of maxent models to sequences  Label sequence Y has a Markov structure  Observed sequence X may have any structure  State functions help determine the identity of the state

38 Conditional Random Fields Extends the idea of maxent models to sequences  Label sequence Y has a Markov structure  Observed sequence X may have any structure  State functions help determine the identity of the state  Transition functions add associations between transitions from one label to another

Maximum Entropy Models Probabilistic, discriminative classifiers  Compute the conditional probability of a class y given an observation x – P(y|x)  Build up this conditional probability using the principle of maximum entropy In the absence of evidence, assume a uniform probability for any given class As we gain evidence (e.g. through training data), modify the model such that it supports the evidence we have seen but keeps a uniform probability for unseen hypotheses 39

Maximum Entropy Example Suppose we have a bin of candies, each with an associated label (A,B,C, or D)  Each candy has multiple colors in its wrapper  Each candy is assigned a label randomly based on some distribution over wrapper colors 40 * Example inspired by Adam Berger’s Tutorial on Maximum Entropy

Maximum Entropy Example For any candy with a red label pulled from the bin:  P(A|red)+P(B|red)+P(C|red)+P(D|red) = 1  Infinite number of distributions exist that fit this constraint  The distribution that fits with the idea of maximum entropy is: P(A|red)=0.25 P(B|red)=0.25 P(C|red)=0.25 P(D|red)=0.25 41

Maximum Entropy Example Now suppose we add some evidence to our model  We note that 80% of all candies with red labels are either labeled A or B P(A|red) + P(B|red) = 0.8  The updated model that reflects this would be: P(A|red) = 0.4 P(B|red) = 0.4 P(C|red) = 0.1 P(D|red) = 0.1  As we make more observations and find more constraints, the model gets more complex 42

Maximum Entropy Models “Evidence” is given to the MaxEnt model through the use of feature functions  Feature functions provide a numerical value given an observation  Weights on these feature functions determine how much a particular feature contributes to a choice of label In the candy example, feature functions might be built around the existence or non-existence of a particular color in the wrapper In NLP applications, feature functions are often built around words or spelling features in the text 43

Maximum Entropy Models The maxent model for k competing classes Each feature function s(x,y) is defined in terms of the input observation (x) and the associated label (y) Each feature function has an associated weight (λ) 44
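In its usual form, with Z(x) normalizing over the k competing classes (conventional notation, reconstructed rather than copied from the slide):

P(y \mid x) = \frac{1}{Z(x)} \exp\left( \sum_{j} \lambda_j\, s_j(x, y) \right), \qquad Z(x) = \sum_{y'} \exp\left( \sum_{j} \lambda_j\, s_j(x, y') \right)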

Maximum Entropy – Feature Funcs. Feature functions for a maxent model associate a label and an observation  For the candy example, feature functions might be based on labels and wrapper colors  In an NLP application, feature functions might be based on labels (e.g. POS tags) and words in the text 45

Maximum Entropy – Feature Funcs. Example: MaxEnt POS tagging  Associates a tag (NOUN) with a word in the text (“dog”)  This function evaluates to 1 only when both occur in combination At training time, both tag and word are known At evaluation time, we evaluate for all possible classes and find the class with highest probability 46
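The feature function this slide describes can be written as an indicator (a reconstruction; the word "dog" follows the slide's example):

s_{\text{NOUN},\text{"dog"}}(x, y) = \begin{cases} 1 & \text{if } y = \text{NOUN and } x = \text{"dog"} \\ 0 & \text{otherwise} \end{cases}

A second function of the same form for a different tag paired with the same word (say, VERB and "dog" – the particular pairing here is illustrative, not from the slide) gets its own weight and can never fire on the same (x, y) pair as the first, which is the point made on the next slide.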

Maximum Entropy – Feature Funcs. These two feature functions would never fire simultaneously  Each would have its own lambda-weight for evaluation 47

Maximum Entropy – Feature Funcs. MaxEnt models do not make assumptions about the independence of features  Depending on the application, feature functions can benefit from context 48

Maximum Entropy – Feature Funcs. Other feature functions possible beyond simple word/tag association  Does the word have a particular prefix?  Does the word have a particular suffix?  Is the word capitalized?  Does the word contain punctuation? Ability to integrate many complex but sparse observations is a strength of maxent models. 49

Conditional Random Fields Feature functions defined as for maxent models  Label/observation pairs for state feature functions  Label/label/observation triples for transition feature functions Often transition feature functions are left as “bias features” – label/label pairs that ignore the attributes of the observation 50

Conditional Random Fields Example: CRF POS tagging  Associates a tag (NOUN) with a word in the text (“dog”) AND with a tag for the prior word (DET)  This function evaluates to 1 only when all three occur in combination At training time, both tag and word are known At evaluation time, we evaluate for all possible tag sequences and find the sequence with highest probability (Viterbi decoding) 51
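Written as an indicator over the current tag, the previous tag, and the word (a reconstruction following the slide's description):

t_{\text{DET},\text{NOUN},\text{"dog"}}(y_{i-1}, y_i, x_i) = \begin{cases} 1 & \text{if } y_{i-1} = \text{DET},\; y_i = \text{NOUN},\; x_i = \text{"dog"} \\ 0 & \text{otherwise} \end{cases}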

Conditional Random Fields Example – POS tagging (Lafferty, 2001)  State feature functions defined as word/label pairs  Transition feature functions defined as label/label pairs  Achieved results comparable to an HMM with the same features 52

Model    Error    OOV error
HMM      5.69%    45.99%
CRF      5.55%    48.05%

Conditional Random Fields Example – POS tagging (Lafferty, 2001)  Adding more complex and sparse features improved the CRF performance Capitalization? Suffixes? (-iy, -ing, -ogy, -ed, etc.) Contains a hyphen? 53

Model    Error    OOV error
HMM      5.69%    45.99%
CRF      5.55%    48.05%
CRF+     4.27%    23.76%

54 Conditional Random Fields Based on the framework of Markov Random Fields [Figure: undirected graph over phone labels, e.g. /k/ and /iy/]

55 Conditional Random Fields Based on the framework of Markov Random Fields  A CRF iff the graph of the label sequence is an MRF when conditioned on a set of input observations (Lafferty et al., 2001)

56 Conditional Random Fields Based on the framework of Markov Random Fields  A CRF iff the graph of the label sequence is an MRF when conditioned on the input observations  State functions help determine the identity of the state

57 Conditional Random Fields Based on the framework of Markov Random Fields  A CRF iff the graph of the label sequence is an MRF when conditioned on the input observations  State functions help determine the identity of the state  Transition functions add associations between transitions from one label to another

58 Conditional Random Fields CRF defined by a weighted sum of state and transition functions  Both types of functions can be defined to incorporate observed inputs  Weights are trained by maximizing the likelihood function via gradient descent methods
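For reference, the gradient used in that training has the familiar observed-minus-expected form (standard result, stated here in the same notation as above rather than copied from the slide; shown for a single utterance and a state-function weight λ_j, with an analogous expression for the transition weights μ_k using pairwise marginals):

\frac{\partial \log P(Y \mid X)}{\partial \lambda_j} = \sum_{i} s_j(y_i, X, i) \;-\; \sum_{i} \sum_{y'} P(y_i = y' \mid X)\, s_j(y', X, i)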

59 SLaTe Experiments - Setup CRF code  Built on the Java CRF toolkit from Sourceforge  Performs maximum log-likelihood training  Uses the Limited-Memory BFGS (L-BFGS) algorithm to minimize the negative log-likelihood

60 SLaTe Experiments Implemented CRF models on data from phonetic attribute detectors  Performed phone recognition  Compared results to Tandem/HMM system on same data Experimental Data  TIMIT corpus of read speech

61 SLaTe Experiments - Attributes Attribute Detectors  ICSI QuickNet Neural Networks Two different types of attributes  Phonological feature detectors Place, Manner, Voicing, Vowel Height, Backness, etc. N-ary features in eight different classes Posterior outputs -- P(Place=dental | X)  Phone detectors Neural networks output based on the phone labels  Trained using PLP 12+deltas

62 Experimental Setup Baseline system for comparison  Tandem/HMM baseline (Hermansky et al., 2000)  Use outputs from neural networks as inputs to Gaussian-based HMM system  Built using HTK HMM toolkit Linear inputs  Better performance for Tandem with linear outputs from neural network  Decorrelated using a Karhunen-Loeve (KL) transform

63 Background: Previous Experiments Speech Attributes  Phonological feature attributes Detector outputs describe phonetic features of a speech signal  Place, Manner, Voicing, Vowel Height, Backness, etc. A phone is described with a vector of feature values  Phone class attributes Detector outputs describe the phone label associated with a portion of the speech signal  /t/, /d/, /aa/, etc.

64 Initial Results (Morris & Fosler-Lussier, 06)

Model                             Params    Phone Accuracy
Tandem [1] (phones)               20,       %
Tandem [3] (phones) 4mix          420,      %*
CRF [1] (phones)                            %*
Tandem [1] (feas)                 14,       %
Tandem [3] (feas) 4mix            360,      %*
CRF [1] (feas)                              %*
Tandem [1] (phones/feas)          34,       %
Tandem [3] (phones/feas) 4mix     774,      %
CRF (phones/feas)                           %*

* Significantly (p<0.05) better than comparable Tandem monophone system
* Significantly (p<0.05) better than comparable CRF monophone system

65 Feature Combinations CRF model supposedly robust to highly correlated features  Makes no assumptions about feature independence Tested this claim with combinations of correlated features  Phone class outputs + Phono. Feature outputs  Posterior outputs + transformed linear outputs Also tested whether linear, decorrelated outputs improve CRF performance

66 Feature Combinations - Results

Model                                   Phone Accuracy
CRF (phone posteriors)                  67.32%
CRF (phone linear KL)                   66.80%
CRF (phone post+linear KL)              68.13%*
CRF (phono. feature post.)              65.45%
CRF (phono. feature linear KL)          66.37%
CRF (phono. feature post+linear KL)     67.36%*

* Significantly (p<0.05) better than comparable posterior or linear KL systems

67 Viterbi Realignment Hypothesis: CRF results obtained by using only pre-defined boundaries  HMM allows “boundaries” to shift during training  Basic CRF training process does not Modify training to allow for better boundaries  Train CRF with fixed boundaries  Force align training labels using CRF  Adapt CRF weights using new boundaries

68 Conclusions Using correlated features in the CRF model did not degrade performance  Extra features improved performance for the CRF model across the board Viterbi realignment training significantly improved CRF results  Improvement did not occur when best HMM-aligned transcript was used for training

Current Work - Crandem Systems Idea – use the CRF model to generate features for an HMM  Similar to the Tandem HMM systems, replacing the neural network outputs with CRF outputs  Preliminary phone-recognition experiments show promise Preliminary attempts to incorporate CRF features at the word level are less promising 69

70 Future Work Recently implemented stochastic gradient training for CRFs  Faster training, improved results Work currently being done to extend the model to word recognition Also examining the use of transition functions that use the observation data  Crandem system does this with improved results for phone recognition