A Left-to-Right HDP-HMM with HDPM Emissions Amir Harati, Joseph Picone and Marc Sobel Institute for Signal and Information Processing Temple University.

Presentation transcript:

A Left-to-Right HDP-HMM with HDPM Emissions. Amir Harati, Joseph Picone and Marc Sobel, Institute for Signal and Information Processing, Temple University, Philadelphia, Pennsylvania, USA.

Abstract
Nonparametric Bayesian (NPB) methods are a popular alternative to parametric approaches: a prior is placed over the model complexity (or model structure) itself. The Hierarchical Dirichlet Process hidden Markov model (HDP-HMM) is the nonparametric Bayesian equivalent of an HMM. The HDP-HMM is restricted to an ergodic topology and uses a Dirichlet Process Mixture (DPM) to achieve a mixture distribution-like model. A new type of HDP-HMM is introduced that:
- preserves the useful left-to-right properties of a conventional HMM, yet still supports automated learning of structure and complexity from data;
- uses HDPM emissions, which allow a model to share data points among different states.
This new model produces better likelihoods relative to the original HDP-HMM and has much better scalability properties.

Outline
- Nonparametric Bayesian Models
- Nonparametric Hidden Markov Models
- Acoustic Modeling in Speech Recognition
- Left-to-Right HDP-HMM Models
- HDP-HMM with HDP Emissions
- Results
- Summary of Contributions

Nonparametric Bayesian Models
Parametric vs. nonparametric: problems with model selection / averaging:
1. Computational cost
2. Discrete optimization
3. Choice of criteria
Nonparametric Bayesian promises:
1. Inferring the model from the data
2. Immunity to over-fitting
3. A well-defined mathematical framework

Dirichlet Distributions – A Popular Prior for Bayesian Models
Functional form: Dir(q | α) = (Γ(α₀) / ∏ᵢ Γ(αᵢ)) ∏ᵢ qᵢ^(αᵢ − 1), where α = (α₁, …, α_k) and α₀ = Σᵢ αᵢ.
- q ∈ ℝᵏ: a probability mass function (pmf).
- α: a concentration parameter.
- α can be interpreted as pseudo-observations: in contrast to real observations, pseudo-observations reflect our prior beliefs about the data.
- The total number of pseudo-observations is α₀.
The Dirichlet distribution is a conjugate prior for the multinomial distribution.
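As a minimal illustration of this conjugacy and the pseudo-observation interpretation (this sketch is mine, not part of the slides; the counts and prior values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior: Dirichlet pseudo-counts for 3 categories (illustrative values).
alpha = np.array([2.0, 2.0, 2.0])

# Observed multinomial data: counts per category (also illustrative).
counts = np.array([10, 3, 1])

# Conjugacy: the posterior is again Dirichlet, with parameters equal to
# the prior pseudo-counts plus the observed counts.
posterior_alpha = alpha + counts

# Draw a few pmfs q from the prior and the posterior to see how the
# observed data shifts the probability mass.
prior_draws = rng.dirichlet(alpha, size=3)
posterior_draws = rng.dirichlet(posterior_alpha, size=3)

print("posterior mean pmf:", posterior_alpha / posterior_alpha.sum())
```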

Dirichlet Distributions – Examples
[Figure: example Dirichlet distributions, illustrating visually what the definition on the previous slide means.]

Dirichlet Processes – An Infinite Sequence of Random Variables
- A Dirichlet process (DP) can be viewed as a Dirichlet distribution split infinitely many times (e.g., q₁ and q₂ are each split into q₁₁, q₁₂ and q₂₁, q₂₂, and so on).
- The result is a discrete distribution with an infinite number of atoms.
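As a concrete sketch (mine, not from the slides), the stick-breaking construction below draws an approximate DP sample; the truncation level and the Gaussian base measure are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_dp(concentration, base_sampler, truncation=100):
    """Approximate draw from a Dirichlet process via truncated stick-breaking.

    Returns atom locations and their weights (the weights sum to ~1).
    """
    # Stick-breaking proportions v_k ~ Beta(1, concentration).
    v = rng.beta(1.0, concentration, size=truncation)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    weights = v * remaining
    atoms = base_sampler(truncation)  # atoms drawn i.i.d. from the base measure
    return atoms, weights

atoms, weights = sample_dp(1.0, lambda n: rng.normal(0.0, 1.0, size=n))
print("atoms needed to cover 99% of the mass:",
      np.sum(np.cumsum(np.sort(weights)[::-1]) < 0.99) + 1)
```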

Hierarchical Dirichlet Process – Nonparametric Clustering of Grouped Data
- Consider data organized into several groups (e.g., documents).
- A DP can be used to define a mixture over each group, but each such mixture is independent of the others.
- Sometimes we want to share components among the mixtures (e.g., to share topics among documents).
- The Hierarchical Dirichlet Process (HDP) achieves this by drawing each group-level DP from a shared, discrete base distribution that is itself drawn from a DP (see the sketch below).
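A minimal sketch of the sharing mechanism (my illustration, not from the slides): global stick-breaking weights are drawn once over a shared set of atoms, and each group draws its own weights over those same atoms; the truncated Dirichlet used here is only a finite approximation of the group-level DPs:

```python
import numpy as np

rng = np.random.default_rng(2)

def gem(gamma, truncation):
    """Truncated stick-breaking weights (GEM distribution)."""
    v = rng.beta(1.0, gamma, size=truncation)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * remaining

truncation, gamma, alpha, n_groups = 50, 5.0, 1.0, 3

beta = gem(gamma, truncation)                  # global weights over shared atoms
atoms = rng.normal(0.0, 3.0, size=truncation)  # shared atom locations (e.g., topic means)

# Each group draws its own weights over the SAME atoms: pi_j ~ DP(alpha, beta),
# approximated here by a finite Dirichlet with parameter vector alpha * beta.
group_weights = [rng.dirichlet(alpha * beta + 1e-12) for _ in range(n_groups)]

for j, pi in enumerate(group_weights):
    print(f"group {j}: heaviest shared atoms ->", np.argsort(pi)[-3:][::-1])
```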

Hidden Markov Models
Markov chain:
- A memoryless stochastic process.
- The state is observed at each time t.
- The probability of being in any state at time t+1 is a function only of the state at time t.
Hidden Markov Models (HMMs):
- A Markov chain in which the states are not observed.
- The observed sequence is the output of a probability distribution associated with each state.
- A model is characterized by: the number of states; the transition probabilities between these states; and the emission probability distribution for each state.
- Expectation-Maximization (EM) is used for training.
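For concreteness (my own sketch, not part of the slides), the forward algorithm below computes the likelihood of an observation sequence for a tiny discrete-emission HMM; all parameter values are made up:

```python
import numpy as np

# Toy 2-state HMM with 3 discrete output symbols (illustrative parameters only).
pi = np.array([0.6, 0.4])          # initial state probabilities
A = np.array([[0.7, 0.3],          # A[i, j] = P(next state j | current state i)
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],     # B[i, k] = P(symbol k | state i)
              [0.1, 0.3, 0.6]])

def forward_likelihood(obs):
    """P(observation sequence | model) via the forward recursion."""
    alpha = pi * B[:, obs[0]]              # initialization
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]      # induction
    return alpha.sum()                     # termination

print(forward_likelihood([0, 1, 2, 2, 1]))
```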

Hierarchical Dirichlet Process-Based HMM (HDP-HMM)
- Definition: z_t, s_t and x_t represent a state, a mixture component and an observation, respectively.
- Inference algorithms are used to infer the values of the latent variables (z_t and s_t); a variation of the forward-backward procedure is used for training.
- K_z: maximum number of states. K_s: maximum number of components per mixture.
[The graphical model was shown as a figure on this slide.]

The Acoustic Modeling Problem in Speech Recognition
- The goal of speech recognition is to map the acoustic data into word sequences.
- P(W|A) is the probability of a particular word sequence given the acoustic observations.
- P(W) is the language model.
- P(A) is the probability of the observed acoustic data; it is constant over word sequences and usually can be ignored.
- P(A|W) is the acoustic model.
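The slide's equation did not survive the transcript; the standard Bayes decomposition implied by the definitions above is:

```latex
\hat{W} \;=\; \arg\max_{W} P(W \mid A)
        \;=\; \arg\max_{W} \frac{P(A \mid W)\,P(W)}{P(A)}
        \;=\; \arg\max_{W} P(A \mid W)\,P(W)
```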

Left-to-Right HDP-HMM with HDP Emissions
- In many pattern recognition applications involving temporal structure, such as speech recognition, a left-to-right topology is used to model the temporal order of the signal.
- In speech recognition, all acoustic units use the same topology and the same number of mixture components; i.e., the complexity is fixed for all models.
- Given more data, a model's structure (e.g., its topology) remains the same and only the parameter values change.
- The amount of data associated with each model varies, which implies some models are overtrained while others are undertrained.
- Because of the lack of hierarchical structure, techniques for extending the model tend to be heuristic. Example: gender-specific models are trained as separate models; decision trees are a counter-example.
A left-to-right transition structure is sketched below.
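To make the topology concrete (an illustration of mine, not from the slides), a left-to-right HMM constrains the transition matrix so that a state can only persist or move forward; a simple 4-state example:

```python
import numpy as np

n_states = 4
A = np.zeros((n_states, n_states))

# Left-to-right constraint: from state i you may only stay in i or advance to i+1.
for i in range(n_states - 1):
    A[i, i] = 0.7        # self-transition (illustrative value)
    A[i, i + 1] = 0.3    # forward transition
A[-1, -1] = 1.0          # last emitting state (a non-emitting exit would follow it)

print(A)
# Entries below the diagonal are structurally zero; that is what distinguishes
# this topology from the ergodic topology used by the original HDP-HMM.
```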

Relevant Work
- Bourlard (1993) and others proposed replacing Gaussian mixture models (GMMs) with a neural network based on a multilayer perceptron (MLP). It was shown that MLPs generate reasonable estimates of the posterior distribution of an output class conditioned on the input patterns. This hybrid HMM-MLP system produced small gains over traditional HMMs.
- Lefèvre (2003) and Shang (2009) proposed approaches in which nonparametric density estimators (e.g., kernel methods) replaced GMMs.
- Henter et al. (2012) introduced a Gaussian process dynamical model (GPDM) for speech synthesis.
- Each of these approaches models the emission distributions with a nonparametric method, but none of them addresses the model topology problem.

New Features
- The HDP-HMM is an ergodic model; we extend the definition to a left-to-right topology.
- The HDP-HMM uses a DPM to model the emissions for each state; our model uses an HDP, which allows the components of the emission distributions to be shared within an HMM.
- This is particularly important for left-to-right models, where the number of discovered states is usually larger than for an ergodic HMM, so fewer data points are associated with each state.
- Non-emitting "initial" and "final" states are included in the final definition.
[Figure: on the left, an ergodic HMM with DPM emissions; on the right, a left-to-right HMM with HDPM emissions. Each color is a mixture component, the numbers represent data points, and the total length of the output boxes is always 1.]

Left-to-Right HDP-HMM with HDP Emissions (cont.)
- A new inference algorithm based on a block sampler has been derived.
- The new model is more accurate and does not suffer from some of the intrinsic problems of parametric HMMs (e.g., overtrained and undertrained models).
- The hierarchical definition of the model within the Bayesian framework makes it relatively easy to solve problems such as sharing data among related models (e.g., models of the same phoneme for different clusters of speakers).
- We have shown that the computation time for an HDP-HMM with HDP emissions is proportional to K_s, while for an HDP-HMM with DPM emissions it is proportional to K_s * K_z. This means the HDP-HMM/HDPM is more scalable as the maximum bound on model complexity increases.

Mathematical Definition
[The generative definition and the graphical model were presented as equations and a figure on this slide; neither survived the transcript.]
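Purely as a reference point, the standard HDP-HMM with HDP emissions (in the spirit of Teh et al., 2006 and Fox et al., 2011) has a generative process along the following lines, using the z_t, s_t, x_t notation defined earlier; the symbols chosen for the concentration parameters are my own, and the left-to-right model in this work additionally constrains the transition distributions pi_j:

```latex
\begin{aligned}
\beta &\sim \mathrm{GEM}(\gamma), & \pi_j &\sim \mathrm{DP}(\alpha, \beta)
  && \text{(transition distribution out of state } j\text{)} \\
\sigma &\sim \mathrm{GEM}(\eta), & \psi_j &\sim \mathrm{DP}(\kappa, \sigma)
  && \text{(emission mixture weights over shared components)} \\
\theta_k &\sim H, & z_t &\sim \pi_{z_{t-1}}, \quad s_t \sim \psi_{z_t}, \quad x_t \sim F(\theta_{s_t})
\end{aligned}
```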

Non-emitting States
- The inference algorithm estimates the probability of self-transitions (P1) and transitions to other emitting states (P2), but each state can also transition to a non-emitting state (P3).
- Since P1 + P2 + P3 = 1, we can re-estimate P1 and P3 while fixing P2.
- This is similar to tossing a coin until the first head is obtained, and can be modeled as a geometric distribution.
- A maximum likelihood (ML) estimate can be obtained, where M is the number of examples in which state i is the last state of the model and k_i is the number of self-transitions for state i (see the reconstruction below).
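The slide's formula was not preserved in the transcript. Assuming the geometric model just described (k_i self-transitions observed across M examples that terminate in state i), the ML estimate would take the following form, rescaled by (1 - P2) so the three probabilities still sum to one; this reconstruction is my own:

```latex
\hat{p}_i \;=\; \frac{k_i}{k_i + M}, \qquad
P_1 \;=\; (1 - P_2)\,\hat{p}_i, \qquad
P_3 \;=\; (1 - P_2)\,(1 - \hat{p}_i)
```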

Results – Proof of Concept (Toy Problem)
- For this toy problem, data is generated from a left-to-right HMM with 1 to 3 mixture components per state.
- The training data is used to train the different models, and a separate holdout set is used to assess them.

Results – Computation Time and Scalability
- HDP-HMM/DPM computation time is proportional to K_s * K_z.
- HDP-HMM/HDPM inference time is proportional to K_s: the mixture components are shared among all states, so the actual number of computations is proportional to K_s.

Results – TIMIT Classification
- The data used in this illustration was extracted from the TIMIT Corpus, for which a phoneme-level transcription is available.
- MFCC features plus their first and second derivatives are used (39 dimensions).
- A state-of-the-art parametric HMM/GMM system is used for comparison.
- The classification results show around a 15% relative improvement.

A Comparison of Classification Error Rates
Model: Error Rate
HMM/GMM (10 components): 27.8%
LR-HDP-HMM/GMM (1 component): 26.7%
LR-HDP-HMM: 24.1%

Results – TIMIT Classification (Model Structures)
Automatically derived model structures (without the first and last non-emitting states) obtained with the LR HDP-HMM for:
(a) /aa/ with 175 examples
(b) /sh/ with 100 examples
(c) /aa/ with 2,256 examples
(d) /sh/ with 1,317 examples
Notice that the complexity of the model changes when it is trained with more data. Also, parallel paths in the models correspond to different modalities in the speaker space.

Summary
Performance results:
- HDPM emissions can replace DPM emissions in most applications (for both left-to-right and ergodic models) without losing performance, while the scalability of the model improves significantly.
- The LR HDP-HMM can learn multi-modality in the data. For example, for a single phoneme it can learn parallel paths corresponding to different types of speakers, while at the same time data can be shared among states if HDPM emissions are used.
Three theoretical contributions:
- A left-to-right HDP-HMM model.
- HDP emissions for the HDP-HMM model.
- Non-emitting states in the model.
Future work:
- Compared to the EM algorithm, the inference algorithm for HDP-HMM models is computationally expensive; one future direction is to investigate variational inference approaches for these models.
- Another direction is to add a level of hierarchy to the model to share data among several HDP-HMM models.

References
1. Bourlard, H., & Morgan, N. (1993). Connectionist Speech Recognition: A Hybrid Approach. Springer.
2. Fox, E., Sudderth, E., Jordan, M., & Willsky, A. (2011). A Sticky HDP-HMM with Application to Speaker Diarization. The Annals of Applied Statistics, 5(2A), 1020–.
3. Harati, A., Picone, J., & Sobel, M. (2012). Applications of Dirichlet Process Mixtures to Speaker Adaptation. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 4321–4324). Kyoto, Japan.
4. Harati, A., Picone, J., & Sobel, M. (2013). Speech Segmentation Using Hierarchical Dirichlet Processes. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (p. TBD). Vancouver, Canada.
5. Lefèvre, F. (2003). Non-parametric probability estimation for HMM-based automatic speech recognition. Computer Speech & Language, 17(2-3), 113–.
6. Rabiner, L. (1989). A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proceedings of the IEEE, 77(2), 879–.
7. Sethuraman, J. (1994). A constructive definition of Dirichlet priors. Statistica Sinica, 639–.
8. Shang, L. (2009). Nonparametric Discriminant HMM and Application to Facial Expression Recognition. IEEE Conference on Computer Vision and Pattern Recognition (pp. 2090–2096). Miami, FL, USA.
9. Shin, W., Lee, B.-S., Lee, Y.-K., & Lee, J.-S. (2000). Speech/non-speech classification using multiple features for robust endpoint detection. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 1899–1402). Istanbul, Turkey.
10. Suchard, M. A., Wang, Q., Chan, C., Frelinger, J., West, M., & Cron, A. (2010). Understanding GPU Programming for Statistical Computation: Studies in Massively Parallel Massive Mixtures. Journal of Computational and Graphical Statistics, 19(2), 419–.
11. Teh, Y., Jordan, M., Beal, M., & Blei, D. (2006). Hierarchical Dirichlet Processes. Journal of the American Statistical Association, 101(47), 1566–1581.

Biography
Amir Harati is a PhD candidate in the Department of Electrical and Computer Engineering at Temple University. He received his bachelor's degree in Electrical and Computer Engineering from Tabriz University in 2004 and his master's degree in Electrical and Computer Engineering from K.N. Toosi University. He has also worked as a signal processing researcher for Bina-Pardaz Ltd. The focus of his research at Temple University is the application of nonparametric Bayesian methods to acoustic modeling and speech recognition. He is also involved in a project, in collaboration with Temple Hospital, to automatically interpret EEG signals.