CONTEXT DEPENDENT CLASSIFICATION

CONTEXT DEPENDENT CLASSIFICATION
Remember the Bayes rule: assign x to ωi if P(ωi|x) > P(ωj|x) for all j ≠ i.
Here, the class to which a feature vector belongs depends on: its own value, the values of the other feature vectors, and an existing relation among the various classes.

This interrelation demands that the classification be performed simultaneously for all available feature vectors. Thus, we will assume that the training vectors occur in sequence, one after the other, and we will refer to them as observations.

The Context Dependent Bayesian Classifier
Let X: x1, x2, ..., xN be a sequence of N mutually dependent feature vectors, and let ωi, i = 1, 2, ..., M, be the classes.
Let Ωi: ωi1, ωi2, ..., ωiN be a sequence of classes, that is, ωik ∈ {ω1, ..., ωM}. There are M^N of those.
Thus, the Bayesian rule can equivalently be stated as: classify X to the sequence Ωi for which P(Ωi|X) > P(Ωj|X) for all j ≠ i.
Markov chain models (for class dependence): P(ωik | ωik-1, ωik-2, ..., ωi1) = P(ωik | ωik-1).

NOW remember: P(Ωi|X) = p(X|Ωi)P(Ωi)/p(X), or equivalently the decision can rest on p(X|Ωi)P(Ωi).
Assume the observations to be statistically mutually independent, with the pdf in one class independent of the others; then p(X|Ωi) = p(x1|ωi1) p(x2|ωi2) ... p(xN|ωiN).

From the above, the Bayes rule is readily seen to be equivalent to maximizing, over all class sequences Ωi,
p(X|Ωi)P(Ωi) = P(ωi1) p(x1|ωi1) ∏k=2..N P(ωik|ωik-1) p(xk|ωik),
that is, it rests on products of transition probabilities and class-conditional densities along the sequence. To find the above maximum by brute force we need O(N M^N) operations!!

The Viterbi Algorithm

Thus, each Ωi corresponds to one path through the trellis diagram. One of them is the optimum (e.g., the black one in the trellis figure). The classes along the optimal path determine the classes to which the respective observations are assigned.
To each transition corresponds a cost. For our case, d(ik, ik-1) = P(ωik|ωik-1) p(xk|ωik), with d(i1, i0) = P(ωi1) p(x1|ωi1), and the total cost of a path is the product D = ∏k=1..N d(ik, ik-1).

Equivalently, since the logarithm is monotonic, we may maximize ln D = Σk=1..N ln d(ik, ik-1).
Define the cost accumulated up to a node k of the trellis: D(ik) = Σr=1..k ln d(ir, ir-1).

Bellman’s principle now states: Dmax(ik) = max over ik-1 of [Dmax(ik-1) + ln d(ik, ik-1)], with Dmax(i0) = 0.
The optimal path terminates at the node iN* = arg max over iN of Dmax(iN); backtracking along the stored predecessors yields the optimal class sequence.
Complexity: O(N M^2).
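As an illustration of the recursion above, here is a minimal Viterbi sketch in Python/NumPy. It is not part of the original slides: the array names and interface are assumptions, and the costs are accumulated as log-probabilities so that the products d(ik, ik-1) become sums, as in the ln D formulation.

```python
import numpy as np

def viterbi(log_prior, log_trans, log_emis):
    """Most probable class sequence through the trellis.

    log_prior : (M,)   log P(omega_i) for the first observation
    log_trans : (M, M) log P(omega_j | omega_i), row = previous class
    log_emis  : (N, M) log p(x_k | omega_j) for each observation k
    Returns the optimal path (length N) and its accumulated cost.
    """
    N, M = log_emis.shape
    D = np.full((N, M), -np.inf)           # D_max(i_k): best cost ending in class i at step k
    backptr = np.zeros((N, M), dtype=int)  # predecessor that achieved the best cost

    D[0] = log_prior + log_emis[0]         # ln d(i_1, i_0) = ln [P(omega_i1) p(x_1 | omega_i1)]
    for k in range(1, N):
        # Bellman's principle: D_max(i_k) = max_{i_{k-1}} [D_max(i_{k-1}) + ln d(i_k, i_{k-1})]
        cand = D[k - 1][:, None] + log_trans + log_emis[k][None, :]
        backptr[k] = cand.argmax(axis=0)
        D[k] = cand.max(axis=0)

    # The optimal path terminates at the class with maximum final cost; backtrack.
    path = np.zeros(N, dtype=int)
    path[-1] = D[-1].argmax()
    for k in range(N - 1, 0, -1):
        path[k - 1] = backptr[k, path[k]]
    return path, D[-1, path[-1]]
```

The maximization over M predecessor classes, repeated for M current classes at each of the N steps, gives the O(N M^2) complexity quoted above.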

Channel Equalization
The problem: given a sequence of received, channel-distorted and noisy samples, decide upon the sequence of transmitted information bits Ik. Each received sample depends on a number of successive transmitted bits, so the decision about Ik is context dependent.

Example: classification is based on the vector xk = [xk, xk-1]^T of two successive received samples; since each received sample depends on the current and the previous transmitted bit, in xk three input symbols are involved: Ik, Ik-1, Ik-2.

Each class ωi, i = 1, ..., 8, corresponds to one of the 8 combinations of the triple (Ik, Ik-1, Ik-2), together with the corresponding noise-free values of xk and xk-1 (values such as 0.5, 1 and 1.5 appear among them).

Not all transitions are allowed: the triple (Ik, Ik-1, Ik-2) at time k and the triple (Ik+1, Ik, Ik-1) at time k+1 must share the bits Ik and Ik-1, so from each class only two successor classes can be reached, one for Ik+1 = 0 and one for Ik+1 = 1.

In this context, the classes ωi are related to states. Given the current state and the transmitted bit Ik, the next state is determined. The probabilities P(ωi|ωj) define the state dependence model, and a transition cost is assigned to every allowable transition.

Assume: the noise is white and Gaussian, and an estimate of the channel impulse response is available; then the cost of an allowable transition can be taken as the squared Euclidean distance between the received vector and its noise-free reconstruction for the corresponding class.
The states are determined by the values of the binary variables Ik-1, ..., Ik-n+1. For n = 3, there will be 4 states.
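As a rough sketch of how this trellis could be set up under the above assumptions, the Python fragment below enumerates the states and the allowed transitions for n = 3 and uses the squared Euclidean distance to the noise-free reconstruction as the transition cost. The channel estimate h and all names are illustrative assumptions, not values given in the slides.

```python
import itertools

def build_trellis(n=3):
    """States are the 2**(n-1) values of (I_{k-1}, ..., I_{k-n+1})."""
    states = list(itertools.product((0, 1), repeat=n - 1))   # 4 states for n = 3
    allowed = {}
    for s in states:
        for new_bit in (0, 1):                # transmitted bit I_k
            s_next = (new_bit,) + s[:-1]      # shift I_k into the state register
            allowed.setdefault(s, []).append((new_bit, s_next))
    return states, allowed

def transition_cost(x_k, new_bit, state, h):
    """Squared Euclidean distance between the received sample x_k and its
    noise-free reconstruction sum_i h[i] * I_{k-i} (h: channel estimate)."""
    bits = (new_bit,) + state                 # (I_k, I_{k-1}, ..., I_{k-n+1})
    x_hat = sum(hi * b for hi, b in zip(h, bits))
    return (x_k - x_hat) ** 2

states, allowed = build_trellis(n=3)
print(len(states), "states,", sum(len(v) for v in allowed.values()), "allowed transitions")
```

Running the Viterbi recursion of the previous sketch over this trellis, with these costs minimized instead of log-probabilities maximized, gives the maximum-likelihood estimate of the transmitted bit sequence.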

Hidden Markov Models
In the channel equalization problem, the states are observable and can be “learned” during the training period. Now we shall assume that the states are not observable and can only be inferred from the training data.
Applications: speech and music recognition, OCR, blind equalization, bioinformatics.

An HMM is a stochastic finite-state automaton that generates the observation sequence x1, x2, ..., xN. We assume that the observation sequence is produced as a result of successive transitions between states and that, upon arrival at a state, an observation is emitted according to a probability distribution associated with that state.

This type of modeling is used for nonstationary stochastic processes that undergo distinct transitions among a set of different stationary processes.

Examples of HMMs. The single-coin case: assume a coin that is tossed behind a curtain; all that is available to us is the outcome, i.e., H or T. Assume the two states to be S = 1 for H and S = 2 for T. This is also an example of a random experiment with observable states. The model is characterized by a single parameter, e.g., P(H). Note that P(1|1) = P(H) and P(2|1) = P(T) = 1 − P(H).

The two-coins case: we observe a sequence of H or T, but we do not know which coin was tossed each time. Identify one state for each coin. This is an example where the states are not observable: H or T can be emitted from either state. The model depends on four parameters: P1(H), P2(H), P(1|1), P(2|2).

The three-coins case is analogous, with one state per coin. Note that in all previous examples, specifying the model is equivalent to knowing: the probability of each observation (H, T) being emitted from each state, and the transition probabilities among the states, P(i|j).

A general HMM is characterized by the following set of parameters: K, the number of states; the state transition probabilities P(j|i), i, j = 1, ..., K; the emission probabilities p(x|i) of the observations in each state; and the initial-state probabilities P(i), i = 1, ..., K.
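To make this parameter set concrete, here is a minimal container for a discrete-observation HMM, instantiated for the two-coins example above. This is only a sketch for these notes; all numerical values are arbitrary placeholders, not values given in the slides.

```python
import numpy as np

class DiscreteHMM:
    """K states, transition matrix A (K x K), emission matrix B (K x L) over
    L discrete symbols, and initial-state probabilities pi (length K)."""
    def __init__(self, A, B, pi):
        self.A = np.asarray(A, dtype=float)    # A[i, j] = P(j | i)
        self.B = np.asarray(B, dtype=float)    # B[i, v] = P(symbol v | state i)
        self.pi = np.asarray(pi, dtype=float)  # pi[i] = P(initial state i)
        self.K = self.A.shape[0]

# Two-coins example: states = coins, symbols H -> 0, T -> 1.
# P1(H), P2(H), P(1|1), P(2|2) below are illustrative numbers only.
two_coins = DiscreteHMM(A=[[0.7, 0.3], [0.4, 0.6]],
                        B=[[0.9, 0.1], [0.2, 0.8]],
                        pi=[0.5, 0.5])
```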

That is, the model is compactly described by the set S of all the above parameters. What is the problem in pattern recognition? Given M reference patterns, each described by an HMM, estimate the parameters S for each of them (training). Given an unknown pattern, find which one of the M known patterns it matches best (recognition).

Recognition: any-path method. Assume the M models S to be known (M classes), and let a sequence of observations X: x1, x2, ..., xN be given. Assume the observations to be emissions upon arrival at successive states. Decide in favor of the model S* (from the M available) according to the Bayes rule; for equiprobable patterns this amounts to S* = arg max over S of p(X|S).

For each model S there is more than one possible set of successive state transitions Ωi, each occurring with probability P(Ωi|S). Thus p(X|S) = Σi p(X|Ωi, S) P(Ωi|S).
For the efficient computation of the above, define the forward variable α(ik) = p(x1, ..., xk, ik | S). It obeys the recursion α(ik+1) = Σ over ik of α(ik) P(ik+1|ik) p(xk+1|ik+1), where α(ik) carries the history up to step k and the factor P(ik+1|ik) p(xk+1|ik+1) is the local activity at the new step.

Observe that p(X|S) = Σ over iN of α(iN). Compute this for each of the M models S and decide in favor of the largest.
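A minimal sketch of the α recursion and this final summation, using the DiscreteHMM container from the earlier sketch (again an assumption of these notes, not of the slides). For long sequences a scaled or log-domain implementation would be used to avoid underflow.

```python
import numpy as np

def forward_probability(hmm, obs):
    """Any-path score p(X | S) = sum over i_N of alpha(i_N)."""
    obs = np.asarray(obs)
    alpha = hmm.pi * hmm.B[:, obs[0]]          # alpha(i_1) = P(i) P(x_1 | i)
    for x in obs[1:]:
        # history alpha(i_k) times local activity P(i_{k+1} | i_k) P(x_{k+1} | i_{k+1})
        alpha = (alpha @ hmm.A) * hmm.B[:, x]
    return alpha.sum()

# Recognition: score the observation sequence under each of the M models and
# decide in favor of the largest (equiprobable patterns assumed), e.g.:
# best_model = max(models, key=lambda m: forward_probability(m, obs))
```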

Some more quantities: the backward variable β(ik) = p(xk+1, ..., xN | ik, S), computed by the backward recursion β(ik) = Σ over ik+1 of β(ik+1) P(ik+1|ik) p(xk+1|ik+1), with β(iN) = 1.

Training
The philosophy: given a training set X known to belong to the specific model, estimate the unknown parameters of S so that the output of the model, e.g. p(X|S) = Σ over iN of α(iN), is maximized. This is an ML estimation problem with missing data (the unobserved state sequence).

Assumption: the data x are discrete, so the emissions P(x|i) are probabilities of symbols.
Definitions: ξk(i, j), the probability of being at state i at time k and at state j at time k+1, given X and S; and γk(i), the probability of being at state i at time k, given X and S. Both can be computed from the forward and backward variables α and β.

The Algorithm:
Step 1: Choose initial conditions for all the unknown parameters.
Step 2: From the current estimates of the model parameters, compute ξk(i, j) and γk(i) and reestimate the new model S̄ from the reestimation formulas, e.g. P̄(j|i) = Σk ξk(i, j) / Σk γk(i), and similarly for the emission and initial-state probabilities.

Step 3: Compute P(X|S̄). If P(X|S̄) − P(X|S) > ε, set S = S̄ and go to step 2; otherwise stop.
Remarks: Each iteration improves the model, i.e., P(X|S̄) ≥ P(X|S). The algorithm converges to a maximum (local or global). The algorithm is an implementation of the EM algorithm.
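A compact sketch of one reestimation (Baum-Welch) iteration for the discrete case, following the γ and ξ quantities defined above. It reuses the DiscreteHMM container sketched earlier; scaling is omitted for clarity, and all names are illustrative assumptions of these notes.

```python
import numpy as np

def baum_welch_step(hmm, obs):
    """One EM iteration: returns updated (A, B, pi) that do not decrease p(X|S)."""
    obs = np.asarray(obs)
    N, K = len(obs), hmm.K

    # Forward and backward variables (unscaled, for clarity only).
    alpha = np.zeros((N, K)); beta = np.zeros((N, K))
    alpha[0] = hmm.pi * hmm.B[:, obs[0]]
    for k in range(1, N):
        alpha[k] = (alpha[k - 1] @ hmm.A) * hmm.B[:, obs[k]]
    beta[-1] = 1.0
    for k in range(N - 2, -1, -1):
        beta[k] = hmm.A @ (hmm.B[:, obs[k + 1]] * beta[k + 1])

    px = alpha[-1].sum()                       # p(X | S)
    gamma = alpha * beta / px                  # gamma_k(i) = P(state i at time k | X, S)
    # xi_k(i, j) = alpha_k(i) P(j|i) P(x_{k+1}|j) beta_{k+1}(j) / p(X|S)
    xi = (alpha[:-1, :, None] * hmm.A[None, :, :] *
          (hmm.B[:, obs[1:]].T * beta[1:])[:, None, :]) / px

    # Reestimation formulas.
    A_new = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    B_new = np.zeros_like(hmm.B)
    for v in range(hmm.B.shape[1]):
        B_new[:, v] = gamma[obs == v].sum(axis=0) / gamma.sum(axis=0)
    pi_new = gamma[0]
    return A_new, B_new, pi_new
```

Iterating such steps until p(X|S) stops improving by more than ε reproduces the stopping rule of Step 3.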