Linear Predictive Analysis. Presenter: 虞台文

Contents
– Introduction
– Basic Principles of Linear Predictive Analysis
– The Autocorrelation Method
– The Covariance Method
– More on the above Methods
– Solution of the LPC Equations
– Lattice Formulations

Linear Predictive Analysis Introduction

Linear Predictive Analysis
A powerful speech analysis technique, particularly for estimating speech parameters:
– Pitch
– Formants
– Spectra
– Vocal tract area functions
It is also especially useful for compression. Linear predictive analysis techniques are often referred to as linear predictive coding, or LPC.

Basic Idea. A speech sample can be approximated as a linear combination of past speech samples. This prediction model corresponds to an all-zero model whose inverse matches the vocal tract model we developed.

LPC vs. System Identification. Linear prediction has long been used in the areas of control and information theory under the names system estimation and system identification. Using LPC methods, the resulting system is modeled as an all-pole linear system.

LPC in Speech Processing. The goal is to model the speech waveform. There are several different formulations; the differences among them are often matters of philosophy or of how the problem is viewed, and they lead to essentially the same result.

Formulations
– The covariance method
– The autocorrelation formulation
– The lattice method
– The inverse filter formulation
– The spectral estimation formulation
– The maximum likelihood estimation
– The inner product formulation

Linear Predictive Analysis Basic Principles of Linear Predictive Analysis

Speech Production Model. [Block diagram: an impulse train generator (voiced) or a random noise generator (unvoiced) produces the excitation u(n), which is scaled by the gain G and passed through a time-varying digital filter, controlled by the vocal tract parameters, to produce s(n).]

Speech Production Model. [Same block diagram; the time-varying digital filter is the vocal tract transfer function H(z), so s(n) is produced by filtering G·u(n) through H(z).]
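To make the production model concrete, here is a minimal sketch in Python/NumPy of driving an all-pole filter with an excitation; the function name synthesize, the example coefficients, and the impulse-train excitation are illustrative assumptions, not part of the slides.

```python
import numpy as np
from scipy.signal import lfilter

def synthesize(alpha, G, u):
    """Filter the excitation u(n) through H(z) = G / (1 - sum_k alpha_k z^-k)."""
    a = np.concatenate(([1.0], -np.asarray(alpha, dtype=float)))  # denominator of H(z)
    return lfilter([G], a, u)

# Example: a sparse impulse train as a crude voiced excitation.
u = np.zeros(200)
u[::50] = 1.0
s = synthesize(alpha=[1.3, -0.9], G=0.5, u=u)
```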

Linear Prediction Model. Linear prediction: s̃(n) = Σ_{k=1}^{p} a_k s(n−k). Error compensation: s(n) = Σ_{k=1}^{p} a_k s(n−k) + e(n), where e(n) is the prediction error.
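As a small check of the prediction and error-compensation relations above, the sketch below computes e(n) for a given coefficient set; the helper name prediction_error is assumed, and samples before n = 0 are treated as zero.

```python
import numpy as np

def prediction_error(s, a):
    """e(n) = s(n) - sum_{k=1}^{p} a_k s(n-k), taking s(n) = 0 for n < 0."""
    s = np.asarray(s, dtype=float)
    e = s.copy()
    for k, ak in enumerate(a, start=1):
        e[k:] -= ak * s[:-k]   # subtract the k-th term of the predictor
    return e
```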

Speech Production vs. Linear Prediction. Speech production: s(n) = Σ_{k=1}^{p} α_k s(n−k) + G u(n) (vocal tract + excitation). Linear prediction: s(n) = Σ_{k=1}^{p} a_k s(n−k) + e(n) (linear predictor + error). The two coincide when a_k = α_k.

Prediction Error Filter. Linear prediction: e(n) = s(n) − Σ_{k=1}^{p} a_k s(n−k), i.e., the error is obtained by passing s(n) through A(z) = 1 − Σ_{k=1}^{p} a_k z^−k.

Prediction Error Filter. [Block diagram: s(n) → A(z) → e(n).]

[Block diagram: s(n) → A(z) → e(n).] Goal: minimize E = Σ_n e(n)², the total squared prediction error.

Prediction Error Filter. Goal: minimize E = Σ_n [s(n) − Σ_{k=1}^{p} a_k s(n−k)]². Expanding the square produces terms of the form c_ij = Σ_n s(n−i) s(n−j). Suppose that the c_ij's can be estimated from the speech samples. Our goal now is to find the a_k's that minimize the sum of squared errors.

Prediction Error Filter. Goal: minimize E. Fact: c_ij = c_ji. Set ∂E/∂a_k = 0 for k = 1, …, p (the double sum contributes through the terms with i = k and those with j = k, which are equal by symmetry) and solve the resulting equations.

Prediction Error Filter. Using c_ij = c_ji, setting ∂E/∂a_k = 0 gives one equation for each k. k = 1: Σ_{i=1}^{p} a_i c_i1 = c_01. k = 2: Σ_{i=1}^{p} a_i c_i2 = c_02. … k = p: Σ_{i=1}^{p} a_i c_ip = c_0p.

Prediction Error Filter. Collecting the p equations in matrix form: C a = c, where C = [c_ik] is the p×p symmetric coefficient matrix, a = (a_1, …, a_p)ᵀ, and c = (c_01, …, c_0p)ᵀ.

Prediction Error Filter. Fact: c_ij = c_ji. Remember this equation: Σ_{i=1}^{p} a_i c_ik = c_0k, for k = 1, …, p.
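In code, once the c_ij's have been estimated (by whichever method), the equation above is just a p-by-p linear system; the sketch below assumes the values are collected in a (p+1)-by-(p+1) symmetric NumPy array C with C[i, j] = c_ij.

```python
import numpy as np

def solve_normal_equations(C):
    """Solve sum_{i=1}^{p} a_i c_ik = c_0k for k = 1..p, given C[i, j] = c_ij, i, j = 0..p."""
    a = np.linalg.solve(C[1:, 1:], C[1:, 0])  # C is symmetric, so C[1:, 0] = (c_01, ..., c_0p)
    return a                                  # a[k-1] holds a_k
```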

Prediction Error Filter. Fact: c_ij = c_ji. Such a formulation is in fact unrealistic. Why?

Error Energy. At the optimum, ∂E/∂a_i = 0 for all i ≠ 0, so the minimum error energy reduces to E = c_00 − Σ_{k=1}^{p} a_k c_0k.

Short-Time Analysis. Original goal: minimize E = Σ_n e(n)² over the whole signal. The vocal tract is a slowly time-varying system, so minimizing the error energy over the whole speech signal is unreasonable.

Short-Time Analysis. Original goal: minimize the error energy over the whole signal. New goal: minimize the short-time error energy E_n = Σ_m e_n(m)², where e_n(m) is the prediction error computed on the speech segment selected around time n.

Short-Time Analysis. New goal: minimize E_n = Σ_m e_n(m)² = Σ_m [s_n(m) − Σ_{k=1}^{p} a_k s_n(m−k)]², where s_n(m) is the speech segment selected around time n.

Linear Predictive Analysis The Autocorrelation Method

The Autocorrelation Method. [Diagram: the segment s_n(m) = s(n+m) w(m), 0 ≤ m ≤ N−1, is obtained by windowing the signal at time n and is zero outside the window.] Usually, we use a Hamming window.
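A minimal sketch of the framing and windowing step; the frame length N and the helper name windowed_frame are analysis choices assumed here, not fixed by the method.

```python
import numpy as np

def windowed_frame(s, n, N):
    """Select N samples starting at time n and taper them with a Hamming window."""
    frame = np.asarray(s[n:n + N], dtype=float)
    return frame * np.hamming(len(frame))
```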

The Autocorrelation Method. Because the windowed segment is zero outside 0 ≤ m ≤ N−1, the error energy involves only a finite range (the error can be nonzero only for 0 ≤ m ≤ N−1+p). So the original formulation can be directly applied to find the prediction coefficients.

The Autocorrelation Method. What properties do the c_ij's have? For convenience, I'll drop the sub/superscripts n in the following discussion.

Properties of the c_ij's. Property 1: c_ij = c_ji. Property 2: c_ij = Σ_m s(m) s(m + |i − j|) = r(|i − j|); its value depends only on the difference |i − j|.

The Equations for the Autocorrelation Method. The normal equations become Σ_{k=1}^{p} a_k r(|i − k|) = r(i), for i = 1, …, p.

A Toeplitz Matrix. The coefficient matrix [r(|i − k|)] has constant entries along each diagonal, i.e., it is symmetric and Toeplitz, which permits an efficient recursive solution.

The Error Energy. For the autocorrelation method, the minimum error energy is E_n = r(0) − Σ_{k=1}^{p} a_k r(k).
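Putting the pieces together, here is a sketch of the autocorrelation method for one windowed frame; it leans on scipy.linalg.solve_toeplitz to exploit the Toeplitz structure (Durbin's recursion, covered later, is the classical alternative), and the name lpc_autocorrelation is an assumption.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_autocorrelation(frame, p):
    """Solve sum_k a_k r(|i-k|) = r(i), i = 1..p; return (a, minimum error energy)."""
    N = len(frame)
    r = np.array([np.dot(frame[:N - i], frame[i:]) for i in range(p + 1)])  # r(0), ..., r(p)
    a = solve_toeplitz(r[:p], r[1:p + 1])   # first column of the Toeplitz matrix is r(0)..r(p-1)
    E = r[0] - np.dot(a, r[1:p + 1])        # E_n = r(0) - sum_k a_k r(k)
    return a, E
```

A usage example, reusing the windowing sketch above: a, E = lpc_autocorrelation(windowed_frame(s, n, 240), p=10).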

Linear Predictive Analysis The Covariance Method

The Covariance Method. Goal: minimize E_n = Σ_{m=0}^{N−1} e_n(m)². [Diagram: the analysis interval 0 ≤ m ≤ N−1 within the signal.]

The Covariance Method. Goal: minimize E_n = Σ_{m=0}^{N−1} e_n(m)². The range over which the error energy is evaluated is different from that of the autocorrelation method: the frame boundaries are fixed and the signal is not windowed to zero.

The Covariance Method. Goal: minimize E_n. Expanding the squared error again yields correlation-like terms, now c_ij = Σ_{m=0}^{N−1} s_n(m−i) s_n(m−j).

The Covariance Method. Equivalently, by the change of variable m → m + i, c_ij = Σ_{m=−i}^{N−1−i} s_n(m) s_n(m + i − j). Property: c_ij = c_ji.

The Covariance Method. [Diagram: the shifted sequences used for c_ij extend from −i (and −j) up to N−1−i, so samples before the start of the frame are required.]

The Covariance Method. c_ij is, in fact, a cross-correlation function. The samples involved in the computation of the c_ij's are the values of s_n(m) in the interval −p ≤ m ≤ N−1. The value of c_ij depends on both i and j, not only on i − j.

The Equations for the Covariance Method. The normal equations are Σ_{i=1}^{p} a_i c_ik = c_0k, k = 1, …, p; the coefficient matrix [c_ik] is symmetric but not Toeplitz.

The Error Energy. For the covariance method, the minimum error energy is E_n = c_00 − Σ_{k=1}^{p} a_k c_0k.
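A corresponding sketch of the covariance method for the frame s(n), …, s(n+N−1); it needs access to the p samples preceding the frame, so n >= p is assumed, and the name lpc_covariance is illustrative.

```python
import numpy as np

def lpc_covariance(s, n, N, p):
    """c_ij = sum_{m=0}^{N-1} s(n+m-i) s(n+m-j); solve sum_i a_i c_ik = c_0k (requires n >= p)."""
    s = np.asarray(s, dtype=float)
    m = np.arange(n, n + N)
    C = np.array([[np.dot(s[m - i], s[m - j]) for j in range(p + 1)]
                  for i in range(p + 1)])
    a = np.linalg.solve(C[1:, 1:], C[1:, 0])
    E = C[0, 0] - np.dot(a, C[1:, 0])       # E_n = c_00 - sum_k a_k c_0k
    return a, E
```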

Linear Predictive Analysis More on the above Methods

The Equations to Be Solved. The autocorrelation method: Σ_{k=1}^{p} a_k r(|i − k|) = r(i), i = 1, …, p. The covariance method: Σ_{i=1}^{p} a_i c_ik = c_0k, k = 1, …, p.

Φ_n and ψ_n for the Autocorrelation Method. Define Φ_n = [r(|i − k|)] (the p×p Toeplitz matrix) and ψ_n = (r(1), …, r(p))ᵀ, so the equations read Φ_n a = ψ_n. Φ_n is positive definite. Why?

Φ_n and ψ_n for the Covariance Method. Define Φ_n = [c_ik] (p×p, symmetric) and ψ_n = (c_01, …, c_0p)ᵀ, so the equations again read Φ_n a = ψ_n. Φ_n is positive definite. Why?

Linear Predictive Analysis Solution of the LPC Equations

Covariance Method --- Cholesky Decomposition Method. Also called the square root method. We must solve Φ a = ψ, where Φ is symmetric and positive definite.

Covariance Method --- Cholesky Decomposition Method. Decompose Φ = V D Vᵀ, where V is a lower triangular matrix with 1's on its diagonal and D is a diagonal matrix.

Covariance Method --- Cholesky Decomposition Method. Write Φ a = V D Vᵀ a = ψ and let Y = D Vᵀ a. Then V Y = ψ, and Y can be solved recursively (forward substitution).

Covariance Method --- Cholesky Decomposition Method. Once Y is known, solve D Vᵀ a = Y for a by back substitution.

How?

Covariance Method --- Cholesky Decomposition Method. The elements of V and D are found column by column from Φ = V D Vᵀ (recall V_ii = 1). Considering the diagonal elements gives d_j = Φ_jj − Σ_{k<j} V_jk² d_k.

Covariance Method --- Cholesky Decomposition Method. The off-diagonal elements follow similarly (using V_jj = 1): V_ij = (Φ_ij − Σ_{k<j} V_ik V_jk d_k) / d_j, for i > j.

Covariance Method --- Cholesky Decomposition Method. With V_jj = 1, the recursion then continues column by column until all of V and D are determined.

Covariance Method --- Cholesky Decomposition Method. Error energy: E_n = c_00 − Σ_{k=1}^{p} a_k c_0k, which can also be evaluated as c_00 − Σ_{k=1}^{p} Y_k² / d_k.
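The decomposition described above can be sketched directly; this is the V D Vᵀ variant with a unit-diagonal V, as on the slides (a standard Cholesky L Lᵀ factorization would work equally well), and the name ldl_solve is an assumption.

```python
import numpy as np

def ldl_solve(Phi, psi):
    """Solve Phi a = psi for symmetric positive-definite Phi via Phi = V D V^T."""
    p = Phi.shape[0]
    V, d = np.eye(p), np.zeros(p)
    for j in range(p):                       # build V (unit lower triangular) and D column by column
        d[j] = Phi[j, j] - np.dot(V[j, :j] ** 2, d[:j])
        for i in range(j + 1, p):
            V[i, j] = (Phi[i, j] - np.dot(V[i, :j] * V[j, :j], d[:j])) / d[j]
    Y = np.zeros(p)
    for i in range(p):                       # forward substitution: V Y = psi
        Y[i] = psi[i] - np.dot(V[i, :i], Y[:i])
    a = np.zeros(p)
    for i in range(p - 1, -1, -1):           # back substitution: D V^T a = Y
        a[i] = Y[i] / d[i] - np.dot(V[i + 1:, i], a[i + 1:])
    return a
```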

Autocorrelation Method --- Durbin's Recursive Solution. The recursive solution proceeds in steps. In each step, we already have the solution for a lower-order predictor, and we use that solution to compute the coefficients of the next higher-order predictor.

Autocorrelation Method --- Durbin's Recursive Solution. Notation: coefficients of the nth-order predictor: a_k^(n), k = 1, …, n; error energy of the nth-order predictor: E^(n); Toeplitz matrix of the nth-order predictor: R_n = [r(|i − k|)], i, k = 1, …, n.

Autocorrelation Method --- Durbin's Recursive Solution. The equation for the autocorrelation method: R_n a^(n) = r^(n), where r^(n) = (r(1), …, r(n))ᵀ. How does the procedure proceed recursively?

Permutation Matrix. Let J be the exchange (reversal) matrix: multiplying by J on the left reverses the rows, and multiplying by J on the right reverses the columns.

Property of a Toeplitz Matrix. For a symmetric Toeplitz matrix R, reversing both rows and columns leaves it unchanged: J R J = R.

Autocorrelation Method --- Durbin's Recursive Solution

This is what we want.

Autocorrelation Method --- Durbin's Recursive Solution


Autocorrelation Method --- Durbin's Recursive Solution

What can you say about k_n?

Autocorrelation Method --- Durbin's Recursive Solution. Summary: construct a pth-order linear predictor.
Step 1. Compute the values of r(0), r(1), …, r(p).
Step 2. Set E^(0) = r(0).
Step 3. Recursively compute the following terms for n = 1 to p:
k_n = [r(n) − Σ_{i=1}^{n−1} a_i^(n−1) r(n−i)] / E^(n−1);
a_n^(n) = k_n;
a_i^(n) = a_i^(n−1) − k_n a_{n−i}^(n−1), for i = 1, …, n−1;
E^(n) = (1 − k_n²) E^(n−1).
The final solution is a_k = a_k^(p), k = 1, …, p.
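The summary above translates almost line for line into code; the sketch below returns the final coefficients, the per-order error energies, and the reflection coefficients (the name levinson_durbin is assumed).

```python
import numpy as np

def levinson_durbin(r, p):
    """Durbin's recursion on the autocorrelations r = [r(0), ..., r(p)]."""
    r = np.asarray(r, dtype=float)
    a = np.zeros(p + 1)                 # a[i] holds a_i; a[0] is unused
    k = np.zeros(p + 1)
    E = np.zeros(p + 1)
    E[0] = r[0]
    for n in range(1, p + 1):
        k[n] = (r[n] - np.dot(a[1:n], r[n - 1:0:-1])) / E[n - 1]
        a_prev = a.copy()
        a[n] = k[n]
        for i in range(1, n):
            a[i] = a_prev[i] - k[n] * a_prev[n - i]
        E[n] = (1.0 - k[n] ** 2) * E[n - 1]
    return a[1:], E, k[1:]
```

Since E^(n) = (1 − k_n²) E^(n−1) must stay nonnegative, |k_n| <= 1, which answers the earlier question about k_n.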

Linear Predictive Analysis Lattice Formulations

The Steps for Finding LPC Coefficients
Both the covariance and the autocorrelation methods consist of two steps:
– Computation of a matrix of correlation values.
– Solution of a set of linear equations.
The lattice method combines these two steps into one.

The Clue from the Autocorrelation Method. Consider the system function of an nth-order linear predictor: A^(n)(z) = 1 − Σ_{k=1}^{n} a_k^(n) z^−k. The recursive relation from the autocorrelation method: a_i^(n) = a_i^(n−1) − k_n a_{n−i}^(n−1), with a_n^(n) = k_n.

The Clue from the Autocorrelation Method. Substituting the recursion into A^(n)(z) and changing the index i → n − i groups the terms into A^(n−1)(z) plus a reversed-coefficient part.

The Clue from the Autocorrelation Method. The reversed part can be written using A^(n−1)(z^−1), giving A^(n)(z) = A^(n−1)(z) − k_n z^−n A^(n−1)(z^−1).

Interpretation. Passing s(m) through A^(n−1)(z) gives the forward error e^(n−1)(m); the term z^−n A^(n−1)(z^−1) produces the delayed backward error b^(n−1)(m−1).

Interpretation. [Diagram: a tapped delay line over s(m), s(m−1), …, s(m−n) realizing an (n−1)th-order forward prediction error filter. What is the remaining structure?]

Interpretation. [Diagram: the remaining structure is an (n−1)th-order backward prediction error filter, which predicts s(m−n) from the samples that follow it.]

Backward Prediction Defined. Define the backward prediction error b^(n)(m) = s(m−n) − Σ_{k=1}^{n} a_k^(n) s(m−n+k): the error in predicting s(m−n) from the n samples that follow it, using the same coefficients.

Backward Prediction Defined. In the z-domain, the backward error is obtained by filtering s(m) with z^−n A^(n)(z^−1).

Forward Prediction vs. Backward Prediction. Define the forward error e^(n)(m) = s(m) − Σ_{k=1}^{n} a_k^(n) s(m−k) and the backward error b^(n)(m) = s(m−n) − Σ_{k=1}^{n} a_k^(n) s(m−n+k).

The Prediction Errors. Combining the recursion for A^(n)(z) with these definitions gives: the forward prediction error e^(n)(m) = e^(n−1)(m) − k_n b^(n−1)(m−1); the backward prediction error b^(n)(m) = b^(n−1)(m−1) − k_n e^(n−1)(m); with e^(0)(m) = b^(0)(m) = s(m).
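These two recursions define the analysis (prediction-error) lattice; a minimal sketch, assuming zero initial conditions for the delays and the helper name lattice_analysis:

```python
import numpy as np

def lattice_analysis(s, k):
    """Start from e^(0) = b^(0) = s, then apply, for each stage n,
    e^(n)(m) = e^(n-1)(m) - k_n b^(n-1)(m-1) and
    b^(n)(m) = b^(n-1)(m-1) - k_n e^(n-1)(m).
    Returns the order-p forward prediction error."""
    e = np.asarray(s, dtype=float).copy()
    b = e.copy()
    for kn in k:
        b_delayed = np.concatenate(([0.0], b[:-1]))    # b^(n-1)(m-1), zero initial condition
        e, b = e - kn * b_delayed, b_delayed - kn * e  # both updates use the old e
    return e
```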

The Lattice Structure z1z1 s(m)s(m) k1k1 k1k1 z1z1 k2k2 k2k2 z1z1 kpkp z1z1

k_i = ? Throughout the discussion, we have assumed that the k_i's are the same as those developed for the autocorrelation method. So, the k_i's can be found using the autocorrelation (Durbin) method.

Another Approach to Find the k_i's. For the nth-order predictor, our goal is to minimize the forward error energy E^(n) = Σ_m [e^(n)(m)]². So, using the lattice recursion, we want to minimize Σ_m [e^(n−1)(m) − k_n b^(n−1)(m−1)]² with respect to k_n.

Another Approach to Find the k_i's. So, we want to minimize Σ_m [e^(n−1)(m) − k_n b^(n−1)(m−1)]². Set ∂E^(n)/∂k_n = 0.

Another Approach to Find the k_i's. Setting the derivative to zero gives k_n = Σ_m e^(n−1)(m) b^(n−1)(m−1) / Σ_m [b^(n−1)(m−1)]². Fact: in the autocorrelation formulation the forward and backward error energies are equal, so k_n can equivalently be written as the normalized cross-correlation k_n = Σ_m e^(n−1)(m) b^(n−1)(m−1) / sqrt(Σ_m [e^(n−1)(m)]² · Σ_m [b^(n−1)(m−1)]²).
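The same idea can be run directly on the data, estimating each k_n from the forward and backward errors of the previous stage; the sketch below uses the normalized cross-correlation (PARCOR) form and the assumed name parcor_coefficients.

```python
import numpy as np

def parcor_coefficients(s, p):
    """Estimate k_1..k_p stage by stage as normalized forward/backward cross-correlations."""
    e = np.asarray(s, dtype=float).copy()
    b = e.copy()
    k = np.zeros(p)
    for n in range(p):
        bd = np.concatenate(([0.0], b[:-1]))                        # b^(n)(m-1)
        k[n] = np.dot(e, bd) / np.sqrt(np.dot(e, e) * np.dot(bd, bd))
        e, b = e - k[n] * bd, bd - k[n] * e                         # update the errors
    return k
```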

PARCOR. [Diagram: the lattice computes the k_n's as partial correlation (PARCOR) coefficients, stage by stage, directly from s(m).] Given the k_n's, can you find the α_i's?

All-Pole Lattice z1z1 s(m)s(m) k1k1 k1k1 z1z1 k2k2 k2k2 z1z1 kpkp z1z1

z1z1 s(m)s(m) k1k1 k1k1 z1z1 k2k2 k2k2 z1z1 knkn z1z1

z1z1 s(m)s(m) k1k1 k1k1 z1z1 k2k2 k2k2 z1z1 knkn z1z1 knkn  k n z1z1

All-Pole Lattice. [Diagram: inverting every stage in this way yields the all-pole lattice, a cascade of stages with coefficients k_p, …, k_1 and a feedback path.]

All-Pole Lattice. [Diagram: the complete all-pole (synthesis) lattice with input e(m) and output s(m).]
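Inverting each stage of the analysis lattice gives the synthesis (all-pole) lattice; a sketch, assuming zero initial delay states and the name lattice_synthesis:

```python
import numpy as np

def lattice_synthesis(e, k):
    """Drive the all-pole lattice with the excitation e(m), using
    e^(n-1)(m) = e^(n)(m) + k_n b^(n-1)(m-1),
    b^(n)(m)   = b^(n-1)(m-1) - k_n e^(n-1)(m),  and  s(m) = e^(0)(m) = b^(0)(m)."""
    p = len(k)
    b = np.zeros(p + 1)              # b[n] holds b^(n) at the previous time instant
    s = np.zeros(len(e))
    for m, em in enumerate(e):
        f = em                       # start from e^(p)(m)
        new_b = np.zeros(p + 1)
        for n in range(p, 0, -1):
            f = f + k[n - 1] * b[n - 1]            # e^(n-1)(m)
            new_b[n] = b[n - 1] - k[n - 1] * f     # b^(n)(m)
        s[m] = f
        new_b[0] = f                 # b^(0)(m) = s(m)
        b = new_b
    return s
```

With matching coefficients, lattice_synthesis(lattice_analysis(s, k), k) reconstructs s up to numerical precision, which is the sense in which the two lattices are inverses.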

Comparison. [Diagram: the all-pole synthesis lattice (e(m) → s(m)) shown alongside the PARCOR analysis lattice (s(m) → e(m)); the two share the same coefficients k_1, …, k_p.]

Normalized Lattice. [Diagram: section n of the lattice, built from the coefficient k_n and a delay z^−1.]

Normalized Lattice. [Diagram: the full filter as a cascade of sections 1 through p, each of the form shown for section n.]

Normalized Lattice

Three-multiplier form. [Diagram: a lattice section realized with three multipliers involving k_n and −k_n, and one delay z^−1.]

Normalized Lattice. Starting from the three-multiplier form and introducing a suitable scaling, the section can be redrawn in a four-multiplier form.

Normalized Lattice. Kelly-Lochbaum form. [Diagram: a lattice section with multipliers (1 + k_n), (1 − k_n), k_n, and −k_n, the form associated with the acoustic tube model.]

Normalized Lattice. [Diagram: the complete filter as a cascade of sections 1 through p using the chosen section form.]