1
Linear Predictive Analysis Lecturer: 虞台文
2
Contents Introduction Basic Principles of Linear Predictive Analysis The Autocorrelation Method The Covariance Method More on the above Methods Solution of the LPC Equations Lattice Formulations
3
Linear Predictive Analysis Introduction
4
Linear Predictive Analysis A powerful speech analysis technique for estimating speech parameters: – Pitch – Formants – Spectra – Vocal tract area functions Especially useful for compression. Linear predictive analysis is often referred to as linear predictive coding, or LPC.
5
Basic Idea A speech sample can be approximated as a linear combination of past speech samples. The resulting prediction error filter is an all-zero model, whose inverse is the all-pole vocal tract model we developed.
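In symbols (a standard statement of the model the slide describes; p is the predictor order):

\hat{s}(n) = \sum_{k=1}^{p} a_k\, s(n-k), \qquad e(n) = s(n) - \hat{s}(n)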
6
LPC vs. System Identification Linear prediction has long been used in control and information theory under the names system estimation and system identification. Using LPC methods, the resulting system is modeled as an all-pole linear system.
7
LPC in Speech Processing Modeling the speech waveform. There are different formulations. The differences among them are often matters of philosophy, or of how the problem is viewed. They almost all lead to the same result.
8
Formulations The covariance method The autocorrelation formulation The lattice method The inverse filter formulation The spectral estimation formulation The maximum likelihood estimation The inner product formulation
9
Linear Predictive Analysis Basic Principles of Linear Predictive Analysis
10
Speech Production Model [Block diagram: an impulse train generator (voiced excitation) or a random noise generator (unvoiced excitation) supplies u(n); scaled by the gain G, it drives a time-varying digital filter controlled by the vocal tract parameters to produce the speech s(n).]
11
Speech Production Model [The same block diagram, with the time-varying digital filter now labeled H(z).]
12
Linear Prediction Model Linear prediction: \hat{s}(n) = \sum_{k=1}^{p} a_k s(n-k). Error compensation: s(n) = \sum_{k=1}^{p} a_k s(n-k) + e(n).
13
Speech Production vs. Linear Prediction Speech production: the vocal tract filter driven by the excitation G u(n). Linear prediction: the linear predictor plus the error e(n). Matching the two models term by term gives a_k = \alpha_k, with the error e(n) playing the role of the scaled excitation G u(n).
14
Prediction Error Filter Linear prediction error: e(n) = s(n) - \sum_{k=1}^{p} a_k s(n-k), so the prediction error filter is A(z) = 1 - \sum_{k=1}^{p} a_k z^{-k}.
15
Prediction Error Filter [Diagram: s(n) → A(z) → e(n).]
16
[Diagram: s(n) → A(z) → e(n).] Goal: minimize the total squared error E = \sum_n e^2(n).
17
Prediction Error Filter Goal: minimize E = \sum_n e^2(n). Expanding the square expresses E through the correlation values c_{ij} = \sum_n s(n-i)\, s(n-j). Suppose that the c_{ij}'s can be estimated from the speech samples. Our goal now is to find the a_k's that minimize the sum of squared errors.
18
Prediction Error Filter Goal: minimize E. Fact: c_{ij} = c_{ji}. Let \partial E / \partial a_k = 0 for k = 1, \ldots, p and solve the equations.
19
Prediction Error Filter Differentiating E picks out the terms with i = k and j = k, giving one equation for each of k = 1, k = 2, \ldots, k = p. Fact: c_{ij} = c_{ji}.
20
Prediction Error Filter The coefficient of s(n) in the error filter is fixed at 1, so only the a_k's vary; using c_{ij} = c_{ji}, the k = 1, k = 2, \ldots, k = p equations simplify.
21
Prediction Error Filter Collecting terms and again using c_{ij} = c_{ji} reduces the conditions for k = 1, 2, \ldots, p to a set of p linear equations in a_1, \ldots, a_p.
22
Prediction Error Filter Fact: c_{ij} = c_{ji}. Remember this equation.
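Written out from the derivation above, "this equation" is the set of normal equations that every LPC method must solve:

\sum_{k=1}^{p} a_k\, c_{ik} = c_{i0}, \qquad i = 1, 2, \ldots, p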
23
Prediction Error Filter Fact: c_{ij} = c_{ji}. Such a formulation is in fact unrealistic. Why?
24
Error Energy Substituting the optimal coefficients (obtained from \partial E / \partial a_i = 0, i \neq 0) gives the minimum error energy E = c_{00} - \sum_{k=1}^{p} a_k c_{0k}.
25
Short-Time Analysis Original goal: minimize E = \sum_n e^2(n) over the whole signal. The vocal tract is a slowly time-varying system, so minimizing the error energy over the whole speech signal is unreasonable.
26
Short-Time Analysis Original goal: minimize the total error energy. New goal: minimize the short-time error energy E_n = \sum_m e_n^2(m), evaluated in the neighborhood of time n.
27
Short-Time Analysis New goal: minimize E_n = \sum_m e_n^2(m), where s_n(m) = s(n+m) denotes the speech segment selected around time n and e_n(m) is its prediction error.
28
Linear Predictive Analysis The Autocorrelation Method
29
The segment is extracted with a window: s_n(m) = s(n+m)\, w(m) for 0 \le m \le N-1, and zero otherwise. Usually, we use a Hamming window.
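For reference, the standard N-point Hamming window is

w(m) = 0.54 - 0.46 \cos\!\left( \frac{2\pi m}{N-1} \right), \qquad 0 \le m \le N-1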
30
The Autocorrelation Method Since the windowed signal is zero outside 0 \le m \le N-1, the error energy is a finite sum (the error can be nonzero only for 0 \le m \le N-1+p). So, the original formulation can be directly applied to find the prediction coefficients.
31
The Autocorrelation Method What properties do the c_{ij}'s have? For convenience, I'll drop the subscript n in the following discussion.
32
Properties of c_{ij}'s Property 1: c_{ij} = c_{ji}. Property 2: its value depends only on the difference |i - j|.
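Combining the two properties with the windowing assumption, every c_{ij} reduces to a short-time autocorrelation value:

c_{ij} = r(|i-j|), \qquad r(k) = \sum_{m=0}^{N-1-k} s_n(m)\, s_n(m+k)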
33
The Equations for the Autocorrelation Method
34
A Toeplitz Matrix
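Filling c_{ij} = r(|i-j|) into the normal equations gives the Toeplitz system:

\begin{bmatrix} r(0) & r(1) & \cdots & r(p-1) \\ r(1) & r(0) & \cdots & r(p-2) \\ \vdots & & \ddots & \vdots \\ r(p-1) & r(p-2) & \cdots & r(0) \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{bmatrix} = \begin{bmatrix} r(1) \\ r(2) \\ \vdots \\ r(p) \end{bmatrix}

Every diagonal is constant; this is the structure Durbin's recursion will exploit.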
35
The Error Energy In this notation, the minimum error energy is E = r(0) - \sum_{k=1}^{p} a_k r(k).
38
Linear Predictive Analysis The Covariance Method
39
Goal: minimize E_n = \sum_{m=0}^{N-1} e_n^2(m), computed from the unwindowed signal.
40
The Covariance Method Goal: minimize E_n = \sum_{m=0}^{N-1} e_n^2(m). The range for evaluating the error energy is different from the autocorrelation method: the interval is fixed and no window is applied.
41
The Covariance Method Goal: minimize E_n. Setting the partial derivatives to zero again yields equations in correlation-like quantities c_{ij}.
42
The Covariance Method c_{ij} = \sum_{m=0}^{N-1} s_n(m-i)\, s_n(m-j), or, after an index shift, c_{ij} = \sum_{m=-i}^{N-1-i} s_n(m)\, s_n(m+i-j). Property: c_{ij} = c_{ji}.
43
The Covariance Method [Index-range diagram: the shifted sum for c_{ij} runs over -i \le m \le N-1-i, so samples before the start of the interval 0..N-1 are required.]
44
The Covariance Method [Index-range diagram, continued.] c_{ij} is, in fact, a cross-correlation function. The samples involved in the computation of the c_{ij}'s are the values of s_n(m) in the interval -p \le m \le N-1. The value of c_{ij} depends on both i and j.
45
The Equations for the Covariance Method Symmetric but not Toeplitz.
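For comparison with the autocorrelation case, the covariance normal equations are

\begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1p} \\ c_{21} & c_{22} & \cdots & c_{2p} \\ \vdots & & & \vdots \\ c_{p1} & c_{p2} & \cdots & c_{pp} \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{bmatrix} = \begin{bmatrix} c_{10} \\ c_{20} \\ \vdots \\ c_{p0} \end{bmatrix}

The matrix is symmetric (c_{ij} = c_{ji}), but its diagonals are not constant, so it is not Toeplitz.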
46
The Error Energy Here the minimum error energy is E = c_{00} - \sum_{k=1}^{p} a_k c_{0k}.
48
Linear Predictive Analysis More on the above Methods
49
The Equations to be Solved The Autocorrelation Method The Covariance Method
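Side by side, the two systems reconstructed above are:

Autocorrelation: \sum_{k=1}^{p} a_k\, r(|i-k|) = r(i), \quad i = 1, \ldots, p \quad (Toeplitz)
Covariance: \sum_{k=1}^{p} a_k\, c_{ik} = c_{i0}, \quad i = 1, \ldots, p \quad (symmetric, not Toeplitz)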
50
\Phi_n and \psi_n for the Autocorrelation Method Define \Phi_n as the matrix of correlation values and \psi_n as the right-hand-side vector, so that the equations read \Phi_n a = \psi_n. \Phi_n is positive definite. Why?
51
\Phi_n and \psi_n for the Covariance Method Define \Phi_n = [c_{ij}] and \psi_n = [c_{i0}], so that \Phi_n a = \psi_n. \Phi_n is positive definite. Why?
52
Linear Predictive Analysis Solution of the LPC Equations
53
Covariance Method--- Cholesky Decomposition Method Also called the square root method. It applies because \Phi is symmetric and positive definite.
54
Covariance Method--- Cholesky Decomposition Method Decompose \Phi = L D L^T, where L is a lower triangular matrix with unit diagonal and D is a diagonal matrix.
55
Covariance Method--- Cholesky Decomposition Method Let Y = D L^T a, so that \Phi a = \psi becomes L Y = \psi. Y can be recursively solved (forward substitution).
56
Covariance Method--- Cholesky Decomposition Method With Y known, solve D L^T a = Y for a by back substitution.
57
How?
58
Covariance Method--- Cholesky Decomposition Method The diagonal of L is fixed at 1. Consider the diagonal elements of \Phi = L D L^T: d_i = c_{ii} - \sum_{k=1}^{i-1} L_{ik}^2 d_k.
59
Covariance Method--- Cholesky Decomposition Method For the off-diagonal elements, again with L_{ii} = 1: L_{ij} d_j = c_{ij} - \sum_{k=1}^{j-1} L_{ik} d_k L_{jk}, for j < i.
60
Covariance Method--- Cholesky Decomposition Method Alternating between the two formulas fills in D and L column by column; the story is, then, continued until the decomposition is complete.
61
Covariance Method--- Cholesky Decomposition Method Error energy: E = c_{00} - \sum_{k=1}^{p} a_k c_{0k} = c_{00} - \sum_{k=1}^{p} y_k^2 / d_k, so E can be read off directly from Y and D.
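A runnable sketch of the full covariance pipeline under one common convention (the error is summed over p \le m \le N-1, so only samples inside the frame are needed); the function name covariance_lpc and this indexing choice are mine, not the slides':

```python
import numpy as np

def covariance_lpc(s, p):
    """LPC by the covariance method: build the c_ij matrix, then solve the
    normal equations with an LDL^T (Cholesky-style) decomposition.
    Convention (an assumption of this sketch): the error is minimized over
    m = p..N-1, so the first p samples act as history and no window is used."""
    s = np.asarray(s, dtype=float)
    N = len(s)
    # c[i, j] = sum_{m=p}^{N-1} s[m-i] * s[m-j]  for i, j = 0..p
    c = np.array([[np.dot(s[p - i:N - i], s[p - j:N - j])
                   for j in range(p + 1)] for i in range(p + 1)])
    Phi = c[1:, 1:]      # symmetric and positive definite, but not Toeplitz
    psi = c[1:, 0]
    # Phi = L D L^T with unit-diagonal L, filled in column by column
    L = np.eye(p)
    d = np.zeros(p)
    for j in range(p):
        d[j] = Phi[j, j] - np.dot(L[j, :j] ** 2, d[:j])
        for i in range(j + 1, p):
            L[i, j] = (Phi[i, j] - np.dot(L[i, :j] * L[j, :j], d[:j])) / d[j]
    y = np.linalg.solve(L, psi)             # forward substitution: L y = psi
    a = np.linalg.solve(L.T, y / d)         # back substitution: D L^T a = y
    E = c[0, 0] - np.dot(y ** 2, 1.0 / d)   # minimum error energy
    return a, E
```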
62
Autocorrelation Method--- Durbin's Recursive Solution The recursive solution proceeds in steps. In each step we already have the solution for a lower-order predictor, and we use that solution to compute the coefficients for the next higher-order predictor.
63
Autocorrelation Method--- Durbin's Recursive Solution Notations: coefficients of the nth-order predictor: a_k^{(n)}, k = 1, \ldots, n. Error energy of the nth-order predictor: E^{(n)}. The Toeplitz matrix for the nth-order predictor: R_n.
64
Autocorrelation Method--- Durbin's Recursive Solution The equation for the autocorrelation method: R_n a^{(n)} = r_n. How does the procedure proceed recursively?
65
Permutation Matrix Multiplying by the exchange matrix J on the left reverses the rows; multiplying on the right reverses the columns.
66
Property of a Toeplitz Matrix Reversing both rows and columns leaves a symmetric Toeplitz matrix unchanged: J R J = R.
67
Autocorrelation Method--- Durbin's Recursive Solution
70
This is what we want.
71
Autocorrelation Method--- Durbin's Recursive Solution
75
Choose k_n so that this component equals 0.
76
Autocorrelation Method--- Durbin's Recursive Solution
77
What can you say about k_n? Since E^{(n)} = (1 - k_n^2) E^{(n-1)} and error energies are nonnegative, |k_n| \le 1.
78
Autocorrelation Method--- Durbin's Recursive Solution Summary: constructing a pth-order linear predictor. Step 1. Compute the values of r(0), r(1), \ldots, r(p). Step 2. Set E^{(0)} = r(0). Step 3. Recursively compute the following terms for n = 1 to p: k_n = \bigl( r(n) - \sum_{k=1}^{n-1} a_k^{(n-1)} r(n-k) \bigr) / E^{(n-1)}; a_n^{(n)} = k_n; a_k^{(n)} = a_k^{(n-1)} - k_n a_{n-k}^{(n-1)} for 1 \le k \le n-1; E^{(n)} = (1 - k_n^2) E^{(n-1)}.
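The summary translates line for line into code; a minimal sketch, assuming the input is one windowed analysis frame (the name autocorrelation_lpc is mine):

```python
import numpy as np

def autocorrelation_lpc(s, p):
    """LPC by the autocorrelation method via Durbin's recursion.
    s: one windowed analysis frame (e.g. Hamming-windowed); p: predictor order."""
    s = np.asarray(s, dtype=float)
    N = len(s)
    # Step 1: short-time autocorrelation values r(0)..r(p)
    r = np.array([np.dot(s[:N - k], s[k:]) for k in range(p + 1)])
    # Step 2: zeroth-order predictor
    E = r[0]
    a = np.zeros(p + 1)        # a[k] holds a_k^(n); a[0] is unused
    for n in range(1, p + 1):  # Step 3: order updates for n = 1..p
        k_n = (r[n] - np.dot(a[1:n], r[n - 1:0:-1])) / E
        a_prev = a.copy()
        a[n] = k_n
        for k in range(1, n):
            a[k] = a_prev[k] - k_n * a_prev[n - k]
        E *= 1.0 - k_n ** 2    # error energy never increases; |k_n| <= 1
    return a[1:], E
```

For example, a, E = autocorrelation_lpc(frame, 10) returns the ten predictor coefficients and the minimum error energy for that frame.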
79
Linear Predictive Analysis Lattice Formulations
80
The Steps for Finding LPC Coefficients Both the covariance and the autocorrelation methods consist of two steps: – Computation of a matrix of correlation values. – Solution of a set of linear equations. The lattice method combines the two into a single step.
81
The Clue from the Autocorrelation Method Consider the system function of an nth-order linear predictor: A^{(n)}(z) = 1 - \sum_{k=1}^{n} a_k^{(n)} z^{-k}. The recursive relation from the autocorrelation method expresses A^{(n)}(z) in terms of A^{(n-1)}(z).
82
The Clue from the Autocorrelation Method Substituting the coefficient update a_k^{(n)} = a_k^{(n-1)} - k_n a_{n-k}^{(n-1)} and changing the summation index i \to n - i collects the updated terms around A^{(n-1)}(z).
83
The Clue from the Autocorrelation Method The collected terms form the reversed polynomial A^{(n-1)}(z^{-1}), giving A^{(n)}(z) = A^{(n-1)}(z) - k_n z^{-n} A^{(n-1)}(z^{-1}).
84
Interpretation In the time domain the two terms of the recursion correspond to the signals e^{(n-1)}(m) and b^{(n-1)}(m-1).
85
Interpretation [Tapped-delay-line diagram over s(m), s(m-1), s(m-2), \ldots, s(m-n).] The order-(n-1) filter acting on s(m), \ldots, s(m-n+1) is the forward prediction error filter. What is the filter acting on the delayed taps?
86
Interpretation [The same tapped delay line.] The order-(n-1) filter acting on the other end of the taps is the backward prediction error filter: it predicts the oldest sample in its span from the more recent ones.
87
Backward Prediction Defined Define the backward prediction error of order n: b^{(n)}(m) = s(m-n) - \sum_{k=1}^{n} a_k^{(n)} s(m-n+k).
88
Backward Prediction Defined Define its transfer function: B^{(n)}(z) = z^{-n} A^{(n)}(z^{-1}); the backward error filter is the forward error filter with its coefficients reversed.
89
Forward Prediction vs. Backward Prediction Define the forward prediction error of order n: e^{(n)}(m) = s(m) - \sum_{k=1}^{n} a_k^{(n)} s(m-k). The forward predictor estimates the newest sample from the past; the backward predictor estimates the oldest sample from what follows it.
90
The Prediction Errors The forward prediction error e^{(n)}(m) and the backward prediction error b^{(n)}(m) obey coupled order-update recursions.
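Reconstructed from the polynomial relation A^{(n)}(z) = A^{(n-1)}(z) - k_n z^{-n} A^{(n-1)}(z^{-1}) and its backward counterpart:

e^{(n)}(m) = e^{(n-1)}(m) - k_n\, b^{(n-1)}(m-1)
b^{(n)}(m) = b^{(n-1)}(m-1) - k_n\, e^{(n-1)}(m)

with initial conditions e^{(0)}(m) = b^{(0)}(m) = s(m). Each stage of the lattice below implements exactly this pair of equations.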
91
The Lattice Structure [Diagram: s(m) enters a cascade of p identical stages; stage i holds a unit delay z^{-1} and a crossed pair of multipliers -k_i realizing the order-update equations.]
92
k_i = ? Throughout the discussion we have assumed that the k_i's are the same as those developed for the autocorrelation method. So, the k_i's can be found using the autocorrelation method.
93
Another Approach to Find k_i's For the nth-order predictor, our goal is to minimize \sum_m [e^{(n)}(m)]^2. So, we want to minimize \sum_m [e^{(n-1)}(m) - k_n b^{(n-1)}(m-1)]^2.
94
Another Approach to Find k_i's So, we want to minimize \sum_m [e^{(n-1)}(m) - k_n b^{(n-1)}(m-1)]^2. Set the derivative with respect to k_n to zero.
95
Another Approach to Find k_i's Setting the derivative to zero determines k_n. Fact: for the autocorrelation method, \sum_m [e^{(n-1)}(m)]^2 = \sum_m [b^{(n-1)}(m-1)]^2, so the forward and backward error energies agree.
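Carrying out the minimization (the second, symmetric form uses the fact just stated and is the PARCOR definition):

k_n = \frac{\sum_m e^{(n-1)}(m)\, b^{(n-1)}(m-1)}{\sum_m [b^{(n-1)}(m-1)]^2} = \frac{\sum_m e^{(n-1)}(m)\, b^{(n-1)}(m-1)}{\sqrt{ \sum_m [e^{(n-1)}(m)]^2 \; \sum_m [b^{(n-1)}(m-1)]^2 }}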
96
PARCOR [Lattice diagram: s(m) passes through p stages; at stage n, k_n is obtained as the correlation (CORR) between the forward and backward error signals.] The k_n's are thus partial correlation, or PARCOR, coefficients. Given the k_n's, can you find the a_i's?
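Yes: the coefficient update from Durbin's recursion runs forward on its own. A minimal sketch (the helper name parcor_to_lpc is mine):

```python
import numpy as np

def parcor_to_lpc(k):
    """Step-up recursion: turn PARCOR/reflection coefficients k_1..k_p into
    predictor coefficients a_1..a_p via a_i^(n) = a_i^(n-1) - k_n a_{n-i}^(n-1)."""
    a = np.array([])                 # zeroth-order predictor has no coefficients
    for k_n in k:
        # extend the order-(n-1) solution: update old entries, append a_n = k_n
        a = np.concatenate([a - k_n * a[::-1], [k_n]])
    return a
```

Running the same update in reverse (the step-down recursion) recovers the k_n's from a given set of a_i's.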
97
All-Pole Lattice [Diagram: the analysis lattice again, s(m) in and e(m) out; we now invert it stage by stage.]
98
[Diagram: the lattice truncated at stage n.] To invert a stage, solve its equations for the lower-order forward error: e^{(n-1)}(m) = e^{(n)}(m) + k_n b^{(n-1)}(m-1).
99
[Diagram: the rearranged stage with multipliers +k_n and -k_n.] The backward equation b^{(n)}(m) = b^{(n-1)}(m-1) - k_n e^{(n-1)}(m) is kept as is; only the forward path is reversed.
100
All-Pole Lattice [Diagram: cascading the inverted stages from k_p down to k_1, closing with a direct connection of gain 1, yields the all-pole lattice.]
101
All-Pole Lattice [Diagram: the all-pole lattice driven by the error e(m) as input reproduces the speech s(m) as output.]
102
Comparison [Diagrams: the all-zero analysis lattice maps s(m) to e(m); the all-pole synthesis lattice maps e(m) back to s(m); both use the same k_i's.]
103
Normalized Lattice [Diagram: section n of the lattice isolated as a two-port module with multipliers k_n, -k_n and a delay z^{-1}, the starting point for normalization.]
104
Normalized Lattice [Diagram: the lattice as a cascade of sections 1, 2, \ldots, p, each built from the module of the previous slide.]
105
Normalized Lattice
106
Three-multiplier form [Diagram: one lattice section realized with three multipliers.]
107
Normalized Lattice Three-multiplier form: [diagram]. Letting the internal signals be rescaled gives the four-multiplier form: [diagram].
108
Normalized Lattice Kelly-Lochbaum form: [diagram of a four-multiplier section, the form that corresponds to wave reflection at a junction of acoustic tubes].
109
Normalized Lattice [Diagram: the complete lattice as a cascade of sections 1 through p in Kelly-Lochbaum form.]