
1  11. Finite-Precision Effects and Pipelined Adaptive Filters
• In practice, an adaptive filter is usually implemented digitally; thus, finite-precision problems arise.
• There are two ways to implement a filter: fixed-point or floating-point. In most cases, fixed-point implementation is preferred.
• In a digital implementation, there are essentially two sources of quantization:
– A/D conversion
– Finite word-length arithmetic
• The signal is usually received in analog form; the A/D converter is the device that samples and quantizes it.

2
• Ideal A/D converter: [figure: staircase input-output characteristic of a uniform quantizer]
• As we can see, quantization error always exists. If the number of quantization levels is large, we can assume the error is uniformly distributed and treat it as an additive noise.

3
• Calculation of quantization noise:
• An N-bit ADC has a quantization error ranging from -1/2^N to +1/2^N, i.e., half the step size Δ = 2/2^N.
• The average quantization power is the variance of an error uniformly distributed over one step: σ_q^2 = Δ^2/12 = (1/3)·2^(-2N).
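
The uniform-error model above is easy to check numerically. A minimal sketch (the bit width N and the test signal are illustrative assumptions, not from the slides):

import numpy as np

# Numerical check of the uniform quantization-noise model.
# Assumptions: dynamic range [-1, 1), round-to-nearest, N = 8 bits.
N = 8
delta = 2.0 / 2**N                       # quantization step size
x = np.random.uniform(-1, 1, 100_000)    # test signal covering the full range
xq = delta * np.round(x / delta)         # ideal uniform quantizer
e = x - xq                               # error, bounded by +/- delta/2 = +/- 1/2^N

print("empirical error power  :", np.mean(e**2))
print("theoretical Delta^2/12 :", delta**2 / 12)   # = (1/3) * 2**(-2*N)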

4
• Assuming that the dynamic range of the ADC and DAC is between -1 and 1 (the maximum magnitude is 0 dB), the quantization noise power is σ_q^2 = 2^(-2N)/3.
• Thus, we have a 6-dB-per-bit rule of thumb for the quantization noise.
• In a digital system, a finite word length is commonly used to store the results of internal arithmetic calculations. Thus, after an arithmetic operation (addition, multiplication), the result must be quantized, and this causes round-off or truncation effects.
• Due to the above effects, the digital version of the filter may exhibit a response deviating from the ideal one.
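
The rule follows directly from the noise-power expression; in decibels,

10\log_{10}\sigma_q^2 = 10\log_{10}\frac{2^{-2N}}{3} = -20N\log_{10}2 - 10\log_{10}3 \approx -(6.02\,N + 4.77)\ \text{dB},

so each additional bit lowers the quantization noise floor by about 6 dB.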

5
• For a digitally realized LMS adaptive filter, there are many sources that introduce quantization errors. Modeling each quantizer Q[·] as its input plus an additive error:
• For the input quantizer connected to the input u(n): u_q(n) = Q[u(n)] = u(n) + η_u(n).
• For the quantizer connected to the desired signal d(n): d_q(n) = Q[d(n)] = d(n) + η_d(n).
• For the quantized tap-weight vector: w_q(n) = w(n) + Δw(n).
• For the filter output: y_q(n) = Q[w_q^T(n) u_q(n)].

6
• The finite-precision LMS algorithm is described by w_q(n+1) = Q[ w_q(n) + μ e_q(n) u_q(n) ], with the quantized error e_q(n) = d_q(n) - w_q^T(n) u_q(n).
• Thus, the error deviates from its infinite-precision value by terms that accumulate the quantization noise of each operation.
• Assuming that the step size is small and invoking the independence assumption, it has been shown that the resulting excess MSE contains, besides the usual term proportional to μ, a quantization term that is inversely proportional to μ.
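
A minimal sketch of the quantized-arithmetic LMS loop, assuming a simple round-to-nearest fixed-point quantizer Q[·] applied after each operation (the function names and bit width are illustrative, not from the slides):

import numpy as np

def quantize(x, bits):
    # Round-to-nearest fixed-point quantizer; assumed dynamic range [-1, 1).
    delta = 2.0 ** (1 - bits)
    return np.clip(delta * np.round(x / delta), -1.0, 1.0 - delta)

def finite_precision_lms(u, d, M, mu, bits=12):
    # LMS with Q[.] applied to the output, the error, and the weight update.
    w = np.zeros(M)
    for n in range(M - 1, len(u)):
        uq = quantize(u[n - M + 1:n + 1][::-1], bits)   # quantized taps u_q(n)
        y = quantize(np.dot(w, uq), bits)               # quantized output y_q(n)
        e = quantize(quantize(d[n], bits) - y, bits)    # quantized error e_q(n)
        # If mu * e_q(n) * u_q(n-i) falls below one LSB of the weights,
        # that tap stops adapting (the stalling effect of the next slides).
        w = quantize(w + quantize(mu * e * uq, bits), bits)
    return w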

7
• Decreasing the step size reduces the misadjustment; however, it increases the effect of quantization error.
• A digital implementation of the LMS algorithm stops adapting, or stalls, whenever the correction term μ e_q(n) u_q(n-i) for the i-th tap weight is smaller in magnitude than the least significant bit (LSB) of the tap weight.
• Let the root-mean-square (rms) value of u_q(n-i) be A_rms. Then, if μ |e_q(n)| A_rms < LSB, the LMS algorithm stops adapting. The smallest error magnitude at which adaptation still proceeds, e_D(∞), is called the digital residual error.
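
Rearranging the stalling condition gives the digital residual error explicitly:

\mu\, e_D(\infty)\, A_{\mathrm{rms}} = \mathrm{LSB}
\quad\Longrightarrow\quad
e_D(\infty) = \frac{\mathrm{LSB}}{\mu\, A_{\mathrm{rms}}},

so doubling either the step size or the input rms level halves the error floor at which adaptation stops.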

8
• To prevent the stalling phenomenon, e_D(∞) must be made as small as possible. This can be achieved in two ways:
– The LSB is reduced by picking a sufficiently large number of bits for the digital representation of each tap weight.
– The step-size parameter μ is made as large as possible, while still guaranteeing convergence of the algorithm.
• There is another numerical problem called parameter drift; that is, the tap weights in the LMS algorithm attain arbitrarily large values despite bounded inputs, disturbances, and errors.
• To stabilize a digital implementation of the LMS algorithm, we may use the leaky LMS algorithm.
• This algorithm provides a compromise between minimizing the MSE and minimizing the filter power.

9
• Leaky LMS algorithm: minimize the cost J(n) = |e(n)|^2 + α ||w(n)||^2 instead of |e(n)|^2 alone.
• It can be shown that the update equation is w(n+1) = (1 - μα) w(n) + μ e(n) u(n),
• where α is a positive constant satisfying the condition 0 < μα < 1 (so that the leakage factor 1 - μα lies strictly between 0 and 1).
• Except for the leakage factor, the algorithm is the same as the conventional LMS algorithm.
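
A minimal sketch of the leaky LMS recursion (real-valued signals assumed for brevity):

import numpy as np

def leaky_lms(u, d, M, mu, alpha):
    # Leaky LMS: the factor (1 - mu*alpha) pulls every tap weight toward
    # zero each iteration, bounding parameter drift at the cost of a small
    # bias (it minimizes |e|^2 + alpha*||w||^2 rather than |e|^2 alone).
    w = np.zeros(M)
    for n in range(M - 1, len(u)):
        x = u[n - M + 1:n + 1][::-1]      # tap-input vector u(n)
        e = d[n] - np.dot(w, x)           # a-priori error e(n)
        w = (1.0 - mu * alpha) * w + mu * e * x
    return w

Setting alpha = 0 recovers the conventional LMS algorithm.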

10
• Note that the inclusion of the leakage factor has the equivalent effect of adding a white-noise sequence of zero mean and variance α to the input process.
• This suggests another way of stabilizing a digital implementation of the LMS algorithm: a relatively weak white-noise sequence of variance α, known as dither, is added to the input process u(n), and samples of the combination are then used as tap inputs.
• For the RLS algorithm, there is a numerical instability problem to be considered when it is implemented in finite-precision arithmetic.
• Divergence of the RLS algorithm occurs primarily because the matrix P(n) loses its positive-definiteness property or its Hermitian symmetry.
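
The dithering alternative amounts to one extra line before filtering; a sketch (the variance value is an illustrative assumption):

import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(10_000)       # example input process u(n)
alpha = 1e-4                          # assumed (weak) dither variance
u_dithered = u + np.sqrt(alpha) * rng.standard_normal(len(u))
# u_dithered now replaces u(n) as the tap-input sequence of the LMS filter.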

11
• The Hermitian-symmetry-preserving RLS algorithm: the standard RLS recursions are kept, but the Riccati update is computed as P(n) = Tri{ λ^(-1) [ P(n-1) - k(n) u^H(n) P(n-1) ] }.
• Tri{·}: compute only the upper (or lower) triangular part of {·}, and fill in the rest of the matrix to preserve Hermitian symmetry.
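
A minimal sketch of an RLS loop with the Tri{·} symmetrization applied to P(n) after every update (standard RLS recursions; variable names and initial values are illustrative):

import numpy as np

def tri(P):
    # Tri{.}: keep only the upper triangle (with diagonal) and mirror it,
    # so P stays exactly Hermitian despite rounding errors.
    U = np.triu(P)
    return U + U.conj().T - np.diag(np.diag(P).real)

def rls_hermitian(u, d, M, lam=0.99, delta=100.0):
    w = np.zeros(M, dtype=complex)
    P = delta * np.eye(M, dtype=complex)           # P(0) = delta * I
    for n in range(M - 1, len(u)):
        x = u[n - M + 1:n + 1][::-1]               # tap-input vector u(n)
        k = P @ x / (lam + np.conj(x) @ P @ x)     # gain vector k(n)
        e = d[n] - np.conj(w) @ x                  # a-priori error
        w = w + k * np.conj(e)
        P = tri((P - np.outer(k, np.conj(x) @ P)) / lam)  # symmetrized P(n)
    return w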

12
• The stalling phenomenon is directly linked to the forgetting factor and the input signal variance. As we know, for large n the inverse correlation matrix satisfies E[P(n)] ≈ (1 - λ) R^(-1).
• Thus, for a white input of variance σ_u^2, the entries of P(n) are on the order of (1 - λ)/σ_u^2.
• If the forgetting factor is close to one and/or the input data variance is large, these entries may be quantized to zero, and the RLS algorithm may stall.

13
• The FIR filter plays a fundamental role in digital signal processing.
• How to implement an FIR filter is of great concern. For an ASIC solution, we have to consider:
– Throughput (speed)
– Gate count (area)
– Power consumption
– Delay
– Modular structure
• For the adaptive FIR filter, we have similar concerns.
• Conventionally, there are two structures for FIR filters: the direct form and the transposed form.
• If the filter's coefficients are fixed, these two forms are equivalent.

14
• Transversal (direct) and transposed forms: [figure: the two filter structures]
• The equivalence of these two forms can be easily verified using the retiming technique.
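
The input-output equivalence for fixed coefficients can also be checked directly in software; a minimal sketch of both forms (retiming itself is a graph transformation, so this only verifies the result):

import numpy as np

def fir_direct(x, w):
    # Direct (transversal) form: a tapped delay line on the input,
    # y(n) = sum_k w[k] * x(n-k).
    buf, y = np.zeros(len(w)), np.zeros(len(x))
    for n, xn in enumerate(x):
        buf = np.roll(buf, 1); buf[0] = xn
        y[n] = np.dot(w, buf)
    return y

def fir_transposed(x, w):
    # Transposed form: the delays sit in the accumulation path; each new
    # sample is scaled by all weights at once and added to the delayed sums.
    s, y = np.zeros(len(w)), np.zeros(len(x))
    for n, xn in enumerate(x):
        s = xn * np.asarray(w) + np.append(s[1:], 0.0)
        y[n] = s[0]
    return y

x = np.random.randn(64); w = [0.5, -0.3, 0.2, 0.1]
assert np.allclose(fir_direct(x, w), fir_transposed(x, w))  # identical outputs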

15
• Retiming: [figure: moving delays across a computation node]
• The retiming technique is simple; however, it is very useful for deriving new pipelined structures.

16
• The disadvantage of the direct form (DF) is its low speed (many multiplications must complete within one clock cycle).
• The transposed form (TF) overcomes this problem. However, the input has to drive all weights (a fan-in problem).
• To solve the low-speed problem in the DF, we can use the idea of pipelining (inserting delays).
• To solve the driving problem in the TF, we can insert delays between weights.
• This approach also results in a modular structure called a systolic architecture.

17
• Pipelined processing: [figure: pipelined filter structure; z(n) denotes the intermediate signal between the pipeline stages]

18
• The results: [figure: the two pipelined structures, (a) W1 and (b) W2]

19
• W1 is obtained by inserting delays in the TF, while W2 is obtained by inserting delays in the DF.
• The transfer functions: [equations: W1(z) and W2(z)]
• The clock rate for structure (a) must be doubled to keep the same throughput. Structure (b) has N extra delays.
• Thus, these two structures are less attractive.

20
• The hybrid form (HF):
– A pipelined structure without extra latency.
– A trade-off between speed and area/power consumption.

21
• The hybrid form I is obtained from the DF, while form II is obtained from the TF (using retiming).
• When the filter weights are time-varying, the TF has a weight-delay problem. This may change the behavior of the original LMS algorithm.
• It is possible to "equalize" the delays in the weights. In other words, w_0 is not delayed, w_1 is delayed by one clock cycle, ..., w_4 is delayed by 4 clock cycles. However, this increases the number of delays.
• Besides the coefficient delay, the TF has the highest power consumption, since the number of bits in the output (accumulation) path is usually much greater than that in the input path.
• The hybrid form serves as a compromise between the DF and the TF.

22
• The implementation of the adaptive FIR filter also has DF, TF, and HF variants.
• The DF adaptive LMS filter: [figure: direct-form LMS architecture]

23
• The DF and HF: [figure: direct-form and hybrid-form adaptive filter architectures]

24
• To obtain pipelining, we have to delay the filter weights; as a result, the LMS algorithm has to be modified.
• This is called the delayed LMS algorithm, w(n+1) = w(n) + μ e(n-D) u(n-D); a sketch follows this list.
• The delayed LMS algorithm is a special case of the delay relaxation described below.
• To obtain an adaptive pipelined filter with a deeper pipeline, we can apply architecture/algorithm transformations.
• Architecture transformations:
– Pipelining
– Parallelism
– Retiming
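
A minimal sketch of the delayed LMS recursion, with D the number of pipeline stages (real-valued signals assumed):

from collections import deque
import numpy as np

def delayed_lms(u, d, M, mu, D):
    # DLMS: the update uses the error and tap-input vector from D samples
    # ago, modeling the D registers inserted for pipelining. For small mu
    # its behavior approaches that of the standard LMS algorithm.
    w = np.zeros(M)
    pending = deque()                     # (e(n), u(n)) pairs awaiting use
    for n in range(M - 1, len(u)):
        x = u[n - M + 1:n + 1][::-1]
        e = d[n] - np.dot(w, x)
        pending.append((e, x))
        if len(pending) > D:              # the D-sample-old gradient term
            e_old, x_old = pending.popleft()
            w = w + mu * e_old * x_old
    return w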

25
• Algorithm transformations:
– Look-ahead
– Relaxed look-ahead
• Parallelism: [figure: block (parallel) processing structure]

26
• Retiming: [figure: retiming example]

27
• Look-ahead: [equations: M-stage look-ahead transformation of the filter recursion; a first-order example follows below]
• Thus, the clock rate can be increased M times, and M clock cycles are used to finish a multiplication (M × 1/M = 1).
• Relaxed look-ahead:
– It is derived from the look-ahead and serves as an approximation to the original adaptive algorithm.
• Three kinds of relaxed look-ahead:
– Delay relaxation
– Sum relaxation
– Delay transfer relaxation
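
As a concrete first-order example (the standard illustration of the transform, not taken from the slides): iterating the recursion x(n) = a x(n-1) + b u(n) M times gives

x(n) = a^{M} x(n-M) + \sum_{k=0}^{M-1} a^{k}\, b\, u(n-k),

which places M delays inside the recursive loop, so the loop multiplication can be pipelined across M clock cycles.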

28
• Example (look-ahead): [figure: block diagram with delays D, D, 3D and signals a(n+2), b(n+2), u(n+2), x(n), x(n+3); the grouped delays can be used for retiming]

29
• Delay relaxation:
– Assume that the gradient does not change significantly over D_1 samples.
• Sum relaxation:
– The M terms in the gradient estimate are approximated by M' (M' < M) terms.
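
In the LMS setting, the first two relaxations take roughly the following form (a hedged sketch; the symbols follow the slide text):

\text{Delay relaxation:}\quad w(n+1) = w(n) + \mu\, e(n-D_1)\, u(n-D_1),

\text{Sum relaxation:}\quad w(n+1) = w(n) + \mu \sum_{k=0}^{M'-1} e(n-k)\, u(n-k), \qquad M' \le M.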

30
• Delay transfer relaxation: [figure: delay relaxation followed by a delay transfer, moving delays across the coefficient multiplier so that it operates on a(n-D_1)]

31
• For example: [figure: pipelined adaptive filter with delays D, D, D, D and a 5D block; signals x(n), d(n-5), e(n), w(n-1), w(n)]

32
• Retiming: [figure: the same structure after retiming, with delays D, D, D, D, 5D, 4D and two extra D registers; signals x(n), d(n-5), e(n), w(n-1), w(n)]

33
• Retiming/Pipelining: [figure: fully retimed and pipelined structure with delays D, D, D, D, a 3D block, and additional pipeline registers; signals x(n), d(n-2)]