1
GSPT-AS-based Neural Network Design
Good morning, ladies and gentlemen. Today, the topic I would like to talk about is GSPT-AS-based neural network design.
Presenter: Kuan-Hung Chen
Adviser: Tzi-Dar Chiueh
October 13, 2003
2
Outline
Motivation
GSPT-AS LMS Algorithm
Power Amplifier Model
Predistortor Architecture
Simulation Results and Complexity Analysis
Conclusions
To begin with, I will talk about the motivation briefly. Then, the GSPT-AS LMS algorithm will be introduced. After introducing the GSPT-AS LMS algorithm, the power amplifier model and the predistortor architecture used for simulation will be presented. The simulation results and the complexity analysis will be shown next. And finally, a brief conclusion.
3
Motivation
Initial simulation results show that the GSPT-based neural network predistortor cannot converge. The reason is that the magnitudes of all weights end up with approximately the same order if only the sign of the updating term is used for weight updating, so the magnitude of the updating term should be taken into account. It is then straightforward to apply the GSPT-AS LMS algorithm, which takes the magnitude of the updating term into account, to the weight updating in the neural network.
4
Basic Structure of an LMS Adaptive Filter
Before introducing the GSPT-AS LMS algorithm, let us first briefly review the structure of an LMS adaptive filter. The output y[n] is the summation of w_i[n] multiplied by x[n-i]; this part, the linear filter, generates the output y[n]. The error signal e[n] is calculated as d[n] minus y[n]. Each weight is updated according to w_i[n+1] = w_i[n] + mu * e[n] * x[n-i], and this can be implemented with this structure: the step size mu is multiplied by e[n], then multiplied by x[n-i], and added to the old coefficient.
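For illustration, here is a minimal NumPy sketch of the filter and update just described; the step size mu, the tap count, and all variable names are assumptions for the example, not values from the slides.

```python
import numpy as np

def lms_filter(x, d, num_taps=8, mu=0.01):
    """Minimal LMS adaptive filter: y[n] = sum_i w_i[n] * x[n-i],
    updated by w_i[n+1] = w_i[n] + mu * e[n] * x[n-i]."""
    w = np.zeros(num_taps)                        # coefficients w_i[n]
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        x_vec = x[n - num_taps + 1:n + 1][::-1]   # x[n], x[n-1], ..., x[n-num_taps+1]
        y[n] = w @ x_vec                          # linear filter output
        e[n] = d[n] - y[n]                        # error against the desired signal
        w += mu * e[n] * x_vec                    # LMS coefficient update
    return y, e, w
```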
5
GSPT LMS Algorithm
Reduces the complexity of both the linear filter and the coefficient-updating block in an adaptive filter. As I have said before, in the GSPT LMS algorithm the output y[n] and the error signal e[n] are calculated in the same way as in the LMS algorithm, but each coefficient is simply increased or decreased according to the sign of the updating term of the LMS algorithm, as sketched below. Note that the complexity of the linear filter is reduced by using the GSPT number system for coefficient representation, while the complexity of the coefficient-updating block is reduced by the GSPT LMS updating scheme itself.
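The sign-only update could be sketched as follows; the plain float increment stands in for moving a digit of the GSPT representation in hardware, and the step value is an assumption.

```python
import numpy as np

def gspt_lms_update(w, x_vec, e_n, step=2.0**-8):
    # GSPT LMS: each coefficient moves up or down by one step according
    # to the sign of the LMS updating term e[n] * x[n-i]; the magnitude
    # of the term is ignored entirely.
    return w + step * np.sign(e_n * x_vec)
```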
6
GSPT-AS LMS Algorithm
Q(z) represents the power-of-2 value that is closest to z but not larger than z, and g is the group size. The GSPT-AS LMS algorithm was originally proposed to improve convergence speed by updating coefficients more precisely. The output y[n] and the error signal e[n] are still calculated in the same way as in the GSPT LMS algorithm, but each coefficient is updated by the following scheme. First, e[n] and x[n-k] are rounded to power-of-2 values via the function Q. Then the updating term is calculated. If the magnitude of the updating term is too small, the coefficient is held; if not, the proper group in the GSPT number representation is selected and the carry-in or borrow-in signal of the updating unit corresponding to that group is set.
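A rough Python sketch of this scheme, assuming Q() rounds a magnitude down to the nearest power of two and that the digit group is selected from the exponent of the updating term; the hold threshold and the group mapping are illustrative assumptions, not taken from the slides.

```python
import math

def Q(z):
    """Power-of-2 value closest to z but not larger than z in magnitude
    (the sign is kept; the slide's definition is stated for magnitudes)."""
    if z == 0:
        return 0.0
    return math.copysign(2.0 ** math.floor(math.log2(abs(z))), z)

def gspt_as_update_decision(e_n, x_nk, hold_threshold=2.0**-12):
    """Round e[n] and x[n-k] to powers of two, form the updating term,
    then either hold the coefficient or pick the group whose updating
    unit receives the carry-in/borrow-in signal."""
    term = Q(e_n) * Q(x_nk)                 # product of two powers of two
    if abs(term) < hold_threshold:
        return None                         # hold the coefficient
    group = -int(math.log2(abs(term)))      # exponent selects the digit group
    carry_in = term > 0                     # carry-in if positive, borrow-in if negative
    return group, carry_in
```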
7
Coefficient Updater for GSPT-AS LMS
Based on the magnitude of the updating term, we choose the proper updating unit to receive the carry-in/borrow-in signal. The idea can be explained more clearly with this slide: the updating-term decision block rounds its inputs to power-of-2 values, calculates the updating term, and determines which updating unit should receive the carry-in or borrow-in signal.
8
Power Amplifier Model
To simulate a solid-state power amplifier, the following model is used for the AM/AM conversion. The AM/PM conversion of a solid-state power amplifier is small enough to be neglected, and a good approximation of existing amplifiers is obtained by choosing p in the range of 2 to 3.
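The equation itself did not survive extraction. The description, a solid-state PA with negligible AM/PM conversion and a good fit for p between 2 and 3, matches the well-known Rapp SSPA model, reproduced here as a plausible reconstruction rather than a quote from the slide:

$$ A(r) = \frac{r}{\left(1 + \left(r / A_{\mathrm{sat}}\right)^{2p}\right)^{1/(2p)}} $$

where r is the input amplitude, A_sat is the saturation amplitude (normalized to 1, consistent with the next figure), and p is the smoothness factor.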
9
Transfer Function of AM/AM Conversion
This figure shows the normalized transfer functions of the AM/AM conversion for different values of p. You can see that the distortion is quite severe when the input amplitude is greater than 1.
10
64-QAM Constellations Distorted by PA Model
This figure shows the 64-QAM constellations distorted by the PA model just presented.
11
Predistortor Architecture
Since only the AM-to-AM conversion is considered in this PA model, the learning architecture and the predistortion architecture shown in this slide are used. During training, the magnitude of the input constellation is regarded as the desired signal, and the magnitude of the distorted constellation is taken as the input of the neural network. The difference between the neural network's output and the desired signal is used to train the neural network. After training is completed, the magnitude of the input constellation is predistorted by the trained neural network. The output of the neural network and the phase of the input constellation are then translated to rectangular coordinates and sent to the PA, as in the sketch below.
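A schematic sketch of the two phases; here net stands in for the MLP introduced on the next slide, pa stands in for the amplifier model, and the forward/backward method names are hypothetical.

```python
import numpy as np

def train_step(net, x_mag, pa):
    # Learning phase: the PA-distorted magnitude is the network input,
    # the original magnitude is the desired signal d[n].
    distorted = pa(x_mag)
    out = net.forward(distorted)
    net.backward(x_mag - out)          # train on e[n] = d[n] - o[n]

def predistort(net, symbol):
    # Predistortion phase: predistort the magnitude, keep the phase,
    # then translate back to rectangular coordinates for the PA.
    mag, phase = np.abs(symbol), np.angle(symbol)
    return net.forward(mag) * np.exp(1j * phase)
```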
12
Neural Network Structure
The neural network structure used in the predistortor is an MLP with one hidden layer. The input layer has 1 neuron and 1 bias neuron, the hidden layer has 10 neurons and 1 bias neuron, and the output layer has 1 neuron. The nonlinear function is used only for the neurons in the hidden layer, except the bias neuron.
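A minimal NumPy sketch of this structure; tanh is an assumed choice for the hidden-layer nonlinearity, since the slides do not name the function.

```python
import numpy as np

class MLP:
    """1-10-1 MLP: 1 input neuron + bias, 10 hidden neurons + bias,
    1 linear output neuron; the nonlinearity is applied only in the
    hidden layer (not to the bias neuron)."""
    def __init__(self, hidden=10, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.5, (hidden, 2))  # weights for [x, bias]
        self.w2 = rng.normal(0.0, 0.5, hidden + 1)   # weights for [h..., bias]

    def forward(self, x):
        self.x_aug = np.array([x, 1.0])              # input + bias neuron
        self.h = np.tanh(self.w1 @ self.x_aug)       # nonlinear hidden layer
        self.h_aug = np.append(self.h, 1.0)          # hidden + bias neuron
        self.o = float(self.w2 @ self.h_aug)         # linear output neuron
        return self.o
```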
13
Backpropagation Algorithm
Now, let's review the backpropagation algorithm used to train the neural network. The error signal e[n] is calculated as d[n], the desired signal, minus o[n], the output of the neural network. e[n] is used to update the weights corresponding to the interconnections from the hidden layer to the output layer. It is also used to calculate k[n], which is used to update the weights corresponding to the interconnections from the input layer to the hidden layer.
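Continuing the MLP sketch above, a plausible form of the update just described; mu is an assumed learning rate.

```python
def backward(self, e, mu=0.01):
    # k[n]: e[n] propagated back through the output weights and the tanh
    # derivative; computed before w2 changes
    k = e * self.w2[:-1] * (1.0 - self.h ** 2)
    # e[n] = d[n] - o[n] directly updates the hidden-to-output weights
    self.w2 += mu * e * self.h_aug
    # k[n] updates the input-to-hidden weights
    self.w1 += mu * np.outer(k, self.x_aug)

MLP.backward = backward   # attach to the class from the previous sketch
```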
14
GSPT-AS-based Backpropagation Algorithm
Let Q(z) represent the power-of-2 value that is closest to z but not larger than z. Applying the GSPT-AS LMS algorithm to the backpropagation algorithm yields the following GSPT-AS-based backpropagation algorithm: all terms required to calculate e[n] and k[n] are first rounded to power-of-2 values.
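A sketch of the rounding step, reusing Q() and the MLP from the earlier sketches; exactly where the roundings sit in the derivation is an assumption based on the slide's description.

```python
def gspt_as_error_terms(net, d):
    # All terms required for e[n] and k[n] are rounded to powers of two
    # first, so every product reduces to an addition of exponents
    # (a single shift in hardware).
    e = Q(d - net.o)                                  # rounded error signal
    k = [e * Q(w) * Q(1.0 - h * h)                    # products of powers of two
         for w, h in zip(net.w2[:-1], net.h)]
    # e and k then drive GSPT-AS coefficient updaters (see
    # gspt_as_update_decision above) instead of full multipliers.
    return e, k
```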
15
Simulation Results (1)
This figure shows the mean square error between the original constellations and the output constellations of the PA with predistortion for three schemes: the floating-point traditional neural network predistortor, the fixed-point traditional neural network predistortor, and the GSPT-AS-based neural network predistortor. The floating-point scheme achieves the lowest mean square error but converges more slowly than the other two schemes. On the contrary, the GSPT-AS-based scheme converges fastest among the three schemes but has the worst MSE performance.
16
Simulation Results (2)
This figure shows the mean square error over the first 2 million iterations. You can see that the fluctuation of the GSPT-AS-based scheme is much more severe than that of the other two schemes; it is caused by an inherent property of the GSPT-AS LMS algorithm.
17
64-QAM Constellation with GSPT-AS-based Predistortion
The 64-QAM constellations with GSPT-AS-based predistortion are shown here. The constellations with predistortion are denoted by circles and the constellations without predistortion are denoted by crosses. You can see that the performance is quite good except for the four corner constellations.
18
Floating-Point Scheme vs. GSPT-AS-based Scheme
This slide compares the performance of the floating-point scheme with that of the GSPT-AS-based scheme. There is almost no difference between the two schemes.
19
Complexity Analysis
N: the number of neurons in the hidden layer.

Output calculation (same for both schemes): Multiplication 2N; Addition 2N + 1; f() N.
Weight updating (fixed-point): Multiplication 5N; Addition 4N + 1.
Weight updating (GSPT-AS): Power-of-2 addition; Round-to-power-of-2 3N + 2; GSPT-AS coefficient updater 3N + 1.

In this slide, the initial complexity-analysis result is shown. The operations required for the two schemes to calculate the neural network's output are the same, but the GSPT-AS-based scheme still has lower complexity there because the GSPT number system is used for weight representation. For weight updating, in the GSPT-AS-based scheme the multiplications are simplified to additions of power-of-2 values plus operations that round some signals to power-of-2 values, and some additions are replaced by the GSPT-AS coefficient updaters. Since the complexity of a multiplier is much higher than that of these two operations, we can conclude that the GSPT-AS-based scheme has much lower complexity than the fixed-point scheme in the weight-updating part.
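For concreteness, substituting the hidden-layer size used in the simulations (N = 10) into the counts above:

$$ \text{fixed-point: } 5N = 50 \text{ multiplications},\; 4N+1 = 41 \text{ additions}; \qquad \text{GSPT-AS: } 3N+2 = 32 \text{ roundings},\; 3N+1 = 31 \text{ updater operations.} $$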
20
Conclusions
A low-complexity GSPT-AS-based neural network predistortor for a nonlinear PA has been designed and simulated. Simulation results and complexity analysis show that the GSPT-AS-based neural network predistortor achieves performance very close to that of the floating-point neural network predistortor with much lower complexity.
21
References
C. N. Chen, K. H. Chen, and T. D. Chiueh, "Algorithm and Architecture Design for a Low-Complexity Adaptive Equalizer," in Proc. IEEE ISCAS '03, 2003.
R. van Nee and R. Prasad, OFDM for Wireless Multimedia Communications, Artech House, 2000.
F. Langlet, H. Abdulkader, D. Roviras, A. Mallet, and F. Castanié, "Adaptive Predistortion for Solid State Power Amplifier using Multi-layer Perceptron," in Proc. IEEE GLOBECOM '01, vol. 1, Nov. 2001.